When building an experience using an LLM, is the LLM the engine or the car?
Is it a component of the overall thing, or the overall thing itself?
Another related metaphor that emphasizes agency: is it the jockey or the horse?
Software is rigid and precise and predictable; when you give it free rein, you can know (mostly, most of the time) how it will operate.
But LLMs are squishy. They are more impressionistic. They lose the plot, especially the longer it's been since the last checkpoint with whatever entity is guiding them and giving them direction.
It's a mistake to expect the precise, perfect execution of software out of LLMs.
These things are like impossibly precocious middle schoolers, who never get bored, and who have read 1000x more books than you will read in your whole lifetime.
But they're still middle schoolers, with the theory of mind of an 11-year-old.
They easily get lost, ungrounded.
You should be careful about handing them moral agency to act fully on your behalf.
Engelbart, back in the '60s, had a frame: not AI (Artificial Intelligence) but IA (Intelligence Amplification).
The agency and responsibility come from the human, the amplification comes from the computer.
The computer, in this frame, is like a telepathically controlled exoskeleton.
Another way to rein in this tendency of LLMs to get lost: give them shorter chunks of things to do: tasks, not jobs.
It's harder to get lost when they don't go very far.
And you can more quickly intervene to nudge them onto the right path if they get lost.
This is the same intuition as agile versus waterfall software development: shortening feedback loops gives you better control and steerability.
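The task-sized loop above can be sketched in code. A minimal sketch, assuming everything here is hypothetical: `run_task` stands in for a real scoped LLM call, and `looks_ok` stands in for whatever human or automated check you use at each checkpoint. The point is the shape, not the specifics: small steps, a check after each, and an immediate chance to nudge.

```python
# Sketch of "tasks, not jobs": run small steps with a checkpoint after each.
# run_task and looks_ok are hypothetical stand-ins for a real LLM call and
# a real validation step.

def run_task(task, context):
    # Placeholder for an LLM call scoped to one small task.
    return f"result of {task!r} given {len(context)} prior results"

def looks_ok(result):
    # Placeholder for a human or automated checkpoint; trivially true here.
    return "result" in result

def run_job(tasks):
    """Execute a job as a series of short tasks, checking after each one."""
    context = []
    for task in tasks:
        result = run_task(task, context)
        if not looks_ok(result):
            # Short feedback loop: intervene now and redo the one small
            # task, instead of letting the whole job drift off course.
            result = run_task(task + " (nudged)", context)
        context.append(result)
    return context

steps = run_job(["outline the doc", "draft section 1", "draft section 2"])
```

The waterfall equivalent would be one giant `run_task` over the whole job, with no chance to catch drift until the end.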