I'm skeptical of the role of "agents" in whatever AI-native ecosystem emerges.
Agent implies agency, and agency implies something that could take actions behind your back… including stabbing you in the back.
If the agent can take actions with you out of the loop, you either have to constrain it significantly (by giving it less data or fewer levers to pull)… or keep a very small number of agents that you trust enough to work with.
A self-limiting asymptote.
Instead, I think we should focus on the role of tools that we co-create with AI, with humans in the loop.
Software is the ultimate tool.
LLMs are great at creating hallucinated software.
The hallucinations that turn out to be good, we simply call "imagination."