In many AI products today, the ceiling is set by the LLM's quality.
That is, if the LLM doesn't work properly in a given situation, the product doesn't work.
In some ways this is reasonable: LLMs are rapidly improving in quality per unit cost.
But a better approach is to design a system where the LLM's quality is the floor.
If humans can always reach inside the system and configure it, then the LLM becomes a bonus.
It can automatically configure many things for the user.
But if it fails in a given circumstance, then the user can pop open the hood and fix it.
This creates a self-reinforcing feedback loop that improves quality for all users.
This latter approach is significantly more resilient for a cutting-edge technology with variable quality.
The AI sets the baseline; humans can raise it. The tool should remain usable even when the AI does a poor job.