A good example of an LLM/human-in-the-loop setup: systems whose output can be formally checked.
E.g. compilers, which will report errors preventing compilation.
Or policy-matching logic, which can flag configurations the policy does not allow.
LLMs do a very good job of reading the error and proposing tweaks until it goes away.
This loop can run without a human involved; a minimal sketch follows.
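Here is a rough sketch of that check-and-fix loop, assuming a hypothetical `ask_llm` callable that wraps whatever model API you use, and using the system C compiler (assumed to be available as `cc`) as the formal checker:

```python
import os
import subprocess
import tempfile


def compile_check(source: str) -> str | None:
    """Try to compile the C source; return compiler errors, or None on success."""
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "prog.c")
        with open(src, "w") as f:
            f.write(source)
        result = subprocess.run(
            ["cc", "-c", src, "-o", os.path.join(tmp, "prog.o")],
            capture_output=True,
            text=True,
        )
        return None if result.returncode == 0 else result.stderr


def fix_until_it_compiles(source: str, ask_llm, max_rounds: int = 5) -> str | None:
    """Feed compiler errors back to the LLM until the code compiles or we give up."""
    for _ in range(max_rounds):
        errors = compile_check(source)
        if errors is None:
            return source  # formally checked: it compiles
        # ask_llm is a placeholder for your model call; it sees the code plus the errors
        source = ask_llm(
            f"Fix this C code so it compiles.\n\nCompiler errors:\n{errors}\n\nCode:\n{source}"
        )
    return None  # still failing after max_rounds; this is where a human steps in
```

The same shape works with any checker that returns machine-readable failures (a type checker, a policy validator, a test suite); only `compile_check` changes.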
LLMs are also pretty good at judging the quality of an output.
This gives you human-in-the-loop-style resilience without bothering the actual human until there is something viable and good to review; a sketch of that gate follows.
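A sketch of that judging gate, again with the placeholder `ask_llm` and an assumed 1-to-10 scoring prompt; the point is just that the draft only reaches a reviewer once the model's own judgment clears a threshold:

```python
def judge_quality(draft: str, ask_llm) -> int:
    """Ask the model to score a draft from 1 to 10; parsing is deliberately forgiving."""
    reply = ask_llm(
        "Rate the quality of the following draft on a scale of 1 to 10. "
        "Reply with just the number.\n\n" + draft
    )
    digits = "".join(ch for ch in reply if ch.isdigit())
    return int(digits) if digits else 0


def ready_for_human(draft: str, ask_llm, threshold: int = 8) -> bool:
    """Only hand the draft to the actual human when the model rates it highly enough."""
    return judge_quality(draft, ask_llm) >= threshold
```

The threshold is a knob: set it high and the human only sees near-final work, set it low and they see more drafts but catch problems earlier.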