LLMs in the computation loop can create more resilience.
Last week I riffed on the idea that systems with humans embedded are more resilient.
Systems with LLMs in the loop gain a similar kind of resilience.
Instead of a bare-metal calculation doing precisely what was programmed, you have components in between that can ask questions like:
"Is this intermediate output reasonable?"
"Does this run differ significantly from previous runs?"
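Those checks can be sketched as a thin wrapper around each computation step. This is a minimal sketch, not a definitive design: `ask_llm` is a hypothetical placeholder for a real model call, stubbed out here with a deterministic rule so the example runs, and the drift threshold is an arbitrary illustrative choice.

```python
def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call; swap in your API client.
    # This stub just flags values outside a plausible range for the demo.
    value = float(prompt.rsplit(":", 1)[1])
    return "ok" if 0 <= value <= 1_000 else "suspicious"

def checked_step(compute, history, *args):
    """Run a computation, then ask the reviewer whether the intermediate
    output looks reasonable and whether it differs sharply from past runs."""
    result = compute(*args)
    verdict = ask_llm(f"Is this intermediate output reasonable? value: {result}")
    if verdict != "ok":
        raise ValueError(f"flagged intermediate output: {result!r}")
    # Crude drift check against the previous run (threshold is illustrative).
    if history and abs(result - history[-1]) > 10 * max(1.0, abs(history[-1])):
        raise ValueError(f"run differs sharply from previous run: {result!r}")
    history.append(result)
    return result

history: list[float] = []
print(checked_step(lambda x: x * 2, history, 21))  # → 42
```

The point isn't the specific checks; it's that the step between computations is now a component that can exercise judgment instead of passing values along blindly.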