A subtle reframe that could be made secure: agents execute in loops that sit inside mechanistic loops.
- The mechanistic loops are formal graphs of computation, which may contain LLM calls inside them, but which are sandboxed and limited.
- There is still an agent loop, but it doesn't act directly: it produces a compute graph to execute, and that graph can call tools as well as sub-agents whose job is, in turn, to construct another graph to execute.
- The agent doesn't execute; it makes a graph to execute (see the sketch after this list).
- The core ranking function would be: "if this were to run, how likely would the user be to accept its suggestions in this moment?"
- That's a nice self-steering quality metric (a second sketch below shows the ranking step).
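
A minimal sketch of the "agent plans, graph executes" split. The `Graph`, `Node`, `plan`, and `llm` names are all illustrative assumptions, not an existing API; the point is only that the agent step emits a bounded graph and a separate mechanistic runner executes it under a hard budget.

```python
# Sketch only: hypothetical names, assuming a sandboxed llm() helper.
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Node:
    name: str
    fn: Callable[..., Any]                      # sandboxed unit of work (tool or LLM call)
    deps: list[str] = field(default_factory=list)

@dataclass
class Graph:
    """A formal, bounded graph of computation: the only thing that runs."""
    nodes: dict[str, Node] = field(default_factory=dict)

    def add(self, name: str, fn: Callable[..., Any], deps: tuple[str, ...] = ()) -> "Graph":
        self.nodes[name] = Node(name, fn, list(deps))
        return self

    def run(self, budget: int = 16) -> dict[str, Any]:
        """The mechanistic loop: execute nodes in dependency order, capped by a step budget."""
        results: dict[str, Any] = {}
        pending = dict(self.nodes)
        steps = 0
        while pending:
            ready = [n for n in pending.values() if all(d in results for d in n.deps)]
            if not ready or steps >= budget:
                raise RuntimeError(f"stuck or over budget, unresolved: {list(pending)}")
            for node in ready:
                results[node.name] = node.fn(*[results[d] for d in node.deps])
                del pending[node.name]
                steps += 1
        return results

def llm(prompt: str) -> str:
    """Stand-in for a sandboxed model call (no tools, no network)."""
    return f"<answer to: {prompt}>"

def plan(task: str) -> Graph:
    """The agent step: it never executes anything, it only emits a graph to execute."""
    g = Graph()
    g.add("draft", lambda: llm(f"Draft a change for: {task}"))
    g.add("critique", lambda d: llm(f"Critique this draft: {d}"), deps=("draft",))
    g.add("revise", lambda d, c: llm(f"Revise {d} given {c}"), deps=("draft", "critique"))
    return g

if __name__ == "__main__":
    graph = plan("rename the config flag")
    print(graph.run()["revise"])
```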
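
And a sketch of the self-steering ranking idea: score each candidate graph by the predicted chance the user would accept its output right now, before anything runs. `accept_probability` is a hypothetical predictor (in practice a learned model over the plan plus recent user context); the heuristic inside it is a placeholder.

```python
# Sketch only: accept_probability() is a hypothetical predictor, not a real API.
from typing import Sequence

def accept_probability(plan_summary: str, user_context: str) -> float:
    """Hypothetical: estimate P(user accepts the suggestions) without running the plan."""
    # Placeholder heuristic; a real system would call a learned model here.
    return 0.9 if "small" in user_context and "rename" in plan_summary else 0.3

def rank_plans(plan_summaries: Sequence[str], user_context: str) -> list[tuple[float, str]]:
    """Order candidate graphs by predicted acceptance, best first."""
    scored = [(accept_probability(p, user_context), p) for p in plan_summaries]
    return sorted(scored, reverse=True)

if __name__ == "__main__":
    candidates = [
        "rename the config flag and update call sites",
        "rewrite the whole config module",
    ]
    for score, summary in rank_plans(candidates, user_context="mid-refactor, small diffs preferred"):
        print(f"{score:.2f}  {summary}")
```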