Having LLMs generate Turing-complete code on demand isn't good enough.
- First, if any untrusted input reaches the context, the LLM may have been tricked into emitting malicious code (prompt injection).
- Second, generation takes a non-trivial amount of time, and many generations won't work on the first try.
- The best approach is to cache working code and share that cache across the ecosystem.
- But then you have to trust that whoever cached the code in the first place wasn't malicious.