Another person noting that sandboxes are only a small part of the problem of securing LLMs.

ยท Bits and Bobs 2/2/26
    • "I think most people focusing on securing these are focusing on isolation, but that's really step 0 of a step 3 process they'll come to understand as they try it in practice. It's turtles all the way down. The problem is that LLMs make it impossible to trust the actions / outputs of anything coming from inside. Adding another level of bubble wrap doesn't change the fact that what people are trying to do--use LLMs to take action on their data--is fundamentally dangerous in today's model."
    • Sandboxing is necessary but nowhere near sufficient.

More on this topic

From other episodes