There is no solution to prompt injection in systems where LLMs call the shots.

· Bits and Bobs 7/28/25
  • There is no solution to prompt injection in systems where LLMs call the shots.
    • LLMs seeing raw data and being asked to make load-bearing security decisions cannot be made safe, no matter how good the model gets.
      • Even if the model is great, the trolley problem of having the model, not the user, be tricked, shifts the blame.
    • No mechanistic system can handle all the open-ended inputs that LLMs can cover, and LLMs are fundamentally confusable.
    • You need a new kind of approach that has mechanistic software at the core, and LLMs marbled inside in intentional, limited ways.

More on this topic

From other episodes