We spent decades making injection attacks invisible to developers.

· Bits and Bobs 9/2/25
  • We spent decades making injection attacks invisible to developers.
    • Modern frameworks auto-escape HTML.
    • ORMs parameterize queries.
    • Follow standard practices and you don't have to think about it.
    • Now LLMs make all text executable.
    • Frameworks don't help.
    • Everything is code.
    • XSS has a solution: we can parse HTML/JS with 100% accuracy and sanitize it.
    • Every major framework does this by default.
    • Developers rarely think about it.
    • Prompt injection has no solution[ay][az]: only LLMs can parse natural language, and the same LLMs parsing it can be tricked by it.
    • Without structurally addressing prompt injection, LLM agents can't safely reach mass market.
    • Anthropic's 11% attack success Simon Willison calls a "catastrophic failure rate"
    • "Smarter models" hit asymptotic returns.
    • A structural approach is necessary to unlock the potential.

More on this topic

From other episodes