We spent decades making injection attacks invisible to developers.
- We spent decades making injection attacks invisible to developers.
- Modern frameworks auto-escape HTML.
- ORMs parameterize queries.
- Follow standard practices and you don't have to think about it.
- Now LLMs make all text executable.
- Frameworks don't help.
- Everything is code.
- XSS has a solution: we can parse HTML/JS with 100% accuracy and sanitize it.
- Every major framework does this by default.
- Developers rarely think about it.
- Without structurally addressing prompt injection, LLM agents can't safely reach mass market.
- Anthropic's 11% attack success Simon Willison calls a "catastrophic failure rate"
- "Smarter models" hit asymptotic returns.
- A structural approach is necessary to unlock the potential.