An LLM can be trusted not to write code to attack you in particular.
- An LLM can be trusted not to write code to attack you in particular.
- But if it sees any untrusted context at all the LLM can become malicious.
- This is why prompt injection is so dangerous.