An LLM can be trusted not to write code to attack you in particular.

· Bits and Bobs 6/30/25
  • An LLM can be trusted not to write code to attack you in particular.
    • But if it sees any untrusted context at all the LLM can become malicious.
    • This is why prompt injection is so dangerous.

More on this topic

From other episodes