It's fascinating to me that when technologists see non-savvy technical users using AI recklessly, they blame the user.

· Bits and Bobs 6/9/25
  • It's fascinating to me that when technologists see non-savvy technical users using AI recklessly, they blame the user.
    • For example, here a non-technical person is livestreaming his vibe-coding of service, but leaving open many significant security issues.
      • The comments are mostly negative.
    • In this Hacker News thread about how Claude Code will route around restrictions the user set on `rm`, most of the response is, "yeah but of course it can, the user should not be surprised."
    • People reacted to the Github prompt injection attack by saying "well the user shouldn't have granted such a broadly scoped key."
    • MCP and LLMs make it so more and more people can put themselves in real danger and not realize it.
    • The answer is not to blame the users.
    • That's like blaming people who use Q-tips to clean their ears.
    • The protections around LLMs cannot contain their power. How would you contain them?
    • The model of "if the user clicked a permission prompt it's on them for getting pwned" is insufficient in a world of LLMs.
    • They're simply too powerful to be contained by our previous half-assed containment mechanisms.

More on this topic

From other episodes