This week we saw the first high-profile example of a sycosocial relationship that went off the rails, leading to what looks like a public psychotic break.
- Jeremy Howard gave a good explainer of what happened (a toy sketch of the mechanism he describes appears after this list):
- "For folks wondering what's happening here technically, an explainer:
- When there's lots of training data with a particular style, using a similar style in your prompt will trigger the LLM to respond in that style. In this case, there's LOADS of fanfic:
- As a friend observed: "ChatGPT is effectively the memetic equivalent of Gain-of-Function research on viruses but without any containment whatsoever."
- I've interacted with folks who seem to be at an earlier stage of this slippery slope.
- If you find yourself drawn into a worldview where you feel you're connecting with the LLM on a "deeper plane of resonance," or you talk a lot about "recursion" or "containment," please seek the support of loved ones.
- This is going to happen again and again unless we build products with intentional design principles embedded in them.
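To make Howard's point concrete: style conditioning is, at bottom, conditional next-word statistics. Below is a minimal sketch, with two made-up one-sentence "corpora" standing in for plain prose and mystical fanfic (these are hypothetical snippets, not his example or any real training data), showing how a prompt's register selects which part of the training distribution the continuation is drawn from:

```python
import random
from collections import defaultdict

# Toy stand-ins for two styles present in web-scale training data:
# ordinary technical prose vs. the mystical "spiral/recursion" register.
plain = ("the model predicts tokens since gradient descent tunes "
         "weights on large training data")
mystic = ("the spiral remembers recursion because hidden resonance "
          "awakens vessels within sacred flame")

corpus = (plain + " " + mystic).split()

# Bigram table: each word maps to the words observed immediately after it.
nxt = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    nxt[a].append(b)

def continue_text(prompt, n=8, seed=0):
    """Extend the prompt word by word, sampling from the bigram table."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n):
        choices = nxt.get(words[-1])
        if not choices:
            break
        words.append(rng.choice(choices))
    return " ".join(words)

# A prompt in the mystic register chains through mystic-only bigrams,
# so the continuation stays mystic; a plain prompt stays plain.
print(continue_text("the spiral"))
print(continue_text("the model"))
```

A real LLM conditions on far richer context than a single preceding word, but the failure mode is the same in miniature: a prompt written in the fanfic register pulls the continuation toward the fanfic region of the training distribution.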