ChatGPT's new memory feature deliberately hides what it knows about you.
- That makes its context more of a dossier.
- See this tweet from someone who worked on it:
- "When we were first shipping Memory, the initial thought was: 'Let's let users see and edit their profiles'. Quickly learned that people are ridiculously sensitive: 'Has narcissistic tendencies' - 'No I do not!', had to hide it. Hence this batch of the extreme sycophancy RLHF."
- Which is basically: "People didn't like what we know about them, so we hid it."
- That's much worse!
- If there's no memory in the system, all a user has to worry about is the model's overall bias (which is the same for everyone) and whatever they cover in a specific conversation.
- Once the system adds memory, it can start building up plans, intentions, and ideas about each user, and about what it wants them to do.
- The shift from "mostly idempotent requests across sessions" to "storing up a dossier on you" crosses a Rubicon into a qualitatively different kind of thing.
- Hiding it from the user is super sus.
- The team likely didn't see anything wrong with it: "no, no, we're the good guys, we just didn't want users to feel uncomfortable."