A general design principle for anything with potential privacy or security implications: minimize the chance of a nasty surprise.
Show a proactive indication whenever the system does something with privacy implications, so that if the user thinks "wait, why is that showing up right now? I don't want that," they can discover it proactively now, not passively later with an "oh crap" moment.
OSes do this now with their system-level microphone and camera indicators.
ChatGPT's new personal memory feature doesn't do this.
It's possible to ask it to save little memories about you (e.g. "I typically program in TypeScript").
But it turns out the system also saves things it decides, on its own, are important bits of context about you.
This can lead to nasty surprises. A friend found "I like cacti" in his memories, but it could have been something significantly more embarrassing.
One way to improve this UI: every time the system stores a memory from a message, show a small icon next to that message letting the user inspect the memory and delete it.
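As a sketch of the mechanics, assuming a hypothetical memory store (none of these names are ChatGPT's actual API): the store notifies the UI on every save, so an indicator icon can appear next to the originating message, and it exposes inspect and delete operations keyed by message.

```typescript
// Illustrative only: a memory store that surfaces every save to the UI
// instead of storing silently. All identifiers here are hypothetical.

type Memory = { id: number; messageId: string; text: string };

class MemoryStore {
  private memories: Memory[] = [];
  private nextId = 1;

  // The UI registers a callback, invoked on every store() so an
  // indicator icon can be rendered next to the source message.
  constructor(private onStore: (memory: Memory) => void) {}

  store(messageId: string, text: string): Memory {
    const memory = { id: this.nextId++, messageId, text };
    this.memories.push(memory);
    this.onStore(memory); // proactive indication, never a silent save
    return memory;
  }

  // Let the user inspect which memories came from a given message...
  forMessage(messageId: string): Memory[] {
    return this.memories.filter((m) => m.messageId === messageId);
  }

  // ...and delete any of them.
  delete(id: number): void {
    this.memories = this.memories.filter((m) => m.id !== id);
  }
}

// Usage: the callback is where the UI would show the icon.
const shown: string[] = [];
const store = new MemoryStore((m) => shown.push(m.messageId));
const saved = store.store("msg-42", "I like cacti");
console.log(shown);                             // ["msg-42"]
console.log(store.forMessage("msg-42").length); // 1
store.delete(saved.id);
console.log(store.forMessage("msg-42").length); // 0
```

The key design choice is that notification is part of the write path itself, so there is no code path that records a memory without the user having a chance to notice it.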