Stray thoughts on LLMs.
LLMs (just like people) are a lot better at critiquing things than coming up with new things.
LLMs have a theory of mind, in general.
But if they don't maintain memories about you, then they don't have a theory of your mind.
If an LLM doesn't understand a concept, it's a good sign that there isn't a clear consensus on it in society.
ChatGPT turns unknown unknowns into known unknowns.
It's useful when you don't even know what questions to ask.
You still don't know the answers, but at least you know the right questions now, and can make an informed bet on what the answers might be.
People who are better at lightly holding different lenses and navigating uncertainty will be able to use LLMs more effectively.
Phone support is a good metaphor for how annoying LLMs' lack of memory is.
Today, every session with an LLM feels like getting transferred to a new phone support rep.
In pre-computer business processes you'd have a process with lots of imperfect humans.
The humans could make local judgment calls but might miss larger patterns of abuse.
Instead, you designed for resilience at the level of the system.
When talking to phone support agents, it doesn't matter if you trick them or fall in love with them, because they have to go through very narrow channels to do anything.
That's one of the reasons interacting with phone support reps often feels Kafkaesque.
They are clearly human, yet acting like robots.
With LLMs, everyone is already here, but we don't know what they're for yet!
Typically with new technologies, there's a wave of early adopters who experiment and sense-make about the new technology, and share those best practices with new users as they join.
ChatGPT is weird in that it reached maximum usage before that sense-making happened.
That creates a lot of chaos and swirl. The mental model for what it is hasn't fully emerged out of collective tinkering yet.