A short read on the topic's time range, peak episode, and strongest associations. Use it as the quick orientation before drilling into examples.
mental model appears in 72 chunks across 55 episodes, from 2023-10-09 to 2026-04-06.
Its densest episode is Bits and Bobs 3/10/25 (2025-03-10), with 4 observations on this topic.
Semantically it travels with llms, disconfirming evidence, and ground truth, while by chunk count it sits between Claude Code and schelling point; its yearly rank moved from #4 in 2023 to #19 in 2026.
Over time
Raw mentions over time. Use this to see absolute attention, not relative rank among all topics.
Range: 2023-10-09 to 2026-04-06 · Mean: 1.3 per episode · Peak: 4 on 2025-03-10
Observations
The primary evidence view for this topic. Sort it chronologically when you want concrete examples behind the larger pattern.
Showing 72 observations sorted from latest to earliest.
...-competent person.
That is comforting and easy to use… but also gives the wrong mental model for what they can do.
LLMs can perform feats of patience and recall that no human ever could.
If you don't think about how LLMs are different from hu...
...ads of my curiosity, in some cases leading to deep insights that reconfigure my mental model of the world or myself.[alz][ama][amb]
These conversations[amc][amd] are long and meandering. They have various dead ends or odd paths that I later d...
... then later more signal comes in that requires our brain to snap to a different mental model.
Various optical illusions trigger this reliably.
When it happens, there's a kind of whooshing vertigo feeling as the whole world reorients around yo...
...ing party trick to show people how open-ended the system is.
You break people's mental model of what is possible, and suddenly everything seems possible.
The expected nastiness of a surprise is tied to the fidelity of the user's mental model and the stakes.
An inaccurate mental model and high stakes is a recipe for a massive nasty surprise.
...essfully own the system?
That is, maintain, fix, and extend it with an accurate mental model?
If not, it's not known to be sound.
Soundness normally requires careful layering, making sure that each layer is thin and understandable.
Thinner la...
...leverage.
If not, then the magic could have a very different behavior than your mental model of it.
Of course, in practice we can't peel back every layer; that's the whole point of abstraction.
But the bar is could you have understood it and ...
I love Anthea Roberts' concept of dragonfly thinking.
Any mental model we apply to a problem is a lens.
A lens must reduce the signal of the real world into an easier-to-consume distillation.
Lenses are great, because th...
...y than the absence.
The absence is easy to forget about.
You have to maintain a mental model of the now-hidden thing, and keep refreshing that memory or it evaporates from your awareness.
The presence is much harder to forget about--it's righ...
... an answer that is superficially great but actually bad for a subtle reason.
My mental model is a Simone Giertz-style ketchup robot.
After a few minutes of work the LLM agent plays a triumphant chime and happily delivers you… a steaming turd.
...
A mental model: data is "radioactive" if it could be tied to someone's identity.
If that data touches other data, or is shown or shared in the wrong context, there ...
A thing that makes it fun to play with your system: users have a rough mental model of "I bet if I did X with Y I'd get Z" and they do it and something interesting happens, even if it's not precisely Z.
Especially if there's an undo ...
...ou kick the tires of a problem and it's like you expected.
In these cases, your mental model doesn't have to update much, you just reduce uncertainty.
Sometimes you kick the tires and it's like kicking over a cardboard cutout and discovering ...
A kind of odd mental model for storing state in a system.
Pure functions that take inputs and produce outputs.
To store state, loop the function's output back to the inputs of a ne...
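The loop-back pattern this observation describes can be sketched in a few lines. This is an illustrative sketch, not code from the source; the names `step` and `run` are hypothetical.

```python
def step(state: int, event: int) -> int:
    """Pure function: no hidden state, same inputs always yield the same output."""
    return state + event

def run(events):
    """State lives outside the pure function; each output is fed back
    in as the state input of the next call."""
    state = 0  # initial state
    for event in events:
        state = step(state, event)  # loop the output back to the inputs
    return state
```

The point of the pattern is that `step` itself stays trivially testable; all the statefulness is in the plumbing around it.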
...large legal JSON blobs.
They'll often miss a comma or a } or ].
This breaks our mental model; they're so good at generating things that match even subtle patterns, and this is something a simple pushdown automaton could handle!
LLMs are not do...
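The delimiter-matching job a pushdown automaton handles trivially can be sketched with an explicit stack. This is an illustrative checker under stated assumptions (it ignores brackets appearing inside JSON strings), not anything from the source.

```python
def balanced(text: str) -> bool:
    """Check that { } and [ ] nest correctly, using an explicit stack --
    the pushdown-automaton part of the problem LLMs sometimes fumble.
    Simplification: does not skip brackets inside string literals."""
    pairs = {'}': '{', ']': '['}
    stack = []
    for ch in text:
        if ch in '{[':
            stack.append(ch)
        elif ch in '}]':
            if not stack or stack.pop() != pairs[ch]:
                return False  # mismatched or unopened closer
    return not stack  # anything left open means unbalanced
```

A dropped `}` or `]`, of the kind the observation describes, is exactly what this catches: the stack ends non-empty.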
...ils matter and which can be ignored is an act of judgment.
It requires having a mental model of what you expect, so you can see the things that don't fit the model (and are thus important to attend to).
This is one reason why observing the wh...
... to melt to interact with the softness of LLMs.
A mini app is still an app-like mental model.
Too hard for this new era.
Sand is hard but only at the micro level.
At the macro level it's soft.
The hardness is on such a small scale that it's n...
... nasty surprise.
A nasty surprise is something that violates the user's implied mental model, and might cause them to never want to use the service again.
When you're developing a new assistive service, you want to minimize how many people ha...