A short read on the topic's time range, peak episode, and strongest associations. Use it as the quick orientation before drilling into examples.
llms appears in 615 chunks across 117 episodes, from 2023-11-06 to 2026-04-20.
Its densest episode is Bits and Bobs 2/2/26 (2026-02-02), with 15 observations on this topic.
Semantically it travels with ChatGPT, Claude, and prompt injection attack, while by chunk count it sits alongside Claude; its yearly rank moved from #3 in 2023 to #1 in 2026.
Over time
Raw mentions over time. Use this to see absolute attention, not relative rank among all topics.
Range: 2023-11-06 to 2026-04-20
Mean: 5.3 per episode
Peak: 15 on 2026-02-02
Observations
The primary evidence view for this topic. Sort it chronologically when you want concrete examples behind the larger pattern.
Showing 615 observations sorted from latest to earliest.
My friend Anthea pointed out that my assertion that LLMs capture "all of society" is wrong.
LLMs give a slice of the content represented on the internet, which has a strong western, English bias.
She imagin...
...gy it might mean a new disruptive technology, like electricity, jet engines, or LLMs.
The new disruptive technology reconfigures the "fitness landscape" of viable ideas, by radically changing a key dimension of cost.
With all of the n...
Last week I framed LLMs as dowsing rods.
The more I think about it, the more I like that frame.
A dowsing rod is a fuzzy kind of imprecise 'magic' that you should hold light...
When extracting information from LLMs, we're like cavemen poking them in the dark.
LLMs encode vastly more information than we know how to retrieve.
We're in the very early stages of figu...
An interesting use case for LLMs: on-demand cozy schlock novels.
For example, fan fiction or formulaic romance novels.
These novels already aren't great literature, they're formulaic...
...uman in the loop, but humans are expensive and get bored.
Now we have LLMs to do some of the squishy, high-context things that can float around the problem domain.
But that means that if you've iterated to find something you...
...eading this week.
Amelia Wattenberger's LLM fish eyes.
It does track to me that LLMs will be an ingredient that enables new kinds of UX that weren't possible before.
The ability to generate high-quality summaries of p...
I like magic as a frame for things that are powered by LLMs.
Normal programming is mechanistic.
It does exactly what it was told to do, even if that's not exactly what the creator meant.
But LLM-powered experi...
LLMs are now good enough to be better than all but the experts in any given domain.
Which produces a problem: how can you judge if its answer is good in a...
LLMs should be good at generating possible multi-disciplinary insights.
LLMs are worse than domain experts, but better than nearly everyone else these days.
B...
...running an LLM.
The ruined society doesn't (yet) know how to build a way to run LLMs themselves.
That would require millions of specialists with knowhow to produce all the inputs necessary to build chips and servers and program them, ...
An agent is a bit of software that is animated by LLMs.
Not normal software that does precisely what it was programmed to do, but agentic software that has some squishiness.
Expensive, dangerous, intrinsicall...
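One way to picture that squishiness: a toy agent loop where a model (stubbed here) chooses the next action and plain mechanistic code executes it. A minimal sketch; `fake_llm`, `TOOLS`, and `run_agent` are hypothetical names for illustration, not anything from the episodes.

```python
# A minimal agent loop: the LLM (stubbed below) decides which tool to run
# next, and the loop executes its choice until the LLM says it is done.

def fake_llm(transcript):
    """Hypothetical stand-in for a real model call: it asks for one
    lookup, then finishes once the result appears in the transcript."""
    if "lookup_result" not in transcript:
        return {"action": "lookup", "arg": "population of France"}
    return {"action": "done", "answer": "about 68 million"}

# The mechanistic side: tools that do exactly what they're told.
TOOLS = {
    "lookup": lambda arg: f"lookup_result: stub data for {arg!r}",
}

def run_agent(task, llm=fake_llm, max_steps=5):
    transcript = f"task: {task}"
    for _ in range(max_steps):
        decision = llm(transcript)        # the squishy part
        if decision["action"] == "done":
            return decision["answer"]
        tool = TOOLS[decision["action"]]  # the mechanistic part
        transcript += "\n" + tool(decision["arg"])
    return None  # gave up: agents need a step budget, they can wander

print(run_agent("How many people live in France?"))
```

The squishiness lives entirely in the `llm` callable: swap the stub for a real model and the same loop becomes software whose control flow is no longer fully predetermined.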
The most important characteristic of LLMs is their patience.
They can do tasks that real humans would get distracted or bored by.
For example, carefully reading many pages of material to...
Don't use an LLM to write for you, use it as a thinking partner.
LLMs' ideas are never good.
They're always mush, just frog DNA.
I was chatting with an author who told me he refuses to use an LLM.
He told me you "write ...
LLMs compress nearly all of humanity's background context into a teensy weeny little hyperobject package.
They have effectively infinite background contex...
...eek I observed that the spec seems more important than the code in the world of LLMs.
Which layer is the most important?
The layer where you spend most of your time iterating.
This is especially true if there is a robust, automatic tr...
...opelling them through the problem.
This same ability is important in a world of LLMs too, where the hard part now is to sequence the incremental extensions of work in the same way a human programmer would, but have an LLM turn the exe...
One of LLMs' primary superpowers: they have human-level ability, but they never get bored.
For example, human engineers get bored very quickly writing tests for ...
Frog DNA is average, mush, generic.
That's one of the reasons LLMs pull everything to the bland centroid; they fill any ambiguities with mush.
What if you could make it so instead of using frog DNA, it used shark DNA...