A short read on the topic's time range, peak episode, and strongest associations. Use it as the quick orientation before drilling into examples.
llms appears in 615 chunks across 117 episodes, from 2023-11-06 to 2026-04-20.
Its densest episode is Bits and Bobs 2/2/26 (2026-02-02), with 15 observations on this topic.
Semantically it travels with ChatGPT, Claude, and prompt injection attack; by chunk count it sits adjacent to Claude, and its yearly rank moved from #3 in 2023 to #1 in 2026.
Over time
Raw mentions over time. Use this to see absolute attention, not relative rank among all topics.
Range: 2023-11-06 to 2026-04-20 · Mean: 5.3 per episode · Peak: 15 on 2026-02-02
Observations
The primary evidence view for this topic. Sort it chronologically when you want concrete examples behind the larger pattern.
Showing 615 observations sorted from latest to earliest.
...are hard.
This is one of the reasons that unsupervised automation is tough with LLMs.
Even if you get it to work well 95% of the time, that last 5% of reliability is extremely hard to achieve.
If there's a human in the loop it doesn't...
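The compounding math behind that last 5% can be sketched directly (assuming, for illustration, that steps in an automated chain fail independently):

```python
# If each unsupervised step succeeds 95% of the time, a chain of n
# independent steps succeeds only 0.95**n of the time -- reliability
# erodes much faster than the per-step number suggests.
for n in (1, 5, 10, 20):
    print(f"{n:2d} steps -> {0.95 ** n:.3f} chance of a clean run")
```

A 20-step chain at 95% per step completes cleanly only about 36% of the time, which is why a human in the loop changes the picture so much.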
Individuals are using LLMs way more effectively than organizations today.
Individuals are using LLMs in situated contexts, informally and as augmentation to their work, not aut...
...itten in the LLM era will be smaller single files, not separations of concerns. LLMs do better with smaller files with all local context.
Code that LLMs write will have this quality, and code that is written with LLMs in mind will als...
LLMs are inherently statistical summarizers.
Which is why they pull to the centroid.
"What is the most average answer, conditioned on the input so far?"
An interesting pattern: using LLMs to astroturf content in an ecosystem.
The challenge of an ecosystem is not so much the hill climbing of quality, it's the creation of the ecosystem t...
...prone.
Even if it works 95% of the time, that 5% it doesn't is hard to predict.
LLMs are great at answering a specific, unique question… but then the user needs to sit there and wait while the answer unspools.
Some use cases get enoug...
Another puzzle of LLMs: they're surprisingly bad at generating very large legal JSON blobs.
They'll often miss a comma or a } or ].
This breaks our mental model; they're so...
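One practical response is to never trust the blob directly: validate it, and re-prompt on failure. A minimal sketch, assuming a hypothetical `regenerate` callback that asks the model for a corrected output:

```python
import json

def parse_llm_json(text: str, retries: int = 2, regenerate=None):
    """Parse JSON produced by an LLM, retrying via a regeneration callback.

    `regenerate` is a hypothetical hook (not a real library API) that
    re-prompts the model with the invalid text and returns a new attempt.
    """
    for _ in range(retries + 1):
        try:
            return json.loads(text)
        except json.JSONDecodeError:
            if regenerate is None:
                raise
            text = regenerate(text)
    raise ValueError("model never produced valid JSON")
```

In practice the regeneration prompt would include the parse error, so the model sees exactly which comma or bracket it dropped.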
... in the same direction they've established for a decade, now using the power of LLMs behind the scenes to increase their search quality.
This model can create user value.
But it has a low ceiling, because the app model presumes a priv...
LLMs can't be trusted with private data or data that might try to prompt inject them.
But imagine a set of tubes that by construction can only be combined...
A few fun use cases for young kids and LLMs that some friends shared with me.
When driving somewhere with a kid in the car, ask ChatGPT, "Tell me about gas giants" and then help the kid ask fol...
LLMs don't do deep reasoning.
They do superficial detail matching crazy well.
But it turns out that a huge number of superficial details, if generated by ...
...learned the power of framing, montage, and other dynamics unique to film.
Using LLMs for human-like tasks is like recording a stage play.
What kinds of non-human-like tasks will LLMs be good at?
LLMs can be used as an intelligent lorem ipsum creator.
Lorem ipsum is placeholder text used when mocking up print layouts.
It used to just always be the ...
Apps are hard. LLMs are soft.
LLMs aren't going anywhere, they're a new fundamental primitive.
Everything hard will need to melt to interact with the softness of LLMs.
A...
Even with LLMs doing note-taking all the time, it still doesn't create tons of value.
Part of the value of note-taking is transmitting the information into the futu...
LLMs are not open-ended.
(At least in current architectures)
They are crystallized at a moment in time; after they are trained, they do not change or adap...
... platforms who have signed deals to allow their users' data to be used to train LLMs.
A bargain common in the same origin paradigm: "give me your data in exchange for getting this service for free."
What if it were possible for us as ...