A short read on the topic's time range, peak episode, and strongest associations. Use it as the quick orientation before drilling into examples.
llms appears in 615 chunks across 117 episodes, from 2023-11-06 to 2026-04-20.
Its densest episode is Bits and Bobs 2/2/26 (2026-02-02), with 15 observations on this topic.
Semantically it travels with ChatGPT, Claude, and prompt injection attack; by chunk count its nearest neighbor is Claude, and its yearly rank moved from #3 in 2023 to #1 in 2026.
Over time
Raw mentions over time. Use this to see absolute attention, not relative rank among all topics.
Range: 2023-11-06 to 2026-04-20. Mean: 5.3 per episode. Peak: 15 on 2026-02-02.
Observations
The primary evidence view for this topic. Sort it chronologically when you want concrete examples behind the larger pattern.
Showing 615 observations sorted from latest to earliest.
In systems that have a quality component (e.g. search engines, or LLMs), the query stream coevolves with the underlying quality of the service.
Users as a population clue in to what it can do and give it queries it will d...
A few stray thoughts on LLMs.
I love using ChatGPT as a kind of family feud "what will the average (X category of person) think about this phrase".
Kind of an automatic wisdom of...
A few riffs on LLMs.
An intuition for things that LLMs will get right: if Wikipedia has explained the concepts well.
Those facts are likely to also ripple out and inform...
...question?" to ensure you can draft off what happened before.
Contrast that with LLMs: they have absorbed a kind of reasoning about the content; a wisdom of the crowds, but also their own kind of emergent wisdom.
That means that you can ...
Let's pull on a thread starting from the observation that LLMs only "think" one token at a time.
Imagine a prompt like "Write a synopsis of X, and bold the most salient words."
The LLM has to choose to emit the m...
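The "one token at a time" point can be made concrete with a toy decoding loop. This is a minimal sketch, not a real model: `next_token` here is a hypothetical stand-in that returns a canned continuation, but the loop structure is the point — the model commits to each token before it has produced the next one.

```python
# Minimal sketch of autoregressive (one-token-at-a-time) decoding.
# `next_token` is a hypothetical stand-in for a real model's forward pass.

def next_token(generated: list[str]) -> str:
    # Toy "model": emits a fixed continuation, then an end-of-sequence marker.
    continuation = ["a", "short", "synopsis", "<eos>"]
    return continuation[min(len(generated), len(continuation) - 1)]

def generate(prompt: list[str], max_tokens: int = 10) -> list[str]:
    out = list(prompt)
    for _ in range(max_tokens):
        tok = next_token(out[len(prompt):])
        if tok == "<eos>":
            break
        out.append(tok)  # each token is committed before the next is chosen
    return out

print(generate(["Write", "a", "synopsis:"]))
```

So a prompt like "bold the most salient words" forces the model to decide, at the moment it emits each word, whether that word is salient — it cannot first write the synopsis and then go back to bold things.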
LLMs are an "impossibly precocious ninth grader who never gets bored and has read 1000x more books than you ever will".
A lot of the party tricks LLMs can...
...e to directly experience the relevant situation yourself: a massive constraint.
LLMs are unlike humans in that their knowhow can be transferred to other models more directly (or in some cases just directly replicated).
This means that...
...it free rein, you can know (mostly, most of the time) how it will operate.
But LLMs are squishy. They are more impressionistic. They lose the plot, especially the longer it's been since the last checkpoint with whatever entity is gui...
... that have a structured formal language will have interesting applications with LLMs.
Writing code (or any formally structured document, e.g. a Domain-Specific Language) has two things that must be true:
syntactic correctness (is this ...
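The excerpt is cut off, but the first property it names, syntactic correctness, is the mechanically checkable one — which is part of what makes formal languages a good fit for LLMs. A minimal sketch in Python, using the standard library's `ast.parse` as the checker (the function name is ours; semantic correctness would still need tests or review):

```python
import ast

def is_syntactically_valid(source: str) -> bool:
    """Check only syntactic correctness: does this parse as Python?
    Says nothing about whether the code does the right thing."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

print(is_syntactically_valid("x = 1 + 2"))  # valid Python
print(is_syntactically_valid("x = 1 +"))    # syntax error
```

A cheap, automatic validity check like this is exactly the kind of tight feedback loop an LLM's output can be run through.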
I've found that I use LLMs for certain curiosity-style questions I wouldn't have even bothered searching for in the past.
Search relies on the SEO swarm of content farms to hav...
...ew reflections from a conversation I had with my friend Dimitri last week about LLMs.
There are two distinct uses for LLMs that pull in very different directions:
convergent mode ("spackle for toil")
divergent mode ("a muse that super...
...some scenario planning last week with various folks on the long-range impact of LLMs on humanity.
LLMs are a discussion partner who is well-read, eager to please, a bit naive, and never, ever gets bored.
A meta thing was how useful us...
Last week I talked about LLMs as "spackle for toil".
The original software-based spackle for toil is spreadsheets.
Spreadsheets are absurdly, generically useful, in just about eve...