A short read on the topic's time range, peak episode, and strongest associations. Use it as the quick orientation before drilling into examples.
llms appears in 615 chunks across 117 episodes, from 2023-11-06 to 2026-04-20.
Its densest episode is Bits and Bobs 2/2/26 (2026-02-02), with 15 observations on this topic.
Semantically it travels with ChatGPT, Claude, and prompt injection attack, while by chunk count it sits alongside Claude. Its yearly rank moved from #3 in 2023 to #1 in 2026.
Over time
Raw mentions over time. Use this to see absolute attention, not relative rank among all topics.
Range: 2023-11-06 to 2026-04-20 · Mean: 5.3 per episode · Peak: 15 on 2026-02-02
Observations
The primary evidence view for this topic. Sort it chronologically when you want concrete examples behind the larger pattern.
Showing 615 observations sorted from latest to earliest.
...e.
In intelligence there's more room for taste and differentiation.
The leading LLMs all have some distinctive differentiation and abilities.
Models aren't interchangeable; there is a diversity of models, each with different strengths... ...
... insight, not situated or personal.
Before you had to do that to scale.
But now LLMs give you qualitative insights at quantitative scale.
In the past you had to reduce the data down to its common denominator to do math on it.
Losing t...
...aying "well the user shouldn't have granted such a broadly scoped key."
MCP and LLMs make it so more and more people can put themselves in real danger and not realize it.
The answer is not to blame the users.
That's like blaming peopl...
Perhaps the metacrap fallacy isn't true in the age of LLMs.
The metacrap fallacy was "Once users have put enough meta-structure on their data, all kinds of automatic things will become possible."[ka][kb]
But ...
The logic of folksonomies works just as well for LLMs and humans.
Imagine tagging a person: you're about to apply the tag #husband, and the UI shows that #spouse has ten times more uses....
What's the "search, don't sort" insight in the age of LLMs?
One of Gmail's insights was "if search is fast and storage is cheap, search, don't sort."
LLMs make sifting through massive information fast.
What's...
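The "search, don't sort" idea above can be sketched in a few lines. This is a toy illustration, not any real system: the `relevance` function is a hypothetical stand-in for what an LLM relevance judgment would do, scoring notes by query-term overlap instead.

```python
# A minimal sketch of "search, don't sort" over unsorted notes.
# relevance() is a hypothetical stand-in for an LLM relevance judgment;
# here it just scores by the fraction of query terms found in the note.

def relevance(query: str, note: str) -> float:
    """Stand-in for an LLM judge: fraction of query terms present in the note."""
    terms = query.lower().split()
    hits = sum(term in note.lower() for term in terms)
    return hits / len(terms) if terms else 0.0

def search(query: str, notes: list[str], top_k: int = 3) -> list[str]:
    """Return the top_k notes by relevance -- no folders, no manual sorting."""
    return sorted(notes, key=lambda n: relevance(query, n), reverse=True)[:top_k]

notes = [
    "Bought a new bike lock, combination is in the safe",
    "LLMs make sifting through massive information fast",
    "Gmail insight: if search is fast and storage is cheap, search, don't sort",
]
print(search("search sort gmail", notes, top_k=1))
```

The point of the sketch is the shape: everything stays in one flat pile, and a fast, cheap scorer (eventually an LLM) does the sifting on demand.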
...ns.
Like "How come you have red hair but everyone else in your family doesn't?"
LLMs should have at least that much tact, but sometimes they don't.
If someone asks "What's the most embarrassing thing you know about me?", the LLM shoul...
... mechanistic loops are formal graphs of computation, which may inside them have LLMs calls, but which are sandboxed and limited.
There is an agent loop but it makes a compute graph to execute that calls tools and also sub-agents whose...
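The mechanistic-loop shape described above can be sketched as a small dependency graph whose nodes are tool calls or (stubbed) LLM calls, executed in topological order. All names here are hypothetical; `llm_call` is a stub standing in for a sandboxed model invocation that can only see its declared inputs.

```python
# A minimal sketch of a "mechanistic loop": a formal compute graph whose
# nodes may be tool calls or LLM calls, executed in dependency order.
# llm_call is a hypothetical stub; a real system would invoke a model here,
# sandboxed so it sees only its declared inputs.

from graphlib import TopologicalSorter

def llm_call(prompt: str) -> str:
    # Stub for a sandboxed, limited-scope model call.
    return f"summary({prompt})"

# Each node: (function, list of input node names)
graph = {
    "fetch":     (lambda: "raw document text", []),
    "summarize": (lambda doc: llm_call(doc), ["fetch"]),
    "count":     (lambda doc: len(doc.split()), ["fetch"]),
    "report":    (lambda s, n: f"{s} / {n} words", ["summarize", "count"]),
}

def run(graph):
    deps = {name: set(inputs) for name, (_, inputs) in graph.items()}
    results = {}
    for name in TopologicalSorter(deps).static_order():
        fn, inputs = graph[name]
        # Each node receives only the outputs of its declared dependencies.
        results[name] = fn(*(results[i] for i in inputs))
    return results

print(run(graph)["report"])
```

Because the graph is explicit, the LLM calls are just nodes inside a formal, inspectable structure rather than an open-ended loop.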
...app all of our data is in.
But there are no viable alternatives.
For most people, LLMs are a "heck yes" but ChatGPT itself is not a "heck yes".[kg]
It's a "this is the best way to use LLMs today."
They'll jump to the better way of intera...
LLMs don't do novelty themselves.
But they can give novel answers to novel questions.
You need to bring the entropy to the LLM.
If you think LLMs give bor...
...ight it.
That effect will get super-linearly harder to fight.
"Just do what the LLMs guess the API is" is kind of like wu wei.
Although it's also "lazy" and if we all do it, we'll make it harder and harder for future creators to cut a...
...e.
Many people have a subscription to a walled garden (OpenAI) to get access to LLMs.
If you're going to have a subscription to get access to LLMs, why not pick the option that is the open ecosystem, that includes other Chatbots as ap...
The emergent intelligence of a system should come primarily from humans, not LLMs.
The LLMs can be the grease, the lubricant, for the system.
But they shouldn't be its emergent soul.
That should come from real humans doing real thi...
...ck in our own hyper personalized bubble only able to talk to others mediated by LLMs, all of which work for one overlord with goals not aligned with yours.
It's not possible for it to be aligned with your intentions.
...cape of human-generated slop drowning under a grotesque dogpile of ads.
But now LLMs put a stake through the heart of it and its soul is well and truly dead.
Step 4 is now completely replaced, because LLMs can just generate a high-qua...
...resentation from Tom Costello, one of the authors of the paper that showed that LLMs are great at changing the beliefs of conspiracy theorists.
Previously everyone assumed that conspiracy theorists were inherently hard to convince.
It...
LLMs have the Harry Potter problem that recommender systems have.
Imagine a recommender system that recommends books given that you liked a specific book....
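The recommender-system version of the Harry Potter problem can be shown with a toy co-occurrence recommender. The data below is made up for illustration: because the blockbuster appears in every basket, raw co-occurrence recommends it no matter which seed book you liked.

```python
# A toy illustration of the "Harry Potter problem" (popularity bias):
# recommending by raw co-occurrence always surfaces the blockbuster that
# everyone bought, regardless of the seed book. Data is invented.

from collections import Counter

baskets = [
    {"Harry Potter", "Blindsight", "Solaris"},
    {"Harry Potter", "Dune", "Hyperion"},
    {"Harry Potter", "Blindsight", "Dune"},
    {"Harry Potter", "Hyperion", "Solaris"},
]

def recommend(seed: str) -> str:
    """Most frequent co-purchase with the seed book."""
    counts = Counter(
        book
        for basket in baskets if seed in basket
        for book in basket if book != seed
    )
    return counts.most_common(1)[0][0]

# Every seed yields the same mega-popular title, telling us nothing about taste.
print(recommend("Blindsight"), recommend("Dune"), recommend("Hyperion"))
```

The analogous failure for LLMs is surfacing the statistically dominant answer rather than the one most informative for this particular user.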
The book Blindsight seems to have implications for society and LLMs.
I just finished Peter Watts's classic hard sci-fi novel Blindsight, which is a meditation on consciousness and insight.
Afterwards, the comparison to LLMs'...
...ay more time on something than any reasonable person would think was worth it."
LLMs are infinitely patient.
If you let the tokens flow, LLMs could create magic.[lq]