A short read on the topic's time range, peak episode, and strongest associations. Use it as the quick orientation before drilling into examples.
llms appears in 615 chunks across 117 episodes, from 2023-11-06 to 2026-04-20.
Its densest episode is Bits and Bobs 2/2/26 (2026-02-02), with 15 observations on this topic.
Semantically it travels with ChatGPT, Claude, and prompt injection attack, while by chunk count it sits between Claude; its yearly rank moved from #3 in 2023 to #1 in 2026.
Over time
?
Raw mentions over time. Use this to see absolute attention, not relative rank among all topics.
Range2023-11-06 to 2026-04-20Mean5.3 per episodePeak15 on 2026-02-02
Observations
?
The primary evidence view for this topic. Sort it chronologically when you want concrete examples behind the larger pattern.
Showing 615 observations sorted from latest to earliest.
...n it easily.
It's child's play to prevent injection with a bit of escaping.
Now LLMs with tool use allow all data to be executable.
A massive expansion of threat surface area.
So now all of the systems builders are thrust into the wor...
Prompt injection sets the ceiling of potential of LLMs.
Claude and OpenAI will build integrations into chat via things like MCP.
Vibe coders will get stuck making dead end little island apps.
Both will ge...
The unlock for LLMs vs deep learning is they're general purpose.
Deep learning techniques of the mid 2010's relied on supervised learning.
They could do impressive feats...
LLMs are extremely confusable deputies.
In security, one type of vulnerability is the confused deputy.
A powerful entity is tricked into applying their po...
I like the metaphor of sleepwalking geniuses for LLMs.
It captures how powerful they are… and also how silly they can be if you don't constantly guide them.
LLMs allow moving from allocentric knowledge to egocentric knowledge.
In the world of maps, "allocentric" means world-aligned, and "egocentric" means pers...
One superpower of LLMs: patience too cheap to meter.[qz]
When you're dealing with another person, you don't want to waste their time or say something that will make them th...
LLMs will likely supercharge the amount of legalese.
Whoever uses the most well-applied legalese gets an edge over their counterparty.
Before, only lawyer...
You have to work to get disconfirming evidence from LLMs[rb].[rc][rd]
LLMs are too eager to please.
If you aren't careful they won't question you, even if you give it false premises.
A trick someone told me...
Is the LLM the one calling tools, or can tools use LLMs inside of themselves?[rg]
Which is on top of execution, a chat thread orchestrating tools, or a traditional bit of software that's orchestrating tool...
Don't use LLMs as the software, use them to write the software.[rh]
If LLMs make software for basically free, then you can have the LLM generate it on the fly.
An e...
A system that meets LLMs where they are for code will get the most out of them.
LLMs can write simple React components with high quality
They need a lot more hand-holding and...
...a natural affinity for programming) actively enjoy solving the puzzles.
But now LLMs can solve the puzzle for you, and it's up to you to just verify its work[ri] and think architecturally.
It's a different kind of puzzle, one that is ...
Might we see the return of tech small businesses in the era of LLMs?
In the 80's and 90's there were thousands of tech small businesses.
Later, as the cloud phase heated up, the efficiency of scale became more importa...
To LLMs, humans will seem like trees.
Humans can perceive at most about 10 bits of information per second.
This was explored in a paper called The Unbearable...
There's currently no mainstream voice that has grounded optimism about LLMs.
There are either the hyper tech "it will be wonderful, simply trust us" or the anti tech "this is terrible and you should be terrified".
Someone sho...
Anthropic's research on the inner workings of LLMs is fascinating.
They're studying LLMs less like an engineer would study a technical artifact and more like a neuroscientist would study a mind.
All k...