A short read on the topic's time range, peak episode, and strongest associations. Use it as the quick orientation before drilling into examples.
llms appears in 615 chunks across 117 episodes, from 2023-11-06 to 2026-04-20.
Its densest episode is Bits and Bobs 2/2/26 (2026-02-02), with 15 observations on this topic.
Semantically it travels with ChatGPT, Claude, and prompt injection attack, and by chunk count it sits alongside Claude; its yearly rank moved from #3 in 2023 to #1 in 2026.
Over time
Raw mentions over time. Use this to see absolute attention, not relative rank among all topics.
Range: 2023-11-06 to 2026-04-20
Mean: 5.3 per episode
Peak: 15 on 2026-02-02
Observations
The primary evidence view for this topic. Sort it chronologically when you want concrete examples behind the larger pattern.
Showing 615 observations sorted from latest to earliest.
...tural Schelling points requires cognitive labor that used to be scarce.
But now LLMs might help sift through and find the obvious Schelling points.
Of course, this would change the internal politics meta-game…
...ently.
It used to be a pain to think through it for each decision.
But now with LLMs and their infinite patience, it's easier to have a fuzzy set of values and mission statement operationalized.
Every decision can be run through an LL...
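A minimal sketch of that workflow, assuming an OpenAI-style chat API; the client, model name, mission text, and prompt wording are all illustrative assumptions, not anything from the episode.

```python
# Minimal sketch: operationalize a fuzzy mission statement by running
# each decision through an LLM. Everything here (client, model name,
# mission text, prompt wording) is an illustrative assumption.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

MISSION = "Favor long-term user trust over short-term growth."  # hypothetical

def check_decision(decision: str) -> str:
    """Ask the model whether a decision is consistent with the mission."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=[
            {"role": "system",
             "content": (f"You evaluate decisions against this mission: {MISSION} "
                         "Answer CONSISTENT or INCONSISTENT, then one sentence of reasoning.")},
            {"role": "user", "content": decision},
        ],
    )
    return response.choices[0].message.content

print(check_decision("Add a recurring upsell popup to the onboarding flow."))
```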
LLMs are losing the ability to simulate real people.
LLMs are largely a warped mirror of all of the human input.
It used to be possible to use LLMs as a k...
Three pace layers of prototyping with LLMs (see the sketch after this list):
1) The LLM does everything.
Expensive, loosey-goosey, flexible.
2) The LLM behavior is sublimated into a mechanistic harness that can be run inside o...
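A minimal sketch of the first two layers (the third is cut off above), under stated assumptions: `llm` is a stand-in for any real model call, and the ticket-classification task and regex trick are invented to make the contrast concrete.

```python
# Sketch of the first two pace layers. `llm` is a placeholder for a
# real model call; the task and the regex trick are assumptions.
import re
from typing import Callable

def llm(prompt: str) -> str:
    """Stand-in for a real chat-model call."""
    raise NotImplementedError

# Layer 1: the LLM does everything, on every input.
# Expensive and loose, but maximally flexible.
def classify_flexible(ticket: str) -> str:
    return llm(f"Label this support ticket as 'billing' or 'bug': {ticket}")

# Layer 2: the LLM's behavior is sublimated into a mechanistic harness.
# Ask the model once for a deterministic rule, then run the rule for free.
def build_classifier() -> Callable[[str], str]:
    pattern = llm("Write one regex (pattern only) that matches billing-related tickets.")
    compiled = re.compile(pattern, re.IGNORECASE)
    return lambda ticket: "billing" if compiled.search(ticket) else "bug"
```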
... but go to a different source you'll get a different answer.
The monoculture of LLMs leads to everyone having the same answers to the same questions.
I heard of someone who thought someone else had plagiarized their essay.
Turns o...
This week's round-up of "we're in the wild west era with LLMs":
A postmortem for a vibecoded tool called DrawAFish that had abuse problems.
A Cursor exploit that allows arbitrary remote code execution.
AgentFlay...
...context as possible, you want the right context.
The wrong context confuses the LLMs and makes them spiral out of control, losing the plot.
What you want is the smallest amount of context that will give the LLM what it needs to give y...
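A sketch of that selection step under loud assumptions: word overlap stands in for a real relevance model, and four characters per token stands in for a real tokenizer.

```python
# Sketch: pick the smallest context that still serves the question.
# Word-overlap scoring and the 4-chars-per-token estimate are crude
# stand-ins for a real embedding model and tokenizer.
def select_context(question: str, chunks: list[str], budget_tokens: int = 1000) -> list[str]:
    question_words = set(question.lower().split())

    def relevance(chunk: str) -> int:
        return len(question_words & set(chunk.lower().split()))

    picked: list[str] = []
    used = 0
    for chunk in sorted(chunks, key=relevance, reverse=True):
        cost = len(chunk) // 4  # rough token estimate
        if used + cost > budget_tokens:
            continue  # skip anything that would blow the budget
        picked.append(chunk)
        used += cost
    return picked
```

The point is the budget, not the scorer: a tight cap forces the ranking to matter, which is the opposite of stuffing the window.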
Imagine a system where LLMs generate new combinations that a community of humans curate emergently through their individual authentic actions.
LLMs help create "patterns": littl...
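A toy sketch of that loop; the generation step is stubbed as brute-force pairing where a real system would prompt a model, and curation is reduced to counting individual choices. All names are invented.

```python
# Toy sketch: a model proposes combinations, individual human actions
# curate them, and a ranking emerges from the votes.
import itertools
from collections import Counter

ELEMENTS = ["calendar", "chat", "map"]

# Stub for the LLM generation step: brute-force pairings.
proposals = [f"{a} + {b}" for a, b in itertools.combinations(ELEMENTS, 2)]

votes: Counter[str] = Counter()

def act(choice: str) -> None:
    """One authentic action: a user adopts the combination they actually use."""
    votes[choice] += 1

act("calendar + map")
act("calendar + map")
act("chat + map")
print(votes.most_common())  # the community's emergent curation
```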
...y code.
If you're a curated programmer you'll get a lot of curated code.
Before LLMs, sloppy programmers could at least make a lot of progress, which was an advantage.
But now the LLMs do that part for free and for everyone.
The balan...
Anthea's newest piece compares LLMs to freeing a caged tiger.
The thinker is a caged tiger set free by intellectual collaborators willing to go wherever you want to go.
A cage is also...
...people making real decisions that align with their authentic needs and context.
LLMs are a fossilized version of real people's decisions; they can't pick something novel.
It looks superficially the same, but it's fundamentally different...
...atabase to a real consumer need requires lots of complex specialized stuff.
But LLMs might not need that.
It's kind of weird that the LLM creators also have consumer frontends to their models.
It shows how powerful LLMs are that they ca...
There is no solution to prompt injection in systems where LLMs call the shots.
An LLM that sees raw data and is asked to make load-bearing security decisions cannot be made safe, no matter how good the model gets.
...
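One way to render that claim concrete: keep the load-bearing decision in deterministic code that injected text cannot argue with. The allowlist, URL handling, and names below are all illustrative assumptions.

```python
# Sketch: the model's opinion is advisory; a deterministic allowlist is
# the actual gate. Injected text can flip the model's verdict but not
# the check. Allowlist and URL handling are illustrative assumptions.
ALLOWED_HOSTS = {"api.internal.example"}  # hypothetical allowlist

def fetch_via_tool(url: str, model_says_safe: bool) -> str:
    host = url.split("/")[2] if "://" in url else url
    if host not in ALLOWED_HOSTS:
        return f"refused: {host} is not allowlisted"
    return f"fetching {url}"  # real fetch elided

# Even when an injected prompt convinces the model the call is "safe",
# the deterministic gate still refuses it.
print(fetch_via_tool("https://attacker.example/exfil", model_says_safe=True))
```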
...ld add the word "useful" in front of intelligence.
You could imagine having two LLMs that require huge amounts of compute locked in an infinite debate spiral about how many angels can dance on the head of a pin.
Or more likely: a red ...