A short read on the topic's time range, peak episode, and strongest associations. Use it as the quick orientation before drilling into examples.
llms appears in 615 chunks across 117 episodes, from 2023-11-06 to 2026-04-20.
Its densest episode is Bits and Bobs 2/2/26 (2026-02-02), with 15 observations on this topic.
Semantically it travels with ChatGPT, Claude, and prompt injection attacks; by chunk count it sits adjacent to Claude; its yearly rank moved from #3 in 2023 to #1 in 2026.
Over time
Raw mentions over time. Use this to see absolute attention, not relative rank among all topics.
Range: 2023-11-06 to 2026-04-20 · Mean: 5.3 per episode · Peak: 15 on 2026-02-02
Observations
The primary evidence view for this topic. Sort it chronologically when you want concrete examples behind the larger pattern.
Showing 615 observations sorted from latest to earliest.
Both The Algorithm and LLMs are ultimately powered by human decisions.
The Algorithm here meaning any ranking function that relies on human interaction to rank an infinite feed....
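The definition above — a ranking function powered entirely by human interaction — can be sketched in a few lines. Everything here (the `FeedItem` fields, the weights) is an illustrative assumption, not anyone's actual algorithm; the point is that every input to the score is a logged human decision.

```python
# A minimal sketch of "The Algorithm" as defined above: a ranking function
# over a feed, scored purely from human interaction signals.
# All names and weights are hypothetical.
from dataclasses import dataclass


@dataclass
class FeedItem:
    item_id: str
    likes: int
    replies: int
    dwell_seconds: float


def rank_feed(items: list[FeedItem]) -> list[FeedItem]:
    """Order items by a score built entirely from human signals."""
    def score(item: FeedItem) -> float:
        # Weights are arbitrary; what matters is that every term
        # originates in a human decision (a like, a reply, attention).
        return 1.0 * item.likes + 2.0 * item.replies + 0.1 * item.dwell_seconds
    return sorted(items, key=score, reverse=True)


feed = [
    FeedItem("a", likes=3, replies=0, dwell_seconds=5.0),
    FeedItem("b", likes=1, replies=4, dwell_seconds=2.0),
]
ranked = rank_feed(feed)
```

Swap in a learned model for `score` and the shape is the same: the model is fit to, and ranks by, human interaction data.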
... it can be compelling on its own.
But they also cement the wrong mental model: "LLMs are just like a person, but virtual."
That vastly misunderstands what LLMs are, what they can do, how they could be used.
LLMs are an alien brain.
Th...
...ously useful in improving the capability of models.
In the earlier micro-era of LLMs it was all about the scale of how much world knowledge you could cram in.
Extremely capital intensive.
It feels like we've topped out on that, and no...
Two very different approaches for LLMs now:
1) The 'Cloud Provider' model: commodity hosting of models.
Compete on cost.
The hosting is the point.
The model is commodity.
2) The 'LLM Provi...
... also if all of the details were visible you'd be overwhelmed by it.
Humans and LLMs would both struggle.
A tangled mess of wiring between cells.
Finally, when the medium to express your ideas has no opinion, it just ends up being "ju...
...y software in the small is now free, so instead of thinking "how can I make the LLMs coding output more like how we write software today", think "what can we now create given that this whole class of stuff is now free"
You need a new ...
Without humans in the loop LLMs just produce slop.
Humans are the curatorial energy that helps find the greatness amid the cacophony and extract it.
The right answer is not "how to h...
Remember: we're at the "filming stage shows[adb]" stage with LLMs.
In film, it took a while to discover the power of montage, a particular superpower of the medium.
We don't yet know what the particular superpower of...
The challenge with LLMs is often giving them the right context.[add][ade]
It's not that they lack a baseline common sense, it's that they don't know anything about your part...
A simple trick to change how LLMs reason: every so often, inject "Wait, but " into the in-progress reasoning token stream[adf].
This forces the LLM to reflect on what it might have mi...
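The injection trick above can be sketched as a wrapper over a token stream. The token source here is a stub list; with a real model you would intercept its streamed reasoning tokens instead (the cadence and cue string are assumptions for illustration).

```python
# Sketch of the "Wait, but " trick: periodically splice a reflection cue
# into an in-progress reasoning token stream.
from typing import Iterable, Iterator


def inject_reflection(tokens: Iterable[str], every: int = 8,
                      cue: str = "Wait, but ") -> Iterator[str]:
    """Yield tokens unchanged, inserting `cue` after every `every` tokens."""
    for i, tok in enumerate(tokens, start=1):
        yield tok
        if i % every == 0:
            yield cue  # nudges the model to reconsider what it may have missed


# Stub stream standing in for a model's reasoning tokens.
stream = [f"t{i} " for i in range(1, 18)]
out = list(inject_reflection(stream, every=8))
```

In practice the injected cue is fed back as part of the model's own context, so the continuation it generates next has to grapple with the "Wait, but " it now appears to have written itself.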
... any human... and also lacks some aspects of common sense.
I see dialogues with LLMs as not a conversation but an act of co-creation, and a chatbot UX seems to undermine that, and foreclose[adi] on other interactions.
An alternate UX ...
...e you're playing with the AI to develop intuition for what it can do.[ads][adt]
LLMs are primarily a sociological phenomenon; understanding them at the object level of running them doesn't tell you what you can use them for.
Right brain energy will become more important in a world of LLMs.
Left brain energy is structured, convergent.
Right brain energy is creative, divergent.
LLMs can do convergent, best-practice thinking quite well.
L...
... way to architect software, that is AI-native.
Don't use the disruptive tech of LLMs to try to build the apps of today 20% faster.
Build new kinds of apps.[adx]
Shitty software in the small allows infinite disposable components.[ady]
... until after!
Even a lot of people in the industry who are thinking a lot about LLMs are implicitly thinking about them as simply a sustaining technology (though they think they're thinking about things disruptively).
Disruptive thing...
... say based on the category they fit in.[afv]
A similar test could be applied to LLMs: the novelty of an utterance has to do with the inverse of the likelihood[afw] that that next token would be predicted by the LLM based on the preced...
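The novelty test described above is essentially surprisal: score an utterance by how unlikely each next token is under a predictive model. As a sketch, a toy bigram model stands in for the LLM here; the corpus and smoothing parameters are illustrative assumptions.

```python
# Novelty as inverse likelihood: mean surprisal -log p(next | preceding).
# Higher = less predictable by the model = more novel.
import math
from collections import Counter, defaultdict


def train_bigram(corpus: list[str]) -> dict:
    """Count next-token frequencies for each preceding token."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1
    return counts


def novelty(utterance: list[str], counts: dict,
            alpha: float = 1.0, vocab: int = 100) -> float:
    """Mean surprisal in nats, with add-alpha smoothing over a vocab size."""
    total = 0.0
    for prev, nxt in zip(utterance, utterance[1:]):
        c = counts[prev]
        p = (c[nxt] + alpha) / (sum(c.values()) + alpha * vocab)
        total += -math.log(p)
    return total / max(1, len(utterance) - 1)


corpus = "the cat sat on the mat the cat sat on the rug".split()
model = train_bigram(corpus)
common = novelty("the cat sat".split(), model)
rare = novelty("the rug sat".split(), model)
```

A phrase the model has seen often scores low; an unexpected continuation scores high — the same shape of test, with an LLM's next-token probabilities in place of the bigram counts.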
The power of LLMs comes from humans.
The background knowledge that makes them smart comes from culture[afy].
But also the thing that makes their output good is the q...
LLMs talk to us like they're a human, but they're a collective hive mind of society, presenting as a singular "person".
Like the alien in Contact.
"I'm as...
...ake work.
Many hobbies are very hard to start doing, and hard to get better at.
LLMs are great at being an intellectual and creative dance partner, helping you grow in a forgiving environment.[agl][agm]
LLMs make it "cheaper" to start...