A short read on the topic's time range, peak episode, and strongest associations. Use it as the quick orientation before drilling into examples.
llms appears in 615 chunks across 117 episodes, from 2023-11-06 to 2026-04-20.
Its densest episode is Bits and Bobs 2/2/26 (2026-02-02), with 15 observations on this topic.
Semantically it travels with ChatGPT, Claude, and prompt injection attacks; by chunk count it sits alongside Claude, and its yearly rank moved from #3 in 2023 to #1 in 2026.
Over time
Raw mentions over time. Use this to see absolute attention, not relative rank among all topics.
Range: 2023-11-06 to 2026-04-20 · Mean: 5.3 per episode · Peak: 15 on 2026-02-02
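The per-episode mean above is presumably just total chunks divided by episode count; a quick sanity check under that assumption:

```python
# Sanity check of the summary stats: assuming the reported mean is
# total chunks / episode count, rounded to one decimal place.
total_chunks = 615
episodes = 117
mean_per_episode = round(total_chunks / episodes, 1)
print(mean_per_episode)  # 5.3, matching the reported mean
```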
Observations
The primary evidence view for this topic. Sort it chronologically when you want concrete examples behind the larger pattern.
Showing 615 observations sorted from latest to earliest.
English specs can now be "compiled" to runnable code by LLMs.
When we make programs, we compile the source code down to an executable binary.
But then we make sure to keep the source code, so when we tweak it i...
Having a background in things like sociology gives a leg up for understanding LLMs.
Most tech is understood best with a straightforward math/computer science frame.
But LLMs are a cultural technology.
They are best understood throug...
LLMs didn't train on images, but pictures.
An image could be any random bit of noise expressed as a 2D array of pixels.
A picture, in contrast, was a thin...
...stem pulls to an extreme, rigid, brittle, centroid.
A heat death of the system.
LLMs just imitate, building on the innovations of the humans who came before.
In a world where we use LLMs for more things, how can we make sure tha...
...he top models.
We can now take an IQ of 90 for granted with cheap, off-the-shelf LLMs… and it will only get cheaper.
What happens when you take IQ-90 LLMs for granted, and assume it's too cheap to meter?
One of the reasons that LLMs appear to be so resiliently good at frontend UX in modern patterns is that the code isn't challenging in a programming sense; it's a lot of boile...
... five year old, they just confidently distill an answer on the spot.
Not unlike LLMs!
We're all inherently creative.
Confidently answering a question on the spot, distilling an answer that is plausible within the context of everything...
LLMs write shitty software quickly.
They struggle to write high quality software, even with lots of scaffolding.
To work in Serious development, the softw...
It's amazing how useful large context windows are in LLMs.
It's barely been a year since we had to deal with minuscule context windows of 4k tokens or so.
It was like living in the stone age; can you even im...
...or.
In that world, we'd have LLM-powered aggregator chatbots, but no way to use LLMs in other applications.
A recent report said that something like 75% of OpenAI's revenue comes from ChatGPT.
All it would have taken in this alternate...
LLMs are excellent teachers.
They can patiently engage with our questions, helping you learn the material.
But imagine trying to learn German and using an...
Some people critique LLMs as being just like Bitcoin: a massive energy hog.
However, there's a key difference.
In Bitcoin, the energy use is the point.
The price of computatio...
The assumption that Chatbots are the killer app for LLMs presupposes a centralized, necessarily one-size-fits-none system.
When you centralize, you have to have a one-size-fits-all policy or approach, and g...
LLMs only do superficial pattern recognition, but they can do it incredibly robustly.
They are amazing at superficial absorption of patterns.
But if you b...
...a tamagotchi.
Anything with a face that you can talk to.
Pond scum has no face.
LLMs and other emergent algorithms are closer to pond scum intelligence than human intelligence.
But LLMs put a face on pond scum intelligence.
You don't ...
Lots of people are talking about how LLMs might change how large organizations work.
I think LLMs will almost certainly have a big effect on how organizations work.
But I don't think it will ...
One-ply thinking: LLMs will make navigating bureaucracies and paperwork easier.
Multi-ply thinking: LLMs will make it so that bureaucratic processes get even more labyrint...