A short read on the topic's time range, peak episode, and strongest associations. Use it as the quick orientation before drilling into examples.
llms appears in 615 chunks across 117 episodes, from 2023-11-06 to 2026-04-20.
Its densest episode is Bits and Bobs 2/2/26 (2026-02-02), with 15 observations on this topic.
Semantically it travels with ChatGPT, Claude, and prompt injection attack, while by chunk count it sits closest to Claude; its yearly rank moved from #3 in 2023 to #1 in 2026.
Over time
Raw mentions over time. Use this to see absolute attention, not relative rank among all topics.
Range: 2023-11-06 to 2026-04-20 · Mean: 5.3 per episode · Peak: 15 on 2026-02-02
Observations
The primary evidence view for this topic. Sort it chronologically when you want concrete examples behind the larger pattern.
Showing 615 observations sorted from latest to earliest.
... the cost gets cheaper, we'll use them for even more things.
Anything that uses LLMs will have to contend with non-trivial marginal cost, for the foreseeable future.
Electricity is cheap and yet it's still metered.
New companies have to be built to take advantage of LLMs.
It will be harder to retrofit old companies than to build new ones.
That's a process that moves at social, not technological speed.
...nthea Roberts has a new excellent piece on the 0-1, 1-10, and 10-100 impacts of LLMs for individuals.
Who ends up being the 10x vs the 100x return?
It's "who can change how they work."
... politics of multiple distractible humans with their own incentives because the LLMs will just execute on the plan with infinite patience.
So you get 10x productivity without 10x the coordination cost.
LLMs will find workarounds to achieve the goals you set.
That implies that you need to give them lots of tests.
But often the agents also create the tests...
The key question: will LLMs just compound crap code quickly, or will it accumulate and accrete in useful ways?
Will code get so unworkable that it collapses under its own weight...
... cheaper than exploit.
That happens when any new input changes.
It's less about LLMs being great at explore (though that's part of it).
It's mainly that LLMs are ushering in a new paradigm.
I love LLMs and I hate chatbots.
I think chatbots are an embarrassing party trick.
Corporations pretending to be our friends.
Depressingly, this is all people th...
"Elevated" and "amplification" are two related words around the use of LLMs.
LLMs amplify whatever you apply them to.
You can apply them to something good or bad.
For example, curiosity vs laziness.
But elevated implies the r...
...t used to be the writer, the athlete, the actor, that we elevated.
But now with LLMs it will be the editor, the coach, the director who matter most.
There's a difference between vibecoding and Elevated Engineering.
Both use LLMs in new ways.
Vibecoding is a "make it work" mindset.
A good enough, satisficing mindset.
Elevated Engineering uses LLMs to extend your expertise.
For...
How will LLMs affect open source quality?
They definitely undermine the business models of companies like Tailwind.
Those models are unlikely to ever work again.
But now engi...
LLMs are the best tech in the world to cheat at homework... and simultaneously, the best tech in the world to learn new things.
Is your default tendency l...
...ike a betrayal.
That's why Google's data is a blessing and a curse in an era of LLMs.
They're sitting on a trove of data for each user… but if they preprocessed everyone's decades of emails it would feel like a crazy betrayal, an invas...
...p and performant it can be.
Whereas if it assumes normal compute sweetened with LLMs there's no floor or ceiling.
And if you assume an LLM in the loop, the only way to improve is model quality or tools.
Whereas normal code can accret...