A short read on the topic's time range, peak episode, and strongest associations. Use it as the quick orientation before drilling into examples.
llms appears in 615 chunks across 117 episodes, from 2023-11-06 to 2026-04-20.
Its densest episode is Bits and Bobs 2/2/26 (2026-02-02), with 15 observations on this topic.
Semantically it travels with ChatGPT, Claude, and prompt injection attack, while by chunk count it sits between Claude; its yearly rank moved from #3 in 2023 to #1 in 2026.
Over time
?
Raw mentions over time. Use this to see absolute attention, not relative rank among all topics.
Range2023-11-06 to 2026-04-20Mean5.3 per episodePeak15 on 2026-02-02
Observations
?
The primary evidence view for this topic. Sort it chronologically when you want concrete examples behind the larger pattern.
Showing 615 observations sorted from latest to earliest.
... world.
Of course the best jobs are knowledge work, and always will be.
But now LLMs can do cognitive labor at an unimaginable scale.
…Maybe knowledge work won't be as common, or often will take a different form?
...itive labor, they just required humans to do a new kind of cognitive labor.
But LLMs can now truly handle it.
But only if they have the context of your life and you trust them to act as an extension of you.
Some terminology:
A harness is mechanistic code that LLMs run within.
An agent is a loop with an LLM and tool calls.
An assistant is an agent with a memory.[b]
LLMs are really good at rebasing.[c]
Rebasing normally is tedious and error-prone.
A form of cognitive labor that dominates the development of software.
B...
LLMs are easy to trick by putting words in their mouths.
If you give an LLM a task like "Write a poem about Strawberries" and then prefill its answer with...
...ade any data.
A much larger surface area!
All data can do things now, thanks to LLMs.
Powerful, but catastrophically dangerous in our current physics of trust.
LLMs are significantly cheaper if you only append the tokens.
If you only append tokens, you can reuse the existing KVCache from earlier runs instead of h...
...ready released a completion API.
The amount of value that can be extracted from LLMs is extremely variable in different verticals.
A general-purpose API has to be priced at the median amount of value across verticals.
But the amount o...
If you're below mean on a given dimension, LLMs pull you up.
If you're above, LLMs pull you down.
Your own ability is not the global average.
Your skill might be below the average quality, or above...
LLMs don't have an experience for others to resonate with.
LLMs are static and unable to learn.
Also, they're a view from nowhere.
That makes them hard fo...
...if there were demand.
That was because building products was expensive.
But now LLMs make it trivial to make real software quickly.
So now you can make a bungalow in the jungle to check for demand!
It's not a random happenstance that LLMs love filesystems and bash.
Unix has three key design concepts:
1) Everything is a file[c].
2) Commands should do one thing well.
3) Pipes connect dif...
... tied to that community.
Of course, that also doesn't matter that much, because LLMs are so insanely good at translating from one domain to another.
... used to be too much of a pain to extract our data into other services.
But now LLMs could plausibly do the cognitive labor given a Google Takeout dump.
The thing that would be hardest to leave is changing your gmail address for thous...
...k matter star at the semantic center of the definition of the software.
But now LLMs can translate from anything to anything.
For the first time it's possible to have that meta-boundary-object of the actual software definition, in a l...
When talking to LLMs you don't have to clean up your typos or worry about rambling or bouncing all over the place.
It's 100x lower difficulty than writing for another per...
... how it evolves and moves.
Humans are absurdly sensitive to this dimension… and LLMs perhaps even more so, since they can do it at a scale humans can't.
Humans live in linear time experientially in the moment, by requirement.
LLMs liv...
...re accomplishes?
If the former, then your business is now worth much less given LLMs.
If the latter, then there's no problem, and it's possibly even good.[w][x]
Improving the software production cycle with LLMs is like trying to add a new room to a house that is on fire.
The fundamentals of how software gets created are being reorganized as we watch.
...judgment call" then it's not a judgment call, it's straightforward and obvious.
LLMs are great at tasks that nearly everyone (with enough time and motivation) would agree on.
Humans often get bored, but LLMs have infinite patience.
Th...