A short read on the topic's time range, peak episode, and strongest associations. Use it as the quick orientation before drilling into examples.
The topic "llms" appears in 615 chunks across 117 episodes, from 2023-11-06 to 2026-04-20.
Its densest episode is Bits and Bobs 2/2/26 (2026-02-02), with 15 observations on this topic.
Semantically it travels with ChatGPT, Claude, and prompt injection attack, while by chunk count it sits nearest to Claude; its yearly rank moved from #3 in 2023 to #1 in 2026.
Over time
Raw mentions over time. Use this to see absolute attention, not relative rank among all topics.
Range: 2023-11-06 to 2026-04-20 · Mean: 5.3 per episode · Peak: 15 on 2026-02-02
Observations
The primary evidence view for this topic. Sort it chronologically when you want concrete examples behind the larger pattern.
Showing 615 observations sorted from latest to earliest.
...ated based on what would have been hard to do in the past.
But now everyone has LLMs.
It's not a cheat code that only you figured out.
The "be the first, and then get a compounding advantage" strategy only works if it's hard to build what you ...
...of fast content production, taste remains just as important.
If you have good taste, LLMs now let you generate that stuff 10 times faster.
With tools like generative AI, the difference between curation and creation will get e...
Things written by LLMs are slop.
Integration code written to combine disparate things is glue code.
Glue code written by LLMs is glop.
Glop is a kind of black box; it doesn...
LLMs can be used as compilers.
They compile English to code.
This is extraordinary!
As a creator, you can sketch out real code and English intermixed.
The...
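The "English as source language" idea can be sketched as a thin wrapper. Note that `call_llm` below is a hypothetical stand-in for whatever model API you use, stubbed with a canned reply so the sketch is self-contained:

```python
# Sketch of "LLM as compiler": English in, runnable code out.
# call_llm() is a hypothetical stand-in for a real model API; it is
# stubbed here with a canned response so the example runs offline.

def call_llm(prompt: str) -> str:
    """Pretend model call: returns Python source for the prompt."""
    return "def slugify(title):\n    return title.lower().replace(' ', '-')\n"

def compile_english(spec: str) -> str:
    """'Compile' an English spec into Python source via the model."""
    prompt = f"Write a Python function for this spec:\n{spec}\nReturn only code."
    return call_llm(prompt)

source = compile_english("Turn a post title into a URL slug.")
namespace: dict = {}
exec(source, namespace)  # load the generated function into a namespace
print(namespace["slugify"]("Hello World"))  # -> hello-world
```

The interesting part is the workflow, not the stub: the spec, the generated source, and the loaded function all sit side by side, so English and code can be intermixed and iterated on.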
The most important thing to drive LLMs is to curate good context.
With the right context, LLMs are very good at producing high-quality output.
The hard part is no longer the magical thing ...
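One minimal way to picture "curating good context": select only the documents relevant to the task before anything reaches the model. The word-overlap scoring below is a deliberately crude placeholder; the selection step is the point, not the metric:

```python
# Sketch of context curation: keep only the k most task-relevant
# documents and pack them into the prompt. Scoring is naive word
# overlap, standing in for whatever retrieval you actually use.

def words(text: str) -> set[str]:
    """Lowercased words with trailing punctuation stripped."""
    return {w.strip(".,") for w in text.lower().split()}

def score(task: str, doc: str) -> int:
    """Count task words that also appear in the document."""
    return len(words(task) & words(doc))

def curate(task: str, docs: list[str], k: int = 2) -> str:
    """Keep the k most relevant docs and pack them into a prompt."""
    best = sorted(docs, key=lambda d: score(task, d), reverse=True)[:k]
    context = "\n---\n".join(best)
    return f"Context:\n{context}\n\nTask: {task}"

docs = [
    "Invoices are stored as JSON with fields id, total, currency.",
    "The deploy pipeline runs nightly on the main branch.",
    "Totals must be summed per currency before reporting.",
]
prompt = curate("Sum invoice totals per currency", docs)
print(prompt)
```

With the right two documents in the prompt and the deploy-pipeline noise left out, the model's job becomes mostly mechanical.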
People who can code via LLMs don't necessarily have an intuition for what plain old code can do.
That leads to, for example, trying to create an Anthropic Artifact to identify wh...
Artifacts were a low-hanging fruit made possible by LLMs, just waiting to be discovered.
How Anthropic Built Artifacts: https://newsletter.pragmaticengineer.com/p/how-anthropic-built-artifacts
The feature w...
...being too precious about it.
That was extremely hard to do automatically before LLMs.
You had to significantly cut down on the things that could be done, to the subset that could be made tinkerable.
But LLMs are squishy!
They can squi...
...it was told to do, no judgment.
Cheap to execute!
But now software can also use LLMs in its execution.
LLMs are like magic in software.
Software with an LLM inside is squishy, alive, emergent but also a bit unpredictable.
An LLM can u...
LLMs scramble the cost equation of software.
Before, software was expensive to write, cheap to run.
But now LLMs make software much cheaper to write.
At l...
Software written by LLMs is merely good, not great.
If it's small and similar to existing software, it's typically good enough.
But if it's larger, or unlike existing softwar...
A metaphor for LLMs: an electric bicycle for the mind.
Bicycles are about extending human agency but you're very much still steering.
If you already know how to bike, yo...
...iding hand of a human, the more often it will produce these turds.
That's where LLMs that can give diffs inline while you're working are more helpful.
The human and the LLM can iterate together continuously, instead of the LLM going o...
...nd value of the product gets higher... automatically!
You could for example use LLMs to set the static floor of quality (a close-ended component) and then add an ecosystem component on top that compounds in quality with more usage (an...
LLMs are so charismatic, you can talk to them like a human.
So every AI tool puts them front and center, even though in most of the cases you want them to...

Why do engineers have such a hard time working with LLMs?
Because we're using engineering metaphors to describe a fundamentally squishy thing that is better described by organic or biological metaphors.
The...
...st times computers can't extract that context. But humans can get the vibe, and LLMs can too.
If an LLM can give a collection of data a good title, that shows that the context established is clear.
Each incremental step of work should...
...ut that lack of structure will bite you later if you try to do anything scaled.
LLMs can do all kinds of fuzzy structured things.
For example, take a picture of the books on your bookshelf and ask for a JSON representation, most LLMs ...
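The bookshelf-to-JSON pattern can be sketched as: ask for a JSON array, then parse defensively, since models often wrap valid JSON in a markdown fence. Here `fake_llm` is a stand-in for a real vision/chat API call, stubbed so the sketch runs:

```python
import json

# Sketch of fuzzy-to-structured extraction: request JSON, then parse
# defensively. fake_llm() is a hypothetical stand-in for a real model
# call, stubbed with a typical fenced reply.

def fake_llm(prompt: str) -> str:
    """Pretend model reply: valid JSON wrapped in a code fence."""
    return '```json\n[{"title": "Dune", "author": "Frank Herbert"}]\n```'

def extract_books(image_note: str) -> list[dict]:
    """Ask for the books in an image as a JSON array and parse the reply."""
    reply = fake_llm(
        f"List the books in {image_note} as a JSON array "
        "of objects with title and author fields."
    )
    text = reply.strip()
    if text.startswith("```"):
        # Drop the opening fence line and the closing fence.
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    return json.loads(text)

books = extract_books("bookshelf.jpg")
print(books[0]["title"])  # -> Dune
```

The fence-stripping fallback is the practical half of "squishy": the structure usually comes back, just not always in the exact wrapper you asked for.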
...t people used to building traditional software are having trouble incorporating LLMs.
Traditional software does exactly what you tell it to (which might not be what you meant).
You can design it precisely and pin it to the wall and it...
A pattern we see in LLMs: linear improvement in quality for exponential increases in costs.
(Of course, over time we've also rapidly improved the efficiency to deliver previo...