A short read on the topic's time range, peak episode, and strongest associations. Use it as the quick orientation before drilling into examples.
llms appears in 615 chunks across 117 episodes, from 2023-11-06 to 2026-04-20.
Its densest episode is Bits and Bobs 2/2/26 (2026-02-02), with 15 observations on this topic.
Semantically it travels with ChatGPT, Claude, and prompt injection attack; by chunk count it sits alongside Claude. Its yearly rank moved from #3 in 2023 to #1 in 2026.
Over time
Raw mentions over time. Use this to see absolute attention, not relative rank among all topics.
Range: 2023-11-06 to 2026-04-20 · Mean: 5.3 per episode · Peak: 15 on 2026-02-02
Observations
The primary evidence view for this topic. Sort it chronologically when you want concrete examples behind the larger pattern.
Showing 615 observations sorted from latest to earliest.
...e every LLM provider makes available an API, but also has a 1P service.
Vanilla LLMs are so useful that, for now, the no-frills default UX from the providers wins by default.
This leads us to analyze them mostly like the consumer aggr...
...ases, an experimenter mindset is useful.
The upside for figuring out how to use LLMs is higher and the downside is lower, so the experimentation mindset is even more valuable than it once was.
...continually tackling small tasks.
But to discover interesting things to do with LLMs in this early era will require curiosity, earnestness, a sense of play, and a willingness to experiment.
Intellectual interest is not sufficient; you hav...
Work that will be disrupted by LLMs: work that could be Mechanical Turked today.
That is, work that could be atomized into infinitesimal chunks that any reasonable human could do with r...
Pond scum is emergently intelligent, but it can't speak to us.
But LLMs can, which is confusing!
We think of it as a thing, with a complex inner world, because it can speak to us and sound human-like.
This confusion leads...
...ointed.
But unlike a magician, there is real magic going on.
It's just that the LLMs are what is magic.
The wizard just knows how to marshal that magic effectively.
If you want to impress people with your LLM wizardry, don't show the...
...hallow, surface-level summary statistics.
Qualitative: deep but narrow.
But now LLMs can do mediocre (but robustly mediocre) human-ish analysis of non-numerical data.
So you can get qualitative style depth with quantitative level meas...
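A tiny sketch of that trade resolved: the same cheap rubric applied to every free-text item, where human coders could only sample. `classify` is a stub standing in for an LLM call, and the rubric is illustrative.

```python
# Qualitative-style coding at quantitative scale.
# `classify` is a stub for a cheap LLM call applying a fixed rubric.

def classify(text: str) -> str:
    """Toy rubric; a real version would prompt a model with coding criteria."""
    return "positive" if "love" in text.lower() else "neutral"

responses = [
    "I love the new onboarding flow",
    "The export button is hard to find",
]

# Human coders would sample; this codes every response, mediocrely but robustly.
codes = [classify(r) for r in responses]
```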
...umanity we've only tried a small subset, because human effort is expensive.
But LLMs can do a mediocre human analysis, cheaply.
So we can find the low-hanging fruit that already existed but hadn't been found yet.
Even if LLMs are just...
A thing I want to ask LLMs to do (in a series of prompts):
1) enumerate hundreds of academic disciplines
2) for each list the 20 big ideas (replicated, core, differentiated ideas)...
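That series of prompts could be sketched as a two-stage pipeline. `ask_llm` is a stub for whatever client you use; the helper names are assumptions, not a real API.

```python
# Sketch of the prompt series above as a two-stage pipeline.
# `ask_llm` is a stub standing in for a real LLM API call.

def ask_llm(prompt: str) -> str:
    """Stub; swap in a real model client here."""
    if "academic disciplines" in prompt:
        return "physics\neconomics\nlinguistics"
    return "1. idea one\n2. idea two"

def big_ideas_by_discipline(n_disciplines: int = 100, n_ideas: int = 20) -> dict:
    # Step 1: enumerate disciplines, one per line.
    listing = ask_llm(f"List {n_disciplines} academic disciplines, one per line.")
    disciplines = [d.strip() for d in listing.splitlines() if d.strip()]
    # Step 2: for each discipline, ask for its replicated, core,
    # differentiated big ideas.
    return {
        d: ask_llm(f"List the {n_ideas} big ideas of {d}: "
                   "replicated, core, differentiated.")
        for d in disciplines
    }

results = big_ideas_by_discipline()
```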
...your prediction would be way better.
That's hard for humans to do… but easy for LLMs!
Just feed it all of the embeddings of their writing, and it can do a convincing facsimile of that expert's reasoning.
If you had that, you could ass...
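A toy sketch of the retrieval half of that idea: index an expert's writing, pull the passages nearest a question, and hand them to a model as context. Everything here is illustrative — the tiny corpus, and word-overlap similarity standing in for real embeddings.

```python
import re

# Toy retrieval over an expert's writing. A real system would compare
# embedding vectors; shared-word counts stand in for similarity here.

corpus = [
    "Markets misprice tail risk when volatility is low.",
    "Good forecasts start from base rates.",
]

def tokens(text: str) -> set:
    return set(re.findall(r"[a-z]+", text.lower()))

def similarity(a: str, b: str) -> int:
    """Shared-word count; swap for cosine similarity over embeddings."""
    return len(tokens(a) & tokens(b))

def expert_context(question: str, k: int = 1) -> list:
    ranked = sorted(corpus, key=lambda p: similarity(p, question), reverse=True)
    return ranked[:k]  # feed these passages to the model as context

ctx = expert_context("How should I think about tail risk?")
```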
There are different pace layers for getting results out of LLMs.
Slowest: train a new foundation model from scratch.
Also extremely capital intensive!
Medium: fine-tune an existing model
Fast: prompt engineering
C...
LLMs are trained on the past.
They can reproduce things that were novel and high taste from the past, but not the current frontier.
Taste constantly evolv...
...n create better results, even if it's a dialogue between two "boring" thinkers.
LLMs can have more conversations, even with other LLMs, to find interesting non-centroid beliefs.
For example: have one LLM participant play the role of g...
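A minimal loop for that kind of LLM-on-LLM dialogue. `chat` is a stub for a persona-conditioned model call, and the role names are my own illustrative choices (the specific role in the excerpt above is elided).

```python
# Two LLM participants alternating turns, one assigned a contrarian persona.
# `chat` is a stub; a real version would call a model with the persona as a
# system prompt and the transcript so far as context.

def chat(persona: str, history: list) -> str:
    """Stub standing in for a persona-conditioned model call."""
    return f"[{persona}] reply #{len(history)}"

def dialogue(turns: int = 4) -> list:
    personas = ["centroid thinker", "contrarian"]  # illustrative roles
    history = ["Seed question: which widely held beliefs deserve more doubt?"]
    for t in range(turns):
        history.append(chat(personas[t % 2], history))
    return history

transcript = dialogue()
```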
Having all the data in one place was not enough before LLMs.
Because you still needed a hyper-knowledgeable engineer to design and build any bit of functionality.
Swarming emergent functionality of finely enme...
Data schemas are extremely high leverage in a world of LLMs.
LLMs given a rough schema for what data to keep track of in the application can do a great job generating code with only a small bit of English lang...
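A sketch of what "rough schema plus a bit of English" might look like as a prompt payload. The schema shape and the `generate_code` stub are assumptions, not a real API.

```python
# A rough schema is most of the prompt; one English sentence does the rest.
# `generate_code` is a stub for handing both to an LLM.

SCHEMA = {
    "note": {"id": "int", "title": "str", "body": "str", "created_at": "datetime"},
    "tag": {"id": "int", "name": "str", "note_id": "int -> note.id"},
}

def generate_code(schema: dict, instruction: str) -> str:
    """Stub: a real call would send schema + instruction to a code model."""
    return f"# generated for: {instruction} over tables {sorted(schema)}"

out = generate_code(SCHEMA, "Build CRUD endpoints and a search page for these tables.")
```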
LLMs can't write particularly large amounts of code before they start getting confused.
They can do maybe 1k lines of code before they start losing track ...
LLMs allow you to go from "random idea" to "a thing that vaguely works" way, way faster.
This is the crucial phase where most ideas die.
The time between ...
Training gives LLMs background knowledge. Context gives them working knowledge.
Many people are worried about LLMs using their data in training, but there the leverage i...
...nsumers will have to have an LLM subscription of some kind.
The applications of LLMs will continue to grow, becoming a thing you couldn't imagine living without.
LLM inference is too expensive to be supported by advertising.
In the fu...