A short read on the topic's time range, peak episode, and strongest associations. Use it as the quick orientation before drilling into examples.
llms appears in 615 chunks across 117 episodes, from 2023-11-06 to 2026-04-20.
Its densest episode is Bits and Bobs 2/2/26 (2026-02-02), with 15 observations on this topic.
Semantically it travels with ChatGPT, Claude, and prompt injection attack, while by chunk count it sits between Claude; its yearly rank moved from #3 in 2023 to #1 in 2026.
Over time
Raw mentions over time. Use this to see absolute attention, not relative rank among all topics.
Range: 2023-11-06 to 2026-04-20 · Mean: 5.3 per episode · Peak: 15 on 2026-02-02
Observations
The primary evidence view for this topic. Sort it chronologically when you want concrete examples behind the larger pattern.
Showing 615 observations sorted from latest to earliest.
Gordon's concept of "Last principles thinking"
When working with LLMs, think about the superficial, last principles first, then work backwards to the first principles.
LLMs are all about vibes of what they've seen, the ...
It's kind of wild how much more people care about their data flowing to LLMs than to generic cloud services.
Generic cloud services could do whatever they want with your data… send it on to other companies, store it ...
LLMs are pachinko machines that have paths for anything that any writing humans have done in the past.
But if there wasn't any in the training set, it has...
...ving a good singer's version of that song from a CD.
I've heard of people using LLMs to help write… and then putting in faux typos to make it look more authentic and hand-crafted.
LLMs make errors in reasoning, but they don't do typos...
It's funny that LLMs are both creating more crap we have to cut through and also pretty good at cutting through the crap.
The meme of every email being an outline expande...
Turns out LLMs can help deprogram conspiracy beliefs by being extremely patient and knowledgeable.
The asymmetry that it's easier to spread BS than to counteract it...
...the expert so often, so you had to develop the skill yourself.
But now you have LLMs who are always patient and eager to help, and to just heroically give you the right answer.
Ethan Mollick has noted that LLMs will break the implicit...
Using LLMs effectively today requires skill to know how to drive them effectively.
But over time to become mass market it has to be a forgiving thing that anyon...
"Have conversations with large amounts of data" is the key use case of LLMs.
Computers are patient, but not subtle.
Now LLMs learned how to be subtle and savvy, and they're still patient!
Data doesn't know how to synthesize i...
A domain where LLMs will be useful: where things haven't been tried because they'd take too much time and patience.
For studying things within a discipline, where our ex...
Last week I asserted LLMs' superpower is translation.
Compilers are a form of translation, too.
Just a very specific, limited one.
LLMs are notable in that they can do any tra...
LLMs are built for 4-up evolution UIs.
These allow useful things to happen even from unreliable signals, by giving a human an intuitive and natural way...
A friend who has been tinkering with LLMs reports that if you tell them to act happy, they're more likely to try the things you ask of them.
When you make the LLM happier, it's more willing to try n...
...ams Emerge from Simple Interaction
From my summary in bits and bobs back then: "LLMs are not some party trick. They reveal something fundamental about humanity... and the universe."
Last week I asserted that a lot of LLM usage in organizations is illegible.
Often the employee using the LLM has a reason to keep it illegible to their boss.
A reason to stay illegible about you...
LLMs' superpower is translation, from anything to anything.
A babelfish.
Any translation task will be absorbed over time by LLMs.
Lots of things can be fr...
...t intuition into an answer to a given problem in front of them.
Not unlike what LLMs are doing.
Though LLMs don't have a taste criterion for what to absorb from.
Their sampling criterion is "things that humans decided to reproduce", wh...
LLMs today make boring mistakes because they can't learn.
That is, the model's weights are fixed at training time.
Some of the supporting systems around t...