A short read on the topic's time range, peak episode, and strongest associations. Use it as the quick orientation before drilling into examples.
llms appears in 615 chunks across 117 episodes, from 2023-11-06 to 2026-04-20.
Its densest episode is Bits and Bobs 2/2/26 (2026-02-02), with 15 observations on this topic.
Semantically it travels with ChatGPT, Claude, and prompt injection attack; by chunk count it sits alongside Claude, and its yearly rank moved from #3 in 2023 to #1 in 2026.
Over time
Raw mentions over time. Use this to see absolute attention, not relative rank among all topics.
Range: 2023-11-06 to 2026-04-20 · Mean: 5.3 per episode · Peak: 15 on 2026-02-02
Observations
The primary evidence view for this topic. Sort it chronologically when you want concrete examples behind the larger pattern.
Showing 615 observations sorted from latest to earliest.
You can give agents a "personality hash"[ac] so they know how to work with you.
LLMs are excellent at understanding the meaning of arcane jargon.
One thing LLMs know well is Enneagram types and Myers-Briggs personality types.
"Alex i...
...o support long term" but "is this worth even spending the time to think about?"
LLMs can help with the latter, not the former.
If you can free up useless energy by automating it, you can spend that energy on higher-leverage things.
"W...
One way to get increasing leverage per token over time: have the LLMs extract useful tools.
"Look at our Lessons Learned doc and our commit history then create the tools that would have made it easier, faster and cheape...
...lking to a consistent person when a contractor skims a casefile for 30 seconds.
LLMs can skim 100,000x more than a human in that time frame.
The limit is the context window, but it allows LLMs to read effectively "instantly".
So of cou...
An effective writing technique that LLMs have learned to imitate: "Connect nine of the ten dots."
The last dot is obvious and trivial, but the reader connects it.
Because the reader connects...
...icatively with more dimensions.
This is one of the reasons gradient descent for LLMs and evolution is unreasonably effective.
We're used to a puny three dimensions.
An excellent video about how proteins can be discovered by evolution ...
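The point about our "puny three dimensions" can be made concrete with a small sketch. In 3D, two random directions are often strongly aligned; in 1,000 dimensions they are almost always nearly orthogonal, which is part of why high-dimensional search behaves so differently from our geometric intuition. This is only an illustration of the geometry, not a model of gradient descent itself:

```python
import math
import random

random.seed(0)

def cos_angle(d: int) -> float:
    # Cosine of the angle between two random directions in d dimensions.
    u = [random.gauss(0, 1) for _ in range(d)]
    v = [random.gauss(0, 1) for _ in range(d)]
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

for d in (3, 1000):
    # Average alignment shrinks dramatically as dimension grows.
    samples = [abs(cos_angle(d)) for _ in range(200)]
    print(d, round(sum(samples) / len(samples), 3))
```

Run it and the average |cosine| collapses from roughly one half at d=3 to near zero at d=1000: in high dimensions there is almost always an unexplored direction to move in.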
You can use LLMs as research goblins to investigate problems that you'd be embarrassed to waste an intern on.
The cost is so low that it's reasonable to task them eve...
The power of tools like Claude Code comes from the open-endedness of LLMs' reasoning being merged with the open-ended capability of the CLI.
That explosive power is combinatorial.
The CLI is intimidating.
People say, "why d...
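The combinatorial claim is easy to demonstrate: each small CLI tool composes with every other, so the space of pipelines grows multiplicatively. A minimal sketch, composing two standard utilities (`sort`, `uniq -c`) from Python, the same kind of glue an LLM emits on demand:

```python
import subprocess

# Small tools compose: sort's output becomes uniq's input.
# An LLM that can write one-line pipelines taps every pairing of tools.
lines = "cat\ndog\ncat\nbird\ncat\n"

sorted_out = subprocess.run(
    ["sort"], input=lines, capture_output=True, text=True
).stdout
counted = subprocess.run(
    ["uniq", "-c"], input=sorted_out, capture_output=True, text=True
).stdout

print(counted)
```

Two tools, one pipe, a frequency count; with N tools the number of such two-stage pipelines is already N², which is the explosive power the observation describes.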
...ven disabled their own functionality when gaslit by humans."
Say it again now, "LLMs shouldn't be trusted to make security decisions!"
Zenity CTO demos 0-click AI agent exploits on stage at RSAC
Seeking Alpha: OpenClaw is a Liability ...
A classic critique: "How can you trust LLMs? They can't count the r's in strawberry!"
"Yes but they can write the code to do that task right every time."
For everything that's not natural langu...
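The strawberry rebuttal fits in four lines: the model may miscount letters in-context, but the code it writes for the task is deterministic and correct every time:

```python
# The classic gotcha: models miscount letters when answering directly,
# but the one-liner they write counts correctly every run.
def count_letter(word: str, letter: str) -> int:
    return word.count(letter)

print(count_letter("strawberry", "r"))  # → 3
```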
...ssage, append to the case log, and move on to the next one.
Not entirely unlike LLMs, of course.
The victim was entirely snookered by the most superficial continuity and the impression of a single suitor.
It shows how easily we believ...
...Claude Code mainly arises from the combinatorial power of the CLI.
The power of LLMs as a catalyst for unleashing the inherent (but intimidating) combinatorial power of the CLI.
The CLI is awesomely powerful.
In the original sense of ...
...ent calls, it required humans in the loop… which required bureaucracy.
But now, LLMs can do some levels of judgment automatically, with no humans in the loop.
So now you can solve discovered problems by distilling new AI automation.
T...
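The shape of "judgment with no humans in the loop" is a routing function where a model call replaces the human decision. A minimal sketch; `ask_llm` is a hypothetical stand-in stub for whatever model API you'd actually call:

```python
# Hypothetical sketch: a judgment call that once required a human in the
# loop, handled automatically. The routing logic is the point, not the stub.
def ask_llm(prompt: str) -> str:
    # Stand-in for a real model call; this stub only approves
    # tickets mentioning a small refund.
    return "approve" if "refund under $50" in prompt else "escalate"

def triage(ticket: str) -> str:
    # Model makes the judgment call; only escalations reach a human.
    return ask_llm(f"Decide: approve or escalate?\n{ticket}")

print(triage("Customer requests refund under $50 for late delivery."))
```

Swap the stub for a real model call and the bureaucracy the original process required shrinks to handling the escalations.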
...Shapiro's Trycycle.
An excellent pattern to extract compounding leverage out of LLMs.
Just autonomously plan, autonomously implement, and then repeat!
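That plan/implement/repeat loop can be sketched in a few lines. `plan` and `implement` here are hypothetical stand-ins for LLM calls, not Shapiro's actual implementation; the compounding comes from each pass feeding its result back in as context:

```python
# Hypothetical sketch of the plan -> implement -> repeat loop.
def plan(context: str) -> str:
    # Stand-in for an LLM planning call.
    return f"next step given: {context[-40:]}"

def implement(step: str) -> str:
    # Stand-in for an LLM implementation call.
    return f"done({step})"

def trycycle(goal: str, iterations: int = 3) -> list[str]:
    context, log = goal, []
    for _ in range(iterations):
        step = implement(plan(context))  # autonomously plan and implement
        log.append(step)
        context += "\n" + step           # ...then repeat with new context
    return log

print(len(trycycle("ship the feature")))  # → 3
```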
LLMs are for code what the Hall-Héroult process was for aluminum.
Before the Hall-Héroult process was invented in 1886, aluminum was treated as precious.
It was p...
LLMs take away the puzzle part of programming.
The puzzle part of programming is where you can relax.
Like doing sudoku puzzles that produce useful things...
..., the things we intended to do but never got around to.
At a certain point with LLMs, you just kind of run out of a backlog.
Contains this zinger: "Showing off your portfolio of bespoke Claude Code projects and looking at others' por...