A short read on the topic's time range, peak episode, and strongest associations. Use it as the quick orientation before drilling into examples.
llms appears in 615 chunks across 117 episodes, from 2023-11-06 to 2026-04-20.
Its densest episode is Bits and Bobs 2/2/26 (2026-02-02), with 15 observations on this topic.
Semantically it travels with ChatGPT, Claude, and prompt injection attack, while by chunk count it sits between Claude; its yearly rank moved from #3 in 2023 to #1 in 2026.
Over time
?
Raw mentions over time. Use this to see absolute attention, not relative rank among all topics.
Range2023-11-06 to 2026-04-20Mean5.3 per episodePeak15 on 2026-02-02
Observations
?
The primary evidence view for this topic. Sort it chronologically when you want concrete examples behind the larger pattern.
Showing 615 observations sorted from latest to earliest.
If you've gone through the effort of having high quality programmatic thinking LLMs can write infinite Op Eds for you, on demand.
The backlog of Bits and Bobs feels exceptionally valuable to me in the age of AI.
Bruce Schneier points out that LLMs will bring mass spying.
Before, we had mass surveillance, but a human sifting through the collected data happened rarely.
That limited the oversight ...
Garry Tan on Twitter:
"New social networks are going to appear that will be LLMs creating a cozy web customized for us and our real friends, and their friends and so on
There will be a new social network built on mutual trust, all...
Which will be more important by unit weight in software systems in the AI era, LLMs or normal code?
A lot of platforms being built for the age of AI imagine that most of the weight of systems will be LLMs, with just a little bit of c...
...tem pole and always will be".
But the tech industry as a whole would miss it if LLMs changed it because we'd all have the same bias and blindspot.
Domains that don't work with CS are less deterministic.
Their taste, metacognition, ent...
A lot of the best practices for programming with LLMs are the same as for humans.
For example, document each directory with a README, have aggressive type checking / linters.
But most humans give up or a...
LLMs are essence extractors, not mechanical reproducers.
The way they learn from their training data is more than just reproducing.
Essence is a new conce...
Every built a Diplomacy game for LLMs to play.
Gemini generally does well.
Claude refuses to lie, and thus loses often.
ChatGPT o3 often wins because it is very happy to betray its collab...
You can typically trust off-the-shelf LLMs to not try to manipulate you in particular.
But LLMs are easy to fool.
So if anyone else you don't trust is feeding input into the context, then the ...
...hatGPT.
Only an open-ended tool could do that.
Now that we have approaches like LLMs it's impossible to see how Google Lens's old approach of manually curating data for individual verticals could never have possibly ever worked in gen...
I think LLMs will likely make corporate politics worse.
The metagame will just get more inscrutable and energized.
Corporate politics emerge from the fundamental ...
...ate politics maneuver classically used with consultants will also be applied to LLMs.
You bring in a consultant to come to the conclusion you secretly believe.
If your boss agrees with the consultant's conclusion, then you get the ben...
A pattern: use LLMs in tiny tasks where the overall swarm is emergently powerful.
LLMs handle small tasks very well.
The emergent swarm might lose the plot (e.g. forgett...
...ng.
Notably in the Hacker News comments people are starting to realize how hard LLMs are to secure–previously I saw a lot of "that's the user's fault."
This HackerNoon piece also points out the dangers of MCP and prompt injection.
Ano...
I love Anthea's new piece on LLMs as creating a Living LIbrary.
We're just at the beginning stages on all of the implications of LLMs for the creation and dissemination of knowledge.