A short read on the topic's time range, peak episode, and strongest associations. Use it as the quick orientation before drilling into examples.
infinitely patient appears in 53 chunks across 34 episodes, from 2024-06-10 to 2026-04-06.
Its densest episode is Bits and Bobs 8/11/25 (2025-08-11), with 4 observations on this topic.
Semantically it travels with llms, ChatGPT, and coordination cost, while by chunk count it sits between compounding value and pace layer. Its yearly rank moved from #129 in 2024 to #20 in 2026.
Over time
Raw mentions over time. Use this to see absolute attention, not relative rank among all topics.
Range: 2024-06-10 to 2026-04-06 · Mean: 1.6 per episode · Peak: 4 on 2025-08-11
Observations
The primary evidence view for this topic. Sort it chronologically when you want concrete examples behind the larger pattern.
Showing 53 observations sorted from latest to earliest.
"Judgment calls" require different answers from different actors.
If every actor agrees on a given "judgment call" then it's not a judgment call, it's straightforward and obvious.
LLMs are great at tasks that nearly everyone (with enough time and motivation) would agree on.
Humans often get bored…
Two things that are hard about software: creating it and getting people to use it.
LLMs make the creating orders of magnitude easier.
But they also make the latter easier.
Humans need the tool to not only be useful but also enjoyable.
If it's not enjoyable, at any given point they might lose patience.
Last week I talked about how someone's OpenClaw reached out to me proactively.
It's OK for someone's agent to waste another agent's time.
Agents are not situated in the world.
They have infinite patience.
It is not OK for someone's agent to waste another human's time.
Humans are situated in the world.
If you have a clear performance metric to optimize, agent swarms can do a great job.
This is what Shopify's Tobi found.
Interestingly, in his case, there weren't any magic bullets.
It was just an accumulation of tons of small benefits.
Humans wouldn't be patient enough to chase these small wins.
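The shape of that accumulation can be sketched as a greedy loop (all names here are illustrative, not Shopify's actual setup): try one small tweak per round and keep it only if the metric improves.

```python
import random

def accumulate_small_wins(start, metric, tweaks, rounds=1000, seed=0):
    """Greedy hill climb: one small tweak per round, kept only if the
    metric improves. No magic bullets, just accumulation."""
    rng = random.Random(seed)
    best_value, best_score = start, metric(start)
    for _ in range(rounds):
        trial = rng.choice(tweaks)(best_value)
        score = metric(trial)
        if score > best_score:  # keep only strict improvements
            best_value, best_score = trial, score
    return best_value

# Toy metric: get as close to 42 as possible, one unit step at a time.
result = accumulate_small_wins(0, lambda x: -abs(x - 42),
                               [lambda x: x + 1, lambda x: x - 1])
```

A human would abandon this after a dozen rejected tweaks; the agent happily runs all thousand rounds.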
Users are stickier to UX than agents are to APIs.
That's because it's harder for users to switch their mental models than for agents to rewrite the API they code to.
Humans are lazy and would rather not update their mental models.
Agents have infinite patience and are willing to do any reasonable thing.
Claude Code is great at deobfuscating code.
Deobfuscating is an exercise primarily in patience.
LLMs have infinite patience.
A kind of funny mental image: Claude Code deobfuscating itself.
Inspecting how its own brain works.
Like the automaton in Ted Chiang's "Exhalation" story.
Agents are better at managing their focus than humans are.
Focus in humans is a precious, fragile thing.
One person interrupting you at the wrong time can rip you out of your focus.
That can feel like ripping a limb off.
But agents can be as focused as they want to be… sometimes too focused.
They fo…
Companies benefit from dynamic pricing.
Imagine a system that allowed you to do the same to the providers.
Before, doing this as a consumer required infinite patience; it wasn't viable.
But now LLMs have infinite patience and you can deploy them to achieve your interests.
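A minimal sketch of deploying that patience (names and threshold are hypothetical): an agent re-quotes every provider on each billing cycle and switches only when the saving clears a threshold.

```python
def best_provider(quotes, current, switch_threshold=0.05):
    """quotes maps provider name -> current price. An infinitely
    patient agent can re-run this every billing cycle; a human
    wouldn't bother. Switch only when the saving is worth the churn."""
    cheapest = min(quotes, key=quotes.get)
    saving = quotes[current] - quotes[cheapest]
    if saving / quotes[current] >= switch_threshold:
        return cheapest
    return current
```

Run it on a schedule and dynamic pricing starts to cut both ways.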
Refactors that go for more than a month are always a disaster.
But now you can execute refactors way faster with the right plan.
You don't need to coordinate the politics of multiple distractible humans with their own incentives because the LLMs will just execute on the plan with infinite patience.
Claude Code's infinite patience means that if it gets pointed in the wrong direction it will just plow through multiple walls and do some damage.
That means that pointing it in the right direction is significantly more important than with a human.
Trade is only valuable if you have non-infinite time and different abilities than the entity you trade with.
These are trivially, obviously true in real-world situations, so we never noticed that "trade is good" is downstream of these assumptions.
But for LLMs these assumptions don't obviously hold.
An LLM can be used as a devil's advocate with no shame and infinite patience.
They have to be asked to play this role, but they can help provide disconfirming evidence.
Some things fall below the attention line, and that's good.
Trying to keep all details up to date wastes tons of time.
LLMs can generate so much cruft that you can't get out from underneath it.
Their infinite patience allows creating towers of ossified minutiae.
You can now code even with fractured attention.
It used to take deep focus.
Now with coding agents it doesn't!
The coding agent has infinite patience and keeps track of all of the working memory.
You can juggle multiple threads of execution or interleave them into the white spaces of your day.
LLMs help diffuse knowledge of a system faster.
To open a restaurant requires navigating a bureaucratic maze.
Talking to people who have done it before, scrutinizing overwhelming, poorly documented, Kafkaesque processes that use arcane jargon.
It requires knowledge of that jargon and infinite patience.
ChatGPT Commerce shines when finding candidates is easy but verifying they match your goals is tedious.
LLMs have infinite patience and OK judgment.
Two concrete examples:
Finding a furniture cover for garden furniture where there are lots of SKUs that are different shapes, and it's hard to search b…
Avoid the Copilot Pause
When interacting with agents, they do work and then ask for your judgment.
If there's one agent, either the human or the agent is blocking on the other.
The human's time is valuable; the agents have infinite patience.
This article is about having a swarm of agents, so one is…
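The fix can be sketched like this (all names hypothetical): launch several agents up front and review whichever finishes first, so the valuable human never blocks on a single agent's pause.

```python
import asyncio

async def agent(name: str, work_seconds: float) -> str:
    # Stand-in for a real agent doing a long unit of work.
    await asyncio.sleep(work_seconds)
    return name

async def review_swarm(jobs):
    # Launch every agent at once; the human reviews results in
    # completion order instead of blocking on any one agent.
    pending = [agent(name, secs) for name, secs in jobs]
    reviewed = []
    for finished in asyncio.as_completed(pending):
        reviewed.append(await finished)  # the "review" step
    return reviewed

order = asyncio.run(review_swarm([("slow", 0.05), ("fast", 0.01)]))
```

With a swarm, there is almost always a finished result waiting for judgment, and the Copilot Pause disappears.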