A short read on the topic's time range, peak episode, and strongest associations. Use it as the quick orientation before drilling into examples.
llm native appears in 20 chunks across 18 episodes, from 2024-02-12 to 2026-03-16.
Its densest episode is Bits and Bobs 8/19/24 (2024-08-19), with 2 observations on this topic.
Semantically it travels with llms, business model, and app model, while by chunk count it sits between critical mass and sensitive data; its yearly rank moved from #70 in 2024 to #126 in 2026.
Over time
Raw mentions over time. Use this to see absolute attention, not relative rank among all topics.
Range: 2024-02-12 to 2026-03-16. Mean: 1.1 per episode. Peak: 2 on 2024-08-19.
Observations
The primary evidence view for this topic. Sort it chronologically when you want concrete examples behind the larger pattern.
Showing 20 observations sorted from latest to earliest.
Token Usage as Productivity Metric
Karri Saarinen laid out the hypothetical. But as work becomes AI-enabled, token usage is emerging as a proxy for productivity. The more tokens you burn, the more you're perceived as producing. I've heard investors say that token consumption is one way to measure ho
AI adoption follows workflow friction, not theoretical capability.
[Image]
Anthropic released this fascinating chart that shows a large gap between what AI could automate and where it is actually being used today. The biggest adoption so far is in software and quantitative fields because the work is
There's a difference between "create a chat" and "create a chatbot" in an AI-native system.
The former encourages the mental model that you're talking to the omniscient service in a new thread.
The latter encourages the mental model of spinning up a specific chat thread with an entity that is separa
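The two mental models can be sketched as hypothetical API shapes (a sketch with illustrative names, not any real SDK):

```python
from dataclasses import dataclass, field

# "Create a chat": every thread is a fresh transcript with the same
# omniscient service; the counterpart is always *the* service.
class OmniscientService:
    def create_chat(self) -> list[str]:
        return []  # a new thread, but the same entity on the other end

# "Create a chatbot": you instantiate a distinct entity with its own
# instructions and its own memory, separate from the service itself.
@dataclass
class Chatbot:
    instructions: str
    memory: list[str] = field(default_factory=list)

    def say(self, message: str) -> str:
        self.memory.append(message)
        return f"({self.instructions}) heard: {message}"
```

In the second shape, two chatbots spun up side by side diverge: each accumulates its own memory, which is exactly the "separate entity" mental model the wording encourages.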
All of the coding agents are nothing without Claude.
They're just a little wrapper around Claude.
But this feels like mainly just an immaturity of the market.
We haven't seen the actual LLM-native software yet.
The software that takes for granted that LLMs exist, not as the primary input, but as a s
Will the main loop of the AI-native experience be a chatbot that is omniscient and makes all of the calls?
Or will chat be a contained feature that pops up when necessary?
Everyone else assumes the former, but I assume the latter.
Chat is a feature, not a paradigm.
I think this tweet about finding the right UX for AI-native tools is directionally correct.
I want interfaces that are intelligent, not in a human way, but in a way where the tool anticipates my needs and adapts to them seamlessly.
I think that would be the killer use case for LLMs.
Chatbots are (co
You can't live on a little random island in the middle of the sea.
If someone drops off containers of cargo, you'll be able to survive for longer.
Perhaps you'll even be able to get to a level of self-sufficiency, with a lot of effort.
If you're an island that is part of an archipelago connected via
Shitty software in the small is now practically free to create.
Everyone's trying to use it to produce the large, chonky software of today.
It's hard to squeeze in enough quality while shoehorning it into the app creation flows programmers use today.
But what if we leaned into an architecture that presumed shitty
The last era of software was based around zero marginal costs.
In a world of zero marginal cost, there are only three consumer business models[adz].
Hardware.
Charge a premium on the hardware, and lock people into your ecosystem.
Media.
Proprietary copyrighted content the user can't get anywhere els
What would an OS look like that took LLMs for granted?
Not a patch job on top of the OSes we have today, but a new kind of "OS" that was AI-native, and built in a world that assumed high-quality LLMs.
LLMs are a new kind of computation.
Powerful and magic and squishy.[ahk]
I'm skeptical of the role of "agents" in whatever AI-native ecosystem emerges.
Agent implies agency, and agency implies something that could take actions behind your back… including stabbing you in the back.
If the agent can take actions with you out of the loop, you either have to constrain it sign
English specs can now be "compiled" to runnable code by LLMs.
When we make programs, we compile the source code down to an executable binary.
But then we make sure to keep the source code, so when we tweak it in the future we can compile a new binary.
Or, when we get better at compiling, or want to
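A minimal sketch of that workflow, assuming some completion function `llm` (a stand-in callable, not a real API): the English spec is the source artifact, the generated code is the build output, and a fingerprint of the spec is stamped into the output so a future, better "compiler" can rebuild from the same source.

```python
import hashlib
import pathlib

def compile_spec(spec: str, llm) -> str:
    """'Compile' an English spec to code via an LLM. `llm` is any
    callable that takes a prompt string and returns generated code."""
    return llm(f"Write a Python module implementing this spec:\n{spec}")

def build(spec_path: pathlib.Path, out_path: pathlib.Path, llm) -> str:
    spec = spec_path.read_text()
    code = compile_spec(spec, llm)
    # Stamp the artifact with a fingerprint of the spec that produced it,
    # so we know when to recompile: the spec changed, or the compiler improved.
    stamp = hashlib.sha256(spec.encode()).hexdigest()[:12]
    out_path.write_text(f"# built from spec {stamp}\n{code}")
    return stamp
```

The point of the stamp is the same as keeping source code around: the spec, not the generated binary-equivalent, is the thing you edit and re-run the build against.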
Why are applications the current "size"?
That is, what determines whether we have lots of little, specific apps or a small number of large, general purpose ones?
Probably a lot of factors, but one that I think is important is what I'd call the Coasian theory of the app.
That is, the app size is dete
People are launching platforms for building things with LLMs faster than people are building useful LLM-native apps.
As an industry we learned the "in a gold rush sell pickaxes" lesson, and now everyone is doing it.
But maybe it's still premature?
We're still in the community gardening and experimen
The app model can't do speculative assistance.
Speculative assistance is necessary to do anything exploratory, where you don't know what the service's answer will be until you ask.
But in the same-origin paradigm, once you reach out to the 3P service, that service could do whatever they want
The Iron Bridge in England was the first bridge made of iron.
The bridge was built before anyone knew how to make big structures out of iron.
It used joins that were typically used for wood bridges... but just with iron.
Later we realized the special properties of iron and started making bridges in
What's the superpower of the web?
The web is a fabric of computing that is on nearly every device beefy enough to run it.
It is open, so it works mostly the same everywhere it shows up.
And no one entity has unilateral power to define what the web can do.
Unless there were a computing device used by