Topic: llm native

20 chunks · 18 episodes

Topic summary

A short read on the topic's time range, peak episode, and strongest associations. Use it as the quick orientation before drilling into examples.
  • llm native appears in 20 chunks across 18 episodes, from 2024-02-12 to 2026-03-16.
  • Its densest episode is Bits and Bobs 8/19/24 (2024-08-19), with 2 observations on this topic.
  • Semantically it travels with llms, business model, and app model, while by chunk count it sits between critical mass and sensitive data; its yearly rank moved from #70 in 2024 to #126 in 2026.

Over time

Raw mentions over time. Use this to see absolute attention, not relative rank among all topics.
Mean 1.1 mentions per episode across the full range.
  • 2024-02-12: 1 mention
  • 2024-04-08: 1 mention
  • 2024-06-24: 1 mention
  • 2024-08-19: 2 mentions
  • 2024-09-30: 1 mention
  • 2024-11-25: 1 mention
  • 2024-12-02: 1 mention
  • 2025-01-21: 1 mention
  • 2025-01-27: 2 mentions
  • 2025-02-18: 1 mention
  • 2025-02-24: 1 mention
  • 2025-06-16: 1 mention
  • 2025-07-28: 1 mention
  • 2025-09-08: 1 mention
  • 2025-09-15: 1 mention
  • 2025-09-22: 1 mention
  • 2026-03-09: 1 mention
  • 2026-03-16: 1 mention

Observations

The primary evidence view for this topic. Sort it chronologically when you want concrete examples behind the larger pattern.

Token Usage as Productivity Metric

from Bits and Bobs 3/16/2026

Token Usage as Productivity Metric Karri Saarinen laid out the hypothetical. But as work becomes AI-enabled, token usage is emerging as a proxy for productivity. The more tokens you burn, the more you're perceived as producing. I've heard investors say that token consumption is one way to measure ho…

All of the coding agents are nothing without Claude.

from Bits and Bobs 7/28/25

All of the coding agents are nothing without Claude. They're just a little wrapper around Claude. But this feels like mainly just an immaturity of the market. We haven't seen the actual LLM-native software yet. The software that takes for granted that LLMs exist, not as the primary input, but as a s…

Why are applications the current "size"?

from Bits and Bobs 9/30/24

Why are applications the current "size"? That is, what determines whether we have lots of little, specific apps or a small number of large, general purpose ones? Probably a lot of factors, but one that I think is important is what I'd call the Coasian theory of the app. That is, the app size is dete…

The app model can't do speculative assistance.

from Bits and Bobs 6/24/24

The app model can't do speculative assistance. Speculative assistance is necessary to do anything exploratory, where you don't know what the answer of the service will be before you do it. But in the same-origin paradigm, once you reach out to the 3P service, that service could do whatever they want…

What's the superpower of the web?

from Bits and Bobs 2/12/24

What's the superpower of the web? The web is a fabric of computing that is on nearly every device beefy enough to run it. It is open, so it works mostly the same everywhere it shows up. And no one entity has unilateral power to define what the web can do. Unless there were a computing device used by…