Bits and Bobs 10/20/25
1. Andrej Karpathy thinks AGI is still a decade away.
- This is one of the illusions that happens with logarithmic-value-for-exponential-cost curves.
- The initial ramp of value is so extreme that it feels like it will ramp up to infinity.
- But actually it approaches an asymptote.
2. Apparently ChatGPT's subscriber counts are stalling.
- Apparently ChatGPT's subscriber counts are stalling.
- I hadn't realized how much of ChatGPT usage is free users.
- It's easy to get momentum in a business by selling dollar bills for 90 cents.
- That can go on for as long as you can get other people to give you dollar bills for IOUs.
- For their business success, it's imperative for them to get higher engagement / stickiness.
- A polite word for 'addiction'.
3. OpenAI is looking at a "Log in with ChatGPT" offering.
- A crucial feature: businesses that use it can subsidize inference for users.
- This is a classic aggregator play to lock in an ecosystem based on an early advantage.
- The subsidy makes it increasingly impossible for other providers to compete.
- The bet is that by having predatory pricing (aka "dumping") they can force other businesses out and then corner the market.
- They're already doing this by their intense subsidy of inference on ChatGPT.
- OpenAI is betting that their overall story and momentum are strong enough that they'll be able to keep raising capital when others can't.
- But a stable equilibrium to me seems to be Anthropic and Google staying in the race indefinitely.
- They have the capital and backing to stay in the game no matter what.
- And of course, there's the possibility of open models, especially out of China, catching up.
- That is a very different end state equilibrium.
- In that world, there's not a duopoly but a triopoly.
- There's always one shorter leg of the stool who is willing to do things that the stronger players wouldn't, e.g. keep frontier model access available via API.
4. A duopoly has a very different game-theoretic equilibrium than a triopoly.
- A duopoly has a very different game-theoretic equilibrium than a triopoly.
- A duopoly is effectively like a monopoly.
- Both competitors position their policies right next to each other.
- Like the famous Hotelling's Law where businesses on a line place next to each other.
- The appearance of competition without actual variance.
- But a triopoly is different, there's always an odd man out.
- That odd man out is incentivized to do different plays that the stronger players wouldn't.
- That changes the dynamic so the bigger players have to compete, too.
5. Chatbots are like filming stage plays.
- Chatbots are like filming stage plays.
- We're still waiting for the "montage" moment for LLMs.
- My vote for the montage capability is LLMs being able to create software.
6. LLMs turn out to be insanely good at writing and executing actions.
- Code, or just English descriptions.
- In the past few months, as an industry our minds have continually been blown by how powerful it is.
- We keep finding new ways to get even more out of this ability, ever more easily.
- Claude Code has only been out for 8 months.
- That's kind of crazy to think about how much has changed since then!
- This revolution is just getting started.
7. LLMs' ability to create software is catastrophically powerful.
- LLMs' ability to create software is catastrophically powerful.
- We keep on discovering new orders of magnitude more powerful techniques for getting more value out of it.
- For example, Anthropic Skills, which Simon thinks is a huge deal.
- MCP felt cool, but it had a low ceiling and was easy to overwhelm the context window.
- Really, what matters is the ability of LLMs to do tool calling.
- More generally: to create software, to do things, whether in code, or with tool calling.
8. LLMs' ability to create software is like nuclear fission.
- LLMs' ability to create software is like nuclear fission.
- Catastrophically powerful.
- The default manifestation of it is a nuclear bomb.
- But if you can figure out how to harness it in a nuclear reactor you could create near limitless energy.
- If you could figure out how to get the "catastrophic" downside part capped so it was safe for the mass market, you could change the world.
9. Claude Code is not about code, it's about anything your computer can do.
- Claude Code is not about code, it's about anything your computer can do.
- The ability for LLMs to create software, to do things.
10. Anthropic Skills is powerful for the same reason my old Code Sprouts project felt unreasonably powerful to me a few years ago.
- Anthropic Skills is powerful for the same reason my old Code Sprouts project felt unreasonably powerful to me a few years ago.
- Giving just a teensy bit of structure to the LLM (in that case, a TypeScript schema for state), and allowing a hierarchy of English-language instructions that the LLM could peek inside if it wanted but not be bothered with by default.
- An example of efficiently unlocking the LLM's ability to create software.
11. Anthropic keeps doing something simple and elegant…
- Anthropic keeps doing something simple and elegant…
- That also causes traffic jams.
- Impeccable first-order thinking.
- Non-existent second-order thinking.
12. The faster the twitch of components, the faster the clock speed of the overall system.
- The faster the twitch of components, the faster the clock speed of the overall system.
- But fast-twitch actions also can't get much leverage.
- A swarm of shallow actions.
- In lots of tech companies today, they use Slack, never email (too slow).
- That means they can theoretically go faster to respond quickly to things that happen in the market... but also they are now default at that speed all the time, and unable to think deeply.
13. Vibe-coded software is fragile and shallow.
- Vibe-coded software is fragile and shallow.
- It looks great but you can break it easily.
14. Claude thinks everything it outputs is brilliant.
- Claude thinks everything it outputs is brilliant.
- You need an external ground truthing process.
- Your curation and judgment.
- There needs to be a quality control entity in the middle that isn't eager for your approval.
- A sphincter position in the quality pipeline.
- When you create something with AI, you make something you think is great.
- Everyone else thinks it sucks and rolls their eyes.
- Developers think everyone else's code is horrible.
- Claude makes this effect an order of magnitude worse.
15. Producing content way faster gives leverage to your taste.
- Producing content way faster gives leverage to your taste.
- The calibrated judgment becomes extremely important, instead of blindly accepting and making towers of slop.
- Are you happily making things... that others are actually choosing to use?
- That last bit is the key question.
16. Often the technology isn't the limiting factor in a pipeline.
- Often the technology isn't the limiting factor in a pipeline.
- The limiting factor often isn't what we think it is.
- For example, making a movie faster is not about making more images.
- It's about getting things in front of the director for approval.
17. A way to get leverage: focus on the meta-thing, not the thing.
- A way to get leverage: focus on the meta-thing, not the thing.
- The meta-thing is the system that generates the thing.
- If you can get it to work well, then you improve not only the thing, but a whole class of things.
- For example, if you have a compiler, optimizations improve everything it compiles.
- Another example: in search quality, don't check in a configuration file of synonyms, check in the process to generate that file based on an analysis of the query stream.
- Now it can improve itself automatically and keep itself up to date.
- Another example: never touch the code directly, touch the specs and LEARNINGS.md that you give to the LLM to generate the code.
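The synonyms example can be sketched concretely. The session-co-occurrence heuristic and every name below are invented for illustration; the point is that you check in the generator, not its output.

```python
from collections import defaultdict

def build_synonyms(query_log):
    """Generate the synonyms config from the query stream itself.

    Heuristic (invented for illustration): two queries issued in the
    same user session are treated as candidate synonyms.
    """
    synonyms = defaultdict(set)
    for session in query_log:
        for a in session:
            for b in session:
                if a != b:
                    synonyms[a].add(b)
    # Emit a deterministic, reviewable artifact, but check in THIS
    # function rather than its output, so the config regenerates
    # itself as the query stream drifts.
    return {term: sorted(alts) for term, alts in sorted(synonyms.items())}

# Hypothetical query stream: each inner list is one user session.
log = [["couch", "sofa"], ["sofa", "settee"], ["laptop", "notebook"]]
config = build_synonyms(log)
```

Re-running the generator nightly keeps the artifact current with zero marginal effort, which is exactly the meta-thing leverage.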
18. The way to decentralize AI is to decentralize the applications.
- The way to decentralize AI is to decentralize the applications.
- The models will likely centralize, due to capital requirements.
- But that doesn't really matter if there are three or more high-quality ones that are easy for an application layer to swap between.
- That's why OpenAI is desperately vertically integrating and storing state, to avoid being a swappable component lower in the stack.
19. LLMs for software development are a foot gun and a rocket pack.
- LLMs for software development are a foot gun and a rocket pack.
- Catastrophically powerful.
20. Big companies can't enable YOLO mode in Claude Code, it's far too dangerous.
- Big companies can't enable YOLO mode in Claude Code, it's far too dangerous.
- Small startups can try dangerous things because they don't have much to lose.
- LLMs are catastrophically powerful for building software.
- The benefit of this ability accrues to the entities that have lower downside risk.
- The asymmetry is now much stronger than before.
- That means startups as a class have an advantage.
- Many startups that use it will blow themselves up.
- But some will get lucky.
- We'll see swarms of "fast-fashion" startups and apps.
21. Everyone who stumbles across leverage thinks they're a genius until they die.
- Everyone who stumbles across leverage thinks they're a genius until they die.
- Leverage gives you speed for risk.
- The risk is hard to see, the speed is easy to see.
- You borrow from the future to go fast today.
- If it works, it works great.
- If it doesn't work, it works terribly.
- And perhaps knocks you out of the game.
- Lots of things are levered in ways that aren't obvious.
22. With LLMs, you don't need to use a dependency, you can distill the equivalent on demand.
- With LLMs, you don't need to use a dependency, you can distill the equivalent on demand.
- A dependency brings in complexity and risk from the rest of the ecosystem.
- It's also not fit to your specific purpose, but a general one.
- One bonus of a dependency, though: you get security fixes for free.
- LLMs are great at writing code on demand.
- If there are lots of examples of a given library, you can have it distill a custom one on demand, just for you, perfectly fit to your purpose.
- No dependency risk!
23. With LLMs' ability to execute and build programs, innovation coins just got cheaper.
- With LLMs' ability to execute and build programs, innovation coins just got cheaper.
- You can have more of them than before.
24. The difficulty of a programming task now comes down almost entirely to novelty.
- The difficulty of a programming task now comes down almost entirely to novelty.
- It used to be that there was a difference between integration hard and algorithmically hard engineering.
- Integration hard is easy to do, just requires a long, detail-oriented slog.
- Can be parallelized relatively easily.
- Algorithmically hard is hard to understand, but then easy to execute once you do.
- Requires carefully reading papers, brainstorming at the whiteboard, going on long quiet walks.
- But once you write the code it's often 1000 or so lines.
- Very difficult to parallelize.
- But that distinction was pre-LLMs.
- LLMs are great at algorithmically hard problems… as long as there are a lot of examples of it in the training set.
- No matter how arcane those examples are to discover or reason about.
- So the difficulty of executing now comes down entirely to novelty.
- Less novelty: more likely the LLM's first guess works and it can iterate its way to the solution.
25. A surprising LLM pattern Jesse Vincent discovered: GraphViz.
- A surprising LLM pattern Jesse Vincent discovered: GraphViz.
- GraphViz, when rendered, helps humans understand flows.
- LLMs can understand it based just on the markup.
- A boundary object for flow graphs between LLMs and humans.
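A minimal sketch of what such a boundary object looks like; the deploy flow itself is a made-up example. A human renders the DOT to see the picture, while an LLM can follow the edges straight from the markup.

```python
# A flow graph as GraphViz DOT markup, built as a plain string.
# (The deploy flow is invented for illustration.)
edges = [
    ("write_code", "run_tests"),
    ("run_tests", "review"),
    ("review", "merge"),
    ("review", "write_code"),  # changes requested: loop back
]
dot = "digraph deploy {\n"
for src, dst in edges:
    dot += f"  {src} -> {dst};\n"
dot += "}"
```

Pipe `dot` through the `dot` CLI (or any GraphViz renderer) for the human-readable picture; paste the raw text into a prompt for the machine-readable one.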
26. Git worktrees are a huge unlock with agents.
- Git worktrees are a huge unlock with agents.
- They used to be hard to learn, but LLMs are great at using them.
- Giving agents isolation, so if they blow up something, they don't blow up everything else, allows them to move fast.
- A powerful piece of infrastructure will be containerization++.
- Not just containerizing the code, but also the data and resources.
- Imagine if you had a copy-on-write filesystem and data layer.
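A hedged sketch of the per-agent isolation idea, using a throwaway repo. The branch and directory names are invented, and this assumes `git` (2.5+ for worktrees) is on your PATH.

```python
import pathlib
import subprocess
import tempfile

def git(*args, cwd):
    # Thin wrapper: run a git command, raise on failure.
    subprocess.run(["git", *args], cwd=cwd, check=True, capture_output=True)

# A throwaway repo standing in for your project.
base = pathlib.Path(tempfile.mkdtemp())
repo = base / "repo"
repo.mkdir()
git("init", cwd=repo)
git("-c", "user.email=agent@example.com", "-c", "user.name=agent",
    "commit", "--allow-empty", "-m", "init", cwd=repo)

# One worktree per agent: each gets its own checkout and its own branch,
# so a blown-up experiment stays contained in its own directory.
for agent in ["agent-1", "agent-2"]:
    git("worktree", "add", "-b", agent, str(base / f"wt-{agent}"), cwd=repo)

worktrees = sorted(p.name for p in base.glob("wt-*"))
```

Worktrees share one object database, so this is cheap; the copy-on-write filesystem idea in the bullet above would extend the same isolation to data and resources.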
27. You keep hearing about companies whose new hires are massively more productive than senior engineers, just by YOLOing it.
- You keep hearing about companies whose new hires are massively more productive than senior engineers, just by YOLOing it.
- Irresponsibility is the unlock.
- A form of leverage.
- Take shortcuts now that might blow up later.
- YOLO!
28. Will AI adoption be like "the year of the Linux desktop?"
- Will AI adoption be like "the year of the Linux desktop?"
- For decades, every year was supposed to be the year of the Linux desktop.
- Linux has never become mainstream for desktop use.
- But it quietly spread literally everywhere else in computing.
- Maybe the change is happening, just not in the place you're looking for it.
- The AI submarine.
29. Bruce Schneier's "Agentic AI's OODA Loop Problem" is worth a read.
- Bruce Schneier's "Agentic AI's OODA Loop Problem" is worth a read.
30. If you had to have a global data structure for everyone, what would it look like?
- If you had to have a global data structure for everyone, what would it look like?
- You'd need links to reference other parts of the graph.
- Things that are in memory like Redux state objects get that for free, but if it has to be serialized you need a formal reference capability.
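A tiny sketch of that reference problem: in memory you hold pointers for free, but once serialized you need explicit IDs and a lookup, Redux-normalization style. The schema here is invented for illustration.

```python
import json

# Normalized graph state: entities keyed by ID, relationships stored
# as ID strings rather than in-memory pointers. (Invented schema.)
world = {
    "people": {
        "p1": {"name": "Ada", "friends": ["p2"]},
        "p2": {"name": "Lin", "friends": ["p1"]},
    },
}

def deref(state, collection, ref):
    # The formal reference capability: resolve an ID back to its entity.
    return state[collection][ref]

blob = json.dumps(world)        # pointers wouldn't survive this round trip
restored = json.loads(blob)
first_friend_ref = restored["people"]["p1"]["friends"][0]
friend = deref(restored, "people", first_friend_ref)
```

Cycles (p1 ↔ p2) are unrepresentable as nested JSON but trivial with ID references, which is why a global data structure would need them.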
31. Innovative LLM offering from Stripe: an API where an app can charge their users a consistent markup on the underlying tokens.
- Innovative LLM offering from Stripe: an API where an app can charge their users a consistent markup on the underlying tokens.
32. Karan Sharma, PM at OpenAI, is dreaming about AI Home Cooked Software.
- Karan Sharma, PM at OpenAI, is dreaming about AI Home Cooked Software.
- Home-cooked software has to happen in a personal kitchen, not some factory kitchen owned by a corporation.
33. It's long been possible to offload memory to external brains.
- It's long been possible to offload memory to external brains.
- The people who could do so effectively were able to 10x their capacity.
- But now you can offload thinking.
- "Go think about this question and tell me what the options are."
- That's a step change.
34. 100x Bot is doing something interesting.
- Seems like a combination of:
- the Skills / LEARNINGS.md compounding loop
- Crowd-sourcing
- driving AI browsers.
- A catastrophically powerful combination.
- This kind of looks like RL if you squint.
- RL researchers might say this is an under-powered hack to get something like RL.
- But it's different, because it has a swarm of human curation and judgment in the loop.
- Distributed caching.
35. LLMs are great at reverse engineering software.
- LLMs are great at reverse engineering software.
- Reverse engineering software requires incredible patience.
- LLMs have infinite patience.
36. LLMs help diffuse knowledge of a system faster.
- LLMs help diffuse knowledge of a system faster.
- To open a restaurant requires navigating a bureaucratic maze.
- Talking to people who have done it before, scrutinizing overwhelming, poorly documented, Kafkaesque processes that use arcane jargon.
- It requires a knowledge of that jargon and infinite patience.
- Something that LLMs have!
- LLMs can help you navigate these kinds of processes more easily.
- They effectively help metabolize arcane knowledge and allow people to operationalize it more easily.
37. Ideas that fit in one brain are an order of magnitude easier to execute.
- Ideas that fit in one brain are an order of magnitude easier to execute.
- Serializing intuition across the brain barrier is an extremely lossy and expensive process.
- Once you cross the single-mind threshold, you also start having coordination costs, which can balloon massively.
38. When new technology makes what previously were big ideas into minutiae, it frees up your brain to think about new big ideas.
- When new technology makes what previously were big ideas into minutiae, it frees up your brain to think about new big ideas.
- Alfred North Whitehead: "Civilization advances by extending the number of important operations which we can perform without thinking of them."
- LLMs make orders of magnitude more ideas fit inside one mind.
39. Integrating with the real world is much more difficult than coding an app.
- Integrating with the real world is much more difficult than coding an app.
- Rhymes with complex vs complicated.
- A version of the last-mile problem.
40. LLMs are great at starting things, not at finishing things.
- LLMs are great at starting things, not at finishing things.
- The human needs to poke them and structure them to do the finishing steps.
- "The first 90% is done, now it's time for the remaining 90%."
- You get the experience of going really fast without necessarily getting that much closer to the end.
- A version of the last-mile problem.
- Are agents bad at finishing real-world tasks due to a lack of metis?
- We aren't seeing a Cambrian explosion of apps because everyone has apps that are 90% done.
41. One of the benefits of having software experience is instincts about how long things take.
- One of the benefits of having software experience is instincts about how long things take.
- Some things are way easier than before, some aren't.
- A jagged frontier.
42. Getting from idea to demo is now super fast.
- Getting from idea to demo is now super fast.
- Going from demo to production is just as hard as it once was, if not worse.
43. An external analyst thinks that vibe-coding traffic is falling off a cliff.
- An external analyst thinks that vibe-coding traffic is falling off a cliff.
- For example, a 50% decline for Lovable from June to September.
- Of course, external data is of very poor quality.
- But it does track to me as being plausible.
- It turns out that as a non-technical person you can't ship a production product even if the LLM writes the code for you.
- There's more to shipping a production app than writing the code.
- The clean up from "demoable" to "usable" (especially to make it not just usable but also safe) is a huge amount of work, that LLMs don't do a great job at unless you tell them to.
- You need to know to tell them to.
- Many of the vibe coded apps that succeed in the market get taken down by security issues.
- The Lovable founder responded with stats showing continued growth.
- But he did it in the most eye-roll-y, least-convincing way ever.
- No y-axis numbers or even describing what metric it's charting.
- That's an extremely easy signal to make misleading.
- Every PM worth their salt knows how to cherrypick data to give the appearance of momentum.
- You make the strongest case you can with the data you have.
- A weak case implies you don't have data that tells the story you want to tell.
- I didn't give the rumor that much credence until seeing that weak-sauce retort.
44. A new pattern from prominent open source contributors: have a different GitHub account for stuff you've vibecoded.
- A new pattern from prominent open source contributors: have a different GitHub account for stuff you've vibecoded.
- Such a contributor has a brand of significant quality for code they've hand-written.
- Instead of muddying that brand, they keep a separate account for things they've vibecoded, which are thus more "use at your own risk".
45. Verification is the bottleneck for LLMs.
- Verification is the bottleneck for LLMs.
- Verifiers of taste, or of quality, or correctness.
- This is where humans typically still need to be in the loop.
46. With variation, some systems converge and some diverge.
- With variation, some systems converge and some diverge.
- If they diverge, you get a rat's nest.
- The variation compounds and becomes mutually inscrutable.
- If they converge, you get a boring, over-saturated middle.
- LLMs converge their output.
- So now the middle of the distribution of output will be over-saturated.
- That will push out the differentiation to the tails.
- Hyper-niche.
- Hyper-scale.
- This was already happening before LLMs, due to the zero cost of content distribution.
- But LLMs turbocharge it.
47. There is structurally less qualitative user research than there should be.
- There is structurally less qualitative user research than there should be.
- It's extremely expensive and manual today.
- But it is much higher signal than quantitative user research.
- Quantitative research requires you to ask just the right questions in just the right way.
- If you ask them wrong, you'll generate faux insights.
- Harder to find some classes of disconfirming evidence.
- You can't discover your unknown unknowns.
- You can't learn unexpected things as easily.
- But LLMs can do qualitative nuance at quantitative scale.
48. I love this distinction between Slop and Aura.
- "Slop → Aura
- innovative → institutional
- personalized → individual
- generative → creative
- dissociative → enigmatic
- monetizable → valuable
- platform distribution → dark social distribution
- mechanical → artisanal
- scalable → singular
- disposable → canonical
- dead internet theory → dark forest theory"
49. I attended a talk where the head of one of the major labs mostly talked about AGI and what society will be like.
- I attended a talk where the head of one of the major labs mostly talked about AGI and what society will be like.
- I wonder: what if some of this AGI talk is in some ways a long con?
- First and most obviously, the more that people believe it, the more they can attract insane amounts of capital.
- Secondly, this massive, society-defining outcome overshadows the more mundane, everyday concerns of power concentration in a hyper-aggregator.
- Everyone's fretting about this infinite outcome that might never come, instead of the almost-certainly-will-happen hyper-aggregation and power centralized in one company.
50. It's not that people should care more about privacy.
- It's not that people should care more about privacy.
- It's that they shouldn't have to care about it at all, because what happens by default is aligned with their expectations and interests.
51. Nobody reads EULAs or even could.
- Nobody reads EULAs or even could.
- Most are much worse.
52. Cory Doctorow asks if AI assistants can escape the enshittification trap.
- Cory Doctorow asks if AI assistants can escape the enshittification trap.
- The answer is no, in my opinion.
- Going down the path of chatbots / LLM-as-friend / hyper-scale leads inevitably to enshittification.
53. A paper: "Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence"
- Just in case there was any doubt that the sycosocial relationships inherent to chatbots are bad for you.
54. The Atlantic points out that ChatGPT is a fictional character.
- The Atlantic points out that ChatGPT is a fictional character.
- But unlike many characters, it was not authored.
- Instead, it emerged.
- A very different dynamic.
- Yet another way that the implicit frame of Chatbots as "LLMs as friends" is unsettling and potentially dangerous.
55. A few thoughts on ChatGPT allowing adult content.
- A few thoughts on ChatGPT allowing adult content.
- It leans into hyper addiction and engagement even more.
- At that scale of deployment (beyond some niche for things like Replika) it could destabilize society even more.
- This seems like a cynical growth play... not something you do if you had runaway momentum.
- Also, do you really want this company to have records of all of your intimate interactions?
56. A milestone in game engine development: rendering the teapot.
- A milestone in game engine development: rendering the teapot.
- The first milestone is "first triangle."
- A single triangle rendered on screen.
- The next milestone is "first teapot."
- Render a 3D object, lit and shaded.
- When you render the teapot, people see the teapot.
- But the demo is that you made the system to render the teapot, not the teapot!
- They understand what they see, not how it works.
57. A general rule in optimization: auto-tightening systems.
- A general rule in optimization: auto-tightening systems.
- Invest effort to optimize it in proportion to how often it's used.
- For example, in V8, the first pass is a quick-and-dirty compile.
- But hot code paths (for example, in a loop) get another pass of optimization to make them faster.
- This insight comes from the original HotSpot Java compiler.
- Optimize it from generalized/sloppy to specific/tight in proportion to how many times it runs.
- The same is true for workflows that use LLMs.
- If it's going to run once, just have an LLM execute an English-language prompt.
- But if you're going to run it thousands of times, have the LLM write deterministic code.
- Right now we don't have enough real-world last-mile use of AI workflows so we're in the cheap/sloppy phase of deployment, not yet to the auto-tightening parts.
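A minimal sketch of the auto-tightening idea, with stand-in functions for the LLM call and the deterministic code it would write; the threshold and all names here are invented.

```python
# Once a task crosses a hotness threshold, swap the slow generic path
# for a cheap compiled one, in the spirit of HotSpot / V8 tiering.
HOT_THRESHOLD = 3

def slow_generic(x):
    # Stand-in for "just ask the LLM with an English prompt."
    return x * 2

def make_fast(task):
    # Stand-in for "have the LLM write deterministic code for this task."
    return lambda x: x * 2

counts = {}      # how often each task has run
compiled = {}    # tasks that have been tightened

def run(task, x):
    counts[task] = counts.get(task, 0) + 1
    if task in compiled:
        return compiled[task](x)
    if counts[task] >= HOT_THRESHOLD:
        compiled[task] = make_fast(task)   # tighten the hot path
        return compiled[task](x)
    return slow_generic(x)

results = [run("double", i) for i in range(5)]
```

The invariant to preserve is that both paths compute the same answer; the investment in tightening is made only in proportion to use.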
58. Workflows are hard to change because there's an intra and inter network effect.
- Workflows are hard to change because there's an intra and inter network effect.
- Inter: You need to coordinate with others.
- If you change the workflow but others participate in it and don't change, then you can't change it.
- Intra: each step is dependent on the others and possibly needs to change.
- Changes to a workflow that don't change the shape of inputs and outputs are a limited subset of the changes you can make.
59. Stratechery dove into why Walmart decided to integrate with ChatGPT Commerce.
- To me this implies that the current model of software is stagnant.
60. There's the personal OODA loop and the situation's OODA loop.
- There's the personal OODA loop and the situation's OODA loop.
- The personal OODA loop is the classic Douglas Adams quote:
- "1. Anything that is in the world when you're born is normal and ordinary and is just a natural part of the way the world works.
- 2. Anything that's invented between when you're fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
- 3. Anything invented after you're thirty-five is against the natural order of things."
- But sometimes the situation really does have a faster OODA loop than before.
- The ability of LLMs to write software is truly a fundamental speed limit that has changed.
- If it's not you… is it the singularity?
61. What would the singularity feel like?
- What would the singularity feel like?
- "I can absorb this new change once there's a breather" …but the breather never comes.
62. Ben Mathes: "My differentiated skill is I read everything Ben Thompson writes… and actually understand him."
- Ben Mathes: "My differentiated skill is I read everything Ben Thompson writes… and actually understand him."
63. The Geek Fallacy: anything that cannot be understood via STEM frameworks is unknowable or unimportant.
- The Geek Fallacy: anything that cannot be understood via STEM frameworks is unknowable or unimportant.
- Geek machismo leads to a kind of Robert Moses effect with much more leverage.
- Putting on blinders and steamrolling the world.
64. The Sarumans are all about the head, never the heart.
- The Sarumans are all about the head, never the heart.
- They think they're being hyper rational.
- In reality they've hollowed themselves out.
- The heart is where the meaning comes from.
- The intuition for indirect effects.
- To be in harmony with the world you need both.
65. Sarumans hate bureaucracy.
- Sarumans hate bureaucracy.
- But so do Radagasts.
- Although they're less likely to use the word "hate."
- Bureaucracy is about the status quo and downside capping.
- Innovation is about upside.
- Both the Saruman and Radagast magic are about innovation and upside.
- The lack of magic, the dull, dreary, mundane company man, is not about innovation at all.
- These are the banal "organization kids".
- "Turn the crank, don't think about what the crank is connected to, that would distract you from the only thing that matters: number go up!"
66. Sarumans don't care what other people think.
- Sarumans don't care what other people think.
- A lack of conflict-aversion as a super power to dominate over others.
- No shame.
- Finance and titans of industry typically have this power.
- Also, Karens.
- It's an inherently antisocial power.
- Simply don't care what other people think.
67. In the hyper era, we went from meaning to MOAR.
- In the hyper era, we went from meaning to MOAR.
68. In a world with extremely low friction, the energy flows to the winners at the top.
- In a world with extremely low friction, the energy flows to the winners at the top.
- When there's more friction it can create different pockets of smaller winners.
- More variation leads to more diversity; more adaptability and resilience in the ecosystem.
- One empire can spread throughout a vast plain; mountainous terrain often has fractal chiefdoms.
- Somewhat surprisingly, low friction leads to hyper optimization leads to centralization leads to hollowness and fragility of the system.
- The system is overfit.
69. Overfitness feels great in the moment.
- Overfitness feels great in the moment.
- Being overfit means your model doesn't generalize over time to new scenarios, because it fits what turns out to be temporary noise.
- Overfit structures are 'fit' but not in a robust way.
- Even if you were exactly right about prediction for today you'd be wrong tomorrow.
- The world changes in ways you fundamentally can't predict.
- This riff and the next 10 were based on a talk I attended this week by Emmet Shear.
70. The modern world is overfit.
- The modern world is overfit.
- It's overly optimized.
- Modernity is about the systematizing of accuracy.
- You create a meta model of the world, and then every process relentlessly optimizes it.
- If there's no slack in the system you are definitionally overfit.
71. In Seeing Like a State, it's not that they're doing a bad job of modeling the forest.
- It's that the model necessarily leaves out whatever is illegible to it, and then the forest gets reshaped to fit the model.
72. There's a tension between complexity of your model and accuracy.
- There's a tension between complexity of your model and accuracy.
- The more complexity you add, the more you overfit.
- You explain what you see well, but also over-explain noise.
- You're now overfit, poorly fit to novel inputs.
- If you update your model by n bits it had better give you more than n bits of accuracy or you're falling behind.
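The n-bits bullet is essentially the minimum description length (MDL) principle, which can be written as:

```latex
% Total description length: model bits plus data-given-model bits.
L_{\text{total}} = L(M) + L(D \mid M)
% An update that grows the model by \Delta L(M) = n bits is worth
% making only if the whole sum shrinks:
\Delta L(M) + \Delta L(D \mid M) < 0
```

An update pays off only if the extra model bits are more than repaid by a shorter description of the data; otherwise you are memorizing noise, i.e. overfitting.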
73. Ambiguity is that which you don't have a model of.
- Ambiguity is that which you don't have a model of.
- The unknown unknowns.
- The best you can say is "I'm still alive so whatever I did in the past must have worked…"
74. In the modern hyper-connected world, connection is free; sparsity is precious now.
- In the modern hyper-connected world, connection is free; sparsity is precious now.
- The more connected, the more overfit the system gets.
75. Overfitness is a form of mania.
- Overfitness is a form of mania.
- By every metric things look great.
- But everyone can tell something is off.
- It's hollow, hyper.
- Manic trips end in one of two ways: you calm down or the universe calms you down.
76. Modern society assumes that global connection is as good as local value.
- Modern society assumes that global connection is as good as local value.
- But that's not true.
- Local connections are healthier and more nourishing.
- They allow more diversity, variance, and adaptive capacity of the system.
- Don't follow high frequency information sources.
- Talk to your friends and stay local, it's good for you and good for society.
77. Gradient clipping is a way of forcing regularization.
- Gradient clipping is a way of forcing regularization.
- In ML, one way of regularizing is keeping track of, for each weight, how precise you believe it to be.
- That is, how often it has changed in the past.
- When you need to update it, you update precise values less than imprecise values.
- But this requires significantly more complexity.
- Another approach is simply gradient clipping.
- Simply cut off extreme values.
- It's less precise for any individual update, but on average it is apparently stochastically equivalent.
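The clipping step itself is a few lines. A minimal sketch of the common clip-by-norm variant (clip-by-value, which caps each component separately, is the other); the function name is mine:

```python
import numpy as np

def clip_by_norm(grad, max_norm):
    """If the gradient's L2 norm exceeds max_norm, rescale it down to max_norm."""
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        return grad * (max_norm / norm)
    return grad

# An extreme update gets cut down to the ceiling; a modest one passes untouched.
extreme = clip_by_norm(np.array([30.0, 40.0]), max_norm=5.0)  # norm 50 -> 5
modest = clip_by_norm(np.array([0.3, 0.4]), max_norm=5.0)     # norm 0.5, unchanged
```

Note there is no per-weight bookkeeping at all: the same cheap cutoff is applied to every update, which is the whole appeal.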
78A "modest proposal" Emmet offered as a thought experiment, which I found thought-provoking: a maximum speed limit for information of 90 mph.
- If information travels faster than that, it gets taxed at a rate that grows quadratically with the excess speed.
- The main knob to control is the tax rate.
- It could be an infinitesimally small tax rate, if you wanted.
- This would naturally tilt information to local sources.
- You could use the Global TikTok for a fee, or the Bay Area TikTok content for free.
- A consistent pressure towards local connection, which is healthier for the system.
- Downward pressure on inequality.
- Like regularization, it forces it into flatter distribution.
- Fewer billionaires, but more centi-millionaires.
- Fewer centi-millionaires, but more deca-millionaires.
- Fewer deca-millionaires, but more millionaires.
- A healthier, more balanced system.
- You could argue that a system of tariffs (consistently and thoughtfully applied) is a form of gradient clipping in this context.
- It gives you a rough version of this policy, in the trade domain.
- Obviously, an unworkable proposal, but still a fascinating thought experiment.
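The proposal's pricing rule is simple enough to write down. A toy sketch, where the function name, the speed units, and the default rate are all illustrative assumptions:

```python
def information_tax(speed_mph, limit=90.0, rate=0.01):
    """Hypothetical tax on information that travels faster than the limit.

    Zero at or below the limit, growing quadratically with the excess
    speed. `rate` is the one policy knob; it can be made arbitrarily
    small and still exert steady pressure toward local sources.
    """
    excess = max(0.0, speed_mph - limit)
    return rate * excess ** 2

# Local information is free; globally-fast information pays, and pays
# disproportionately more the faster it moves.
gossip = information_tax(30.0)        # 0.0 -- under the limit
global_feed = information_tax(190.0)  # 0.01 * 100**2 = 100.0
```

The quadratic shape is what does the regularizing: doubling the excess speed quadruples the tax, so the pressure bites hardest at the extremes, like clipping.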
79When everyone is in the same information stream everything becomes boring.
- No variance to select over!
- Global connection makes the world boring and wildly unequal.
80One way to not get stuck in fast-twitch information streams: get your news from newspapers.
- A day after it happens, not as-it-happens.
- That gives some time for synthesis and distillation.
- It also gives some time for perspective: what was the stack rank of most important things that happened yesterday?
- Online sources can have infinite content, updated infinitely quickly.
- But newspapers have scarce space and a built-in time delay.
- They naturally regularize the signal.
- Even better would be once-a-week curated roll-ups of what matters most.
81In the last couple of years, letters of recommendation broke as a quality signal.
- Before they were a signal of quality because they took a long time to write.
- That meant that professors only had time for a handful of them, so the students they agreed to do it for were at the top of their distribution.
- But now professors can do them 100x faster, so they can do many more.
- Some new signal will emerge as a quality signal.
- Quality signals come from scarcity.
- Meaning comes from tension.
- One way to bring back scarcity: have professors publish the names of their top recommendations.
- Perhaps in a stack rank.
- If it's publicly viewable in a single stack rank, there's scarcity.
82As a company, raising venture capital is a red queen race.
- It locks you on a hyper-growth-or-bust trajectory.
- Even if you don't want to get on that trajectory, if your competitor does it, they will dominate you in the market.
- That means that even if everyone would prefer not to, everyone must.
- A prisoner's dilemma.
- Anywhere there's no moat (bits, not atoms) and some kind of compounding effect, you have to play.
83I asked Claude why heights feel higher looking down than looking up.
- It gave an interesting report.
- This is the kind of question I wouldn't have even bothered asking before.
84The first time you hear a word, it means what you first guess it means.
- Unless the world quickly disabuses you of that notion.
- This is what Simon calls an inferred definition.
85Conversations are fundamentally generative processes.
- The process of ping-ponging ideas back and forth unearths mutually interesting ideas.
- Each volley is:
- 1) a vote to keep going.
- 2) a curation of which part to respond to.
- An iterative "yes, and" that zeroes in on the most interesting thread.
- The fact that two people find the thread interesting makes it an order of magnitude more likely another person would, too.
86Complexity is a balance of integration and segregation.
- Everything needs to be at the critical point to be able to surf.
87In complexity, surfing is better than trying to control.
- Control is impossible and attempting to have it will tire you out.
- A great piece from Every that calls it Rugged Flexibility.
88Everything is "trivial" if sufficiently abstracted.
- But the abstraction misses some of the grit and texture of the real world.
- Things are hard in practice because that grit is load-bearing.
- Rocket science is mostly just the fundamental dynamics.
- Problems that are complex are almost all grit and texture.
89Could and should are distinct.
- There are things that LLMs could do that you shouldn't do.
90If the s-curve is so steep, just try to stay alive until it levels off.
- But then you could end up like Blackberry.
- In an infinite game, rule number one is to stay in the game.
91Finite games are often embedded inside infinite games.
- The finite game within is zero-sum; the infinite game for the collective is positive-sum.
- The energy of the inner finite game propels the momentum of the enclosing infinite game.
- Finite games that are run too hot will steal all the oxygen.
- They'll hollow out the infinite game they're embedded in.
- In the limit, they kill the host.
- For example: the idea of America is the infinite game.
- The political parties are the finite game.
- The hyper-connected era makes competition far more efficient.
- Finite games hollow out infinite ones at faster and faster speeds.
92Swarms are adaptable but have Goodhart's Law.
- The antidote is trust in the collective and long-term goals.
- When individuals trust each other to behave as a collective they believe in, they will take actions that resist Goodhart's Law and don't destroy the collective.
- Instead of only optimizing for their local incentive gradient, they also balance what's best for the broader collective.
- That gives you the best of both worlds.
- Similar to Frances Frei's observation that diverse teams that trust one another are the way to get reliably great results.
93Swarms can't see or grapple with systemic problems.
- No vantage point to look non-locally.
94Trust between individuals pairwise only scales to Dunbar's number.
- Past that you need something to give you leverage.
- Typically that requires reducing nuance to numbers.
- Once you do that, Goodhart's law starts showing up.
95Are you "giving up" or are you "accepting"?