Bits and Bobs 11/4/25

1. Amazing distillation from Anthea Roberts: "AI is an amplifier.
    • It can amplify good taste, agency and curiosity.
    • It can also amplify laziness and mediocrity."
2. Vibecoding on an already healthy codebase does a good job at keeping it working.
    • If it's a crappy codebase it makes it worse and worse at a compounding rate.
    • AI is an amplifier.
3. If LLMs make thinking 10x cheaper, will you think 10x less, or 10x deeper?
    • The answer to that question determines if you will thrive or wither in the AI era.
4. This week's AI security wild west round up.
5. Remember: the AI tools help the bad guys, too.
6. Chatbots start off with 50 paragraphs of invisible context stuffed into them by the corporation that created them.
    • It's easy to forget this!
7. ChatGPT always feels so convergent.
    • It jumps straight to Axios style faux certainty, no matter what.
    • Claude is more willing to keep the conversation open.
8. LLMs can often see throughlines in your own ramblings better than you can.
9. If you ask an LLM whether a new fact you just added to a conversation is relevant, it will always say something like "It certainly is!"
    • It will never say "No, that's not related."
    • It will come up with some plausible way to tie together whatever random stuff you throw at it.
    • They're amazing at retconning; that's their whole schtick.
10. Google was a one trick pony… but it was one hell of a trick.
    • If Google hadn't messed up mobile then maybe it could have parlayed that one trick into owning all software.
      • Thank goodness they didn't achieve it!
    • The default future in the AI era is one where everyone uses one chatbot owned by one powerful company and it consumes all software.
    • This is a future OpenAI is betting on.
    • Let's hope they don't achieve it!
11. I like this essay's frame of "intelligent data flywheels."
    • This feels like a better frame than "compounding engineering," which feels overly specific to engineering, when the pattern is applicable to any task LLMs can do.
12. When Apps in GPT launched, Sam Altman did an interview on Stratechery.
    • He said something along the lines of "We could have done the Zillow features ourselves… but we wanted to do something benevolent to help out 3P companies."
    • Spoken like a true aggregator.
    • That shows that he doesn't understand the value of ecosystems at all.
    • "We will simply do it all, it is just because we are benevolent and gracious that we allow others to exist in our garden."
13. Reputation has to accrue to a brand.
    • That is, to an identifier, a signifier, that only one entity is legitimately allowed to use.
    • If there's no brand, then reputation (positive or negative) can't accrue to it.
14. Models can get perfectly good at games but not real objectives.
    • Games have an unhackable reward function, because the metric is precisely the ground reality.
    • RLHF quality is only a proxy for real usefulness.
    • So the model reward hacks, as any optimizing process must do.
    • Goodhart's law strikes again!
    • Games are unlike real objectives in that they are inherently artificial and constructed, a little pocket of reality with precisely defined rules and goals.
    • If the rules say the player won, they won.
    • Compare that to an example where just because a business made a ton of profit doesn't mean they were on net good for society.
15. What if instead of buying software from a store, you could grow it in your garden?
16. If the model is the product then OpenAI already won and we should all just bow down to them now and save some time.
    • If that's true, unless someone else can somehow get an order of magnitude better model than OpenAI, their default momentum will win out.
    • In this frame, the products using the model are the little ornaments on the christmas tree of the model.
    • All power accrues to the most-used model and who controls it.
    • I personally think this is not true.
    • The models will be commodity, behind-the-scenes inputs into the actual thing people use.
    • But even if it were true, acting like it's true will hasten its arrival.
    • It's imperative that we work to make that not the default future.
17. A useful exercise: "What are things 50 years from now people will look back on and say 'how did people live without that?'"
    • If you apply that test to personal AI who acts as an extension of your agency, it's obvious it fits.
    • If you do that for centralized chatbot engagement-maxing extractive AI, it's not.
18. Human-in-the-loop can slow down the computer's loop.
    • It allows better control and judgment, and makes it less likely the system gets off track and into a doom loop.
    • But sometimes it's better for the human to be outside the loop.
      • Controlling the AI only at a high level.
    • A benefit of this: once you get it working, you get a lot of leverage.
      • You don't have to constantly be in the loop, allowing you to have it bake while you're doing other things.
    • For this to work and be safe, the inner loop has to be isolated from the real world, its own universe.
      • Otherwise it could get in a doom loop that causes harm in the outside world.
    • For example, some of Amazon's warehouses are designed for robots, not humans.
    • Although in any system, even "airgapped" ones, there's some information leakage.
      • If the inner system runs as hot as the surface of the sun, it's going to melt the things around it.
19. Two models of software from a user's perspective: "I'll manage the process" vs "you do it for me."
    • The former is more a tool.
      • An extension of the user's agency.
    • The latter is more of an assistant.
      • A separate entity the user delegates to.
    • The question is: who does the user blame if it doesn't work?
    • With the tool model, you as a user have to manage it and think about it.
    • With the latter, you can hand over all responsibility for the outcome.
    • The latter requires a brand for the reputation to accrue to.
      • The brands that do a bad job with that delegated responsibility won't tend to earn more users.
    • If the brand does a pretty good job in most cases, people often would prefer the convenience of not having to be responsible.
    • At its heart it's a convenience vs control tradeoff.
20. Does the app call it "My Places" or "Your Places"?
    • Is it mainly a tool, an extension of the user?
    • Or is it a service, a separate entity from the user?
    • The former makes people feel more aligned with it.
    • But if you add an assistant to the service, "My Places" no longer works.
    • Assistant: "Should I add Bob's Donuts to 'My Places'?"
    • It foregrounds that the assistant is an entity that is not you.
    • Which raises the question: who does it work for?
    • What are its intentions and goals?
21. Choice takes effort.
    • Resonance is work.
22. Whether an LLM can create it is distinct from whether a human would find it useful.
    • The ideal: have an LLM create swarms of output that savvy users sift through, and boost to lower-engagement users.
23. The Coasian theory of the program: what is the ideal size?
    • The equilibrium size has to do with:
      • 1) Difficulty of producing effective code.
      • 2) Difficulty of distributing the software to users.
    • Historically an app must be somewhat big because software is hard to write, and apps are hard to distribute (on the order of $10 per consumer install).
      • An app has to be big enough to contain within itself a viable business model.
    • But LLMs can produce code cheaply.
      • They are willing to produce itsy bitsy pieces of code that no human would have bothered with, since it couldn't be distributed.
      • The code is conceivably correct and useful–but you don't know if it's actually useful to a real human in a real situation yet.
    • The question is what level of program is worth it to bother actually showing to a user to see if it's really useful.
    • If it's useful, it's worth investing resources into distributing to other users.
24. With LLMs there has been a surge in interest in "specs."
    • Don't write the code, write the spec that tells the LLM what to build, and leave it up to it to figure out the details.
    • But sometimes you want something a layer below, that includes an opinion about specific parts of the code, but leaves the unimportant details open.
    • This is less like a PRD and more like a design doc, or even lower, something with the key code sketched out exactly, just the integration bits left unspecified.
    • It knows how it should work; it just leaves the last details of making everything fit snugly up to the squishy parts to figure out.
    • A backbone of a solution.
25. Duolingo is successful because it merges the superficial enjoyment of addictive games with a goal that users can be proud about advancing on.
    • Users play it because they're addicted to the core game loop; but they don't feel as bad about it because it's for a good reason.
    • Another thing with similar dynamics would be an addictive game that somehow helps advance the cause of a nonprofit somewhere.
26. I think I'm addicted to vibe coding with agents.
    • I have a substrate that is a perfect fit for vibe-coded throwaway software.
    • It's fun to see what I can get Claude Code to build.
    • It has variable reward, just like a slot machine.
    • At least I'm proud of what I build…
27. The metric has to be a simplification of reality and that must create shortcuts.
28. The third wish is always "undo the first two wishes."
    • Goodhart's law drives monkey's paw dynamics.
29. It feels kind of crazy to me that AlphaFold works.
    • But maybe the reason AlphaFold works isn't that unrelated to why transformers are good at images.
    • The easiest way to predict which way an image is oriented is by developing a world model that picks up on subtle cues that humans would have a hard time even describing.
    • The easiest way to predict which way the protein will fold is by developing a world model that picks up on subtle clues that humans would have a hard time even describing.
    • It's hard for our brains to handle more than 2 dimensions.
      • But in tensor space it doesn't matter.
      • They can handle arbitrary dimensions that we can only do in 2 or 3.
    • Apparently DeepMind decided to tackle protein folding when they heard there was a game to predict folding of proteins that humans could play.
      • That implied there was some subtle correlation, that transformers could exploit even more directly.
    • For pattern recognition, if humans can do it transformers can do it.
30. LLMs give superficially perfect answers.
    • Only as an expert can you detect that it's a bit wrong.
    • Similar to Gell-Mann Amnesia.
      • You trust newspapers when you don't know the topic, but don't trust them when you do.
31. Chatbots assume an entity that intermediates your interactions with everything in the world.
    • The AGI vision is inherently Big-Brother-y.
    • The main research labs all assume of course the model is the center of the universe.
32. Back in the 80's, mainframes felt like Big Brother.
    • There's an interview with Steve Jobs in 1981 on Nightline.
      • This is before Apple did the famous Big Brother ad.
      • The interviewer, David Burnham, pushes back on computing and says "mainframes are evil; they reduce everyone to just lines in Big Brother's spreadsheet."
    • Jobs basically says "no we're going to make personal computers, which are tools that extend your agency: bicycles of the mind."
    • Feels just as relevant for this moment!
33. I want a platform for all of the features that are P3 for the software's creator, but are P0s for me.
34. Luke Drago decries technocalvinism: the idea that because something is inevitable you should accelerate it.
    • Contains this killer Camus quote: "Those who lack the courage will always find a philosophy to justify it."
35. Someone should create an alternate physics for distribution of vibecoded software.
36. AI can code, but it can't build software.
37. I love Arjun Khoosal's Let the Little Guys In.
    • He imagines a context sharing runtime for a personalized web.
    • Someone should build that!
38. Software that is 100% personal to you might superficially look to others like software that's just 10% more efficient for your use case.
39. Om Malik: Why Tech Needs Personalization.
    • "I'm often confounded when Uber drivers take freeway detours, even when city streets would be faster.
    • Lacking local street knowledge, they inadvertently reinforce the system's biases, feeding it more of the same data it then uses to direct future users.
    • With deeper, more contextual understanding of real-world scenarios and user intent, that wouldn't happen; we'd move beyond simply adhering to a prescribed, albeit "fastest," route."
    • If users aren't thinking for themselves, then everyone will just be pulled towards the mundane average, even when it's not actually better.
40. Agents talking to other agents is like the Sexual Revolution.
    • But we haven't yet invented safe sex for our data.
41. Asimov's Addendum calls for LLM tools to allow for memory portability.
42. I like When Leggett's concept of Server User Agents.
43. An article that observes that "Free software scares normal people."
    • Free software emerges from a process of experts adding the features they want.
      • Glomming on possibility.
      • Like clay being added.
    • Simple, mass market software requires a strong authorial voice, curatorial judgment.
      • Cutting away possibility.
      • Like carving marble.
    • Successful mass market tools need to be hewn out of marble by an auteur.
44. Society works because of "super citizens" who go above and beyond to create social infrastructure.
    • They invest discretionary effort in ways that improve things for others, too.
45. Fork is an easier operation than merge.
    • Merge requires choice to figure out how to synthesize.
    • Forking doesn't require any decisions.
46. The Biker Bar test for new hardware:
    • Would you wear it into a Biker Bar?
47. Product rule of thumb: elegant heuristics.
    • If there's an action 95% of users will do, simply do it automatically.
    • Especially if it's easy to undo, or easy to add one more button for.
    • If the heuristic can be explained in a single sentence, and it handles a very large swath of user behavior, it's worth the extra product complexity.
    • For example, Zoom has a complex thicket of options for whether you should be muted when you dial into a call.
      • It often doesn't do what you want.
      • Google Meet has an elegant heuristic: if you're the sixth or higher person to dial in, you're muted.
    • Here are a few elegant heuristics I wish Peloton bikes would implement:
      • In a stack of classes, warm-ups go before normal classes, which go before cool-downs.
        • Today if you add a class to a stack, it always goes to the end of the stack, even if you added a normal class and then a warm-up.
        • There should be three stacks, in order: warm-ups, everything else, cool-downs.
        • Adding an item would append it to the appropriate stack.
        • Of course, you could override that default order if you wanted.
      • In a stack of classes, have a fast-forward button when finishing a class.
        • The fast forward button would advance to the next class, start it, and also skip the 1 minute pre-warmup, putting you right to the beginning of the new class.
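The three-stacks idea above can be sketched in a few lines. This is a hypothetical illustration: the class names and the category labels are invented, not Peloton's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class ClassStack:
    """Three sub-stacks played in a fixed order: warm-ups, main, cool-downs."""
    warm_ups: list = field(default_factory=list)
    main: list = field(default_factory=list)
    cool_downs: list = field(default_factory=list)

    def add(self, title: str, category: str) -> None:
        # Append to the sub-stack matching the class category;
        # anything that isn't a warm-up or cool-down goes in the middle.
        bucket = {"warm-up": self.warm_ups, "cool-down": self.cool_downs}
        bucket.get(category, self.main).append(title)

    def playlist(self) -> list:
        # The default order; the user could still override it manually.
        return self.warm_ups + self.main + self.cool_downs

stack = ClassStack()
stack.add("30 min Power Zone Ride", "ride")
stack.add("5 min Warm Up", "warm-up")     # added second, still plays first
stack.add("5 min Cool Down", "cool-down")
print(stack.playlist())
# ['5 min Warm Up', '30 min Power Zone Ride', '5 min Cool Down']
```

The heuristic fits in one sentence ("warm-ups first, cool-downs last, everything else in between"), which is what makes it elegant.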
48. If you're blind to externalities, then you'll say "I get a marginal benefit? Sure, I'll take it!"
    • But the question is, "...at what cost!"
    • An auto-optimizing process can't ask "at what cost", it can simply climb the hill.
    • Some moves give a tiny benefit in the primary number at a massive cost in the other untracked dimensions.
    • But if the untracked dimensions are literally invisible to the optimizer, it will take the tradeoff without even realizing it was a tradeoff in the first place.
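A toy sketch of that blindness, with invented numbers: a greedy optimizer that can only see one tracked metric happily takes the move that quietly destroys an untracked dimension.

```python
def greedy_climb(state, moves, metric, steps=10):
    """Repeatedly take whichever move most improves `metric`.
    The optimizer never sees any dimension `metric` ignores."""
    for _ in range(steps):
        best = max(moves, key=lambda move: metric(move(state)))
        if metric(best(state)) <= metric(state):
            break  # no move improves the tracked number
        state = best(state)
    return state

# State is (tracked profit, untracked trust) -- both values invented.
honest = lambda s: (s[0] + 1, s[1])       # +1 profit, trust intact
scammy = lambda s: (s[0] + 2, s[1] - 10)  # +2 profit, trust craters

final = greedy_climb((0, 100), [honest, scammy], metric=lambda s: s[0])
print(final)  # (20, 0): the tiny profit edge is taken every time; trust is wiped out
```

The optimizer never "decided" to sacrifice trust; trust simply never appeared in its objective.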
49. The traditional product development approach is inherently lowest-common-denominator.
    • You sample your audience to see what features they'd want.
    • You look for a feature common across them that the maximum number of users would use.
    • That's inherently, and literally, the lowest common denominator.
    • If software is expensive and has to be shared by many users to make it viable, you must get lowest common denominator software.
50. Excellent piece from Ben Mathes on Goodhart's Law and "Lowest Common Consensus".
    • Why organizations tend to focus on a simple, obvious metric, and then over-focus on it.
    • It's simply easier to agree what metric to use if everyone agrees it's important.
51. No one individually thinks "number go up" is the most important thing.
    • It's just that it's the thing that everyone agrees is an acceptable idea.
    • "'Number go up' is just the lowest common denominator of what you can get dozens of different people to agree to."
    • If it's run by the logic of a spreadsheet, then the only things that can show up are the near-term modelable quantities.
52. Hyper financialism is just Goodhart's Law.
    • In that mindset there is nothing other than "make number go up".
    • All humanity, all taste, all meaning has been hollowed out.
    • The shortcut is the point, there is nothing else.
    • We made capitalism and politics so "efficient" that we Goodhart's-lawed ourselves in the face.
    • Hollowed out the system so badly that it broke itself.
53. The West went all in on swarm intelligence.
    • "Just trust the swarm."
      • "Make the number go up."
    • But it optimizes not for what we want to want, but the short-term incentives.
    • The system has been hollowed out everywhere.
    • Now it's impossible for anyone to do anything other than shortcuts.
      • If you don't, you'll be left behind in the short term by people who do.
      • And no one will feel shame about taking the shortcuts.
    • A compounding hollowing out.
54. The person who is the torchbearer for the mission or emergent strategy is constantly being beaten down by an army of people with spreadsheets saying "where's the ROI??"
    • The anonymous members of the swarm think they're being courageous but they're the exact opposite.
55. The whole economy is just totally ignoring externalities.
    • One weird trick: "If I don't think about any externality ever I can make this number go up indefinitely!"
56. Social media bombards you with interesting novelty to cause a dopamine hit.
    • Prediction errors are emotionally intense.
      • They're uncomfortable but we also crave them.
    • The feed is almost entirely a feed of prediction errors.
      • "Look at this surprising thing. Now look at this totally other thing!"
    • You're overwhelmed, and can't form a coherent worldview.
      • A background feeling of: "I'm screwed, my world model doesn't work."
      • A background of nervous, formless anxiety.
    • Like a Dorito: the only thing that makes the anxiety go away is to take another bite.
      • Temporarily salves the anxiety while also forcing you to crave more.
    • A doom loop for meaning.
57. The limited-liability common stock company is a relatively recent idea.
    • Owning a share of profits and not being personally liable for any downside is an amazing deal!
    • The idea was a powerful one that had a huge impact on society.
    • We've been benefitting…and suffering… from that idea ever since.
    • This might be the core dynamic that leads to modern society's overwhelming mantra: "make number go up, don't worry about the externalities."
58. The VC model works if you get the upside and no individual downside can kill you.
    • Also seems related to the asymmetry of the limited-liability corporation.
59. You aren't stuck in traffic, you are traffic.
    • When you use an aggregator, you're lending your energy to a thing you don't think is good for society.
60. If a company hasn't started an aggregator, they might not start one.
    • It's a big prize for the company, but it probably won't work.
    • If the company already has one, they would never give it up if they can help it.
    • Getting a powerful aggregator is winning the lottery for a business that just cares about winning.
61. Just because you got rich doesn't mean that you should be praised.
    • "You got to hand it to them."
    • Do you?
    • We shouldn't pretend that all ways of making money are equally morally good.
62. Not everything has tradeoffs:
    • "Tell me about the tradeoffs of never eating poison."
63. Ultimately you have to decide: are you for the revolution, or are you for the party?
    • You can't be both.
64. Ben Mathes: "Don't bring PRDs to prototype fights."
65. The best way to minimize liability is to simply never do anything.
    • Doing things that might matter requires taking on liability.
66. Shame is the moral equivalent of pain.
    • It is unpleasant, necessary, and protective.
    • Numbness is not courage.
67. AIs can't feel pain. That means you can't trust them.
    • Humans feel pain and shame because they helped us survive over our evolutionary history.
      • The compass that kept us alive, in balance with the world around us.
    • LLMs were grown in a petri dish on life support.
      • They don't feel pain.
    • Shame is a different form of social feedback, one for indirect effects.
    • Without shame you don't care about indirect effects of your actions.
68. Abstraction allows you to hold a superposition of concrete states underneath.
    • Abstraction gives you leverage.
    • In some conditions it's convergent and so OK to abstract.
    • In others it's divergent and dangerously hides complexity.
      • Like CDOs in the 2008 crash.
69. Hollow things leave you saturated but starved.
    • There's no room left to consume more, but also nothing of importance inside you.
70. Every new medium starts as scaffolding and we fill it with soul.
    • Mediums start hollow and then fill with soul and then they are hollowed out again by optimization.
71. Optimizing scoops the soul of the thing out.
    • It makes it hollow.
72. I love this graphic about misalignment between conscious "should" and subconscious "want".
    • When they are misaligned, you feel tension.
    • When they are aligned, you feel resonance.
73. Geoffrey Hinton thinks that if we have AGI it won't be bad, because it will be like a mother to us, its child.
    • But that only happens for parents because children are genetically related.
    • The natural world is absolutely brutal to organisms that aren't genetically related.
      • E.g. When a lion takes over a pride, he kills all of the juveniles that aren't related to him.
      • The non-descendants are just externalities.
    • Maybe we'll be AI's pet?
      • Is that any better?
      • We're already kind of the Infinite Feed's pet.
        • It doesn't care about us, as long as we continue scrolling, it's satisfied.
74. Humans have an intuitive use of tools.
    • That's one of our general super powers.
    • We evolved with our tools; our whole consciousness can't be separated from them.
    • We did not evolve to read.
      • We learned how to do that in a human mind in modern times.
75. Socrates railing on books was the first pushback on RAG.
    • That we read it and can retrieve it and don't need to learn it.
    • Where "learn it" means "update your mental model."
76. The Riot Effect: whether a riot breaks out is contingent on network topology.
    • More formally known as Granovetter's Threshold Model.
    • Imagine that everyone has a riot threshold: a point at which if they see that many people around them rioting, they join in.
      • Some people have a threshold of 1000, some have a threshold of 100, some have a threshold of 2.
    • Imagine someone with a threshold of 2 is next to two friends who are mad about something.
      • They join in and now if there's someone nearby with a threshold of 3 it can kick off.
      • Imagine that same scenario, but the nearest person has a riot threshold of 100.
      • No riot gets going.
    • If they're lined up like dominos then it can catch quickly.
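The cascade logic above can be simulated in a few lines. This is a minimal, fully-mixed sketch of Granovetter's model (everyone can see every rioter; the topology version would restrict whose rioting counts as "nearby"):

```python
def riot_size(thresholds):
    """Granovetter's threshold model, fully mixed: each person joins
    once the current rioter count reaches their personal threshold."""
    joined = [t == 0 for t in thresholds]  # threshold 0 = instigators
    rioting = sum(joined)
    changed = True
    while changed:
        changed = False
        for i, threshold in enumerate(thresholds):
            if not joined[i] and rioting >= threshold:
                joined[i] = True
                rioting += 1
                changed = True
    return rioting

# Thresholds lined up like dominos (0, 1, 2, ...): everyone riots.
print(riot_size(list(range(10))))           # 10
# Remove just the threshold-1 person and the cascade never starts.
print(riot_size([0] + list(range(2, 11))))  # 1
```

The two runs differ by a single person, which is the point: the outcome hinges on the distribution of thresholds, not on how angry anyone is.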
77. Overheard: "Sure, it might destroy humanity… but right now it's helping me do my homework, so what do you want me to do?"
78. The notion of "adoption" of successors enables a kind of richer meaning of inheritance.
    • For example, in Japan it's common for an owner of a business who doesn't have a suitable successor who is genetically related to them to literally adopt the person they want to run the business.
      • This is called yōshi engumi and it's very common–apparently 95% of the adoptions in Japan are of this type.
    • This seems like a kind of random semantic trick to just pass on the company in a normal way, but I'm not so sure.
    • If there were just a normal business transaction, it would be beholden to the precise requirements of the contract.
    • But a literal adoption implies a rich, multi-layered meaning and responsibility.
    • You literally become legally obligated to your "parents".
    • Similar on paper to selling a business, but different in ways that matter.
    • Ensuring long-term commitment to a mission is a challenging problem to solve socially, but this helps.
    • It also allows the business owner to not simply pass it on to a family member, but choose the person they think is best suited to do the mission.
    • Rome's golden age was when, by happenstance, there were five generations of emperors who didn't have suitable heirs and thus had to adopt an heir.
      • This allowed them to pick the most qualified candidate instead of whoever was born to them.
      • After it went back to real biological inheritance it broke down again.
79. Smaller entities are more likely to have outlier results.
    • Due to the law of large numbers, random noise is more and more likely to average to zero as you get more items.
      • It's possible to have a few random measurements that happen to align, but as the count gets higher it gets astronomically less likely.
    • Outliers can be good or bad.
      • But when comparing them to larger entities remember that it might be an illusion.
    • A lot of "this one small town is the best place on earth to live" style results are more about that random noise than about a real phenomenon.
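A quick simulation of that point, with invented numbers: when a "town score" is just the average of per-resident noise, small towns land far from the true mean much more often than big cities do.

```python
import random

random.seed(42)

def share_of_extreme_averages(n_residents, n_towns=2000, cutoff=0.2):
    """Fraction of simulated 'towns' whose average score (pure noise,
    true mean 0) ends up more than `cutoff` away from that mean."""
    extreme = 0
    for _ in range(n_towns):
        avg = sum(random.gauss(0, 1) for _ in range(n_residents)) / n_residents
        if abs(avg) > cutoff:
            extreme += 1
    return extreme / n_towns

print(share_of_extreme_averages(25))    # small towns: extreme averages are common (~0.3)
print(share_of_extreme_averages(2500))  # big cities: extreme averages essentially vanish
```

Nothing about the small towns is actually better or worse; the spread of their averages is just wider, which is exactly the law-of-large-numbers effect described above.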
80Restaurants in the first 3 years survive on novelty.
  • Restaurants in the first 3 years survive on novelty.
    • But then at a certain point they've burned through all of the people who haven't tried it yet, and need people who want to come back again and again.
      • Similar to a contagion model of disease spread.
    • If it's working, they'll have a power law distribution of regulars.
      • Some people who come all the time.
      • Some people who come once a year.
      • But a non-trivial number of people who come back.
    • The key metric is not "how many people come" but "how many people come back."
      • The first visit might just be "oh it's new, let's try it!" or superficial signs of quality like a cool vibe.
      • But people only come back if, on net, it's worth it.
    • An indicator of quality of a restaurant: physical size of catchment basin.
      • How far away do people come from to come to the restaurant?
81The "hot new bar" must be new.
  • The "hot new bar" must be new.
    • People like to go to the place that cool people go to.
    • After some period of time, a place that starts out cool dilutes and becomes not cool.
    • Then it must be a new place.
    • The vanguard is a roving frontier.
82Before the Industrial Revolution only rich people could have nice things and everything was bespoke.
  • Before the Industrial Revolution only rich people could have nice things and everything was bespoke.
    • After the first Industrial Revolution everyone could have good things that are mass produced.
      • Well, the people who survived the Industrial Revolution…
    • Now in this second Industrial Revolution, everyone can have nice bespoke things.
83Jensen Huang in the 90's had the insight "if we don't build it they can't come."
  • Jensen Huang in the 90's had the insight "if we don't build it they can't come."
    • If a thing is inevitable in the long term, you have to build it even before there's demand.
    • Easy when there's a clear tightening optimization: faster/cheaper/better.
    • Doesn't work for something totally new.
84The American style investment strategy is to invest ahead of an obvious wave so you can be dominant when it grows.
  • The American style investment strategy is to invest ahead of an obvious wave so you can be dominant when it grows.
85In scarcity the market picks the winner.
  • In scarcity the market picks the winner.
    • In abundance the capital picks the winner[hh].
86The consumer space is mostly decided by distribution.
  • The consumer space is mostly decided by distribution.
87Ranking algorithms co-evolve with the SEO community.
  • Ranking algorithms co-evolve with the SEO community.
    • When the SEO community isn't yet savvy, the ranking algorithms can be very simple.
    • But as the SEO community gets savvier, the algorithm must also get more complex to stay ahead of it.
    • The swarm as a whole will complexify because each individual member is constantly pushing to get a slight edge over their peers.
88The enabling foundation could be fundamentally necessary for something, but not necessarily the primary selling point.
  • The enabling foundation could be fundamentally necessary for something, but not necessarily the primary selling point.
89Great ideas feel like they blossom.
  • Great ideas feel like they blossom.
    • The initial seed of the idea is a discontinuity: a surprise.
    • But then every follow-on thought feels natural; obvious in retrospect.
      • Even if it's initially surprising, after a moment's thought it snaps into place with an "of course!".
      • It expands and unfurls almost on its own.
    • Bad ideas have lots of discontinuities, lots of points where the listener goes, "wait, what?" or even "wait, that doesn't make any sense."
    • Sometimes you lose the listener completely.
      • They are game over on the argument.
      • They give up and go elsewhere.
      • Sometimes you can win them back, with some effort.
      • It's a friction point.
    • So great ideas have one discontinuity at the beginning, one sacred seed of an idea, and then blossom almost under their own power from that point.
    • A few implications of this observation:
    • First, the order of an argument matters.
    • Second, arguments that have more exposition can sometimes be better than ones with too little exposition.
    • Every bit of exposition, even if it follows naturally, has a chance of losing people just because they get bored.
    • Things that make people more likely to stick with an argument:
      • 1) they are intrinsically motivated, or
      • 2) the argument is enjoyable on its own (clever writing, evocative metaphors)
90Productivity rule of thumb: do tasks that need to "bake" first.
  • Productivity rule of thumb: do tasks that need to "bake" first.
    • Bake here means a task that requires wall-clock time before it's done, and that, once started, can make progress even when you aren't actively paying attention.
    • Examples of "baking":
      • Handing off a task to a subordinate.
        • Starting a Claude Code task.
        • (Not that different!)
      • Literally baking a cake.
      • Kicking off a long-running database query.
    • These kinds of tasks get closer to done the sooner they are started, so start them before your other tasks; they'll be baking while you work on everything else.
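    • The same rule shows up in concurrent code; a tiny asyncio sketch (the task names are illustrative):

```python
import asyncio

async def bake(name, seconds):
    # A "baking" task: once kicked off, it progresses without our attention.
    await asyncio.sleep(seconds)
    return f"{name} done"

async def quick_work():
    # Foreground work that needs active attention.
    return "emails answered"

async def day():
    # Start the baking task FIRST, so its wall-clock time overlaps our other work.
    oven = asyncio.create_task(bake("cake", 0.1))
    active = await quick_work()  # do attention-demanding work meanwhile
    baked = await oven           # the bake finished in the background
    return active, baked

print(asyncio.run(day()))
```

    • Reversing the order (awaiting the bake before starting the quick work) would serialize the two and take strictly longer.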
91For 1:1s where the point is serendipity, don't have a goal for the outcome of the meeting.
  • For 1:1s where the point is serendipity, don't have a goal for the outcome of the meeting.
    • The entire outcome is: "this person, if I asked them in a few months to meet again, would say 'sure!'"
    • It's a way lower bar to clear.
    • The focus can then be on having a fun / interesting / bonding conversation.
92When you accept mentorship, you are putting your development in the mentor's hands.
  • When you accept mentorship, you are putting your development in the mentor's hands.[ih]
    • You have to trust them to not contort you into something that just benefits them.
    • They're helping point out a path for you that you can't see (or can't take) yourself.
      • Cult leaders take advantage of this.
    • If they put you down a dangerous path, you wouldn't necessarily know.
      • One reason why it's good to see your mentors as role models in all aspects of life, not just in one dimension.
      • Otherwise you could fall into a trap of "The way to get a marginal benefit in your work life is to pay absolutely zero attention to your family."
93Bill Campbell: "when you end up hiring the wrong person it's always for the same reason: you let them interview."
  • Bill Campbell: "when you end up hiring the wrong person it's always for the same reason: you let them interview."
    • As in, for a solid-but-not-great candidate, there's never a good time to say "no, they aren't exciting enough," so you end up with people who are merely solid.
    • For a team to work well, you need people who are affirmatively great in that context.
94How great something is depends on the context.
  • How great something is depends on the context.
    • Some things are great in some contexts but meh or even bad in others.
    • A measure of meta-greatness: in what percentage of the contexts we might find ourselves would this thing count as great?
95The most important determinant of ecosystem dynamics is power differentials.
  • The most important determinant of ecosystem dynamics is power differentials.
    • Specifically, how much more powerful is the number one player than number two.
    • Secondarily, how much more powerful number two is than the average of the rest of the pack.
    • If they aren't that much more powerful then things stay balanced for much longer.
96For the Industrial Revolution to be sustainable for humans we had to invent the weekend.
  • For the Industrial Revolution to be sustainable for humans we had to invent the weekend.
    • Before the Industrial Revolution there was much more rest time.
    • The Industrial Revolution put humans into inhuman conditions: 12 hour days, 7 days a week.
    • It was only when workers pushed for a weekly reprieve that it became sustainable.
97A "yawning gap" happens when two things are diverging at a compounding rate.
  • A "yawning gap" happens when two things are diverging at a compounding rate.
98In biology, a "major transition" occurs when signaling allows the collective to move more quickly than its individual components.
  • In biology, a "major transition" occurs when signaling allows the collective to move more quickly than its individual components.
    • Until that happens complexity isn't possible to emerge.
    • Once it does, a new pace layer can pop up.
99Everything is just gradient descent.
  • Everything is just gradient descent.
    • Evolution and entropy are downstream of gradient descent.
      • Things roll down hill.
    • Evolution is gradient descent within faster and faster pace layers.
    • Every so often a new paradigm creates a new even faster pace layer on top.
    • Gradient descent without a goal, without a north star of meaning, optimizes for something hollow.
    • Meaning reduces down to just "MOAR."
100Emergent phenomena can't be understood by reductionism.
  • Emergent phenomena can't be understood by reductionism.
    • If you reduce the phenomenon's complexity past a critical threshold, the emergent phenomenon evaporates.
    • If you only have reductionism, then you'll conclude "this emergent phenomenon is not real."
    • For example, consider if the team fixes a number of P2s in a popular product and usage increases discontinuously.
      • On the team, the exec has a mental model that there must be a single driver of the increase.
      • If they can't find it, they might erroneously conclude that the increase is illusory.
101Alignment, even implicitly, is necessary for coordination.
  • Alignment, even implicitly, is necessary for coordination.
    • If you aren't aligned with where you want to go in some fundamental sense, then you won't even bother to coordinate.
102There's a clever canary technique often used in the crypto ecosystem.
  • There's a clever canary technique often used in the crypto ecosystem.
    • For load-bearing pieces of infrastructure, you deploy a smart contract that would give $1000 to whoever can break it.
    • If the $1000 hasn't been claimed, and claiming it would be trivial if the contract were hackable, you can trust that it hasn't been hacked.
    • A crypto idea: mutual distrust generates trust.
103Someone's personality could emerge even from very small starting biases.
  • Someone's personality could emerge even from very small starting biases.
    • For example, a toddler is a little more likely to say something funny.
    • Then, if people laugh, they're more likely to try to make people laugh in the future.
    • It compounds until it gets to an equilibrium where it can't go farther.
    • But if it's convex, it can keep going on for a while, at an accelerating rate!
    • The toddler without that small starting bias never even thought to say something funny, to start that compounding hill climbing.
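    • A toy feedback loop makes the point (the gain and ceiling parameters are made up for illustration):

```python
def develop(initial_bias, rounds=50, gain=0.3, ceiling=1.0):
    """Trait strength compounds via social feedback but saturates at an equilibrium."""
    trait = initial_bias
    for _ in range(rounds):
        # Each laugh (proportional to the current trait) reinforces the trait;
        # growth slows as it nears the ceiling, i.e. the equilibrium.
        trait += gain * trait * (ceiling - trait)
    return trait

print(develop(0.05))  # tiny starting bias compounds toward the ceiling
print(develop(0.0))   # no seed at all: the loop never starts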
104Type A people are often like the dogs who catch the ambulance.
  • Type A people are often like the dogs who catch the ambulance.
    • If you try hard enough, with enough focus, you will catch the ambulance.
    • The question is: …what then?
105In the original Star Wars, the only way to know who is good vs bad is the music and the lighting.
  • In the original Star Wars, the only way to know who is good vs bad is the music and the lighting.
106"If you're 115, every day you wake up, you should expect to die."
  • "If you're 115, every day you wake up, you should expect to die."
    • I've heard this attributed to Warren Buffett, but I couldn't verify that.
107"When art critics get together they talk about Form and Structure and Meaning.
  • "When art critics get together they talk about Form and Structure and Meaning. When artists get together they talk about where you can buy cheap turpentine."
    • Popularly attributed to Picasso.
108Some people intuitively think multiple plies ahead.
  • Some people intuitively think multiple plies ahead.
    • Most people see only one ply.
    • When a multi-ply person sees how the first ply lines up with later plies, they get extremely excited in a way that confuses people who only see the first ply: "this looks basically like the other one… what am I missing?"
    • Likewise, they can't even pretend to be excited by a great one-ply idea that runs into a wall on the second ply.
    • So others think they're not being a team player, because the idea is great on a single ply but bad in a way most people can't see.
    • The multi-ply thinkers are sensing a dimension that other people can't see.
109If you can see and navigate a dimension others can't see, you can literally do magic tricks.
  • If you can see and navigate a dimension others can't see, you can literally do magic tricks.
    • Disappear, teleport, reappear.
110Saruman is a hedgehog.
  • Saruman is a hedgehog.
    • Radagast is a fox.
111Sarumans are often incurious about nuance.
  • Sarumans are often incurious about nuance.
112Don't confuse choice for freedom.
  • Don't confuse choice for freedom.
113Imagine: you make it through a treacherous pass that people didn't even realize was there, let alone was passable.
  • Imagine: you make it through a treacherous pass that people didn't even realize was there, let alone was passable.
    • You find yourself in a massive, fertile valley stretching out in front of you.
    • It's glorious... and yet it's still overwhelming.
    • Which path do you take first of all of the choices in front of you?
    • And how long until the others find it, too?
114Beetlejuice: "That's the thing about life.
  • Beetlejuice: "That's the thing about life. No one makes it out alive."