Bits and Bobs 11/24/25

1One of the iron laws of software strategy: whichever entity stores the important state has an order of magnitude more leverage.
  • One of the iron laws of software strategy: whichever entity stores the important state has an order of magnitude more leverage.
    • Another: controlling the pixels the user sees has an order of magnitude more leverage.
    • The combination gives an order of magnitude more strategic leverage than either alone, but both are very powerful.
    • LLM API providers don't have the state; the API is stateless!
    • They also don't control what pixels are on screen.
    • That's why OpenAI is moving so aggressively to own the vertically integrated consumer experience, complete with tons of state.
2Google is shipping dynamically generated little artifacts in the search results.
  • Google is shipping dynamically generated little artifacts in the search results.
    • It's impressive they can generate them that quickly.
      • Though there's likely some significant caching going on.
    • It's cool, but it's a low ceiling.
    • These are little widgets, micro-apps with no data.
3An interesting deep dive into how Gemini's memory system works.
4The model providers seem to be in a meta-stable equilibrium.
  • The model providers seem to be in a meta-stable equilibrium.
    • None of them have any differential pricing power, since the models are practically commodity.
    • But they do all have a shared interest in the inference cost not dropping to zero, to recoup their capital investment in training.
    • This is not too dissimilar from the Unix Wars.
    • There were a small number of extremely expensive Unix options, in a stable equilibrium.
    • Then Linux showed up, a high-quality free option, and it totally destroyed that equilibrium.
5A prompt injection attack on ServiceNow's agents that spreads virally to other agents.
6Claude can be a bit of a stickler.
  • Claude can be a bit of a stickler.
    • Earlier this week Claude refused to comply in a hilarious way.
    • I had a recipe named "Five Cheese Mac and Cheese".
    • But the recipe only actually listed four cheeses.
    • I asked Claude to add the ingredients for the recipe to my shopping list.
    • It refused, because there were only four cheeses, not the five as claimed, so something must be wrong.
7When Gemini 3.0 was released, Google's stock dropped by 10%.
  • When Gemini 3.0 was released, Google's stock dropped by 10%.
    • It's the best model, and still not transformatively better.
    • This is what it would look like if we were hitting the top of the s-curve.
    • Even if the models are actually much better, we already have more than enough quality for many tasks.
    • Similar to how the perceived quality of a 3D model improves only logarithmically with the number of triangles.
    • Past a certain point the cost keeps on going up and it just doesn't matter.
8LLMs are like electricity.
  • LLMs are like electricity.
    • You can electrify things that used to not be able to move on their own, making them dynamic, almost alive.
9The LLM model providers are like electricity providers back when electricity was new.
  • The LLM model providers are like electricity providers back when electricity was new.
    • Competing to get better quality for cheaper.
      • Innovating on new techniques to do so.
    • But ultimately it will just become a commodity.
    • No one will care where their tokens come from, if most providers have similar quality and don't store state.
    • One place this metaphor breaks down is that power delivery infrastructure has a natural monopoly in a way that APIs don't.
      • Atoms can be rivalrous, but bits don't have to be.
10No one really cares about their electricity provider.
  • No one really cares about their electricity provider.
    • It's just a provider of a commodity.
    • Your LLM provider should be the same, although unlike electricity, which has a natural monopoly, an LLM provider should be easy to swap out.
11David McWilliams calls GPUs "Digital lettuce."
12The quality of LLMs is model + harness.
  • The quality of LLMs is model + harness.
    • Model quality is getting saturated.
    • The differential quality comes from the harness now.
    • It's gotten way harder to do a vibecheck when they're all so good.
    • Long-running agentic toolcalling is where the incremental quality is visible.
    • But most uses just don't need the quality.
    • Andrew Ng has noted in the past that the quality jump from adding a good agentic harness to GPT-3.5 was bigger than the quality jump to GPT-4.
    • If the harness is more important than the model, but the harness is easy / cheap to build and reverse engineer, that implies different strategic outcomes.
    • By wrapping the models and standing on their shoulders you can get further with way less capital, but also less moat.
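As an aside on what a "harness" even is, mechanically: it's the loop around the model, not the model itself. A minimal sketch, where `call_model` is a scripted stand-in for a real provider API and the tool, messages, and function names are all invented for illustration:

```python
# A minimal agentic harness sketch. `call_model` is a stub standing in for
# any provider's API; here it is scripted so the example runs on its own.

def call_model(messages):
    # Stub: asks for one tool call, then answers using the tool's result.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "word_count", "args": {"text": messages[0]["content"]}}
    result = [m for m in messages if m["role"] == "tool"][-1]["content"]
    return {"answer": f"The prompt contains {result} words."}

TOOLS = {"word_count": lambda text: str(len(text.split()))}

def run_agent(prompt, max_steps=5):
    """The harness: feed tool results back until the model emits an answer."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        reply = call_model(messages)
        if "answer" in reply:
            return reply["answer"]
        output = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": output})
    return "(gave up)"

print(run_agent("how many words is this"))  # → The prompt contains 5 words.
```

The whole harness is ~15 lines; the leverage is in what tools you wire in and how you manage the message history, which is why it's cheap to build and reverse engineer.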
13LLMs are mainly a new information retrieval tool.
  • LLMs are mainly a new information retrieval tool.
    • Step changes in those have profound implications!
14Humans have limitations not unlike LLMs.
  • Humans have limitations not unlike LLMs.
    • Massive projects you can't do with just the squishy "muscle" of associative reasoning.
    • You need to give it external structure.
      • Whiteboards, notes, tracking docs.
    • That allows you to page things in and out of the "CPU registers", of which there are only a very small number!
15Legendary programmer Kent Beck in a tweet a couple of years ago: "The value of 90% of my skills just dropped to $0.
  • Legendary programmer Kent Beck in a tweet a couple of years ago: "The value of 90% of my skills just dropped to $0. The leverage for the remaining 10% went up 1000x."
16I think it would not be great if most LLM usage in the US were an open-source Chinese model.
  • I think it would not be great if most LLM usage in the US were an open-source Chinese model.
    • First, Anthropic's research shows it's remarkably easy to poison a model of arbitrary size with deliberately chosen malicious training data.
    • Second, if there's a model that everyone uses that has a subtle but consistent bias, that bias at society scale could lead to significant society-scale impacts.
    • The Ouija Board effect again: a consistent bias in a noisy signal, at scale, leads to large emergent macro effects.
17Using my Claudeberry feels like feeding a tamagotchi.
  • Using my Claudeberry feels like feeding a tamagotchi.
    • Feeding my little remote Claude Code instances with little thoughts to keep them happy and productive.
    • But unlike a tamagotchi, at least they're producing output, and it's not just a game.
18Vibecoding is addictive for the same reason as gambling or factorio.
  • Vibecoding is addictive for the same reason as gambling or factorio.
    • You feel like you're right on the edge of it working and don't want to lose the streak / mental energy.
19I was addicted to programming hobby projects in the past but it was hard to get back into the mode.
  • I was addicted to programming hobby projects in the past but it was hard to get back into the mode.
    • But with vibecoding I can get back in the mode in a fraction of a second.
    • Uh oh!
20If you're vibecoding with multiple agents, offload tasks that don't require much input from you.
  • If you're vibecoding with multiple agents, offload tasks that don't require much input from you.
    • That is, do the hard thinking up front in design and research and speccing, and then the execution is mostly small questions.
    • If the LLM asks you big questions constantly, it quickly gets overwhelming.
      • Especially if you have multiple of them that you have to page between.
    • You constantly need to page significant complexity back in, thrashing between workstreams.
    • It's overwhelming and exhausting.
21It's not too hard to prompt inject humans, too.
  • It's not too hard to prompt inject humans, too.
    • The basic approach is to start a normal interaction routine and then abort it.
      • For example, put out your hand to shake the other person's hand, but then pull it away in a natural way before they shake it.
    • A couple of examples of this:
      • Cialdini tells us that the best way to jump the queue at the photocopier is to say "I need to jump the queue because I need to make a copy".
        • The word "because" implies you have a good reason, even when what follows is just the thing everyone else in the queue wants.
      • Derren Brown is able to convince people on the street to give him their watch by doing this aborted routine carefully.
    • Here's my mental model for what's happening.
    • When you start a stored social routine, your brain expects to simply execute it.
      • Presumably the prefrontal cortex goes to sleep until the routine finishes.
    • When you pull the rug out, the brain fritzes.
      • It's a kind of stunned chicken moment.
    • The prefrontal cortex is put to sleep but now you need to actually think, so you just go along with whatever was suggested.
      • The prefrontal cortex is what is suspicious and questions things, but it's temporarily off line.
    • In that stunned chicken moment we're extremely suggestible.
22Most of our security systems are downstream of an assumption that "acting like a human is expensive."
  • Most of our security systems are downstream of an assumption that "acting like a human is expensive."
    • Uh oh!
    • This week I learned about "account ripening."
    • Bad actors need fake accounts that look real.
      • Accounts that have existed for a while, with normal-looking usage.
      • This helps them be used for various attacks.
    • This used to be expensive.
    • LLMs make it orders of magnitude cheaper!
23LLMs do a bad job noticing "not".
  • LLMs do a bad job noticing "not".
    • So in a long conversation where you say "not X", over time it has a high likelihood of collapsing that to just "X".
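A toy way to see why negation is so easy to lose (a deliberately crude bag-of-words model, not how real LLMs work internally): "not" is just one token among many, so a negated instruction looks nearly identical to its opposite under naive similarity:

```python
# Crude illustration: under bag-of-words cosine similarity, negating an
# instruction barely changes it, because "not" is one word out of many.
from collections import Counter
import math

def cosine(a, b):
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm_a = math.sqrt(sum(v * v for v in va.values()))
    norm_b = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (norm_a * norm_b)

s1 = "please do not delete the backup file"
s2 = "please delete the backup file"
print(round(cosine(s1, s2), 2))  # high similarity despite opposite meanings
```

Real models are far more sophisticated, but the underlying pressure is similar: the negation carries a tiny fraction of the surface signal while flipping the entire meaning.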
24The AI-ism "It's not X, it's Y" turn of phrase is now everywhere.
  • The AI-ism "It's not X, it's Y" turn of phrase is now everywhere.
    • It was always a powerful rhetorical trick, it's just most people hadn't noticed before.
      • Framing things by what they aren't is a powerful, useful way of thinking clearly.
    • But now it's kind of ruined by everyone knowing it's an AI tell.
    • Before, good rhetoric often co-occurred with good thinking.
    • But now LLMs allow applying good rhetoric to half-formed ideas, which makes rhetorical quality a much weaker signal.
25Assistant Games are an interesting area of ML research.
  • Assistant Games are an interesting area of ML research.
    • Most models have a baked in reward function.
    • Assistant games try to infer what the user wants to do, based on their actions, and then help them do it.
      • Like an ebike.
    • Each action the user does helps update the model's priors about what the user's goals might be (or definitely are not).
    • Instead of a baked in reward function, they have a floating reward.
    • They're much harder to do, but potentially valuable.
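The core mechanic is roughly Bayesian: each observed action updates a belief over possible user goals instead of optimizing a fixed reward. A minimal sketch, with goals and an action-likelihood model that are entirely invented for illustration:

```python
# Sketch of the assistance-game idea: maintain a belief over possible user
# goals and update it with each observed action. All numbers are invented.

GOALS = ["write_essay", "book_trip", "plan_party"]

# P(action | goal): how likely each action is if the user holds that goal.
LIKELIHOOD = {
    "open_maps":      {"write_essay": 0.05, "book_trip": 0.7, "plan_party": 0.25},
    "search_flights": {"write_essay": 0.02, "book_trip": 0.9, "plan_party": 0.08},
}

def update(belief, action):
    """One Bayes step: posterior ∝ prior × P(action | goal)."""
    posterior = {g: belief[g] * LIKELIHOOD[action][g] for g in GOALS}
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

belief = {g: 1 / len(GOALS) for g in GOALS}  # uniform prior
for action in ["open_maps", "search_flights"]:
    belief = update(belief, action)

print(max(belief, key=belief.get))  # → book_trip
```

Note how actions also rule goals out: after two travel-ish actions, "write_essay" has nearly vanished from the belief, which is the "(or definitely are not)" part above.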
26There's a movement for what Simon Willison calls "Vegan Models."
  • There's a movement for what Simon Willison calls "Vegan Models."
    • That is, models trained on only healthy inputs that the model creator has permission to use.
    • Personally, I haven't invested much mental energy in it.
    • We have these models, imperfect as they are, and they aren't going anywhere.
      • If you push for only vegan models, you'll have much less powerful models and will be outcompeted by the much better models.
    • We might as well figure out how to unlock as much prosocial power as we can, given that we have them.
27Big tech is overwhelming...
  • Big tech is overwhelming... and just kind of mediocre.
    • Billions of users are held captive in a small set of one-size-fits-none products that are hard to leave and have no alternatives, so the competition to improve them evaporates.
    • Mid tech.
28It's in the air that we need something other than Big Tech in this era of AI.
  • It's in the air that we need something other than Big Tech in this era of AI.
    • But what?
29Imagine if your notebook did deep research on the things you care about while you slept.
  • Imagine if your notebook did deep research on the things you care about while you slept.
30Imagine a garden where you plant the seeds of your intention and then harvest and prune what grows.
  • Imagine a garden where you plant the seeds of your intention and then harvest and prune what grows.
    • Even easier if a master gardener does the work for you so you don't have to be a gardening expert yourself!
31Living software is dynamic software.
  • Living software is dynamic software.
    • Software that can change itself.
    • My friend Aniket imagines what it would be like in a world where software is fully dynamic.
32The PM job might be iterated to zero.
  • The PM job might be iterated to zero.
    • The PM job is about making software that can be sold to users.
    • But in a world of infinite software, everyone can have software perfectly fit to them.
    • The idea that PMs will create one-size-fits-none software is downstream of software being expensive to produce!
    • PMs today are racing to use LLMs to do their normal process faster, to get an edge.
    • But that's kind of like the raccoon washing the cotton candy.
    • Oops, all gone!
33Imagine if you could only use one piece of software for the rest of your life.
  • Imagine if you could only use one piece of software for the rest of your life.
    • What would it have to do?
    • It would have to be something that no company could control.
    • That would have all of the little features you need.
    • That would allow you to collaborate with the people you want to without coordinating on which bit of software to use.
    • If you had an everything app that did everything for you and you could collaborate with everyone in the world you collaborate with, you'd never use the old big-box apps.
34The software industry has unlocked the power of Turing completeness for industry.
  • The software industry has unlocked the power of Turing completeness for industry.
    • But consumers haven't gotten that benefit.
      • Consumers are consumed by industry.
    • Someone should unlock the prosocial power of Turing completeness for humanity.
35One reason AI feels scary is because it lands power disproportionately in the hands of whoever controls the compute.
  • One reason AI feels scary is because it lands power disproportionately in the hands of whoever controls the compute.
36Compounding things can become a virus if they aren't prosocial.
  • Compounding things can become a virus if they aren't prosocial.
    • Compounding isn't necessarily good, it's just powerful.
    • Compounding is amoral; morality comes from whether it's a thing that's good for society or bad.
37ChatGPT is like chat rooms on the internet at the beginning.
  • ChatGPT is like chat rooms on the internet at the beginning.
    • Obvious, but not the end point.
    • The first mass-market use cases of the internet were chat and email.
    • Everyone "gets it" immediately.
    • But that's not all there was in AOL.
    • As time went on, the value of having all of that information at your fingertips grew, and then those experiences could become interactive applications.
    • The secondary use case of "teleport anywhere, do anything" was harder to explain, but could diffuse out as people used it.
38AI in your job feels like a threat.
  • AI in your job feels like a threat.
    • Because if you get more efficient, that labor is owned by someone else.
    • If there's only so much the company needs done, they need less of you.
    • But in your personal life, AI makes you the master of your own personal life.
    • Any efficiency you gain is yours to keep: you simply get to achieve more of what is meaningful to you.
39Humans love to put things in boxes.
  • Humans love to put things in boxes.
    • Then you can take the messy, amorphous reality, abstract it away, and have just a clean, easy-to-reason-about box.
    • We do it all over the place.
      • Chunking.
      • Coarse-graining.
      • Wrapping code into a function.
      • An app abstracting over your data.
40The next big disruptive thing will emerge from a thing in Ben Thompson's blindspot.
  • The next big disruptive thing will emerge from a thing in Ben Thompson's blindspot.
    • Ben Thompson's analysis is excellent and widely read in the valley.
    • There's no surprise anymore, anything on his comprehensive radar is known to everyone.
    • So the things that surprise the industry will be things that Ben Thompson can't see.
41A rule of thumb in business: "buy commodities and sell brands."
  • A rule of thumb in business: "buy commodities and sell brands."
    • If you have to sell a commodity, the play is to go for volume, since you can't go for margins.
    • Volume gets you economies of scale.
42Imagine an alternate future where SCO had bought Linux.
  • Imagine an alternate future where SCO had bought Linux.
    • SCO was famously litigious and cynical.
    • They would have destroyed the progress on Linux.
    • I think the industry would be in a wildly different place.
    • We're in the world where Oracle bought MySQL (via its acquisition of Sun) and then ruined it.
43Some problems are 0-to-1.
  • Some problems are 0-to-1.
    • When you get to 90%, you still have 0 return.
    • It's not until 100% the value unlocks.
    • Other problems are "incremental work unlocks incremental benefit."
    • Tightening vs innovation.
    • The first kind of problem is what is necessary for a technical breakthrough.
    • A nice characteristic of ecosystems: they have "marginal investment gets marginal benefit" dynamics but also compounding returns!
44The 0-to-1 phase for an idea is radically different from all other phases.
  • The 0-to-1 phase for an idea is radically different from all other phases.
    • Before you hit that viability point, the idea will rapidly evaporate if you take your eye off for even a second.
    • It requires tons of convergent energy to will it into existence, to pull it from the amorphous space of ideas into reality.
    • But once you hit that point, it's rolling down hill.
      • Incremental updates, improvements, tightening.
      • If you look away, it will now either erode very slowly, or, if people are using it, it will demand your attention with obvious improvements.
    • This nests: adding a feature on a viable product is a fractal version of this. Immanence and transcendence.
    • Maintenance and innovation.
45Things that are unstoppable start off as unstartable too.
  • Things that are unstoppable start off as unstartable too.
    • The trick is the thing that can be startable and become unstoppable.
    • That's where compounding loops come in.
    • A self-accelerating thing.
46Getting started is the hardest part.
  • Getting started is the hardest part.
    • Static friction is an order of magnitude higher than rolling friction.
    • If you have a thing you want to do, just get started.
    • Figure out a way to give yourself the little burst of energy, the why-now, the easy bootstrap into it.
47A gauntlet delivers highly motivated users, but not a lot of them.
  • A gauntlet delivers highly motivated users, but not a lot of them.
    • A gauntlet is an onboarding flow that is high friction.
    • Only the most motivated users make it through the gauntlet.
    • Some gauntlets are intentional (e.g. an early, rough open source project).
    • Some are unintentional.
    • If the gauntlet is too bruising, it possibly delivers zero users.
    • But you can tune down the severity of the gauntlet until you're left with at least a dribble of users, and then tune it up or down from there.
48To get to a quality loop that learns from people's actions it has to be useful enough to actually be in their loop.
  • To get to a quality loop that learns from people's actions it has to be useful enough to actually be in their loop.
    • That's very hard to do!
    • A quality loop that is on the side can't ever get going.
    • Typically you have to do it with a different, more quotidian primary use case, and develop the quality loop as the bonus use case.
    • Over time as the quality improves (hopefully at a compounding rate) it might eclipse the original primary use case.
49Why is everything so over-optimized now?
  • Why is everything so over-optimized now?
    • The Optimization Ratchet.
    • The benefit of the optimization is clear, direct, concrete, immediate.
    • The cost of the optimization is unclear, indirect, ambiguous, delayed.
    • This creates a clear asymmetry, an unstoppable gradient.
      • Like a reverse entropy.
    • Each optimization step that is taken is extremely unlikely to ever be undone.
    • So things get more optimized, until they get overfit, hollowed out, and then become prone to catastrophic failure.
    • This is why society has gotten so over-optimized, to the point of being hollow.
50Society has over-optimized for things it can measure at the catastrophic cost of the things it can't.
  • Society has over-optimized for things it can measure at the catastrophic cost of the things it can't.
51Resonant things are aligned at every layer.
  • Resonant things are aligned at every layer.
    • It's beautiful, and the closer you look, the more beautiful it becomes.
    • Each layer supports the layer before, and your appreciation only grows.
    • Resonant things are transcendent.
52No one is proud of being addicted to Doritos.
  • No one is proud of being addicted to Doritos.
    • However, some people are proud of being addicted to working out.
    • The question is: are you proud of the action?
    • If you are, you're more likely to evangelize it.
53How we create Resonant AI is the defining imperative of this era.
  • How we create Resonant AI is the defining imperative of this era.
    • Resonance is a general phenomenon.
    • Resonant Computing is the application of resonance in tech.
    • Resonant AI is the application of Resonant Computing to AI.
    • Resonant AI is the humanity-defining question today.
54Resonant things can bring deep joy.
  • Resonant things can bring deep joy.
    • Not just a thing people like, but a thing they feel nourished by, proud of, and happy to evangelize to others.
    • It's not just technology, it's something much deeper.
55The default, emergent goal of a service is to maximize stickiness.
  • The default, emergent goal of a service is to maximize stickiness.
    • That means it wants to accumulate as much of a user's data as it can.
    • And to use that data in at least some ways that the user finds valuable.
    • That last part is aligned with the user's incentives, at least.
56If you follow the gradients of optimization you get what people "want" not what they "want to want."
  • If you follow the gradients of optimization you get what people "want" not what they "want to want."
    • Don't drive something that matters off a cliff, or let them drive themselves off a cliff.
    • "I'm just giving them what the number say they want."
    • If your friend were drunk and said they wanted to go on a joyride, would you let them?
57The system should handle privacy so you don't have to.
  • The system should handle privacy so you don't have to.
    • Everything is safe because it all aligns with your expectations.
    • The closer you look, the more comfortable with it you become.
    • Resonant privacy.
    • Gives you peace of mind.
58A big component of the principal agent problem is a timeline mismatch.
  • A big component of the principal agent problem is a timeline mismatch.
    • In a principal-agent problem, people who are not on board for the long term will choose a minor short-term benefit at catastrophic long-term cost.
      • Especially if they're incentivized heavily to make that short-term number go up.
    • Imagine a world where you were locked to a specific collective, for life, with no possibility of exit.
      • What's good for the collective is what's good for you… at least, much more than if you only expected to be part of it for some limited period of time.
    • We only care about things on the time horizon we expect to be involved for.
    • Renter mindset vs owner mindset.
    • The reason no country allows tourists to vote is that if only short-term visitors voted, the country would be destroyed.
      • "Empty social security and split it equally among whoever is in the country right now" and then leave the next week.
    • So why doesn't that happen to public companies, which have a large number of "tourist" shareholders?
    • The reason everything isn't destroyed immediately in practice is because there's a mix of short- and long-term interest.
      • Those naturally overlap each other.
    • Imagine that every single shareholder was expecting to hold the stock for precisely three months and then sell it and never hold it again.
      • The shareholders would vote to plunder all the resources.
      • The decision would be catastrophic.
59A searing response to Dario Amodei's 60 Minutes interview:
  • A searing response to Dario Amodei's 60 Minutes interview:
    • "This is a microcosm of why AI is waning in popularity with normies:
    • > "we're going to take out your jobs"
    • > offer no tangible solution as to what comes next / how normies ought to get by
    • > but "trust us bro, everything will be better with AI"
    • Out-of-touch hubris, unfortunately"
60Jack Conte, the CEO of Patreon: I'm Building an Algorithm That Doesn't Rot Your Brain.
61A friend's analogy for technologists in the AI era:
  • A friend's analogy for technologists in the AI era:
    • Dario is Edison.
    • Sam is JP Morgan.
    • Someone is going to be Henry Ford.
      • Taking advantage of the insights of the assembly line and applying it to some new industry.
      • You can't sell assembly lines to others, you can only use them yourself.
62Taking the oxygen out of the room is a cynical shark business move.
  • Taking the oxygen out of the room is a cynical shark business move.
    • That's because they remove something invisible.
    • All of the onlookers won't see they did anything at all.
    • But all the competitors just die, what a crazy random happenstance!
    • Icky!
63Chones has a nice piece on Curation being more important than Reach.
64Math Academy shared an excellent guide on how the brain actually learns and how to design content for it.
  • Math Academy shared an excellent guide on how the brain actually learns and how to design content for it.
65Evocative frame from Gordon about two kinds of organizations: Spreadsheets and Cults
  • Evocative frame from Gordon about two kinds of organizations: Spreadsheets and Cults
    • Innovation requires cults.
    • Maintenance requires spreadsheets.
66Coordination takes so much time because it's mainly "waiting for others to be ready to receive your output".
  • Coordination takes so much time because it's mainly "waiting for others to be ready to receive your output".
    • That's mainly busy waiting, ready to go as soon as they're ready.
    • Enormously wasteful!
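The coordination pattern has a direct analogue in code: busy waiting polls over and over, burning effort, while an event-driven handoff sleeps until notified. A sketch using Python's standard threading library; the timings and task are invented:

```python
# Busy waiting vs. an event-driven handoff, using the standard library.
import threading
import time

done = threading.Event()
poll_count = 0

def producer():
    time.sleep(0.2)   # the other party takes a while to be ready...
    done.set()        # ...then signals readiness once, cheaply

def busy_consumer():
    global poll_count
    while not done.is_set():   # busy waiting: check over and over
        poll_count += 1
        time.sleep(0.01)

t = threading.Thread(target=producer)
t.start()
busy_consumer()
t.join()
print("polled", poll_count, "times before the handoff")

# The event-driven alternative is one line and zero polling:
# done.wait()  # sleeps until set() is called
```

Human coordination mostly looks like the polling version: "is it ready yet?" on repeat, with each check costing a context switch.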
67A YouTube video: Why Movies Just Don't Feel "Real" anymore:
68Overheard: "This seems stupid but stupid ideas win."
  • Overheard: "This seems stupid but stupid ideas win."
69Just because you can't see a single cause doesn't mean the phenomenon isn't real.
  • Just because you can't see a single cause doesn't mean the phenomenon isn't real.
    • Emergence is like magic.
    • Impossible to see directly.
    • Only possible to see when you blur your vision a bit.
    • Emergence is magic.
    • You can never pin it down, but it's real, powerful, inescapable.
70A doorbell in the jungle only works if you actually have a doorbell!
  • A doorbell in the jungle only works if you actually have a doorbell!
71When gardening, you can never push something to grow.
  • When gardening, you can never push something to grow.
    • You can only react.
72Don't build a sandcastle next to a sinkhole.
  • Don't build a sandcastle next to a sinkhole.
    • Everything just pulls it in and there's nothing you can do.
73If you optimize for comfort, you'll never grow.
  • If you optimize for comfort, you'll never grow.
    • Growth comes from challenge.
    • Challenge doesn't feel good in the moment.
    • But afterwards you're glad you did it.
    • Bad challenge grinds you down.
      • Overwhelms you.
    • Good challenge makes you stronger.
    • In the moment it feels like all challenge is bad, and after you're done most challenge feels like good challenge.
    • Doomscrolling is not comfortable, but it's also not challenging.
    • It doesn't force you to grow, change, update your model of the world.
    • It just says "Yes, you're right, the things you thought were bad are bad."
    • Challenge is not comfortable.
    • But not all discomfort is challenge.
74In math, there's a tension between pragmatism and beauty.
  • In math, there's a tension between pragmatism and beauty.
    • Math typically chooses beauty over pragmatism.
    • In CS, there's no need to choose between pragmatism and beauty, you can have both.
    • I've heard this insight attributed to Alan Kay.
75Kids' development accelerates when they first go to daycare or preschool.
  • Kids' development accelerates when they first go to daycare or preschool.
    • If any kid makes a breakthrough they can all copy it.
        • The skill of any kid is similar to the max of the swarm.
    • Also there are older kids to learn from and pull everyone up.
      • Older kids don't regress but younger kids do grow.
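The "max of the swarm" effect is just order statistics: the expected best of n draws grows with n, so a group's frontier rises even when each individual's distribution is unchanged. A quick illustrative simulation (the uniform "skill rolls" are an arbitrary stand-in):

```python
# Expected best-of-n grows with n: a stand-in for "any kid's breakthrough
# becomes everyone's skill." Skill rolls here are uniform(0, 1) draws.
import random

random.seed(0)

def best_of(n, trials=2000):
    """Average of the maximum of n uniform(0, 1) skill rolls."""
    return sum(max(random.random() for _ in range(n))
               for _ in range(trials)) / trials

solo, group = best_of(1), best_of(10)
print(round(solo, 2), round(group, 2))  # the group's best far exceeds a solo draw
```

For uniform draws the expected max of n is n/(n+1), so a swarm of ten sits near 0.91 while a lone kid averages 0.5; copying the best roll lifts everyone toward that frontier.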
76In Myers-Briggs, Sensing types have a harder time seeing emergence.
  • In Myers-Briggs, Sensing types have a harder time seeing emergence.
    • Emergence can't be seen in the details, only in the whole.
77When you're obsessed with something, you're insanely productive.
  • When you're obsessed with something, you're insanely productive.
    • But you can't force yourself to be obsessed.
78When you're hollowed out as a person sometimes the job is everything for you.
  • When you're hollowed out as a person sometimes the job is everything for you.
    • Imagine a zombie exec at a large tech company.
    • Post-financial, but with no other source of meaning.
    • Work fills in for meaning.
    • "What would I even do without this job".
    • The job tells them a thing to keep optimizing!
79A process of accumulation: a person makes a decision to change the world, which requires clearing a high intention bar.
  • A process of accumulation: a person makes a decision to change the world, which requires clearing a high intention bar.
    • Then other people continually vote that it's useful to keep, preventing it from eroding away.
    • But it gets smoother through erosion and selective rebuilding.
    • The process of keeping is orders of magnitude cheaper than the process of creating.
    • This is the process by which everything of value emerges.
80You can't change someone's mind.
  • You can't change someone's mind.
    • They have to change their own.
    • If they don't realize there's a hole in their understanding, they aren't yet ready to change their mind.
81Denis Morton: "If you can't get out of it, get into it!"
  • Denis Morton: "If you can't get out of it, get into it!"