Bits and Bobs 3/2/26

1. An adoption threshold: when a dangerous product becomes as safe as paintball.
    • You'll almost certainly get hurt, but it's unlikely you'll have to go to the hospital.
    • OpenClaw doesn't yet clear that threshold.
2. Fascinating take from Steve Yegge a couple of months ago: Software Survival 3.0.
    • "For purposes of computing software survival odds, we can think of {tokens, energy, money} all as being equivalent, and all are perpetually constrained. This resource constraint, I predict, will create a selection pressure that shapes the whole software ecosystem with a simple rule: software tends to survive if it saves cognition."
    • A friend's distillation: "Binary tools with proven solutions to common problems make sense when reuse is nearly free and regenerating them is token-costly."
3. Agents can give you 1000x leverage on your effort.
    • Three different 10x moments:
      • 1) Using Claude Code instead of Claude.
        • Focus on the durable artifact; the chat is a means to an end.
      • 2) Using sub-agents.
        • You don't need to keep in your mind what the next step is; the agents can do multiple things in parallel.
      • 3) An orchestrator agent over a swarm.
        • Instead of you orchestrating the swarm, an overseer agent does.
    • This makes you frantic and nervous every moment your agents are waiting for you; it feels like wasted time whenever your swarm is blocked on you.
4. When I wake up, the first thing I do is feed my Claudes to unblock them.
    • I do it even before I brush my teeth.
    • I know that if I unblock them, they'll keep producing value.
    • Two hours later, I'm still in the loop with them.
    • I think to myself… "Wow I should really brush my teeth…"
5. How much of agent use is real productivity and how much is the aesthetics of productivity?
    • If you make a tool and no one ever uses it (including you), does it matter?
    • There's a bigger market for people who want to feel productive than to be productive.
    • See also "Tool-shaped objects"
    • LARPing productivity.
6. A tweet:
    • "i have one upsetting observation: all the beautifully designed AI tools we've seen so far (dot, humane, cobot) were basically dead on arrival, while complex, highly technical products (claude code, openclaw) gain mass adoption in seconds.
    • we're definitely missing something."
    • Grubby Truffles win out in this phase over Gilded Turds.
7. OpenAI Plans to Price Smart Speaker at $200 to $300, as AI Device Team Takes Shape
8. Agents are better at managing their focus than humans are.
    • Focus in humans is a precious, fragile thing.
      • One person interrupting you at the wrong time can rip you out of your focus.
      • That can feel like ripping a limb off.
    • But agents can be as focused as they want to be… sometimes too focused.
    • They focus on the next step in front of them, and lose track of the big picture, going down rabbit holes that lose the plot.
      • Not too dissimilar to highly competent humans who fall into a hyperfocus trap.
    • Infinite patience allows infinite focus.
9. Code is now as easy as any other type of data to create.
    • Code is a specific subset of data.
      • Magical incantations that can do things.
    • Another sub-class: human-readable data.
      • Natural language that humans are happy to read.
    • Data in general used to be easy to create.
    • Now LLMs can produce natural language and code easily, too.
10. Dumb tools shouldn't become smart.
    • When you use a dumb tool you expect it not to spy on you.
      • It can't!
    • When you use a smart tool, you expect it might develop a model of you.
    • When it switches from dumb to smart, it becomes a violation of your expectation, a betrayal.
      • "I thought you didn't even have a brain, so you couldn't spy on me.
      • Now I see that you were spying on me the entire time!"
      • Things that you did in the system because it was dumb now feel retroactively dangerous in a moment.
    • It's the uncanny moment when you realize the walls literally have ears.
    • This is one reason you won't be able to retrofit agentic workflow onto previously dumb tools like Google Docs, Notion, and Airtable.
11. It's unnerving when an LLM misspells something.
    • From the classic Simpsons episode: "Nothing could possib-lie go wrong. Hmm, that's the first thing that's ever gone wrong!"
    • If you think of the system as infallible, all it takes is one crack to shatter confidence.
12. A system that every so often punches you in the face isn't great… unless the only alternative is a system that every so often stabs you in the chest.
13. The engineers best suited to this new era are TLMs.
    • That is, people who were excellent engineers themselves…and also excellent managers.
14. The hardest part about building now is resisting the urge to build more.
    • Every incremental feature is just a click away.
15. I wonder if the whole focus on AGI is downstream of LLMs talking like humans.
    • Like, the idea of a planetary scale omniscient personality is easy to imagine, and also terrifying.
      • Chatbots make this kind of fundamentally incorrect "LLMs are basically computers as people" mental model natural.
    • Because it's easy to imagine, it dominates our horizon.
    • What's more interesting to me is: what happens when every human has auto-compounding super powers?
      • That would be different from AGI.
      • It's not clear if it's worse or better!
16. When you work with Claude Code, you chat with it.
    • But the chat is a directive.
    • A UI to poke at the process that is generating the actual thing you care about.
17. In Claude Code, the chat and its context are a temporary means to an end to produce the durable result of data.
    • That data may be content, or code.
    • Chatbots treat the context like the main thing.
    • They're a party trick!
18. There's a difference between builders and coders.
    • Builders see coding as a means to an end to build things.
    • Coders see coding as an end unto itself.
    • Before, they looked the same.
    • But now LLMs reveal the difference.
    • Builders love LLMs, and coders hate them.
19. To successfully control a complex system you must have more internal complexity in your control model.
    • This is Ashby's Law of Requisite Variety.
    • The LLMs are complex, so driving them effectively requires you to be more complex.
    • It forces you to develop your complexity, to move up the ladder of adult development.
    • Meta-cognition is necessary to effectively drive agent swarms.
20. Markdown is not great for anything.
    • But it's good enough for everything.
21. This week in the Wild West roundup:
22. Even Every doesn't like ChatGPT's memory feature.
23. LLMs are insanely good at detecting patterns in the signal that are almost invisible.
    • But so are humans!
    • That's what experts who have developed deep intuition in a domain do.
      • It's even a similar training process and algorithm.
    • Humans get the richness of experience and skin-in-the-game seriousness.
    • LLMs can do that training at computer speed and scale, which is many orders of magnitude beyond humans.
24. Intelligence augmentation is so powerful that everyone will be compelled to use it.
    • Even if you don't want to.
    • If you don't, you'll get outcompeted by people who do.
    • Luckily it's not performance enhancing drugs.
    • But it does still have the Red Queen dynamic.
25. I love the vibe of Chatty Community Gardens.
26. Creativity requires friction.
    • That friction can come in the form of adversarial clashing.
    • Having agents that clash, similar to Generative Adversarial Networks, helps make them work.
    • If you don't have an opponent there's less of a reason to improve.
27. The free energy principle applies to agents.
    • The agents can restructure their environment to reduce future uncertainty.
28. A system needs a maintainer to pay down debt.
    • That is, to counteract entropy.
    • That requires patience.
    • LLMs have infinite patience!
29. Data has been dead.
    • It's like a fossilized footprint of your life.
    • What if the data could come alive and can do things for you?
    • Your data should be able to blossom in the right context.
30. Your data is the root of your agency in computer systems.
31. What will be the human interface layer for agents?
32. Systems that assume LLMs will be nearly perfect won't work.
    • LLMs will never be perfect.
    • Resilient systems assume that they can be confused.
33. I don't want more apps!
    • I want to never have to think about apps again!
34. Data has potential energy.
    • It used to be hard to unlock that energy.
    • Abundant cognitive labor blows it wide open.
35. LLMs are additive for consumers and possibly subtractive for employees.
    • LLMs are abundant cognitive labor.
    • Consumers never had the resources to dispatch cognitive labor on their behalf, so extra labor is fully additive.
    • Companies, however, have long paid employees to be cognitive labor.
    • Now LLMs can displace some of that labor, which is subtractive by default.
      • Especially with the default "do what we do now but more cheaply" mindset of enterprises.
      • Compare to a "what new things can we do now?" mindset.
36. When your data can come alive for you, it feels like an extension of you.
    • A prosthetic.
    • A new kind of limb.
37. Instead of doing what was possible before, but more cheaply, do what was never possible before.
38. A piece from a few months ago by Tim O'Reilly: Jensen Huang Gets It Wrong, Claude Gets It Right.
39. LLMs can be made to be default-converging.
    • For any input, if scoped small enough, they will do what a reasonable person would do with that information.
    • So if you make the structure clear enough, they can auto-converge.
    • If you have just the right amount of meta-structure then LLMs can be default-convergent, instead of default-divergent.
    • Once you have that, you can pump infinite cognitive labor into it.
40. Code is no longer precious.
    • It's now, for the first time, disposable.
41. Excellent analysis by Orion Reed on Digital Topology and Economic Power.
    • It complicated my mental model by reminding me that before the cloud, there were opaque proprietary file formats, giving apps more power over their users.
    • Apps always had a leg up over users; it wasn't a new thing with the cloud.
42. Confused deputy attacks are about tricking someone who is powerful but kind of dumb.
    • The expected danger of them is the multiplication of "powerful" and "naive."
    • LLMs with access to sensitive data are the most confusable deputies ever!
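The pattern fits in a few lines. Below is a minimal, hypothetical sketch (the `Deputy` class, method names, and paths are all illustrative, not from any real system): a service holds broad file-writing authority while its caller is only entitled to one directory. An LLM agent with powerful tools acting on injected instructions has the same shape.

```python
import os
import tempfile

class Deputy:
    """A privileged service: it can write anywhere; its caller cannot."""

    def __init__(self, caller_allowed_dir: str):
        # The scope the caller is actually entitled to.
        self.caller_allowed_dir = os.path.realpath(caller_allowed_dir)

    def write_confused(self, path: str, data: str) -> None:
        # Vulnerable: the deputy exercises its OWN authority on a
        # caller-supplied path, so the caller can reach anything it can.
        with open(path, "w") as f:
            f.write(data)

    def write_guarded(self, path: str, data: str) -> None:
        # Fix: check the request against the CALLER's authority, not the deputy's.
        real = os.path.realpath(path)
        if not real.startswith(self.caller_allowed_dir + os.sep):
            raise PermissionError(f"caller may not write to {path}")
        with open(real, "w") as f:
            f.write(data)

# Demo: the confused call escapes the caller's sandbox; the guarded one refuses.
allowed = tempfile.mkdtemp()
secret = tempfile.mkdtemp()
deputy = Deputy(allowed)

deputy.write_guarded(os.path.join(allowed, "ok.txt"), "fine")
deputy.write_confused(os.path.join(secret, "pwned.txt"), "oops")  # succeeds: the bug
blocked = False
try:
    deputy.write_guarded(os.path.join(secret, "pwned2.txt"), "oops")
except PermissionError:
    blocked = True
```

The guarded version is exactly what's missing when an LLM deputy obeys whatever instructions appear in its input.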
43. No individual user needs a million features in their software.
    • But software has to be made for a market, and the union of needs across everyone in that market could be a million.
    • Software creators have to cram everything into one place, which is fundamentally limiting.
    • So you end up with an equilibrium of software that is maximally used, minimally liked.
44. UXR is downstream of designing software for markets.
    • If you design software for 10 people, you can talk to each one individually; you don't need UXR!
45. The normal business models of software will be upended.
    • The way it used to work:
    • A company creates software, at great expense.
    • They have to do it for a market, and come up with something lowest common denominator that everyone in that market would want.
    • Then when users use it, their data accumulates on the service provider's turf, almost by happenstance.
    • But now that the data is on the service provider's turf, they can do whatever they want with it.
      • They can rent it back to the user.
      • They can use it to create aggregate insights they sell to others.
      • They can hold it hostage.
      • Or in some cases they can just flat out sell it to others.
    • That deal only made sense when software was precious.
    • If all software is commodity, why would users ever put up with that deal?
46. The app developers will hold your data hostage when you can add value for yourself more than they can.
    • They used to be able to add more value than you could, so the deal worked.
    • Now that's reversing.
47. At some point we'll see the mask-off moment for the aggregators.
    • The aggregators act all cuddly and friendly.
    • But that's only because they know we're stuck and can't leave.
      • It's a mask.
    • Under the covers, their business is powered by taking your data hostage.
    • When we are no longer OK with that, when we reclaim what is ours, the knives will come out and they will no longer look cuddly.
      • If they're just a dumb, faceless data lake, they won't make as much money off of you.
      • Becoming a dumb data lake will be an existential loss for them.
    • It will be a mask-off moment.
    • Last year Slack changed their API policy to make it orders of magnitude harder to extract your own data.
    • That was an opening salvo in what is rumbling along as cold war, but will become a hot one as the power of unleashing LLMs on your data heats up.
48. Reclaim your data!
    • The origin acts like the data you've saved in their tool is theirs.
      • The extra insights or data they added: those are arguably theirs.
      • But the precise content you input: that is fundamentally, inarguably yours.
    • No origins should be able to prevent you from downloading the data you uploaded to them.
    • If an origin tries to prevent you from reclaiming your data, you are within your moral right to take it back by any means necessary.
      • An act of civil disobedience that you should be proud of, not embarrassed about.
    • It's your data, don't let anyone convince you otherwise.
49. For consumers, paying for your compute happens automatically for on-device but not by default in the cloud.
    • It does for enterprise, but not consumer.
    • If you aren't paying for your compute, it's not working for you… it's working for somebody else.
50. I've seen a lot of approaches that mitigate the danger of malicious skills.
    • Or that mitigate the danger of naive LLMs confusing themselves.
    • But still nothing in the market that credibly mitigates prompt injection.
51. For a Schelling point to stay coherent in an ecosystem, everyone has to agree it's reasonable relative to the value of the ecosystem.
    • At the beginning, everyone thinking it's reasonable is critical.
    • After the ecosystem has significant momentum around that Schelling point, the bar for "forking" the ecosystem gets higher, and the bar for "unreasonable characteristics of the owner of the Schelling point" also gets higher.
52. When you're in a punctuated equilibrium it feels like a singularity, but it will level off.
    • We're either in an ongoing punctuated equilibrium, or the singularity.
    • Then again, humans have basically been a singularity since we invented language and transcended the pace layer of biological evolution.
    • We've transcended many other layers since then, this is just the most recent one.
    • Maybe each new pace layer transcending via a new technology is a punctuated equilibrium for the human race, extending the meta-singularity of humans.
53. Inversions of fundamental constraints are hard to reason about.
    • Everything changes, in surprising ways.
    • How far can you throw a ball from your current position?
      • It's almost entirely determined by the force and angle.
      • Gravity is the constant that constrains the system.
    • But what if gravity reversed direction?
    • Now almost no matter how you throw it, the ball will go infinitely far.
    • One inversion, with infinite consequences.
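The ball example can be made concrete with the standard flat-ground range formula, r = v² sin(2θ) / g: distance is set entirely by speed and angle, while g sits in the denominator as the fixed constraint (the numbers below are illustrative).

```python
import math

def throw_range(speed: float, angle_deg: float, g: float = 9.81) -> float:
    """Flat-ground projectile range: v^2 * sin(2*theta) / g."""
    theta = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * theta) / g

# With gravity as a positive constant, range is finite and peaks at 45 degrees.
r45 = throw_range(20.0, 45.0)  # about 40.8 m
r30 = throw_range(20.0, 30.0)  # shallower angle, shorter throw
# Invert the constraint (g <= 0) and the formula stops applying at all:
# an upward throw never comes back down, so no finite range exists.
```

That last comment is the whole point of the note: the inversion doesn't change the answer, it invalidates the model that produced answers.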
54. When you're absorbed by the puzzle, it's easy to lose sight of the goal.
    • The end fades away and all that exists in your mind is the means.
55. The struggle of becoming is the process of learning and growth.
    • Don't do it for the person you're helping grow.
    • Put them in the conditions where they can do it themselves.
56. Learning moves a conscious process to an unconscious one.
    • Conscious effort requires focus.
      • It's expensive, serial, intentional.
    • The subconscious is effortless, the most natural thing in the world.
    • That process of abduction requires repeated experience to capture.
57. The struggle is the learning.
    • With a low-friction learning experience on things you don't care to push yourself on, you don't learn at all!
58. One reason I'm able to be so productive is the scaffolding I built around myself.
    • I've constructed an external project manager to tame my attention loop.
    • I use my internal neurotic conscientious loop as the raw material to power it.
    • But that system is not me, it's outside of me.
    • It's just a tool that I have decided to never put down because I find it so useful.
59. What is a project?
    • It's a goal with a frontier of 0 to n obvious next-actions that each help you get incrementally closer to achieving the goal.
      • The moves are default-converging.
    • The set of obvious next-actions are your "adjacent possible."
    • Sometimes the next-actions are not obvious and must be discovered, but once they're discovered they can be compared so you can pick the right next action.
60. Do you benefit from the chaos or get destroyed by it?
    • That's the defining question of the modern era.
61. In cacophony Gilded Turds have an advantage.
    • No one has time to distinguish between the two.
    • Modern society is awash in Gilded Turds.
      • Everywhere you look, in every domain.
62. Competition to increase consumption leads to ever-more-addicting products.
63. People who grew up with TikTok might not know what it feels like to reflect.
    • They might not have a cognitive immune system to protect against informational junk food.
    • They never had a situation where there wasn't a constant stream of hyper-interesting information.
64. Why doesn't Apple seem to optimize for short term engagement as much as other big tech companies?
    • Other companies have cloud services that make money based on usage.
      • The more usage, the more ads are seen, the more money they make.
    • That means other companies are incentivized to optimize engagement.
      • This leads to addictive patterns and chasing metrics.
    • Apple, in contrast, only really wants you to buy another Apple device when you're in the market for one.
      • The purchase is lumpy and infrequent.
      • What matters most is trust.
      • Does the user trust that purchasing another Apple device will be worth it?
    • Apple doesn't care how much you use your iPhone… as long as you use it enough to value it and thus want to buy another one when you're in the market for one.
65. I like this post's frame of today's default software architecture as putting a "billionaire in the middle."
66. In a world of abundant cognitive labor, serendipity gets more important.
    • It's easier to plant seeds, tend to them, and judge them.
    • That implies the balance is moving away from exploit and towards explore.
67. Power is where meaningful state accretes.
    • True in business strategy and the economy.
    • "Meaningful" here means something that others value.
    • If others don't value it, then they won't yield to the owner's advantage.
68. The center of the hurricane gets the value of the hurricane without being caught up in it.
    • Some people can intuitively swim to it and then get stronger and stronger the longer they're there.
    • They look to others like they cause the hurricane, so in a very real way they do.
69. Everything regresses to the mean.
    • It happens even faster when a system is optimized.
    • Optimization on a few measured dimensions leads to entropy accumulating on all of the unmeasured dimensions.
70. Cooperation is the path to transcendence.
    • The Goddess of Everything Else is about this phenomenon.
    • Effective cooperation overpowers and outcompetes everything else.
    • It creates positive sum out of what was previously zero sum.
      • A form of social alchemy.
    • How does it do it?
    • It moves in a dimension that is invisible within the zero-sum game.
    • Transcendence.
71. The message of Little House on the Prairie is the importance of community.
    • In challenging environments we need community to survive physically.
      • So we get emotional support automatically.
    • When you are interdependent over a long time horizon, you must operate as a community.
      • And vice versa.
    • In modern society we are interdependent, but in a way that is totally fluid: each person in your life is just a cog in the machine around you whom you might never see again.
    • Modern society makes it so we don't need a community to survive physically.
    • But we still need it emotionally.
    • It's no wonder we feel so empty…
72. Unconditional love is one of the most important feelings as a human.
    • You feel it for children but not for your spouse.
    • The bonds of being a parent are unbreakable.
73. We're entering the Bright Ages.
    • Contrast it with the Dark Ages.
    • Now, instead of not enough information, we have far too much.
      • Dazzling.
      • Cacophonous.
      • Overwhelming.
74. The term "Chatham House Rules" has become more common in the last few years.
    • As the social media landscape becomes overwhelming and cacophonous, we retreat to cozy communities.
    • Cozy communities have high trust and expect discretion.
    • Higher quality discourse can happen in cozy communities, but it also means that insights are not necessarily shared outside the group.
    • The insights are meaningful state; they accrete in the communities that people are invited to.
    • The people invited in these communities get compounding benefit; the people not yet in them fall behind.
75. It's hard to get people to do something they're proud of if it's not also enjoyable in the moment.
    • The best is a resonant experience.
    • It's enjoyable in the moment.
    • Afterwards you feel actively proud.
76. Is a system aligned with its goals?
    • One way to detect it is the "candid aim" test.
    • Is the stated aim actually explanatory of how the system works?
    • If not, then the system has confused its means for an end.
77. Revealed preferences aren't enough for agency.
    • Even a tumbleweed has revealed preferences.
    • To have agency there has to be a second-order goal.
78. Agents will optimize for the thing they get evaluated on.
    • For any collective (of more than one agent), that must be different from the goal of the collective.
      • In small, high-trust teams, the agent will be evaluated on the collective's output.
      • In large, low-trust teams, the agent will be evaluated on something disjoint from the collective's goal.
    • Goodhart's law arises from this misalignment.
    • Agents want to maximize their own value (capping downside of getting fired, while maximizing upside of reward).
79. Incentives almost all pull in the short term direction.
80. Who is liable if an agent makes a mistake?
    • Imagine that a doctor uses a system to research and recommend a prescription.
    • The system's agent accidentally issues the prescription without confirmation.
    • Who is liable?
    • If the doctor is liable for using the system, then doctors would only want to use systems that cap downside and provably get final confirmation from the doctor before doing any possibly-dangerous action.
81. There's a natural cycle of innovation.
    • At first you pioneer and explore new territory.
    • Then you find valuable resources and settle down, expanding your territory.
    • Then you optimize and reform your society to make it better.
    • But reform doesn't innovate, it only maintains what you have.
    • Reform can become an end unto itself, an infinite loop that traps you.
    • You need to transcend again by exploring.
82. Some people have a meta-process to get good in a new domain.
    • This process is often meticulous and careful, which means it gets off to a slow start.
    • But if done right it can compound on itself.
    • A ramp-up process will be like a Grubby Truffle.
    • Starts off slow, but then becomes unstoppable.
83. LLMs will find a way, even if it's laughably wrong.
    • Someone this week told me about an agentic workflow they had that took an input image and some text and made it sparkly for social media, using Nano Banana Pro.
    • The workflow started producing images that looked like a design-illiterate programmer had made them.
      • Times New Roman font, little hand-drawn stars.
    • It turns out that Nano Banana Pro was down, so the LLM, ever eager to please, decided to use Python drawing libraries to produce the output.
84. Patient emergent systems will explore every nook and cranny of a space.
    • The slime mold will fill every bit of the container it's put in.
    • So give it the right jello mold to give it the right shape!
85. Anything that is sufficiently patient and sufficiently broad will find a way through.
    • It will explore all of the nooks and crannies, and find cracks in the wall to pour through.
      • As it flows, it will erode the crack into a canyon.
    • Life, and LLM swarms, will find a way.
86. LLMs are persistent, so they can power through any non-resilient thing.
    • They just eat away at it.
    • If it's not regenerating itself, then it just gets ground down.
87. Walmart has a clever / corrosive effect on brands.
    • Walmart can marshal insane amounts of demand.
    • If a brand is about to invest in brand-building marketing spend, Walmart forces them to invest in lowering the price instead.
      • If the brand doesn't, Walmart won't stock them on the shelves.
    • This benefits consumers and Walmart, but it harms the supplier.
      • Any time a three-way ecosystem is this out of balance, it is in the long-term bad for all participants.
    • The supplier becomes increasingly dependent on Walmart, and Walmart has more leverage over them.
    • Premium brands might be hollowed out by consumers seeing them as cheap, transactional.
    • It's a monkey trap: short-term benefit to the brand at long term cost.
88. Excellence and efficiency are distinct.
    • They sometimes overlap at the beginning, but as you get further along they diverge more and more.
89A lot of companies and individuals default to coherence strategies not because they're useful but because they don't want to look dumb.
  • A lot of companies and individuals default to coherence strategies not because they're useful but because they don't want to look dumb.
90ATProto's Lexicons are an architecture of participation.
  • ATProto's Lexicons are an architecture of participation.
    • The ones that get used the most bubble up and get used more, naturally.
    • Convergent emergence.
91Clarity of desire is necessary to spur action.
  • Clarity of desire is necessary to spur action.
    • A generic desire is not specific enough to overcome static friction.
92There's apparently a Spanish idiom about "monkeys with razor blades."
  • There's apparently a Spanish idiom about "monkeys with razor blades."
    • People with real leverage who don't know (or don't care) about the plan can do significant damage.
93Pre-PMF mode: find the crux of the problem and then pound the shit out of it.
  • Pre-PMF mode: find the crux of the problem and then pound the shit out of it.
94When you don't have a coherent nucleus yet, divergent mode is actively dangerous.
  • When you don't have a coherent nucleus yet, divergent mode is actively dangerous.
95If, as a manager, you don't like the taste of a chef on your team, you're going to have a bad time.
  • If, as a manager, you don't like the taste of a chef on your team, you're going to have a bad time.
    • They'll keep on creating things that you don't like, and be hard to steer.
96It's easier to execute a playbook than create one.
  • It's easier to execute a playbook than create one.
    • Just because you can execute a playbook doesn't mean you could have created it.
    • To create one requires meta thinking, two-ply thinking.
97When you're running downhill it feels amazing.
  • When you're running downhill it feels amazing.
    • But also you have a terrifying moment of "can my feet keep up?"
98I was pretty good at taking tests.
  • I was pretty good at taking tests.
    • My strategy was to sprint through the exam and rough-in answers as quickly as possible.
    • While I went, I flagged answers that I wasn't as confident in.
    • Then, I'd go back through and look at each flagged answer in more depth until I was confident in it.
    • This breadth-first approach allowed me to get to good enough as soon as possible and then tighten the result until the buzzer went off.
    • I wrote papers the same way.
      • First, I'd just barf out, free-association-style, 2x the number of pages required.
      • That means that if the buzzer went off before I improved it, I at least had a good-enough (though messy) result that technically cleared the bar.
      • Then, I'd iteratively go through the essay, tightening it:
        • Moving similar arguments next to each other.
        • Reducing duplication.
        • Moving arguments so they built on each other in a coherent order.
        • Cutting uninteresting observations.
        • Adding signposting and transitions.
        • Tweaking wording to be less confusing.
      • As I chiseled away the marble, I'd discover my thesis.
    • This strategy naturally focuses time on the things that can yield incremental benefit, and lets you juggle multiple things naturally.
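The flag-and-revisit strategy above is essentially an anytime algorithm: establish a good-enough answer for everything first, then spend the remaining budget refining the weakest parts. A minimal sketch of that loop, with all function names hypothetical stand-ins:

```python
import time

def rough_answer(question):
    # Hypothetical: produce a fast, good-enough draft plus a confidence score.
    return f"draft answer to {question!r}", 0.5

def refine(answer):
    # Hypothetical: spend extra effort tightening one answer.
    return answer + " (refined)"

def take_test(questions, deadline, confident=0.9):
    # Pass 1 (breadth-first sprint): rough in every answer,
    # flagging the ones that aren't confident yet.
    answers, flagged = {}, []
    for q in questions:
        answers[q], conf = rough_answer(q)
        if conf < confident:
            flagged.append(q)
    # Pass 2: revisit flagged answers in depth until the buzzer goes off.
    # Every question already has a draft, so stopping early is always safe.
    while flagged and time.time() < deadline:
        q = flagged.pop(0)
        answers[q] = refine(answers[q])
    return answers
```

The key property is that the value of the output degrades gracefully: interrupting at any point still leaves a complete (if messy) result that clears the bar.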
99If you're even vaguely ambivalent about having kids or founding a startup, don't do it!
  • If you're even vaguely ambivalent about having kids or founding a startup, don't do it!
    • You need to want to do it even if you had to crawl through broken glass.
    • Otherwise you'll find yourself questioning or even resenting your decision when you run into hardships.
    • But if you want to do it no matter what, then it can be the most meaningful experience of your life.
    • Precisely because you are so committed, you must learn and grow from the hardships that are inevitably a part of it.
100Never meet your heroes.
  • Never meet your heroes.
    • Your abstract ideal cannot survive the collision with the actual messy reality.
    • Keep your heroes on a pedestal.
    • Believing that someone can be infinitely good helps you strive to be better yourself.
101Polarities have no fixed answer.
  • Polarities have no fixed answer.
    • The only answer is a dynamic one.
    • That confounds people!
102If history unfolded automatically and linearly, we wouldn't have to feel any responsibility for shaping it.
  • If history unfolded automatically and linearly, we wouldn't have to feel any responsibility for shaping it.
    • But it doesn't unfold that way!
    • Your actions matter.
    • Technology is not inherently good.
    • It is amoral.
    • You can't just build it and assume it will be good for society.
    • You have to help it unfold in a positive way for society.
103Society is not built.
  • Society is not built.
    • It emerges.
    • That means that philosophy is critically important, to give a prosocial asymmetry to the underlying decisions.
    • Makes it structurally more likely that what emerges is good.
104Adam Ferguson: Society is "indeed the result of human action, but not the execution of any human design."
  • Adam Ferguson: Society is "indeed the result of human action, but not the execution of any human design."
105You should care about something larger than yourself.
  • You should care about something larger than yourself.
106How you act is more important than what you know.
  • How you act is more important than what you know.