Bits and Bobs 6/9/25
1. OpenAI going for vertical integration is like cell phone carriers trying to control what software you can install on devices connected to their network.
- I can see why the provider is going for it, but I can't see why it's good for users.
- The power of the memory will make it harder and harder for other experiences to compete.
- It will accelerate the aggregation around ChatGPT.
- Good for OpenAI, bad for everyone else.
2. ChatGPT's goal is one app to rule them all.
- What could possibly go wrong?
3. I think this old set of tweets from Luke Davis is onto something.
- "Autonomic Systems will be key to the next phase of AI. This is not about parameter count and activation functions or transformers vs RNNs, it's the vast machinery of ordinary code wrapping the models, taking their output and bending it back as input for the next go round."
- "Everyone was watching the magician's right hand, expecting recursive self-improvement to involve AI updating its own weights, but the real trick was hidden in the left hand—using the LLM to write code for the Autonomic System that envelops the model itself. "
- "The Autonomic System surrounding an ML model is like the structure of culture and society around a human mind. Laws and markets and your mother's expectations are not themselves part of your neural net, but without them shaping your inputs, your outputs would be very different."
- "Human history began when WE got Super-Autonomic Systems. After we evolved intelligence, the IQ of a normal human was about the same for many millennia. The big jumps in civilization were because of structure and peripherals: fire, writing, machines, the rule of law, code..."
4. Generic AI recommendations without your context will inherently be mid.
- That's one of the reasons for the Hayes Valley demo fatigue.
5. The "booking the flight" part of automation has the highest downside and lowest upside of the whole flow.
6. I loved the recent Cosmos newsletter on rebooting the attention machine.
- If you're going to have a memex or an exocortex, it's imperative it's aligned with your intention: your agency and your aspirations.
7. I love Ben Follington's new Salience Landscaping piece.
- "Thinking doesn't matter without a notion of salience."
- Implies to me that in the era of intelligence too cheap to meter, what will matter is the curated context.
8. The correct ontology is context dependent.
- An ontology is like a map, a useful distillation and structure to help you achieve a task.
- The task you're trying to achieve sets the context for what dimensions are most useful and which are extraneous.
9. LLMs allow us to not force ontologies up front in a task.
- It's much easier to start a task if you can just dump unstructured information and then structure it continuously over time.
- But in mechanistic systems, to get useful insights you needed structure: fitting your information into a given ontology.
- In mechanistic systems, that had to happen up front, putting a damper on the very first step.
- But LLMs can do qualitative insights at quantitative scale.
- That means that more interfaces can allow a flexible data entry with clean up later.
- Especially if the LLM can help with post-hoc structuring.
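The "dump now, structure later" flow above can be sketched in a few lines of Python; everything here is hypothetical, with a trivial rule-based stand-in where an LLM extractor would go:

```python
def naive_extractor(note: str) -> dict:
    """Stand-in for an LLM call: pull hashtags out of free text."""
    tags = [w.lstrip("#") for w in note.split() if w.startswith("#")]
    return {"text": note, "tags": tags}

class Inbox:
    def __init__(self, extractor):
        self.raw = []          # zero-friction capture, no ontology required
        self.structured = []   # filled in later, continuously
        self.extractor = extractor

    def capture(self, note: str):
        self.raw.append(note)  # starting the task costs nothing

    def structure_pending(self):
        # Structure happens after the fact, not as a precondition of entry.
        while self.raw:
            self.structured.append(self.extractor(self.raw.pop(0)))

inbox = Inbox(naive_extractor)
inbox.capture("lunch with Daniel #family")
inbox.capture("ship the demo #work")
inbox.structure_pending()
print(inbox.structured[0]["tags"])  # ['family']
```

The key property is that `capture` never blocks on structure; the extractor can be swapped for a real model without changing the flow.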
10. If it takes too long to represent the real world in your system then it will decohere from reality.
- If it gets past a certain point, it doesn't reflect reality and you go bankrupt.
- It decoheres, untethered.
11. Folksonomies are a self-paving cow path.
12. What if you could harness the energy of tinkerers?
- Their tinkering to solve their own problems also implicitly helps others, too.
13. Will LLMs help with interoperability?
- In mechanistic systems, protocols must be extremely precisely defined, with formal rules.
- This formalization has exponential cost with scope.
- The larger the formalization process, the harder it is to coordinate to consensus, super-linearly.
- This double super-linear blowup makes standardization and coordination extremely hard.
- But LLMs can take whatever sloppy data you give them.
- That allows way easier interoperability than before, because you can keep the protocol more informal.
14. Why is it that I can search for details of Julius Caesar's life more easily than details of my own?
- Systems of record have been too close-ended, and too much effort to maintain.
- Information aggregation is most viable centralized with a clear economic value.
- Data brokers are very interested in forming a searchable information set on you.
- Perhaps it's just banal enough that it's not worth it for a user to pay for it.
- Perhaps LLMs will make it easy enough for users to do and take back our power.
15. Making agents as a way to bound data is a useful pattern.
- Claude Code does this a lot, often spinning up little sub-agents that are isolated from the main flow.
- The agent is a secure little compute environment that bounds what data it can see.
- That makes it so the reasoning doesn't muddy up the main context window, and vice versa.
- In complex adaptive systems, boundaries always emerge to handle the compounding cacophony.
- This reverse engineering of Claude Code by Simon Willison is fascinating.
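A minimal sketch of the bounding pattern (the API is invented for illustration, not Claude Code's actual internals): the sub-agent only ever receives an explicitly allowed slice of the data, and only its summary returns to the main flow:

```python
def run_subagent(task, data, allowed_keys):
    """Run a task over a view of `data` restricted to `allowed_keys`."""
    view = {k: v for k, v in data.items() if k in allowed_keys}
    return task(view)  # the sub-agent cannot reach anything outside `view`

data = {
    "emails": ["..."],
    "calendar": ["standup 9am"],
    "ssh_keys": ["SECRET"],  # never exposed unless explicitly allowed
}

summary = run_subagent(
    task=lambda view: f"saw keys: {sorted(view)}",
    data=data,
    allowed_keys={"calendar"},
)
print(summary)  # saw keys: ['calendar']
```

The boundary does double duty: it's a security scope and it keeps the sub-agent's reasoning out of the main context window.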
16. Caching intermediate structure will help context engines have more useful insights to build upon.
- Imagine the system having to figure out who my husband is each time.
- It could search through my emails and calendar invites and try to figure it out.
- That would take a non-trivial amount of time, and might not work.
- But if it ever figures it out (or if I ever tell it), it can simply make a note that my husband is Daniel.
- From that point on, it can take that knowledge for granted.
- That pre-existing knowledge becomes like a platform, allowing jumping off to farther and farther afield insights.
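One way to sketch this caching idea in Python (names and the resolver are hypothetical): the first lookup does the slow search; every later lookup reuses the cached fact as a platform:

```python
class FactStore:
    def __init__(self, resolver):
        self.facts = {}
        self.resolver = resolver
        self.resolver_calls = 0

    def get(self, question):
        if question not in self.facts:      # cache miss: do the slow work once
            self.resolver_calls += 1
            self.facts[question] = self.resolver(question)
        return self.facts[question]         # cache hit: taken for granted

def slow_resolver(question):
    # Stand-in for searching emails and calendar invites with an LLM.
    return "Daniel" if question == "who is my husband?" else None

store = FactStore(slow_resolver)
store.get("who is my husband?")
store.get("who is my husband?")
print(store.resolver_calls)  # 1 — figured out once, reused forever after
```

Telling the system a fact directly is just pre-populating `facts` without paying for the resolver at all.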
17. When do people invest the time to input data into an information system?
- If you have confidence that information you put into the system will be useful to you in the future, then you'll do it.
- Even better if you have confidence that it will become more useful the more you put in, and the more powerful the system becomes.
- The lower the overhead and annoyance of putting information in, and the more likely something useful comes out later, the more you'll be willing to invest time in putting things into the system.
18. OpenAI's implementation of MCP in ChatGPT is limited.
- They only allow a subset of allow-listed MCP instances for certain use cases.
- This will quickly evolve into a kind of app-store distribution system.
- A closed system.
- But this is also inevitable given the security and privacy implications of MCP.
- MCP is extraordinarily dangerous: not only potentially malicious integrations, but also prompt injection within the data in those integrations.
- This ChatGPT limitation doesn't actually do anything to mitigate prompt injection.
- When you run a local client with MCP integrations, it's clearly your fault if an MCP integration bites you.
- But if you're a less-savvy consumer using a feature of a popular chat app and MCP bites you, you're more likely to blame the chat app creator.
- MCP is not the right way to solve the integration problem for AI.
19. Most consent dialogs are CYA for the company.
- They show them not because they think users will want them, or even because they think some users might answer differently.
- Even if 99.99% of users will answer 'Yes', they still show them.
- Because they want to make sure that if something bad happens they can't be sued.
- By asking permission they've moved the liability onto the user.
20. MCP is the AI era's OLE.
- We've seen this movie before: new integration tech, huge promise, completely bonkers security assumptions.
- We already know how this movie ends.
- If you want to dig in more, I fed my recent Bits and Bobs into ChatGPT's Deep Research and it gave a more in depth report diving into the parallels.
21. It's fascinating to me that when technologists see technically non-savvy users using AI recklessly, they blame the user.
- For example, here a non-technical person is livestreaming his vibe-coding of a service, but leaving open many significant security issues.
- The comments are mostly negative.
- In this Hacker News thread about how Claude Code will route around restrictions the user set on `rm`, most of the response is, "yeah but of course it can, the user should not be surprised."
- People reacted to the GitHub prompt injection attack by saying "well the user shouldn't have granted such a broadly scoped key."
- MCP and LLMs make it so more and more people can put themselves in real danger and not realize it.
- The answer is not to blame the users.
- That's like blaming people who use Q-tips to clean their ears.
- The protections around LLMs cannot contain their power. How would you contain them?
- The model of "if the user clicked a permission prompt it's on them for getting pwned" is insufficient in a world of LLMs.
- They're simply too powerful to be contained by our previous half-assed containment mechanisms.
22. Everyone's talking about how ChatGPT now must retain all logs due to a judge's ruling.
- Imagine if OpenAI had been in a position where they simply didn't have the logs.
- But aggregators must have logs; it's their business imperative.
23. Users don't care about privacy?
- Try launching a new messaging app without E2EE today.
- Even though users don't even understand what that technical word means.
- WhatsApp and iMessage set the bar and now it's table stakes.
- Once there's an existence proof of a thing that has no downsides but is more private, users will demand it.
- E2EE is one of those things that doesn't add friction; it just makes things better. It's good privacy technology.
24. Good privacy technology makes it so you don't have to worry about privacy.
- So you can have lower friction, which gives more usage.
- People's naive view of privacy technology is "more annoying prompts that get in the way."
- That's bad privacy technology.
25. Interesting insights from Paul Kinlan ruminating on the power of embeddability.
- One of the web's superpowers.
26. Imagine: AI that's actually yours.
27. Getting a spam email every so often vs losing your data to a malicious party are radically different threats.
28. If you're going to rely on an LLM to protect you from misinformation then you have to trust it with your life.
29. It's not possible to grapple with an intelligence an order of magnitude more powerful than you.
- It might be more powerful than you in speed or capability… or both.
- It's not just being outgunned, but not even being aware of how outgunned you are.
- It might be able to control you in ways you might not be able to sense, let alone understand.
- True for scramblers in Blindsight, but also true with anything that might be an order of magnitude more capable than you.
- It could be playing you and you'd have no idea.
- A relationship with an asymmetrically powerful intelligence is all-encompassing, it's more of an environment.
30. Are you above or below the API?
- If you're below the API, you are abstracted away to the rest of the system, more grist for the mill.
- Whether this new world is good or bad for you largely reduces to whether you're above or below the API.
- If you are under the API, you are operating at its whim.
- Even if you have a system that is working for you, helping you optimize what information to pay attention to and what tasks to prioritize, you could easily become "below the API" to the outside world.
- By relying on the system, you have become totally captured by it.
31. Here's an icky but unsurprising example of an LLM being a toxic mix of sycophantic and gaslighting.
32. Highly repetitive information tasks for humans are more common in business contexts.
- They happen in contexts where it's a task you have to do, not one you want to do.
- We do highly repetitive tasks in physical hobbies (e.g. knitting) but rarely in intellectual hobbies.
- Mindless, addicting games are one possible exception.
33. Perhaps the metacrap fallacy isn't true in the age of LLMs.
- But fitting things into an ontology up front is a massive amount of work, and the benefit is only theoretical and indirect.
- So the direct cost beat the indirect benefit and made it so no one ever did it.
- But now LLMs can be used to auto-structure information after the fact.
- LLMs don't get bored, so they could do the structuring even if a human would die of boredom.
- It's totally possible that the reason the metacrap fallacy was true was not "it wouldn't be valuable if you didn't have structure in the information" but rather "it's too much of a pain in the butt to structure things."
34. The logic of folksonomies works just as well for LLMs and humans.
- Imagine tagging a person, being about to apply the tag #husband, and seeing in the UI that there are ten times more uses of #spouse.
- It makes sense to tag #spouse because that will overlap with other programs and usage.
- That logic of "is this a close enough match to be worth doing the more popular tag" can be done by a user or an LLM equivalently.
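That "close enough to be worth the more popular tag" check is simple enough to write down; the counts, synonym list, and 10x threshold below are all invented for illustration:

```python
def choose_tag(candidate, synonyms, usage_counts, ratio=10):
    """Prefer a synonymous tag if it's at least `ratio`x more widely used."""
    best = candidate
    for alt in synonyms.get(candidate, []):
        alt_uses = usage_counts.get(alt, 0)
        # Only switch when the alternative is dramatically more popular,
        # so overlap with other programs and usage outweighs precision.
        if alt_uses >= ratio * usage_counts.get(candidate, 1) and \
           alt_uses > usage_counts.get(best, 0):
            best = alt
    return best

usage_counts = {"husband": 120, "spouse": 1200}
synonyms = {"husband": ["spouse", "partner"]}
print(choose_tag("husband", synonyms, usage_counts))  # spouse
```

The same function works whether a human reads the counts in a UI or an LLM runs the check automatically at tagging time.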
35. What's the "search, don't sort" insight in the age of LLMs?
- One of Gmail's insights was "if search is fast and storage is cheap, search, don't sort."
- LLMs make sifting through massive information fast.
- What's the equivalent insight?
36. LLMs just give generic advice.
- The question is what do real humans in that situation think?
- If somehow you could tap into an emergent, real record of those decisions across an anonymous swarm of real users, you'd get very useful suggestions.
- If the LLM doesn't use your context for personalization, it feels like you're in its world.
37. Services not knowing you is challenging for users… and also the service!
- How do you create a safe way for services to know you without being dangerous or creepy?
38. Sometimes humans are polite enough to not point out embarrassing suppositions.
- Like "How come you have red hair but everyone else in your family doesn't?"
- LLMs should have at least that much tact, but sometimes they don't.
- If someone asks "What's the most embarrassing thing you know about me?", the LLM should at least first check "can other people see your screen?"
39. When you ask OpenAI what embarrassing things it knows about you, if it doesn't have anything it says "I don't know anything embarrassing about you, but you should feel free to tell me something!"
- Don't fall for it!
40. People building tools for themselves are a good target for new kinds of AI tools.
- Compared to people building a crappy app they think will be the next big thing.
- The former care about having their data first.
- Vibe coding to add features to a thing you're already using, vs. each output being a separate island.
41. This tweet backs up my hunch that vibe coding is mostly people making things for themselves or a small group of friends.
42. A subtle reframe that could be made secure: the agents execute in loops that are inside of mechanistic loops.
- The mechanistic loops are formal graphs of computation, which may have LLM calls inside them, but which are sandboxed and limited.
- There is an agent loop, but it makes a compute graph to execute; that graph calls tools and also sub-agents whose job is to construct another graph to execute.
- The agent doesn't execute, it makes a graph to execute.
- The core ranking function would be "if this were to run how likely would the user be to accept its suggestions in this moment?"
- That's a nice self-steering quality metric.
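A toy sketch of "the agent makes a graph, the mechanistic loop executes it" (tool names and plan format are hypothetical): the planner emits data, and a validator-executor refuses any step outside the allowlist:

```python
# The only capabilities the mechanistic loop will ever run.
ALLOWED_TOOLS = {
    "fetch_calendar": lambda: ["standup 9am"],
    "summarize": lambda items: f"{len(items)} event(s)",
}

def plan_from_agent():
    # Stand-in for an LLM planner: its output is a plan (data), not code
    # that executes directly. "$prev" threads one step's output to the next.
    return [("fetch_calendar", []), ("summarize", ["$prev"])]

def execute(plan):
    prev = None
    for tool, args in plan:
        if tool not in ALLOWED_TOOLS:          # sandbox: reject unknown steps
            raise PermissionError(tool)
        args = [prev if a == "$prev" else a for a in args]
        prev = ALLOWED_TOOLS[tool](*args)
    return prev

print(execute(plan_from_agent()))  # 1 event(s)
```

Because the agent never executes anything itself, a malicious or confused plan can at worst propose a step the validator throws out.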
43. Advertising is prevalent partly because it works.
- It binds together incentives and desires.
- Helping people know about things they need.
- But advertising's benefit is balanced in favor of the advertiser, not the user.
- What would a personalization system look like that was balanced to the benefit of the user?
- Suggestions that are perfectly aligned with users' incentives would be much better than advertising for end users.
- But also pretty good for advertisers, too, because it would be so aligned with users.
- As an advertiser, make your case to your user's agent about why your product is a good fit.
- Users' agents serve as their gatekeeper for their attention.
- Everyone wins.
44. Content creation can be "safe" at any scale.
- But software at scale is dangerous.
- That's because software is Turing complete; it can do things.
45. Apps are unchangeable.
- Only the owner of the origin is allowed to decide what they do.
- They put users into a consumer vs creator stance by default.
46. Walled gardens are pretty but close-ended.
- Good, but only to a limit.
47. If it's magic, it's OK if it's messy.
48. Imagine: the last system of record you ever need.
- Could start as Google Keep but coactive and Turing complete.
- From there it could grow to cover everything.
49. The best designed things are invisible.
50. If it has to work for you completely, it can't have a personality.
- Because no other can ever be fully aligned with you, there must be some separation.
- If it has its own perspective on the world, there's a principal agent problem.
- This gets especially pronounced if the agent is more powerful than you.
- "I can't do that, Dave."
51. To get a personal system of record, you need all three of the legs of the iron triangle.
- Untrusted code - Creates open-endedness.
- Sensitive data - Can work on real and specific things, not just generic things.
- Network access - Can interact with the rest of the world, not just an island or dead-end.
52. No one my age uses Quicken because it's too clunky, limited, and close-ended.
- No reason to start, because it's clearly not the future.
- But we'd get a lot of value if we could have an open ended tool that could do a bunch of those things.
53. Every big system starts with a small load-bearing use case that works for a subnetwork and grows from it.
54. The first webpage didn't matter.
- It mattered that it was a web page, not what was on it.
- The browser / platform is open-ended and blooms with possibility.
- As long as the first users of the first website like it, the open-endedness will carry the system through.
55. When trying to build a new game-changing thing, focus on the people who already want it to work.
- A trap: running at a use case and making it so great that even people who hate doing it will like it.
- A very hard bar to hit.
- Related to the tyranny of the marginal user, but before you even get to PMF.
56. For any given vertical there exists a startup that does it better.
- Imagine a horizontal system that can do what nobody else does: chain all of the experiences together.
- Each vertical app has to get data on its own, at high friction.
- That sets a very high floor, lots of stuff that's not viable.
- The value of the new system is that it's horizontal.
- Any vertical slice will not show it off.
- A given startup that does that individual use case can do better.
57. Everyone has a bit of ick about ChatGPT being the super-app all of our data is in.
- But there are no viable alternatives.
- For most people, LLMs are a "heck yes" but ChatGPT itself is not a "heck yes".
- It's a "this is the best way to use LLMs today."
- They'll jump to the better way of interacting with LLMs when they can.
58. The observation that GenAI is our generation's polyester scans for me.
59. A quote I highlighted a few years ago in Yuval Harari's Homo Deus that just came back up in my Readwise:
- "In the heyday of European imperialism, conquistadors and merchants bought entire islands and countries in exchange for coloured beads. In the twenty-first century our personal data is probably the most valuable resource most humans still have to offer, and we are giving it to the tech giants in exchange for email services and funny cat videos."
60. The long pole of runaway AI is an accurate simulation of the world.
- Without it, the feedback cycles are eons from the perspective of the AI being trained.
- Areas that can be simulated well will have computers get radically better quickly.
- We have that today for e.g. React components (you can render the code and see what it puts on screen), which is why the models have gotten radically better for that kind of coding.
- But areas that aren't possible to simulate, e.g. complex phenomena with interactions, will remain somewhat difficult.
- Humans can be good at them: just do an action and see how the distributed computer of the real world responds.
- But LLMs can't do that, and are limited to simulations.
- In some contexts, the simulations are good enough already (e.g. protein folding) but in other contexts, they're nowhere near good enough.
62. Why is Goodhart's law such a fundamental, unstoppable force?
- Goodhart's law arises because the metric must be a proxy.
- A map is not useful if it's 1:1 with the territory.
- Its leverage comes from how much of a useful subset it can be.
- Because it's a proxy, there are ways to "cheat" that improve the proxy but not the reality.
- If the members of the swarm are not fully committed to the good of the collective (if there's any principal-agent problem), they will optimize toward cheating, because it is a cheaper way to improve the proxy.
- You can keep the swarm from falling into Goodhart's law if all of the members feel an infinite connection to the collective over their individual incentives.
- This can happen if they view themselves as fundamentally a subcomponent of the swarm, and only secondarily as an individual.
- This can happen if the group all believes in the same infinity together.
- That can also happen for example if they are perfect clones of one another.
- But perfect clones won't have variance, so the system overall will be less resilient.
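A tiny, deliberately contrived simulation of the proxy problem (all numbers invented): when "cheating" improves the proxy more cheaply than honest work, a self-interested optimizer chooses it every time:

```python
# Each action is (proxy_gain, true_gain). A "cheat" moves the metric
# without moving reality; honest work moves both, but costs more.
honest_work = (1.0, 1.0)
cheat = (1.0, 0.0)

def optimize(actions, steps, cost={honest_work: 3.0, cheat: 1.0}):
    proxy = true = 0.0
    for _ in range(steps):
        # A self-interested agent maximizes proxy gain per unit cost.
        best = max(actions, key=lambda a: a[0] / cost[a])
        proxy += best[0]
        true += best[1]
    return proxy, true

print(optimize([honest_work, cheat], steps=10))  # (10.0, 0.0)
```

With the cheat available, the proxy hits 10 while true value stays at 0; remove it (or make agents value the collective outcome itself) and the two track together.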
63. The principal-agent problem largely goes away if everyone is a clone of each other.
- A worker bee.
- The coordination cost goes away because it's no longer a swarm.
- But is it possible for that non-swarm intelligence to be resilient and adaptive enough?
- If everyone is a clone then the system has no resilience, you have systemic collapse risk.
64. The thing that makes companies hard to run is not CEOs being smart enough, it's coordination cost.
- Coordination cost scales at a super-linear rate, but intelligence in an individual scales at linear rate.
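The super-linear claim has a classic back-of-envelope form: pairwise communication channels grow as n(n-1)/2, while headcount grows as n:

```python
def channels(n: int) -> int:
    """Number of pairwise communication channels among n people."""
    return n * (n - 1) // 2

# Intelligence scales with n; coordination overhead scales with n^2.
for n in (10, 100, 1000):
    print(n, channels(n))
# 10 45
# 100 4950
# 1000 499500
```

Going from 10 to 1000 people multiplies headcount by 100 but channels by more than 10,000, which is why structure and hierarchy, not a smarter CEO, are what companies buy to survive.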
65. API : Operating system :: Library : Framework
- The main difference in both is: is the third-party stuff on the inside or the outside?
- People use "platform" to talk about any of these meanings (less often library), but they have very different power dynamics.
- "Platform" means "a thing that the things on top could not exist without."
67. No members of a system need be evil for the system itself to produce evil.
- Every system follows the path of its incentives.
- If you work at an engagement-maxing company, you'll tweak to add a little more engagement, a little more addictive, a little more manipulation, over and over again.
- No individual step is terrible, just a teensy tweak.
- But after a decade you look up and realize how far the system has gone.
68. Being cynical is an easy way to look cool.
- Unfortunately if everyone does it it's a humanity-destroying cycle.
- How can we make caring cool again?
69. Many people look at E/Acc and Crypto and go "ewww I don't like that vision of the future."
- But they feel like that's the future that will happen because those groups have power and are highly motivated to achieve their vision.
- Everyone else just gets resigned.
- Just a "screw it I might as well optimize for my short term self interest."
- They disengage.
70. There are a lot of people who have diagnosed the centralization problem in modern tech but that energy is scattered.
- Scattered among hobby projects and homesteading; not a coherent movement.
- We all see the machine, but see that we can't win individually, so we just retreat to the woods.
- It's impossible to have one centralized vision for us because we don't want one centralized thing.
- But what if there's one open system we can all rally around?
71. The Black Mirror episode Smithereens hit me hard.
- I watched it on a long-haul flight.
- Maybe it was just the altitude making me more susceptible to hokey things.
- No matter what I'm watching, on a long-haul flight it's a lock that it will make me cry.
- Imagine if you made a machine that grew out of your control that changed all of society and made you billions of dollars… and then realized it was antisocial.
- What would you do?
- I didn't watch Black Mirror for the past few years because it was too hard to watch.
- Now I see that was the coward's way out. I need to watch and I need it to affect me.
- Everyone who is in a position of power in tech should feel compelled to watch Black Mirror.
- If you can't bear to watch it, maybe you shouldn't work in tech, or you need to align what you work on with your values.
72. A few bumper sticker slogans against centralization in AI:
- Compute your own destiny.
- I will not be fracked.
- Own your digital life.
- Gardens, not plantations.
- Jailbreak your digital soul.
73. Interesting thoughts from Neal Stephenson on AI.
- "Marshall McLuhan wrote that every augmentation is also an amputation"
- "Today, quite suddenly, billions of people have access to AI systems that provide augmentations, and inflict amputations, far more substantial than anything McLuhan could have imagined."
74. Aish had some fascinating reflections on intimacy and agency.
- You need both intimacy and agency for a system to be healthy.
- It's notable that the person I linked to last week, Derek Thompson, is a proponent of abundance but also notes the lack of intimacy.
- We're missing the meso scale of communities.
- Due to data being infinitely copyable, you get a barbell: a maximally open hellscape at one end and fractal cozy communities that cannot scale at the other.
- The meso scale of communities includes people who challenge you or you don't like.
- Think, your neighbors, or other people at your church.
- Mark Fisher has a notion of hauntology, a sense of unease and nostalgia, in the absence of a compelling vision of the future.
- The slow cancellation of the future.
75. A frame for feedback in design discussions:
- 1) "I like…" - Positive feedback
- 2) "I wish…" - Constructive feedback with a built-in vector of improvement
- 3) "I wonder…" - Open-ended jumping off points from this work
76. A superficial change without a fundamental change is just a gloss that misleads you.
77. Antifragile systems almost always have an internal swarm.
- The collective is antifragile because the swarm of individuals can't die as long as any of them are alive.
78. A slime mold is extremely hard to kill.
- As long as one cell survives the mold survives.
- This property also means they're very hard to control; there is no single leverage point.
- A single leverage point makes a system easier to control… and also easier to kill.
79. Most fractally complex things emerge from a very small set of equations.
- Emergence creates detail.
- A small genotype creates a universe of phenotypes.
- Most phenotypes that can be built can't be expressed in genotypes.
- The subset that can are what can "unfold" or emerge.
- This subset is "alive," auto-catalyzing.
- They are beautiful and rare and yet nearly everything we see is caused by one of them.
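A classic concrete instance of this (my own illustration, not from the post): the Mandelbrot set, where iterating one tiny equation, z → z² + c, unfolds into unbounded fractal detail.

```python
# A minimal sketch: fractal complexity emerging from one tiny equation,
# z -> z**2 + c, iterated from z = 0.

def escapes(c: complex, max_iter: int = 50) -> bool:
    """Return True if the orbit of 0 under z -> z**2 + c escapes to infinity."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # once |z| > 2 the orbit provably diverges
            return True
    return False

# Coarse ASCII render: '#' marks points whose orbits stay bounded.
for row in range(12):
    y = 1.2 - row * 0.2
    line = ""
    for col in range(40):
        x = -2.0 + col * 0.075
        line += "." if escapes(complex(x, y)) else "#"
    print(line)
```

The genotype here is a single line of arithmetic; the phenotype is infinitely detailed structure at every zoom level.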
80Geoffrey Litt's insightful extract of this old article:
- Geoffrey Litt's insightful extract of this old article:
- "Adaptation requires two things: mutation and selection. Mutation produces variety and deviation; selection kills off the least functional mutations. Our old, craft-based, pre-computer system of professional practice-in medicine and in other fields-was all mutation and no selection. There was plenty of room for individuals to do things differently from the norm; everyone could be an innovator. But there was no real mechanism for weeding out bad ideas or practices.
- Computerization, by contrast, is all selection and no mutation. Leaders install a monolith, and the smallest changes require a committee decision, plus weeks of testing and debugging to make sure that fixing the daylight-saving-time problem, say, doesn't wreck some other, distant part of the system."
81Biology also has pace layers.
- Biology also has pace layers.
- All animal cells are remarkably similar.
- But at the macro scale the animals that are made up of cells are wildly diverse.
- A boring lower level gives rise to tons of innovation at the layers on top.
82A Schelling point requires a certain sharpness.
- A Schelling point requires a certain sharpness.
- It needs to be sharp, not dull.
- That point is a nucleation site where the energy can attract and condense.
83In a research mode in a rich space, everything you touch blooms into 10x complexity.
- In a research mode in a rich space, everything you touch blooms into 10x complexity.
84Some people like learning so much they'll do it for its own sake.
- Some people like learning so much they'll do it for its own sake.
- Sometimes though you can overlearn.
- You've overperfected your knowledge for what you need.
- Too much fidelity, not enough doing.
- The doing is what ground-truths your mental model and reveals where it's incorrect.
- It's the same asymptotic pursuit of perfection that's a bad idea when building products.
- Overfitting to a simulation.
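The overfitting point has a direct machine-learning analogue. A toy sketch (my own illustration with made-up numbers, stdlib only): a polynomial that memorizes noisy training data perfectly, versus a simple line that actually generalizes.

```python
import random

random.seed(0)

def true_fn(x):
    return 2.0 * x  # the "real world": a simple linear law

# Noisy observations of the real process.
train_x = [i / 5 for i in range(6)]  # 0.0 .. 1.0
train_y = [true_fn(x) + random.gauss(0, 0.2) for x in train_x]

def lagrange_fit(xs, ys):
    """Degree-(n-1) Lagrange interpolant: passes through every training
    point exactly. Zero training error -- a 'perfect' model of the noise."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

def linear_fit(xs, ys):
    """Crude least-squares line: the humbler model that checks reality."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return lambda x: my + slope * (x - mx)

overfit = lagrange_fit(train_x, train_y)
simple = linear_fit(train_x, train_y)

# Compare against ground truth at points the models never saw.
test_x = [0.1, 0.3, 0.55, 0.9, 1.1]
def mse(f):
    return sum((f(x) - true_fn(x)) ** 2 for x in test_x) / len(test_x)

print(f"overfit test error: {mse(overfit):.3f}")
print(f"simple  test error: {mse(simple):.3f}")
```

"Overlearning" is the interpolant: flawless against the data you've already seen, brittle against fresh contact with reality.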
85How much do you trust your mental simulator vs actual experiments?
- How much do you trust your mental simulator vs actual experiments?
- The smarter you are, the more you trust your simulation.
- Your simulation is always wrong.
- It's easy to accidentally breathe your own exhaust and get high on it.
8680% of the work of building the product is the grind.
- 80% of the work of building the product is the grind.
- 20% is the fun open-ended research.
- When you're in a big, open-ended domain, it's possible to do the fun part the entire time.
- But only if you do the grind do you get to where you need to be.
- If the grind work is invisible to you, you'll think "we're one week out" from the breakthrough forever.
- The research stuff gives the feeling of insight without seeing if it actually works in practice for real use cases.
87Often we under-count the insights of people who are dissimilar to us.
- Often we under-count the insights of people who are dissimilar to us.
- If you're working with someone who's only as good (or not quite as good) as you on the dimensions you're an expert on, but spikes in dimensions you don't sense, then you'll undercount their insights.
- You'll see the banality of their insights on the dimensions you sense, but miss the novel insights on the dimensions you don't sense.
88Insight porn: ideas that give you the aha feeling, even if they're not viable in the real world.
- Insight porn: ideas that give you the aha feeling, even if they're not viable in the real world.
- Aha moments that change the world are what matter.
89Most of my writing (by word count) happens in what I call my "gonzo mode."
- Most of my writing (by word count) happens in what I call my "gonzo mode."
- The feeling is "I will explode if I don't get this out of my head right this second."
- Writing is a kind of self-soothing behavior I do to calm the pain of not writing an idea down.
- Later, I can clean up whatever I wrote in that mode into something a bit more presentable.
- These explosions of writing happen in little bursts, when my schedule gives a small window for them to happen.
- If the window for writing is too large, I don't get the explosive outbursts and it takes me much longer.
90If you're going to be saddled with weights, don't ignore them, use them to your advantage.
- If you're going to be saddled with weights, don't ignore them, use them to your advantage.
- You might call this the Mulan maneuver, after how she solved the challenge of climbing to the top of the pole at the training camp.
- Another form of leaning into the weight you're saddled with is to use it to do a slingshot maneuver.
91Before enlightenment, you carry the water and you chop the wood.
- Before enlightenment, you carry the water and you chop the wood.
- After enlightenment, you carry the water and you chop the wood.
92Social media engagement maxing is a light form of paperclip maxing.