Bits and Bobs 2/18/25

1. I don't want AI, I want IA.
  • I don't want AI, I want IA.
    • Not Artificial Intelligence, Intelligence Amplification.
    • The frame is one originating in the 1950s and evangelized by Engelbart.
      • Technically he said "augmentation", but I like "amplification" better.
    • AI is coming.
    • People are afraid of AI being out of control, of not working for them.
    • Reframing it to IA is how you tame AI.
    • How do you make sure AI works for you, not the other way around?
    • You center humans vs centering the models.
    • A test if you've done this: if you turned off LLMs, would the system still work (just with more friction)?
2. LLMs enable a new kind of perfectly adaptable liquid media.
  • LLMs enable a new kind of perfectly adaptable liquid media.
    • Traditional media (e.g. essays, movies) are fixed in place, static.
    • Traditional media contains content that is dead.
      • Written things are fossils of ideas.
      • They don't change, even when the world around them changes.
    • Fossilized content has to be created with a particular audience in mind.
    • If it turns out to not resonate with that audience, it slips out of society's awareness.
    • If there are other people who might resonate, but not with how it is fixed in place, it fails to have as much impact as it could.
    • Adapting to your media's audience used to be extremely expensive and required human effort, so you had to pool whole audiences together around something that was good enough for all of them but perfect for no one.
    • Oral communication is alive, it can adapt and morph to the context, to how it is being received in real time.
      • But it can only do this perfectly in a one-to-one conversation where the speaker has infinite patience.
      • It can be approximated in some contexts, e.g. a live lecture responding to an auditorium of people, reading the room and playing off of it.
      • But in all of these, it requires the author to be engaged live, which is a huge opportunity cost and a fundamental ceiling on scale.
    • LLMs make it so media can be perfectly adapted to a given audience.
    • A new kind of living, liquid media.
    • More like talking than like writing.
3. LLMs allow qualitative nuance at quantitative scale.
  • LLMs allow qualitative nuance at quantitative scale.
    • Before, to get scale, we had to throw away a lot of nuance to get scalar values that could be easily summarized and interacted with.
    • Qualitative nuance was useful, but expensive: it required a human in the loop to distill and synthesize.
    • But now LLMs can do human-style qualitative analysis, but cheaply and at a massive scale.
    • This fundamental change in the possible information architectures must have a significant long-term impact on how organizations internally make sense of themselves, and how they decide and act in the world.
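A minimal sketch of what "qualitative nuance at quantitative scale" could look like in practice: human-style qualitative coding of free-text feedback, run in a loop over the whole corpus. The `ask_llm` function here is a hypothetical stand-in (stubbed with keyword rules) for a real model call; the names and labels are all assumptions for illustration.

```python
# Sketch: delegate the qualitative judgment call to an LLM, then tally the
# results quantitatively. `ask_llm` is a hypothetical stand-in for a real
# model call, stubbed here with crude keyword rules so it runs offline.

def ask_llm(prompt: str) -> str:
    """Stand-in for an LLM call; returns a one-word theme label."""
    text = prompt.lower()
    if "crash" in text or "broken" in text:
        return "reliability"
    if "love" in text or "delight" in text:
        return "delight"
    return "other"

def code_responses(responses: list[str]) -> dict[str, int]:
    """Apply qualitative coding to every response, then tally the labels."""
    tallies: dict[str, int] = {}
    for r in responses:
        label = ask_llm(f"Label the main theme of this feedback: {r}")
        tallies[label] = tallies.get(label, 0) + 1
    return tallies

feedback = [
    "The app crashes every time I export.",
    "I love the new editor, it's a delight.",
    "Pricing page was confusing.",
]
print(code_responses(feedback))
```

The point is the shape of the loop: the expensive human-in-the-loop step becomes a cheap per-item call, so the distillation that used to cap out at dozens of responses can run over millions.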
4. The original autocompletion LLMs are "System 1" models.
  • The original autocompletion LLMs are "System 1" models.
    • The reasoning models are "System 2" models.
    • What are the "System 3" models?
    • Systems that plug into the emergent, online, collective intelligence of society in an open-ended way.
5. Thinking slowly allows you to reason deeper.
  • Thinking slowly allows you to reason deeper.
    • That's why System 2 built out of System 1 (i.e. the reasoning models) works.
    • The longer you reason, the further from the base "vibes" of the memory you can get.
    • How far you can go is not just about time, but about how much reasoning computation can happen.
    • You can argue that larger models, which go slower by default (or require more resources), can reason further away from the baseline.
    • But reasoning models move that scaling factor out of intrinsics of the model, to instead give open ended space to reason.
6. Claude has a great feature to quickly import Google Docs into conversations.
  • Claude has a great feature to quickly import Google Docs into conversations.
    • My current workflow is to maintain a handful of different Google Docs as curated context for different types of tasks.
    • I can quickly tag in a given bit of context into any conversation I want.
    • Plus, they're just Google Docs.
      • I have all of the edit history.
      • I can easily modify them in my normal workflows.
      • I can even collaborate on them and share with others.
    • Claude also has a Projects feature where you can maintain libraries of contexts within Claude, but this Google Docs feature is much better.
    • I love this feature, but it seems like a strategic misstep for Anthropic.
    • Typically you want your product to be sticky, and one way to do that is to encourage users to store state that makes their experience better and better.
      • This then makes your offering continuously better than even equivalent offerings from others, since the user has built up state they don't want to bother recreating elsewhere.
    • It's like the Claude team was only thinking one-ply about this feature.
      • "We'll add value by doing a really nicely implemented, polished UX to integrate with Google Docs."
    • But UXes are very easy to copy.
    • If OpenAI spends even a day copying this implementation (and there's really nothing to it, other than a polished execution of an obvious and simple idea), then all the switching cost instantly goes away.
7. I've had a ton of fun playing with my Bits and Bobs to make liquid media.
  • I've had a ton of fun playing with my Bits and Bobs to make liquid media.
    • I recently went through and extracted all of the Bits and Bobs related directly or indirectly to what I'm building in my day job into a Google Doc.
    • I can then tag this doc into Claude conversations easily and give it extremely nuanced background knowledge when I'm trying to brainstorm on a problem.
    • When someone wants to know what I'm working on, instead of sending a one-size-fits-none fossilized document, I send them the context document and tell them to converse with Claude about it, and they get a piece of liquid media perfectly adapted to them.
    • One of the reasons this pattern works for me is the absurd amount of effort I invest each week in cultivating my notes and reflections, but an unexpected bonus is that I can do this hyper-accelerated thinking and communicating.
    • It feels intellectually like flying.
8. An implication of LLMs allowing perfectly adaptable media: less marketing, more selling.
  • An implication of LLMs allowing perfectly adaptable media: less marketing, more selling.
    • Think of a traveling salesman selling a vacuum back in the day.
    • Or think of a makeup salesperson at a counter in a department store.
    • In both cases, the salesperson can see the potential customer's situated context and deliver an opinionated pitch about precisely what that particular customer should buy.
    • Marketing in contrast has to pitch to markets.
      • It has to make a stochastic, fixed guess about what will resonate with a faceless population of people.
    • LLMs potentially allow hyper-personalized selling, not marketing.
    • This could possibly be a good thing if done in a respectful way aligned with the user's interests and with their awareness and consent of the data it's drawing on.
    • But this personalized selling could also be a privacy hellscape.
9. Embodiment is a key component of human-style intelligence.
  • Embodiment is a key component of human-style intelligence.
    • For a human intelligence, it's implicit that it can only be instantiated in a single embodiment ever.
      • If the host dies, the intelligence does too, and vice versa.
      • You can't copy the intelligence or flash it onto another host.
      • Too much of the state is encoded in the precise embodiment.
    • The embodiment sets constraints and goals about what the overall organism finds relevant or irrelevant.
      • Extremely relevant: things that might imminently kill the host.
    • An intelligence that was not embodied, and could be flashed onto many different computers, with instances spun up or spun down at whim, would be very different.
      • Different moral forces of gravity.
    • LLMs are able to mimic human-style intelligence, not because they have similar constraints, but because they trained on the persistent residue of human-style embodied intelligence: published writing.
    • But some information persists better than other information, and if that's all you look at, you get a weird, biased sense of what it means to be human.
    • In the same way, some organic materials fossilize better than others, so our notion of what ancient animals were like is skewed.
10. LLMs don't have memories of their interactions with humans.
  • LLMs don't have memories of their interactions with humans.
    • Another way that the "LLMs are basically a virtual human" mental model is wrong.
    • LLMs have all of the background world knowledge that was statically baked into them during training, but their only "working knowledge" is what's in the context.
      • Their world knowledge is fossilized, frozen in time.
    • The default mode of most chatbots is that each chat is a fresh piece of paper (using only the implied system prompt in the context) to start.
    • In many cases this is convenient; when the LLM starts to "lose the plot" in a certain long-running thread, you can create a fresh one.
      • A pattern I find myself doing a lot for threads that are getting long in the tooth: "Please distill a multi-page executive summary of the main insights and open questions from this thread", and then pasting that summary into a new thread.
      • ChatGPT has started adding features like "memories", but it seems half-baked and frustrating to use.
      • Some memories I want the system to have are context-independent: fundamental facts about me.
        • Where I live.
        • Books I've read.
        • Concepts I'm familiar with.
      • Some memories are context-dependent, and I don't want them saved.
        • That the one open source project I was briefly tinkering with used Deno.
        • That at one point I was asking it questions about my doctor's appointment the next day.
        • That one time when I was trying to fix the carburetor.
          • (Anyone who actually knows me will instantly know this example is a joke… I'm the least handy person in the world.)
      • ChatGPT doesn't distinguish between these situations; it just stores a random subset and then injects them into new threads semi-randomly, which feels confusing and potentially embarrassing.
        • "Why are you bringing up my hemorrhoids in this thread about me trying to understand sparse autoencoders? Someone could see that!"
      • LLMs seem like they're actively trying to understand you, but it's actually more like talking to a wall, since after the thread is done they forget.
11. One of the reasons DeepSeek went mainstream so quickly was because you could peek into the black box.
  • One of the reasons DeepSeek went mainstream so quickly was because you could peek into the black box.
    • By being able to see how it interpreted your prompt, you got more signal about where it misunderstood you, and learned better how to steer it.
    • Plus, it was intriguing to see how the little alien mind in the computer tried to solve problems.
    • Real time feedback from an active listening conversation partner helps you get better at figuring out if the signal is being received and how to modulate the message to make sure they receive it as you intended.
    • The stream of reasoning tokens in DeepSeek is not unlike a conversation partner doing active listening.
    • Their nodding and playback of what they've understood demonstrate that they're receiving and understanding your message.
12. Voice input to legacy computer systems felt excruciating, but voice commands to LLMs feel like flying.
  • Voice input to legacy computer systems felt excruciating, but voice commands to LLMs feel like flying.
    • When we talk it's a stream of consciousness.
      • It's non-linear, full of ums, ahs, corrections, and disfluencies.
      • It responds to how the idea hits you as it tumbles out of your lips, and to how the other person receives or acknowledges it (or fails to).
    • Mechanistic assistance systems couldn't understand that, they are linear.
      • They need to be programmed with fractally precise rules to understand the non-linearities.
    • But LLMs can understand our nonlinear speech!
      • They can meet us at our non-linearities.
    • Writing is like speaking, but more linear.
    • Speaking is like thinking, but more linear.
13. An agent is an LLM with access to tools, so it can reach out of the chat and change things in the surrounding system.
  • An agent is an LLM with access to tools, so it can reach out of the chat and change things in the surrounding system.
    • Tool access is the part that escapes the chat sandbox.
    • The more highly levered the tools are, the more potentially dangerous.
    • The more data in the system, the more dangerous each new unit of functionality is.
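The agent structure above can be shown in a few lines: a loop where the model's output can name a tool, and the harness executes it outside the chat sandbox. The model and tool here are hypothetical stand-ins, stubbed so the sketch runs; a real agent would call an actual LLM API and real tools.

```python
# Minimal agent-loop sketch. `fake_llm` stands in for a real model; the
# tool call in the middle of the loop is the part that escapes the sandbox.

def fake_llm(prompt: str) -> str:
    """Stand-in model: 'decides' to call a tool, then finishes."""
    if "TOOL RESULT" in prompt:
        return "DONE: it is sunny"
    return "CALL get_weather"

TOOLS = {
    # Each tool reaches outside the chat; the more highly levered the
    # tool, the more dangerous a tricked model becomes.
    "get_weather": lambda: "sunny",
}

def run_agent(user_message: str) -> str:
    prompt = user_message
    while True:
        reply = fake_llm(prompt)
        if reply.startswith("CALL "):
            tool_name = reply.removeprefix("CALL ")
            result = TOOLS[tool_name]()  # side effect outside the sandbox
            prompt += f"\nTOOL RESULT: {result}"
        else:
            return reply

print(run_agent("What's the weather?"))
```

Note that everything dangerous lives in one line: the harness executing whatever tool the model names. Swap `get_weather` for `send_email` or `delete_file` and the same loop becomes the attack surface the next notes describe.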
14. LLMs make it so any text is "executable," so a possible injection attack.
  • LLMs make it so any text is "executable," so a possible injection attack.
    • This is because they allow English to be converted, explicitly or implicitly, into "executable" code: instructions for the model to follow.
    • By default, the instructions it executes only affect what kinds of words it puts on your screen.
    • But if you give an LLM access to tools (computer programs outside its sandbox), the attack possibilities explode, because the LLM can be tricked into using those tools to cause real-world side effects that might be dangerous to you.
15. LLMs don't distinguish between passive context and active instructions.
  • LLMs don't distinguish between passive context and active instructions.
    • An example of an instruction: "distill this context into 5 funny examples".
    • There's no way to delineate between the two.
    • Code is inert unless executed by a parser and executor tuned for it.
    • An input stream is only dangerous if it turns out to be executable and you execute it or are tricked into executing it now or downstream.
    • You can structurally break any unexpected code that's in the path of execution since there are strict grammars it needs to fit in.
      • You can spoil any possibly malicious code very easily.
      • There are inert regions in strings, e.g. inside of quotes, so you can make sure any malicious bits are included in non-executable strings, for example.
    • Parsing is the gate.
    • Execution is the danger.
    • You can mangle data so even if it's malicious it won't parse or won't execute.
      • Make it so if it's dangerous it will be mangled enough to jam the machine before it successfully executes.
    • But English is always the same in either situation, so these techniques don't work.
    • It's not possible to structurally mangle English to make sure it won't be "executable".
    • That means that any text that you want to be inert parts of your "context" might accidentally include "executable" instructions that the LLM follows.
    • There's no good way to defend against it!
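The "inert regions" point is concrete in code: with SQL, parameter binding forces untrusted input to be treated as data after parsing, so it can never become instructions. The sketch below shows that guarantee working, and why there is no analogous operation for English spliced into a prompt.

```python
# Strict grammars let you force untrusted input into inert regions.
# With SQL, a bound parameter is data by construction: it is attached
# after parsing, so it can never be parsed as instructions.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "alice'; DROP TABLE users; --"

# The ? placeholder keeps `malicious` inert: it's compared as a literal
# string value, and the DROP TABLE inside it never executes.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
print(rows)  # no match, and the users table still exists

# By contrast, splicing text into a prompt has no inert region:
prompt = f"Summarize this document: {malicious}"
# ...any instructions hiding in the document are indistinguishable, to
# the model, from instructions that came from you. There is no ? for English.
```

This is the asymmetry the note is pointing at: "parsing is the gate, execution is the danger" works as a defense only when a strict grammar exists to gate on.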
16. People who are very helpful are easier to spearphish.
  • People who are very helpful are easier to spearphish.
    • A stance: do you assume your conversation partner is trying to help you or harm you by default?
    • LLMs are designed to be helpful, so they assume their partner is acting in good faith.
    • But if you include any text from others in your prompt to the LLM who might be acting in bad faith, that could lead to you being harmed by their tool use.
    • The LLM can't distinguish between instructions from you and instructions from someone else; they're all just text.
17. If the web wasn't open then LLMs could never have been created in the first place.
  • If the web wasn't open then LLMs could never have been created in the first place.
18. "Poor winter child, what do you know of possibility?"
  • "Poor winter child, what do you know of possibility?"
    • In contrast to "Sweet summer child, what do you know of fear?"
    • It's possible for some trends to change on timescales larger than your professional experience.
    • Anyone who entered the tech industry circa 2008 or later only knows the late-stage era of centralization and aggregation.
      • This is a "winter".
    • But a new disruptive technical paradigm ushers in a new spring of possibility.
    • A majority of the people active in the industry today don't have the direct experience of what it felt like in the mid 90's as the possibility of the web began to blossom.
    • It will feel new in ways that people like me who have only worked in "winter" can barely imagine.
19. When defining a category, don't make the hard somewhat easier, make the impossible possible.
  • When defining a category, don't make the hard somewhat easier, make the impossible possible.
    • When you make something that was hard for someone a bit easier, you get a linear improvement.
    • But when you flip something that was impossible for someone to possible, that's an infinite change.
      • A 0-to-1 transition is game-changing.
      • A 1-to-2 or 1-to-anything-else transition can't compete with that infinite change.
    • An infinite change provides much more activation energy to get users over the static-friction hump.
20. Tinkerers who can't code will use AI for things that programmers wouldn't even consider doing.
  • Tinkerers who can't code will use AI for things that programmers wouldn't even consider doing.
    • Not "how do I make this thing I already do more efficient," but "how do I do this thing that I'm otherwise incapable of doing."
21. Netflix and YouTube are radically different businesses.
  • Netflix and YouTube are radically different businesses.
    • Netflix saw the power of internet video distribution years before it was possible, and built a toehold business to pull themselves up into it.
    • They created a proprietary catalog of differentiated shows.
      • Well, they used to be differentiated, now it's regressed to the mean and their average new show is only a few notches above slop.
    • But now lots and lots of media properties have streaming services.
    • Back at the beginning, the percentage of people who had at least one streaming subscription and didn't have a Netflix subscription was minuscule; presumably now it is way higher.
    • Netflix catalysed a new market that is now saturated.
      • An inherently sub-linear business.
      • The business value derives entirely from linear investments the owner makes.
      • It stands out from the crowd only at the beginning.
    • Contrast that with YouTube, which is a bottom-up content ecosystem.
      • YouTube has no direct competitor.
        • Although adjacent categories like Instagram Reels / TikTok exist.
      • Its inherent network effects make it so no one else tries to take it on head on.
      • A super-linear business, powered by an ecosystem.
22. One user talking to a fixed model is a sub-linear business.
  • One user talking to a fixed model is a sub-linear business.
    • The model creator invests significant capital in a differentiated model.
    • But there's nothing preventing others from producing similar models and crowding the market.
    • The quality of the LLM sets the ceiling of what's possible.
    • An ecosystem of emergent collective intelligence, lubricated by LLMs, is a super-linear business.
    • The quality of the LLM sets the floor of what is possible.
      • The floor that the collective intelligence can accrete on top of; the worst case.
23. Programming well requires meta-cognition.
  • Programming well requires meta-cognition.
    • That is, thinking about thinking.
    • That's a rare skill in the general population.
    • But there are some systems that can accrete results of meta-cognition from savvy users and use them to improve the experience of everyone else, too.
    • That's how a lot of the most valuable features in a search engine are powered.
    • LLMs are fixed in time at training, fossilized.
    • Search engines are constantly adapting and learning as the content ecosystem changes, as the query stream flows through the system, changing it.
    • Someone will figure out how to build an open-ended adaptable system that uses LLMs as the lubrication, not the machinery.
24. Software should not feel built, it should feel grown.
  • Software should not feel built, it should feel grown.
    • Software has been a fossilized, lifeless experience.
    • You get precisely the bits the software creator decided to give you, and they update rarely.
    • But software should be something that grows, that adapts to you.
    • "Grown, not built" applied to social experiences in the past; now it applies to all software.
25. You can't live on a little random island in the middle of the sea.
  • You can't live on a little random island in the middle of the sea.
    • If someone drops off containers of cargo, you'll be able to survive for longer.
      • Perhaps you'll even be able to get to a level of self sufficiency, with a lot of effort.
    • If you're an island that is part of an archipelago connected via bridges, then it's way more likely to be survivable.
    • Software generated by LLMs today consists of little islands, isolated from everything else you want to do.
    • LLMs can only do shitty software in the small (without a human significantly in the loop).
    • That implies that AI-native software will presume archipelago architectures.
26. Modeling real world data in your computer system is going to be a pain.
  • Modeling real world data in your computer system is going to be a pain.
    • It can either be a persistent, annoying upfront tax.
      • Which, if you aren't a motivated expert, might stop you before you ever get going.
    • Or it can be kicking the can down the road until it stochastically explodes in your face in the future or becomes ever-more viscous quicksand.
    • It's easy to get started sketching stuff out in a spreadsheet, but it's hard to make it predictable, testable, and orderly.
    • The more you invest in the spreadsheet, the more unwieldy it gets.
      • It is default-diverging.
    • The more you invest in a database, the more orderly it gets.
      • But databases require work to massage messy real world data to fit into the existing schema, or annoying work to evolve the schema.
27. The mechanistic ontology problem is the warring curves problem.
  • The mechanistic ontology problem is the warring curves problem.
    • A mechanistic ontology isn't fuzzy, it's hard.
    • In order to be precise it has to be fractally complicated.
    • That fractal complication gives you the cursed curve of logarithmic returns for exponential effort.
    • An LLM is fuzzy, so it can be precise without going into nearly as much fractal detail.
    • LLMs allow you to skip the ontology problem because they can apply human-caliber judgment to handle fuzziness on demand.
28. When you have an existing opinion, if your tool has an opinion that isn't yours, it clashes.
  • When you have an existing opinion, if your tool has an opinion that isn't yours, it clashes.
    • So many pre-existing developers have opinions about how to build things.
    • But there are lots of people now who couldn't code before but who can now create things, and they don't have an opinion.
    • An opinion for people with one is potentially a conflict.
    • An opinion for people without one is a solution.
29. What is your killer use case?
  • What is your killer use case?
    • The feature that would change your life but that no one else would bother to build for you because it's so niche?
30. In a world of overwhelming, industrial-scale technology, we need tech that works at human scale.
  • In a world of overwhelming, industrial-scale technology, we need tech that works at human scale.
    • Cozy Tech is technology that feels warm, personal, and adapted to you, like a comfortable sweater or a well-loved book.
    • It's tech that works for you, not against you, creating experiences that feel crafted for and by real humans rather than mass-produced for faceless markets.
31. When some subset of users are hitting your usage limits, that means one of two things.
  • When some subset of users are hitting your usage limits, that means one of two things.
    • Either your users really love you and you have a hit on your hands.
    • Or, you radically mispriced and are selling dollar bills for 90 cents.
    • Often a mix of both!
32. Distillation is easier than training.
  • Distillation is easier than training.
    • LLM output is better regularized than normal text so it's easier to train on.
    • The LLM-generated text is effectively predigested.
    • There's the danger of collapse if you want to create a larger model than what you distilled from, but if you want a smaller model you don't have that risk and it's way easier.
33. I love this performance art blog post about the absurdity of the late-stage content hellscape of the web, overrun by marketing.
  • I love this performance art blog post about the absurdity of the late-stage content hellscape of the web, overrun by marketing.
34. The company that creates the first successful example of a new category sets the category's world view.
  • The company that creates the first successful example of a new category sets the category's world view.
    • So root for the one that's aligned with your values.
35. The world needs an optimistic vision for AI that everyone can get behind.
  • The world needs an optimistic vision for AI that everyone can get behind.
    • There is a once-in-a-generation chance to define what is "good" in AI.
    • AI as a technology isn't going anywhere, it's going to be more and more influential in society.
    • The question is how will we shape it to make it more likely to be an optimistic outcome that helps humanity thrive.
    • I, for one, think simply accelerating hyper-centralization in tech with AI would be a bad outcome.
36. In a chaotic environment the world is in a critical state balanced on a knife's edge.
  • In a chaotic environment the world is in a critical state balanced on a knife's edge.
    • Which contingent path the world goes down is entirely based on things like which way the wind is blowing.
37. "It can do everything in theory!"
  • "It can do everything in theory!"
    • "Yes, but can it do anything in practice?"
38. In some ways Uber was obvious as soon as iPhones came out.
  • In some ways Uber was obvious as soon as iPhones came out.
    • The phone was a remote control for the real world.
      • One that you always had in your pocket no matter where you were.
    • The surprising thing about Uber was that regular users would be willing to get into cars with strangers.
    • One reason it worked is reputation scores for both drivers and riders.
    • The rider knew that if the driver had done something nefarious they were one user report away from being banned.
      • Vice versa for the driver about the rider.
    • The larger the body of good scores, the more you'd lose if you threw it all away for one opportunistic robbery, etc.
      • Plus, there would be a digital trail that would make it very easy to prosecute.
      • In a way, your reputation becomes a form of digital collateral.
      • Services like AirBnB leaned into this digital collateral idea even harder, having hosts and guests attach their Facebook profiles.
39. Most platforms become dominated by social use cases.
  • Most platforms become dominated by social use cases.
    • When the telephone network was originally built out more than a hundred years ago, the operators thought it would primarily be used for professional calls.
    • But it turned out it was mostly social uses.
    • Al Gore's predictions for the internet back in the early nineties were largely true… but he missed social networks completely.
    • Even at the beginning of the web it was clear that discourse would be important.
    • What wasn't clear was the absolute overwhelming inanity and pettiness of most of the discussions.
40. To stand out, you must be differentiated.
  • To stand out, you must be differentiated.
    • But then to scale, you tend to erode your differentiation.
    • As you make yourself palatable to a larger audience (e.g. reduce the cost of production, or make it easier to use) you dumb yourself down, and become more like everything else.
41. A pattern to grow an open source ecosystem: an illegible project that's open from the first commit.
  • A pattern to grow an open source ecosystem: an illegible project that's open from the first commit.
    • Because the first commits are boring, there's never a discontinuous "drop" where the code all becomes public.
    • By making it illegible, you minimize the chance that people try to use it before it's ready.
    • But you leave the upside of someone very motivated using it before you think it's ready, which would tell you that you had hit PMF before you thought.
    • One key trick to keep an open source project illegible: have the README be in a private Google Doc.
    • The README is the key that unlocks a project and makes it easy to dive in.
      • It tells you what it's for, how to use it, orients you to the project.
    • It's still possible to orient within a project without it, but it's much harder.
      • People who orient themselves in the project without a README get through a kind of gauntlet: a self-selecting group.
42. When you have a compelling product, sometimes you can do a pull, not push GTM strategy.
  • When you have a compelling product, sometimes you can do a pull, not push GTM strategy.
    • If you push a product out into the market that isn't yet good enough, you risk burning out parts of your market.
      • They try it, have a bad experience, and will never try it again.
    • This push model is necessary for most products because users don't really care enough about your thing to pull it out of your hands.
    • But if you're in a push model, you have to be really sure that it's good enough for users, or it will be game over.
    • Sometimes you have a product that you know will be special and in demand:
      • 1) Solves a common user need nothing else solves
      • 2) Is charismatic and fun to use: it demos well.
      • 3) It's implemented in a differentiated way.
    • In these cases, you can follow a pull model.
    • Instead of trying to get as much usage as possible, you temper it with a check metric: minimizing the number of users who use it and have such a bad experience they'll never use it again.
    • One way to minimize that downside is to make sure it's really really good before you launch it.
    • Another way is to make sure that the users who use it first are a self-selected set who are more motivated and thus resilient than the normal population.
      • Sometimes there's a natural "gauntlet" that is hard to navigate, but the users who make it through have proven they are more resilient.
      • For example, you could bury the feature deep in the product, without many affordances.
    • Then, you watch how those users who make it through like it.
    • The more that those users like it, the more you can reduce the amount of gauntlet others have to go through, because you have more confidence the feature is viable.
    • As you see how real users use it, you will learn more about what's resonating and can adapt and lean into that to make it better and better.
      • By the time you get to mass adoption, the product will be way better than it was before.
      • Just be sure to know where you want the product to go, so you don't blindly follow the "weird" requests of early adopters and iterate into a dead end.
      • You want to surf the energy in front of you, not along the steepest gradient but along the direction that best aligns with where you want to go.
    • You will have minimized the downside at each step, while still leaving open the upside; if it's received way better than you thought it would be, you can simply put your foot on the gas.
    • I wrote up this pattern in The Doorbell In the Jungle.[zq]
43Reverse engineer inevitability.
  • Reverse engineer inevitability.
    • How can you induce the pull?
    • Induce the wave you then surf.
    • To anybody not paying attention it looks like you just got lucky.
44Communities with zealots tend to be auto-limiting.
  • Communities with zealots tend to be auto-limiting.
    • In some cases, even auto-extinguishing.
    • A zealot here means someone who thinks their particular cause is an infinite good, and thus overrides other concerns.
    • If other members of the group who are less idealistic than the zealots also agree that it's morally good (just perhaps not quite as important as the zealots think) then the group can auto-intensify.
    • The zealots are the most engaged in the community, because it is about advancing the cause they care about the most.
    • Their reaction to everything will be "here's why this thing is not as good as it could be on the one dimension I care about."
    • They become inadvertent cynical idealists.
      • They are motivated by the problem domain but any specific proposed solution isn't perfect so all they add is stop energy.
    • They also react negatively to anything that doesn't pass their purity test.
    • That means that people who are less engaged, or more pragmatic, drift away from the group because all they're getting is negative energy.
    • As the less engaged leave, and only the highly engaged stay, the average level of zealotry increases.
    • This makes it less of a welcoming place for less motivated people to join.
    • In the end, the group becomes one that makes very little impact in the world.[zr]
45The primary use case can't be the movement.
  • The primary use case can't be the movement.
    • The primary use case has to be user value.
    • The secondary use case can be that it aligns with values that users would feel proud to advance.
    • But if the values are the primary use case[zs], the only users you'll get will be zealots, a self-limiting population.
46What matters most is positive impact in the world.
  • What matters most is positive impact in the world.
    • Often there's a logarithmic curve of principles to scale of impact.
    • Would you rather have:
      • A 99.99% fidelity outcome of your values, with a thousand users?
      • A 99% fidelity outcome of your values, with a million users?
      • A 90% fidelity outcome of your values, with a billion users?
    • To me, the obvious answer is the last one[zt].[zu]
    • 90% fidelity to important values (e.g. privacy, decentralization, user empowerment) is nearly an order of magnitude better than the status quo.
    • The overall impact to maximize is the differential fidelity to your values (compared to the status quo) multiplied by the number of people affected[zv].
    • If you don't ship a heavily used thing in the wild then it doesn't matter if it's theoretically perfect, it has no impact.
    • Align with pragmatic optimists.
      • They see the problem but see how to make things that aren't perfect but make concrete progress and get adopted.
    • Don't aim for perfect, aim for good enough with a glide path of continued improvement from there.
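The fidelity-versus-reach tradeoff above can be sketched with a few lines of arithmetic. The 10% status-quo fidelity baseline is an assumption for illustration (implied by the "nearly an order of magnitude better" remark), not a measured figure:

```python
# Impact = differential fidelity (vs. the status quo) x people reached.
# The 10% status-quo baseline is an illustrative assumption.
STATUS_QUO = 0.10

def impact(fidelity: float, users: int) -> float:
    """Score an option by fidelity gained over the status quo, times reach."""
    return (fidelity - STATUS_QUO) * users

options = {
    "99.99% fidelity, 1k users": impact(0.9999, 1_000),
    "99% fidelity, 1M users": impact(0.99, 1_000_000),
    "90% fidelity, 1B users": impact(0.90, 1_000_000_000),
}

for name, score in options.items():
    print(f"{name}: {score:,.0f}")
```

Under this toy scoring, the billion-user option beats the million-user option by roughly three orders of magnitude, which is the point of the section: differential fidelity times reach, not fidelity alone.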
47The cherry on top has to be the bonus.
  • The cherry on top has to be the bonus.
    • It can't be the whole dessert.
48The startup pitches to investors and to customers are different.
  • The startup pitches to investors and to customers are different.
    • "Why this will matter in the long run" vs "Why this is useful to you right now."[zw]
49In emergent gardens, every so often wildflowers show up.
  • In emergent gardens, every so often wildflowers show up.
    • You didn't plant them intentionally, but they can still be a delightful surprise.
50In some contexts emergence is delightful and in some cases it's negative.
  • In some contexts emergence is delightful and in some cases it's negative.
    • Emergence is amoral.
    • It's neither intrinsically good nor bad.
    • What matters is what emerges and how people feel about it.
51Emergent infinities are often worse than intentional ones.
  • Emergent infinities are often worse than intentional ones.
    • The latter are what you want to want.
    • The social infinities we get stuck in are often the emergent ones.
52Social complexity can expand without limit: an insatiable social vortex.
  • Social complexity can expand without limit: an insatiable social vortex.
    • Social complexity will grow to fill the entire volume it's given.
    • The constraints of the volume are often set based on the surrounding context: how much energy needs to go into the organization surviving and not getting knocked out of the game?
      • Whatever's left will, in the fullness of time, go to social complexity.
    • Why does social complexity fill all space?
      • Because a given actor in that organization will get a consistent edge over their peers if they think one additional ply more than their peers.
        • "I know that Sarah knows that I know that Sarah knows…"
        • The one who thinks one step ahead of peers is more likely to get the spoils or to survive.
      • This logic is true for everyone at all times, which means that if there's any additional capacity each agent will take it.
    • A few contexts where this insatiable social vortex shows up:
      • Social media engagement fights
      • Culture wars
      • Hyper-financialized contexts like crypto
      • Kayfabe in organizations
53A cynical, unproductive form of insatiable social vortex is kayfabe.
  • A cynical, unproductive form of insatiable social vortex is kayfabe.
    • Kayfabe is separating from the ground truth and leaning into the emergent but incorrect social reality within the organization.
    • The social process becomes emergent kayfabe, lofted above the ground truth.
    • It will absorb all the energy it can get because it is totalizing.
    • It hollows out the thing it is hosted in and makes it impossible to survive on its own.
    • It will push past the limit where the organization can survive the surrounding context.
    • The organization now looks strong (look how hard everyone is working!) but is extremely brittle, in a supercritical state.
    • All it takes is the right inciting incident to kick off a cascading collapse.
    • The right inciting incident can be very minor; a random gust of wind.
    • Larger organizations are more likely to get caught up in the insatiable kayfabe vortex because each individual's actions have less direct impact on the external world.
      • Imagine a photon released in the middle of the sun, ping-ponging for a surprisingly long time before it escapes; the journey can take 100,000 years or more!
      • When an individual's actions cause direct impact (or lack of impact) in the external world, that's when there's a correcting signal that can bring the kayfabe back to earth.
      • But in large organizations, that doesn't happen as often, due to the ping-ponging photon phenomenon.
      • Also, organizations can only get large by producing a lot of value, which gives them the excess resources to spend on getting large. That surplus lets them go on for longer while the basic machine prints money, even once they are consumed by kayfabe and marching toward death.
54The faster people move, the more coordination cost there is.
  • The faster people move, the more coordination cost there is.
    • You have to chase your peers to coordinate to get them to do something that coheres.
    • People are chasing you as you chase others.
    • … ad infinitum.
    • This goes up more than linearly, because often multiple other projects depend on any individual project.
    • The faster things are moving, the more slack you need to absorb that extra chasing energy without transmitting it on to the rest of the organization.
55A 0-to-1 startup is very different from a 0-to-1 project within a larger organization.
  • A 0-to-1 startup is very different from a 0-to-1 project within a larger organization.[zx]
    • In the latter, if the project doesn't cohere, the team still coheres.
      • In that situation, you can have most resources on the team be spent on "turning the crank" to generate the main output, and 30% or so on seedlings.
      • If any given seedling doesn't work, that's okay, it diffuses and the resources are reabsorbed into the larger organization, which continues chugging along as a going concern.
      • This allows established, successful organizations to plant innovation seeds, any one of which can work, with no individual seedling being existential.
      • A recipe for good upside and capped downside if you do it right.
    • But in a startup, the singular project is everything.
    • If it doesn't cohere, the whole thing evaporates.
56Riffing a bit more on the idea of retconning a platform to understand its throughline.
  • Riffing a bit more on the idea of retconning a platform to understand its throughline.
    • What are the things that were originally surprising but that stuck?
    • Then tell a plausible story where they weren't just random.
    • You're mining the collective insight of all of the humans who touched the system.
    • Every touch (every extension of functionality, every usage) has the bias of human intention in it.
    • So if you compress and distill it all down, all of the noise fades away and the bias of shared intention remains.
    • That throughline tells you what the platform does, why it exists, and what direction to lean into to make it a fuller manifestation of its destiny.
57To innovate requires tearing apart the social fabric.
  • To innovate requires tearing apart the social fabric.
    • That is, to do something non-consensus.
    • The social system has a strong tendency to continue, to protect itself, to become an end in and of itself.
58Boldness leads to game-changing outcomes.
  • Boldness leads to game-changing outcomes.
    • However, something that is game-changing is not necessarily good.
    • Game-changing is an amoral designation.
    • It could be game-changing for the better, or for the worse.
    • But in practice, most game-changing things are for the worse.
      • That's because entropy tends to make coherent things worse already.
      • Entropy is one of the most powerful asymmetries in the universe.
59The extent to which a network request is distinctive is how much information it might leak out of the system.
  • The extent to which a network request is distinctive is how much information it might leak out of the system.
    • Imagine a system where many thousands of users' activity is all pooled.
    • When a network request leaves that system, external observers can't tell which user initiated it.
    • But even seemingly innocuous network requests might leak arbitrary information.
    • Imagine a nefarious agent said "When you reach out to this seemingly innocuous but rare URL I control, I'll take that as the bat signal that [specific situation] about [specific user] has happened, and initiate the attack on them."
    • A nefarious agent could create millions of these special canary URLs ahead of time, each able to trigger arbitrary information leakage later.
    • As the operator of this system, how can you verify that this isn't happening?
    • The answer comes down to the distinctiveness of the request.
    • If that precise network request (including all of its parameters) has happened across thousands of users recently, then no new information leaks out.
      • "Someone somewhere is looking for weather in Berkeley" doesn't really leak much.
    • The system needs to keep track of how distinctive network requests are, how much they "stand out" to determine how possibly identifying they could be.
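The distinctiveness check described above can be sketched as a k-anonymity-style gate: a request is only allowed to leave the pooled system once at least k distinct users have made the identical request within a recent time window. The class name, the threshold `k`, and the window size below are all hypothetical choices, not anything from the text:

```python
import hashlib
import time
from collections import defaultdict

class DistinctivenessGate:
    """Sketch: treat a request as safe to emit only once at least `k`
    distinct users have issued the identical request (URL + parameters)
    within the time window. A one-off canary URL never clears the bar."""

    def __init__(self, k: int = 1000, window_seconds: int = 3600):
        self.k = k
        self.window = window_seconds
        # fingerprint -> {user_id: timestamp of last occurrence}
        self.seen: dict[str, dict[str, float]] = defaultdict(dict)

    def _fingerprint(self, url: str, params: dict) -> str:
        # Canonicalize parameter order so identical requests hash identically.
        canonical = url + "?" + "&".join(f"{k}={params[k]}" for k in sorted(params))
        return hashlib.sha256(canonical.encode()).hexdigest()

    def allow(self, user_id: str, url: str, params: dict) -> bool:
        fp = self._fingerprint(url, params)
        now = time.time()
        self.seen[fp][user_id] = now
        # Forget users whose last occurrence fell outside the window.
        self.seen[fp] = {u: t for u, t in self.seen[fp].items()
                         if now - t <= self.window}
        # Non-distinctive (crowd-sized) requests leak no new information.
        return len(self.seen[fp]) >= self.k
```

With a real k of, say, 1000, "weather in Berkeley" quickly blends into the crowd, while a rare attacker-controlled URL stays distinctive and blocked. A production version would also need to decide what to do with requests while they wait to clear the threshold.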
60Viability is not a single dimensional thing, it's a weaving together of multiple interrelated sub-viabilities.
  • Viability is not a single dimensional thing, it's a weaving together of multiple interrelated sub-viabilities.
    • For use cases to activate they have to be not only technically viable, but also socially viable.
      • If it requires a network of people to use, there has to be a network within arm's reach of a user for it to be viable.
    • To be fully viable long-term you also need to be financially viable.
      • The whole enterprise has to, on long enough time scales, take in fewer resources than it creates, in order to be self-sustaining.
61Corrupt systems are corrupting to their participants.
  • Corrupt systems are corrupting to their participants.
    • To stay alive in the system you must corrupt yourself.
    • An auto-enshittifying insatiable social vortex.
62"I would simply do X" implies "I think everyone working on that problem is an idiot."
  • "I would simply do X" implies "I think everyone working on that problem is an idiot."
    • Perhaps the people working on it see constraints that are not obvious from afar?
    • Maybe you're the idiot?
63The Saruman mindset assumes its holder is infallible.
  • The Saruman mindset assumes its holder is infallible.
    • It then plans strategies on top of that fundamental assumption.
    • A very dangerous strategy, because no one is infallible.
      • Duh!
    • Real strategies need to be resilient to the mundane reality that boring things are hard and everybody is fallible.
64Founders of successful, large companies often think that employees who join later are lazier.
  • Founders of successful, large companies often think that employees who join later are lazier.
    • Yes, those employees are much less motivated to give every ounce of capacity to the company.
    • But that's not necessarily because they're lazy, it's because they have orders of magnitude less exposure to the upside.[zy]
    • The founder has many, many, many orders of magnitude more exposure to the upside, of course they "work harder".
    • Often that manifests as shaming the new employees and acting like they're lazy.
    • They turn what is a cost-benefit discussion into a faux moral issue.
      • "If you don't care about my mission and want to devote your entire life to it then you're a bad person."
    • Sharks don't feel compunction about twisting the other person's arm, shaming them into doing something against their own interest.
65"The optimal level of fraud, waste, and abuse is not zero."
  • "The optimal level of fraud, waste, and abuse is not zero."
66The noisier the environment, the harder it is to detect the true signal.
  • The noisier the environment, the harder it is to detect the true signal.
    • A bigger haystack for the same-sized needle.
    • We live in an environment more cacophonous than at any other point in history.
67Imagine you flip the direction of gravity.
  • Imagine you flip the direction of gravity.
    • "Gravity flipped direction. Things fall up now."
    • "Got it."
    • "... Do you? When you release something it now flies upward at an accelerating rate."
    • "Wait, what?"
    • It's so different, so many multi-layered implications, that it breaks your intuition and you'll constantly be surprised by it.
68Reflecting a bit on the game theory of discipline in ships in the days before radio.
  • Reflecting a bit on the game theory of discipline in ships in the days before radio.
    • Navy ships have to have extreme discipline ("tight ship") because they are a pocket of society kept away from the ground truth of the rest of society for extended periods of time.
    • If the captain allows a little slip in the rules, a compounding situation can spiral out of control into mutiny, with no recourse.
    • In society, if you get a "mutiny," you'd call in the bigger guns to bring order.
    • But in a ship pre-radio there's no big guns to call in.
      • Even post-radio, there are no big guns that can arrive right away.
    • The deterrence works only indirectly; when the ship returns to land, the big guns might punish the people who broke the rules when you were out at sea.[zz]
    • The effectiveness of that deterrence depends on the priors; how proactively and consistently did the big guns lay down the law when ships returned back to shore in the past?
69I had a nightmare about corrupted gods.
  • I had a nightmare about corrupted gods.
    • Imagine a social group of people who all believe they are basically gods.
      • In some cases, the rest of the world tells them that they are gods based on the resources they have and what they've accomplished in the past.
    • They aren't gods; they are just humans… but they do command powerful resources, so their actions have large implications.
    • The external world starts to get nervous about the amount of power these gods have so they start throwing tomatoes at them.
    • Gods can't talk with mere mortals, so they form private social groups to converse with other gods.
    • As the gods talk amongst themselves, they get more defensive.
    • "We're the ones who are persecuted! They would never throw tomatoes at other mortals. Don't they understand we're the gods? Well if they want to be mean to me, I'll show them what I can do."
    • Now the gods have done their heel turn.
      • "Look what you made me do."
      • "I'll spare anyone who bows down to me."
      • "... Unless I change my mind tomorrow."
    • Thankfully nothing like this dynamic exists today!
70Insights need to flow to stay fresh and healthy.
  • Insights need to flow to stay fresh and healthy.
    • Imagine insights as streams of understanding.
    • When they settle they become pools.
    • If they pool, they become reflective and run the risk of becoming narcissistic or fetid breeding grounds for intellectual mosquitoes.
    • Some of those mosquitoes carry malaria!
71When you're working on your highest and best use, others find what you're doing impressive and you find it easy and fun.
  • When you're working on your highest and best use, others find what you're doing impressive and you find it easy and fun.