Bits and Bobs 1/8/24

1. Early adopters are more engaged.

There's an inherent self-selection bias: they are by definition more engaged, more willing to try new things than the rest of the user population.

This means that a feature's early engagement numbers will often look better than its numbers at scale.

The exception is if there's some kind of self-improving product quality that grows with use, e.g. a network effect.

The question for a new feature's engagement is: what portion of the engagement is due to the self-selecting effect, and what portion is structurally due to the quality of the feature?
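A toy decomposition makes the selection effect concrete (the cohort sizes and rates below are made-up, purely illustrative): hold the feature's intrinsic quality fixed, and blended engagement still sags as the rollout moves past the self-selecting early cohort.

```python
# Toy numbers only: the feature never changes; only the mix of who uses it does.
early_rate, mainstream_rate = 0.60, 0.20   # hypothetical per-cohort engagement
early_pool = 0.05                          # assume the first 5% are early adopters

for adoption in (0.01, 0.10, 0.50, 1.00):  # fraction of the user base reached
    early_share = min(early_pool, adoption) / adoption
    blended = early_share * early_rate + (1 - early_share) * mainstream_rate
    print(f"adoption {adoption:>4.0%}: blended engagement {blended:.0%}")
```

In this sketch, engagement drops from 60% to 22% across the rollout even though nothing about the feature itself changed.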

2. A product with self-accelerating quality is significantly more valuable than one without it.

Self-accelerating quality can come from network effects, ecosystem effects, etc.

Most products do not have this property, but it's actually not that rare.

It's just a subtle thing, hard to discern unless you know what you're looking for.

It's like foraging for a specific kind of edible mushroom in a forest.

Take the time to forage; don't just eat the first mushroom you find.

3. Tightening the security model of a widely used software system post hoc is extremely challenging.

If you want to support as many existing "good" uses as possible, you'll have to design a combinatorial explosion of finicky, oddly-shaped carve-outs.

You can think of this as taking a fractally wrinkled, living sprawling thing and trying to cram it into a new, smaller box. You'll need a very weirdly shaped box to fit it.

Those finicky carve-outs will feel over-complicated and arbitrary, and have tons of extremely detailed new surface area to design.

This is made orders of magnitude harder if the system is based on open standards and you have to coordinate with many other designers.

My heart goes out to the poor folks working on the APIs to deprecate third-party cookies.

4. Quiet threats are more dangerous than noisy threats.

Some threats are noisy, hard to ignore when they happen.

The more dangerous threats are the dogs that don't bark.

Noisy threats create an interrupt; you don't need to constantly think about them because they will spring into your attention.

But dogs that don't bark require you to actively take notice of them.

They can sneak up on you without you noticing.

5. If you extrapolate any trend to infinity, it often seems to end in some kind of dystopia.

In the fullness of time, before balancing loops kick in, everything ends in dystopia.

Luckily, unforeseen and hard-to-imagine balancing loops almost always kick in before that.

But it's hard to imagine the balancing loops that will show up, and easy to see the current trend and its trajectory.
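A minimal sketch of that dynamic, using textbook logistic growth as the stand-in balancing loop (the rate and carrying capacity are illustrative, not drawn from any real trend):

```python
# Same early growth rate, with and without a balancing loop
# (a carrying capacity K that damps growth near the limit).
r, K = 0.5, 100.0          # growth rate per step, carrying capacity
naive, damped = 1.0, 1.0

for _ in range(20):
    naive += r * naive                       # pure extrapolation: compounds forever
    damped += r * damped * (1 - damped / K)  # logistic: the loop kicks in near K

print(f"naive extrapolation after 20 steps: {naive:,.0f}")   # ~3,325
print(f"with a balancing loop:              {damped:,.0f}")  # saturates near 100
```

For the first few steps the two curves are nearly indistinguishable, which is exactly why the dystopian extrapolation is so easy to see and the balancing loop so hard to imagine.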

6. The default state of a city is alive.

The default state of a city is alive. The default state of a company is dead. Why?

In Scale: The Universal Laws of Growth, Innovation, Sustainability, and the Pace of Life in Organisms, Cities, Economies, and Companies, physicist Geoffrey West documents universal scaling laws that show up everywhere.

Companies behave more like living things: they grow to a point and then seemingly inevitably die.

There are exceptions, the bristlecone pines of corporations: things like family-owned teahouses in the mountains of Japan that have survived for hundreds of years.

They hit a ceiling of some kind.

Cities, on the other hand, die very rarely.

With exceptions, like ghost towns.

They keep growing at a somewhat compounding rate.

Why do cities and companies behave so differently?

Perhaps the difference comes from the difference between swarms and individuals.

A city is a swarm. A company is an individual.

For an individual to survive, that singular, big thing must survive.

For a swarm to survive, at least one small member of the swarm must survive.

An individual is fragile. A swarm is antifragile.

This is a massive difference.

The difference between the two gets stronger and stronger as:

The number of individuals in the swarm grows

The diversity of the swarm grows.

Diversity here would include things like geographical dispersal.

This is because as those grow, the probability that every member of the swarm dies falls off exponentially, so the odds that at least one survives climb rapidly toward certainty.

For swarms past some threshold of size and diversity, they effectively become immortal.
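A back-of-the-envelope sketch of why, assuming diversity makes members' fates roughly independent and each member survives a severe shock with probability p (both assumptions of the sketch, not measurements):

```python
# If each member independently survives a shock with probability p, the
# chance the whole swarm dies is (1 - p)**n, which collapses exponentially.
p = 0.10  # illustrative per-member survival odds for a severe shock

for n in (1, 10, 100, 1000):
    print(f"n={n:>4}: P(at least one survives) = {1 - (1 - p) ** n:.6f}")
```

Even with each member facing 90% odds of death, a swarm of 100 diverse members survives with probability 0.99997; this is what "effectively immortal" looks like.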

It's the same observation Stewart Brand made at a talk I recently went to: "Individual civilizations die all the time. Civilization as a whole has never died since it was created."

It's also similar to the difference between a specific living thing vs life as a whole.

This is something that the Netflix series Life on Our Planet makes very visceral as it documents the various mass extinctions over the eons: the fragility of individual species and the resilience of life as a whole.

7. What makes something a swarm vs an individual?

The difference is agency: does the entity exercise coherent agency?

The amount of agency is a spectrum.

Does it exhibit coherent agency?

Do we allow it to have agency?

Do we require it to have agency?

Society's laws require individual humans to have agency, and hold them accountable for the repercussions of their decisions.

A company is also a thing that we legally allow and expect to exercise coherent agency.

Examples along the spectrum:

Individual humans

A corporation

Bee colonies

Neurons in a brain voting on which thing in the environment to navigate to

Cities

Life writ large

The Technium

8. What's a swarm and what's an individual is largely a matter of perspective.

From outside the system, it often looks like an individual.

From inside, it often looks like a swarm.

A company from outside looks like a factory. From the inside it looks more like a rainforest.

As some rando on the internet once said, a company is more like a slime mold than an elephant.

Sometimes where an entity sits on the spectrum can shift quite suddenly.

An example is that before the Civil War, "United States" was a plural noun (emphasizing the swarm of states), but afterwards it was a singular noun (emphasizing the collective federation).

9. Our agency is what makes us mortal.

A thing that can be said to exercise agency is a thing that can die.

It is dead once it is no longer able to exercise agency.

Once it has passed through an absorbing barrier, has made its last move, is in static equilibrium, is no longer a live player.

Making a decision is picking one strand of possibility to carry into the future and committing to it.

It requires limiting the cone of possibility, collapsing the wave function.

These decisions are beautiful and important; they are what cause things to happen and create meaning; they are what it means to be alive.

But they also make it possible to pick paths that are not viable.

Contrast this with a swarm, which does not exhibit coherent agency.

Each action in the swarm creates more divergence; more coverage of the space of possible paths that someone in the swarm could take.

The likelihood that at least one member of a swarm has a viable path forward is significantly higher than the chance that any specific individual has a viable path forward.

The swarm can live even while any of its constituent individuals die.

In fact, it survives because the individual components often die.

Individuals experiment and are culled by the ground truth of reality.

That creates the space for new individuals that are viable to grow.

This is a kind of self-pruning diversity.

This characteristic is what makes an individual fragile and the swarm antifragile.

10. An ecosystem transforms an individual product into something more swarm-like.

A single product is an individual; fragile.

An ecosystem surrounding a product makes it into a qualitatively different thing that is much bigger than itself, that is antifragile.

This is one of the reasons popular open-source ecosystems have an air of immortality to them.

11. Emergent things will happen in the system, whether you think about them or not.

The systems that have truly kafkaesque outcomes are often composed entirely of people who are individually very smart... and know it.

This is the phenomenon described in The Smartest Guys in the Room.

If you know you're not smart in a given context, you'll be on the lookout for ways you're wrong.

If you think you're smarter than everyone else, you won't seek out as much disconfirming evidence.

More threats will become "dogs that didn't bark" kinds of threats, creating an overall more dangerous situation.

12. Trying to control an uncontrollable thing is a recipe for frustration and wasted effort.

Just because you can't control it doesn't mean you can't influence it.

Influence is a lighter form of control: cheaper, with a different profile of expected leverage.

To influence something well requires you to understand it.

13. The process of note-taking to think doesn't stop with the note.

You need to let it marinate, chew on it a bit, riffle through it and stochastically collide it with other thoughts.

Note-taking is just one step in the process of growing a personal knowledge garden.

A knowledge garden doesn't end with spreading the initial seeds around. You have to tend to them.

14. Data-backed analysis only helps navigate the past.

To take action in the future requires a theory (even if an implicit one) to make bets.

A theory is a choice, a collapse of the wave function.

15. Where you can, optimize at the level of the system, not the individual.

Optimizing at the individual level is easy.

The effects of your action are immediate and obvious, but low-leverage.

Optimizing at the level of the system is hard.

The effects of your action are diffuse and non-obvious, but extraordinarily high leverage.

Every action has effects at the individual and system level, even if you don't realize it.

For example, when optimizing for individual performance in teams you inadvertently select for super-chickens that destroy value around them.

16"Wrinkling your brain" is "adding complications to your previously overly simplistic mental model"

Wrinkling something makes it more nuanced, more complex, more capable of handling subtlety.

Fractally-wrinkled things are more resilient.

You can't wrinkle someone's brain without their permission.

They have to be motivated to accept brain wrinkling in that context from you.

It's much easier to wrinkle someone's brain when they already have momentum in the direction where the new wrinkles would form.

17. Making something neat and tidy makes it more likely to die.

Making something "neat and tidy" entails cutting off the parts that don't fit in the box.

The kinds of things you might cut to make it neat and tidy:

The random stray fibers that aren't important

Sometimes fingers and toes

For glorious weirdos, their wings!

Neat and tidy is like optimizing for efficiency; it messes with resilience and upside.

18. I was chatting with Brie Wolfson recently and she brought up the concept of "soulfulness".

Not just transactional and opportunistic, but intentional, for a bigger purpose.

An end, not just a means. An infinite game mindset.

Soulfulness is related to craftsmanship and artisanal approaches.

Soulfulness: the thing you're doing has a point. It's something to be proud of, not just what you made happen, but how you made it happen.

19. When you hit a wave at the right time, it's magic.

Instead of pushing a rock up a hill, you're skiing downhill.

It's hard to catch a wave; you have to be in the right place at the right time!

So you need to have lots of attempts at it.

And you need lots of durable tools built up that will help you catch some future wave, so the likelihood that you have the right tool at the right time to catch any particular wave is higher.

20. Our relationship with the systems we operate within differs at different stages of vertical development.

(Using Kegan's labels here; my friend Dimitri has a great summary of vertical development.)

In a self-sovereign mindset, we can't conceive that anything outside of ourselves could possibly matter.

In a socialized mindset, people have an implicit (but overly rigid and dogmatic) sense that systems matter: "What do you mean what is my opinion on which sportsball team is best? This is a Panthers town, it's inconceivable that anyone could root for anyone else. If we didn't all believe that, we wouldn't be able to trust each other and society would collapse!".

In a self-authoring mindset you lose the systems sense: "Systems primarily function to constrain the behavior of individuals and sap life force. I am my own thing".

In a self-transcending mindset you rediscover the value of systems, but in an emergent, coevolving, flexible sense: "I am a system myself, interwoven into the fabric of systems all around me, influencing them and being influenced by them."

21. Heirlooms are like living things.

They are not just the object, they are the story behind the object.

If the story dies it becomes just an object.

The story requires humans to transmit it and give it significance: a carrier.

The story can be documented in writing and persist, but with no one to read it and find it significant and worthwhile, it lies dormant, very unlikely to ever wake again.

Not too dissimilar from code: every time a bit of code is executed, it's a vote from the executor that it matters and should continue to exist.

The Pixar movie Coco also makes this theme an explicit plot point.

The longer the streak of keeping the story alive, the more pressure the current carrier of the story feels to transmit it into the future.

However, a countervailing force: the farther the carrier is from the original events, the less meaning the story might have for them personally.

If anyone drops the hacky sack, the game is over.

22. The lifeforce of complex adaptive systems comes from their ability to adapt.

Being alive, and self-repairing, is what makes them resilient.

But it's possible for a previously complex adaptive system to lose its ability to adapt.

The process of increasing efficiency requires creating structure, but that structure is ossification that reduces the ability to adapt.

A common pattern is a successful individual complex adaptive system becomes more efficient until it turns itself to stone and cannot change when surrounding conditions require it.

23. Systems that have a structured formal language will have interesting applications with LLMs.

Writing code (or any formally structured document, e.g. in a Domain Specific Language) requires two things to be true:

syntactic correctness (is this thing described in a legal way)

semantic correctness (does it do the things that I want it to do).

The latter is by far the more important, but most human effort in programming and distilling formal documents in a given DSL is occupied by the former.

Systems like code and formal DSLs have a form of built-in ground truthing for syntactical correctness: "does this compile? Are there any errors?".

LLMs often hallucinate, but if you have a formal DSL you can automatically achieve syntactical correctness. "That didn't compile, I got this error: (error). Try again."

The LLM is throwing a bit of spaghetti at the wall, but it's possible to automatically check whether it stuck to the wall or not.

This allows you to use the LLM's indefatigability to help find things that would have been a time-consuming pain to do yourself.

This gives you a kind of self-smoke-testing in these contexts. So in those domains you can almost completely remove the mental energy spent on syntactical correctness, freeing up more to focus on semantic correctness.
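A minimal sketch of that retry loop, in Python; the llm() callable here is a hypothetical stand-in for whatever model API you're using, and ast.parse() plays the role of the "does this compile?" ground truth:

```python
import ast

def generate_parseable(prompt: str, llm, max_attempts: int = 5) -> str:
    """Ask the LLM for code, re-prompting with the error until it parses."""
    feedback = ""
    for _ in range(max_attempts):
        candidate = llm(prompt + feedback)
        try:
            ast.parse(candidate)  # syntactic ground truth: does it parse?
            return candidate      # semantics still need a human reviewer
        except SyntaxError as err:
            # Feed the error straight back: "That didn't compile... try again."
            feedback = f"\n\nThat didn't compile, I got this error: {err}. Try again."
    raise RuntimeError("no syntactically valid candidate within budget")
```

The same shape works for any DSL with a checker: swap ast.parse() for the compiler, schema validator, or linter that defines "legal" in that domain.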

Of course, semantic correctness is significantly harder to reason through than syntactic.

When you're struggling through the details of syntactical correctness, you have more of the state loaded up in your head and your head is "in the game," thinking through the implications of each bit of functionality; you're on the dance floor with the code.

This also helps you intuit semantic issues in the code more easily.

When you're doing only the semantic correctness check, you have a more detached, passive perspective, from the balcony.

You might not grok subtle semantic correctness issues you would have if you had written it yourself.

Still, with LLM-written docs we can act like a senior developer reviewing an intern's work: it's likely to do roughly what we wanted, and we can take for granted that it is syntactically correct (that is, it compiled).

The important work shifts from creating syntactically valid code to verifying its semantics: less of a programmer mindset, more of a QA one.

There's a lot of places where this "senior expert reviewing an intern's work" applies, but code and other formal DSLs are special in that they can self-smoke-test.

24. I've found that I use LLMs for certain curiosity-style questions I wouldn't have even bothered searching for in the past.

Search relies on the SEO swarm of content farms to have guessed that someone will have that exact question and write up some poorly-researched schlock for it.

LLMs have a similar level of factual accuracy as random content-farm content, but they skip the step of "someone in the ecosystem had to stochastically guess that people might have this question".

LLMs can provide bespoke, just-in-time content-farm style content on demand... without the annoying ads.

25. Everyone tends to prefer that things they interact with fit neatly into boxes.

In the (excellent!) Netflix animated movie Nimona, the category-bending main character is asked "What are you?" and she replies, with finality, "I'm Nimona."

Each individual is a unique, fractally complex and wrinkled, ever-evolving shape.

What gives them meaning and potential is precisely that complex shape.

But to interact at scale, we need cleaner interfaces and abstractions: putting a thing in a box.

When things are cleanly in boxes, it becomes much more efficient to interact with them, allowing interacting with orders of magnitude more things.

The system you're a part of would rather you just be in a box, and if you're not careful that's how you'll think about yourself, too.

People around you will rarely ask you what makes you special and different, the ways you don't fit in whatever box the observer would rather think of you in.

It's up to you to not put yourself in a box (except when doing so helps you plug into a system you want to be a part of).

This is another thing that the people David Brooks calls Illuminators do: they help remind you that you are not the box that you have put yourself in.