Bits and Bobs 2/5/24

1. People tend to focus on big rocks, but they should focus on the acorns.

Big rocks are already big. They're obvious points of leverage.

"If we could just move this rock, it would move the needle!"

But big rocks are also the hardest ones to move.

In a product context, big customers are often the ones with the strangest, most bespoke demands.

Some forces grow super-linearly with a rock's size, so big rocks are pound-for-pound harder to move.

The alternative of going after 1,000 pebbles can feel underwhelming.

Especially if each one has some kind of non-trivial fixed cost.

But that's why it's best to go after acorns.

Acorns are small now, but have the potential to grow on their own into towering oak trees.

Rocks are not alive. Acorns are possibly alive.

This makes a significant difference, and gives acorns their self-accelerating upside.

How do you help acorns grow?

Any given acorn might or might not grow into an oak tree.

Luckily, there are a lot of cheap things to do to stochastically help lots of acorns sprout.

Spread out fertilizer.

Give them all a bit of shade.

Make sure there's water.

This way, you can take a meta-bet on lots of acorns.

You can late-bind your decision on which one to invest the most in, waiting to see which ones gain early momentum.

This allows you to invest most efforts into the ones that turn out to be viable… without needing to know which ones are viable a priori!

Many things that will become great are not great today.

If you have small investments in a diverse set of things that might become great, the likelihood at least one of them sprouts into something great is much higher.

2. Too-short time horizons lead to bad decisions.

Imagine a given plan as being a path, where the height is the value of that point.

Imagine comparing two possible paths: one that has a slight linear increase, and one that has a very slight dip, but then a compounding improvement, and is quickly orders of magnitude beyond the linear path.

If you have a long-enough time horizon, you can clearly see that the latter is much better.

But if you have a too-short time horizon, all you will be able to see is the dip of the latter, not the imminent peak.
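A toy numeric version of this comparison (the slope, dip, and growth rate are all invented for illustration):

```python
# Two hypothetical plans: a steady linear climber vs. a path that dips
# first, then compounds. All constants are made up for illustration.
def linear_path(t, slope=1.0):
    return slope * t

def compounding_path(t, dip=5.0, rate=0.5):
    # Starts in the red (the dip), then grows geometrically.
    return -dip + 1.5 ** (rate * t)

for horizon in (3, 30):
    print(horizon, linear_path(horizon), compounding_path(horizon))
# With a horizon of 3, the compounding path still looks worse (about -3.2 vs 3).
# With a horizon of 30, it is more than an order of magnitude ahead (about 433 vs 30).
```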

A short-term optimizing agent can never climb down a hill, even if doing so would bring it to the foot of a much larger peak.

Another reason short time horizons lead to bad decisions: real-world paths are not straight. They loop around semi-randomly in the details, but have consistent large-scale arcs.

Imagine looking 10 time steps ahead and drawing a vector from here to there: the vector points in mostly the same direction even as your short-term path jiggles randomly.

But if you look 1 step ahead and do the same procedure, your vector will swing around chaotically.
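A small simulation makes the "swinging vector" concrete (the step distributions are invented): the heading you get by sighting 10 steps ahead varies far less than the one you get by sighting 1 step ahead.

```python
import math
import random

random.seed(0)

# A random walk with steady eastward drift plus vertical jitter.
steps = [(1.0 + random.uniform(-0.5, 0.5), random.uniform(-2.0, 2.0))
         for _ in range(1000)]
xs, ys = [0.0], [0.0]
for dx, dy in steps:
    xs.append(xs[-1] + dx)
    ys.append(ys[-1] + dy)

def heading_spread(lookahead):
    """Std. dev. of the heading angle when sighting `lookahead` steps out."""
    angles = [math.atan2(ys[i + lookahead] - ys[i], xs[i + lookahead] - xs[i])
              for i in range(len(xs) - lookahead)]
    mean = sum(angles) / len(angles)
    return (sum((a - mean) ** 2 for a in angles) / len(angles)) ** 0.5

# The 1-step heading swings around far more than the 10-step heading.
print(heading_spread(1), heading_spread(10))
```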

Of course, this does not mean you should always think exclusively long-term. A couple of reasons why it's also possible to think too long-term:

If you're going to die in time step 2 without an intervention, then thinking about time step 10 is a dangerous distraction.

You have to clear the bar of survival to even be a going concern in time step 10!

Things get less certain over time at a compounding rate. 10 steps ahead is at much lower resolution than 1 step ahead.

Still, if the result is orders of magnitude different, you can see it clearly even with the extra noise and uncertainty.

3. A pricing vision needs to be co-designed with a product vision.

They are fundamentally intertwined; mismatching them will lead to a non-viable plan.

The right match can be orders of magnitude better than the wrong match; a switch from pushing a boulder uphill to skiing downhill.

Often the change to a different pricing structure follows a J curve: a slight dip in the short term before (hopefully) a much better path.

There is no analysis to make a pricing decision obvious or a no-brainer; to some degree it requires a strategic declaration from the leader at the very top, and a commitment to the new plan.

4. Tailwinds and flywheels look superficially similar but are very different.

Both lead to returns greater than the investment; not pushing a rock up a hill but skiing downhill.

Tailwinds are exogenous; they come from outside the thing being accelerated.

Flywheels are endogenous; they come from inside the thing being accelerated.

The conditions outside a thing sometimes change as the context changes.

If you've caught a flywheel, you'll have internal momentum to help you catch your footing in the next context.

But if you've only caught a tailwind, you might be left adrift in the new context.

5. The ability to think strategically is not only intrinsic but also situational.

Even if you have the intrinsic ability to think strategically, you still need space to do so.

If you're starting off in a tactical frame, then you cannot create a strategic argument within it.

The realities of day-to-day execution (e.g. mundane details will take up every square inch of space you give them) tend to conspire to impose a tactical frame on an analysis.

6. Blockchains, Twitter threads, and improv scenes all have something in common.

Each one has a kind of "yes, and" logic to it.

For each, the participants are all making intrinsically motivated decisions about which of the possible sub-threads is worth building on.

People intuitively assume that "yes, and" systems cannot be rigorous–that you need to have a "no" to have a hard truth.

But it turns out that if participants are intrinsically motivated to participate and vote of their own free will, they will naturally pick the things they think are "worth it".

They must–they have only limited time to invest, so they choose the subset they like the best.

So a decision to not invest in something is effectively a "no", just a less harsh one.

When lots of people agree, this can create tons of momentum.

And of course, the more people who have already "voted" for a given sub-thread being the "real" one, the easier it is to go with the flow and not fight it.

This can create a quickly-congealing wisdom of the crowds without any explicit coordination.

This force is powerful, if a bit hidden. Things like "leading by gardening" work because of this dynamic.

7. Knowhow is a special kind of knowledge in people and systems.

Tacit knowledge in a person is called knowhow: experiential knowledge that's pre-linguistic, "felt in your bones" and extremely useful, but impossible to pass on to others directly.

You could say that you "understand" something when you don't just feel it in your bones, but can also talk about it and convince another person of it.

Knowhow can be load-bearing in your actions even if it's beyond your understanding.

Systems (e.g. societies, orgs) can also have knowhow.

It's tempting, when looking at some long-living human system, to think, "it's a mess, totally ad hoc, not even scientific. We should remake it in a modern way."

But just because a human system doesn't understand itself (can't make itself legible to others via scientific principles) doesn't mean it isn't full of wisdom.

Myths are a form of society's knowhow.

8. Good writing often brings clarity to things the reader already knew in their bones but didn't know how to articulate.

9. Labels are a means that accidentally becomes an end.

It takes time for an observer to absorb nuance.

Continuously varying phenomena are a kind of (one-dimensional) nuance.

We'd love to be able to understand each thing we interact with–each person, each living thing, each object–as its own, fractally nuanced thing.

But we don't have time; we have to interact with too many things, and the number of things we have to interact with has exploded in the modern age.

So we create discrete buckets and then we put the real things into those buckets.

The buckets might also be called labels–a label is a replacement concept for the real thing.

We start off with labels because they're extremely useful. But then we over-rely on them.

The bucket is an illusion that we created for ourselves for convenience.

It obscures the underlying nuance, making a fractally wrinkled thing appear smooth, simple.

The bucket is a cage, preventing the nuance from escaping.

We don't just come up with buckets ourselves; others create buckets that we all coordinate around and use.

Morningstar Ratings.

Letter grades in academic contexts.

Perf designations in any company's perf process.

The more people that use a given bucket, the harder it is to avoid using the bucket.

But buckets can create misleading illusions.

Imagine a smoothly varying dimension, with object A infinitesimally below a bucket boundary and object B infinitesimally above the bucket boundary.

A and B get different bucket labels despite being practically identical.

We think of A and B as discontinuously different. We might assign different processes for each bucket, as though they are inherently different.
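A tiny sketch of that boundary effect, with invented letter-grade cutoffs:

```python
# Invented cutoffs that chop a smooth 0-1 score into discrete buckets.
def bucket(score):
    if score >= 0.9:
        return "A"
    if score >= 0.8:
        return "B"
    if score >= 0.7:
        return "C"
    return "D"

a, b = 0.7999, 0.8001  # practically identical underlying scores
print(bucket(a), bucket(b))  # prints "C B": discontinuously different labels
```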

But the buckets are made up; they are a fiction.

What is real is what's in the bucket.

The bucket is what we all coordinate on with everyone.

So the bucket becomes the real thing; it goes from being a means to being an end.

10. It's tempting to pick and choose the best of other systems to assemble together.

But this is effectively sewing together a Frankenstein's monster.

The systems those components come from are living, organic things.

You can't lop off an arm and then sew it onto another assemblage of found body parts and have it spontaneously burst into life.

Systems are living things; they cannot be built, they must be grown.

11. "Oh, the data is 80% precision, that's good!"

"Yes, but remember you don't know which 80% is good!"
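A minimal illustration of the quip (the records and ground truth are invented): the aggregate precision number says nothing about which records are the bad ones.

```python
# Ten flagged records; in practice you wouldn't know which are truly good.
flagged = ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j"]
truth = {"a", "b", "c", "d", "e", "f", "g", "h"}  # hidden ground truth

precision = sum(item in truth for item in flagged) / len(flagged)
print(precision)  # prints 0.8, the summary statistic
# Nothing in `precision` points at "i" and "j" as the bad records;
# you still have to verify each item before trusting any single one.
```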

12. Questions like "what is the TAM" can kill a subtly brilliant strategy.

A subtly brilliant strategy is one that looks similar to others but has the potential to grow into something great: an acorn.

Often these strategies are a figure-ground inversion; a subtle reframe of what exists today that implies a very different path for continued investment.

These subtly brilliant strategies need a bit of shade at the beginning.

Questions like "What is the TAM of this" are valid questions, but they're like harsh sunlight.

And often at the very beginning questions like "what is the chance this is viable and will cheaply grow into something that might become great" are more important than "how massive will this tree be, assuming it successfully grew".

Weeds can handle direct sunlight: resilient in a boring, low-ceiling way.

Oak seedlings cannot: they're resilient in a big way but only once they've grown strong.

13. Things people intrinsically care about have an easier-to-clear viability bar.

For a thing to be viable for a given user, its expected cost has to be less than its expected value.

Cost and value here are not just financial considerations but also things like opportunity cost, frustration, meaning.

When someone cares intrinsically about something, their "expected value" for it is higher, meaning it can tolerate a higher expected cost and still be viable.
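The viability bar as arithmetic, with invented numbers: an intrinsic "mission premium" can lift expected value above an otherwise-prohibitive cost.

```python
# All numbers invented for illustration.
def viable(expected_value, expected_cost):
    return expected_value > expected_cost

financial_value, expected_cost = 50, 70
mission_premium = 30  # the fuzzy, intrinsic lift: meaning, mission, principles

print(viable(financial_value, expected_cost))                    # False
print(viable(financial_value + mission_premium, expected_cost))  # True
```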

If the person doesn't have fuzzy things like "mission" and "principles" lifting up the expected value of a given thing, then the expected value becomes principally a cold, hard financial calculation.

Mission-driven people are more willing to put up with challenges to achieve their goal than mercenaries.

People are not necessarily intrinsically mission-driven or mercenary; it's highly context-dependent!

14. Companies present an illusion of unity but are actually made up of thousands of individuals.

This illusion is even supported in law with concepts like the "corporate veil".

But of course companies are not some unitary machine operating with a completely aligned purpose.

They're more like a swarm of bees draped in a sheet.

There's some kind of emergent phenomenon with a post-hoc rationalization to explain the decisions that were made.

Though obviously internal structure like company priorities will have a significant impact on what emerges, making the post-hoc rationalization much easier; it's already mostly aligned!

This same logic applies to humans and the way our brains work.

For an entity to make a bold decision, it often has to understand the situation.

It's easier for one mind to understand a thing than for thousands of minds to all individually understand it.

This is one reason founder-led companies can have better returns.

The founder is someone that everyone recognizes as having the right to steer.

That means, even if an employee doesn't understand why the founder is steering that way, they're still likely to go in the direction they've steered.

The result is that founder-led companies can steer around obstacles that the company itself does not "understand".

15. As organizations grow, they must increasingly become like machines.

Living things can engage with lots of other living, nuanced things around them.

But living things have a fundamental limitation of how many things they can interact with.

Past a certain scale, you simply must reduce nuance to the dimensions that matter most, operationally.

Summary statistics, putting things in buckets, etc.

Scale vs nuance is one of those fundamental tradeoffs that recurs all over the place.

16. I like the sponsorship model of supporting people around you.

A typical model for support in an organization is traditional management.

An alternative model is sponsorship: supporting the other person, but without boxing them in.

Traditional management is akin to an aggregator: supporting the person from above, but also constraining what they can do.

Sponsorship is akin to a platform: supporting the person from below, not setting any ceiling on what they can do.

Of course, organizations must have formal management to be administrable at scale.

Still, a sponsorship mindset is a kind of figure-ground inversion that leads to different kinds of investment in the people around you.

17. A single existence proof of a thing makes it much easier for others to try, too.

Without an existence proof, a wide-open ocean might be a blue ocean… or a dead ocean.

If it's the latter, there are hidden constraints and costs that likely make it non-viable in non-obvious ways.

But if there's even one existence proof then it implies it's not a dead ocean.

The longer the existence proof has existed, and the more it appears to be thriving, the stronger the signal.

18. When you are boxed in, it's hard to do creative work.

Creative work is best done based on intrinsic motivation.

Creative work has the potential for unplanned upside.

Intrinsically motivated work is often actively fun–a thing you choose to do partially for its own sake.

But a box presumes a certain structure or goal.

In corporate contexts, an OKR is an example of a box.

19. If you go to all the work to acquire a trophy, don't leave it in a pool of acid.

Imagine going to great lengths to acquire a trophy: a prize that everyone knows is valuable.

But then after you acquire it you leave it in a broom closet, out of sight.

In the broom closet, some caustic cleaning chemicals are slowly leaking, dripping onto the trophy.

A few months later you open the broom closet and see the now-misshapen trophy.

"Well, I guess the trophy wasn't as good as people thought it was," and you chuck it.

But the problem was not the trophy, it was the acid!

20. Smart people can accidentally create echo chambers.

Smart people are very good at engaging with a diversity of arguments, and producing an argument that wins on the merits.

As a result, they tend to win many arguments.

The arguments they don't win count as disconfirming evidence they can use to get smarter.

They can also use the rate-of-argument-failure to calibrate how well they understand a given domain.

But now imagine that person–partially based on the strength of their arguments–accumulates lots of formal power.

Now, some proportion of arguments they're winning because they're right, and some proportion because they're the boss.

But crucially both situations will feel the same to the boss.

As the proportion of arguments they win due to being the boss increases, it will have a chilling effect.

People will be less willing to bring up arguments they know will be overridden, so they just don't do it.

The boss is still engaging with disconfirming evidence… they're just getting much less of it.

It's easy to erroneously think "My arguments must be good because no one has pointed out where they might be wrong."

As this accelerates, it can create a supercritical state.

The boss has accidentally created a machine for manufacturing confirmation.

21. Just because you don't realize you're creating a system doesn't mean you aren't creating one.

Often the system has exactly the opposite effects from what you intuitively think it will.

When designing a system, almost everything you do will, by default, push its savviest, most empowered members toward an increasingly short-term, local perspective.

22. The evolution of a species looks like a single thread.

But it's actually a succession of particles.

Many of those particles die out, some of which survive just long enough to bud off a new particle or two before also dying.

The particles that persist into the future have an ever-so-slight edge: not a point but a small streak, persisting just a little bit longer, creating a fiber that will be spun into the thread of the collective.

The species is one collective but it's composed of individuals who all die, some sooner than others.

23. It's easy to dump on jargon.

Jargon makes it nearly impossible for a non-expert to understand a conversation between experts.

But jargon isn't primarily an obfuscatory phenomenon.

Jargon is a form of compression within a community of experts.

It makes it easier for the experts to talk to each other about nuanced/large concepts in that area of expertise using shorter labels.

It makes it much easier for the experts to talk, but at the cost of making it harder for outsiders to understand.

Just because the experts find the information useful to compress doesn't mean that it's useful for society, of course.

Any kind of closed system tends to become an echo chamber and then increasingly become more kayfabe than ground truth.

But still, the experts are implicitly voting that the jargon is worth having, so it might be load-bearing.

24. A precept from 1979 that seems more relevant than ever:

"A computer can never be held accountable. Therefore a computer must never make a management decision."

25. When conditions change significantly, the entity composed of building blocks has an edge over the monolith.

A monolith can be highly efficient and well-fit for a given context.

But if a given context changes, the monolith might now no longer be viable, and will be hard to evolve to a point of viability.

But a system that is constructed of modular building blocks can reconfigure itself more easily.

Some building blocks will be used in new ways.

Some building blocks might become vestigial and wither away… but they won't bring down the rest of the system.

Modular things can be duct-taped together into new and novel combinations more easily, which means they are more likely to be viable in whatever context is thrown at them.

LLMs are like magical duct tape.

Modular systems seem to have an edge in this new world.

If you know the world will change but you don't know precisely how, modularize.

26. LLMs make it easier to play with E-Prime.

E-Prime is a version of English that removes the verb "to be".

Words like "is" are a kind of equals sign.

It's how we attach labels to objects and then reason about the simple label and forget about the fuzzy, complex object.

Labels are a box, a cage, for specific objects.

Labels are good for scale, bad for nuance: a cheat code.

Talking in E-Prime forces you to confront how often we use labels as cheat codes.

Talking in E-Prime is hard, an intense mental workout.

But ChatGPT is really great at translating a phrase into E-Prime.

It helps you practice doing it, and helps you see where you were accidentally relying on an overly simplistic label instead of grappling with the nuance.

27. Riffing a bit more on the "in a small town, you're forced to get along with others" observation from last week.

Even if there's someone you dislike, there's often a goldilocks topic that you both agree on, e.g. "Oh, we both like chocolate!"

That's a toe-hold of mutual understanding that you can then use as a starting point to pull yourselves up, with effort, into broader mutual understanding.

These can be hard to find, especially in quick interactions.

Embeddings could help with this; imagine if everything public someone had written was embedded. A computer could then quickly sift through the embeddings and find toeholds of mutual understanding for any pair of people.
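A sketch of that idea, assuming the embeddings already exist (the topics and vectors below are toy stand-ins, not real embedding output):

```python
import math

def cosine(u, v):
    # Cosine similarity: 1.0 means the vectors point the same direction.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy "embeddings" of topics each person has written about.
person_a = {"chocolate": [0.9, 0.1, 0.0], "zoning law": [0.0, 0.2, 0.9]}
person_b = {"baking": [0.8, 0.3, 0.1], "fly fishing": [0.1, 0.9, 0.2]}

# The most-aligned cross-person pair is a candidate toehold topic.
toehold = max(((topic_a, topic_b, cosine(va, vb))
               for topic_a, va in person_a.items()
               for topic_b, vb in person_b.items()),
              key=lambda t: t[2])
print(toehold[:2])  # prints ('chocolate', 'baking')
```

With real embeddings, the same nearest-pair search would run over everything public each person had written.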

28. A few riffs on LLMs.

An intuition for things that LLMs will get right: concepts that Wikipedia has explained well.

Those facts are likely to also ripple out and inform lots of other sources across the internet, making it way more likely that the LLM picks up the pattern.

Remember, the query stream for a system with a quality dynamic coevolves with the underlying quality.

The swarm of users will tend to use it for things it's good at, and the creators of the system will tend to improve behaviors they see real users doing.

LLMs are electric bicycles for the mind.

They are like bikes in that they help you go where you want to go and make your agency have more leverage.

Adding on the "electric" accentuates the underlying bike properties (e.g. you don't need a license for either)… but also changes some of them.

It's difficult to get yourself in too much trouble with a normal bike, but an electric bike is another story.

LLMs as the well-read friend who is eager and easy to talk to, and remembers the big ideas very well but sometimes gets the details wrong.

Good taste and quality control become the human responsibility.

29. A quote from Invisible Cities that Gary Pelissier shared with me, and that deeply resonated:

"The inferno of the living is not something that will be; if there is one, it is what is already here, the inferno where we live every day, that we form by being together. There are two ways to escape suffering it. The first is easy for many: accept the inferno and become such a part of it that you can no longer see it. The second is risky and demands constant vigilance and apprehension: seek and learn to recognize who and what, in the midst of inferno, are not inferno, then make them endure, give them space."