Bits and Bobs 10/16/23

Last week I learned about the "never scar on first cut" principle.

If you apply a fix to an n-count of one, then you will wildly over-fit to that one thing.

It takes at least two data points to get even the beginning of a sense of the variance in that context.

The more samples you have, the higher the fidelity of your model of expected variance.

Seeing patterns makes you more likely to pursue systemic solutions, as opposed to over-fit, overly rigid fixes to that one detail.

"Learn and adapt" vs "prevent and fix"

Think of every event as a dice roll. "What does this incremental dice roll imply about future events I can expect and where I should place bets?"
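To make the dice-roll framing concrete, here's a minimal Python sketch (my illustration, not from the original riff) of why an n of one tells you nothing about variance, and how the estimate sharpens as rolls accumulate:

```python
import random
import statistics

random.seed(0)
rolls = [random.randint(1, 6) for _ in range(1000)]

# With a single roll, sample variance is undefined -- statistics.variance
# raises an error for n < 2. You have a point, not a distribution.
# As n grows, the estimate converges on the true variance of a fair
# die, 35/12 (about 2.92).
for n in [2, 5, 20, 100, 1000]:
    print(f"n={n:4d}  estimated variance={statistics.variance(rolls[:n]):.2f}")
```

With only two rolls the estimate is extremely noisy; by a thousand it sits close to 2.92. That's the same reason a fix calibrated to a single incident is likely to be badly calibrated.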

One strategic frame I don't find that useful: "If we didn't have this [existing, moderately successful product] today, would we proactively build it?"

This seems like a useful frame, a kind of zero-based budgeting technique to make sure you're considering the opportunity cost of keeping a moderate success alive.

But I think the lens is flawed, because in reality you get to take your starting point for granted.

You don't have to prove to yourself that the starting point is viable... by construction it is!

The question is entirely "what kinds of cool things could we achieve, given that we have this starting point?"

The "would we build this if we didn't have it" gets at that core question indirectly, by smooshing together "would this be a valuable starting point" and "would this work".

In most strategic questions, "would this work?" dominates the analysis--the vast majority of seemingly great ideas get stuck in the mundane filter of real-world viability.

But when you have an existence proof that it works, you can focus entirely on the real question: "is the potential value we could create from this starting point worth the direct and indirect cost of maintaining it?"

Risk is weird.

Most everyday failures are tiny, and extremely easy to absorb without breaking a sweat.

The system in this state is self-righting: you nudge it off its balance and it pulls right back to center, automatically.

That can lull you into complacency: "we've been doing this for years and it's never even been close to being out of control, let's step on the gas."

But you're only known to be a self-righting system within a relatively small range of variance; you have no idea what will happen past a critical threshold... or even where that critical threshold might be.

But every system must have such a threshold: the point at which it breaks.

A lot of risk calculations (both explicit and implicit) rely on an assumption of independent, normally-distributed failures.

But the real world isn't like that at all! Things have stacked, inter-related dependencies--every partnership you have is a connection point to a whole chain of dependencies.

Every so often a freak black swan event occurs: a weird confluence of happenstance failures that pushes a smaller player past their critical threshold and takes them out of the game.

This failure is now a bigger, correlated shock for their partners. Most will be able to resolve this larger-than-expected shock... but some won't.

At each stage the shock takes out larger and larger players, which in turn raises the likelihood that it takes out players further downstream.

The downstream players would have successfully withstood the initial shock, but not the cascading, self-accelerating shock.

I visualize this as the exponential domino fall you might have seen at the Exploratorium. The very first one is tiny, but the last one is the size of a door!
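A tiny Monte Carlo sketch (my own illustration, with made-up parameters) of why the independence assumption matters: mix a shared systemic shock into each player's draw, keep every player's individual risk roughly the same, and watch the tail of simultaneous failures fatten:

```python
import random

random.seed(0)

N_PLAYERS, THRESHOLD, TRIALS = 50, 2.0, 20_000

def failures(correlated: bool) -> int:
    """Count players failing in one simulated shock event.

    Independent mode: every player draws their own shock.
    Correlated mode: a shared systemic shock is mixed into each
    player's draw. The marginal risk per player stays roughly the
    same, but failures now arrive in clusters.
    """
    shared = random.gauss(0, 1)
    count = 0
    for _ in range(N_PLAYERS):
        own = random.gauss(0, 1)
        shock = 0.7 * shared + 0.7 * own if correlated else own
        if shock > THRESHOLD:
            count += 1
    return count

for correlated in (False, True):
    results = sorted(failures(correlated) for _ in range(TRIALS))
    label = "correlated " if correlated else "independent"
    print(label, "p99:", results[int(TRIALS * 0.99)], "worst:", results[-1])
```

Under independence the worst trial loses a handful of players; under correlation the same average risk occasionally wipes out a large fraction of them at once--the domino chain in miniature.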

The system in that situation pushes each player out of their self-righting phase and into their self-accelerating / flame out phase.

This phase transition from self-righting to self-accelerating, combined with the fact that the critical threshold is impossible to predict precisely in any given context, makes the dynamic highly dangerous.

The actions any given player takes to optimize themselves and become more efficient will almost certainly push them closer to their hidden threshold, because efficiency gains typically come from removing slack--and slack is what absorbs shocks.

This kind of worst-case scenario is tail risk, and as Taleb would say, the tails are much fatter than we typically think.

It can be impossible to predict any particular black swan, but in a complex adaptive system, the existence of at least one black swan, over long enough time horizons, is a certainty.

Everyone is constantly placing bets.

The world is an inherently uncertain place, and the only way to navigate it is to make bets.

You simply can't wait for full certainty to arrive before acting in the real world.

You can make these bets using implicit, tacit knowledge, or you can make them based on explicit hypotheses.

Your tacit knowledge is a wealth of intuitive insight, but it contains fundamental biases.

Interrogate your tacit knowledge, draw it out, abduct it into a hypothesis you can write down.

When it's written down, it will be easier to find and attract disconfirming evidence.

This allows you to place bets more deliberately as opposed to entirely implicitly.

A machine cannot change itself in fundamental ways.

A machine cannot see itself and reflect on philosophical questions of its existence.

A machine can only make itself incrementally better suited to the thing it already does; wearing down common paths to make them smoother / tighter / optimized.

The people operating in a given machine will over time have their worldview warped towards the worldview of that machine.

This is because, as you execute, you'll intuitively lean into the things the machine makes easier to do and follow that path of least resistance.

You learn what the machine wants based on what is easy.

Everything, by default, will follow the path of least resistance.

The more you do this, the more you intuitively absorb the knowhow of the machine--and become molded to it.

In some sense, you fuse with it.

The ideas to fundamentally change a machine have to come from outside it.

However, if you try to do an outside-in first-principles rethink of a machine stuck in a local maximum, the default organizational force you will meet is "well, have you talked to these 1,375 pieces of the machine, who all have notes about why the part of the machine they're in is not compatible with that idea today?"

A shove at a machine from outside is likely to be shrugged off by the machine; to bounce off of it without changing it.

The best judo moves, that can put a machine on a significantly better path, have to come from a place of love and understanding.

That's why the best judo moves come from someone with one foot firmly inside the machine and one foot outside it, giving them perspective but also leverage.

A few riffs on the power dynamics of partnerships.

People implicitly imagine the partner feels ~the same about them as they do about the partner.

This is partly because when we model how others feel about us, we draw on our own internal state, and on our own situated context.

This is an illusion! The two sides of the partnership might think about each other very differently.

The partner with more power in a relationship is the one who doesn't stay up at night worrying about the other.

That worry might be "what will happen if they do this mean thing to me" or "what would happen if they went away?"

When thinking about a huge "partner" that looms over you, you can't say "oh, in this small pocket we have leverage over them," because the leverage that matters is the total across the relationship.

If they have massive leverage over you, it doesn't matter what you do in the pocket--they'll destroy you overall!

You don't have leverage in pockets; it exists in sum across the relationship.

When deciding which partners to invest in, all else equal, invest in the partnerships that are viable (the partner has the capability to execute successfully) and whose success would help incrementally even out the power dynamics of your partner ecosystem.

You can't be excellent at executing a thing you don't agree with.

Being excellent at something is what earns you the right to change it.

"I fully understand this system and here's why I think we should change it" is way more trustworthy than "I dunno I've thought about this for 30 seconds, but why do we even have this [thing they don't realize is a load-bearing wall], let's take a sledgehammer to it!"

If you don't have the people with the expertise to tell you how dangerous what you're doing is, you might not understand the danger at all.

A person I met last week shared this insight he had picked up from Alan Kay.

You have to build from the inside out, in the environment you need to survive in.

Don't build a ship in a bottle; if you throw it in the ocean, it will sink.

You build a raft first, then use that to build a bigger barge, then use that to build an aircraft carrier.

Bootstrap your solution in the context it will be used in.

By construction, you will have at every step a solution known to be viable in its target environment.

Churchill was a hero, but also a jerk.

I think about him as an archetype for a certain kind of high-agency hero in the world.

The undeniable accomplishments of these heroic jerks loom very large in our collective consciousness.

The jerkiness seems like a causal factor in their accomplishments, and in a stochastic sense it is, but for any individual case it's principally a selection-bias illusion.

The likelihood that the game-changing outcome comes from a heroic jerk is extremely high.

The likelihood any given heroic jerk causes a game-changing outcome is extremely low.

The superpower of a heroic jerk is not feeling self-doubt.

That allows them to imagine something very different from the mundane status quo and drive relentlessly towards it.

Most of the time they're wrong. But sometimes one of them is right, and they put a dent in the universe.

We don't spend any time thinking about the vast majority of heroic jerks who got quietly caught in the filter on the way towards attempting their heroic outcomes.

Some people have an intuitive, compassionate, bottom-up sense of systems.

The best of them can do a kind of alchemy to create massive amounts of indirect value out of totally uninspiring inputs.

The downsides to this playbook:

1) anyone who is watching them at any single point in time will think they just got lucky

2) they look irredeemably kooky to onlookers (think Radagast, not Gandalf)

3) it's not possible for them to extract that value directly and financialize it. None of these alchemists own their own helicopters.

There's a special kind of alchemy I'd describe as judo moves.

They are little, carefully-calibrated flicks of the wrist that can put a system onto a fundamentally different trajectory from before.

Judo moves are deep insights: blindingly obvious, but only in retrospect.

The vast, vast majority of "flicks of the wrist" are nothing at all, and don't lead to any kind of new trajectory.

Finding a specific judo move among the sea of possible flicks of the wrist, and then executing it flawlessly to achieve the outcome, is a rare but important skill.

The problem is that to an observer, the hard parts of the judo move are hidden; the parts that are visible hardly look like anything at all!

Observers, like organizations needing to decide who to promote, have a hard time distinguishing judo moves from luck.

That is, was the judo move practitioner just lucky?

A lot of organizations implicitly end up throwing their hands up at distinguishing true judo moves from luck, and revert to rewarding good old-fashioned visible heroics.

But there are ways to tease this apart; they're just a bit subtle.

If the judo move practitioner has accomplished more successful judo moves over time than luck alone could explain, that implies skill rather than luck--and the incremental judo move can be inferred to be more likely skill, too.
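One way to formalize "more than luck alone could explain" is an exact binomial test. Here's a sketch with entirely made-up numbers (the counts and the 10% base rate are hypothetical), using scipy:

```python
from scipy.stats import binomtest

# Hypothetical track record: 20 attempted judo moves, 9 landed.
# Suppose pure luck would land such a move ~10% of the time.
result = binomtest(k=9, n=20, p=0.10, alternative="greater")

# p ~= 6e-5: a record this good is very hard to explain by luck
# alone, so skill is the better explanation.
print(result.pvalue)
```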

If the coworkers of the judo move practitioner agree that the group would be unlikely to have found the judo move without them, that's a good signal.

Resentment causes disengagement.

Engagement is necessary for curiosity.

Curiosity is required for learning.

A garden cannot exist without a gardener.

Fundamentally, cohesion of the whole requires some reduction in the agency of the individual components.

At Google, I saw how the perf-driven goal to "demonstrate a compelling vision that others are drawn to" (that is, a leadership frame that emphasized agency) ended up leading to a whole bunch of competing visions and a lot of thrash and confusion.

The single entity that invests the most in an open source project controls it.

It is not necessarily the creator of the project; it is whoever controls the investment decisions over a majority of the total time and dollars invested.

You can think of the swarm of open source projects as a random, oozing evolutionary search.

Most of them will fail, but some subset of them will turn out to be successful ideas.

But once they have been shown to be a good idea, a highly capitalized entity can invest even more, gaining ultimate control over the project (or at the very least extracting most of the profit).

This is a bummer, but I'm not aware of any fundamental mitigation.

If you're asking to be treated specially, you must have some plausible reason you're special that others would agree with.

By default we think our own thing is special: "My thing is special to me because I am me!"

But that argument, obviously, won't convince other people that you are special.

That is, we are not neutral observers of the world, we are situated within it and have a specific perspective... one that differs for every individual.

Put yourself in the other's shoes, and ask yourself if you would think of your thing as special if you were them.

If you wouldn't, then you shouldn't proactively push for special treatment.

My least favorite form of PMing is aggressive program management to optimize for optics.

This is, unfortunately, the slippery slope that mundane day-to-day pressures across the industry inadvertently drive towards.

Your perspective on a system depends on where you are in it.

People at the top of the system (its beneficiaries) will barely realize there's a system at all.

It just feels like an omnipresent strong wind at their back--easy to forget it's even there.

People at the bottom of the system will feel the crushing weight of the system.

The system will be an inescapable presence for them, but they won't have a good vantage point from which to understand it.

People in the middle of the system can both feel the presence of the system and also potentially understand it.

Only by combining the insights across the different participants can a fuller picture of the system emerge.