Bits and Bobs 10/28/24

1. Don't you hate it when you're using an app, and there's just one feature missing that would make your life so much easier?
2. The determinant of market outcomes is less about the tech and more about the distribution model.

AI + the current default consumer aggregator pattern will likely lead to hyper centralization.

The consumer aggregator with an algo at its center is the most inhuman human system we have, and it has no ceiling on the scale it can attain.

But AI manifesting in and being used in other substrates, that's great!

3. It's no accident that the web comes from CERN.

Physicists are familiar with open-endedness, and how the right set of universal, simple principles can unfold into infinite complexity.

Catalyzing an open-ended system requires a physicist mindset.

4. Is chat the primary or secondary use case of an AI system?

If it's the primary, then everything is about chat, and every so often non-chat-like experiences (e.g. little artifacts or bits of software or documents) sprout up inside the chat.

If it's the secondary, the system is about software, documents, and some of those interfaces revolve around chat… but not all of them.

Does the data live in the chatbot or does the chatbot live amongst your data?

Is it one big chatbot or is it dozens of little chatbots amongst your other experiences?

This software layer is so minor in today's nascent systems that it's hard to see it.

But it could be that the chatbot is merely the first app in AI, not the most important one.

5. Pre-viability and post-viability feel radically different.

For example, in a product development context, pre-viability might mean pre-PMF.

Pre-viability is like pushing a rock up a hill.

All of your effort has to go into motive force to push it up the hill.

If you ever get distracted the rock rolls back down the hill.

Post-viability is like skiing along behind the rock as it tumbles downhill.

The rock moves on its own, and you mainly need to help steer its path.

The rolling rock has momentum.

Everyone participating can see the direction it wants to travel and is traveling on its own, and can help make that happen.

The momentum is a natural coordination mechanism.

The difficulty of pushing the rock up the hill is both the weight of the rock and how many rocks it is.

With patient effort, it's possible to push a single rock up the hill if you're strong enough.

But when there are multiple rocks, they constantly roll off to the side and roll down the hill.

Keeping them together is just as hard as pushing them up the hill, if not harder.

In the limit, the hardest would be pushing water up a hill; it's impossible, it just flows around you and down the hill.

6. Imagine you're pushing a collection of rocks up a hill.

The hill gets steeper as you go, and it gets harder to keep the collection together and move it further up the hill.

If you crest the hill and hit viability, it will transition to downhill; orders of magnitude easier.

But how do you know if you should power through to the top of the hill that others can't see yet, or if you're investing increasingly desperate effort in a dead end?

An important skill: learning when to pull up out of a rabbit hole once you're hitting diminishing returns.

It's easier for someone watching you to identify it than for you to self-identify.

7. If you're pushing a rock up a hill that others can't see, you'll just look weak.
8. When you're in an open design space, start with a loose sketch and iterate on it, tightening based on what feels right.

If you start off with a thing that's too optimized, there's nowhere to iterate with feedback; everything is already tightened down.

If it's looser, there's more wiggle room to play with it, to discover interesting or valuable directions.

9. We just assume that the hardness of a task and its value are correlated.

But actually they are much less correlated than it seems!

10. A lot of problems work in the first ply but break down in the later plies.

In the first ply, you take an action and it causes a result.

In the next ply, the world responds, possibly adapting to what you did or fighting back.

If you only focus on the first ply, it seems easy. "Simply do the thing."

But the hard part is the second ply and beyond; constantly fighting back a problem that evolves as you interact with it.

If the second ply is hidden from you, you'll feel like you're constantly getting unlucky breaks.

If you can see the second ply, you'll realize that things are fundamentally much harder than they seem.

11. One-ply thinking: LLMs will make navigating bureaucracies and paperwork easier.

Multi-ply thinking: LLMs will make bureaucracies' processes even more labyrinthine.

The equilibrium of misery: when capacity increases, the supply rises to hit the same equilibrium as before.

12. Two fundamental shapes: logarithmic and exponential.

Logarithmic: starts off quickly, but then hits a ceiling it never surpasses even with infinite effort.

Exponential: starts off slowly (compared to logarithmic), but then self-accelerates to infinity.

Logarithmic systems are closed systems.

Exponential systems are open-ended systems.

When you focus on quick returns, you will pick the logarithmic.

When you focus on a longer time horizon, you'll pick the exponential.
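A toy sketch of the two shapes, with invented coefficients purely for illustration: on a short horizon the logarithmic curve is ahead, but on a long horizon the exponential curve blows past the ceiling.

```python
import math

def logarithmic(t, ceiling=100.0):
    # Fast early returns that flatten toward a ceiling they never pass.
    return ceiling * (1 - math.exp(-t / 5.0))

def exponential(t, rate=0.2):
    # Slow start, then self-accelerating compounding.
    return math.exp(rate * t)

# Quick returns favor the logarithmic curve...
assert logarithmic(10) > exponential(10)
# ...but on a longer horizon the exponential curve wins.
assert exponential(60) > logarithmic(60)
```

The specific ceiling, rate, and horizon numbers are arbitrary; the crossover itself is what the two shapes guarantee.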

13. Command and control techniques are logarithmic.

Emergent techniques are exponential.

Command and control: effective at achieving the ends, quickly.

Cuts through coordination costs: "simply do the thing the boss says".

But can never rise higher than what the boss planned.

If the boss is wrong, or didn't communicate properly, then the ceiling of what is achieved is very low.

This approach is powerful but brittle.

It gets weaker as it is bombarded with disconfirming evidence.

Emergent techniques: create the conditions where good outcomes emerge.

Will take some time to get going.

If you have to converge on one coherent outcome, this technique won't work; discovering a Schelling point via bottom-up processes has exponential costs.

But if the swarm doesn't need to converge on one approach, the emergent approach can steadily accumulate more good ideas at a compounding rate.

This approach is diffuse but antifragile.

It gets stronger as it is bombarded with disconfirming evidence.

14. In complex environments, single-lens thinking is dangerous.

You'll likely focus on only one dimension and ignore the others, and dangerous things likely lurk outside your vision.

What you need is a meta-lens: a lens that inherently includes a diversity of lenses.

For example, my friend Anthea Roberts' Six Faces.

A meta-lens helps you look at a problem from many angles and then integrate them.

Meta-lenses are structurally better than singular lenses in complex environments.

Of course, over time you want to use multiple meta-lenses; a single meta-lens is still dangerous, just an order of magnitude less dangerous than a non-meta-lens.

Meta-lenses take time and effort to apply.

They require patience, comfort with abstraction, openness to finding disconfirming evidence.

This takes time; if you're mad or scared or stressed, you won't feel like it's available to you.

AI is patient and very comfortable with abstraction, and can be a useful tool to help apply meta-lenses.

15. In complex environments, we tend to do the opposite of what we should.

We only have so much room in our minds for ambiguity and uncertainty.

In complex environments, uncertainty goes up.

So we crouch defensively to bring our inbound signal to a level we can grapple with.

Shorter time horizons

Posterize (make gray things black and white)

Put on blinders

But this is exactly the opposite of what is necessary to succeed in complex environments!

The more you crouch, the worse it gets, leading you to crouch more, in a toxic spiral.

Tools like AI might help us grapple with and navigate uncertainty better than we were able to do ourselves.

16. It's possible to be smart and be a one-ply thinker.

It's possible for onlookers to be wowed by how plausible the smart person's quick analyses seem across so many different domains, even though each individually is only single-ply.

Sarumans are mostly one-ply thinkers.

Radagasts are multi-ply thinkers.

Just because someone is a virtuoso in detailed one-ply thinking in novel domains does not make them a multi-ply thinker.

17. Sarumans can't see multi-ply problems; they can only see one ply.
18. One-ply thinking can predict the emergence of cars, but not the emergence of traffic jams.

Cars can exist on their own; they can be reasoned about in one ply.

But traffic jams emerge from the interaction of things in an iterated way; they are inherently multi-ply.

19. To be a multi-ply thinker requires you to understand there are some things you can never know.

A Saruman will think that sounds weak and resist acknowledging it.

So instead they'll have a massive blind spot.

Every time their one-ply thinking fails they won't see the failure or will blame random factors.

Powerful one-ply thinkers can do a lot of damage.

20. Someone who says "Simply pick the non-goodhartable measure" is a one-ply thinker.

Multi-ply thinkers can see that Goodhart's law arises, inexorably, in complex domains.

But you can't see it in any single ply; it is inherently a multi-ply phenomenon.

So smart single-ply thinkers will acknowledge the phenomenon but miss why it is so important and inescapable.
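A toy simulation of why the multi-ply view matters (all distributions invented): once a proxy metric is partly gameable, optimizing hard on the proxy selects for gaming, and the selected slice carries much less true value than the best actions actually available.

```python
import random
import statistics

random.seed(0)

def action():
    value = random.gauss(0, 1)        # true usefulness of an action
    gaming = random.gauss(0, 1)       # proxy inflation unrelated to value
    return value, value + 2 * gaming  # (true value, gameable proxy score)

pool = [action() for _ in range(10_000)]

# Select the "best" 100 actions by the proxy, vs by the true value.
by_proxy = sorted(pool, key=lambda a: a[1], reverse=True)[:100]
by_value = sorted(pool, key=lambda a: a[0], reverse=True)[:100]

proxy_selected = statistics.mean(v for v, _ in by_proxy)
best_available = statistics.mean(v for v, _ in by_value)

# Heavy selection on the gameable proxy mostly harvests gaming, not value.
assert proxy_selected < best_available
```

No single draw shows the effect; it only appears across the whole selection process, which is the multi-ply point.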

21. The danger of your blindspot for society is tied to how powerful you are.

A very powerful person with a massive blindspot can do a ton of damage.

22. Reductionist thinking is one-ply thinking.

Super powerful!

But only in problems that yield to it.

The statement "Problems that can't be understood with the computer science lens either are unimportant or unknowable" is obviously absurd... but replace computer science with "reductionist" and it's also absurd, just slightly less obviously so.

23. Fast execution requires one-ply thinking.

We've fetishized execution so much (it's useful in hill-climbing contexts) that as an industry we've lost the ability to do multi-ply thinking.

What our increasingly complex environment demands is multi-ply thinking.

24. Reductionist lenses are a single vertical slice on a problem. Vertical, detailed, concrete.

Horizontal, platform lenses are inherently multi-ply. Levered, abstract.

25. A swarm can do something like multi-ply thinking even if none of its individuals can.

It doesn't do it by understanding, it does it by brute forcing it.

Some of the things that end up sticking are, by happenstance, multi-ply kinds of ideas.

If you benefit from the swarm (you benefit no matter who wins the lottery ticket), don't worry about multi-ply thinking; the swarm will brute force it and find it.

But if you're an individual (you only have one lottery ticket) multi-ply thinking will make it more likely you find new game-changing hills.

26. If you look at a new thing and pattern match quickly, are you super insightful... or just tricking yourself that you're smarter than you are?

"I've put this in a bucket, now I understand it."

You understand the bucket, not necessarily the thing you put into the bucket.

If you put a thing in a bucket that doesn't capture its relevant qualities, you'll be blind to what they might be.

The bucket is not real, the thing is.

The bucket is just a shortcut to thinking.

Never forget that it's a shortcut.

27. In uncertainty (e.g. complex environments) people cling to a charismatic, powerful person who says "Don't worry, I know what to do, simply follow me."

Sometimes these people are the ones who least know what to do; they don't even recognize the multi-ply challenges at all, which is why they're more confident.

Every so often consistency and confidence are sufficient to get through the complexity… but often they aren't, and they have the potential to be a self-defeating trap.

28. Theory X leadership styles can get stuck in a self-defeating trap.

As a reminder:

Theory X assumes that people are by default lazy and incompetent.

Theory Y assumes that people by default will rise to what is asked of them.

In an organization that isn't achieving results as good as leadership wants, if management is using Theory X, they think the problem is that the employees have gotten lazier or the employees' standards have eroded.

The answer seems to be to tighten: to set higher goals with less wiggle room.

Every so often this is exactly what was needed, and the problem is addressed.

But often it is not the problem, and the problem gets worse.

Management squeezes harder. As the results get worse, they clamp down harder.

"See? They aren't capable of doing the work and I need to increase the discipline."

In practice what might be happening is that the strategy the team is supposed to be executing is not actually a viable one.

"The beatings will continue until morale improves" is obviously a bad idea, but this same effect in the small is often seen as good management.

29. How much wasted work happens in a domain?

How much is the proof of work aligned with the work?

Does the work itself show that it's useful, or does it take time to document, measure, and explain?

Sometimes the effort to document and explain is orders of magnitude larger than the effort to do it in the first place.

How much does the environment require you to prove you did work vs give you the benefit of the doubt?

If the environment requires everyone to defensively document their work or get fired, you'll get a lot of proactive cover-your-ass documentation work.

30. In an org, the person executing hardest is the one presumed to be correct.

Hustle is a hack to have proof of work, in a way that isn't necessarily aligned with useful work.

"Is that person adding value by doing the right thing?"

"I don't know, but they've clearly been hustling, so they're at least pulling their weight."

But it's just as easy to run around in circles or do performative work ("look how much sweat I have on my brow!").

When people don't look closely and there's a disagreement, everyone assumes the person hustling is more likely to be correct. A core asymmetry.

31. A cover-your-ass org strategy for a leader: push for more hustle.

You'll get more work happening… not necessarily useful work, perhaps running in circles or make-work.

Everyone thinks, "well, that leader fixed it, look how hard they're all working!"

But likely they're not doing the useful work but the obviously strenuous work, and so the underlying situation deteriorates more and more, leading to pushes for more work.

This strategy emerges from a fundamental belief in Theory X: that everyone is fundamentally lazy and incompetent.

The people who are making it all worse with their craven "execute and don't think," think they are the heroes and everyone else is soft, or misguided.

The org and everyone else will keep patting them on the back saying, "good hustle!" even as they destroy value in the medium to long term.

If the environment is default-cohering, more work tends to lead to more output.

E.g. There's a very clear, valuable hill to climb and it's obvious which outcomes lead to more progress up the hill.

If the environment is default-decohering, then more work tends to destroy value.

32. If you're fearful you can't think long term.

Long term thinking is a luxury.

If you won't survive to time step n+1, thinking about time step n+5 is a waste.

The default state of orgs getting frustrated makes them fearful.

The fact that things aren't converging makes everyone fearful; the fact that everyone is fearful makes people want to find a thing to do, which destroys value.

33. "Perception is reality" is a post-truth mindset; those who hold it are fundamentally lost.

The mindset implies that ground truth doesn't exist or matter.

Perception obviously matters; people make decisions based not on the world as it exists, but on their model of the world (their perceptions), which might be wrong.

But following that to the limit of "therefore ground truth doesn't matter" is lunacy.

34. It's very easy to create an accidentally delusional OODA loop.

Is it momentum in a vacuum (just makes sense for you and the team, possibly only coherent to the kayfabe), or is it momentum in the ground truth reality?

Momentum that's just motion might be bad motion.

An OODA loop is mainly about the loop; it's possible the observation step is coherent with past iterations of the loop but not the actual ground truth reality.

35. A team of the same kind of people will feel good about themselves as they get farther from reality.

The diverse team will feel pain as they get away from reality.

Coherence is order; it feels good as a proxy for truth.

But truth is what ultimately matters, and sometimes the uncomfortable, and order-destroying idea is what brings you closer to truth.

A team of all the same kind of person can fall into the illusion of their order implying truth; the diverse team is less able to fall into that illusion.

It hurts, and it feels like turbulence, but it makes them more likely to find the truth.

36. The person who extends the kayfabe is safe.

There's safety in numbers; you're doing the thing everyone else is also participating in (or pretending to).

The person who points out the kayfabe takes on all the social downside.

This fundamental, strong asymmetry is one of the reasons emergent kayfabe is such a strong force.

37. When you uncover dinosaur bones it feels like going backwards.

As a reminder, dinosaur bones are real constraints that were previously hidden.

When you discover a dinosaur bone and dig it up, it feels like going backwards.

Previously there was no constraint, and now there is.

There's a new thing to disagree about!

But that dinosaur bone was there all along.

When it was hidden it was more dangerous, because you couldn't reason about it and pick solutions that fit within it; it was just lurking dangerously in the background.

38. A measure of the open-endedness of a given discussion group: if you were to join in the middle of a meeting, how quickly could you figure out what the topic was?

How much surprisal is possible in that group is a measure of how open-ended it is.

My favorite discussion groups are ones that are very open-ended.
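One hypothetical way to make that measure concrete is Shannon entropy over the distribution of possible next topics; the distributions below are made up for illustration.

```python
import math

def entropy_bits(probs):
    # Shannon entropy: expected surprisal of the next topic, in bits.
    return -sum(p * math.log2(p) for p in probs if p > 0)

narrow_group = [0.97, 0.01, 0.01, 0.01]  # one dominant topic; easy to orient
open_group = [0.25, 0.25, 0.25, 0.25]    # any topic equally likely

# The open-ended group carries far more surprisal per meeting.
assert entropy_bits(open_group) > entropy_bits(narrow_group)
```

A joiner orients fast in the low-entropy group and slowly in the high-entropy one, which is exactly the test proposed above.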

39. People love to hate on bureaucracy.

But it's a symptom, not the cause. The cause is complexity of interrelated work at scale.

If you slash bureaucracy, you just get a big chaotic jumble.

Bureaucracy is slow and difficult to point in a new direction, but at least it continues to grind forward on its path as opposed to swirling and diffusing into nothing!

40. Fire begets fire but water doesn't beget water.

Fire is exponential but water is linear.

If there's a runaway fire but only linear amounts of water, the water can't keep up.

Compounding beats linear no matter the coefficients.
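A quick sketch of that claim, with arbitrary coefficients: a huge linear coefficient only delays the crossover, it never prevents it.

```python
def crossover_time(linear_coeff, base, limit=10_000):
    # First step t at which compounding growth overtakes linear growth.
    for t in range(1, limit):
        if base ** t > linear_coeff * t:
            return t
    return None

# A million units of water per step vs a fire compounding at 1% per step:
# the fire still wins eventually.
assert crossover_time(linear_coeff=1_000_000, base=1.01) is not None
# A faster compounding rate only moves the crossover earlier.
assert crossover_time(1_000_000, 1.5) < crossover_time(1_000_000, 1.01)
```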

41. Efficiency is in tension with resilience.

Efficiency leads to centralization.

Centralization lessens competition.

Competition creates adaptability.

Adaptability leads to resilience.

42. More niches lead to more diversity in the ecosystem, which creates more resilience at the level of the ecosystem.

Efficiency flattens; it lets one genotype dominate all of the others, reducing stores of adaptive ability.

43. Want to give feedback that will lead to growth?

Focus not on what people are.

It's implicitly tied to ideas about their worth as a person, implicitly unchangeable, implicitly tied to their ego.

They will defend against the feedback because they can't change who they are.

Instead, focus on what they do.

They have agency over that, and can choose to do differently.

Even good people do bad things sometimes.

44. If you "innovate" on something unintentionally, that's bad.

Just means you're going against the grain for no reason.

Innovation is variance that turns out to be useful.

But the vast majority of variance, or of things that attempt to be innovative, turns out not to be viable.

Innovate intentionally!

Innovation is dangerous, all else equal.

If you don't have a hypothesis for why a bit of atypical effort will be worth the risk, then don't do it.

Only do it if you think it could turn out to be significantly better in a way that differentiates you.

45. A judo move: if you can frame a problem as an optimization problem (without dangerous externalities) you can now use hill climbing techniques on it.

If you can fully "capture" the complexities of the argument as an optimization problem, it changes the character of the problem.

Lassoing complex problems with an ever-tightening lasso.

But you have to make sure all of the relevant complexities and indirect effects are captured within the lasso.
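A minimal illustration of the judo move, where the objective below is an arbitrary stand-in: once the problem is "captured" as a scalar objective, a generic climber works without knowing anything about the domain.

```python
import random

random.seed(1)

def objective(x):
    # Stand-in scalar objective with a single peak at x = 3.
    return -(x - 3.0) ** 2

def hill_climb(x, steps=2000, step_size=0.1):
    # Generic hill climbing: propose a nearby point, keep it if it improves.
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if objective(candidate) > objective(x):
            x = candidate
    return x

result = hill_climb(x=0.0)
assert abs(result - 3.0) < 0.1  # climber found the peak
```

The catch, as above, is that the climber optimizes exactly and only what the objective encodes; anything left outside the lasso is invisible to it.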

46. How load bearing and lightly held is an idea?

Load bearing and not lightly held: don't bother debating because it won't change. Take it as a given and debate the things around it.

Not load bearing and not lightly held: All you'll get is disagreements that don't matter. Pick a thing and move on.

Not load bearing and lightly held: don't bother debating, it can be changed easily later.

Not worth the time to debate because it doesn't matter.

You can wait until you have more experience and insight to help answer the question, no need to debate and come to a high quality decision now.

Load bearing and lightly held:

Kick the tires as hard as possible now to try to falsify it as early as you can.

The more that you've kicked the tires and it's been solid so far, the more you should hold it increasingly tightly.

47. Big companies don't compete in niches.

A scaled machine can't go into niches.

Big companies cannot see the small opportunities; they are just too small to fit into the companies' sensemaking apparatus.

It's way too small! "We can't even count that low!"

Some subset of the niches will turn out to have the disruptive seed that will blossom and knock the incumbent off the pedestal.

48. Someone who is very often right in ways that others don't understand will make it even harder to notice when they are actually wrong.

And no one, no matter how smart, is always right.

49. Which would you rather be: right in a boring way or wrong in an interesting way?