Bits and Bobs 2/20/24

1. I'll be OOO next week, so I'll skip Bits and Bobs that week.
2. The shift from viewing your offering as a collection of products to a suite is subtle but profound.

Nothing changes at the moment of the switch, but the trajectory from there on out is wildly different.

This is sometimes called a figure-ground inversion, a term borrowed from Gestalt psychology.

A great real-world example: before the Civil War, "United States" was a plural noun ("The United States are"), emphasizing the states.

After the Civil War, it switched and became a singular noun ("The United States is"), emphasizing the Union.

That was a subtle but profound shift for the nation and charted a wildly different course.

This shift is one that every great company does, and when they do it transforms the business.

The shift will never feel urgent, but it is important.

A portfolio strategy cannot emerge organically out of individual product strategies.

It must be deliberately brought into existence at the level of the suite.

From the perspective of each existing product, it will feel like going backward, at least for a bit.

This means all of the existing day-to-day incentives and processes will fight it.

If you have a portfolio strategy, then different trade-offs across business decisions resolve significantly more easily.

Before, it will feel like pushing a boulder up a hill.

After, it will feel like skiing downhill.

A portfolio strategy is not just pricing.

It's Product + Commercialization + Market Reality all supporting and understanding each other.

Every suite has a "keystone" product that holds it all together, the center of gravity.

The keystone must be:

Differentiated / Valuable

Sticky

Adjacent to nearly everything else in the suite

The keystone doesn't need to be a front-door product; it might only make sense in combination with other parts of the suite.

Products in a suite play different roles than just individually "making as much money as possible".

When you get direct value out of a thing, you are implicitly trading off indirect value (e.g. stickiness, loyalty, growing the pie).

"Make money off it" is the easy answer, and in a portfolio/suite, often wrong!

3. You can't do more than quibble with the strategy of a trillion-dollar company.

"They went against the established playbook and broke some strategic rules!"

"... Yes, but they made a trillion dollars doing it, so they've clearly done something very, very right."

4. Innovation is novelty.

Novelty is sometimes bad!

If you want the thing to be boring, you don't want novelty.

Some things you want to be boring to support the differentiated not-boring thing with a minimum of fuss and drama.

Spend your innovation tokens carefully!

5. Hill climbing is easier to organize and coordinate than hill finding.

With hill climbing, everyone can see, easily and with their own eyes, the steepest upward slope from the current position.

That's the natural Schelling point to coordinate around.

With hill finding, the foot of the next hill could be in any direction in the fog.

There is no obvious Schelling point to congeal around.
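The hill-climbing half of this metaphor is a literal algorithm, which makes the contrast concrete. A minimal sketch (the 1-D fitness function is a made-up example): greedy local search converges because every observer agrees on the best next step, and nothing in it helps you choose a fresh starting point out in the fog.

```python
def hill_climb(f, x, step=0.1, max_iters=1000):
    """Greedy local search: repeatedly take the best neighboring step.

    Everyone evaluating f near x agrees on the steepest step up; that
    shared signal is the Schelling point. Nothing here helps with hill
    *finding*: choosing a fresh starting x out in the fog.
    """
    for _ in range(max_iters):
        best = max((x - step, x + step), key=f)
        if f(best) <= f(x):  # no neighbor improves: a local optimum
            return x
        x = best
    return x

# Hypothetical 1-D "fitness landscape" with a single peak at x = 3.
peak = hill_climb(lambda x: -(x - 3) ** 2, x=0.0)
```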

6. The output of a strategy process is often short and clear.

The hard part is the thinking and judgment calls in creating the strategy, which typically has to be done in the absence of details and data.

Data is often a comfort blanket. Many of the most important decisions have to be made without it.

7. If you have a thing you know will almost certainly exist and be important in three years, you don't need a TAM analysis. Sometimes the test is just: "In three years, if this succeeds as we realistically expect it to, would we high-five?" If so, do it!

The hard part about this is the "realistically" part.

It's important to make sure you actually have a grounded understanding of the reality, and aren't just taking your own kayfabe too seriously.

8. It's easy to think you're 10 steps ahead of everyone else when you're actually 10 steps behind.

In that case, you might think you're a bold genius, but actually you're an immature baby.

Imagine everyone else proposing the obvious best practice and the upstart saying, "That sounds trite and boring; that can't possibly be it!"

The response is: "The reason it sounds trite is that it works resiliently in diverse situations, so everyone knows it's good advice and no one ever feels the need to question it."

Every so often, the general consensus baseline is wrong, and tweaking it will give an asymmetric advantage.

But far, far more often, the received wisdom is so boring precisely because it works so well in so many situations.

9. The less strategic clarity you have, the more chaos extra layers of middle management will create.

With a comprehensive strategic north star, everyone can break ties easily.

Even though each middle-manager wants to put their stamp on the outcome, their stamps tend to nest mostly cleanly.

But without a comprehensive strategic vision, the different layers of middle management will tend to create local visions that do not nest cleanly.

10. A thought-provoking old Twitter thread from my friend Ben Mathes. (A lightly edited version of the thread follows.)

"2 key ingredients behind every good situation I've ever been in: Shared Fate and Slack.

Shared Fate examples:

equity ownership + a long lockup period means that to individually succeed, the thing you all have shared ownership in must succeed

family genetic lineage

people with mutually-assured blackmail

and so on…

Slack to adjust to uncertainty is harder to define. Examples:

a VC where one in 20 meetings needs to run twice as long for a deep dive, so they schedule hour-long breaks between each hour-long meeting

enough financial cushion that you can enable your employees to retrain to a changing strategy

enough savings and a low enough cost of living that you can move cities easily and not need to work for several months while you get your feet on the ground

The more uncertain the domain, the more you need slack and shared fate."

My own reflections:

Shared fate creates the need for deep, non-transactional trust.

Slack allows room for experimentation and growth.

11. Process gives you efficiency at the cost of not being able to handle surprises.

So put it in place in inverse proportion to your rate of surprise in that domain.

Science lab approaches are good at creating differentiation.

Factory approaches are good at driving down cost.

12. A Bain cultural precept about a collaborative, supportive stance that I think is wonderful:

"A Bainie never lets another Bainie fail"

13. In large organizations, the smarter the individuals, the dumber the organization as a whole acts.

This is a more pointed framing of an earlier observation.

14. It's hard to pull a load-bearing card out of a house of cards, so instead we tend to rationalize why it's good.

The bigger the house of cards the harder a foundational card is to change, even if it turns out to be wrong/bad.

The more load-bearing something is, the more likely that if it's wrong it's catastrophic and destabilizing, so the more likely you and everyone else will have motivated reasoning to discover why "it's correct, actually".

This problem gets worse the longer it festers and the more load-bearing it is.

It's easier to just keep building instead of changing foundations. "I dunno, let's just add better layers on top and not think about what's underneath the house."

Carl Sagan: "One of the saddest lessons of history is this: If we've been bamboozled long enough, we tend to reject any evidence of the bamboozle. We're no longer interested in finding out the truth. The bamboozle has captured us. It's simply too painful to acknowledge, even to ourselves, that we've been taken. Once you give a charlatan power over you, you almost never get it back."

15. With an early-stage technology that has promising but uneven quality, you have to design the UI to be a good-enough experience in the worst case, not an exceptional experience in the best case.

You have to meet the technology where it is.

If your UI sets an expectation of a quality level your backend can't match, you're setting users up for disappointment.

For example, for an LLM-powered tutoring experience, maybe anthropomorphize it: "This is Reginald. He's a kindly old professor who is well read and world-renowned, but has gotten a bit scatterbrained. He still remembers the big picture well but he sometimes struggles with the details, and has all the time in the world to help you."

For Google Maps Augmented Reality Walking Navigation, we experimented with flowing particle streams where the diffuseness of the particles indicated our confidence in the quality of the localization (which was highly variable).

16. These are some principles I try to live by, both as an individual and in the organizations I participate in:

Catalyze something far greater than yourself. The value we create in the world is both direct and indirect. Although the direct value is easiest to see, the ripple effects in time and space of our indirect value are far greater. Lead by gardening; catalyze greatness in those around you. Create significantly more value than you capture. There's more to life than commercial value; inspire meaning and significance around you. Create infinite games wherever you can.

Survive, then thrive. Get a good-enough, usable prototype as quickly as humanly possible, and then as it gains momentum, continuously improve it to converge on greatness. Working code is orders of magnitude more useful than beautiful docs. Be scrappy and clever, and use existing components wherever possible: lateral thinking with weathered technology.

"Yes, and…". Meet surprising new ideas with openness and curiosity. This is not to say that every idea you come across is great, but challenge yourself to find the seeds of greatness in everything you see and then build on those seeds. Approach debate in a collaborative, not combative, stance. Understand that diversity is strength, even if it can feel hard in the moment. Choose what to build on deliberately, but don't close doors you don't have to. Be radically open to those around you.

Spread your wings. Follow your highest and best use. Inspire yourself to lean into your superpower and grow it, to continually become the best version of yourself. Do work you would be proud to show your ancestors. Be authentic to yourself at all times.

17. Growth requires change.

Don't hold too tightly to what you are today.

See how to continually grow into a better version of yourself.

Structure you lay down should support, not constrain, and wherever possible it should be a living structure.

18. One of the best ways to understand a system is to look not so much at where it is as at where it's going.

The best way to do that is to see where it's been.

Study how it evolved from there to here: the socio-technical forces that shape it, the archaeology of the system.

That shows you a glimpse of its emergent animating logic, its throughline.

When you do, it becomes easier to bet on where its destiny will take it.

19. A bad frame: "That's incremental, therefore it can't be a big idea."

An idea is not just its instantaneous expected value but also, more importantly, its long-term possible value.

Many of the best ideas are small to start but then have a smooth, accelerating slope upwards if they work.

If you can find a thing that has an EV of neutral to positive instantaneously and has a smooth, accelerating curve from there with increased investment, then it's a great idea.

A pebble can't grow. An acorn can.

"Grow" here means a natural set of adjacencies it can expand into automatically if it works, without miracles.

Judge an acorn not on its size to start but what it could grow into.

20. Thought-provoking ideas don't need to be true or false.

They just have to be generative: curiosity-producing.

This is possible even if it's a point from a person you otherwise don't care for, or if you don't agree with the overall insight.

Generative things are curiosity amplifiers. Lots of nucleation sites for new insights.

21. If lots of different people find an insight interesting and true, then it might be game-changing.

This is especially true when a large diversity of thinkers who don't often agree, agree.

If everyone finds it interesting (that is, novel, surprising, not just "well duh") and also true then it's a very good sign it's onto something deep.

The logic is similar to Fil Menczer's observation that early reshares of a piece of content by very different people are a good indicator of how virally it will spread.

22. When you can, make it so the great outcome is possible but not required.

If it's required and you don't get it, you fall on your face and are knocked out of the game.

If it's optional and you get it when it wasn't expected, it's a wow moment.

23. Curiosity is your intuitive novelty search.

Novelty search is the best algorithm for open-ended exploration.

It helps you discover and surf your flow-state, a bespoke standing wave.
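Novelty search is a real algorithm (from Lehman and Stanley's work on open-ended evolution): candidates are scored not against an objective but by their distance from everything tried before. A toy sketch in a hypothetical 1-D behavior space; the mutation and archive scheme here is a simplification:

```python
import random

def novelty(candidate, archive, k=3):
    """Novelty = mean distance to the k nearest archived behaviors."""
    dists = sorted(abs(candidate - a) for a in archive)
    return sum(dists[:k]) / len(dists[:k])

def novelty_search(generations=50, pop_size=10, seed=0):
    rng = random.Random(seed)
    archive = [0.0]  # the single starting behavior
    for _ in range(generations):
        # Mutate randomly chosen archived behaviors to propose candidates.
        candidates = [rng.gauss(parent, 1.0)
                      for parent in rng.choices(archive, k=pop_size)]
        # Keep the most *novel* candidate, not the "best" one.
        archive.append(max(candidates, key=lambda c: novelty(c, archive)))
    return archive

archive = novelty_search()  # steadily spreads outward from 0.0
```

There is no objective function anywhere in the loop; the spread of the archive is the whole point, which is what makes it a fit for open-ended exploration.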

24. I think by talking.

If you've ever had an open-ended 1:1 conversation with me, you've seen me implicitly developing riffs, searching around to find ideas that resonate.

Things that seem to resonate with different audiences I then harvest and nurture into a thread that can later be woven into a fabric.

Once I have enough threads harvested, the tapestry I didn't know I was weaving pops into sharp focus, and I simply must capture it in writing.

From there it's simply a matter of taking a bit more time to weave it into a stable tapestry.

…that's the ambition, anyway. As you can see, more often the end result is less "rich tapestry" and more "amateurishly knitted, ill-fitting sweater."

25. In a former life I sketched out dozens of different possibilities for how, over the course of decades, society might end up with people wearing Head Mounted Displays (HMDs) out and about.

The enumeration included everything I could think of ever working, including "Smartphone AR with Google Lens," "Snapchat World Filters," "Specialized limited-function Goggles," and, for completeness, even "Direct Neural Links."

Each idea had a ton of headwinds and a very shallow gradient of activation.

The most promising gradient by far was high-resolution, low-latency inside-out VR by the only provider who can do vertical integration and get users to wear it without feeling like a dweeb: Apple.

You'll know the Apple Vision Pro gradient is happening when you see people you don't recognize wearing it on the street and it's so normal you don't even bother to point it out to the friend who is walking with you.

26. You need two Steves to catalyze a new consumer ecosystem.

A Wozniak to design the system…

and a Jobs to sell it to consumers.

They have to understand each other deeply, but most other people will understand only one Steve or the other.

The fact that the two Steves understand each other is the bridge to catalyzing whole new landscapes of value.

Sarumans (e.g. Steve Jobs) catalyze consumer-facing product breakthroughs.

Radagasts (e.g. Steve Wozniak) catalyze ecosystem-facing platform breakthroughs.

27. A new ecosystem is often catalyzed by a particular kind of kooky visionary and becomes a scenius.

A true ecosystem and scenius escapes its founder and grows to be bigger than the catalyst.

The visionary should be bold, eccentric, and oddly charismatic.

Perhaps a bit ascetic, motivated by more than just money.

Everyone who hears them should think that, at the very least, they aren't actively controversial.

The worst anyone should think is that they're kooky and misguided, never dangerous.

Often the most game-changing ideas are clever combinations of the most ordinary pieces.

Tim Berners-Lee is a great example of the archetype.

28. No one wants to be the chump who shills for a for-profit corporation for free.

So if you want earnest, intrinsically motivated evangelists, make your thing bigger than any single for-profit corporation.

Make it a movement that is truly owned and created by the participants.

29. Someone pointed out to me that music labels started out as being good, actually!

Labels allowed a lot of musicians in a hit-based business to band together into something bigger than themselves to give mutual support and smooth out outcomes.

However, over time it grew into something that had so much more leverage over the musicians that it started becoming an abusive relationship.

The collective became so powerful that it completely disempowered the individual.

This is partly because the collective does well when any member of the swarm does well, but individuals only do well if they specifically do well (which has a large luck component).

Also, any musician who gets really successful is incentivized to break off on their own, disempowering the musicians who remain in the collective.

The collective's sustained growth rate is larger than any individual musician's, so it grows in relative power.

This dynamic is, I think, inevitable in some sense.

Yet another example of where preferential attachment and the asymmetric advantage of the swarm play out.

That said, I think you can design systems that slow its advance… or periodically reset the world to disaggregate it.
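Preferential attachment is a concrete model, not just a metaphor. A minimal sketch of the Barabasi-Albert growth process (simplified to one edge per new node): each newcomer links to an existing node with probability proportional to that node's current degree, so early hubs, like the label, compound their relative advantage automatically.

```python
import random

def preferential_attachment(n_nodes, seed=0):
    """Grow a graph where each new node attaches to one existing node
    chosen with probability proportional to its current degree."""
    rng = random.Random(seed)
    degree = [1, 1]  # two seed nodes joined by one edge
    # Each edge contributes two "stubs"; picking a uniform random stub
    # is exactly degree-proportional node sampling.
    stubs = [0, 1]
    for new in range(2, n_nodes):
        target = rng.choice(stubs)
        degree.append(1)       # the new node's single edge
        degree[target] += 1    # the rich get richer
        stubs.extend([target, new])
    return degree

deg = preferential_attachment(500)
# The biggest hub ends up far above the average degree of ~2.
```

No node in this simulation is "better" than any other; the hub emerges purely from compounding early luck, which is the dynamic the label story illustrates.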

30. A useful frame is n-ply thinking.

How many layers, or plies, are you able to consider in your actions while still responding quickly?

An n+1 player can beat an n-player in ways the latter cannot understand.

This can be taken advantage of for bad ends!

Imagine a player who is clearly behaving in a morally good way for plies 0 to n. But at ply n+1 they deploy an evil strategy to dominate.

This is similar to the "in the last round, defect" strategy in the iterated prisoner's dilemma.

If you think you're a 10-ply thinker but you're actually thinking 2-ply, you'll be operating recklessly in ways you don't understand.

The best way to get more plies is to have more experience and then reflect on that experience to abduce an intuition.

Think of the ply beyond your current ability as a dimension you can't yet see.

All kinds of truly mind-blowing things will happen that are impossible for you to understand at that ply.

Similar to Flatland, the 4D Toys iPad app, or the game Miegakure.

With experience and practice, you can learn to see in additional dimensions.

The higher-ply thinker will look like they're breaking the rules, but in reality they're playing within a set of rules beyond your ability to comprehend.
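"Ply" is the standard unit in game-tree search: one move by one side. A depth-limited minimax sketch over a hypothetical toy tree makes the asymmetry concrete: the 1-ply player takes the move that scores best immediately and walks into a trap that only becomes visible at 2 plies.

```python
def minimax(state, depth, maximizing, children, score):
    """Look `depth` plies ahead; consequences beyond the horizon
    are invisible to this player."""
    kids = children(state)
    if depth == 0 or not kids:
        return score(state)
    values = [minimax(k, depth - 1, not maximizing, children, score)
              for k in kids]
    return max(values) if maximizing else min(values)

# Hypothetical tiny game: "greedy" looks great at 1 ply, but the
# opponent's reply ("trap") is ruinous; "modest" is safely fine.
tree = {"root": ["greedy", "modest"], "greedy": ["trap"], "modest": ["fine"]}
scores = {"greedy": 10, "modest": 3, "trap": -100, "fine": 2}
children = lambda s: tree.get(s, [])
score = lambda s: scores.get(s, 0)

# 1 ply: evaluate my move only. 2 plies: include the opponent's reply.
shallow = max(children("root"), key=lambda m: minimax(m, 0, False, children, score))
deep = max(children("root"), key=lambda m: minimax(m, 1, False, children, score))
# shallow picks "greedy"; deep picks "modest"
```

From the shallow player's point of view, the deep player's choice of "modest" looks inexplicably timid; the extra ply is literally a dimension of the game the shallow player cannot see.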

31. Last week I talked about the rock/plant/animal/human implicit categorization.

First of all, remember the rock/plant/animal/human distinction is just a made-up bucketing over a continuous phenomenon.

But also, I think the model is missing an extra layer: 'god'.

A god is an entity that can think multiple plys beyond you; they will be powerful and inscrutable.

When you're playing against a god, the best you can do is hope they're not an evil god, and worship them so they are unlikely to want to squash you.

32. People will miscategorize players who are thinking at a higher ply than they are.

If the higher-ply player is less powerful than them, they might erroneously put them in the 'animal' category, which might lead to a nasty surprise later.

If the higher-ply player is more powerful than them, they might categorize them as 'god', and be unable to bring themselves to imagine how they could ever fail.

33. People tend to see black-and-white thinking as bold and decisive.

But black-and-white thinking is low-ply thinking.

Low-ply thinking is not bold, it is immature and reckless.

High-ply thinking looks milquetoast and weak ("they can't even make their mind up!"), but it's actually bold.

Entertaining the idea of "maybe I'm wrong" is terrifying!

34. Not doing your highest and best use is corrosive to your soul, like battery acid.

Doing anything for numbers, not meaning, is corrosive to the underlying thing.

35. You're never not swimming in the kayfabe of whatever org you're in at the moment.

Some orgs have more or less acute kayfabe (distance from ground truth), and different orgs will have their own particular flavor of kayfabe.

But every org has it, at least a little bit.

Kayfabe, within that org, is in some serious sense "real", in that it is inescapable and you can't ignore it.

But as soon as you leave that context, it goes from being important to being completely and totally unimportant (and actively dangerous), because the kayfabe of a particular organization only exists in the context of that particular organization.

To be a supremely successful cog in any particular machine you have to completely and totally surrender to its kayfabe.

36. If you've never had a boss, you might erroneously conclude that uncertainty can only arise from weakness.

If you've ever had a boss, you've experienced the difficulty of explaining why a thing they want to be certain is actually uncertain.

This forces you to develop an intuitive sense that uncertainty exists, and feel it in your bones.

Uncertainty is a fundamental characteristic of complex problems.

Most important real-world problems are complex.

37. Just because you're great at seeking and absorbing disconfirming evidence at the level of tactics (e.g. talking to users about features) doesn't mean you're good at seeking and absorbing disconfirming evidence at the level of strategy (e.g. ground-truth market dynamics and your place in them).

38. LLMs are vulnerable to the "screenshot attack".

That is, if they say something offensive or wrong, the user can take a screenshot, and it can go viral, eroding trust in the system.

But it is not a given that it has to work this way!

Consider a Google Search result where one of the ten blue links is clearly not a good result.

It's not nearly as viral, because Google isn't vouching for it as strongly.

Going even further, imagine someone opening up Word and writing something dumb or offensive and screenshotting it.

It couldn't possibly go viral, because the person clearly put it there themselves!

How viral a result can go depends on how much it could negatively surprise the user and how strongly it is "vouched for" by the service.

In some ways, LLMs today are ambassadors of their creators; anything they say is liable to embarrass their creators, because they have to be the singular, dependable answer.

But if there were a much larger number of different LLMs, each with different personalities, then it wouldn't matter quite as much.

39. The frame of "agents" implies agency.

Agency implies an ability to marshal motive force intrinsically without needing an immediate external causal force.

A rock you can leave alone without worrying much about it.

An animal might get out and wreak havoc.

You wouldn't allow an agent you didn't deeply trust to roam around your house out of your view.

Imagine Amazon sending a nuclear-fusion-powered humanoid robot to your house for free. Would you allow it into your home?

What about if it only worked if it was plugged into electricity from the wall?

What about if it was just the robot and you got to install an operating system of your choice?

When you send your data to another 3P service, the data is out of your view, and the 3P has agency to do whatever they want with it (curtailed, to some limited degree, by their privacy policy).

40. When things grow quickly, it's messy and incoherent.

But when things grow below some threshold rate, staying within their iterative adjacent possible, they can retain coherence.

Sometimes, if you don't absorb enough energy around you (by not growing as fast as you can), you get left far behind and possibly out-competed.

In those situations, you must grow quickly.

Fast pace layers almost require messy, incoherent growth.

41. Bureaucracies can't deal with things they can't imagine.

42. Machines will force the humans in them to act like robots.

When an organization is small, if there's a significant problem affecting a user, of course you reach out to the user directly.

When you're much larger, you might think "is there a process for this that I'm supposed to follow?" and default to doing nothing, to be "safe".

The logic of the machine overrides the logic of the human.

43. A mentor chooses to mentor someone. A manager doesn't necessarily choose to manage their report.

This means there are some kinds of feedback that a manager can't give and have it be received productively, but a mentor can.

The manager is partially the person running their part of the machine and partially the person looking out for the development and thriving of their reports.

The machine output is a survival bar to clear: "If we don't put out 5 widgets this month, the team will be disbanded."

Helping your reports thrive is a term to maximize.

The shorter the time horizon, the more managers will treat the job as the former rather than the latter.

44. Stray thoughts on LLMs.

LLMs (just like people) are a lot better at critiquing things than coming up with new things.

LLMs have a theory of mind, in general.

But if they don't maintain memories about you, then they don't have a theory of your mind.

If an LLM doesn't understand a concept, it's a good sign that there isn't a clear consensus on it in society.

ChatGPT turns unknown unknowns into known unknowns.

It's useful when you don't even know what questions to ask.

You still don't know the answers, but at least you know the right questions now, and can make an informed bet on what the answers might be.

People who are better at lightly holding different lenses, and navigating uncertainty, will be able to more effectively use LLMs.

Phone support is a metaphor for how annoying the lack of memory for LLMs is.

Today, every session with an LLM feels like getting transferred to a new phone support rep.

In pre-computer business processes you'd have a process with lots of imperfect humans.

The humans could make local judgment calls but might miss larger patterns of abuse.

Instead, you designed for resilience at the level of the system.

When talking to phone support agents, it doesn't matter if you trick them or fall in love with them, because they have to go through very narrow channels to do anything.

That's one of the reasons interacting with phone support reps often feels kafkaesque.

They are clearly human, yet acting like robots.

When it comes to using LLMs, everyone is already here, but we don't know what it's for yet!

Typically with new technologies, there's a wave of early adopters who experiment and sense-make about the new technology, and share those best practices with new users as they join.

ChatGPT is weird in that it reached massive usage before the sense-making happened.

That creates a lot of chaos and swirl. The mental model for what it is hasn't fully emerged out of collective tinkering yet.