Bits and Bobs 1/16/24
1. When a strategy doesn't work, either the strategy is wrong or the execution is wrong.
In practice in organizations, when something is wrong people tend to assume the execution is wrong, not the strategy.
If you think something is simple, then the only excuse for it not working is the competence of the entity executing it.
But if it's actually hard, you might erroneously blame the entity executing it.
Whether or not a strategy is actually viable often reduces, largely, to "is it possible to execute this without a miracle?"
In practice the reason most strategies don't work is that almost everything is more of a slog to execute than it seems like it will be.
The coordination headwind is a massive headwind.
A useful question to ask yourself when diagnosing a failed thing: "If the strategy turned out to be wrong, what would it look like?"
... And then if what you see looks anything like that, assume that maybe the strategy is wrong, not the execution.
2. There's a difference between clean execution and correct execution.
Imagine a 2x2: clean vs messy execution, and correct vs incorrect execution.
"correct" here means "is effective at achieving the stated goals".
It's easier to see the cleanliness than the correctness from a distance.
Clean execution is also the only way to execute something consistently at scale.
So organizations tend to naturally select for clean execution.
The real world is surprising, messy, and constantly changing, which means that the correct execution is often messy, too.
If it weren't messy, it wouldn't be able to stay viable amid the mess.
So in practice orgs tend to optimize for cleanness of execution, but at the expense of correctness.
3. A single app is a strategy: a bet on a single thing.
A platform is a meta-strategy: a bet on any of a swarm of things.
You can discover later which item in the swarm turned out to be most important (or, perhaps a collection of smaller things in the swarm add up to something much larger).
These strategies look superficially similar but are fundamentally different.
4. You can't build a platform by yourself.
You need to discover an ecosystem of users and coevolve the platform with that ecosystem.
Many of the world's great platforms did not start as platforms; they were products that found PMF and then expanded their lower technical layers and opened them up to others.
5. Platforms only have killer use cases in retrospect.
You can't motivate turning something into a platform with a singular killer use case.
If there were one killer use case you knew would be a massive success and would work, you'd just do that.
You can have notions of classes of apps that you know will likely be great if you build the platform.
The point of a platform is the kinds of emergent things that show up even though you didn't plan them.
Seeking a singular use case to focus on gets you in "building an app" mode, not "building a platform."
6. I assert that the implications of Apple's ATT policy changes are some of the most wide-reaching in technology in the last decade.
But lots of people, even in tech, don't really know that much about them or think about them.
It's been an absolutely massive tectonic shift that has harmed players like Facebook... but more importantly made a massive number of long-tail companies simply non-viable anymore.
It's a FUD-able topic: one that nearly everyone who hears about it will say, "Oh, Apple did a bold thing to protect privacy, they should be cheered", but the second-order effects are massive and I'd argue largely negative for society in ways that are hard to see and rarely talked about.
7. Evolution is a phenomenon at the level of the collective, not the individual.
It's great for the collective, but it doesn't care at all about the individual.
In fact, it's often quite bad for the individual.
The collective grows and succeeds because most of the individuals are in a state of constant tooth-and-nail competition that likely kills them.
8. There's a famous game-theoretic strategy to win a game of chicken.
A game of chicken here means: two opponents get in cars pointed at each other and floor it. The first person to steer away loses.
The best game theoretic strategy is to visibly throw out your steering wheel before the game starts.
That means your opponent knows they must swerve (losing the game) or die.
Imagine that you've already executed this strategy and your car is 80% of the way to impacting the other car.
The worst possible thing you could do is to say "Oh actually let me pull out this backup steering wheel I have right here!"
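The commitment logic can be sketched as a toy payoff matrix. The specific payoff numbers here are illustrative assumptions, not from the source; all that matters is that a crash is far worse than losing face.

```python
# Chicken payoffs as (row player, column player); bigger is better.
# Illustrative numbers: swerving alone costs a little face,
# a crash is catastrophic for both.
payoffs = {
    ("swerve", "swerve"):     (0, 0),
    ("swerve", "straight"):   (-1, 1),
    ("straight", "swerve"):   (1, -1),
    ("straight", "straight"): (-100, -100),
}

def best_response(opponent_action):
    """Best action for a player, given the opponent's action is fixed."""
    return max(("swerve", "straight"),
               key=lambda a: payoffs[(a, opponent_action)][0])

# Throwing out your steering wheel is a visible, irreversible commitment
# to "straight"; the opponent's best response is then to swerve.
print(best_response("straight"))  # -> swerve
```

The exact numbers don't matter for the argument: the commitment works as long as crashing is much worse than backing down.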
9. When I was on the PM hiring committee at Google, I was looking for the "two unteachable skills" in candidates:
1) That the candidate saw that the world was not black and white and one-dimensional, but shades of gray and multi-dimensional, and that surprising information that didn't fit into their current world model implied a dimension they were previously blind to.
2) That when they encountered disconfirming information, instead of leaning back and saying with disappointment "Oh. That's interesting..." they leaned in and said with excitement "Oh! That's interesting..."
The combination of these two skills allows someone to bootstrap their ability, without ceiling, in ambiguous situations.
You can think of it as "they can sense disconfirming evidence in their blindspot, and disconfirming information energizes them."
10. Last week I was musing about why cities are default alive while companies are default dead.
My friend Rohit reminded me he has an old essay on this topic: https://www.strangeloopcanon.com/p/thinking-like-a-city
Someone else pointed out to me another reason: cities are fixed in place; they don't compete (directly) for resources.
They also don't have to compete with other overlapping cities because geographic space is partitioned such that each unit of land has one entity with a legitimate monopoly on governing it.
(This of course is somewhat complicated by the government hierarchy of city < state < country, but still, at each layer there's only one governing authority.)
What would happen if cities could move around?
The post-apocalyptic steampunk movie Mortal Engines proposes an answer: they'd try to kill each other to get an edge!
... Maybe that's not too far off what would happen for anything that can directly compete with others over scarce resources.
Companies don't "move," but by default a legitimate competitor to any given company could sprout up at any time and legally encroach on the original company's "territory."
11. My husband got me the Titanic Lego set for Christmas.
It's been a total blast to build--highly recommended.
If you look at individual Legos, you'll just have a random mishmash.
But if you think about what you can build with them together, you can create amazing, mind-bending things.
No individual Lego piece is that cool or special.
What's cool or special is the higher level thing you can build out of totally ordinary pieces.
Legos are cool at the level of the system, not the level of the block.
12. I used to love playing the Maxis games like SimCity and SimAnt.
The games were effectively little agent-based modeling toys.
They made complex adaptive systems cool.
In most games, you play the role of a single avatar, making heroic decisions, the most important individual in the world. A finite game mindset.
In these games, there was no goal, it was just to tinker with the system holistically and see what would happen: more of an infinite game mindset.
Playing with the complex system, over time you fell in love with the complexity, intuitively grokking the emergent dynamics way better than you would if you just read about it.
13. A number of years ago my husband and I went to Hamilton Island in Australia on a vacation.
We hadn't done a lot of in-depth planning because we had opportunistically tacked it onto a work trip of mine.
It appeared to be a cute remote island amid the Great Barrier Reef with a small downtown and a few different resorts.
When we got there, we realized that all of the restaurants in the downtown were affiliated with the same entity that managed our hotel--you could sign the check with your room number.
It was the off-season, so any given day half of the restaurants were closed. The next day, the staff from one would be working at one of the other restaurants.
It turns out the entire island and every "business" on it was run by one company--basically a cruise ship permanently berthed in one remote port.
This felt like a total bait and switch to us. But why?
We thought we were going to a city, but we were going to a disneyland.
When we see a city, we assume that the various businesses within it are, by and large, independent.
They are competing with one another to attract customers and stay afloat, meaning that they have to maintain some level of quality to survive the selection pressure.
Any given entity won't take actions to directly harm itself, but one restaurant absolutely will take actions to differentiate itself from its competitors, indirectly harming them.
Hamilton Island gives the appearance of a normal city with competitive businesses, but actually it's all one entity competing for tourist traffic vs other destinations as a whole.
The basis of competition is not within the island economy, as you'd expect, but between the island and other destinations, a more indirect competition.
14. I was talking with a friend about startups and luck.
A founder of a startup has to be above some very high level of ability to be successful.
But if you select down to the population of people with that very high level of ability, which one actually succeeds is largely luck.
This is a phenomenon seen in any highly competitive environment. For example, the outcomes of NBA games are largely explained by luck.
Imagine a choice between two opportunities:
A has a 5% chance of a $1B outcome, but a 95% chance of $0.
B has a 50% chance of a $100M outcome, and a 50% chance of $0.
These have the same expected value, but A has much more variance.
Some people will be more motivated by the upside potential of scenario A.
But presumably quite a lot more people will be more motivated by the better odds of B.
The difference between $0 and $5M is way bigger than the difference between $995M and $1000M.
Imagine if a small but diverse set of high-ability people agreed, before they started their companies, to pool any winnings.
You can shift the scenarios quite a bit!
Imagine everyone is in scenario A, but now in the pool.
If they give up 5% of their shares to the pool, they have a 5% chance of $950M, but a 95% chance of $1M.
If they give up 50% of their shares, they'd have a 5% chance of $500M but a 95% chance of $5M.
This is great, because it gives significant downside protection.
(Numbers are not my strong suit, it's entirely possible I messed up this math! Still, the high-level conceptual point stands.)
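The arithmetic can be sketched in code. The large-pool approximation (each member's expected pool share approaches f × 5% × $1B) is my assumption, which is why the figures come out slightly different from the ones above:

```python
# Two opportunities with the same expected value but different variance.
p_a, prize_a = 0.05, 1_000_000_000   # A: 5% chance of $1B, else $0
p_b, prize_b = 0.50, 100_000_000     # B: 50% chance of $100M, else $0

ev_a = p_a * prize_a   # $50M
ev_b = p_b * prize_b   # $50M, identical expected value

# Pooling sketch: many founders each face scenario A independently and
# pledge a fraction f of any winnings to a pool split equally.  In a
# large pool, each member's expected pool share approaches
# f * p_a * prize_a, whether or not they themselves win.
def pooled_outcomes(f, p=p_a, prize=prize_a):
    pool_share = f * p * prize               # large-pool approximation
    win = (1 - f) * prize + pool_share       # your exit hit, plus pool cut
    lose = pool_share                        # your exit failed; pool cut only
    return win, lose

win, lose = pooled_outcomes(0.05)  # win ~= $952.5M, lose ~= $2.5M
```

Expected value is unchanged by the pooling; only the spread between the best and worst cases shrinks, and the exact worst-case figure depends on the pool's size and hit rate.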
This pooling would tie their fates together in some way, shaving down the best-case scenario significantly but bringing up the worst-case scenario significantly, too.
It would also require high trust in one another, and belief that the other people are as similarly high potential as themselves.
There is also a significant moral hazard (for example people going after a fun problem instead of a valuable problem since their worst case is less bad).
But if you could somehow figure out how to make this work, it might unlock a lot of value for society.
Presumably there is a whole class of people of high ability with good ideas who would jump at the pooled scenario but balk at scenario A.
Getting more of those people off the bench and playing would unlock more great ideas for society.
15. Imagine it's 1997 and you're trying to figure out good fledgling ideas from the internet to copy.
You could take the top 10 products on the web and then make CD-ROM versions of them.
But that would totally miss the point.
The point of the internet was not the CD-ROM-style experiences; it was the low friction of things that could emerge at any point in swarms.
When there's a horizontal disruptive technological shift that recalibrates what is possible, don't try to build things in the old way, try to find the new things that weren't possible before and lean into that.
After a massive technological shift, the new wave of companies that turn out to be successful will look totally alien to start.
16. I was talking to someone who recently left another company after almost 20 years.
They described it as feeling liberating, even healing, to leave.
When you're in a given job, you are obliged to treat the kayfabe as real--a heavy emotional and intellectual burden, especially in a dysfunctional, high-kayfabe environment.
But when you leave that job, that obligation instantly evaporates. You no longer have to even pretend to care about the kayfabe.
You can drop the burden of holding onto the kayfabe, and in that moment you feel like you're flying.
17. In my knowledge management workflow, I interact with notes three times:
Once, as I capture the insight in the moment in a rough form to process later.
Next, a few days later, as I clean up the note to file away in longer-term storage (e.g. as one of the 11k private notes in my https://thecompendium.cards).
At this stage, I clean up spelling errors, add a teensy bit more context to help the idea make sense in the future once the background context is lost, develop the idea just a teensy bit, and maybe interlink with other recent related ideas.
A lot of the time, similar ideas have come up in multiple discussions since the original note, giving it a bit more color.
Finally, every week I skim through all of my notes and select ones to develop into a bit or a bob.
These synthesize the notes into little stand-alone essays, making them more durable against the sands of time.
This process is kind of a chewing of the cud, a slow thinking process.
18. Before electricity, factories were laid out very differently.
Everything was organized around the central steam engine, with massive systems of belts and pulleys to distribute motive force around the building.
The whole building had to be organized around this power distribution system.
When electric motors became a thing, totally new layouts of factories were possible that were much more flexible.
But it took a very long time to discover that. Factories didn't change overnight.
It was the newer factories that realized, "wait, we can get away with this more flexible layout."
19. Imagine a context changes significantly: the force of gravity shifts direction.
It is significantly easier to extend and grow a small pre-existing thing into a large thing where the new parts are viable in that new context than to retrofit a large pre-existing thing in place.
20. Situated software is software a user makes for themselves to solve their particular problem.
Situated software is often a hack, jury-rigged, duct-taped together.
To an observer, all they can see is how ugly the software is: how messy, how insecure.
But to the user who made it to solve their specific problem, it's perfect.
Because the alternative was to have nothing for their use case, and now they have something.
That's an almost infinite difference.
The challenge, and the reason that this didn't actually happen in 2004 when Clay wrote the original essay, is that building software requires talking to computers in a way they understand, and that was hard.
21. LLMs allow humans and computers to talk naturally and understand each other.
This is new!
For a long time, humans had to learn a very challenging and unforgiving foreign language to talk to a computer, and it never felt completely natural.
The same was true for computers to talk to humans.
LLMs allow users to express their situated needs and goals at way less cost and challenge than before, in a way the computer can natively understand.
This allows the human to give a high level sketch of what they want and have the system choose reasonable details, automatically.
22. The level of customizability of experiences determines where the basis of competition occurs.
In the early days of the web, there were a lot of mash-ups and Greasemonkey scripts; competition could happen to some degree at the level of the feature.
Apps are extremely hard to tinker with and compose.
In the last decade, the basis of competition has shifted more and more to the level of the whole product, as a monolith.
The result is a centralization of time and attention in a very small number of very large monoliths.
But there are tons of features that could add a lot of value as a feature, yet not enough value to motivate trying to build a whole new monolith.
And the owners of the monolith aren't motivated to do them either.
They need to make a one-size-fits-all thing for their massive user base.
Every bit of complexity (even progressively disclosed complexity) they add makes the app harder to use for their lowest motivation users.
And also, you need to reach a big scale of adoption to make it "worth it".
If you have a billion users and there's a feature that 1M people would love, it's just too small to prioritize.
The result is monoliths tend to optimize for the lowest common denominator.
My friend Ivan riffed on this idea in Tyranny of the Marginal User a few months ago.
Monoliths also mean that users have to trust a small number of experiences to have a huge amount of data on them.
There is a whole set of easy-to-imagine improvements that would add a ton of value for society but are not feasible today.
What if you could atomize experiences into swarms of self-assembling micro-apps?
Each micro-app wouldn't need to know that much individual data on a user to be useful.
Each micro-app wouldn't be that interesting; it would be the emergent fabric of them woven together that would be the value.
That could unleash huge amounts of pent-up innovation.
You'd need some level of general-purpose intelligence to stitch them together that could take an abstract natural language intent and make it concrete for computers to execute...
23. When building an experience using an LLM, is the LLM the engine or the car?
Is it a component of the overall thing, or the overall thing itself?
Another related metaphor that emphasizes agency: is it the jockey or the horse?
Software is rigid and precise and predictable; when you give it free rein, you can know (mostly, most of the time) how it will operate.
But LLMs are squishy. They are more impressionistic. They lose the plot, especially the longer it's been since the last checkpoint with whatever entity is guiding them and giving them direction.
It's a mistake to expect the precise, perfect execution of software out of LLMs.
These things are like impossibly precocious middle-schoolers, who never get bored, and who have read 1000x more books than you will in your whole lifetime.
But they're still middle schoolers, with the theory of mind of an 11 year old.
They easily get lost, ungrounded.
You should be careful about handing them moral agency to act fully on your behalf.
Engelbart back in the '60s had a frame: not AI (Artificial Intelligence) but IA (Intelligence Amplification).
The agency and responsibility come from the human, the amplification comes from the computer.
The computer, in this frame, is like a telepathically controlled exoskeleton.
Another way to rein in this tendency of LLMs to get lost: give them shorter chunks of things to do--tasks, not jobs.
It's harder to get lost when they don't go very far.
And you can more quickly intervene to nudge them onto the right path if they get lost.
This is the same intuition as agile software vs waterfall development: reducing the length of feedback loops gives you better control and steerability.