Bits and Bobs 5/20/24

1. Deciding what to build is easier with fewer people!

There are fewer people to align.

The effort to align people scales, all else being equal, with the square of the number of people: every pair is another potential communication path to maintain.
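
A back-of-the-envelope sketch of that growth (illustrative only, not a precise model):

```python
# Pairwise communication paths in a group of n people: n * (n - 1) / 2,
# which grows roughly with the square of n.
def alignment_paths(n: int) -> int:
    return n * (n - 1) // 2

for n in (3, 10, 30, 100):
    print(n, alignment_paths(n))
# 3 -> 3, 10 -> 45, 30 -> 435, 100 -> 4950
```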

A downside: with fewer people in the group, it's less likely a person with a game-changing idea (or an important bit of disconfirming evidence) will be in the group.

2. The more successful something gets, the more bland it becomes.

The first step of creation only has to be viable for some set of users.

But each next step to improve it has to be viable while also not regressing anything from the previous iterations.

If the next step regresses on any dimension, then some agent within the organization most responsible for that dimension will flag the issue and push back on the idea.

In the limit, this pushback becomes an effective veto.

The person who stands to concretely lose something can make a more compelling case than the other employees who might have a diffuse, speculative benefit.

This is a fundamental asymmetry.

The more successful the thing, and the longer it's been around, the more likely there will be someone within the organization who will find a regression in any new idea.

The more successful the thing, the more constituents there will be, people who are brought on board to help build and maintain the thing.

Each agent is another person who could veto a change.

None of these constraints is a big deal: a single Lilliputian rope.

But in aggregate they can tie you down so you can't move, caught up in a Lilliputian web.

Someone watching you from a distance won't see any particular constraint holding you back.

Because there is no individual constraint.

The problem isn't any individual constraint.

The problem is the totality of the constraints, the web.

The observer will likely conclude "I guess they just got lazy…"

3. An anti-consensus swarm will find novelty.

An anti-consensus swarm will find novelty. A consensus swarm will become bland.

An organization with a coherent identity and top-down goal will tend towards consensus.

Every idea that someone might have will be checked for whether it fits with the top-down goal.

Anyone in the organization who thinks a given idea will be a net negative can, in the limit, veto the entire idea.

An organization that is a swarm (an emergent, bottom-up thing) will tend towards novel ideas.

Members of the swarm are implicitly in competition.

To compete, the members want to do something unlike their neighbors, to stand out and get an edge.

This competition leads to differentiation, trying new things, most of which will fail, but some of which will succeed.

The things that fail naturally die off; the things that succeed naturally get more investment as other members of the swarm flock to it.

Of course, consensus vs anti-consensus is not a black and white trait but a spectrum.

Inside of organizations there's always competition (e.g. for promotions).

And even bottom-up swarms often have a motivating ethos or emergent goal that participants were all attracted to.

But where a given organization sits along that spectrum determines whether you'll get blandness or novelty.

4. Silver bullets are very rare.

Silver bullets are very rare. Silver swarms are very common.

Everyone wants there to be a single, obvious, high-leverage unlock.

But often the constraints are a complex web of interrelated forces; there is no single silver bullet.

And you can waste a ton of time looking for a non-existent silver bullet.

But that doesn't mean it's hopeless.

Often there are silver swarms.

That is, a swarm of smaller ideas or adaptations, executed by a bottom-up swarm, that collectively change the top-level characteristics.

Any individual part of it looks like nothing.

But the swarm overall is the secret.

People can be looking right at it and not see it.

"Wait but where is the silver bullet? What is the singular thing that fixed it?"

Sometimes it's a secret not because you aren't being told the answer but because the answer is in your blind spot, and it literally doesn't look like anything to you.

Most people when they see a swarm, just see a diffuse cloud of ambiguity.

But train yourself to blur your vision a bit, and to marvel at the shifting, shimmering silver cloud.

5. When all the user data is in one place, a swarm is better at finding the use cases than consensus-driven coordination inside a slime mold of a company.

The single company is a consensus machine.

It gets more bland over time.

The Tyranny of the Marginal User.

A swarm is an anti-consensus machine that creates novelty.

When a swarm and a single entity go face to face on similar footing, the swarm wins.

The single entity might win for a short period of time if it gets lucky, but the asymmetry of the swarm will dominate over time.

The single entity has to have a good outcome every turn.

The swarm needs at least one member to have a good outcome every turn… wildly more likely.
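
A rough sketch of that asymmetry (the 10% success rate and the 50-member swarm are made-up numbers, just to show the shape):

```python
# Probability that at least one of n independent members has a good outcome,
# if each succeeds with probability p in a given turn.
def swarm_success(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

p = 0.10                      # a single entity that succeeds 10% of the time
print(swarm_success(p, 1))    # ~0.10  -> the lone entity
print(swarm_success(p, 50))   # ~0.995 -> a 50-member swarm with the same odds per member
```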

The problem is that, historically, the swarm can't have the user's data to operate on because of privacy: as a user, you need to trust everyone who has access to your data.

All of the members of an anonymous swarm can't have your data; that would be terrifying!

So the slow, lowest common denominator aggregator wins today, by default.

Even though it's slow to find new use cases, it finds more than the swarm, because the swarm isn't viable.

But what if you had laws of physics that allowed a swarm in a fully private way?

The swarm's natural advantages would win out.

6. Will AI usage go to the mega app or a swarm of specialist apps?

The easy answer is the former, because it's just extrapolating how it currently works.

But what if it won't go to apps at all?

What if there's something post-app?

A swarm of use cases with different laws of physics.

7. Everyone assumes that the most powerful AI service will be created by a single entity.

Creating a single, high-quality model probably is best done by a single entity.

But if you could figure out a way to use a model, combined with an ecosystem, to produce the overall quality of the service, the ecosystem would win.

The swarm beats the single participant.

Is the quality of your system tied to a thing that one entity built alone, or something that the swarm builds?

8. An interaction pattern I've seen people use in ChatGPT: named, thematic chats.

The default style of use of ChatGPT (in my experience) is that every conversation is a new chat.

A fresh sheet of paper, totally blank context.

This works well because you don't have to worry (as much) about the LLM getting confused.

LLMs tend to "lose the plot" the longer the conversation goes on, and the farther the original prompt recedes in memory.

But a downside of this approach is that you don't get to benefit from a broader context.

There are tools, like GPT memories, but they are a kind of awkward kludge, nondeterministic.

A few people I talked to this week told me they maintain a curated set of chats with themes, e.g. "Journaling".

These are less like chats, more like append-only collaborative documents.

All of the context in that theme is right there for the LLM to draw on (especially with much larger context windows).

Information still doesn't cross chat boundaries, but the LLM is more likely to have the related context to draw on, because the user organized their chats by theme.

A user-land emergent usage pattern that moves us away from ephemeral chat and steps closer to persistent, enchanted artifacts.

9. If your use case fits within one silo, the silo probably does a pretty OK job at that use case.

If the app or service you're using has all of the data it needs to do a good job at a use case, then it's probably done a pretty good job at that use case, and beat out other services for the same use case that didn't do as good of a job.

But some use cases cut across silos; they require data from different contexts to be used together.

The use cases that will be most powerful will be the ones that cut across today's silos.

The cross-cutting use case will have a lower bar to clear to be viable, because there isn't a viable competitor today.

10. Making money is hard.

Making money is hard. Losing money is easy!

One requires you to create something that can stand out from the background noise, resist entropy, cohere as a thing.

Entropy points in the direction of losing money, naturally.

You have to fight upstream to make money.

Just because customers are buying your service does not mean you're making money.

It's very easy to sell dollar bills for 90 cents without realizing it.

11. Solving all of the edge cases causes exponential blowup of scope.

20% of the cases cause 80% of the work.

Subtraction cannot be done by consensus.

When people on a team hear an edge case everyone by default assumes "of course we want to fix it".

It takes boldness to say, "no, we will not fix that, we will cut that case out of the scope."

It requires someone who everyone on the team agrees has the authority to be the authorial voice of the project, to take responsibility for the outcomes.

12. The same-origin model is about silos of data.

The origin where the data accumulates might (erroneously!) view the data as "their" data, even when it's their users'.

You can see this tension in tech platforms that have signed deals to allow their users' data to be used to train LLMs.

A bargain common in the same origin paradigm: "give me your data in exchange for getting this service for free."

What if it were possible for us as users to maintain ownership over our data?

13. In the same-origin model, the app that has the data has the edge.

Silos start out with no data.

That's what makes it safe to install a new app or visit a new domain.

But that means there's a massive cold start problem.

The origin has to convince users to put data into their silo in the first place.

The origin effectively gets carte blanche to do what it wants with the data once it's in there.

That's scary for users, so users have to have clear value before they're willing to do it.

That requires a clear primary use case that works even before the network effect value of the service.

The network effect of many users putting their data in the service can only be a bonus.

14. Data is viral.

Anyone who saw your data could keep your data.

That's because data is non-rivalrous; in the limit it's possible to make a perfect copy instantly, without interfering with the original.

Today for a third party to do useful things with your data, they have to see your data.

This requires trusting that third party with that data, since on a technical level they might keep your data.

But what if it was possible to allow third parties to do useful things with your data, without them ever seeing your data?

15. Perfect security is infinitely expensive.

Security is not some fixed quantity; it is tied to the threat model: how much the adversary invests in defeating the system's security.

Systems are embedded in larger systems; even if an inner system is "perfect", an adversary can attack the larger system instead.

For example, someone abducting you and physically threatening you to compel you to enter your password.

But just because perfect security is infinitely expensive doesn't mean we shouldn't improve security where we can.

Incremental investments in security create incremental benefits.

Also, the threats don't stay still, they coevolve with the underlying system.

Very good security yesterday might be insufficient security today if the threats evolve.

16. The status games you're powerful in tend to be invisible to you.

With status games you aren't high-status in, you can intuitively feel that you're at the bottom of the totem pole.

You can feel the frustration of running into a hurricane-force headwind.

Maybe you give up and say "this is a silly status game that I don't care about" and decide to ignore it.

But in some cases, everyone else will compel you to play that status game, and you'll have to escape the context if you can't play it well.

Caricatures of industry/city dominant status games:

Finance / New York: money

Media / Los Angeles: fame

Academia / Cambridge: citations

Tech / Bay Area: impact

When you move to a city whose status totem pole aligns with your own, you'll say "this one doesn't have a status game, it's just good people doing good things"

When you have a strong tailwind, it just feels like you're a faster runner.

It feels totally natural, like you're floating.

17. A hyperstition is a belief that becomes true if people believe it's true.

Science fiction tends to be hyperstitious.

What starts out as fiction, tends to manifest in the world.

Part of the reason is that if the future in a given work of fiction isn't obviously bad, and a number of people working on a project are familiar with it, it provides a convenient Schelling point.

"We're making an AI like the star trek computer"

"We're making an AI like in Her"

This happens for protopias… but also for (subtly) dystopian visions.

Something that has subtle commentary might be lost on an audience.

Understanding subtlety takes time; most people are too busy to get more than a superficial understanding of things.

Black Mirror is a useful, nearly comprehensive compendium of different dystopias (with a few more subtle ones thrown in).

People in the tech industry tend to be more optimistic about tech.

Partially because we build it, so we feel more control over it.

But that means that tech sometimes is inspired by visions that were subtly dystopian.

"Did you actually read the book?"

18. In a high-performing organization, most employees should be doing the best work of their career.

As people gain experience and know-how, they improve.

People typically want to do a good job.

So in a non-dysfunctional environment, people are typically constantly doing the best work of their career, ratcheting up what good looks like for them.

If most people in an org aren't doing the best work of their career, that's an indictment of the org, not the employees.

Even the best employees can't do great work in a toxic environment.

19. An important thing that is too big to fail tends to get more too big to fail.

It's a Schelling point, a thing that everyone can agree is less risky than alternatives.

The bigger it gets, the more obviously it is too big to fail, making the preferential attachment effect stronger.

"Yeah, it's possible that [TBTF bank] goes down… but if they do, we're all screwed"

20. Everyone thinks they're a special snowflake, but often, if you squint, everyone is practically indistinguishable.

We're more the same than different.

But the sameness kind of averages out as background noise and so all we attend to is the difference.

21. Compounding loops have balancing loops that create an asymptote.

They bring a runaway effect into balance.

If there weren't a balancing loop, then the compounding loop would quickly go to infinity and swallow the whole universe.

A balancing loop often shows up for prosaic, even automatic, reasons, like exhausting the supply of inputs.

This compounding loop + balancing loop is what gives the familiar s-curve that shows up in almost every context.
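
A minimal sketch of that shape (the growth rate and carrying capacity here are arbitrary; the point is the curve, not the numbers):

```python
# Compounding loop (growth proportional to x) plus balancing loop (growth damped
# as x approaches the carrying capacity K): dx/dt = r * x * (1 - x / K).
def logistic_curve(x0=1.0, r=0.5, K=1000.0, steps=40):
    xs, x = [], x0
    for _ in range(steps):
        x += r * x * (1 - x / K)   # compounding term, damped by the balancing term
        xs.append(x)
    return xs

xs = logistic_curve()
print([round(v) for v in xs[::8]])   # early values climb near-exponentially, late values flatten toward K
```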

It's easier to extrapolate a compounding loop into the future: just draw a line through the existing dots.

It's much harder to imagine the balancing loop, especially at the early stages of the s-curve where the balancing loop isn't yet powerful.

Another reason it's hard to imagine a balancing loop is that the balancing loop is often not a context-free thing; it is specific to a particular compounding loop.

This is one reason why almost all trends, extrapolated forward, seem to end in dystopia: we can extrapolate the trend easily, but can't concretely imagine the balancing loop that will almost certainly kick in at some point.

22. As our ability to create grows, so too does our ability to destroy.

But the ability to destroy always has a slight edge.

That's because entropy, the default flow of the universe, is towards disorder.

Order requires a helping hand; disorder is automatic.

So our ability to destroy grows more quickly than our ability to create.

23. Money cheapens things.

Money collapses a lot of the realness of human experience into a legible, single, efficient metric.

That cleanliness gives the metric massive leverage but also makes it hollow out the thing that the money is about.

Money can make people do crazy things; things that are completely at odds with what they would have done without the money.

But of course money is an enormously important and powerful coordinating force for society.

24. A nice little aphorism I heard this week: "If you marry for money, you pay for it the rest of your life."

Money hollows things out; investments create obligations.

You want to find people you want to work with not out of obligation but because you want to work with them.

25. The currently viable use cases are ones that are, by construction, "fine" in the current privacy paradigm.

Many users simply accept whatever permission prompt they see with a shrug.

But that's partially because the only permission prompts they'll ever see (unless they're actively getting scammed) are prompts that someone thinks some users might reasonably decide to accept.

You never see a permission prompt for "send my bank details to this random other website" on a legitimate website.

There are implicit privacy constraints on what experiences are even conceivably viable.

For example, imagine a device that uses multiple cameras trained on your bed to automatically analyze your sleep, doing its processing of the video feeds in the cloud. Many people wouldn't even consider buying this product.

In that region beyond the horizon of our current paradigm, privacy matters quite a bit.

Privacy constraints are what sets the horizon of what we can reason about.

Because people are mostly fine with the status quo's privacy, we (erroneously) conclude that "people don't care about privacy."

But there are use cases that are inconceivable in this privacy paradigm, impossibly creepy, unthinkable. And those are absolutely constrained by privacy.

26. If you have a personal AI mediating everything you do, the first thing you'd ask it to do is remove the ads.

We assume that the advertising model is the one true model for consumer tech.

But what if that changes?

27. The more often you see something, the more successful you assume it is.

A founder told me this week that when she started personally tweeting quite a bit more–not even directly about the company, but about company-adjacent themes–friends would proactively tell her "sounds like the startup is going well!"

A kind of default assumption: "if it weren't going well they wouldn't be posting, or I wouldn't see their content with them having a smile on their face."

This seems like a silly assumption!

But there's some logic to it.

Things that die you don't see again.

Things that are effective tend to replicate; other people also adopt the practice.

You'll see more of an idea that persists for longer (each time step you might see it again), and you'll see more of an idea that replicates often.

So all else equal, it's not unreasonable to have an assumption that things you see often are more successful.

This logic is one of the reasons that takeover ad campaigns work.

"I'm seeing this product everywhere, it must be good!"

28. Pricing AI assistance for businesses is easier than for consumers.

A business can see that the value is "the amount of salary I don't have to pay to get the same result."

You can think of SaaS payments as being like mini-salaries to tools (and, increasingly, AI-powered agents).

Value-based pricing makes sense here: some fraction of the saved salary.
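
A toy version of that arithmetic (every number here is made up for illustration):

```python
# Value-based pricing sketch: charge some fraction of the labor cost the tool saves.
hours_saved_per_month = 20
loaded_hourly_cost = 75                                       # what the business pays for that labor
value_created = hours_saved_per_month * loaded_hourly_cost    # $1,500 of salary not paid
price = 0.25 * value_created                                  # capture a quarter of it: $375 / month
print(value_created, price)
```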

But for consumers, there's nothing quite as legible to compare it to and make it very obvious what it's worth.

29. An effective product-led growth strategy is a double win.

By making the product better, you bring down CAC, not just now but in the future.

It's not a one-off improvement (more rental / opex), but a long-term durable improvement (more ownership / capex).

30. The secret to most magic tricks: preparation far beyond what the audience thinks a reasonable person would do.

Often the preparation includes an extremely intricate / expensive gimmick.

Or just a ton of practice to make the key move invisible and natural.

You can do magic tricks in the work context, too.

"Wow, he really stuck his neck out there in that review with that off the cuff bold new idea… and it worked, everyone seemed to like it!"

When in actuality he had invested significant time pre-socializing the idea with everyone individually beforehand, knew exactly where their heads were, and could sense the thin path to walk to convince everyone.

31. Communities that grow slowly tend to be more resilient and durable.

Communities that grow in a flash attract a lot of low-engagement / low-savviness members.

For example, people who find the tool via a TechCrunch post.

Or people who want to get in on the ground-floor of a new crypto token.

Now your community is full of low-engagement / low-savviness users.

If you ask what your users want, you will get low-quality suggestions that dumb down the product.

You'll chase your worst users into the worst version of your product.

You'll get the Tyranny of the Marginal User effect, turbocharged.

Some communities are actively difficult to join in on.

Joining in requires going through a gauntlet: crawling through some amount of broken glass.

Maybe the product is very rough around the edges or hard to use.

This is a naturally-occurring gauntlet for the first release of products!

Or maybe the documentation is illegible, and takes some work to unpack.

If you ask these high-quality users what they want, they will give you interesting dimensions to develop on.

If the early adopters have to go through a gauntlet, the less-engaged bounce, and only the most-engaged remain.

Sometimes the gauntlet is too intense, and a critical mass of people never make it through.

32. Engineers try to force LLMs to behave like normal computers.

The entire reason they're so useful is that they're not normal computers.

They're unruly, squishy computers.

Lean into what makes you different, not what makes you the same.

Don't merge into the background noise.

33. In systems, the "waves" look like external fundamental forces.

But they are actually made up of a multitude of infinitesimal decisions by actors in the swarm responding to their local incentives, in a way that can add up to overall movement and create the wave.

A swarm.

The agents at the vanguard will look like they are pulling the wave, but in a way they are being pushed by it, too.

34. There's the famous statement that there are two ways to make money: bundling and unbundling.

It makes it feel like all of the movement nets out to zero over time.

Really, it's an oscillation in the ecosystem, from one polarity to the other.

An ecosystem gets too far in one direction, and so some enterprising people realize that, and see a temporary arbitrage to pull it back in the other direction.

They pull the curve back that way, ahead of the curve, but also pushed by it.

And it picks up steam, then it too gets too far to the other direction, loses momentum (this is the late stage period when everything feels static) and then starts pulling back in the original direction.

The people who are doing the bundling or unbundling are pulling the wave but also being pushed by it.

It's like the tidal forces of water on earth being pulled by the moon, always just a bit behind.

35. A totalizing worldview is self-isolating.

You stop talking to the people who challenge you or don't agree.

Which makes your views more and more the product of an echo chamber, and thus brittle.

Removing yourself from the ground truth makes it so that when you interact with the ground truth again you won't be strong enough to survive it any more.

It is only via continual interaction with the ground truth that you stay strong enough to survive it.

36. Hype is the emergent kayfabe in an ecosystem.

Don't get caught up in the hype.

Hold it at arm's length.

Surf it.

37. Platform thinking is hard because it requires designing for emergence, not building directly.

If you do one-ply product thinking for a platform-shaped problem, you can't have more than superficial, temporary non-failure in that role.

Great platform thinking is extremely difficult, because it fundamentally requires multi-ply thinking.

But the secret is that designing for emergence is a kind of magic that is applicable just about everywhere.

38. When you're dysfunctionally conscientious, you're very easy to manipulate.

Things that don't feel shame are very good at manipulation.

E.g. sociopaths… and organizations.

Organizations with a top-down mandate are machines.

They don't feel shame, because they are not human.

39. With remote attestation, if you trust the code the node is running, you can trust the node, even if you don't trust who runs the node.
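
A minimal sketch of that trust model (the quote format and signing key here are hypothetical stand-ins; real systems use vendor-specific hardware quotes and certificate chains, not an HMAC):

```python
# Sketch: the client trusts the node because the hardware attests to the exact
# code it loaded, not because the client trusts the node's operator.
import hashlib
import hmac

# Stand-in for the hardware vendor's signing key, which the client already trusts.
VENDOR_KEY = b"hardware-vendor-root-key"

def make_quote(loaded_code: bytes) -> dict:
    """What the remote node's hardware produces: a measurement of the code it
    actually loaded, signed with the vendor-rooted key."""
    measurement = hashlib.sha256(loaded_code).hexdigest()
    signature = hmac.new(VENDOR_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return {"measurement": measurement, "signature": signature}

def verify_quote(quote: dict, audited_code: bytes) -> bool:
    """What the client checks: (1) the quote is rooted in hardware it trusts,
    (2) the measured code is exactly the code it audited. The operator never enters into it."""
    expected_sig = hmac.new(VENDOR_KEY, quote["measurement"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, quote["signature"]):
        return False
    return quote["measurement"] == hashlib.sha256(audited_code).hexdigest()

audited = b"def handle(request): ..."
assert verify_quote(make_quote(audited), audited)               # trusted code -> trusted node
assert not verify_quote(make_quote(b"tampered code"), audited)  # different code -> no trust
```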

40. I was talking to someone who said they didn't trust confidential computing.

Confidential computing and LLMs are both technologies that are useful ingredients, imperfect though they may be, to iterate towards something better.

If you were a security absolutist, SSL would never have been allowed, because you have to implicitly trust all certificate authorities to do their job well.

Over time we figured it out, e.g. with certificate transparency.

How did we figure it out? A messy process of humans talking to humans. And the world didn't explode and it's gotten radically better.

Also, for confidential computing in particular, key customers include military defense contractors.

Do you trust the US military to be more paranoid than you about who can see their data?

If so, then if it's good enough for them, it is likely good enough for you.

41. A generally useful tool to help decide between two options: which will most help you grow into the best version of yourself?

The person you want to be.