Bits and Bobs 5/13/24

1. A term I love for unwanted AI-generated content: slop.

The term has an adjacency to spam, while being distinct from it.

The connotations are pitch perfect: mass produced technically edible junk fit for animals, not humans.

The term became much better known this week, and I had a front-row seat. I watched a friend flag https://twitter.com/deepfates/status/1787472784106639418, which encouraged Simon Willison to post https://simonwillison.net/2024/May/8/slop/, which got picked up by Daring Fireball: https://daringfireball.net/linked/2024/05/08/slop

It's so interesting watching a neologism burst onto the scene. Everyone intuitively felt a need for such a word, and everyone could recognize that "slop" is it.

2. A pattern in industrial design: a looks-like prototype and a works-like prototype.

They start out separate, and then you attempt to converge them.

If they successfully converge, you have a viable product!

It allows you to not focus on either/or but both/and.

3. There are two ways to build a car.

Tired: Wheels, then add a chassis, then an engine.

Not viable until the end.

No feedback on whether it actually works, or whether any incremental change works, until the end.

Wired: build a skateboard, then make it a scooter, then a bike, then a car.

Continuously viable.

Continuous feedback about how to improve it and whether incremental tweaks work.

A continuously viable solution is many, many orders of magnitude more valuable than a solution that is only viable at the end, because it is significantly more likely to produce something.

An excellent post on this: https://blog.crisp.se/2016/01/25/henrikkniberg/making-sense-of-mvp

4. When designing a system and coming across an edge case, what do you do?

One option is to add extra complexity to the solution.

To absorb the edge case and make it fit.

This is an approach that resource-rich, engineering-led companies can afford.

The complexity of the overall solution can balloon significantly with each edge case incorporated.

This is due to the Pareto principle: 20% of use cases will add 80% of the cost.

Another option: what can you remove to make it so that edge case doesn't cause much harm in practice?

Cut back to the 20% cost solution that covers 80% of the value.

Subtraction is impossible to do by consensus.

Subtraction must be done by an authorial voice.

5. If you ask an expert whether LLMs are good at a task, they'll say they're insufficient.

But what you should do is ask the person who isn't good at the task whether the LLM is better than they are!

You rarely actually have the relevant expert on hand.

The standard to beat, as Ethan Mollick has pointed out, is the best available human.

6. LLMs aren't just fluent in English, they're fluent in all languages in their training set!

Spanish.

JSON schema.

Mermaid diagrams.

A universal babelfish!

7. LLMs allow swarms of amateurs to find interesting new ideas more quickly.

A friend was sitting next to a college student on his last flight.

The friend was laboring to bang out some work emails.

The college student was writing a 20-page essay using AI tools, at a 100x faster clip.

The student was orchestrating tools like a conductor, totally naturally.

I won't comment on whether I think using AI to write a college essay is a good thing, but I will point out that this is the new baseline reality that non-centaurs will have to compete with.

LLMs allow everyone to think through a problem more quickly, to apply the existing best practices without having to read the book on those best practices.

This means that every member of the "swarm" searching for new ideas can do it more quickly.

The quality of the ideas the swarm searches for might not improve, but the swarm's "clock speed" or rate of discovering a viable idea does improve.

We should expect game-changing new ideas to be found more quickly.

Now amateurs have a hurricane force wind at their back.

Most of the things AI-assisted amateurs discover will be junk.

But every so often they'll discover something novel and valuable.

And it only takes a few of those game-changing ideas found by anyone in the swarm to make the whole swarm better off.

You could argue that YouTubers like MKBHD and Casey Neistat were members of the swarm of YouTube creators who discovered compelling and viable new aesthetics that were previously unknown.

The AI-assisted swarm of amateur humans is more likely to find more "move 37s": discoveries that, once found, change the game forever.

8. Brenda Laurel has a nice metaphor about the butler vs the horse for assistance.

The butler model robs you of agency.

The horse is a powerful, independent thing that you are in tune with.

The horse is stronger and faster than you.

Your intelligence is much higher than the horse's.

The horse's strength and your intelligence combine into a unit that is greater than what either of you could do alone.

9. Modernity was partially about de-enchanting everything.

Previously, everything was hand-made, bespoke, a totem of a kind of informal charismatic magic of the creator of the object.

Modernity led to making everything scientific, repeatable, standardized, impersonal.

Microchips put magic back into objects, but a mechanistic kind of magic.

LLMs allow a squishy, organic kind of magic to be put into objects.

What happens when everything you interact with is enchanted?

10. An alternative to an omnipotent/monolithic image of AI: Miyazaki-style forest sprites.

In Miyazaki films, magic is everywhere. Every tree, every rock, is animated by its own magic, represented as a sprite.

These sprites are magic but not omnipotent; they have a very limited sphere of influence.

These sprites have personalities.

Not monotheism, pantheism.

Imagine the omnipotent/monolithic version of AI having a personality.

It would be terrifying!

What if the personality is a bad fit for a given need?

It's only by being a smaller component of the system, and not omnipotent and omniscient, that these enchanted AI objects can have a personality.

If a given personality isn't right for what you're looking for, go find another one.

What if instead of talking to an AI omniscient oracle, you were talking to a hive mind of little AI sprites?

11. CEOs think that their companies are adopting AI carefully.

But that's only from the top-down perspective.

In many cases, individual employees are adopting it aggressively: "I use it all the time for everything."

Magical duct tape is hard to use in a structured way (e.g. top-down) but easy for anyone to use to jury-rig anything.

AI is the most individualistic disruptive tech in recent memory.

That implies the most disconnect between leadership and employees.

In the past, tools like Figma, Dropbox, and Slack grew via disruptive guerrilla enterprise strategies: individual employees would adopt them, and later they would be officially approved by IT.

These had a collaborative aspect that made them naturally viral within an organization; you would rarely use the tools secretly or alone.

But AI is a bit different; it's not about collaboration with other employees, it's about making your own work more efficient.

Using LLMs at work is a bit like a cheat code… something you often don't want your management (or even your peers!) to know.

That means that there will likely be much more adoption of AI in businesses than the businesses think.

12. In the same origin paradigm, new experiences have a significant cold start problem.

Any link is safe to click / any app is safe to start using.

But that's largely because it doesn't know anything about you, it starts from zero.

And any data you upload to it is scary, because the app/domain can do anything with that data!

EULAs theoretically constrain this, but not very much in practice because users don't read them and they are non-negotiable.

The privacy model is handled implicitly outside the core model.

It's safe to install an app, but scary to deeply engage with one.

This gives you a cold start problem, especially for experiences that have a network effect, where their quality rises with the amount of overall usage.

A product that is only a network effect can't hope to get off the ground.

To get off the ground, you have to have a compelling primary use case that works without the network effect, to give the network time to get going in the background.

Privacy is the primary source of friction in the same origin paradigm.

If you take a lateral approach that handles privacy naturally, then a whole universe of things becomes viable.

13. Desktop application model: requires high trust, but allows coordination between experiences.

Same origin model: no trust required, but also no coordination.

What if you could have both?

No trust required, but also coordination?

14. New laws of physics can be hard to build awareness of.

For the system to be acceptable, people need to intuitively grok its physics and why it's safe.

This seems like an impossibly high standard; most people won't be able to grok the model.

But in practice it's not necessary.

For example, when was the last time you thought about the same origin model, the fundamental security model that undergirds the web and apps?

In practice what you're looking for is not "will everyone understand it and therefore accept it."

You're looking instead for diffusion of awareness of the acceptability of the technically complex model.

You can do this via a chain of people vouching for it.

As long as "your more savvy friend is OK with it" holds, inductively back to the person who understands the system directly, it's viable.

15. The best proof that a thing is viable is that you built it and it worked.

But if you can project forward with thought experiments grounded in deep, relevant intuition, you can avoid mistakes.

The thought experiment only works if you have relevant knowhow, and you don't project out too far.

If the idea you're considering fails the thought experiment, it most likely doesn't work. But if it passes the thought experiment, it might work.

So thought experiments are best for finding paths that don't work and pruning them.

16. A physicist-style thought experiment to stress test an idea: multiply by 0, and by infinity, and see if it breaks.

If it breaks, the idea "fails to compile," showing that such an outcome is not viable and can safely be ignored.

Helps you prune paths of analysis.
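
A minimal sketch of what that kind of extreme-value probe can look like in Python; the `stress_test` helper and the toy revenue model are hypothetical, just to show the shape of the check:

```python
import math

def stress_test(model, name):
    # Probe the model at the extremes: effectively "multiply by zero"
    # and "multiply by infinity" and see whether anything breaks.
    for x in (0.0, 1e-12, 1e12, math.inf):
        try:
            y = model(x)
            status = "ok" if math.isfinite(y) else "breaks (non-finite)"
        except (ValueError, ZeroDivisionError, OverflowError) as exc:
            status = f"breaks ({type(exc).__name__})"
        print(f"{name}({x:g}) -> {status}")

# Hypothetical model under test: per-user revenue for a given user count.
stress_test(lambda users: 100.0 / users, "revenue_per_user")
# revenue_per_user(0) "fails to compile" at zero, so that path can be pruned.
```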

17. Generative systems cannot be modeled in your head with thought experiments.

Our brains cannot grok the emergent properties of a generative system; it's computationally irreducible.

You can't do a cached vibe with it; you must execute it.

You have to prototype and play with it.

18. Generative systems are squishy.

Previous generative systems had to be built out of hard, formal components.

It's hard to get that squishiness out of hard components (though not impossible).

Now we have squishy computers!

Maybe the next era will be an era of generative systems.

19. Having the relevant knowhow is like knowing the right arcane magic spell.

Easy if you know the spell, how to pronounce it, the right swish of the wrist to enact it.

But impossible if you don't have the knowhow already.

20. People assume that the ultimate implications of an idea are tied to how hard it is to execute.

But that's only true for incremental, obvious execution.

It's also possible to do incremental, non-obvious execution.

Combining multiple rare knowhows in a novel combination.

Lateral thinking with withered (and arcane) technology.

If you execute in a novel direction, then you can get massive implications and simple execution… but only for the people with the right knowhow.

Finding an interesting combination of useful and catalytic knowhow can make magic happen.

Sometimes the simplest ideas have the most massive implications.

21. A measure of the complexity of a task: how many PhDs does it require?

A scale of complexity for a task that grows by at least an order of magnitude a jump:

1) Could it be duct-taped together in a day?

2) Could it be duct-taped together in a month?

3) Could someone be granted a PhD for that work?

4) Could someone be granted a Turing Award for that work?

A successful project is a combination of tasks into a coherent and viable whole.

The risk of the project scales with the product of the difficulties of each task required to get to viability.

A task that can take an already-viable thing and make it better can be a higher difficulty and not matter as much.

The viability means the thing can be in the market, creating value, while you do the work.

That allows you to be patient.

If the project succeeds, the product gets significantly better. If it doesn't, the cost is only the opportunity cost.

Larger products with larger bases of use can support larger investments into improving them.

The product is unlikely to go away, and even a small improvement for a massive number of users is important.

Google-scale infrastructure needs could support, for example, the development of Spanner.

In a fractally complicated new idea there are "PhD thesis" rabbit holes in every direction.

You only want to do the bare minimum of PhD-thesis-style projects that you need.

As the system gets more momentum, it can support more PhD theses.

A new PhD thesis that blocks the path to viability means you need a miracle.

A thing that good PMs know in their bones: the hard part of most new products is not the individual tasks, it's the integration.

Even if you have all of the components sitting on a shelf, ready to be integrated with a day of duct-tape work, they still need to be the right combination; everyone needs to do their tasks in a way that coheres, and everyone needs to coordinate to do the tasks at the right time.

In a large organization, the coordination cost will be many orders of magnitude more than the actual focused implementation work.

So even if the individual tasks are all easy, the likelihood of producing a viable product quickly enough is very, very low.

Good PMs aggressively search for ideas where a possibly-viable sketch of the product could be built out of components that can be roughed in with no more than a day of uninterrupted execution.

Possibly-viable means a prototype or demo that obviously has legs and is worth developing further.

The real world (especially large organizations) rarely allows uninterrupted execution, so that "single day" will be more like a "week".

A sweet spot: use components where someone else already got the PhD for them, and now has the knowhow to make them easy to duct-tape onto a system in a day.

It's hard for others to do but easy for your expert to do: an asymmetric advantage, if it turns out to be useful.

22. The biggest risk is often not "will it work?" but "will people want it?"

That's riffing off this quote from Steve Jobs: "You've got to start with the customer experience and work backwards to the technology. You can't start with the technology and try to figure out where you're going to sell it. And I've made this mistake probably more than anybody else in this room. And I've got the scar tissue to prove it. And I know that it's the case. … As we have tried to come up with a strategy and a vision for Apple, it started with 'What incredible benefits can we give to the customer? Where can we take the customer?' Not starting with 'Let's sit down with the engineers and figure out what awesome technology we have and then how are we going to market that?'"

23. There are at least three distinct meanings of the word "privacy".

The first is compliance.

This one is the realm of contracts and regulations that affect the enterprise.

This is the one that legal departments think about the most.

The second is anti-surveillance.

This uses tools like end-to-end encryption, security defense in depth, confidential computing, on-device computation, etc.

This is the one that cryptographers and engineers tend to think about the most.

The last is contextual integrity.

This is Helen Nissenbaum's concept.

Roughly: "data is used in line with my interest and intent."

This is the one that UXR tends to talk about the most.

This is the one that most closely aligns with what users intuitively want.

Note that the first two are a different kind than the last one.

The last one is, roughly, the end of privacy, in the sense of its goal.

The first two are particular means to achieve some aspect of that end.

Contextual integrity is the platonic ideal.

You can never fully reach it in all cases.

But you can clear a good enough bar for most cases, and you can continue to improve it.

The only way to have true contextual integrity is for the user to have the agency to decide which code can run on their data.

That requires the user to run the code on their turf, where they call the shots about what happens.

24. Being on the receiving side of disconfirming evidence hurts.

That's one of the reasons that we all say we seek disconfirming evidence, but in many contexts we don't actually.

To get accountability requires disconfirming evidence.

To have disconfirming evidence requires an other to have the ability to audit and raise issues.

The other has to not feel pain directly when you do.

This is one of the reasons things like an independent review board are sometimes necessary.

25. There's a goldilocks zone in a conversation.

If you talk about things that both parties agree with already, it's boring, nothing new is discovered.

If it's a new relationship, this agreement might build trust, but after a sufficient amount of trust is built it doesn't add much.

If the things the other person talks about are things you actively disagree with, or are about topics you don't understand, then the whole conversation will be tracked as distracting, and potentially frustrating, noise.

But in the goldilocks zone the conversation is in the zone of mutual proximal development.

The most interesting frontier of understanding.

The set of things that both people are prepared to accept but one party hasn't yet accepted.

Ideas that are interesting and novel to both sides of the conversation.

It is in this goldilocks zone that the most interesting insights nucleate.

Socrates believed in the power of dialogue; Bakhtin believed that all interesting insights emerge from it.

How do you laser in on the goldilocks zone of a conversation?

The best way is if you're familiar with the other person's perspective, for example from reading their public writing.

But humans don't have the time to read each other's writing before talking (especially for prolific writers).

… But LLMs can read them! Especially using techniques like RAG and embeddings to sift through ideas.

Imagine an AI-assisted tool helping guide you to the goldilocks zone of each conversation.
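
Here's a minimal sketch of how such a tool might score candidate topics, assuming you already have embedding vectors for each person's writing. The `goldilocks_topics` helper, the toy 2-d vectors, and the thresholds are all hypothetical:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def goldilocks_topics(candidates, corpus_a, corpus_b, lo=0.35, hi=0.75):
    # A topic is worth raising only if it's neither too familiar (boring)
    # nor too alien (noise) for BOTH people. lo/hi are arbitrary placeholders.
    picks = []
    for name, vec in candidates.items():
        fa = max(cosine(vec, c) for c in corpus_a)  # familiarity to A
        fb = max(cosine(vec, c) for c in corpus_b)  # familiarity to B
        if lo <= fa <= hi and lo <= fb <= hi:
            picks.append((name, round(fa, 2), round(fb, 2)))
    return picks

# Toy 2-d "embeddings"; real ones would come from an embedding model
# run over each person's public writing.
corpus_a = [np.array([1.0, 0.0])]                 # person A's writing
corpus_b = [np.array([0.0, 1.0])]                 # person B's writing
candidates = {
    "old hat for A": np.array([1.0, 0.1]),
    "novel to both": np.array([0.7, 0.7]),
}
print(goldilocks_topics(candidates, corpus_a, corpus_b))
# -> [('novel to both', 0.71, 0.71)]
```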

26. The hard part is not the modeling of the system.

The hard part is the interface of where the model and the real world meet.

The real world is fractally complicated.

Models require a concrete thing to interface with, but the real world is nebulous.

You have to resolve it down to absurd, unworkable levels of fractal detail to get it to be concrete again, for the model to integrate with it.

Those details cost increasing amounts of relative effort at strongly diminishing value.

Similar to the coastline paradox. The closer you look, the more wrinkles you need to contend with.

27. When a remote participant in a video call breaks into the in-person discussion, they should never apologize.

Breaking in on a live meeting as a remote participant is hard enough.

Having that "no apologies" expectation, shared as a norm by everyone, reduces the barrier just a bit and makes people more likely to do it.

28. Getting volunteers to create together is hard.

Getting them to curate together is easier.

Creating generates divergence, so a coherent outcome is hard to distill.

Curating is about filtering down, which makes coherence easier.

There are more ways to go away from a point than to go toward it.

The same reason entropy emerges fundamentally!

29. A trick to help grow a potentially subversive idea in people's heads: connect 9 out of 10 dots but leave the last unconnected.

People see a dangling dot and want to connect it, and they are drawn in.

Leaving one dot unconnected allows other people to co-create meaning.

When people co-create, they feel an ownership, they are drawn in.

The "leaving a dot unconnected" is similar to the "leave a simple TODO to do in the morning to get sucked back into the flow of programming" trick.

When telling a story, the indirectness allows the listener to extract something different from the literal message, leaving a dot unconnected.

The safe subversive tone: put the spicy stuff exclusively between the lines.

Make the reader work for it to extract the spicy stuff.

30. Organizations tend to become inward-focused over time.

Each person spends more time talking to other people in the organization than people outside.

You erroneously, intuitively conclude that the whole world cares about things within that space, because all of the people who you talk to care about it.

The social dynamics of those interactions come to dominate, to take all of the attention away from the ground truth, from the outside world.

The attention that must be paid to navigate the inner world is the maintenance cost of the organization.

The activities the organization does that impact the outer world are value creation.

The organization becomes a hyper engaged universe that folds into itself, that only makes sense within it.

The inner world of an organization is its kayfabe.

From inside it looks like everything; in the limit nothing beyond the org's horizon is visible.

From outside it looks like nothing, like random, chaotic noise.

When you cross that boundary, you are captured by it.

This happens for formal organizations, but it can happen for any collection of people.

For example, tech ecosystems that are heavily interconnected, but isolated from the rest of the surrounding ecosystem.

As people get pulled in, they get increasingly pulled more in, away from the outer ecosystem.

When you are pulled into the kayfabe of the inner world, you lose yourself. The outer world doesn't care that you're distracted; the ground truth of the outer world may smash you and you won't see it coming.

31. Kayfabe in an organization is a giant whirlpool.

It is patient, and strong, and gets stronger over time.

Every time you give it an inch, it will never give you that inch back.

It will suck you down into its own internal logic and never let you back up.

32. A way that kayfabe grows in any organization of sufficient size:

The kayfabe has delaminated from the ground truth.

If you privately point out the ground truth, a leader will pull you aside and say something like:

"We know the official plan is not perfect. It has a ton of room for improvement. But if you point out the ground truth, you'll shatter the whole thing and throw it into chaos. Don't break it, help fix it!"

That sounds totally reasonable!

But sometimes it's so far gone that you realize that not only is each incremental bit of effort harming users and harming employees, it's also harming the company.

And with each incremental bit of effort, the problem is getting worse, not better.

In that situation, it's not possible to "fix it". If you try to, you'll be sucked into its swirling currents, dragged into the gravity well, lost like everyone else around you.

33. Signing is useful when you have to convince someone else that you said something.

You don't need signatures to convince yourself that you said something.
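
A minimal sketch of that asymmetry, using Ed25519 signatures from the `cryptography` package; the message and key handling here are purely illustrative:

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"I said this."
signature = private_key.sign(message)

# A third party holding only the PUBLIC key can check the claim.
# That's the point: signatures convince someone else, not yourself.
try:
    public_key.verify(signature, message)
    print("verified: the keyholder really said this")
except InvalidSignature:
    print("signature does not match")
```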

34. If success requires a single story to take hold in a specific person's head in that specific moment, you need a miracle.

If success requires any part of a story to take hold in at least one person's head in a swarm of people at some point in the future, it's not a miracle.

Over long enough time horizons it approaches a certainty.

35. It doesn't matter if you paint yourself into a corner if your thing isn't useful to anyone anyway.

A tension: the more you try to defend against painting yourself into a corner, the more likely you are to overcomplicate and die before reaching product-market fit.

36. Coordination is about creating momentum.

By default, the sum of chaotic movement of components is zero.

No net motion of the system happens.

When you get some subset of the components even somewhat aligned, the force nets out to something non-zero.

That non-zero force points in a particular direction.

You get movement, slowly at first.

Components tend to swing to align with the direction of momentum each time step, in proportion to how much momentum there is.

The parts that aren't aligned either align themselves a bit more or drop out.

The parts that join are more likely to be pre-aligned.

So over enough time, with some coherent, continued momentum, you get alignment no matter how misaligned things were originally.

This means that alignment tends to beget more alignment over time.
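
A toy simulation of that dynamic; the update rule and constants are made up purely for illustration, but they show alignment starting slow and then compounding:

```python
import math
import random

random.seed(0)
# 200 components, each a unit vector pointing in a random direction.
angles = [random.uniform(0.0, 2.0 * math.pi) for _ in range(200)]

def net_momentum(angles):
    x = sum(math.cos(a) for a in angles)
    y = sum(math.sin(a) for a in angles)
    # Strength is near 0 for pure chaos, 1.0 for perfect alignment.
    return math.hypot(x, y) / len(angles), math.atan2(y, x)

for step in range(40):
    strength, direction = net_momentum(angles)
    if step % 5 == 0:
        print(f"step {step:2d}: alignment {strength:.2f}")
    # Each component swings toward the net direction, in proportion
    # to how much momentum there already is: alignment begets alignment.
    angles = [a + strength * math.atan2(math.sin(direction - a),
                                        math.cos(direction - a))
              for a in angles]
```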

37. Every successful pattern gets cargo-culted over time.

The nuance is hard to communicate in language, and each replication caricatures it further, losing nuance.

A repeated photocopy, losing detail each time.

Bezos's stance is something like "the presence of a cross-functional meeting indicates a failure to make the right APIs".

When you make an API, the nuance / responsibility might drop out if you aren't careful.

It's important to make sure the API captures the nuance and emergence of the system.

If not, you'll get a hollowing out of the system that leaves it incapable of navigating existential problems that the formal system can't understand, let alone address.

This is part of what happened with Boeing: subcontracting out key components to save money according to the spreadsheets, while losing all of the internal antifragility that let the system evolve, adapt, and grow.

38. Normally, if someone has authority over someone else, the subordinate has to believe the one with authority is right.

If it's informal authority, that must be true (at least broadly).

Informal authority cannot be coerced.

If it's formal authority, then it can skew from ground-truth reality.

But the prime emergent directive is to act like your manager is correct, even if you don't believe it.

If you don't, your boss can fire you (or make your life much harder).

When the ground truth of belief skews, the subordinate has to lie.

Lying tears your soul apart.

39. CRDTs only guarantee everyone eventually sees the same thing.

Not that it makes any sense or is good, just that it's the same.
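
A minimal last-writer-wins register, one of the simplest CRDTs, makes this concrete: the replicas are guaranteed to converge, but which concurrent edit "wins" is arbitrary (here, a tie broken by node id):

```python
class LWWRegister:
    """Last-writer-wins register. Merge is commutative, associative,
    and idempotent, so every replica converges to an identical value."""

    def __init__(self):
        self.value = None
        self.ts = (0, "")          # (logical time, node id) breaks ties

    def set(self, value, ts, node_id):
        self.value, self.ts = value, (ts, node_id)

    def merge(self, other):
        if other.ts > self.ts:
            self.value, self.ts = other.value, other.ts

a, b = LWWRegister(), LWWRegister()
a.set("meet at noon", 1, "alice")
b.set("meeting cancelled", 1, "bob")   # concurrent edit, same logical time

a.merge(b)
b.merge(a)
assert a.value == b.value              # convergence: guaranteed
print(a.value)                          # "meeting cancelled": bob won purely
                                        # because "bob" sorts after "alice",
                                        # and alice's edit is silently gone
```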

40. Most syncing contexts have to assume that one peer might be gone for an arbitrarily long time and need to re-merge later.

You save a ton of complexity if you can assume any of:

Roughly 24/7 availability of peers, with only minor interruptions.

No need to automatically merge peers cleanly in the future.

If you can simplify that, you cut out a 20% edge case that causes 80% of the work.

41. If you pack all of your hopes and dreams into one box, that box surviving is existential.

You'll defend it like your life depends on it.

42. Instrumentalist reasoning can't say what it's for.

It has no value system.

It can optimize a thing, but it can't tell you why.

It is a means, not an end.

A powerful, very useful means!

The values have to come from outside of instrumentalist reasoning.

It's easy to fall in an optimizer trap.

"We should apply instrumentalist reasoning to optimize that thing"

But that raises the question: "to what end?"

Work with people who have a perspective, who believe something beyond just "we should apply instrumentalist reasoning".

43. In abundance, taste for viability is important.

In a smaller company, viability is ultimately something like "things users would be willing to pay for at a price we'd make money on."

But in a large corporate context, a proxy for viability is used.

It will take lots of time for the actual idea to be executed and tested in the market, and you need a steering signal before that.

So in that context the proxy might be "things internal leaders will think are viable".

Over time the proxy becomes the metric (it's what's most immediately important for survival of the idea in that context), and the ground truth is increasingly forgotten.

Every successful organization's selection pressures are inadvertently pulled inward, into the internal social structure.

If your idea is not viable within the organization (others won't collaborate or will seek to kill it) then it dies. This becomes an inescapable, large constraint as the organization gets larger and larger.

If you don't ground-truth externally, it becomes an island where all selection pressures are internal.

You select for becoming a dodo.

Only viable on that island, but as soon as a land bridge of ground truth shows up you're dead.

44. Agility and coherence are in tension.

45. Being a high-level executive in a large organization is more about management than leadership.

Being a leader requires having a perspective on the ends, not just the means.

In the former role it's very hard to have a perspective, to stand for something.

The thing you might be required by the org to stand for might change at any point, and you need to be able to credibly stand for the new thing.

In practice you get a cargo culting of instrumentalist reasoning, "I care about optimizing". That sounds like a perspective on ends, but it isn't one.

A "data driven" company will lean more towards eroding away a perspective in favor of an optimization lens.

But that is a perspective, just an unbelievably bland one.

46. A decaying organization often harms itself.

When an organization is under stress, the obvious thing is to get rid of the high-variance / low-legibility people.

To improve the machine, you have to optimize the machine.

But those people, those seedlings that have been cut away, are precisely where the upside comes from.

Seedlings, to start, are difficult to differentiate from dandelions. But some seedlings could grow into massive oak trees.

The seedlings of an organization are the high-variance, interesting things to select over.

If you get rid of the seedlings, you're left with an organization that can only maintain itself, executing along the course it's already on.

It can't learn, or innovate.

Those growth and innovation seedlings will necessarily look illegible, high-variance.

They will be outside the system, the "way things are done". That's what gives them potential to be something different, an improvement.

Of course, most seedlings are dandelions: noise, junk.

But seedlings are the raw input to a selective culling system.

An org in distress will neuter the very thing that gives it the potential of unexpected upside in the future.

47. An operator will change themselves to fit the imperfect system.

Artists will refuse, and will try to change the system knowing they will likely fail (tilting at windmills). Because the alternative is unthinkable.

48. A bad idea: hiring chefs and having them do line-cook-style roles.

1) Chefs don't make very good line-cooks.

2) Chefs are expensive.

3) Chefs will hate line cook work and burn out.

4) The entire point of a chef is the unexpected upside: that they might do something great that you never asked them to do.

A seedling of innovation that could grow into a whole new tree of value.

If you find yourself with an army of chefs, the best move is to take a messy, bottom-up approach.

At least you'll have exposure to the unexpected upside!

49. People are only willing to sacrifice for the collective if they think it's their "team," joined with it in a non-transactional way.

Sometimes it's obvious that it's another team, e.g. when you're collaborating with employees of other companies.

But in a large enough company, the "same team" mindset often doesn't hold. There can be legitimately big differences in goals and expectations across different parts of a very large company!

50. A pattern to automatically improve quality: make educated guesses of what a user might want to do based on the aggregate behavior of similar users.

The more that the suggestion is accepted (or at least not rejected) by users in other contexts, the more confidence you get in the quality and generalizability of that suggestion.

This approach doesn't work for high-downside scenarios.

Especially scenarios where, unless a user takes the time to audit the suggestion, they wouldn't notice a mistake.

In that case, just because the vast majority of users have not rejected the suggestion does not mean that it's high quality.

But you can fix that by creating an asymmetry: if any user rejects the suggestion, take that as a much stronger signal than many, many users accepting it.

A user making a proactive "that's not right!" is way more powerful than a user passively going "shrug, looks OK to me I guess".

But in each case what matters the most is: which situation had the most proactive and informed user intent? And how bad is the downside if the suggestion is wrong?
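
A minimal sketch of that asymmetric weighting; the `suggestion_score` helper and its weights are illustrative, not calibrated:

```python
def suggestion_score(accepts, rejects, accept_w=1.0, reject_w=50.0):
    # Passive acceptance is weak evidence: users may never have looked.
    # A proactive rejection carries far more information, so it is
    # weighted much more heavily.
    signal = accepts * accept_w - rejects * reject_w
    weight = accepts * accept_w + rejects * reject_w
    return signal / weight if weight else 0.0

print(suggestion_score(accepts=1000, rejects=0))    # 1.0
print(suggestion_score(accepts=1000, rejects=10))   # ~0.33, drops sharply
```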

51. Two reasons a startup could be in stealth.

First (the most common) is to build buzz, so that when the product is ready to reveal to the world, they can do so with a bang.

You might call this the "big bang stealth" playbook.

If your big bang fails or falls on its face, you're out of the game.

People who are intrigued will hear about it, and set their expectations high.

You also won't get feedback to help steer development and make sure what you have is viable.

The expectations can become nearly impossible to meet, making it very likely you'll die.

The big bang going well is a miracle.

Another reason to stay in "stealth" is to avoid setting high expectations.

You might call this one the "illegible stealth" playbook.

Avoid having people hear about the thing from Hacker News or TechCrunch.

You can be in "stealth" and be doing things in plain sight, just deliberately illegible.

This allows highly motivated members of the community to fall into the rabbit hole, and decide if what's there is interesting or not.

This allows more capped downside and continuous feedback during development.

52. Generative systems often look crappy to start.

No Man's Sky started off significantly, embarrassingly below the original vision.

This was captured well in the "Jurassic Park harmonica" meme, which makes me guffaw every time I watch it.

But No Man's Sky, after years of iteration, persisting and playtesting and improving, really did get close to the original vision.

Generative systems have to have some kind of real-world selection pressure to tune them.

In the meantime, there's a trough of crappy output.

No Man's Sky used a traditional marketing playbook, and the high expectations almost killed it out of the gate.

Dwarf Fortress got through that by doing the illegible stealth playbook.

For a long time, Dwarf Fortress looked like nothing at all!

There wasn't even a GUI for many years!

53. Perfectionism prevents continuous viability.

"That's not perfect, so we shouldn't do it."

But perfection is impossible, and in the meantime you're frozen and can't do anything.

Different solutions can be better or worse than others.

Just because perfection is impossible doesn't mean improvement is.

The right answer when you come across that kind of nihilism is to say: "You're right, it's not perfect! But it's better than the alternatives in use today and it is viable today. Feel free to develop things closer to the perfect ideal, those will work in the system too! In the meantime we'll do this because we think it's viable and has a clear path to iterative improvement."

54. Non-complex thinking is often not correct.

But at least it's simple, and gives you an action to take quickly.

After taking the action you get feedback on whether it worked in the real world, information that can help you make a better bet next time.

With a complex lens on a complex topic, you might not have a clear next step; you just go in circles, swirling in confusion until you die.

55. Aishwarya's insightful riffs on my "perfect is boring" from last week:

"makes me think of "quintessence" in products - 'the most perfect or typical example of a quality or class.'

the bic pen, levis 501 jeans, etc – quintessential items tend to be bland looking, no personality

perfection implies a lack of variability or uniqueness"

56. There's something magic about a project room.

That is, a dedicated room where the project members can accumulate post-its, scribbles, etc. as they work.

A few things it does:

A shared memory palace allows everyone to maintain richer and more nuanced memories by spatializing them.

Humans are significantly better at recalling spatialized memories!

Shared landmarks orient everyone on the team in similar spatial metaphors, allowing a gesture in a direction to remind people of a concept.

Stigmergy, to offload some of the thinking and memory into a physical space.

The team is storing state outside of their minds, even if they're illegible scribbles.

If you tear down that state it's like a lobotomy.

A space that is only temporary, that will reset to a base state every night, cannot accumulate that meaning and state.

57. Any single idea in an r-selected context is fragile and unlikely to succeed.

But the overall system that includes r-selected items, the meta-idea, the swarm, is antifragile.

58. Someone last week framed the SAFE agreement as kind of like a union for founders, catalyzed by YC.

When an investor and a founder are haggling, the investor has the upper hand because they've done 100 or 1000x more deals than the founder has.

They have the relevant knowhow, and the founder does not.

A SAFE says "here's a widely agreed-upon Schelling point that everyone finds reasonable, no need for haggling."

The more that everyone just uses a SAFE, the more that anyone who doesn't will look suspicious, and feel compelled to use one.

59. You exist in your own narrative as a permanent, inescapable fixture.

But you have to make a case to exist in anyone else's.

Why should they bother to think about you, to include you?

60. Kierkegaard: "Anxiety is the dizziness of freedom."

The swirls and chaos of not having any constraint to hold you down, a stable point to build off of.

61. You can't aggressively give someone a zen riddle to blow their mind.

Something that will blow someone's mind might turn them upside down.

It's a mind-virus. The receiver has to trust that they'll like what the virus will do to them.

If the virus is coming from someone transactional and aggressive, they'll resist it, they won't even let it in.

If the source of the mind-virus is kind and non-transactional (if they trust them), they might let it in.

62. Planting a tree is a statement that you care about the future of that location.

A way to get grounded in history and show a non-transactional relationship to that location.

Later, you can brag: "I planted that tree over there 20 years ago" to prove your connection to that location.

63. Most computing experiences today are about zoning out.

Passive.

Turning off your agency.

Going with the flow.

Using a system to extend your creativity and agency, getting users to want to do that, to get up off their butts, is hard.

But humanity depends on it!