Bits and Bobs 4/8/24

1. New paradigms tend to eclipse previous paradigms, not replace them.

2. A thing that is viable in one set of laws of physics will not necessarily be viable in a new set of laws of physics.

The viability of a thing is less about its intrinsic properties in a vacuum, and more about the laws of physics of the environment it's in.

When the laws of physics change, everything changes.

Things we took for granted in the other physics suddenly become non-viable.

And things we assumed were impossible suddenly become gloriously, surprisingly viable.

This is a hard idea to wrap your head around. It's so foreign.

It's like me saying "OK, so now objects will fall up."

You'll think you get it, but it won't stick in your brain until many interactions with it, because it's just so fundamentally different.

It changes absolutely everything.

3. In the web, users could travel to any content instantly.

We need to take that UX a step farther.

Instead of users going to experiences, experiences should come to users at precisely the right moment.

In some ways, the inverse of the web.

To do that will require taking the privacy and security laws of physics one step further.

Instead of the one-size-fits-all same-origin policy, we need a more nuanced model that makes everything provably private by default.

By flipping the privacy model on its head, you create the potential for transcending the limitations of the current web/app paradigm.

The right privacy laws of physics + AI = a big bang of a new universe of previously unthinkable software.

4. Everyone else takes the app model for granted.

I take AI for granted.

The former is a backward-looking perspective.

The latter is a forward-looking perspective.

I have more confidence that AI will be useful than that apps will be the dominant paradigm in this new era.

5. Systems with humans embedded are more resilient.

Many finance jobs from back in the day are now handled by a spreadsheet cell.

This creates extraordinary efficiency that allows massive leverage, making whole new classes of things viable.

But it also loses something: the human judgment in the loop.

"This is way off from what it normally is... let me show my manager just in case."

An org made of people is a living thing in a way a spreadsheet is not.

A spreadsheet and an organization might have similar output in typical conditions.

But the org will handle anomalous situations much better.

If you only see them both operating during normal conditions, you'll erroneously assume they're mostly the same.

Having a system that includes humans embedded throughout it is way more resilient than one that does not.

At the early stages of AI-based experiences, we'll want humans in the system, tinkering, tweaking, responding, interacting.
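That "let me show my manager" instinct can be sketched in code. A toy example, with the threshold and values invented for illustration: a pure formula computes blindly, while a human-in-the-loop version escalates anomalous inputs before they flow onward.

```python
from statistics import mean, stdev

def process_value(value, history, escalate, z_threshold=3.0):
    """Compute the usual result, but flag wildly anomalous inputs for a human.

    `history` is a list of previously seen values; `escalate` is a callback
    standing in for "this is way off... let me show my manager just in case".
    """
    if len(history) >= 2:
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(value - mu) / sigma > z_threshold:
            escalate(value, mu)  # a human judges the anomaly before it flows onward
    history.append(value)
    return value * 1.1  # stand-in for whatever the spreadsheet cell computes

flagged = []
history = [100, 102, 98, 101, 99]
process_value(100, history, lambda v, mu: flagged.append(v))   # typical: no flag
process_value(5000, history, lambda v, mu: flagged.append(v))  # way off: escalated
```

The spreadsheet and the org produce the same `value * 1.1` in normal conditions; the difference only shows up when the anomalous input arrives.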

6. The late stage of a paradigm is efficient but soulless.

The robber baron era.

Highly centralized, highly extractive.

We're in the late stage of the app paradigm.

7. One-time-use software is easy to write.

It gets significantly more difficult if you want to generalize it.

To make it more resilient to unexpected or challenging contexts and inputs.

If you're assembling a one-time-use software out of existing, generalized building blocks, it can be easy.

The more often you run a given assemblage, the more effort you should invest in generalizing it, packaging it up into a reusable, resilient building block.

8. Building a bit more on the LLMs as trained circus bears analogy from last week.

The trajectory we're on as an industry is not better-trained circus bears, but more of them on the loose, and the average circus bear being less well trained!

It's a wild bear that's only been trained with clicker training.

It works well but if it gets confused or angry that clicker isn't going to stop it from doing some damage.

Using LLMs to power customer-facing enterprise chatbots is wild.

As a company you want the circus bears to represent your brand directly to users?

That's like putting your customers in a cage with the bear!

9. A trained circus bear is an untrusted component.

To work with it effectively and safely, you don't have to make it intrinsically trusted.

That might be impossible, and very dangerous if you get it wrong.

You need to figure out a way to work with it productively given that it's untrusted.

LLMs are gullible and squishy, and highly susceptible to their (perhaps hidden to you) inputs.

You must treat LLMs as an untrusted component in your system.

But if you do, you can get a lot of great output out of them.

One way is to put it inside of a cage.

A cage might be as simple as a sandbox.

You assume the bear might break anything in the cage.

But by being careful with what you put in the cage you can prevent downside.

The bear alone is untrusted. The bear + cage combination is trusted.

The key question becomes making the most effective cage.
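Here's a minimal sketch of one possible cage, with every name invented for illustration: the untrusted bear proposes actions as plain data, and the cage only executes allowlisted actions whose arguments pass validation.

```python
# A minimal "cage": the LLM (untrusted) proposes actions as data; only
# explicitly allowlisted actions with validated arguments ever execute.
# All names here are illustrative, not a real API.

ALLOWED_ACTIONS = {
    # action name -> validator for its arguments
    "search_notes": lambda args: isinstance(args.get("query"), str) and len(args["query"]) < 200,
    "add_reminder": lambda args: isinstance(args.get("text"), str),
}

def run_in_cage(proposed_action, handlers):
    """Execute an untrusted proposal only if it fits the cage's contours."""
    name, args = proposed_action.get("name"), proposed_action.get("args", {})
    validator = ALLOWED_ACTIONS.get(name)
    if validator is None or not validator(args):
        return {"ok": False, "reason": "outside the cage"}  # the bear stays contained
    return {"ok": True, "result": handlers[name](args)}

handlers = {"search_notes": lambda a: f"results for {a['query']}",
            "add_reminder": lambda a: "saved"}

inside = run_in_cage({"name": "search_notes", "args": {"query": "birthday"}}, handlers)
outside = run_in_cage({"name": "delete_all_files", "args": {}}, handlers)
```

The bear + this dispatcher is trusted even though the bear alone is not; the allowlist is literally the shape of the cage.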

You can imagine a highly bespoke and contoured cage giving the bear just the right maneuvering room to accomplish what you want it to accomplish.

Too big, and the bear can do some damage, and destroy anything in the cage.

The bigger the cage, the more nervous you have to be about anything you put in it; you might have a big cage but with few things you put in it.

Too small, and the bear is constrained and doesn't have the autonomy and maneuvering room to accomplish what you want.

There's no room for the bear to surprise you with a better-than-expected result.

The optimal size of the cage has to do with the downside risk.

If it's a high downside risk, you want it to be smaller.

The bear will be able to do something dumb, but not dangerous.

A normal cage is a big cube.

Straight, easy-to-reason-about edges.

E.g. the same-origin model in browsers today.

But the real world is fractally wrinkled and complex.

A straight line will slice right through the middle of a real world concept.

This makes it very hard to get precisely the right things in the cage and the right things outside of it.

You end up with rough approximations, bounding-box style answers.

That puts lots of things into the cage you'd rather not include, or requires you to leave many things that should be in the cage outside it.

Imagine a new kind of nanotechnology that allows you to create highly contoured, bespoke cage shapes for precise situations, while still being strong enough to contain the bear.

Imagine if this nanotechnology could also reconfigure itself at will; a shape-shifting cage perfectly bespoke to the needs of the moment.

Kind of like a Holtzman shield from Dune, but to keep the bear in instead of attackers out.

Everyone today is focusing on making the bear smarter or more docile.

An asymmetric approach is to create nanotechnology for a space-age dynamic cage.

Such an approach would effectively allow new laws of physics.

10. Creating the seed of an ecosystem is like creating a frankenstein.

You assemble a lot of different components and hope that when stitched together in just the right way they'll have the spark of life.

What matters is not the quality or coherence of the individual components; it's whether the whole can be alive.

People focus on the parts because they're easier to see.

But what's actually important is the whole.

Very small differences in the way the components are combined can lead to success or failure.

You need a highly tuned knowhow for how to work with the materials on hand and combine them in the alchemical way.

Be scrappy with the components you use, clever in the way you combine them.

11. The enabler of an ecosystem is not the technology, but the Schelling point.

The actual tech often looks dinky.

Perhaps just a particular novel assemblage of existing components.

The reason an ecosystem becomes alive is not because of the components, but because its assemblage provides a viable Schelling point.

As more participants are attracted to the Schelling point, it makes the Schelling point even stronger: a gravity well.

What is most important is that first gasp of air as it crosses the Rubicon to viability and becomes gloriously alive.

12. In ecosystems, the physics set the constraints, but the living things are most important.

The atoms and forces matter, but only indirectly.

What matters the most is the ecology.

Does the ecosystem spring to life?

If not, the ecosystem is just a gruesome corpse.

Everything is about getting the ecosystem to take that first wild gasp of air, to become alive.

The right mindset is not builder but gardener.

13. When content comes to you, lots of things change.

Imagine your friend has a blog that has lots of rich insights.

Someone could create a Chrome Extension based on that content.

Whenever you're reading a page, it would see if there's a relevant insight from that content and display it.

The right insight at the right moment could plausibly be magical.

This model has some problems.

First, installing the extension is very high friction. I have to:

1) Hear about the extension

2) Decide it might be useful in some abstract, fuzzy way in the future

3) Install the extension

This requires a strong primary use case to get over that hump, some acute user need–which most users, even close friends of the author, wouldn't have in this case.

Also, the extension, once installed, will likely over-trigger.

The extension's entire reason to exist is to share insights relevant from its author's content.

In the same way that everyone thinks they're special, the extension's logic is "my entire reason for being is to show these insights, I should do it whenever I think it's useful."

But this will likely lead to significant over-triggering.

The extension's triggering logic is content-centric.

After a few too many distracting triggers, even the most motivated user will uninstall the extension in frustration.

But imagine instead someone creating a recipe that your personal AI can take note of.

Your own personal AI can run the recipe in the background, and then decide if it makes sense to show it to you at that moment.

The recipe's trigger logic is user-centric.

Your personal AI could develop a better and better model of what kinds of content, from which authors you find relevant, and trigger only in the cases it is very likely to be useful.

Your personal AI could do a significantly better job managing your personal attention preferences than any swarm of content-centric extensions could ever do.

To know when to trigger, you need a holistic sense of what the user wants and what else is available.

A single extension only knows its own context, not the overall context of the user and what else could be shown.
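A sketch of the difference, with all names and weights hypothetical: each recipe self-reports its relevance (the content-centric signal), but the personal AI weighs that by its learned trust in the author and a global attention budget before anything is shown.

```python
# A sketch of user-centric triggering (all names hypothetical): each recipe
# reports how relevant it thinks it is, but the personal AI weighs that by
# the user's learned trust in the author and a global attention budget.

def pick_insight(candidates, author_trust, attention_budget, threshold=0.5):
    """candidates: list of (author, self_reported_relevance) pairs.

    A content-centric extension would fire on relevance alone; here the
    user's own model gets the final say, and at most `attention_budget`
    insights are shown.
    """
    scored = sorted(
        ((relevance * author_trust.get(author, 0.1), author, relevance)
         for author, relevance in candidates),
        reverse=True,
    )
    return [(author, relevance) for score, author, relevance in scored
            if score >= threshold][:attention_budget]

author_trust = {"close_friend": 0.9, "random_extension": 0.2}
candidates = [("close_friend", 0.8), ("random_extension", 0.95)]
shown = pick_insight(candidates, author_trust, attention_budget=1)
```

Note that the over-eager extension's 0.95 self-assessment loses to the trusted friend's insight: the holistic context, not the component's enthusiasm, decides.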

14. LLMs don't reason; they mimic.

They are absurdly good at mimicking patterns.

LLMs are the ultimate wisdom of the crowds.

They have the same success (and failure!) conditions.

But it's easier to get tricked by the failure modes.

LLMs are always so polite and confident sounding even in their failure mode.

15. Even if your ecosystem has a ton of momentum, if the ceiling is low it doesn't matter.

For example, if your system requires users to take an action on the command line, you've set a ceiling.

The ceiling of the audience of a tool is set by what the worst case outcome is that might show up more than 5% of the time.

If when it breaks you have to tweak JavaScript then only programmers will be able to use it.

Or, if a user needs to use the command line to set it up.

The population of users that will be willing or able to use your thing is now set very low.

Sometimes if it has enough momentum it's possible to build an easier-to-use version to get lower-savviness users in.

But just as often, it's extremely hard to cross that Rubicon.

If your system takes for granted that early users will use the command line, and "we'll figure out a different UX for less savvy users later", you might back yourself into a corner and have no good options.

This effect is especially strong when the most friction is felt at initial onboarding.

The savvy users are already on board the ecosystem; they never again feel the friction, and it doesn't loom large for them.

The friction of first use to them felt like a few gentle rolling hills they've long forgotten about.

But the friction of first use can be like the Himalayas for new users who are less savvy.

You get boxed in by the Himalayas; they set a horizon for your ecosystem that it can't expand beyond.

If you want to change the whole world, the ceiling has to be in the stratosphere.

Every user of software in the world should ultimately use it; otherwise it doesn't matter.

You can have all of the builders or developers in the world, but if you don't get any of the general population, it doesn't matter.

16. An ecosystem has not crossed its Rubicon of viability until demonstrating network effects strong enough to break through the early adopter ceiling.

If you have network effects but a low ceiling, it can't change the whole world.

To change the whole world, you have to have gravity-well style network effects, and a thing that every user in the world could plausibly come to use.

Something that has to be `npm install`ed cannot become viral. The real virality comes when a user can share a link that another user can then use without using the command line.

17. In today's laws of physics, if there's a feature you want in a widely used app, you have to hope that a million other users also have the same need.

If not, then the feature will never be prioritized. It will never stand out from the chaotic background noise of user requests.

The bigger the app is, the more this effect dominates.

This is the dynamic that drives the phenomenon in Ivan's excellent The Tyranny of the Marginal User.

18. In today's laws of physics, the only consumer-facing products that are viable are aggregators.

This is kind of insane!

Aggregators are like concentrated, walled oases that are powered by sucking in all of the surrounding moisture.

They leave vast, barren deserts all around them where absolutely nothing can grow.

On a fundamental level, aggregators can only do one-size-fits-all products that leave wide swathes of consumer use cases untouched!

This is not the aggregators' fault; they are after all providing significant user value.

It's a consequence of our current laws of physics.

There are whole universes of things that humans want but today's software paradigm can't do.

A lot of amazing consumer products demoed at the most recent YC demo days are not viable in today's laws of physics.

But what if there were a new ecosystem with different laws of physics?

19. I was imagining bespoke situated software for planning my five-year-old's birthday party.

Someone countered that that was a niche use case.

But it's not a niche use case!

Every parent in the world will have this use case at some point.

The only reason it seems niche is because it's not viable in our current laws of physics.

It's not possible to build a single one-size-fits-all five-year-old-birthday-planning app.

Even if it were, it's not possible to distribute it.

Users have the need at one specific time in their life and never again.

For the users to use the app, they'd have to know it exists, think to use it at that moment, and do the high friction steps to install and onboard.

That would require massive amounts of marketing to accomplish… more marketing than the value of the app supports.

But that's a limitation of today's laws of physics, not a lack of importance of the human needs.

The universe of one-size-fits-all, expensive-to-distribute software required by today's laws of physics simply cannot cover all of the possibility space of useful software.

There are millions of these use cases that are human needs but cannot be addressed by software today in our current laws of physics.

We need new physics!

20. LLMs break the current laws of physics.

The current laws of physics assume:

1) easy teleportation

2) cheap distribution

3) low cost of compute to run the experiences.

That is: write a service once, somewhat expensively; convince users to travel to your origin; then run it for them extremely cheaply.

The stable outcome of our current laws of physics is consumer services are, by and large, supported by advertising.

But LLMs upend this logic!

Building an experience becomes cheaper.

Running it becomes significantly more expensive.

LLMs can't be supported by ads revenue.

The burgeoning consumer model is subscriptions… but how many subscriptions will a consumer be willing to pay for?

Imagine a model where users pay one subscription cost to get access to all experiences.

Instead of paying a walled garden for access, the user's "subscription" cost is paying their own compute.

Not too dissimilar to paying for bandwidth to access everything the internet has to offer.

LLMs do not work in the current laws of physics.

We need new laws of physics.

21. What advertising looks like in new laws of physics is unclear.

Perhaps advertising won't be the primary model.

Maybe the dominance of advertising in the current laws of physics is highly specific to these physics.

Maybe if users don't have to travel to experiences, but instead experiences come to them, advertising to induce traveling is less useful.

Maybe advertising will be about companies with a product to offer subsidizing the computing costs of executing their recipe, so users are more likely to choose to run it?

If users are already paying for compute, then money is already flowing through the system.

Small, clever diversions of some of that money flowing through the system could help get new flywheels going.

Like flowing water that can be directed through channels to spin new waterwheels and do new kinds of useful work.

Once you have energy flowing through the system, it becomes easier to divert some of its flow to make new things happen.

22. Google Search is a hyper-bespoke yet mass-market product.

It was one-size-fits-all, but also perfectly bespoke to each user's needs.

The user interacts with it, in a kind of conversation of intent and action.

It seamlessly morphs itself to be precisely what that user needs in that moment.

Every user query from the beginning had a good-enough answer.

As more users used it, it got better and better answers for more and more users.

This is possible because of the laws of physics that allowed instant teleportation to new content.

A one-size-fits-all product that could cover a long tail of hyper-bespoke user needs, automatically.

Powered implicitly by the web's laws of physics, an ecosystem of content and creators, and clear signals of users' intent and improvements to the system.

23. If savvy users can fix a product failure case, that gives the potential for a self-bootstrapping quality system.

Your product must have quality good enough most of the time for most of your users.

Some products, when they don't work, don't work at all: a slammed door.

But some have the characteristic where savvy users can easily tweak or fix the broken result to make it work.

If you can aggregate these tweaks, you get the ability to automatically improve the product's quality by the wisdom of the crowds.

Here's a concrete example from a search context.

Imagine a user searches for [foo], a new kind of thing that he wants to see images of.

No images show up in the search results; perhaps the search engine hasn't yet noticed that foos are very image-y.

The user fixes the issue with a new query: [images of foo].

This is an unambiguous signal to the search engine of what the user wants, and it can confidently show images.

This was one savvy, motivated user. But we can aggregate small amounts of savvy user fixes into a global quality improvement.

When deciding to show images in the search result, here's a simple (stylized!) decision procedure, given the query [foo]:

Check how many times in the past 90 days the queries [foo] and [images of foo] were issued.

Divide the latter by the former, and if it's above some tuned threshold, trigger the images.

This simple procedure is self-healing; the system will automatically notice new image-y concepts just from the actions of a small number of savvy users fixing their results manually.
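That stylized procedure is small enough to sketch directly (counts and threshold invented for illustration):

```python
def should_show_images(query, query_counts, threshold=0.05):
    """Trigger images for [query] if enough users 'fixed' it themselves.

    `query_counts` maps query string -> times issued in the past 90 days.
    The ratio of [images of foo] to [foo] is the aggregated savvy-user signal.
    """
    base = query_counts.get(query, 0)
    fixed = query_counts.get(f"images of {query}", 0)
    return base > 0 and fixed / base >= threshold

# Savvy users have been refining [foo] to [images of foo] often enough:
counts = {"foo": 1000, "images of foo": 80, "bar": 1000, "images of bar": 2}
should_show_images("foo", counts)  # ratio 0.08: show images
should_show_images("bar", counts)  # ratio 0.002: don't
```

No one had to teach the system that foos are image-y; a handful of manual fixes did it.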

This basic pattern is captured in an old essay of mine, [ITW] Self-hoisting Feedback Loops

A viral ecosystem with these kinds of loops can get huge amounts of momentum.

24. In many AI products today, the ceiling is set by the LLM's quality.

That is, if the LLM doesn't work properly in a given situation, the product doesn't work.

In some ways this is reasonable: LLMs are rapidly improving in quality-per-unit-cost.

But a better approach is to design a system where the LLM's quality is the floor.

If the humans are always able to be inside the system configuring it, then the LLM becomes a bonus.

It can automatically configure many things for the user.

But if it fails in a given circumstance, then the user can pop open the hood and fix it.

This then gives a self-hoisting feedback loop to improve the quality for all users.

This latter approach is significantly more resilient for a cutting-edge technology with variable quality.

The AI sets the baseline that humans can improve.

The tool should be usable even if AI doesn't do a good job.
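One possible shape for floor-not-ceiling, sketched with hypothetical names: the LLM's output is just a default layer in the configuration, below the user's explicit fixes and above a hand-written baseline.

```python
def effective_config(llm_suggested, user_overrides, defaults):
    """The AI proposes, but a user's explicit fix always wins.

    Precedence: user override > LLM suggestion > hand-written default,
    so the product still works even when the LLM does a poor job.
    """
    config = dict(defaults)          # always-works baseline: the floor
    config.update(llm_suggested)     # AI improvements are a bonus...
    config.update(user_overrides)    # ...but the human can pop open the hood
    return config

defaults = {"layout": "list", "refresh_minutes": 60}
llm_suggested = {"layout": "timeline"}   # helpful automatic configuration
user_overrides = {"refresh_minutes": 5}  # the user's manual fix for their case
cfg = effective_config(llm_suggested, user_overrides, defaults)
```

Aggregating those user overrides across many users is what feeds the self-hoisting feedback loop.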

25. Almost everyone today is implicitly assuming that the winner will be a single hyper-capable model produced by one entity.

This is a reasonable belief, and it might turn out to be true!

If this belief is true, to compete will require massive capital expenditures to create a competitive model.

You'd then need to take a significant early quality and momentum lead and parlay it into a successfully bootstrapped aggregator play to get staying power.

With Claude Opus outcompeting GPT-4 in some areas, it looks like this will be a costly and hard-fought race.

There's an orthogonal bet, though.

You can bet that the quality dimension that will matter is not any one component, but the combinatorial power of the totality of the ecosystem.

If your ecosystem is designed to be open and allow zero-friction safe composition, it will turbocharge the network effects and hopefully eclipse the quality of any one model.

This approach is only possible with the right laws of physics and ecosystem gardening knowhow.

The single-model-to-rule-them-all playbook is taking a traditional cathedral-style approach, whereas the other is taking an asymmetric bazaar-style approach.

One of these bets will turn out to be right in the end.

The latter bet is a judo move: a cheap, asymmetric, non-consensus bet that if it turns out to be right will have massive returns.

26. It's not end-user programming; it's end-user product management.

End-user programming still requires the user to be able to understand the code.

End user product management is more about behavior.

Even with the magical duct tape of LLMs, end-user programming is still hard to achieve.

You can get scrappy prototypes quickly, but it's hard to maintain / grow / extend them without direct programming knowledge.

In contrast, a PM-level understanding of the system is plausible.

(Thanks to James Cham for this insight!)

27. The etiquette of multi-user + AI conversations is unclear.

In a conversation with one user and an AI it's obvious the turns should ping back and forth.

That is, there should be precisely one AI message after each human message.

With multiple people and an AI that's way less clear when the AI should speak.

Should it speak after every user's message?

Should it speak only when it thinks it has something particularly important to say?

Should it speak only when spoken to?

28. Building a new open ecosystem paradigm requires three things.

1) Product - The thing that consumers actually use, and that creates new kinds of value for them that are not possible in other paradigms.

2) Infrastructure - The behind-the-scenes infrastructure that makes the new system actually work in practice without the user having to think about it much, which typically implies a viable model of which entities pay which costs.

3) Protocol - The decentralized protocol that defines the laws of physics of the system, for example, allowing safe zero-friction composition.

Although the most visible part will be the product, you have to have good-enough versions of each for the frankenstein ecosystem to take its first breath.

If you're missing any one of the three, what emerges is non-viable or uninteresting.

If you skip product, you get a thing that only hyper enthusiasts will be motivated to use: a low ceiling.

If you skip infrastructure, there are no viable business models for entrants into the system.

If you skip protocol, then you get something like a traditional aggregator.

You can't build this frankenstein with just a couple of body parts.

You need good-enough versions of all of them to start.

Ideally parts that could then become radically better in a self-ratcheting way with more usage.

29. Many projects have attempted a new, decentralized model of applications.

But none of them have had a significant, wide-scale impact.

After looking through many, there are a number of recurring patterns I see:

1) No security model

For composing untrusted components, they'll "figure it out later".

But this is the core dynamic that has to be figured out; you can't retcon it onto a system after the fact.

If you don't have a composition model, you end up with a high friction system with a very small set of composed components.

For example, lots of permission prompts with impossible-to-answer-in-the-moment questions.

And a small number of components that have earned enough brand reputation for people to be willing to take a leap of faith to use them: a heavily centralizing force.

2) No ecosystem dynamic

These might have an infrastructure dynamic.

That is, the provider builds a given integration at a fixed cost; they can then re-sell it to many users at low marginal cost.

But an ecosystem dynamic makes the product more valuable at a compounding rate with the size of activity in the system.

An ecosystem dynamic requires things created in the ecosystem to be usable by others without the provider doing any work at all.

Can good ideas from your most motivated users bubble up and help the less motivated users without your involvement?

If not, then you don't have an ecosystem dynamic.

3) B2B

In some ways, B2B is easier, because you can go after a specific business problem and immediately get revenue in the door.

But businesses are also typically only willing to pay for things that solve a direct problem for them.

A lot of these new paradigms have an ecosystem effect, where they get radically more useful overall the more engaged users there are.

You can draft off hyper-motivated individuals who are tinkering and experimenting, and use their improvements to improve the thing for other users, too.

But if they have to be high-intent, paying B2B customers, that is much harder to activate, much slower.

A low-friction model that allows tinkerers to create value might get to a large enough value proposition for businesses to want to use it later.

4) Non-turing-complete

There are a lot of cool alternate-physics protocols and ecosystems in, for example, the social networking space.

These are interesting and have potential, but they're fundamentally about messaging, not creating.

There's limited space for turing-complete tinkering that could buoy the whole ecosystem.

5) Requires a command line

There are lots of ecosystems that grow a highly engaged developer community with tons of momentum.

But they assume that users will use a command line, if only for a few actions.

But this sets a very low ceiling on the size of the plausible user population, and can be hard to break out of.

6) Only as good as the AI.

A lot of systems rely on the single core model having high enough quality.

But if it isn't, the product fails at that use case.

The AI's quality sets the ceiling, instead of the floor.

30. There's a massive difference between reading the rulebook and playing the game.

Reading the rulebook gives you the from-the-balcony, passive intellectual understanding.

At every step, if your mind wanders, or you miss a key implication, nothing will happen.

There's no stakes in the moment.

Playing the game gives you the visceral, in-the-arena, active experiential understanding.

At every step, if your mind wanders, or you have a crucial misjudgement, you're knocked out of the game.

Only the latter can calibrate your intuition and help you feel the game's logic in your bones.

31. Instead of handing your agency over to LLMs, use them to extend your agency!

32. The point of an OS is to be open-ended.

But an OS also should be useful out of the box as a minimum bar to clear.

33. The Iron Bridge in England was the first bridge made of iron.

The bridge was built before anyone knew how to make big structures out of iron.

It used joins that were typically used for wood bridges... but just with iron.

Later we realized the special properties of iron and started making bridges in new ways native to it.

Similarly, people are using LLMs to make things in more code-like ways, because we don't yet know the best way to work with this new, odd material.

The current products are hodgepodges of new materials in old paradigms.

What will the truly LLM-native applications look like?

(Thanks to Tyler Odean for suggesting this frame.)

34. Daydreaming of what the world would look like with different laws of physics.

I imagine my kids as teenagers, after these new laws of physics are just taken for granted and have been all they ever knew.

"Wait, in the past user data was owned by advertisers? And you worked for one of those companies?!"

35. Too many resources can smother a thing.

They create eddy currents of possibility.

Swirls of coordination costs.

Instead of focused, laminar flow, you get turbulent flow.

It's much harder for anything to cohere and then be built on top of.

36. Boil-the-ocean attempts can create boiling knowhow cauldrons.

Over-resourced boil-the-ocean boondoggles can still produce the knowhow to later allow distilling a differentiated, game-changing component.

For example, Golang came out of the Plan 9 boil-the-ocean plans.

A great simmering cauldron of knowhow, heated by a highly-resourced sponsor.

The people soaking in that cauldron will implicitly pick up:

Things to not bother doing–that they explored in great depth and realized are fundamentally nonviable in non-obvious, hard-to-explain ways.

Small, clever things that do work: judo moves, flicks of the wrist with massive implications.

People who were simmering in that cauldron will have the knowhow that can be catalyzed later into world-changing things in the right contexts.

Just because that cauldron didn't make anything itself, doesn't mean it won't indirectly catalyze something world-changing.

37. If your first step is to boil the ocean, then the plan won't work.

One impossibility upstream makes everything downstream impossible too.

The only entities that think it ever might work are large, resource-rich companies.

But even there it basically never works.

The best that can happen is a boiling knowhow cauldron.

38. Users shouldn't be OK with their personal AI being bribed or coerced.

For your personal AI to act as an extension of your personal agency, it must answer only to you.

39. E2EE for messaging went from an enthusiast-only technology to something everyone takes for granted in the majority of messaging apps. How?

No mass-market user ever asked for encrypted messaging.

But everyone would prefer it at the margin when given the choice.

This sets up the conditions for a cascade with the right trigger.

Moxie Marlinspike made it his personal mission to move the world to E2EE by default.

Jan Koum, one of WhatsApp's founders, grew up in Ukraine and was personally motivated to have E2EE in his product.

The combination tipped the world into a cascade where now E2EE is everywhere and you couldn't go back even if you wanted to.

When users all want something, even lightly, then all it takes is the right catalyst to kick off the cascade.

Over long enough time horizons, the likelihood of such a catalyst happening approaches 100%.

40A few people have built ways to interact with LLMs where the model asks questions about your day and helps you distill a journal for reflection later.

One way: do a GPT in ChatGPT.

It's easy to do... but there's no way to export your journal entries anywhere.

They're just stuck in chat transcripts in the system.

Another way: duct tape together your own system with Python and a local database in an hour or two.

But not shareable!

This is precisely the kind of thing that should be possible to share as a recipe.

"Here's the pattern I use for end of day reflection to distill journal entries"

"Here's a recipe that I use to look back on my last week of journal entries and ask me questions about them"

Imagine a fabric that lots of people use, and being able to share recipes with others that can run instantly on their data when they click them.
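A minimal sketch of the duct-taped version described above, assuming Python with a local SQLite database; `ask_llm` is a hypothetical stand-in for whatever model API you'd actually call:

```python
import sqlite3
from datetime import date

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in: swap in a real LLM client call here.
    return f"(model response to: {prompt})"

def init_db(path: str = ":memory:") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS entries (day TEXT, question TEXT, answer TEXT)"
    )
    return conn

def end_of_day_reflection(conn: sqlite3.Connection, answers: dict[str, str]) -> None:
    # Store each question/answer pair so entries can be exported or re-queried
    # later, unlike transcripts stuck inside a chat product.
    today = date.today().isoformat()
    for question, answer in answers.items():
        conn.execute("INSERT INTO entries VALUES (?, ?, ?)", (today, question, answer))
    conn.commit()

def weekly_lookback(conn: sqlite3.Connection) -> str:
    # The "look back on my last week of entries" recipe: feed stored entries
    # back to the model and have it ask reflective questions.
    rows = conn.execute("SELECT day, question, answer FROM entries").fetchall()
    digest = "\n".join(f"{d}: {q} -> {a}" for d, q, a in rows)
    return ask_llm("Ask me reflective questions about these entries:\n" + digest)
```

The point of the sketch is the shape, not the specifics: because the entries live in a plain local database, they can travel with you; what's missing is exactly the shareable-recipe layer the fabric idea would provide.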

41Mass market users almost never think about security or privacy models.

When's the last time you thought about the same-origin policy, the fundamental model that underlies the laws of physics of the web and modern apps?

Less-savvy people's willingness to trust a security model is inductive: it comes from iterated trust in people more knowledgeable than they are.

An example inductive chain:

"My tech-savvy niece says this is secure, and I trust her."

"My security expert friend says this is secure, and I trust him."

"The expert I look up to read the white paper and believes it is solid."

Of course, in practice the indirect diffusion will have many many links.

This cascade of inductive trust has to be based on a rigorous reality to not be a house of cards.

Narratives that don't represent the underlying reality are kayfabe and might shatter.

But narratives that do represent the ground truth in a rigorous way (that is, as people investigate them closer, they become more convinced in the broad strokes of the narrative) are resilient and useful.

42The fun of the game comes from the surprise, from being on the edge of your ability and mastery.

Once you've learned a dominant strategy in the game that always works, it becomes boring, a chore.

All of the discovery is gone.

All that's left is the monotonous effort.

At the beginning of your career it's a thrill to figure out how to navigate organizations to make things happen. Yes, it's a treadmill that makes forward progress hard, but there's a thrill in the challenge of achieving it anyway.

But once you learn how to navigate it well and resiliently, all that's left is how hard you have to work to counteract the treadmill to make anything interesting happen.

43Chat is not the right paradigm for LLM apps.

It's a natural first paradigm: it's inherently interactive, and includes the human in the loop to correct and guide.

"No, not like that, more like this…"

This feedback loop makes them viable even when the answers aren't perfect.

If you take a prompt that works great in a chat and jam it into an app shape, it doesn't work, because there's no feedback loop when it gets things wrong.

The edges are not soft and recoverable, they are hard and sharp.

Software today has hard, sharp edges. Not soft, malleable edges.

But chats have no structure; they have no permanence. Just an append-only transcript.

A better model is enchanted artifacts that you can chat with and modify.

Imagine a TODO list where many of the items could expand into more actionable steps, or even, in some cases, self-execute.

Chats then become a special case of an append-only enchanted artifact.
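One way to make the enchanted-artifact idea concrete is a sketch like the following (my own illustration, not a spec from the text): a TODO item whose expand and execute hooks stand in for LLM-backed behaviors, with chat falling out as the degenerate append-only case.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class TodoItem:
    """An 'enchanted' TODO item: it has structure and permanence,
    and can expand into sub-steps or self-execute."""
    text: str
    steps: list["TodoItem"] = field(default_factory=list)
    # Set when the item can self-execute; None means a human must act.
    action: Optional[Callable[[], str]] = None
    done: bool = False

    def expand(self, planner: Callable[[str], list[str]]) -> None:
        # An LLM-backed planner would generate actionable sub-steps here;
        # any function from text to a list of steps fits the shape.
        self.steps = [TodoItem(s) for s in planner(self.text)]

    def run(self) -> str:
        if self.action is None:
            return "needs a human"
        self.done = True
        return self.action()

@dataclass
class ChatArtifact:
    """A chat is then just a special case: an append-only artifact."""
    transcript: list[str] = field(default_factory=list)

    def append(self, message: str) -> None:
        self.transcript.append(message)
```

The design point: the artifact is the durable object you chat with and modify, rather than the transcript being the only thing that persists.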

44A Saruman won't listen to a Radagast by default.

A powerful Saruman needs to vouch for a Radagast to get other Sarumans to listen.

A bona fide Saruman can vouch for themselves.

It's possible to look like a Saruman if the viewer only looks at you for a few seconds at a time.

A commanding leadership vibe.

Saying things like "if you want to survive when swimming with sharks, you've got to be a shark, that's just the way the world works".

Flexing for the camera.

That allows you to get them to listen to an argument that they'd otherwise reject as too Radagast-y and woo.

45The hero's journey is an OODA loop.
46A set of tactics for surfing along an emotional rollercoaster.

When you feel good, close your eyes and sit with the feeling, and locate where it is in your body.

Then touch that part of your body.

Later, when you want to load up the good feeling (e.g. before a high-stakes meeting), touch that part of your body to activate the memory.

When you feel bad, don't push it away.

Your body's reaction to emotion tends to dissipate in a minute or two if you don't try to push against it.

Our emotions are not some silly distraction off to the side; they directly color how we experience the world.

Don't ignore your emotions, surf them.

47Someone who thinks they're a big thinker (n-ply) but is actually a superficial thinker (1-ply) is a danger to themselves and, if other people also believe they're a big thinker, to others.
48Retrospectives are magic.

Because you have the direct, in-the-arena experience and knowhow, but you can take a breather to reflect / synthesize / learn / grow from that experience.

You're with the tiger in the arena, about to die. You slayed it. Now you're safe, you can think about it.

What would you do differently next time?

How could you avoid being in the arena with a tiger in the first place?

Instead of leaving the arena to get a different vantage point from the balcony (impossible and dangerous in the moment), you go up to the balcony after the fight is over.

Being able to look at a situation from the balcony is about taking a different, longer-term vantage point. That shift can happen in space or in time.

49One thing we take for granted in today's laws of physics: services know nothing about you.

You visit a site and expect it to give you the generic one-size-fits-all experience.

Not until you log in and start accumulating actions can it start to morph itself for you.

But if you change the laws of physics so services come to you, and there's a privacy model that allows safe speculative execution, then all of that changes.

Imagine: every service you interact with is perfectly personalized to you, but also perfectly private.

50A community without a boundary either evaporates, or it attracts the social climbers and it becomes increasingly performative, transactional, and kayfabe.

This happens to every community that can go viral... even if there's no direct profit motive, there's an indirect profit motive to every interesting assemblage of people.

People who view but do not participate in a community are free-riders.

They get the value of the community without having to contribute, and they also raise the stakes of experimentation within it.

A key question for a community: the degree of overlap of creators and audience.

If they are highly overlapping, then the context will be a more infinite mindset, more authentic, more creative.

If they are non-overlapping (the vast majority of participants are audience, not creators), then the community will be more finite, performative, and transactional.

51Conspiracy theories are a form of overfitting to a narrow set of anomalous data.
52A life hack: do stuff and then write about it.

If you just write about stuff, it's all theory, from the balcony.

If you just do stuff, then it has limited leverage; other people can't learn from it.

53A primary use case is why a user uses a product.

A secondary use case is just a bonus.

But some secondary use cases have a network effect.

Their value goes up super-linearly with use and adoption.

This can allow the secondary use case to quickly eclipse the original primary use case.

The primary use case typically must work on its own without a network effect so the very first users get sufficient value to use it even when no one else does yet.

A recipe for a hyper-viral product:

When users do their primary use case, they leave byproducts in the system that are an input to the secondary use case.

The secondary use case has combinatorial value that scales with the amount of byproducts the user left.

The secondary use case can be activated proactively with zero friction.

As more users use the product, they generate huge numbers of zero-friction adjacent use cases of self-ratcheting quality.
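The eclipse dynamic above can be sketched with a toy model (my own illustration; the per-item and per-pair values are arbitrary assumptions): primary value grows linearly with a user's accumulated byproducts, while the secondary use case has combinatorial value because byproducts can combine pairwise.

```python
def primary_value(byproducts: int, per_item: float = 1.0) -> float:
    # Primary use case: each byproduct is worth a fixed amount on its own,
    # so it works for the very first user with no network effect.
    return per_item * byproducts

def secondary_value(byproducts: int, per_pair: float = 0.1) -> float:
    # Secondary use case: n-choose-2 pairs of byproducts can combine,
    # so value grows super-linearly with accumulation.
    return per_pair * byproducts * (byproducts - 1) / 2

def crossover(per_item: float = 1.0, per_pair: float = 0.1) -> int:
    # Find the point where the secondary use case eclipses the primary.
    n = 1
    while secondary_value(n, per_pair) <= primary_value(n, per_item):
        n += 1
    return n
```

Even with each pair worth a tenth of a primary item, the quadratic term wins once enough byproducts accumulate; the exact crossover point just shifts with the assumed ratios.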