Bits and Bobs 8/12/24
1. LLMs are great at reasoning shallowly but broadly, and extremely quickly.
Humans can do deep reasoning narrowly and slowly.
Both are useful on their own.
But what's really powerful is the two working together in a tight collaborative loop.
The LLM does 90% of the heavy lifting; the human applies the differentiated judgment and steering.
The result is something an order of magnitude better than what either the LLM or the human could have done alone.
LLMs are patient–or rather, are so quick that they don't need to be patient.
LLMs as preternaturally patient people.
The information that the human gives the LLM in these cooperative sessions would be extremely potent training data.
It's laser focused on the stuff the human didn't like or wanted to change, the parts to update to make the model better.
2. There are a huge number of problems that previously required preternatural patience from a human.
There are huge classes of problems like this, where it was possible for a human to do it before, but unlikely (too costly for too unsure a reward).
But now you can do it in 5 seconds by asking an LLM.
How many interesting insights, low-hanging fruit that a human could have discovered but didn't, that are there for the taking?
3. The way LLMs find great ideas is different from the way humans do.
A human with a high IQ (a Newton) could think deeply about a problem for extended periods of time.
A Newton, given infinite time, could probably figure out nuclear fission.
A 100-IQ person, even given infinite time, probably couldn't.
But LLMs find great ideas the same way that swarms of individually unexceptional humans do.
You swarm all of the possible solutions.
The vast majority are crap and kind of fall away, unused.
The small subset that are great are all that remain.
Humans have a trick the LLMs don't: the world can cache intermediate useful results.
When someone finds something useful, it becomes a durable part of the environment, as they and others invest in keeping it around.
Each individual human trying out something new can use the pieces other humans figured out before them, "standing on the shoulders of giants".
The swarm of humans doing things in the world can reach ever further because of this.
LLMs don't get that ability because they can't change the world directly.
The swarm of search heads executing different LLM paths can only find as much as the model, as it was trained, could.
4. Claude can do impressive synthesis and summarization tasks.
I fed it as much of the Bits and Bobs as I could, and then asked it a specific strategic question that I had previously written up in a document it couldn't see.
It got the document almost exactly right.
How did it do such a good job?
Well, these personal reflections have tons of information about what's on my mind.
All of the key components of the argument are sitting here in the bits and bobs, floating amongst lots of other, unrelated, reflections.
It's not that Claude was able to synthesize things that no one else could see.
It's that Claude is patient enough to sift through hundreds of pages of content, identify throughlines, and then focus in on the subset to collage into a narratively coherent argument.
LLMs are savvy enough to instantly do tasks that would require weeks of focused effort from a preternaturally patient human sifting through inputs.
5. In any community of people there's a power law of motivation.
Typically 90% are passive consumers.
1% are engaged, active creators: perhaps writing code.
9% are engaged tinkerers, but not savvy enough to write code.
Those 9% are the people who are on the precipice of being activated by LLMs.
"I could never code!" transforms into "look, I made this!"
6. There is a class of tinkerer-style ideas, for e.g. calendars, that has been hard to activate.
The idea of creating a better calendar scheduling algorithm requires first building a calendar app that is better than the alternatives, and convincing people to give you–an app they just met–all of their most sensitive data.
A very steep hill to climb.
Also, a very scary one: most tinkerers don't want all of that radioactive data.
But what if there were an open platform that everyone knew (and everyone knew everyone knew) was safe and secure and kept users' data out of view of everyone but the user?
People could create clever little things like calendar scheduling algorithms without that steep, scary gradient.
The open platform would become the Schelling point for little things like this, which would make it more attractive to other participants: a self-accelerating loop, a gravity well.
7. The entire software industry, absolutely everything in it, has a load-bearing assumption that software is expensive to write.
It's always been true so we never thought to question it.
Software being expensive is a force of gravity.
But it's a force of gravity that changed!
And now that it has, all kinds of things will change that we never thought could change.
8. Software isn't precious anymore.
Code used to be so precious that we'd give up our data to get access to aggregators' software.
Code is now cheap, expendable, disposable.
When something that used to be scarce becomes abundant, something else becomes the new scarcity.
So what's the new scarcity? What's newly precious in this world?
Data.
Data is precious.
What if that data could be not just precious, but intelligent, able to work for you?
9. Aggregators' products are like fast food.
Convenient, cheap, one size fits all.
Unhealthy.
10. A friend's teenage son is learning to code in an age of LLMs.
He can build extremely impressive applications.
He asks Claude or another LLM to write the "goop" – the black-boxed, magical incantations he needs to wire into his application to get it to do something.
It used to be that each individual component of your application was something you understood as the creator of it.
Outside of the occasional copy/paste from Stack Overflow, of course.
But now vast swathes of codebases can be impenetrable goop, even to their creator.
Traditional programmers look at this development, aghast.
"What if you need to change some of the goop?"
The new programmer's reply: "...I'll simply ask the LLM to write some more goop."
11. Val Town is "not no code, it's just code."
Code, when you slice it down into teeny pure functions, is actually pretty simple.
It's all the harnesses and infrastructure and cruft around it that's hard.
A bunch of goop.
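A toy sketch of that split (names here are illustrative, not Val Town's actual API): the real logic is a teeny pure function, and everything around it is the goop.

```python
# The actual logic: a teeny pure function. Easy to read, test, and swap out.
def slug(title: str) -> str:
    return "-".join(title.lower().split())

# Everything below is goop: the harness a platform usually makes you write.
# (Hypothetical names -- not Val Town's real API.)
def handle_request(request: dict) -> dict:
    title = request.get("title", "")
    return {"status": 200, "body": slug(title)}
```

The pure function can be understood at a glance; the goop can be generated, and regenerated, on demand.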
12. LLMs enable whole new categories of software.
How should you start a new category?
One option is the "sustaining" playbook that's typically used.
You create a thing whose primary use case is an existing category, just a bit better.
"Discord, but with a more professional UI"
Then, the magical category-bending stuff is the secondary use case: a thing that users only appreciate once they're in the app, but then grows to become indispensable.
But that's a tough gradient. Why would users use a thing that is only marginally better than the thing they already use?
Especially if the app requires a significant amount of investment to load up data (not just a lot of effort, but downside risk from an app that is a bad steward of that data).
Huge switch costs! Huge activation friction!
Another approach is to lean into being a totally new kind of thing that defies categorization.
WebSim is a great example of this.
What the heck even is this thing? How would you describe it by saying "Like X but for Y?"
But that's what gives it a magical possibility space.
A weird, strangely engaging toy that turns out to have uncapped possibility.
13. Aggregators are the best situated to do cross-silo spanning use cases, but also the least capable of doing it.
Eric Beinhocker in The Origin of Wealth captures a similar dynamic:
"We thus have two opposing forces at work in organizations: the informational economies of scale from node growth, and the diseconomies of scale from the buildup of conflicting constraints. Taken together, these opposing forces help us understand why big is both beautiful and bad: as an organization grows, its degrees of possibility increase exponentially while its degrees of freedom collapse exponentially.
Put simply, large organizations inherently have more attractive opportunities before them than small organizations do (the large can theoretically do everything the small can do, plus more). But reaching those future opportunities involves trade-offs, and the more densely connected the organizational network, the more painful those trade-offs will be. The politics of organizations are such that local pain in particular groups or departments is often sufficient to prevent the organization from moving to a new state, even if that state is more globally fit."
Aggregators have more data in more verticals under one roof, so it's possible to create, for example, a smart calendar scheduling algorithm that takes into account your contact list and email history.
But also, the aggregators are large companies with massive coordination costs.
This means it's very very hard for aggregators to coordinate around anything but massive use cases.
The PM would have to corral dozens of overworked people across the company to collaborate on a use case that is a P2.
Aggregators are terrible at swarms of P2 style features, despite that being where most of the value is.
And at the scale of an aggregator, use cases that would support an entire unicorn of a startup look like P2s.
The best way to unlock this swarm of cross-silo P2 features that are currently impossible is to create an open, safe ecosystem where the swarm can create that value without any top-down coordination.
14. The LLM you use to help you in your work and the LLM you use to power a feature in your app will likely be different.
The LLM you use for work you'll want to be the very best.
You're only using it a capped amount (there's only so much one individual can use it), so the extra cost isn't the end of the world.
The LLM you use for the feature in your app you want to be as cheap as you can while still being good enough.
You could possibly get uncapped usage.
15. "Do what I mean, not what I say."
Computers do what you say, not what you mean.
That's one of the reasons that programming is hard.
LLMs are good at doing what you mean, not what you say.
16. LLMs are more forgiving than search engines for trivia-style questions.
A common use case for me: someone gives me a half-remembered quote, along with a name that's difficult to spell and that I've likely spelled incorrectly.
In the past, I could use Google, with a good helping of Google-fu, to construct a clever query that would help me find the original quote and who said it.
But if I didn't construct the query correctly it wouldn't work.
People with less Google-fu would be out of luck.
But LLMs are very resilient to this. Just drop in the word vomit you have and say "what is the quote and who said this" and it will almost certainly get it right.
And now that you have the proposed actual quote and speaker it is easy and quick to confirm with Google.
Another great use I've found: helping me find the quote from a book I'm thinking of.
I tell it the general idea I'm thinking of and then pass it the hundreds of pages of Readwise.io highlights I took from that book and ask it to select the quote that best captures the vibe I'm thinking of.
Previously this would have required patiently scrolling through hundreds of pages, or remembering a distinctive word from the quote.
17. Software's potential is not just the individual potential of a piece of software, but the combinatorial possibility of combinations of software.
But that combinatorial potential can't be unleashed if data resides in silos, hermetically sealed off from each other like mutual biohazards.
18. It's rare for open source to catch up to or beat proprietary quality so early in a cycle.
That's why I think it's so incredibly encouraging that Llama 3.1 405B, an open-weight model, is so good.
No matter what happens from here, we have a world-scale intuition model that allows derivative models to be created from it.
Just an amazing gift for society.
A toehold, a foundation, that the overall ecosystem can use to reach ever farther, that can't be taken away.
Why did the open system win so early this time?
Typically open systems take a long time to catch up because the swarm can't coordinate: you have to wait for the overall quality to become a commodity, and for the knowhow about how to create that quality to diffuse out of the leading companies as employees naturally leave and bring it with them.
It's often hard for large players to make the case to their shareholders to do an open approach: it's a two-ply (or more) argument, and those are hard to make at scale.
But in this case, Mark Zuckerberg had the capital and leverage to just… do it.
Zuckerberg is a powerful, well-regarded founder who still directs the resources of his company. So he could just do it without having to ask anyone's permission.
And he's savvy enough to know that open source can simultaneously be a move to trip up your competitors and be a massive positive donation to society that will make you beloved by everyone but your competitors.
19. Which is more important: quality or distribution?
It's driven primarily by switch cost.
If switch cost is high, then distribution matters much more than quality.
Unless the alternative's quality is an order of magnitude better, it's easier to just stay with the one you know.
20. How will the competitive dynamic of LLM-powered chatbots play out?
Will it be more like search engines or more like operating systems?
Search engines:
Hard to build: expensive fixed cost that requires specialized knowhow.
Free: marginal cost can be supported by advertising
Easy for a user to try: Just a click away
Not deeply sticky: very little meaningful state for a user in the system that's hard to build up elsewhere.
Moderately direct network effect: the more that users use it, the better the quality gets.
Despite this, Google stays in a commanding position because no one else has better quality, and Google gives a good enough answer to almost every query.
Operating systems:
Hard to build: expensive fixed cost that requires specialized knowhow.
Free: fixed cost of development can be supported by adjacent businesses, and zero marginal cost.
Hard for a user to try: high switch cost.
Very sticky: users buy applications and accumulate state that only works in that operating system.
Indirect network effect: More users leads to more incentive for developers to build for the platform.
Windows remains a powerful force, despite at various times Mac OS being better in every meaningful dimension.
Chatbots:
Hard to build: the UX is easy, the model is hard.
Costs money: marginal use is too expensive to support by advertising.
Easy for a user to try: at least in the free tier.
Not very sticky: state accumulates in conversations, but that state doesn't do much. The most stickiness comes from becoming a paying subscriber.
Indirect network effect: the model providers' actions imply the querystream isn't particularly valuable for increasing model quality.
OpenAI is the Kleenex of AI - if consumers know a single model provider, they know it. And although other models are arguably higher quality now, they aren't an order of magnitude better.
OpenAI has a significant lead in subscriptions, meaning users will stay out of inertia – why try the other models when you already pay for this one, their quality isn't significantly better, and this one gives good-enough answers to most queries?
But this is not a particularly strong strategic position like a gravity well, just a kind of "shrug I guess I'll stay with this one because it's easier" advantage.
Interesting that the advantage to the first mover is so weak in this context! We're in the very early innings, lots of things could change.
This doesn't imply another Google, but maybe something like an Expedia.
21. Today permissions apply at the origin level, no more granular than that.
If there's any part of what an app might do that is out of bounds of what the user wants, then the permission request is illegitimate and the user says no.
Or, more likely, says yes but feels bad about it.
A force that leads to people being less willing to try new things.
Even a small bit of illegal territory gives you a no (or a begrudging yes).
But this means that huge swathes of legal territory are not permitted because there are small bits of illegal territory.
Information Flow Control allows applying permissions at a finer grain, so more can be permitted without including any illegal territory.
A huge amount more possibility can be unlocked.
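A minimal sketch of what that finer grain could look like: tag each value with a label and check every flow against a policy, so a scheduler can read calendar data without the whole app being cleared to, say, the network. All names here are hypothetical; real IFC systems are far more involved.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Labeled:
    """A value tagged with the kind of data it carries."""
    value: str
    label: str  # e.g. "calendar", "email", "public"

# The policy: which (label, sink) flows are legal territory.
ALLOWED_FLOWS = {
    ("calendar", "scheduler"),  # the scheduler may read calendar data
    ("public", "scheduler"),
    ("public", "network"),      # only public data may leave the device
}

def flow(data: Labeled, sink: str) -> str:
    """Release data to a sink only if the policy permits that flow."""
    if (data.label, sink) not in ALLOWED_FLOWS:
        raise PermissionError(f"{data.label} -> {sink} not permitted")
    return data.value
```

The scheduler gets exactly the territory it needs, and nothing radioactive can reach the network, with no origin-level all-or-nothing prompt.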
22. I was talking with a few other advanced LLM users.
We all agreed that the hardest thing as an advanced user about working with LLMs today is the copy/pasting of data in and out of the LLM.
You need to copy in all of the context the LLM needs to make a good decision, and then copy back out the answer (or a subset of it) to actually do the thing you want to do.
For example it's really challenging to constantly be splicing in bits of changed code into your codebase after an LLM helps write some.
Some model providers have a desktop app where you can allow it to scrape your whole screen constantly… but that seems like way too much data to give it.
What if your spouse texts you while you're working on something… that's not relevant, and the LLM should mind its own damn business.
Today there are two options: the hermetically sealed world of a browser tab and its same origin straightjacket, or giving the model access to absolutely everything you see.
If there were a system to keep track of data flows more granularly you could have a very different system.
23. Version control makes experimentation more plausible.
Worst case scenario it's just opportunity cost.
You can land changes and go down a branch and later go, "nah, I'll just leave this here, this path isn't worth going down any more"
24. A mental model: data is "radioactive" if it could be tied to someone's identity.
If that data touches other data, or is shown or shared in the wrong context, there could be an explosion.
But it's possible to "denature" datastreams, making it impossible to identify users from them (below some differential privacy epsilon).
If you do this early in the data pipeline, then the data is much less dangerous and can be used more easily for things like improving quality of the system by wisdom of the crowds style approaches.
So why doesn't everyone do this by default?
Because if it turns out you need some facet of the data that was removed in the denaturing process, you're screwed.
In that case, all of the data you have sitting around is useless.
You have to have very good knowhow and expertise in a given quality domain to know what aspects of the data are important to maintain.
And in novel domains, it's not easy to know what facets of the data will be most useful. It requires experimentation.
Denaturing data basically removes its option value… the amount of harm data can do but also the amount of good things it can do.
Denaturing data early in the pipeline makes it very hard to do open-ended quality experimentation.
Finally, it's very hard to prove to users that you're doing this.
This is typically an internal implementation detail, with no external visibility or contracts (this lack of transparency makes it so you can change it later).
But if you do it and don't tell users, then outside of reducing catastrophic tail risk of a data leak, you don't get much benefit from doing it.
No benefit, all cost means that companies don't do it very often.
In practice companies say "screw it we'll keep it all… just in case. We're not like those other idiots who will have a data breach"
But then of course in the future there is a data breach, and users lose out.
Users learn to be generally wary of any new entity collecting their data, in a fuzzy, hard to articulate way.
Companies erroneously conclude "users don't really care about privacy anyway; look not at what they say but at their actions".
But users never really had much of a choice.
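A toy sketch of one denaturing step, assuming the standard Laplace mechanism from differential privacy: an aggregate count gets calibrated noise so individuals can't be identified, but the raw records (and their option value) are gone.

```python
import math
import random

def noisy_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Denature an aggregate count with Laplace(sensitivity/epsilon) noise.

    Smaller epsilon = more privacy, more noise, less remaining utility.
    """
    scale = sensitivity / epsilon
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    # Inverse-CDF sampling of the Laplace distribution.
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise
```

Note the tension from above: choose epsilon too aggressively here and you've hit exactly the "the facet you turn out to need was removed" problem.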
25. Private cloud computing means trusting not just who runs the code (the cloud provider) but also who wrote the code (the service provider).
Who wrote the code is the bigger source of threats, because by design it could use your data in ways you don't want.
Whereas the one running the code obviously isn't supposed to muck with the code or data, and would only do so if compelled to do so or by accident.
That's why private cloud enclaves put more focus on the service provider than the cloud host.
26. A hugely intentful datastream: corrections users make to a model's output.
It's right at the edge of almost right, but needs a tweak by a user.
A user not changing the output isn't super useful: maybe it just wasn't useful output.
Changes users make to a PR, rather than all of the code itself, narrows in on the precise things that were wrong about the PR in the first place: laser focused quality improving data.
It has very little noise, because a user wouldn't bother correcting output unless it was close to right but still not right.
A way of writing code: don't give it a big design doc, just tell it to do a thing and then give it feedback on specific things that are not what you wanted. You don't have to give feedback on the things that it just guesses right because they're the obvious thing.
27. A conversation a user has with an LLM is chock full of private information.
You have to do quite a bit of denaturing to make it safe to use in other pipelines.
But imagine a situation where an LLM generates 4 different options in response to a user's high level query.
Then the user picks one of those 4 options as the one to keep.
The individual options aren't that private (they're the LLM's answer to a high-level prompt, not a user's).
The pick of the option isn't that sensitive – it's just a ¼ pick of options from an external system.
That means the signal is very useful and doesn't need to be denatured that much.
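That intuition has a clean information-theoretic form: a pick of one option out of n can reveal at most log2(n) bits about the user, no matter how sensitive the underlying conversation was. A quick sketch:

```python
import math

# Upper bound on what a 1-of-n choice can reveal about the chooser:
# log2(n) bits, regardless of how private the underlying conversation is.
def max_leak_bits(n_options: int) -> float:
    return math.log2(n_options)
```

A ¼ pick is at most 2 bits, versus the essentially unbounded information in the free-text conversation itself.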
28. "Local first" is not about everyone running locally.
Instead, it's about "if you can't run it locally when you choose to, then you don't have full agency".
Most people won't run it locally, but the ability to run locally is what shows you you have control over it.
Same with the ability to fork. You fork rarely. It's that you credibly could.
29. The cool thing about file systems is permissionless / adversarial innovation on file types from different applications.
The same-origin model is what disallows adversarial innovation on data.
This is also what makes filesystems so dangerous for untrusted code to have access to!
31. A handful of tweets I like:
Hunter Clarke:
"AI is truly world class at generating the scaffolding around a creative project"
Sahil Lavingia:
"Software used to take years to ship
Software used to take months to ship
Software used to take days to ship
Software used to take hours to ship"
32. Making something new?
First, assemble the ingredients.
Make sure they do indeed exist.
Make sure they're not rancid.
Now make sure they can be combined into something coherent and interesting.
Who cares if you have flour, eggs, water, and oil.
Can you make a cake?
33. Building on last week's riff about a needle in a haystack.
You want a haystack where finding a great needle is expensive and hard.
That makes it an effective prize: when you show it off everyone can instantly tell how much time, effort, and skill you invested.
You want a haystack where anyone can find a crappy needle within a few minutes of starting.
You get a dopamine hit, a "hey, maybe I could do this".
This gives a nice gradient of discovery and creativity. Every time you're about to give up, you find another slightly nicer needle, which helps you keep with it as you level up your ability.
Minecraft has this characteristic.
You've seen videos of people making a redstone computer in Minecraft.
But also, within minutes of first playing Minecraft, you've punched a tree and gotten resources.
34. If everyone has the same laws of physics, then one participant getting a slight edge in a compounding loop doesn't matter.
Yes, they are ahead of the others in a compounding loop… but the others could have the exact same compounding loop.
And if the others worked 10% better than the first competitor, they could surpass them quickly.
What's most strategically important is having your own compounding loop.
This effectively changes it so your laws of physics are different than everyone else's.
They invest a linear amount of effort and get linear returns.
You invest a linear amount of effort and get exponential returns.
Network effects give this kind of personal tweak to the laws of physics.
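The difference in "physics" is easy to make concrete: the same per-round effort, fed through a compounding loop, pulls away from the linear version. A sketch with an assumed 10% compounding rate:

```python
def linear_returns(effort_per_round: float, rounds: int) -> float:
    # Linear physics: each round's effort pays off exactly once.
    return effort_per_round * rounds

def compounding_returns(effort_per_round: float, rounds: int, rate: float = 0.10) -> float:
    # Compounding physics: each round's payoff builds on everything before it.
    total = 0.0
    for _ in range(rounds):
        total = (total + effort_per_round) * (1 + rate)
    return total
```

With the same 50 rounds of unit effort, the linear player ends at 50 while the compounding player ends over an order of magnitude higher.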
35. It's worse to be early than wrong.
Because if you're early you have the frustration of having been right but still not benefited from it because you got the timing wrong.
But if you were wrong, it just wasn't meant to be.
36. If the decision is reversible, optimize for fast decisions.
If it's not reversible, optimize for good decisions.
37. Muddled communicators come in two kinds: muddled thinking with clear communication, or clear thinking with muddled communication.
The latter is way better, because when paired with a good communicator their clear thinking can become clear.
38. Complex systems always sit at the edge of criticality.
That's where all of the interesting possibility happens, the most freedom of movement for the system to respond in useful ways.
Dynamic equilibrium, actively balanced, ready to pivot and respond.
That also means that there must be critical, "holy crap, what just happened?" events that happen sometimes.
39. Steve Jobs had a particular philosophy around giving tough feedback.
https://www.littlealmanack.com/p/biggest-lesson-steve-jobs - "Are you avoiding giving feedback because it's easier for you to just be nice?"
It's easy for a powerful person to say that.
In a bottom-up environment, where you have to work with peers and get them to want to work with you, you have two choices:
1) be nice to people and have them like working with you, or
2) do the Legolas sprinting up the falling rocks thing where you go balls to the wall never stopping because stopping is death.
Just steamroll everything and everyone and never stop because you'll leave a trail of frustrated (or dead!) bodies behind you who will not want you to succeed.
The approach I took, uniformly, was the first. I couldn't bring myself to do the latter!
But people who are very successful in these kinds of environments often do the second.
The second is a kind of cheat code, a corrupting force.
40. The most savage thing a person can do organizationally is to start spreading, behind closed doors, a "what does X team really do anyway?" vibe.
Easy to spread that vibe, hard for the team you're throwing shade on to refute it.
Not questioning whether they're doing important work, but whether they're doing it well.
"They seem to have less output than I'd expect. How hard could it possibly be? If I were them I would simply do X. I think they're making excuses for their lack of ability."
Needless to say, I do not think it's a legitimate tactic to spread this kind of unfounded FUD.
In an organization you must defend against this possibility by making sure that everyone can see that your group is clearly producing more value than the political capital (e.g. headcount) it is spending.
The best way to counter such a vibe is to let it never get going in the first place.
If you don't defend against this, your team is on the path to death.
41. Do you want to win, or do you want to get better?
Championship game? Play to win.
Any other game? Play to get better.
42. Corruption infects everything around it.
You're forced to play or get knocked out of the game.
Kayfabe is a form of corruption.
A cheat code you are compelled to use or be competed out by people who did.
"Why do the hard real thing when I can do the performative easy thing?"
How can you bring yourself to care when the competition compels you to not?
Toxic for discretionary effort, creativity, general intelligence.
43. When you're powerful in a context, you'll think you're more popular than you are.
"Everyone who shares an opinion about me tells me they like working with me!"
You won't notice the people who aren't sharing an opinion, because it's hard to notice an absence.
Someone telling a powerful person they don't like them is all downside.
Why do it?
Just quietly put up with it and look for an opportunity to exit, or push the lead off the stage.
44. Everyone thinks the ecosystem they control is more open than it is.
Because everyone is slightly blind to their own power in a context.
The omnipresent wind at your back is easy to take for granted; it never changes.
45. Be a Radagast inside your org; look like a Saruman outside of it.
Inside your org, everyone trusts one another and the vibe can work.
Outside of it, the org is big enough to need summary statistics, which means that Radagast magic can't work and can't be understood.
Radagasts can be rewarded in small organizations where everyone knows everyone and can trust each other and be willing to sense the indirect effects and attribute them to the causer, even if they couldn't prove it to a skeptical remote audience.
But if the org is big enough to need summary statistics, Radagasts can't be rewarded for the full value they create.
There's a difference between looking like a Saruman and being a Saruman.
46. A renowned expert in a given domain gives their take on something in that domain: "... but that's just my vibes-based answer, I wouldn't give it too much credence."
"But your vibes are extremely valuable in this domain, because your intuition is finely calibrated for it–possibly the best indicator we have of what the right answer is"
47. It's possible to respond with openness to ideas you disagree with.
"My understanding of your argument is that based on what you see, X, Y, Z".
The other person will think you're agreeing with them, but really you're just playing their idea back to them (and ideally steelmanning it).
This will lead to lots of people finding you reasonable and "easy to work with".
As a bonus, you will now accumulate a huge diversity of steelmanned viewpoints.
Once you do, the correct answer is often quite straightforward.
The hard part is not the synthesis, it's making sure you have all of the steelmanned perspectives in the first place.
48. Kayfabe looks imposing but is actually brittle, like glass.
When ground truth touches it with the slightest force it will shatter.
The bigger the kayfabe gets, the more momentum it has, and the harder it is to touch ground truth without shattering.
The only way to avoid that is continuous, light interaction with ground truth, so the thing gets stronger and stronger.
49. Ground truthing in organizations happens mainly at offsites.
Time together non-transactionally and overnight.
Time to whisper and gossip in a friendly way with everyone.
To come to see collaborators as competent people in problem domains more complex than you realized, not as tactical, incompetent obstacles.
They're expensive, a release valve for kayfabe, and one of the first things that orgs tighten.
"We don't need these boondoggles".
They have to be fun enough that people are willing to ask their spouse to let them stay away for a night.
Far enough away that people can't slip back home.
The whole point is being there overnight in a low key environment together, and perhaps having a few drinks.
50. A launch is a high-stakes moment with a lot of downside.
What happens if you were wrong, and the thing you built isn't actually viable?
If you built in a cave, it's hard to get the disconfirming evidence during development to make sure it's strong.
Another approach is to develop it in the open, and make it illegible and boring.
Even people who randomly stumble across it will bounce off.
But you can then get experts in your network who are rooting for you to take a look and engage.
Those experts can give you disconfirming evidence in a safe, low-stakes way, helping you improve it.
If the disconfirming evidence is slowing and activity in the system is picking up, that's a good sign you can hit the gas a bit more.
51If you're climbing a hill that no one else can see, you have no competitors.
Just make sure it's a hill worth climbing... and is actually a hill in the first place.
Maybe you're just delusional!
52"Foo is the new hotness."
"I think you got one of the letters wrong. Hot mess"
53A cozy community can't be viral.
Virality is in tension with community.
Community is about trust.
Virality undermines trust.
Context collapse.
54A generative question: what is something that you believed when you started this that you no longer believe?
The most interesting beliefs are the ones that change.
Beliefs that change are the edge of the wedge, where the critical insights happen.
The most important information out of the sea of background information.
The things that don't change are either wrong or obvious.
"What bumper sticker of insight would you send to your past self embarking on this journey?"
56Creativity takes time and space.
Mundane bullshit, which takes up every square inch you give it, is like poison to creativity.
Creativity gives life force.
Mundane pointless bullshit takes life force.
There's a reason the Renaissance happened when there were lots of patrons.
57Abstract thinking is a mix of self-indulgent and high-leverage, and it's very hard to distinguish which is which.
Concrete thinkers think it's all self-indulgent.
"The only reason they like doing it is it gives them the dopamine hit of an aha moment without actually clarifying anything"
Abstract thinkers think it's all high leverage.
It's always some mix.
Abstract thinkers like abstract thinking.
Concrete thinkers hate it.
58The world is inherently complex.
You can either deny it and get smacked in the face, or look at the world as it is and say "OK, what now? How can we make big, great things happen despite the inherent complexity?"
If you think a problem is complicated when it's actually complex, then when it fails you'll think "someone must have been evil, incompetent, or lazy".
But maybe there is no villain!
Inventing a villain shifts the responsibility for understanding away from yourself and onto someone else.
60If you want to start writing, just write!
Publish it somewhere quiet and unassuming and don't make it look too polished.
Assuming it's not controversial, if you point people at it and they don't like it, then they won't share it.
The number of people who read it will be tiny--capped downside.
But if they do like it, then they'll share it, and more people will read it, in proportion to how good it is.
Uncapped upside.
All that it costs is the opportunity cost to write it.
And if you like the act of writing, if it gives you energy, then there's no cost at all!
61Arguments that take themselves too seriously are hard to engage with.
Especially if the receiver has a choice of whether to care.
If they are in a work context where the argument is from their boss, they are obliged to listen.
But if it's an argument some random person makes about something, the receiver can choose to engage or just ignore it.
A too-strong argument feels like a steamroller; it's off-putting and intimidating.
An argument that doesn't take itself too seriously, that is fun and "charismatic" is more likely to be engaged with.
A good way of making it clear you don't take yourself too seriously is to liberally use emojis.
Emojis can keep even a deck that is hundreds of slides long from feeling too intimidating.
62A parable forces the reader to hold it lightly since it's so ethereal and abstract.
Which helps people apply it to more situations, since it is a rough parallel to a lot of things and not a direct parallel to any one thing.
A concrete story is easier for people to grab onto (it's more obviously useful in some context), but harder to apply to parallel situations.
A parable grabs the reader not because it is concretely helpful, but because it is enjoyable aesthetically on its own.