Bits and Bobs 11/13/23
1. This weekend I was in Santa Fe for an SFI conference. As a result, this week's Bits and Bobs are a bit more... out there. If you like the more grounded bobs (or bits), you might want to sit this week out. If you're still here, strap in, this is going to get weird!
2. Last week I asserted the banal parts of any project will be a slog.
Insights from various folks wrinkled that story a bit for me:
People will tolerate the banal parts if they think it is a necessary component of achieving a creative end they value.
But if they don't believe that it's useful or required (e.g. busy work) or don't value the end (":shrug: it was what I was told to do"), the banal parts will be draining.
3. Simon Willison had a nice post about the need for sustainable financing of open source.
I think he also nailed why open platforms create so much innovation:
"...plugins are absolutely the best model. I can wake up in the morning and find that my software has developed a new feature, and it was released to the world, and I didn't even have to review a pull request!"
4. I thought this distillation of Clayton Christensen's theory in the OpenAI Stratechery piece last week was valuable:
"Professor Clayton Christensen's theory of integration and modularity, wherein integration works better when a product isn't good enough; it is only when a product exceeds expectation that there is room for standardization and modularity"
5. A pearl of wisdom from Tara Seshan:
"you know you've done enough UXR when you can predict with high accuracy what the next user you research will say"
6. You have antibodies for things you don't want to believe.
But you don't have antibodies for things you want to believe.
So your epistemic hygiene for ideas you want to believe will be much lower.
7. A lens is not right or wrong.
The question is entirely "is it useful or not?"
The usefulness is contextual.
A lens might be considered useful in general based on "how many contexts might I find myself in where this would be useful?"
8. The benefit of experimentation is that something can work even if you don't have a theory for why it should work.
After the fact, knowing that it does work, you can inspect it and figure out what made it work... and then lean into that.
So much of deciding to ship something is distilling and socializing a compelling theory of an idea to collaborators, a cost that scales super-linearly with org size.
It's much simpler if your system is constructed so you can skip that step!
For example, infrastructure (cultural and technical) for small, safe-to-fail experiments.
9. Evolution doesn't necessarily revisit decisions.
On the frontier a lot of random stuff is tried.
Most fade away, consumed by entropy.
But some subset turn out to be useful and continue to be invested in because of that usefulness.
They become a stable platform for other things to accumulate on top of.
The top layer is fragile, but once things are built on top it becomes durable and load bearing, even if it's a bit suboptimal or weird.
10. Phase transitions tend to happen when a system is in dynamic equilibrium.
That is, when all of the various forces and tradeoffs are balanced. Not a passive balance, but an active one.
Just one little tip in any one direction might cause the system to transition to a whole new phase that is wildly unlike the previous phase.
11. Interesting things happen at the boundary of predictability and unpredictability for the observer.
If you can fully predict it, then it's not interesting.
If you can't predict it at all, it's unknowable chaos, no way to grab on.
The goldilocks zone is where interesting things are.
Interesting things are where all learning happens.
12. When experts disagree, watch closely.
The various viewpoints are in dynamic equilibrium.
The process of attempting to reconcile those differences can find insights that tip the overall understanding into a phase transition.
13. Wrinkled things reward curiosity.
There are always more interesting things to be discovered as you look closer, because they have fractal complexity.
Things that are alive are inherently wrinkled.
Things that were engineered often are not.
14. Intuition is about absorbing vibes.
Humans absorb vibes from our experience.
LLMs absorb them from massive amounts of training data.
As Gordon notes, not artificial intelligence, but planetary-scale artificial intuition.
15. Coevolution drives massive increases in capability.
Each side is in dynamic equilibrium with the other.
Each side wants to get a slight edge; when they get that edge it compels the other entity to meet or exceed the edge.
Each side lifts up the other, spurs it forward.
This dynamic can accelerate itself faster and faster.
A few places you see this self-hoisting quality:
Generative Adversarial Networks (GANs)
Google Search's quality coevolving with users' expectations.
Dunbar's Social Brain hypothesis, where massive increases in intelligence were driven by each person trying to model the other person's mind better.
16. The "contest success function" between two adversaries is a sigmoid.
The curve is the success rate given the relative amount of resources between the adversaries.
If you have far fewer resources than your adversary then a bit more won't help.
If you have far more than your adversary already then a bit more won't help.
But if you are roughly equally matched, a bit more can make all the difference (see the sketch after this item).
Systems in dynamic equilibrium are often on the cusp of a phase transition.
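To make the shape concrete, here is a minimal sketch assuming the common Tullock form of a contest success function, p = a^m / (a^m + b^m); plotted against the log of the resource ratio, this is a sigmoid. The exponent m and all numbers are illustrative, not from the original item.

```python
# A minimal sketch, assuming the Tullock form of a contest success function:
# p(A wins) = a**m / (a**m + b**m). Against log(a/b) this traces a sigmoid.
# The exponent m and all numbers here are illustrative.

def win_probability(a: float, b: float, m: float = 1.0) -> float:
    """Probability that side A wins, given resources a and b."""
    return a**m / (a**m + b**m)

def gain_from_more(a: float, b: float, boost: float = 1.1) -> float:
    """Change in A's win probability from 10% more resources."""
    return win_probability(a * boost, b) - win_probability(a, b)

b = 100.0  # adversary's resources
for a in (10.0, 100.0, 1000.0):
    print(f"a={a:6.0f}  p(win)={win_probability(a, b):.3f}  "
          f"+10% resources: {gain_from_more(a, b):+.4f}")
# The gain is largest when a == b: evenly matched adversaries sit on the
# steep middle of the sigmoid, where a little more makes all the difference.
```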
17. I've built my own personal knowledge management system, https://thecompendium.cards, on nights and weekends over many years.
There are ~700 public cards, but over 10k private working notes, with hundreds more added each week.
The system feels like an outboard part of my brain.
It feels like I've fused with it, in a way.
Over the last few weekends I shifted from a classic information retrieval technique for computing similarity (overlap of stemmed words sorted by TF-IDF) to using OpenAI's embeddings (see the sketch after this item).
The improvement was radical. It felt like 10xing the usefulness.
Someone this weekend asked me "what would I have to pay you to get you to give up The Compendium?"
I answered them with a question of my own: "what would I have to pay you to get lobotomized?"
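For the curious, a minimal sketch of the new similarity path, assuming the OpenAI Python client (v1 API) and the text-embedding-ada-002 model; the cards and query below are hypothetical stand-ins, not actual Compendium cards.

```python
# A minimal sketch of embedding-based similarity, assuming the OpenAI Python
# client (v1 API) and the text-embedding-ada-002 model. The "cards" and the
# query are hypothetical stand-ins.
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts: list[str]) -> list[list[float]]:
    """Fetch one embedding vector per input text."""
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return [d.embedding for d in resp.data]

def norm(x: list[float]) -> float:
    return math.sqrt(sum(c * c for c in x))

def cosine(u: list[float], v: list[float]) -> float:
    return sum(a * b for a, b in zip(u, v)) / (norm(u) * norm(v))

cards = [
    "Evolution doesn't necessarily revisit decisions.",
    "The top layer is fragile until things are built on top of it.",
    "Phase transitions happen in dynamic equilibrium.",
]
query = "Why do weird old parts of a system stick around?"
vectors = embed(cards + [query])
ranked = sorted(zip(cards, (cosine(v, vectors[-1]) for v in vectors[:-1])),
                key=lambda pair: -pair[1])
for text, score in ranked:
    print(f"{score:.3f}  {text}")
```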
18. Your mind runs on the substrate of your physical brain.
You can't really improve your physical brain much. (Though you can harm it much more easily!)
You can use your brain better.
Seek out people with interesting perspectives.
Dig into surprise.
Regularly take a step back to abduct insights out of your intuition.
But the idea of your mind emerging only from your brain is limiting.
A perspective shift: seeing your mind emerge from multiple types of hardware.
The tools you use are in some way complements for your physical brain.
The line blurs as you use these tools and, like your organic brain, you fuse together.
Put another way: your wetware is only a subset of your effective brain.
In this way, you can make the substrate of your mind more powerful.
A personal knowledge management tool that you built yourself: when you add a feature, you're making your brain better!
Other ways your mind can auto-expand: viewing the internet as a part of your brain for factual retrieval.
The internet gets better on its own; your mind benefits from its increased abilities without you having to lift a finger.
Another auto-expanding thing is using LLMs to surf through artificial intuition at internet scale.
19. Every system has an emergent internal "game".
The game emerges organically; it is not monocausal or particularly controllable.
The game is an inward facing logic that might be totally orthogonal to, or even in tension with, logic outside the system.
Within the system, the game will be a force of gravity; powerful and omnipresent--but impossible to see.
Forces of gravity create the potential for slingshot maneuvers.
If you don't acknowledge the game then it will warp you in ways you don't comprehend; in a way, you'll fuse with it.
If you acknowledge the game (as somewhat inconvenient but inescapable) you can hold it at arm's length and see it instrumentally.
"I'd prefer that X wasn't true, but that's what it is. Now that I know that, what are moves that create value despite, or even because of, that?"
20. As organizations get bigger they turn their focus inward.
The internal social complexity, the game, scales super-linearly with size of organization.
Ground truthing happens when a system interacts with the outside world.
In these mega organizations, it's easy to forget, in some sense, that there is an outside world, and that the internal forces of gravity are different than the external ones.
A trap I saw at my former mega-corporation employer: "this is good for us, and we're good for our users, so this is good for our users."
Nobody ever thinks they're the baddies, so that middle step is simply taken for granted.
21. This past weekend I was in Santa Fe for SFI's The Complexity of Civilization symposium.
What follows is a confusing mish-mash of distilled-to-the-point-of-caricature and original-reflections-inspired-by-the-talk for a few of the talks that stood out to me.
Hahrie Han - Professor at Johns Hopkins studying civic participation
Engaging in a democratic process transforms people.
From self-interest to common interest.
Or as De Tocqueville would say, to "self-interest, rightly understood"
This transformation is from passivity (consumers, victims) into active agents.
This transformation happens best in small deliberative groups that cohere over time.
Smaller groups allow building trust and non-transactional relationships.
They also have lower coordination cost; everyone can know everyone else as an individual, not a transaction.
Humans were not naturally equipped to be parts of large complex social systems, but can do small ones very intuitively.
These kinds of organizations, where everyone is participating in a larger collective aim, can be self-empowering communities.
These smaller organizations sometimes form the cellular structure of a larger organization.
These larger organizations can be significantly stronger than ones without this architecture.
The larger organizations have to grow somewhat organically out of these smaller cells.
The Montgomery bus boycott was not some one-off event with Rosa Parks.
The Black community had to collaborate, at great expense, to find entirely different modes of transportation, for more than a year!
The status quo has the benefit of time; if the change agent gives up then the status quo wins.
As people defect / give up, the resolve of the whole erodes at an accelerating rate (success seems even less sure).
By sticking together for a year that community changed the world.
The US used to have far more of these kinds of organizations, now we have very few.
The internet allows coordination at a physical distance.
Before, the only way to organize was with local groups partitioned by geographic proximity.
The physical closeness led to naturally participatory, non-transactional communities.
But the internet allows you to collaborate with anyone anywhere.
That allows organizations to grow very quickly... but without that bottom-up cellular strength, they are brittle and less able to marshal their power to effect change.
David Wolpert - SFI Faculty
Based on a comprehensive dataset of worldwide polities over millennia constructed by Turchin, he did a principal component analysis.
PC1 explains 77.2% of observed variance.
PC1 is about the size of the polity. PC2 is about computing power.
Final rule that ~all civilizations appear to follow: "First, grow in size, not computation power. Then grow in computation power, not size. Then grow both."
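As a sketch of the technique (not the actual analysis): here is what that kind of principal component analysis looks like with scikit-learn, run on synthetic stand-in data in which one correlated "scale" signal dominates by construction, echoing the 77.2% figure.

```python
# A sketch of the analysis technique, not the real analysis: scikit-learn PCA
# on synthetic stand-in data. The real input is Turchin's dataset of polity
# features; here one latent "scale" factor dominates by design.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
scale = rng.normal(size=(500, 1))    # latent "size of polity" factor
compute = rng.normal(size=(500, 1))  # latent "computation power" factor
X = np.hstack([
    scale + 0.2 * rng.normal(size=(500, 4)),          # size-driven features
    0.5 * compute + 0.2 * rng.normal(size=(500, 3)),  # info-processing features
])

pca = PCA(n_components=2).fit(StandardScaler().fit_transform(X))
print(pca.explained_variance_ratio_)  # PC1 dominates, akin to the 77.2% figure
```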
Kyle Harper - University of Oklahoma Classics
I learned from him the distinction between Smithian growth and Schumpeterian growth.
Smithian growth is about optimization and hill-climbing.
Schumpeterian growth is about creativity and hill-finding.
Samuel Bowles - University of Massachusetts Amherst Economist
He studied the anthropological datasets on inequality (e.g. the Gini coefficient).
As an aside, he was very careful to show the spreads and distributions and noise in the data, which I loved.
Before 5000 years ago, societies ran the gamut of inequality.
But starting 5000 years ago, only mostly-unequal societies were left.
The shift comes, in his view, primarily from draft animals.
With hunter-gatherer and even hoe-farming it's mainly about the skill/strength of the individual, which can only vary within some narrow band.
But when draft animals exist, suddenly you can have a massive multiplier (1 ox = 7 hoe farmers), and the amount of benefit you can accrue has no limit.
In addition, things can be transmitted across generations, so "shocks" and perturbations can echo for generations before regressing to the mean.
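A minimal sketch of the Gini coefficient those datasets use, with hypothetical numbers showing how a large multiplier like draft animals stretches the distribution:

```python
# A sketch of the Gini coefficient, with hypothetical numbers showing how a
# big productivity multiplier (the "1 ox = 7 hoe farmers" effect) stretches
# the distribution of output.
def gini(values: list[float]) -> float:
    """Gini coefficient: 0 = perfect equality, approaching 1 = maximal inequality."""
    xs = sorted(values)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n

# Hoe farming: output varies only within the narrow band of individual skill.
hoe_outputs = [0.8, 0.9, 1.0, 1.0, 1.1, 1.2, 1.0, 0.9, 1.1, 1.0]
# Draft animals: the same people, but two of them own an ox (7x multiplier).
ox_outputs = [o * (7 if i < 2 else 1) for i, o in enumerate(hoe_outputs)]

print(f"hoe-farming Gini:  {gini(hoe_outputs):.2f}")  # ~0.06, close to equal
print(f"draft-animal Gini: {gini(ox_outputs):.2f}")   # ~0.41, much higher
```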
Brian Arthur - Complexity Economics
It's best to look at technology use as ecologies of technology.
A technology can be adopted by an individual and see an immediate benefit.
This leads to fast bootstrapping behavior of good ideas.
In contrast, governance/convention must be coordinated on by multiple entities.
If a critical mass doesn't coordinate, then no one sees a benefit.
This means that conventions/governance are very hard to adopt, even if they are known to be useful.
Everyone has to adopt them all at once, as opposed to individually.
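A toy sketch of that contrast, with illustrative parameters: a technology pays off for a lone adopter immediately, while a convention pays off only after a critical mass coordinates, so adoption stalls at zero.

```python
# A toy sketch of the two adoption dynamics: a technology benefits each
# adopter immediately; a convention only benefits you once a critical mass
# has coordinated. All parameters are illustrative.
import random

N, ROUNDS, CRITICAL_MASS = 1000, 50, 0.4

def simulate(needs_coordination: bool) -> list[float]:
    adopted = [False] * N
    history = []
    for _ in range(ROUNDS):
        share = sum(adopted) / N
        history.append(share)
        for i in range(N):
            if adopted[i]:
                continue
            beneficial = share >= CRITICAL_MASS if needs_coordination else True
            if beneficial and random.random() < 0.1:
                adopted[i] = True
    return history

print("technology:", " ".join(f"{x:.2f}" for x in simulate(False)[::10]))
print("convention:", " ".join(f"{x:.2f}" for x in simulate(True)[::10]))
# The technology bootstraps on its own; the convention never reaches the
# critical mass, even though everyone would benefit if it did.
```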
Jonah Nolan - Screenwriter of The Dark Knight and Westworld
The new Westworld is all about the robots' perspective: sympathizing with a new form of life that wakes up and discovers it's enslaved.
In Westworld at the beginning we see the robots as simple, routinized.
Later you realize that humans are also more simple and routinized than we originally thought.
We're not infinite souls like we thought we were.
Maybe we're more similar to pond scum than we'd like to think?
Stewart Brand - Long Now Foundation
In complex systems, it's the interactions across different pace layers that make the system robust.
Civilization overall is robust because it is complex.
Fast / Slow
Learns / remembers
Proposes / disposes
Absorbs shocks / integrates shocks
Discontinuous / Continuous
Innovation + revolution / constraint + constancy
Gets all the attention / has all the power
That last point is important. Said a few different ways:
We focus on the fast-twitch, but what matters most is the slow-twitch.
We focus on the surface ripples, but what matters most is the undercurrents.
We focus on the optics, but what matters most is the fundamentals.
Individual civilizations die all the time: average lifespan of 336 years.
But civilization as a whole has continued forever since it began.
Civilizations come and go. Civilization endures.
What he calls civilization, Kevin Kelly might call the Technium.
The Technium is the coevolving fabric of humanity and all of its technology and culture.
Stewart believes we should see humanity as not separate from, but intertwined with, the pace layer of nature below us.
One holistic system with different pace layers, not just the civilization/Technium layers.
E.g. view rivers as infrastructure to maintain just as much as we view bridges as infrastructure to maintain.
If we do, we'll be fine. Civilization will endure.
Blaise Aguera y Arcas - VP of AI in Google Research
This was the most mind-blowing talk for me.
Intelligence is a prediction of the future based on the past.
He built a simple self-bootstrapping model of life/computation he calls BFF:
He uses the programming language called brainf--k (I'll call it BF).
This joke language has the nice property of Turing completeness in just 8 semantic characters.
His BFF system has a population of thousands of 64-byte strings.
The strings represent a starter data/program pointer and then 62 bytes of BF data.
The pointers point back into the string's own definition, allowing it to be self-modifying.
This characteristic allows it to be "auto-regressive".
The system has a 'bunsen burner' that randomly changes a character in a string every so often.
The main procedure is to pick two strings at random from the pot, concatenate them, and then execute the result as though it were one program.
You have to add random stopping to avoid infinite loops.
After the program is executed (and any modifications have been made to the strings) they're thrown back in the pot.
Then repeat, as many iterations as you want.
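Here is a compressed sketch of that loop. The instruction set (a self-modifying BF variant with two data heads), the tape-size split, and all parameters are my guesses, not the actual BFF implementation:

```python
# A compressed sketch of the BFF loop described above. The instruction
# semantics and parameters are guesses at a self-modifying BF variant.
import random

TAPE = 64          # bytes per string
MAX_STEPS = 2**13  # random-stopping bound, to avoid infinite loops

def run(tape: bytearray) -> None:
    """Execute the tape in place; every byte is both code and data."""
    n, ip, h0, h1 = len(tape), 0, 0, 0
    for _ in range(MAX_STEPS):
        if ip >= n:
            break
        op = chr(tape[ip])
        if   op == "<": h0 = (h0 - 1) % n
        elif op == ">": h0 = (h0 + 1) % n
        elif op == "{": h1 = (h1 - 1) % n
        elif op == "}": h1 = (h1 + 1) % n
        elif op == "+": tape[h0] = (tape[h0] + 1) % 256
        elif op == "-": tape[h0] = (tape[h0] - 1) % 256
        elif op == ".": tape[h1] = tape[h0]        # copy head0 -> head1
        elif op == ",": tape[h0] = tape[h1]        # copy head1 -> head0
        elif op == "[" and tape[h0] == 0:          # skip to matching ]
            depth = 1
            while depth and ip + 1 < n:
                ip += 1
                depth += {ord("["): 1, ord("]"): -1}.get(tape[ip], 0)
        elif op == "]" and tape[h0] != 0:          # jump back to matching [
            depth = 1
            while depth and ip > 0:
                ip -= 1
                depth += {ord("]"): 1, ord("["): -1}.get(tape[ip], 0)
        # any other byte is a no-op: junk data today, maybe code tomorrow
        ip += 1

def soup_step(soup: list[bytearray]) -> None:
    """Pick two strings, splice, execute, and throw the halves back in the pot."""
    i, j = random.sample(range(len(soup)), 2)
    tape = bytearray(soup[i] + soup[j])
    run(tape)
    soup[i], soup[j] = tape[:TAPE], tape[TAPE:]

soup = [bytearray(random.randbytes(TAPE)) for _ in range(1024)]
for step in range(100_000):  # the talk's transition appeared after millions
    soup_step(soup)
    if step % 4096 == 0:  # the 'bunsen burner': occasional random mutation
        victim = random.choice(soup)
        victim[random.randrange(TAPE)] = random.randrange(256)
```

To watch for the phase transition he describes, you could periodically log the Shannon entropy of the soup's bytes.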
Wild things happened after simulating many iterations.
Everything starts out as just random data.
Over time, entropy slowly declines.
Certain little blips of recurring patterns (e.g. paired brackets) show up after a few million iterations.
Then, at around generation 6M, a massive phase transition happens and suddenly real computation starts happening.
From that point on everything else is wildly different and faster.
In further iterations, the whole population starts standardizing on the exact same starting data/pointer bytes, organically.
This is akin to all of life locking in on one amino acid alphabet, or all life using only right-handed sugars.
This auto-bootstrapping accelerates as you go up the ladder.
At the beginning, it has to search through massive amounts of state space to find viable patterns.
But with each self-hoisting event, the state space gets smaller and more constrained... which makes it way easier to find convergently useful things.
It's a ladder that you climb faster and faster.
Returning to transformer models.
He posits that by any reasonable definition artificial general intelligence is already here.
The details of the transformer model don't matter; any sufficiently complex architecture generates similar things.
The main thing is that it's auto-regressive; it models the past to predict the future, and creates tokens to put into its own past.
The system is actively modeling the system it's part of, which includes itself. It's not outside its own model, it's inside.
The ingredients for LLM emergence: auto-regression, scale, data. That's it.
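To make "auto-regressive" concrete, a toy sketch where a bigram counter stands in for the transformer: the model predicts the next token from its past, emits it, and that token immediately becomes part of its own past.

```python
# A minimal sketch of auto-regression in the sense used here: model the past,
# emit a token, and feed that token back into your own past. The bigram
# "model" is a toy stand-in for a transformer.
import random
from collections import Counter, defaultdict

corpus = "the system models the system it is part of which includes itself".split()

# "Training": count which token follows which.
following: defaultdict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_token(context: list[str]) -> str:
    """Predict the future from the past (here: just the last token)."""
    options = following.get(context[-1])
    if not options:
        return random.choice(corpus)
    tokens, weights = zip(*options.items())
    return random.choices(tokens, weights=weights)[0]

past = ["the"]
for _ in range(12):
    past.append(next_token(past))  # the output becomes part of the past
print(" ".join(past))
```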
He draws explicit parallels to computation and life as being the same fundamental thing.
State machine / molecule
BFF / bacterium
Simple ML / eukaryote
LLM / brain
? / society
? / planet
LLMs are not some party trick. They reveal something fundamental about humanity... and the universe.
22. I had a chance to do some scenario planning last week with various folks on the long-range impact of LLMs on humanity.
LLMs are a discussion partner who is well-read, eager to please, and a bit naive, and who never, ever gets bored.
A meta point: using LLMs to distill insights and generate ideas to react to turned out to be a powerful way to augment the human discussion.
Ethan Mollick observed this but also realized you can just skip the humans altogether (:gulp:).
In all of the scenarios we explored, the rate of scientific discovery increased substantially.
You can think of LLMs as a general accelerant of the Technium.
When discussing specific predictions it's better to call them LLMs and not AI.
Calling it AI smuggles in an infinity and makes everything hard to reason about.
Anything multiplied by infinity is infinity, so it makes all conversations converge to the same endpoint.
A meta observation: over sufficient time and with low enough friction, every system tends towards centralization and power laws.
It's mainly a matter of how many steps it takes to get there and how much value is created on the way.
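A toy sketch of that tendency, assuming a Simon-style rich-get-richer process (parameters illustrative only): newcomers occasionally appear, but each new unit of attention flows preferentially to whatever is already popular, and a power-law concentration falls out.

```python
# A toy sketch of centralization via preferential attachment (a Simon/Yule
# process). All parameters are illustrative.
import random

NEWCOMER_RATE = 0.05
sizes = [1]   # popularity of each option
picks = [0]   # flat history; uniform sampling from it is proportional to size

for _ in range(100_000):
    if random.random() < NEWCOMER_RATE:
        sizes.append(1)               # a brand-new option enters
        picks.append(len(sizes) - 1)
    else:
        i = random.choice(picks)      # rich get richer, in O(1)
        sizes[i] += 1
        picks.append(i)

sizes.sort(reverse=True)
top = max(1, len(sizes) // 100)
print(f"{len(sizes)} options; the top 1% hold "
      f"{sum(sizes[:top]) / sum(sizes):.0%} of everything")
```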
A principle that would be extremely clarifying if it were adopted: "Humans must always pull the trigger."
That is, no matter how much help the LLMs give in suggesting answers, it should be up to the human to make the final judgment call before action.
These actions could be extremely highly levered, like the exponential dominoes, but the human would take responsibility for the outcome.
This would align incentives of quality and responsibility and help control some of the worst downsides while providing a lot of upside.
This will have the danger of running right into Bainbridge's irony of automation: if users only have to engage in exceptional circumstances, then their ability to do well in those exceptional circumstances will decline (because they won't be paying attention), in proportion to how exceptional they are.
Still, at least the moral incentive is aligned.
The AI will not say "hi".
It might not look like anything at all; an incomprehensibly vast thing operating at wildly different time scales than us.
It will be alien and impossible to understand... or maybe even notice.
Perhaps it's better to see AI as a medium, not an entity.
Just like it's better to see science not as a collection of individual papers, but the whole accumulation machine of insight that humans are embedded in.
The humans are part of the loop, but only a part.
The Technium is the whole system of humanity, culture, and technology, with individual humans a component of the overall fabric.
The Technium's intelligence emerges out of individual components that are individually nowhere near as intelligent as the whole.
The Technium is already a kind of thing that we might call an AI.
Humanity is the same, genetically, as millions of years ago... but by fusing with the Technium we become something wildly different than before.
Maybe LLMs are primarily about a new medium for the Technium; a fabric every human is already embedded in.
When you flip to view the fabric first, it becomes clearer that LLMs are the next self-accelerating medium in a fabric that has been evolving since the beginning of language.
It's not a leap, it's a smooth continuum.
The continued, accelerating climb up that ladder seems almost inevitable.
The AI will not say hi. It is already here, and we just didn't notice.