This past weekend I was in Santa Fe for SFI's The Complexity of Civilization symposium.

What follows is a confusing mish-mash of distilled-to-the-point-of-caricature and original-reflections-inspired-by-the-talk for a few of the talks that stood out to me.

Hahrie Han - Professor at Johns Hopkins studying civic participation

Engaging in a democratic process transforms people.

From self-interest to common interest.

Or as de Tocqueville would say, to "self-interest, rightly understood."

This transformation is from passivity (consumers, victims) into active agents.

This transformation happens best in small deliberative groups that cohere over time.

Smaller groups allow building trust and non-transactional relationships.

They also have lower coordination cost; everyone can know everyone else as an individual, not a transaction.

Humans were not naturally equipped to be parts of large complex social systems, but can do small ones very intuitively.

These kinds of organizations, where everyone is participating in a larger collective aim, can be self-empowering communities.

These smaller organizations sometimes form the cellular structure of a larger organization.

These larger organizations can be significantly stronger than ones without this architecture.

The larger organizations have to grow somewhat organically out of these smaller cells.

The Montgomery bus boycott was not some one-off event with Rosa Parks.

The Black community had to collaborate, at great expense, to find entirely different modes of transportation, for more than a year!

The status quo has the benefit of time; if the change agent gives up then the status quo wins.

As people defect / give up, the resolve of the whole erodes at an accelerating rate (success seems even less sure).

By sticking together for a year that community changed the world.

The US used to have far more of these kinds of organizations, now we have very few.

The internet allows coordination at a physical distance.

Before, the only way to organize was with local groups partitioned by geographic proximity.

The physical closeness led to naturally participatory, non-transactional communities.

But the internet allows you to collaborate with anyone anywhere.

That allows organizations to grow very quickly... but without that bottom-up cellular strength, they are brittle and less able to marshal their power to effect change.

David Wolpert - SFI Faculty

Based on a comprehensive dataset of worldwide polities spanning millennia, constructed by Turchin, he ran a principal component analysis.

PC1 explains 77.2% of observed variance.

PC1 is about the size of the polity. PC2 is about computing power.

Final rule that ~all civilizations appear to follow: "First, grow in size, not computation power. Then grow in computation power, not size. Then grow both."
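As a sketch of where a "PC1 explains X% of variance" figure comes from (toy synthetic data here, not Turchin's dataset): the fraction is the top eigenvalue of the covariance matrix divided by the trace. With two hypothetical "complexity" measures driven by one latent factor, the closed-form 2x2 eigenvalues make this explicit:

```python
import math
import random

# Toy stand-in: two correlated "complexity" measures across 200 polities
# (hypothetical data, only illustrating the mechanics of PCA).
rng = random.Random(0)
data = []
for _ in range(200):
    size = rng.gauss(0, 1)  # latent "scale" factor
    data.append((size + 0.3 * rng.gauss(0, 1),
                 size + 0.3 * rng.gauss(0, 1)))

def explained_variance(data):
    """Fraction of variance captured by PC1: the top eigenvalue of the
    2x2 covariance matrix over its trace (the sum of eigenvalues)."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxx = sum((x - mx) ** 2 for x, _ in data) / n
    syy = sum((y - my) ** 2 for _, y in data) / n
    sxy = sum((x - mx) * (y - my) for x, y in data) / n
    # eigenvalues of [[sxx, sxy], [sxy, syy]] in closed form
    mean = (sxx + syy) / 2
    gap = math.hypot((sxx - syy) / 2, sxy)
    return (mean + gap) / (sxx + syy)

print(f"PC1 explains {explained_variance(data):.1%} of the variance")
```

With strongly correlated measures like these, PC1 dominates; in Turchin's real data the analogous number is the 77.2% above.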

Kyle Harper - University of Oklahoma Classics

I learned from him the distinction between Smithian growth and Schumpeterian growth.

Smithian growth is about optimization and hill-climbing.

Schumpeterian growth is about creativity and hill-finding.

Samuel Bowles - University of Massachusetts Amherst Economist

He studied the anthropological datasets on inequality (e.g. Gini coefficients).

As an aside, he was very careful to show the spreads and distributions and noise in the data, which I loved.

Before 5000 years ago, societies ran the gamut of inequality.

But starting 5000 years ago, only mostly-unequal societies were left.

The shift comes, in his view, primarily from draft animals.

With hunter-gatherer and even hoe-farming it's mainly about the skill/strength of the individual, which can only vary within some narrow band.

But when draft animals exist, suddenly you can have a massive multiplier (1 ox = 7 hoe farmers), and the amount of benefit you can accrue has no limit.

In addition, things can be transmitted across generations, so "shocks" and perturbations can echo for generations before regressing to the mean.

Brian Arthur - Complexity Economics

It's best to look at technology use as ecologies of technology.

A technology can be adopted by an individual and see an immediate benefit.

This leads to fast bootstrapping behavior of good ideas.

In contrast, governance/convention must be coordinated on by multiple entities.

If a critical mass doesn't coordinate, then no one sees a benefit.

This means that conventions/governance are very hard to adopt, even if they are known to be useful.

Everyone has to adopt them all at once, as opposed to individually.
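This coordination threshold can be sketched as a toy adoption model (my illustration, not Arthur's): a technology pays off for any lone adopter and so bootstraps, while a convention below critical mass pays no one and stalls.

```python
def simulate(needs_critical_mass, start_fraction=0.1, rounds=10,
             critical_mass=0.5, growth=0.2):
    """Best-response adoption dynamics. A technology pays off
    unconditionally, so adoption always grows; a convention pays off only
    above critical_mass, so a small seed of adopters never expands."""
    f = start_fraction
    for _ in range(rounds):
        pays = (f >= critical_mass) if needs_critical_mass else True
        f = min(1.0, f + growth) if pays else f
    return f

print(simulate(needs_critical_mass=False))  # technology: reaches 1.0
print(simulate(needs_critical_mass=True))   # convention: stuck at 0.1
```

The asymmetry is the whole point: the convention is just as useful once everyone has it, but no individual move toward it ever pays.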

Jonah Nolan - Co-writer of The Dark Knight, co-creator of Westworld

The new Westworld is all about the robots' perspective: sympathizing with a new form of life that wakes up and discovers it's enslaved.

In Westworld at the beginning we see the robots as simple, routinized.

Later you realize that humans are also more simple and routinized than we originally thought.

We're not infinite souls like we thought we were.

Maybe we're more similar to pond scum than we'd like to think?

Stewart Brand - Co-founder of The Long Now Foundation

In complex systems, it's the interactions across different pace layers that make the system robust.

Civilization overall is robust because it is complex.

Fast / Slow

Learns / remembers

Proposes / disposes

Absorbs shocks / integrates shocks

Discontinuous / continuous

Innovation + revolution / constraint + constancy

Gets all the attention / has all the power

That last point is important. Said a few different ways:

We focus on the fast-twitch, but what matters most is the slow-twitch.

We focus on the surface ripples, but what matters most is the undercurrents.

We focus on the optics, but what matters most is the fundamentals.

Individual civilizations die all the time: average lifespan of 336 years.

But civilization as a whole has continued unbroken since it began.

Civilizations come and go. Civilization endures.

What he calls civilization, Kevin Kelly might call the Technium.

The Technium is the coevolving fabric of humanity and all of its technology and culture.

Stewart believes we should see humanity as not separate from, but intertwined with, the pace layer of nature below us.

One holistic system with different pace layers, not just the civilization/Technium layers.

E.g. view rivers as infrastructure to maintain just as much as we view bridges as infrastructure to maintain.

If we do, we'll be fine. Civilization will endure.

Blaise Aguera y Arcas - VP of AI at Google Research

This was the most mind-blowing talk for me.

Intelligence is a prediction of the future based on the past.

He built a simple self-bootstrapping model of life/computation he calls BFF:

He uses the programming language called brainf--k (I'll call it BF).

This joke language has the nice property of being Turing complete with just 8 semantic characters.

His BFF system has a population of thousands of 64-byte strings.

The strings represent a starter data/program pointer and then 62 bytes of BF data.

The pointers point back into the string's own definition, allowing it to be self-modifying.

This characteristic allows it to be "auto-regressive".

The system has a 'bunsen burner' that randomly changes a character in a string every so often.

The main procedure is to pick two strings at random from the pot, concatenate them, and execute the result as though it were one program.

You have to add random stopping to avoid infinite loops.

After the program is executed (and any modifications have been made to the strings) they're thrown back in the pot.

Then repeat, as many iterations as you want.
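A minimal sketch of that loop, under simplifying assumptions: the real BFF instruction set is richer (two heads plus copy operations), so here I use plain BF where the concatenated string is simply its own tape, a fixed step cap stands in for random stopping, and the population and iteration counts are toy-sized.

```python
import random

TAPE = 64    # bytes per string, per the talk's description
STEPS = 256  # hard execution cap, standing in for "random stopping"

def run_bf(tape):
    """Execute a simplified self-modifying BF: the program IS its own data
    tape. Only <>+-[] act as instructions; every other byte is inert."""
    ip, dp, steps = 0, 0, 0
    while ip < len(tape) and steps < STEPS:
        c = chr(tape[ip])
        if c == '>':
            dp = (dp + 1) % len(tape)
        elif c == '<':
            dp = (dp - 1) % len(tape)
        elif c == '+':
            tape[dp] = (tape[dp] + 1) % 256
        elif c == '-':
            tape[dp] = (tape[dp] - 1) % 256
        elif c == '[' and tape[dp] == 0:
            depth = 1
            while depth and ip + 1 < len(tape):  # skip to matching ']'
                ip += 1
                depth += {'[': 1, ']': -1}.get(chr(tape[ip]), 0)
        elif c == ']' and tape[dp] != 0:
            depth = 1
            while depth and ip > 0:              # jump back to matching '['
                ip -= 1
                depth += {']': 1, '[': -1}.get(chr(tape[ip]), 0)
        ip += 1
        steps += 1

def soup_step(soup, rng, mutation_rate=0.05):
    """One interaction: splice two random strings, run the concatenation as
    a single self-modifying program, split it, and return both halves."""
    i, j = rng.sample(range(len(soup)), 2)
    combined = soup[i] + soup[j]  # 128-byte joint program/tape
    run_bf(combined)
    soup[i], soup[j] = combined[:TAPE], combined[TAPE:]
    if rng.random() < mutation_rate:  # the 'bunsen burner'
        victim = rng.choice(soup)
        victim[rng.randrange(TAPE)] = rng.randrange(256)

rng = random.Random(0)
soup = [bytearray(rng.randrange(256) for _ in range(TAPE))
        for _ in range(100)]
for _ in range(1000):
    soup_step(soup, rng)
```

The interesting dynamics in the talk emerge at much larger scales (thousands of strings, millions of iterations); this only shows the shape of the loop.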

Wild things happened after simulating many iterations.

Everything starts out as just random data.

Over time, entropy slowly declines.
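One way to watch that decline in such a simulation (my own measure, not necessarily the one used in the talk) is the byte-level Shannon entropy of the whole population: about 8 bits per byte for uniform random strings, falling as structure emerges.

```python
import math
import random
from collections import Counter

def soup_entropy(strings):
    """Shannon entropy in bits per byte across a population of byte
    strings; ~8 for uniform randomness, lower as structure emerges."""
    counts = Counter(b for s in strings for b in s)
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

rng = random.Random(0)
random_soup = [bytes(rng.randrange(256) for _ in range(64))
               for _ in range(100)]
# hypothetical converged state: every string is just '[]' pairs
converged_soup = [bytes([91, 93] * 32)] * 100

print(soup_entropy(random_soup))     # close to 8 bits/byte
print(soup_entropy(converged_soup))  # exactly 1 bit/byte
```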

Certain little blips of recurring patterns (e.g. paired brackets) show up after a few million iterations.

Then, around the 6M-iteration mark, a massive phase transition happens and suddenly real computation starts happening.

From that point on everything else is wildly different and faster.

In further iterations, the whole population starts standardizing on the exact same starting data/pointer bytes, organically.

This is akin to all of life locking in on one amino acid alphabet, or all life using only right-handed sugars.

This auto-bootstrapping accelerates as you go up the ladder.

At the beginning, it has to search through massive amounts of state space to find viable patterns.

But with each self-hoisting event, the state space gets smaller and more constrained... which makes it way easier to find convergently useful things.

It's a ladder that you climb faster and faster.

Returning to transformer models.

He posits that by any reasonable definition artificial general intelligence is already here.

The details of the transformer architecture don't matter; any sufficiently complex architecture generates similar things.

The main thing is that it's auto-regressive; it models the past to predict the future, and creates tokens to put into its own past.

The system is actively modeling the system it's part of, which includes itself. It's not outside its own model, it's inside.

The ingredients for LLM emergence: auto-regression, scale, data. That's it.
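That auto-regressive loop can be sketched with a toy stand-in model (a hypothetical bigram table, nothing like a real transformer): predict the next token from the past, append the prediction to the past, repeat.

```python
import random

def sample_autoregressive(bigram, start, n, rng):
    """Minimal auto-regressive loop: predict the next token from the
    context, then feed the prediction back into the context."""
    seq = [start]
    for _ in range(n):
        nxt = rng.choice(bigram[seq[-1]])
        seq.append(nxt)  # the model's output becomes its own past
    return seq

# Hypothetical bigram "model" for illustration only.
bigram = {"the": ["cat", "dog"], "cat": ["sat"], "dog": ["ran"],
          "sat": ["the"], "ran": ["the"]}
print(" ".join(sample_autoregressive(bigram, "the", 6, random.Random(1))))
```

The same skeleton, with the bigram lookup replaced by a learned next-token distribution over its entire context, is the generation loop of an LLM.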

He draws explicit parallels to computation and life as being the same fundamental thing.

State machine / molecule

BFF / bacterium

Simple ML / eukaryote

LLM / brain

? / society

? / planet

LLMs are not some party trick. They reveal something fundamental about humanity... and the universe.
