I had a chance to do some scenario planning last week with various folks on the long-range impact of LLMs on humanity.

LLMs are a discussion partner who is well-read, eager to please, a bit naive, and never, ever gets bored.

One meta takeaway: using LLMs to distill insights and generate ideas to react to turned out to be a powerful way to augment the human discussion.

Ethan Mollick observed this but also realized you can just skip the humans altogether (:gulp:).

In all of the scenarios we explored, the rate of scientific discovery increased substantially.

You can think of LLMs as a general accelerant of the Technium.

When discussing specific predictions it's better to call them LLMs and not AI.

Calling it AI smuggles in an infinity and makes everything hard to reason about.

Anything multiplied by infinity is infinity, so it makes all conversations converge to the same endpoint.

A meta observation: over sufficient time and with low enough friction, every system tends towards centralization and power laws.

It's mainly a matter of how many steps it takes to get there and how much value is created on the way.

A principle that would be extremely clarifying if it were adopted: "Humans must always pull the trigger."

That is, no matter how much help the LLMs give in suggesting answers, it should be up to the human to make the final judgment call before action.

These actions could be highly levered, like the exponential dominoes, but the human would take responsibility for the outcome.

This would align incentives of quality and responsibility and help control some of the worst downsides while providing a lot of upside.

This runs right into Bainbridge's irony of automation: if users only have to engage in exceptional circumstances, their ability to handle those circumstances will decline (because they won't be paying attention), in proportion to how exceptional they are.

Still, at least the moral incentive is aligned.

The AI will not say "hi".

It might not look like anything at all; an incomprehensibly vast thing operating at wildly different time scales than us.

It will be alien and impossible to understand... or maybe even notice.

Perhaps it's better to see AI as a medium, not an entity.

Just as it's better to see science not as a collection of individual papers, but as the whole accumulating machine of insight that humans are embedded in.

The humans are part of the loop, but only a part.

The Technium is the whole system of humanity, culture, and technology, with individual humans a component of the overall fabric.

The Technium's intelligence emerges out of individual components that are individually nowhere near as intelligent as the whole.

The Technium is already a kind of thing that we might call an AI.

Humanity is the same, genetically, as millions of years ago... but by fusing with the Technium we become something wildly different than before.

Maybe LLMs are primarily about a new medium for the Technium; a fabric every human is already embedded in.

When you flip to view the fabric first, it becomes clearer that LLMs are the next self-accelerating medium in a fabric that has been evolving since the beginning of language.

It's not a leap, it's a smooth continuum.

The continued, accelerating climb up that ladder seems almost inevitable.

The AI will not say hi. It is already here, and we just didn't notice.
