Lots of different models are converging on GPT4-level quality.

Bits and Bobs 7/29/24

This is tremendously exciting!

A few months ago we had only one existence proof of GPT4-level quality.

It was entirely possible that GPT4 would remain the only model at that level, which would have pushed the ecosystem toward a very different, highly centralized outcome.

A month or so ago we got Claude 3.5 Sonnet, and now we have an open-weights model in the same ballpark.

The open ecosystem has caught up!

If you had to bet on a single model family today, Llama is now the clear bet.

Interestingly, no one has exceeded GPT4-level quality.

Will we ever exceed it, or is this a natural ceiling?

All of the labs are very confident we'll exceed it.

But then again, it's directly in their vested interest to believe that there is significant headroom in quality… and to get everyone else to believe that, too.

If it's a natural ceiling in quality, then from here on out it will be all about efficiency.

GPT-4o Mini is already spectacularly cheap.

I kind of hope that we do hit a ceiling somewhere around GPT4 levels of quality.

No risk of runaway AGI.

But tons and tons of capability overhang for society to harvest and figure out how to use for the next few decades.
