AI is a confusing catch-all term.

· Bits and Bobs 3/11/24

One way to think about it:

AI 1.0: Linear regressions, random forests, etc.

AI 2.0: Deep learning. Supervised learning with bespoke, high-quality training data.

AI 3.0: Unsupervised learning. LLMs. Messy, kitchen-sink, highly scaled training data.
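To make the AI 1.0 bucket concrete, here's a minimal sketch of that era's workhorse, a linear regression, fit with closed-form least squares. The toy data and function names are illustrative, not from the post; no framework is assumed.

```python
# "AI 1.0"-style model: ordinary least squares linear regression,
# fit in closed form from means, covariance, and variance.

def fit_linear(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Toy dataset following y = 2x + 1 exactly (hypothetical example).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
slope, intercept = fit_linear(xs, ys)
```

The contrast with AI 3.0 is the point: this model does one narrow thing extremely cheaply, and nothing else.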

In any given situation, it's still usually possible to build a better-performing, task-specific AI 2.0 model.

But the notable thing for LLMs is that they're reasonably good at just about everything.

They are robustly tolerable, not precariously optimal.

In the early days of computers there was a debate over whether specialized ASICs (which can offer orders-of-magnitude better performance) or generalized chips would win out.

The latter won out in all but the most performance-sensitive niches; the flexibility was just too important.

An ASIC moves the logic into a much lower pace layer that is expensive and slow to change.

A general-purpose chip allows the logic to change in software, many pace layers higher.

LLMs aren't the best at any one task, but they're good enough at a surprising diversity of them.
