LLMs don't do deep reasoning.

Bits and Bobs · 6/17/24

They do superficial detail matching crazy well.

But it turns out that a huge number of superficial details, if generated by an underlying deep generative structure, have a deep consistency.

At large enough scales, if you average out all of the details, the noise falls away and all that's left is the deep consistency of the underlying generative function.
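This averaging effect can be sketched in a few lines, without any LLM at all. Assume a hypothetical underlying function (here, an arbitrary `3 * x + 1`) whose outputs are only ever observed with superficial noise on top: averaging enough noisy observations recovers the function's value almost exactly.

```python
import random

def generative_fn(x):
    # Stand-in for the hypothetical "deep generative function".
    return 3 * x + 1

def noisy_sample(x, rng):
    # Each observed superficial detail = structure + noise.
    return generative_fn(x) + rng.gauss(0, 5.0)

def averaged_estimate(x, n, rng):
    # Average many noisy details; the noise cancels out at scale.
    return sum(noisy_sample(x, rng) for _ in range(n)) / n

rng = random.Random(42)
x = 2.0
estimate = averaged_estimate(x, 100_000, rng)
print(estimate, generative_fn(x))
```

With 100,000 samples the estimate lands within a fraction of a percent of the true value, even though every individual sample is badly noisy.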

The best way to compress the superficial details is to (indirectly) distill the generative function.
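A toy version of that compression-as-distillation idea, under the same assumed generative function as above: fitting a two-parameter line to a thousand noisy points compresses them into two numbers that closely approximate the function that generated them.

```python
import random

def generative_fn(x):
    # Stand-in for the hypothetical underlying structure.
    return 3 * x + 1

rng = random.Random(0)
xs = [i / 100 for i in range(1000)]
ys = [generative_fn(x) + rng.gauss(0, 2.0) for x in xs]

# Ordinary least squares: compress 1000 noisy details into 2 parameters.
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
    (x - mx) ** 2 for x in xs
)
intercept = my - slope * mx
print(slope, intercept)
```

The recovered slope and intercept come out very close to the true 3 and 1: the most compact description of the details is, indirectly, the function that produced them.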

This makes LLMs very good at producing the appearance of deep reasoning: the ability to generate new superficial details that are consistent with the underlying generative system.

The LLM is good at this even if it doesn't "understand" it.

The LLM has the vibe of the fundamental societal generative function at its core.
