LLMs don't reason; they intuit.
With enough scale, that intuition can produce an extremely convincing facsimile of reasoning.
LLMs may well be incapable of original reasoning, but they are so good at hyper-powered fuzzy intuition at scale that the imitation is shockingly convincing.
A nice post on this from my friend Rohit: https://www.strangeloopcanon.com/p/what-can-llms-never-do
Humans are capable of reason.
But the vast, vast majority of the time we do what LLMs do.
We retrieve a cached, good-enough answer via hyper-powered fuzzy intuition.
Our "System 2" or reasoning center is extraordinarily expensive, and we'd rather not use it very often.
Every so often we need to fire it up to calculate a specific reasoned output.
Things like tests in academia are designed to force you to fire up your System 2 and demonstrate that you can do novel reasoning in a given context.
But in the vast majority of real-world situations, you don't need to actually do novel reasoning.
A good enough cached answer with a bit of fuzzy interpolation is totally fine.
Crucially, you don't have to do the reasoning yourself; you can crib off what others have done.
If everyone around you does a certain task in a certain way and it seems to work well enough for them, why come up with something original? Just do that.
Perhaps this is where the human penchant for mimicry comes from.
At some point, some human applied reasoning to come up with that approach as a hypothesis and then executed it.
The idea turned out to be viable, so they did it more times, and over time more people copied them.
If it hadn't worked, they would never have repeated the experiment.
The actions we see others around us take are, by and large, likely to roughly work; otherwise they'd stop doing them.
They might "work" for a task that is not what the human intended but is still load-bearing in some other way, as in someone spending their afternoons in front of a slot machine.
Across society, only a small number of people on any given day need to reason out something novel, and then society as a whole can mimic the ideas that survive.
Society bootstraps its understanding and ratchets up, adding more good-enough rational moves to our collective repertoire.
And now LLMs have come along and can sample all of those existing moves, adding them to their own artificial repertoire.