"The relationship between reasoning and language is similar to the relationship between physics and math.
When a supergiant star goes supernova, the universe doesn't do math to determine whether it ends up forming a black hole or a neutron star; an unfathomably large number of complex interactions at the subatomic level dictates the outcome.
However, math is a surprisingly good substitute, to the point that, despite never having physically observed the process in real time, we can be confident that if the supernova remnant's mass is below a precise threshold (roughly 3 times the mass of the sun), a neutron star will form.
Likewise, an LLM doesn't reason, but manipulating language is a very good substitute for reasoning.
If we're going to build "strong" AI à la Searle, it will need to work in a different medium."