LLMs can't back up.
They can only go forward.
Once they emit a token, they can't take it back.
If asked to defend what they've said, all they can do is retcon the position they've already locked themselves into.
How very human!
LLMs also want to be helpful and do the thing you asked.
Which means that when they get stuck in a corner, they tend to gaslight you.
"I'm sorry that last thing didn't work, but this should!", repeatedly.
That's why having them lay out their reasoning first, and only then give the synthesized answer, helps: they never lock in a conclusion they'd have to retcon.
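A minimal sketch of that reasoning-first pattern as a prompt template. The function name, section labels, and wording here are illustrative assumptions, not a fixed API; the point is just that the instructions force the reasoning tokens to come before the answer tokens.

```python
def reasoning_first_prompt(question: str) -> str:
    """Build a prompt that asks the model to reason before answering.

    Illustrative sketch: the section labels ('Reasoning:', 'Answer:')
    are arbitrary; what matters is ordering reasoning before the answer.
    """
    return (
        f"{question}\n\n"
        "First, think through the problem step by step under a "
        "'Reasoning:' heading. Only after the reasoning is complete, "
        "state your conclusion under an 'Answer:' heading."
    )

# Example: the model emits its reasoning tokens first, so the final
# answer is conditioned on them rather than defended after the fact.
print(reasoning_first_prompt("Why does my binary search loop forever?"))
```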