LLMs today make boring mistakes because they can't learn.
That is, the model's weights are fixed at training time.
Some of the supporting systems that shape the final, end-user-facing experience can be tweaked at a faster rate (e.g. the system prompt, or data fed in via RAG), but the model itself is retrained only every few months, given the expense.
This means that LLMs can't learn from mistakes they make today.
If an LLM makes a mistake and you point it out, it apologizes… but then never gets better from that interaction.
Presumably those kinds of interactions will be used during the training of future iterations of the model, but that feedback loop is indirect and very long.
Compare this to, say, search ranking, where a tweak can roll out nearly instantly.
Humans are not like this. We quickly absorb disconfirming evidence from our mistakes and learn from it.
If you know the thing you're interacting with will learn, and learn quickly, you have more incentive to be patient and to try to teach it.