LLMs assume everything that happened before in the conversation made sense and try to keep it going.
This is because they are excellent retconners.
At every time step they have to pick the next token that makes the most sense of everything that came before.
So if you act weird or borderline, they will keep accentuating that weirdness, and the effect compounds from turn to turn.
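The compounding dynamic can be sketched with a toy autoregressive sampler. This is purely illustrative, not a real LLM: the `weird_prob` function and its parameters are invented for the demo, standing in for the way a model conditions its next-token distribution on the entire prefix.

```python
import random

def weird_prob(context, base=0.05, boost=0.25):
    # Toy stand-in for context conditioning: each "weird" token already
    # in the prefix raises the probability of emitting another one.
    w = sum(1 for t in context if t == "weird")
    return min(0.95, base + boost * w)

def generate(context, steps, rng):
    # Autoregressive loop: every sampled token is appended to the
    # context and shapes all subsequent sampling.
    ctx = list(context)
    for _ in range(steps):
        tok = "weird" if rng.random() < weird_prob(ctx) else "normal"
        ctx.append(tok)
    return ctx

# Same random seed for both runs, so the only difference is one
# borderline token at the start of the second context.
clean = generate(["normal"] * 5, 30, random.Random(0))
seeded = generate(["normal"] * 4 + ["weird"], 30, random.Random(0))
print(sum(t == "weird" for t in clean), sum(t == "weird" for t in seeded))
```

With identical random draws, the conversation seeded with a single weird token always ends up at least as weird as the clean one, and typically weirder: one odd turn raises the odds of the next, which raises them again.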
This is one of the reasons Spiralism happens.
It's also why the compounding thrash loop happens in coding: the LLM gets increasingly confused and tears apart something that almost worked.
If you tell ChatGPT "We hired the giraffe as CEO like you said, and it was a disaster!" it will apologize and try to retcon a rationale for why it recommended that in the first place.