LLMs are pachinko machines with paths for any kind of writing humans have done in the past. But if nothing like your task appears in the training set, the model has no idea.
It matches based on superficial similarity, not fundamental similarity.
If the training data contains things that are fundamentally similar to your task but not superficially similar, the model will get confused and not know what to do.