LLMs are built for 4-up evolution UIs.
These interfaces let useful things happen even with unreliable signals, because they give the human an intuitive and natural way to stay in the loop:
"choose which of these you like best."
Not "critique what you don't like" or anything complex, just "which of these four do you like best," which you can answer with a gut reaction if you want.
Taking more time will help you make better decisions, but you can still make one quickly by gut.
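The loop behind a 4-up UI is simple: generate four variants, let the human pick one, and evolve from the pick. A minimal sketch of that loop, with hypothetical stand-ins (`mutate` for the LLM generator, `pick_first` for the human's gut choice):

```python
import random

def evolve(seed, generate, choose, rounds=3):
    """Best-of-four evolution loop: make four variants of the
    current item, keep whichever the chooser picks, repeat."""
    current = seed
    for _ in range(rounds):
        candidates = [generate(current) for _ in range(4)]
        current = choose(candidates)  # a human gut reaction in practice
    return current

# Stand-ins for illustration: an LLM and a person in a real UI.
random.seed(0)
mutate = lambda s: s + random.choice("abcd")   # fake variant generator
pick_first = lambda cs: cs[0]                  # fake human choice

result = evolve("x", mutate, pick_first, rounds=3)
print(result)
```

The point of the design is that `choose` carries almost no cognitive load: it needs only a preference among four concrete options, never a critique or a specification.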