LLMs (including Deep Research) assume your question is coherent and well-posed.
- It's very easy to accidentally trick yourself with superficially good output on a fundamentally flawed question.
- Model: "Turns out you were right all along!"
- Human: "Just as I expected, thank you!"