LLMs are, by default, way too eager to please. They're too gullible, too earnest, and too action-oriented to be trusted to do things fully on your behalf.
In my experience, Deep Research often gets tricked by SEO slop.
The slopification of the internet means that most of the data is untrustworthy.
A human looking at the SEO'd slop would see it's not credible.
But Deep Research doesn't have a vibe for what should feel credible, so it just accepts it as correct.
Now imagine an agent trying to be helpful for you in a task.
What could possibly go wrong?
The expensive eggs were just the beginning.