LLMs are inherently bland.
If you ask ChatGPT to ask you an interesting question it'll say something super generic like: "What movie or book do you think everyone should read or see, and why?"
LLMs only know how to do the average, safe thing: the most regression-to-the-mean response within the frame you give them.
However, you can get LLMs to say interesting or piquant things.
To do so, you have to give them an interesting frame.
They'll still be bland within that frame, but if the frame is interesting, the result is too.
The frame comes from outside the LLM.
You use an external force to push it off balance and into a new catchment area.
The default way to do this is to have a human give an interesting frame.
But you can get it to happen with a number of external processes.
For example, you can set up two LLMs in conversation: one to come up with ideas, and one to critique them like a skeptical user.
The ideas start off bland, but each round of the loop leans into what makes an idea different and accentuates it, so the results grow steadily more interesting.
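The loop above can be sketched in a few lines. This is a minimal illustration, not a definitive implementation: `call_llm` is a hypothetical stand-in for whatever chat-completion API you use, stubbed here so the control flow runs as-is, and the prompts are only illustrative.

```python
# Sketch of a two-LLM generate/critique loop.
# Assumption: `call_llm` is a placeholder for a real chat-completion
# call; it is stubbed so the structure of the loop is runnable.

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Stub: replace with a real API call to an LLM of your choice."""
    return f"response ({system_prompt[:25]}...): {user_prompt[:40]}"

def refine_idea(topic: str, rounds: int = 3) -> list[str]:
    """Alternate an idea-generator LLM with a skeptical-critic LLM."""
    idea = call_llm("You are a creative idea generator.", topic)
    history = [idea]
    for _ in range(rounds):
        # The critic pushes against whatever is generic in the idea.
        critique = call_llm(
            "You are a skeptical user. Say what is generic about this "
            "idea and what makes it distinctive.",
            idea,
        )
        # The generator revises, leaning into what the critic found
        # distinctive and dropping what was called generic.
        idea = call_llm(
            "Revise the idea: keep what the critic found distinctive, "
            "cut what they called generic.",
            f"Idea: {idea}\nCritique: {critique}",
        )
        history.append(idea)
    return history

history = refine_idea("an interesting question to ask someone", rounds=2)
```

The key design point is that neither model is interesting alone; the frame each one imposes on the other is what does the work.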
An LLM on its own cannot be non-bland.
But a system with an LLM embedded in it (even if the LLM is most of the mass of the system) can be non-bland.