A domain where LLMs will be useful: where things haven't been tried because they'd take too much time and patience.
For studying things within a discipline, where our existing structures have already particle-collided a lot of the ideas, LLMs aren't as useful.
LLMs are at their best interpolating (filling in gaps) rather than creating something new and out-of-sample.
However, there's a large class of interdisciplinary moves, of the form "apply a model from discipline X to a domain in discipline Y," that humans have rarely tried, because it takes too much time to wade through another discipline's literature and figure out what might transfer to your own.
So it only happens stochastically, via weirdos who happen to be deep in two disciplines simultaneously, a very small subset of the possible combinations.
But LLMs are deep in every discipline!
They could presumably do some automated particle-colliding of ideas across disciplines.