LLMs have the Harry Potter problem that recommender systems have.
- Imagine a recommender system that recommends books given that you liked a specific book.
- But books that are widely liked, like Harry Potter, are liked by everyone no matter what the previous book is.
- This means a naive system will simply recommend Harry Potter to everyone.
- The solution is to estimate each book's baseline popularity and correct for it in the recommendations, so you surface books whose popularity is specifically conditional on the previous book, not books that are popular regardless of it.
- A similar kind of insight as TF-IDF in information retrieval.
- LLMs do the same kind of thing.
- If you ask an LLM to tell you something interesting, it will tell you the same interesting thing every time.
- Things that most humans would find interesting, but not necessarily what you would find interesting.
- It's kind of similar to the kinds of questions you can't ask with RAG.
- RAG doesn't allow you to ask questions like "what are the themes in this work"; retrieval selects passages by surface similarity, so it can only capture surface-level facts, not emergent qualities of the whole.
- LLMs pull you towards the average, so you need to inject specific angles you want to go into.
- If you give the same prompt as others, you'll get the same answers as others.
- The prompt quality directly drives the output quality.
- To get LLMs to give you interesting results, you have to ask them interesting questions.
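The popularity correction described above can be sketched with a lift score: rank co-liked books by how much more often they appear alongside the seed book than they appear overall. A minimal sketch (the book titles and reading histories are made-up toy data):

```python
from collections import Counter

# Toy data: each user's set of liked books. Harry Potter appears in
# every history, i.e. it is universally popular.
histories = [
    {"Harry Potter", "The Hobbit", "Eragon"},
    {"Harry Potter", "Dune", "Foundation"},
    {"Harry Potter", "The Hobbit", "Lord of the Rings"},
    {"Harry Potter", "Dune", "Neuromancer"},
]

def recommend(seed, histories, top_n=3):
    """Rank books co-liked with `seed` by lift = P(book | seed) / P(book)."""
    n_users = len(histories)
    baseline = Counter(b for h in histories for b in h)   # overall popularity
    with_seed = [h for h in histories if seed in h]
    co = Counter(b for h in with_seed for b in h if b != seed)
    scores = {
        # conditional popularity divided by baseline popularity
        b: (c / len(with_seed)) / (baseline[b] / n_users)
        for b, c in co.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

With a raw co-occurrence count, Harry Potter would top every recommendation list; after dividing by its baseline popularity its lift is 1.0 (no signal), so seed-specific books rank above it. This is the same normalization idea as the inverse-document-frequency term in TF-IDF.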