The most important thing in driving LLMs is curating good context.

· Bits and Bobs 9/9/24

With the right context, LLMs are very good at producing high-quality output.

The hard part is no longer getting the magical thing to work on your data; it's having all the relevant data in one place.

Today's chatbot interfaces to LLMs come with an implied context: the previous turns of the conversation.

This allows you to accumulate more context as you converse with it.

But this is a limited form of context curation, because it is implicit and, more importantly, append-only.

Once you find the right context in a conversation, you want to be able to prune the unrelated parts: the meandering dead ends that might confuse the LLM later.

The more tightly tuned the context, the better the answer.
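A minimal sketch of what this pruning might look like in practice, assuming a conversation stored as a list of messages where the user can flag dead-end turns; all names here (`Message`, `curated_context`, the `dead_end` flag) are illustrative, not a real API:

```python
from dataclasses import dataclass


@dataclass
class Message:
    role: str               # "user" or "assistant"
    text: str
    dead_end: bool = False  # flagged by the user while gardening


def curated_context(history: list[Message]) -> list[dict]:
    """Drop pruned turns so the model only sees the tightly tuned context."""
    return [
        {"role": m.role, "content": m.text}
        for m in history
        if not m.dead_end
    ]


history = [
    Message("user", "Summarize our Q3 sales data."),
    Message("assistant", "Here is a summary..."),
    Message("user", "Actually, what's the weather in Paris?", dead_end=True),
    Message("assistant", "It is sunny in Paris.", dead_end=True),
    Message("user", "Now break the summary down by region."),
]

# Only the three on-topic turns survive; the weather detour is gone.
print(len(curated_context(history)))
```

The point of the sketch is that pruning is an edit to the history itself, not just a different prompt: future turns never see the dead ends again.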

You can think of this curation as gardening of contexts.

How can you design a product so that incremental gardening feels natural, easy, and also productive?

In such a system, the user would be doing creative work, steering the LLM, but it would feel like a byproduct of gardening work that already made sense to do on their data.
