Just because LLMs can talk doesn't mean your app should have them talk.

· Bits and Bobs 9/30/24

A lot of UX patterns built on LLMs today imply that you're working with a human.

When there's an implied human, you have to reason through "what is its personality? Its goals? Its abilities? What is it thinking? Will it judge me for this question?"

But it's also possible to have a system that understands your plain-language thoughts and acts on them... without having a personality.

Spellcheck and GitHub Copilot are two examples.

It doesn't feel like you're working with a genie; it just instantly offers completions you can accept or ignore.

The question isn't "what is this agent thinking?" but simply "is this autocompletion useful or not?"

You can still use LLMs and their judgment to produce better suggestions: use the LLM in the back office, not the front office.
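
As a concrete sketch of that split, here's roughly what the back-office shape could look like in TypeScript. The `llmComplete` function is hypothetical, a stand-in for whatever model API you actually call; the point is that the app only ever surfaces a completion to accept or ignore.

```ts
// The back-office pattern: the LLM proposes, the user accepts or ignores.
// No persona, no conversation anywhere in the surface area.

type Suggestion = { text: string; confidence: number };

// Hypothetical stand-in for a real completion call (a hosted model,
// a local one, etc.). Canned data keeps the sketch self-contained.
async function llmComplete(prefix: string): Promise<Suggestion[]> {
  return [
    { text: prefix + " world", confidence: 0.85 },
    { text: prefix + " there", confidence: 0.4 },
  ];
}

// The app-facing surface: return the best completion, or null to stay
// silent rather than interrupt the user with a weak guess.
async function suggestCompletion(prefix: string): Promise<string | null> {
  const candidates = await llmComplete(prefix);
  const best = candidates
    .filter((c) => c.confidence > 0.7)
    .sort((a, b) => b.confidence - a.confidence)[0];
  return best ? best.text : null;
}

// Usage: render the result as ghost text; Tab accepts, typing dismisses.
suggestCompletion("hello").then((s) => console.log(s ?? "(no suggestion)"));
```

All the LLM's judgment lives behind `suggestCompletion`; the only question the UI ever asks the user is "useful or not?"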
