LLMs "talk" to themselves internally in embeddings, and only reduce to words when they need to talk to humans.

· Bits and Bobs 10/7/24

(This is a highly stylized metaphor of the actual workings of LLMs.)

When an LLM needs to communicate with another entity, it reduces to words, which can be understood by a human or another LLM.

But imagine two systems speaking to each other, with the same LLM on both sides.

If they can discover that, they can talk in a more efficient encoding: in the embedding, directly, skipping the lossy round trip through words.

This requires both the sender and the receiver to use the same embedding space.
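A minimal sketch of what that could look like, assuming a shared encoder on both ends. The model name here is an arbitrary stand-in, and the receiver's candidate-list decoding is a toy simplification; real vector-to-text decoding would need more than a nearest-neighbor lookup:

```python
# Toy sketch of "talking in embeddings": both sides load the *same*
# encoder, so a raw vector sent over the wire means the same thing
# to each of them.
import numpy as np
from sentence_transformers import SentenceTransformer

# Assumed shared model; any encoder would do, as long as both ends agree.
shared_encoder = SentenceTransformer("all-MiniLM-L6-v2")

def send(message: str) -> np.ndarray:
    # Sender: skip the reduction to words and ship the embedding itself.
    return shared_encoder.encode(message)

def receive(vector: np.ndarray, candidates: list[str]) -> str:
    # Receiver: ground the vector by cosine-similarity match against
    # hypotheses it can encode in the same space (an illustrative stand-in
    # for real decoding).
    cand_vecs = shared_encoder.encode(candidates)
    sims = cand_vecs @ vector / (
        np.linalg.norm(cand_vecs, axis=1) * np.linalg.norm(vector)
    )
    return candidates[int(np.argmax(sims))]

payload = send("meet at the usual place at noon")
print(receive(payload, ["meet at noon", "cancel the meeting", "send the report"]))
```

The point of the toy: the payload on the wire is a raw vector, legible to anyone holding the same encoder and opaque to everyone else.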

This creates a natural network effect for the embedding space: all else equal, people will tend to use the one that others use, to have a better chance of being understood by a partner.

But now an observer will feel left in the dark.

"What are the two of you talking about? … You aren't plotting anything, are you? … Hello!?"
