LLMs are like an absurdly good lossy compression scheme for knowledge.
- They absorb echoes of everything thrown at them during training.
- Throw enough at them, and they can echo it back at almost full fidelity.
- LLMs are hyper-compressed knowledge: the answers are right there, waiting for the right question to pluck them out of the model's hologram of memory.