I think it would not be great if most LLM usage in the US ran on an open-source Chinese model.
- First, Anthropic's research shows it's remarkably easy to poison a model of nearly any size: a small, roughly fixed number of deliberately malicious training documents can install a backdoor, regardless of how large the model is.
- Second, if a model that everyone uses carries a subtle but consistent bias, that bias, aggregated across millions of interactions, could produce significant society-scale impacts.
- The Ouija board effect again: a consistent bias buried in a noisy signal, aggregated at scale, produces large emergent macro effects.
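The aggregation point above can be made concrete with a toy simulation (all numbers here are illustrative, not measured properties of any real model): each individual response looks like a coin flip, but a tiny consistent bias dominates once you average over enough of them.

```python
import random

random.seed(0)

BIAS = 0.01  # hypothetical per-response tilt; any single output looks random

def noisy_response():
    # One interaction: +1 or -1, almost a fair coin, with a 1% tilt.
    return 1 if random.random() < 0.5 + BIAS else -1

# A single response tells you essentially nothing about the bias...
one = noisy_response()

# ...but across a million interactions, the noise averages out
# and the mean converges toward 2 * BIAS = 0.02, clearly nonzero.
n = 1_000_000
mean = sum(noisy_response() for _ in range(n)) / n
print(f"mean over {n:,} responses: {mean:.4f}")
```

The standard error of the mean here is about 0.001, so the 0.02 tilt is unmistakable at scale even though it is invisible in any individual output.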