· Bits and Bobs 4/14/25
  • Personal context would improve the behavior of LLM-based systems, but is fundamentally risky.
    • There have been attempts, like RFC 9396, to describe how more fine-grained information could be permitted.
      • For example, you could express things like "only expose information that matches this regular expression, and is no more than 7 days old."
      • But such limitations are hard to administer, and still too binary: access is either granted or it isn't.
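As a rough sketch of what such a fine-grained grant might look like in practice, here is a toy filter implementing the "matches this regular expression, and is no more than 7 days old" rule. The names (`PolicyRule`, `Record`, `filter_records`) are illustrative inventions, not part of RFC 9396 or any real library:

```python
# Toy sketch of an RFC 9396-style fine-grained grant: release only records
# whose text matches a pattern AND that are no more than 7 days old.
# All class/function names here are hypothetical, for illustration only.
import re
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Record:
    text: str
    created_at: datetime

@dataclass
class PolicyRule:
    pattern: str          # regular expression the record must match
    max_age: timedelta    # maximum permitted age of the record

    def permits(self, record: Record, now: datetime) -> bool:
        fresh = (now - record.created_at) <= self.max_age
        return fresh and re.search(self.pattern, record.text) is not None

def filter_records(records, rule, now):
    """Return only the records the rule allows to be exposed."""
    return [r for r in records if rule.permits(r, now)]

now = datetime.now(timezone.utc)
rule = PolicyRule(pattern=r"insurance", max_age=timedelta(days=7))
records = [
    Record("insurance claim #123", now - timedelta(days=2)),   # fresh, matches
    Record("insurance claim #007", now - timedelta(days=30)),  # matches, too old
    Record("grocery list", now - timedelta(days=1)),           # fresh, no match
]
allowed = filter_records(records, rule, now)
```

Even this tiny example hints at the administration problem: every new data source needs its own pattern and freshness window, and the answer is still all-or-nothing per record.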
    • For example, I'd be OK with a system that generates an insurance quote by looking at a wide swathe of my information, as long as the only thing the insurance company could ever learn directly is whether I was approved at the end.
      • The insurance company would also want confidence that their algorithm was faithfully executed on real data, even if they can't see the data.
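The shape of that arrangement can be sketched as a sealed computation: the quoting algorithm runs over the full personal data inside a trusted boundary, and only the approval bit leaves, along with an attestation over which algorithm ran. Everything below is a hypothetical stand-in; a real deployment would use remote attestation (e.g., a TEE) or a zero-knowledge proof rather than a shared-key HMAC:

```python
# Toy sketch of "reveal only the decision": raw data stays inside run_sealed,
# and the insurer receives only the approval bit plus a MAC binding it to a
# hash of the algorithm that produced it. The shared ATTESTATION_KEY and the
# HMAC scheme are illustrative stand-ins for real attestation machinery.
import hashlib
import hmac
import json

ATTESTATION_KEY = b"key-held-by-the-trusted-runtime"  # assumption, not real

def quote_algorithm(data: dict) -> bool:
    # Insurer-supplied rule; the raw data never leaves run_sealed.
    return data["claims_last_year"] == 0 and data["age"] >= 25

def run_sealed(data: dict) -> dict:
    decision = quote_algorithm(data)
    # Hash the algorithm's bytecode so the insurer can check which rule ran.
    algo_hash = hashlib.sha256(quote_algorithm.__code__.co_code).hexdigest()
    payload = json.dumps({"approved": decision, "algo": algo_hash}).encode()
    tag = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    # Only the decision, the algorithm hash, and the attestation escape.
    return {"approved": decision, "algo": algo_hash, "attestation": tag}

result = run_sealed({"age": 30, "claims_last_year": 0, "salary": 90000})
```

The point of the attestation is exactly the confidence problem above: the insurer can verify that its agreed-upon algorithm produced the bit, without ever seeing the salary or anything else in the input.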