I was talking with a few other advanced LLM users.
We all agreed that the hardest part of working with LLMs today, as an advanced user, is copy/pasting data in and out of the model.
You need to copy in all of the context the LLM needs to make a good decision, and then copy back out the answer (or a subset of it) to actually do the thing you want to do.
For example, it's really tedious to constantly splice changed snippets back into your codebase after an LLM helps write some code.
Some models have desktop apps that you can allow to scrape your whole screen constantly… but that seems like way too much data to give them.
What if your spouse texts you while you're working on something… that's not relevant, and the LLM should mind its own damn business.
Today there are two options: the hermetically sealed world of a browser tab with its same-origin straitjacket, or giving the model access to absolutely everything you see.
If there were a system to keep track of data flows more granularly you could have a very different system.
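As a rough illustration of what "more granular" could mean, here's a minimal sketch: tag every piece of on-screen data with a provenance label, and let a policy decide which items may flow to the model. All of the names here (`Datum`, `ALLOWED_SOURCES`, `context_for_llm`) are hypothetical — this is just one way such a system might be shaped, not a real implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch: each piece of data carries a provenance label,
# and a simple policy filters what reaches the LLM.

@dataclass
class Datum:
    source: str   # e.g. "editor", "terminal", "messages"
    content: str

# Assumed policy: only work-related sources may flow to the model.
ALLOWED_SOURCES = {"editor", "terminal"}

def context_for_llm(data: list[Datum]) -> list[str]:
    """Return only content whose source the policy allows."""
    return [d.content for d in data if d.source in ALLOWED_SOURCES]

screen = [
    Datum("editor", "def handler(req): ..."),
    Datum("messages", "hey, dinner at 7?"),  # a spouse's text stays private
    Datum("terminal", "$ pytest -q"),
]

print(context_for_llm(screen))
```

The point isn't this particular filter — it's that once data flows are labeled at the source, "share my work, not my life" becomes a policy you can actually write down instead of an all-or-nothing screen grab.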