- I've seen a lot of approaches that mitigate the danger of malicious skills.
- Or that mitigate the danger of naive LLMs confusing themselves.
- But still nothing on the market that credibly mitigates prompt injection.