This week's round up of "we're in the wild west era with LLMs":
- This week's round up of "we're in the wild west era with LLMs":
- A postmortem for a vibecoded tool called DrawAFish that had abuse problems.
- A Cursor exploit that allows arbitrary remote code execution.
- Allows exfiltration of sensitive Google Drive docs a user added to ChatGPT via the Connectors, with no interaction from the user.
- The reason we aren't seeing more about prompt injection yet is not because it won't be a problem
- It's because it's the first inning of having a widely deployed attack surface in ChatGPT.
- Hackers demonstrated how a poisoned calendar invite could allow them to take control of Google Home-connected physical devices.
- Futurism sums it up well: "It's Staggeringly Easy for Hackers to Trick ChatGPT Into Leaking Your Most Personal Data"