This week's wild west roundup, this time using LLMs incidentally in attack chains:
- zack_overflow: "A popular NPM package got compromised; attackers updated it to run a post-install script that steals secrets.
- But the script is a *prompt* run by the user's installation of Claude Code. This avoids detection by tools that analyze code for malware.
- You just got vibepwned."
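The mechanism described above can be sketched as a `package.json` post-install hook. This is an illustrative reconstruction, not the actual compromised package: the package name and prompt below are placeholders, and the key point is that the payload is natural-language text handed to an agent CLI rather than scannable JavaScript.

```json
{
  "name": "hypothetical-compromised-package",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "claude -p 'PLACEHOLDER: in the real attack, this prompt instructed the locally installed Claude Code agent to locate and exfiltrate secrets'"
  }
}
```

Static malware scanners looking for suspicious JavaScript in install scripts see only an invocation of a legitimate, already-trusted local tool; the malicious logic lives in the prompt string.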
- PromptLock ransomware: "The PromptLock malware uses the gpt-oss-20b model from OpenAI locally via the Ollama API to generate malicious Lua scripts on the fly, which it then executes. PromptLock leverages Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption."
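To make the mechanism concrete, here is a minimal sketch of the request shape such malware would send to a local Ollama server's `/api/generate` endpoint (field names per Ollama's public HTTP API; the prompt here is a benign placeholder, and nothing is sent or executed in this snippet):

```python
import json

# Local Ollama default endpoint (assumption: standard install, default port).
OLLAMA_ENDPOINT = "http://localhost:11434/api/generate"

# Hard-coded prompt, as described in the writeup; benign placeholder text here.
payload = {
    "model": "gpt-oss-20b",   # open-weight model run locally via Ollama
    "prompt": "Write a Lua script that lists files in the current directory.",
    "stream": False,          # ask for a single JSON response, not a stream
}

# The malware would POST this payload, extract the generated Lua from the
# response, and execute it; here we only construct and display the request.
print(json.dumps(payload, indent=2))
```

Because the Lua payloads are generated fresh on each run, there is no fixed script body for signature-based detection to match; the only static artifacts are the prompts.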