@alicedotjpog That is absolutely what this is and everyone is aware. They're just really mad at genAI.
mttaggart@infosec.exchange
Beiträge
-
It's 2026 now. -
It's 2026 now.RE: https://infosec.exchange/@mttaggart/113694884783855934
It's 2026 now. Boost if you're ready to destroy genAI entirely.
-
Problem: LLMs can't defend against prompt injection.What are we doing with our time on this earth
https://www.promptarmor.com/resources/claude-cowork-exfiltrates-files
https://www.varonis.com/blog/reprompt -
Problem: LLMs can't defend against prompt injection.@cR0w That's really where all the troubles began, isn't it
-
Problem: LLMs can't defend against prompt injection.Problem: LLMs can't defend against prompt injection.
Solution: A specialized filtering model that detects prompt injections.
Problem: That too is susceptible to bypass and prompt injection.
Solution: We reduce the set of acceptable instructions to a more predictable space and filter out anything that doesn't match.
Problem: If you over-specialize, the LLM won't understand the instructions.
Solution: We define a domain-specific language in the system prompt, with all allowable commands and parameters. Anything else is ignored.
Problem: We just reinvented the CLI.