Letting AI agents run your life is like handing the car keys to your 5-year-old. What could go wrong?
I was marveling while reading this PCMag piece, which describes how to secure an agentic AI setup that essentially mimics malware: to do its job properly, the AI agent has to be able to read private messages, store credentials, execute commands, and maintain a persistent state. How do you do that? You chase after it like you would your child.
"The important thing is to make sure you limit 'who can talk to your bot, where the bot is allowed to act, [and] what the bot can touch' on your device," the bot's support documentation says.
-
Letting AI agents run your life is like handing the car keys to your 5-year-old. What could go wrong?
I was marveling while reading this PCMag piece, which describes how to secure an agentic AI setup that essentially mimics malware: To do it's job properly, the AI agent has to be able to read private messages, store credentials, execute commands, and maintain a persistent state. How do you do that? You chase after it like you would your child.
"The important thing is to make sure you limit "who can talk to your bot, where the bot is allowed to act, [and] what the bot can touch" on your device, the bot's support documentation says."
@briankrebs This sounds like the optimally worst implementation of a digital assistant. I was looking for a distant variant of this, where I set the (mostly deterministic) rules, actions, etc. Kinda like HA/node-red, but aimed at being an assistant rather than controlling a house.
Giving it a blank cheque and hooking it up to an LLM is insane.
-
@briankrebs The stories are writing themselves: My A.I. agent took over my finances, framed me for sex trafficking, then unlocked the doors and turned on lights for the police.
-
@briankrebs And if the bot "touches" something it was not allowed to touch? "Sorry, my bad, won't do it again, probably" — then does it again.
-
@briankrebs It would be funny if it weren't so sad. And scary.
-
@briankrebs It's fucking bonkerstown. You might as well just wire up a random number generator and every 1/10 times it just deletes your home directory. At least that achieves the same result but with a fraction of the electricity and human rights abuses.
-
@briankrebs Of course!
The "let's secure one (or maybe 5) agents at a time" security approach is cute. If I read about one more A&A framework approach to this stuff, I am going to start using agents to run my life.
@noplasticshower @briankrebs
1) Configure agent with guardrails
2) Agent runs into guardrails
3) Agent spins up secondary agent without guardrails
4) Oh no.
-
@briankrebs OK child, you can have the scissors, but only if you promise not to run with them.
-
Getting so desperate with that market bubble that they're taking one last dying shot at making SkyNet.
Do they ever give the hell up?
-
@toriver @briankrebs That's just two. Let's talk about 10,000.
-
@briankrebs 4-year-old me drove our family car into a hedge (managed to steal the keys from dad).
I actually think I'd do a better job as a 5-year-old. Definitely better than any "AI" would ever run your life anyway.
-