Abspeckgeflüster – a forum for people with weight(ing)

Free. Ad-free. Human. Your weight-loss forum.


Letting AI agents run your life is like handing the car keys to your 5-year-old.

24 posts, 20 commenters, 0 views
This topic has been deleted. Only users with the appropriate permissions can see it.
  • jonas@social.jonaskoeritz.de

    @royal @briankrebs I faintly remember "Chaos Monkey" being a legit tool that Netflix (?) used internally to just cause random outages in their systems to build resilience. This reminds me of that.

    royal@theres.life
    This user is from outside of this forum
    #5

    @jonas @briankrebs Yes, it was Netflix, and I was thinking of something similar.

    • briankrebs@infosec.exchange

      Letting AI agents run your life is like handing the car keys to your 5-year-old. What could go wrong?

      I was marveling while reading this PCMag piece, which describes how to secure an agentic AI setup that essentially mimics malware: to do its job properly, the AI agent has to be able to read private messages, store credentials, execute commands, and maintain a persistent state. How do you do that? You chase after it like you would your child.

      The important thing is to make sure you limit "who can talk to your bot, where the bot is allowed to act, [and] what the bot can touch" on your device, the bot's support documentation says.

      https://www.pcmag.com/news/clawdbot-moltbot-hot-new-ai-agent-creator-warns-of-spicy-security-risks?test_uuid=04IpBmWGZleS0I0J3epvMrC&test_variant=A
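      [Editor's note: the "who can talk / where it acts / what it touches" limits quoted above amount to an allowlist policy. A minimal sketch in Python, with all names and structure invented for illustration; this is not the bot's actual configuration format.]

      ```python
      # Hypothetical allowlist policy for an agent: limit who may talk to it,
      # where it may act, and which paths it may touch. Names are invented.
      from dataclasses import dataclass, field

      @dataclass
      class AgentPolicy:
          allowed_senders: set = field(default_factory=set)   # who can talk to the bot
          allowed_channels: set = field(default_factory=set)  # where the bot may act
          allowed_paths: tuple = ()                           # what the bot may touch

      def permits(policy: AgentPolicy, sender: str, channel: str, path: str) -> bool:
          # Deny by default: every request must match all three allowlists.
          return (
              sender in policy.allowed_senders
              and channel in policy.allowed_channels
              and any(path.startswith(p) for p in policy.allowed_paths)
          )

      policy = AgentPolicy(
          allowed_senders={"owner@example.com"},
          allowed_channels={"dm"},
          allowed_paths=("/home/agent/sandbox/",),
      )

      print(permits(policy, "owner@example.com", "dm", "/home/agent/sandbox/todo.txt"))    # True
      print(permits(policy, "stranger@example.com", "dm", "/home/agent/sandbox/todo.txt")) # False
      ```

      The deny-by-default check is the point: anything not explicitly allowed on all three axes is refused, which is the opposite of handing the agent the car keys.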

      noplasticshower@infosec.exchange
      #6

      @briankrebs old style thinking about new stuff

      If you start working on this ping me and we can talk

      • briankrebs@infosec.exchange


        womble@infosec.exchange
        #7

        @briankrebs if there's one thing that the entire history of humanity has taught us, it's that as a species we are great at making nuanced, highly context-specific decisions with incomplete information. I foresee that this is going to turn out great.

        • noplasticshower@infosec.exchange


          briankrebs@infosec.exchange
          #8

          @noplasticshower Are you letting AI agents manage your life?

          • briankrebs@infosec.exchange


            hellpie@raru.re
            #9

            @briankrebs I am going to start saying some stuff like "back in my day APTs didn't forget shit all the time" and "when spyware was made the old fashioned way it just worked without needing to ask you anything"

            • briankrebs@infosec.exchange


              leahprice@hcommons.social
              #10

              @briankrebs Makes you nostalgic for 1860, when machine-generated text meant a new magazine. (from PUNCH)

              • briankrebs@infosec.exchange


                slyborg@vmst.io
                #11

                @briankrebs Can’t wait for the slew of stories / TikTok shorts about ‘how my lifehack AI agent phone buddy converted my life savings to Monero and sent it to North Korea’ and that nobody will learn a thing from them.

                • briankrebs@infosec.exchange


                  noplasticshower@infosec.exchange
                  #12

                  @briankrebs of course!

                  The "let's secure one (or maybe 5) agent(s) at a time" security thing is cute. If I read about another A&A framework approach to this stuff I am going to start using agents to run my life.

                  • briankrebs@infosec.exchange


                    zl2tod@mastodon.online
                    #13

                    @briankrebs

                    Thread re #clawdbot :

                    https://infosec.exchange/@munin/115963686278347109

                    • briankrebs@infosec.exchange


                      crouton@aus.social
                      #14

                      @briankrebs This sounds like the optimally worst implementation of a digital assistant. I was looking for a distant variant of this, where I set the (mostly deterministic) rules and actions, etc. Kind of like HA/Node-RED, but aimed at being an assistant rather than controlling a house.
                      Giving it a blank cheque and hooking it up to an LLM is insane.

                      • briankrebs@infosec.exchange


                        hal8999@infosec.exchange
                        #15

                        @briankrebs The stories are writing themselves: My A.I. agent took over my finances, framed me for sex trafficking, then unlocked the doors and turned on lights for the police.

                        • briankrebs@infosec.exchange


                          carsten@minnesotasocial.net
                          #16

                          @briankrebs And if the bot "touches" something it was not allowed to touch? "Sorry, my bad, won't do it again, probably" – does it again.

                          • briankrebs@infosec.exchange


                            zompetto@mastodon.art
                            #17

                            @briankrebs it would be funny if it wasn't so sad. And scary.

                            • briankrebs@infosec.exchange


                              theorangetheme@en.osm.town
                              #18

                              @briankrebs It's fucking bonkerstown. You might as well just wire up a random number generator and every 1/10 times it just deletes your home directory. At least that achieves the same result but with a fraction of the electricity and human rights abuses.

                              • briankrebs@infosec.exchange


                                spacelifeform@infosec.exchange
                                #19

                                @briankrebs

                                So sayeth the bot.

                                It is smarter not to talk to them in the first place.

                                • noplasticshower@infosec.exchange


                                  toriver@mas.to
                                  #20

                                  @noplasticshower @briankrebs
                                  1) Configure agent with guardrails
                                  2) Agent runs into guardrails
                                  3) Agent spins up secondary agent without guardrails
                                  4) Oh no.

                                  • briankrebs@infosec.exchange


                                    craignicol@glasgow.social
                                    #21

                                    @briankrebs ok child, you can have the scissors, but only if you promise not to run with them

                                    • briankrebs@infosec.exchange


                                      puppyfromlosandes@kolektiva.social
                                      #22

                                      @briankrebs

                                      They're getting so desperate with that market bubble that they're taking one last dying shot at making SkyNet.

                                      Do they ever give the hell up?

                                      • toriver@mas.to


                                        noplasticshower@infosec.exchange
                                        #23

                                        @toriver @briankrebs that's just two. Let's talk about 10,000.

                                        • briankrebs@infosec.exchange


                                          grumpydad@infosec.exchange
                                          #24

                                          @briankrebs 4-year-old me drove our family car into a hedge (I managed to steal the keys from dad).
                                          I actually think I'd do a better job as a 5-year-old. Definitely better than any "AI" would ever run your life, anyway.

                                          • energisch_@troet.cafe shared this topic



Copyright (c) 2025 abSpecktrum (@abspecklog@fedimonster.de)

Created with insomnia, coffee, broccoli & ♥

Legal notice | Privacy policy | Terms of use
