Abspeckgeflüster – Forum für Menschen mit Gewicht(ung)

Free. Ad-free. Human. Your weight-loss forum.


Letting AI agents run your life is like handing the car keys to your 5-year-old.

24 posts · 20 commenters · 0 views
  • briankrebs@infosec.exchange

    Letting AI agents run your life is like handing the car keys to your 5-year-old. What could go wrong?

    I was marveling while reading this PCMag piece, which describes how to secure an agentic AI setup that essentially mimics malware: to do its job properly, the AI agent has to be able to read private messages, store credentials, execute commands, and maintain a persistent state. How do you do that? You chase after it like you would your child.

    The important thing is to make sure you limit "who can talk to your bot, where the bot is allowed to act, [and] what the bot can touch" on your device, the bot's support documentation says.

    https://www.pcmag.com/news/clawdbot-moltbot-hot-new-ai-agent-creator-warns-of-spicy-security-risks?test_uuid=04IpBmWGZleS0I0J3epvMrC&test_variant=A
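    The three limits the documentation describes can be sketched as a deny-by-default allowlist check. Everything here is illustrative, not taken from the actual bot's configuration:

    ```python
    # Hypothetical sketch of the three limits the documentation describes:
    # who can talk to the bot, where it may act, and what it can touch.
    # All names and values are illustrative, not from any real agent.

    ALLOWED_SENDERS = {"owner@example.social"}       # who can talk to the bot
    ALLOWED_ACTIONS = {"read_inbox", "draft_reply"}  # where the bot may act
    ALLOWED_PATHS = {"/home/agent/workspace"}        # what the bot can touch

    def is_permitted(sender: str, action: str, path: str) -> bool:
        """Deny by default; permit only requests inside all three allowlists."""
        return (
            sender in ALLOWED_SENDERS
            and action in ALLOWED_ACTIONS
            and any(path.startswith(p) for p in ALLOWED_PATHS)
        )

    # A request from the owner, doing an allowed action in the workspace, passes;
    # anything outside any one of the three lists is refused.
    print(is_permitted("owner@example.social", "read_inbox", "/home/agent/workspace/mail"))  # True
    print(is_permitted("stranger@evil.example", "run_shell", "/etc/passwd"))                 # False
    ```

    The point of the deny-by-default shape is that forgetting to list something fails closed rather than open, which is the posture the documentation is gesturing at.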

    craignicol@glasgow.social
    #21

    @briankrebs ok child, you can have the scissors, but only if you promise not to run with them


    puppyfromlosandes@kolektiva.social
    #22

      @briankrebs

      They're getting so desperate with that market bubble that they're giving a last dying shot at making SkyNet.

      Do they ever give the hell up?

      • toriver@mas.to

        @noplasticshower @briankrebs
        1) Configure agent with guardrails
        2) Agent runs into guardrails
        3) Agent spins up secondary agent without guardrails
        4) Oh no.

        noplasticshower@infosec.exchange
        #23

        @toriver @briankrebs that's just two. Let's talk about 10,000


        grumpydad@infosec.exchange
        #24

        @briankrebs 4-year-old me drove our family car into a hedge (managed to steal the keys from dad).
        I actually think I'd do a better job as a 5-year-old. Definitely better than any "AI" would ever run your life, anyway.

        • energisch_@troet.cafe shared this topic
          Copyright (c) 2025 abSpecktrum (@abspecklog@fedimonster.de)

Made with insomnia, coffee, broccoli & ♥

Legal notice | Privacy policy | Terms of use
