Abspeckgeflüster – Forum für Menschen mit Gewicht(ung)

Free. Ad-free. Human. Your weight-loss forum.


This might be a controversial post.

Tags: groksex · 28 posts, 6 commenters, 3 views
This topic has been deleted. Only users with the appropriate privileges can see it.
In reply to mina@berlin.social:

> @rhelune
>
> Publication is a different thing from personal "use".
>
> I don't think hate speech should be allowed.

rhelune@todon.eu (#17)

@mina If it was only for personal use, how did you find out about it?


mina@berlin.social (#18)

@rhelune

People (journalists) wrote about what they managed to make their chatbots say.

In reply to mina@berlin.social (original post, 1/2):

> This might be a controversial post.
>
> Now that everybody is getting riled up about Elon Musk's #Grok producing #sexually charged text (or images?) involving minors, I wonder if there isn't a certain hypocrisy in that discussion.
>
> Are we again at the "video games cause mass shootings" point?
>
> As long as no real child sex abuse material was used for training and no real person's identity (e.g. face) is used, I don't see the harm.
>
> Actually,

lulu@hachyderm.io (#19)

@mina

How will AI know how to create CSA representation if it doesn't have real CSA material to imitate?


rhelune@todon.eu (#20)

@mina I know there are more concerns than grownups getting generated BDSM fiction involving minors and not showing it to anyone else. Children's LLM-powered toys are already teaching children about kink. But those journalists might be arguing just against that, IDK, haven't seen it. Though, what was on their minds when prompting LLMs to generate that in the first place 😬


lulu@hachyderm.io (#21)

@mina

And to make my point clearer: if that AI was trained on non-sexual child nudity content and is now asked to use it for creating sexualized images, I would argue that this is an act of sexualizing the children in the training material, even if the result doesn't look like them.


mina@berlin.social (#22)

@lulu

I don't know the training data, I don't know the algorithms, and I am not inside the twisted mind of people trying to evoke the creation of simulated abuse material via prompts.

What I find suspicious is the sudden moral panic around one aspect of a technology that is ultimately just designed to produce any result people want to see.

In reply to mina@berlin.social:

> @rhelune
>
> I mean: I hate all that LLM shit with my whole heart. I hate child abusers even a thousand times more, but I don't think there should be something like a "thought crime".

whophd@mastodon.social (#23)

@mina @rhelune Trust me when I say — it is a deliberate wedge tactic to “force” us to choose between two rights (not true: we can keep both). But they want us to give up one of these in the name of the other, and are positioning the technology to make (what seems like) a tough fork in the road for our future society.

Policework and arrests didn’t stop the day encryption became universal, and this is no different.


lulu@hachyderm.io (#24)

@mina

I didn't see the moral panic. I have nothing against artificially generated images that satisfy people's desires, even if these desires would be very important in the real world. But one recurring issue with generative AI is that it is actually plagiarizing other work. In this case it might actually be pictures of real children, and I find that morally problematic. I wouldn't care if it were completely artificial.


mina@berlin.social (#25)

@lulu

Needless to say, I find the usage of real children's images for training purposes not only problematic, but criminal.


lulu@hachyderm.io (#26)

@mina

I would assume, at the very least, that Grok is trained on images from Twitter and from the vast internet. I don't think its training data had real children's images filtered out. This is why I would assume that CSA representations generated by Grok would constitute sexualization of real children and indeed be criminal.

In reply to mina@berlin.social (original post, 2/2):

> As a parent, I feel far more comfortable if wannabe child abusers¹ satisfy their desires in private with a chatbot or a doll in whatever shape than if they were trying to abuse real children, be it online or offline.
>
> ¹ They call themselves "paedophiles", but it's not the right word, as there is nothing loving and caring in what they want.

strypey@mastodon.nzoss.nz (#27)

@mina
> As long as no real child sex abuse material was used for training and no real person's identity (e.g. face) is used, I don't see the harm

Yes, this is a controversial take, at least in some cultures. There is a strong parallel with the lolicon debate:

https://ansuz.sooke.bc.ca/entry/335

In Japan the social consensus seems to agree with you; no harm, no foul. Whereas in the US there tends to be a presumption that simulation is a gateway to realization.


mina@berlin.social (#28)

@strypey

This is quite a rabbit hole.

Thank you for sharing the blog post.

I haven't fully made up my mind on the whole issue yet, but I have the strong impression that most people treat the subject without much concern for honest intellectual analysis, nuance, and empirical data.
