@mina Can't comment on that, except that it is the result of copyright infringement, like everything LLM-generated.
I mean: I hate all that LLM shit with my whole heart. I hate child abusers even a thousand times more, but I don't think there should be something like a "thought crime".
-
@mina i’m not much into the topic, so no deep knowledge here. but i did read about this project
https://sexualmedizin.charite.de/forschung/kein_taeter_werden/
and i think they have some more material there.
from a very personal standpoint i find this convincing: if you really want to do something, anything, a simulation is never enough, but instead feeds the desire to do the real thing. this is not scientific ofc

OK. There is a lot of further information there that I can't read right now.
I'll do that on the weekend or next week.
The introductory text definitely does not talk about purely simulated depictions (such as texts, comics or dolls), but about real abuse and real depictions of abuse.
-
@mina regarding catholic priests: the number of perpetrators in that demographic is embarrassing and really bad at the same time, for sure. but somehow i think becoming a priest might also be understood as a misdirected attempt to not become a perpetrator. finding peace in god and all that weird stuff.
@mina on the other hand i’m quite convinced that churches draw in men (sic!) that get off on power over others, children specifically or just people in general. so this line of questioning might better start with “priests”, not with “pedophiles”.
-
@mina No, but there is incitement to hatred and many fantasies written in the manosphere would fall under it if anyone cared to look into them.
-
Publication is a different thing from personal "use".
I don't think hate speech should be allowed.
-
@mina If it was only for personal use, how did you find out about it?
-
People (journalists) wrote about what they managed to make their chatbots say.
-
This might be a controversial post.
Now that everybody is getting riled up about Elon Musk's #Grok producing #sexually charged text (or images?) involving minors, I wonder if there isn't a certain hypocrisy in that discussion.
Are we again at the "video games cause mass shootings" point?
As long as no real child sex abuse material was used for training and no real person's identity (e.g. face) is used, I don't see the harm.
Actually,
1/2
How will AI know how to create CSA representations if it doesn't have real CSA material to imitate?
-
@mina I know there are more concerns than grownups getting generated BDSM fiction involving minors and not showing it to anyone else. Children's LLM-powered toys are already teaching children about kink. But those journalists might be arguing just against that, IDK, haven't seen it. Though, what was on their minds to prompt LLMs to generate that in the first place?

-
And to make my point clearer, if that AI was trained on non-sexual child nudity content and now is asked to use it for creating sexualized images, I would argue that this is an act of sexualizing the children in the training material, even if the result doesn't look like them.
-
I don't know the training data, I don't know the algorithms, and I am not inside the twisted mind of people trying to evoke the creation of simulated abuse material via prompts.
What I find suspicious is the sudden moral panic around one aspect of a technology that is ultimately just designed to produce any result people want to see.
-
@mina @rhelune Trust me when I say — it is a deliberate wedge tactic to “force” us to choose between two rights (not true: we can keep both). But they want us to give up one of these in the name of the other, and are positioning the technology to make (what seems like) a tough fork in the road for our future society.
Police work and arrests didn’t stop the day encryption became universal, and this is no different.
-
I didn't see the moral panic. I have nothing against artificially generated images that satisfy people's desires, even if acting on these desires would be very harmful in the real world. But one recurring issue with generative AI is that it is actually plagiarizing other work. In this case it might actually be pictures of real children, and I find that morally problematic. I wouldn't care if it were completely artificial.
-
Needless to say, I find the usage of real children's images for training purposes not only problematic, but criminal.
-
I would assume at the very least that Grok is trained on images from Twitter and from the vast internet. I don't think its training data had real children's images filtered out. This is why I would assume that CSA representations generated by Grok would constitute sexualization of real children and indeed be criminal.
-
as a parent, I feel far more comfortable if wannabe child abusers¹ satisfy their desires in private with a chatbot or a doll in whatever shape than if they were trying to abuse real children, be it online or offline.
2/2
¹they call themselves "paedophiles", but it's not the right word, as there is nothing loving and caring in what they want.
@mina
> As long as no real child sex abuse material was used for training and no real person's identity (e.g. face) is used, I don't see the harm

Yes, this is a controversial take, at least in some cultures. There is a strong parallel with the lolicon debate:
https://ansuz.sooke.bc.ca/entry/335
In Japan the social consensus seems to agree with you; no harm, no foul. Whereas in the US there tends to be a presumption that simulation is a gateway to realization.
-
This is quite a rabbit hole.
Thank you for sharing the blog post.
I haven't fully made up my mind on the whole issue yet, but I have the strong impression that most people treat the subject without much concern for honest intellectual analysis, nuance, and empirical data.