This might be a controversial post.
Now that everybody is getting riled up about Elon Musk's #Grok producing #sexually charged text (or images?) involving minors, I wonder if there isn't a certain hypocrisy in that discussion.
Are we again at the "video games cause mass shootings" point?
As long as no real child sex abuse material was used for training and no real person's identity (e.g. face) is used, I don't see the harm.
Actually,
1/2
-
How would an AI know how to create CSA representations if it has no real CSA material to imitate?
-
And to make my point clearer: if that AI was trained on non-sexual child nudity and is now asked to use it to create sexualized images, I would argue that this act sexualizes the children in the training material, even if the result doesn't look like them.
-
I don't know the training data, I don't know the algorithms, and I am not inside the twisted minds of people trying to coax the creation of simulated abuse material out of prompts.
What I find suspicious is the sudden moral panic around one aspect of a technology that is ultimately designed to produce whatever result people want to see.
-
I didn't see it as a moral panic. I have nothing against artificially generated images that satisfy people's desires, even if acting on those desires in the real world would be gravely wrong. But one recurring issue with generative AI is that it effectively plagiarizes other work. In this case, the source might actually be pictures of real children, and I find that morally problematic. I wouldn't care if it were completely artificial.
-
Needless to say, I find the use of real children's images for training purposes not only problematic but criminal.