He allegedly used Stable Diffusion, a text-to-image generative AI model, to create “thousands of realistic images of prepubescent minors,” prosecutors said.
I would love to see research data pointing either way on #1, although gathering it ethically would be incredibly difficult, verging on impossible. As for #2, people have extracted originals or near-originals of training inputs from these models. AI-generated material, i.e. output from these plagiarism machines, therefore runs the risk of effectively revictimizing people who were already abused to produce those inputs.
It’s an ugly situation all around, and unfortunately I don’t know that much can be done about it beyond not demonizing people who have such drives but have not offended, so that seeking therapy for the condition doesn’t ruin them. Making people damned if they do and damned if they don’t seems to pretty reliably produce worse outcomes.