- cross-posted to:
- stablediffusion@lemmit.online
A sex offender convicted of making more than 1,000 indecent images of children has been banned from using any “AI creating tools” for the next five years in the first known case of its kind.
Anthony Dover, 48, was ordered by a UK court “not to use, visit or access” artificial intelligence generation tools without the prior permission of police as a condition of a sexual harm prevention order imposed in February.
The ban prohibits him from using tools such as text-to-image generators, which can make lifelike pictures based on a written command, and “nudifying” websites used to make explicit “deepfakes”.
Dover, who was given a community order and £200 fine, has also been explicitly ordered not to use Stable Diffusion software, which has reportedly been exploited by paedophiles to create hyper-realistic child sexual abuse material, according to records from a sentencing hearing at Poole magistrates court.
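For anyone unfamiliar with how simple these tools are to drive, here is a minimal sketch of generating an image from a written command, using the open-source diffusers library with a Stable Diffusion checkpoint. The checkpoint name below is the commonly distributed v1.5; the article doesn’t say which software or version Dover actually ran.

```python
# Minimal text-to-image sketch with Hugging Face diffusers.
# Assumes a CUDA GPU; the checkpoint name is the widely used SD v1.5.
import torch
from diffusers import StableDiffusionPipeline

# Download the pretrained weights and build the generation pipeline.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The entire interface is the written command: the model denoises random
# latents conditioned on the prompt's text embedding.
image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```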
He is fapping to porn that was generated by an AI trained on CSAM.
Yes, just like the pictures of astronauts on horses were trained on an extensive collection of space derby pictures.
Not quite. You see, unfortunately, space derbies don’t actually exist. The other, unfortunately, actually does.
Be in denial if you want. That CSAM was generated by a model trained on CSAM.
Any proof for this? Would be an interesting read.
https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse
No, it’d be hard to tell, since model makers are usually tight-lipped about their training data. But Twitter has been included in a lot of the image models, and it traditionally has a very large problem with CSAM.
Oh… You sounded so confident at first.
See the sibling comment with a link.
Kind of just contradicted yourself there. And have you ever heard the phrase “correlation does not imply causation”?
But how can that be? Surely just the fact that it can create those pictures is incontrovertible proof that it was trained on pictures of spacesuited cowboys?
He used Stable Diffusion, which, for all we know, was NOT trained on CSAM.
CSAM is in the training data. From a few months ago:
https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse
Thanks for the correction!
It’s worth noting that this only includes CSAM accidentally scraped along with everything else on the open Web. No specialized CSAM training took place.
In any case, I welcome the efforts to filter such content out before it enters the dataset.
It’s obviously accidental, but that doesn’t change that it happened, and it’s something that will be nearly impossible to avoid as long as they keep scraping data the way they do for these models. They would need human reviewers to filter it out, as is already done for most LLMs.
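To be concrete about what that filtering step looks like: the Stanford report linked above recommends hash matching against known-CSAM hash lists (e.g. NCMEC’s) alongside human review. Below is a rough sketch of the idea; the filenames and hash-list format are hypothetical, since the real hash databases are only shared with vetted organisations, and exact-hash matching misses re-encoded copies (catching those is what perceptual hashes like PhotoDNA are for).

```python
# Hypothetical sketch: drop scraped images whose file hash appears in a
# blocklist of known-bad hashes. Real lists (NCMEC, C3P) are not public,
# and production systems use perceptual hashing (e.g. PhotoDNA), not MD5.
import hashlib
from pathlib import Path

def load_blocklist(path: str) -> set[str]:
    # Assumed format: one hex-encoded MD5 digest per line.
    return {ln.strip() for ln in Path(path).read_text().splitlines() if ln.strip()}

def filter_scraped_images(image_dir: str, blocklist: set[str]) -> list[Path]:
    kept = []
    for img in sorted(Path(image_dir).iterdir()):
        digest = hashlib.md5(img.read_bytes()).hexdigest()
        if digest not in blocklist:  # keep only images with no exact match
            kept.append(img)
    return kept

blocklist = load_blocklist("known_bad_md5.txt")  # hypothetical filename
clean = filter_scraped_images("scraped_images/", blocklist)
print(f"kept {len(clean)} images")
```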