A sex offender convicted of making more than 1,000 indecent images of children has been banned from using any “AI creating tools” for the next five years in the first known case of its kind.

Anthony Dover, 48, was ordered by a UK court “not to use, visit or access” artificial intelligence generation tools without the prior permission of police as a condition of a sexual harm prevention order imposed in February.

The ban prohibits him from using tools such as text-to-image generators, which can make lifelike pictures based on a written command, and “nudifying” websites used to make explicit “deepfakes”.

Dover, who was given a community order and a £200 fine, has also been explicitly ordered not to use Stable Diffusion software, which has reportedly been exploited by paedophiles to create hyper-realistic child sexual abuse material, according to records from a sentencing hearing at Poole magistrates court.

  • @Allero

    The marker could be baked into the pixels, or, even better, images could be sent for identification through a system similar to what Apple uses to detect CSAM, but issuing an “alright” ID instead (and kept solely in the police’s hands, not running on-device). Both approaches are sketched after this thread.

    • Bob Robertson IX

      But even then, if every pixel gets marked as ‘created by AI’, it would still be trivial to take real CSAM and run it through an image-to-image generator with the denoising strength turned down to 0.05, and suddenly you have real CSAM that has been marked as ‘legal’ because it is technically AI-generated (see the watermark sketch after this thread).

      Also, keep in mind that there are several open source projects out there where anyone who knows what they are doing could just strip out any protections that might be put in place.

      • @Allero

        An Apple-like ID system solves the latter by technical means: the matching runs server-side, so there are no client-side protections to strip out.

        As for image-to-image, feeding the model recognised CSAM should be blocked to begin with.
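
To make the fragility point concrete, here is a minimal sketch of a mark “baked in pixels” via least-significant bits, assuming Pillow is installed; the file name, the tag value and the `embed_tag`/`read_tag` helpers are all hypothetical, not any deployed scheme. A mild blur stands in for an image-to-image pass at very low denoising strength: the content is nearly unchanged, but every pixel value shifts, so the mark is silently destroyed.

```python
from PIL import Image, ImageFilter

TAG = 0b10101010  # hypothetical 8-bit "created by AI" marker


def embed_tag(img: Image.Image, tag: int = TAG) -> Image.Image:
    """Write the tag into the least-significant bits of the first
    eight red-channel pixels of the top row (image must be >= 8 px wide)."""
    out = img.convert("RGB")  # convert() returns a fresh copy
    px = out.load()
    for i in range(8):
        r, g, b = px[i, 0]
        bit = (tag >> (7 - i)) & 1
        px[i, 0] = ((r & ~1) | bit, g, b)
    return out


def read_tag(img: Image.Image) -> int:
    """Read the 8-bit tag back out of the red-channel LSBs."""
    px = img.convert("RGB").load()
    tag = 0
    for i in range(8):
        tag = (tag << 1) | (px[i, 0][0] & 1)
    return tag


marked = embed_tag(Image.open("example.png"))  # hypothetical input file
print(read_tag(marked) == TAG)                 # True: the mark is present

# A gentle blur as a stand-in for regenerating the image: every pixel
# value moves slightly, and the LSB mark is gone.
laundered = marked.filter(ImageFilter.GaussianBlur(radius=1))
print(read_tag(laundered) == TAG)              # almost certainly False
```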
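
By contrast, the Apple-style identification route matches images by perceptual hash rather than by exact pixels. Below is a sketch of dHash, one of the simplest perceptual hashes; Apple’s actual NeuralHash is a learned variant, so this illustrates the idea rather than their algorithm. Because the hash is computed from coarse brightness structure, a lightly filtered or re-encoded copy lands within a small Hamming distance of the original, so a server-side database lookup still matches even after pixel-level marks are gone.

```python
from PIL import Image, ImageFilter


def dhash(image: Image.Image, hash_size: int = 8) -> int:
    """Difference hash: downscale to (hash_size+1) x hash_size greyscale,
    then emit one bit per horizontally adjacent pixel comparison."""
    img = image.convert("L").resize(
        (hash_size + 1, hash_size), Image.Resampling.LANCZOS
    )
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | int(left > right)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


original = Image.open("example.png").convert("RGB")  # hypothetical input file
tweaked = original.filter(ImageFilter.GaussianBlur(radius=1))

# Same image, different pixels: expect a distance well below the
# commonly used match threshold (roughly 10 of 64 bits for dHash).
print(hamming(dhash(original), dhash(tweaked)))
```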