• @horncorn@lemmynsfw.com · 37 points · 1 month ago

    Article title is a bit misleading. Just glancing through, I see he texted at least one minor regarding this and distributed those generated pics in a few places. Putting it all together, yeah, arrest is kind of a no-brainer. The ethics of generating CSAM are pretty much the same as drawing it. Not much we can do about it aside from education.

    • @ricecake@sh.itjust.works · 3 points · 1 month ago

      Legally, a sufficiently detailed image depicting CSAM is CSAM, regardless of how it was produced. Sharing it is how he inevitably got caught, but it’s still illegal even if he had never brought a real minor into it.

    • @retrospectology@lemmy.world · -17 points · 1 month ago (edited)

      Lemmy really needs to stop justifying CP. We can absolutely do more than “eDuCaTiOn”. AI is created by humans and the training data is gathered by humans; it needs regulation like any other industry.

      It’s absolutely insane to me how laissez-faire some people are about AI, it’s like a cult.

      • @msage@programming.dev · 20 points · 1 month ago

        While I agree with your attitude, the whole ‘laissez-faire’ thing is probably a misunderstanding:

        There is nothing we can do to stop the AI.

        Nothing.

        The genie is out of the bottle, the Pandora’s box has been opened, everything is out and it won’t ever return. The world will never be the same, and it’s irrelevant what people think.

        That’s why we need to better understand the post-AI world we created, and figure out what to do now.

        Also, to hell with CP. (feels weird to use the word ‘fuck’ here)

          • @retrospectology@lemmy.world · -9 points · 1 month ago (edited)

          That’s not the question. The question is not “can we stop AI entirely?” — it’s about regulating its development, and yes, we can make efforts to do that.

          This attitude of “it’s inevitable, can’t do anything about it” is eerily similar to the logic used in climate denial and other right-wing efforts. It’s a really poor attitude to have, especially about something as consequential as AI.

          We have the best opportunity right now to create rules about its uses and development. The answer is not “do nothing”, as if it were some force of nature as opposed to a tool created by humans.

            • @msage@programming.dev · 6 points · 1 month ago

            I hear you, and I don’t necessarily disagree with you, I just know that’s not how anything works.

            Regulations work on big companies, but there isn’t a big company behind this specific case. And the small-time users have already run off with the tools; you can’t stop them.

            It’s like trying to regulate cameras to not store specific images. Like, I get the sentiment, but sorry, no. It’s not that I would not like that, it’s just not possible.

              • @retrospectology@lemmy.world · -4 points · 1 month ago

              This argument could be applied to anything, though. A lot of people get away with murder, but we still try and do what we can to stop it from happening.

              You can’t sit in every car and force people to wear a seatbelt, we still have seatbelt laws and regulations for manufacturers.

                • @msage@programming.dev · 4 points · 1 month ago

                Physical things are much easier to regulate than software, let alone software that doesn’t even need a server.

                We already regulate certain images, and it matters very little.

                The bigger payoff will be from educating the public and accepting that we can’t win every war.

                  • @retrospectology@lemmy.world · -5 points · 1 month ago (edited)

                  So accept defeat from the start? That’s really just a non-starter. AI models run on hardware, they are developed by specific people, their contents are distributed by specific individuals, and code bases are hosted on hardware and on specific outlets.

                  It really does sound like you’re just trying to make excuses to avoid regulation, not that you genuinely have a good reason to think it’s not possible to try.

          • @L_Acacia@lemmy.one · 3 points · 1 month ago (edited)

            The models used are not trained on CP. The model weights are distributed freely, and anybody can train a LoRA on their own computer. It’s already too late to ban open-weight models.

          • @GBU_28@lemm.ee · 3 points · 1 month ago (edited)

            Dude, the number of open-source, untrackable, distributed AI models is off the charts. This isn’t just about the models offered by subscription from the big players.

              • @retrospectology@lemmy.world · 1 point · 1 month ago

              This is still one of the weaker arguments. There is a lot of malware out there too, people are still prosecuted when they’re caught developing and distributing it, we don’t just throw up our hands and pretend there’s nothing that can be done.

              Like, yeah, some pedophile who also happens to be tech savvy might build his own AI model to make CP, but that’s not some self-evident argument against attempting to stop them.

                • @GBU_28@lemm.ee · 0 points · 1 month ago (edited)

                No, like, the tools to do these things are common and readily available. It’s not malware; it’s general-purpose AI tooling, completely intertwined with non-image AI work.

                Pandora’s box is wide open. All of this work can be done trivially, completely offline, with a basic PC. Anyone motivated can be up and running offline in a weekend.

                You’re asking to outlaw something like a spreadsheet.

                You download a general-purpose image AI model, then train and prompt it completely offline.

      • DarkThoughts · 2 points · 1 month ago

        You don’t need CSAM training data to create CSAM images. If your model knows what children look like and what naked human bodies look like, then it can create naked children. That’s simply how generative models like this work, and it has absolutely nothing to do with models specifically trained for CSAM using actual CSAM material.

        So while I disagree with him that a lack of education is the cause of CSAM or pedophilia… I’d say education could help with the general hysteria about generative models, like the kind coming from you, where people just let their emotions run wild when these topics arise. You people need to understand that the goal should be the protection of potential victims, not the punishment of victimless thought crimes.

      • Autonomous User · 2 points · 1 month ago (edited)

        One of the two classic excuses: virtue signalling to hijack control of our devices and our computing, an attack on libre software (they don’t care about CP). Next, they’ll be banning more math, like encryption, again.

        It says gullible at the start of this page, scroll up and see.