

Thanks for context!
The thing is, banning is also a consequential action.
And based on what we know about similar behaviors, having an outlet is likely to be good.
Here, the EU takes an approach of “banning just in case” while also ignoring the potential implications of such bans.
Aha, I see. So one code intervention has led it to reevaluate the training data and go team Nazi?
Yup
Eh, I knew something was fishy - otherwise it would be such a great option for a compression algorithm!
No luck again :D
Thanks for elaborating!
+1 for Eternity. Video embedded correctly.
“Bizarre phenomenon”
“Cannot fully explain it”
Seriously? Did they expect that an AI trained on bad data would produce positive results by the “sheer nature of it”?
Garbage in, garbage out. If you train AI to be a psychopathic Nazi, it will be a psychopathic Nazi.
Honestly, I was not able to retrieve anything using those coordinates (hexagon number, wall, shelf, volume, page). Gonna play around with it more - maybe I’m missing something.
As an advocate for online and offline safety of children, I did read into the research. None of the research I’ve found confirms with any sort of evidence that AI-generated CSAM materials increase risks of other illicit behavior. We need more evidence, and I do recommend exercising caution with statements, but for the time being, we can rely on studies of other forms of illegal behavior and the effects of their decriminalization, which paint a fairly positive picture. Generally, people will tend to opt for what is legal and more readily accessible - and we can make AI CSAM into exactly that.
For now, people are criminalized for a zero-evidence-it’s-even-bad crime, while I tend to look quite positively on what it can bring to the table instead.
Also, pedophiles are not human trash, and this line of thinking is also harmful, making more of them hide and never get adequate help from a therapist, increasing their chances of offending. Which, well, harms children.
They are regular people who, involuntarily, have their sexuality warped in a way that includes children. They never chose it, they cannot do anything about it in itself, and can only figure out what to do with it going forward. You could be one, I could be one. What matters is the decisions they take based on their sexuality. The correct way is celibacy and the refusal of any source of direct harm towards children, including the consumption of real CSAM. This might be hard on many, and to aid them, we can provide fictional materials so they could let off some steam. Otherwise, many are likely to turn to real CSAM as a source of satisfaction, or even turn to actually abusing children IRL.
As with most things in modern AI - it’s able to train without much human intervention.
My point is, even if results are not perfectly accurate and resembling a child’s body, they work. They are widely used, in fact, so widely that Europol made a giant issue out of it. People get off to whatever it manages to produce, and that’s what matters.
I do not care about how accurate it is, because it’s not me who consumes this content. I care about how efficient it is at curbing worse desires in pedophiles, because I care about safety of children.
That’s exactly how they work. According to many articles I’ve seen in the past, one of the most common models used for this purpose is Stable Diffusion. For all we know, this model was never fed with any CSAM materials, but it seems to be good enough for people to get off - which is exactly what matters.
I actually do not agree with them being arrested.
While I recognize the issue of identification posed in the article, I hold a strong opinion it should be tackled in another way.
AI-generated CSAM might be a powerful tool to reduce demand for the content featuring real children. If we leave it legal to watch and produce, and keep the actual materials illegal, we can make more pedophiles turn to what is less harmful and impactful - a computer-generated image that was produced with no children being harmed.
By introducing actions against AI-generated materials, they make such materials as illegal as the real thing, and there’s one less reason for an interested party not to go to a CSAM site and watch actual children getting abused, perpetuating the cycle and leading to more real-world victims.
I’m afraid Europol is shooting themselves in the foot here.
What should be done is better ways to mark and identify AI-generated content, not a carpet ban and criminalization.
Let whoever happens to crave CSAM (remember: sexuality, however perverted or terrible it is, is not a choice) use the most harmless outlet - otherwise, they may just turn to the real materials, and as continuous investigations suggest, there’s no shortage of supply or demand on that front. If everything is illegal, and some of that is needed anyway, it’s easier to escalate, and that’s dangerous.
As sickening as it may sound to us, these people often need something, or else things are quickly gonna go downhill. Give them their drawings.
And if you want to keep both weekend days free, 60 hours in 5 days is 12 hours of work a day; minus 8 hours for sleep you get 4 hours, minus ~2 hours of commute you get 2 hours, and the rest goes to basic cooking and eating. This leaves 0 hours for anything else, including rest or any other duties, which you’ll end up resolving over the weekend anyway. This will absolutely kill you in the long run.
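Just to make that back-of-the-envelope math explicit (the 8 hours of sleep, ~2-hour commute, and ~2 hours of cooking/eating are the same assumptions as above):

```python
# Rough daily time budget for a 60-hour week squeezed into 5 days.
HOURS_PER_DAY = 24
work = 60 / 5          # 12 hours of work per day
sleep = 8              # assumed
commute = 2            # assumed round trip
cooking_eating = 2     # assumed

free_time = HOURS_PER_DAY - work - sleep - commute - cooking_eating
print(f"Free hours left per workday: {free_time:.0f}")  # -> 0
```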
Parabolic is super easy to use and lets you download either a single video or an entire playlist/channel. Also, if you download playlists, it makes separate folders for them and embeds all the metadata you may need.
Open-source (GPL-licensed), available for Windows and Linux.
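As far as I know, Parabolic is a frontend for yt-dlp, so roughly the same playlist-into-folders and metadata-embedding behavior can be reproduced from Python. This is just a rough sketch of my own, not what Parabolic actually does internally, and the URL is a placeholder:

```python
# Rough yt-dlp equivalent: one folder per playlist, metadata embedded.
from yt_dlp import YoutubeDL

opts = {
    # Put each playlist into its own folder, named after the playlist.
    "outtmpl": "%(playlist_title)s/%(title)s.%(ext)s",
    # Embed tags/metadata into the downloaded files (requires ffmpeg).
    "postprocessors": [{"key": "FFmpegMetadata"}],
}

with YoutubeDL(opts) as ydl:
    ydl.download(["https://example.com/some-playlist-url"])  # placeholder URL
```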
Zen for regular activities (I pin all the important services), Firefox when I’m browsing for something else.
GNU IceCat is also amazing as a concept, but generally unusable since it ends up blocking too much, and manually allowing everything is a hassle. Still, the pages that do work are clean, and I love that by default the browser doesn’t do anything without your permission - it doesn’t even connect to update or telemetry services, so it makes 0 connections on startup, unlike almost anything else (qutebrowser does the same, but unless you are a strong Vim fanboy, you won’t like the experience).
They stopped their official Mastodon presence, citing a lack of resources to maintain communities everywhere, which caused outrage across the Fediverse.
Previously, they vocally supported the Trump administration’s actions on matters of Internet privacy, which caused a massive backlash.
So, essentially, they have alienated a lot of their userbase by making questionable moves.
Some form of digital signatures for allowed services?
Sure, it will limit the choice of where to legally generate content, but it should work.