Original question by @SpiderUnderUrBed@lemmy.zip
Title, or at least the inverse should be encouraged. This has been talked about before, but with how bad things are getting and how realistic AI-generated videos have become, anything feels better than nothing. AI-generated watermarks or metadata can be removed (the sketch below shows how little that takes), but that's not the point; the point is deterrence. All the big tech companies would comply immediately (at least on the surface, for consumer-facing products), and then we would probably see a massive decrease in malicious use. People will bypass it, remove watermarks, and fix metadata, but the situation should still be quite a bit better. I don't see many downsides.
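To spell out the "can be removed" part: provenance metadata generally does not survive a plain re-save. A minimal sketch, assuming Pillow is installed; `labeled.png` is a hypothetical file whose AI label lives in EXIF/text-chunk metadata (visible watermarks are a separate problem):

```python
from PIL import Image

# Open the hypothetical labeled file and copy only its pixels into a
# fresh image; EXIF, XMP, and text chunks attached to the original are
# simply never written to the new file.
img = Image.open("labeled.png")
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("unlabeled.png")
```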
It won’t change the scenario at all and will goad people into creating worse stuff. The best thing possible is to normalize both the positive and negative aspects of this, like what happened (much more slowly) with Photoshop/digital-editing fakes. Those were actually pretty high quality even before AI, but were relegated to the weirder corners of the internet.
The more normal it is to be skeptical of recorded media, the better. AI can be tweaked and tuned in private on enthusiast-level hardware. I can and have done fine-tuning with an LLM and a CNN at home and offline; it is not all that hard to do (a rough sketch of what that looks like is below). If the capabilities are ostracized, the potential to cause far larger problems becomes greater. A person is far less likely to share their stories and creations when they get a poor reception, and may withdraw further and further within themselves until they make use of what they have created.
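For a sense of how low the bar is, this is roughly the shape of a home fine-tune. A minimal sketch, assuming the Hugging Face transformers and peft libraries, with distilgpt2 as a stand-in small base model (not a description of my exact setup):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Small base model that fits comfortably on consumer hardware.
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Attach low-rank (LoRA) adapters so only a small fraction of the
# model's parameters are actually trained; this is what makes it cheap.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()

# From here an ordinary training loop (or transformers.Trainer) over a
# local text dataset is all that's left; none of it passes through any
# service that could attach a watermark or label.
```

Everything happens offline, which is why a labeling requirement aimed at consumer-facing services never touches it.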