Original question by @SpiderUnderUrBed@lemmy.zip
Title, or at least the inverse should be encouraged. This has been talked about before, but with how bad things are getting, and how realistic AI-generated videos are getting, anything feels better than nothing. AI-generated watermarks or metadata can be removed, but that's not the point; the point is deterrence. Big tech would immediately comply (at least on the surface, for consumer-facing products), and then we would probably see a massive decrease in malicious use. People will bypass it, remove watermarks, and fix metadata, but the situation should still be quite a bit better? I don't see many downsides.
What does the watermark really give you?
It gives a false sense that you can tell what's AI and what's not, especially when anything created maliciously is likely going to have that watermark removed anyway. Pandora's box is already open on those abilities and there's no putting the lid back.
And even in the case of non-maliciously generated work: if you suspect that something is AI but it doesn't have a watermark, do you start investigating how a video/image/story (text) was created? Doesn't that mean any artist or author now has to prove their innocence just because someone suspects their work involved some form of AI at some point in the process?
It's bad enough that they have to worry about those accusations from average people as it is, but now you're handing ammunition to anyone (or any corporation) to drag them through the legal system based on what "appears" to be AI generated.
Edit: typo