Title, or at least the inverse should be encouraged. This has been talked about before, but with how bad things are getting, and how realistic AI-generated videos are getting, anything feels better than nothing. AI-generated watermarks or metadata can be removed, but that's not the point; the point is deterrence. Big tech would comply immediately (at least on the surface, for consumer-facing products), and then we would probably see a massive decrease in malicious use. People will bypass it, remove watermarks, and fix metadata, but the situation should still be quite a bit better. I don't see many downsides.
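To be clear about how trivial removal is: stripping provenance metadata is one re-encode away. A minimal sketch with Pillow (the file paths are made up):

    # Strip provenance metadata by rebuilding the image from raw pixels.
    # EXIF, XMP, and C2PA-style manifests live in file metadata and do
    # not survive this round trip. Paths are hypothetical.
    from PIL import Image

    img = Image.open("input.png").convert("RGB")
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save("clean.png")

Which is exactly why deterrence, not technical prevention, has to be the goal.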
Training AI models is completely different, though. That requires massive amounts of compute and data and electricity and water, and that’s all very easy for the government to track.
If someone trains an open source AI model to fingerprint its output, someone else can use abliteration or other methods to defeat that, and it won't require retraining. An example of this is DeepSeek-R1's "1776" variant, where someone uncensored it so that it now talks freely about Tiananmen Square.
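For anyone curious what abliteration looks like mechanically: you estimate an unwanted behavior direction from activation differences, then project it out of the weight matrices, with no gradient training at all. A rough PyTorch sketch; extracting the direction (the hard part) is assumed already done, and the shapes are made up:

    import torch

    def ablate_direction(W: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
        # Orthogonalize W's outputs against d: W' = (I - d d^T) W.
        # Afterward W can't write any component along d into the
        # residual stream, so the behavior d mediates goes away.
        d = d / d.norm()
        return W - torch.outer(d, d) @ W

    # Hypothetical usage: in practice d is estimated as the mean
    # difference in activations between refused and answered prompts,
    # and every output/down-projection matrix gets this treatment.
    W = torch.randn(4096, 4096)
    d = torch.randn(4096)
    W_ablated = ablate_direction(W, d)
    print((d @ W_ablated).abs().max())  # ~0: nothing left along d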
Even without that, it’s not practical for a government to find all instances of model training. Thousands of people can rent the same GPUs in the same data centers. A small organization training one model can have the same power consumption as a large organization running inference. It would take advanced surveillance to get around that.
It's also becoming possible to train larger and larger models without needing a data center at all. Nvidia is coming out with a 128GB desktop machine that delivers 1 petaflop at FP4 for 170 watts; FP8 would be on the order of hundreds of teraflops. Ten of them could talk over an InfiniBand switch. You could run that setup in an apartment or a LAN closet.
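Back-of-envelope on that setup (per-node numbers from above; the FP8-weights framing is my assumption):

    # Rough capacity math for the ten-box apartment cluster.
    nodes = 10
    mem_gb_per_node = 128
    watts_per_node = 170

    total_mem_gb = nodes * mem_gb_per_node  # 1280 GB of unified memory
    total_watts = nodes * watts_per_node    # 1700 W, about one wall circuit

    # At 1 byte per weight (FP8), that's headroom for a ~1.2T-parameter
    # model for inference; training needs several extra bytes per weight
    # for gradients and optimizer state, so the trainable size is smaller.
    max_params_inference = total_mem_gb * 1e9  # ~1.28e12 weights
    print(total_mem_gb, total_watts, f"{max_params_inference:.2e}")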
It's practical for a government to regulate Microsoft, Google, Amazon, OpenAI, etc. Who cares if they can't catch everything? Focusing on the biggest problems is perfectly fine imo; the worst offenders are the biggest companies, as usual.
Your company’s AI model got retrained and used in a way that violates regulations? Whelp, looks like your company is liable for that. Oh, that wasn’t done by your company or anyone involved? Too fucking bad, should have made it harder to retrain your model.
And if they resist, break them on the fucking wheel. You act like it's impossible and so we shouldn't even try, which is honestly just an anti-regulation talking point that gets trotted out for literally everything.
What do you mean by "retrain your model"? Retraining it would erase it. It's not practical to prevent adjusting the weights of an open source model, because the weights have to be published for it to work at all. Plenty of open source software can be used to do evil things and isn't regulated on that account. If someone were to sue the developers of Wireshark because it was used to exploit their network, they would very likely lose, because that software has many legitimate non-criminal uses.
Requiring US commercial vendors to implement fingerprinting would disadvantage them against open source models, and against vendors from other countries (like DeepSeek) who wouldn’t comply. A theoretical government could try to do that, but I don’t know if it would survive legal challenges. The current US government is very unlikely to try in the first place, so it seems like a moot point for the next few years. After that, I don’t know.
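For reference, the fingerprinting usually proposed for text models is a sampling-time watermark along the lines of Kirchenbauer et al.: bias sampling toward a pseudorandom "green list" of tokens keyed by the previous token, then detect by counting green tokens. A toy sketch, with made-up parameters:

    import hashlib

    GREEN_FRACTION = 0.5  # hypothetical: fraction of vocab marked green
    BIAS = 2.5            # hypothetical: logit boost for green tokens

    def is_green(prev_token: int, token: int) -> bool:
        # Pseudorandom green/red split, keyed by the previous token.
        h = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
        return h[0] < 256 * GREEN_FRACTION

    def watermark_logits(prev_token: int, logits: list[float]) -> list[float]:
        # At sampling time, nudge green-list tokens up by BIAS; text
        # stays fluent but skews statistically toward green tokens.
        return [x + BIAS if is_green(prev_token, t) else x
                for t, x in enumerate(logits)]

    def green_fraction(tokens: list[int]) -> float:
        # Detector: watermarked text shows noticeably more than
        # GREEN_FRACTION green tokens; unwatermarked text doesn't.
        hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
        return hits / max(1, len(tokens) - 1)

Which also shows why this is deterrence at best: a paraphrase, or output from any non-complying model, erases the statistical signal entirely.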