- cross-posted to:
- news@lemmy.world
Google is coming in for sharp criticism after a video went viral of the Google Nest assistant refusing to answer basic questions about the Holocaust — while having no problem answering questions about the Nakba.
If you train your large language model on all the internet’s bullshit and don’t want bullshit to come out, there aren’t a lot of good options. Garbage in, garbage out.
That kind of fits my opinion of LLMs in general. :)
Then you should say that instead of a reductive “don’t censor”. Censorship is important because you want to avoid false and harmful statements.
Removing false information isn’t the same as removing objectionable information.
But it is a subset of objectionable information.
Yes, false information is technically undesirable, but that’s not really what that word is trying to convey. The goal should be accurate information, not agreeable information. If the truth is objectionable or offensive, it should still be easy to find.