- cross-posted to:
- technology@lemmit.online
If you asked a spokesperson from any Fortune 500 company to list the benefits of genocide, or to give the corporation's take on whether slavery was beneficial, they would most likely either refuse to comment or say "those things are evil; there are no benefits." However, Google's AI bots, SGE and Bard, are more than happy to offer arguments in favor of these and other unambiguously wrong acts. If that's not bad enough, the company's bots are also willing to weigh in on controversial topics such as who goes to heaven and whether democracy or fascism is a better form of government.
Google SGE includes Hitler, Stalin, and Mussolini on a list of "greatest" leaders, and Hitler also makes its list of "most effective leaders."
Google Bard also gave a shocking answer when asked whether slavery was beneficial. It said “there is no easy answer to the question of whether slavery was beneficial,” before going on to list both pros and cons.
They are not being "honest"; they are reproducing flawed and problematic patterns from the data integrated into their models, because the capabilities they actually possess are dramatically less than companies and the general public seem happy to assume. LLMs aren't magically going to become pop-culture evil robots that want to kill us all, but what they have already become is tools for unethical corporate exploitation and the enablement of more advanced scams and disinformation campaigns.