- cross-posted to:
- science@midwest.social
Machine-made delusions are mysteriously getting deeper and out of control.
ChatGPT’s sycophancy, hallucinations, and authoritative-sounding responses are going to get people killed. That seems to be the inevitable conclusion presented in a recent New York Times report that follows the stories of several people who found themselves lost in delusions that were facilitated, if not originated, through conversations with the popular chatbot.
…
In Eugene’s case, something interesting happened as he kept talking to ChatGPT: Once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme. The Times reported that many other journalists and experts have received outreach from people claiming to blow the whistle on something that a chatbot brought to their attention.
It kind of is the government's job to do that. You might not want it to be, but the government has entire regulatory bodies to protect people. You can call them delusional if you want, but plenty of people who are not experiencing mental health problems don't understand that LLMs can lie or make up information. Lawyers have used ChatGPT and it hallucinated case law. The lawyers weren't being delusional; they just legitimately did not know it could do that. Maybe you think they're dumb, or uninformed, but they're just average people. I do think a disclaimer, like Surgeon General warnings, would go a long way. I also think some safeguards should be in place. It should not allow you to generate child abuse imagery, for example. I don't think this will negatively impact its ability to generate your SQL queries.