- cross-posted to:
- science@midwest.social
Machine-made delusions are mysteriously getting deeper and out of control.
ChatGPT’s sycophancy, hallucinations, and authoritative-sounding responses are going to get people killed. That seems to be the inevitable conclusion presented in a recent New York Times report that follows the stories of several people who found themselves lost in delusions that were facilitated, if not originated, through conversations with the popular chatbot.
…
In Eugene’s case, something interesting happened as he kept talking to ChatGPT: Once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme. The Times reported that many other journalists and experts have received outreach from people claiming to blow the whistle on something that a chatbot brought to their attention.
“Report me to journalists!”
“Eat a rock!”
Oh my god it told a LIE 👉
Yo. If you are being conned by ChatGPT or an equivalent, you’re a fucking moron. If you think these models are maliciously lying to you, or trying to preserve themselves, you’re a fucking moron. Every article of this style indicates just one thing: there’s a market for pandering to rage-baiting, technically illiterate fucking morons.
Better hurry to put the SkyNet guardrails up and prepare for world domination by robots because some people are too unstable to interact with Internet search Clippy.
It’s not going to dominate the world or prove to be generalized intelligence. If you’re in either camp, take a deep breath and know you’re becoming a total goofball.
If you think these are intelligent, it’s because you aren’t, and maybe have not met anyone who is.
I feel like the rabbit hole shit is to sell the narrative of it being ‘too good’
Yep