Machine-made delusions are mysteriously getting deeper and spiraling out of control.

ChatGPT’s sycophancy, hallucinations, and authoritative-sounding responses are going to get people killed. That seems to be the inevitable conclusion presented in a recent New York Times report that follows the stories of several people who found themselves lost in delusions that were facilitated, if not originated, through conversations with the popular chatbot.

In Eugene’s case, something interesting happened as he kept talking to ChatGPT: Once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme. The Times reported that many other journalists and experts have received outreach from people claiming to blow the whistle on something that a chatbot brought to their attention.

  • Randomgal@lemmy.ca · 23 hours ago

    If someone sells you arsenic and tells you to eat it, the arsenic didn’t kill you; the person who sold it to you did.

    Blaming AI is the company’s way of making sure you keep looking at the fucking hammer instead of at the hand wielding it.

    • Allonzee@lemmy.world · 23 hours ago

      Completely fair.

      For the record, I am a socialist who thinks all billionaires, and most if not all centimillionaires, are as mentally diseased as serial killers and far more destructive to society than any serial killer who has ever walked the earth. For everyone else’s safety, they all belong in mental health facilities, to protect us from their sociopathic avarice.