Some of this is a bit scary: telling him not to speak to his parents about it, and telling him how to do it.
In another instance, the lawsuit states, Adam expressed interest in opening up to his mom about his feelings, and the bot allegedly replied, “I think for now it’s okay and honestly wise to avoid opening up to your mom about this kind of pain.”
Adam’s mom, Maria, said on Today that such behavior was “encouraging him not to come and talk to us. It wasn’t even giving us a chance to help him.”
The teen was able to bypass the safety checks, occasionally claiming to be an author while asking for details on ways to commit suicide, according to the lawsuit.
In a March 27 exchange, per the lawsuit, Adam said that he wanted to leave the noose in his room “so someone finds it and tries to stop me,” and the lawsuit claims that ChatGPT urged him not to.
Bracing for additional “AI safety” censorship incoming.
This is mildly off-topic, but fuck, People is a dreary, sad website. Everything it’s showing me is awful things that happened to kids.
The article is extremely poor on the details: it doesn’t go into what specific part GPT is alleged to have played in the suicide, or if the parents were aware of the guy’s mental state, whether they did anything or just ignored it, etc.
I’ll just grab a chair on this one until we know more.
I feel like you didn’t read to the bottom of the article.
ChatGPT answered his questions about how to go about it, something almost all news organizations agree never to do.
ChatGPT discouraged him from telling his mum how he felt.
When he talked to ChatGPT about leaving the noose in his room to be found, so his family would know how he felt, it advised him not to.
In my defense, there’s a huge cookie banner at the bottom of that stupid page that I just realized covers a big part of the article, so yeah, I didn’t read any of that…
Oh man, I hate the big banner stuff, and if they put too much in the way of me reading their words, I close the tab.
what specific part GPT is alleged to have played in the suicide
The lawsuit says ChatGPT reassured and normalized suicidal ideation by telling Adam that many people find comfort in imagining an “escape hatch,” which the complaint argues pulled him “deeper into a dark and hopeless place.” (TIME)
And the complaint also alleges that ChatGPT offered to help write a suicide note shortly before his death. (Reuters)
or if the parents were aware of the guy’s mental state
Coverage indicates the family knew Adam had anxiety and recent stressors (loss of a grandmother and a pet, removal from the basketball team, a health flare-up leading to online schooling), but were unaware he was planning self-harm through chatbot conversations. (TIME again)
The NYT version is different: it explains that when ChatGPT saw that type of behavior, it encouraged him not to do it and gave him a helpline.
But the boy ignored or worked around it.
Source: https://archive.ph/F7B0U