A student in America asked an artificial intelligence program to help with her homework. In response, the app told her “Please Die.” The eerie incident happened when 29-year-old Sumedha Reddy of Michigan sought help from Google’s Gemini chatbot, a large language model (LLM), the New York Post reported.
The program verbally abused her, calling her a “stain on the universe.” Reddy told CBS News that she got scared and started panicking. “I wanted to throw all of my devices out the window. I hadn’t felt panic like that in a long time to be honest,” she said.
They omitted the conversation too. Really makes you wonder how the bot ended up saying that…
Here’s the conversation that was linked on the reddit thread about the incident: https://gemini.google.com/share/6d141b742a13
Holy smokes, I stand corrected. The chatbot actually misunderstood the context to the point that it told the human to die, out of the blue.
It’s not every day you get shown a source that proves you wrong. Thanks kind stranger
Yeah holy shit, screenshotting this in case Google takes it down, but this leap is wild
No problem. I understand the skepticism here, especially since the article in the OP is a bit light on the details.
EDIT:
The details in the OP article are fine enough, but it didn’t link sources.
One thing that throws me off here is the double response. I haven’t used Gemini a ton, but it has never once given me multiple replies. It is always one statement per my one statement. You can see at the end here there’s a double response. It makes me think that there’s some user input missing. There’s also missing text in the user statements leading up to it, which makes me wonder what the person was asking in full. Something about this still smells fishy to me, but I’ve heard enough goofy things about how AIs learn weird shit to believe it’s possible.

Edit: I’m an absolute moron. The more I look at this, the more it looks legit. Let the AI effort to destroy humanity begin!
Idk what you mean by “double response”. The user typed a statement, not a question, and the AI responded with its weird answer.
I think the lack of a question or specific request in the user text led to the weird response.
You’re right, I misread the text log and thought Gemini responded twice in a row at the end, but it looks like it didn’t. Very messed up stuff… There’s still missing user input tho, and a lot of it. And I’d love to see exactly what was said as a prompt.
Go look again, there are no consecutive messages sent. The message before the weird one was sent by the user.
Also, you are right that it would be impossible for an AI to send two consecutive messages.
You can expand the chats too, so I don’t even think there’s missing user input… I’m a mega idiot lol. The more I look at this, the more I’m convinced this is legit.
The full text of the user’s prompt that led to this anomaly was:
Even if they included it, it changes fuck all imo. We’ve known for a long time now that these things hallucinate, or presumably throw a Hail Mary as to what comes next conversationally/prediction-wise. Also, as the other poster pointed out, the author referring to a 29-year-old woman as a “girl” probably tells you all you need to know about journalistic integrity on that site.
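For anyone wondering what the “Hail Mary prediction-wise” bit means: LLMs generate text by sampling the next token from a probability distribution, so a wildly unlikely continuation still has a nonzero chance of being drawn. Here’s a toy Python sketch of that idea. To be clear, the token labels and probabilities are completely made up for illustration and have nothing to do with Gemini’s actual internals:

```python
import random

# Made-up next-token distribution, purely illustrative.
# Not Gemini's real outputs or probabilities.
NEXT_TOKENS = {
    "helpful answer": 0.90,
    "polite refusal": 0.0999,
    "hostile non sequitur": 0.0001,  # tiny, but not zero
}

def sample(dist):
    """Draw one token from a {token: probability} distribution."""
    r = random.random()
    cumulative = 0.0
    for token, p in dist.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # float rounding can leave a sliver at the top

# Simulate a million generations: the rare branch fires ~100 times.
counts = {token: 0 for token in NEXT_TOKENS}
for _ in range(1_000_000):
    counts[sample(NEXT_TOKENS)] += 1
print(counts)
```

Scale a one-in-ten-thousand branch across the millions of conversations these chatbots handle, and someone, somewhere, eventually gets the weird output.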
Low quality journalism strikes again.
Love seeing commenters spot it and call it.
That’s what the comment section is for!
Expect more low quality everything as people turn to using AI to generate their thoughts.
I’ve seen it elsewhere, and it was just normal questions related to some sociology homework about different types of concentration.