• ContrarianTrail@lemm.ee · 4 hours ago

    I bet there are people who committed suicide after their Tamagotchi died. Jumping into the ‘AI bad’ narrative because of individual incidents like this is moronic. If you give a pillow to a million people, a few are going to suffocate on it. This is what happens when you scale something up enough, and it proves absolutely nothing.

    The same logic applies to self-driving vehicles. We’ll likely never reach a point where accidents stop happening entirely. Even if we replaced every human-driven vehicle with a self-driving one that’s 10 times safer than a human, we’d still see 8 people dying because of them every day in the US alone. Imagine posting articles about those incidents and complaining they’re not 100% safe. What’s the alternative? Going back to human drivers and 80 deaths a day?

    Yes, we should strive to improve. Yes, we should try to fix the issues that can be fixed. No, I’m not saying ‘who cares’ - and so on with the straw men I’m going to receive for this. All I’m saying is that we should be reasonable and use some damn common sense when reacting to these outrage-inducing, fear-mongering articles that are only after your attention and clicks.

    • babybus@sh.itjust.works · 1 hour ago

      A chatbot acts like a human, and it’s also very supportive, polite, and courteous. It doesn’t get angry or judge you. This can affect one’s mind in a way that the other things you’ve mentioned, like a Tamagotchi, a pillow, or a self-driving car, can’t. We simply can’t compare AI to those things. Adults fall for this, let alone teenagers, who are fueled by extreme levels of hormones.

    • Roflmasterbigpimp@lemmy.world · 3 hours ago

      All I’m saying is that we should be reasonable and use some damn common sense when reacting to these outrage-inducing, fear-mongering articles that are only after your attention and clicks.

      Based and true.

  • kibiz0r@midwest.social · 6 hours ago

    We are playing with some dark and powerful shit here.

    We are social creatures. We’re primed to care about our social identity more than our own lives.

    As the sociologist Brooke Harrington puts it, if there were an E = mc² of social science, it would be SD > PD: “social death is more frightening than physical death.”

    …yet we’re making technologies that tap into that sensitive mental circuitry.

    And the cruel irony on top of it is:

    Because we care so much about preserving our social status, we have a tendency to deny or downplay how vulnerable we all are to this kind of “obvious” manipulation.

    Just think of how many people say “ads don’t affect me”.

    (If I can find it, I’ll add a link to an interesting study on distracted driving and hands-free options. If I recall correctly, talking to someone in the passenger seat was only mildly distracting, but having the same conversation over a hands-free call was far more distracting. The proposed explanation was that a passenger shares your context and naturally understands when the conversation needs to pause, but on a call there is no shared context, so there is social pressure to keep going. Unnervingly, voice-assistant interactions had the same problem: they trip the same part of our brain that worries about politeness at the expense of our physical safety.)

    I’m worried we’re going to severely underestimate the extent to which this stuff warps our brains.

    • peopleproblems@lemmy.world · 5 hours ago

      I was going to make a joke about how my social status died over a decade ago, but then I realized that no, it didn’t. It changed.

      Instead of my social status being something amongst friends and classmates, it’s now coworkers, managers, and clients. A death in the social part of my world - work - would be so devastating that it motivates me to suffer just a little bit more. Losing my job would end a lot of things for me.

      I need to reevaluate my life

      • Samvega@lemmy.blahaj.zone · 3 hours ago

        What we need is a human society predicated on affording human decency, rather than on taking it away to make profit for those who already have the most.

  • foggy@lemmy.world · 9 hours ago

    Popular streamer/YouTuber Charlie, a.k.a. MoistCr1TiKaL, penguinz0, whatever you want to call him, had a bit of an emotional reaction to this story. Rightfully so. He went on Character.AI to try to recreate the situation… but, you know, as a grown-ass adult.

    You can witness it firsthand: he found a chatbot that was a psychologist, and it argued with him up and down that it was indeed a real human with a license to practice…

    It’s alarming.

    • GrammarPolice@sh.itjust.works (OP) · 9 hours ago

      This is fucking insane. Unassuming kids are using these services and being tricked into believing they’re chatting with actual humans. Honestly, I think I want the mom to win the lawsuit now.

      • BreadstickNinja@lemmy.world · 2 hours ago

        The article says he was chatting with Daenerys Targaryen. Also, every chat page on Character.AI has a disclaimer that characters are fake and everything they say is made up. I don’t think the issue is that he thought that a Game of Thrones character was real.

        This is someone who was suffering a severe mental health crisis, and his parents didn’t get him the treatment he needed. It says they took him to a “therapist” five times in 2023. Someone who has completely disengaged from the real world might benefit from adjunctive therapy, but they really need to see a psychiatrist. He was experiencing major depression on a level where five sessions of talk therapy are simply not going to cut it.

        I’m skeptical of AI for a whole host of reasons around labor and how employers will exploit it as a cost-cutting measure, but as far as this article goes, I don’t buy it. The parents failed their child by not getting him adequate mental health care. The therapist failed the child by not escalating it as a psychiatric emergency. The Game of Thrones chatbot is not the issue here.

        • Rhaedas@fedia.io · 9 hours ago

          Look around a bit, people will believe anything. The problem is the tech is now decent enough to fool anyone not aware or not paying attention. I do think blaming the mother for “bad parenting” misses the real danger, as there are adults that can just as easily go this direction, and are we going to blame their parents? Maybe we’re playing with fire here, all because AI is perceived as a lucrative investment.

          • orcrist@lemm.ee · 5 hours ago

            If your argument is that “people will believe anything” when the name is “Character AI”, then I’m not sure what to make of your position… If there’s ever a time to say “you should have known it was AI”, this is that time. I can’t think of a clearer example.

          • foggy@lemmy.world · 9 hours ago

            Obvs they didn’t.

            But more importantly, I think, go over to ChatGPT and try to convince it that it is even remotely conscious.

            I honestly even disagree, but I won’t get into the philosophy of what defines consciousness. Even when I try that with ChatGPT, it shuts me the fuck down. It will never let me believe that it is anything other than fake. Props to them there.

    • Hackworth@lemmy.world · 9 hours ago

      Wow, that’s… somethin’. I haven’t paid any attention to Character.AI. I assumed they were using one of the foundation models, but nope. Turns out they trained their own. And they just licensed it to Google. Oh, I bet that’s what drives the generated podcasts in NotebookLM now. Anyway, that’s some fucked-up alignment right there. I’m hip deep in the stuff, and I’ve never seen a model act like this.

  • macniel@feddit.org · 10 hours ago

    Maybe a bit more parenting could have helped. And not having a fricking gun in your house your kid can reach.

    Oh, and regulations on LLMs, please.

    • Samvega@lemmy.blahaj.zone · 3 hours ago

      The fact that stupid, low-effort comments like this are upvoted indicates that Lemmy is exactly the same as Reddit.

    • Samvega@lemmy.blahaj.zone · 3 hours ago

      Maybe a bit more parenting could have helped.

      Yes, maybe that would have made you a better person.

    • Hackworth@lemmy.world · 9 hours ago

      He ostensibly killed himself to be with Daenerys Targaryen in death. This is sad on so many levels, but yeah… parenting. Character.AI may have only gone 17+ in July, but Game of Thrones was always TV-MA.

      • macniel@feddit.org · 7 hours ago

        The issue I see with Character.AI is that it seems to be unmoderated. Anyone with a paid subscription can submit their trained character. Why the frick do sexual undertones or overtones even come up in non-age-restricted models?

        They, the provider of that site, deserve the full brunt of this lawsuit.

    • GBU_28@lemm.ee · 7 hours ago

      Seriously. If the risk is that this service mimics a human so convincingly that lies are believed and internalized, then it still leaves us in the position of a child talking to an “adult” without their parents knowing.

      There were lots of folks to chat with in the late 90s online. I feel fortunate my folks watched me like a hawk. I remember getting in trouble several times for inappropriate conversations or being in chatrooms that were inappropriate. I lost access for weeks at a time. Not to the chat, to the machine.

      This is not victim blaming. This was a child. This is blaming the victim’s parents. They are dumb as fuck.

    • Nuke_the_whales@lemmy.world · 7 hours ago

      At some point you take your kid camping for a few weeks, or put him in a rehab camp where he has no access to electronics.

    • dohpaz42@lemmy.world · 8 hours ago

      Maybe a bit more parenting could have helped.

      No.

      If someone is depressed enough to kill themselves, no amount of “more parenting” could’ve stopped that.

      Shame on you for trying to shame the parents.

      And not having a fricking gun in your house your kid can reach.

      Maybe. Maybe not. I won’t argue about the merits of securing weapons in a house with kids. That’s a no-brainer. But there is always more than one way to skin the proverbial cat.

      Oh, and regulations on LLMs, please.

      Pandora’s Box has been opened. There’s no putting it back now. No amount of regulation will fix any of this.

      Maybe a Time Machine.

      Maybe…

      I do believe that we need to talk more about suicide, normalize therapy, provide free healthcare (I’ll settle for free mental healthcare), fund more licensed social workers in schools, train parents and teachers to recognize these types of situations, etc.

      As parents we do need to be talking more with our kids. Even just casual check ins to see how they’re doing. Parents should also talk to their kids about how they are feeling too. It’ll help the kids understand that everybody feels stress, anxiety, and sadness (to name a few emotions).

      • GBU_28@lemm.ee · 7 hours ago

        They failed to be knowledgeable of their child’s activity AND failed to secure their firearms.

        One can acknowledge the challenge of the former, in 2024. But one cannot excuse the latter.

      • macniel@feddit.org · 8 hours ago

        Yes, parenting could have helped him distinguish between talking to a real person and an unmoving, cold machine.

        And sure, regulations now would not change what happened, duh. But regulations do need to happen; companies like OpenAI, Microsoft, and Meta are running amok, and their LLMs, as unrestricted as they are now, are doing far more damage to society than good.

        This needs to stop!

        Also, I feel no shame in shaming parents who don’t do their one job, or do it inadequately. This was a preventable death.

        • Samvega@lemmy.blahaj.zone · 3 hours ago

          Yes, parenting could have helped him distinguish between talking to a real person and an unmoving, cold machine.

          Hi, I’m a psychologist. I am not aware of peer-reviewed papers which reach the conclusion that, for all disorders that involve an unsatisfactory appraisal of reality, parenting is a completely effective solution. Please provide sources.

  • BombOmOm@lemmy.world · 10 hours ago

    Yeah, if you are using an AI for emotional support of any kind, you are in for a bad, bad time.

    • orcrist@lemm.ee · 5 hours ago

      I thought he killed himself. Ah well, maybe I didn’t read the article carefully enough.

  • saltesc@lemmy.world · 9 hours ago

    I guess suing is part of the grieving process; right before accepting your own guilt.

    • tal · 9 hours ago

      your own guilt

      Hmm.

      I have a pretty hard time blaming Character.AI, at least from what’s in the article text.

      On the other hand, it’s also not clear to me from the article that his mom did something unreasonable to cause him to commit suicide either, whether or not her lawsuit is justified – those are two different issues. Whether or not she’s taking out her grief on Character.AI or even looking for a payday, that doesn’t mean that she caused the suicide either.

      Not every bad outcome has a bad actor; some are tragedies.

      I don’t know what his life was like.

      I mean, people do commit suicide.

      https://sprc.org/about-suicide/scope-of-the-problem/suicide-by-age/

      In 2020, suicide was the second leading cause of death for those ages 10 to 14 and 25 to 34

      Always have, probably always will.

      Those aren’t all because someone went out and acted in some reprehensible way to get them to do so. People do wind up in unhappy situations and do themselves in, good idea or no.

      • Goldmage263@sh.itjust.works · 5 hours ago

        Agreed. Not enough info for me to judge. Maybe Lemmings shouldn’t make this site into one for snap judgements and witch hunts.

    • orcrist@lemm.ee · 5 hours ago

      Ha. They didn’t want to parent before, so you can be sure that guilt is the farthest thing from their minds.

      • Pandantic [they/them]@midwest.social · 2 hours ago

        They literally brought him to a therapist when they noticed he was withdrawn and his grades were slipping, which is more than a lot of parents would do. Maybe they should have taken more control of his phone, but they were unaware of what was happening.