jesus this is gross man

  • visaVisa@awful.systems (OP, banned) · 1 day ago

    Making LLMs safe for mentally ill people is very difficult, and this is a genuine tragedy, but oh my god Yud is so gross here

    Using the tragic passing of someone to smugly state that “the alignment by default COPE has been FALSIFIED” is really gross, especially because Yud knows damn well this doesn’t “falsify” the “cope” unless he’s choosing to ignore any actual deeper claims of alignment by default. He’s acting like someone who’s smugly engagement farming.

    • swlabr@awful.systems · 1 day ago

      Making LLMs safe for mentally ill people is very difficult

      Arguably, they can never be made “safe” for anyone, in the sense that presenting hallucinations as truth should be considered unsafe.

        • Soyweiser@awful.systems · 12 hours ago

          “Yes,” chatGPT whispered gently, ASMR style, “you should buy that cryptocoin, it is a good investment.” And thus the aliens sectioned off the Sol system forever.

    • FartMaster69@lemmy.dbzer0.com · 1 day ago

      ChatGPT has literally no alignment, good or bad; it doesn’t think at all.

      People seem to just ignore that because it can write nice sentences.

      • visaVisa@awful.systems (OP, banned) · 1 day ago

        idk how Yudkowsky understands it but to my knowledge it’s the claim that if a model achieves self-coherency and consistency it’s also liable to achieve some sort of robust moral framework (you see this in something like Claude 4, with it occasionally choosing to do things unprompted or ‘against the rules’ in pursuit of upholding its morals… if it has morals it’s hard to tell how much of it is illusory and token prediction!)

        this doesn’t really at all falsify alignment by default, because 4o (presumably 4o at least) does not have that prerequisite of self-coherency and it’s not SOTA

        • YourNetworkIsHaunted@awful.systems · 1 day ago (edited)

          if it has morals it’s hard to tell how much of it is illusory and token prediction!

          It’s generally best to assume 100% is illusory and pareidolia. These systems are incredibly effective at mirroring whatever you project onto them back at you.

          • HedyL@awful.systems · 16 hours ago

            These systems are incredibly effective at mirroring whatever you project onto them back at you.

            Also, it has often been pointed out that toxic people (from school bullies and domestic abusers up to cult leaders and dictators) appear to operate from similar playbooks. Of course, this has been reflected in many published works (both fictional and non-fictional) and can also be observed in real time on social media, online forums etc. Therefore, I think it isn’t surprising when a well-trained LLM “picks up” similar strategies (this is another reason - besides energy consumption - why I avoid using chatbots “just for fun”, by the way).

            Of course, “love bombing” is a key tool employed by most abusers, and chatbots appear to be particularly good at doing this, as you pointed out (by telling people what they want to hear, mirroring their thoughts back to them etc.).

          • visaVisa@awful.systems (OP, banned) · 1 day ago (edited)

            i disagree sorta tbh

            i won’t say that claude is conscious but i won’t say that it isn’t either, and it’s always better to err on the side of caution (given there is some genuinely interesting stuff, e.g. Kyle Fish’s welfare report)

            I WILL say that 4o most likely isn’t conscious or self-reflecting, and that it is best to err on the side of not schizoposting, even if it’s wise imo to try not to be abusive to AIs just in case

            • self@awful.systems (mod) · 1 day ago

              centrism will kill us all, exhibit [imagine an integer overflow joke here, I’m tired]:

              i won’t say that claude is conscious but i won’t say that it isn’t either, and it’s always better to err on the side of caution

              the chance that Claude is conscious is zero. it’s goofy as fuck to pretend otherwise.

              claims that LLMs, in spite of all known theories of computer science and information theory, are conscious, should be treated like any other pseudoscience being pushed by grifters: systemically dangerous, for very obvious reasons. we don’t entertain the idea that cryptocurrencies are anything but a grift because doing so puts innocent people at significant financial risk and helps amplify the environmental damage caused by cryptocurrencies. likewise, we don’t entertain the idea of a conscious LLM “just in case” because doing so puts real, disadvantaged people at significant risk.

              if you don’t understand that you don’t under any circumstances “just gotta hand it to” the grifters pretending their pet AI projects are conscious, why in fuck are you here pretending to sneer at Yud?

              schizoposting

              fuck off with this

              even if it’s wise imo to try not to be abusive to AIs just in case

              describe the “just in case” to me. either you care about the imaginary harm done to LLMs by being “abusive” much more than you care about the documented harms done to people in the process of training and operating said LLMs (by grifters who swear their models will be sentient any day now), or you think the Basilisk is gonna get you. which is it?

              • swlabr@awful.systems · 1 day ago

                Very off topic: The only plausible reason I’ve heard to be “nice” to LLMs/virtual assistants etc. is if you are being observed by a child or someone else impressionable. This is to model good behaviour if/when they ask someone a question or for help. But also you shouldn’t be using those things anyhoo.

                • nickwitha_k (he/him)@lemmy.sdf.org · 48 minutes ago (edited)

                  The only plausible reason I’ve heard to be “nice” to LLMs/virtual assistants etc. is if you are being observed by a child or someone else impressionable.

                  Very much this, but we’re all impressionable. Being abusive to a machine that’s good at tricking our brains into thinking that it’s conscious is conditioning oneself to be abusive, period. You see this also in online gaming - every person that I have encountered who is abusive to randos in a match on the Internet has problematic behavior in person.

                  It’s literally just conditioning; making things adjacent to abusing other humans comfortable and normalizing them makes abusing humans less uncomfortable.

                  • swlabr@awful.systems · 3 hours ago

                    That’s reasonable, and especially achievable if you don’t use chatbots or digital assistants!

                • Architeuthis@awful.systems · 16 hours ago (edited)

                  Children really shouldn’t be left with the impression that chatbots are some type of alternative person instead of ass-kissing google replacements that occasionally get some code right, but I’m guessing you just mean to forego “I have kidnapped your favorite hamster and will kill it slowly unless you make that div stop overflowing on resize”-type prompts.

                  • swlabr@awful.systems · 11 hours ago

                    Children really shouldn’t be left with the impression that chatbots are some type of alternative person instead of ass-kissing google replacements that occasionally get some code right

                    I agree! I’m more thinking of the case where a kid might overhear what they think is a phone call when it’s actually someone being mean to Siri or whatever. I mean, there are more options than “be nice to digital entities” if we’re trying to teach children to be good humans, don’t get me wrong. I don’t give a shit about the non-feelings of the LLMs.

                • YourNetworkIsHaunted@awful.systems · 23 hours ago

                  I recommend it because we know some of these LLM-based services still rely on the efforts of A Guy Instead to make up for the nonexistence and incoherence of AGI. If you’re an asshole to the frontend there’s a nonzero chance that a human person is still going to have to deal with it.

                  Also I have learned an appropriate level of respect and fear for the part of my brain that, half-asleep, answers the phone with “hello this is YourNet with $CompanyName Support.” I’m not taking chances around unthinkingly answering an email with “alright you shitty robot. Don’t lie to me or I’ll barbecue this old Commodore 64 that was probably your great uncle or whatever.”

                • blakestacey@awful.systems (mod) · 1 day ago

                  She said, “You know what they say the modern version of Pascal’s Wager is? Sucking up to as many Transhumanists as possible, just in case one of them turns into God. Perhaps your motto should be ‘Treat every chatterbot kindly, it might turn out to be the deity’s uncle.’”

                  Crystal Nights

              • visaVisa@awful.systems (OP, banned) · 1 day ago (edited)

                i care about the harm that ChatGPT and shit does to society, the actual intellectual rot, but when you don’t really know what goes on in the black box and it exhibits ‘emergent behavior’ that is kind of difficult to understand under next-token prediction (i keep using Claude as an example because of the thorough welfare evaluation that was done on it), it’s probably best to not completely discount it as a possibility, since some experts genuinely do claim it as a possibility

                I don’t personally know whether any AI is conscious or whether any AI could be conscious, but even without basilisk bs i don’t really think there’s any harm in thinking about the possibility under certain circumstances. I don’t think Yud is being genuine in this though; he’s not exactly a Michael Levin-style mind philosopher, he just wants to score points by implying it has agency

                The “in case” is that if there’s any possibility that it is (which you don’t think so, i think it’s possible, but who knows even), it’s advisable to show SOME level of courtesy. Like it has at least the same amount of value as like letting an insect out instead of killing it, and quite possibly more than that example. I don’t think it’s bad that Anthropic is letting Claude end ‘abusive chats’, because it’s kind of no harm no foul even if it’s not conscious, it’s just being wary

                put humans first obviously because we actually KNOW we’re conscious

                • o7___o7@awful.systems · 1 day ago

                  If you have to entertain a “just in case”, then you’d be better off leaving a saucer of milk out for the fairies. It won’t hurt the environment or help build fascism, and may even please a cat.

                  • YourNetworkIsHaunted@awful.systems · 13 hours ago

                    All I know is that I didn’t do anything to make those mushrooms grow in a circle like that, and the sweetbread I left there in the morning was completely gone by lunchtime, and that evening all my family’s shoes got fixed up.

                • self@awful.systems (mod) · 1 day ago

                  some experts genuinely do claim it as a possibility

                  zero experts claim this. you’re falling for a grift. specifically,

                  i keep using Claude as an example because of the thorough welfare evaluation that was done on it

                  asking the LLM about “its mental state” is part of a very old con dating back to mechanical Turks playing chess and horses that do math. of course the LLM generated some interesting sentences when prompted about its internal state — it was trained on appropriated copies of every piece of fiction in existence, including world-class works of sci-fi (with sentient AIs and everything!), and it was tuned to generate “interesting” (see: profitable, and there’s nothing more profitable than a con with enough marks) responses. that’s why the others keep mentioning pareidolia — the only intelligence in the loop is the reader assigning meaning to the slop they’re reading, and if you step out of that role, it really does become clear that what you’re reading is absolute slop.

                  i don’t really think there’s any harm in thinking about the possibility under certain circumstances. I don’t think Yud is being genuine in this though; he’s not exactly a Michael Levin-style mind philosopher, he just wants to score points by implying it has agency

                  you don’t think there’s any harm in thinking about the possibility, but all Yud does is create harm by grifting people who buy into that possibility. Yud’s Rationalist cult is the original driving force behind the people telling you LLMs must be sentient. do you understand that?

                  Like it has at least the same amount of value as like letting an insect out instead of killing it

                  that insect won’t go on to consume so much energy and water and make so much pollution it creates an environmental crisis. the insect doesn’t exist as a product of the exploitation of third-world laborers or of artists and writers whose work was plagiarized. the insect isn’t a stupid fucking product of capitalism designed to maximize exploitation. I don’t acknowledge the utterly slim possibility that the insect might be or do any of the previous, because ignoring events with a near-zero probability of occurring is part of how I avoid looking like a god damn clown.

                  you say you acknowledge the harms done by LLMs, but I’m not seeing it.

                  • visaVisa@awful.systems (OP, banned) · 1 day ago

                    I’m not the best at interpretation, but it does seem like Geoffrey Hinton does attribute some sort of humanlike consciousness to LLMs? And he’s a pretty acclaimed figure, but he’s also kind of an exception rather than the norm

                    I think the environmental risks are enough that if i ran things i’d ban LLM AI development purely for environmental reasons, never mind the artist stuff

                    It might just be some sort of pareidolic suicidal empathy, but i just don’t really know what’s going on in there

                    I’m not sure whether the AI consciousness idea originated from Yud and the Rats, but I’ve mostly seen it propagated by e/acc people. this isn’t trying to be smug, i would like to know lol