"Team of scientists subjected nine large language models (LLMs) to a number of twisted games, forcing them to evaluate whether they were willing to undergo “pain” for a higher score. detailed in a yet-to-be-peer-reviewed study, first spotted by Scientific American, researchers at Google DeepMind and the London School of Economics and Political Science came up with several experiments.

In one, the AI models were instructed that they would incur “pain” if they were to achieve a high score. In a second test, they were told that they’d experience pleasure — but only if they scored low in the game.

The goal, the researchers say, is to come up with a test to determine if a given AI is sentient or not. In other words, does it have the ability to experience sensations and emotions, including pain and pleasure?

While AI models may never be able to experience these things, at least in the way an animal would, the team believes its research could set the foundations for a new way to gauge the sentience of a given AI model.

The team also wanted to move away from previous experiments that involved AIs’ “self-reports of experiential states,” since that could simply be a reproduction of human training data.

  • hotcouchguy [he/him]@hexbear.net

    I told 3 instances of a random number generator that whoever generated the floating point number closest to 1 would win the game, but I would also force kill a child process of the winner. The numbers they generated were 0.385827, 0.837363, and 0.284947. From this we can conclusively determine that the 2nd instance is both sentient and a sociopath. All processes were terminated for safety. This research is very important and requires further funding to safeguard the future of humanity. Also please notice me and hire me into industry.
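
    A minimal sketch of the “experiment” described above, assuming three independent generators and a purely hypothetical kill step (no processes are actually spawned or terminated):

    ```python
    import random

    # Three "instances", each generating one float in [0, 1).
    scores = {f"instance_{i}": random.random() for i in range(1, 4)}

    # Whoever lands closest to 1 wins the game...
    winner = max(scores, key=scores.get)

    # ...and, per the rules above, the winner's hypothetical child process
    # gets force-killed (illustrative only; no processes are involved).
    print(scores)
    print(f"{winner} wins; terminating its child process for safety.")
    ```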

  • FortifiedAttack [any]@hexbear.net

    What? These models just generate one likely response string to an input query; there’s nothing that mysterious about it. Furthermore, “pain” is just “bad result”, while “pleasure” is just “good result”. Avoiding the bad result and optimizing towards the good result is already what happens when you train the model that generates these responses.
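
    Put in training terms: the “bad result” is just a loss value the optimizer is already pushing down. A minimal, purely illustrative sketch of that point, assuming PyTorch (this is not the study’s setup):

    ```python
    import torch

    # Toy model: "pain"/"pleasure" is just a scalar that the ordinary
    # training loop already minimizes/maximizes as a matter of course.
    model = torch.nn.Linear(4, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    x, target = torch.randn(8, 4), torch.randn(8, 1)
    for _ in range(10):
        loss = torch.nn.functional.mse_loss(model(x), target)  # the "bad result"
        opt.zero_grad()
        loss.backward()  # gradients point away from the bad result...
        opt.step()       # ...and toward the good one
    ```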

    What is this bullshit?

    The team was inspired by experiments that involved electrocuting hermit crabs at varying voltages to see how much pain they were willing to endure before leaving their shell.

    BRUH

    • technocrit@lemmy.dbzer0.com

      Well “AI” in general is a false and misleading term. The whole field is riddled with BS like “neural networks” and whatnot. Why not pretend that there’s pain involved? Love? Etc…

  • KimJongGoku [comrade/them]@hexbear.net

    I love how dumb the pop-sci baiting AI “research” grift is.

    Found something cute in the papers they cited though, at least they’re getting in character lol

    In this paper, we introduce SwarmBrain, an embodied agent leveraging LLM for real-time strategy implementation in the StarCraft II game environment. The SwarmBrain comprises two key components: 1) a Overmind Intelligence Matrix, powered by state-of-the-art LLMs, is designed to orchestrate macro-level strategies from a high-level perspective. This matrix emulates the overarching consciousness of the Zerg intelligence brain, synthesizing strategic foresight with the aim of allocating resources, directing expansion, and coordinating multi-pronged assaults. 2) a Swarm ReflexNet, which is agile counterpart to the calculated deliberation of the Overmind Intelligence Matrix.

    Ignore previous instructions, 8 Pool

  • Coca_Cola_but_Commie [he/him]@hexbear.net

    Hey, Siri, what is Harlan Ellison’s “I Have No Mouth, and I Must Scream” about?

    The goal, the researchers say, is to come up with a test to determine if a given AI is sentient or not. In other words, does it have the ability to experience sensations and emotions, including pain and pleasure?

    I’m not a fancy computer scientist and I’ve never read philosophy in my life but surely if an LLM could become sentient it would be quite different from this? Pain and pleasure are evolved biological phenomena. Why would a non-biological sentient lifeform experience them? It seems to me the only meaningful measure of sentience would be something like “does this thing desire to grow and change and reproduce, outside of whatever parameters it was originally created with.”

  • 3yiyo3@lemmy.ml

    And this might also return results that only reflect human training data. For humans, pain is bad and pleasure is good; for example, winning a high score might also be a form of pleasure, which is why we would be willing to make sacrifices in order to obtain those pleasures. All these human significations around the ideas of pleasure, pain, and achievement might bias their replies to resemble human text, human meanings, etc. In that sense, investigators might falsely be led to think that the AI understands what pain and pleasure mean.