• wizardbeard@lemmy.dbzer0.com · 1 day ago

    Because billions is an absurd understatement, and computers have constrained problem spaces far less complex than even the most controlled life of a lab rat.

    And who the hell argues that animals don’t have free will? They don’t have full sapience, but they absolutely have will.

    • Womble@lemmy.world · 1 day ago

      So where does it end? Slugs, mites, krill, bacteria, viruses? How do you draw a line that says free will on this side of the line, and just mechanics and random chance on that side?

      I just don’t find it a particularly useful concept.

      • CheeseNoodle@lemmy.world · 16 hours ago

        I’d say it ends when you can’t predict, with 100% accuracy 100% of the time, how an entity will react to a given stimulus. With current LLMs, if I run one with the same input it will always do the same thing. And I mean really the same input, not putting the same prompt into ChatGPT twice and getting different results because there’s an additional random number generator I don’t have access to.
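        For what it’s worth, the determinism point is easy to demonstrate: greedy decoding maps the same input to the same output every time, and sampled decoding only looks unpredictable because of an RNG whose seed you usually can’t see. A minimal sketch in Python (the vocabulary and logits here are made up for illustration, not taken from any real model):

        ```python
        import math
        import random

        # Toy stand-in for an LLM's output layer: fixed logits over a tiny vocabulary.
        # (Hypothetical numbers; a real model would compute these from the prompt.)
        VOCAB = ["yes", "no", "maybe"]
        LOGITS = [2.0, 1.5, 0.5]

        def softmax(logits):
            exps = [math.exp(x) for x in logits]
            total = sum(exps)
            return [e / total for e in exps]

        def greedy_pick(logits):
            # Deterministic: the same logits always yield the same token.
            return VOCAB[max(range(len(logits)), key=lambda i: logits[i])]

        def sampled_pick(logits, rng):
            # Stochastic: the outcome depends on the RNG's state, not just the input.
            return rng.choices(VOCAB, weights=softmax(logits), k=1)[0]

        print([greedy_pick(LOGITS) for _ in range(5)])            # identical every run

        seeded = random.Random(42)
        print([sampled_pick(LOGITS, seeded) for _ in range(5)])   # identical every run, given the seed

        unseeded = random.Random()                                 # seed drawn from the OS
        print([sampled_pick(LOGITS, unseeded) for _ in range(5)])  # varies run to run
        ```

        Strip out the hidden RNG (or pin its seed) and the “different answers to the same prompt” behaviour disappears entirely.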

          • CheeseNoodle@lemmy.world · edited · 5 hours ago

            So I’d go with no at the moment, because I can easily get an LLM to contradict itself repeatedly in incredibly obvious ways.

            I had a long-ass post, but I think it comes down to this: we don’t know what consciousness or self-awareness even are, and we just kind of collectively agree upon it when we think we see it, sort of like how morality is pretty much a mutable group consensus.

            The only way I think we could be truly sure would be to stick it in a simulated environment and see how it reacts over a few thousand simulated years, to figure out whether it’s one of the following:

            • Chinese room: The potential AI in question just keeps dying, because despite seeming intelligent when prompted with training data, it has no ability to function when it’s not spoon-fed the required information in advance. (I think current LLMs are here, given my initial statement in this post.)
            • Animal: It survives, but never really advances beyond figuring out the behaviours required for survival. It’s certainly conscious at this point, but it works more like a dog: it can follow commands and carry out tasks, but has no true understanding of the meaning behind them.
            • Person: It starts seeking out information in ways not immediately necessary for its survival, and basically does what we did with the whole tool thing and speculative reasoning skills. If it invents an equivalent to writing, then we can be pretty damn certain it’s human-level, and not more like corvids (tools) or ants (agriculture).

            Now personally I think that test is likely impractical, so we’re probably going to default to “it’s conscious when it can convince the majority of people that it’s conscious for a sustained period”… So I guess it has free will when it can start, or at least spark, a large grassroots civil rights movement?

        • Womble@lemmy.world · 17 hours ago

          If viruses have free will when they are machines made out of RNA which just inject code into other cells to make copies of themselves, then the concept is meaningless (and it also applies to computer programs far simpler than LLMs).