cross-posted from: https://lemmy.ml/post/2811405

"We view this moment of hype around generative AI as dangerous. There is a pack mentality in rushing to invest in these tools, while overlooking the fact that they threaten workers and impact consumers by creating lesser quality products and allowing more erroneous outputs. For example, earlier this year Americaā€™s National Eating Disorders Association fired helpline workers and attempted to replace them with a chatbot. The bot was then shut down after its responses actively encouraged disordered eating behaviors. "

  • Kogasa@programming.dev · 1 year ago

    No, you're wrong. All interesting behavior of ML models is emergent. It is learned, not programmed. The fact that it can perform what we consider an abstract task with success clearly distinguishable from random chance is irrefutable proof that some model of the task has been learned.
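
    To pin down what "learned, not programmed" means in practice, here is a deliberately tiny Python sketch (purely illustrative; the function names, data, and spam example are all made up and have nothing to do with how a real LLM is trained). One rule is written out explicitly by the programmer; the other is a threshold inferred from labeled examples and never appears in the source code.

    ```python
    # Toy contrast between "programmed" and "learned" behavior. Everything here
    # is invented for illustration and is nothing like an actual LLM.

    def programmed_is_spam(text):
        # Programmed: the author wrote the rule explicitly.
        return "free money" in text.lower()

    def learn_threshold(examples):
        # "Learned": pick the score threshold that best separates the labeled
        # examples. The resulting rule comes from the data, not the programmer.
        best_t, best_acc = 0.0, -1.0
        for t, _ in examples:
            acc = sum((score >= t) == bool(label) for score, label in examples) / len(examples)
            if acc > best_acc:
                best_t, best_acc = t, acc
        return best_t

    data = [(0.01, 0), (0.02, 0), (0.05, 0), (0.20, 1), (0.35, 1), (0.40, 1)]  # (score, is_spam)
    threshold = learn_threshold(data)
    print(programmed_is_spam("Claim your FREE MONEY now"))  # True, by an explicit rule
    print(0.30 >= threshold)                                # True, by a learned rule
    ```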

    • Norgur@kbin.social · 1 year ago

      No one said anything about "learned" vs "programmed". Literally no one.

      • Kogasa@programming.dev · 1 year ago

        OP is saying it's impossible for an LLM to have "figured out" how something works, and that if it understood anything, it would be able to perform related tasks perfectly reliably. They didn't use those words, but that's what they meant. Sorry for your reading comprehension.

        • Norgur@kbin.social · 1 year ago

          The "OP" you are referring to is… well… myself. Since you didn't comprehend that from the posts above, my reading comprehension might not be the issue here.

          But in all seriousness: I think this is an issue with concepts. No one is saying that LLMs can't "learn"; that would be stupid. But the discussion is not "is everything programmed into the LLM, or does it recombine stuff". You seem to reason that when someone says the LLM can't "understand", that person means "the LLM can't learn", but "learning" and "understanding" are not the same thing at all. The question is not whether LLMs can learn. It's whether it can grasp concepts from the content of the words it absorbs as its learning data. If it grasped concepts (like the rules of algebra), it could reproduce them every time it is confronted with a similar problem. The fact that it can't do that shows that the only thing it does is chain words together by stochastic calculation. Really sophisticated stochastic calculation with lots of possible outcomes, but still.
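
          For a concrete (and deliberately crude) picture of what "chaining words together by stochastic calculation" means mechanically, here is a bigram toy in Python. It is not how a real LLM works; real models use learned neural networks over long contexts, and the corpus and names below are invented for illustration. Each next word is sampled from the follow-up frequencies seen in the training text, and generation is just chaining those samples.

          ```python
          # Toy "stochastic word chaining": a bigram table built from a tiny corpus,
          # generating text by repeatedly sampling the next word from observed
          # follow-up frequencies. Purely illustrative.
          import random
          from collections import defaultdict

          corpus = "the cat sat on the mat and the dog slept on the mat".split()

          follows = defaultdict(list)              # word -> words seen after it
          for prev, nxt in zip(corpus, corpus[1:]):
              follows[prev].append(nxt)

          def generate(start, length=8):
              word, out = start, [start]
              for _ in range(length):
                  options = follows.get(word)
                  if not options:                  # dead end: no observed continuation
                      break
                  word = random.choice(options)    # stochastic step: sample by frequency
                  out.append(word)
              return " ".join(out)

          print(generate("the"))  # e.g. "the mat and the cat sat on the dog"
          ```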

          • Kogasa@programming.dev · 1 year ago

            The "OP" you are referring to is… well… myself. Since you didn't comprehend that from the posts above, my reading comprehension might not be the issue here.

            I don't care. It doesn't matter, so I didn't check. Your reading comprehension is still, in fact, the issue, since you didn't understand that the "learned" vs "programmed" distinction I had referred to is completely relevant to your post.

            It's whether it can grasp concepts from the content of the words it absorbs as its learning data.

            That's what learning is. The fact that it can construct syntactically and semantically correct, relevant responses in perfect English means that it has a highly developed inner model of many things we would consider to be abstract concepts (like the syntax of the English language).

            If it grasped concepts (like the rules of algebra), it could reproduce them every time it is confronted with a similar problem

            This is wrong. It is obvious and irrefutable that it models sophisticated approximations of abstract concepts. Humans are literally no different. Humans who consider themselves to understand a concept can obviously misunderstand some aspect of the concept in some contexts. The fact that these models are not as robust as a human's doesn't mean what you're saying it means.

            the only thing it does is chain words together by stochastic calculation.

            This is a meaningless point; you're thinking at the wrong level of abstraction. This argument is equivalent to "a computer cannot convey meaningful information to a human because it simply activates and deactivates bits according to simple rules." Your statement about an implementation detail says literally nothing about the emergent behavior we're talking about.
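
            As a concrete (and invented, purely illustrative) version of that analogy, here is integer addition in Python built from nothing but bit operations. Describing it as "it just flips bits according to simple rules" is accurate at one level and says nothing about the meaningful behavior (adding numbers) those rules implement.

            ```python
            # Addition from bit operations: XOR gives the sum bits, AND + shift
            # gives the carries. Low-level description, high-level meaning.
            def add(a, b):
                # assumes non-negative integers
                while b != 0:
                    carry = (a & b) << 1   # bit positions where both inputs are 1
                    a = a ^ b              # sum of bits, ignoring carries
                    b = carry              # feed the carries back in
                return a

            print(add(19, 23))  # 42
            ```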