I’m usually the one saying “AI is already as good as it’s gonna get, for a long while.”

This article, in contrast, is quotes from the folks building the next generation of AI - saying the same thing.

  • Greg Clarke@lemmy.ca · 13 hours ago

    No, a chatbot as it’s talked about here is not an LLM. This article discusses limitations of LLM training data and infers that chatbots cannot scale as a result. There are many techniques that can be used to continue improving chatbots.

    • Buffalox@lemmy.world · 13 hours ago

      The chatbot is a front end to an LLM; you are being needlessly pedantic. What the chatbot serves you is the result of LLM queries.

      • Greg Clarke@lemmy.ca · 13 hours ago

        That may have been true for the early LLM chatbots, but not anymore. ChatGPT, for instance, now writes code to answer logical questions. The o1 models use background tokens because each response is actually the result of multiple background LLM responses.
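        A minimal sketch of that pattern - the model emitting code that is then executed, instead of guessing at the answer directly. `ask_llm` here is a hypothetical stand-in for a real model call, and the returned code is hard-coded for illustration:

```python
def ask_llm(question: str) -> str:
    # Hypothetical stand-in: a real chatbot would call a model here.
    # We hard-code the kind of code an LLM might emit for this question.
    return "result = 'strawberry'.count('r')"

def answer_with_code(question: str) -> object:
    code = ask_llm(question)   # model emits code, not a direct guess
    scope: dict = {}
    exec(code, {}, scope)      # execute in a scratch namespace
    return scope["result"]     # the answer comes from running the code

print(answer_with_code("How many 'r's are in 'strawberry'?"))  # → 3
```

        Counting letters is a classic failure mode for pure token prediction, but trivial for executed code - which is the point of the tool-use approach described above.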