• Nalivai@lemmy.world · 5 months ago

      And some of those citations and quotes will be completely false and randomly generated, but they will sound very believable, so you can't tell truth from random fiction until you check every single one of them. At which point you should ask yourself why you added the unnecessary step of burning a small portion of the rainforest to ask a random word generator for stuff, when you could have skipped that and looked for sources directly, saving all that time and energy.

      • PapstJL4U@lemmy.world · 5 months ago

        I, too, get the feeling that the RoI is not there with LLMs. Being able to include “site:” or “ext:” in a search is more efficient.
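
        For instance, a couple of illustrative queries (hypothetical examples, not from the original post) showing what those operators do:

            kaba site:de.wikipedia.org
            annual report ext:pdf

        The first restricts results to the German Wikipedia; the second returns only PDF files.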

        I just ran another test: Kaba. Just googling “kaba” gets you a German wiki article explaining it means KAkao + BAnana.

        ChatGPT: it is the combination of the first syllables of KAkao and BEutel (Beutel is German for bag).

        It just made up the important part. On top of that, ChatGPT says Kaba is a famous product in many countries; I am sure it is not.

        • Nalivai@lemmy.world · 4 months ago

          You do have this issue; you can't not have this issue. Your LLM, no matter how big the model is and how much tooling you use, does not have a criterion for truth. The fact that you have made this invisible to yourself is worse, so much worse.

        • Nalivai@lemmy.world · 4 months ago (edited)

          “LLMs are great at cutting through noise”

          Even that is not true. It doesn't have the aforementioned criterion for truth, and you can't make it have one.
          LLMs are great at generating noise that humans have a hard time distinguishing from human-written text. Nothing else. There are indeed applications for that, but due to human nature, people assume that because the text looks coherent, the information it contains must also be reliable, which is very, very dangerous.