• kate
    -1 • 4 months ago

    Can’t even really blame the AI at that point

    • @TheFriar@lemm.ee
      11 • 4 months ago

      Sure we can. If it gives you bad information because it can’t differentiate between a joke and good information… well, seems like the blame falls exactly at the feet of the AI.

      • kate
        5 • 4 months ago

        Should an LLM try to distinguish satire? Half of Lemmy users can’t even do that

        • @KevonLooney@lemm.ee
          9 • 4 months ago

          Do you just take what people say on here as fact? That’s the problem, people are taking LLM results as fact.

        • ancap shark
          1 • 4 months ago

          If it’s being used to give the definitive answer to a search, then it should. If it can’t, then it shouldn’t be used for that