• TheFriar@lemm.ee · 6 months ago

    Sure we can. If it gives you bad information because it can’t differentiate between a joke and good information…well, seems like the blame falls exactly at the feet of the AI.

    • kate@lemmy.uhhoh.com · 6 months ago

      Should an LLM try to distinguish satire? Half of Lemmy users can’t even do that.

      • KevonLooney@lemm.ee · 6 months ago

        Do you just take what people say on here as fact? That’s the problem: people are taking LLM results as fact.

      • ancap shark · 5 months ago

        If it’s being used to give the definitive answer to a search, then it should. And if it can’t, then it shouldn’t be used for that.