• apotheotic (she/her)@beehaw.org · 1 month ago

    You said “they literally do analyze text” when that is not, literally, what they do.

    And no, we don’t “all know” that. Laypeople have no way of knowing whether the AI products currently in use have any capacity for genuine understanding and reasoning, other than that the promotional material uses words like “understanding”, “reasoning”, and “thought process”, and that people talking about these products use the same words. The language we choose to use is important!

    • GetOffMyLan@programming.dev · 1 month ago

      No, it’s not. It’s pedantry and arguing semantics; it’s essentially useless and a waste of everyone’s time.

      It applies a statistical model and returns an analysis.

      I’ve never heard anyone object when someone says they used a computer to analyse something.

      It’s just the same AI bad bullshit and it’s tiring in every single thread about them.

        • apotheotic (she/her)@beehaw.org · 1 month ago

        I never made any “AI bad” arguments (in fact, I said that they may be incredibly well suited to this). I just argued for the correct use of words, and you hallucinated.

      • knightly the Sneptaur@pawb.social · 1 month ago

        LLMs aren’t “bad” (ignoring, of course, the massive content theft necessary to train them), but they are being wildly misused.

        “Analysis” is precisely one of those misuses. Grand Theft Autocomplete can’t even count: ask it how many ‘e’s are in “elephant” and you’ll get an answer anywhere from 1 to 3.
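        To see why letter-counting trips these models up: a conventional program counts characters exactly, while an LLM never sees individual characters at all, only token IDs. A minimal Python sketch of the contrast (the token split shown in the comments is illustrative, not taken from any particular model):

        ```python
        # Counting characters is a trivial, exact operation in ordinary code:
        word = "elephant"
        print(word.count("e"))  # always prints 2, on every run

        # An LLM never receives "elephant" character by character. A tokenizer
        # first maps the text to integer IDs -- the whole word may arrive as a
        # single token (the ID below is made up for illustration):
        #   "elephant" -> [46840]
        # The model then predicts the next token from statistics over such IDs,
        # so there is no character-level representation to count over; its
        # answer to "how many e's?" is just another likely-sounding token.
        ```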

        This is because they do not read or understand; they produce strings of tokens based on the statistical likelihood of what comes next. If prompted for an analysis, they’ll output something that looks like an analysis, but a human has to do the work of determining whether it is accurate.
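        To make “statistical likelihood of what comes next” concrete, here is a toy next-token sampler. The vocabulary and probabilities are invented for illustration; a real model scores tens of thousands of tokens with a neural network, but the generation loop has this shape:

        ```python
        import random

        # Toy "model": for each current token, a made-up probability
        # distribution over possible next tokens. A real LLM computes these
        # probabilities with a neural network over its whole vocabulary.
        NEXT_TOKEN_PROBS = {
            "the": {"cat": 0.5, "dog": 0.3, "analysis": 0.2},
            "cat": {"sat": 0.6, "ran": 0.4},
            "dog": {"sat": 0.3, "ran": 0.7},
            "analysis": {"looks": 1.0},
        }

        def generate(start: str, length: int) -> list[str]:
            tokens = [start]
            for _ in range(length):
                probs = NEXT_TOKEN_PROBS.get(tokens[-1])
                if probs is None:  # no known continuation for this token
                    break
                choices, weights = zip(*probs.items())
                # Sample the next token in proportion to its likelihood --
                # plausible-sounding output, with no notion of truth.
                tokens.append(random.choices(choices, weights=weights)[0])
            return tokens

        print(generate("the", 2))  # e.g. ['the', 'cat', 'sat']
        ```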

        • howrar@lemmy.ca · 1 month ago

          LLMs cannot:

          • Tell fact from fiction
          • Accurately recall data from their training sets
          • Count

          LLMs can:

          • Translate
          • Get the general vibe of a text (sentiment analysis)
          • Generate plausible text

          Semantics aside, these are very different skills that require different setups to accomplish. Just because counting is an easier task than analysing text for humans doesn’t mean it’s the same for an LLM. You can’t use that as evidence of its inability to do the “harder” tasks.
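          To illustrate the kind of task that does suit a statistical model, here is a deliberately crude lexicon-based sentiment scorer. It is not how an LLM works internally, and the word lists are invented for illustration, but it shows why “general vibe” is a fuzzy pattern-matching problem rather than an exact one like counting:

          ```python
          # Toy sentiment scorer: tally words against tiny hand-made lexicons.
          # An LLM learns far richer associations from training data instead
          # of an explicit word list, but the task has the same fuzzy shape.
          POSITIVE = {"great", "good", "love", "excellent", "happy"}
          NEGATIVE = {"bad", "awful", "hate", "terrible", "sad"}

          def vibe(text: str) -> str:
              words = text.lower().split()
              score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
              if score > 0:
                  return "positive"
              if score < 0:
                  return "negative"
              return "neutral"

          print(vibe("I love this and it is great"))  # positive
          print(vibe("this was awful and sad"))       # negative
          ```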

    • Rivalarrival · 1 month ago

      The human capacity for reason is greatly overrated. The overwhelming majority of conversation is regurgitated thought, which is exactly what LLMs are designed to do.

      • apotheotic (she/her)@beehaw.org · 1 month ago

        I don’t really dispute that, but at least we are able to apply formal analytical methods with repeatable outcomes. LLMs might (and do) achieve similar results, but they do so without any formal approach that can be reviewed, which has its drawbacks.
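        A small illustration of what “repeatable outcomes” buys you: a formal analysis is a deterministic function of its input, so anyone can rerun it, get the same answer, and review the method itself. A minimal sketch, with exact word-frequency counting standing in for any formal method:

        ```python
        from collections import Counter

        def formal_analysis(text: str) -> list[tuple[str, int]]:
            """A deterministic, reviewable analysis: exact word frequencies.
            Same input -> same output, every run, on every machine; the
            method is plain code that can be inspected and criticised."""
            return Counter(text.lower().split()).most_common(3)

        text = "the cat sat on the mat because the mat was warm"
        assert formal_analysis(text) == formal_analysis(text)  # repeatable
        print(formal_analysis(text))  # e.g. [('the', 3), ('mat', 2), ('cat', 1)]

        # A sampled LLM answer, by contrast, can vary between runs, and even
        # when it doesn't, the "method" is opaque weights rather than steps
        # a reviewer can follow.
        ```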