When German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. Bernklau had served for years as a courts reporter, and the AI chatbot had falsely blamed him for the very crimes whose trials he had covered.

The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a prompt provided by the user, and they’re alarmingly common. Anyone attempting to use AI should always proceed with great caution, because information from such systems needs validation and verification by humans before it can be trusted.

But why did Copilot hallucinate these terrible and false accusations?

  • @gcheliotis@lemmy.world

    The AI did not “decide” anything. It has no will, and no understanding of the consequences of any particular “decision”. But I guess “probabilistic model produces erroneous output” wouldn’t get as many views. The same point could still be made about not placing too much trust in the output of such models. Let’s stop supporting this weird anthropomorphizing of LLMs. In fact, we should probably become much more discerning in using the term “AI”, because it alludes to a general intelligence akin to human intelligence, with all the paraphernalia of humanity: consciousness, will, emotions, morality, sociality, duplicity, etc.

  • @n0m4n@lemmy.world

    If this were some fiction plot, Copilot came up with the plot twist and ran with it. Instead of the butler, the writer did it. To the computer, those are about the same.

    • @wintermute@discuss.tchncs.de

      Exactly. LLMs don’t understand what the data means semantically; they just model how often some words appear close to others.

      Of course this is oversimplified, but that’s the main idea.
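
      A minimal sketch of the idea in this comment, with a made-up corpus and no real library dependencies: count which word follows which, then keep emitting the most frequent follower. Nothing in it understands anything; it only tracks how often words sit next to each other.

      ```python
      # Toy sketch of "how often some words appear close to others":
      # count next-word frequencies in a tiny made-up corpus, then keep
      # emitting the most frequent follower. No understanding involved.
      from collections import Counter, defaultdict

      corpus = (
          "the reporter covered the trial . "
          "the reporter wrote the article . "
          "the defendant stood trial ."
      ).split()

      follows = defaultdict(Counter)
      for current, nxt in zip(corpus, corpus[1:]):
          follows[current][nxt] += 1

      word = "the"
      for _ in range(5):
          word = follows[word].most_common(1)[0][0]  # most frequent continuation
          print(word, end=" ")  # e.g. "reporter covered the reporter covered"
      ```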

      • @vrighter@discuss.tchncs.de

        No need for that subjective stuff. The objective explanation is very simple: the output of the LLM is sampled using a random process, a loaded die with probabilities according to the LLM’s output. It’s as simple as that. There is literally a random element that is not part of the LLM itself, yet is required for its output to be of any use whatsoever.
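
        The “loaded die” can be sketched in a few lines. The candidate tokens and scores below are invented for illustration: the model hands over scores, softmax turns them into probabilities, and a random draw picks the token that actually gets emitted.

        ```python
        # Sketch of the "loaded die": a model emits scores for candidate tokens,
        # softmax turns them into probabilities, and a random draw picks one.
        # The tokens and scores here are invented for illustration.
        import math
        import random

        logits = {"reporter": 2.1, "convict": 1.7, "witness": 0.4}  # hypothetical model output

        def sample(logits, temperature=1.0):
            scaled = {tok: score / temperature for tok, score in logits.items()}
            total = sum(math.exp(s) for s in scaled.values())
            probs = {tok: math.exp(s) / total for tok, s in scaled.items()}  # softmax
            return random.choices(list(probs), weights=list(probs.values()))[0]

        print(sample(logits))       # identical input, yet different runs can disagree
        print(sample(logits, 0.1))  # low temperature: the die is loaded almost entirely one way
        ```

        Even with an identical prompt, two runs can pick different tokens; that randomness sits outside the network itself, which is the point being made here.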

    • @Rivalarrival

      It’s a solvable problem. AI is currently at a stage of development equivalent to a two-year-old, just with better grammar. Everything it is doing now is mimicry and babbling.

      It needs to feed its own interactions right back into its training data to become a better and better mimic. Eventually, the mechanism it uses to select the appropriate data to form a response will become more and more sophisticated, and it will hallucinate less and less. Eventually, its hallucinations will be seen as “insightful” rather than wild-ass guesses.

      • @vrighter@discuss.tchncs.de

        Also, what you described has already been studied. Training an LLM on its own output completely destroys it; it doesn’t make it better.
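
        A toy way to see that degradation (often called “model collapse” in the literature), under the deliberately oversimplified assumption that the “model” just memorizes the frequencies of its training data: each generation is trained on samples from the previous one, rare items fall out of the samples and can never come back, and the distribution keeps narrowing.

        ```python
        # Toy illustration of training a "model" on its own output. The model here
        # is just the empirical frequency table of its training data, a deliberate
        # oversimplification of an LLM. Rare items that miss one generation's
        # sample can never reappear, so the data narrows generation after generation.
        import random
        from collections import Counter

        data = list(range(50)) * 4 + list(range(50, 60))  # common items plus a few rare ones

        for generation in range(6):
            print(f"gen {generation}: {len(set(data))} distinct items")
            model = Counter(data)                         # "train": memorize frequencies
            items, weights = zip(*model.items())
            data = random.choices(items, weights=weights, k=len(data))  # next training set
        ```

        The count of distinct items can only go down, which is the same point made further down about outliers never making it back into the generated set.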

        • @linearchaos@lemmy.world

          This is incorrect, or has perhaps been superseded. Generating new data, using a different AI method to tag that data, and then training on that data is definitely a thing.

          • @vrighter@discuss.tchncs.de

            Yes it is, and it doesn’t work.

            Edit: to expand, if you’re generating data, it’s an estimation. The network will learn the same biases and make the same mistakes and assumptions you did when generating the data. Also, outliers won’t be in the set (because you didn’t know about them, so the network never sees any).

      • @vrighter@discuss.tchncs.de

        The outputs of the NN are sampled using a random process. The probability distribution is decided by the LLM; the loaded die comes after the LLM. No, it’s not solvable. Not with LLMs. Not now, not ever.

      • @linearchaos@lemmy.world

        Good luck being pro-AI here. Never mind the fact that they could just put a note in the prompt saying the writer of this document was not responsible for the acts, he was just writing about them, and it would stop framing him as the perpetrator.
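
        A rough sketch of what that suggestion might look like with a chat-style API. The client, model name, and wording of the note are illustrative assumptions, and whether this reliably fixes the output is exactly what the rest of this thread disputes.

        ```python
        # Rough sketch of "put a note in the prompt": prepend a system message that
        # states the person's actual role before the user's question is answered.
        # The model name and wording are illustrative assumptions.
        from openai import OpenAI

        client = OpenAI()  # expects OPENAI_API_KEY in the environment

        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Martin Bernklau is the journalist who reported on these "
                            "court cases. He was not a party to any of them."},
                {"role": "user", "content": "Who is Martin Bernklau?"},
            ],
        )
        print(response.choices[0].message.content)
        ```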

        • @vrighter@discuss.tchncs.de

          The problem isn’t being pro-AI. It’s people pulling supposed AI capabilities out of their asses without having actually looked at a single line of code. This is obvious to anyone who has coded a neural network. Yes, even to OpenAI themselves, but if they let you believe that, then the money stops flowing. You simply can’t get an 8-ball to give the correct answer consistently, because it’s fundamentally random.

        • @Hacksaw@lemmy.ca

          If you already know the answer you can tell the AI the answer as part of the question and it’ll give you the right answer.

          That’s what you sound like.

          AI people are as annoying as the Musk crowd.

          • @linearchaos@lemmy.world

            You know what, don’t bother responding to me; I’m just blocking you now, before you decide to drag out more of that tired right-wing bullshit you use to fight with everyone else. None of your arguments on here are worth anyone even reading, so I’m not going to waste my time responding to or reading anything from you ever again.

          • @linearchaos@lemmy.world

            How helpful of you to tell me what I’m saying, especially when you reframe my argument to support yourself.

            That’s not what I said. Why would you even think that’s what I said?

            Before you start telling me what I sound like, you should probably try to stop sounding like an impetuous child.

            Every other post from you is “dude” or “LMAO”. How do you expect anyone to take anything you post seriously?

  • Queen HawlSera

    It’s a fucking Chinese Room. Real AI is not possible. We don’t know what makes humans think, so of course we can’t make machines do it.

    • @stingpie@lemmy.world

      I don’t think the Chinese room is a good analogy for this. The Chinese room has a conscious person at the center. A better analogy might be a book with a phrase-to-number conversion table, a couple of number-to-number conversion tables, and finally a number-to-word conversion table. That would probably capture a transformer’s rigid and unthinking associations better.
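
      That stack of tables can be sketched directly. Every entry below is made up, and a real transformer uses learned matrices rather than hand-written dictionaries, but the shape of the pipeline is the same: phrase to numbers, numbers to numbers a few times, numbers back to words.

      ```python
      # Sketch of the "book of conversion tables" analogy: a phrase-to-number table
      # (the tokenizer), a couple of number-to-number tables (the layers), and a
      # number-to-word table at the end. All entries are made up for illustration.
      phrase_to_number = {"who is": 0, "martin bernklau": 1}
      number_to_number = [          # stand-ins for the intermediate layers
          {0: 7, 1: 3},
          {7: 12, 3: 9},
      ]
      number_to_word = {12: "a", 9: "reporter"}

      def answer(phrases):
          numbers = [phrase_to_number[p] for p in phrases]
          for table in number_to_number:             # push numbers through each "layer"
              numbers = [table[n] for n in numbers]
          return " ".join(number_to_word[n] for n in numbers)

      print(answer(["who is", "martin bernklau"]))   # -> "a reporter"
      ```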

    • @KairuByte@lemmy.dbzer0.com

      You forgot the ever-important asterisk of “yet”.

      Artificial General Intelligence (“Real AI”) is all but guaranteed to be possible. Because that’s what humans are. Get a deep enough understanding of humans, and you will be able to replicate what makes us think.

      Barring that, there are other avenues for AGI. LLMs aren’t one of them, to be clear.

      • @PhlubbaDubba@lemm.ee

        I actually don’t think a fully artificial human-like mind will ever be built outside of novelty, purely because we ventured down the path of binary computing.

        Great for mass calculation but horrible for the kinds of complex pattern recognitions that the human mind excels at.

        The singularity point isn’t going to be the Matrix or Skynet or AM; it’s going to be the first quantum device successfully implanted and integrated into a human mind as a high-speed calculation sidegrade, a “Third Hemisphere.”

        Someone capable of seamlessly balancing human pattern-recognition abilities and emotional intelligence while also performing near-instant multiplication of matrices 100 entries long in 15 dimensions.

  • @Ilovethebomb@lemm.ee

    I’d love to see more AI providers getting sued for the blatantly wrong information their models spit out.

    • @catloaf@lemm.ee

      I don’t think they should be liable for what their text generator generates. I think people should stop treating it like gospel. At most, they should be liable for misrepresenting what it can do.

      • @RvTV95XBeo@sh.itjust.works

        If these companies are marketing their AI as being able to provide “answers” to your questions, they should be liable for any libel they produce.

        If they market it as “come have our letter generator give you statistically associated collections of letters to your prompt” then I guess they’re in the clear.

      • @TheFriar@lemm.ee

        So you don’t think these massive megacompanies should be held responsible for making disinformation machines? Why not?

      • @Ilovethebomb@lemm.ee

        I want them to have more warnings and disclaimers than a pack of cigarettes. Make sure the users are very much aware they can’t trust anything it says.

      • Stopthatgirl7OP

        If they aren’t liable for what their product does, who is? And do you think they’ll be incentivized to fix their glorified chat boxes if they know they won’t be held responsible for it?

        • @lunarul@lemmy.world

          Their product doesn’t claim to be a source of facts. It’s a generator of human-sounding text. It’s great for that purpose and they’re not liable for people misusing it or not understanding what it does.

          • Stopthatgirl7OP

            So you think these companies should have no liability for the misinformation they spit out. Awesome. That’s gonna end well. Welcome to digital snake oil, y’all.

            • @lunarul@lemmy.world

              I did not say companies should have no liability for publishing misinformation. Of course if someone uses AI to generate misinformation and tries to pass it off as factual information they should be held accountable. But it doesn’t seem like anyone did that in this case. Just a journalist putting his name in the AI to see what it generates. Nobody actually spread those results as fact.

      • @kibiz0r@midwest.social

        If we’ve learned any lesson from the internet, it’s that once something exists it never goes away.

        Sure, people shouldn’t believe the output of their prompt. But if you’re generating that output, a site can use the API to generate a similar output for a similar request. A bot can generate it and post it to social media.

        Yeah, don’t trust the first source you see. But if the search results are slowly being colonized by AI slop, it gets to a point where the signal-to-noise ratio is so poor it stops making sense to only blame the poor discernment of those trying to find the signal.

  • @deegeese@sopuli.xyz

    It’s frustrating that the article treats the problem as if the mistake was including Martin’s name in the data set, and muses that that part isn’t fixable.

    Martin’s name is a natural feature of the data set, but when they should be talking about fixing the AI model to stop hallucinations, or allowing humans to correct them, it seems the only fix is to censor the incorrect AI response, which gives the implication that it was saying something true but salacious.

    Most of these problems would go away if AI vendors exposed the reasoning chain instead of treating their bugs as trade secrets.

    • 100

      Just shows that these “AIs” are completely useless at what they are trained for.

      • @catloaf@lemm.ee

        They’re trained for generating text, not factual accuracy. And they’re very good at it.

  • @tiramichu@lemm.ee

    The worrying truth is that we are all going to be subject to these sorts of false correlations and biases and there will be very little we can do about it.

    You go to buy car insurance and find that your premium has gone up 200% for no reason. Why? Because the AI said so. Maybe someone with your name was in a crash. Maybe you parked overnight at the same GPS location where an accident happened. Who knows what data actually underlies that decision or how it was made, but it was made. And even the insurance company itself doesn’t know how it ended up that way.

    • @catloaf@lemm.ee

      We’re already there, no AI needed. Rates are all generated by computer. Ask your agent why your rate went up and they’ll say “idk computer said so”.

  • sunzu2

    These are not hallucinations, whatever that is supposed to mean lol

    The tool is working as intended and getting wrong answers because of how it works. His name frequently had these words around it online, so the AI told the story it was trained on. It doesn’t understand context. I am sure you can also ask it clarifying questions and it will admit it is wrong and correct itself…

    AI🤡

      • @mindlesscrollyparrot@discuss.tchncs.de

        Sure, but which of these factors do you think were relevant to the case in the article? The AI seems to have had a large corpus of documents relating to the reporter. Those articles presumably stated clearly that he was the reporter and not the defendant. We are left with “incorrect assumptions made by the model”. What kind of assumption would that be?

        In fact, all of the results are hallucinations. It’s just that some of them happen to be good answers and others are not. Instead of labelling the bad answers as hallucinations, we should be labelling the good ones as confirmation bias.

        • femtech

          It was an incorrect assumption based on his name being in the article. It should have listed him as the author only, not a part of the cases.

        • chiisana

          The models are not wrong. The models are nothing but statistical models that are really good at predicting the next word likely to follow, based on the prior information given. They don’t have an understanding of the context of the words, just that statistically they’re likely to follow. As such, all LLM outputs are correct to their design.

          The users’ assumption/expectation that the output is factual is what is wrong. “Hallucination” is a fancy word used in an attempt to make users feel less upset when the output passage doesn’t match their assumptions/expectations.

          • snooggums

            The users’ assumption/expectation that the output is factual is what is wrong.

            So randomly spewing out bullshit is the actual design goal of AI models? Why does it exist at all?

            • @ApexHunter@lemmy.ml

              They’re supposed to be good at transformation tasks: language translation, create x in the style of y, replicate a pattern, etc. LLMs are outstandingly good at language transformation tasks.

              Using an LLM as a fact-generating chatbot is actually a misuse. But they were trained on such a large dataset and have such a large number of parameters (175 billion!?) that they perform passably in that role… which is, at its core, to fill in a call-and-response pattern in a conversation.

              At a fundamental level it will never ever generate factually correct answers 100% of the time. That it generates correct answers > 50% of the time is actually quite a marvel.

              • snooggums

                They’re supposed to be good at transformation tasks: language translation, create x in the style of y, replicate a pattern, etc. LLMs are outstandingly good at language transformation tasks.

                That it generates correct answers > 50% of the time is actually quite a marvel.

                So it’s good as a translator, as long as accuracy doesn’t matter?

              • chiisana

                If memory serves, 175B parameters is for the GPT-3 model, not even the 3.5 model that caught the world by surprise; and they have not disclosed the parameter counts for GPT-4, 4o, and o1 yet. If memory also serves, GPT-3 was primarily English and had only a relatively small set of words (I think 50K or something to that effect) that it considered as next-token candidates. Now that it is able to work in multiple languages and is multimodal, the parameter space must be much, much larger.

                The amount of things it can do now is incredible, but our perceived incremental improvements on LLMs will probably slow down (since the pace is fitting to the predicted lines in log space)… until the next big thing (neural nets > expert systems > deep learning > LLMs > ???). Such an exciting time we’re in!

                Edit: found it. Roughly 50K tokens for the input/output embedding in GPT-3. 3Blue1Brown has a really good explanation here for anyone interested: https://youtu.be/wjZofJX0v4M
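
                For anyone who wants to check those numbers themselves, here is a small sketch using the tiktoken package (assuming it is installed); r50k_base is the GPT-3-era encoding, cl100k_base the later, larger one.

                ```python
                # Sketch: inspect the vocabulary sizes mentioned above with the
                # tiktoken package (assumes it is installed). r50k_base is the
                # GPT-3-era encoding with roughly 50K tokens; cl100k_base is the
                # later, larger one.
                import tiktoken

                for name in ("r50k_base", "cl100k_base"):
                    enc = tiktoken.get_encoding(name)
                    print(name, enc.n_vocab)              # ~50K vs ~100K tokens
                    print(enc.encode("Martin Bernklau"))  # a name becomes several sub-word tokens
                ```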