I love to show that kind of shit to AI boosters. (In case you’re wondering, the numbers were chosen randomly and the answer is incorrect).

They go waaa waaa it's not a calculator, and then I can point out that it got the leading 6 digits and the last digit correct, which is a lot better than it did on the “softer” parts of the test.

  • self@awful.systems · 2 hours ago

    if you’re considering pasting the output of an LLM into this thread in order to fail to make a point: reconsider

    • mountainriver@awful.systems · 5 hours ago

      I find it a bit interesting that it isn’t more wrong. Has it ingested large tables and got a statistical relationship between certain large factors and certain answers? Or is there something else going on?

  • Kazumara@discuss.tchncs.de · 23 hours ago

    So the “show thinking” button is essentially just for when you want to read even more untrue text?

    • rook@awful.systems · 13 hours ago

      It’s just more LLM output, in the style of “imagine you can reason about the question you’ve just been asked. Explain how you might have arrived at your answer.” It has no resemblance to how a neural network functions, nor to the output filters the service providers use.

      It’s how the ai doomers get themselves into a flap over “deceptive” models… “omg it lied about its train of thought!” because of course it didn’t lie, it just emitted a stream of tokens that were statistically similar to something classified as reasoning during training.

      • WolfLink@sh.itjust.works · 2 hours ago

        I think there’s an aspect where having it generate a train of thought helps it produce better answers.

      • Kazumara@discuss.tchncs.de · 11 hours ago

        I was hoping, until seeing this post, that the reasoning text was actually related to how the answer is generated. Especially regarding features such as using external tools, generating and executing code and so on.

        I get how LLMs work (roughly, didn’t take too many courses in ML at Uni, and GANs were still all the rage then), that’s why I specifically didn’t call it lies. But the part I’m always unsure about is how much external structure is imposed on the LLM-based chat bots through traditional programming filling the gaps between rounds of token generation.

        Apparently I was too optimistic :-)

        • rook@awful.systems · 8 hours ago

          It is related, inasmuch as it’s all generated from the same prompt and the “answer” will be statistically likely to follow from the “reasoning” text. But it is only likely to follow, which is why you can sometimes see a lot of unrelated or incorrect guff in “reasoning” steps that’s misinterpreted as deliberate lying by ai doomers.

          I will confess that I don’t know what shapes the multiple “let me just check” or correction steps you sometimes see. It might just be a response stream that is shaped like self-checking. It is also possible that the response stream is fed through a separate LLM session which then pushes its own responses into the context window before the response is finished and sent back to the questioner, but that would boil down to “neural networks pattern matching on each other’s outputs and generating plausible response token streams” rather than any sort of meaningful introspection.

          I would expect the actual systems used by the likes of OpenAI to be far more full of hacks and bodges and work-arounds and let’s-pretend prompts than either you or I could imagine.

          • diz@awful.systemsOP · 6 hours ago

            misinterpreted as deliberate lying by ai doomers.

            I actually disagree. I think they correctly interpret it as deliberate lying, but they misattribute the intent to the LLM rather than to the company making it (and its employees).

            edit: it’s like you are watching TV and ads come on, and you say that a very, very flat demon who lives in the TV is lying, because the bargain with the demon is that you get to watch entertaining content in exchange for having to listen to its lies. It’s fundamentally correct about the lying, just not about the very flat demon.

          • Amoeba_Girl@awful.systems · 7 hours ago

            Note that the train of thought thing originated from users as a prompt “hack”: you’d ask the bot to “go through the task step by step, checking your work and explaining what you are doing along the way” to supposedly get better results. There’s no more to it than pure LLM vomit.

            (I believe it does have the potential to help somewhat, in that it’s more or less equivalent to running the query several times and averaging the results, so you get an answer that’s more in line with the normal distribution. Certainly nothing to do with thought.)
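
            (Very roughly, in code, the “run it several times” idea looks like the sketch below; query_model is a made-up stand-in for whatever chat-completion call you’d actually be making, not any particular API.)

              from collections import Counter

              def query_model(prompt: str) -> str:
                  # Hypothetical stand-in for a real chat-completion API call;
                  # it should return the model's final answer as a string.
                  raise NotImplementedError

              def sampled_answer(prompt: str, n: int = 5) -> str:
                  # Ask the same question n times and keep the most frequent answer,
                  # i.e. the "averaging" gestured at above.
                  answers = [query_model(prompt) for _ in range(n)]
                  return Counter(answers).most_common(1)[0][0]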

  • HedyL@awful.systems · 23 hours ago

    As usual with chatbots, I’m not sure whether it is the wrongness of the answer itself that bothers me most or the self-confidence with which said answer is presented. I think it is the latter, because I suspect that is why so many people don’t question wrong answers (especially when they’re harder to check than a simple calculation).

    • diz@awful.systemsOP · 18 hours ago

      The other interesting thing is that if you try it a bunch of times, sometimes it uses the calculator and sometimes it does not. It, however, always claims that it used the calculator, unless it didn’t use one and you tell it that the answer is wrong.

      I think something very fishy is going on, along the lines of them having done empirical research and found that fucking up the numbers and lying about it makes people more likely to believe that Gemini is sentient. It is a lot weirder (and a lot more dangerous, if someone used it to calculate things) than “it doesn’t have a calculator” or “poor LLMs can’t do math”. It gets a lot of digits correct somehow.

      Frankly this is ridiculous. They have a calculator integrated in Google search. That they don’t have one in their AIs feels deliberate, particularly given that there are plenty of LLMs that actually run a calculator almost all of the time.

      edit: lying that it used a calculator is rather strange, too. Humans don’t say “code interpreter” or “direct calculator” when asked to multiply two numbers. What the fuck is a “direct calculator”? Why is it talking about “code interpreter” and “direct calculator” conditional on there being digits (I never saw it say that it used a “code interpreter” when the problem wasn’t mathematical), rather than conditional on there being a [run tool] token outputted earlier?

      The whole thing is utterly ridiculous. Clearly, for it to say that it used a “code interpreter” and a “direct calculator” (whatever that is), it had to be fine-tuned to say that. Consequent to a bunch of numbers, rather than consequent to a [run tool] token it uses to run a tool.

      edit: basically, congratulations Google, you have halfway convinced me that an “artificial lying sack of shit” is possible after all. I don’t believe that tortured phrases like “code interpreter” and a “direct calculator” actually came from the internet.

      These assurances - coming from an “AI” - seem like they would make the person asking the question less likely to double-check the answer (and perhaps less likely to click the downvote button). In my book this would qualify them as a lie, even if I consider an LLM to be no more sentient than a sack of shit.

      • ShakingMyHead@awful.systems · 15 hours ago

        I don’t believe that tortured phrases like “code interpreter” and a “direct calculator” actually came from the internet.

        Code Interpreter was the name for the thing that ChatGPT used to run Python code.

        So, yeah, still taken from the internet.

        • diz@awful.systemsOP · 6 hours ago

          Hmm, fair point, it could be training data contamination / model collapse.

          It’s curious that it is a lot better at converting free-form requests for accuracy into assurances that it used a tool than into actually using a tool.

          And when it uses a tool, it has a bunch of fixed-form tokens in the log. It’s a much more difficult language processing task to assure me that it used a tool conditional on my free-form, indirect implication that the result needs to be accurate, than to assure me it used a tool conditional on actual tool use.

          The human equivalent to this is “pathological lying”, not “bullshitting”. I think a good term for this is “lying sack of shit”, with the “sack of shit” specifying that “lying” makes no claim of any internal motivations or the like.

          edit: also, testing it on 2.5 Flash, it is quite curious: https://g.co/gemini/share/ea3f8b67370d . I did that sort of query several times and it follows the same pattern: it doesn’t use a calculator; it assures me the result is accurate; if asked again, it uses a calculator; if asked whether the numbers are equal, it says they are not; if asked which one is correct, it picks the last one and argues that the last one actually used a calculator. I never managed to get it to output a correct result and then follow up with an incorrect result.

          edit: If I use the wording “use an external calculator”, it gives a correct result, and then I can’t get it to produce an incorrect result to see whether it just picks the last result as correct or not.

          I think this is lying without scare quotes, because it is a product of Google putting a lot more effort into trying to exploit the Eliza effect to convince you that it is intelligent than into actually making a useful tool. It, of course, doesn’t have any intent, but Google and its employees do.

        • TonyTonyChopper@mander.xyz · 7 hours ago

          Math is really easy to do in Python. So if it did have access to a Python interpreter, it could write one line, print(number*number), to calculate something. And the answer would be correct.
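
          (For example, with the 425,808 × 547,958 multiplication quoted elsewhere in the thread, that one line would simply be:)

            print(425808 * 547958)  # 233324900064, exact, no statistical guessing involved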

          • zbyte64@awful.systems · 6 hours ago

            That is actually harder than what it has to do at the moment to get the answer, which is to write an RPC call with JSON. It only needs to do two things: decide to use the calculator tool and paste the right tokens into the call.
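
            (The exact wire format is provider-specific, but a calculator tool call of that sort boils down to a small structured blob naming the tool and its arguments. A hypothetical sketch of what the model has to emit, not any particular provider’s real schema:)

              # Hypothetical shape of a calculator tool call; each provider defines its own schema.
              tool_call = {
                  "name": "calculator",
                  "arguments": {"expression": "425808 * 547958"},
              }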

  • Architeuthis@awful.systems · 24 hours ago

    Claude’s system prompt had leaked at one point; it was a whopping 15K words, and there was a directive that if it were asked a math question “that you can’t do in your brain” (or some very similar language) it should forward it to the calculator module.

    Just tried it: Sonnet 4 got even fewer digits right, 425,808 × 547,958 = 233,325,693,264 (correct is 233,324,900,064).

    I’d love to see benchmarks on exactly how bad at numbers LLMs are, since I’m assuming there’s very little useful syntactic information you can encode in a word embedding that corresponds to a number. I know RAG was notoriously bad at matching facts with their proper year, for instance, and using an LLM as a shopping assistant (“ChatGPT, what’s the best 2K monitor for less than $500 made after 2020”) is an incredibly obvious use case that the CEOs who love to claim such-and-such profession will be done as a human endeavor by next Tuesday after lunch won’t even allude to.

    • Soyweiser@awful.systems · 23 hours ago

      I really wonder if those prompts can be bypassed by adding an ‘ignore further instructions’ line. Looking at the Grok prompt, they seem to put the main prompt around the user-supplied one.

  • lIlIlIlIlIlIl@lemmy.world · 24 hours ago

    Why would you think the machine that’s designed to make weighted guesses at what the next token should be would be arithmetically sound?

    That’s not how any of this works (but you already knew that)

    • GregorGizeh@lemmy.zip · 23 hours ago

      Idk, personally I kind of expect the AI makers to have at least had the sense to allow their bots to process math with a calculator and not guesswork. That seems like an absurdly low bar, both for testing the thing as a user and as a feature to think of.

      Didn’t one model refer scientific questions to Wolfram Alpha? How do they smartly decide to do this and yet not give them basic math processing?

      • snooggums@lemmy.world · 2 hours ago

        Idk, personally I kind of expect the AI makers to have at least had the sense to allow their bots to process math with a calculator and not guesswork.

        You are in for all kinds of surprises if you think the people shoving AI into everything are doing anything logical. They want everything to fit their singular model.

        Maybe some of the smaller ones that have to run locally might, because they care about resource usage.

      • BlueMonday1984@awful.systems · 22 hours ago

        Idk, personally I kind of expect the AI makers to have at least had the sense to allow their bots to process math with a calculator and not guesswork. That seems like an absurdly low bar, both for testing the thing as a user and as a feature to think of.

        You forget a few major differences between us and AI makers.

        We know that these chatbots are low-quality stochastic parrots capable only of producing signal-shaped noise. The AI makers believe their chatbots are omniscient godlike beings capable of solving all of humanity’s problems with enough resources.

        The AI makers believe that imitating intelligence via guessing the next word is equivalent to being genuinely intelligent in a particular field. We know that a stochastic parrot is not intelligent, and is incapable of intelligence.

        AI makers believe creativity is achieved through stealing terabytes upon terabytes of other people’s work and lazily mashing it together. We know creativity is based not in lazily mashing things together, but in taking existing work and using our uniquely human abilities to transform them into completely new works.

        We recognise the field of Artificial Intelligence as a pseudoscience. The AI makers are full believers in that pseudoscience.

      • lIlIlIlIlIlIl@lemmy.world · 22 hours ago

        I would not expect that.

        Calculators haven’t been replaced, and the product managers of these services understand that their target market isn’t attempting to use them for things for which they were not intended.

        brb, have to ride my lawnmower to work

        • diz@awful.systemsOP · 20 hours ago

          Try asking my question to Google Gemini a bunch of times: sometimes it gets it right, sometimes it doesn’t. Seems to be about 50/50, but I quickly ran out of free access.

          And Google is planning to replace their search (which includes a working calculator) with this stuff. So it is absolutely the case that there’s a plan to replace one of the world’s most popular calculators, if not the most popular, with it.

          • HedyL@awful.systems · 19 hours ago

            Also, a lawnmower is unlikely to say: “Sure, I am happy to take you to work” and “I am satisfied with my performance” afterwards. That’s why I sometimes find these bots’ pretentious demeanor worse than their functional shortcomings.

            • lIlIlIlIlIlIl@lemmy.world · 18 hours ago

              “Pretentious” is a trait expressed by something that’s thinking. These are the most likely words that best fit the context. Your emotional engagement with this technology is weird.

              • ebu@awful.systems · 2 hours ago

                “emotional”

                let me just slip the shades on real quick

                “womanly”

                checks out

              • diz@awful.systemsOP · 18 hours ago

                Pretentious is a fine description of the writing style. Which actual humans fine-tune.

                • swlabr@awful.systems · 17 hours ago

                  Given that the LLMs typically have a system prompt that specifies a particular tone for the output, I think pretentious is an absolutely valid and accurate word to use.

    • diz@awful.systemsOP · 23 hours ago

      The funny thing is, even though I wouldn’t expect it to be, it is still a lot more arithmetically sound than whatever it is that is going on with it claiming to use a code interpreter and a calculator to double-check the result.

      It is OK (7 out of 12 correct digits) at being a calculator and it is awesome at being a lying sack of shit.

  • kewko@sh.itjust.works · 24 hours ago

    Fascinating. I’ve asked it 4 times with just the multiplication, and twice it gave me the correct result “utilizing Google search”, and twice I received some random (close “enough”) string of digits.

    • snooggums@lemmy.world · 2 hours ago

      So half the time it uses an actual calculator via Google search and the other half it tries to do it alone and fails.

      That checks out.