The US blocked exports of high-power graphics cards to specific countries, and then got all shaken up when its money moat was pole-vaulted by an embargoed country wielding jank cards.

Why is this a big deal, exactly?

Who benefits if the US has the best AI, and who benefits if it’s China?

Is this like the Space Race, where it’s just an effort to spit on each other, but ultimately no one really loses, and cool shit gets made?

What does AI “supremacy” mean?

  • droplet6585@lemmy.ml · 8 points · 1 day ago

    No. We are not.

    With typical capitalist efficiency, the titans of industry are going to boil off half an ocean in an ignorant attempt to simulate a human brain that requires what, about 2 kilowatt-hours of relatively clean chemical energy a day?

    Never mind there being no shortage of said brains.
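
    For scale, a quick back-of-envelope (a sketch: the ~2 kWh/day figure is roughly a whole body’s food energy, while the brain alone draws about 20 W; the node wattage below is an assumption):

    ```python
    # Rough brain-vs-GPU-node energy comparison (illustrative figures only).
    BRAIN_WATTS = 20        # commonly cited power draw of a human brain
    NODE_WATTS = 10_000     # assumed ~10 kW for a single 8-GPU training node

    brain_kwh_per_day = BRAIN_WATTS * 24 / 1000
    print(f"brain: ~{brain_kwh_per_day:.2f} kWh/day")  # ~0.48 kWh/day

    minutes = brain_kwh_per_day * 1000 / NODE_WATTS * 60
    print(f"one node burns a brain-day of energy in ~{minutes:.1f} min")  # ~2.9 min
    ```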

  • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 14 points · 2 days ago

    I think a bunch of ignorant politicians in the US see AI as their ticket to competing with China, because they refuse to invest in workers. They’re basically betting that AI will let them automate a lot of jobs, and that’s how they’ll get back on top.

  • Liv@lemmy.blahaj.zone · 40 points · 2 days ago

    The two biggest implications, in my opinion: first, it shows that this “trillion dollar” industry is a massively overvalued bubble waiting to pop. What takes an American company several hundred billion dollars and a decade of research took DeepSeek less than $6M and 18 months. To drive the nail into the coffin even further, they recently announced Janus Pro, an image generator rivaling DALL-E and Stable Diffusion. All this from a company in an embargoed country that didn’t even exist when the first editions of these chatbots and image generators were released.

    Second, there are the “national security” implications: the US wants to aggressively militarize AI tech, and China just demonstrated that it has caught up in a fraction of the time for a fraction of the cost, so there’s no way it doesn’t surpass US capabilities within the next year, if it hasn’t already.

    I think this may be a major turning point for global alliances, with massive realignment away from the US and toward China on the geopolitical stage. The US and its oligarchy have essentially been called out on their bullshit.

    • tetris11@lemmy.ml (OP) · 7 points · 2 days ago

      I think you hit the nail on the head with the military aspect: combat drones and robotics.

      • ℍ𝕂-𝟞𝟝@sopuli.xyz · 8 points · 2 days ago

        It also opens up a big question: are US military tech and the US army really as far ahead as claimed because of their funding, or are they overvalued by orders of magnitude, as their AI tech seems to be?

        • Liv@lemmy.blahaj.zone · 5 points · 2 days ago (edited)

          Absolutely overvalued. Companies overcharging on military contracts by orders of magnitude is the standard. Hell, the Air Force was buying mugs for over $1k apiece not too long ago; I’m not sure they ever did anything about it, but I remember it being reported a couple of years ago.

          The US is scary because of its nuclear arsenal. Most of the $850B budget goes to contractors, largely for R&D; sustained production is rare, and even “sustained” programs top out at around 200 units.

          AI has been proven to show bias because the data it’s trained on shows bias, but the US doesn’t care as long as that bias is pointed at the “enemy” (read: anyone south of Texas or east of Ukraine) so that enemy can be most effectively eliminated. We’re not leading in development, production, or ethics; we’re just paying rich assholes to make indiscriminate killing machines that are unbound by morals and easily scapegoated when things go wrong.

          I see people actually in the military constantly complaining about how far behind it is technologically. Only the special forces/CIA/SEALs/etc. get the really cool toys.

    • chuckleslord@lemmy.world · 4 points · 2 days ago

      The issue I have is that there’s no surpassing to happen here. We’ve plateaued on the achievable AI milestones, so the only new move is the next big thing… and it’s impossible to predict when that will happen.

  • pr06lefs@lemmy.ml · 45 points · 2 days ago (edited)

    Some assholes gave congresscritters a bunch of money to get their businesses a cozy monopoly and special treatment. They saw AI as a profit center they could corner thanks to government-industry collusion. Looks like ginormous data centers and export-controlled GPUs may not be as essential to AI research as thought, and now the emergency is that their stock is tanking.

  • Ephera@lemmy.ml · 6 points · 1 day ago

    I think people who say that believe we’re close to actually-intelligent AI (artificial general intelligence, AGI). And when we get there, it’s possible we might suddenly be able to automate lots of complex tasks, possibly even put it in robots and have it take on physical labor and the like.

    It’s the wet dream of capitalists, because they’d no longer need to employ anyone. And I guess folks are also afraid that such AI could be used for war.

  • dangling_cat@lemmy.blahaj.zone · 22 points · 2 days ago (edited)

    Well, LLM academic research has always been open and improving very fast. Then a bunch of MBAs turned it into a spectator sport and a national arms race so they could put more money in their pockets and build a monopoly, instead of reinvesting in the tech. Now they’re surprised that other researchers can read and implement those papers too.

    BTW, Sam Altman is gay and an immigrant. He is betraying his own kind on multiple levels.

    Edit: my brain has failed me sorry

    • davel [he/him]@lemmy.ml · 19 points · 2 days ago

      > BTW, Sam Altman is gay and an immigrant. He is betraying his own kind on multiple levels.

      Altman was born in Chicago. If you want to call him an immigrant, then 99% of us are.

      But more importantly, Altman’s “kind” is neither of those things: it’s his class, namely the capitalist class, with which he has class solidarity.

    • Ephera@lemmy.ml · 3 points · 2 days ago

      Well, they’re not actually open source. The models are freely available, but the training data is not, so it’s not possible for competitors to reproduce the same result.
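
      To make that concrete, here’s a minimal sketch of what “freely available” looks like in practice, assuming the Hugging Face transformers API (the repo id is illustrative, and actually running a model this size needs serious hardware):

      ```python
      # "Open weights": the model itself can be downloaded and run...
      from transformers import AutoModelForCausalLM, AutoTokenizer

      repo = "deepseek-ai/DeepSeek-V3"  # illustrative repo id

      tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
      model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

      out = model.generate(**tokenizer("Hello", return_tensors="pt"),
                           max_new_tokens=20)
      print(tokenizer.decode(out[0]))

      # ...but there is no equivalent call for the training corpus, data
      # filtering pipeline, or RL reward data: you can run the result,
      # not rebuild it.
      ```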

    • hoshikarakitaridia@lemmy.world · 1 point · 2 days ago

      An asterisk on that: I consider Black Forest Labs’ image generation models to be leading, and their pro variants are commercial-only.

      That said, everything else has leading models that are open source, I think, except for ChatGPT, which is becoming obsolete right now.

      This news cycle is one hell of a nothing burger.

  • BroBot9000@lemmy.world · 8 points · 2 days ago

    They have sunk too much money in and can’t go backwards now. That’s all. They will eat themselves in a bid to stay ahead.

  • fckreddit@lemmy.ml · +11 / −8 · 2 days ago

    Don’t believe the hype: LLMs are not AI. Not even close. They are, in fact, much closer to pattern-recognition models. Fundamentally, our brains are able to ‘understand’ any query posed to them; the only problem is we don’t know what ‘understanding’ even means. How can we then judge whether some model is capable of understanding, or whether the output is just whatever is statistically most likely?

    Second, can AI even know what a human experience is like? We cannot give AI inputs in the exact form we receive them. In fact, we cannot input the sensations of touch, flavor, and smell to AI at all. So AI, as of yet, cannot tell you what freshly baked bread smells or feels like, for example. Human experience is still our domain. That means our inspirations are intact, and AI cannot create works of art that feel truly human.

    Finally, AI by default has no concept of true or false. It takes every statement in its training data as true unless statements are individually labelled by hand, and such an approach doesn’t scale to petabytes of text data. So LLMs tend to hallucinate, because again, they only give out the text that is statistically most likely given the input.
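
    A toy illustration of that last point (pure numpy, made-up numbers): the decoding step only ranks tokens by probability; nothing in it consults a notion of truth.

    ```python
    import numpy as np

    # Toy next-token step: a model head emits a score (logit) per vocabulary
    # token; decoding turns scores into probabilities and picks a likely one.
    vocab = ["Paris", "Lyon", "banana"]
    logits = np.array([2.1, 1.9, -3.0])  # made-up scores for "The capital of France is"

    probs = np.exp(logits) / np.exp(logits).sum()  # softmax
    print(dict(zip(vocab, probs.round(3))))

    print("chosen:", vocab[int(np.argmax(probs))])  # greedy decoding

    # No fact store is consulted: had the training data made "Lyon" slightly
    # more frequent in this context, the model would assert it just as
    # confidently. That is the mechanism behind hallucination.
    ```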

    In short, we are still missing many pieces of the puzzle that is true AI. We know it’s possible because we exist, but that’s about it. Sure, AI does better than humans in specific cases, but it is nowhere close to humans in understanding and reasoning.

      • fckreddit@lemmy.ml · 4 points · 2 days ago (edited)

        I guess you are right. Think of it this way: LLMs are doing great at solving specific sets of problems. Now, the people in charge of the money think LLMs are the closest thing to intelligent agents, and that all they have to do is reduce the hallucinations and improve accuracy by adding more data and/or tweaking the model.

        Our current incentive structure rewards results over everything else. That is the primary reason for this AI race. There are people who falsely believe that by throwing money at LLMs they can make them better and eventually reach true AGI. Then there are others who mislead the money men even when they know the truth.

        But just because something does great on some limited benchmark doesn’t mean the model can generalise to the infinite range of real situations. Again, see my original comment for why. Intelligence is multi-faceted and multi-dimensional.

        This is unlike the space race in one primary way. In the space race, we had understood the principles of getting to space well enough since the time of Newton; all we had to do was engineer the rocket. For example, we knew we had to find the fuel that generates maximum thrust per kg of fuel-oxygen mixture burnt; the only question was what form it would take. You could have many teams test many different fuels to answer that question. It is scalable. The space race was an engineering question.

        Meanwhile, AI is a question of science. We don’t understand the concept of intelligence itself very well. Focussing solely on LLMs is a mistake, because progress here might not translate to, and might even harm, the larger AI research effort.

        There are those in the scientific community who believe we might never be able to understand intelligence, because understanding it would require a higher level of intelligence. Again, I’m not saying that’s true, just that there are many ideas and viewpoints on AI and intelligence in general.

    • Kraiden@kbin.earth · 9 points · 2 days ago

      LLMs not being able to tell us what bread tastes like has nothing to do with intelligence; that’s qualia. I think you meant it cannot KNOW what bread tastes like… although I still don’t understand why you’d think that’s a requirement for intelligence.