The US blocked high power graphics cards to specific countries, and then got all shaken up when their money moat was pole-vaulted by an embargo’d country wielding jank cards.
Why is this a big deal, exactly?
Who benefits if the US has the best AI, and who benefits if it’s China?
Is this like the Space Race, where it’s just an effort to spit on each other, but ultimately no one really loses, and cool shit gets made?
What does AI “supremacy” mean?
that didn’t really answer my question
I guess you are right. Think of it this way: LLMs are doing great at solving specific sets of problems. Now, the people in charge of the money think that LLMs are the closest thing we have to intelligent agents, and that all they have to do is reduce the hallucinations and improve accuracy by adding more data and/or tweaking the model.
Our current incentive structure rewards results over everything else. That is the primary reason for this AI race. There are people who falsely believe that by throwing money at LLMs they can make them better and eventually reach true AGI. Then there are others who mislead the money men even though they know the truth.
But just because something does great at some limited benchmark doesn’t mean the model can generalise to the infinite range of real situations. Again, look at my og comment for why that is. Intelligence is multi-faceted and multi-dimensional.
This is unlike the Space Race in one primary way. In the Space Race, we had understood the principles of getting to space well enough since the time of Newton. All we had to do was engineer the rocket. For example, we knew we had to find the fuel that generates maximum thrust per kg of fuel-oxygen mixture burnt; the only question was what form it would take. You could just have many teams test many different fuels to answer that question. It is scalable. The Space Race was an engineering problem.
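To make the "scalable engineering search" point concrete, here is a minimal sketch: each team reports the thrust per kg of fuel-oxygen mixture burnt for its candidate, and you just pick the best. The fuel names and numbers below are made up for illustration, not real propellant data.

```python
# Hypothetical candidate fuels and their (made-up) thrust per kg of
# fuel-oxygen mixture burnt. In the real Space Race, parallel teams
# could each evaluate one candidate independently -- the search scales.
candidates = {
    "fuel_A": 3.0,
    "fuel_B": 4.4,
    "fuel_C": 3.3,
}

def best_fuel(results: dict) -> str:
    """Pick the candidate with the highest thrust per kg burnt."""
    return max(results, key=results.get)

print(best_fuel(candidates))  # the engineering answer falls out of the search
```

The point is that the objective function was known in advance; only the search over candidates remained, which is exactly what makes it an engineering problem rather than a science problem.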
Meanwhile, AI is a question of science. We don’t understand the concept of intelligence itself very well. Focusing solely on LLMs is a mistake, because progress here might not translate to other approaches and might even harm the larger AI research effort.
There are those in the scientific community who believe we might never be able to understand intelligence, because understanding it would require a higher level of intelligence. Again, not saying that’s true. Just that there are many ideas and viewpoints out there when it comes to AI and intelligence in general.