Chinese artificial intelligence startup DeepSeek’s latest AI model sparked a $1 trillion rout in US and European technology stocks, as investors questioned bloated valuations for some of America’s biggest companies.
OK, hold on, so I went over to huggingface and took a look at this.
Deepseek is huge. Like Llama 3.3 huge. I haven’t done any benchmarking myself, and I’m guessing it’s out there, but surely it would take as much Nvidia muscle to run this at scale as ChatGPT, even if it were much, much cheaper to train, right?
So is the rout based on the idea that the need for training hardware is much smaller than suspected, even if the operating cost is the same… or is the stock market just clueless and dumb and running on vibes at all times anyway?
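Worth noting on the serving question: DeepSeek-V3/R1 is a mixture-of-experts model, so while it has 671B parameters in total, only about 37B are active for any given token. That puts the per-token compute closer to a mid-size dense model than to something GPT-3-sized. Here’s a rough back-of-envelope sketch; the 2-FLOPs-per-active-parameter rule of thumb is a standard approximation, and the 175B dense comparison point is just an assumed GPT-3-class size, not a claim about what OpenAI actually runs:

```python
# Back-of-envelope: per-token inference compute for a dense vs MoE model.
# Rule of thumb: a forward pass costs roughly 2 FLOPs per ACTIVE parameter
# per token. Parameter counts for DeepSeek are from its published model
# card (671B total, ~37B active per token). Rough sketch, not a benchmark.

def flops_per_token(active_params: float) -> float:
    """Approximate forward-pass FLOPs to generate one token."""
    return 2 * active_params

dense_175b   = flops_per_token(175e9)  # assumed dense GPT-3-class model
deepseek_moe = flops_per_token(37e9)   # DeepSeek-V3/R1: MoE, ~37B active

print(f"dense 175B : {dense_175b:.2e} FLOPs/token")
print(f"deepseek   : {deepseek_moe:.2e} FLOPs/token")
print(f"ratio      : {dense_175b / deepseek_moe:.1f}x")
```

So even if the weights take as much memory as any frontier model, the compute per token can be several times lower, which is part of why the “it must cost as much to serve as ChatGPT” assumption isn’t a given.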
I thought everyone knew stocks were all vibes by now. Private markets might improve with competition, but public stocks will always chase the flashiest option, even if it’s shit, purely for appeal; otherwise they quite literally lose everything if things go slightly wrong.
Everything I’ve seen from looking into it implies it’s on par with other (LLM-only) models in training cost and performance.
I feel like I’m missing something here, or that the market is “correcting” for other reasons.
The market is correcting because the AI bubble is gonna pop and someone needs to be left holding the bag.
It does not take an entire Nvidia datacenter to serve one customer. The largest model appears to run on a high-end rig.
The largest model, the only one that beats GPT, is like 700 GB.
It’s not running on a high-end rig; it’s running on a server.
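The memory arithmetic backs this up. The full model has 671B parameters and ships with FP8 weights (one byte per parameter), so just holding the weights dwarfs any consumer card. A quick sketch, with the GPU sizes as illustrative examples and ignoring KV cache and activations, which only make it worse:

```python
# Why "700 GB" means a multi-GPU server, not a gaming rig: the weights
# alone exceed any consumer GPU's VRAM many times over. The 671B count
# is from DeepSeek's model card; FP8 (1 byte/param) is the precision
# the weights are distributed in.

PARAMS = 671e9          # DeepSeek-V3/R1 total parameters
BYTES_PER_PARAM = 1     # FP8 weights
GB = 1e9

weights_gb = PARAMS * BYTES_PER_PARAM / GB
consumer_gpu_gb = 24    # e.g. an RTX 4090
datacenter_gpu_gb = 80  # e.g. an H100

print(f"weights alone    : ~{weights_gb:.0f} GB")
print(f"RTX 4090s needed : {weights_gb / consumer_gpu_gb:.0f}+")
print(f"H100s needed     : {weights_gb / datacenter_gpu_gb:.0f}+")
```

That’s roughly 28 consumer cards or 9 H100s just for the weights, i.e. a server node, not a rig.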
Deepseek is based on either llama or qwen, but can be put on top of any model?
I tested qwen, which sucked dick IMHO.
Now deepseek qwen is the best thing I’ve tried locally.
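For anyone who wants to try the same thing: here’s a minimal sketch of loading the Qwen-based distill with Hugging Face transformers. The model id is the one on the hub; it assumes you have transformers, torch, and accelerate installed, plus roughly 15 GB of VRAM/RAM for the 7B at fp16 (less if you quantize), and the prompt is just a placeholder:

```python
# Minimal local-inference sketch for the DeepSeek-R1 Qwen distill.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load in the dtype the weights ship in
    device_map="auto",    # spill to CPU if the GPU is too small
)

messages = [{"role": "user", "content": "Explain mixture-of-experts in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```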