
Nvidia CEO Jensen Huang seemingly ‘realises’ that Google, Microsoft and Meta are set to eat the company’s lunch

Nvidia’s dominance, built on the premise that a single chip can serve all AI tasks, is facing a challenge. As inference, the process of running trained AI models, becomes more crucial and cost-sensitive, customers are exploring cheaper, purpose-built alternatives from rivals like Google and Meta. Jensen Huang’s expected unveiling of a new inference-focused chip signals a potential shift away from Nvidia’s long-held strategy.

Jensen Huang built Nvidia into a $4.5 trillion empire by insisting one chip could rule them all. Now, with customers quietly shopping elsewhere and billions in market value evaporating, he appears to be rethinking that bet.

The Financial Times reports that Huang is expected to unveil a new inference-focused chip at Nvidia’s GTC developer conference next week, the first product to emerge from December’s $20 billion acquisition of Groq, the startup known for its language processing units. That would mark a decisive break from Nvidia’s long-held position that its GPUs can handle both training and running AI models equally well. Apparently, they can’t. Or at least, not cheaply enough anymore.

Huang’s ‘one chip fits all’ era is quietly coming to an end

The core of Nvidia’s dominance has always been CUDA, its proprietary software ecosystem that ties developers to its hardware. But as AI workloads shift increasingly toward inference, the economics are turning against Nvidia. Bank of America analysts estimate inference will account for 75% of AI data center spending by 2030, up from around 50% last year. Purpose-built chips from Google, Microsoft, Amazon, and now Meta are designed for exactly that workload, and they’re significantly cheaper to run at scale.

Google’s Ironwood TPU, for instance, reportedly delivers a total cost of ownership roughly 30-44% lower than Nvidia’s equivalent GB200 Blackwell server. Microsoft’s newly announced Maia 200, built on TSMC’s 3nm process, claims 30% better performance per dollar than its previous generation, and explicitly benchmarks itself as outperforming Google’s seventh-generation TPU on FP8 tasks. Meta, meanwhile, revealed four new in-house MTIA chips this week alone, with a new generation shipping roughly every six months.

Nvidia lost $250 billion in a single session when Meta’s TPU talks surfaced

The market is already pricing in the shift. When reports emerged that Meta, one of Nvidia’s biggest customers with up to $72 billion in planned AI infrastructure spending this year, was exploring Google’s TPUs for its data centers, Nvidia stock dropped over 6% in a single session, erasing around $250 billion in market value. Alphabet climbed 4%. Broadcom, which manufactures Google’s chips, jumped 11%.

Nvidia’s public response was unusually defensive. “Nvidia is a generation ahead of the industry—it’s the only platform that runs every AI model and does it everywhere computing is done,” the company posted on X. That’s technically true. But “runs every model” increasingly matters less than “runs the right models cheaply.”

The new chip landscape increasingly favors purpose-built alternatives

The FT notes that Groq’s LPU, now being absorbed into Nvidia’s product line, uses SRAM rather than the expensive high-bandwidth memory (HBM) that powers Nvidia’s flagship chips. HBM is increasingly in short supply, with SK Hynix and Micron struggling to keep up with demand. A Groq-derived chip sidesteps that bottleneck entirely.

Still, Nvidia isn’t done. SemiAnalysis maintains that Google, Amazon, and Nvidia will all “sell lots of chips” in the future; the market is growing fast enough for multiple winners. But pricing power, once Nvidia’s greatest strength, is clearly under threat. And Jensen Huang, by finally acknowledging that inference needs its own dedicated hardware, has effectively confirmed what rivals have been arguing for years.


