Meta Pushes for In-House AI Chips to Cut Costly NVIDIA Orders
Meta has long been one of NVIDIA’s biggest customers, relying on its GPUs to train AI models. But as the company has bet big on AI, its spending on AI infrastructure has soared. Now, to rein in those bills, the company is moving to develop a custom in-house accelerator chip.
Facebook’s parent company has allocated nearly 55% of its yearly budget ($65 billion of $119 billion) to AI, much of which goes to NVIDIA’s GPUs. According to a report by Reuters, Meta has partnered with Taiwan’s TSMC to produce the chip, which is designed to handle AI-specific tasks.
For now, the company is planning a small-scale pilot deployment before moving to mass production. The report notes that Meta has completed the “tape-out” phase of development, in which the finished design is sent to the fab, and will initially use the accelerator chip to train its recommendation systems.
Meta has tried developing similar chips in the past, but many of those efforts were scaled back due to technical complexities and a failure to meet internal expectations. And even as companies pour ever more compute into AI development, there is an air of doubt about how much additional hardware can be thrown at the problem before returns start to plateau.
DeepSeek’s launch caused a global “freakout” due to its reportedly low training costs, with NVIDIA alone losing a large chunk of its market value overnight. Still, NVIDIA remains confident that its chips will be vital for developing AI systems in the future. Meanwhile, as Meta ramps up its investment in the sector, it plans to fully transition to its own chips by 2026.
This is all we know for now, but rest assured that we will keep you updated as new information becomes available.