Friday, June 20, 2025

Meta Begins Testing Custom AI Training Chips to Reduce Dependence on Nvidia

Meta, the parent company of Facebook, has begun testing its first in-house chip dedicated to training artificial intelligence systems. The move marks a key step toward the company’s goal of building custom hardware and reducing its dependence on external suppliers such as Nvidia.
The chip, part of the Meta Training and Inference Accelerator (MTIA) series, is a dedicated accelerator for AI-specific tasks, designed to be more power-efficient than the general-purpose graphics processing units (GPUs) typically used for AI workloads. It was manufactured by Taiwan Semiconductor Manufacturing Company (TSMC) following a successful “tape-out,” the critical step in chip development at which a finished design is sent to a fabrication facility.
Meta has deployed a limited number of these chips and plans to ramp up production for wider use if the initial tests succeed. The project is part of Meta’s long-term plan to cut infrastructure costs while investing heavily in AI. The company has forecast total 2025 spending of between $114 billion and $119 billion, of which up to $65 billion will be capital expenditure, driven mainly by investment in AI infrastructure.
Meta’s efforts to develop in-house chips have stumbled before. The company originally planned to roll out its chips in 2022 but shelved the project after the design failed to meet internal benchmarks. That setback drove a shift away from CPUs toward GPUs for training AI, forcing Meta to rearchitect its data centers and cancel some projects. Last year, however, Meta began using an MTIA chip for inference in the recommendation systems that serve Facebook and Instagram.
Meta executives also anticipate using their own chips for training by 2026 — training being the compute-intensive process of feeding large volumes of data into AI systems. The chips are intended first for recommendation systems, with plans to extend them to generative AI uses such as the Meta AI chatbot. The effort fits a broader industry trend of technology giants seeking to wean themselves off dependence on outside hardware providers.
Custom silicon has the potential to cut AI infrastructure costs substantially, a major expense for companies such as Meta. By designing chips for specific AI workloads, Meta can optimize both performance and power consumption, reducing costs and strengthening its competitive position in the AI space.
Meta is not alone: other companies have also explored developing in-house chips for AI applications. The trend is driven by tech giants’ growing desire to control their own hardware supply chains and tailor their AI technologies to specific applications, rather than relying on general-purpose products from outside vendors. As AI continues to sit at the forefront of innovation in the technology sector, designing dedicated AI chips will only become more important.
