
According to multiple media reports, Meta Platforms has revealed its roadmap for in-house AI chip development, planning to release four new ASIC-based AI chips over the next two to three years to meet the rapidly growing demands of its AI data centers.
The new chips, branded as Meta Training and Inference Accelerator (MTIA), are designed to handle workloads ranging from AI model training to generative AI inference. By building its own hardware ecosystem, Meta aims to reduce reliance on external GPUs, boosting cost efficiency and performance.
The MTIA lineup includes MTIA 300, 400, 450, and 500. The MTIA 300 is already in use, powering content ranking and recommendation systems on Facebook and Instagram. It features a chiplet architecture with 216 GB of HBM memory and 200 GB/s network bandwidth, optimized for recommendation and ad model computation.
The second-generation MTIA 400 delivers a roughly 400% increase in FP8 performance over its predecessor and a 51% increase in HBM bandwidth. Its modular design allows 72 chips to interconnect via a backplane, forming larger compute systems. This chip has completed lab testing and is gradually being deployed in data centers.
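As a back-of-envelope illustration of the figures above, the sketch below works out what a 72-chip system and the reported percentage increases imply. Note the per-chip HBM capacity is an assumption borrowed from the MTIA 300; the article does not state the MTIA 400's capacity.

```python
# Back-of-envelope arithmetic for the reported MTIA figures.
# ASSUMPTION (not stated in the article): the MTIA 400 keeps the
# MTIA 300's 216 GB of HBM per chip.

HBM_PER_CHIP_GB = 216   # MTIA 300 HBM capacity, per the article
CHIPS_PER_SYSTEM = 72   # MTIA 400 backplane interconnect size

# Aggregate HBM across one 72-chip system, under the assumption above.
total_hbm_tb = HBM_PER_CHIP_GB * CHIPS_PER_SYSTEM / 1000
print(f"Aggregate HBM per 72-chip system: {total_hbm_tb:.2f} TB")

def increased(base: float, pct: float) -> float:
    """Return `base` scaled up by `pct` percent (a 51% increase -> 1.51x)."""
    return base * (1 + pct / 100)

# A "roughly 400% increase" in FP8 performance means about 5x the baseline.
print(f"Relative FP8 throughput: {increased(1.0, 400):.1f}x")
```

Under these assumptions, a single 72-chip system would pool roughly 15.5 TB of HBM, which is the scale at which large training and inference workloads become practical on a single interconnected domain.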
Future MTIA 450 and 500 chips will focus on generative AI inference, supporting applications like text and image generation as well as content recommendations. Meta anticipates large-scale deployment around 2027, with higher-capacity HBM further improving inference throughput.
Meta’s VP of Engineering, Yee Jiun Song, highlighted that the modular design approach enables the team to release a new chip every six months, upgrading individual components without redesigning the full architecture. This shortens development cycles and lets the hardware keep pace with rapidly evolving AI workloads.
MTIA chips are primarily manufactured by TSMC, with Broadcom assisting in certain design aspects. Meta has not disclosed whether U.S.-based TSMC fabs in Arizona will be used for mass production.
The surge in generative AI has driven intense demand for HBM, making memory supply a critical constraint. Meta says it has adopted a diversified procurement strategy to mitigate potential bottlenecks.
Even as Meta pushes for ASIC development, it remains one of the largest GPU users worldwide, having signed multi-billion-dollar contracts with NVIDIA and AMD to secure AI compute for the coming years. This dual strategy balances internal chip innovation with external GPU acceleration.
Amid rapid AI infrastructure expansion, Meta expects capital expenditures to reach $135 billion by 2026, primarily for data center construction and AI hardware deployment.