Meta Partners With Arm to Develop New Class of Data Center Silicon
Meta and Arm said on March 24, 2026 that they are working on a new class of CPUs for AI-optimized data centers, extending Meta's effort to design more of its own infrastructure stack. The companies framed the partnership as a response to a practical constraint: AI training, inference, and general-purpose computing are all scaling faster than traditional data center CPUs were built to handle.
Under the agreement, Meta and Arm will co-develop multiple generations of chips. The first product is the Arm AGI CPU, which Arm describes as its first data center CPU built specifically for the AI era. Meta said the chip is designed to deliver faster performance per rack and better efficiency than legacy CPUs, which matters as operators try to pack more compute into facilities with tight power, cooling, and space limits.
What Meta highlighted
- The CPU program is meant to support both growing AI workloads and broader general-purpose computing inside Meta's infrastructure.
- Meta will act as the lead partner and co-developer for the first Arm AGI CPU generation.
- The new CPU is intended to work alongside Meta's existing MTIA silicon rather than replace the rest of the custom stack.
- Meta said it plans to release board and rack designs for the CPU through the Open Compute Project later in 2026.
That last point is important because it signals that Meta is not treating the chip purely as an internal optimization. Arm said it will also offer the Arm AGI CPU to the broader AI ecosystem, while Meta's Open Compute Project contribution could make the board and rack designs reusable across other data center operators and suppliers.
The announcement also shows how the custom-silicon race is widening. For several years, large AI companies focused public attention on accelerators and training clusters. Meta and Arm are arguing that CPUs still matter as foundational components for orchestration and systems work, especially when massive AI deployments need more performance density and tighter power efficiency at the rack level.
Meta did not publish launch timing or benchmark tables in this announcement, so the practical impact will depend on later rollout details. Even so, the message is clear: the company wants more direct control over the compute layer that supports its next wave of AI services, and Arm wants its architecture positioned as a first-class option for large-scale agentic AI infrastructure. Source: Meta Newsroom.