Meta and NVIDIA Forge Multi-Year AI Infrastructure Deal With Millions of Blackwell and Rubin GPUs

AI · Feb 22, 2026 · By Insights AI · 1 min read

NVIDIA announced a multiyear, multigenerational strategic partnership with Meta on February 17, 2026, covering AI infrastructure deployments both on premises and in the cloud. The deal integrates NVIDIA's full computing stack—CPUs, GPUs, and networking—across Meta's hyperscale data centers, optimized for both training and inference workloads.

Millions of GPUs

The partnership centers on deploying millions of NVIDIA Blackwell and Rubin GPUs across Meta's infrastructure. Compared with Blackwell, the Rubin platform delivers up to a 10x reduction in inference token cost and requires up to 4x fewer GPUs to train mixture-of-experts models.

First Large-Scale Grace CPU Deployment

Meta is also expanding its use of Arm-based NVIDIA Grace CPUs as standalone chips—marking what NVIDIA describes as the first large-scale Grace-only deployment. Next-generation NVIDIA Vera CPUs are planned for potential large-scale rollout in 2027.

Networking and Privacy Computing

Meta will integrate NVIDIA Spectrum-X Ethernet switches for AI-scale networking with predictable, low-latency performance. The companies also confirmed the use of NVIDIA Confidential Computing for WhatsApp's private processing, enabling AI capabilities while preserving the confidentiality of user data.

Source: NVIDIA Press Release
