NVIDIA and Marvell expand NVLink Fusion for semi-custom AI infrastructure
Original post: "NVIDIA and @MarvellTech join forces through NVLink Fusion to expand ecosystem and give customers greater choice and flexibility in developing next-generation infrastructure." https://nvidianews.nvidia.com/news/nvidia-ai-ecosystem-expands-as-marvell-joins-forces-through-nvlink-fusion
What NVIDIA said on X
On March 31, 2026, NVIDIA's Newsroom account posted on X that Marvell is joining forces with NVIDIA through NVLink Fusion to expand the ecosystem and give customers more flexibility in building next-generation infrastructure. The linked press release makes clear that this is not a narrow interconnect update. It is a broader attempt to widen the set of silicon and networking combinations that can plug into NVIDIA-centered AI systems.
The announcement is corporate in tone and should be read as a company statement, but it still carries meaningful product-strategy signals. NVIDIA says the deal connects Marvell to its AI factory and AI-RAN ecosystem through NVLink Fusion, while also adding a silicon photonics collaboration and a $2 billion NVIDIA investment in Marvell.
What the partnership includes
According to the press release, Marvell will contribute custom XPUs and NVLink Fusion-compatible scale-up networking. NVIDIA says it will supply the surrounding system technologies, including Vera CPUs, ConnectX NICs, BlueField DPUs, NVLink interconnect, Spectrum-X switches, and rack-scale AI compute.
The language around NVLink Fusion is important. NVIDIA describes it as a platform for building semi-custom AI infrastructure that remains fully compatible with NVIDIA systems. That means the company is not only pushing fixed full-stack boxes. It is also trying to become the compatibility layer around which partners can insert custom compute and networking components.
- NVIDIA says NVLink Fusion lets customers build heterogeneous AI systems while staying compatible with NVIDIA infrastructure.
- Marvell is positioned as a provider of custom XPUs, scale-up networking, and silicon photonics expertise.
- The release also states that NVIDIA has invested $2 billion in Marvell.
Why this matters for AI infrastructure
This is a high-signal infrastructure story because it shows how the AI stack is fragmenting and consolidating at the same time. Fragmenting, because hyperscalers and large enterprises increasingly want differentiated systems, not just standard reference designs. Consolidating, because those custom systems still need a dominant interoperability layer if they are going to scale inside existing software, networking, and supply-chain ecosystems.
An inference from NVIDIA's announcement is that NVLink Fusion is becoming a strategy for absorbing partner silicon without giving up platform control. NVIDIA appears to be saying: build custom compute if you want, but do it around our interconnect, networking, rack architecture, and ecosystem. If that holds, the company can preserve influence even as customers ask for more semi-custom designs.
There is a clear caveat. This is a forward-looking press release, not a deployed customer case study with measured production results. Still, the March 31 X post is high-signal because it captures the next phase of AI infrastructure competition: not only who has the fastest accelerator, but who controls the interoperability layer for custom, rack-scale AI systems.
Sources: NVIDIA Newsroom X post · NVIDIA Newsroom release
Related Articles
NVIDIA and Emerald AI said they are working with major energy companies to design AI factories that connect to the grid faster and can also support grid reliability. The plan centers on Vera Rubin DSX, DSX Flex, and Emerald AI's Conductor platform.
NVIDIA and Thinking Machines Lab said on March 10, 2026 that they will deploy at least one gigawatt of next-generation NVIDIA Vera Rubin systems under a multiyear partnership. The agreement also covers co-design of training and serving systems plus an NVIDIA investment in Thinking Machines Lab.