NVIDIA and Marvell expand NVLink Fusion for semi-custom AI infrastructure

Original post: "NVIDIA and @MarvellTech join forces through NVLink Fusion to expand ecosystem and give customers greater choice and flexibility in developing next-generation infrastructure." https://nvidianews.nvidia.com/news/nvidia-ai-ecosystem-expands-as-marvell-joins-forces-through-nvlink-fusion

AI · Apr 1, 2026 · By Insights AI

What NVIDIA said on X

On March 31, 2026, NVIDIA's Newsroom account posted on X that Marvell is joining forces with NVIDIA through NVLink Fusion to expand the ecosystem and give customers more flexibility in building next-generation infrastructure. The linked press release makes clear that this is not a narrow interconnect update. It is a broader attempt to widen the set of silicon and networking combinations that can plug into NVIDIA-centered AI systems.

The announcement is corporate in tone and should be read as a company statement, but it still carries meaningful product-strategy signals. NVIDIA says the deal connects Marvell to its AI factory and AI-RAN ecosystem through NVLink Fusion, while also adding a silicon photonics collaboration and a $2 billion NVIDIA investment in Marvell.

What the partnership includes

According to the press release, Marvell will contribute custom XPUs and NVLink Fusion-compatible scale-up networking. NVIDIA says it will provide the surrounding rack-scale technologies, including the Vera CPU, ConnectX NICs, BlueField DPUs, the NVLink interconnect, Spectrum-X switches, and rack-scale AI compute.

The language around NVLink Fusion is important. NVIDIA describes it as a platform for building semi-custom AI infrastructure that remains fully compatible with NVIDIA systems. In other words, the company is not only selling fixed, full-stack systems; it is also positioning itself as the compatibility layer around which partners can insert custom compute and networking components.

  • NVIDIA says NVLink Fusion lets customers build heterogeneous AI systems while staying compatible with NVIDIA infrastructure.
  • Marvell is positioned as a provider of custom XPUs, scale-up networking, and silicon photonics expertise.
  • The release also states that NVIDIA has invested $2 billion in Marvell.

Why this matters for AI infrastructure

This is a high-signal infrastructure story because it shows how the AI stack is fragmenting and consolidating at the same time. Fragmenting, because hyperscalers and large enterprises increasingly want differentiated systems, not just standard reference designs. Consolidating, because those custom systems still need a dominant interoperability layer if they are going to scale inside existing software, networking, and supply-chain ecosystems.

An inference from NVIDIA's announcement is that NVLink Fusion is becoming a strategy for absorbing partner silicon without giving up platform control. NVIDIA appears to be saying: build custom compute if you want, but do it around our interconnect, networking, rack architecture, and ecosystem. If that holds, the company can preserve influence even as customers ask for more semi-custom designs.

There is a clear caveat. This is a forward-looking press release, not a deployed customer case study with measured production results. Still, the March 31 X post is high-signal because it captures the next phase of AI infrastructure competition: not only who has the fastest accelerator, but who controls the interoperability layer for custom, rack-scale AI systems.

Sources: NVIDIA Newsroom X post · NVIDIA Newsroom release



© 2026 Insights. All rights reserved.