A `r/singularity` post highlighted reporting that roughly half of planned U.S. data center projects have been delayed or canceled because transformers, switchgear, batteries, and related power equipment remain supply-constrained. The story resonated because it reframes AI expansion as a grid and industrial-logistics problem, not only a chip problem.
#data-centers
NVIDIA and Emerald AI said they are working with major energy companies to design AI factories that connect to the grid faster and can also support grid reliability. The plan centers on Vera Rubin DSX, DSX Flex, and Emerald AI's Conductor platform.
Meta says a new multi-year deal with NVIDIA will support AI-optimized data centers for training, inference, and core workloads. The announcement also connects privacy, networking, and future Vera Rubin clusters to the same infrastructure roadmap.
Amazon said on March 2, 2026 that it will raise its planned Spain investment to €33.7 billion to expand data center infrastructure and AI capacity across Europe. The company says the program should support 29,900 jobs annually and add €31.7 billion to Spain’s GDP through 2035.
Cloudflare said on March 24, 2026 that it is working with Arm to deploy the Arm AGI CPU across its global network. Arm's newsroom says the chip is the company's first production silicon product and is aimed at AI data center workloads such as accelerator management, control planes, and API hosting.
A high-traffic HN thread zeroed in on Arm's new AGI CPU pitch: not a GPU replacement, but a Neoverse-based control-plane processor for rack-scale agentic AI infrastructure.
NVIDIA and Emerald AI said on March 23, 2026 that they are working with AES, Constellation, Invenergy, NextEra Energy, Nscale Energy & Power, and Vistra on power-flexible AI factories. The concept combines Vera Rubin DSX infrastructure with DSX Flex so AI campuses can connect faster and behave more like grid assets than passive loads.
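The idea of an AI campus acting as a grid asset rather than a passive load comes down to curtailment: when the grid operator signals stress, the site sheds deferrable work first. Below is a minimal sketch of that scheduling logic; the function name, job fields, and power figures are all hypothetical illustrations, not Emerald AI's Conductor API.

```python
# Toy illustration of power-flexible scheduling: when the grid requests
# curtailment, the campus throttles deferrable training load first so the
# site behaves like a dispatchable asset rather than a fixed draw.
# All names and numbers are hypothetical, not any vendor's actual API.

def plan_curtailment(jobs, site_cap_mw):
    """Return per-job power caps (MW) that fit under site_cap_mw.

    jobs: list of dicts with 'name', 'power_mw', and 'deferrable' keys.
    Non-deferrable jobs (e.g. latency-sensitive inference) keep full power;
    deferrable jobs (e.g. checkpointed training) are scaled down pro rata.
    """
    firm = sum(j["power_mw"] for j in jobs if not j["deferrable"])
    flex = sum(j["power_mw"] for j in jobs if j["deferrable"])
    headroom = max(site_cap_mw - firm, 0.0)
    scale = min(headroom / flex, 1.0) if flex else 1.0
    return {
        j["name"]: j["power_mw"] * (scale if j["deferrable"] else 1.0)
        for j in jobs
    }

jobs = [
    {"name": "inference",  "power_mw": 40.0, "deferrable": False},
    {"name": "training-a", "power_mw": 60.0, "deferrable": True},
    {"name": "training-b", "power_mw": 30.0, "deferrable": True},
]
# Grid asks the campus to stay under 100 MW (normally it draws 130 MW):
caps = plan_curtailment(jobs, site_cap_mw=100.0)
```

Here the 40 MW inference load is untouched while the two training jobs are scaled to two-thirds of their draw, bringing the site exactly to the 100 MW cap. Real systems layer far more on top (checkpointing, battery dispatch, market signals), but the core trade of deferrable compute for grid headroom is what "power-flexible" refers to.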
Meta said its long-term AMD agreement will provide up to 6GW of AMD Instinct GPU capacity for AI infrastructure. First shipments are planned for the second half of 2026 on Helios rack-scale systems.
Meta announced a multi-year infrastructure partnership with AMD, targeting up to 6GW of AMD Instinct GPU capacity for AI workloads. The agreement also aligns roadmaps across silicon, systems, and software, with first deployments expected in the second half of 2026.