Microsoft detailed a new MicroLED-based datacenter networking system on March 17, 2026. The project matters because it tackles one of the less visible constraints on AI scaling: the energy, distance and reliability limits of the links connecting servers and GPUs.
On March 16, 2026, Microsoft used NVIDIA GTC to expand Foundry Agent Service and observability, add NVIDIA Nemotron models, outline Azure infrastructure built for inference-heavy reasoning workloads, and introduce an Azure Physical AI Toolchain. The announcement is notable because it connects agent operations, hyperscale AI infrastructure, and physical-world systems in one stack.
Azure posted on March 14, 2026 that Claude Opus 4.6 and Sonnet 4.6 now support 1M-token context in Microsoft Foundry with flat pricing and higher media limits. Microsoft and Anthropic documentation confirm the 1M window, 600 image/PDF-page cap, and standard pricing across the full context range.
Microsoft says Fireworks AI is now part of Microsoft Foundry, bringing high-performance, low-latency open-model inference to Azure. The launch emphasizes day-zero access to leading open models, custom-model deployment, and enterprise controls in one place.
Azure says GPT-5.4 is now available in Microsoft Foundry for production-grade agent workloads. Microsoft’s supporting post adds GPT-5.4 Pro, pricing, and initial deployment options, with governance controls positioned as part of the pitch.
Microsoft and OpenAI said on February 27, 2026 that OpenAI's new funding and new partners do not change the previously disclosed terms of their relationship, reaffirming unchanged IP access, revenue-share terms, and Azure's position as the exclusive cloud for stateless OpenAI APIs. The companies added that OpenAI still has room to secure additional compute elsewhere, including through Stargate-scale infrastructure projects.
Microsoft Azure announced that Microsoft Foundry now offers GPT-Realtime-1.5, GPT-Audio-1.5, and GPT-5.3-Codex. The stated focus is low-latency voice interactions and long-running engineering workflows.
Microsoft announced Maia 200 (codenamed Braga) on January 26, 2026 as its second-generation in-house AI accelerator. The company says selected Copilot and Azure AI workloads show up to 1.7x performance versus Maia 100.