Microsoft Ramps Up AI Compute Power with Lambda and IREN Deals

As the global race to dominate generative AI intensifies, Microsoft has inked two massive infrastructure deals designed to turbocharge its AI capabilities.

The agreements with Lambda and IREN will significantly expand Microsoft's access to high-performance GPUs and scalable datacenter capacity, enabling faster deployment of AI services across its cloud platform as part of the company's long-term AI infrastructure strategy.

Announced on Nov. 3, the Lambda deal includes a multiyear commitment to deploy tens of thousands of NVIDIA GPUs — including the new GB300 NVL72 systems — in liquid-cooled U.S. datacenters. This marks a major scale-up of the companies’ eight-year relationship and positions Lambda as a key AI compute partner for Microsoft, even though financial terms remain undisclosed.

"It's great to watch the Microsoft and Lambda teams working together to deploy these massive AI supercomputers," said Stephen Balaban, CEO of Lambda. "We've been working with Microsoft for more than eight years, and this is a phenomenal next step in our relationship."

Under the new agreement, Lambda will continue operating the infrastructure while Microsoft leverages it for Azure's expanding AI services.

Also on Monday, IREN Limited announced it had entered into a $9.7 billion agreement with Microsoft to supply AI cloud infrastructure over the next five years. The deal includes up to 200 megawatts of datacenter capacity and will be executed in phases through 2026. The infrastructure will be located at IREN's Childress, Texas campus.

As part of the agreement, IREN will provide physical hosting, power, and other support services for high-performance GPU clusters, including GB300-class systems. IREN also plans to integrate alternative power sources, including renewables, to meet Microsoft's carbon and energy efficiency requirements.

Both deals highlight Microsoft's urgency in securing AI compute resources as the demand for generative AI and large language model services continues to rise. Microsoft has already committed significant investment into building out its Azure infrastructure to support models developed by OpenAI and other partners. These new agreements with Lambda and IREN expand Microsoft's access to GPU resources without requiring immediate in-house datacenter builds.

While only the IREN agreement's financial terms were disclosed, both deals show Microsoft making long-term commitments to address the growing shortage of AI computing power. The use of liquid-cooled hardware and flexible datacenter configurations reflects a broader industry shift toward denser, more power-efficient systems purpose-built for training and running AI models.

The announcements came days before Amazon detailed its own infrastructure plans to support OpenAI workloads using AWS services, underscoring the competitive pressure in the cloud market to scale up AI capabilities.

Microsoft hasn't confirmed which Azure services or regions will be directly affected by these deals, but initial infrastructure deployment is expected to begin in 2026.

About the Author

Chris Paoli (@ChrisPaoli5) is the associate editor for Converge360.