By E+T editorial staff
Published Thursday, November 16, 2023
Microsoft has unveiled the Azure Maia 100 and Cobalt 100 – the first two custom silicon chips designed by the company in a bid to reduce reliance on Nvidia.
Microsoft is following in the footsteps of its rivals and jumping into the production of homegrown artificial intelligence (AI) semiconductors for its cloud infrastructure. The company also announced new software that lets clients design their own AI assistants.
The company said it has no plans to sell the chips, which it intends to use to power its own subscription software offerings and as part of its Azure cloud computing service. The two models are expected to debut in some Microsoft data centres at the start of 2024.
The Maia 100 chip is said to provide Microsoft Azure cloud customers with a new way to develop and run large language models that power AI applications. The company is already testing the chip with its Bing and Office AI products, and Microsoft’s main AI partner, ChatGPT creator OpenAI, is also testing the processor.
“Microsoft is building the infrastructure to support AI innovation, and we are reimagining every aspect of our data centres to meet the needs of our customers,” said Scott Guthrie, executive vice-president of Microsoft’s Cloud + AI Group. “At the scale we operate, it’s important for us to optimise and integrate every layer of the infrastructure stack to maximise performance, diversify our supply chain and give customers infrastructure choice.”
Meanwhile, the Cobalt 100 CPU is built on Arm architecture to deliver greater efficiency and performance in cloud-native offerings. It is designed to power general cloud services on Azure. The company said the choice of Arm technology was a key element in meeting Microsoft’s sustainability goals.
“The architecture and implementation is designed with power efficiency in mind,” said Wes McCullough, corporate vice-president of hardware product development. “We’re making the most efficient use of the transistors on the silicon. Multiply those efficiency gains in servers across all our data centres, it adds up to a pretty big number.”
Microsoft’s announcement follows renewed efforts by Intel and AMD to develop their own silicon chips to compete with Nvidia, which holds a virtual monopoly on the GPUs needed to train large AI models.
Over the past few years, Microsoft has been increasingly investing in AI-powered solutions. Microsoft recently announced it was making a “multi-year, multi-billion dollar investment” in OpenAI. The company described its new agreement as the third stage of a growing partnership with the San Francisco-based firm that began with a $1bn investment in 2019.
Microsoft recently revealed its plans to spend A$5bn (£2.6bn) on expanding AI and cloud computing capabilities in Australia over the next two years.
The two new chips are said to be the first in a series, with follow-up versions already in development.