Intel Foundry Secures Contract to Build Microsoft’s Next-Gen Maia 2 AI Processor on 18A/18A-P Node, Report Says – Could Be the First Step in an Ongoing Collaboration
When Intel and Microsoft announced their plan to build a “custom processor” using Intel’s 18A manufacturing process in early 2024, neither company mentioned the purpose of the silicon, leaving a lot of room for guesswork and interpretation by industry observers. Today, SemiAccurate apparently broke the silence regarding Intel Foundry’s 18A customers, reporting that Intel Foundry (IF) is on track to produce an AI processor for Microsoft on its 18A or 18A-P node.
So far, Intel Foundry has officially secured only one major external customer for its 18A manufacturing technology: Microsoft. While we usually think of Microsoft as a cloud and software giant, the company has a fairly mighty hardware development (or at least hardware definition) team that creates custom silicon for a variety of data center applications, including Cobalt processors, DPUs, and Maia AI accelerators, to name a few. As it turns out, one of Microsoft’s next-generation AI processors is said to be manufactured by Intel Foundry.
If the deal turns out to be real, Microsoft will gain access to a U.S.-based chip supply chain that is less vulnerable to the capacity constraints we see in both chip manufacturing and advanced packaging at TSMC. The deal could also be beneficial to Microsoft in other respects, given the U.S. government’s investment in Intel.
Due to the lack of details, we can only speculate which of the next-generation Maia processors will be produced by IF, but either way it would be a major achievement for Intel. Since we are dealing with data center-grade silicon, we are talking about processors with very large dies. So, if they are produced at Intel Foundry, the company’s 18A (or the roughly 8% higher-performance 18A-P) production process is expected to be good enough not only for Intel itself (which is on track to launch its Xeon 6+ “Clearwater Forest” processors in 2026), but also for its foundry customers. The contract also bodes well for the health of Intel’s node: yields have an outsized impact on an enormous processor like Maia, so Microsoft would likely have opted for a product with a smaller die if it had run into yield issues on Intel’s node.
Microsoft’s original Maia 100 processor is a huge 820 mm^2 piece of silicon that houses 105 billion transistors and is larger than Nvidia’s H100 (814 mm^2) or B200/B300 (750 mm^2) compute chiplets. While the lion’s share of Microsoft Azure’s AI offering runs on Nvidia’s AI accelerators, the company invests heavily in joint optimization of hardware and software to achieve higher performance and efficiency, and therefore lower total cost of ownership. That’s why Maia is a vital project for Microsoft.
Assuming that the Microsoft AI processors to be manufactured by Intel Foundry continue to use near-reticle-sized compute dies, Intel’s 18A manufacturing process is on track to achieve a low enough defect density to ensure decent yields for such chips. Of course, Microsoft could split its next-generation AI processor into several smaller compute chiplets connected by Intel’s EMIB or Foveros technologies, but this could impact performance, so we are most likely talking about an enormous die (or dies) with a size close to the EUV reticle limit, i.e. around 858 mm^2.
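To put that defect-density requirement into perspective, here is a quick back-of-the-envelope sketch using the classic Poisson yield model (Y = e^(-D0·A)); the 858 mm^2 reticle-limit area comes from the discussion above, while the defect-density (D0) values are purely illustrative assumptions rather than reported figures for 18A.

```python
import math

def poisson_yield(die_area_mm2: float, d0_per_cm2: float) -> float:
    """Classic Poisson yield model: Y = exp(-D0 * A)."""
    area_cm2 = die_area_mm2 / 100.0  # 1 cm^2 = 100 mm^2
    return math.exp(-d0_per_cm2 * area_cm2)

# Reticle-limit die size from the article; D0 values are purely illustrative.
DIE_AREA_MM2 = 858
for d0 in (0.1, 0.2, 0.4, 0.6):
    print(f"D0 = {d0:.1f} defects/cm^2 -> yield ≈ {poisson_yield(DIE_AREA_MM2, d0):.1%}")
```

Even a modest change in defect density swings the yield of a reticle-sized die dramatically, which is why die size is such a sensitive variable in a deal like this.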
To reduce the risk associated with such an enormous component, Intel and Microsoft will almost certainly run design-technology co-optimization (DTCO) loops in which Intel tunes transistor and metal stack parameters for Maia’s workloads and goals. Additionally, Microsoft could embed spare compute arrays or redundant MAC blocks into the next-generation Maia chip to enable post-production resilience or repair, something companies like Nvidia already do in their designs.
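To illustrate why such redundancy matters, the sketch below extends the Poisson model above: the die is treated as a grid of identical compute blocks, and a chip counts as usable if no more than a given number of blocks are defective. The block count, defect density, and repair budget are illustrative assumptions, not details of Maia’s actual floorplan.

```python
import math

def block_yield(block_area_mm2: float, d0_per_cm2: float) -> float:
    """Poisson yield of a single compute block."""
    return math.exp(-d0_per_cm2 * block_area_mm2 / 100.0)

def repairable_yield(n_blocks: int, max_bad: int,
                     block_area_mm2: float, d0_per_cm2: float) -> float:
    """Probability that at most `max_bad` of `n_blocks` independent blocks are
    defective (a binomial sum), i.e. the die is still usable after repair."""
    p_good = block_yield(block_area_mm2, d0_per_cm2)
    return sum(
        math.comb(n_blocks, k) * (1 - p_good) ** k * p_good ** (n_blocks - k)
        for k in range(max_bad + 1)
    )

# Illustrative only: an 858 mm^2 die split into 64 compute blocks,
# ignoring non-redundant area such as I/O, SRAM, and HBM PHYs.
N_BLOCKS, BLOCK_AREA_MM2 = 64, 858 / 64
for spares in (0, 2, 4):
    y = repairable_yield(N_BLOCKS, spares, BLOCK_AREA_MM2, d0_per_cm2=0.4)
    print(f"{spares} spare blocks -> usable-die yield ≈ {y:.1%}")
```

In this toy model, even a small repair budget recovers a large share of otherwise-dead dies, which is precisely the rationale for building spare MAC blocks into a near-reticle-sized chip.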
Meanwhile, the big question is what exactly Intel Foundry will produce for Microsoft and when. Based on the latest rumors, Microsoft is currently working on a next-generation processor codenamed Braga (Maia 200?) that will use TSMC’s 3nm-class node and HBM4 memory and is expected to arrive sometime in 2026, as well as Clea (Maia 300?), expected to arrive later.