Micron Delivers Production-Ready 12-Hi HBM3E ICs for Next-Generation AI GPUs: Up to 36 GB per Stack at Speeds Over 9.2 GT/s

Micron officially announced its 12-Hi HBM3E memory stacks on Monday. The new products offer a capacity of 36 GB per stack and are designed for cutting-edge AI and HPC processors, such as Nvidia's H200 and B100/B200 GPUs.

The Micron 12-Hi HBM3E memory stacks have a capacity of 36 GB, 50% more than the previous 8-Hi versions at 24 GB. The added capacity allows data centers to run larger AI models, such as Llama 2 with up to 70 billion parameters, on a single processor, avoiding frequent CPU offloading and reducing GPU-to-GPU communication latency, which accelerates data processing.
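A back-of-the-envelope check makes the claim concrete. The sketch below assumes six stacks per package (as on an H200-class GPU) and FP16 weights at two bytes per parameter; both figures are illustrative assumptions, not from Micron's announcement:

```python
# Rough capacity check: does a 70B-parameter model fit in on-package HBM?
# Stack count (6) and FP16 weights (2 bytes/param) are assumptions for
# illustration, not specifications from the announcement.
PARAMS = 70e9          # Llama 2 70B
BYTES_PER_PARAM = 2    # FP16
STACKS = 6             # H200-class package (assumption)

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9   # ~140 GB of weights

hbm_8hi_gb = STACKS * 24    # 144 GB total with 24 GB 8-Hi stacks
hbm_12hi_gb = STACKS * 36   # 216 GB total with 36 GB 12-Hi stacks

print(f"weights: {weights_gb:.0f} GB")
print(f"8-Hi total:  {hbm_8hi_gb} GB (little headroom left)")
print(f"12-Hi total: {hbm_12hi_gb} GB ({hbm_12hi_gb - weights_gb:.0f} GB spare)")
```

Under these assumptions, the 8-Hi configuration barely clears the weights alone, while the 12-Hi configuration leaves tens of gigabytes for activations and KV cache, which is where the single-processor claim comes from.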

In terms of performance, Micron's 12-Hi HBM3E stacks deliver more than 1.2 TB/s of memory bandwidth, with per-pin data rates exceeding 9.2 Gb/s. According to the company, despite offering 50% more memory capacity, its 12-Hi HBM3E consumes less power than competing 8-Hi HBM3E stacks.
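The bandwidth figure follows from the interface width: each HBM3E stack exposes a 1024-bit data bus, so a per-pin rate of 9.2 Gb/s works out to roughly 1.2 TB/s per stack, as this small sketch shows:

```python
# Per-stack bandwidth from pin rate and bus width.
# HBM3E uses a 1024-bit data interface per stack.
PIN_RATE_GBPS = 9.2     # Gb/s per pin
BUS_WIDTH_BITS = 1024   # bits per stack

bandwidth_gb_s = PIN_RATE_GBPS * BUS_WIDTH_BITS / 8
print(f"{bandwidth_gb_s:.1f} GB/s per stack")  # 1177.6 GB/s ≈ 1.2 TB/s
```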


The Micron HBM3E 12-high includes fully programmable memory built-in self-test (MBIST) to give customers faster time-to-market and improved reliability. The technology can simulate system-level traffic at full speed, enabling thorough testing and faster validation of new systems.

Micron's HBM3E memory devices are compatible with TSMC's CoWoS packaging technology, commonly used to package AI processors such as the Nvidia H100 and H200.

“TSMC and Micron have enjoyed a long-term strategic partnership,” said Dan Kochpatcharin, head of TSMC’s Ecosystem and Alliance Management Division. “As part of the OIP ecosystem, we have worked closely together to enable Micron’s HBM3E-based system and chip-on-wafer-on-substrate (CoWoS) packaging design to support our customers’ AI innovations.”

Micron is shipping production-ready 12-Hi HBM3E units to key industry partners for qualification testing across the AI ecosystem.

Micron is already developing next-generation memory solutions, including HBM4 and HBM4E. These upcoming generations will continue to push memory performance forward, keeping Micron positioned to meet the growing demand for advanced memory in AI processors, including Nvidia GPUs based on the Blackwell and Rubin architectures.
