Micron has confirmed that it will work with TSMC to manufacture the base logic dies for its next-generation HBM4E memory, with production targeted for 2027. The announcement, made alongside the company's fiscal earnings call on September 23, adds more detail to an already crowded roadmap.
Micron is sampling early HBM4 at speeds above 11 Gb/s per pin, delivering up to 2.8 TB/s of bandwidth per stack, and has already locked in most of its HBM3E supply contracts for 2026. But the bigger news is that Micron will hand TSMC the job of manufacturing both standard and custom HBM4E logic dies, opening the door to workload-specific customization.
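As a sanity check, those two figures are consistent: per-stack bandwidth is simply interface width times per-pin data rate. A minimal sketch of the arithmetic, assuming HBM4's 2048-bit per-stack interface (the function name here is illustrative, not from Micron):

```python
def stack_bandwidth_tbps(pin_rate_gbps: float, bus_width_bits: int = 2048) -> float:
    """Per-stack bandwidth in TB/s: each pin moves pin_rate_gbps gigabits
    per second; divide by 8 for bytes and by 1000 for GB -> TB."""
    return pin_rate_gbps * bus_width_bits / 8 / 1000

# Micron's sampled HBM4: 11 Gb/s per pin on a 2048-bit interface
print(f"{stack_bandwidth_tbps(11.0):.2f} TB/s")  # ~2.82 TB/s, matching the quoted 2.8 TB/s
```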
A partly configurable subsystem
The industry is already familiar with the HBM cadence: HBM3E today, HBM4 in 2025–2026, and HBM4E around 2027, with each new generation bringing higher per-pin data rates and taller stacks. SK Hynix has already confirmed 12-high HBM4 with a full 2048-bit interface running at 10 GT/s, while Samsung is readying similar capacities on its own logic processes. Micron is sampling its own HBM4 stacks and claims more than 20% better performance than HBM3E.
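The same arithmetic shows the size of the generational jump. A rough comparison, assuming a 1024-bit HBM3E interface at 9.6 Gb/s per pin (the upper end of shipping parts, an assumption here rather than a figure from the article) against the confirmed 2048-bit HBM4 interface at 10 GT/s:

```python
# Per-stack bandwidth = interface width (bits) x pin rate (Gb/s) / 8 (bits/byte) / 1000 (GB -> TB)
hbm3e = 1024 * 9.6 / 8 / 1000    # ~1.23 TB/s, assumed top-bin HBM3E
hbm4 = 2048 * 10.0 / 8 / 1000    # ~2.56 TB/s at SK Hynix's quoted 10 GT/s
print(f"HBM3E ~{hbm3e:.2f} TB/s vs. HBM4 ~{hbm4:.2f} TB/s per stack")
```

Doubling the interface width alone roughly doubles per-stack bandwidth before any pin-speed gains, which is why HBM4 is the inflection point for the accelerators discussed below.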
HBM4E extends this roadmap, but Micron is treating it as something more. The company emphasized that the base die will be fabricated at TSMC, not in-house, and that custom logic designs will be offered to customers willing to pay a premium. By opening the base die up to customization, Micron effectively turns HBM into a semi-custom silicon subsystem. Instead of a generic interface layer, GPU vendors can ask for extra SRAM, dedicated compression engines, or tuned signal paths.
This approach mirrors what we have seen from SK Hynix, which has already described configurable base dies as part of its HBM4 strategy. Given that custom memory is both more profitable and more critical for customers trying to squeeze every watt and every cycle out of an AI accelerator, it is likely to become a lucrative market segment.
Why the timing matters for AI
The timing of Micron's HBM4E plans does not appear accidental. Both NVIDIA and AMD have next-generation GPUs arriving in 2026 that will introduce HBM4, and HBM4E lines up neatly with their successors.
NVIDIA's Rubin architecture, set to follow Blackwell in 2026, is built around HBM4. Rubin GPUs are expected to deliver about 13 TB/s of memory bandwidth and up to 288 GB of capacity, a jump from the 8 TB/s ceiling of Blackwell with HBM3E. A further platform, Rubin Ultra, is already on NVIDIA's roadmap for 2027. That platform specifically calls for HBM4E, with each GPU supporting a terabyte of memory and total rack-level bandwidth measured in petabytes per second.
AMD's trajectory is equally aggressive. Its Instinct MI400 family, expected around the same time as Rubin, also moves to HBM4. Leaks suggest up to 432 GB of HBM4 and 19.6 TB/s of bandwidth, more than double that of the current MI350. Like Rubin, the MI400 uses a chiplet design tied to ultra-wide memory, which makes HBM4 a necessity. Beyond that sits HBM4E, slated for 2027 or 2028 depending on ecosystem performance and readiness.
That timing makes Micron's partnership with TSMC particularly important. By moving the base die to a leading-edge logic process and offering customization, Micron can synchronize its roadmap with the needs of Rubin Ultra, the MI400's successors, and whatever comes next in the accelerator space.
Looking at the bigger picture, Micron's partnership with TSMC raises questions about how widely HBM4E might spread through AI data centers. Today, only the highest-end GPUs and TPUs use HBM, with most servers still relying on DDR5 or LPDDR. That could change significantly as workloads continue to balloon.
Micron has already said that its HBM customer base has grown to six, with NVIDIA among them. The company is also working with NVIDIA on bringing LPDDR to servers. The TSMC partnership suggests Micron intends to make HBM4E a broadly adopted building block of AI infrastructure, potentially establishing it as a standard memory tier for AI nodes in the second half of the decade.