SK hynix is expanding its U.S. presence with a new office in the Seattle metropolitan area, putting the world’s leading HBM supplier within minutes of Nvidia, Amazon and Microsoft.
According to industry sources cited by, among others, DigiTimes, the South Korean company has leased approximately 5,500 square feet in City Center Bellevue, east of Seattle. Despite the modest size, the location and timing make the expansion considerably more consequential than a routine regional office opening.
SK hynix sits at the center of the ongoing AI hardware boom, supplying most of the HBM used in Nvidia’s data center accelerators and increasingly serving hyperscale customers building their own AI silicon. Establishing a physical presence in the Pacific Northwest brings the company closer to the customers that power its fastest-growing and most significant business.
Getting closer to the center of AI development
SK hynix has spent the last two years transforming from a cyclical DRAM commodity supplier to a leading AI infrastructure provider. This is evident in high-bandwidth memory (HBM), where SK hynix was the first to mass-produce HBM3 memory and maintained its leadership in performance and volume as customers migrated from HBM2E. In August 2025, the company overtook Samsung in global DRAM revenue for the first time, a change largely driven by HBM shipments for Nvidia’s H100 and H200 AI accelerators.
Seattle and its surrounding suburbs have become one of the densest clusters of the AI industry ecosystem outside of Silicon Valley. Nvidia maintains a significant engineering presence in the region, Amazon’s AWS Skills Center is nearby, and Microsoft’s silicon and Azure groups are spread across Redmond and Bellevue.
HBM is not a plug-and-play component. Memory stacks are co-designed with GPUs and AI accelerators, and performance, power and reliability targets are refined through repeated rounds of co-validation. Physical proximity will enable faster iteration when issues arise, whether they are related to signal integrity, temperature or packaging tolerances.
Amazon’s recent launch of its Trainium3 AI accelerator, which integrates 144GB of HBM3E memory, shows how quickly hyperscalers are increasing their reliance on stacked memory. Microsoft and Google are following similar paths with custom accelerators of their own. Each of these programs depends on close coordination between the silicon designer and the memory vendor; the Seattle office gives SK hynix a seat at the table.
Wider localization in the U.S.
The Bellevue office is also part of a broader expansion in the U.S. that goes beyond customer service. In 2024, SK hynix announced plans to build a $4 billion advanced packaging plant in Indiana, marking its first manufacturing investment in the United States. This facility will handle advanced HBM packaging and testing, with production expected to begin in 2028. While the Indiana project is focused on manufacturing, it appears the Seattle office will be focused on R&D, application engineering and customer engagement.
Taken together, these moves suggest that SK hynix is deliberately locating more of its AI operations in the US, where most of the demand is generated. Advanced packaging has become as significant as wafer fabrication for AI accelerators, and the ability to package memory close to customers reduces both logistical complexity and geopolitical risk. It is also consistent with U.S. industrial policy aimed at securing domestic supply chains for critical technologies.
There has been occasional speculation that SK hynix could eventually build a full DRAM fab in North America, although the company has not publicly committed to such a plan. Even without a fab, expanded U.S. engineering and packaging capabilities strengthen SK hynix’s position with American customers at a time when memory supply is extremely tight, making long-term capacity planning a competitive differentiator.
Race to HBM4
SK hynix’s expansion in Seattle also reflects increasing competition in the HBM market. Samsung remains a formidable rival, with extensive manufacturing resources and its own investments in the U.S., including advanced packaging capabilities tied to its Texas operations. Micron, the only U.S.-based DRAM maker, is pursuing the high-end server and automotive markets and is sampling its own HBM4 designs, although its near-term capacity ramp is more constrained.
Meanwhile, SK hynix has already completed development of HBM4 and is believed to have supplied samples to Nvidia. Early involvement is especially significant for HBM4 because the transition brings taller stacks, tighter power budgets and more complex thermal challenges. Winning these design programs early can cement long-term supplier relationships.
Ultimately, with its Seattle expansion, SK hynix is signaling to Samsung and Micron that it intends to be embedded in the AI ecosystems of its largest customers rather than remain a distant component supplier. Physical proximity to Nvidia’s and Amazon’s engineering teams during the HBM4 transition should improve the company’s odds of retaining its leadership position, while supporting related efforts such as joint work with Nvidia on AI-optimized SSDs.
