Samsung has announced SOCAMM2, an LPDDR5X-based memory module designed specifically for AI data center platforms. The module aims to give servers the power efficiency and bandwidth advantages of LPDDR5X without the long-term compromise of soldered-down memory, while aligning the form factor with the emerging JEDEC standard for accelerated, AI-centric systems.
Samsung says it is already working with Nvidia on accelerated infrastructure built around the module, positioning SOCAMM2 as an answer to rising memory power costs, density limitations and serviceability issues in large-scale deployments.
Broadly speaking, SOCAMM2 is aimed at a specific and growing class of systems in which CPUs or CPU-GPU superchips are coupled to vast pools of system memory that must deliver high throughput at lower power than conventional server DIMMs can provide, all in a smaller footprint. As inference workloads grow and AI servers shift to persistent, continuous operation, memory energy efficiency can no longer be treated as an optional optimization; it has a significant impact on operating costs at the rack level. SOCAMM2 reflects that shift.
Why LPDDR is moving to the data center
LPDDR has long been associated with smartphones, where its low-voltage operation and aggressive power management make it an ideal fit. In servers, however, its use has been limited less by performance than by a practical problem: LPDDR is usually soldered directly to the board, which complicates large-scale hardware upgrades, repairs, and reuse. That makes it a tough sell for hyperscalers and other buyers who want to refresh memory independently of the rest of the platform.
SOCAMM2 is Samsung’s attempt to resolve this mismatch. The module uses LPDDR5X devices but packages them in a removable, compression-attached form factor designed for server deployments. Samsung highlights that SOCAMM2 offers twice the bandwidth of DDR5 RDIMMs, along with lower power consumption and a more compact footprint that simplifies board routing and cooling in dense systems. The company also emphasizes serviceability, arguing that a modular LPDDR part allows memory to be replaced or upgraded without scrapping entire boards, cutting downtime and total cost of ownership over the life of the system.
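For a rough sense of where a "twice the bandwidth" figure can come from, the short Python sketch below multiplies per-pin data rate by bus width for a hypothetical DDR5-6400 RDIMM and a hypothetical 128-bit LPDDR5X-8533 module. The speed grades and bus widths are assumptions chosen for illustration, not published SOCAMM2 specifications, and the resulting ratio shifts considerably depending on which configurations are compared.

```python
# Rough, illustrative bandwidth comparison. The speed grades and bus widths
# below are assumptions chosen for this sketch, not published SOCAMM2 specs.

def module_bandwidth_gbps(data_rate_mtps: int, bus_width_bits: int) -> float:
    """Peak module bandwidth in GB/s = transfers per second * bytes per transfer."""
    return data_rate_mtps * 1e6 * (bus_width_bits / 8) / 1e9

# Hypothetical DDR5 RDIMM: DDR5-6400 on a 64-bit data bus (ECC bits excluded).
rdimm = module_bandwidth_gbps(6400, 64)

# Hypothetical LPDDR5X module: 8533 MT/s across a 128-bit aggregate bus.
socamm = module_bandwidth_gbps(8533, 128)

print(f"DDR5-6400 RDIMM   : {rdimm:6.1f} GB/s")
print(f"LPDDR5X-8533 x128 : {socamm:6.1f} GB/s")
print(f"Ratio             : {socamm / rdimm:.1f}x")
```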
Samsung’s SOCAMM2 module is expected to comply with the JEDEC JESD328 standard for compression-attached memory modules in the CAMM2 family. The goal of this standard is to make LPDDR-based memory modules interchangeable and vendor-independent in the same way as today’s standard RDIMMs, while maintaining the signal integrity needed for LPDDR5X to operate at very high data rates. As AI racks take on ever-larger pools of memory, DDR5 incurs power and thermal costs that scale poorly with capacity. SOCAMM2 promises higher effective throughput at lower power, provided platforms are designed to accept the modular form factor.
SOCAMM2 vs. RDIMM
Understanding where SOCAMM2 fits requires looking at the full memory hierarchy in AI systems. At the top sits HBM, co-packaged with GPUs and accelerators to deliver extreme throughput at the cost of high price and limited capacity. HBM is essential for high-throughput training and inference, but it is not a general-purpose memory solution. Below it, conventional DDR5 DIMMs provide vast, relatively inexpensive capacity for CPUs, but with higher power consumption and lower bandwidth per pin.
SOCAMM2 targets this lower level. By using LPDDR5X, it can operate at lower voltages and achieve higher data rates per pin than DDR5, which translates into better bandwidth per watt for processor-attached memory. Samsung positions it as a complement to HBM rather than a competitor, bridging the gap between the accelerator’s local memory and slower, more power-hungry system memory.
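To make that hierarchy concrete, the following sketch compares rough bandwidth-per-watt figures for the three tiers. Every bandwidth and power number here is a placeholder assumption for illustration; real figures vary widely by product, speed grade, and operating conditions.

```python
# Illustrative bandwidth-per-watt comparison across the memory tiers discussed
# above. Every figure is a placeholder assumption for this sketch; actual
# numbers depend heavily on the specific product and operating conditions.

tiers = {
    # name: (peak bandwidth in GB/s, assumed memory power in W)
    "HBM3e stack (hypothetical)":     (1200.0, 30.0),
    "DDR5-6400 RDIMM (hypothetical)": (51.2, 10.0),
    "LPDDR5X SOCAMM2 (hypothetical)": (136.5, 9.0),
}

for name, (bw, power) in tiers.items():
    print(f"{name:32s} {bw:7.1f} GB/s  {power:5.1f} W  {bw / power:6.1f} GB/s per W")
```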
Samsung’s announcement states that SOCAMM2 is particularly well-suited for inference-intensive deployments, where consistent throughput and energy efficiency matter more than peak training performance. In such environments, reducing memory power can have outsized effects at the rack and data hall levels, especially since inference workloads typically run continuously rather than in bursts.
However, there is a fundamental trade-off in the SOCAMM2 design: latency. LPDDR5X achieves higher throughput and lower power through design choices that raise access latency compared to standard DDR5 DRAM. This is one of the reasons LPDDR has historically been confined to tightly controlled system designs rather than socketed server or desktop memory.
AI workloads, on the other hand, operate under a different set of constraints. The training and inference pipelines are bandwidth-constrained and highly parallel, with performance dominated by continuous data movement. In this context, the higher latencies of LPDDR5X are largely amortized, while higher transfer speeds and lower power consumption provide material benefits.
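A quick way to see why the extra latency washes out for streaming access patterns: the time to move a block is roughly first-access latency plus size divided by bandwidth, so for the large sequential transfers common in AI pipelines the bandwidth term dominates. The latency and bandwidth figures in the sketch below are assumptions for illustration, not measured DDR5 or LPDDR5X values.

```python
# Illustrative latency-amortization calculation. Latency and bandwidth figures
# are assumptions for this sketch, not measured DDR5 or LPDDR5X values.

def transfer_time_us(block_bytes: int, latency_ns: float, bandwidth_gbps: float) -> float:
    """Approximate time to stream one block: first-access latency + size / bandwidth."""
    return latency_ns / 1e3 + block_bytes / (bandwidth_gbps * 1e9) * 1e6

block = 4 * 1024 * 1024  # a 4 MiB chunk of activations or KV-cache data

ddr5  = transfer_time_us(block, latency_ns=90,  bandwidth_gbps=51.2)
lpddr = transfer_time_us(block, latency_ns=130, bandwidth_gbps=136.5)

print(f"DDR5-like    : {ddr5:6.1f} us per 4 MiB block")
print(f"LPDDR5X-like : {lpddr:6.1f} us per 4 MiB block")
# Despite tens of nanoseconds of extra latency, the higher-bandwidth module
# finishes the streaming transfer far sooner; latency matters mainly for
# small, random reads.
```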
So while modular LPDDR form factors have struggled to gain traction in consumer desktops, where interactive applications such as games are highly sensitive to memory latency, they are a more natural fit for AI systems, where bandwidth and power efficiency matter more.
Standardization, ecosystem support and open questions
One of the most important aspects of SOCAMM2 is not the module itself but its alignment with the JEDEC standard. Memory buyers are wary of proprietary form factors that lock them into a single vendor, and server platforms live or die by ecosystem support. Tying SOCAMM2 to an open specification leaves the door open for other memory and platform vendors to participate.
Micron has already publicly stated that it is sampling SOCAMM2 modules with capacities up to 192 GB, indicating that the form factor is not confined to niche configurations. High-capacity modules are indispensable if SOCAMM2 is to be taken seriously as a replacement for, or complement to, RDIMMs in AI servers, where memory capacity requirements per socket can be enormous.
Even with standardization underway, several technical questions remain open. One is thermal behavior under sustained load. LPDDR devices are power-efficient, but packing many of them into a compact module creates thermal density challenges, especially in horizontally mounted configurations. Signal integrity at the upper end of LPDDR5X data rates is another concern, particularly as platforms approach the limits of what board layouts and connectors can reliably support.
Reliability and error handling are another challenge. Enterprise buyers expect robust ECC support, telemetry, and predictable failure modes. JEDEC’s inclusion of SPD and management capabilities in the SOCAMM2 specification is intended to address this, but real-world confidence will depend on platform implementation and firmware maturity.
Finally, there is the question of cost. LPDDR5X is not inherently cheaper than DDR5, and SOCAMM2 adds new packaging and mechanical complexity. Its value proposition rests on overall system economics rather than per-module price. Lower power consumption can reduce cooling requirements and operating costs over years of deployment, and modularity can improve resource utilization by allowing memory to be reused or upgraded independently. Whether those savings outweigh any upfront premium will vary by deployment and will likely be a deciding factor in adoption.
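As a sense of scale for the operating-cost argument, the back-of-the-envelope sketch below estimates the electricity savings from a modest per-server reduction in memory power over a multi-year deployment. Every input (power delta, fleet size, PUE, electricity price) is an assumption chosen purely for illustration, not a figure from any real SOCAMM2 deployment.

```python
# Back-of-the-envelope operating-cost estimate. All inputs are illustrative
# assumptions, not measured figures for any real SOCAMM2 deployment.

power_saved_per_server_w = 40    # assumed memory power reduction per server
servers = 10_000                 # assumed fleet size
pue = 1.3                        # assumed data-center power usage effectiveness
price_per_kwh = 0.10             # assumed electricity price in USD
years = 5

hours = years * 365 * 24
energy_kwh = power_saved_per_server_w * servers * pue * hours / 1000
savings_usd = energy_kwh * price_per_kwh

print(f"Energy avoided : {energy_kwh:,.0f} kWh over {years} years")
print(f"Cost avoided   : ${savings_usd:,.0f} at ${price_per_kwh}/kWh")
```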
Ultimately, Samsung’s SOCAMM2 announcement fits a broader pattern in the data center industry: revisiting assumptions that were made when servers were built primarily for general-purpose applications. AI workloads have changed the balance between compute, memory, power and serviceability, and memory vendors are responding with form factors that would have seemed unnecessary a decade ago. SOCAMM2 does not redefine server memory on its own, but it reflects a recognition that classic DIMMs may not be the right answer for large-scale AI systems.
