
On October 23, 2025, Micron Technology, Inc. announced that its 192GB SOCAMM2 (Small Outline Compression Attached Memory Module) has officially entered the sampling phase, marking a significant step in expanding low-power memory applications across AI data centers.
Building on the industry's first LPDRAM SOCAMM introduced in March 2025, the new SOCAMM2 delivers a 50% capacity increase in the same form factor. The added capacity can cut time to first token (TTFT) in real-time AI inference workloads by more than 80%. Built on Micron's advanced 1-gamma (1γ) DRAM process, SOCAMM2 also achieves over 20% higher energy efficiency, supporting optimized power design for large-scale data center clusters. The efficiency gains are particularly notable in full-rack AI deployments, enabling configurations with more than 40TB of CPU-attached low-power DRAM. Its modular design improves serviceability and lays the groundwork for future capacity scaling.
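The headline figures can be sanity-checked with simple arithmetic. This is an illustrative sketch only: it assumes the prior SOCAMM generation topped out at 128GB (consistent with a 50% increase yielding 192GB) and reads "40TB" as decimal terabytes; neither assumption is stated in the announcement.

```python
# Illustrative arithmetic behind the announced SOCAMM2 figures.
# Assumption: the first-generation SOCAMM module was 128GB.
prev_capacity_gb = 128
socamm2_capacity_gb = prev_capacity_gb * 1.5  # announced 50% increase

# Assumption: "40TB" of CPU-attached DRAM uses decimal TB (1TB = 1000GB).
rack_target_tb = 40
modules_needed = (rack_target_tb * 1000) / socamm2_capacity_gb

print(f"SOCAMM2 module capacity: {socamm2_capacity_gb:.0f}GB")
print(f"Modules for a {rack_target_tb}TB rack: ~{modules_needed:.0f}")
```

Under these assumptions, a 40TB rack configuration works out to roughly two hundred SOCAMM2 modules.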
Leveraging a five-year collaboration with NVIDIA, Micron pioneered the use of low-power server memory in data center environments. SOCAMM2 pairs the ultra-low-power operation of LPDDR5X with high bandwidth to meet the growing demands of large-scale AI platforms. It delivers high data throughput for AI workloads, sets new standards for energy efficiency, and positions the module as a key memory solution for both AI training and inference systems.
With specialized design features and rigorous testing, SOCAMM2 elevates low-power DRAM originally developed for mobile devices into a data center-grade solution. Micron's extensive experience with high-quality data center DDR memory ensures that SOCAMM2 meets the stringent reliability and performance requirements of modern AI infrastructure.
Compared to equivalent RDIMM modules, SOCAMM2 offers more than a 66% improvement in power efficiency while occupying just one-third of the module footprint. This not only optimizes data center space but also maximizes capacity and bandwidth. Its modular architecture and innovative stacking technology improve serviceability and ease integration into liquid-cooled server designs.
As an active contributor to the JEDEC SOCAMM2 specification, Micron works closely with industry partners to advance the standard and accelerate the adoption of low-power memory in AI data centers. SOCAMM2 samples, offering up to 192GB per module at speeds of up to 9.6 Gbps, are now available, with mass production timed to customer deployment schedules.