Memory in the Age of AI Data

Posted on: 12/08/2023

The number of generative AI users is growing daily, at an unprecedented rate. This rapid adoption is driving the need for more processing power and an equally unprecedented set of memory solutions to keep pace with generative AI workloads. As cloud service providers deploy the latest CPUs, DPUs, xPUs, SmartNICs, FPGAs, and other processors to meet those demands, memory expansion is the next frontier needed to make full use of that added processing power.

Samsung’s Memory Technology Day (MTD), held October 20, 2023 in Santa Clara, California, showcased a number of advancements in memory technology that are ushering in the new era of AI computing. Among them was the commercialization and delivery of CXL memory expansion in Samsung’s memory devices.
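In practice, CXL memory expanders typically appear to system software as additional, often CPU-less, NUMA nodes rather than as a new programming interface. The short Python sketch below is a minimal illustration, assuming a Linux host where the kernel has already enumerated the expander that way: it walks sysfs to list each NUMA node, its memory size, and whether it is memory-only. The paths and the "memory-only node is probably the expander" heuristic are assumptions for illustration, not details from Samsung's presentation.

```python
# List NUMA nodes and their memory sizes on a Linux host.
# Assumption: a CXL memory expander, once enumerated by the kernel,
# shows up as an extra (often CPU-less) NUMA node alongside regular DRAM.
import glob
import os
import re

def list_numa_nodes():
    nodes = []
    for node_dir in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        node_id = int(re.search(r"node(\d+)$", node_dir).group(1))
        # MemTotal line of the per-node meminfo, reported in kB.
        mem_kb = 0
        with open(os.path.join(node_dir, "meminfo")) as f:
            for line in f:
                if "MemTotal" in line:
                    mem_kb = int(line.split()[-2])
                    break
        # A node with an empty "cpulist" is memory-only
        # (e.g. CXL-attached expansion memory).
        with open(os.path.join(node_dir, "cpulist")) as f:
            cpuless = f.read().strip() == ""
        nodes.append((node_id, mem_kb // (1024 * 1024), cpuless))
    return nodes

if __name__ == "__main__":
    for node_id, mem_gib, cpuless in list_numa_nodes():
        kind = "memory-only (possibly CXL expansion)" if cpuless else "CPU + memory"
        print(f"node{node_id}: {mem_gib} GiB, {kind}")
```

Once the expander is visible as a NUMA node, standard tools such as numactl can place allocations on it without application changes.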

Leading the charge at Memory Tech Day was Christina Day, Director of Memory Product Marketing, who discussed trends in generative AI and the impact it is having on today’s market and computing systems. She described the latest Samsung technologies developed to address memory needs in the computing space, including the announcement of the latest High Bandwidth Memory (HBM) interface, HBM3E (Shinebolt). HBM3E 8H was announced as fully developed and currently shipping to customers. Future memory innovations were also discussed, such as HBM4, Low-Latency Wide-IO (LLW), Processing-In-Memory (PIM), an LPDDR5X memory module with speeds of up to 9.6Gbps, and other near-memory solutions.
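The headline figure for any HBM generation is peak bandwidth per stack, which follows directly from the 1024-bit stack interface and the per-pin data rate. The sketch below works through that arithmetic; the 9.8 Gb/s per-pin rate used here is the figure widely reported for Shinebolt and is an assumption for illustration, not a number quoted from the presentation.

```python
# Peak bandwidth of one HBM stack: interface width (bits) x per-pin rate,
# converted to bytes per second.
def hbm_peak_bandwidth_gbs(io_width_bits: int, pin_rate_gbps: float) -> float:
    """Return peak stack bandwidth in GB/s (decimal gigabytes)."""
    return io_width_bits * pin_rate_gbps / 8.0

# HBM stacks use a 1024-bit interface; 9.8 Gb/s per pin is the commonly
# cited HBM3E (Shinebolt) figure, assumed here for illustration only.
print(hbm_peak_bandwidth_gbs(1024, 9.8))   # ~1254 GB/s, i.e. ~1.25 TB/s per stack
```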

Next up was Tae-Young Terry Oh, Executive VP and Head of DRAM Design, who covered the roadmap out to 2030 for the latest DRAM devices in the DDR family, most notably DDR5, LPDDR5X/6, LPDDR5X CAMM2, and GDDR7. He emphasized the newest technologies currently in development and some of the challenges posed by the innovative ways in which they are being designed. Roadmap highlights included the mass production of the 1b node and the completion of the industry’s first 32Gb DDR5 on that same 1b node. The roadmap calls for yearly DDR5 speed upgrades up to 8800Mbps by 2027, with 9600Mbps DDR6, and even higher rates beyond that, projected for 2028. Also noted was the High NA EUV equipment being used to support the delicate patterning required for future DRAM development.
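To put those roadmap speeds in perspective, the peak bandwidth of a single DDR5 module follows from its 64-bit data path (two 32-bit subchannels) and its transfer rate. The short calculation below is a back-of-the-envelope sketch, not a figure from the presentation, showing what the quoted 8800Mbps step would deliver per DIMM.

```python
# Peak bandwidth of one DDR5 DIMM: 64 data bits per transfer x transfer rate.
def ddr5_dimm_bandwidth_gbs(transfer_rate_mtps: int, data_bits: int = 64) -> float:
    """Return peak module bandwidth in GB/s for a given MT/s rate."""
    return transfer_rate_mtps * data_bits / 8 / 1000.0

for rate in (4800, 6400, 8800):   # today's common speeds vs. the 2027 roadmap point
    print(rate, "MT/s ->", ddr5_dimm_bandwidth_gbs(rate), "GB/s")
# 8800 MT/s works out to about 70 GB/s per module.
```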

This presentation also identified advances in LPDDR5X devices. Power-efficiency technologies continue to improve, including voltage scaling, adaptive body biasing (ABB), enhanced power gating, high-K metal gate transistors, and internal voltage control. Samsung’s leadership with the industry’s LPDDR5X Compression Attached Memory Module 2 (CAMM2) was underlined. This module brings the power efficiency and speed of LPDDR5X to client and server environments while allowing density expansion. Next up is LPDDR6, the next-generation JEDEC standard for low-power DRAM, and it is no surprise that Samsung is leading the industry in defining the specification.

Mr. Oh also pointed out that Samsung has long been the leader in graphics DRAM technology and continues to be so with the world’s first GDDR7 product, which supports 32Gbps per-pin speeds using DRAM’s first three-level pulse-amplitude-modulation (PAM-3) I/O technology.
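The gain from PAM-3 comes from packing more data into each symbol: with three signal levels, PAM-3 is generally described as carrying 3 bits across 2 symbols (1.5 bits per symbol), versus 1 bit per symbol for conventional NRZ signaling. The small sketch below illustrates that ratio and the resulting uplift at a fixed symbol rate; the symbol-rate value is an arbitrary placeholder, not a GDDR7 specification.

```python
import math

# NRZ: 2 levels -> 1 bit per symbol.
# PAM-3: 3 levels -> log2(3) ~ 1.58 bits of raw capacity per symbol;
# GDDR7-style encodings map 3 bits onto 2 symbols (1.5 bits/symbol).
nrz_bits_per_symbol = 1.0
pam3_raw_capacity = math.log2(3)     # ~1.585 bits/symbol (theoretical)
pam3_encoded = 3 / 2                 # 1.5 bits/symbol after encoding

symbol_rate_gbaud = 16.0             # illustrative symbol rate, not a GDDR7 spec value
print("NRZ  :", symbol_rate_gbaud * nrz_bits_per_symbol, "Gb/s per pin")
print("PAM-3:", symbol_rate_gbaud * pam3_encoded, "Gb/s per pin (+50% at the same symbol rate)")
```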

Kyomin Sohn, Ph.D., Master, Executive in charge of DRAM Design, emphasized the latest trends in near-memory technology. He noted that the era of CPU speeds doubling every 1.5 to 2 years has slowed due to the difficulty of CPU scaling, and that domain-specific processors such as GPUs and NPUs have emerged to address that limitation. Even with these accelerators, the Von Neumann architecture leaves a discrepancy between the speed of processors and the rate at which memory can supply data, so new high-bandwidth near-memory solutions such as HBM, LLW, and PIM have been proposed to relieve that bottleneck. These solutions place memory closer to the processor to speed up memory access, increase bandwidth, reduce latency, and drive down power consumption.
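That processor/memory discrepancy can be made concrete with a simple roofline-style check: a kernel is memory-bound whenever its arithmetic intensity (FLOPs per byte moved) falls below the ratio of compute throughput to memory bandwidth. The sketch below runs that check for a few illustrative intensities; the peak-FLOPs and bandwidth numbers are placeholders chosen only to show why higher-bandwidth, closer memory helps, not figures for any particular device.

```python
# Roofline-style check: attainable throughput is limited either by the
# processor's peak compute or by how fast memory can feed it.
def attainable_tflops(arith_intensity_flop_per_byte: float,
                      peak_tflops: float,
                      mem_bandwidth_tbps: float) -> float:
    """Min of the compute roof and the bandwidth roof (both in TFLOP/s)."""
    bandwidth_roof = arith_intensity_flop_per_byte * mem_bandwidth_tbps
    return min(peak_tflops, bandwidth_roof)

# Placeholder machine: 100 TFLOP/s of compute, 2 TB/s of memory bandwidth.
PEAK_TFLOPS, MEM_TBPS = 100.0, 2.0
ridge_point = PEAK_TFLOPS / MEM_TBPS      # 50 FLOPs/byte: below this, memory-bound

for intensity in (1, 10, 50, 200):        # FLOPs per byte moved from memory
    t = attainable_tflops(intensity, PEAK_TFLOPS, MEM_TBPS)
    bound = "memory-bound" if intensity < ridge_point else "compute-bound"
    print(f"{intensity:>4} FLOP/byte -> {t:6.1f} TFLOP/s ({bound})")
```

For the low-intensity access patterns common in generative AI inference, the bandwidth roof dominates, which is exactly the gap near-memory solutions like HBM, LLW, and PIM aim to close.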

It is clear from these presentations that Samsung has emerged as an industry leader in providing memory solutions for the generative AI era. For the remainder of this decade, Samsung plans to continue delivering faster and more efficient solutions to keep this new era of data consumption in step with industry demands.