Samsung ready for next-gen AI accelerators, HBM4 planned for 2025
High-bandwidth memory (HBM) technology has seen remarkable progress since its introduction to the market less than a decade ago. The Radeon R9 Fury and RX Vega are not very popular cards these days, but subsequent advancements have pushed HBM much further, just not for gamers.
Initially offering increased data transfer rates and expanded capacity, HBM now stands on the precipice of its most significant transformation to date, driven by the surging demand for AI workloads. A recent report hints at the arrival of next-generation HBM4 memory stacks featuring a 2048-bit memory interface, doubling the 1024-bit interface the standard has used since 2015.
A significant player in the memory chip landscape, Samsung Electronics Co., has unveiled its ambitions for the future. According to Sang Joon Hwang, executive vice president of DRAM product & technology at Samsung, the company plans to introduce sixth-generation HBM4 DRAM chips in 2025.
Later on, Samsung mass-produced HBM2E and HBM3, and has developed 9.8 gigabits-per-second (Gbps) HBM3E, which we’ll soon start sampling to customers in our drive to enrich the HPC/AI ecosystem. Looking ahead, HBM4 is expected to be introduced by 2025 with technologies optimized for high thermal properties in development, such as non-conductive film (NCF) assembly and hybrid copper bonding (HCB).
— Sang Joon Hwang, executive vice president of DRAM product & technology at Samsung
In his article on Samsung's semiconductor newsroom, Sang Joon Hwang confirmed the ongoing mass production of HBM2E and HBM3. Samsung is also gearing up to provide customer samples of HBM3E with speeds of 9.8 Gbps. Most notably, the development of HBM4 is on track for a 2025 release. While HBM4 details remain scarce, Samsung hints at innovative packaging features, such as non-conductive film assembly and hybrid copper bonding, to enhance power efficiency and thermal dissipation.
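To put the interface doubling in perspective, per-stack bandwidth scales directly with interface width and per-pin data rate. A quick back-of-the-envelope sketch, assuming (purely for illustration; Samsung has not confirmed HBM4 pin speeds) that HBM3E's 9.8 Gbps per-pin rate carried over to a 2048-bit interface:

```python
def peak_stack_bandwidth_gb_s(interface_bits: int, pin_speed_gbps: float) -> float:
    """Peak per-stack bandwidth in GB/s: width (bits) x per-pin rate (Gbps) / 8 bits-per-byte."""
    return interface_bits * pin_speed_gbps / 8

# HBM3E: 1024-bit interface at 9.8 Gbps per pin
print(peak_stack_bandwidth_gb_s(1024, 9.8))  # 1254.4 GB/s, roughly 1.25 TB/s per stack

# Hypothetical HBM4: 2048-bit interface at the same (unconfirmed) pin speed
print(peak_stack_bandwidth_gb_s(2048, 9.8))  # 2508.8 GB/s, roughly 2.5 TB/s per stack
```

Even if HBM4 shipped at a lower per-pin rate, the wider interface alone would leave substantial headroom over today's stacks.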