SK hynix HBM3 memory shown at OCP Summit 2021
ServeTheHome has spotted the next-generation high-bandwidth memory out in the wild.
SK Hynix only recently confirmed that it has developed 24GB HBM3 memory with up to 819 GB/s of bandwidth per stack. Next-generation high-performance computing products such as GPUs, and possibly CPUs, will require even more capacity and higher bandwidth, which is where HBM3 comes in.
JEDEC, the standards body responsible for HBM3, has not yet published the final specification of the new standard. SK Hynix has raised its quoted per-pin data rate from the initial 5.2 Gbps to 6.4 Gbps, but it is unclear which figure is closer to what the company plans to mass-produce for next-gen accelerators.
The module already shown is a 12-Hi design: 12 DRAM dies per stack attached to a 1024-bit interface. While the bus width per stack has not changed since HBM2, the higher per-pin data rate raises the bandwidth per stack from 461 GB/s to 819 GB/s, and the taller stack raises capacity to 24GB. A comparison chart has been put together by AnandTech:
|SK Hynix HBM Memory Comparison|HBM3|HBM2E|
|---|---|---|
|Max Bandwidth Per Pin|6.4 Gbps|3.6 Gbps|
|Number of DRAM ICs per Stack|12|8|
|Effective Bus Width|1024-bit|1024-bit|
|Bandwidth per Stack|819.2 GB/s|460.8 GB/s|
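The per-stack figures above follow directly from the per-pin data rate and the 1024-bit bus. A minimal sketch of that arithmetic (the function name is ours, not any vendor API):

```python
# Peak HBM bandwidth per stack:
# bandwidth (GB/s) = per-pin data rate (Gbps) x bus width (bits) / 8 bits-per-byte
def stack_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int = 1024) -> float:
    """Peak bandwidth of a single HBM stack in GB/s."""
    return pin_rate_gbps * bus_width_bits / 8

print(stack_bandwidth_gbs(6.4))  # HBM3:  819.2 GB/s
print(stack_bandwidth_gbs(3.6))  # HBM2E: 460.8 GB/s
```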
The AMD Instinct MI250X accelerator, announced just this Monday, offers eight HBM2E stacks clocked at 3.2 Gbps. Each provides 16GB of capacity, so the whole device offers 128 GB. Meanwhile, TSMC has already revealed its plans for CoWoS-S (Chip-on-Wafer-on-Substrate) packaging technology featuring up to 12 HBM stacks. The first products featuring this technology are now expected in 2023.
By the time such products launch, HBM3 should be widely available, so a product featuring twelve 12-Hi HBM3 stacks from SK Hynix would theoretically offer 288 GB of capacity and up to 9.8 TB/s of peak bandwidth.
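The whole-device numbers are simply the per-stack figures multiplied out. A back-of-the-envelope sketch (the helper is hypothetical, for illustration only):

```python
# Capacity and peak bandwidth for a multi-stack HBM package.
def device_totals(stacks: int, gb_per_stack: int, pin_rate_gbps: float,
                  bus_width_bits: int = 1024) -> tuple[int, float]:
    """Return (capacity in GB, peak bandwidth in TB/s) for the whole device."""
    capacity_gb = stacks * gb_per_stack
    bandwidth_tbs = stacks * pin_rate_gbps * bus_width_bits / 8 / 1000
    return capacity_gb, bandwidth_tbs

# Hypothetical 12-stack, 12-Hi HBM3 package (24 GB and 6.4 Gbps per stack):
print(device_totals(12, 24, 6.4))  # 288 GB, ~9.83 TB/s
# AMD Instinct MI250X for comparison (8 HBM2E stacks, 16 GB, 3.2 Gbps):
print(device_totals(8, 16, 3.2))   # 128 GB, ~3.28 TB/s
```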
Source: ServeTheHome via Andreas Schilling