JEDEC Extends HBM2 Standard to 24GB, 307GB/s Bandwidth Per Stack
Ever since it debuted in 2015, High Bandwidth Memory (HBM) has promised to deliver huge amounts of RAM bandwidth at power consumption levels that traditional GDDR arrays can't match. It has mostly kept that promise, albeit at pricing that has put it out of reach for most consumer GPUs and limited its applicability to the high end of the market. JEDEC has just announced an extension to the existing HBM2 standard that increases both overall density and transfer speed, though it's not clear whether that will be enough to spark further adoption in GPUs.
According to the new standard, HBM2 now supports up to 24GB per stack in a 12-Hi arrangement. Previously, HBM2 topped out at 16GB per stack in an 8-Hi arrangement (AMD's 7nm Vega-derived Radeon Instinct MI60 offers 32GB of HBM2 memory in four stacks, with a 4096-bit bus in total and 1TB/s of memory bandwidth). The maximum transfer rate has also been increased, from 2Gbps per pin to 2.4Gbps; across a stack's 1024-bit interface, that works out to 307GB/s of bandwidth per stack, up from 256GB/s. A 7nm Vega equipped with this RAM would, therefore, hit 1.23TB/s of memory bandwidth. That's not too shabby by any measure, and a far cry from where we were just a few years ago.
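For those who want to check the math: each HBM2 stack exposes a 1024-bit interface, so per-stack bandwidth is simply the per-pin rate multiplied by the bus width and divided by 8 bits per byte. Here's a minimal Python sketch (the function name is ours, purely for illustration):

# Back-of-the-envelope HBM2 bandwidth math. The 1024-bit per-stack
# interface is part of the HBM2 spec; the function name is illustrative.
def stack_bandwidth_gb_s(pin_rate_gbps, bus_width_bits=1024):
    """Per-stack bandwidth in GB/s: pin rate (Gb/s) x bus width / 8 bits per byte."""
    return pin_rate_gbps * bus_width_bits / 8

old = stack_bandwidth_gb_s(2.0)  # 256.0 GB/s per stack at the old 2.0Gbps/pin
new = stack_bandwidth_gb_s(2.4)  # 307.2 GB/s per stack at the new 2.4Gbps/pin

print(f"Per stack: {old:.1f} -> {new:.1f} GB/s")
# Four stacks, as on the Radeon Instinct MI60's 4096-bit bus:
print(f"Four stacks: {4 * new / 1000:.2f} TB/s")  # ~1.23 TB/s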
It's not clear, however, how much runway HBM2 has outside these specialized markets. In a recent evaluation of the future HBM3 standard for exascale computing, the Exascale Computing Project found that taking advantage of HBM3's increased bandwidth (expected to double over HBM2) will require updates and improvements to the simulated Knights Landing architecture the project used for its estimates. This extension to the existing HBM2 standard delivers some of those gains early, though the final version of HBM3 is expected to double available bandwidth, not just improve it by 20 percent as this update does. Still, the findings suggest that chips will need further rearchitecting to take advantage of the massive bandwidth HBM makes available.