JEDEC Extends HBM2 Standard to 24GB, 307GB/s Bandwidth Per Stack

Ever since it debuted in 2015, High Bandwidth Memory (HBM) has promised to deliver huge amounts of RAM bandwidth at power consumption levels that traditional GDDR arrays can’t match. It’s mostly kept that promise, albeit at prices that have kept it out of reach for most consumer GPUs and limited its applicability to the high end of the market. JEDEC has just announced an extension to the existing HBM2 standard that increases both overall density and speed, though it’s not clear if it’ll be enough to spark further adoption in GPUs.

According to the new standard, HBM2 now supports up to 24GB per stack in a 12-Hi arrangement. Previously, HBM2 topped out at 16GB per stack in an 8-Hi arrangement (AMD’s 7nm Vega-derived Radeon Instinct MI60 offers 32GB of HBM2 memory in four stacks, with a 4096-bit bus total and 1TB/s of memory bandwidth). The maximum transfer rate for HBM2 has also been increased, from 2Gbps per pin to 2.4Gbps, which works out to a total per-stack bandwidth of 307GB/s, up from 256GB/s. A 7nm Vega equipped with this RAM would therefore hit 1.23TB/s of memory bandwidth. That’s not too shabby by any measure, and a far cry from where we were just a few years ago.
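For the curious, here’s how those numbers shake out. This is a minimal back-of-the-envelope sketch in Python; the 1,024-bit per-stack interface is part of the HBM2 standard, and the four-stack configuration mirrors the MI60 example above.

```python
# Back-of-the-envelope HBM2 bandwidth math for the JEDEC update described above.
PINS_PER_STACK = 1024  # each HBM2 stack exposes a 1,024-bit interface

def stack_bandwidth_gbs(pin_rate_gbps: float) -> float:
    """Per-stack bandwidth in GB/s: pin rate (Gb/s) x pin count, bits -> bytes."""
    return pin_rate_gbps * PINS_PER_STACK / 8

old = stack_bandwidth_gbs(2.0)  # 256.0 GB/s per stack (previous ceiling)
new = stack_bandwidth_gbs(2.4)  # 307.2 GB/s per stack (updated spec)

print(f"Per stack: {old:.1f} -> {new:.1f} GB/s ({new / old - 1:.0%} uplift)")
# Four stacks = 4,096-bit bus, as on the Radeon Instinct MI60.
print(f"Four stacks: {4 * new / 1000:.2f} TB/s")
```

Running this prints a 20 percent per-stack uplift and 1.23TB/s across four stacks, matching the figures above.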

It’s not clear, however, how much runway HBM2 has outside these specialized markets. In a recent evaluation of the future HBM3 standard for exascale computing, the Exascale Computing Project found that taking full advantage of HBM3 will require updates and improvements over the simulated Knights Landing architecture the project used to estimate the value of HBM3’s increased bandwidth (expected to double over HBM2). This extension to the existing HBM2 standard delivers some of those gains early, though the final version of HBM3 is expected to double available bandwidth, not just improve it by 20 percent. Still, the findings suggest that chips will need further rearchitecting to take advantage of the massive bandwidth HBM makes available.
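To put those two uplifts side by side, here is a rough sketch; the HBM3 figure simply assumes the expected doubling of HBM2’s 256GB/s per-stack baseline and is not a finalized spec.

```python
# Rough per-stack comparison: baseline HBM2, this HBM2 extension, and HBM3
# if it doubles baseline HBM2 as expected (an assumption, not a final spec).
hbm2, hbm2_ext = 256.0, 307.2   # GB/s per stack
hbm3_expected = 2 * hbm2        # ~512 GB/s per stack if the doubling holds

print(f"HBM2 extension uplift over baseline: {hbm2_ext / hbm2 - 1:.0%}")
print(f"Expected HBM3 uplift over baseline:  {hbm3_expected / hbm2 - 1:.0%}")
```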

Samsung’s diagram for its own HBM2 design.
