JEDEC Extends HBM2 Standard to 24GB, 307GB/s Bandwidth Per Stack

Ever since it debuted in 2015, High Bandwidth Memory (HBM) has promised to deliver huge amounts of RAM bandwidth at power consumption levels that traditional GDDR arrays can’t match. It’s mostly kept that promise, albeit at prices that have kept it out of most consumer GPUs and limited its use to the high end of the market. JEDEC has just announced an extension to the existing HBM2 standard that increases both density and transfer speed, though it’s not clear whether that will be enough to spark further adoption in GPUs.
According to the new standard, HBM2 now supports up to 24GB per stack in a 12-Hi arrangement. Previously, HBM2 topped out at 16GB per stack in an 8-Hi arrangement (AMD’s 7nm Vega-derived Radeon Instinct MI60 offers 32GB of HBM2 in four stacks, with a 4,096-bit bus and 1TB/s of total memory bandwidth). The maximum transfer rate has also been raised from 2Gbps per pin to 2.4Gbps, which, across a stack’s 1,024-bit interface, works out to 307GB/s per stack, up from 256GB/s. A 7nm Vega equipped with this RAM would therefore hit 1.23TB/s of memory bandwidth — not too shabby, by any stretch of the imagination, and a far cry from where we were just a few years ago.
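
The arithmetic is simple enough to check yourself. Here's a quick back-of-the-envelope calculation (a minimal Python sketch; the 1,024-bit per-stack interface and four-stack configuration are the figures cited above):

```python
# Back-of-the-envelope HBM2 bandwidth check, using the figures from the article.
PINS_PER_STACK = 1024   # each HBM2 stack exposes a 1,024-bit interface
BITS_PER_BYTE = 8

def stack_bandwidth_gbs(pin_rate_gbps: float) -> float:
    """Per-stack bandwidth in GB/s, given the per-pin transfer rate in Gbps."""
    return pin_rate_gbps * PINS_PER_STACK / BITS_PER_BYTE

old = stack_bandwidth_gbs(2.0)   # original HBM2 spec
new = stack_bandwidth_gbs(2.4)   # updated spec

print(f"Per stack: {old:.0f} GB/s -> {new:.1f} GB/s")                # 256 -> 307.2
print(f"Four stacks (7nm Vega-style): {4 * new / 1000:.2f} TB/s")    # ~1.23 TB/s
```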

It’s not clear, however, how much runway HBM2 has outside these specialized markets. In a recent evaluation of the future HBM3 standard for exascale computing, the Exascale Computing Project found that taking advantage of HBM3’s increased bandwidth (expected to double over HBM2) will require updates and improvements over the simulated Knights Landing architecture the project used for its estimates. This extension to HBM2 delivers some of those gains early, though the final version of HBM3 is expected to double available bandwidth, not just improve it by 20 percent. Still, the findings suggest that chips will need further rearchitecting to take advantage of the massive bandwidth HBM makes available.
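
To put that "some of those gains early" point in rough numbers (a sketch only; the HBM3 figure assumes the projected doubling over baseline HBM2, which is not yet final):

```python
# Rough per-stack bandwidth comparison in GB/s. HBM3 is a projection, not a finished spec.
baseline_hbm2 = 256.0                 # 2.0 Gbps/pin x 1,024 bits / 8
extended_hbm2 = 307.2                 # 2.4 Gbps/pin, this JEDEC update
projected_hbm3 = 2 * baseline_hbm2    # "expected to double over HBM2"

print(f"HBM2 extension gain: {extended_hbm2 / baseline_hbm2 - 1:.0%}")   # ~20%
print(f"Projected HBM3 gain: {projected_hbm3 / baseline_hbm2 - 1:.0%}")  # 100%
```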
