Samsung Claims It Could Double HBM2 Manufacturing, Fail to Meet Demand
On Tuesday at ISC 2018, Samsung discussed its Aquabolt HBM2 technology and made a rather unusual claim about demand for its high-end memory standard. According to the company, even if it doubled its manufacturing capacity for HBM2 today, it still wouldn’t be able to meet existing demand for the standard.
This would seem to convey two different things about HBM2. On the one hand, it implies that demand for HBM2 is robust across the market. On the other, it implies that HBM2 remains so difficult to manufacture, or represents such a tiny percentage of Samsung’s overall manufacturing capability, that even doubling the amount of HBM2 memory it builds wouldn’t really move the needle on meeting market need. Neither of those statements says much good about the chances of seeing HBM2 on consumer graphics cards, and indeed, the focus for the memory technology really doesn’t seem to be the consumer GPU market.
Samsung could manufacture 2x the HBM2 and it would still not be enough to satisfy market demand. No wonder it’s so expensive! #ISC18 pic.twitter.com/QoF4EtMasW
— Glenn K. Lockwood (@glennklockwood) June 25, 2018
Samsung is advertising Aquabolt as being capable of delivering up to 307GBps per stack in 8GB capacities, which would put a four-stack configuration like the one AMD used on the Radeon R9 Fury X at well over 1TBps of aggregate memory bandwidth. To put that in additional perspective, a single Aquabolt HBM2 stack can provide more memory bandwidth than a GTX 1070 or any AMD GPU in the RX 500 family. It’s also far more bandwidth per stack than AMD specifies for its Vega 64, which offers 484GB/s of bandwidth across two stacks, or 242GB/s per stack.
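As a quick sanity check of those figures: HBM2 per-stack bandwidth is just pin speed times the 1,024-bit stack interface, divided by 8 bits per byte. Assuming Aquabolt's advertised 2.4Gbps-per-pin transfer rate, the arithmetic can be sketched as:

```python
# Sanity-check of the HBM2 bandwidth figures quoted above.
# Per-stack bandwidth (GB/s) = pin speed (Gbps) * interface width (bits) / 8.

def hbm2_stack_bandwidth(pin_speed_gbps: float, bus_width_bits: int = 1024) -> float:
    """Per-stack bandwidth in GB/s for an HBM2 stack at the given pin speed."""
    return pin_speed_gbps * bus_width_bits / 8

# Aquabolt runs its pins at 2.4Gbps across a 1,024-bit interface.
aquabolt = hbm2_stack_bandwidth(2.4)
print(aquabolt)        # 307.2 GB/s per stack
print(aquabolt * 4)    # 1228.8 GB/s for a Fury X-style four-stack configuration
print(484 / 2)         # 242.0 GB/s per stack on Vega 64 (484GB/s over two stacks)
```

The four-stack total of roughly 1.23TB/s is where the "well over 1TBps" figure comes from.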
The impact of increasing HBM2 adoption in GPUs is one of the bigger puzzles in the larger GPU market. Years ago, it seemed as if HBM would begin a straightforward process of replacing GDDR in high-end GPUs before eventually waterfalling into at least midrange cards. Back in 2014-2015, we predicted that HBM would debut at the high end with Fury X, that Vega would push HBM2 into the midrange, and that by HBM3 or so we’d see it replacing GDDR on all but the lowest-end cards. This migration path roughly parallels the earlier adoption of GDDR3 and GDDR5, with each memory type initially debuting only at the high end of the market before rolling out across entire product families.
This has not occurred. Instead, HBM2 remains isolated to AMD’s top-end cards and is absent from Nvidia’s consumer lineup entirely. None of the rumors we hear about Nvidia’s next-generation GPUs suggest they’ll be adopting HBM2, either. One could argue that AMD’s need for HBM2 was partially driven by higher power consumption in its Polaris and Vega classes of GPUs than the company might have preferred — which opens the door for a return to more standard memory types, even at the highest end, for both companies. But as of right now, HBM2 seems like a genuine success story — just one with only limited potential in the consumer market.