Samsung Claims It Could Double HBM2 Manufacturing and Still Fail to Meet Demand

On Tuesday at ISC 2018, Samsung discussed its Aquabolt HBM2 technology and made a rather unusual claim about demand for its high-end memory standard. According to the company, even if it doubled its manufacturing capacity for HBM2 today, it still wouldn’t be able to meet existing demand for the standard.
This would seem to convey two different things about HBM2. On the one hand, it implies that demand for HBM2 is robust across the market. On the other, it implies that HBM2 remains so difficult to manufacture, or represents such a tiny percentage of Samsung’s overall manufacturing capability, that even doubling the amount of HBM2 memory it builds wouldn’t move the needle much as far as answering market need. Neither of those statements says much good about the chances of seeing HBM2 on consumer graphics cards, and indeed, the focus for the memory technology really doesn’t seem to be on the consumer GPU market.
Samsung could manufacture 2x the HBM2 and it would still not be enough to satisfy market demand. No wonder it’s so expensive! #ISC18
— Glenn K. Lockwood (@glennklockwood) June 25, 2018
Samsung is advertising Aquabolt as capable of delivering up to 307GB/s per 8GB stack, which would put a four-stack configuration like the one AMD used on the original Radeon R9 Fury X at well over 1TB/s of aggregate memory bandwidth. To put that in additional perspective, a single Aquabolt HBM2 stack provides more memory bandwidth than a GTX 1070 or any AMD GPU in the RX 500 family. It’s also far more bandwidth per stack than AMD specifies for its Vega 64, which offers 484GB/s of bandwidth across two stacks, or 242GB/s per stack.
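The per-stack figure falls out of simple arithmetic: each HBM2 stack presents a 1024-bit interface, and Aquabolt runs at 2.4Gb/s per pin. A minimal sketch of that math, assuming those published interface figures:

```python
# HBM2 bandwidth arithmetic, assuming a 1024-bit interface per stack
# and Aquabolt's advertised 2.4 Gb/s per-pin transfer rate.

INTERFACE_BITS = 1024   # HBM2 interface width per stack, in bits
PIN_RATE_GBPS = 2.4     # Aquabolt per-pin rate, gigabits per second

# Divide by 8 to convert gigabits to gigabytes.
per_stack_GBps = INTERFACE_BITS * PIN_RATE_GBPS / 8   # 307.2 GB/s
four_stack_GBps = per_stack_GBps * 4                  # Fury X-style layout

print(per_stack_GBps)    # 307.2
print(four_stack_GBps)   # 1228.8
```

The same formula recovers the Vega 64 figure: its stacks run at roughly 1.89Gb/s per pin, giving 242GB/s per stack and 484GB/s across two.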
The impact of increasing HBM2 adoption in GPUs is one of the bigger puzzles in the larger GPU market. Years ago, it seemed as if HBM would begin a straightforward process of replacing GDDR in high-end GPUs, before eventually waterfalling into at least midrange cards. Back in 2014-2015, we predicted that Fury X would use HBM only at the high-end, Vega would push HBM2 into the midrange, and by HBM3 or so we’d see it replacing GDDR on all but the lowest-end cards. This migration path roughly parallels the previous adoption of GDDR5 or GDDR3, with the memory initially debuting only at the high-end of the market before rolling out across entire families.
This has not occurred. Instead, HBM2 remains isolated to AMD’s top-end cards and is absent from Nvidia’s consumer lineup entirely. None of the rumors we hear about Nvidia’s next-generation GPUs suggest they’ll be adopting HBM2, either. One could make an argument that AMD’s need for HBM2 was partially driven by higher power consumption in its Polaris and Vega classes of GPUs than the company might have preferred — which opens the door for a return to more standard memory types, even at the highest end, for both companies. But as of right now, HBM2 seems like a genuine success story with only limited potential in the consumer market.