HBM2 has had a difficult 12 months. The memory standard hasn’t followed the path of its predecessors. If it had, we should’ve already seen widespread rollouts from AMD and Nvidia across midrange and high-end products. Instead, AMD’s Vega 56 and 64 are currently the only consumer GPUs to carry the memory standard at all.
One of the rumors that dogged HBM2 last year was that clock speeds were well below what was targeted. Indeed, neither of AMD’s GPUs hit the expected 2Gbps per-pin signal rates (Hynix added 2.0Gbps speeds to its HBM2 offerings in August 2016, then removed them in 2017). Samsung, however, may have found a solution to this problem, and plans to introduce new HBM2 clocked as high as 2.4Gbps per pin.
The new HBM2, codenamed Aquabolt, would be clocked 1.2x higher than the previous HBM2 standard and 1.5x higher than the RAM on AMD’s RX Vega 56. The other major advantage of higher per-pin bandwidth is that it theoretically reduces the need for additional HBM2 stacks.
This capability could be more important than you might think. Rumors have suggested that the manufacturing difficulties with HBM2 have centered around the interposer — specifically, the difficulty of aligning and connecting the through-silicon vias that run through the memory stacks and into the interposer itself before connecting with the GPU. Some of you may remember the following document from AMD’s various discussions of HBM.
That image only shows one HBM stack and its relation to an associated SoC die. AMD’s GPUs field two HBM2 stacks, and up to four are possible. Samsung’s 2.4Gbps per-pin signaling isn’t enough for a single-stack Vega 56 to match the current two-stack model, but a single 1024-bit HBM2 link at 2.4Gbps would offer 307.2GB/s of memory bandwidth, 1.2x the 256GB/s AMD’s RX 580 delivers.
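The bandwidth figures above all fall out of one simple relationship: per-pin signaling rate times the 1024-bit stack interface, divided by eight to convert bits to bytes. A minimal sketch of that arithmetic (the function name is illustrative, not from any vendor API):

```python
def hbm2_stack_bandwidth(per_pin_gbps: float, bus_width_bits: int = 1024) -> float:
    """Peak bandwidth of one HBM2 stack in GB/s.

    Each HBM2 stack exposes a 1024-bit interface; multiply the per-pin
    rate by the bus width, then divide by 8 bits per byte.
    """
    return per_pin_gbps * bus_width_bits / 8

# Aquabolt at 2.4Gbps per pin: one stack delivers 307.2GB/s
print(hbm2_stack_bandwidth(2.4))        # 307.2

# First-generation 1.6Gbps HBM2 (as on Vega 56): 204.8GB/s per stack,
# so Vega 56's two stacks total 409.6GB/s
print(hbm2_stack_bandwidth(1.6) * 2)    # 409.6
```

This is why a single 2.4Gbps stack (307.2GB/s) still falls short of Vega 56's current two-stack configuration (409.6GB/s), even though it comfortably beats the RX 580's 256GB/s.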
Samsung has also managed to hit these increased clock speeds without increasing HBM2’s voltage. The company’s first-generation HBM2 could run at 1.6Gbps at 1.2V or 2.0Gbps at 1.35V. These second-generation chips can run at 2.4Gbps at 1.2V. Samsung’s new chips use unspecified techniques related to TSV design and thermal control to connect the 5,000 TSVs per die and limit clock skew. The company has also added more thermal bumps between the HBM2 dies to improve heat dissipation, along with a protective layer at the bottom of the HBM2 stack to improve physical strength.
Samsung did not give a ship date for its new Aquabolt chips, but did note Aquabolt is already in mass production. It’ll take a while for new GPU designs to ship that use HBM2, but it’s possible the beleaguered standard has finally turned the corner.