Samsung is throwing its hat into the GDDR6 ring and joining Micron in ramping the new memory technology for upcoming GPU products. It’s not a surprising move, but it does suggest that GDDR6 will be more widely adopted than its predecessor, GDDR5X.
Samsung is touting the new memory as being built on a 10nm* process at double the density of its previous RAM. 16Gbit chips mean up to 2GB of RAM per GDDR6 chip. This also clears the way for much larger RAM capacities on GPUs over time, though I doubt we’ll see many 24GB GPUs in the near future. Even advanced 4K titles with HDR and other bells and whistles don’t push that kind of envelope (for now).
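The capacity math is straightforward. A minimal sketch (the 32-bit-per-chip channel width is a standard GDDR assumption, not stated in the article):

```python
BITS_PER_BYTE = 8

def chip_capacity_gb(density_gbit):
    """Capacity of one memory chip in GB: density in gigabits divided by 8."""
    return density_gbit / BITS_PER_BYTE

def total_vram_gb(density_gbit, bus_width_bits, bits_per_chip=32):
    """Total VRAM when each 32-bit channel of the bus carries one chip."""
    chips = bus_width_bits // bits_per_chip
    return chips * chip_capacity_gb(density_gbit)

print(chip_capacity_gb(16))     # 2.0 -> 2GB per 16Gbit chip
print(total_vram_gb(16, 384))   # 24.0 -> 24GB on a hypothetical 384-bit bus
```

A 384-bit card populated with these chips is where the 24GB figure comes from.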
One potential advantage of the GDDR6 push is that we should finally see 2GB cards dropping off the map this generation. With even Intel now fielding 4GB of VRAM on its hardware with integrated Radeon graphics, hopefully we’ll see a shift to larger RAM buffers across the board.
Samsung is claiming its GDDR6 can scale up to 72GB/s per channel (18Gbps per pin), which is more than twice as fast as the old GDDR5 standard and its 8Gbps-per-pin performance. This ignores GDDR5X, of course, but since Samsung never built that type of RAM it can get away with skipping it as a point of comparison. Early GPUs are likely to opt for lower-clocked RAM, but a 72GB/s channel transfer rate is impressive: at full speed, a 256-bit GPU could hit 576GB/s of memory bandwidth. The next generation of midrange cards from AMD and Nvidia should be potent competitors for this reason alone.
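The bandwidth figures follow from one formula: per-pin data rate times bus width, divided by 8 to convert bits to bytes. A quick sketch of the numbers above:

```python
def bandwidth_gb_s(pin_rate_gbps, bus_width_bits):
    """Aggregate memory bandwidth in GB/s from per-pin rate (Gbps) and bus width (bits)."""
    return pin_rate_gbps * bus_width_bits / 8

print(bandwidth_gb_s(18, 32))    # 72.0 -> GB/s per 32-bit channel (one chip)
print(bandwidth_gb_s(18, 256))   # 576.0 -> GB/s on a 256-bit GPU
print(bandwidth_gb_s(8, 256))    # 256.0 -> GB/s for GDDR5 at 8Gbps, for comparison
```

The same formula shows why the midrange comparison is striking: 576GB/s on a 256-bit bus exceeds what 384-bit GDDR5 cards manage today.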
The fact that Samsung is picking up GDDR6 also suggests robust demand from multiple companies. GDDR5X was an Nvidia-Micron play that never saw wider adoption in the market. With multiple companies ramping GDDR6, the standard is clearly positioned for broader uptake.
One big question is how GDDR6 will compare with HBM2 at high clocks and wide channels. HBM2’s higher cost and more difficult manufacturing process are balanced, to an extent, by its significantly lower power consumption and smaller, simpler board layouts. This allows for GPUs like AMD’s Radeon R9 Nano, and improves overall efficiency. If GDDR6 can match or approach these outcomes, we may see HBM2 shrink back or vanish altogether. On the other hand, Samsung is touting its 2.4Gbps HBM2 stack as well, which would give GPUs based on it a substantial performance kick of their own.
Samsung claims its new standard offers a 35 percent improvement in power consumption compared with GDDR5, with 30 percent higher yields per wafer on GDDR6 compared with GDDR5 thanks to smaller process geometries. There’s no word on product introductions, but 2018 launches seem likely.
* – Samsung calls its 10nm manufacturing for GDDR6 “10nm-class” rather than “10nm.” The term denotes a process node between 10nm and 19nm, which is to say, it doesn’t define a process node at all. “10nm-class” and equivalent terms from all vendors should be considered fictional labels, not meaningful product classifications.