Samsung’s New GDDR6W Graphics Memory Rivals HBM2


In the past, chip companies such as AMD have dabbled in High-Bandwidth Memory (HBM) instead of GDDR to increase memory bandwidth for GPUs. This vertically stacked memory boasts incredible bandwidth, but it’s a costly endeavor. AMD abandoned it in favor of GDDR memory after its ill-fated R9 Fury and Vega GPUs. Now Samsung has created a new type of GDDR6 memory it says is almost as fast as HBM without needing an interposer. Samsung calls GDDR6W its first “next-generation” DRAM technology and claims it will enable more realistic metaverse experiences.

Samsung took its existing GDDR6 platform and rebuilt it with Fan-Out Wafer-Level Packaging (FOWLP). With this technology, the memory die is mounted directly on a silicon wafer instead of a printed circuit board (PCB). Redistribution layers are fanned out around the chip, allowing for more contacts and better heat dissipation. Memory chips are also double-stacked. Samsung says this has allowed it to increase bandwidth and capacity in the exact same package footprint as before. Since the footprint is unchanged, its partners can drop GDDR6W into existing and future designs without modification, which should theoretically reduce manufacturing time and costs.


The new memory offers double the I/O and bandwidth of GDDR6. Using its existing 24Gb/s GDDR6 as a baseline, Samsung says the GDDR6W version has twice the I/O thanks to the additional contact points. It also doubles capacity from 16Gb to 32Gb per chip. The FOWLP package is just 0.7mm tall, 36 percent lower than Samsung’s standard GDDR6 package. Even with I/O and bandwidth doubled, Samsung says it has the same thermal properties as existing GDDR6 designs.

Samsung says these advancements allow its GDDR6W design to compete with HBM2. It notes that second-generation HBM2 offers 1.6TB/s of bandwidth, with GDDR6W coming close at 1.4TB/s. However, Samsung’s figure assumes a 512-bit wide memory bus with 32GB of memory, which isn’t something found in current GPUs. Both the Nvidia RTX 4090 and the Radeon RX 7900 XTX use a 384-bit wide memory bus and offer just 24GB of memory; AMD uses GDDR6 while Nvidia has opted for the G6X variant made by Micron. Both cards deliver around 1TB/s of memory bandwidth, so a 512-bit GDDR6W configuration would comfortably outpace them.
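For reference, these bandwidth figures follow from a simple formula: total bandwidth is the bus width in bits multiplied by the per-pin data rate, divided by eight to convert bits to bytes. A minimal sketch (the 22Gb/s and 21Gb/s per-pin rates are the figures cited in this article; the function name is our own):

```python
def memory_bandwidth_gbs(bus_width_bits: int, per_pin_gbps: float) -> float:
    """Total memory bandwidth in GB/s: bus width (bits) x per-pin rate (Gb/s) / 8."""
    return bus_width_bits * per_pin_gbps / 8

# Samsung's quoted GDDR6W configuration: 512-bit bus at 22 Gb/s per pin
print(memory_bandwidth_gbs(512, 22))  # 1408.0 GB/s, i.e. ~1.4 TB/s

# RTX 4090: 384-bit bus with 21 Gb/s GDDR6X
print(memory_bandwidth_gbs(384, 21))  # 1008.0 GB/s, i.e. ~1 TB/s
```

This also makes clear why the 1.4TB/s headline number depends on the hypothetical 512-bit bus: on today’s 384-bit cards, the same 22Gb/s pins would yield around 1.06TB/s.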

The big news here is that thanks to Samsung’s chip-stacking, half as many memory chips are needed to reach a given capacity as with current packaging, which could reduce manufacturing costs. Meanwhile, its maximum transmission rate of 22Gb/s per pin is only slightly ahead of GDDR6X’s 21Gb/s, so future gains will likely come in memory capacity rather than raw speed. You could argue nobody needs a GPU with 48GB of memory, but perhaps when we’re gaming at 16K that’ll change.

As far as products go, Samsung says it will introduce GDDR6W soon in small form factor devices such as notebooks. It’s also working with partners to bring it to AI accelerators and similar hardware. It’s unclear whether AMD or Nvidia will adopt it, but if they do, it will likely be far in the future: both companies are already manufacturing their current boards around GDDR6/G6X, so we doubt they’d switch until a new architecture arrives.
