Samsung Flashbolt HBM2 Is 33 Percent Faster, Doubles Capacity

Samsung has announced its latest version of HBM2 (High Bandwidth Memory) with a sharp increase in capacity and overall performance. It’s likely a response to the advent of GDDR6, at least in part — the gap between the two memory standards has shrunk significantly.
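The headline figures are easy to sanity-check from public HBM2 numbers. A minimal sketch, assuming the standard 1,024-bit-per-stack HBM2 interface and the reported per-pin transfer rates (3.2Gbps for Flashbolt versus 2.4Gbps for the prior-generation Aquabolt — both assumptions drawn from vendor announcements, not from this article):

```python
# Back-of-the-envelope check on "33 percent faster":
# per-stack bandwidth = bus width x per-pin speed / 8 bits per byte.

HBM2_BUS_WIDTH_BITS = 1024  # per stack, per the HBM2 standard

def stack_bandwidth_gbs(pin_speed_gbps: float) -> float:
    """Per-stack bandwidth in GB/s for a given per-pin data rate."""
    return HBM2_BUS_WIDTH_BITS * pin_speed_gbps / 8

flashbolt = stack_bandwidth_gbs(3.2)  # 409.6 GB/s per stack
aquabolt = stack_bandwidth_gbs(2.4)   # 307.2 GB/s per stack

print(f"Flashbolt: {flashbolt} GB/s, Aquabolt: {aquabolt} GB/s")
print(f"Ratio: {flashbolt / aquabolt:.2f}x")  # ~1.33x, i.e. 33 percent faster
```

The "doubles capacity" claim follows the same pattern: moving from 8Gb to 16Gb DRAM dies takes an eight-die stack from 8GB to 16GB.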

“Flashbolt’s industry-leading performance will enable enhanced solutions for next-generation data centers, artificial intelligence, machine learning, and graphics applications,” said Jinman Han, senior vice president of Memory Product Planning and Application Engineering Team at Samsung Electronics. “We will continue to expand our premium DRAM offering, and improve our ‘high-performance, high capacity, and low power’ memory segment to meet market demand.”

Should We Ever Expect to See HBM in Mainstream Cards?

Up until now, HBM has been confined to the upper end of the PC market. In theory, it could move further down the stack at 7nm, but I don't think we're going to see HBM in mainstream consumer cards in the 7nm generation. HBM-based cards are trickier to manufacture because routing thousands of wires through multiple layers of DRAM is a non-trivial process. Even AMD acknowledged that HBM only delivered an economic and power-consumption advantage at a specific point in its own product stack. Below a certain RAM loadout and TDP, GDDR5 made more sense, even for Team Red.

With GDDR6 having replaced GDDR5, I suspect HBM will be confined back to the very top of the market — and that AMD may have opted to use it as a method of differentiating its 7nm professional and consumer product lines, much like Nvidia has. I say “may have” because those strategy decisions have already been made, even if we don’t know what they are yet.

Samsung is unlikely to have any problems either way. HBM2 has been of great interest to AI and ML companies as an alternative, high-bandwidth memory architecture for emerging compute applications. Regardless of what happens with GPUs, the memory standard should have a bright future ahead of it.
