Sapphire Rapids CPU Leak: Up to 56 Cores, 64GB of Onboard HBM2

AMD has spent the last few years challenging Intel across the desktop, server, and mobile markets, but the gap between the two companies is arguably largest in the server market. At present, AMD ships up to 64 cores in a single socket, while Intel only stepped up to 40 cores this week with the launch of Ice Lake SP. Previous Intel Cascade Lake CPUs topped out at 28 cores. A new leak suggests Intel’s next-generation server platform, codenamed Sapphire Rapids, will finally close some of the gap with AMD’s Epyc.

As always, take this leak with your daily dose of salt. This slide comes from VideoCardz and it builds on some data we’ve previously seen.

Sapphire Rapids, when it launches, will (supposedly) bring another TDP increase, up to 350W this time. AMD’s current “Milan” CPUs top out at 280W, just like Rome. Memory support moves to DDR5, as expected, and the slide claims Sapphire Rapids offers 1TB/s of bandwidth from 64GB of HBM2E. We knew Sapphire Rapids was going to offer HBM2E as an option, but 64GB of on-package memory with 1TB/s of bandwidth is huge. It’d be really interesting to see how system performance scaling changes with this configuration compared with models without HBM2E.
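
For a sense of where a 1TB/s figure could come from: each HBM2E stack uses a 1,024-bit interface, and 64GB would most plausibly mean four 16GB stacks. The stack count and per-pin data rates below are our assumptions for illustration, not anything the leaked slide specifies; a minimal sketch:

```python
# Rough HBM2E peak-bandwidth estimate. The four-stack configuration and the
# per-pin data rates are assumptions; the leak only quotes 64GB and ~1TB/s.

def hbm_peak_bandwidth_gbs(stacks: int, pin_rate_gbps: float, bus_bits: int = 1024) -> float:
    """Peak bandwidth in GB/s across all stacks: pins * Gbps per pin / 8 bits per byte."""
    return stacks * pin_rate_gbps * bus_bits / 8

if __name__ == "__main__":
    # Four hypothetical 16GB stacks at 2.0 Gbps/pin land right around 1TB/s...
    print(hbm_peak_bandwidth_gbs(stacks=4, pin_rate_gbps=2.0))  # 1024.0 GB/s
    # ...while the HBM2E ceiling of 3.6 Gbps/pin would be roughly 1.8TB/s.
    print(hbm_peak_bandwidth_gbs(stacks=4, pin_rate_gbps=3.6))  # ~1843.2 GB/s
```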

A top-end Sapphire Rapids, if these rumors are accurate, would offer a small pool of ultra-high bandwidth memory, backed by a far larger pool of lower-bandwidth memory. An eight-channel DDR5 system using DDR5-4800 would offer 307.2GB/s of memory bandwidth to up to 4TB of RAM (assuming Intel retains existing Ice Lake SP limits).
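
The DDR5 figure falls out of simple per-channel arithmetic: each channel has a 64-bit (8-byte) data bus, so peak bandwidth is the transfer rate times eight bytes times the channel count. A quick sketch of that math, using the DDR5-4800, eight-channel configuration described above:

```python
# Peak theoretical DDR5 bandwidth: transfers/s * bytes per transfer * channels.
# Assumes a 64-bit (8-byte) data bus per channel and ignores ECC overhead.

def ddr_peak_bandwidth_gbs(transfer_rate_mts: float, channels: int, bus_bytes: int = 8) -> float:
    """Return peak bandwidth in GB/s for a given DDR configuration."""
    return transfer_rate_mts * 1e6 * bus_bytes * channels / 1e9

if __name__ == "__main__":
    # Eight channels of DDR5-4800, as above.
    print(ddr_peak_bandwidth_gbs(4800, 8))  # 307.2 GB/s, vs ~1,000 GB/s for the rumored HBM2E pool
```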

Sapphire Rapids is said to feature up to 80 PCIe 5.0 lanes on some SKUs, with others limited to just 64 lanes. It’s a four-tile design. This meshes with what we’ve learned about Intel’s plan for tiles, which are roughly analogous to AMD’s chiplets, but with different strategies for I/O, package routing, and interconnects.

As for when these chips will be in-market, that’s a little hard to read right now. Intel has made noise about shipping Sapphire Rapids in 2021, but we’ve also heard that the chip wasn’t likely to launch before 2022. There used to be a large difference between what TSMC and Intel meant by “volume production,” but that difference is shrinking.

Intel would use the term only a few months before a chip went on sale, while TSMC might announce volume production as long as a year before chips became available to consumers. Intel claimed to be in volume production for Ice Lake SP in January 2021 and launched in April, but reports from Dell suggest servers with the CPU won’t be available until May, and that this is “in sync with Intel’s timelines.” A January volume-production announcement with May availability is a four-month gap, longer than is typical for Intel.

As of this writing, we’re guessing Sapphire Rapids will sample in 2021 but not launch until 2022. It’ll compete in-market against a mixture of Milan and Genoa parts. Genoa is expected to be built on 5nm and to use AMD’s Zen 4 architecture. There are rumors of a further core count increase, up to 96 cores, but that may or may not be true.

With Zen 3, AMD focused on improving Infinity Fabric performance and clock speeds, but it wound up spending significantly more power on “uncore” activities than Rome did. The company could choose to focus on improving IF and CPU efficiency with Zen 4 and hold core counts equal, or it may opt to take advantage of 5nm’s density improvements and push core counts once again. A 96-core part with 12 memory channels and no HBM squaring off against a 56-core part with eight memory channels plus HBM2E? Sounds fascinating to us.

This slide also mentions third-generation Optane, aka Crow Pass, and claims bandwidth could be improved by up to 2.6x in mixed read/write scenarios. None of the news regarding Optane has been good lately, to the point that we’re watching to see if Crow Pass even comes to market. Assuming that it does, however, it looks like the memory technology will finally get a real performance kick. No word on whether Crow Pass supports PCIe 4.0 or PCIe 5.0, but Intel is clearly pushing to get Xeon back on a competitive footing. Ice Lake SP is a solid effort for Chipzilla, but it doesn’t entirely close the gap with AMD. Sapphire Rapids gives Intel another shot at doing so.

Continue reading

Mercedes-Benz Unveils 56-Inch ‘Hyperscreen’ Dashboard Panel

Ahead of the now-virtual CES 2021, Mercedes-Benz has unveiled the MBUX Hyperscreen, a 56-inch-wide, curved cinematic display that stretches across the entire dashboard, from the left air vent to the right.

Samsung Stuffs 1.2TFLOP AI Processor Into HBM2 to Boost Efficiency, Speed

Samsung has developed a new type of processor-in-memory, built around HBM2. It's a new achievement for AI offloading and could boost performance by up to 2x while cutting power consumption 71 percent.

Rumor: AMD Working on ‘Milan-X’ With 3D Die Stacking, Onboard HBM

There's a rumored chip on the way from AMD with far more memory bandwidth than anything the company has shipped before. If the rumor is true, Milan-X will offer a mammoth amount of HBM bandwidth.

Rambus Shares New Details on Upcoming HBM3 Specification

We know a bit more about HBM3 than we did before, thanks to a recent Rambus announcement. The new standard will offer over a terabyte per second of memory bandwidth per stack.