Towards PCIe 7.0 and Blazing-Fast Storage: Why Engineers Have Hit The Gas on Interconnect Standards
The PCIe 4.0 platforms available today support transfer rates of up to 64GB/s bidirectional across an x16 link. PCIe 5.0 is technically available, but GPUs and SSDs don’t widely support the standard yet, so PCIe 7.0 represents an effective 8x increase in bandwidth over what’s actually shipping today. The first PCIe 5.0 devices should be available to buy towards the end of this year.
The Bandwidth Bonanza
PCI Express debuted on the desktop with the launch of AMD’s Socket 939 platform back in 2004. With support for up to 4GB/s of unidirectional bandwidth (8GB/s bidirectional) across an x16 link, it blew the doors off the old PCI standard. The reason I mention PCI instead of AGP is that high-end GPUs have never been particularly limited by interface bandwidth. Comparisons back in 2004 showed that the gap between 8x AGP and PCIe 1.0 performance was essentially nil, while moving from PCI to PCIe (and from a shared bus topology to a point-to-point interconnect) immediately improved the performance of Ethernet adapters, storage controllers, and various other third-party devices.
From 2004 – 2011, the PCIe standard moved ahead at a brisk pace, with PCIe 2.0 and 3.0 each approximately doubling bandwidth. Then, from 2011 – 2019, consumer interconnect bandwidth stood still. We didn’t see PCIe 4.0 until 2019, with the launch of AMD’s Zen 2 microarchitecture and X570 motherboard chipset. Since then, however, the PCI-SIG has been on a tear. PCIe 5.0 deployed with Alder Lake in 2021, even if consumer hardware that takes advantage of it isn’t available yet. We don’t know when PCIe 6.0 might appear in consumer products, but 2023 – 2024 is a realistic time frame. At this cadence, those chips won’t even be in-market for more than a few years before PCIe 7.0 hardware starts pushing in.
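As a rough sanity check on those per-generation doublings, x16 link bandwidth can be derived from each version’s transfer rate and line-encoding overhead. The rates below are the commonly cited per-lane figures; treat this as an illustrative sketch, not spec-exact math (PCIe 6.0/7.0 use PAM4 signaling with FLIT-based framing, which I’ve simplified here to ~100% line-code efficiency):

```python
# Approximate unidirectional x16 bandwidth per PCIe generation.
# rate is GT/s per lane; efficiency is the line-encoding overhead
# (8b/10b for 1.0/2.0, 128b/130b for 3.0-5.0, simplified to 1.0
# for the PAM4/FLIT generations).
GENS = {
    "1.0": (2.5, 8 / 10),
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
    "4.0": (16.0, 128 / 130),
    "5.0": (32.0, 128 / 130),
    "6.0": (64.0, 1.0),
    "7.0": (128.0, 1.0),
}

def x16_bandwidth_gbs(rate_gt: float, efficiency: float, lanes: int = 16) -> float:
    """GT/s * efficiency gives usable Gb/s per lane; divide by 8 for GB/s."""
    return rate_gt * efficiency * lanes / 8

for gen, (rate, eff) in GENS.items():
    print(f"PCIe {gen}: ~{x16_bandwidth_gbs(rate, eff):.1f} GB/s unidirectional (x16)")
```

Run it and the familiar figures fall out: 4GB/s for PCIe 1.0, roughly 31.5GB/s for 4.0, and each subsequent generation doubling from there.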
So what changed?
Some of the issues were technical: there were real difficulties in continuing to ramp bandwidth between PCIe 3.0 and PCIe 4.0, and new signaling and materials-engineering challenges had to be solved. It’s also true, however, that there wasn’t much pressure to beef up system interconnects during the same period. That’s changed in the past few years, likely due at least in part to the growing presence of GPUs and multi-GPU servers. Intel and AMD are both far more concerned with interconnects today, and with maximizing bandwidth between the CPU and accelerators like FPGAs and GPUs.
Another major difference between the late aughts and the present day is the near-ubiquitous nature of SSD storage. Mechanical drives are slow enough that PCIe speeds above 1.0 offered limited benefit. That’s not the case any longer. We can reasonably assume that new PCIe 5.0 drives will deliver an appreciable fraction of maximum bandwidth, and ditto for PCIe 6.0 and 7.0 drives when those standards arrive.
PCIe performance increases are typically associated with GPUs, but it’s storage that’s been the greatest beneficiary, as shown in the chart below. Bandwidth figures are unidirectional instead of bidirectional, which is why values are half of what they are in the chart above.
From 2004 – 2022, main memory bandwidth increased by ~12x, while PCIe bandwidth grew by 16x. Consumer storage bandwidth, on the other hand, has risen by approximately 94x over the last 18 years. If you remember the days when faster storage performance was defined by onboard 8MB caches, 7200 RPM spindle speeds, and NCQ support, this is pretty heady stuff.
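Those growth factors check out against some representative endpoints. The baseline and 2022 figures below are my own assumptions (DDR400 vs. DDR5-4800 in dual-channel, PCIe 1.0 x16 vs. PCIe 5.0 x16, and a ~75MB/s 7200 RPM drive vs. a ~7GB/s PCIe 4.0 SSD), not the article’s exact sources, so treat this as back-of-the-envelope math:

```python
# Back-of-the-envelope growth factors, 2004 -> 2022.
# All figures are assumed representative endpoints, in GB/s.
pairs = {
    "Main memory (DDR400 -> DDR5-4800, dual channel)": (6.4, 76.8),
    "PCIe x16, unidirectional (1.0 -> 5.0)": (4.0, 64.0),
    "Consumer storage (7200 RPM HDD -> PCIe 4.0 SSD)": (0.075, 7.0),
}

for name, (then, now) in pairs.items():
    print(f"{name}: ~{now / then:.0f}x")
```

With these endpoints the script prints ~12x for memory, 16x for PCIe, and ~93x for storage, closely in line with the ~94x figure above; the exact storage multiple swings with which drives you pick as endpoints.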
These improvements in storage bandwidth are why Sony and Microsoft both focused their latest console launches on using fast PCIe storage as memory instead of dramatically increasing total system RAM. Microsoft’s DirectStorage standard will extend these capabilities to Windows PCs as well. Windows systems may ship entirely with SSDs in the future (not that Windows would refuse to install to a hard drive, only that hard drives would no longer ship as boot drives in Windows systems). We have long since reached the point where even a modest eMMC storage solution can outpace a hard drive.
We have also reached the point at which PC storage bandwidth rivals main memory bandwidth from 20 years ago. Bandwidth, of course, is just one aspect of a memory technology, and the access latencies of NAND reached over the PCIe bus are several orders of magnitude higher than what 2004-era DRAM could deliver, but it’s still an achievement that companies can leverage to improve overall system performance. A system is only as strong as its weakest link, and the HDD was always that link in PC performance. The shift to NAND has unlocked performance that spinning media previously gatekept.
I do not know enough low-level details to speculate on how operating systems and file systems might be improved if they were designed for SSDs first and foremost instead of for spinning media, but I suspect we’ll start to find out over the next decade. The encouraging thing about the continued development of these interconnect standards is that consumer devices should continue to benefit, even at the low end. The M2’s storage might be only half the speed of the M1’s (and I understand why that could irk some buyers), but the half-speed storage of the M2 MacBook is still faster than racks of hard drives in the pre-SSD era.
The PCI-SIG is making up for lost time by firing off new standard versions, one right after the other. Our dates of 2024 and 2026 for adoption are speculative at this juncture, but we’d expect both in-market by 2025 / 2028 at the outside. Thus far, SSD vendors have been able to take advantage of the additional bandwidth unlocked by new storage standards almost as soon as those standards reach market. This is in stark contrast to GPUs, which typically show no launch performance difference at all between a new version of PCIe and the immediately previous version of the standard.
We can collectively expect PC storage to keep getting faster — and to reap the long-term benefits of that performance increase.