New Intel Compute Express Link Boosts Accelerator, CPU Bandwidth

Intel and a consortium of leading technology companies (Alibaba, Cisco, Dell EMC, Facebook, Google, HPE, Huawei, and Microsoft) announced the Compute Express Link (CXL) 1.0 standard today, with the goal of creating a new hardware standard and the product ecosystem to go with it. While it’s built squarely on PCIe (PCIe Gen 5, specifically), CXL offers features the standard PCI Express bus lacks, such as maintaining memory coherency between the CPU and various attached accelerators.
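To make "memory coherency" concrete, here's a minimal sketch of what the feature buys you, assuming a toy accelerator API (every function name below is invented for illustration; none of this is real CXL or driver code). Without coherency, data has to be staged across the bus in both directions; with it, the CPU and the accelerator can work on the same buffer and let the hardware keep their caches consistent.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical accelerator API -- invented for illustration, not a real
 * CXL or driver interface. Trivial stubs stand in for real hardware. */
static void dma_copy(void *dst, const void *src, size_t n) { memcpy(dst, src, n); }
static void accel_double(float *buf, size_t n)
{
    for (size_t i = 0; i < n; i++) buf[i] *= 2.0f;
}

/* Conventional PCIe flow: the accelerator owns its own memory, so data
 * is staged across the bus in both directions. */
static void process_pcie(float *host, float *dev, size_t n)
{
    dma_copy(dev, host, n * sizeof(float));   /* explicit copy in */
    accel_double(dev, n);                     /* accelerator works on its copy */
    dma_copy(host, dev, n * sizeof(float));   /* explicit copy out */
}

/* Coherent flow: both sides work on one buffer, and the hardware keeps
 * caches consistent, so the staging copies (and their latency and power
 * cost) disappear. */
static void process_coherent(float *shared, size_t n)
{
    accel_double(shared, n);                  /* CPU reads results in place */
}

int main(void)
{
    float a[4] = {1, 2, 3, 4}, dev[4];
    process_pcie(a, dev, 4);
    process_coherent(a, 4);
    printf("a[0] after both passes: %.0f\n", a[0]); /* 1 * 2 * 2 = 4 */
    return 0;
}
```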

Intel worked on CXL for at least four years before deciding to open the standard’s development to a larger group of companies, and other firms will be able to join if they wish. Devices that support CXL should be capable of operating in ‘CXL mode’ when inserted into a compatible PCIe slot, with seamless backward compatibility between PCIe mode and CXL mode.
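The spec doesn't dictate implementation details here, but the fallback behavior it implies looks roughly like the sketch below, assuming hypothetical probe helpers (nothing in this snippet corresponds to a published CXL or PCIe enumeration API):

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical probe helpers -- invented for illustration; stubs stand
 * in for real slot and device capability discovery. */
static bool slot_is_cxl_capable(int slot)   { return slot == 0; }
static bool device_advertises_cxl(int slot) { (void)slot; return true; }

/* Sketch of the fallback the spec implies: a CXL device in a capable
 * PCIe Gen 5 slot trains up in CXL mode; anywhere else, it behaves as
 * an ordinary PCIe device, so older platforms still work. */
static void bring_up_link(int slot)
{
    if (slot_is_cxl_capable(slot) && device_advertises_cxl(slot))
        printf("slot %d: link up in CXL mode\n", slot);
    else
        printf("slot %d: link up as plain PCIe\n", slot);
}

int main(void)
{
    bring_up_link(0); /* CXL-capable slot -> CXL mode */
    bring_up_link(1); /* legacy slot      -> PCIe fallback */
    return 0;
}
```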

Initial deployments of the technology will focus on links between FPGAs and GPUs, the two most common types of ‘accelerator’ a customer is likely to use. With that said, there are some significant names missing from the CXL list, including AMD, ARM, Xilinx, and Nvidia (apparently pleased enough with its own NVLink work with IBM not to feel the need). Companies like Amazon and Baidu are nowhere to be seen, either. This could change if the industry standardizes on CXL, of course, but several CXL members are hedging their bets across competing initiatives: Dell, HP, and Huawei also belong to the Gen-Z consortium, and Huawei is a member of the CCIX consortium as well. Some firms are clearly backing more than one effort to create a next-generation interconnect standard.

Intel expects to complete the 1.0 standard and make it available to members in the first half of this year, with supporting products arriving in 2021. Expect to see a fair bit of technical discussion of these issues, particularly given how critical it is to minimize the cost and latency of moving data when working with accelerators. One of the fundamental barriers between us and higher compute performance is the power cost of moving data in the first place, and the standard with the best chance of adoption will be the one that minimizes that cost without sacrificing performance to do it.
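To put rough numbers on that power cost: commonly cited estimates (from Mark Horowitz's ISSCC 2014 keynote) put a 64-bit floating-point operation in the low tens of picojoules, while fetching that same operand from off-chip DRAM costs on the order of a nanojoule. The figures in this sketch are those ballpark assumptions, not CXL measurements; the point is simply that the data movement dwarfs the arithmetic.

```c
#include <stdio.h>

int main(void)
{
    /* Ballpark energy figures (assumptions, not CXL measurements):
     * roughly 20 pJ for a 64-bit floating-point operation versus on
     * the order of 1,300 pJ to fetch the operand from off-chip DRAM,
     * order-of-magnitude values commonly cited from Horowitz (ISSCC
     * 2014). */
    const double flop_pj       = 20.0;
    const double dram_fetch_pj = 1300.0;

    printf("Fetching the operand costs ~%.0fx the compute itself\n",
           dram_fetch_pj / flop_pj); /* ~65x */
    return 0;
}
```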
