New Intel Compute Express Link Boosts Accelerator, CPU Bandwidth

Intel and a consortium of leading technology companies (Alibaba, Cisco, Dell EMC, Facebook, Google, HPE, Huawei, and Microsoft) announced the Compute Express Link (CXL) 1.0 standard today, with the goal of creating a new interconnect standard and the product ecosystem to go with it. While it’s built squarely on PCIe (PCIe Gen 5, specifically), CXL offers features the standard PCI Express bus lacks, such as maintaining memory coherency between the CPU and various attached accelerators.
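
CXL 1.0 defines link and protocol behavior rather than an application-level API, so there is nothing software-facing to show directly. As a rough illustration of why coherency matters, though, here is a minimal C sketch contrasting the copy-in/copy-out model that a non-coherent interconnect forces on software with the shared-buffer model a coherent link allows. The accelerator_kernel stub and the "device buffer" are hypothetical stand-ins invented for this example, not part of any CXL interface.

```c
/* Conceptual sketch only: the "accelerator" below is a CPU stub standing in
 * for an attached device, and none of these functions belong to any real
 * CXL API. The point is the difference in data movement, not the math. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define N 1024

/* Stub standing in for work done on an attached accelerator. */
static void accelerator_kernel(float *data, size_t n) {
    for (size_t i = 0; i < n; i++)
        data[i] *= 2.0f;
}

/* Model A: non-coherent offload. The host must stage data into a separate
 * device buffer and copy the results back, paying for two transfers. */
static void offload_with_copies(float *host_buf, size_t n) {
    float *device_buf = malloc(n * sizeof *device_buf); /* stands in for device memory */
    if (!device_buf)
        return;
    memcpy(device_buf, host_buf, n * sizeof *device_buf); /* host -> device */
    accelerator_kernel(device_buf, n);
    memcpy(host_buf, device_buf, n * sizeof *device_buf); /* device -> host */
    free(device_buf);
}

/* Model B: coherent sharing of the kind CXL is designed to enable. The
 * accelerator works on the same buffer the CPU sees, so the staging
 * copies (and their latency and power cost) disappear. */
static void offload_coherent(float *shared_buf, size_t n) {
    accelerator_kernel(shared_buf, n);
}

int main(void) {
    float buf[N];
    for (int i = 0; i < N; i++)
        buf[i] = (float)i;

    offload_with_copies(buf, N);
    offload_coherent(buf, N);

    printf("buf[1] = %f\n", buf[1]); /* 1 * 2 * 2 = 4 */
    return 0;
}
```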

Intel worked on CXL for at least four years before deciding to open the standard's development to a larger group of companies, and other firms will be able to join if they wish. Devices that support CXL should be capable of operating in 'CXL mode' when inserted into a compatible PCIe slot, with backward compatibility between PCIe mode and CXL mode expected to be seamless.

Initial deployments of the technology will focus on links between FPGAs and GPUs, the two most common types of 'accelerator' a customer is likely to use. With that said, there are some significant names missing from the CXL list, including AMD, ARM, Xilinx, and Nvidia (apparently pleased enough with its own NVLink work with IBM not to feel the need). Companies like Amazon and Baidu are nowhere to be seen, either. This could change if the industry standardizes on CXL, of course, but multiple CXL members are also backing other initiatives: Dell, HPE, and Huawei belong to the Gen-Z consortium, and Huawei is a member of the CCIX consortium as well. Some firms are clearly supporting more than one effort to create a next-generation standard.

Intel expects to complete the 1.0 standard and make it available to members in the first half of this year, with supporting products arriving in 2021. Expect a fair bit of technical discussion on these issues, particularly given how critical it is to minimize the cost and latency of moving data when working with accelerators. One of the barriers between us and higher compute performance is the fundamental power cost of moving data in the first place, and the standard with the best chance of adoption will be the one that minimizes that cost without sacrificing performance.
