Intel Announces Cooper Lake Will Be Socketed, Compatible With Future Ice Lake CPUs

Intel may have launched Cascade Lake relatively recently, but there’s another 14nm server refresh already on the horizon. Intel lifted the lid on Cooper Lake today, giving some new details on how the CPU fits into its product lineup with Ice Lake 10nm server chips already supposedly queuing up for 2020 deployment.

Cooper Lake’s features include support for the Google-developed bfloat16 format. It will also support up to 56 CPU cores in a socketed format, unlike Cascade Lake-AP, which scales up to 56 cores but only in a soldered, BGA configuration. The new socket will reportedly be known as LGA4189. There are reports that these chips could offer up to 16 memory channels (because Cascade Lake-AP and Cooper Lake both use multiple dies on the same chip, the implication is that Intel may launch up to 16 memory channels per socket with the dual-die version).

The bfloat16 support is a major addition to Intel’s AI efforts. The 16-bit half-precision floating point format (FP16) was formalized in the 2008 revision of the IEEE 754 standard; bfloat16 changes the balance between how many bits are used for significant digits and how many are devoted to the exponent. IEEE half-precision prioritizes precision, devoting 10 bits to the significand and just five to the exponent. Bfloat16 flips that tradeoff, using eight exponent bits and seven significand bits, which allows a much greater range of values at lower precision. That tradeoff is particularly well suited to AI and deep learning workloads, and the format is a major step on Intel’s path to improving their performance on CPUs. Intel has published a whitepaper on bfloat16 if you’re looking for more information on the topic. Google claims that using bfloat16 instead of conventional half-precision floating point can yield significant performance advantages. The company writes: “Some operations are memory-bandwidth-bound, which means the memory bandwidth determines the time spent in such operations. Storing inputs and outputs of memory-bandwidth-bound operations in the bfloat16 format reduces the amount of data that must be transferred, thus improving the speed of the operations.”
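The range-versus-precision tradeoff is easy to see in code. A convenient property of bfloat16 is that it is simply the top 16 bits of a float32, so a minimal (illustrative, not production) conversion is just a bit shift — the function names below are my own:

```python
import struct


def float32_to_bfloat16_bits(x: float) -> int:
    """Truncate a float32 to bfloat16 by keeping its top 16 bits.

    bfloat16 keeps all 8 of float32's exponent bits (preserving its
    dynamic range) but only the top 7 of its 23 significand bits.
    """
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits >> 16


def bfloat16_bits_to_float32(b: int) -> float:
    """Expand bfloat16 bits back to a float32 (low mantissa bits zeroed)."""
    return struct.unpack(">f", struct.pack(">I", b << 16))[0]


# Huge float32 values survive the round trip, because the exponent is intact:
print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(1e38)))  # close to 1e38

# ...but precision falls to roughly 3 decimal digits:
print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.14159265)))  # prints 3.140625
```

A true IEEE FP16 value, by contrast, tops out around 65,504, so a value like 1e38 would overflow to infinity — which is exactly why bfloat16's wider range is attractive for deep learning, where gradients can span many orders of magnitude.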

The other advantage of Cooper Lake is that the CPU will reportedly share a socket with Ice Lake servers coming in 2020. One major theorized distinction between the two families is that Ice Lake servers on 10nm may not support bfloat16, while 14nm Cooper Lake servers will. This could be the result of increased differentiation in Intel’s product lines, though it’s also possible that it reflects 10nm’s troubled development.

Bringing 56 cores to market in a socketed form factor indicates Intel expects Cooper Lake to reach a broader set of customers than Cascade Lake / Cascade Lake-AP targeted. It also raises questions about what kind of Ice Lake servers Intel will bring to market, and whether we’ll see 56-core versions of those chips as well. To date, all of Intel’s messaging around 10nm Ice Lake has focused on servers or mobile. This may mirror the strategy Intel used for Broadwell, where desktop versions of the CPU were few and far between and the mobile and server parts dominated the family — though Intel later admitted that skipping a proper Broadwell desktop release was a mistake. Whether that means Intel is keeping an Ice Lake desktop launch under its hat, or whether the company has decided skipping desktop does make sense this time around, is still unclear.

Cooper Lake’s focus on AI processing implies that it isn’t necessarily intended to go toe-to-toe with AMD’s upcoming 7nm Epyc. AMD hasn’t said much about AI or machine learning workloads on its processors, and while its 7nm chips add full-width support for 256-bit AVX2 operations, we haven’t heard anything from the company’s CPU division to imply a specific focus on the AI market. AMD’s efforts in this space are still GPU-based, and while its CPUs will certainly run AI code, it doesn’t seem to be gunning for the market the way Intel is. Between adding new AI support to existing Xeons, its Movidius and Nervana products, projects like Loihi, and plans for the data center market with Xe, Intel is trying to build a market for itself to protect its HPC and high-end server business — and to tackle Nvidia’s current dominance of the space.