Intel’s tech day wasn’t just a CPU-centric affair; the company had quite a bit to say about its Xe graphics products and its new upscaling solution, XeSS. Let’s tackle the core first. The first Xe HPG (High Performance Gaming) product to come to market is code-named Alchemist, and it’ll debut under the Intel Arc brand.
An Xe GPU is built from Xe Cores. An Xe Core is approximately equivalent to an Nvidia SM or an AMD CU; in each case, these blocks represent the functional building unit of the GPU. Each Xe Core contains 16 vector engines and 16 matrix engines. Intel is retiring its use of the “EU” moniker, but top-end Xe HPG cards are specced with up to 512 vector engines (the equivalent of 512 EUs under the old naming).
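As a sketch of how the new nomenclature maps onto the old one, here's the back-of-the-envelope math. The 32-Xe-Core top configuration is my assumption for illustration, chosen because 32 × 16 matches the 512 figure above:

```python
# Back-of-the-envelope math for how Xe Core counts map to the old EU
# figures. The 32-core top configuration is an assumption for
# illustration, not a confirmed Intel spec.
VECTOR_ENGINES_PER_XE_CORE = 16
MATRIX_ENGINES_PER_XE_CORE = 16

def xe_totals(xe_cores: int) -> dict:
    """Total vector/matrix engine counts for a given Xe Core count."""
    return {
        "vector_engines": xe_cores * VECTOR_ENGINES_PER_XE_CORE,
        "matrix_engines": xe_cores * MATRIX_ENGINES_PER_XE_CORE,
    }

print(xe_totals(32))  # {'vector_engines': 512, 'matrix_engines': 512}
```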
During the event, Intel confirmed that Xe HPG will be built on 6nm and that it has partnered with TSMC to manufacture the chip. TSMC’s 6nm is an extension of its 7nm process for customers who weren’t ready or willing to make the jump to 5nm. Its primary benefit over 7nm is thought to be increased density rather than significant improvements in performance or power consumption.
An Xe render slice consists of four Xe cores. Each core contains a ray tracing unit, and Intel claims to have improved both relative operating frequency and performance per watt by 1.5x compared with previous solutions. This doesn’t tell us anything directly about absolute performance, but Tiger Lake’s GPU tops out at 1.45GHz and Intel’s graphics cards are rumored to target a ~2.2GHz clock speed. 1.45 * 1.5 = 2.175, so this would seem to check out. GPU clocks in the 1.5GHz – 2.5GHz range are fairly common now, so Intel’s ~2.2GHz would fit right in.
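The sanity check above is trivial to reproduce; both inputs are the figures quoted in the text, and the result is a projection rather than a confirmed spec:

```python
# Projecting an Xe HPG clock target from Tiger Lake's iGPU peak and
# Intel's claimed 1.5x frequency improvement. Both inputs are from the
# article; the output is an estimate, not a confirmed specification.
tiger_lake_ghz = 1.45
claimed_uplift = 1.5

estimate = tiger_lake_ghz * claimed_uplift
print(f"{estimate:.3f} GHz")  # 2.175 GHz, in line with the ~2.2GHz rumor
```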
Each slice contains 16 pixel backends and some shared function units for geometry processing, rasterization, and what Intel calls HiZ. This is presumably a much more advanced version of what ATI once named HyperZ and introduced with the original Radeon. HyperZ was a technology ATI used to reduce overdraw and improve graphics core efficiency, and modern GPUs continue to use these tactics in more advanced forms.
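To give a sense of what this class of optimization does, here's a toy model of hierarchical-Z rejection. This is not Intel's implementation (which lives in fixed-function hardware); the tile size and data structure are illustrative assumptions:

```python
# Toy sketch of HiZ/HyperZ-style early rejection: track the farthest
# depth already drawn in each screen tile, and skip per-pixel work for
# geometry that can't possibly be closer. Real GPUs do this in
# fixed-function hardware; this is only a conceptual model.
TILE = 8  # 8x8-pixel tiles, an arbitrary choice for the sketch

class HiZBuffer:
    def __init__(self, width: int, height: int):
        self.tiles_x = (width + TILE - 1) // TILE
        self.tiles_y = (height + TILE - 1) // TILE
        # Farthest (max) depth per tile; 1.0 = far plane, nothing drawn yet.
        self.max_depth = [[1.0] * self.tiles_x for _ in range(self.tiles_y)]

    def tile_occluded(self, tx: int, ty: int, nearest_depth: float) -> bool:
        # If the geometry's nearest point is behind everything already
        # drawn in this tile, the whole tile can be rejected early,
        # before any per-pixel depth tests or shading.
        return nearest_depth >= self.max_depth[ty][tx]

hiz = HiZBuffer(64, 64)
hiz.max_depth[0][0] = 0.2            # something close already covers tile (0, 0)
print(hiz.tile_occluded(0, 0, 0.5))  # True: farther geometry is skipped
print(hiz.tile_occluded(0, 0, 0.1))  # False: closer geometry must be shaded
```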
Here’s the entire GPU assembly. A render slice contains four Xe cores, and an Xe HPG GPU will be built from an array of render slices.
Enhancing Resolution with XeSS
Intel’s XeSS is its own take on DLSS-style upscaling, tuned for Intel GPUs. We’ll soon have three different upscaling technologies in-market: XeSS (Intel), DLSS (Nvidia), and FSR (AMD). FidelityFX Super Resolution is a spatial filter based on a refined Lanczos kernel and does not include a temporal component. It also does not require any kind of specialized hardware and can run on GPUs from any company.
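For context on what a “Lanczos kernel” is, here's the textbook windowed-sinc kernel that family of filters is built on. This is the generic formulation, not AMD's tuned variant:

```python
import math

# The textbook Lanczos windowed-sinc kernel. FSR is described as a
# *refined* Lanczos filter, so this is the generic form of the filter
# class, not AMD's actual implementation.
def lanczos(x: float, a: int = 2) -> float:
    """Lanczos kernel with support [-a, a]; weights nearby samples
    when resampling an image to a new resolution."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

print(lanczos(0.0))  # 1.0 -- peak weight at the sample point
print(lanczos(1.0))  # effectively 0 -- zero crossing at integer offsets
print(lanczos(2.5))  # 0.0 -- outside the support window for a=2
```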
DLSS and XeSS are a bit more like each other than either one is like FSR, but Intel isn’t exactly following Nvidia’s lead. According to Chipzilla, XeSS will function at its highest potential quality when run on GPUs with dedicated tensor hardware. This was implied to include Ampere and Turing, though Intel did not make any GPU-specific claims beyond referring to “the competition.”
During its briefing, Intel claimed that XeSS would offer visual fidelity equivalent to 4K, not merely approaching it. That’s a significant claim, because even DLSS 2.0 doesn’t claim to perfectly match 4K native quality in all cases. The difference between “XeSS + XMX” and “XeSS + DP4a” is a difference in quality, not just in rendering path. There are multiple Nvidia Pascal GPUs that support DP4a, including GP102, GP104, and GP106, implying that cards as far back as 2016 may gain an image quality boost from this feature. DP4a is not listed as a supported instruction in the RDNA2 ISA manual; we have a question in with AMD regarding support, but could find no record of it.
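For reference, DP4a is a dot-product instruction that multiplies two packed vectors of four 8-bit integers and accumulates the result into a 32-bit value, which is the kind of low-precision arithmetic XeSS falls back to on hardware without dedicated matrix (XMX) units. A pure-Python model of its semantics (a sketch, not a bit-for-bit hardware description):

```python
# Pure-Python model of what a DP4a instruction computes: a 4-way dot
# product of signed 8-bit integers, accumulated into a 32-bit value.
# Illustrative only; real hardware does this in a single instruction.
def dp4a(a: list, b: list, acc: int = 0) -> int:
    assert len(a) == len(b) == 4
    for x, y in zip(a, b):
        assert -128 <= x <= 127 and -128 <= y <= 127  # int8 range
        acc += x * y
    return acc

print(dp4a([1, 2, 3, 4], [5, 6, 7, 8]))  # 70 = 1*5 + 2*6 + 3*7 + 4*8
```

Packing four multiplies and adds into one instruction is what makes DP4a a practical fallback for inference-style workloads on GPUs that lack tensor hardware.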
We can’t fairly evaluate XeSS on the basis of a single, Intel-provided video, but here are some screenshots Intel provided to compare XeSS, native 4K, and 1080p:
I want to put the whole “4x” magnification in a little bit of context because it’s an easy way to make ordinary desktop resolutions look bad. Here’s a screenshot of Orcs Must Die 3, taken at 2560×1440, from my own PC.
Now, here’s a zoomed-in section of that image at 500 percent magnification. Intel is using 1080p and 400 percent, so I tossed in a higher zoom factor to make the comparison a little fairer.
This is not meant to imply that XeSS will not be excellent or that Intel is fudging its visual quality comparisons. Just be aware that it’s very easy to make a lower resolution look much worse than it actually is with high magnification. This is not unfair or wrong, but it doesn’t reflect the experience of playing the game, either. I’m a fan of approaches such as XeSS and DLSS because I think they represent the best chance to make ray tracing a practical reality people can experience (or afford). Techniques like XeSS, DLSS, and FSR allow GPUs to devote less horsepower to hitting specific resolutions and more towards casting rays. For now, AMD is the odd company out by not bringing a tensor-based solution to market. We’ll see what the company fields when RDNA3 ships next year.
There’s one piece of less-great news in all this: Intel appears to have no plan to bring a more capable iGPU to the desktop. Alder Lake’s desktop integrated graphics will continue to top out at 32 EUs, just like its predecessors’.
This isn’t the end of the world, since most of the interest in Xe centers on its discrete graphics potential, but AMD’s 5700G is likely to retain its title as the fastest integrated graphics you can buy if the slide above is accurate. How relevant this is to enthusiasts will depend on what the GPU market is doing in a few months, but the 5700G currently has a larger market window than is typical for APUs thanks to current discrete GPU prices.
Intel is still selling Xe on its sizzle, not the steak, but cards are expected to launch in Q1 2022. The back half of this year and early next year are going to be momentous times in the computer industry, between the first x86 hybrid processors and the first new GPU from a viable third party in nearly 20 years. We’re curious to see what XeSS brings to the table and how well it compares with Nvidia’s more mature DLSS solution.