New 3DMark Benchmark Shows the Performance Impact of Variable Rate Shading
One of the new features baked into DirectX 12 is support for variable-rate shading, also known as coarse-grained shading. The idea behind variable-rate shading is simple: In the vast majority of 3D games, the player doesn’t pay equal attention to everything on-screen. As far as the GPU is concerned, however, every pixel on-screen is typically shaded at the same rate. VRS / CGS allows the shader work normally done for a single pixel to be spread across a larger group of pixels; Intel demoed this feature during its Architecture Day last year, showing off both 2×2 and 4×4 grid blocks.
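To get an intuition for why coarse shading saves so much work, consider how the pixel-shader invocation count falls when parts of a frame are shaded in 2×2 or 4×4 blocks. The sketch below is purely illustrative: the resolution and the fractions of the screen assigned to each shading rate are assumptions, not figures from UL's benchmark.

```python
# Illustrative sketch of coarse-grained shading savings.
# A (2, 2) shading rate runs one pixel-shader invocation per 2x2 block
# of pixels, i.e. one quarter of the full-rate work for that region.

def shader_invocations(width, height, rate_coverage):
    """rate_coverage maps a (block_w, block_h) shading rate to the
    fraction of the frame shaded at that rate; fractions sum to 1."""
    total_pixels = width * height
    return sum(frac * total_pixels / (bw * bh)
               for (bw, bh), frac in rate_coverage.items())

# Everything shaded at full rate (1x1):
full = shader_invocations(2560, 1440, {(1, 1): 1.0})

# Hypothetical VRS split: half the frame at full rate, the rest coarse.
vrs = shader_invocations(2560, 1440, {(1, 1): 0.5,
                                      (2, 2): 0.3,
                                      (4, 4): 0.2})

print(f"Pixel-shader invocations saved: {1 - vrs / full:.1%}")
```

With this (hypothetical) coverage split, roughly 41 percent of the pixel-shader invocations disappear, which is in the same ballpark as the gains reported below; the actual savings in a game depend entirely on how much of each frame can tolerate a coarser rate.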
In a blog post explaining the topic, Microsoft writes:
VRS allows developers to selectively reduce the shading rate in areas of the frame where it won’t affect visual quality, letting them gain extra performance in their games. This is really exciting, because extra perf means increased framerates and lower-spec’d hardware being able to run better games than ever before.
VRS also lets developers do the opposite: using an increased shading rate only in areas where it matters most, meaning even better visual quality in games.
First, here’s a comparison of what the feature looks like enabled versus disabled.
VRS Disabled. Image provided by UL. Click to enlarge.
VRS Enabled. Image provided by UL. Click to enlarge.
There’s also a video of the effect in action, which gives you an idea of how it looks in motion.
As for the performance impact, Hot Hardware recently took the feature for a spin on the Gen11 GPU integrated into Intel’s 10th Generation (Ice Lake) CPUs. Activating the feature improved performance by roughly 40 percent.
These gains are not unique to Intel. HH also tested multiple Nvidia GPUs and saw strong gains for those cards as well. Unfortunately, VRS is currently confined to Nvidia and Intel hardware — AMD does not support the capability and may not be able to enable it on current versions of Navi.
It always takes time to build support for features like this, so lacking an option at debut is not necessarily a critical problem. At the same time, however, features that save GPU rendering horsepower tend to be popular among developers. They can help games run on lower-power hardware and in form factors they might not otherwise support. All of rasterization is basically a collection of tricks for modeling what the real world looks like without actually simulating one, and choosing where to spend resources to maximize performance is exactly the kind of efficiency-boosting trick developers love. Right now, support is limited to a few architectures — Nvidia’s Turing and Intel’s Gen11 integrated graphics — but that will change in time.
VRS isn’t currently used by any shipping games, but Firaxis has demoed the effect in Civilization VI, implying that support might come to that title at some point. The new VRS benchmark is a free update for owners of 3DMark Advanced or Professional Edition, but is not currently included in the free Basic edition.
The top image for this article is the VRS On screenshot provided by UL. Did you notice? Fun to check either way.