AMD Radeon RX 6600 XT Review: Solid 1080p Rasterized Gaming, At a Cost

AMD’s Radeon RX 6600 XT launches today, and AMD knows exactly what this $379 GPU is good for: rasterized 1080p gaming. The company has thumped this point in its messaging and press releases; the PR for today’s launch claims the 6600 XT is “designed to deliver the ultimate high-framerate, high-fidelity, and highly responsive 1080p gaming experience.”

The press release accompanying the 6600 XT mentions 1080p gaming no fewer than 10 times, while 1440p and 4K are not mentioned at all. This isn't entirely new; the 6700 XT launch PR mentioned 1440p seven times and did not discuss other resolutions. But it speaks to a consistent thread in AMD's RDNA2 messaging: AMD is positioning the Radeon 6700 XT as a 1440p card and the Radeon 6600 XT as a 1080p card, and it wants you to think of each GPU that way.

When Nvidia launched Turing back in 2018, it also raised GPU prices. AMD has effectively done the same thing with its own product stack; the 6700 XT’s MSRP is about $80 more than the 5700 XT and the 6600 XT is $100 more than the 5600 XT. We are not surprised that AMD has chosen to raise GPU prices — every semiconductor company is emphasizing its most valuable hardware SKUs right now — but we’re quite curious about the changes AMD made to the 6600 XT compared with the 6700 XT.

Due to shipping issues, we were not able to secure an RTX 3060 Ti for comparison with the 6600 XT in time for launch. While AMD prefers to reference the RTX 3060 and the GTX 1060, the 6600 XT's $379 MSRP makes the $399 RTX 3060 Ti the appropriate point of comparison. As for the GTX 1060, surpassing a five-year-old GPU with a modern card that costs 1.52x more is like watching a professional boxer take a victory lap after beating up a teenager. This article is a deep dive into AMD's decision to limit the 6600 XT's L3 cache and memory bandwidth relative to the 6700 XT, and what those changes mean for the card's performance.

The core configuration options AMD picked for the 6600 XT versus the 6700 XT provide an interesting opportunity to measure the impact of reducing RDNA2’s memory bandwidth and L3 cache. If you compare the 6600 XT and the 6700 XT default configurations, you’ll see that they don’t actually differ much on core count.

The 6700 XT has 25 percent more cores and texture mapping units than the 6600 XT but the same number of ROPs. The major differences are the total L3 cache size, the memory bandwidth, and the total VRAM capacity: the 6700 XT has 3x the L3 cache, 50 percent more memory bandwidth, and 50 percent more VRAM. When we asked AMD about the motivation behind the cache and memory bandwidth reductions, we were told the following:

Based on the targeted resolutions and experiences, AMD carefully determined the right size of the Infinity Cache based on data load expectations on cache (AKA hit rate). On the AMD Radeon RX 6600 XT, for 1080p rasterized gaming, the hit rate of the infinity cache is as successful as other products higher in the product stack with larger cache sizes at their targeted 1440p or 4k gaming experiences. (Emphasis added)
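
For context, here is a quick sketch of the published specifications behind those ratios. These figures come from AMD's public spec sheets rather than from our own measurements, so treat this as reference arithmetic rather than new data:

```python
# Publicly listed specs for both cards (AMD spec sheets; for illustration only).
specs = {
    "RX 6600 XT": {"shaders": 2048, "tmus": 128, "rops": 64,
                   "infinity_cache_mb": 32, "vram_gb": 8, "mem_bw_gb_s": 256},
    "RX 6700 XT": {"shaders": 2560, "tmus": 160, "rops": 64,
                   "infinity_cache_mb": 96, "vram_gb": 12, "mem_bw_gb_s": 384},
}

small, big = specs["RX 6600 XT"], specs["RX 6700 XT"]
for key in small:
    print(f"{key}: 6700 XT has {big[key] / small[key]:.2f}x the 6600 XT's")
# shaders, tmus: 1.25x (25 percent more); rops: 1.00x;
# infinity_cache_mb: 3.00x; vram_gb, mem_bw_gb_s: 1.50x
```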

AMD’s relentless focus on 1080p (and rasterized 1080p at that) made us curious as to whether the 6600 XT would show any evidence of memory bottlenecking when compared with the 6700 XT. Since we don’t have an RTX 3060 Ti, we’ve opted to focus on rasterization and ray tracing performance scaling between the 6700 XT and the 6600 XT. To make that comparison a little easier, we’ve tested our 6700 XT at a non-standard clock speed.

For this review, we lowered the clock on our 6700 XT to between 2.1 and 2.2GHz (AMD requires a 100MHz envelope). Testing showed the GPU held much closer to the minimum end of that range, with a measured clock between roughly 2.1 and 2.15GHz in the vast majority of cases. Our ASRock Radeon RX 6600 XT sample, in contrast, maintained a clock rate of around 2.6GHz in games.

We deliberately lowered the 6700 XT's clock for two reasons. First, dropping to ~2.1GHz pushes the 6700 XT's pixel fill rate below the 6600 XT's theoretical limit (134.4 GPixels/s versus 166.4 GPixels/s), while roughly equalizing the texture fill rate between the two GPUs at ~332 – 336 GTexels/s. In any scenario where a title is limited by pixel or texture fill rates, the 6600 XT should match or beat the 6700 XT. Second, the 6700 XT still retains 3x the L3 cache and 1.5x the VRAM, giving us a chance to examine how bottlenecked the 6600 XT's core is in this configuration.
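
Those fill-rate figures fall straight out of ROP and TMU counts multiplied by clock speed. A minimal sketch, using the ROP/TMU counts from AMD's spec sheets and the clocks described above, shows why ~2.1GHz is the crossover point:

```python
def pixel_fill_gpix_s(rops, clock_ghz):
    # Peak pixel fill rate = ROPs * core clock (GPixels/s)
    return rops * clock_ghz

def texture_fill_gtex_s(tmus, clock_ghz):
    # Peak texture fill rate = TMUs * core clock (GTexels/s)
    return tmus * clock_ghz

# RX 6700 XT downclocked to ~2.1GHz: 64 ROPs, 160 TMUs
print(pixel_fill_gpix_s(64, 2.1), texture_fill_gtex_s(160, 2.1))  # ~134.4 GPix/s, ~336 GTex/s
# RX 6600 XT observed at ~2.6GHz in games: 64 ROPs, 128 TMUs
print(pixel_fill_gpix_s(64, 2.6), texture_fill_gtex_s(128, 2.6))  # ~166.4 GPix/s, ~332.8 GTex/s
```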

We chose a slightly different suite of tests for this comparison. AMD's explicit declaration that this GPU targets 1080p rasterized performance raises the question of how it will perform in ray tracing workloads, so we've queued up a few of those, some friendlier to AMD than others. We've also spent a lot more time benchmarking some of the same games in different detail and ray tracing configurations to measure the impact of settings adjustments. What we're looking for is evidence as to whether the 6600 XT can sustain high frame rates above 1080p and where, exactly, the bottlenecks are relative to the 6700 XT.

Test Setup

We’ve focused entirely on the 6700 XT and 6600 XT for this performance testing, with the 6600 XT clocked at default speeds and the 6700 XT clocked at a lower 2.1GHz as discussed above. This results in a 6600 XT and 6700 XT comparison in which the two are closely matched in texture fill rate, the 6600 XT technically has a pixel fill rate advantage, and the 6700 XT retains a 50 percent advantage in VRAM and memory bandwidth, along with 3x the L3 cache.

We tested the 6600 XT and 6700 XT using a Ryzen 9 5900X and 32GB of DDR4-3600 in an MSI Godlike X570 motherboard. Smart Access Memory (aka Resizable BAR) was enabled for both GPUs, using the AMD-provided Radeon launch driver.

Performance

There are some very interesting patterns and differences between the 6600 XT and the 6700 XT, especially as resolution scales upwards.

At 1080p, the 6600 XT and 2.1GHz 6700 XT score quite similarly in both Metro Exodus tests, as well as in Shadow of the Tomb Raider’s Highest and High detail settings — so long as ray tracing is disabled. In Bright Memory, the two cards are on top of each other. With ray tracing enabled, GPU performance diverges sharply. The full-speed 6700 XT is only 1.11x faster than the 6600 XT in SotTR 1080p without ray tracing, but it’s 1.38x faster with ultra ray tracing enabled.

The situation at 1440p is a mixed bag. In some tests, the 6600 XT keeps pace with the 6700 XT admirably. While the gap between the two cards increases in Shadow of the Tomb Raider once RT is enabled, it does not skyrocket. The problem starts creeping in with Godfall, where the 6600 XT falls off a cliff. What’s interesting about these results is that we’ve tested the RTX 3070 in this application and didn’t see the same memory problem in 1440p. The RTX 3070 swan dives at 4K on an 8GB buffer while the 6600 XT takes the hit at 1440p.

At 4K, especially 4K RT, the 6600 XT is badly memory bandwidth bound. While both the 6600 XT and 6700 XT take ugly hits when stepping up to this resolution at the highest detail levels, the 6600 XT takes a heavier penalty in every case.

To understand what these numbers tell us about the importance of the 6700 XT's L3 cache and additional memory bandwidth, consider the relationship between the 2.1GHz 6700 XT and the fully clocked variant. The 6600 XT should have a similar texture fill rate and a superior pixel fill rate compared with the hobbled 6700 XT, and we see some 1080p results where the two score similarly until memory bandwidth pressure begins to rise. The similar results in Metro Exodus imply that this title is pixel or texture fill rate limited, not memory bandwidth limited.

The 6600 XT turns in solid ray tracing numbers for 1080p, but it sags thereafter. If you look at even the 2.1GHz 6700 XT versus the 6600 XT, it’s easy to see how the 6600 XT keeps falling on the wrong side of an invisible line. An average frame rate of 51fps in 1440p Shadow of the Tomb Raider means dips into the upper 30s or low 40s. An average frame rate of 41fps could mean dips into the upper 20s or lower 30s — and that’s poor enough that some PC gamers are going to start to complain. We also see evidence that the 8GB 6600 XT may run into memory pressure issues at 1440p with ray tracing enabled.
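
As a rough illustration of why those averages concern us, here's a back-of-the-envelope sketch. The 0.7 – 0.8 ratio is a general rule of thumb we're assuming for how far worst-case dips typically sit below the average, not a measurement from this review:

```python
def estimated_dips(avg_fps, low_ratio=(0.7, 0.8)):
    # Assume worst-case dips land at roughly 70-80 percent of the average frame rate.
    return tuple(round(avg_fps * r) for r in low_ratio)

print(estimated_dips(51))  # (36, 41) -> upper 30s to low 40s
print(estimated_dips(41))  # (29, 33) -> upper 20s to low 30s
```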

There’s one other behavior I want to highlight. Time and again we see evidence that raising the RX 6700 XT’s clock speed improves the GPU’s performance. Even at 4K at highest detail in Shadow of the Tomb Raider, increasing the 6700 XT’s clock improves performance by 8 percent. The 6600 XT is already pushing equivalent texture fill rates and higher pixel fill rates, but its performance lags even the downclocked 6700 XT in every test at resolutions above 1080p.

The 6600 XT’s even bigger problem is the performance of the fully clocked 6700 XT. While we’ve focused on comparing the lower-clocked variant, the card you’d actually buy is meaningfully faster and comparing against it makes the Radeon 6600 XT look like a worse deal.

This doesn’t happen in every game. There are a fair number of older, rasterized titles like Ashes of the Singularity, Hitman 2, and Borderlands 3 that show a reasonable relationship between the two cards all the way to 4K. In newer titles and ray-traced titles, the gaps are larger.

I suspect the reason AMD specifies that the RX 6600 XT is intended for rasterized 1080p gaming is that the GPU looks great compared with the RX 6700 XT in that mode. A great deal gets bandied about regarding 4K gaming and which GPUs do and do not deliver it, but it’s clear that the 6600 XT is capable of delivering playable 4K frame rates if you’re willing to compromise on detail.

The 6600 XT's problems begin when you try to step outside that mode. Rasterized 1440p is still a pretty strong bet for the 6600 XT, but as Godfall shows, ray tracing is no sure thing. The real problem for the 6600 XT is that the 6700 XT starts demolishing its value proposition. At RT 1440p, the 6700 XT is 1.46x faster than the 6600 XT and just 1.26x more expensive. That does make the 6700 XT look like a good deal, but it does the 6600 XT no favors.
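
In concrete terms, the value math looks like this (using the two cards' launch MSRPs of $379 and $479, and the 1.46x performance ratio from our 1440p ray tracing results):

```python
msrp_6600xt, msrp_6700xt = 379, 479            # launch MSRPs in USD
relative_perf = 1.46                            # 6700 XT vs. 6600 XT at 1440p with RT enabled
relative_price = msrp_6700xt / msrp_6600xt      # ~1.26x

# Performance per dollar, normalized so the 6600 XT = 1.0
value_ratio = relative_perf / relative_price
print(f"Price ratio: {relative_price:.2f}x, performance ratio: {relative_perf:.2f}x")
print(f"6700 XT delivers ~{value_ratio:.2f}x the RT performance per dollar")  # ~1.16x
```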

I'm not going to give an explicit market recommendation when we haven't examined Nvidia's competing RTX 3060 and 3060 Ti in this article, but I do have a few points I want to make. The 6600 XT looks like a great GPU for the niche AMD is selling it into, but with uncertain prospects beyond that point. I honestly think its 1440p rasterization performance is strong enough to warrant considering it for that resolution, but its ray tracing performance is weaker and less certain.

Right now, AMD and Nvidia have created a situation in which the ray tracing performance and overall GPU longevity you buy into above the $400 price point is better than what you can get below it. Both companies have weaknesses: Nvidia's limited VRAM on most cards could be a problem in the future, while AMD offers lower overall ray tracing performance. RDNA2's ray tracing performance sits between Turing and Ampere, typically better than the former but worse than the latter at the same price point. Ray tracing remains a modest value-add for now, but the fact that it is now available on both consoles will surely speed its general market uptake.

The GPUs currently selling in the $300 – $400 market (if manufacturer MSRPs actually existed) are in an uncertain position. They lack the horsepower to offer a clean promise of meaningful ray tracing support in the future, at least not at the resolution gamers are used to targeting. Both AMD and Nvidia have a plan for this: AMD with FSR and Nvidia with DLSS.

It's not clear which of these two solutions is more likely to be supported in the future. AMD's FSR runs on Nvidia GPUs as far back as Pascal, which makes it easier for developers to justify, and its code is open source, which addresses licensing and transparency questions. DLSS takes advantage of specialized hardware built into Nvidia GPUs and is Nvidia-specific, but Nvidia also dominates the PC gaming market.

Developers have a vested interest in ensuring that their games run well on as many different PC configurations as possible, which means they have good reason to deploy technologies like DLSS and FSR. Given the 6600 XT's bandwidth limitations, adoption of these features may be important to its long-term future. By emphasizing 1080p rasterized performance, AMD may be hedging its bets a bit.

The 6600 XT and 6700 XT are both several steps down from AMD’s top-end GPU line-up, but we feel much better about the balance of features on the higher-end card. The 6600 XT’s rasterization performance is strong and it can clearly handle 1080p ray tracing today, but AMD’s rather conservative positioning for the card, combined with the evidence we found for memory bandwidth bottlenecks, leaves us uncertain about its long-term strengths. If you just bought a high-end 1080p monitor and you know you won’t be upgrading, buy in confidence. Everyone else may want to think it over.

Of course, all of this discussion assumes manufacturer MSRPs are worth the paper they're printed on. Since they aren't, the 6600 XT at its $379 MSRP would be a fantastic buy compared with the actual state of the market. If you need a new GPU badly enough to be shopping in the current climate and you have a chance to score one of these at the actual suggested price, we'd recommend doing so. While prices have come down somewhat, they remain high enough that the GPU you can buy at MSRP is typically the best GPU value on the market regardless of other factors.
