AMD Will Answer Nvidia’s Ray Tracing Technology, Eventually

An interview with AMD’s Senior Vice President of Engineering, David Wang, in the Japanese publication 4Gamer states that AMD will bring its own ray tracing solution to market in time, but that the company is focused on other priorities for now. Ray tracing has been a hot topic since Nvidia announced it was adding the capability to Turing, but AMD’s short-term response isn’t focused on introducing a consumer ray-tracing solution.

Before we go further, I should note that I was at AMD’s New Horizons event and the only GPU under discussion was the 7nm Vega for AI and machine learning markets. Navi wasn’t mentioned, AMD hasn’t released any details on that part, and the company wasn’t willing to talk about it. The interview with 4Gamer mentions this, and it’s important to read that context into what David Wang said (or didn’t say). AMD will “definitely respond” to DXR (DirectX ray tracing), but stated that “Utilization of ray tracing games will not proceed unless we can offer ray tracing in all product ranges from low end to high end.” Obviously, specialized ray tracing hardware isn’t baked into current Vega hardware — but there are a few points to touch on here as well.

Now, in PC gaming, all signs point to Nvidia holding a supermajority of the market, particularly at the high-end. This is much less true when you factor in console gaming, and the features and capabilities of PC games are often impacted by what the console versions of those titles can support. If developers know the Xbox Next or PlayStation 5 also offer ray tracing, they’re much more likely to treat the feature as a basic assumption rather than a luxury to be ladled on top of PC titles as a bit of extra gravy.

AMD’s Vega 7nm.

But right now, it’s really unclear how all this plays out. Frankly, it’s not even clear to me that there are enough die shrinks left in Moore’s law for ray tracing as it’s currently implemented to ever take off on GPUs. The best-case area shrink projection from TSMC for 7nm compared with 16FF+ is 70 percent (AMD claimed a 50 percent improvement for 7nm at New Horizons), while the projected gain at 5nm is much smaller, at 45 percent. Given that power consumption and performance are improving at much slower rates, it’s just not clear how much runway ray tracing has, from a scaling perspective, to really establish itself.
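To put those shrink figures in perspective, here’s a quick sketch of the implied density math, using the percentages quoted above purely as illustration (a "70 percent area shrink" means the same logic fits in 30 percent of the original area; real-world scaling varies by circuit type and rarely hits the best case):

```python
# Illustrative arithmetic only, based on the area-shrink figures quoted above.
# An area shrink of X means the same logic occupies (1 - X) of its old area,
# implying a density multiplier of 1 / (1 - X).

def density_gain(area_shrink: float) -> float:
    """Density multiplier implied by a fractional area reduction."""
    return 1.0 / (1.0 - area_shrink)

gain_7nm = density_gain(0.70)  # TSMC's best-case claim, 16FF+ -> 7nm
gain_5nm = density_gain(0.45)  # projected gain, 7nm -> 5nm

print(f"7nm vs 16FF+: {gain_7nm:.1f}x density")          # ~3.3x
print(f"5nm vs 7nm:   {gain_5nm:.1f}x density")          # ~1.8x
print(f"Compounded:   {gain_7nm * gain_5nm:.1f}x")       # ~6.1x
```

Even in this best case, each successive node buys noticeably less area than the last, which is the crux of the scaling concern: dedicated ray tracing hardware has less and less "free" silicon to grow into.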

This can be ameliorated, in theory, by GPU companies giving up more hardware area to ray tracing as opposed to rasterization and dedicating more resources to it — except, of course, that such gains may come at the expense of rasterization performance. Companies are having to make design shifts and compromises now that weren’t previously required, like AMD’s new chiplet design, which combines 14nm silicon with 7nm chips to optimize the performance of both. Whether such a modular approach could ever work for GPUs remains to be seen; AMD and Nvidia have discussed the idea of modular GPUs before, but neither company has brought a product to market yet.

AMD’s decision to focus on its Radeon ProRender technology (ray tracing for professional users) for now makes sense, since that’s where it has a ray tracing play in market today, and we don’t know if the company will make a major pivot with Navi. Including the hardware with Navi doesn’t guarantee adoption — AMD had a tessellation unit baked into every HD 2000, 3000, and 4000 GPU, but we never saw it used — but it does improve the chances of broader PC uptake, alongside whatever push Nvidia continues to make.

And the technical question of how much of the GPU should be dedicated to these capabilities, as opposed to conventional rasterization, will continue to bedevil both companies. The answer likely depends on whether gamers respond strongly to the feature in the first place, which will itself be dictated by the price at which the feature is available. In this regard, Nvidia priced the RTX 2070, 2080, and 2080 Ti to prop up its profit margins rather than to goose market adoption, but that’s also fairly par for the course. Nvidia has historically prioritized developing features it could charge a premium for, like G-Sync, and tends to position its capabilities with an eye towards upselling customers. Of course, that doesn’t stop the company from cutting prices or responding to any competitive moves AMD might make on 7nm, which is what we suspect it intends to do.

In short, it’s complicated.