AMD, Nvidia Have Launched the Least-Appealing GPU Upgrades in History
Yesterday, AMD launched the Radeon VII, the first 7nm gaming GPU. The card is intended to compete with Nvidia’s RTX family of Turing-class GPUs, and it does, broadly matching the RTX 2080. It also matches the RTX 2080 on price, at $700. Because this card began life as a professional GPU intended for scientific computing and AI/ML workloads, it’s unlikely that we’ll see lower-end variants. That section of AMD’s product stack will be filled by 7nm Navi, which arrives later this year.
Navi will be AMD’s first new GPU architecture built for 7nm, and it will offer a chance to hit ‘reset’ on what has been, to date, the least compelling suite of GPU launches AMD and Nvidia have ever collectively kicked out the door. Nvidia has relentlessly pushed its stack pricing higher while holding performance per dollar mostly constant. The RTX 2060 and GTX 1070 Ti are fairly evenly matched across a wide suite of games, so whether the RTX 2060 actually represents a better value hinges largely on whether you compare the cards at their formal launch prices or at the street prices Pascal cards have actually sold for.
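To put numbers on that question, here’s a rough back-of-the-envelope sketch in Python. The $349 RTX 2060 and $449 GTX 1070 Ti launch prices are matters of public record, and the cards are treated as performing identically given how evenly matched they are; the ~$380 street price is my own assumption, standing in for the sub-$400 holiday pricing discussed below.

```python
# Rough perf-per-dollar comparison: RTX 2060 vs. GTX 1070 Ti.
# Performance is normalized to 1.0 for both cards, reflecting the rough
# parity described above. The $380 street price is an assumption.

LAUNCH_PRICES = {"RTX 2060": 349, "GTX 1070 Ti": 449}  # formal launch MSRPs
STREET_PRICES = {"RTX 2060": 349, "GTX 1070 Ti": 380}  # assumed late-2018 street pricing

def perf_per_dollar(price: float, perf: float = 1.0) -> float:
    """Relative performance per dollar; higher is better."""
    return perf / price

for label, prices in (("launch", LAUNCH_PRICES), ("street", STREET_PRICES)):
    edge = perf_per_dollar(prices["RTX 2060"]) / perf_per_dollar(prices["GTX 1070 Ti"])
    print(f"At {label} pricing, the RTX 2060 offers {edge:.2f}x the 1070 Ti's perf per dollar")
```

Against launch pricing, the RTX 2060 looks like a clear step forward (roughly 29 percent better performance per dollar); against real-world Pascal street pricing, that edge shrinks to single digits.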
Such comparisons are increasingly academic, given that Pascal GPU prices are rising and cards are getting harder to find, but they aren’t meaningless for people who either already bought a Pascal GPU or are willing to consider a used card. If you’re an Nvidia fan already sitting on a high-end Pascal card, Turing doesn’t offer you much of a performance improvement.
AMD has not covered itself in glory, either. The Radeon VII is, at least, unreservedly faster than the Vega 64, with no equivalent last-generation GPU in AMD’s stack to match it. But it also duplicates the Vega 64’s power and noise profile, which limits its appeal, and it matches the RTX 2080’s bad price. A 1.75x increase in price for a 1.32x increase in 4K performance isn’t a great ratio even by the standards of ultra-high-end GPUs, where performance has always carried a price premium.
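For anyone who wants to check that math, the calculation is below. The 1.32x 4K figure is the one cited above; the roughly $400 Vega 64 price is my assumption, chosen to make the 1.75x price ratio explicit, so substitute whatever street price you actually find.

```python
# How the Radeon VII's value proposition works out against the Vega 64.
# The $400 Vega 64 price is an assumed street price; the 1.32x 4K uplift
# is the figure cited in the text.

RADEON_VII_PRICE = 700
VEGA_64_PRICE = 400       # assumption; adjust to current street pricing
PERF_UPLIFT_4K = 1.32     # Radeon VII's relative 4K performance (Vega 64 = 1.0)

price_ratio = RADEON_VII_PRICE / VEGA_64_PRICE  # -> 1.75x
value_ratio = PERF_UPLIFT_4K / price_ratio      # perf per dollar vs. the Vega 64

print(f"Price ratio: {price_ratio:.2f}x")              # 1.75x
print(f"Relative perf per dollar: {value_ratio:.2f}x") # ~0.75x the Vega 64's
```

Put another way, under these assumptions the Radeon VII delivers roughly three-quarters of the Vega 64’s performance per dollar.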
I’ve theorized that Nvidia may have misread the overall GPU market and underestimated its own exposure to cryptocurrency demand. This would explain how the company wound up saddled with a huge number of Pascal GPUs in the back half of 2018, once the bottom fell out of the crypto market. But it may also partly explain Turing’s pricing. The only way for Nvidia to simultaneously launch a new suite of products and clear a bucketload of unwanted last-generation inventory is to price the outgoing cards aggressively. That’s exactly what happened: during the run-up to Christmas, GTX 1070 Ti cards were selling for well under $400. This may have helped Nvidia clear its shelves, but it only made Turing look worse by comparison.
Then again, Turing didn’t need the help. The only GPU in the entire stack that significantly moves the bar forward on raw performance is a $1,200 card, the RTX 2080 Ti. That has literally never happened before. Even GPU generations that were point updates, like the transition from the GeForce GTX 6xx series to the 7xx series, offered at least a few SKUs below that price point that were significantly faster than the hardware they replaced.
Are the Radeon VII, RTX 2060, RTX 2070, RTX 2080, and RTX 2080 Ti bad cards? No. The problem is pricing. And unless you’re well-heeled, pricing matters, particularly because the GPU market has historically been very good at delivering improved performance at the same price.
All Eyes on Navi
I don’t think AMD’s Radeon VII should be treated as the final word on 7nm. It’s fairly well known that the GPU began life as a high-end professional/scientific product, and its huge memory bandwidth and VRAM capacity cater more to that market than to mainstream gaming. But as things stand today, both AMD and Nvidia have largely failed to deliver a generational performance-per-dollar improvement.
This simply isn’t how the GPU market typically works. It’s true that prices fluctuate depending on the competitive standing between AMD and Nvidia at any given moment, and it’s true that the high end is much more expensive than it used to be. But examine the overall history of the GPU market, and you’ll find that Nvidia and AMD have both delivered consistent performance improvements in constant dollars over time. It has historically been possible to set a price target (say, $250) and reliably count on a new, faster GPU to drop into that price range. Lower-end product replacement cycles have been a little more scattershot, but the long-term trend has always been toward better performance at the same price point.
There are two ways to read the current situation. The first holds that this less-than-inspiring crop of GPU architectures from AMD and Nvidia is an oddity. AMD was operating on a shoestring budget until quite recently, with limited resources to pour into new GPUs. Nvidia may have misread the crypto market. Put the two together and you get an awkward transition that inspires no one, but that ultimately has little influence on the long-term evolution of the GPU market. Fast-forward 18 months, and the market may look much more like its historical trend line.
The alternative is that GPU customers should now expect to pay significantly higher prices for future performance improvements. That’s a grim prediction, and it’s one I’m explicitly not making. We don’t know whether Vega 20 is a good predictor of what AMD’s next-generation architecture will deliver. We also don’t know what kind of performance improvements Nvidia will be able to squeeze out of 7nm when it brings its own chips to that node. Intel also plans to enter the GPU space in 2020, so it’s possible we’ll see disruption from that direction as well.
But as things stand today, the consumer GPU market is stuck in an unwelcome position where the only generational performance improvements on offer are reserved for those who can afford to jump up at least one price bracket for their next GPU. Historically, there’s never been a need to do that. Hopefully, there still isn’t.