Nvidia RTX 2080 and RTX 2080 Ti Review: You Can’t Polish a Turing


Now that the smoke has cleared and the hype has died down, Nvidia's RTX 2080 and 2080 Ti nonetheless remain challenging cards to evaluate. While every GPU launches with a mixture of current and forward-looking features, the RTX series leans into the idea of the future harder than most, which leaves it with comparatively little evidence to back up its claims. Fortunately, our ability to test these GPUs in shipping titles isn't nearly so limited.

Nvidia's Turing family currently consists of two GPUs: the RTX 2080, a $700 retail / $800 Founders Edition card that replaces the GTX 1080 Ti in Nvidia's product stack and competes directly against it, and the RTX 2080 Ti, a $1,200 GPU that's not yet shipping. The RTX 2080's current street price is hovering between $770 and $790; as of this writing, the least expensive cards are $789. The 2080's $700 launch price is $150 more than the GTX 1080's back in 2016 (which is why we're now comparing it directly against the 1080 Ti), while the 2080 Ti's launch price has risen by $500 compared with the GeForce GTX 1080 Ti.


At the same time, however, I want to acknowledge that Nvidia has unambiguously positioned the RTX family as a fundamentally new type of GPU intended for new, emerging workloads. Those factors need to be taken into account in any evaluation of the GPU, and we're going to do so. But unpacking the value proposition of Nvidia's RTX family as it concerns next-generation games and the overall likely longevity of the family is a significant enough undertaking that it deserves to be evaluated independently from the question of how the Turing RTX family performs in modern games. Let's look at that first.

Evaluating the Long-Term Ray-Tracing Potential of the RTX 2080, RTX 2080 Ti

Every GPU launch is an opportunity for a company to argue both that it delivers better performance in currently shipping titles and that it offers a new set of features that will enhance performance or visual quality in the future. Turing is far from the first GPU to look ahead to what the future might bring, but it dedicates a substantial amount of its total die to enabling features that you can't currently use in games. Architecturally, Turing closely resembles Volta, but adds a new set of processing cores Nvidia calls RT Cores. These RT Cores perform the ray-intersection tests critical to Nvidia's real-time ray tracing implementation. Meanwhile, the Tensor Cores that debuted in Volta have been tweaked and improved for Turing, with support for new INT8 and INT4 operating modes at 2x and 4x FP16 throughput, respectively.
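
To make that division of labor concrete, here's a minimal sketch of the kind of ray/bounding-box "slab" test a GPU performs millions of times per frame while traversing a BVH — the class of operation Turing's RT Cores accelerate in fixed-function hardware. This is illustrative Python running on the CPU, not Nvidia's implementation, and the scene values are arbitrary.

```python
# Minimal sketch of a ray / axis-aligned bounding box "slab" test -- the kind of
# intersection check RT cores accelerate in hardware during BVH traversal.
# Illustrative CPU-side Python only; not Nvidia's implementation.

def ray_hits_box(origin, inv_dir, box_min, box_max):
    """Return True if a ray (origin, 1/direction per axis) intersects the box."""
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        # Distances along the ray to the two slab planes on this axis.
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far

# Example: a ray fired along +X toward a unit cube centered on the origin.
origin = (-5.0, 0.0, 0.0)
direction = (1.0, 0.0, 0.0)
inv_dir = tuple(1.0 / d if d != 0.0 else float("inf") for d in direction)
print(ray_hits_box(origin, inv_dir, (-0.5, -0.5, -0.5), (0.5, 0.5, 0.5)))  # True
```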

Outside of DLSS, it’s not clear how Nvidia will use these features, and it hasn’t disclosed too much information about RTX either, though we’ve hit the available data previously. But just looking at the math on relative die sizes and power consumption tells us a lot: Nvidia has gone to the trouble to integrate its tensor cores and machine learning capabilities into consumer products because it believes these new technologies could yield game improvements if properly utilized. It’s willing to pay a die and performance penalty to try and hit those targets, and it’s picked a launch moment when there’s very little competition in the market in an attempt to establish them early.

It's important to draw a distinction between what Nvidia is attempting to accomplish with RTX and DLSS — namely, the introduction of ray tracing and AI noise-processing as complementary technologies to rasterization, or even long-term replacements for it — and the question of whether the RTX 2080 and 2080 Ti represent a good value for consumers. The first question will play out over the next 5-10 years across a number of GPU architectures from multiple companies, while the second focuses on specific cards. I've been a fan of ray tracing technology for years and have covered it for wfoojjaec on several occasions, including profiling Nvidia's own iRay technology. After PowerVR's attempt to push the tech in mobile came to naught, seeing it debut on desktops in the near future would be genuinely exciting.

In many ways, Turing reminds me of the G80 that powered GPUs like the GeForce 8800 GTX. Like the G80, Turing represents a new computing model — a model that Nvidia took a substantial risk to build. When it debuted in 2006, the G80 was the largest GPU ever built, with 681 million transistors and a then-enormous 480 mm² die. For comparison, Nvidia's G70, launched in 2005, had been a 333 mm² chip with 302 million transistors. The G80 was the first PC GPU to use a unified shader architecture and the first to support DirectX 10. It was also the first GPU built for general-purpose programmability — Nvidia announced CUDA in 2006, and CUDA 1.0 supported the G80. Few today would look back at G80 and declare that the chip was a mistake, despite the fact that it represented a fundamentally new approach to GPU design.

Putting Turing in Historical Context

Part of Turing’s problem is the inevitable chicken-and-egg scenario that plays out any time a manufacturer introduces new capabilities. It takes developers a significant amount of time to add support for various features. By the time a capability has been widely adopted in-market, it’s often been available for several years. Buying into a hardware platform at the beginning of its life cycle is often a bad idea — as illustrated by the G80 itself. By examining how this history played out, we gain a better sense for whether it’s a good idea to leap at the very beginning of a major new technology introduction.

When the 8800 GTX came out, it blew the doors off every other GPU in DirectX 9, as shown in the slide below:

Image by Anandtech

Its power efficiency and performance per watt were similarly advantageous; the 8800 GTX was more than twice as power-efficient as its competitors and predecessors.

Image by Anandtech

Now, at first, these advantages seemed likely to be well-distributed across the entire product family. But once DirectX 10 and Windows Vista were both ready, this proved not to be the case. For these next slides, a bit of explanation is in order: The 8800 GTX and 8800 GTS were based on the same G80 silicon and debuted at the same time, in 2006. The 8800 GT shown in the results below is based on the smaller, more-efficient G92 core that debuted almost a year after the 8800 GTX. The 8800 GT launched at $300; the 8800 GTS launched at $400. Keep that in mind when you look at the results from Techspot's Crysis coverage below:

Crysis DX9 and DX10 benchmark results. Image by Techspot.

The gap between DX9 and DX10 performance was hardly unique to Nvidia; AMD owners were often hammered as well. But the point stands: GPUs that performed quite well under DirectX 9 often staggered under DX10, to the point that only the highest-end cards from either vendor could even use these features. And as the 8800 GT's performance illustrates, a cheaper GPU that arrived a year later beat the snot out of its more-expensive cousin. The gap between the 8800 GTS and the 8800 GTX in Company of Heroes under DX9 is 1.44x. In Crysis under DX9, it's 1.39x. In Crysis under DX10, it's 1.77x.

Once RTX is available, how well will the RTX 2080 match the RTX 2080 Ti's performance? How close will the RTX 2070 come to the RTX 2080? We don't know. Nvidia hasn't said.

But it's impossible to look back at the DirectX 9 to DX10 shift and identify any of these GPUs as a particularly good deal if your goal was to play DX10 games at good frame rates. The G80 offered great performance in DX9, but only the most expensive chips were capable of handling DirectX 10 at high detail levels. Furthermore, by the time DX10 games were widely in-market, G80 had been replaced by G92 or even GT200. Gamers who bought into the 8000 family at lower price points got screwed; their GPUs proved generally incapable of running DX10 at all. The overall impact was ameliorated by the fact that DX10 ultimately didn't deliver much in the way of benefits — but it still serves as an important example of how excellent performance in established APIs doesn't guarantee excellent performance when new capabilities are switched on.

DirectX 11

DirectX 11 and DX12 don't represent as large a break with previous GPUs as DX10 did, but there are still useful lessons in each. In the case of DX11, new GPUs that supported the API did a generally better job of offering solid performance within its feature set. But despite the promised importance of new features like tessellation, the long-term impact on gaming was much smaller than the marketing implied it would be. Shots like these were often used to illustrate the difference between having tessellation enabled versus disabled:

In reality, the difference looked more like this:

Tessellation enabled. Image by Hot Hardware.
Tessellation disabled. Image by Hot Hardware.

Did DirectX 11 improve image quality over DirectX 9? Absolutely. But if you read the early literature on DX11 or paid attention to Nvidia's marketing on topics like tessellation, you'd be forgiven for thinking the gap between DX9 and DX11 was going to be much larger than it actually turned out to be. And once again, we saw real improvements in API performance in the generations that followed the first cards to support a new API. G80 could run DX10, but G92 and GT200 ran it much more effectively. The GTX 480 supported DirectX 11, but it was the GTX 680 that really shone.

Mantle and DirectX 12

Mantle and DirectX 12 offer us the chance to look at this question from AMD's perspective rather than Nvidia's. When Mantle debuted in 2013, there were people who argued for the then-imminent superiority of GPUs like AMD's R9 290 and R9 290X because the advent of low-latency APIs would supercharge these GPUs past the competition. Reality has not been kind to such assessments. Five years after Mantle and three years after Windows 10 launched with DirectX 12 support, there is only a relative handful of low-latency API games in-market. Some of these do indeed run better on AMD hardware under DX12 as opposed to DX11, but if you bought a Hawaii-based card in 2013 because you thought Mantle and DirectX 12 would usher in a new wave of gaming you wanted to be positioned to take advantage of, you didn't receive much in the way of benefits.

Some DirectX 12 and Vulkan games have taken specific advantage of AMD GPUs’ ability to perform asynchronous compute and picked up some additional performance by doing so, but even the most optimistic look at DirectX 12 performance ultimately shows that it only boosts Team Red GPUs a moderate amount in a relatively small handful of titles. Given that some of this increase came courtesy of multi-threaded driver support being standard in DX12 (AMD didn’t support this feature in DX11, while Nvidia did), we have to count it as an intervening variable as well.

DirectX 12 and Mantle, in other words, don't make a case for buying a GPU right at the beginning of a product cycle, either. Now, with real-time ray tracing (DXR) becoming part of the DirectX 12 specification, one can argue that we're finally seeing DX12 start to pull away from previous API versions and establish itself as the only game in town if you want to create certain kinds of games. But it's taken three years for that to happen. And given AMD's dominance in consoles and the degree of simultaneous development in the industry, any major push to bring ray tracing into gaming as more than an occasional sponsored-game feature will require buy-in from AMD, and possibly from Intel if that company is serious about entering the graphics market.

Why Turing Will Never Justify Its Price Premium

With Turing, Nvidia has chosen to launch its new GPUs at a substantial price premium compared with older cards. The question is, are the price premiums worth paying? While this always depends on one’s own financial position, our argument is that they are not.

The history of previous major technology transitions suggests that it will be the first generation of GPUs that takes the heaviest impact from enabling new features. Nvidia has historically offered new GPUs that quickly improved on their predecessors in then-new APIs and capabilities, including G92 over G80 and GTX 580 over GTX 480. The gap between 480 and 580 was less than eight months.

All available data suggests that the RTX 2080 Ti may be needed to maintain a 60fps frame rate target in RTX-enabled games. It’s not clear if the RTX 2080 or 2070 will be capable of this. The RTX push Nvidia is making is a heavy, multi-year lift. The history of such transitions in computing suggests they happen only over 3-5 years and multiple product generations. The worst time to buy into these products, if your goal is to make best use of the features they offer, is at the beginning of the capability ramp.

We also already know that 7nm GPUs are coming. While AMD's 7nm Vega is intended as a low-volume part for the machine-learning market, we may well see consumer 7nm GPUs in 2019. Turing could be a short-term, partial product replacement at the high end, akin to Maxwell 1 and the GTX 750 Ti. But even if this isn't the case, RTX and DLSS appear likely to be confined to the upper end of Nvidia's product roadmap. Even if an RTX 2060 supports these features, the price increases to the RTX 2070 and 2080 mean it will likely be a ~$350 – $400 GPU, not a $200 – $300 GPU like the GTX 1060. We don't know how far down its own stack Nvidia can enable RTX or DLSS, but nobody thinks these features will be making their way to the mainstream market this year.

Finally, there's the fact that for years, Nvidia and AMD have beat a relentless drum on higher resolutions. It makes a certain degree of sense for Nvidia to pivot back to the 1080p market because, frankly, there are a whole hell of a lot more 1080p panels in the world than anything else. Steam reports 62.06 percent of the market uses 1080p, compared with 3.59 percent at 1440p and 1.32 percent at 4K. But it also means asking high-end gamers to choose between the high-resolution and/or high-refresh-rate displays they may have paid premium dollars for and tolerating gameplay in the 40-60fps range at 1080p. In some cases, DLSS and RTX together may more than compensate for any lost resolution or speed, but that will inevitably depend on the size of the performance hit RTX carries as well as the GPU and game in question. The fact that Nvidia has played its performance cards so very close to its chest on this one isn't a great sign for first-generation ray tracing as a whole. Put differently: If Nvidia could show ray tracing at 60fps on the equivalent of an RTX 2060, with an RTX 2080 Ti breaking 100fps, it would have done so already.

The RTX Family Isn’t Priced for Adoption

The only way for first-generation RTX cards to be worth what Nvidia thinks you should pay for them is if the new features are both widely used and a significant improvement. The only way that’s ever going to happen is if developers (over and above the ones Nvidia partners with and pays to adopt the technology) bake it in themselves. And the only way that happens is if consumers own the cards.


Very few people own a GPU in the RTX 2070 – 2080 Ti price range. Just 2.69 percent of the Steam market has a GTX 1080 (which corresponds in price to the RTX 2070), 1.49 percent has a GTX 1080 Ti (corresponding to the RTX 2080), and we don't know how many people will buy $1,200 2080 Tis. But it's not going to be tons. Nvidia's ability to move the market towards RTX and DLSS will be constrained by the number of people who own these cards.

Second, we’ve taken the Top 20 cards, isolated the Nvidia GPUs, and broken down the results by architecture family. To be clear, these figures refer to each architecture’s market share within the Top 20 cards, not the entire Steam database. But more than two years after launch, during a time period when Nvidia has dominated the gaming market, we’re still at 65 percent Pascal, 31 percent Maxwell, and roughly 4 percent Kepler. Pascal hasn’t swept the field clear of Maxwell and Kepler in over two years, and that’s with a top-to-bottom set of solutions in-market. Turing will ramp more slowly. Its pricing and the limited number of cards guarantee it. That won’t change with more SKUs unless those SKUs are also RTX / DLSS capable, and nobody is taking bets on a 12nm RTX 2050 Ti that can handle ray tracing in 2019 at a $250 midrange price point.

Nvidia Isn’t Really Trying to Build a New Standard (Yet)

I believe Nvidia when it talks about wanting to drive gaming in a new direction and to utilize more of the silicon it has built into its GPUs. I don't know how much additional value Nvidia can squeeze from its specialized AI and machine learning silicon, but I completely understand why the company wants to try.

Image by Anandtech

But at the same time, let’s acknowledge that launching RTX as a much-hyped feature only available on a handful of expensive cards is very much Nvidia’s MO. Whether you view it as a good thing or a bad thing, Nvidia has a long-standing habit of attempting to monetize enthusiast features. Its G-Sync display technology offers no intrinsic advantage over FreeSync, yet to this day Nvidia insists on the fiction that you must purchase a monitor with special, Nvidia-branded certification and a correspondingly higher price in order to use it.

When Nvidia was pushing PhysX ten years ago, it talked up the idea of buying higher-end GPUs to support the capability, or even using two Nvidia GPUs in a single system for PhysX offloading. PhysX was very much positioned as a high-end visual feature that would add an incredible layer of depth and realism to gaming that others weren't going to get. When Nvidia decided to jump on board the 3D craze, it created its own sophisticated Nvidia 3D Vision system. Look back at the features and capabilities Nvidia has historically built and it's clear the company chooses to invest in creating high-end ecosystems that tie customers more tightly to Nvidia hardware and earn Nvidia more revenue, rather than focusing on open standards that would benefit the community but earn the company less money.

There’s nothing wrong with a company building a market for its own hardware by focusing on delivering a premium experience. I’m not implying that Nvidia has done anything ethically questionable by building a community of enthusiasts willing to pay top dollar for its own products. But there’s also nothing wrong with acknowledging that the same company busily engaged in upselling Turing has a history of this kind of behavior and a less-than-perfect track record of delivering the benefits its marketing promises.

What About Robust Early Support?

One way AMD and Nvidia both try to allay fears about weak feature support from developers is by lining up devs to promise feature adoption from Day 1. AMD did this with its Mantle launch; Nvidia is doing it now with RTX and DLSS. But for most people, lists like this are worth much less than the sum of their parts. Here's Nvidia's list:

Nvidia's list of games with announced RTX and DLSS support.

Of the games confirmed to use RTX, only Battlefield V and Metro Exodus have major brand presence. The DLSS list is larger and more interesting, with five well-known titles (Ark, FFXV, Hitman, PUBG, We Happy Few), but DLSS also lacks some of the visual punch of real-time ray tracing. Only Shadow of the Tomb Raider will support both.

Will this list expand? I'm sure it will. But games take time to deliver, and several titles on this list aren't set to launch until 2019, assuming they aren't delayed. It's entirely possible that by the time many of these games are out, either Turing's prices will have come down thanks to increased competition or Nvidia will have introduced new cards. But at the price points Nvidia has chosen for its launch, it's clear the company is focused on taking profits out of the product line, not pushing the cards into the mass market to spark widespread adoption as quickly as possible. That's an entirely valid strategy, but it also means the RTX ecosystem Nvidia wants to assemble will likely take longer to form. The company hasn't done itself any favors in certain regards — according to Tom's Hardware, you'll need GeForce Experience to use DLSS, despite the fact that there's no objective reason whatsoever for Nvidia to receive any personal data from anyone who buys its video cards.

But even in the absolute best-case scenario, early support is going to be slow to build. How much this matters to you will always be a matter of personal preference — if all you play is Battlefield V, and BF5’s ray tracing implementation is amazing, it’s entirely possible you’d feel the RTX 2080 Ti is a bargain at twice the price. But companies build lists of titles with baked-in upcoming support precisely because they want you to focus on the number rather than the names. “21 games with upcoming support!” sounds much better than “21 games with upcoming support, of which 17 will ship, 10 will be good, six will have personal appeal, and you’ll actually buy two.”

This is scarcely unique to Nvidia. But it’s still a problem for anyone who wants to actually make use of these features today in more than a handful of titles.

Test Platform

That brings us back to the rest of today's games, and the most important question for anyone considering Nvidia's latest cards: How do the RTX 2080 and RTX 2080 Ti perform in current titles? For a launch this significant, we've revisited our existing suite of benchmarks and updated our hardware platform. Our updated testbed consists of a Core i7-8086K CPU on an Asus Prime Z370-A motherboard, 32GB of DDR4-3200 RAM, and Windows 10 1803 updated with all available patches. The RTX and GTX GPUs were tested using Nvidia's RTX 411.63 driver, while the AMD Vega 64 was tested using Adrenalin Edition 18.9.3. We've also expanded our tests somewhat to include a larger range of AA options in various titles and to test a few titles in multiple APIs.

A few notes before we dive in.

We've tested Ashes of the Singularity in both DirectX 12 and Vulkan. Vulkan performance is notably lower for both companies, and no exclusive fullscreen mode is offered; according to representatives at Oxide Games, it's up to both AMD and Nvidia to improve their performance in that API when running Ashes. We've used a different graph style for these results because Ashes doesn't return a simple minimum frame rate metric.

We’ve also tested Total War: Warhammer II in both DirectX 11 and DirectX 12 modes. AMD Vega 64 owners (and possibly Polaris gamers as well) should use DirectX 12 in that title for optimal performance, but Nvidia’s performance in DX12 is quite poor. Our aggregate charts for 1080p, 1440p, and 4K use the DirectX 12 results when calculating AMD’s median level of performance and the DirectX 11 results when calculating Nvidia’s. DirectX 12 is used for Ashes of the Singularity results for both vendors.

Finally, in the graphs below, in all games except Ashes of the Singularity, the orange bars are minimum frame rates and the green bars are average frame rates. High minimum frame rates combined with high average frame rates are a very good thing; the closer the two numbers are, the smoother and more consistent overall play is likely to be.
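
For readers who want to reproduce those two numbers from a benchmark's raw frame-time log, here's a rough sketch of how they can be derived. The frame times are made-up placeholders rather than our test data, and individual benchmarking tools may report a percentile low rather than the single worst frame.

```python
# Rough sketch of how the two bars are derived from a benchmark's frame-time log.
# The frame times below are made-up placeholders, not data from our test runs.

frame_times_ms = [16.2, 16.8, 17.1, 24.9, 16.5, 16.4, 33.0, 16.6]

avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)  # green bars: average frame rate
min_fps = 1000.0 / max(frame_times_ms)                        # orange bars: worst single frame

print(f"average: {avg_fps:.1f} fps, minimum: {min_fps:.1f} fps")
# The closer these two numbers are, the smoother gameplay feels.
```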

Evaluating Overall RTX 2080, 2080 Ti Performance

We've decided to break out our conclusions a little differently than in the past. We've taken the geometric mean of our overall results at each resolution and plotted the overall score of each GPU as a percentage of the next-lowest GPU's performance. We've also examined the effective performance-per-dollar of each card by dividing its price by its overall frame rate: dollars spent per frame of performance. In this scenario, the higher performance of high-end cards works in their favor by making them (relatively) better deals as resolution increases; the gap between the Vega 64 and RTX 2080 Ti is smaller in relative terms at 4K than at 1080p as a result. Nonetheless, the RTX 2080 and 2080 Ti are markedly more expensive by this metric than previous generations.
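
Here's a small sketch of that aggregation method: a geometric mean across titles, then price divided by the resulting score to get dollars per frame. The per-title frame rates below are hypothetical placeholders; only the prices are taken from figures cited earlier in this review.

```python
# Sketch of the aggregation described above: geometric mean of each GPU's results
# across all tested titles, then price divided by that score for dollars per frame.
# The per-title frame rates are hypothetical placeholders, not our measured results.
from math import prod

def geomean(values):
    return prod(values) ** (1.0 / len(values))

results_fps = {                       # hypothetical per-title averages at one resolution
    "GTX 1080 Ti": [92.0, 71.0, 110.0, 64.0],
    "RTX 2080":    [97.0, 75.0, 116.0, 67.0],
}
prices = {"GTX 1080 Ti": 700, "RTX 2080": 789}  # approximate street prices cited earlier

for gpu, fps in results_fps.items():
    score = geomean(fps)
    print(f"{gpu}: {score:.1f} fps geomean, ${prices[gpu] / score:.2f} per frame")
```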

When you consider the overall results, several patterns emerge. While they're closely matched in performance, the GTX 1080 edges out Vega 64 in performance-per-dollar thanks to a lower base price tag and slightly higher results. Overall performance scaling as a function of price isn't very good. We don't expect the high-end market to offer strong results here, but the best gains we see come from the RTX 2080 over and above the GTX 1080 Ti, where a 4-6 percent performance increase is available for a ~1.1x increase in price. That's not terrible, but it's not particularly noteworthy, either. The 2080 Ti offers a significant performance improvement, but the price increase outstrips the gains for all but the most well-heeled gamers.

Because the RTX 2080 and RTX 2080 Ti are so different, we’ve written a different conclusion for each.

Should You Buy the RTX 2080?

The RTX 2080 is both a better value than the RTX 2080 Ti and a less appealing GPU overall. The best we can say for it is this: The additional price premium it commands over the GTX 1080 Ti is roughly twice as much as the additional performance it offers over and above that GPU. This is objectively a bad deal, but it’s no worse than the typical scaling you get from luxury GPUs in the $700+ price range — and that means we can understand why someone might opt to pick up an RTX 2080 instead of a GTX 1080 Ti. The ray tracing and DLSS capabilities the RTX 2080 offers could be seen as justifying the ~10 percent additional price premium over the GTX 1080 Ti. All of this, of course, assumes you’re in the market for a $700 GPU in the first place.

The RTX 2080's biggest problem is that it's a completely unexciting GPU. Evaluated in today's titles, it offers marginally more performance than the 1080 Ti but, with rare exceptions, not enough that you'd notice. wfoojjaec acknowledges that the RTX 2080 is faster than the GTX 1080 Ti, but we do not find its positioning or performance compelling and we do not particularly recommend it over and above the 1080 Ti.

Should You Buy the RTX 2080 Ti?

The RTX 2080 Ti does offer impressive performance and absolutely moves the bar forward. But it does so with a mammoth cost increase that is completely unjustifiable. If money is no object, the 2080 Ti is a great GPU. But given that just 1.49 percent of Steam users own a GTX 1080 Ti — and that GPU is $500 less than the RTX 2080 Ti — it’s clear that money is an object for the vast majority of gamers. And that changes things.

The GTX 1080 Ti is 1.3x faster than the GTX 1080 at 4K and costs 1.5x more. The RTX 2080 Ti is 1.36x faster than the GTX 1080 Ti and costs 1.7x more. High end of the market or not, there comes a point at which a component simply hasn't justified its price increase. Evaluated in currently shipping games, the RTX 2080 Ti hasn't. wfoojjaec readily acknowledges that the RTX 2080 Ti is the fastest GPU you can buy. It's an impressive performer. But the premium Nvidia has slapped on this card just isn't justified by the performance it offers in shipping titles. It's not even clear whether it will be defensible in RTX or DLSS gaming.

The single argument you can make for the RTX 2080 Ti that doesn’t apply to any other GPU in the stack is that it moves the needle on absolute gaming performance, pushing maximum speeds forward in a way the RTX 2080 doesn’t. And that’s great until you realize that we’re talking about a GPU that costs $1,200. Most of the gaming rigs I’ve built for people haven’t cost that much money.

Nvidia didn't slap a $500 premium on this GPU to cover the cost of manufacturing it — it's a big GPU, but it's not big enough to justify that much of a price increase, and TSMC's 16/12nm node is now mature enough that manufacturing costs will have come down. Nvidia slapped a $500 premium on this GPU because AMD isn't currently competing at the top of the market and Intel won't have cards out the door until sometime in 2020. As many have pointed out, this is just smart economics for Nvidia. But Jen-Hsun's need for a solid gold bathtub doesn't trump the need for Nvidia to justify the prices it charges for its own hardware. And it hasn't. Nvidia's overall gross margin for its entire business is sitting at 60 percent; the company is scarcely hurting for funds to drive all the development it's doing.

wfoojjaec again acknowledges that the RTX 2080 Ti is the fastest GPU on the market, but unless price is literally no object to you, we don’t recommend you buy one. In fact, we explicitly recommend you don’t. While the performance is excellent, it’s not worth the premium Nvidia wants to charge for it in currently shipping titles. Buy this card, and you send a message to Nvidia that $1,200 ought to be the price for high-end GPUs.

The RTX 2080 and 2080 Ti aren’t bad GPUs, but they’re both bad deals with no reasonably priced, reasonably performing cards lower in the stack to anchor the family. Consumers outside the ultra-high-end will be better served by older cards from Nvidia’s Pascal family. Turing does not offer sufficient advantages in current games to justify the price premium Nvidia has slapped on it.

Will RTX’s Architecture Deliver in the Future?

This article is one of the longest pieces I’ve ever written for wfoojjaec and the longest single piece of work I’ve written in years. But I’ll acknowledge I’m taking a dimmer view of Turing than my colleagues in the review community. I wanted to lay out why.

If the RTX 2080 had come in at GTX 1080 pricing and the RTX 2080 Ti had added $100 – $150 to the GTX 1080 Ti's price, I still wouldn't be telling anyone to buy these cards expecting to dance the ray-traced mambo across the proverbial dance floor for the next decade. But there would at least be a weak argument for some real-world performance gains at improved performance-per-dollar ratios, with a little next-gen cherry on top. With Nvidia's price increases factored into the equation, I can't recommend spending top dollar on silicon that will almost certainly be replaced by better-performing cards at lower prices and lower power consumption within the next 12-18 months. Once those price increases are taken into account, Turing is the weakest generation-on-generation upgrade Nvidia has ever shipped. The historical record offers no reason to believe anything below the RTX 2080 Ti will be a credible performer in ray-traced workloads over the long term.

wfoojjaec, therefore, does not recommend purchasing either the RTX 2080 or the RTX 2080 Ti for their hoped-for performance in future games. Ray tracing may be the future of gaming, but that future isn't here yet. Mainstream support for these features will not arrive on 12nm GPUs, and they will not be widely adopted in the mass market before the next generation of cards. And again — every dollar you drop on $700 and $1,200 GPUs is a dollar that tells Nvidia it ought to be charging you more for your next graphics card.

The RTX 2080 and 2080 Ti are the beginning of a new approach to rendering. Long-term, the technological approach they champion may well be successful. But that doesn't make either of them a good buy today, and the historical record suggests no ground-breaking GPU is a particularly good buy its first time out of the gate. Ironically, that's entirely Nvidia's fault. If the company were less adept at quickly overtaking its own previous GPU architectures with new hardware that offers dramatically better performance and price/performance ratios, I'd have less of a reason to take the stance I'm taking. But between the price increases and Nvidia's own historical performance, the smart move here is to wait. Whether you're waiting for sane pricing, waiting for 7nm, or simply happy with your current GPU — that's up to you.
