AMD Touts Radeon’s Performance Per Dollar/Watt Compared to Nvidia

You know the GPU wars are heating up when manufacturers begin to openly talk smack about their competitors. Such is the case with what seems like a new marketing campaign from AMD. The company’s marketing chief posted a chart on Twitter this week showing Radeon GPUs delivering better performance than Nvidia’s in two categories: performance per watt and performance per dollar. Before you close this tab and go buy a Radeon card, though, there are some issues with the chart and how it’s presented.

The chart is titled Why Radeon without a question mark, which makes this editor’s eyeball twitch. The phrase carries a trademark symbol, which makes us think this is part of a new campaign from AMD. The chart lists each Radeon card in the product stack alongside its Nvidia competitor, then shows each card’s price, average FPS, and Total Board Power (TBP). Using these numbers, it shows how the Radeon card outperforms its competition per watt and per dollar. The footnote on the tweet says the chart’s prices were taken from Newegg on May 10th.
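The two metrics in the chart reduce to simple division: FPS over price, and FPS over board power. A minimal sketch of that arithmetic, using made-up illustrative numbers rather than AMD’s actual figures:

```python
def perf_per_dollar(avg_fps: float, price_usd: float) -> float:
    """Average FPS delivered per dollar of the card's price."""
    return avg_fps / price_usd

def perf_per_watt(avg_fps: float, tbp_watts: float) -> float:
    """Average FPS delivered per watt of Total Board Power (TBP)."""
    return avg_fps / tbp_watts

# Hypothetical card: $500, averaging 100 FPS, with a 250 W TBP.
print(perf_per_dollar(100, 500))  # 0.2 FPS per dollar
print(perf_per_watt(100, 250))    # 0.4 FPS per watt
```

Note that both metrics depend entirely on the average-FPS input, which is exactly why the undisclosed test settings matter so much.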

First of all, taking all your pricing information from Newegg is problematic for obvious reasons: it’s a single source of data, and as we all know, prices can vary wildly from vendor to vendor. That said, AMD doesn’t seem to have gone too crazy with the prices. Although the prices for its RDNA2 refresh cards are all listed at MSRP in the chart, that does seem to be what they’re going for on Newegg. Sure, there are some outliers (the 6800 XT is not averaging $850 on Newegg), and we could nitpick this card-by-card, but the prices are close enough for government work.

The main issue with the chart is AMD doesn’t say how it acquired its performance numbers. We’re left to assume these are the numbers AMD acquired in its internal testing. Obviously companies strive to be fair (usually) when presenting benchmark numbers, but it’s helpful to know the exact parameters of the testing. What are the specs for each test system? What settings were used for the games? Which games? Was ray tracing on, or off? There’s a litany of variables here that can affect performance, so having the exact settings used for each game is critical. Without that information, it just comes off as marketing BS. It’s especially odd because by all accounts, Radeon cards are certainly competitive with Nvidia’s, though not necessarily in heavily ray-traced titles. Even in rasterized games, relative performance between the two vendors can vary by 5-15 percent depending on exactly which game settings are chosen.

As far as performance per watt goes, we don’t really see any problem with this argument. The days of Nvidia holding a major perf-per-watt advantage over AMD are long gone; this isn’t the GCN era, and AMD has made major strides with its RDNA architecture. Interestingly, AMD doesn’t include Nvidia’s flagship GPU in its comparison chart. We assume that at a $2,000 MSRP it’s simply too expensive to warrant a comparison against AMD’s $1,100 flagship.

Overall, though, this just seems unnecessary. Gamers care about more than simple comparisons of power and pricing. Sure, pricing is important, but with all things relatively close, there are more important considerations. Drivers, for example, are critically important, and AMD releases drivers less frequently than Nvidia. The software stack itself is also a big driver of sales, as the two companies have radically different offerings on that point. A lot of it comes down to preference, but software plays a big role. There are even factors such as how much noise and heat a GPU generates. All of this is to say there are many things that separate one GPU from another in a head-to-head grudge match. We get that AMD is playing to its strengths here since this is a marketing exercise, but leaving out key details doesn’t help its cause. To be fair to AMD, this is far from the most egregious PR spin we’ve seen from a GPU company; Nvidia is notorious for its vague performance charts, after all.
