It’s no secret that AMD has struggled to compete with Nvidia in PC graphics these past few years. It’s worth taking a look at the situation in a bit more depth, especially with new console cycles on the horizon and a new die shrink deploying — a die shrink that’s already being used for AMD’s next-generation 7nm Vega devoted to machine intelligence, and that will be used for future Nvidia products as well (whether that includes the company’s next-generation consumer GPUs isn’t something we know yet).
At Forbes, Jason Evangelho has compiled some recent executive comments as well as an overview of AMD’s GPU releases over the past few years. David Wang, for example, remarked that AMD had lost some momentum with gamers by chasing the AI and machine learning markets. There are rumors of tension at AMD between explicitly courting semi-custom customers like Sony and Microsoft and focusing on the larger game market. Some of those tensions, and the need to get Ryzen out the door, may have been responsible for Vega’s less-than-stellar performance against Nvidia by diverting engineering resources from the GPU side of the equation.
I don’t have any deep inside information to add on this, but it’s certainly true that AMD’s GPU focus and earnings transformed starting back in 2013. It began with the launch of the Radeon HD 7790 — a GCN 1.1 GPU with a feature set similar to what would eventually debut in the Xbox One and PlayStation 4. Initially, AMD continued to pursue a strategy of competing at the high end, but Hawaii was the last AMD GPU to launch and unambiguously sweep the competition. Post-Hawaii, AMD’s competitive positioning against Nvidia has been generally weaker.
Two aspects of AMD’s overall GPU strategy stand out to me: its continuing reliance on, and evolution of, the GCN architecture, and the degree to which new midrange GPU releases have been aligned with semi-custom partner launches. Both AMD and Nvidia debuted new architectures in 2012, but AMD appears to have changed GCN less than Nvidia has changed its own architectures over the same span. Granted, AMD hasn’t refreshed its chips as often as Nvidia has over that period — but that, in and of itself, is part of the issue.
As for the refresh alignment question, Evangelho claims that both the RX 460 and Vega 56 were essentially commissioned by Apple, with the broader consumer market as a secondary consideration. That’s also true for the RX 400 family as a whole, which debuted on desktop a few months before the PS4 Pro came to market (the Xbox One X’s GPU, which came to market a year later, is also Polaris-derived). Navi may reflect work being done for the PlayStation 5. These GPUs all work well for the midrange and upper midrange, but AMD has struggled to challenge Nvidia across the entire GPU stack.
Part of the problem, I suspect, is that AMD is a company with limited funds that’s been pulled in a variety of directions. Keep in mind that it can take three years to move a GPU from first idea to finished product: products coming to market in 2018 and 2019 were designed when AMD was still operating deep in the red. HBM2 had well-known yield and ramp problems, which have undoubtedly impacted Vega’s costs. The AI and machine learning markets have been red hot, and AMD is one of the few companies positioned to even take a hypothetical crack at that market (which is why the company decided to push ahead with a 7nm Vega GPU for that space). And the Sony and Microsoft deals almost certainly saved AMD as a going concern back in 2013-2014. If the company had been left solely dependent on its x86 PC and graphics business, it might not have survived at all.
Now, toss in the fact that the conventional PC graphics business, including professional and workstation products, never earned AMD much in the way of profit. Below we’ve graphed AMD’s GPU operating income from 2008 (the first year the segment was revenue-positive following the ATI acquisition) to the first half of 2013, when AMD combined GPU revenue with console sales.
Over five and a half years, AMD recorded $7.673B in GPU sales and operating income of $383 million, for an operating margin of five percent. That’s not great. In fact, it’s markedly worse than Nvidia’s consumer profit margin on GPUs over the same period, even though AMD’s figures include its professional GPU sales and Nvidia’s do not. In short, manufacturing GPUs and selling them into the PC market was never something that made AMD a lot of profit.
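The margin math above is easy to reproduce. Here's a minimal sketch — the revenue and income figures are the article's; the helper function and its name are ours for illustration:

```python
def operating_margin(revenue_billions: float, operating_income_millions: float) -> float:
    """Return operating margin as a percentage of revenue."""
    # Convert revenue to millions so both figures share units
    return operating_income_millions / (revenue_billions * 1000) * 100

# AMD's GPU segment, 2008 through H1 2013 (figures from the text above)
margin = operating_margin(7.673, 383)
print(f"{margin:.1f}%")  # → 5.0%
```

For comparison, Nvidia's consumer GPU margins over the same stretch ran meaningfully higher, which is what makes AMD's five percent look thin.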
AMD’s EESC (Enterprise, Embedded, and Semicustom) business segment has been overwhelmingly dominated by console sales from 2013 to mid-2017, when Epyc’s ramp began to incur some additional costs that may have diluted the positive margin impact from the console business. Still, this graph covers the relevant time period, and AMD’s absolute earnings and its operating margins have both been better than they were in the solo GPU business — even though console margins aren’t really all that good. AI and machine learning margins, on the other hand, are generally thought to be incredible — which sheds some light on why AMD wants into this space.
AMD and PC Gaming
I’m convinced, based on conversations I’ve had with various AMDers, that the company absolutely cares about gaming in general and PC gaming, specifically. But AMD’s GPU divisions have also been building a lot of parts across the entire market to meet the needs of multiple customers, and doing all of it without a ton of money to go around. I also think it’s true that the company has chosen to prioritize the needs of certain customers and to align its GPU launches on the consumer side of the market to fit those plans.
But here’s the thing: I’m also not sure AMD could have survived any other way. And while the company has returned to profitability, it’s scarcely out of the woods. AMD needs to navigate the 7nm transition, launch its next generation of Zen processors, continue ramping Epyc, launch new consumer GPUs, and pay down $190M in debt maturing in 2019. None of this represents any kind of existential threat, but a few quarters in the black doesn’t mean the company can afford to pretend it has Apple levels of cash in the bank, either. Meanwhile, it’s also going to need to invest in the AI/ML customers it’s now courting with Vega — those GPUs won’t be worth squat if AMD’s tools and software can’t compete with what Nvidia and Intel are leveraging into the same space.
I think AMD can balance the demands of semi-custom, consumer, and the AI/machine learning market. But I think it’s going to take some sustained, longer-term success before we stop seeing signs of strain when the company pivots from one market to another. In this, the situation is similar to Zen. While the Ryzen CPU core is excellent overall, there are some weak points where AMD was constrained from further improving the situation by time and money. Certain database benchmarks that benefit from large L3 caches, or that create large amounts of cross-die traffic, don’t run as well on Epyc because its multi-die design splits the L3 cache across dies and routes that traffic over Infinity Fabric. A server-specific variant of the Zen core with a larger on-die L3 cache or faster Infinity Fabric links might have helped AMD in these instances, but that kind of flexibility wasn’t possible under the circumstances. AMD uses the same die from the Ryzen 3 through the Ryzen 7 for the same reason — custom silicon would have saved some money by allowing for more efficient wafer utilization, but the upfront cost and time were obviously larger concerns.
The good news is, if AMD continues to succeed with its current products, it’ll be able to address these other issues more effectively long-term. Whether or not it’ll choose to — that’s a more complicated question. Answering it requires more insight into where AMD is putting its R&D dollars today than we have.
(Top image credit: AMD Zen Core/AMD)