Next year, Intel will introduce Alder Lake, a hybrid SoC that mixes big and little x86 CPU cores on the same slice of silicon. The two core types will be based on Atom and Core, respectively, and the chip will offer up to 16 cores in an 8+8 configuration. There have been questions about whether AMD would build something equivalent, and while the company isn't discussing long-term plans, it has no similar product coming to market in the short term. In a recent roundtable focused on AMD's Ryzen 5000 CPUs, CVP and CTO Joe Macri shared some of how AMD sees the option.
After noting that big.LITTLE dates back 15 years and that AMD has continually examined the concept, Macri said, "We're not going to talk about whether we'll do it or not, but I'm going to talk about some of the challenges of it and around what you really want to do with it. Is the goal power efficiency? Is the goal more performance? Is the goal just marketing, 'I want more core count', regardless of what it does for the other two variables?"
Macri then went on to note that AMD would not build any such chip for marketing reasons alone, before digging into the meat of what the company’s concerns are. A hybrid CPU core design with a mix of big and little cores is only useful if there’s scheduler support and, according to Macri, that support just isn’t there in Windows, at least not in a meaningful way that makes the feature appealing to AMD.
The only problem is, we already know Windows does support this kind of capability, on both ARM and x86. Windows on ARM supports big.LITTLE because the existing devices running on Qualcomm silicon are functionally required to do so. Failing to support big.LITTLE on an ARM platform would be like not supporting core aspects of Intel or AMD's power management stack. This Microsoft blog post confirms that Microsoft has supported heterogeneous compute arrangements on Windows on ARM systems since mid-2018:
To support big.LITTLE architecture and provide great battery life on Windows 10 on ARM, the Windows scheduler added support for heterogeneous scheduling which took into account the app intent for scheduling on big.LITTLE architectures.
We also know, based on Lakefield review details released this year, that Windows supports the hybrid scheduling capabilities built into that Intel platform as well. It is possible that Macri is saying that the Windows scheduler doesn’t usefully support these features, or that the gains from using them aren’t large enough to justify AMD throwing a lot of R&D at the idea just at the moment. But the statements regarding scheduling could be read to imply that Windows hasn’t implemented support for these features at all, and that’s not the case.
We can't speak to how well it works; as far as I know, no one has attempted a direct comparison of power management under Android versus Windows on an identical SoC. But Windows does at least support these features. It's possible that such support requires additional driver and software development, and that Windows hasn't implemented a standardized framework for these capabilities yet, but feature support has been integrated at some level.
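To make the scheduling question concrete, here is a toy model of the basic policy a heterogeneous scheduler applies: latency-sensitive work is steered toward big cores, background work toward little ones, with load balancing inside each class. This is purely my own illustrative sketch (the names `Core`, `place`, and the "high"/"low" QoS labels are invented for this example); the real Windows scheduler weighs app intent, efficiency class, thermals, and load in far more sophisticated ways.

```python
# Toy model of heterogeneous ("big.LITTLE") thread placement.
# Illustrative only -- not actual Windows scheduler logic.
from dataclasses import dataclass

@dataclass
class Core:
    name: str
    big: bool      # True = performance core, False = efficiency core
    load: int = 0  # number of threads currently placed on this core

def place(cores, qos):
    """Pick a core for a thread: latency-sensitive ("high" QoS) work
    prefers a lightly loaded big core; background work prefers a little
    core. If the preferred class is busier, fall back to the other."""
    preferred = [c for c in cores if c.big == (qos == "high")]
    # Sort key: preferred class first, then least-loaded core.
    target = min(cores, key=lambda c: (c not in preferred, c.load))
    target.load += 1
    return target.name

cores = [Core("big0", True), Core("big1", True),
         Core("little0", False), Core("little1", False)]

# Two foreground ("high") and two background ("low") threads land on
# the two big and two little cores, respectively.
assignments = [place(cores, q) for q in ["high", "low", "low", "high"]]
```

The interesting part, and the part Macri is arguably gesturing at, is everything this sketch omits: migrating threads between classes as their behavior changes, and knowing which threads actually are latency-sensitive without the app saying so.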
Why It Makes Sense for Intel and AMD to Pursue Different Features
For the past five years, AMD and Intel have pursued very different features. Intel has focused on developing AI capabilities through both Xe and AVX-512, with its Lakefield hybrid CPU debuting an all-new heterogeneous architecture to compete against low-power ARM devices. If Intel hadn’t been stuck at 14nm for so many generations, I think we would have seen the company put more emphasis on improving general compute, but given these issues, we’ve seen more focus on improving computing in new and emerging markets rather than those where Intel was historically strongest.
AMD, meanwhile, had the opposite problem. Until it launched Ryzen, the company had virtually no share in servers and was concentrated in the low end of the desktop and laptop markets. When you try to talk to AMD about whether it's going to compete in AI workloads via support for AVX-512, AMD always gently steers the conversation back toward competing and winning in general compute workloads rather than in these emerging areas. Even when it announced its acquisition of Xilinx in October, AMD didn't play up Xilinx's AI efforts or work, focusing instead on the traditional core competencies of the FPGA market.
Intel already dominates servers, mobile, and desktop, so it wants to talk about new markets where it is attempting to win mind share, like AI, hyperscale servers, and cloud compute / data center. AMD, in contrast, wants to talk about carving into Intel’s major markets, because that’s where the big short-term opportunity for the company is.
As for the usefulness of big.LITTLE cores, let's be honest: Alder Lake may debut on desktop first, but the point of hybrid CPUs is not the desktop. A modern high-end desktop PC typically idles at 75-90W. Let's say Alder Lake's hybrid design manages to cut that to 45-55W. A 20-30W reduction is a nice slice off idle for a high-end PC, but it's not going to change the universe. The big question is whether a big.LITTLE approach can help x86 hit the far lower power targets that ARM designs already reach in mobile and thin-and-light devices.
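The idle-power arithmetic is simple enough to sanity-check. Here's a quick sketch of what a 20-30W idle cut is actually worth over a year; the 12-hour idle duty cycle and the $0.13/kWh electricity price are my own illustrative assumptions, not figures from AMD or Intel.

```python
# Back-of-envelope: annual value of a 20-30W idle-power reduction.
# IDLE_HOURS_PER_DAY and PRICE_PER_KWH are assumed, illustrative values.
IDLE_HOURS_PER_DAY = 12
PRICE_PER_KWH = 0.13  # USD, rough US average

def annual_savings_usd(watts_saved):
    """Energy saved per year (kWh) times electricity price."""
    kwh_per_year = watts_saved * IDLE_HOURS_PER_DAY * 365 / 1000
    return kwh_per_year * PRICE_PER_KWH

low = annual_savings_usd(20)   # ~$11/year at 20W saved
high = annual_savings_usd(30)  # ~$17/year at 30W saved
```

On the order of $10-20 a year per desktop, which is why the savings matter far more at laptop and data-center scale than for any individual high-end PC.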
I have no doubt that Intel will position Alder Lake as some kind of response to AMD in one form or another, but that’s just not the entire point of the chip. Almost a decade ago, Intel announced a long-term effort to push higher-efficiency laptop computing. 15W, not 35W, became Intel’s baseline reference point. AMD followed suit, and the long-term result has been a tremendous improvement in CPU performance within a low power envelope. Intel may be hoping it can pull the same trick again by shifting to a hybrid architecture, helping x86 compete in these emerging spaces. AMD, in turn, may be happy to let Intel have that fight while it moves to improve performance in higher power envelopes.
As things stand right now, it looks as though Intel is more interested than AMD in competing against ARM in the spaces where ARM is encroaching, like laptops and HPC. AMD is more focused on the mass market and on winning socket space and mind share in general workloads. Macri stated, "I think there will be a point when we are going to need Little," (meaning, little cores alongside big cores), but said the company was currently making such rapid progress with big-core designs, it was hard to come up with a cogent argument for little ones. Overall, the company seems more interested in continuing to improve its general-purpose workload performance, and we may see it transition to a hybrid architecture only when doing so makes sense in that context.