Ever since Intel announced its 10nm ramp would be delayed into 2019, there’ve been questions about what caused the delay and what a fourth generation of 14nm hardware might offer. At the JP Morgan 46th Annual Technology Conference, Dr. Murthy Renduchintala, group president of the Technology, Systems Architecture & Client Group and chief engineering officer at Intel Corporation, spoke at some length about the problems Intel has run into with its 10nm ramp, the amount of headroom in 14nm, and the company’s overall plans for the future.
When asked about the future of Intel’s 14nm (we’re up to 14nm+++ at this point, if Intel continues to use that metric), Murthy notes:
[W]e found tremendous intra-node capability within our 14-nanometer process. In fact, from the very first generation of our 14-nanometer to the latest generation of 14-nanometer product, we’ve been able to deliver over 70% performance improvement as a result of those intra-node modifications and desirable changes. And that, quite frankly, has given us the ability to make sure that we get 10-nanometer yields right before we go into mainstream production. And so, therefore, we’re comfortable with the 14-nanometer roadmap that will give us leadership products in the next 12 to 18 months, as we seek to optimize the cost structure and yields of our 10-nanometer portfolio.
As far as 14nm goes, that’s true. Intel’s 14nm+ used slightly taller fins and packed transistors a little less tightly together. This allowed Kaby Lake to hit higher frequencies and better power consumption figures than Skylake. Similarly, 14nm++ allowed Intel to squeeze quad-core CPUs into the same TDP range it previously offered with dual-core/quad-thread CPUs. But the 70 percent performance improvement Murthy mentions, while real, doesn’t necessarily represent a rabbit Intel can keep pulling out of its hat. Intel may have upgraded some mobile Core i3 CPUs from 2C/4T to 4C/4T, but there’s no chance the company will whip around and debut a 6C/6T Core i3 or i5 CPU in a 15W TDP based on a 14nm+++ architecture.
The situation with 14nm is analogous to what GlobalFoundries and TSMC have done with their own process nodes; Intel just isn’t calling it an entirely new node. But there’s an inevitable limit to how much fine-tuning Intel can do, and given that it never planned to keep 14nm around this long, I’d wager it has already wrung out most of the improvements the node can offer.
What’s Going On With 10nm?
We’ve included both of Intel’s initial 10nm slide decks below, to give some context to the company’s claims about the process node and its capabilities. When asked about 10nm, Murthy said:
In terms of 10-nanometer, we are shipping 10-nanometer in low volumes. I think that if you go back to when we originally defined the recipe of 10-nanometer back in early 2014, we defined some very aggressive goals for our second-generation hyper-scaling. We targeted a 2.7x scaling factor from 14-nanometer, which was in the very early stages of product ramp at that point in time.
And 14-nanometer was in and of itself a 2.4x scaling on 22-nanometer, so clearly our engineering team in TMG had very, very ambitious goals in terms of the transistor scaling required… [W]e’ve given ourselves no specific timeline; again, it’s when the economic timing makes greater sense for us in terms of when we hit the right point in the yield curve…
10-nanometer is basically the generation that was really focused on delivering 2.7x scaling in an environment that wasn’t assisted by EUV. We had to go to self-aligned quad patterning, which in and of itself is both complex and time-consuming. As we’ve moved to 10, we’ve been able to deliver a recipe with a quite diversified risk profile.
These statements suggest an answer to what happened to Intel’s 10nm ramp and why it’s so late. Put simply, the company bit off more than it could chew. Intel’s node technology has consistently led TSMC, Samsung, and GlobalFoundries: a 14nm chip from Intel is roughly equivalent to a 10nm chip from one of those companies. With 10nm, as shown on the slides above, Intel wanted to widen that gap and make up for the time it lost in delaying 10nm (note that this was before 10nm slid into 2019).
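Taken at face value, the scaling factors Murthy cites compound quickly. A quick back-of-the-envelope sketch, using only the two multipliers from the quote above, shows what Intel was aiming for:

```python
# Back-of-the-envelope check on the scaling factors Murthy cites.
# "Scaling" here means areal transistor-density improvement per node.
scaling_22_to_14 = 2.4   # 14nm vs. 22nm, per Intel
scaling_14_to_10 = 2.7   # 10nm target vs. 14nm, per Intel

combined = scaling_22_to_14 * scaling_14_to_10
print(f"10nm density vs. 22nm: {combined:.2f}x")  # ~6.5x over two nodes
```

That roughly 6.5x density jump across two generations illustrates how aggressive the 2.7x target was compared with the more traditional ~2x-per-node cadence.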
SemiWiki has some additional information on this. All of the major foundries use similar methods for front-end-of-line (FEOL) processing. But for back end of line (BEOL), Intel uses self-aligned quadruple patterning (SAQP), as opposed to the self-aligned double patterning (SADP) other foundries have deployed. Not only does this increase costs, due to the need for additional photomasks, but it’s also a more complex process. It’s also inevitably slower, which means wafer throughput will be lower, at least at first.
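To see why quad patterning is such a heavy lift, consider the pitch arithmetic. Assuming a single-exposure pitch limit of roughly 80nm for 193nm immersion lithography (an approximate, commonly cited figure, not an Intel-confirmed number), each additional patterning pass divides the achievable pitch:

```python
# Illustrative multi-patterning pitch arithmetic. The 80nm baseline is an
# approximate, commonly cited limit for 193nm immersion lithography, used
# here only for illustration.
single_exposure_pitch_nm = 80
sadp_pitch = single_exposure_pitch_nm / 2  # self-aligned double patterning
saqp_pitch = single_exposure_pitch_nm / 4  # self-aligned quadruple patterning

print(f"SADP minimum pitch: ~{sadp_pitch:.0f}nm")  # ~40nm
print(f"SAQP minimum pitch: ~{saqp_pitch:.0f}nm")  # ~20nm
```

Intel’s 10nm minimum metal pitch has been reported at 36nm, below what SADP can reach from that baseline, which is consistent with Murthy’s point that hitting 2.7x scaling without EUV forced the move to SAQP.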
It’s not clear why Intel chose to go with SAQP for BEOL at 10nm as opposed to SADP, but Murthy’s comments are straightforward. Intel’s current yields at 10nm are low and the cost curve isn’t good. The company is shipping 10nm in very limited volume, but sees no benefit to jamming the throttle on 10nm when its 14nm process continues to serve it so well. And the truth is, Murthy is probably right.
How Much Does This Delay Hurt Intel?
There’s been a lot of chatter about how this delay could cripple Intel or lead to ARM’s takeover of the x86 space. This simply isn’t going to happen. AMD is challenging Intel in data centers, to be sure, but Intel’s decision to leave the mobile market means it has little to fear from rival foundries. For all the buzz about x86 emulation in Snapdragon 835 PCs, a quick glance at their performance tells you everything you need to know about the underlying hardware. TechRadar has benchmarked the x86 emulation capabilities of these systems, and it’s not good.
Cinebench is one of the better tests for the Snapdragon 835’s emulation, and it’s well off the Celeron N3450 (4C/4T, 1.1GHz base, 2.2GHz Turbo) in single-thread performance. Even in multi-threaded code, where the Snapdragon 835 has eight cores and a higher max clock than the Celeron N3450, the Intel CPU still pulls out a win. And as I said — this is actually one of the best results for the Snapdragon 835’s emulation performance. In native code, performance is better, but not great.
Here, the Snapdragon 835 with eight CPU cores loses to the three-year-old Core i5-5200U. The Snapdragon 835 is an octa-core chip, and the i5-5200U is a 2C/4T configuration with a higher maximum frequency (2.7GHz) but fewer threads. The point here is not to bash the Snapdragon 835, which offers nearly 2x the battery life of the i5-5200U system, but to point out that in terms of raw performance, Intel doesn’t exactly need to lose any sleep over what’s going on with ARM.
Could it hurt Intel with regard to AMD? Possibly. AMD is pushing for 7nm with GlobalFoundries, and while we’re a bit concerned about what we’ve heard from GF this year, we’re still assuming AMD will launch Ryzen 2 at some point in 2019. But even if we assume AMD could land a real sucker punch on Intel, we also know that Intel has pivoted away from the PC market and is focused largely on data centers. That focus, and its excellent performance in that space, is its own kind of buffer. Enterprise customers move more slowly than consumer PC buyers, and while AMD’s Epyc has been picking up design wins, nobody, including AMD CEO Lisa Su, expects Epyc to seize more than 4-6 percent of the server market in 2018. Qualcomm’s apparent interest in exiting the ARM server market means AMD is once again the only real game in town when it comes to challenging Intel in that space, and it’s going to take AMD several more years to ramp up and win market share.
Intel’s slip on 10nm is significant. It’s the first time in the last two decades, at least, that the company has taken so long to make a node transition. It’s absolutely opened up a bit more opportunity for AMD than might otherwise exist. But it’s also a straightforward issue related to Intel’s decision to aggressively push for higher transistor densities at 10nm, and the use of EUV at lower process nodes should help prevent the problem from occurring again. In aggregate, Murthy’s overall level of confidence is well placed. Intel can’t afford to rest on its laurels and ignore its competitors, but 10nm slipping into 2019 isn’t going to cripple the company, either.