At the 2022 VLSI Symposium last week, Intel shared new details on its next major node, dubbed Intel 4. Before Intel rejigged its process node naming methodology, we would’ve referred to this as 7nm. Intel 4 is set to be a major step forward for the company, and it looks to be tailored toward Intel’s strongest high-performance products. It will need to be: Intel 4 is a critical step as the company seeks to reclaim its manufacturing throne.
Moving Past 10nm
It is not an exaggeration to say that Intel’s 10nm manufacturing miss had a huge impact on both the company and the overall state of competition in the x86 market. Intel brought 22nm and FinFETs to market ahead of all of its competitors, but then had to delay its 14nm node to fix certain problems. With 10nm, the company promised a triumphant return to form. Even after 10nm hit delays, Intel kept promising aggressive feature-size shrinks and overall improvements that would maintain its leadership in mobile, while setting the company up for long-term desktop and server performance success once a third-generation “10nm++” was ready. In the meantime, 14nm would be improved for several generations to cover the gap.
Things did not play out this way, and Intel’s 10nm remained mired in difficulty for several years. Its first parts — based on the Cannon Lake microarchitecture — only shipped in a dual-core configuration with low clocks and no functional GPU. It took Intel years to fix 10nm and several successive generations of mobile product (Ice Lake, Tiger Lake). Intel did finally move its desktop chips over to Intel 7 (aka third-generation 10nm) with Alder Lake last year, but the company is several years behind schedule and has openly acknowledged that 10nm did not meet its initial expectations.
We’re mostly here today to talk about Intel 4, but the design of this node offers a few hints as to what went wrong back on 10nm. When asked about this, Intel was frank, acknowledging that it had tried to do too much at 10nm. The company didn’t go into much detail, but some of the choices it has made with its next-generation process node speak for themselves.
Contact Over Active Gate
COAG stands for “Contact Over Active Gate,” but given the problems Intel ran into at 10nm, it could’ve stood for “Cobalt Offers Awful Gains,” or maybe “CObalt: Actually Garbage.” Intel deployed cobalt extensively throughout 10nm, using it for contacts, metallization, and vias for both M0 and M1. When it announced 10nm, Intel claimed cobalt would reduce contact line resistance by 60 percent compared to tungsten. Resistance to electromigration damage was said to increase by multiple orders of magnitude.
Unfortunately, cobalt is considered quite difficult to work with, and manufacturing problems with the material are thought to be at least partly responsible for Intel’s manufacturing delays at 10nm. RealWorldTech has an excellent writeup of Intel 4 for anyone who wants the deepest technical dive, so I’m going to let them explain this bit:
At smaller geometries, the contact layer is increasingly challenging due to alignment requirements, resistance, and potential capacitance between the contact and gate. A typical contact hole will be 20nm or less in diameter. Intel explicitly indicated that the metal has reverted from cobalt back to pure tungsten and that a single damascene process is used to form the contacts separate from the M0 layers. It is extremely likely that the contacts are printed using EUV [Extreme UltraViolet]. The switch from cobalt back to tungsten also implies that the contacts use a different process flow that increases the volume of tungsten which likely improves the contact resistance compared to the cobalt-based Intel 7.
The fact that Intel made a big deal about the switch to cobalt in the move from 14nm to 10nm, only to revert to tungsten for contacts and enhanced copper for the lower metal layers with Intel 4 (originally 7nm), points to problems with cobalt at the manufacturing level. Cobalt has much higher resistance than copper, and this could partly explain the low frequencies Intel initially offered with Cannon Lake and Ice Lake. The company was clearly able to mitigate the problem in later 10nm / Intel 7 hardware, because Tiger Lake and Alder Lake have both hit higher clocks than Ice Lake, but Intel still isn’t sticking with its previous approach as it shrinks the node.
What’s New in Intel 4
Intel 4 is a full node shrink relative to Intel 7, with an estimated 20 percent performance improvement in the same power envelope, or a 40 percent reduction in power at the same clock. It’s the first full node shrink Intel has announced since it relaunched its effort to serve as a contract foundry for other chip designers, but the company doesn’t expect its new customers to deploy Intel 4, though it stressed they will be able to use it if they want to. Instead, Intel believes its future leading-edge foundry customers will mostly target Intel 3 when that process is available.
One reason Intel’s foundry customers might prefer Intel 3 over Intel 4 is that Intel 4 is optimized for high-performance silicon. Most of TSMC’s customers don’t prioritize raw performance the way Intel, Nvidia, AMD, IBM, and a handful of other companies do. Many chip designs are optimized for very high transistor densities and/or low power as opposed to high performance.
When a foundry deploys a node for a wide range of customers, it creates design libraries for both high-performance and high-density products. In this case, Intel has deployed high-performance libraries suitable for building CPUs on Intel 4 and plans to introduce high-density libraries suitable for GPUs and other ASICs on Intel 3. This implies that Intel will continue to tap TSMC for its non-CPU and chipset manufacturing for the foreseeable future.
Intel will introduce EUV into manufacturing with the Intel 4 process before deepening its use of the technology with Intel 3. Intel is the last of the top-tier semiconductor foundries to adopt EUV in volume manufacturing, despite being one of the first firms to call for its development 20 years ago. EUV is a replacement for the older 193nm “DUV” (Deep UltraViolet) lithography and is used to print smaller features and reduce the number of steps required in the chip manufacturing process.
According to Intel, the number of masks it needed to use per CPU would have jumped 30 percent from Intel 7 to Intel 4 without EUV. Instead, the number of masks required for Intel 4 dropped by 20 percent. Total process steps decreased by five percent. Like TSMC before it, Intel will initially adopt EUV in a limited capacity. The company is reportedly using EUV for contacts and for only certain metal layers and vias, whereas TSMC and Samsung both apply it more broadly across contacts, vias, and metal layers. Intel is expected to widen its adoption of EUV with Intel 3, so this gap will narrow over time. RealWorldTech notes that Intel is still using SAQP (self-aligned quad patterning) for certain metal layers, which implies the older technology is still more economical or effective than EUV in certain circumstances.
What Does All This Mean For Intel?
Intel 4 is explicitly designed for high-performance microprocessors. When Pat Gelsinger returned to Intel, he pledged that the company would refocus on high-performance microprocessors and on regaining technical leadership in the market. Intel 4 is intended to advance that goal and will power upcoming CPU tiles, such as Meteor Lake’s compute tile.
Introducing EUV is a big step for every manufacturer, and Intel is choosing to move more cautiously with the technology. In this, it’s echoing TSMC’s path to EUV rather than Samsung’s. TSMC introduced EUV in a limited capacity at 7nm and then expanded its use at 5nm. Samsung, in contrast, went all-in on EUV from 7nm forward. Samsung, however, has also been struggling with yield problems for years now. Yields on its 3nm GAA (gate-all-around) process were reportedly in the 10-20 percent range earlier this year, while its 4nm process is only yielding at 35 percent. Yields on new nodes are always poor at first, but numbers like these will drag on Samsung’s efforts to win customers for its leading-edge manufacturing.
By treating Intel 4 as the foundation Intel 3 will build on, Intel splits its EUV transition across two nodes and simplifies its own learning curve. EUV and Intel 4 will ramp at Intel’s Hillsboro fab first before being duplicated at Leixlip. Ramping production in a second fab presents its own challenges — this is why Intel uses its “Copy Exactly” formula for fab design — but Intel will be its own customer for these parts during the initial build-out. It should be easier to move from Intel 4 to Intel 3 than to jump straight from Intel 7 (non-EUV) to Intel 3 (EUV) with no step in the middle.
I talk a lot about how foundry manufacturing is a long game, and Intel’s 10nm saga and potential recovery illustrate the point. When Intel introduced 22nm, it beat the rest of the world to FinFETs by several years. It took Intel years to fix its 10nm process, but the company resolved its problems well enough to eventually transition its high-performance microprocessors over to the node. Instead of trying to duplicate its “Everything + Kitchen Sink” approach to 10nm, Intel is splitting the improvements and focusing on high-performance parts first, with high-density libraries, deeper EUV integration, and support for a broader range of customers all arriving with Intel 3.
Intel’s problems at 10nm are easily the biggest “miss” the company has suffered in decades, but Intel was never in any financial danger from its extended stay on 14nm. I wouldn’t bet against Intel competing again for leadership in semiconductor manufacturing for the same reason I wouldn’t have bet against Intel 17 years ago, when the Athlon 64 X2 was an ascending star and Intel’s dual-core Pentium D CPUs were sucking wind. The Pentium D (codenamed Smithfield) would eventually have a bit of revenge on the rest of the market as a late-game overclocking star, but while Prescott-derived cores made the company a lot of money, they did nothing for its reputation. Speculation that AMD would bankrupt Intel reached a fever pitch among enthusiasts from 2004 to early 2006. Then Intel launched the Core 2 Duo “Conroe” and spent the next 11 years as the unchallenged king of high-end x86 performance.
Hitting Intel is a lot like hitting a rubber wall with a hammer. Denting it is easy. Inflicting meaningful, long-term damage? That’s tougher, especially if the “damage” amounts to little more than “Making modestly less net profit on an annual basis.” It’s much too early to predict whether Intel will succeed in retaking the semiconductor performance throne, but the company has laid out a plausible path to get itself there by emphasizing its historic engineering and leadership role in the industry. TSMC shouldn’t be quaking in its boots, but it shouldn’t be ignoring the long-term threat, either.
Feature image is a test wafer for Meteor Lake, built on Intel 4. Image by Intel.