As Chip Design Costs Skyrocket, 3nm Process Node Is in Jeopardy
The semiconductor industry has had an increasingly hard time delivering new process nodes over the last few years, as the benefits of each new node have shrunk and the costs of adoption have grown. One of the biggest reasons foundries like TSMC, GlobalFoundries, Samsung, and Intel are all working to introduce extreme ultraviolet lithography (EUV) into upcoming process nodes is that the cost of not using EUV has become unsustainable.
But while EUV is expected to reduce the cost of manufacturing processors by cutting the number of masks required per design, it does nothing to reduce the cost of designing the chip in the first place — and chip design costs are rising so quickly, they could effectively kill long-term semiconductor scaling across the entire industry.
A recent article at Semiengineering explored this phenomenon and the ugly cost curves driving it. Right now, Samsung and GlobalFoundries are hoping to deliver nanosheet FETs as a successor to FinFET, while TSMC is exploring both nanosheets and nanowires (Intel hasn't said anything about its plans). All of these approaches are still highly theoretical (3nm isn't a near-term node in any case), but there's a different problem waiting in the wings even if these new transistor types can be refined and brought to market: design cost. The price of designing a 3nm chip is expected to range from $500M to $1.5B, with the latter figure reserved for a high-end GPU from Nvidia.
The following chart from IBS shows expected design costs through 5nm; the 3nm data point isn't even on the chart yet. Treating the "16nm" column as equivalent to the various 12/14/16nm chips we've seen in-market thus far, it implies a cost of roughly $100M to design a new GPU, CPU, or SoC. Even at 7nm, the design cost has tripled. Moving from 7nm to 3nm would multiply costs by a further 5x.
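To make the cost curve concrete, here's a back-of-envelope sketch. The ~$100M 16nm baseline and the 3x and 5x multipliers come from the IBS data discussed above; the intermediate dollar figures are simple extrapolation, and the 3nm result lands at the top of the quoted $500M to $1.5B range:

```python
# Back-of-envelope design-cost scaling implied by the IBS figures above.
cost_16nm = 100e6          # ~$100M for a 12/14/16nm-class GPU, CPU, or SoC
cost_7nm = cost_16nm * 3   # design cost roughly triples at 7nm
cost_3nm = cost_7nm * 5    # a further 5x jump moving from 7nm to 3nm

for node, cost in [("16nm", cost_16nm), ("7nm", cost_7nm), ("3nm", cost_3nm)]:
    print(f"{node}: ~${cost / 1e6:,.0f}M")
# 16nm: ~$100M, 7nm: ~$300M, 3nm: ~$1,500M
```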
“The industry needs to get a major increase in functionality as well as a small increase in transistor costs to justify the use of 3nm,” said Handel Jones, chief executive of International Business Strategies (IBS). “3nm will cost $4 billion to $5 billion in process development, and the fab cost for 40,000 wafers per month will be $15 billion to $20 billion.”
And all of that cash is being spent in pursuit of smaller and smaller improvements. By 3nm, the expected price/performance improvement per node is around 20 percent, compared with roughly 30 percent today.
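Those per-node gains compound, so even a modest drop per node adds up. A minimal illustration, assuming the quoted percentages hold across two successive node transitions:

```python
# Per-node gains compound, so a drop from ~30% to ~20% per node widens
# quickly. The per-node figures come from the article; the two-node
# horizon is an illustrative assumption.
nodes = 2
print(f"Two nodes at 30%/node: {1.30 ** nodes:.2f}x cumulative")  # ~1.69x
print(f"Two nodes at 20%/node: {1.20 ** nodes:.2f}x cumulative")  # ~1.44x
```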
Upending the Semiconductor Industry
Cost issues have already begun to transform the semiconductor industry as a whole. With fewer and fewer customers moving to new nodes, it's harder and harder for firms to justify these upgrades, which is one reason so few major foundries remain at the cutting edge: TSMC, Samsung, GlobalFoundries, and Intel are the last four leading-edge foundries standing. And the difficulty of driving FinFETs and their successors forward could have another impact that gets little discussion these days: the rehabilitation and improvement of older nodes, as opposed to the continual construction of new ones.
As the cost of moving to a new node rises, so does the relative value of improving older nodes as a means of offering meaningful customer improvements. We've already seen some evidence of this shift, with foundries sometimes highlighting refinements to older nodes, or pairing older nodes with newer manufacturing techniques. When Samsung switched to building 3D NAND, for example, it did so on a 40nm process; using an older node let Samsung improve the characteristics of its TLC NAND. While Micron and Intel haven't specifically stated which process node they use for their quad-level cell (QLC) NAND, it's almost certainly built on an older node as well. And GlobalFoundries offers a 22nm node with FD-SOI, an obvious attempt to cater to customers who wanted an improved process below 28nm but needed lower power and lower design costs than 14/16nm FinFET. (Design costs are higher for FinFET, while wafer costs are higher for FD-SOI.)
A $500M to $1.5B design cost would require AMD, Intel, or Nvidia to spread their work out over longer periods, with the exact length depending, of course, on each company's overall income. Regardless, it represents a huge cash outlay these companies don't currently pay, and that, in turn, could make it difficult for even the largest firms to justify the investment. It also helps explain why these companies are so interested in the machine learning and AI markets: it's going to take huge revenue figures to justify these expenses, and the high prices commanded by enterprise and data center hardware may be the only way to justify building consumer hardware at all. If the cost curves get too ugly, overall scaling could effectively stop, even if the physicists aren't quite out of headroom yet.
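To see why data center revenue matters so much here, consider a rough break-even sketch. The $500M to $1.5B design-cost range comes from the IBS estimates above, but the per-chip margins below are invented purely for illustration:

```python
# Hypothetical break-even volumes for a 3nm design bill. The $500M-$1.5B
# range comes from the article; the per-chip gross margins are invented
# for illustration, not vendor figures.
for design_cost in (500e6, 1.5e9):
    for margin in (50, 500):  # assumed $/chip: consumer vs. data center
        units = design_cost / margin
        print(f"${design_cost / 1e9:.1f}B at ${margin}/chip margin: "
              f"~{units / 1e6:,.1f}M units to break even")
```

Under these assumed numbers, a tenfold increase in per-chip margin cuts the required volume tenfold, which is exactly the leverage that enterprise and data center pricing provides.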