Comparison of Apple M1, A14 Shows Differences in SoC Design

Ever since Apple launched the M1, there have been questions about how the new SoC differs from the A14. While the two chips share a common CPU microarchitecture, the M1 incorporates additional on-die functionality that the A14 lacks, along with more CPU cores in total and a larger overall die.

TechInsights has published a comparative die shot of the two SoCs.

Compared with the A14, the M1 has twice as many DDR interfaces, twice as many high-performance CPU cores, and twice as many GPU cores, and the L2 cache backing Apple’s Firestorm cores is 1.5x the size of the A14 equivalent. The smaller IceStorm cores use the same-sized L2 in both chips, and the NPU is identical between the two.

The M1 has 25 percent less overall system cache than the A14, according to TechInsights, while its die is 1.37x the size of the A14’s. The increased die size is mostly driven by two factors. First, the M1 doubles the number of high-performance CPU cores, GPU cores, and DDR interfaces, and enlarges the Firestorm L2 cache. Second, the M1 integrates silicon that the A14 doesn’t, like the Apple T2 security processor, as well as support for standards like PCIe.

According to TechInsights, Apple used roughly 2.1x as much silicon to implement 2x the CPU and GPU cores of the A14. The slight disparity could be evidence that Apple used transistor libraries optimized for performance rather than power consumption in the M1, but there’s no evidence yet to support this argument, and the M1’s clocks are only modestly higher than the A14’s.
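The per-core arithmetic here is simple enough to sketch. A minimal back-of-the-envelope check, using only the two ratios TechInsights quotes (the absolute die areas are not given, so everything is normalized to the A14):

```python
# Ratios reported by TechInsights: the M1 spends ~2.1x the silicon
# of the A14 on 2x the CPU and GPU cores.
silicon_ratio = 2.1   # M1 core-related silicon area vs. A14
core_ratio = 2.0      # M1 has 2x the high-performance CPU and GPU cores

# Silicon spent per core, normalized to the A14's per-core area.
per_core_scaling = silicon_ratio / core_ratio
print(f"Per-core silicon vs. A14: {per_core_scaling:.2f}x")  # ~1.05x
```

In other words, each M1 core costs about 5 percent more area than its A14 counterpart, a gap small enough to be explained by performance-optimized transistor libraries, extra interconnect, or simple layout overhead.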

There has been a great deal of talk about the deployment of specialized accelerator blocks inside the M1, but relatively little discussion of what they are. We know there’s an image signal processor and Apple’s own Thunderbolt 4 controller, and there will be the usual storage and I/O controllers any chip of this sort requires. It is possible that Apple even implemented certain software functions directly in hardware to make them faster and more power-efficient. This method of using so-called “dark” silicon is something we discussed back in 2013 as one way SoC designers might improve transistor utilization without blowing up power budgets.

It is not clear whether Apple took the concept as far as mapping specific applications into circuitry, or what functions the company has mapped to the still-unlabeled hardware blocks. Rosetta 2 is not believed to rely on any specific hardware functions on the chip, and there’s no indication that the M1 is somehow cheating at benchmarks via some kind of secret hardware acceleration beyond that which PC CPUs possess. The fact that the M1 continues to perform strongly under Windows 10 when compared against the Surface Pro X also pushes back against the idea that the M1’s performance is being driven by specialized hardware blocks.

Additional analysis work will probably shine a light on these resources long-term, but it’s interesting to see the curtain pulled back on Apple’s first homegrown laptop SoC, one bit at a time.
