Should Spectre, Meltdown Be the Death Knell for the x86 Standard?

Spectre and Meltdown are two of the most serious security flaws we’ve seen in years. While it’s not clear how often we’ll see either exploited in the wild, they’re dangerous because they target the fundamental function of the affected chips themselves rather than relying on any software flaw. Meltdown can be addressed by a patch, while Spectre’s attack methods are still being analyzed. Building CPUs that aren’t vulnerable to these attacks under any circumstances may not be possible, and mitigating some threat vectors may require fundamentally new design approaches.

Over at ZDNet, Jason Perlow argues these latest failures are proof the x86 standard itself needs to be destroyed, root and branch. He compares the flaws in x86 with a genetic disorder and writes:

Essentially, the only cure — at least today — is for the organism to die and for another one to take its place. The bloodline has to die out entirely.

The organism with the genetic disease, in this case, is Intel’s x86 chip architecture, which is the predominant systems architecture in personal computers, datacenter servers, and embedded systems.

Perlow goes on to discuss how software companies like Microsoft have pivoted towards the cloud (which doesn't require x86 compatibility for backend services) and ultimately calls for new hardware development built on completely open standards like the RISC-V instruction set. After discussing how OpenSPARC had promise, but withered on the vine following Sun's acquisition by Oracle, he declares: "We need to develop a modern equivalent of an OpenSPARC that any processor foundry can build upon without licensing of IP, in order to drive down the costs of building microprocessors at immense scale for the cloud, for mobile and the IoT."

It's an interesting argument, but not, I think, an accurate one.

x86 Isn’t Going Anywhere

While it's true that the rise of ARM has expanded the overall consumer CPU ecosystem, the two CPU families have thus far lived in different worlds. The ARM server market is, for the moment, nearly nonexistent. And while it's theoretically possible for x86 to be pushed out by a superior CPU architecture, there are some significant barriers to that actually happening.

Among them: emulated x86 performance on a device like a Windows 10 Snapdragon 835 system will never match native code, emulation support doesn't extend across the entire legacy stack of Win32 applications, there's a huge amount of x86 legacy code in-market, and there's precious little interest from anyone in a wholesale break with the past, particularly when there's no evidence such a break would lead to meaningful improvements in CPU security (more on this later).

Intel made four attempts to design non-x86 architectures that were either explicitly intended to replace it or, at the least, could have replaced it if x86 had run out of steam and these other CPUs met their design goals: iAPX 432 (1981), i960 (1984), i860 (1989), and Itanium (2001). Itanium in particular was discussed as a long-term replacement for x86 in the run-up to its launch. Back then, before AMD created x86-64, Intel was resolute that 32-bit was the end of the line for its x86 chips, with Itanium taking over all 64-bit workloads in the future. It didn't happen that way, but not for lack of trying on Santa Clara's part.

CPU performance is dictated by design decisions much more than ISA.

Furthermore, ISA comparisons performed several years ago showed that, as far as efficiency is concerned, CPU architectural decisions have a much larger impact than the ISA itself. That's why the Cortex-A15 uses significantly more power than the old Cortex-A9 in the graph above, and it's why the Core i7's power consumption is so much higher than that of Atom (Bonnell microarchitecture) or AMD's Bobcat. Getting rid of x86 might still be worth it if the x86 CPU families were particularly or uniquely broken, but they aren't, which brings us to our next point:

No One Is Getting Rid of Out-of-Order Execution

The flaws that make Intel CPUs particularly susceptible to Meltdown have to do with how Intel's chips handle privilege checks during speculative memory accesses. The flaws that allow Spectre to function aren't particular to Intel or even to x86 at all. They affect CPUs from ARM, AMD, and Intel alike, including Apple's custom CPU cores, which are based on ARM but offer much higher per-core performance than any other ARM SoC available in the consumer market.
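
For a sense of how little exotic code is required, here's a minimal sketch in C of the bounds-check-bypass pattern Spectre variant 1 abuses, modeled on the publicly documented proof-of-concept; the names (victim_function, array1, array2, array1_size) are illustrative, not drawn from any real codebase:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical victim data layout, for illustration only. */
    size_t  array1_size = 16;
    uint8_t array1[16];
    uint8_t array2[256 * 4096];  /* one cache line per possible byte value */
    volatile uint8_t temp;       /* keeps the compiler from removing the load */

    /* If the branch predictor has been trained to expect x < array1_size,
       the CPU may speculatively execute the body with an out-of-bounds x.
       The secret byte array1[x] then selects which line of array2 is pulled
       into the cache, where an attacker can detect it by timing accesses. */
    void victim_function(size_t x) {
        if (x < array1_size) {
            temp &= array2[array1[x] * 4096];
        }
    }

Note that nothing here is a bug in the traditional sense: the bounds check is correct, and the out-of-bounds read happens only transiently, during speculation.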

Without diving into too much detail, these attack methods work by exploiting certain intrinsic CPU behaviors that are closely linked to many of the performance-enhancing techniques CPU developers have relied on for decades. We rely on them because alternative solutions don't work as well. That doesn't mean chip architects won't find better solutions, but CPU security is always going to be an evolving game. The attack vectors used in Spectre and Meltdown hadn't been thought of when out-of-order execution (OoOE) techniques were being developed and refined. And no one is going to build chips that stop using them when OoOE techniques are largely responsible for the level of CPU performance we currently enjoy and the current patches don't (yet) seem to hit consumer desktop performance.
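
Both attacks ultimately read stolen data out through a cache-timing side channel. Below is a hedged sketch of the flush+reload-style probe step using x86 intrinsics available in GCC and Clang; the function name probe_latency is my own, not taken from any published exploit:

    #include <stdint.h>
    #include <x86intrin.h>  /* _mm_clflush and __rdtscp (GCC/Clang on x86) */

    /* Time a single load. After flushing a line with _mm_clflush() and
       letting the victim run, a fast reload means speculative execution
       touched this line, leaking information one probe at a time.
       Real proof-of-concept code adds fences (e.g. _mm_lfence) around
       the timed load for precision; this sketch omits them for brevity. */
    static uint64_t probe_latency(volatile uint8_t *addr) {
        unsigned int aux;
        uint64_t start = __rdtscp(&aux);  /* timestamp before the load */
        (void)*addr;                      /* the access being timed */
        uint64_t end = __rdtscp(&aux);    /* timestamp after the load */
        return end - start;               /* small = cached; large = memory */
    }

Run over each candidate cache line, a probe like this recovers the leaked byte: the single fast line reveals the secret value. Defending against this class of attack means either disturbing the timer, the cache, or speculation itself, and all three are deeply tied to performance.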

IP Licenses Aren’t a Major Cost Driver

A 2014 semiconductor cost analysis from Adapteva found that IP licensing fees and royalty rates aren't a large driver of total chip design or production costs. Royalty rates can absolutely vary, but they tend to do so with the complexity and performance of the chip you're trying to build.

Credit: Adapteva

The $0-$10M range for royalty fees isn't small, but it's dwarfed by hardware and software development costs, which can run into the hundreds of millions of dollars. This is not to say making cores cheaper wouldn't help some would-be developers, but it's not a magic key to unlocking dramatically better cost structures. Fabs like TSMC, GlobalFoundries, and UMC already earn money on older process nodes building chips that don't need the latest and greatest technology, with relatively low licensing costs.

An Open Source CPU Doesn’t Solve These Problems

Spectre and Meltdown are examples of what happens when researchers take an idea (attacking specific areas of memory to extract the data they hold) and apply it in new and interesting ways. To the best of our knowledge, the difference in Meltdown exposure between AMD, Apple, ARM, and Intel has nothing to do with any specific effort to build more secure processors. Everyone is exposed to Spectre regardless.

Making a chip design open source does nothing to prevent future researchers from finding attack methods that work against CPUs that weren't designed to mitigate them, because those attack methods didn't exist yet. It doesn't automatically provide a means of securing future CPUs, or even make it more likely that a way to close the vulnerability without hurting performance will be found. The number of people in the world who are qualified to contribute reasonably good code to an open source software project is far higher than the number of people who are qualified to work as advanced CPU designers in partnership with cutting-edge foundries, so the many-eyes advantage that benefits open source software doesn't translate neatly to hardware.

Conclusion

The idea that x86 represents some kind of millstone around Intel and AMD's collective necks rests on the assumption that x86 is old, and that old equals bad. But let's be honest here: While a modern Core i7 or Ryzen 7 1800X can still execute legacy 32-bit code that ran on an 80386, there's no 80386 hardware still knocking around inside your desktop CPU. Even when the CPU is running the same code, it isn't running that code through the same circuits. Modern CPUs aren't made with the same materials or processes we used 30 years ago, they aren't built to the same specifications, and they don't rely on the same techniques to maximize performance. Pointing to the age of x86 paints the architecture in a bad light for rhetorical purposes; it isn't an accurate way to capture the strengths and weaknesses of various CPU designs.

There may well come a day when we replace x86 with something better. But it isn't going to happen just because x86 chips, like non-x86 chips, are impacted by design decisions common to high-performance processors from every vendor. Open source hardware is a nifty idea and I welcome the advent of RISC-V, but there's no proof an OSS chip would've been less susceptible to this type of attack. x86, ARM, and the closed-source CPU model aren't going anywhere, and these security flaws offer no compelling reason why they should.
