CPU Utilization Is Wrong on PCs, and Getting Worse Every Year
CPU utilization is wrong. That’s the argument Brendan Gregg, Netflix’s senior performance architect, has leveled against one of the most fundamental performance metrics we use when evaluating a system. According to Gregg, CPU utilization as reported by your operating system isn’t just wrong; it’s actively getting worse over time.
If you’ve ever dug into this topic, you’re aware of some of the ways CPU utilization isn’t reported accurately. Ever since Intel (and later AMD) added Hyper-Threading / SMT support, there’s been a discrepancy between how cores are presented in Task Manager and what execution resources are actually available. Windows, Linux, and other operating systems report the total number of logical cores and measure CPU utilization as if each logical core were a physical core. But that’s not the problem Gregg is discussing. The first problem he raises is thread stalling. If you see your CPU running at 90 percent load, you might assume it’s spending 90 percent of its time doing useful work.
In reality, Gregg points out, the CPU may be stalled, waiting for data to arrive from memory, and doing no useful work at all while still being counted as busy.
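The logical-versus-physical discrepancy described above can be observed directly. As a hedged sketch (Linux-only, and assuming the kernel exposes the usual `physical id` / `core id` fields in `/proc/cpuinfo`), the snippet below compares the logical core count the OS reports against the number of distinct physical cores:

```python
# Sketch: compare logical cores (what the OS counts for utilization)
# against physical cores, by parsing /proc/cpuinfo on Linux.
import os

def core_counts(cpuinfo_path="/proc/cpuinfo"):
    """Return (logical, physical) core counts on a Linux system."""
    logical = os.cpu_count()  # what Task Manager / top treat as "CPUs"
    physical_pairs = set()
    physical_id = None
    with open(cpuinfo_path) as f:
        for line in f:
            if ":" not in line:
                continue
            key, _, value = line.partition(":")
            key = key.strip()
            # In each per-processor block, "physical id" (the socket)
            # precedes "core id" (the core within that socket).
            if key == "physical id":
                physical_id = value.strip()
            elif key == "core id":
                physical_pairs.add((physical_id, value.strip()))
    # Fall back to the logical count if the fields are absent (some VMs/ARM).
    physical = len(physical_pairs) or logical
    return logical, physical

logical, physical = core_counts()
print(f"{logical} logical cores, {physical} physical cores")
```

On an SMT machine the first number is typically double the second, yet utilization is averaged over the logical count, so two "busy" logical cores sharing one physical core do not deliver twice the throughput.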
You’ve probably seen this stalling behavior in action. If you’ve ever run a render or a Photoshop operation that really taxed your CPU, performance, even UI performance, may slow to a crawl while the workload executes. There are ways to mitigate this by limiting the number of active threads or lowering the priority of the workload itself, but if you’ve worked with computers for any length of time, you’ve probably seen instances where 100 percent CPU utilization didn’t actually mean 100 percent useful work. The problem, according to Gregg, is that memory accesses often stall the CPU while it waits for data. This is known as the CPU-DRAM gap, and it’s a topic we’ve discussed before at ET.
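As a rough illustration of the DRAM gap (a sketch for intuition, not Gregg’s methodology), the snippet below sums the same array twice: once in sequential order, once in shuffled order. Both loops keep the CPU "100 percent busy," but the shuffled pass defeats hardware prefetching and cache locality, so more of that busy time is really the core stalled on memory:

```python
# Same work, same answer, different memory-access pattern.
# The shuffled pass tends to run slower because of cache misses,
# even though both passes show as fully "busy" CPU time.
import random
import time

N = 1_000_000
data = list(range(N))
seq_order = list(range(N))
rand_order = seq_order[:]
random.shuffle(rand_order)

def timed_sum(order):
    start = time.perf_counter()
    total = 0
    for i in order:
        total += data[i]
    return total, time.perf_counter() - start

seq_total, seq_time = timed_sum(seq_order)
rand_total, rand_time = timed_sum(rand_order)
assert seq_total == rand_total  # identical work either way
print(f"sequential: {seq_time:.3f}s  shuffled: {rand_time:.3f}s")
```

In CPython, interpreter overhead dilutes the effect; in C the gap between the two passes is far larger. This is exactly why "percent busy" and "percent doing useful work" are different numbers, and why Gregg recommends looking at instructions per cycle (IPC) instead.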
The entire reason CPUs implement multi-level caching, with L1, L2, and L3 caches, is precisely that the DRAM gap stalls cores and lowers overall performance. But now there’s another problem distorting CPU utilization: Spectre and Meltdown patches.
In his talk, Gregg walks through a case study of two modern servers that turned in very different performance figures for the same workload, despite running at identical clocks. The culprit? Spectre and Meltdown patches that force TLB flushes, causing stall cycles in the CPU. Gregg goes into much more detail on how the KPTI patches can impact performance in a blog post on the topic, and while the data he presents is specific to the workloads he’s running (as one would expect), the impact is considerable.
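On recent Linux kernels you can check which of these mitigations are active via the `/sys/devices/system/cpu/vulnerabilities/` sysfs interface (a real kernel interface, though its presence depends on kernel version, so this sketch degrades gracefully when it’s absent):

```python
# Report the kernel's own view of Meltdown/Spectre mitigation status.
# Each file under the vulnerabilities directory holds a short status
# string, e.g. "Mitigation: PTI" for Meltdown when KPTI is enabled.
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigation_status(vuln_dir=VULN_DIR):
    """Map each reported vulnerability to the kernel's status string."""
    if not vuln_dir.is_dir():
        return {}  # older kernel: interface not present
    status = {}
    for entry in sorted(vuln_dir.iterdir()):
        try:
            status[entry.name] = entry.read_text().strip()
        except OSError:
            status[entry.name] = "unreadable"
    return status

for name, state in mitigation_status().items():
    print(f"{name:24s} {state}")
```

If Meltdown shows `Mitigation: PTI`, the kernel page-table isolation Gregg discusses is active, along with the TLB-flush overhead that comes with it.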
But the takeaway is this: CPU utilization, as reported by your operating system, is often misleading. All too often, what looks like a busy CPU is actually a stalled CPU, waiting for data before it can do something useful.