CPU Utilization Is Wrong on PCs, and Getting Worse Every Year
CPU utilization is wrong. That’s the argument Brendan Gregg, Netflix’s senior performance architect, has leveled against one of the most fundamental performance measurements we use when evaluating a system. According to Gregg, CPU utilization as reported by Windows isn’t just wrong; it’s actively getting worse over time.
If you’ve ever dug into this topic, you’re aware of some of the ways CPU utilization isn’t reported accurately. Ever since Intel (and later AMD) added Hyper-Threading / SMT support, there’s been a discrepancy between how cores are presented in Task Manager and the resources actually available. Windows, Linux, and other operating systems report the total number of logical cores and measure CPU utilization as if each logical core were a physical core. But that isn’t the problem Gregg is discussing. His first issue is thread stalling. If you see your CPU running at 90 percent load, you might think it looks like this:
In reality, Gregg points out, what might be going on is something akin to this, in which the CPU is stalled and waiting for data but isn’t actually doing any work.
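Part of why the stall is invisible is how the utilization number is produced in the first place. As a rough sketch (Linux-specific, and simplified compared to what real monitoring tools do), the OS just tracks how much time each core spent in something other than the idle task; a core stalled on memory is still "not idle," so it counts as busy:

```python
# Sketch of how a naive CPU-utilization figure is derived on Linux by
# sampling /proc/stat twice. The key point: "busy" is merely "time not
# spent idle," so a core stalled waiting on DRAM still reads as 100
# percent busy. Field layout follows proc(5); Linux-specific.
import time

def read_cpu_times():
    """Return (idle, total) jiffies from the aggregate 'cpu' line."""
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]   # idle + iowait columns
    return idle, sum(fields)

def cpu_utilization(interval=0.5):
    idle1, total1 = read_cpu_times()
    time.sleep(interval)
    idle2, total2 = read_cpu_times()
    busy = (total2 - total1) - (idle2 - idle1)
    return 100.0 * busy / max(total2 - total1, 1)

print(f"{cpu_utilization():.1f}% 'busy' (stalled cycles included)")
```

Nothing in this bookkeeping distinguishes a cycle that retired an instruction from a cycle the core spent waiting, which is exactly Gregg's complaint.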
If you think about it, you’ve probably seen this in action. If you’ve ever performed a render or Photoshop manipulation that really taxed your CPU, performance (even UI performance) may slow to a crawl while the workload executes. There are ways to mitigate this by limiting the number of active threads or lowering the workload’s priority, but if you’ve worked with computers for any length of time, you’ve probably seen instances where 100 percent CPU utilization didn’t actually mean 100 percent of the CPU was doing useful work. The problem, according to Gregg, is that memory accesses frequently stall the CPU: the processor sits waiting on data from main memory while still being counted as busy. This is known as the CPU-DRAM gap, and it’s a topic we’ve discussed before at ET.
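You can see the DRAM gap for yourself with a toy experiment (our illustration, not Gregg's methodology): sum the same large array twice, once in sequential order and once in a shuffled, cache-hostile order. Both loops perform identical work and both would show the core as fully busy, yet the shuffled pass takes longer, and the extra time is memory stalls. Python's interpreter overhead blunts the effect compared with C, but the gap is usually still visible:

```python
# Toy demonstration that "100% busy" hides memory stalls: both loops
# execute the same additions over the same elements, but visiting a
# large array in a cache-hostile order takes longer because the CPU
# spends extra cycles waiting on DRAM rather than doing work.
import random
import time

N = 2_000_000
data = list(range(N))
seq_order = list(range(N))
rand_order = seq_order[:]
random.shuffle(rand_order)       # same indices, cache-hostile order

def timed_sum(order):
    start = time.perf_counter()
    total = 0
    for i in order:
        total += data[i]
    return total, time.perf_counter() - start

seq_total, seq_t = timed_sum(seq_order)
rand_total, rand_t = timed_sum(rand_order)
assert seq_total == rand_total   # identical work was done
print(f"sequential: {seq_t:.3f}s  shuffled: {rand_t:.3f}s")
```

A utilization counter cannot tell these two runs apart; both look like a pegged core.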
The entire reason we implemented advanced caching structures with L1, L2, and L3 cache is precisely because the DRAM gap stalls out CPUs and lowers overall performance. But now, there’s another problem causing issues for CPU utilization: Spectre and Meltdown patches.
In the video above, Gregg walks through a case study of two modern servers that turned in very different performance figures despite running at the same clock speed and executing exactly the same workload. The culprit? Spectre and Meltdown patches that flush the TLB caches, causing stall cycles in the CPU. Gregg goes into much more detail on how the KPTI patches can impact performance in a blog post on the topic, and while the data he presents is specific to his own workloads (as one would expect), the impact is considerable.
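The cross-check Gregg recommends is instructions per cycle (IPC): if utilization is high but IPC is well below 1.0 on a modern core, the "busy" CPU is likely stall-bound. On Linux, `perf stat -e cycles,instructions` reports both counters. Below is a hedged sketch that computes IPC from perf's CSV output; the field layout (value, unit, event, ...) is an assumption about perf's `-x,` mode and may vary across perf versions:

```python
# Sketch: compute instructions-per-cycle (IPC) from the CSV output of
# `perf stat -x, -e cycles,instructions -- <cmd>` on Linux. A low IPC
# alongside high %CPU suggests the core is mostly stalled, not working.
# Assumed CSV layout: value,unit,event,... (may differ by perf version).

def parse_ipc(perf_csv: str) -> float:
    counts = {}
    for line in perf_csv.strip().splitlines():
        fields = line.split(",")
        if len(fields) >= 3:
            value, event = fields[0], fields[2]
            if value.replace(".", "").isdigit():   # skip "<not counted>"
                counts[event.split(":")[0]] = float(value)
    return counts["instructions"] / counts["cycles"]

# Illustrative counter values, not real measurements:
sample = """\
4000000000,,cycles,1000000,100.00
2000000000,,instructions,1000000,100.00"""
print(parse_ipc(sample))   # 0.5: half an instruction per cycle, stall-bound
```

An IPC of 0.5 here means half of every cycle's potential is going unused, even while a utilization graph would show the core pegged.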
But the takeaway is this: CPU utilization, as reported by Windows, is often incorrect. All too often, what looks like CPU usage is actually a stalled CPU waiting to do something useful.