AMD’s Ryzen 5 2400G and Ryzen 3 2200G are excellent APUs, but they both carry a fairly high TDP requirement. A 65W TDP is fine for even a small tower, but it’s an aggressive thermal envelope for the smallest systems or chassis roughly the size of an Intel NUC. To remedy this, AMD has launched a pair of 35W TDP APUs that should give system builders more leeway to drop Ryzen processors into diminutive builds.
We’ve put together a chart comparing the 65W and 35W TDP parts, shown below:
Now, before we go further, a word on Thermal Design Power (TDP) and power consumption. In the old days, AMD and Intel used two different definitions of TDP. Intel documents its definition of TDP formally, but the plain-language summary is that a CPU’s TDP represents its standard thermal dissipation in a representative workload over a given period of time. In Intel parlance, TDP is explicitly not equivalent to maximum CPU power consumption. That’s an important point, because for many years, AMD defined its CPU TDP as being, “The maximum power a processor draws for a thermally significant period while running commercially useful software.”
See the difference? Both manufacturers provided these figures to heatsink designers and both were labeled TDP, but they meant different things and were not comparable between the companies. If this seems confusing, you aren’t alone. And here, it gets a teensy bit worse. TDPs still aren’t directly comparable between Intel and AMD, because the companies still use different methodologies to arrive at their figures, but those figures now appear to be calculated more in line with one another. AMD now defines TDP as: “Thermal Design Power (TDP) is strictly the measurement of an ASIC’s thermal output, which defines the minimum cooling solution necessary to achieve rated performance.”
This isn’t a new change; AMD shifted its TDP definition at some point between 2009 and the present day. But having come across this information in the Ryzen 7 2700X Reviewer’s Guide, it made sense to slip it into this discussion. Keep in mind, this figure is typically used by heatsink designers, not enthusiasts, and represents a thermal dissipation requirement rather than a formal statement of power draw. Nor is it a less accurate figure. AMD’s old decision to report maximum power draw as its functional TDP meant that, for most of the time you were using the system, the TDP slapped on the chip was much higher than the CPU’s actual power consumption.
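To make the heatsink-designer framing concrete, here is a minimal sketch of how a TDP figure translates into a cooling requirement. The formula and every number in it are illustrative assumptions on my part, not AMD or Intel specifications:

```python
# Hedged sketch: how a cooler designer might use a TDP figure.
# All temperatures and wattages here are illustrative assumptions,
# not vendor specs for any actual Ryzen part.

def max_thermal_resistance(tdp_watts, t_limit_c, t_ambient_c):
    """Return the maximum case-to-ambient thermal resistance (degC/W)
    a cooler may have while keeping the chip at or below its
    temperature limit when dissipating the rated TDP."""
    return (t_limit_c - t_ambient_c) / tdp_watts

# A hypothetical 65W part with a 95 degC limit in a 35 degC chassis:
print(round(max_thermal_resistance(65, 95, 35), 2))  # 0.92 degC/W

# The 35W sibling relaxes the requirement considerably, which is
# what lets it fit into NUC-sized enclosures:
print(round(max_thermal_resistance(35, 95, 35), 2))  # 1.71 degC/W
```

The point of the sketch: halving the TDP nearly doubles the thermal resistance the cooler is allowed to have, which is why a 35W rating matters so much for tiny chassis.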
The other reason I suspect AMD has overhauled its TDPs is that CPU thermal dissipation is vastly more complex than it used to be. When AMD first formulated its original TDP-equals-maximum-power-consumption metric, CPUs ran at whatever clock speed you told them to run at. Later, it became possible to lower clocks at idle, but chips still ramped up quickly under load. Today, both Intel and AMD bring a wide range of thermal and performance optimizations to the table, with CPUs that self-adjust their clocks based on what’s going on in the system and the available headroom. Expect these chips to hold their top clocks less aggressively and to throttle sooner: the maximum rated speeds have only dropped slightly, but pulling the entire CPU-plus-GPU complex down to just over half the wattage is a tall order.
AMD hasn’t given a formal shipping date for these parts yet, but if the company is putting them up online they should be available sooner rather than later.