GPUs Used For Crypto Mining Might Lose Game Performance, Long-Term

One question that comes up from time to time in gaming is whether older GPUs get slower over time. We've examined this question before with respect to software updates, meaning we checked whether older GPUs lost performance as a result of later driver updates that were less well optimized for older GPU architectures. While our driver tests found no evidence of software-driven slowdowns, we didn't check whether aging GPU hardware itself could degrade performance.

A new investigation of an 18-month-old RTX 2080 Ti claims to have uncovered evidence that an old GPU will run more slowly than a newer card, based on a comparison of two (we think) MSI GeForce RTX 2080 Ti Gaming X Trio GPUs. Unfortunately, based on the available information, there's no way to confirm that conclusion. At best, what the YouTube channel Testing Games has established is that long-term mining might slow down a GPU.

At first glance, the findings seem unequivocal. Testing Games runs a suite of games, including Assassin's Creed Valhalla, Battlefield V, Cyberpunk 2077, Forza Horizon 4, Horizon Zero Dawn, Kingdom Come: Deliverance, Mafia: Definitive Edition, and Red Dead Redemption. The MSI Gaming X Trio card used for crypto mining for the past 18 months is typically about 10 percent slower than the new MSI Gaming X Trio that hasn't been used for mining. Spot checks in various games show that the used RTX 2080 Ti runs 15-20 degrees hotter than the new card, draws somewhat less power, and hits a lower maximum clock speed. This would seem to be an open-and-shut demonstration that mining can wear out a GPU, but there are some problems with the analysis.

First, the testers don't appear to have re-pasted or dusted the used GPU. Dust is a magnificent insulator, and enough of it will easily destabilize a gaming rig. This alone could account for the higher temperatures and lower clocks on the used card, with no wear-related explanation needed.

Second, it is not clear whether this represents the same GPU tested at two points in time or two different versions of the card purchased at different times. The former would be more useful. The wide use of boost clocks in modern GPUs and CPUs allows for binning variations that can affect the final result. It could be that the newer card fielded a better-binned core, allowing for higher base performance and effectively invalidating our ability to derive any useful information from this comparison. The official boost clock on the MSI Gaming X Trio is 1755MHz, which means both GPUs are shown running above this specification. It is possible that some of the variance between the two cards simply reflects silicon quality.

If these are two different GPUs, we also don't know whether they run an identical VBIOS version or carry exactly the same brand of RAM. Memory micro-timings and VBIOS updates can introduce their own performance changes. The newer GPU is also often faster than the older one by more than the difference in clock speed would indicate. The clock gap measured in-game is on the order of 3-5 percent (it varies depending on where you are in the run), while the performance gap is 8-12 percent. The RAM clock is supposedly locked at an effective 7GHz (14Gbps) on both cards.

There's another point I want to bring up: these numbers imply an odd relationship between the clock variation and the actual observed performance.

GPU clocks and performance results do not typically move in lockstep. Increase the GPU core and memory clocks by 10 percent, and a game’s performance may only improve by 6-8 percent. This is expected because there’s always the chance that a benchmark is slightly limited by some other aspect of the system. The expected result from a linear clock speed increase is a linear-to-sublinear improvement in performance. It therefore follows that the expected impact of reducing clock is a linear-to-sublinear reduction in performance.

The results in this video show the opposite in almost every case. Apart from Kingdom Come: Deliverance, the gap between the reported GPU clocks is about half the size of the performance gap: clock speed differences of 4-6 percent are associated with performance shifts of 8-15 percent.
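
To make the arithmetic concrete, here's a minimal sketch of the sanity check being applied. The numbers are assumptions in the 4-6 percent clock / 8-15 percent performance range the video reports, not Testing Games' actual per-game data: divide the performance gap by the clock gap, and anything meaningfully above 1.0 points to a variable other than core clock.

```python
# A minimal sketch of the clock-vs-performance scaling check described above.
# The observation pairs are illustrative assumptions, not measured data.

def scaling_factor(clock_delta_pct: float, perf_delta_pct: float) -> float:
    """Performance change observed per unit of core clock change."""
    return perf_delta_pct / clock_delta_pct

# Hypothetical (clock gap %, performance gap %) pairs between the two cards.
observations = [(4.0, 8.0), (5.0, 12.0), (6.0, 15.0)]

for clock_pct, perf_pct in observations:
    factor = scaling_factor(clock_pct, perf_pct)
    verdict = "superlinear, suspicious" if factor > 1.0 else "sublinear, expected"
    print(f"clock +{clock_pct:.0f}% -> perf +{perf_pct:.0f}%: "
          f"factor {factor:.2f} ({verdict})")
```

Every pair in that range produces a factor of 2.0 or more, which is exactly the backwards result the expected sublinear scaling rules out.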

This could be a result of polling errors in the utilities used to gather the information. Alternately, it suggests some other variable is in play that hasn't been accounted for in the YouTube video above. The used GPU could be hitting thermal limits and throttling itself back, but doing so more quickly than the monitoring utility can detect. Most polling utilities sample only once per second, while GPUs can adjust their clocks in a matter of milliseconds. It's possible the used GPU's clock looks more stable than it would if we had finer-grained reporting tools.
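
To illustrate why the polling rate matters, here's a minimal simulation with entirely hypothetical clock values and dip timings: a card that throttles in brief 20ms bursts while a once-per-second monitor happens to sample between the dips.

```python
# A minimal sketch (illustrative numbers, not Testing Games' data) showing how
# a once-per-second polling utility can miss millisecond-scale throttling.

BOOST_MHZ = 1905      # hypothetical steady boost clock
THROTTLE_MHZ = 1650   # hypothetical clock during a brief thermal dip

# Simulate 10 seconds of clock behavior at 1 ms resolution. In every 200 ms
# cycle, the GPU spends 20 ms throttled (ms 100-119 of the cycle).
trace = []
for ms in range(10_000):
    in_dip = 100 <= (ms % 200) < 120
    trace.append(THROTTLE_MHZ if in_dip else BOOST_MHZ)

true_avg = sum(trace) / len(trace)   # what the silicon actually sustained

# A 1 Hz monitor takes one instantaneous sample per second. Here every
# sample lands between dips, so the log shows a rock-steady clock.
polled = trace[::1000]
polled_avg = sum(polled) / len(polled)

print(f"True average clock:  {true_avg:.1f} MHz")    # ~1879.5 MHz
print(f"1 Hz polled clock:   {polled_avg:.1f} MHz")  # 1905.0 MHz
```

In this contrived case the once-per-second log reports a rock-steady 1905MHz even though the silicon only sustained about 1880MHz on average. Real throttling isn't this conveniently periodic, but the sampling blind spot is the same.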

Testing Games has not released any follow-up information on its testing protocols, or on whether this comparison was performed on the same GPU at two different points in time or on two different GPUs purchased at different times. It also hasn't discussed why these results point to greater-than-linear performance gaps given the modest differences in GPU clock and no change in memory clock.

Until these questions are answered, the idea that heavily mined cards lose gaming performance can only be considered a theory. We're not saying the theory is wrong, but it hasn't been properly tested yet. More data is needed, either from Testing Games or from other sources, to substantiate the claim.
