Samsung Caught Cheating Customers Again, But On TVs This Time

Samsung has been caught cheating customers again in what’s becoming an irritatingly common occurrence. It’s only been a few months since the Korean manufacturer was caught cheating on the Galaxy S22’s benchmarks* by throttling the phone in virtually every application that isn’t a performance test. Now it’s been caught misleading TV reviewers by programming its TVs to improve their performance when they detect certain test patterns.

Every Company Wants to Look Good. Not Every Company Cheats

It’s easy to take issues like this and paint every company with the same brush, but it would be a mistake to do so. While every company wants to put its best foot forward, different firms choose very different ways to do that.

At the benign end of the scale, you have companies picking benchmarks (or benchmark settings) that show their products in the best light. Behavior gets scummier from that point in various ways and permutations, including non-identical hardware configurations between systems, different compiler settings, optimized binaries, and plain old non-representative cherry-picking.

Then, you have what Samsung is doing, which appears to involve pretending you’ve bought an entirely different television.

Samsung has apparently programmed at least two television sets — the S95B and the QN95B, specifically — to recognize when a reviewer is running test patterns on them. Television sets are typically tested, calibrated, and reviewed with test patterns that take up ten percent of the screen. Coincidentally, Samsung has reportedly programmed its televisions to behave entirely differently when exactly ten percent of the panel is in use. FlatPanelsHD detected this behavior when it began using a nine percent test window and observed very different brightness and color accuracy from the exact same television, as illustrated by the sketch below.
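Nobody outside Samsung knows how the firmware actually identifies a test pattern, but a detector doesn’t need to be clever: it only has to measure what fraction of the panel is lit and check whether that fraction sits suspiciously close to the industry-standard ten percent. The following is a purely hypothetical sketch; every function name and threshold here is invented for illustration:

```python
import numpy as np

def lit_area_fraction(frame: np.ndarray, threshold: int = 16) -> float:
    """Fraction of pixels in an 8-bit grayscale frame brighter than `threshold`."""
    return float(np.count_nonzero(frame > threshold)) / frame.size

def looks_like_test_window(frame: np.ndarray, target: float = 0.10,
                           tolerance: float = 0.005) -> bool:
    """Heuristic: flag frames whose lit area sits almost exactly at 10 percent."""
    return abs(lit_area_fraction(frame) - target) < tolerance

# Simulate a 4K panel showing a square white window covering ~10% of the screen.
frame = np.zeros((2160, 3840), dtype=np.uint8)
side = int((0.10 * frame.size) ** 0.5)   # side length of a window with ~10% of the area
frame[:side, :side] = 255
print(looks_like_test_window(frame))     # True: "review mode" would engage

# Shrink the window to 9% of the screen and the heuristic no longer fires.
frame[:] = 0
side = int((0.09 * frame.size) ** 0.5)
frame[:side, :side] = 255
print(looks_like_test_window(frame))     # False: back to normal behavior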
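A narrow match like this is exactly what FlatPanelsHD’s nine percent window defeats, which is why the discrepancy surfaced at all.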

Check out the difference in the TV’s measured HDR performance when using a 10-percent window versus a 9-percent window:

[Charts: QN95B HDR measurements with a 10 percent test window vs. a 9 percent test window. Credit: FlatPanelsHD]

The ten percent result looks like it came from a much better panel than the nine percent result. Delta E is a metric that measures the difference between the color a display actually produces and the color the source standard calls for; lower is better. Test the QN95B with a ten percent window, and its Delta E rating is 6.1. Test it with a nine percent window, and it’s 26.8. A Delta E of 6.1 is generally considered to be “perceptible at a glance” according to this guide by Zachary Schuessler, while a Delta E of 26.8 falls under “colors are more similar than opposite.” What this means for our purposes is that the QN95B is much less accurate than it pretends to be, and its claimed peak brightness is nearly 80 percent higher than what the panel actually delivers.
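For reference, Delta E in its simplest form (the CIE76 variant) is just the Euclidean distance between two colors in CIELAB space. Calibrators generally use the more elaborate CIEDE2000 formula, but a minimal CIE76 sketch is enough to show what the number represents; the sample colors below are invented:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 Delta E: Euclidean distance between two (L*, a*, b*) colors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

target   = (50.0, 0.0, 0.0)   # the color the standard asks for
measured = (47.0, 2.0, 5.0)   # the color the panel actually produced
print(round(delta_e_cie76(target, measured), 1))  # 6.2, "perceptible at a glance"
```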

According to FlatPanelsHD, the QN95B boosts its peak brightness from 1,300 nits (normal) to 2,300 nits, but only when tested with a window that occupies ten percent of the screen. Set a nine percent window, and brightness doesn’t exceed 1,300 nits. It also doesn’t exceed 1,300 nits when showing any normal content from any source, including HDR video, YouTube video, and gaming. There does seem to be some evidence that this cheating can throw off the display of certain content, however, as discussed below.
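The scale of the gap is easy to quantify from FlatPanelsHD’s own two figures:

```python
normal_nits  = 1300   # peak brightness with real content or a 9% window
boosted_nits = 2300   # peak brightness with a 10% test window

print(f"Boost over normal: {(boosted_nits - normal_nits) / normal_nits:.0%}")        # 77%
print(f"Shortfall vs. test mode: {(boosted_nits - normal_nits) / boosted_nits:.0%}")  # 43%
```

In other words, the number a reviewer measures is roughly 77 percent higher than anything a buyer will ever see, and the brightness a buyer actually gets falls about 43 percent short of the tested figure.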

TV Performance Measurement is Hard Enough Without This

Benchmarks are commonly dinged for being difficult to translate to real-world performance, but reading a bar graph to see which GPU is faster is much easier than explaining to someone why one collection of dots inside a brightly colored triangle is better or worse than a slightly different collection of dots.

Benchmarking televisions and monitors is uniquely problematic because there is no way to show the reader or viewer exactly what panel output actually looks like. It’s not impossible to capture differences between two TVs with a video camera, but the image won’t be the same as what you’d see in person. Manipulating the results reviewers see when running tests versus regular content makes it even harder to convey what buyers should expect. The only reason Samsung is doing this is so reviewers will crow over the brightness and accuracy of the underlying panel. Actual content doesn’t benefit from these capabilities; it may actually be harmed.

Samsung likely believes it can get away with this because very few people calibrate their displays with aftermarket hardware. Even if you did, Samsung would say that it doesn’t guarantee calibration out of the box and that every display will look slightly different. Most people don’t buy ten televisions to confirm that color accuracy and brightness are systemically worse than what reviews claim. Samsung knows that, and it’s taking advantage of it.

This ten percent window detection can apparently impact content reproduction. Here’s a shot from FlatPanelsHD comparing the QN95B to Sony’s X95K. According to the review, both TVs were set to their most accurate HDR modes, but the Samsung is far brighter than the Sony:

[Image: The same HDR scene on the Samsung QN95B (left) and the Sony X95K (right). Credit: FlatPanelsHD]

Samsung on the left, Sony on the right. All picture enhancements are disabled on both TVs. The Samsung makes this scene look like it takes place during the day.

Why? According to FPHD: “Last year, we ascribed this to Samsung’s dynamic tone-mapping, which is technically correct, but the more precise explanation is that it is a result of Samsung’s “AI” processor detecting our and others’ 10% window test patterns used for measurements and calibration to change and mislead about the TV’s actual picture output, as discussed earlier.”

Their review concludes: “like last year’s QN95A, QN95B has a significantly overbrightened picture in all of its HDR picture modes, a fact that Samsung’s “AI” video processor tries to hide by detecting the pattern used by reviewers/calibrators and changing its picture output during measurements only to return them to other values after the measurements have been carried out – that’s deception and cheating.”

Samsung’s Response

FlatPanelsHD has already reached out to Samsung, which provided the following response: “To provide a more dynamic viewing experience for the consumers, Samsung will provide a software update that ensures consistent brightness of HDR contents across a wider range of window size beyond the industry standard.” This could be read to indicate that Samsung will adjust its cheating software to be harder to detect rather than removing it. The reference to ensuring consistent brightness “beyond the industry standard” makes it sound as if this is some kind of service the company provides.

Samsung’s response is inadequate to the situation at hand. This is the third time in less than a year that a different division of the company has been caught falsifying product data or designing systems deliberately intended to obfuscate actual product performance. This is fundamentally consumer-hostile and it makes a mockery of the idea of a fair review.

When companies pull stunts like this, reviewers have no choice but to assume the company cannot be trusted. That doesn’t mean you stop reviewing its products, but it does mean devoting a lot of time and energy to making certain the company isn’t trying to cheat people. Sabotaging the review process this way might yield short-term sales benefits, but it’ll produce a long-term decline in trust as people conclude they can no longer rely on performance data. When repeated cheating scandals engulf different divisions of the same company, it starts to look less like the work of a few bad apples and more like a concerted effort to defraud customers by misrepresenting the performance of its SSDs, displays, and smartphones.

* I differ from my colleague Ryan Whitwam on whether or not the Galaxy S22 shenanigans constitute cheating. Because benchmarks are intended to be representative of device performance, any phone that throttles everything but benchmarks is also cheating — it’s just cheating a little more indirectly. I consider any manufacturer-created application that modifies device performance for the sole purpose of changing the implied performance relationship between benchmark and non-benchmark applications to be cheating, regardless of which type of software is being modified.
