Samsung Caught Cheating Customers Again, But On TVs This Time

Samsung has been caught cheating customers again in what’s becoming an irritatingly common occurrence. It’s only been a few months since the Korean manufacturer was caught cheating on the Galaxy S22’s benchmarks* by throttling the phone in virtually every application that isn’t a performance test. Now it’s been caught gaming TV reviews by programming its TVs to improve their measured performance when they detect certain test patterns.

Every Company Wants to Look Good. Not Every Company Cheats

It’s easy to take issues like this and paint every company with the same brush, but it would be a mistake to do so. While every company wants to put its best foot forward, different firms choose very different ways to do that.

At the benign end of the scale, you have companies picking benchmarks (or benchmark settings) that show their products in the best light. Behavior gets scummier from that point in various ways and permutations, including non-identical hardware configurations between systems, different compiler settings, optimized binaries, and plain old non-representative cherry-picking.

Then, you have what Samsung is doing, which appears to involve pretending you’ve bought an entirely different television.

Samsung has apparently programmed at least two television sets, the S95B and the QN95B, to recognize when a reviewer is running test patterns on them. Televisions are typically tested, calibrated, and reviewed with test patterns that occupy ten percent of the screen. Coincidentally, Samsung has reportedly programmed these sets to behave entirely differently when exactly ten percent of the panel is in use. FlatPanelsHD detected this behavior after switching to a nine percent test window and observing very different brightness and color accuracy from the exact same television.
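For context, the window size refers to area, not edge length: a “10 percent window” is a bright patch covering ten percent of the panel’s pixels, centered on an otherwise black screen. Here’s a minimal sketch of that geometry in Python, assuming a 4K panel and using Pillow; real HDR measurement patterns are 10-bit HDR10 signals, so an 8-bit grayscale image like this only illustrates the shape of the pattern, and the filenames are invented:

```python
from PIL import Image

def window_pattern(width=3840, height=2160, area_pct=10, level=255):
    """Centered bright rectangle covering area_pct percent of the panel,
    on a black field: the style of pattern used to measure peak HDR output."""
    # Scale both dimensions by sqrt(fraction) so the rectangle's *area*,
    # not its edge length, matches the requested percentage.
    scale = (area_pct / 100) ** 0.5
    w, h = round(width * scale), round(height * scale)
    img = Image.new("L", (width, height), 0)        # black background
    patch = Image.new("L", (w, h), level)           # bright window
    img.paste(patch, ((width - w) // 2, (height - h) // 2))
    return img

window_pattern(area_pct=10).save("window_10pct.png")
window_pattern(area_pct=9).save("window_9pct.png")  # FlatPanelsHD's workaround
```

Moving from ten percent to nine percent barely changes how much of the panel is lit, which is exactly why it makes such a useful probe: an honest TV should measure almost identically on both.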

Check out the difference in the TV’s measured HDR performance when using a 10-percent window versus a 9-percent window:

[Charts: FlatPanelsHD’s HDR measurements of the QN95B with a 10-percent window versus a 9-percent window]

Measured with the ten percent window, the QN95B looks like a much better panel than it actually is. Delta E is a metric that measures the difference between a color as displayed and that color as defined by the output standard; lower is better. Test the QN95B with a ten percent window, and its Delta E rating is 6.1. Test it with a nine percent window, and it’s 26.8. A Delta E of 6.1 is generally considered “perceptible at a glance” according to this guide by Zachary Schuessler, while a Delta E of 26.8 falls under “colors are more similar than opposite.” What this means for our purposes is that the QN95B is far less accurate than it pretends to be, and its claimed peak brightness is nearly 80 percent higher than what the panel actually sustains (2,300 nits versus 1,300 nits, as discussed below).
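Delta E comes in several variants; the simplest, CIE76, is just the Euclidean distance between two colors in CIELAB space. Here’s a quick sketch of that math; the Lab values are invented for illustration and are not FlatPanelsHD’s measurements:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 Delta E: Euclidean distance between two CIELAB colors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Invented values for illustration: a target color vs. what a badly
# calibrated panel might actually display.
reference = (53.2, 80.1, 67.2)   # L*, a*, b* of the target color
measured  = (60.0, 85.0, 90.0)   # hypothetical on-screen result
print(f"Delta E: {delta_e_cie76(reference, measured):.1f}")  # ~24.3
```

Newer formulas like CIEDE2000 weight the terms to better match human vision, but the intuition is the same: the bigger the number, the further the displayed color sits from what the content creator intended.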

According to FlatPanelsHD, the QN95B will boost its peak brightness from 1,300 nits (normal) to 2,300 nits (10 percent mode). This behavior was only observed when the TV was tested with a window that occupied exactly ten percent of the screen. Set a nine percent window, and brightness doesn’t exceed 1,300 nits. Neither does any normal content from any source, including HDR video, YouTube video, and gaming. There does seem to be some evidence that this cheating can throw off the display of certain content, however, as discussed below.
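FlatPanelsHD’s nine percent workaround generalizes into a simple methodology: sweep the window size and look for a reading that spikes at exactly ten percent. Here’s a rough sketch of that loop; `measure_peak_nits` is a stand-in for whatever meter-and-software combination a reviewer actually uses, not a real API, and the threshold is an arbitrary illustration:

```python
# Hypothetical sketch: sweep window sizes and flag a brightness outlier.
# measure_peak_nits() is a placeholder for a reviewer's actual meter
# integration (e.g., a colorimeter driven by calibration software);
# it is not a real API.

def detect_window_special_casing(measure_peak_nits, sizes=(8, 9, 10, 11, 12)):
    readings = {pct: measure_peak_nits(pct) for pct in sizes}
    for pct in sorted(readings):
        print(f"{pct:>3}% window: {readings[pct]:.0f} nits")
    # An honest panel changes brightness smoothly with window size.
    # A single size that towers over its neighbors suggests the TV is
    # treating that exact pattern specially.
    baseline = sorted(readings.values())[len(readings) // 2]  # median reading
    return {pct: nits for pct, nits in readings.items() if nits > 1.3 * baseline}

# Fake readings mimicking FlatPanelsHD's QN95B observations:
fake = {8: 1290, 9: 1300, 10: 2300, 11: 1295, 12: 1280}
print(detect_window_special_casing(lambda pct: fake[pct]))  # -> {10: 2300}
```

The point isn’t the specific threshold; it’s that any discontinuity pinned to the industry-standard test size is a red flag that the display is recognizing the test rather than rendering the signal.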

TV Performance Measurement is Hard Enough Without This

Benchmarks are commonly dinged for being difficult to translate into real-world performance, but reading a bar graph to see which GPU is faster is much easier than explaining to someone why one collection of dots inside a brightly colored triangle (a color gamut chart) is better or worse than a slightly different collection of dots.

Benchmarking televisions and monitors is uniquely problematic because there is no way to show the reader or viewer exactly what panel output actually looks like. It’s not impossible to capture differences between two TVs with a video camera, but the image won’t be the same as what you’d see in person. Manipulating the results reviewers see when running tests versus regular content makes it even harder to tell people what they should expect. The only reason Samsung is doing this is so reviewers will crow about the brightness and accuracy of the underlying panel. Actual content doesn’t benefit from these capabilities; it may actually be harmed.

Samsung likely believes it can get away with this because very few people calibrate their displays with aftermarket hardware. Even if you did, Samsung would say that it doesn’t guarantee calibration out of the box and that every display will look slightly different. Most people don’t buy ten televisions to confirm that color accuracy and brightness are systemically worse than what reviews claim. Samsung knows that, and it’s taking advantage of it.

This ten percent window detection can apparently impact content reproduction. Here’s a shot from FlatPanelsHD comparing the QN95B to Sony’s X95K. According to the review, both TVs were set to their most accurate HDR modes, yet the Samsung is far brighter than the Sony:

Samsung on the left, Sony on the right. All picture enhancements are disabled on both TVs. The Samsung makes this scene look like it takes place during the day.

Why? According to FlatPanelsHD: “Last year, we ascribed this to Samsung’s dynamic tone-mapping, which is technically correct, but the more precise explanation is that it is a result of Samsung’s “AI” processor detecting our and others’ 10% window test patterns used for measurements and calibration to change and mislead about the TV’s actual picture output, as discussed earlier.”

Their review concludes: “like last year’s QN95A, QN95B has a significantly overbrightened picture in all of its HDR picture modes, a fact that Samsung’s “AI” video processor tries to hide by detecting the pattern used by reviewers/calibrators and changing its picture output during measurements only to return them to other values after the measurements have been carried out – that’s deception and cheating.”

Samsung’s Response

FlatPanelsHD has already reached out to Samsung, which provided the following response: “To provide a more dynamic viewing experience for the consumers, Samsung will provide a software update that ensures consistent brightness of HDR contents across a wider range of window size beyond the industry standard.” This could be read to indicate that Samsung will adjust its cheating software to be more effective rather than less. The promise of consistent brightness across window sizes “beyond the industry standard” makes it sound as though this is some kind of service the company provides.

Samsung’s response is inadequate to the situation at hand. This is the third time in less than a year that a different division of the company has been caught falsifying product data or designing systems deliberately intended to obfuscate actual product performance. This is fundamentally consumer-hostile and it makes a mockery of the idea of a fair review.

When companies pull stunts like this, reviewers have no choice but to assume the company cannot be trusted. That doesn’t mean you stop reviewing its products, but it does mean devoting a lot of time and energy to making certain the company isn’t trying to cheat people. Sabotaging the review process this way might yield short-term sales benefits, but it will lead to a long-term decline in trust once people feel they can no longer rely on performance data. When repeated cheating scandals engulf different sections of the company, it starts to look less like the work of a few bad apples and more like a concerted effort to defraud customers by misrepresenting the performance of its SSDs, displays, and smartphones.

* I differ from my colleague Ryan Whitwam on whether or not the Galaxy S22 shenanigans constitute cheating. Because benchmarks are intended to be representative of device performance, any phone that throttles everything but benchmarks is also cheating — it’s just cheating a little more indirectly. I consider any manufacturer-created application that modifies device performance for the sole purpose of changing the implied performance relationship between benchmark and non-benchmark applications to be cheating, regardless of which type of software is being modified.
