DxOMark Shows How Far Phone Cameras Have Come
In the six years DxOMark has been testing phone cameras, it has seen plenty of big improvements. In a new white paper, the company details just how dramatically new technologies have improved phone image quality. Let's take a closer look.
Overall Scores Have Increased Dramatically Over the Years
In 2012, the best smartphone camera available was the Nokia 808 PureView. It amassed what at the time were very high Photo and Video scores in the 60s (as retested using the 2017 protocols). Fast forward to today, when mainline flagship devices from Google, Huawei, Samsung, and Apple are all within a hair of each other, and of cracking the 100 mark.
Overcoming Physical Limits: Hardware Tricks, Clever Software
The drive to create thinner phones with more features has meant that phone sensors and lenses remain as tiny as ever (today's sensors are, in fact, smaller than the one found in the 808 PureView). But by innovating in a number of areas of hardware and image processing, phone makers are getting much better images out of their tiny cameras. DxOMark's paper dives into the improvements in several areas, including Noise, Exposure, HDR, Autofocus, Stabilization, and Zoom. We'll break down the progress in some of them for you.
Autofocus: Smarter Hardware Means Faster Focusing
While many factors, including faster processors, have contributed to faster autofocus, some of the most important are clever new hardware features. One of the first breakthroughs was phase detection using dedicated areas of the sensor, the same technique that has dramatically improved autofocus performance in many mirrorless cameras. Prior to phase detection, smartphones and mirrorless cameras alike relied on the much slower process of analyzing frames and attempting to maximize contrast in software. Google coupled phase detection with laser time-of-flight depth sensing to achieve good results in its original Pixel, for example.
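To see why contrast detection is slow, consider this minimal sketch of the approach, with hypothetical set_lens_position and capture_frame hooks standing in for real camera-driver APIs: every lens position tried costs a full frame capture plus a sharpness measurement.

```python
# A minimal contrast-detection autofocus sketch: step the lens through its
# range, score each frame's sharpness, and settle on the sharpest position.
# set_lens_position() and capture_frame() are hypothetical driver hooks.
import cv2

def sharpness(gray_frame):
    # Variance of the Laplacian: edge content rises as the image comes into focus.
    return cv2.Laplacian(gray_frame, cv2.CV_64F).var()

def contrast_autofocus(lens_positions):
    best_pos, best_score = None, -1.0
    for pos in lens_positions:
        set_lens_position(pos)    # hypothetical actuator call
        frame = capture_frame()   # hypothetical grayscale capture
        score = sharpness(frame)
        if score > best_score:
            best_pos, best_score = pos, score
    set_lens_position(best_pos)   # settle on the sharpest position found
```

Phase detection avoids this search entirely: a single measurement tells the lens which way to move and roughly how far.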
Pushing the technology further, the Pixel 2 and a few other Android phones incorporate a dual-pixel sensor, where the left and right halves of each pixel can be read out separately. The relationship between the two readouts can be used as a form of phase detection. Unlike traditional phase detection, however, every pixel on the sensor can take part, so you essentially have an unlimited number of focus sensors. Apple doesn't have access to dual-pixel sensors, which helps explain its lower autofocus scores. Google goes even further than its competitors and uses the dual-pixel information to estimate depth, allowing it to produce convincing Portrait mode images even with a single camera.
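As a toy illustration of the dual-pixel idea (not any vendor's actual pipeline), the snippet below estimates the horizontal shift between the left and right sub-images for one small window: a shift near zero means that window is in focus, and the sign tells the lens which way to move.

```python
# Toy dual-pixel phase detection: find the horizontal shift that best aligns
# the left and right sub-images of one window. The brute-force correlation
# search here is a simplification for illustration only.
import numpy as np

def window_disparity(left, right, max_shift=8):
    """left, right: zero-mean float arrays for one window; returns shift in pixels."""
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        score = np.sum(left * np.roll(right, s, axis=1))  # correlation score
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift  # ~0 when in focus; sign indicates focus direction
```

Run over a grid of windows, the same disparities double as a coarse depth map, which is essentially what Google exploits for single-camera Portrait mode.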
Lower Noise Means Better Images
Small sensors mean noise. There isn’t any avoiding it, given the nature of semiconductor electronics and the physics of light. But there are a lot of clever techniques phone makers have started using to work around it.
The first is better noise-reduction algorithms. The problem with reducing noise is that you also wind up smoothing out textures in the image and losing detail. One way around this is Temporal Noise Reduction (TNR), which blends data from several frames to average out the noise; averaging N aligned frames reduces random noise by roughly a factor of √N. Faster processors let you do that without introducing visible artifacts, running advanced image-processing algorithms to accurately align the frames and decide what to include in the final image. HDR+ on the Pixel 2, for example, uses up to 10 frames captured at 30fps to create a final image.
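As a rough sketch of the idea (and not Google's actual HDR+ implementation), the snippet below aligns a burst to its first frame using simple translation estimates, then averages; real pipelines use far more robust alignment and per-pixel merge decisions.

```python
# Toy temporal noise reduction: align each frame of a burst to the first
# frame via phase correlation, then average. Averaging N frames reduces
# random noise by roughly sqrt(N).
import cv2
import numpy as np

def merge_burst(frames):
    """frames: list of same-sized grayscale float32 arrays."""
    ref = frames[0]
    acc = ref.copy()
    for frame in frames[1:]:
        # Estimate the (x, y) translation between this frame and the reference.
        (dx, dy), _response = cv2.phaseCorrelate(ref, frame)
        # Warp the frame back into alignment before accumulating it.
        warp = np.float32([[1, 0, -dx], [0, 1, -dy]])
        aligned = cv2.warpAffine(frame, warp, (frame.shape[1], frame.shape[0]))
        acc += aligned
    return acc / len(frames)
```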
Additional techniques for reducing noise include better stabilization, which permits longer shutter speeds, and wider lens apertures, which collect more light in the same amount of time. In multi-camera devices, some configurations can combine data from both cameras to produce a single noise-reduced image.
Smart Auto Exposure and HDR Come to Phones
Because the dynamic range of the small sensors in phones is quite limited, vendors have also started capturing multiple images in a burst and using them to increase the apparent dynamic range of the camera. Exposure strategy and HDR are closely linked here, because different companies use different capture algorithms for HDR scenes. Most use a traditional bracketing technique, combining an overexposed, an underexposed, and a reference image, then tone mapping the aligned result. Google uses a novel technique of capturing all the frames at the same, slightly underexposed setting and then combining them with its HDR+ technology.
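Here's a minimal sketch of the traditional bracket-and-tone-map path using OpenCV's built-in Debevec merge; the filenames and exposure times are placeholders, and the frames are assumed to be aligned already (real pipelines also deghost moving subjects).

```python
# Classic HDR bracketing: merge under-, reference-, and overexposed frames
# into a radiance map, then tone map it back to displayable range.
import cv2
import numpy as np

# Placeholder filenames; the frames must be 8-bit and already aligned.
frames = [cv2.imread(p) for p in ("under.jpg", "ref.jpg", "over.jpg")]
times = np.array([1 / 500, 1 / 125, 1 / 30], dtype=np.float32)  # shutter speeds (s)

hdr = cv2.createMergeDebevec().process(frames, times)   # float radiance map
ldr = cv2.createTonemap(gamma=2.2).process(hdr)         # compress to 0..1
cv2.imwrite("result.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```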
Zoom Is Happening, But Still an Achilles' Heel
Talk to just about any phone camera designer and they'll lament that the limits of physics are particularly troublesome when it comes to adding zoom to phone cameras. A dedicated second camera can get from wide-angle to a more normal focal length, but there isn't enough depth (z height) in a modern phone for longer lenses. Phones are so thin that even the folded optics used by startup Light in its L16 would be too thick. The holy grail may be diffractive optics, but so far the image quality isn't there.
Yet even here, current cameras are capable of greatly improved results. Dual cameras allow either a dedicated telephoto lens, as in some models from Apple, Google, and others, or the combination of color and monochrome cameras for more detail, as implemented by Huawei in its Mate 10 Pro.
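As a toy sketch of the mono-plus-color idea (the general technique, not Huawei's actual pipeline), you can keep the chrominance from the color camera while taking the luminance from the sharper, cleaner monochrome frame. It assumes the two frames are already perfectly registered and the same resolution, which is the hard part in practice.

```python
# Toy mono + color fusion: chrominance from the color camera, luminance
# detail from the monochrome camera. Filenames are placeholders, and the
# frames are assumed registered and equally sized.
import cv2

color = cv2.imread("color.jpg")                       # hypothetical BGR frame
mono = cv2.imread("mono.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical mono frame

ycrcb = cv2.cvtColor(color, cv2.COLOR_BGR2YCrCb)
ycrcb[:, :, 0] = mono                                 # swap in the mono luma
fused = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
cv2.imwrite("fused.jpg", fused)
```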
New Technologies Mean New Benchmarks
Image quality improvements haven’t simply driven up camera benchmark scores; they’ve required a whole new set of benchmarks. In September, DxOMark updated its original 2012 suite of mobile camera tests to a 2017 version, which features much more demanding tests for low light and images with motion, as well as additional tests to evaluate zoom and depth effects (bokeh). Since the scores aren’t directly comparable, DxOMark retested some earlier phones for comparison, and it is those retest results that are shown in the charts we’ve used in this article.
Given the rapid pace of technology adoption in phone cameras, it's likely this won't be the last time the benchmarks need to be updated to reflect additional features and use cases. For a fun look at the process DxOMark uses to measure camera image quality, the company has posted a short video.
Images courtesy of DxOMark Image Labs. You can read the entire white paper here.
Disclaimer: I work extensively with DxOMark, and was involved in elements of the launch of the new DxOMark Mobile website.