How Google’s Night Sight Works, and Why It’s So Good

Reading all the gushing praise for Google’s new Night Sight low-light photography feature for Pixel phones, you’d be forgiven for thinking Google had just invented color film. In fact, night shooting modes aren’t new, and many of the underlying technologies go back years. But Google has done an amazing job of combining its prowess in computational imaging with its unparalleled strength in machine learning to push the capability past anything previously seen in a mobile device. We’ll take a look at the history of multi-image low-light photography, examine how Google likely uses it, and speculate about what AI brings to the party.

The Challenge of Low-Light Photography

Bigger pixels, typically found in larger sensors, are the traditional strategy for addressing low-light noise. Unfortunately, phone camera sensors are tiny, resulting in small photosites (pixels) that perform well in good lighting but fail quickly as light levels decrease.
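
To put rough numbers on that, the light collected per photosite scales roughly with its area. Here’s a back-of-the-envelope Python calculation; the pixel pitches are ballpark figures I’ve assumed for a typical phone sensor and a full-frame DSLR, not any particular camera’s specs.

```python
# Rough illustration of the pixel-size point: collected light scales roughly
# with photosite area. The 1.4 um and 6 um pitches are assumed ballpark
# figures, not real specs for any specific sensor.
phone_pitch_um = 1.4
dslr_pitch_um = 6.0
area_ratio = (dslr_pitch_um / phone_pitch_um) ** 2   # ~18x
print(f"A {dslr_pitch_um} um photosite gathers ~{area_ratio:.0f}x "
      f"the light of a {phone_pitch_um} um one")
```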

That leaves phone camera designers with two options for improving low-light images. The first is to combine multiple images into one, lower-noise version. An early implementation of this in a mobile device accessory was the SuperRAW mode of the DxO ONE add-on for the iPhone, which fused four RAW images to create one improved version. The second is to use clever post-processing (with recent versions often powered by machine learning) to reduce the noise and improve the rendering of the subject. Google’s Night Sight uses both.
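
To get a feel for why fusing frames helps, here’s a minimal NumPy sketch that averages simulated noisy captures of a static scene. It’s purely my own illustration of the general principle, not DxO’s or Google’s pipeline: because the noise in each frame is roughly independent, averaging N frames cuts it by about the square root of N.

```python
# Minimal sketch of multi-frame noise reduction: average several noisy
# captures of the same static scene. This illustrates the general principle
# only, not any vendor's actual pipeline.
import numpy as np

def average_stack(frames):
    """frames: list of HxWx3 arrays of the same static scene."""
    return np.stack([f.astype(np.float32) for f in frames]).mean(axis=0)

rng = np.random.default_rng(0)
clean = np.full((100, 100, 3), 40.0)                        # dim, flat scene
frames = [clean + rng.normal(0, 10, clean.shape) for _ in range(8)]

single = np.std(frames[0] - clean)                          # ~10
stacked = np.std(average_stack(frames) - clean)             # ~10 / sqrt(8) ~ 3.5
print(f"single-frame noise: {single:.1f}, 8-frame average: {stacked:.1f}")
```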

Multi-Image, Single-Capture

By now we’re all used to our phones and cameras combining several images into one, mostly to improve dynamic range. Whether it is a traditional bracketed set of exposures like those used by most companies, or Google’s HDR+, which uses several short-duration images, the result can be a superior final image — if the artifacts caused by fusing multiple images of a moving scene can be minimized. Typically that is done by choosing a base frame that best represents the scene, and then merging useful portions of the other frames into it to enhance the image. Huawei, Google, and others have also used this same approach to create higher-resolution telephoto captures. We’ve recently seen how important choosing the correct base frame is, since Apple has explained its “BeautyGate” snafu as a bug where the wrong base frame was being chosen out of the captured sequence.
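
A toy version of that base-frame-plus-merge idea might look like the sketch below. It’s my own simplification (pick the sharpest frame as the base, then average in only the pixels from other frames that agree with it, so moving subjects don’t ghost); real pipelines such as HDR+ work per tile on raw data and also align the frames first.

```python
# Toy "base frame + merge useful portions" sketch. A simplification for
# illustration: real burst pipelines align frames and merge per tile in the
# raw domain, with far more robust ghost rejection.
import cv2
import numpy as np

def sharpness(img_bgr):
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()      # higher = sharper

def merge_burst(frames, diff_thresh=12.0):
    # 1. Choose the sharpest frame as the base ("best represents the scene").
    base_idx = int(np.argmax([sharpness(f) for f in frames]))
    base = frames[base_idx].astype(np.float32)
    acc = base.copy()
    weight = np.ones(base.shape[:2], np.float32)
    for i, frame in enumerate(frames):
        if i == base_idx:
            continue
        frame = frame.astype(np.float32)
        # 2. Merge only the "useful" pixels: those that agree with the base,
        #    so anything that moved between frames doesn't ghost.
        diff = np.abs(frame - base).mean(axis=2)
        mask = (diff < diff_thresh).astype(np.float32)
        acc += frame * mask[..., None]
        weight += mask
    return (acc / weight[..., None]).clip(0, 255).astype(np.uint8)
```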

So it only makes sense that Google, in essence, combined these uses of multi-image capture to create better low-light images. In doing so, it is building on a series of clever innovations in imaging. It is likely that Marc Levoy’s Android app SeeInTheDark and his 2015 paper on “Extreme imaging using cell phones” were the genesis of this effort. Levoy was a pioneer in computational imaging at Stanford and is now a Distinguished Engineer working on camera technology for Google. SeeInTheDark (a follow-on to his earlier SynthCam iOS app) used a standard phone to accumulate frames, warping each frame to match the accumulated image, and then performing a variety of noise reduction and image enhancement steps to produce a remarkable final low-light image. In 2017 a Google engineer, Florian Kainz, built on some of those concepts to show how a phone could be used to create professional-quality images even in very low light.
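
As a rough illustration of that accumulate-and-warp loop (and definitely not Levoy’s actual code), here’s a sketch that aligns each incoming frame to the running accumulation with OpenCV’s ECC alignment; the findTransformECC signature used here assumes OpenCV 4.x.

```python
# Rough SeeInTheDark-style accumulation sketch: warp each new frame to the
# running accumulation before adding it. ECC alignment and a plain running
# average are my own stand-ins for the app's alignment and noise reduction.
import cv2
import numpy as np

def accumulate(frames):
    acc = frames[0].astype(np.float32)
    n = 1
    for frame in frames[1:]:
        acc_gray = cv2.cvtColor((acc / n).astype(np.uint8), cv2.COLOR_BGR2GRAY)
        frm_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        warp = np.eye(2, 3, dtype=np.float32)
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-4)
        try:
            # Estimate a Euclidean warp aligning this frame to the accumulation
            # (OpenCV 4.x signature, with inputMask=None and gaussFiltSize=5).
            _, warp = cv2.findTransformECC(acc_gray, frm_gray, warp,
                                           cv2.MOTION_EUCLIDEAN, criteria, None, 5)
        except cv2.error:
            continue        # skip frames too dark or blurred to align
        h, w = frm_gray.shape
        aligned = cv2.warpAffine(frame.astype(np.float32), warp, (w, h),
                                 flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
        acc += aligned
        n += 1
    return (acc / n).clip(0, 255).astype(np.uint8)
```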

Stacking Multiple Low-Light Images Is a Well-Known Technique

Photographers have been stacking multiple frames together to improve low-light performance since the beginning of digital photography (and I suspect some even did it with film). In my case, I started off doing it by hand, and later used a nifty tool called Image Stacker. Since early DSLRs were useless at high ISOs, the only way to get great night shots was to take several frames and stack them. Some classic shots, like star trails, were initially best captured that way. These days the practice isn’t very common with DSLR and mirrorless cameras, as current models have excellent native high-ISO and long-exposure noise performance. I can leave the shutter open on my Nikon D850 for 10 or 20 minutes and still get some very usable shots.
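
For anyone curious, the manual version of this is only a few lines today. In the sketch below the file names are placeholders for a folder of tripod-mounted exposures; mean-stacking reduces noise, while max-stacking (a “lighten” blend) is the classic way to build star-trail composites from many short exposures.

```python
# Manual frame stacking in a few lines. The file names are placeholders for a
# folder of tripod-mounted night exposures of the same scene.
import cv2
import numpy as np

frames = [cv2.imread(f"frame_{i:03d}.jpg").astype(np.float32) for i in range(60)]

mean_stack = np.mean(frames, axis=0)    # averaging: lower-noise night scene
trail_stack = np.max(frames, axis=0)    # "lighten" blend: classic star trails

cv2.imwrite("mean_stack.jpg", mean_stack.clip(0, 255).astype(np.uint8))
cv2.imwrite("star_trails.jpg", trail_stack.clip(0, 255).astype(np.uint8))
```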

So it makes sense that phone makers would follow suit, using similar technology. However, unlike patient photographers shooting star trails with a tripod, the average phone user wants instant gratification and will almost never use a tripod. So the phone has the additional challenges of making the low-light capture happen fairly quickly, and also minimizing blur from camera shake — and ideally even from subject motion. Even the optical image stabilization found on many high-end phones has its limits.

I’m not positive which phone maker first employed multiple-image capture to improve low light, but the first one I used was the Huawei Mate 10 Pro. Its Night Shot mode takes a series of images over 4-5 seconds, then fuses them into one final photo. Since Huawei leaves the real-time preview active, we can see that it uses several different exposures during that time, essentially creating several bracketed images.
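
One generic way to fuse a bracketed burst like that is exposure fusion (Mertens et al.), which OpenCV provides out of the box. I’m not claiming this is what Huawei actually does; it’s just a sketch of the technique, with placeholder file names for the differently exposed frames.

```python
# Exposure fusion of a bracketed burst (Mertens et al.), as implemented in
# OpenCV. Illustrative only; the file names are placeholders.
import cv2

exposures = [cv2.imread(p) for p in ("dark.jpg", "mid.jpg", "bright.jpg")]

fused = cv2.createMergeMertens().process(exposures)   # float image, roughly [0, 1]
cv2.imwrite("night_fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```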

In his paper on the original HDR+, Levoy makes the case that multiple exposures are harder to align (which is why HDR+ uses many identically exposed frames), so it is likely that Google’s Night Sight, like SeeInTheDark, also uses a series of frames with identical exposures. However, Google (at least in the pre-release version of the app) doesn’t leave the real-time image on the phone screen, so that’s just speculation on my part. Samsung has taken a different tack in the Galaxy S9 and S9+, with a dual-aperture main lens that can switch to an impressive f/1.5 in low light to improve image quality.
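
The benefit of that wider aperture is easy to quantify, since the light reaching the sensor scales with the inverse square of the f-number; the f/2.4 value below is the S9’s narrower aperture setting, used for comparison.

```python
# Light gathered scales with the inverse square of the f-number.
from math import log2

wide, narrow = 1.5, 2.4                    # Galaxy S9 dual-aperture settings
ratio = (narrow / wide) ** 2               # ~2.56x more light at f/1.5
print(f"f/{wide} gathers {ratio:.2f}x the light of f/{narrow} "
      f"({log2(ratio):.2f} stops)")
```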

Comparing Huawei and Google’s Low-Light Camera Capabilities

I don’t have a Pixel 3 or Mate 20 yet, but I do have access to a Mate 10 Pro with Night Shot and a Pixel 2 with a pre-release version of Night Sight, so I decided to compare for myself. Over a series of tests, Google clearly outperformed Huawei, with lower noise and sharper images. Here is one test sequence to illustrate:

Painting in daylight with the Huawei Mate 10 Pro.

Painting in daylight with the Google Pixel 2.

Without Night Shot, here’s what you get photographing the same scene in the near dark with the Mate 10 Pro. It chose a 6-second shutter time, which shows in the blur.

A version shot in the near dark using Night Shot on the Huawei Mate 10 Pro. EXIF data shows ISO 3200 and 3 seconds total exposure time.

The same scene using (pre-release) Night Sight on a Pixel 2. More accurate color and slightly sharper. EXIF data shows ISO 5962 and a 1/4-second shutter time (presumably for each of many frames). Both images were re-compressed to a smaller overall size for use on the web.

Is Machine Learning Part of Night Sight’s Secret Sauce?

Given how long image stacking has been around, and how many camera and phone makers have employed some version of it, it’s fair to ask why Google’s Night Sight seems to be so much better than anything else out there. First, even the technology in Levoy’s original paper is very complex, so the years Google has had to continue to improve on it should give them a decent head start on anyone else. But Google has also said that Night Sight uses machine learning to decide the proper colors for a scene based on content.

That’s pretty cool-sounding, but also fairly vague. It isn’t clear whether Night Sight is segmenting individual objects so that it knows they should be a consistent color, coloring well-known objects appropriately, or globally recognizing a type of scene, the way intelligent autoexposure algorithms do, and deciding how scenes like that should generally look (green foliage, white snow, and blue skies, for example). I’m sure once the final version rolls out and photographers get more experience with the capability, we’ll learn more about this use of machine learning.
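
To make the third possibility concrete, here’s a purely speculative sketch of what scene-level color adjustment could look like. The scene classes, white-balance gains, and classifier stub are invented placeholders for illustration, not anything Google has described.

```python
# Speculative sketch: classify the overall scene, then nudge white balance
# toward gains learned for that scene type. All names and values are invented.
import numpy as np

LEARNED_GAINS = {                 # hypothetical per-scene RGB white-balance gains
    "foliage":    (0.95, 1.00, 1.10),
    "snow":       (1.00, 1.00, 1.00),
    "city_night": (1.10, 1.00, 0.85),
}

def classify_scene(image):
    """Stand-in for a learned scene classifier; returns one of the keys above."""
    return "city_night"

def scene_aware_white_balance(image):
    gains = np.array(LEARNED_GAINS[classify_scene(image)], dtype=np.float32)
    return np.clip(image.astype(np.float32) * gains, 0, 255).astype(np.uint8)
```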

Another place where machine learning might have come in handy is the initial calculation of exposure. The core HDR+ technology underlying Night Sight, as documented in Google’s SIGGRAPH paper, relies on a hand-labeled dataset of thousands of sample scenes to help it determine the correct exposure to use. That would seem like an area where machine learning could result in some improvements, particularly in extending the exposure calculation to very-low-light conditions where the objects in the scene are noisy and hard to discern. Google has also been experimenting with using neural networks to enhance phone image quality, so it wouldn’t be surprising to start to see some of those techniques being deployed.
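
Again, this is speculation on my part: one simple way a learned model could extend exposure selection into very low light is to regress an exposure value from a coarse luminance histogram, trained on hand-labeled scenes much like the dataset the HDR+ paper describes. The features, model choice, and training data below are invented for illustration.

```python
# Speculative sketch: regress a log2 exposure value from a coarse luminance
# histogram, trained on hand-labeled example scenes. Features, model, and data
# are illustrative placeholders.
import numpy as np
from sklearn.linear_model import Ridge

def luma_histogram(image, bins=32):
    luma = image.astype(np.float32).mean(axis=2)
    hist, _ = np.histogram(luma, bins=bins, range=(0, 255), density=True)
    return hist

def fit_exposure_model(train_images, train_log2_exposures):
    """train_images: list of HxWx3 arrays; train_log2_exposures: hand labels."""
    X = np.stack([luma_histogram(im) for im in train_images])
    return Ridge(alpha=1.0).fit(X, train_log2_exposures)

def predict_log2_exposure(model, image):
    return float(model.predict(luma_histogram(image)[None, :])[0])
```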

Whatever combination of these techniques Google has used, the result is certainly the best low-light camera mode on the market today. It will be interesting to see, as the Huawei Mate 20 family rolls out, whether Huawei has been able to push its own Night Shot capability closer to what Google has done.
