Machine Learning Works Wonders On Low-Light Images

It’s no secret that smartphone SoCs don’t scale as well as they once did, and the overall rate of performance improvement in phones and tablets has slowed dramatically. One area where companies are still delivering significant improvements, however, is cameras. While this obviously varies by manufacturer, companies like Samsung, LG, and Apple continue to deliver year-on-year improvements, including higher megapixel ratings, multiple cameras, improved sensors, and features like optical image stabilization. There’s still a gap between DSLR and phone cameras, but it’s been narrowing for years. And if recent work from Intel and the University of Illinois Urbana-Champaign is any indication, machine learning can solve a problem that bedevils phone cameras to this day: low-light shots.

Don’t get me wrong, the low-light capabilities of modern smartphones are excellent compared with where we were just a few short years ago. But this is the sort of area where the difference between phones and a DSLR becomes apparent. The gap between the two types of devices when shooting static shots outdoors is much smaller than the difference you’ll see when shooting in low light. The team built a machine learning engine by creating a dataset of paired low-light images: short-exposure shots, along with corresponding long-exposure shots used as reference. The report states:

Using the presented dataset, we develop a pipeline for processing low-light images, based on end-to-end training of a fully-convolutional network. The network operates directly on raw sensor data and replaces much of the traditional image processing pipeline, which tends to perform poorly on such data.
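Operating "directly on raw sensor data" means the network ingests the camera's Bayer mosaic before demosaicing, denoising, or tone mapping. As a rough illustration of what that front end can look like, here is a minimal sketch of packing a Bayer raw frame into a four-channel tensor, subtracting the black level, and amplifying the short exposure to match the reference brightness. The function name, black level, bit depth, and amplification ratio are illustrative assumptions, not the paper's exact constants.

```python
import numpy as np

def pack_bayer(raw, black_level=512, amplification=100.0):
    """Pack an H x W Bayer-mosaic raw frame into an H/2 x W/2 x 4
    tensor (one channel per RGGB site), subtract the sensor black
    level, normalize assuming a 14-bit sensor, and brighten the
    short exposure by a fixed amplification ratio.

    black_level, the 14-bit range, and amplification are
    hypothetical values for illustration.
    """
    raw = raw.astype(np.float32)
    # Remove the black level and normalize to [0, 1]
    raw = np.maximum(raw - black_level, 0) / (16383 - black_level)
    h, w = raw.shape
    # Gather the four RGGB sites of each 2x2 Bayer block into channels
    packed = np.stack([raw[0:h:2, 0:w:2],   # R
                       raw[0:h:2, 1:w:2],   # G
                       raw[1:h:2, 1:w:2],   # B
                       raw[1:h:2, 0:w:2]],  # G
                      axis=-1)
    # Scale the dark short exposure toward long-exposure brightness
    return packed * amplification

# Example: a nearly black 4x4 raw frame just above the black level
frame = np.full((4, 4), 600, dtype=np.uint16)
out = pack_bayer(frame)
print(out.shape)  # (2, 2, 4)
```

A fully convolutional network then maps this packed tensor straight to an output image, replacing the hand-tuned demosaic/denoise/tone-map stages that, as the authors note, tend to perform poorly on such dark raw data.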

The team has put together a video explaining and demonstrating how their technique works.

We’d recommend visiting the site if you want to see high-resolution before-and-after images, but the base images being worked with aren’t just “low light” — the original shots are, in some cases, almost entirely black to the naked eye. Existing image software struggles to recover much from this kind of input, even with professional processing.

While there’s still some inevitable blur, if you click through and look at either the paper or the high-resolution default shots, the results from Intel and Urbana-Champaign are an order of magnitude better than anything we’ve seen before. And with smartphone vendors jockeying to build machine intelligence capabilities into more devices, it’s entirely possible that we’ll see more products bring these kinds of capabilities to phones and make them available to ordinary customers. I, for one, welcome the idea of a smarter camera — preferably one able to correct for my laughably terrible photography skills.
