Machine Learning Works Wonders On Low-Light Images

It’s no secret that smartphone SoCs don’t scale as well as they once did, and the overall rate of performance improvement in phones and tablets has slowed dramatically. One area where companies are still delivering significant improvements, however, is cameras. While this obviously varies by manufacturer, companies like Samsung, LG, and Apple continue to deliver year-on-year improvements, including higher megapixel counts, multiple cameras, improved sensors, and features like optical image stabilization. There’s still a gap between DSLRs and phone cameras, but it’s been narrowing for years. And if recent work from Intel and the University of Illinois Urbana-Champaign is any indication, machine learning can solve a problem that bedevils phone cameras to this day: low-light shots.

Don’t get me wrong, the low-light capabilities of modern smartphones are excellent compared with where we were just a few short years ago. But this is the sort of area where the difference between a phone and a DSLR becomes apparent: the gap between the two types of devices is much smaller when shooting static scenes outdoors than when shooting in low light. The team built its machine learning model by assembling a dataset of paired low-light images, with each scene captured at both a short exposure and a long exposure, the latter serving as the ground-truth reference. The paper states:

Using the presented dataset, we develop a pipeline for processing low-light images, based on end-to-end training of a fully-convolutional network. The network operates directly on raw sensor data and replaces much of the traditional image processing pipeline, which tends to perform poorly on such data.
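
To make that concrete, here is a minimal, hypothetical sketch of the recipe in PyTorch. This is not the authors’ code: their network is a much deeper U-Net, and the layer sizes, the TinySeeInDark name, the tensor shapes, and the amplification ratio below are all illustrative assumptions. What the sketch preserves is the core idea the paper describes: a fully-convolutional network takes amplified raw sensor data as input and is trained end-to-end against a long-exposure reference.

# A minimal sketch of the approach, not the authors' implementation.
# Assumptions: PyTorch, a toy three-layer network standing in for the
# paper's U-Net, random tensors standing in for real exposure pairs.
import torch
import torch.nn as nn

class TinySeeInDark(nn.Module):
    """Toy fully-convolutional stand-in for the paper's network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1),   # 4 channels: packed Bayer raw (R, G, G, B)
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 12, 3, padding=1),  # 12 channels feed the 2x pixel shuffle below
        )
        self.upsample = nn.PixelShuffle(2)    # recover full resolution from the packed input

    def forward(self, packed_raw, amplification):
        # Brighten the raw input by the exposure ratio before the network
        # sees it; the network learns denoising and color rendering.
        return self.upsample(self.net(packed_raw * amplification))

model = TinySeeInDark()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Hypothetical training pair: a packed short-exposure raw frame and the
# long-exposure RGB reference it should reproduce.
short_raw = torch.rand(1, 4, 256, 256)  # packed Bayer data, half spatial resolution
long_ref = torch.rand(1, 3, 512, 512)   # ground-truth long exposure, full resolution
ratio = 300.0                           # illustrative short-to-long exposure ratio

prediction = model(short_raw, ratio)    # one end-to-end training step
loss = loss_fn(prediction, long_ref)
loss.backward()
optimizer.step()

The notable design choice is that the network replaces the traditional processing pipeline wholesale: demosaicing, denoising, and color rendering are all learned from the paired exposures rather than hand-tuned.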

The team has put together a video explaining and demonstrating how the technique works, shown below:

We’d recommend visiting the site if you want to see high-resolution before-and-after images, but the base images being worked with aren’t just “low light”: in some cases, the original shots are almost entirely black to the naked eye. Existing image software struggles to recover much from these kinds of images, even with professional processing.

While there’s still some inevitable blur, if you click through and look at either the paper or the high-resolution sample shots, the results from Intel and Urbana-Champaign are an order of magnitude better than anything we’ve seen before. And with smartphone vendors jockeying to build machine intelligence capabilities into more devices, it’s entirely possible that we’ll see these kinds of capabilities come to market in phones and become available to ordinary customers. I, for one, welcome the idea of a smarter camera, preferably one able to correct for my laughably terrible photography skills.
