Machine Learning Works Wonders On Low-Light Images

It’s no secret that smartphone SoCs don’t scale as well as they once did, and the overall rate of performance improvement in phones and tablets has slowed dramatically. One area where companies are still delivering significant improvements, however, is cameras. While this obviously varies by manufacturer, companies like Samsung, LG, and Apple continue to deliver year-on-year improvements, including higher megapixel counts, multiple cameras, improved sensors, and features like optical image stabilization. There’s still a gap between DSLRs and phone cameras, but it’s been narrowing for years. And if recent work from Intel and the University of Illinois Urbana-Champaign is any indication, machine learning can solve a problem that bedevils phone cameras to this day: low-light shots.

Don’t get me wrong, the low-light capabilities of modern smartphones are excellent compared with where we were just a few short years ago. But this is exactly the sort of area where the difference between a phone and a DSLR becomes apparent: the gap between the two devices when shooting static scenes outdoors is much smaller than the gap you’ll see when shooting in low light. To attack the problem, the team built a machine learning model by assembling a dataset of paired low-light images, with short-exposure shots as inputs and corresponding long-exposure shots as references. The report states:

Using the presented dataset, we develop a pipeline for processing low-light images, based on end-to-end training of a fully-convolutional network. The network operates directly on raw sensor data and replaces much of the traditional image processing pipeline, which tends to perform poorly on such data.
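To make that a bit more concrete, the general idea is to pack the raw Bayer data into color channels, amplify it by the ratio between the reference and actual exposure times, and train a fully convolutional network end to end against the long-exposure image. The sketch below is a hypothetical, minimal PyTorch stand-in for that pipeline, not the authors’ actual architecture, and the tensors are random placeholders rather than real sensor data.

```python
# Minimal sketch of an end-to-end low-light pipeline in the spirit of the paper:
# pack the raw Bayer mosaic into channels, amplify it, and let a small fully
# convolutional network map it directly to an RGB image. The network is a toy
# encoder, not the model described in the paper.
import torch
import torch.nn as nn

def pack_bayer(raw: torch.Tensor) -> torch.Tensor:
    """Pack an (H, W) Bayer mosaic into a (4, H/2, W/2) tensor, one channel per color site."""
    return torch.stack([
        raw[0::2, 0::2],  # R
        raw[0::2, 1::2],  # G1
        raw[1::2, 0::2],  # G2
        raw[1::2, 1::2],  # B
    ], dim=0)

class TinySeeInDark(nn.Module):
    """Toy fully convolutional net: 4-channel packed raw in, full-resolution RGB out."""
    def __init__(self, width: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 12, 3, padding=1),  # 12 = 3 RGB channels x 2x2 upscale
        )
        self.to_rgb = nn.PixelShuffle(2)         # depth-to-space back to sensor resolution

    def forward(self, packed_raw: torch.Tensor, amplification: float) -> torch.Tensor:
        return self.to_rgb(self.body(packed_raw * amplification))

# Training pairs: a nearly black short exposure as input, a long exposure as the target.
model = TinySeeInDark()
loss_fn = nn.L1Loss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

short_raw = torch.rand(512, 512) * 0.02      # stand-in for a dark raw capture
long_rgb = torch.rand(1, 3, 512, 512)        # stand-in for the long-exposure reference
packed = pack_bayer(short_raw).unsqueeze(0)  # add batch dimension

pred = model(packed, amplification=100.0)    # ratio of reference to actual exposure time
loss = loss_fn(pred, long_rgb)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The key point the quote makes is that the network consumes raw sensor data directly, so steps a traditional pipeline would handle separately, such as demosaicing, denoising, and tone mapping, are learned jointly from the short/long exposure pairs.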

The team has also put together a video explaining and demonstrating how the technique works.

We’d recommend visiting the site if you want to see high-resolution before-and-after images, but the base images being worked with aren’t just “low light”: in some cases, the original shots are almost entirely black to the naked eye. Existing image software struggles to make much out of this kind of material, even with professional processing.

While there’s still some inevitable blur, if you click through and look at either the paper or the high-resolution default shots, the results from Intel and the University of Illinois Urbana-Champaign are an order of magnitude better than anything we’ve seen before. And with smartphone vendors jockeying to build machine intelligence capabilities into more devices, it’s entirely possible that we’ll see these kinds of capabilities come to market in phones and become available to ordinary customers. I, for one, welcome the idea of a smarter camera, preferably one able to correct for my laughably terrible photography skills.
