Machine Learning Works Wonders On Low-Light Images

It’s no secret that smartphone SoCs don’t scale as well as they once did, and the overall rate of performance improvement in phones and tablets has slowed dramatically. One area where companies are still delivering significant improvements, however, is cameras. While this obviously varies by manufacturer, companies like Samsung, LG, and Apple continue to deliver year-on-year improvements, including higher megapixel counts, multiple cameras, improved sensors, and features like optical image stabilization. There’s still a gap between DSLR and phone cameras, but it’s been narrowing for years. And if recent work from Intel and the University of Illinois Urbana-Champaign is any indication, machine learning can solve a problem that bedevils phone cameras to this day: low-light shots.
Don’t get me wrong, the low-light capabilities of modern smartphones are excellent compared with where we were just a few short years ago. But this is the sort of area where the difference between phones and a DSLR becomes apparent. The gap between the two types of devices when shooting static scenes outdoors is much smaller than the difference you’ll see when shooting in low light. The team built a machine learning engine by creating a data set of short-exposure and long-exposure low-light images (the long exposures served as ground-truth references). The paper states:
Using the presented dataset, we develop a pipeline for processing low-light images, based on end-to-end training of a fully-convolutional network. The network operates directly on raw sensor data and replaces much of the traditional image processing pipeline, which tends to perform poorly on such data.
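To make the quoted pipeline concrete: the network takes raw sensor data directly, which means the input first has to be unpacked from the sensor's Bayer mosaic and amplified to match the brightness of the long-exposure reference. The sketch below illustrates that preprocessing step under stated assumptions — the RGGB layout, 14-bit depth, black level, and amplification factor are all illustrative values, not details from the article.

```python
import numpy as np

def pack_bayer(raw, black_level=512, amplification=100.0):
    """Pack an RGGB Bayer mosaic of shape (H, W) into a 4-channel
    array of shape (H/2, W/2, 4), subtract the sensor black level,
    normalize, and amplify toward the long-exposure brightness.

    Assumptions (illustrative, not from the article): RGGB layout,
    a 14-bit sensor (max value 16383), black_level=512."""
    raw = raw.astype(np.float32)
    # Subtract black level and normalize to [0, 1]
    raw = np.maximum(raw - black_level, 0) / (16383 - black_level)
    # Interleave the four Bayer sub-mosaics as channels
    packed = np.stack([raw[0::2, 0::2],   # R
                       raw[0::2, 1::2],   # G
                       raw[1::2, 0::2],   # G
                       raw[1::2, 1::2]],  # B
                      axis=-1)
    # Scale dark input up to the reference exposure level
    return packed * amplification
```

In the paper's setup, an array like this is what the fully-convolutional network consumes in place of the camera's traditional demosaicing, denoising, and tone-mapping stages.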
The team has put together a video explaining and demonstrating how the technique works.
We’d recommend visiting the site if you want to see high-resolution before-and-after images, but the base images being worked with aren’t just “low light” — the original shots are, in some cases, almost entirely black to the naked eye. Existing image software struggles to recover much from this kind of input, even when professional processing is used.
While there’s still some inevitable blur, if you click through and look at either the paper or the high-resolution default shots, the results from Intel and Urbana-Champaign are an order of magnitude better than anything we’ve seen before. And with smartphone vendors jockeying to build machine intelligence capabilities into more devices, it’s entirely possible that we’ll see more and more products bringing these kinds of capabilities to market in phones and making them available to ordinary customers. I, for one, welcome the idea of a smarter camera — preferably one able to correct for my laughably terrible photography skills.