It’s no secret that smartphone SoCs don’t scale as well as they once did, and the overall rate of performance improvement in phones and tablets has slowed dramatically. One area where companies are still delivering significant improvements, however, is cameras. While this obviously varies by manufacturer, companies like Samsung, LG, and Apple continue to deliver year-on-year improvements, including higher megapixel counts, multiple cameras, improved sensors, and features like optical image stabilization. There’s still a gap between DSLR and phone cameras, but it’s been narrowing for years. And if recent work from Intel and the University of Illinois Urbana-Champaign is any indication, machine learning can solve a problem that bedevils phone cameras to this day: low-light shots.
Don’t get me wrong, the low-light capabilities of modern smartphones are excellent compared with where we were just a few years ago. But this is the sort of area where the difference between phones and a DSLR becomes apparent. The gap between the two types of devices when shooting static shots outdoors is much smaller than the difference you’ll see when shooting in low light. The team built a machine learning model by assembling a dataset of paired low-light images: short-exposure shots alongside corresponding long-exposure shots that serve as the reference. The report states:
Using the presented dataset, we develop a pipeline for processing low-light images, based on end-to-end training of a fully-convolutional network. The network operates directly on raw sensor data and replaces much of the traditional image processing pipeline, which tends to perform poorly on such data.
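To make that description concrete, here is a toy sketch of the core ideas, not the authors' actual network (they train a much deeper architecture): raw Bayer sensor data is packed into color channels, amplified to the target brightness, and fed through fully-convolutional layers that stand in for the traditional processing pipeline. All function names, layer sizes, and weights here are illustrative assumptions.

```python
import numpy as np

def pack_bayer(raw):
    """Pack an RGGB Bayer mosaic (H, W) into a 4-channel,
    half-resolution array (H/2, W/2, 4) -- one channel per
    color site -- so convolutions can operate on raw data."""
    return np.stack([raw[0::2, 0::2],   # R
                     raw[0::2, 1::2],   # G1
                     raw[1::2, 0::2],   # G2
                     raw[1::2, 1::2]],  # B
                    axis=-1)

def conv2d(x, w):
    """Naive 'same'-padded 3x3 convolution.
    x: (H, W, Cin), w: (3, 3, Cin, Cout)."""
    H, W, _ = x.shape
    pad = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((H, W, w.shape[-1]))
    for i in range(3):
        for j in range(3):
            # Slide the padded image and mix channels with a matmul.
            out += pad[i:i + H, j:j + W, :] @ w[i, j]
    return out

def tiny_fcn(raw, amplification, w1, w2):
    """Toy stand-in for an end-to-end fully-convolutional pipeline:
    amplify the dark raw signal toward reference brightness, then
    apply two conv layers with a ReLU in between."""
    x = pack_bayer(raw.astype(np.float32)) * amplification
    x = np.maximum(conv2d(x, w1), 0.0)   # ReLU nonlinearity
    return conv2d(x, w2)                 # 3-channel (RGB-like) output

rng = np.random.default_rng(0)
raw = rng.random((8, 8)) * 0.01          # fake, nearly-black 8x8 Bayer frame
w1 = rng.standard_normal((3, 3, 4, 8)) * 0.1
w2 = rng.standard_normal((3, 3, 8, 3)) * 0.1
out = tiny_fcn(raw, amplification=100, w1=w1, w2=w2)
print(out.shape)                         # half-resolution RGB map: (4, 4, 3)
```

In the real system the weights are learned end-to-end from the short/long exposure pairs, so the network absorbs denoising, demosaicing, and color rendering in one step rather than chaining hand-tuned stages.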
The team has put a video together to explain and demonstrate how their technique works, as shown below:
We’d recommend visiting the site if you want to see high-resolution before-and-after images, but the base images being worked with aren’t just “low light” — the original shots are, in some cases, almost entirely black to the naked eye. Existing image software struggles to extract much from these kinds of images, even with professional processing.
While there’s still some inevitable blur, if you click through and look at either the paper or the high-resolution default shots, the results from Intel and Urbana-Champaign are an order of magnitude better than anything we’ve seen before. And with smartphone vendors jockeying to build machine intelligence capabilities into more devices, it’s entirely possible that we’ll see products bringing these kinds of capabilities to phones and making them available to ordinary customers. I, for one, welcome the idea of a smarter camera — preferably one able to correct for my laughably terrible photography skills.