Nvidia AI Compensates for Your Poor Photography Skills by Erasing Noise From Images
Taking a photo in poor lighting often results in something too grainy and noisy to be useful. Advanced software processing on some phones and cameras can fix moderate noise, but a new project from Nvidia, MIT, and Aalto University uses AI to correct extreme levels of noise. Even if the “Noise2Noise” system has never seen an image before, it can de-noise it to produce something very close to the original.
Noise2Noise is a neural network, which means it needs to be trained on lots of data. The team used 50,000 images from the ImageNet database, which contains clean, high-resolution images. Since the network needs to see noisy images in order to learn how to de-noise them, the team artificially added noise to the images and trained on those corrupted versions. Crucially, the network never needs a clean reference during training: both the input and the training target are independently corrupted copies of the same image, which is where the name Noise2Noise comes from.
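To make that concrete, here is a minimal Python/NumPy sketch of the data-preparation idea, assuming additive Gaussian noise as the corruption (the paper also covers other noise types); the function names are illustrative and not taken from Nvidia’s code:

```python
import numpy as np

def add_gaussian_noise(image, sigma=25.0):
    """Corrupt a clean image (float array in [0, 255]) with zero-mean Gaussian noise."""
    noise = np.random.normal(loc=0.0, scale=sigma, size=image.shape)
    return np.clip(image + noise, 0.0, 255.0)

def make_training_pair(clean_image):
    """Build one Noise2Noise-style training example: the input and the target
    are two independently corrupted copies of the same underlying image, so
    the network never needs a clean reference during training."""
    noisy_input = add_gaussian_noise(clean_image)
    noisy_target = add_gaussian_noise(clean_image)
    return noisy_input, noisy_target
```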
Nvidia contributed a bank of Tesla P100 GPUs to run the training with the cuDNN-accelerated TensorFlow deep learning framework. The network was trained until its output closely matched the original, uncorrupted dataset images. The true test is how the network handles new images it has never seen before, and the team reports that Noise2Noise removes artifacts and noise from them with a high degree of accuracy.
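The article doesn’t include Nvidia’s training code, but a toy version of the setup in TensorFlow’s Keras API might look like the sketch below. The actual network in the paper is a much deeper U-Net-style model; the tiny stack of convolutions and the layer sizes here are placeholder assumptions meant only to show the noisy-input-to-noisy-target training loop:

```python
import tensorflow as tf

def build_denoiser():
    # Placeholder architecture: the published model is a deep U-Net, not
    # three plain convolutions. Accepts any image size with 3 channels.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(None, None, 3)),
        tf.keras.layers.Conv2D(48, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(48, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(3, 3, padding="same"),
    ])

model = build_denoiser()
# Mean squared error against a *noisy* target: because the added noise is
# zero-mean, minimizing this loss still drives the network's output toward
# the clean image on average.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")

# noisy_inputs / noisy_targets: batches built as in the earlier sketch,
# scaled to [0, 1], with shape (num_examples, height, width, 3).
# model.fit(noisy_inputs, noisy_targets, batch_size=16, epochs=10)
```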
Researchers point to several possible applications for Noise2Noise. Low-light photography is probably the one that would make the biggest immediate impact on your life. You could run your noisy photos through Noise2Noise and end up with something that looks much nicer. Astrophotography often involves very long exposures, and that leads to high noise. The same process could be applied here to make images of space clearer. MRI images suffer from similar noise issues, and the team tested Noise2Noise as a way to clean them up.
Many camera and smartphone manufacturers have their own processing algorithms that strip noise out of RAW images before showing you the final JPEG. For the most part, they don’t rely on the same technology as Noise2Noise. The closest is Google, which has leveraged its machine learning technology in the Pixel camera to do similar noise reduction work, but its de-noising is nowhere near as aggressive. Noise2Noise can resolve detail from an image that is almost unrecognizable under noise. The final product does look a bit unnaturally smooth, but that’s an issue even with less powerful image processing.
The researchers are presenting their work at the International Conference on Machine Learning in Stockholm, Sweden. It’s still just a computer science curiosity at the moment, but image processing is big business. A practical application could be a big hit.