We’ve Almost Gotten Full-Color Night Vision to Work

Scientists at the University of California, Irvine, have experimented with reconstructing night vision scenes in color using a deep learning algorithm. The algorithm uses infrared images invisible to the naked eye; humans can only see light waves from about 400 nanometers (what we see as violet) to 700 nanometers (red), while infrared devices can detect wavelengths up to one millimeter. Infrared is therefore an essential component of night vision technology, as it allows humans to “see” in what we would normally perceive as total darkness.

Though thermal imaging has previously been used to color scenes captured in infrared, it isn’t perfect, either. Thermal imaging uses a technique called pseudocolor to “map” each shade from a monochromatic scale into color, which results in a helpful yet highly unrealistic image. This doesn’t solve the problem of identifying objects and individuals in low- or no-light conditions.
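The pseudocolor technique mentioned above amounts to a simple lookup: each grayscale intensity is mapped to a color along a fixed ramp. A minimal sketch (the palette here is illustrative, not the mapping any particular thermal camera uses):

```python
def pseudocolor(value, vmin=0, vmax=255):
    """Map a grayscale intensity to an RGB triple on a blue-to-red ramp.

    This is a toy illustration of pseudocolor mapping: vivid and easy to
    read, but nothing like the scene's true visible-spectrum colors.
    """
    t = (value - vmin) / (vmax - vmin)  # normalize intensity to [0, 1]
    r = int(255 * max(0.0, 2 * t - 1))  # red grows in the upper half
    b = int(255 * max(0.0, 1 - 2 * t))  # blue fades in the lower half
    g = 255 - r - b                     # green peaks in the middle
    return (r, g, b)

# Coldest pixel renders blue, hottest renders red.
print(pseudocolor(0), pseudocolor(255))
```

This is exactly why pseudocolored output is "helpful yet highly unrealistic": the colors encode intensity, not the object's actual appearance.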

The scientists at UC Irvine, on the other hand, sought to create a solution that would produce an image similar to what a human would see in visible spectrum light. They used a monochromatic camera sensitive to visible and near-infrared light to capture photographs of color palettes and faces. They then trained a convolutional neural network to predict visible spectrum images using only the near-infrared images supplied. The training process resulted in three architectures: a baseline linear regression, a U-Net inspired CNN (UNet), and an augmented U-Net (UNet-GAN), each of which was able to produce about three images per second.
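To get a feel for the simplest of the three architectures, the linear-regression baseline can be sketched as a least-squares fit from per-pixel near-infrared intensities to RGB values. The data below is synthetic and the three-band setup is an assumption for illustration; the actual study fit its models on photographs of color palettes and faces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 1000 pixels, each with 3 near-infrared
# band intensities, generated from a known (made-up) linear map to RGB.
nir = rng.random((1000, 3))
true_map = np.array([[0.8, 0.1, 0.1],
                     [0.2, 0.6, 0.2],
                     [0.1, 0.2, 0.7]])
rgb = nir @ true_map  # noiseless visible-spectrum targets, for clarity

# Fit the NIR -> RGB linear map with ordinary least squares.
learned_map, *_ = np.linalg.lstsq(nir, rgb, rcond=None)

# On this clean synthetic data the fit recovers the true map.
print(np.allclose(learned_map, true_map, atol=1e-8))
```

The deep architectures (UNet and UNet-GAN) replace this single linear map with many stacked, nonlinear convolutional layers, which is what lets them capture context beyond a single pixel.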

Once the neural network produced images in color, the team—made up of engineers, vision scientists, surgeons, computer scientists, and doctoral students—provided the images to graders, who selected which outputs subjectively appeared most similar to the ground truth image. This feedback helped the team select which neural network architecture was most effective, with UNet outperforming UNet-GAN except in zoomed-in conditions.

The team at UC Irvine published their findings in the journal PLOS ONE on Wednesday. They hope their technology can be applied in security, military operations, and animal observation, and note that it could also help reduce vision damage during eye surgeries.
