Adobe’s New Dual-Stream Neural Network Can Detect Photo Fraud

For several years, Adobe has touted its Sensei framework for bringing AI into its image editing tools, enabling more realistic noise reduction, cloning, and object removal. Unfortunately, that effort is also one more reason it has become harder to detect image fakery. So Adobe Research, working with the University of Maryland, is developing a way to use a sophisticated deep neural network (DNN) to detect several common types of image tampering.

Splicing, Cloning, and Object Removal

The team’s system isn’t a general-purpose tool for finding every type of manipulation. Instead, it has been trained to detect three of the most common: splicing, the compositing of multiple images; cloning, copying a portion of an image and pasting it elsewhere in the same image; and object removal.

One of the big challenges for the team was finding enough images to train their network. They took the interesting approach of starting with the COCO dataset, whose images come with labeled objects, and using an automated tool to apply combinations of these three manipulations to them. That gave them a much larger training set than most previous efforts have had.
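To make the idea concrete, here is a minimal sketch of how tampered training images can be generated automatically from object-annotated photos like those in COCO. This is not Adobe's actual tooling; the file names, the `make_splice` function, and the pre-rendered mask image are all illustrative placeholders.

```python
# Minimal sketch: create a "spliced" training image by pasting the masked
# object from one annotated photo into another. File names and the mask
# source are placeholders; a real pipeline would render masks from COCO
# polygon annotations.
from PIL import Image

def make_splice(donor_path, donor_mask_path, target_path, offset=(40, 40)):
    """Paste the masked object region of the donor image onto the target."""
    donor = Image.open(donor_path).convert("RGB")
    mask = Image.open(donor_mask_path).convert("L")  # white = object pixels
    target = Image.open(target_path).convert("RGB")

    # Paste donor pixels wherever the mask is non-zero, at the chosen offset.
    target.paste(donor, offset, mask)

    # Ground-truth tamper map in target coordinates, useful as a training label.
    tamper_map = Image.new("L", target.size, 0)
    tamper_map.paste(mask, offset)
    return target, tamper_map

if __name__ == "__main__":
    spliced, tamper_map = make_splice("donor.jpg", "donor_mask.png", "target.jpg")
    spliced.save("spliced_example.jpg")
    tamper_map.save("spliced_example_mask.png")
```

Repeating this over thousands of image pairs, and doing the analogous copy-paste within a single image for cloning or removal, is one plausible way to produce a large labeled set of tampered examples.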

Dual-Stream Design Analyzes Image and Noise

The network uses two complementary streams: one examines the visible image content for telltale artifacts such as unnatural contrast along tampered boundaries, while the other looks at the image’s noise, since a pasted or retouched region often carries noise characteristics that don’t match the rest of the photo.

Examples of tampering artifacts: unnatural contrast in the baseball photo and an obvious low-noise area in the second image.
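As a very rough illustration of the two-stream idea, the hypothetical sketch below feeds the same image to an RGB stream and to a noise stream that first passes the image through a fixed high-pass filter. The actual Adobe/UMD model is far more elaborate (a region-based detector with richer noise features), so treat the class name, layer sizes, and filter choice here as assumptions for demonstration only.

```python
# Hypothetical, drastically simplified dual-stream sketch (not Adobe's model):
# one stream sees the RGB content, the other a high-pass "noise residual".
import torch
import torch.nn as nn

class DualStreamDetector(nn.Module):
    def __init__(self, num_classes=4):  # e.g. authentic / splice / clone / removal
        super().__init__()
        # Noise stream front end: a fixed 3x3 high-pass filter applied per channel.
        hp = torch.tensor([[-1., -1., -1.],
                           [-1.,  8., -1.],
                           [-1., -1., -1.]]) / 8.0
        self.highpass = nn.Conv2d(3, 3, 3, padding=1, groups=3, bias=False)
        self.highpass.weight.data = hp.view(1, 1, 3, 3).repeat(3, 1, 1, 1)
        self.highpass.weight.requires_grad = False

        def stream():
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )

        self.rgb_stream = stream()    # visual artifacts: contrast, edges, blending
        self.noise_stream = stream()  # inconsistent noise statistics
        self.classifier = nn.Linear(32 * 2, num_classes)

    def forward(self, x):
        rgb_feat = self.rgb_stream(x)
        noise_feat = self.noise_stream(self.highpass(x))
        return self.classifier(torch.cat([rgb_feat, noise_feat], dim=1))

# Example: classify a random batch of 128x128 images.
logits = DualStreamDetector()(torch.rand(2, 3, 128, 128))
print(logits.shape)  # torch.Size([2, 4])
```

The point of the design is that the two streams fail in different ways: a well-blended splice may fool the content stream yet still stand out in the noise residual, and vice versa, so combining their features makes the detector harder to evade.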

Several Ways of Securing Images

The problem of detecting fake images is particularly hard when only the processed image is available. In several cases, though, powerful tools already exist. First, RAW files are quite difficult to fake, so supplying the RAW file is now a common requirement of many major photo contests. Second, on-camera signing of images is a great way to secure their origin, and many high-end cameras already offer it as an option. Signed images, like any public-key-secured data, can be authenticated by any recipient. Similarly, JPEGs captured by most cameras have distinctive attributes that differ from those of images created in Photoshop. So having the original JPEG, a RAW file, or a signed image are all ways to validate an image, or to use it as a baseline for comparison with the suspect version.
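To show the public-key principle the article mentions, here is a minimal sketch using the `cryptography` package and Ed25519 keys. Real in-camera signing schemes differ by manufacturer, and the key handling and image bytes here are illustrative assumptions, not any vendor's actual implementation.

```python
# Illustration of signed images: anyone with the camera's public key can
# verify the file is byte-for-byte unchanged since capture.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In practice the private key would live inside the camera; only the public
# key is published.
camera_key = Ed25519PrivateKey.generate()
public_key = camera_key.public_key()

image_bytes = b"...raw JPEG bytes straight off the sensor..."  # placeholder
signature = camera_key.sign(image_bytes)                       # done in-camera

# Any recipient can now check the image against the signature.
try:
    public_key.verify(signature, image_bytes)
    print("Image verified: identical to what the camera signed.")
except InvalidSignature:
    print("Image has been modified since capture.")

# A single altered byte breaks the signature.
try:
    public_key.verify(signature, image_bytes + b"\x00")
except InvalidSignature:
    print("Tampered copy rejected.")
```

This is why a signed original (or the RAW file) is such a strong baseline: verification fails on any edit, no matter how visually convincing the result is.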

The Beginning of an AI Arms Race

When the team evaluated their system against other leading research implementations, it did better on almost every metric. As in fields like object and facial recognition, image manipulation detection looks like an area where machine learning approaches will quickly leapfrog older techniques. Of course, the two sides will also be leaping over each other, as image editing tools produce ever more natural results while manipulation detection software grows more powerful.
