For several years, Adobe has touted its Sensei framework for incorporating AI into its image editing tools, enabling more realistic noise reduction, cloning, and object removal. Unfortunately, that effort is also one more reason it’s become harder to detect image fakery. So Adobe Research, along with the University of Maryland, is working on a way to use a sophisticated Deep Neural Network (DNN) to detect several common types of image manipulation.
Splicing, Cloning, and Object Removal
The team’s system isn’t a general-purpose tool for finding every type of manipulation. Instead, it has been trained to detect three of the most common: splicing, the compositing of content from multiple images; cloning, copying a portion of an image and pasting it elsewhere in the frame; and object removal, where an object is erased and the background filled in.
One of the big challenges for the team was finding enough test images to train their network. They took the interesting approach of starting with the COCO database of images with labeled objects, then using an automated tool to apply combinations of the three manipulations to them. That gave them a much larger training dataset than most previous efforts had.
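To make the idea of automatically generating tampered training images concrete, here is a minimal sketch of splice and clone (copy-move) operations on toy grayscale images represented as 2D lists. The function names and the tiny 4x4 images are hypothetical illustrations, not the team’s actual tool, which worked from COCO’s object annotations:

```python
import copy

def splice(dst, src, top, left, h, w):
    """Paste an h x w patch from src (taken from its top-left corner)
    into dst at (top, left). Simulates a splice: compositing content
    from a second image."""
    out = copy.deepcopy(dst)
    for r in range(h):
        for c in range(w):
            out[top + r][left + c] = src[r][c]
    return out

def clone(img, src_top, src_left, dst_top, dst_left, h, w):
    """Copy an h x w patch to another location within the same image
    (copy-move, i.e. cloning)."""
    out = copy.deepcopy(img)
    for r in range(h):
        for c in range(w):
            out[dst_top + r][dst_left + c] = img[src_top + r][src_left + c]
    return out

# Toy 4x4 "images": a dark base and a bright donor.
base = [[0] * 4 for _ in range(4)]
donor = [[9] * 4 for _ in range(4)]

spliced = splice(base, donor, 1, 1, 2, 2)   # paste a 2x2 bright block into base
cloned = clone(spliced, 1, 1, 0, 0, 1, 1)   # duplicate one tampered pixel at (0, 0)
```

Running many such randomized operations over a labeled image corpus, and keeping the manipulated regions as ground-truth masks, is how a large supervised training set can be produced without manual editing.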
Dual-Stream Design Analyzes Image and Noise
The network uses two parallel streams. An RGB stream looks for visual tampering artifacts, such as unnatural edges and contrast differences, while a noise stream examines the image’s noise characteristics, since content pasted in from another image typically carries noise statistics that don’t match the rest of the scene. The features from both streams are then combined to localize the manipulated region.
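The intuition behind a noise stream can be sketched simply: a high-pass filter suppresses the image content and leaves a noise residual, in which a pasted-in region with different noise statistics stands out. The actual system uses learned SRM-style filter kernels inside the network; the simple center-minus-neighbors filter below is only an illustrative stand-in:

```python
def noise_residual(img):
    """Approximate a noise residual with a 3x3 high-pass filter:
    each interior pixel minus the mean of its 8 neighbors.
    Smooth regions produce residuals near zero; pixels whose
    statistics differ from their surroundings stand out."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            neigh = sum(img[r + dr][c + dc]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0)) / 8.0
            out[r][c] = img[r][c] - neigh
    return out

flat = [[10] * 5 for _ in range(5)]  # uniform region: residual is ~0 everywhere
flat[2][2] = 50                      # one "pasted" pixel with different statistics
res = noise_residual(flat)           # residual spikes at the anomalous pixel
```

In the real network, a bank of such filters feeds a second convolutional stream whose features are fused with the RGB stream’s before the region proposals are scored.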
Several Ways of Securing Images
The problem of detecting fake images is particularly hard when only the processed image is available. But there are several situations where powerful safeguards already exist. First, RAW files are quite difficult to fake, so providing the RAW file is now a common requirement of many major photo contests. Second, on-camera signing of images is a great way to secure their origin, and many high-end cameras already offer it as an option. Signed images, like any public-key-secured data, can be authenticated by any recipient. Similarly, JPEGs captured by most cameras have distinctive attributes that differ from those of images created in Photoshop. So having the original JPEG, a RAW file, or a signed image are all ways to validate an image, or to provide a baseline for comparison with a suspect version.
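The public-key signing idea can be sketched in a few lines: the camera hashes the image bytes and signs the digest with its private key, and anyone holding the public key can verify that the bytes haven’t changed since capture. This is a toy RSA example with tiny primes for illustration only; a real camera would use a proper cryptographic library and full-size keys:

```python
import hashlib

# Toy RSA keypair with tiny textbook primes -- illustration only, NOT secure.
p, q = 61, 53
n = p * q                            # modulus (3233)
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+ modular inverse)

def sign(data: bytes) -> int:
    """Camera-side: hash the image bytes and sign the digest
    with the private key."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(h, d, n)

def verify(data: bytes, sig: int) -> bool:
    """Recipient-side: recompute the hash and check it against the
    signature using only the public key (e, n)."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(sig, e, n) == h

image = b"\xff\xd8\xff\xe0...jpeg bytes..."   # stand-in for a captured JPEG
sig = sign(image)
assert verify(image, sig)                     # untouched bytes authenticate
```

Any edit to the pixels changes the digest, so the signature no longer verifies, which is what makes an on-camera-signed original such a strong baseline.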
The Beginning of an AI Arms Race
When the team evaluated their system against other leading research implementations, it outperformed them on almost every metric. As in fields like object and facial recognition, image manipulation detection looks like an area where machine learning approaches will quickly leapfrog older techniques. Of course, the two sides will also be leaping over each other, as AI-powered editing tools produce ever more natural results while manipulation-detection software grows more powerful.