Google’s DeepMind unit has been involved in some of the company’s coolest AI breakthroughs in recent years, from outsmarting human Go players to developing more realistic speech synthesis. Now, DeepMind is looking to improve the way machines understand and model 3D spaces. DeepMind AI researchers have created a neural network that can predict what a space will look like after seeing one or more images of it, even if only parts of the room are visible.
At the core of this project is the aim to make neural networks easier to train. Usually, you need humans to label the data used to train a neural network. After the data is fed in, the nodes in the network apply weights and pass values forward to the next layer. At the end, the system's output should match the label attached to the input. Of course, it won't at first, so you adjust the network's weights until it's trained. DeepMind's new generative query network (GQN) can learn from unlabeled inputs and apply its knowledge to new situations.
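The labeled training loop described above can be sketched in a few lines. This is a deliberately tiny, hypothetical example (a one-weight "network" fit with gradient descent), not DeepMind's code, but it shows the cycle of feeding data forward, comparing the output with the label, and adjusting the weight until predictions match:

```python
def train(examples, lr=0.1, epochs=200):
    """Fit y = w * x to labeled (x, y) pairs with gradient descent."""
    w = 0.0  # initial weight: the untrained network is wrong at first
    for _ in range(epochs):
        for x, y in examples:
            pred = w * x          # feed the input forward
            error = pred - y      # compare the output with the label
            w -= lr * error * x   # adjust the weight to shrink the error
    return w

# Trained on pairs that follow y = 3x, the weight converges to about 3.
data = [(1, 3), (2, 6), (3, 9)]
print(round(train(data), 2))
```

The GQN's advance is doing without the `y` labels entirely: the "supervision" comes from predicting held-out views of the same scene.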
The team generated 3D virtual spaces from vectors, and then created single-frame images of them for the system to analyze. The GQN is actually two neural networks: a representation network that learns from the observed images, and a generation network that renders new perspectives. The team simulated a virtual robot arm, a block-like table, and a simple maze.
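The two-network split can be sketched as follows. Everything here is a hypothetical stand-in (the "images" and "viewpoints" are just short lists of numbers, and the functions are placeholders, not DeepMind's actual architecture), but it shows the flow: each observation is encoded, the encodings are summed into one scene representation, and the generator is queried with a viewpoint the system never saw:

```python
def represent(image, viewpoint):
    """Stand-in for the representation network: encode one observation."""
    return [p + v for p, v in zip(image, viewpoint)]

def aggregate(encodings):
    """Sum per-observation encodings into one scene representation,
    so the number and order of input images doesn't matter."""
    return [sum(vals) for vals in zip(*encodings)]

def generate(scene_repr, query_viewpoint):
    """Stand-in for the generation network: predict the scene's
    appearance from a new viewpoint, using only the representation."""
    return [r - q for r, q in zip(scene_repr, query_viewpoint)]

# Two observations of the same toy "scene" from different viewpoints...
obs = [([1.0, 2.0], [0.5, 0.5]), ([1.0, 2.0], [0.2, 0.8])]
scene = aggregate([represent(img, vp) for img, vp in obs])
# ...queried from a third viewpoint that was never observed.
print(generate(scene, [0.0, 1.0]))
```

The design point to notice is the bottleneck: the generator sees only the aggregated representation, never the original images, which is what forces that representation to capture the scene's layout.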
After training the GQN on millions of images, the system can create an accurate representation of an object or room from just a single still image. It's similar to the way your brain works. If you see a wall in the middle of a room, you'd probably imagine what the other side looks like and roughly where it sits relative to the other objects you can see.
DeepMind believes this sort of technology could be vital in areas like self-driving cars, where the system won't have complete information about upcoming road conditions but could still predict them with a high degree of accuracy based on what it can see.
The images shown to the GQN are very simple compared with the real world, and it still took months to get the network up to speed on current hardware. It may take another few generations of processing improvements before such a system can come close to understanding and predicting the layout of a complex real-world situation.