Google has invested heavily in artificial intelligence to power products like Google Photos and Google Assistant. These applications are impressive in their own right, but they’re not exactly “fun.” Google has just rolled out a new AI experiment that is geared entirely toward fun. The new Move Mirror experiment can identify your pose and match it against more than 80,000 images of other people to show you someone in a similar stance. Why? So you can make GIFs, of course.
While this specific Google experiment doesn’t have a practical use, it’s a tour de force of technology buzzwords. Move Mirror uses AI, machine learning, neural networks, augmented reality, and more. You can try it right now as long as your computer has a webcam. Just head over to the Move Mirror site and grant the page access to your camera to get started.
Move Mirror works best if you stand far enough away from your computer that all your joints are in the frame. It’s also limited to a single person at a time. As you move, the AI scans your body and estimates where your joints are. It then matches your poses against a catalog of 80,000 photos of people. Your live camera feed appears on one side of the screen, and Move Mirror populates the other side with matching pictures in real time. You can even create a GIF of this process to share on the internet.
This is all just for fun, but the technology behind Move Mirror could have many applications. Google calls it PoseNet, and you can learn more about the details in a Medium post from earlier this year. Like many Google technologies, PoseNet is powered by a convolutional neural network. The camera feed gets piped into the network, which identifies people and maps 17 tracking points on the image. The network matches those points against its catalog of 80,000 images, and you get the output.
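The matching step described above can be sketched in a few lines of Python. This is an illustrative assumption, not Google's actual code: it normalizes each set of keypoints so that matching depends on the shape of the pose rather than its size or position in the frame, then picks the catalog image whose pose has the highest cosine similarity to the live one.

```python
import math

def normalize(pose):
    """Shift a list of (x, y) keypoints to the origin and scale the
    flattened coordinates to a unit vector, so poses compare by shape
    rather than by size or position in the frame."""
    min_x = min(x for x, _ in pose)
    min_y = min(y for _, y in pose)
    flat = [v for x, y in pose for v in (x - min_x, y - min_y)]
    norm = math.sqrt(sum(v * v for v in flat)) or 1.0
    return [v / norm for v in flat]

def cosine_similarity(a, b):
    # Both inputs are unit vectors, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

def best_match(live_pose, catalog):
    """Return the catalog entry whose stored pose is most similar
    to the live pose. Each entry is a dict with a "pose" key holding
    the keypoint list (17 points in PoseNet's case)."""
    live = normalize(live_pose)
    return max(catalog,
               key=lambda entry: cosine_similarity(live, normalize(entry["pose"])))
```

In practice a keypoint also carries a confidence score, which a real matcher would use to down-weight joints the network is unsure about; this sketch ignores that for clarity.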
PoseNet works in both single-person and multi-person detection modes. There’s a separate demo that shows how well this works, but all you get is the 17-point detection wireframe. Move Mirror uses the single-person version because it’s faster and the image library consists of individual people.
PoseNet could eventually find use in games, fitness tracking, and even interactive art installations. This is not a tool for recognizing who people are, so it’s less of a privacy concern than tools that can recognize and remember faces.