Google Hired Photographers To Help Train the AI-Powered ‘Clips’ Camera

Every camera in your life so far has been entirely under your control: you decide what it films and for how long. Google’s upcoming Clips camera is different. It uses machine learning to decide what’s important, then saves or discards footage accordingly. Google unveiled the device in October 2017, but it still isn’t available. It’s expected to go on sale soon, and Google has now detailed how it tuned the camera’s neural network to recognize significant moments.
The idea is that Clips runs continuously but saves only the video clips it thinks you will want. There’s also a shutter button for manual captures. It has three hours of battery life and 16GB of local storage. It doesn’t upload any video to the cloud for processing, so the on-device neural network (called Moment IQ) had to be extremely efficient.
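Google hasn’t published how Moment IQ works under the hood, but the behavior described above (capture continuously, score on-device, keep only the moments that clear some bar) can be sketched roughly as a loop over a rolling frame buffer. Everything in this sketch is an illustrative assumption: the camera, model, and storage objects are hypothetical placeholders, not a real API, and the numbers are guesses.

```python
from collections import deque

# Illustrative sketch only: Google has not published Moment IQ's internals.
# `camera`, `model`, and `storage` are hypothetical placeholders, as are
# all of the numbers below.

CLIP_SECONDS = 7              # assumed length of a saved clip
FPS = 15                      # assumed capture rate
KEEP_THRESHOLD = 0.8          # assumed minimum "interestingness" score
STORAGE_BUDGET = 16 * 2**30   # 16GB of local storage, per the article

def run_capture_loop(camera, model, storage):
    # Rolling buffer holding the most recent few seconds of footage.
    window = deque(maxlen=CLIP_SECONDS * FPS)
    for frame in camera.stream():
        window.append(frame)
        if len(window) < window.maxlen:
            continue  # not enough footage buffered yet
        # Score the buffered window entirely on-device; no cloud round-trip.
        score = model.score_frames(list(window))
        if score >= KEEP_THRESHOLD and storage.bytes_used() < STORAGE_BUDGET:
            storage.save_clip(list(window), score)
            window.clear()  # avoid saving overlapping copies of one moment
```

The real device presumably does far more (ranking clips against one another, deduplicating near-identical moments, managing battery), but the sketch captures the constraint the article emphasizes: every decision happens locally, with nothing sent to the cloud.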
In many ways, Clips is the embodiment of what Google calls “human-centered design”: the idea that AI products should do something useful for the user without adding stress or complication. Clips watches the world and gives you the video it thinks will make you smile. Training the network to understand an abstract concept like a memorable video was no easy feat, though.
Like all neural networks, Clips needed to see heaps of “good” and “bad” videos to tune its behavior. Some of the markers were obvious: video that’s blurry or partially obstructed can be disregarded easily. However, Google also needed to teach the AI about the concept of time. You don’t want too much filler in your clips, even if something interesting does happen at some point.
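The article doesn’t say how Clips actually rejects blurry footage, but a common, cheap heuristic for this kind of pre-filtering is the variance of the Laplacian: a sharp frame has strong edges and high variance, while a blurry one does not. A minimal sketch with OpenCV (the threshold is camera-dependent and purely illustrative):

```python
import cv2

def is_too_blurry(frame_bgr, threshold=100.0):
    """Cheap blur check: low Laplacian variance means few sharp edges.

    This is a standard heuristic, not Google's actual filtering method,
    and the threshold would need to be tuned for a given camera.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold
```

A pre-filter like this is cheap enough to run on every frame, which is why obviously bad footage is the easy part of the problem; judging memorability is what required the curated training data described next.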

The team behind Clips initially just showed the model images they considered beautiful or interesting. With an understanding of depth of field, lighting, and the rule of thirds, Googlers thought, the network would learn what makes a good clip. However, that wasn’t the case. To help ingrain the nuance of memorable video in Clips, Google hired a documentary filmmaker, a photojournalist, and a fine arts photographer to decide which images to use in training the camera.
Google admits the AI won’t be perfect, but the team seems confident it will be capable of generating “good” videos. We won’t know how much of the footage from Clips actually is good until it launches. There’s still no release date, but the camera will cost $250 when it arrives.