Nvidia AI Turns Regular Video Into 240fps High-Speed Video

Filming slow-motion video requires capturing more than the standard 30 or 60 frames per second. You might not always have hardware around that can do that, but Nvidia has a workaround. It has developed a new AI algorithm that can take your regular video and build new frames out of nothing to create high-speed video after you’ve filmed an event. Nvidia accomplished this feat with an array of GPUs and a neural network.

High-speed video is more accessible than ever. You don’t need multi-thousand-dollar camera rigs to capture something in slow-motion when phones like the Galaxy S9 can film in 240 frames per second. However, limited storage space and processing power mean you can only take a few seconds of high-speed video. Few consumers are going to buy a device designed specifically to capture high-speed video. So, Nvidia’s system can take your regular video and make it high-speed by looking at the existing frames. It then creates new intermediary frames (up to seven of them) so the motion is smoother when you slow it down.
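As a rough illustration of the idea, inserting seven frames between each pair of originals turns 30fps into 240fps. The sketch below uses a naive linear cross-fade purely to show the frame math; Nvidia's network instead predicts motion so the new frames look natural rather than blurred:

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n_intermediate=7):
    """Generate intermediate frames between two video frames.

    This is a naive linear cross-fade for illustration only.
    Nvidia's system uses a trained neural network that models
    motion, which avoids the ghosting a simple blend produces.
    """
    frames = []
    for i in range(1, n_intermediate + 1):
        t = i / (n_intermediate + 1)  # blend weight in (0, 1)
        blended = (1 - t) * frame_a + t * frame_b
        frames.append(blended.astype(frame_a.dtype))
    return frames

# Two dummy 4x4 grayscale "frames": all-black and all-white
a = np.zeros((4, 4), dtype=np.uint8)
b = np.full((4, 4), 255, dtype=np.uint8)

mids = interpolate_frames(a, b)
print(len(mids))  # 7 intermediate frames: 30fps x 8 = 240fps
```

With seven new frames per original gap, each 1/30-second interval becomes eight 1/240-second intervals, which is where the 240fps figure comes from.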

The new frames are a good approximation of reality, but they're not real; most of the frames in the resulting video are computer generated. Nvidia used Tesla V100 GPUs and the cuDNN neural network framework to build a model that converts regular video into high-speed video. The team trained the network on more than 11,000 videos of everyday events filmed at 240 frames per second. The network learned to emulate 240fps video so it could predict the extra frames in videos that only have 30 frames per second.

Nvidia successfully slowed down 30fps video of a car skidding across a road, people playing sports, and more. Without the interpolated frames from the neural network, slowing down that video just makes it look like choppy stop-motion animation. The network even works on video that was already filmed in slow motion. Nvidia used footage from the YouTube series The Slow Mo Guys, which is shot at 240fps. The team slowed those videos down by a further factor of four with the same smooth motion.

The examples shown by Nvidia are impressive. If you didn’t know the footage was mostly computer generated, you might not suspect anything was amiss. However, a notable limitation is the neural network can’t handle just any old video. It was trained on certain types of videos, so it can only slow down similar 30fps videos. For example, to slow down video of the car skidding, Nvidia needed to train the network with high-speed video of cars skidding. This is still quite impressive for a research project. Hopefully, it becomes a real consumer technology someday.
