Filming slow-motion video requires capturing more than the standard 30 or 60 frames per second. You might not always have hardware around that can do that, but Nvidia has a workaround. It has developed a new AI algorithm that can take your regular video and build new frames out of nothing to create high-speed video after you’ve filmed an event. Nvidia accomplished this feat with an array of GPUs and a neural network.
High-speed video is more accessible than ever. You don’t need multi-thousand-dollar camera rigs to capture something in slow motion when phones like the Galaxy S9 can film at 240 frames per second. However, limited storage space and processing power mean you can only take a few seconds of high-speed video, and few consumers are going to buy a device designed specifically to capture it. So, Nvidia’s system can take your regular video and make it high-speed by looking at the existing frames. It then creates new intermediary frames (up to seven of them) so the motion is smoother when you slow it down.
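To illustrate what "creating intermediary frames" means, here is a deliberately naive baseline: linearly cross-fading between two captured frames to synthesize seven in-between frames. This is not Nvidia's method (its network predicts motion-aware frames rather than blending pixels), and the function name and toy frames are illustrative assumptions; a plain cross-fade like this ghosts moving objects, which is exactly why a learned approach is needed.

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n_intermediate=7):
    """Naive baseline: linearly blend two frames into n intermediate frames.

    Nvidia's network predicts motion-aware frames instead; a plain
    cross-fade like this ghosts moving objects rather than tracking them.
    """
    frames = []
    for i in range(1, n_intermediate + 1):
        t = i / (n_intermediate + 1)  # fraction of the way from frame_a to frame_b
        blended = (1 - t) * frame_a + t * frame_b
        frames.append(blended.astype(frame_a.dtype))
    return frames

# Two tiny grayscale "frames": seven interpolated frames give a 30fps pair
# the frame spacing of a 240fps clip (8x as many frames per second).
a = np.zeros((4, 4), dtype=np.float32)
b = np.full((4, 4), 80.0, dtype=np.float32)
mids = interpolate_frames(a, b, n_intermediate=7)
print(len(mids))             # 7 new frames
print(float(mids[0][0, 0]))  # 10.0, the first step of the fade
```

Even this toy version shows the arithmetic of the speed-up: one extra frame would double the apparent frame rate, while seven turn 30fps into the equivalent spacing of 240fps.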
The new frames are a good approximation of reality, but they’re not real; most of the frames in the final slow-motion clip are computer generated. Nvidia used Tesla V100 GPUs and the cuDNN deep learning library to build a model for processing regular video into high-speed video. The team trained the network on more than 11,000 videos of everyday events filmed at 240 frames per second. The network learned to emulate 240fps footage so it could predict the extra frames in videos that only have 30 frames per second.
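The relationship between the 240fps training footage and the 30fps input can be sketched with simple slicing: every eighth frame of a high-speed clip stands in for the 30fps input, and the seven frames between each pair are the ground truth the network learns to reproduce. This data layout is an illustrative assumption; Nvidia's exact training pipeline isn't described in detail here.

```python
# Sketch of how 240fps footage could supply training pairs for a 30fps model.
# Every eighth frame simulates the 30fps input; the seven frames between each
# pair are the ground-truth targets. (Illustrative assumption, not Nvidia's
# documented pipeline.)
clip_240fps = list(range(33))  # stand-in for 33 consecutive frame indices

inputs = clip_240fps[::8]  # simulated 30fps frames: indices 0, 8, 16, 24, 32
targets = [clip_240fps[i + 1:i + 8] for i in inputs[:-1]]  # 7 in-between frames per pair

print(inputs)      # [0, 8, 16, 24, 32]
print(targets[0])  # [1, 2, 3, 4, 5, 6, 7]
```

Because the targets come free from the high-speed footage itself, no manual labeling is needed; the network is scored on how closely its predicted frames match the real in-between frames.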
Nvidia successfully slowed down 30fps video of a car skidding across a road, people playing sports, and more. Without the interpolated frames from the neural network, slowing down that video just makes it look like choppy stop-motion animation. The network even works on video that was already filmed in slow motion. Nvidia used footage from the YouTube series The Slow Mo Guys, which is already shot at 240fps, and the team slowed those videos down by a further factor of four with the same smooth motion.
The examples shown by Nvidia are impressive. If you didn’t know the footage was mostly computer generated, you might not suspect anything was amiss. However, a notable limitation is that the neural network can’t handle just any video. It was trained on certain types of footage, so it can only slow down similar 30fps videos. For example, to slow down video of the car skidding, Nvidia needed to train the network with high-speed video of cars skidding. This is still quite impressive for a research project. Hopefully, it becomes a real consumer technology someday.