Nvidia AI Can Render Complete Urban Environments in Unreal Engine 4
Interest in artificial neural networks has skyrocketed in recent years as companies like Google and Facebook have invested heavily in machines that can think like humans. Today, an AI can recognize objects in photos or help generate realistic computer speech, but Nvidia has built a neural network that can create an entire virtual world with the help of a game engine. The researchers speculate this “hybrid” approach could one day make AI-generated games a reality.
The system built by Nvidia engineers uses many of the same parts as other AI experiments, but they’re arranged in a slightly different way. The goal of the project was to create a simple driving simulator without having any humans design the environment.
Like all neural networks, the system needed training data. Luckily, work on self-driving cars has ensured there’s plenty of footage of vehicles driving around city streets. The team used a segmentation network to recognize different object categories like trees, cars, sky, buildings, and so on. That segmented data is what Nvidia fed into its model, which used a generative adversarial network to improve the accuracy of the final output. Essentially, one network created rendered scenes, and a second network would pass or fail them. Over time, the first network was tuned to produce only believable frames.
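To make the adversarial setup concrete, here is a minimal sketch in PyTorch of the idea described above: a generator turns a semantic segmentation map into an RGB frame, while a discriminator learns to pass real dashcam frames and fail rendered ones. The network shapes, class count, and training loop are illustrative assumptions, not Nvidia’s actual code.

```python
# Hypothetical sketch of a segmentation-to-image GAN step (not Nvidia's implementation).
import torch
import torch.nn as nn

NUM_CLASSES = 20  # assumed number of segmentation categories (trees, cars, sky, ...)

generator = nn.Sequential(            # stand-in for the image-synthesis network
    nn.Conv2d(NUM_CLASSES, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
)
discriminator = nn.Sequential(        # stand-in for the pass/fail network
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(seg_map, real_frame):
    """One adversarial update. seg_map: one-hot segmentation tensor (B, NUM_CLASSES, H, W);
    real_frame: the matching real-world frame (B, 3, H, W)."""
    # Discriminator: pass real frames, fail generated ones.
    fake_frame = generator(seg_map).detach()
    real_score = discriminator(real_frame)
    fake_score = discriminator(fake_frame)
    d_loss = bce(real_score, torch.ones_like(real_score)) + \
             bce(fake_score, torch.zeros_like(fake_score))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: produce frames the discriminator passes as real.
    fake_score = discriminator(generator(seg_map))
    g_loss = bce(fake_score, torch.ones_like(fake_score))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```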
Nvidia plugged its AI into Unreal Engine 4, which powers games like PUBG, Fortnite, and Octopath Traveler. However, the team didn’t render a full environment in UE4. Rather, the engine supplied only the basic topology of an urban environment, and the AI filled in the world with buildings, roads, vehicles, and textures based on what it learned from the training data.
The resulting footage looks almost real, and Nvidia didn’t even need massive server farms to run the AI generating this virtual world. It only needed a single GPU, although, admittedly, that was a monstrously powerful one: a $3,000 Titan V. The AI rendered 25 frames per second, which is enough to make the simulation smooth and playable. However, the AI would change the colors and textures in every frame, because there’s nothing strange about that to a machine without any memory; each frame is a new event with no connection to the previous one. To fix that, Nvidia engineers gave the AI a rudimentary short-term memory to help it remain consistent.
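One simple way to picture that short-term memory is to condition the generator on its own previous output as well as the current segmentation map, so colors and textures carry over from frame to frame. The sketch below, again in PyTorch with an assumed network and rollout loop, illustrates that idea; it is not Nvidia’s implementation.

```python
# Hypothetical sketch of previous-frame conditioning as a "short-term memory".
import torch
import torch.nn as nn

NUM_CLASSES = 20  # assumed segmentation label count

class FrameConditionedGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # Input = current segmentation map + previous RGB frame; output = new RGB frame.
        self.net = nn.Sequential(
            nn.Conv2d(NUM_CLASSES + 3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, seg_map, prev_frame):
        return self.net(torch.cat([seg_map, prev_frame], dim=1))

def render_sequence(generator, seg_maps):
    """Roll out a video: each frame sees the previous output, so textures and
    colors stay consistent instead of being re-invented every frame."""
    b, _, h, w = seg_maps[0].shape
    prev = torch.zeros(b, 3, h, w)  # black frame as the initial "memory"
    frames = []
    for seg in seg_maps:
        prev = generator(seg, prev)
        frames.append(prev)
    return frames
```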
While this technology is fascinating, the designers caution that it may be decades before games actually use AI-generated worlds. They compare this project to early work on ray tracing, which is used to generate more realistic lighting: ray tracing demos go back many years, but Nvidia only launched ray-tracing-capable GPUs a few weeks ago.