Nvidia AI Can Render Complete Urban Environments in Unreal Engine 4
Interest in artificial neural networks has skyrocketed in recent years as companies like Google and Facebook have invested heavily in machines that can think like humans. Today, an AI can recognize objects in photos or help generate realistic computer speech, but Nvidia has successfully built a neural network that can create an entire virtual world with the help of a game engine. The researchers speculate this “hybrid” approach could one day make AI-generated games a reality.
The system built by Nvidia engineers uses many of the same parts as other AI experiments, but they’re arranged in a slightly different way. The goal of the project was to create a simple driving simulator without using any humans to design the environment.
Like all neural networks, the system needed training data. Luckily, work on self-driving cars has ensured there’s plenty of training footage of vehicles driving around city streets. The team used a segmentation network to recognize different object categories like trees, cars, sky, buildings, and so on. That segmented data is what Nvidia fed into its model, which used a generative adversarial network to improve the accuracy of the final output. Essentially, one network created rendered scenes, and a second network passed or failed them. Over time, the generator was tuned to create only believable imagery (a simplified sketch of that loop follows below).
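For readers curious what that pass/fail loop looks like in practice, here is a minimal, hypothetical PyTorch sketch. It is not Nvidia’s code; the real model is far deeper and operates on video, and every class and parameter name below is our own illustration of the adversarial training dynamic.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Hypothetical generator: maps a one-hot segmentation map to an RGB frame."""
    def __init__(self, label_channels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(label_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, seg_map):
        return self.net(seg_map)

class Discriminator(nn.Module):
    """Hypothetical discriminator: scores frames, high means 'looks real'."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, frame):
        return self.net(frame).mean(dim=(1, 2, 3))  # one score per frame

gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(seg_map, real_frame):
    batch = real_frame.size(0)
    # Discriminator: learn to pass real frames and fail generated ones.
    fake = gen(seg_map).detach()
    d_loss = bce(disc(real_frame), torch.ones(batch)) + \
             bce(disc(fake), torch.zeros(batch))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: learn to produce frames the discriminator passes as real.
    g_loss = bce(disc(gen(seg_map)), torch.ones(batch))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Run against real dashcam frames paired with their segmentation maps, this tug-of-war is what gradually pushes the generator toward believable output.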
Nvidia plugged its AI into Unreal Engine 4, which powers games like PUBG, Fortnite, and Octopath Traveler. However, the team didn’t render a full environment in UE4. Rather, the AI only got the basic topology of an urban environment. The AI filled in the world with buildings, roads, vehicles, and textures based on what it learned from the training data (a sketch of how such a layout might be encoded appears below).
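Nvidia hasn’t spelled out exactly how the engine hands that topology to the network, but a common approach for conditional generators is to encode the scene layout as a one-hot label map. The labels and function below are purely illustrative:

```python
import torch
import torch.nn.functional as F

# Hypothetical label IDs for the coarse layout the engine supplies.
LABELS = {"road": 0, "building": 1, "vehicle": 2, "tree": 3, "sky": 4}

def layout_to_input(label_map: torch.Tensor, num_labels: int = len(LABELS)) -> torch.Tensor:
    """Turn an integer label map (H, W) into the one-hot (1, C, H, W) tensor
    a conditional generator like the one sketched above would consume."""
    one_hot = F.one_hot(label_map.long(), num_labels)      # (H, W, C)
    return one_hot.permute(2, 0, 1).unsqueeze(0).float()   # (1, C, H, W)

# Usage: a toy 4x4 scene that is mostly sky with a road along the bottom.
layout = torch.full((4, 4), LABELS["sky"])
layout[3, :] = LABELS["road"]
print(layout_to_input(layout).shape)  # torch.Size([1, 5, 4, 4])
```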
The resulting footage looks almost real, and Nvidia didn’t even need a massive server farm to run the AI generating this virtual world. It needed only a single GPU, although admittedly it was a monstrously powerful graphics card: a $3,000 Titan V. The AI generated 25 frames per second, enough to make the simulation smooth and playable. However, the AI would change colors and textures from frame to frame. To a machine with no memory, nothing about that is strange; each frame is a new event with no connection to the previous one. To fix that, Nvidia engineers gave the AI a rudimentary short-term memory to keep its output consistent, an idea sketched below.
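One simple way to give a frame generator that kind of short-term memory (again, an illustrative guess rather than Nvidia’s published architecture) is to feed each previous output frame back in alongside the current label map:

```python
import torch
import torch.nn as nn

class TemporalGenerator(nn.Module):
    """Hypothetical sketch: concatenating the previous output frame with the
    current label map gives the network a crude short-term memory, so colors
    and textures carry over instead of being re-invented every frame."""
    def __init__(self, label_channels=8):
        super().__init__()
        # Input = current segmentation map + previous RGB frame.
        self.net = nn.Sequential(
            nn.Conv2d(label_channels + 3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, seg_map, prev_frame):
        return self.net(torch.cat([seg_map, prev_frame], dim=1))

# Rolling the generator forward over a clip: each frame sees the last one.
gen = TemporalGenerator()
prev = torch.zeros(1, 3, 64, 64)     # blank "memory" for the first frame
clip = torch.rand(10, 1, 8, 64, 64)  # ten segmentation maps
frames = []
for seg in clip:
    prev = gen(seg, prev)
    frames.append(prev.detach())
```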
While this technology is fascinating, the designers caution that it may be decades before games actually use AI-generated worlds. They compare the project to ray tracing, a technique for rendering more realistic lighting: demos go back many years, but Nvidia only launched ray tracing-capable GPUs a few weeks ago.