How Nvidia’s RTX Real-Time Ray Tracing Works
Since it was invented decades ago, ray tracing has been the holy grail for rendering computer graphics realistically. By tracing individual light rays as they bounce off multiple surfaces, it can faithfully recreate reflections, sub-surface scattering (when light penetrates a short distance into a surface like human skin and scatters back out as it goes), translucency, and other nuances that help make a scene compelling. It’s commonly used in feature films and advertising, but the large amount of processing time needed has kept it away from real-time applications like gaming.
With its new Turing GPU family, Nvidia is aiming to change all that by rolling out improved real-time support for RTX, its high-performance ray tracing library that can churn out detailed scenes at game-worthy frame rates. We’ll take a deeper look at what makes ray tracing hard, what makes Nvidia’s RTX special, and how its Turing architecture further accelerates it.
How Ray Casting Turns Into Ray Tracing
Theoretically, ray tracing involves generating rays (usually at random) from each light source in a scene and following them as they strike and reflect off surfaces. At each surface, the properties of the light are combined with the properties of the material it’s striking and, of course, the angle at which the ray intersects it. The light, which may have picked up a different color from reflecting off the object, is then traced further, using multiple rays that simulate the reflected light (thus the term ray tracing). The tracing process continues until the rays leave the scene.
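To make the waste in that light-first approach concrete, here’s a toy sketch in Python. It is not Nvidia’s code, and every number in it is invented for illustration: it casts random rays from a point light and counts how many ever pass near a small virtual camera. Almost none do, which is exactly the problem the next paragraph describes.

```python
import math, random

LIGHT = (0.0, 5.0, 0.0)     # point light position (invented)
CAMERA = (0.0, 1.0, -10.0)  # virtual camera position (invented)
APERTURE = 0.1              # how close a ray must pass to "reach" the camera

def random_direction():
    """Pick a uniformly random direction on the unit sphere."""
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def closest_approach(origin, direction, point):
    """Nearest distance between the ray origin + t*direction (t >= 0) and point."""
    to_p = tuple(point[i] - origin[i] for i in range(3))
    t = max(0.0, sum(to_p[i] * direction[i] for i in range(3)))
    return math.sqrt(sum((origin[i] + t * direction[i] - point[i]) ** 2
                         for i in range(3)))

N = 100_000
hits = sum(1 for _ in range(N)
           if closest_approach(LIGHT, random_direction(), CAMERA) < APERTURE)
print(f"{hits} of {N:,} rays from the light ever reach the camera")  # typically a handful
```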
While that process is great in theory, it’s incredibly time-consuming, as most rays never reach anything we’re interested in, and others can bounce around nearly indefinitely. So real-world solutions make a clever optimization. They use a principle of light called reciprocity, which states that a light path works the same way when run in reverse, and cast rays from the virtual camera out into the scene instead. That means only rays that will contribute to the final scene are cast, greatly increasing efficiency. Those rays are then followed (traced) as they bounce around until they either hit a light source or exit the scene. Even a ray that exits may do so at a point that adds light (like the sky), so either way, the illumination the ray accumulates is added to each surface it struck along the way. The software may also limit how many reflections it will follow for a ray if the light contribution is likely to be small.
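Here’s a minimal sketch of that camera-first loop in Python, again with invented scene values rather than anything from Nvidia: rays start at the camera, bounce diffusely off a single gray sphere, and are followed until they escape to a sky that adds light or run out of bounces.

```python
import math, random

SPHERE_CENTER = (0.0, 0.0, -3.0)
SPHERE_RADIUS = 1.0
ALBEDO = 0.7          # fraction of light the sphere reflects (invented)
SKY = 1.0             # radiance added when a ray exits to the sky (invented)
MAX_BOUNCES = 4       # the reflection limit mentioned above

def hit_sphere(origin, direction):
    """Return the distance to the sphere along the ray, or None if it misses."""
    ox = origin[0] - SPHERE_CENTER[0]
    oy = origin[1] - SPHERE_CENTER[1]
    oz = origin[2] - SPHERE_CENTER[2]
    b = ox * direction[0] + oy * direction[1] + oz * direction[2]
    c = ox * ox + oy * oy + oz * oz - SPHERE_RADIUS ** 2
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 1e-4 else None

def random_unit():
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def trace(origin, direction):
    """Follow one camera ray; return how much light it carries back."""
    throughput = 1.0
    for _ in range(MAX_BOUNCES):
        t = hit_sphere(origin, direction)
        if t is None:
            return throughput * SKY     # exited the scene: the sky adds light
        hit = tuple(origin[i] + t * direction[i] for i in range(3))
        normal = tuple((hit[i] - SPHERE_CENTER[i]) / SPHERE_RADIUS for i in range(3))
        # diffuse bounce: random direction in the hemisphere around the normal
        d = random_unit()
        if sum(d[i] * normal[i] for i in range(3)) < 0:
            d = tuple(-d[i] for i in range(3))
        origin, direction = hit, d
        throughput *= ALBEDO            # the surface absorbs some of the light
    return 0.0                          # gave up: remaining contribution assumed small

# one camera ray straight down the -z axis, averaged over many samples
samples = [trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0)) for _ in range(1000)]
print("pixel brightness:", sum(samples) / len(samples))
```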
Ray Casting Is Massively Processor Intensive
The total number of rays of light falling on a scene is beyond what even the powerful computers used for ray tracing can model. Practically, that means tracers have to pick a quality level that determines how many rays are cast from each pixel of the scene in various directions. Then, after calculating where each ray intersects an object in the scene, the tracer needs to follow some number of rays out from each intersection to model reflected light. Calculating those intersections is relatively expensive. Until recently it was also limited to CPUs. Moving it to the GPU, as some modern ray tracers can do, has provided a major improvement in speed, but Hollywood-quality results still require nearly 10 hours per frame on a high-end multi-GPU mini-supercomputer.
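Part of why each intersection is expensive: even a single ray-triangle test (a common workhorse, since most scene geometry is triangle meshes) takes a cross product, several dot products, and a division, and a naive tracer runs one per triangle per ray. Here’s the standard Möller-Trumbore test sketched in plain Python, with invented example geometry:

```python
def sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b):   return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                         a[2] * b[0] - a[0] * b[2],
                         a[0] * b[1] - a[1] * b[0])

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Return the distance t along the ray to triangle v0-v1-v2, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:              # ray parallel to the triangle's plane
        return None
    inv = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv             # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv     # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None

# a ray fired straight at a triangle facing the camera
print(ray_triangle((0, 0, 0), (0, 0, -1),
                   (-1, -1, -5), (1, -1, -5), (0, 1, -5)))  # -> 5.0
```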
Because ray tracing is so processor intensive, interactive applications like games have been unable to use it except to generate compelling scenes in advance. For real-time rendering, they rely on rasterization, where each object’s surfaces are shaded based on their material properties and which lights fall on them. Clever optimizations such as light baking help make it possible for games to render at high frame rates while still looking great. But they fall short when it comes to subtle interactions like sub-surface scattering. So for the ultimate in recreating realistic scenes, the goal has always been real-time ray tracing.
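For contrast, here is the core of that rasterization-style shading in Python: a classic Lambert diffuse term that shades a surface point from its material and the lights alone, with no rays traced (all values are illustrative). It’s fast, but it’s also why subtler effects are out of reach.

```python
def lambert(normal, light_dir, albedo, light_intensity):
    """Diffuse shading: brightness falls off with the angle to the light."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return albedo * light_intensity * max(0.0, n_dot_l)

# a surface facing straight up, lit at 45 degrees
print(lambert((0, 1, 0), (0, 0.7071, 0.7071), 0.8, 1.0))  # ~0.57
```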
Nvidia RTX Uses Its GPUs to Accelerate Ray Tracing
RTX relies on a fancy new denoising module. Denoising is very important in ray tracing because you can only cast a limited number of rays from each pixel in the virtual camera. So unless you leave your ray tracer running long enough to fill in the scene, you wind up with a lot of unpleasant-looking “bald spots,” or noise. There are some great research projects that help optimize which rays are cast, but you still wind up with a lot of noise. If that noise can be reduced in a separate step, you can produce quality output much faster than if you need to address the issue by sending out that many more rays, and Nvidia uses this technique to help it create frames more quickly. This capability likely builds on the AI-powered denoising work Nvidia presented at SIGGRAPH last year, as performing it in real time relies heavily on the Tensor “AI” Cores in its Turing GPUs.
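Nvidia’s denoiser is a trained neural network; as the simplest possible stand-in, the Python sketch below just box-blurs a noisy one-sample-per-pixel image (the image itself is synthetic). A real denoiser is vastly more sophisticated, but the payoff is the same: less noise without casting more rays.

```python
import random

SIZE = 64
TRUE_VALUE = 0.5   # the brightness the tracer would converge to

# a "1 sample per pixel" render: right on average, but noisy per pixel
noisy = [[TRUE_VALUE + random.uniform(-0.3, 0.3) for _ in range(SIZE)]
         for _ in range(SIZE)]

def box_blur(img):
    """Average each pixel with its 3x3 neighborhood (clamped at the edges)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def rms_error(img):
    n = sum(len(row) for row in img)
    return (sum((v - TRUE_VALUE) ** 2 for row in img for v in row) / n) ** 0.5

print("noise before:", round(rms_error(noisy), 4))
print("noise after: ", round(rms_error(box_blur(noisy)), 4))  # roughly 3x lower
```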
Custom RT Cores Facilitate Real-Time Ray Tracing
The heavy lifting in any type of physically based ray tracing is the calculation of intersections. For each ray cast from the camera (of which hundreds are typically needed for each pixel of each frame), the software needs to find the object it intersects. Multiple outgoing rays then need to be sent from that object, and those intersections calculated, and so on until the ray either leaves the scene (and is assigned a value based on the environmental lighting map) or hits a radiant light source. Denoising reduces the number of rays needed, but the number is still staggering.
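A quick back-of-envelope calculation shows just how staggering. With purely illustrative numbers (none of them Nvidia’s), a naive 1080p tracer at game frame rates needs tens of billions of intersection-heavy ray segments per second:

```python
width, height = 1920, 1080   # one 1080p frame
samples_per_pixel = 100      # "hundreds ... for each pixel," per the text
bounces_per_ray = 4          # secondary rays followed per camera ray
fps = 60                     # a game-worthy frame rate

rays_per_frame = width * height * samples_per_pixel * bounces_per_ray
print(f"ray segments per frame:  {rays_per_frame:,}")        # ~829 million
print(f"ray segments per second: {rays_per_frame * fps:,}")  # ~50 billion
```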
To help address this issue, Nvidia has developed purpose-built RT Cores as part of its Turing architecture. There isn’t much public information about them yet, but in particular, they allow for much faster calculation of the intersection of a ray with objects stored in a Bounding Volume Hierarchy (BVH), a popular data structure for representing objects in ray tracers.
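While the RT Cores themselves are opaque, the BVH idea is standard: wrap groups of objects in nested axis-aligned boxes so a ray that misses a box can skip everything inside it. A minimal Python sketch, with the node structure and the two-sphere scene invented for illustration:

```python
import math

def hit_box(origin, direction, lo, hi):
    """Slab test: does the ray pass through the axis-aligned box [lo, hi]?"""
    tmin, tmax = 0.0, math.inf
    for axis in range(3):
        if direction[axis] == 0.0:
            if not lo[axis] <= origin[axis] <= hi[axis]:
                return False
            continue
        inv = 1.0 / direction[axis]
        t0 = (lo[axis] - origin[axis]) * inv
        t1 = (hi[axis] - origin[axis]) * inv
        if t0 > t1:
            t0, t1 = t1, t0
        tmin, tmax = max(tmin, t0), min(tmax, t1)
        if tmin > tmax:
            return False
    return True

class BVHNode:
    def __init__(self, lo, hi, children=(), objects=()):
        self.lo, self.hi = lo, hi     # box bounding everything below this node
        self.children = children      # inner node: child BVH nodes
        self.objects = objects        # leaf node: actual scene objects

def traverse(node, origin, direction, hits):
    """Collect objects whose enclosing boxes the ray enters."""
    if not hit_box(origin, direction, node.lo, node.hi):
        return                        # miss the box: skip the whole subtree
    hits.extend(node.objects)         # leaf objects still need exact tests
    for child in node.children:
        traverse(child, origin, direction, hits)

# two leaves under one root; the ray only reaches objects in the left box
left = BVHNode((-2, -1, -6), (-1, 1, -4), objects=["sphere A"])
right = BVHNode((1, -1, -6), (2, 1, -4), objects=["sphere B"])
root = BVHNode((-2, -1, -6), (2, 1, -4), children=(left, right))

candidates = []
traverse(root, (-1.5, 0.0, 0.0), (0.0, 0.0, -1.0), candidates)
print(candidates)  # ['sphere A'] -- sphere B was never even considered
```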
RTX Comes to Consumer GPUs With New Turing 20xx Cards
You can see what Nvidia expects from developers in the following demo clip built by the Battlefield V team. It shows reflections from off-screen objects, including flames and explosions, which wouldn’t be possible using only rendering based on what is on the screen. The reflections also closely model the physical properties of surfaces, another big advantage of ray tracing.
Initially, it is likely that ray-traced elements or specific scenes will appear as features mixed into games, rather than being used to create entire games, since the vast majority of gamers won’t be able to run RTX for quite a while. Some game devs have also said that it would be a massive undertaking, and a large performance hit, to re-architect their titles to support ray tracing — no matter how fast the ray tracer itself is. Nvidia is certainly chumming the water, with a solid list of titles it claims are actively working on implementing real-time ray tracing using RTX, but we’ll have to wait and see what they actually deliver and when.