Intel Hires Former Nvidia Researcher Who Helped Develop Ray Tracing Tech

Intel is serious about re-entering the graphics market, but the company has a lot of catching up to do to match the decades of experience AMD and Nvidia bring to the table, both in working with developers and in building high-end video card hardware.

Anton Kaplanyan, the researcher in question, has written a blog post laying out his plans for the future:

Recent advances in neural graphics, ray tracing, cloud computing, perceptual and differentiable rendering, materials and lighting, appearance modeling, as well as in commoditized content creation require a transformation of the current graphics platform. This gives us an opportunity to not only completely reinvent how we co-design graphics hardware and software with these technologies in the driver’s seat, but also to look at a larger picture of a potentially novel graphics ecosystem that can include heterogeneous distributed systems and multi-level compute for movie-quality visuals on a wide range of consumer devices.

This might sound like jargon or marketing if you don’t look too closely, but there’s real depth here. Kaplanyan’s mention of “neural graphics” refers to ongoing efforts to build a neural graphics pipeline, including work to integrate generative adversarial network techniques directly into graphics engines. A description of OpenDR, an approximate differentiable renderer, explains the concept as follows: “OpenDR can take color and vertices as input to produce pixels in an image and from those pixels, it retains derivatives such that it can determine exactly what inputs contributed to the final pixel colors. In this way, it can ‘de-render’ an image back into colors and vertices.”
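To make that “de-render” idea concrete, here is a minimal toy sketch of differentiable rendering. It is not OpenDR itself, and every name in it is illustrative: the “renderer” is deliberately reduced to a weighted blend of vertex colors so that PyTorch’s autograd can carry derivatives from pixel error back to the scene inputs.

```python
# Toy sketch of differentiable rendering ("de-rendering") with PyTorch.
# NOT OpenDR -- the "renderer" here is just a differentiable function
# from per-vertex colors to pixels, so gradient descent can recover
# the inputs that produced a target image.
import torch

torch.manual_seed(0)

n_vertices, n_pixels = 4, 16
# Fixed "rasterization" weights: how much each vertex contributes to
# each pixel. In a real renderer this would come from geometry.
weights = torch.rand(n_pixels, n_vertices)

def render(vertex_colors):
    # Pixels are weighted blends of vertex colors (RGB).
    return weights @ vertex_colors

# Ground-truth scene and the image it produces.
true_colors = torch.rand(n_vertices, 3)
target_image = render(true_colors)

# "De-render": start from a random guess and follow gradients of the
# pixel error back to the vertex colors that explain the image.
guess = torch.rand(n_vertices, 3, requires_grad=True)
opt = torch.optim.Adam([guess], lr=0.05)
for step in range(500):
    opt.zero_grad()
    loss = ((render(guess) - target_image) ** 2).mean()
    loss.backward()  # derivatives of pixels w.r.t. vertex colors
    opt.step()

print("max recovery error:", (guess - true_colors).abs().max().item())
```

Real differentiable renderers do the hard part this sketch skips: defining usable (approximate) derivatives through rasterization, occlusion, and shading so the same inverse-problem trick scales to actual images.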

I can’t say for certain what Kaplanyan has in mind when he talks about the commoditization of content creation, but I suspect it applies to tools like the browser-based character creation tools Epic has shown off as part of Unreal Engine 5. What all of this adds up to is a great deal of practical work on cutting-edge methods of displaying pixels on screens.

This kind of research is needed more than ever. GPU power consumption continues to rise; rumors have suggested next-generation cards from Nvidia and AMD could hit 400-500W. Those figures seem high, but GPU power consumption has been climbing for years, and ray tracing makes the climb that much steeper. We’ve already seen how technologies like Nvidia’s DLSS and AMD’s FSR can offset the performance cost of ray tracing by rendering at a lower resolution and upscaling the result with little visible penalty. This lets the end user target a lower native resolution and divert the GPU horsepower saved to the ray-tracing side of the equation if desired. Other techniques, like coarse pixel shading, save GPU power by shading less perceptually important regions of the frame, such as backgrounds, at a lower rate.
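As a back-of-the-envelope illustration of where those savings come from, the snippet below compares shaded pixel counts at common internal resolutions. This is illustrative arithmetic only: it ignores the upscaler’s own runtime cost, and the resolution pairs are simply the commonly cited “quality” and “performance” upscaling targets for 4K output.

```python
# Back-of-the-envelope arithmetic for upscaling: rendering at a lower
# internal resolution and upscaling to the output resolution shrinks
# the number of natively shaded pixels. Illustrative only -- ignores
# the upscaler's own runtime cost and fixed per-frame work.

def shaded_pixel_ratio(render_res, output_res):
    """Fraction of output pixels actually shaded at native resolution."""
    rw, rh = render_res
    ow, oh = output_res
    return (rw * rh) / (ow * oh)

# Commonly cited internal resolutions for 4K output:
# "quality" modes render near 1440p, "performance" modes near 1080p.
quality = shaded_pixel_ratio((2560, 1440), (3840, 2160))
performance = shaded_pixel_ratio((1920, 1080), (3840, 2160))

print(f"1440p -> 4K shades {quality:.0%} of the output pixels")      # ~44%
print(f"1080p -> 4K shades {performance:.0%} of the output pixels")  # ~25%
```

Every pixel that isn’t shaded natively is GPU time that can be redirected toward ray tracing instead.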

It’s going to be a few years yet before ray tracing goes truly mainstream. It looks as though we’ll need to wait at least one more generation before AMD and Nvidia push capable ray tracing down to the $250 price point, and the budget market may take even longer to get there. Nonetheless, the combined advances of machine learning, ray tracing, and new rendering techniques represent a significant change compared with rasterization alone.

Exactly how much of a part Intel has to play in all of this will depend on how good the company’s graphics hardware is — but we should have some answers to that question before too much longer.
