
SIGGRAPH 2021: these NVIDIA renders don’t look like much, but they might be the future of rendering


At first glance, Neural Scene Graph Rendering looks like just another technical paper, and it can even seem underwhelming: the renders shown by the authors are not as impressive as what you'll find in a typical SIGGRAPH talk. Looking at them, one might even wonder whether we really are in 2021.

But looks can be very deceiving.

The best of both worlds

The authors, Jonathan Granskog, Till N. Schnabel, Fabrice Rousselle and Jan Novák from NVIDIA, explain that their research project is actually part of a long-term goal, one that could revolutionize rendering as we know it.

At the moment, two very different sets of techniques can be used to create and render images:

  • Traditional 3D rendering, widely used in VFX and CG animation. It is well suited to a wide variety of projects, but photorealism can be quite difficult to achieve.
Project Mike (SIGGRAPH 2017), a good example of traditional 3D rendering (here, in real time).
  • Neural rendering, in other words techniques relying on AI/deep learning. In the last few years, you have probably stumbled upon tools capable of creating photorealistic portraits of people who don't exist, or NVIDIA Canvas, a tool that turns brushstrokes into realistic landscape images. These tools are good examples of neural rendering. Here, the “neural scene representation” might be a set of latent variables, while the “neural renderer” is the generator network that creates the picture (see the sketch after this list).
Faces produced by a StyleGAN: none of these people are real.
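To make that distinction a bit more concrete, here is a minimal, purely illustrative sketch in PyTorch: a latent vector plays the role of the “neural scene representation”, and a small generator network plays the role of the “neural renderer” that decodes it into an image. The class name, layers and dimensions are assumptions made for this example; this is not the architecture of StyleGAN or of the NVIDIA paper.

```python
import torch
import torch.nn as nn

class TinyNeuralRenderer(nn.Module):
    """Toy generator: decodes a latent 'scene representation' into an RGB image.

    Illustrative stand-in only; not the network used in the paper or in StyleGAN.
    """
    def __init__(self, latent_dim: int = 64, image_size: int = 32):
        super().__init__()
        self.image_size = image_size
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 3 * image_size * image_size),
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, scene_code: torch.Tensor) -> torch.Tensor:
        flat = self.decoder(scene_code)
        return flat.view(-1, 3, self.image_size, self.image_size)

# The "neural scene representation": a latent vector instead of meshes and materials.
scene_code = torch.randn(1, 64)
renderer = TinyNeuralRenderer()
image = renderer(scene_code)  # shape: (1, 3, 32, 32)
print(image.shape)
```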

If we compare these two sets of techniques, for example to create a face, 3D rendering gives very good results, but probably not as photorealistic as a neural render.

On the other hand, as the authors of the paper explain, if you want to create something really different, such as an alien head, your face generator won't be able to produce it, since it was not trained on the right data, whereas it is quite easy to edit a 3D head model within Maya or Blender.

Top: traditional pipeline. Bottom: neural rendering.

Artistic freedom, realism, ability to work on entirely new projects… In the end, each approach has its specific pros and cons.

But what if we could combine those two sets of techniques? What if we created a bridge between these approaches? We could then render some parts of an image or animation using traditional 3D rendering, and others with neural rendering. For example, as the authors suggest, neural rendering could be used for faces and fur, which are notoriously complex to handle with traditional 3D rendering.

This idea is quite appealing, but at this stage this is just an idea. A lot of work is needed before we can marry these two widely different approaches.

A 3D scene rendered without a traditional renderer

The paper by Jonathan Granskog, Till N. Schnabel, Fabrice Rousselle and Jan Novák is a step in this direction. The research team focused on translating a traditional scene graph (the way a 3D scene is represented, with 3D models, primitives, materials, lights, etc.) into the neural world. In other words, they present a neural scene graph, a “modular and controllable representation of scenes with elements that are learned from data”, which can then be fed to a neural renderer to create pictures and animations.

To be clear, the paper does not (yet) present a way to marry traditional and neural approaches; rather, it presents a way to translate traditional 3D scenes into the neural world.
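As a rough mental model of what “translating a scene graph into the neural world” could mean, here is a tiny, hypothetical sketch in Python: each node keeps a classic transform, but its geometry and appearance are represented by learned feature vectors rather than explicit meshes and materials, and the graph is flattened into the stream a neural renderer network would consume. All names and dimensions here are assumptions made for illustration; the paper's actual representation and renderer are more involved.

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class NeuralSceneNode:
    """One element of a toy 'neural scene graph'.

    Instead of an explicit mesh and material, each node carries learned
    feature vectors (random placeholders here) plus a classic transform.
    Field names are illustrative assumptions, not the paper's exact layout.
    """
    geometry_code: np.ndarray    # learned shape embedding
    appearance_code: np.ndarray  # learned material/texture embedding
    transform: np.ndarray        # 4x4 model matrix, as in a classic scene graph
    children: List["NeuralSceneNode"] = field(default_factory=list)

def flatten_graph(node: NeuralSceneNode, parent_tf: np.ndarray = None):
    """Traverse the graph and yield (world transform, codes) tuples,
    i.e. the flat stream a neural renderer network would consume."""
    if parent_tf is None:
        parent_tf = np.eye(4)
    world_tf = parent_tf @ node.transform
    yield world_tf, node.geometry_code, node.appearance_code
    for child in node.children:
        yield from flatten_graph(child, world_tf)

# A tiny scene: one parent with one child, both carrying 16-dim learned codes.
rng = np.random.default_rng(0)
child = NeuralSceneNode(rng.normal(size=16), rng.normal(size=16), np.eye(4))
root = NeuralSceneNode(rng.normal(size=16), rng.normal(size=16), np.eye(4), [child])

for world_tf, geo, app in flatten_graph(root):
    print(world_tf.shape, geo.shape, app.shape)
```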

The research team managed to create some interesting results. Here are some of them:

  • An animation with two tori playing beach volleyball with a volumetric ball:
  • A small tool allowing real-time editing of a neural scene graph rendered with a neural renderer. What you see is therefore not handled using traditional 3D rendering techniques.
  • This technique can also be used with 2D sprite animations. Once again, what you see is the result of neural rendering, not of a traditional renderer.

Back to the big picture

Once again, it should be highlighted that even if these renders might seem underwhelming at first glance, we need to consider the big picture. At this stage, the goal is not to provide production-ready tools.

However, with these tests, the research team proved that precise control can be achieved using neural rendering, with both animated 2D and 3D scenes.

This is therefore a first step towards the long-term vision of the authors: creating a bridge between a traditional 3D workflow and neural rendering. Or, as the authors put it, “marrying traditional editing mechanisms with learned representations, and towards high-quality, controllable neural rendering”.

Does this mean we are at the dawn of a revolution? Only time will tell, and there is still a long road ahead. As usual, 3DVF.com will keep you updated on further developments.

In the meantime, we highly recommend you watch the following video presentation. In particular, at the 30 second mark, the authors explain their long-term vision. You can also read the full paper (11 pages): Neural Scene Graph Rendering.

