
SIGGRAPH 2021: The future is now with Real-Time Live!


Every year, Real-Time Live! is a highlight of SIGGRAPH: a live event during which the latest, most original interactive projects of the year are showcased.

This year, seven cutting-edge real-time technology demos were selected, with topics ranging from experimental art to avatars and next-gen classrooms. You will find the recording of the event below, along with the timecodes of each demo. If you'd rather pick and choose which demos to watch, a short explanation of each project follows the video.

4:07 – The LiViCi Music Series: Real-Time Immersive Musical Circus Performance
10:59 – I Am AI – AI-driven Digital Avatar Made Easy
18:40 – Shading Rig: Dynamic Art-Directable Stylised Shading for 3D Characters
25:31 – Coretet: VR Musical Instruments for the 21st Century
32:53 – Technical Art Behind the Animated Shot “Windup”
40:37 – Future Classroom
47:25 – Normalized Avatar Digitization for Communication in VR

LiViCi: Circus is getting high-tech

Athomas Goldberg and Samuel Tetreault showcased LiViCi (“Live and Virtual Circus”), live from Animatrik Film Design. At the heart of LiViCi is a simple idea: what if we could mix circus, motion capture, and virtual production tools? During the live demo, several artists performed on a motion capture stage to create a live performance using 3D avatars and a virtual camera.

This demo received the Audience Choice Award.

I Am AI: videoconferencing without a webcam thanks to NVIDIA

NVIDIA Research told us about an impressive technology breakthrough: thanks to deep learning, one can create animated avatars from a single photo and audio speech or text. No video is streamed, which means much less bandwidth is needed.
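To get a sense of the bandwidth argument, here is a quick back-of-envelope sketch in Python. The bitrates, keypoint count, and frame rate are our own illustrative assumptions, not figures from NVIDIA's presentation:

```python
# Rough comparison: streaming compressed video vs. streaming only audio
# plus per-frame facial keypoints to drive an avatar. All figures below
# are illustrative assumptions, not numbers from NVIDIA's presentation.

VIDEO_KBPS = 1_500        # assumed bitrate of a typical 720p video call
AUDIO_KBPS = 24           # assumed speech codec bitrate (e.g. Opus, mono)
KEYPOINTS = 70            # assumed number of facial landmarks per frame
FLOATS_PER_KEYPOINT = 2   # assumed 2D coordinates, stored as 32-bit floats
FPS = 30

# Kilobits per second needed to send the keypoints driving the avatar.
keypoint_kbps = KEYPOINTS * FLOATS_PER_KEYPOINT * 4 * 8 * FPS / 1000
avatar_kbps = AUDIO_KBPS + keypoint_kbps

print(f"video call : {VIDEO_KBPS} kbps")
print(f"avatar call: {avatar_kbps:.0f} kbps "
      f"(about {VIDEO_KBPS / avatar_kbps:.0f}x less)")
```

Even with generous assumptions for the keypoint data, the avatar stream comes out roughly an order of magnitude smaller than a compressed video feed.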

Thanks to I Am AI, you can use any picture of yourself to create an avatar and join an online meeting, even while wearing pyjamas or sitting on a beach. And if there is too much noise around you, feel free to switch to your keyboard: I Am AI can use text-to-speech and animate your avatar at the same time.

Of course, since this is an NVIDIA project, everything runs on the GPU. According to the presentation, I Am AI will run at 20fps or above on an RTX GPU. Last but not least, if you want to change the way you look, NVIDIA also showcased a tool that can generate stylized avatars.

Of course, this kind of technology has a few drawbacks. For example, one could use I Am AI to try to impersonate someone else.

I Am AI won the Best in Show Award.

Shading Rig: toon shading is getting better!

Toon shading is an effective tool if you want to give a 2D look to your animation, but this technique can generate unwanted shading artifacts that must be cleaned up. Some smoothing techniques can be used to avoid these artifacts, but you usually end up with a flat-looking render. Lohit Petikam, Ken Anjyo, and Taehyun Rhee presented a novel approach that avoids both of these issues while allowing artists to precisely control the end result.
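For readers unfamiliar with the technique, here is a minimal sketch of classic hard-threshold toon shading, the baseline whose trade-offs are described above. It is an illustrative example only, not the Shading Rig method itself:

```python
import numpy as np

def toon_shade(normals: np.ndarray, light_dir: np.ndarray,
               bands: int = 3) -> np.ndarray:
    """Quantize the Lambertian term N.L into a few flat bands."""
    light_dir = light_dir / np.linalg.norm(light_dir)
    lambert = np.clip(normals @ light_dir, 0.0, 1.0)
    # The hard floor() below is what creates the crisp 2D look, but also
    # the jagged band edges (the artifacts mentioned above) on curved or
    # animated surfaces; smoothing the transition instead tends to
    # flatten the render.
    return np.floor(lambert * bands) / bands

# Three surface normals: facing the light, at 45 degrees, and side-on.
normals = np.array([[0.0, 0.0, 1.0],
                    [0.7071, 0.0, 0.7071],
                    [1.0, 0.0, 0.0]])
print(toon_shade(normals, np.array([0.0, 0.0, 1.0])))  # [1.0, 0.667, 0.0]
```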

Coretet: can you play music in VR?

Rob Hamilton showed us a surprising concept: VR instruments, most of them inspired by real ones (violin, cello…). He also showcased an experimental VR instrument. A century after the invention of the theremin and decades after the first synthesizers, is this the dawn of a new era of instruments?

Windup: how Unity was used to create a touching animated short film

Chris Kang, a member of the creative team for the short film/tech demo Windup, showed us some of the key elements that allowed the team to create such a beautiful short. The timeline, the rig creation process, and even an ice shader for a window were showcased.

If you missed the short when it came out, you can watch it below:

The future of classrooms, according to Ken Perlin

Ken Perlin (who gave his name to Perlin noise) from the NYU Future Reality Lab, together with Zhenyi He, Un Joo Christopher, and Kris Layng, worked on the classroom of the future.

Their idea is to get rid of regular classrooms and Zoom meetings. Instead, they suggest an immersive teaching platform that allows students to learn in a new way. The system was created using volumetric animations, and you can learn more about this tool in the on-demand SIGGRAPH 2021 course, “Inventing the Future.”

Normalized Avatar Digitization for Communication in VR

Finally, McLean Goldwhite, Zejian Wang, Huiwen Luo, Han-Wei Kung, Koki Nagano, Liwen Hu, Lingyu Wei, and Hao Li showcased a Pinscreen and UC Berkeley project centered on avatar generation for VR.

From a single picture of a human face, and thanks to the power of a StyleGAN, their method can create a 3D face; a fully rigged body is also generated. With an Oculus Quest 2, the avatar follows your movements, and even the lips are animated when you speak.

Once again, Real-Time Live! did not disappoint

This year’s Real-Time Live! featured experimental art, digital avatars, and even the future of education. As usual, the event gave us a glimpse of what the future of interactive techniques could be. We especially appreciated that the demonstrations were live, as usual, even though SIGGRAPH 2021 was a virtual-only event.

