05/08/2018

By: John Root, Virtual Production Supervisor, Technicolor Experience Center


NVIDIA’s GPU Technology Conference (GTC) is an annual event focused on the many ways in which people are using NVIDIA GPUs. The GPU, or ‘Graphics Processing Unit’, was historically the domain of graphics programmers and game developers; in recent years, however, cryptocurrency miners and artificial intelligence researchers have found ways to use these cards to mine Bitcoin and train deep learning models. That shift has made the conference a very technologically diverse place.

This was my second visit to GTC, and based on my past experience, I was expecting it to be very graphics-focused. To my surprise, it was highly focused on AI! I was approached to speak on the Revolutionizing Virtual Production with VR and Deep Learning panel alongside Ben Grossmann, Co-Founder, Magnopus; Darren Hendler, Director, Digital Human Group, Digital Domain; Michael Ford, CTO, Sony Pictures Imageworks; Lap Luu, CTO, Magnopus; Rev Lebaredian, Vice President, GameWorks & Lightspeed Studios, NVIDIA; and Richard Grandy, Senior Solutions Architect, NVIDIA. The diverse mix of panelists represented virtual production from all sides of the industry, giving the audience a range of opinions and perspectives.

Until recently, the GPU was used almost exclusively for graphics. One of my biggest takeaways from GTC was how thoroughly deep learning, a subsection of machine learning built on algorithms inspired by the structure and function of the brain, has changed that. In striking up conversations with leaders in this domain, I found a common thread: they are all craving, even starving for, training data.

Modern algorithms learn by training. By showing a machine thousands, millions, or even billions of training images, it eventually learns to recognize what those images contain. To train a self-driving car, for instance, we would need to show the machine millions of frames of pedestrians walking in crosswalks. But it isn’t enough to show the images; we also need to tag them. Specifics like gender, hair color, and motion all factor into machine learning, so a human must label each image with a description such as “female, walking, holding an umbrella” to drive the learning; the more descriptive the tags, the better. I’ve learned that there is currently no common taxonomy for this. Working at Technicolor, my first thought was that we are in a great position to help in this niche field: using our motion capture stage, we have the capacity to capture data and tag it accordingly to aid in the development of deep learning.
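To make the idea concrete, here is a minimal sketch in Python of what a per-frame tag record might look like. Since no common taxonomy exists yet, the schema and field names (FrameAnnotation, subject_gender, and so on) are hypothetical illustrations, not an existing standard:

```python
import json
from dataclasses import dataclass, asdict
from typing import List

# Hypothetical annotation schema for a single captured frame.
# Field names are illustrative; as noted above, there is no
# common taxonomy for these tags yet.
@dataclass
class FrameAnnotation:
    frame_id: int
    subject_gender: str   # e.g. "female"
    action: str           # e.g. "walking"
    props: List[str]      # e.g. ["umbrella"]
    hair_color: str

# Tag one frame the way a human annotator might; richer
# descriptions make the example more useful for training.
annotation = FrameAnnotation(
    frame_id=1042,
    subject_gender="female",
    action="walking",
    props=["umbrella"],
    hair_color="brown",
)

# Serialize to JSON so the tags travel alongside the image data.
print(json.dumps(asdict(annotation), indent=2))
```

Whatever taxonomy the industry eventually settles on, storing tags in a portable format like JSON keeps them usable alongside the captured frames.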

In addition to my virtual production-focused panel, my colleague Marcie Jastrow spoke on the panel The Future of AI for Media & Entertainment alongside Munika Lay, Director, Strategy & Business Development, End Cue; Vicki Dobbs Beck, Executive in Charge, ILMxLAB; Shalini De Mello, Senior Research Scientist, NVIDIA; and Rick Champagne, Global Media & Entertainment Strategy and Marketing, NVIDIA. This served as an opportunity to share Technicolor’s growing knowledge on the subject and our plans for future activation.

Overall, GTC was a great experience because representatives from Technicolor were able to cross-pollinate with other technology sectors. With our global reach, talented digital artists and scientists, and countless technological resources, Technicolor is well-positioned to aid in the growth of deep learning, and I’m excited to be a part of that initiative.