Keynotes

We are pleased to announce the following keynote speakers for CVMP 2020:

Angjoo Kanazawa, UC Berkeley
Perceiving Humans, Animals and Objects in the 3D World

We live in a 3D world that is dynamic—it is full of life, with inhabitants like people and animals who interact with their environment by moving their bodies. Capturing this complex world in 3D from everyday images or video has huge potential for many applications, such as marker-less motion capture from ordinary devices, compelling mixed reality applications that can interact with people and objects, novel content creation tools for artists, robots that can learn to act by visually observing people, and applications in biometrics, animal behavior sciences, and more.

In this talk I will give a high-level overview of the challenges and developments in this area over the past several years, and of the exciting problems that remain. Revolving around the question “How can we recover the 3D mesh of a human from a single image or a video captured in an uncontrolled environment?”, I will discuss challenges centered on the lack of ground-truth 3D data, and solutions that result in applications such as training a simulated character to act by watching YouTube videos or learning a dynamics model of 3D human motion from video. We will briefly touch upon how these approaches translate to animals and the new challenges therein. I will end with problems at the frontier, which involve recovering the 3D state of not just humans but also the objects they interact with, and models that can recover the high-fidelity appearance of people, including their clothing and accessories.

Angjoo Kanazawa is an Assistant Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. She is also a Research Scientist at Google Research. Her research lies at the intersection of computer vision, computer graphics, and machine learning. The goal of her lab is to build a system that can capture, perceive, and understand the underlying dynamic 3D world behind everyday photographs and videos, including deformable objects such as humans and animals. Previously, Prof. Kanazawa was a BAIR postdoc at UC Berkeley advised by Jitendra Malik, Alexei A. Efros, and Trevor Darrell. She completed her PhD in Computer Science at the University of Maryland, College Park, advised by David Jacobs. During her PhD, she had the pleasure of visiting the Max Planck Institute in Tübingen, Germany, under the guidance of Michael Black. Prior to that, Prof. Kanazawa completed her BA in Mathematics and Computer Science at NYU. She has been named a Rising Star in EECS and is a recipient of the Anita Borg Memorial Scholarship. Her work on learning 3D deformation of animals from 2D images won the best paper award at Eurographics 2016.


Ben Radcliffe, Unity Technologies
The Road to Realtime

The real-time revolution has arrived in CG animation, previs, VFX, and more, empowering directors and artists to complete storytelling projects that more closely match their creative vision. What is real-time? It’s shorthand for a related set of new technologies that are changing the way we interact with the world and with each other. To understand real-time, you need to understand the technologies that enable it. If you understand the available technologies, how they are used, and what their future holds, you’ll be well equipped for the rapid evolution and convergence of how we create, distribute, and consume stories.

Ben is a media and entertainment technical specialist at Unity Technologies with 20 years of experience in VFX, CG feature, and TV animation. He has worked for a variety of top studios, including DreamWorks Animation, Sony Pictures Imageworks, Disney Feature Animation, MPC, and Industrial Light & Magic, and has served as a digital artist supervisor on major franchises including Avengers and Transformers. In his role at Unity, Ben works with technical decision-makers and influencers at key studios to advance the art and science of visual effects and animation through the use of real-time technology.


Paul Debevec, Senior Staff Scientist, Google Research
Light Fields, Light Stages, and the Future of Virtual Production

In this talk I’ll describe the latest work we’ve done at Google and the USC Institute for Creative Technologies to bridge the real and virtual worlds through photography, lighting, and machine learning. I’ll begin by describing our new DeepView solution for light field video: immersive motion pictures that you can move around in after they have been recorded. Our latest light field video techniques record six-degrees-of-freedom virtual reality where subjects can come close enough to be within arm’s reach. I’ll also present how Google’s new Light Stage system, paired with machine learning, is enabling lighting estimation from faces for AR and interactive portrait relighting on mobile phone hardware. Finally, I’ll talk about how both of these techniques may enable the next advances in virtual production filmmaking, infusing both light fields and relighting into the real-time image-based lighting techniques now revolutionizing how movies and television are made.

Paul is a Senior Staff Scientist at Google Research and an Adjunct Research Professor at the USC Institute for Creative Technologies. His research in HDR imaging, image-based lighting, and photoreal digital actors has been recognized with two technical Academy Awards and SMPTE’s 2017 Progress Medal. In the early 2000s, he originated the technique of surrounding actors with LED displays to create on-set image-based lighting for virtual production. Techniques from his work have been used to create key visual effects sequences in The Matrix, Spider-Man 2, Benjamin Button, Avatar, Gravity, Furious 7, and Gemini Man, and to create a 3D portrait of US President Barack Obama. More info at: http://www.debevec.org.


Sarah Ticho, Founder of Hatsumi
Immersive Art and Healthcare

Sarah Ticho is the founder of Hatsumi, a producer at Deep, and Healthcare Lead at Immerse UK. During this talk, she will discuss the applications of immersive technology in healthcare to support patients, educate clinicians, and help inject a bit of fun into the recovery process. The talk will emphasise the importance of supporting cross-disciplinary collaborations between researchers, healthcare professionals, patients, and creative practitioners, which can lead to innovative discoveries and opportunities for impact. She will share her experience of developing Hatsumi, a virtual reality adaptation of the arts and health research method body mapping, which enables people to visualise the embodied experience of pain and emotion using 3D drawing and sound. Digitising the process enhances its potential as a tool to improve patient-doctor communication, deepen research into the phenomenology of pain and emotion, and offer new creative tools to change the conversation around invisible experiences.

Sarah plays an instrumental role in shaping the future of immersive technology in healthcare. She is a consultant, researcher, curator, and practitioner, and has worked with leading organisations including YouTube VR Creators Lab, The Big Anxiety Festival, Nesta, The Knowledge Transfer Network, the NHS, and the Institution of Engineering and Technology, amongst others. She is the founder of the virtual reality health tech startup Hatsumi and the producer at Explore Deep, an award-winning, clinically validated, breath-controlled VR experience designed to reduce anxiety. She continues to nurture the UK VR healthcare community through her role as Healthcare Lead at Immerse UK, and sits on the XR Safety Initiative’s Medical XR advisory council.
