We are pleased to announce the following keynote speakers for CVMP 2018:
Wolfgang Lempp, FilmLight:
Technology of Visual Storytelling
We take it for granted that image capture technology, computer-generated images, and display technology are getting better all the time. But we are reaching a point of diminishing returns, as our eyes and the laws of physics get in the way. Could it even be that we have to tell better stories if we want to keep the audience interested? We look at the developments over the last 50,000 years to come up with an answer.
Wolfgang Lempp has been involved in motion picture technology for over 30 years. With a background in Physics, he started as an electronic design engineer in animatronics, and moved on to develop motion control systems. He was instrumental in the pioneering efforts of the Computer Film Company, developing one of the first digital film compositing systems and creating numerous image processing tools, including the original Keylight algorithm. In 2002, he founded FilmLight together with his engineering colleagues from CFC. The Northlight film scanner, the Truelight colour management tools and the Baselight grading system were the basis of FilmLight’s widely recognised success, and the combination of film heritage, colour science and creative toolset is still driving innovation at FilmLight today. Wolfgang has received a number of industry awards, including an AMPAS Sci-Tech award and, more recently, the Herbert T. Kalmus Medal Award.
J.P. Lewis, SEED, Electronic Arts:
The Promise, and Problems, of Deep Learning for Character (and other Content) Creation
J.P. Lewis is a numerical programmer and researcher. Lewis is Principal Research Scientist at SEED, the new research lab of Electronic Arts, and is an adjunct Associate Professor in the machine learning group of Victoria University. His interests include computer vision and machine learning applications in entertainment. He has received credits on several movies, including Avatar and the Matrix sequels, and a number of his algorithms have been adopted in commercial software, including Maya and MATLAB.
Rafał Mantiuk, University of Cambridge:
How many pixels are too many?
We are beginning to lack the processing power and bandwidth to drive 8K and high-resolution head-mounted displays. However, as the human eye has its own limitations, the relevant question is what spatial and temporal resolution constitutes the ultimate limit for any display technology. In this talk, I will review the visual models of spatio-temporal and chromatic contrast sensitivity that can explain such limitations. Then, I will show how they can be used to reduce rendering cost in VR applications, find more efficient encodings of high dynamic range images, and compress images in a visually lossless manner.
Rafał K. Mantiuk is a Senior Lecturer at the Computer Laboratory, University of Cambridge (UK). He received a PhD from the Max-Planck-Institute for Computer Science (Germany). His recent interests focus on computational displays, novel display technologies, rendering and imaging algorithms that adapt to human visual performance and viewing conditions in order to deliver the best images given limited resources, such as computation time, bandwidth or dynamic range. He contributed to early work on high dynamic range imaging, including quality metrics (HDR-VDP), video compression and tone-mapping. More on his research can be found at: http://www.cl.cam.ac.uk/~rkm38/.
Matthias Nießner, Technical University of Munich:
3D Reconstruction and Understanding of the Real World
In this talk, I will cover our latest research on 3D reconstruction and semantic scene understanding. To this end, we use modern machine learning techniques, in particular deep learning algorithms, in combination with traditional computer vision approaches. Specifically, I will talk about real-time 3D reconstruction using RGB-D sensors, which enable us to capture high-fidelity geometric representations of the real world. In a new line of research, we use these representations as input to 3D Neural Networks that infer semantic class labels and object classes directly from the volumetric input. In order to train these data-driven learning methods, we introduce several annotated datasets, such as ScanNet and Matterport3D, that are directly annotated in 3D and allow tailored volumetric CNNs to achieve remarkable accuracy. In addition to these discriminative tasks, we put a strong emphasis on generative models. For instance, we aim to predict missing geometry in occluded regions, and obtain completed 3D reconstructions with the goal of eventual use in production applications. We believe that this research has significant potential for application in content creation scenarios (e.g., for Virtual and Augmented Reality) as well as in the field of Robotics where autonomous entities need to obtain an understanding of the surrounding environment.
Matthias Nießner heads the Visual Computing Lab at Technical University of Munich (TUM). He obtained his PhD from the University of Erlangen-Nuremberg in 2013, and was a Visiting Assistant Professor at Stanford University from 2013 to 2017. Since 2017 he has been a Professor at TUM, where he focuses on static and dynamic 3D reconstruction approaches, with an emphasis on modern machine learning and optimization techniques.