Conference Programme

COVID-19

To reduce the risk of COVID-19 transmission, we will require each attendee to adhere to the following safety measures:

  • Face masks must be worn at all times during the event, except when delivering a presentation, eating, or drinking. We recommend wearing masks that offer a good level of protection: surgical, FFP2 or FFP3 masks (rather than simple face coverings). Physical distancing will be encouraged.

  • Each attendee must take two COVID-19 lateral flow or PCR tests: one within 24 hours before the first day of the conference, and one within 24 hours before the second day. Attendees who have recovered from COVID-19 within the last 90 days do not need to take tests.

  • If any delegate, or any of their close contacts, tests positive for COVID-19 within the 10 days before the conference, that person must not attend the event in person. In the case of a positive test result, the organizers will issue a full refund of the registration fee and, if necessary, arrange a remote presentation.

Tentative Programme

Monday 6th December 2021
09:00 Registration opens with Coffee
09:30 Chairs’ Welcome
Rafał Mantiuk, University of Cambridge and Christian Richardt, University of Bath (Conference Chairs)
09:40 Papers and Industry Talks Session 1: Bring on the colours!
  • Semantic-driven Colorization
    Man M. Ho (Hosei University), Lu Zhang (INSA Rennes), Alexander Raake (TU Ilmenau), Jinjia Zhou (Hosei University)
  • Arnold 7 update
    Frederic Servant (Autodesk)
  • Photometric Stereo with Area Lights for Lambertian Surfaces
    Jiangbin Gan (University of Marburg), Thorsten Thormählen (University of Marburg)
  • Material acquisition and editing
    Valentin Deschaintre (Adobe Research)
11:00 Coffee Break
Poster presenters put up posters
11:30 Keynote: 3D Digital Reality – Modeling for Perception
Cengiz Öztireli, University of Cambridge
12:30 Spotlight Session
  • One-shot SVBRDF Estimation Including Anisotropic Material
    Nozomu Terada (Tokyo University of Agriculture and Technology), Ikuko Shimizu (Tokyo University of Agriculture and Technology)
  • AURealnessGAN - An Architecture that Enables Manipulation of FACS Action Units in Face Image Generation
    Koyo Ishihara (Tokyo University of Agriculture and Technology), Ikuko Shimizu (Tokyo University of Agriculture and Technology), Akio Sashima (National Institute of Advanced Industrial Science and Technology), Koichi Kurumatani (National Institute of Advanced Industrial Science and Technology)
  • Learning semantic object segmentation for video post-production
    Flavien Jourdren (InterDigital), Emmanuel Jolly (InterDigital R&D France), Claire-Helene Demarty (Technicolor), Frederic Lefebvre (InterDigital), Pierre Hellier (InterDigital (Technicolor))
  • A Deep Learning Based Approach for Camera Switching in Amateur Ice Hockey Game Broadcasting
    Hamid Reza Tohidypour (University of British Columbia), Yixiao Wang (University of British Columbia), Mohsen Gholami (University of British Columbia), Megha Kalia (University of British Columbia), Kexin Wen (University of British Columbia), Lawrence Li (University of British Columbia), Panos Nasiopoulos (University of British Columbia), Mahsa Pourazad (TELUS Communications Inc.)
  • A Step Towards Automating the Synthesis of a Scene Script
    Américo Pereira (INESC TEC), Ricardo Carvalho (FEUP), Pedro Carvalho (INESC TEC and Universidade do Porto), Luís Corte-Real (FEUP)
  • Look-Up-Table Mystified
    Jurgen Stauder (InterDigital), Patrick Morvan (InterDigital), Angelo Mazzante (InterDigital), Anita Orhand (InterDigital), John Frith (MPC)
  • Spatio-temporal algorithm for 3D sequences noise reduction
    Ljubomir Jovanov (UGent)
  • Image Super-Resolution via Hierarchical Attention-Based Multi-References Sampling
    Marco Pesavento (University of Surrey), Marco Volino (University of Surrey), Adrian Hilton (University of Surrey)
  • Human Point Cloud Generation using Deep Learning
    Ryan Spick (University of York)
  • Demo: Video Provenance Network for Robust Content Attribution
    Alexander Black (University of Surrey), Tu Bui (University of Surrey), Simon Jenni (Adobe Research), Viswanathan (Vishy) Swaminathan (Adobe), John Collomosse (Adobe Research)
12:40 Lunch, Demo and Short Papers
14:30 Industry Special Session on Digital Humans
  • Volumetric video at the intersection of visual effects and virtual production
    Juraj Tomori (dimension), Charles Dupont (dimension), George Ash (dimension), Mike Pelton (dimension)
  • High-Performance Multi-Camera Systems for Volumetric Capture and 4D Face/Body Scanning
    Andrew Searle (IO Industries Inc)
  • The Creation of 3D Human Datasets for CV Research
    Lukas Lamprecht (Renderpeople)
15:30 Posters Session (coffee served)
16:00 Keynote: Creating Presence in Mixed Reality and the Metaverse
Darren Cosker, Microsoft
17:00 Networking Reception
Tuesday 7th December 2021
09:00 Registration opens with Coffee
09:30 Papers and Industry Talks Session 2: And action!
  • Automatic Camera Control and Directing with an Ultra-High-Definition Collaborative Recording System
    Bram Vanherle (Hasselt University), Tim Vervoort (Hasselt University), Nick Michiels (Hasselt University), Philippe Bekaert (Hasselt University)
  • Contact-rich simulation in NVIDIA Omniverse
    Kier Storey and Michelle Lu (NVIDIA)
  • FacialFilmroll: High-resolution multi-shot video editing
    Bharath Bhushan Damodaran (InterDigital R&D), Emmanuel Jolly (InterDigital R&D France), Gilles Puy (In his own name), Philippe-Henri Gosselin (InterDigital), Cédric Thébault (InterDigital), Junghyun Ahn (InterDigital), Tim Christensen (In his own name), Paul Ghezzo (In his own name), Pierre Hellier (InterDigital (Technicolor))
  • Foundry and Machine Learning
    Ben Kent (Foundry)
10:50 Coffee Break
Poster presenters put up posters
11:20 Keynote: Learning to Capture and Synthesise 3D Humans in 3D Scene
Siyu Tang, ETH Zürich
12:20 Lunch, Demo and Short Papers
14:00 Papers and Industry Talks Session 3: Gimme the data!
  • Depth Estimation from a Single Omnidirectional Image using Domain Adaptation
    Yihong Wu (The University of Southampton ECS VLC Group), Yuwen Heng (University of Southampton), Mahesan Niranjan (University of Southampton), Hansung Kim (University Of Southampton)
  • VPN: Video Provenance Network for Robust Content Attribution
    Alexander Black (University of Surrey), Tu Bui (University of Surrey), Simon Jenni (Adobe Research), Viswanathan (Vishy) Swaminathan (Adobe), John Collomosse (Adobe Research)
  • High-fidelity procedural data synthesis for validation and training of perception function
    Oliver Grau (Intel), Korbinian Hagn (Intel)
  • Speech-Driven Conversational Agents using Conditional Flow-VAEs
    Sarah Taylor (University of East Anglia), Jonathan Windle (University of East Anglia), David Greenwood (University of East Anglia), Iain Matthews (Carnegie Mellon University)
15:20 Posters Session (coffee served)
15:50 Keynote: Perceptually-inspired VR Image Synthesis
Tobias Ritschel, University College London
16:50 Prizes, Announcements and Close