| 09:00 | Registration Opens |
| 09:30 | Session #3: Virtual Humans (Chair: Claudio Guarnera) |
| | Neural Implicit Avatar Conditioned on Human Pose, Identity and Gender. Guillaume Loranchet, Pierre Hellier, Adnane Boukhayma, Joao Regateiro, Franck Multon |
| | Realistic Clothed Human and Object Joint Reconstruction from a Single Image. Ayushi Dutta, Marco Pesavento, Marco Volino, Adrian Hilton, Armin Mustafa |
| | Hi-RQCT: Hierarchical Residual-Quantized Causal Transformer for High-Quality 3D Human Motion Generation. Dongjie Fu, Tengjiao Sun, Pengcheng Fang, Yiyang Zhang, Hansung Kim |
| | 3D-Aware Latent-Space Reenactment: Combining Expression Transfer and Semantic Editing. Paul Hinzer, Florian Barthel, Anna Hilsmann, Peter Eisert |
| 10:30 | Coffee Break |
| 11:00 | Keynote #3: Geometry First: Structured Representations for Generating 3D in a Large-Data World. Angela Dai (Technical University of Munich) |
| 12:00 | Spotlight Session: Short Papers and Demos |
| | Posters: |
| | A Video Processing Pipeline for Automatic Gesture and Prosody Annotation. Szymon Lisowski (University of Southampton), Anna Wilson (University of Oxford), Fabio Pizzati (Mohamed Bin Zayed University of Artificial Intelligence), Elinor Payne, Philip Torr (University of Oxford), Hansung Kim (University of Southampton) |
| | SANDFISH: Smart ANomaly Detection for Filming Interesting Species and Habitats. Jack Alston, Robert Dawes (BBC Research & Development) |
| | Audio Visual Instance Segmentation for Video Editing. Jinbae Seo, Seungho Baek, Kwanghoon Sohn (Yonsei University) |
| | Image-based Facial Rig Inversion. Tianxiang Yang (University of Surrey and Humain Ltd.), Marco Volino, Armin Mustafa (University of Surrey), Greg Maguire, Robert Kosk (Humain Ltd.) |
| | Finding the Pattern between Emotions and Genre Classification in Films. Ines N. Teixeira, Paula Viana, Maria Teresa Andrade (University of Porto) |
| | TemporalMamba: A Novel Video Super-Resolution Framework with Temporal State Space Models and Frequency-Domain Fusion. Simin Mirzaei, Panos Nasiopoulos, Shahriar Mirabbasi (University of British Columbia) |
| | Enabling Local Multimodal AI for Metadata Generation in Archival Collections. Minsak Nanang, Karyn Fleeting, Armin Mustafa (University of Surrey, The National Gallery, and British Film Institute) |
| | IK-Based Full-Body and Facial Capture for Stylized 2D Puppets for Real-Time or Linear Pipelines. Meghdad Asadilari (Rochester Institute of Technology) |
| | Monochromatic Palette Space: A Low-Dimensional Manifold Model for Steerable Color Grading. Jinwoo Lee (Korea Advanced Institute of Science and Technology) |
| | Demos: |
| | Virtual Scene Integration: Compositing and Relighting Greenscreen Performances. James Gardner, William A. P. Smith, Will Rowan, Florian Block (pxld.ai and University of York) |
| | IK-Based Full Body and Facial MoCap for Stylized 2D Puppets Using Accessible Tracking Devices. Meghdad Asadilari (Rochester Institute of Technology) |
| | Content Understanding for Personalised Media in AI4ME. Davide Berghi, Asmar Nadeem, Tony Alex, Armin Mustafa, Graham Thomas, Philip J. B. Jackson, Adrian Hilton (University of Surrey), Robert Dawes (British Broadcasting Corporation) |
| | CoSTAR National Lab: Prototyping Emerging AI Technologies for Converged Media Production. Violeta Menéndez González (CoSTAR National Lab and University of Surrey), Hazel Dixon, Branden Faulls (CoSTAR National Lab and Royal Holloway University of London) |
| 12:20 | Posters, Demos and Lunch |
| 13:40 | Session #4: Industrial Session (Chair: Oliver Grau) |
| | 13:40-14:05: PBR Capture at Scale for Production & ML Datasets. Elliott Round (Co-Founder & CTO, M-XR) |
| | 14:05-14:30: Evaluation of Depth-Based Volumetric Capture Methods in Production Environments. Philip Coulam-Jones (Disguise) |
| | 14:30-14:55: Videomatics for Technical Control and Creative Freedom in ICVFX Productions. Adam Streicher, Lena Gieseke (Filmuniversität Babelsberg KONRAD WOLF) |
| | 14:55-15:20: Stop Fixing It in Post: How Innovation Should Be Pushing VP to the Next Level. Adam Smith (Executive Producer, Dimension) |
| 15:20 | Coffee Break (Posters and Demos continue) |
| 15:50 | Keynote #4: The Last Mile of Research for Production-Ready View-Synthesis. Peter Hedman (Google DeepMind) |
| 16:50 | Closing and Best Paper Awards. Claudio Guarnera (University of York) |