Introduction to Volumetric Video Capture
February 5, 2026 @ 12:00 pm - 2:00 pm
There has been rising interest in volumetric video capture, a method for recording real-world people, objects, performances, or spaces in full 3D. The resulting data lets viewers experience a recording from any angle, with realistic depth and lighting, rather than from the fixed viewpoint of traditional video (Jin et al. 2024; Young et al. 2023). The technique is valuable across academic disciplines, enabling the preservation of cultural heritage, the creation of training simulations, and high-fidelity analysis of human movement and performance in fields such as biomechanics and the digital humanities.

This session will focus on the Depthkit Core software, which, when paired with depth sensors such as the Kinect v2 or RealSense, allows creators to generate high-quality volumetric assets for immersive experiences. The workshop will also cover Depthkit Studio, which can be used to achieve real-time, live-streamed volumetric video for live events. Upon completing this workshop, participants will understand the foundations of volumetric capture, including essential hardware setup and best practices for lighting and calibration, and will be able to navigate the full workflow from capture through processing, stitching, and exporting 3D video assets for integration into platforms such as Unity, Unreal, or WebXR.
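Depthkit handles the geometry reconstruction internally, but the underlying principle of turning a depth sensor's output into free-viewpoint 3D data can be illustrated with a short sketch. The snippet below back-projects a depth image into a point cloud using the standard pinhole camera model; the intrinsics (`fx`, `fy`, `cx`, `cy`) are made-up illustrative values, as real ones come from the calibration step covered in the workshop.

```python
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (row-major list of rows, values in
    meters) into a list of (x, y, z) points via the pinhole model:
        x = (u - cx) * z / fx,   y = (v - cy) * z / fy
    """
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # skip pixels where the sensor returned no depth
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# Toy example: a flat 4x4 depth frame, every pixel 1 m from the sensor,
# with hypothetical intrinsics (real values come from calibration).
depth = [[1.0] * 4 for _ in range(4)]
cloud = depth_to_point_cloud(depth, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
print(len(cloud))  # 16 points, one per valid pixel
```

Multi-sensor setups repeat this per camera and then register ("stitch") the resulting clouds into a single model, which is the step lighting and calibration quality most directly affect.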
Jin, Yili, Kaiyuan Hu, Junhua Liu, Fangxin Wang, and Xue Liu. “From Capture to Display: A Survey on Volumetric Video.” arXiv, 2024. arXiv:2309.05658v2. https://arxiv.org/html/2309.05658v2#bib.
Young, Gareth W., Néill O’Dwyer, and Aljosa Smolic. “Volumetric Video as a Novel Medium for Creative Storytelling.” In Immersive Video Technologies, edited by Giuseppe Valenzise, Martin Alain, Emin Zerman, and Cagri Ozcinar, 591–607. Academic Press, 2023. https://doi.org/10.1016/B978-0-32-391755-1.00027-4.
