Blog

HoloCapture: In any place, to any space.

The ultimate software for transforming human performances into volumetric video and playing them back anywhere. HoloCapture offers a comprehensive volumetric pipeline; each component is available individually upon request.

Capture Technology

Capture.

From software to hardware, including cameras, servers, and various combinations of both, HoloCapture is a complete studio solution that delivers everything you need to shoot high-quality volumetric content, flexible across different spaces and lighting conditions.

Reconstruct.

HoloCapture reconstructs video feeds into high-fidelity 3D assets, leveraging cloud-based processing for maximum scale without processing bottlenecks. On-premise local processing is also available.

Edit, Stream, and Playback.

Licensing HoloCapture gives you access to our end-to-end volumetric pipeline, including our post-production platform, HoloSuite. Edit and compress your data, stream it worldwide, or play it back on popular software platforms.

Schedule a consultation

Arcturus Studio

San Francisco

680 Folsom St
San Francisco, CA 94107 USA

Worldwide licensees

London

Dimension
Unit 35, Wimbledon Business Centre
Riverside Road
London SW17 0BA United Kingdom

Los Angeles

Metastage
12800 Foothill Blvd
Los Angeles, CA 91342 USA

Seoul

ifland Studio Seoul
65 Eulji-ro, Myeong-dong, Jung-gu,
Seoul, South Korea

Tokyo

Nikon Creates Corporation
Tokyo Ryutsu Center B Building
6-1-1 Heiwajima, Ota-ku
Tokyo, 143-0006 Japan

Berlin

VoluCap
August-Bebel-Str. 26-53, 14482 Potsdam, Germany

Zurich

ETH
Technoparkstrasse 1, 8005 Zürich, Switzerland

FAQs

HoloCapture produces raw content (holographic video), provides tools to work with that content in a post-production environment, and supplies the means to play that content back on a wide variety of devices.

Holographic video, also known as volumetric video, looks like video from any given viewpoint but exists volumetrically in 3D space. Viewers can change their view of a performance at any time, or physically move around the video in mixed reality experiences.

We capture performances on our stage using many cameras, then use computer vision algorithms to create a textured 3D mesh per frame. We further process that data to provide some consistency in the meshes over time, which we then compress into a file format that is playable on a wide variety of cross-platform devices.

We usually shoot at 30 fps, though we can capture at both higher and lower frame rates if desired. For playback, it is possible to interpolate to generate higher frame rates, for example to match the refresh rates of immersive devices.

We can capture very long takes, on the order of an hour.

We've captured as many as 20 people in a single scene before, but don't recommend that as a best practice. We find that we can shoot two people at the same time comfortably, and up to four with careful staging (and some technical limitations with resolution). We're happy to discuss your specific needs during pre-production to arrive at a satisfactory solution for your project.

We adjust the size of our capture volume by moving camera towers closer in or further away. Moving the cameras closer to the subject results in higher resolution but a smaller capture volume; moving them further away reduces resolution. The Studio version of HoloCapture generally shoots with an 8 ft maximum diameter and a 4.5 ft minimum diameter. Our maximum height is 10 ft. Please discuss your project with us even if it appears that our 8 ft diameter might be too small for your needs.

We often use uniform lighting so it’s easier to re-light the capture in post. We are able to support a much wider variety of lighting scenarios though, including very low levels of light and colored gels. Part of our pre-production process will help establish what will work best for your particular project.

Our system is very similar to a standard video shoot, with many of the same roles needed for a smooth production. We provide production support with camera operators, producers, and technical directors. To get the best from your shoot day we also recommend makeup/hair, wardrobe, set or prop designer, audio engineer, animal trainer, etc., based on the specific needs and complexity of the shoot.

Our system is set up to capture audio. We have 8 channels of shotgun microphones placed equidistant around the performer. We can also support custom mic configurations such as lavalier, boom, etc. While we are capable of capturing basic audio, and will provide you with a synced scratch track, your team will be responsible for audio post-production and sweetening. We'll walk you through the process as part of pre-production.

With our premium 106-camera system, we output 600 GB/min of raw footage. Our end result compresses that data down to something comparable to HD video, at 15-30 Mbps.
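A rough back-of-envelope check of the figures above (this sketch assumes decimal units, 1 GB = 1,000 MB, and a steady output bitrate; it is illustrative only, not part of the HoloCapture pipeline):

```python
# Back-of-envelope compression ratio for the figures quoted above.
# Assumes decimal units and a steady output bitrate.

RAW_GB_PER_MIN = 600   # raw footage from the 106-camera system
OUTPUT_MBPS = 30       # upper end of the quoted 15-30 Mbps range

raw_megabits_per_min = RAW_GB_PER_MIN * 1000 * 8   # GB -> MB -> Mb
output_megabits_per_min = OUTPUT_MBPS * 60         # Mbps -> Mb/min

ratio = raw_megabits_per_min / output_megabits_per_min
print(f"Compressed output: {output_megabits_per_min / 8:.0f} MB/min")
print(f"Compression ratio: roughly {ratio:.0f}:1")
```

At the top of the quoted bitrate range, a minute of compressed output is about 225 MB, a reduction on the order of a few thousand to one relative to the raw footage.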

On shoot days, we usually turn around initial selects for review within 24 hours. Timeframes for final delivery depend on the content and duration of the take, as well as specific needs of the client and project.

Output is typically in the range of 20K triangles and a 2K texture per character for a VR device, down to 10K triangles and a 1K texture for mobile devices, which we provide as a custom streamable .mp4 file.
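As an illustration of how those per-device budgets might be applied, here is a small sketch (the tier names, dictionary layout, and selection function are hypothetical, not a HoloCapture API; only the triangle and texture figures come from the ranges above):

```python
# Hypothetical per-device asset budgets, illustrating the ranges
# quoted above (20K triangles / 2K texture for VR down to
# 10K triangles / 1K texture for mobile). Illustrative only.

ASSET_BUDGETS = {
    "vr":     {"triangles": 20_000, "texture_px": 2048},
    "mobile": {"triangles": 10_000, "texture_px": 1024},
}

def budget_for(device: str) -> dict:
    """Fall back to the lowest (mobile) budget for unknown devices."""
    return ASSET_BUDGETS.get(device, ASSET_BUDGETS["mobile"])

print(budget_for("vr"))  # {'triangles': 20000, 'texture_px': 2048}
```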

We've created plug-in support for our compressed MP4 files in Unity and Unreal, plus native support for Windows, ARKit for iOS, and ARCore for Android, just to name a few. We're truly cross-platform compatible; if you're not sure whether we support what you need, just reach out! There's a good chance we're working on it. We can also provide our OBJs and PNGs, which can be read by many digital content creation apps.

We can compress down to rates typical for HD video. We use H.264 for the MP4 files and can compress to a client's specifications. For example, demo captures released with HoloLens ran between 7 and 12 Mbps. Higher resolution and/or uncompressed formats would be on the order of several hundred MB for a 30-second clip.
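The relationship between bitrate, duration, and file size in those examples follows from simple arithmetic; a minimal sketch (the function name is ours, and the figures are the ones quoted above):

```python
# Approximate file size for a clip at a given bitrate.
# Size in megabytes = megabits-per-second * seconds / 8 bits-per-byte.

def clip_size_mb(bitrate_mbps: float, seconds: float) -> float:
    """Return the approximate clip size in MB."""
    return bitrate_mbps * seconds / 8

# A 30-second clip at the HoloLens demo range quoted above:
print(clip_size_mb(7, 30))    # 26.25 MB
print(clip_size_mb(12, 30))   # 45.0 MB
```

So a 30-second clip at the quoted 7-12 Mbps lands in the tens of megabytes, consistent with uncompressed formats being an order of magnitude larger.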

Apparent accuracy depends on several factors such as the number of polygons, texture resolution, viewing distance, and playback device. We can adjust resolution while preserving detail, and usually work with clients to balance performance with quality for their specific scenario and device.

Yes, we frequently capture and/or remove props from the scene, depending on the needs of the performance and scenario. In addition, we provide post-production tools, workflow support, and best practices for removing props and/or adding CG props as needed after capture.

This technology has been in development since 2010, and we’ve captured thousands of human and animal performances over a very wide range of action, costumes, and props. We have a good understanding of where challenges will be, and ways to minimize them to achieve creative goals.