Foundry’s Research into AR and VR Production Workflows from SIGGRAPH 2017
This SIGGRAPH 2017 presentation from Foundry discusses the new AR and VR production techniques they have been researching, and it is well worth a watch.
When researching AR production workflows, Foundry's main goal was to maximize viewer interaction while still delivering cinematic-quality rendering, all on a smartphone/tablet architecture. The point cloud scene data is captured with a smartphone and streamed to a server, where artists can later lay out and tweak the capture in Unreal Engine.
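The talk doesn't detail the wire format, but as a rough sketch of the capture-and-stream step, the snippet below assumes a simple length-prefixed TCP stream of XYZ float32 points; the server address and frame layout are placeholders, not Foundry's actual protocol.

```python
import socket
import struct

import numpy as np

# Placeholder server address; Foundry's real endpoint and protocol
# are not described in the talk.
SERVER = ("layout-server.example.com", 9000)

def stream_point_cloud(points: np.ndarray) -> None:
    """Send one captured point-cloud frame (N x 3, float32) to the server."""
    payload = points.astype(np.float32).tobytes()
    with socket.create_connection(SERVER) as sock:
        # A length prefix lets the server reassemble whole frames
        # from the continuous TCP byte stream.
        sock.sendall(struct.pack("!I", len(payload)))
        sock.sendall(payload)

if __name__ == "__main__":
    frame = np.random.rand(10_000, 3).astype(np.float32)  # stand-in capture
    stream_point_cloud(frame)
```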
To provide a rich cinematic experience, the server then handles the back-end rendering and streams the result to the user's tablet or phone. To match the lighting of the rendered assets with real-world conditions that shift over the course of the day, Foundry is also experimenting with deep learning.
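Foundry doesn't describe their model, but lighting estimation of this kind is often framed as regressing spherical-harmonics lighting coefficients from a camera frame. The sketch below assumes that formulation; the LightingEstimator network and its 27-value output are illustrative, not Foundry's.

```python
import torch
import torch.nn as nn

class LightingEstimator(nn.Module):
    """Toy CNN that regresses 9 spherical-harmonics coefficients
    per RGB channel (27 values) describing scene lighting."""

    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling to a 32-dim descriptor
        )
        self.head = nn.Linear(32, 27)  # 9 SH coefficients x 3 channels

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        x = self.features(frame).flatten(1)
        return self.head(x)

model = LightingEstimator()
frame = torch.rand(1, 3, 128, 128)      # stand-in camera frame
sh_coeffs = model(frame).view(1, 3, 9)  # lighting to apply to rendered assets
```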
For Foundry's VR experiment, they helped produce a VR experience called Kinch & The Double World. Foundry composited stereoscopic 3D footage shot with a Nokia Ozo with elements shot on a lightfield camera array, with user interaction handled in the Unity engine.
Combining traditional stereoscopic plates with lightfield acquisition of close-up elements such as the actors provides proper parallax for objects near the camera. The result is a truly immersive experience in which the viewer gets correct parallax when tilting, rotating, and leaning within the VR experience.
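To see why it is the close-up elements that need the lightfield treatment, note that on-screen disparity falls off with depth: d = f·B/Z for focal length f (in pixels), baseline B, and depth Z. A quick sketch with illustrative numbers, not figures from the talk:

```python
def disparity_px(f_px: float, baseline_m: float, depth_m: float) -> float:
    """Horizontal disparity in pixels for a point at the given depth."""
    return f_px * baseline_m / depth_m

f_px, baseline = 1000.0, 0.064  # roughly human interocular distance
for depth in (1.0, 3.0, 30.0):
    print(f"depth {depth:5.1f} m -> disparity "
          f"{disparity_px(f_px, baseline, depth):5.1f} px")
```

With these numbers, a subject 1 m away shifts 64 px while the background at 30 m shifts about 2 px, which is why nearby actors demand genuine lightfield parallax while distant plates can remain fixed stereo pairs.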
This is all an exciting glimpse into workflows that will become more and more commonplace as these new media experiences emerge.