Multi-Modal VR Environments

In addition to the technological contributions targeted at providing real-time multi-party holo-portation services, this project will investigate how to efficiently recreate and deliver realistic multi-modal VR environments. The goal is to determine strategies for combining and integrating immersive and traditional audiovisual formats (e.g. 3D synthetic content; 2D mono, stereo and FVV video; 180°/360° clips; point clouds; textual information; spatial audio), as well as multi-sensory stimuli (e.g. scents and haptic feedback), so as to maximize the perceived realism and immersion while minimizing production and integration efforts. Recent works, including studies by the involved research teams, have only superficially explored this research scope: they were limited to fully video-based spaces or to 3D spaces with restricted freedom of exploration, and relied on predefined settings. This project will address this scope in depth for fully navigable 3D spaces with unrestricted 6DoF and diverse viewpoints, and will additionally support interactive presentation dynamics, all of which are fundamental aspects of next-generation VR / XR environments.