In the ever-evolving landscape of virtual reality (VR) technology, a number of key hurdles remain. But a team of computer scientists has tackled one of the major challenges in VR that will greatly improve user experience--enabling an immersive virtual experience while the user is physically confined to their actual, real-world space. The research team will present their work at SIGGRAPH 2018.
Computer scientists from Stony Brook University, NVIDIA, and Adobe have collaborated on a computational framework that gives VR users the perception of infinite walking in the virtual world--while limited to a small physical space. The framework enables this free-walking experience without causing the dizziness, shakiness, or discomfort typically tied to physical movement in VR. Users also avoid bumping into objects in the physical space while in the VR world.
To do this, the researchers focused on manipulating a user's walking direction by exploiting a basic natural phenomenon of the human eye called a saccade. Saccades are quick eye movements that occur when we shift our gaze to a different point in our field of vision, like when scanning a room or viewing a painting. They occur largely outside our conscious control, generally several times per second. During each saccade, our brains largely ignore visual input in a phenomenon known as "saccadic suppression"--leaving us oblivious to our temporary blindness and to the motion our eyes performed.
"In VR, we can display vast universes; however, the physical spaces in our homes and offices are much smaller," says lead author of the work, Qi Sun, a PhD student at Stony Brook University and former research intern at Adobe Research and NVIDIA. "It's the nature of the human eye to scan a scene by moving rapidly between points of fixation. We realized that if we rotate the virtual camera just slightly during saccades, we can redirect a user's walking direction to simulate a larger walking space."
Using a head- and eye-tracking VR headset, the researchers' new method detects saccadic suppression and redirects users during the resulting temporary blindness. When more redirection is required, the method encourages saccades using a tailored version of subtle gaze direction--a technique that dynamically draws the eye by creating points of contrast in the user's visual periphery.
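The core idea above--detect a saccade from eye-tracking data, then apply a tiny camera rotation while suppression hides it--can be sketched as follows. This is a minimal illustration, not the authors' implementation: the velocity threshold, the per-saccade rotation budget, and the function names are all assumptions for the sake of the example.

```python
# Toy sketch of saccade-triggered camera redirection.
# Thresholds are hypothetical; the paper's actual values may differ.
SACCADE_VELOCITY_DEG_PER_S = 180.0   # eye speeds above this suggest a saccade
MAX_ROTATION_PER_SACCADE_DEG = 0.5   # small rotation assumed imperceptible

def is_saccade(prev_gaze_deg, curr_gaze_deg, dt):
    """Flag a saccade when angular eye velocity exceeds a threshold."""
    velocity = abs(curr_gaze_deg - prev_gaze_deg) / dt
    return velocity > SACCADE_VELOCITY_DEG_PER_S

def redirect_camera(camera_yaw_deg, remaining_offset_deg, saccade_detected):
    """Rotate the virtual camera only while saccadic suppression hides
    the change; return the new yaw and the offset still to be applied."""
    if not saccade_detected or remaining_offset_deg == 0.0:
        return camera_yaw_deg, remaining_offset_deg
    # Clamp the per-saccade rotation so it stays below perceptual limits.
    step = max(-MAX_ROTATION_PER_SACCADE_DEG,
               min(MAX_ROTATION_PER_SACCADE_DEG, remaining_offset_deg))
    return camera_yaw_deg + step, remaining_offset_deg - step
```

In this sketch, a planner would accumulate a desired yaw offset (e.g., to steer the user back toward the center of the room) and pay it out a fraction of a degree at a time, once per detected saccade.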
The team who authored the research, titled "Towards Virtual Reality Infinite Walking: Dynamic Saccade Redirection," will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world's leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.
To date, existing methods addressing infinite walking in VR have had limited redirection capabilities or caused undesirable scene distortions; they have also been unable to avoid obstacles in the physical world, like desks and chairs. The team's new method dynamically redirects the user away from these objects. Because it runs in real time, it can also steer users around moving obstacles, such as other people in the same room.
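Dynamic obstacle avoidance of the kind described above amounts to continuously computing a steering offset away from whatever lies in the walker's path. The following toy function illustrates one simple way to do that; it is not the authors' planner, and the field-of-view, safety distance, and gain values are hypothetical.

```python
import math

def avoidance_offset(user_pos, heading_deg, obstacles,
                     fov_deg=90.0, safe_dist=1.0):
    """Return a signed yaw offset (degrees) steering the walker away from
    the nearest obstacle ahead; a toy stand-in for dynamic path planning."""
    nearest = None
    for ox, oy in obstacles:
        dx, dy = ox - user_pos[0], oy - user_pos[1]
        dist = math.hypot(dx, dy)
        if dist > safe_dist:
            continue  # obstacle is outside the safety radius
        # Bearing of the obstacle relative to the walking direction,
        # wrapped into [-180, 180).
        bearing = math.degrees(math.atan2(dy, dx)) - heading_deg
        bearing = (bearing + 180.0) % 360.0 - 180.0
        if abs(bearing) <= fov_deg / 2 and (nearest is None or dist < nearest[0]):
            nearest = (dist, bearing)
    if nearest is None:
        return 0.0
    # Steer opposite the obstacle's side, more strongly the closer it is.
    dist, bearing = nearest
    return -math.copysign((safe_dist - dist) / safe_dist * 10.0, bearing)
```

Because moving obstacles (other people) simply update their positions each frame, re-running such a function at frame rate handles them the same way as static furniture.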
The researchers ran user studies and simulations to validate their new computational system, including having participants perform game-like search and retrieval tasks. Overall, virtual camera rotation was unnoticeable to users during episodes of saccadic suppression; they could not tell that they were being automatically redirected via camera manipulation. Additionally, in testing the team's method for dynamic path planning in real-time, users were able to walk without running into walls and furniture, or moving objects like fellow VR users.
"Currently in VR, it is still difficult to deliver a completely natural walking experience to VR users," says Sun. "That is the primary motivation behind our work--to eliminate this constraint and enable fully immersive experiences in large virtual worlds."
Though mostly applicable to VR gaming, the new system could potentially be applied to other industries, including architectural design, education, and film production.
For the full paper and video, visit the project page. Coauthors include Qi Sun (Stony Brook University, NVIDIA, Adobe), Anjul Patney (NVIDIA), Li-Yi Wei (Adobe), Omer Shapira (NVIDIA), Jingwan Lu (Adobe), Paul Asente (Adobe), Suwen Zhu (Stony Brook University), Morgan McGuire (NVIDIA), David Luebke (NVIDIA), and Arie Kaufman (Stony Brook University).
About ACM, ACM SIGGRAPH, and SIGGRAPH 2018
ACM, the Association for Computing Machinery, is the world's largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field's challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world's leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place from 12-16 August at the Vancouver Convention Centre in Vancouver, B.C.
To register for SIGGRAPH 2018 and hear from the authors themselves, visit s2018.siggraph.org/attend/register.