News Release

Physical keyboards make virtual reality typing easier

Peer-Reviewed Publication

Michigan Technological University

Virtual Reality Typing

Image: To find out how to type better with virtual reality technology, computer scientists used a light-up virtual display, autocorrect algorithms and a physical keyboard.

Credit: Kiran Udayakumar

Cranking out text is an integral part of our digital lives, yet text entry has received surprisingly little attention in virtual reality (VR) development.

"Lots of people are buying head-mounted displays, but it's mostly for video games," says Scott Kuhl, an associate professor of computer science at Michigan Tech. "We're trying to figure out how we can use a head-mounted display for office work like writing and editing a document or sending a message to someone."

James Walker, a lecturer in computer science, led the research as part of his dissertation, working with Kuhl. He says the challenge stems from the fact that people need to see what they're typing--a bit difficult with an over-eye headset on--so he developed a light-up virtual keyboard synced with a physical keyboard, which lets a VR typist see in the head-mounted display which keys they press on the physical keyboard.
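The release doesn't include the researchers' implementation, but the core idea is easy to sketch: capture keystrokes from the physical keyboard and light up the matching key on a keyboard rendered inside the headset. The Python sketch below is purely illustrative and rests on its own assumptions; it uses the pynput library to capture key events and a simple in-memory set as a stand-in for the rendered virtual keyboard, neither of which comes from the Michigan Tech system.

```python
# Illustrative sketch only: mirror physical keystrokes onto a "virtual keyboard"
# state that a VR renderer could draw inside the headset. pynput is an
# assumption here, not the researchers' toolkit.

from pynput import keyboard

# Keys currently held down; a real system would map these to highlighted
# key meshes on the keyboard rendered in the head-mounted display.
lit_keys = set()

def key_name(key):
    """Return a printable name for a key event (character keys or special keys)."""
    return key.char if hasattr(key, "char") and key.char else str(key)

def on_press(key):
    lit_keys.add(key_name(key))
    print("light up:", sorted(lit_keys))   # stand-in for updating the VR scene

def on_release(key):
    lit_keys.discard(key_name(key))
    print("dim:", key_name(key))
    if key == keyboard.Key.esc:            # stop the demo with Esc
        return False

with keyboard.Listener(on_press=on_press, on_release=on_release) as listener:
    listener.join()
```

In an actual head-mounted display application, the print calls would instead update the highlight state of the virtual keys in the 3D scene each frame, so the typist sees their keystrokes mirrored inside the headset.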

Other VR typing systems rely on either mid-air virtual keyboards or overlaying real-world video onto the VR display. Both approaches require extra equipment, such as tracking cameras, that can be error-prone and intrusive, and people's typing performance also declines when they rely on virtual keyboards alone.

"Typing in mid­air is very fatiguing," Walker says. "Our solution is noteworthy because it enables people to continue using their physical peripherals, which gives the best performance, and it doesn't need any extra hardware or require superimposing a video feed into the virtual environment."

To assess the effectiveness of a physical keyboard, Walker conducted an experiment in which participants typed on a keyboard they couldn't see. In one part, participants wore a head-mounted display--in this case, an Oculus Rift. In the other part, participants used a desktop monitor with their view of the keyboard occluded by a cover. In each part, Walker tested participants' performance with and without his virtual keyboard.

To start, many participants rated their typing prowess as expert or at least proficient. Being human, they still made mistakes--lots of them--especially without the virtual keyboard lighting up keys. That makes this experiment a perfect setup not just for testing VR text entry hardware, but also for examining how autocorrect fits in.

"People underappreciate the redundancy in natural language," says Keith Vertanen, an assistant professor of computer science who assisted Walker and Kuhl with a language model to correct participants' typing. "Our recognition touch screen program, VelociTap, is extremely accurate as it's been trained on billions of words."

Still, Vertanen says he and the rest of the team were pleasantly surprised by how well the autocorrect algorithm transferred from its touch screen origins to predicting intended letters on a physical keyboard. The team also observed that error rates declined as people continued typing with VelociTap feedback, and that the model corrected about two-thirds of the text errors.
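VelociTap's decoder itself isn't described in the release beyond being trained on billions of words, so the sketch below should not be read as its algorithm. It is a toy, Norvig-style illustration of the general noisy-channel idea behind keyboard autocorrect: generate candidate words within one keystroke edit of what was typed, then keep the candidate that a (tiny, hypothetical) word-frequency model considers most likely.

```python
# Toy illustration of language-model correction; this is NOT VelociTap.
# It scores candidate words within one edit of the typed string by their
# frequency in a small stand-in corpus and keeps the most likely one.

from collections import Counter

# Tiny stand-in corpus; a real system is trained on billions of words.
corpus = "the quick brown fox jumps over the lazy dog the fox".split()
word_freq = Counter(corpus)
letters = "abcdefghijklmnopqrstuvwxyz"

def edits1(word):
    """All strings one keystroke error away: deletions, transpositions, substitutions, insertions."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    """Prefer a known word; otherwise the most frequent known word one edit away."""
    if word in word_freq:
        return word
    candidates = [w for w in edits1(word) if w in word_freq]
    return max(candidates, key=word_freq.get) if candidates else word

print(correct("qiick"))   # -> "quick"
print(correct("foz"))     # -> "fox"
```

A production decoder like VelociTap works over whole sentences with probabilistic character and language models rather than a single-word frequency table, which is what lets it exploit the redundancy in natural language that Vertanen describes.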

###

Learn more about the computer science program and the Institute of Computing and Cybersystems at Michigan Tech.

Here is the conference website and program for CHI 2017: https://chi2017.acm.org/

