Multiply and subtract your way to more lifelike VR avatars
Pohang University of Science & Technology (POSTECH)
Image: A method and mobile system that takes another user’s motion and the user’s intended peer-relative expression as inputs, then generates socially aligned avatar motions. Credit: POSTECH
A team led by Professor Inseok Hwang at POSTECH (Pohang University of Science and Technology) has developed ArithMotion, a mobile virtual reality (VR) system that lets anyone express a wide range of avatar motions with ease. Using simple arithmetic-like controls, users can scale an avatar’s motion up or down or reverse it into an opposite response, enabling more natural nonverbal communication without expensive equipment.
In social VR platforms such as VRChat, people communicate through their avatars’ movements, facial expressions, and gestures. In particular, bodily motions are a key channel for building emotional connections between users and enhancing immersion and a sense of agency. However, because most users do not have access to expensive full-body tracking equipment, they are often limited to repeating preset motions—making natural, spontaneous communication difficult.
In this study, the team focused on a natural form of social behavior known as “peer relativity”—the way people instinctively mirror others’ actions or respond in the opposite direction. They brought this phenomenon directly into VR avatars: when another player celebrates a win with an excited gesture, your avatar can respond in the same way, while threatening behavior from others can trigger a more defensive, protective reaction—preserving a more lifelike sense of social realism.
The key idea is an intuitive, arithmetic-style input method. If a user multiplies another person’s motion by a number, such as 2, the avatar produces an amplified, more expressive reaction; applying a minus sign generates an opposite response. Much like pressing “+” or “−” buttons on a calculator, users can convey their intent through simple inputs without complex controls.
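The calculator analogy can be made concrete with a small sketch. This is not the authors’ implementation; it only illustrates the arithmetic idea under the assumption that a motion is represented as per-frame joint offsets from a neutral pose, so that a scalar coefficient amplifies the motion (|k| > 1), attenuates it (|k| < 1), or inverts it into an opposite response (k < 0):

```python
def apply_coefficient(motion, k):
    """Scale a motion by an arithmetic coefficient k.

    motion: a list of frames; each frame is a list of (x, y, z) joint
    offsets relative to a neutral pose (a simplifying assumption for
    this sketch). Multiplying every offset by k amplifies, dampens,
    or reverses the motion depending on the sign and magnitude of k.
    """
    return [[(k * x, k * y, k * z) for (x, y, z) in frame]
            for frame in motion]

# A toy one-joint "wave": the hand moves outward, then back.
wave = [[(0.1, 0.0, 0.0)], [(0.2, 0.0, 0.0)], [(0.1, 0.0, 0.0)]]

amplified = apply_coefficient(wave, 2)    # a bigger, more expressive wave
opposite  = apply_coefficient(wave, -1)   # movement in the opposite direction
```

In this toy model, multiplying by 2 doubles every offset (a more exaggerated version of the same gesture), while multiplying by −1 mirrors it, which is the kind of opposite, peer-relative response the paragraph describes.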
The team implemented the technology as a mobile-ready system, making it practical for real-world use. As a result, even in motion-limited settings such as mobile VR, users can express a variety of socially aligned motions. Instead of repeating the same preset gestures like a robot, they can react in ways that match their intent—allowing them to feel more like themselves, even in virtual spaces.
This work is significant in that it helps narrow the gap in nonverbal expression caused by differences in hardware, opening a path for more people to communicate on more equal terms. Professor Inseok Hwang, who led the study, said, “ArithMotion was designed so that avatars can respond naturally to others’ actions, enabling more lifelike communication in VR,” adding, “We also expanded its potential applications by making it usable on smartphones.” Jaewoong Jang, the first author of the paper, added, “We focused on helping the system understand users’ intent accurately and express it in a much more natural way.”
This study—conducted by Professor Inseok Hwang’s team in the Department of Computer Science and Engineering at POSTECH (integrated Ph.D. student Jaewoong Jang, Ph.D. student Sungjae Cho, and undergraduate student Yeseul Shin)—was recently presented at the ACM Symposium on Virtual Reality Software and Technology (VRST 2025), a leading international conference in the VR field.
This research was supported by the National Research Foundation of Korea (NRF) through the Mid-career Researcher Program and the Future Convergence Technology Pioneer Program, as well as by grants from the Institute of Information & Communications Technology Planning & Evaluation (IITP), the Korea Institute for Advancement of Technology (KIAT), and the Korea Innovation Foundation (Innopolis).