News Release

Eyes are faster than hands

A new machine-learning-based intention detection method using a first-person-view camera for the Exo-Glove Poly II to aid the disabled seamlessly

Peer-Reviewed Publication

Seoul National University

A New Intention Detection Method Using Machine Learning

video: A Korean research team has created a wearable hand robot that can aid the disabled who have lost hand mobility. The robot detects the user's intention by collecting the user's behaviors with a machine learning algorithm.

Credit: Soft Robotics Research Center, Seoul National University

Professor Sungho Jo (KAIST) and Professor Kyu-Jin Cho (Seoul National University), a collaborative research team at the Soft Robotics Research Center (SRRC), Seoul, Korea, have proposed a new intention detection paradigm for wearable hand robots. The proposed paradigm predicts grasping/releasing intentions based on user behaviors, enabling spinal cord injury (SCI) patients with lost hand mobility to pick and place objects. (The research team also includes Daekyum Kim and Jeesoo Ha of KAIST, and Brian Byunghyun Kang, Kyu Bum Kim, and Hyungmin Choi of Seoul National University.)

They developed a method based on a machine learning algorithm that predicts user intentions for wearable hand robots by utilizing a first-person-view camera. Their development is based on the hypothesis that user intentions can be inferred from observations of user arm behaviors and hand-object interactions.

The machine learning model used in this study, the Vision-based Intention Detection network from an EgOcentric view (VIDEO-Net), is designed around this hypothesis. VIDEO-Net is composed of spatial and temporal sub-networks: the temporal sub-network recognizes user arm behaviors, and the spatial sub-network recognizes hand-object interactions.
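
To make the two-stream idea concrete, the sketch below shows what such a spatial/temporal architecture could look like in PyTorch. All layer sizes, channel counts, input resolutions, and class labels here are illustrative assumptions, not the published VIDEO-Net configuration.

# Minimal sketch of a two-stream intention classifier (assumed design,
# not the published VIDEO-Net; all sizes are illustrative).
import torch
import torch.nn as nn

class TwoStreamIntentionNet(nn.Module):
    def __init__(self, num_classes=3):  # e.g. grasp / release / no action
        super().__init__()
        # Spatial stream: a single RGB frame -> hand-object interaction features.
        self.spatial = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Temporal stream: a stack of motion (e.g. optical-flow) frames
        # -> arm-behavior features.
        self.temporal = nn.Sequential(
            nn.Conv2d(10, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Late fusion of both feature vectors into an intention class.
        self.classifier = nn.Linear(32 + 32, num_classes)

    def forward(self, rgb_frame, flow_stack):
        feats = torch.cat([self.spatial(rgb_frame),
                           self.temporal(flow_stack)], dim=1)
        return self.classifier(feats)

# Example: one RGB frame plus ten stacked motion channels from the camera.
net = TwoStreamIntentionNet()
logits = net(torch.randn(1, 3, 128, 128), torch.randn(1, 10, 128, 128))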

An SCI patient wearing Exo-Glove Poly II, a soft wearable hand robot, successfully picked and placed various objects and performed essential activities of daily living, such as drinking coffee, without any additional help.

Their development is advantageous in that it detects user intentions without requiring person-to-person calibration or additional user actions. This enables the wearable hand robot to interact with humans seamlessly.

>> Professor Sungho Jo

Sungho Jo received the B.S. degree from the School of Mechanical & Aerospace Engineering, Seoul National University, Seoul, Korea, in 1999, and the S.M. degree in mechanical engineering and the Ph.D. degree in electrical engineering and computer science from the Massachusetts Institute of Technology (MIT), Cambridge, MA, USA, in 2001 and 2006, respectively. While pursuing the Ph.D., he was associated with the Computer Science and Artificial Intelligence Laboratory (CSAIL), the Laboratory for Information and Decision Systems (LIDS), and the Harvard-MIT HST NeuroEngineering Collaborative. Before joining the faculty at KAIST, he worked as a postdoctoral researcher at the MIT Media Laboratory. Since December 2007, he has been with the Department of Computer Science at KAIST, where he is currently a tenured Associate Professor. His research interests include intelligent robots, neural interfacing computing, neuromorphic computing, and wearable computing.

>> Professor Kyu-Jin Cho

Professor Kyu-Jin Cho is the director of the Soft Robotics Research Center at Seoul National University. He received the IEEE RAS Early Career Award in 2014 for his achievements in the field of soft robots and bio-inspired robots. In 2015, he developed a water strider robot, published in the journal Science. In 2016, he presented the Exo-Glove Poly, a glove-like soft robot that helps people with disabilities in their daily lives, at the AAAS Annual Meeting. In April of the same year, he won the first Soft Robot Challenge, held in Pisa, Italy, with a robot named 'SNUMAX'.

Q: How does this system work?

A: This technology aims to predict user intentions, specifically grasping and releasing intent toward a target object, by utilizing a first-person-view camera mounted on glasses. VIDEO-Net, a deep learning-based algorithm, is devised to predict user intentions from the camera images based on user arm behaviors and hand-object interactions. Instead of using bio-signals, which are often used for intention detection in disabled people, we use a simple camera to determine the user's intention: whether the person is trying to grasp or not. This works because the target users are able to move their arms, but not their hands. We can predict the user's intention to grasp by observing the arm movement and the distance between the hand and the object, and interpreting these observations using machine learning.
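
As a rough illustration of how such a pipeline could run, the sketch below shows a camera-to-glove inference loop. The camera handling uses OpenCV; the intention predictor and the glove command function are hypothetical stand-ins, since the actual Exo-Glove Poly II control interface is not described in this release.

# Hedged sketch of the runtime loop. predict_intention and
# send_glove_command are hypothetical placeholders, not real APIs.
import collections

import cv2  # OpenCV: reads frames from the glasses-mounted camera

def send_glove_command(action):
    # Hypothetical stub: a real system would drive the glove's tendon
    # actuation module here.
    print(f"glove -> {action}")

def run_intention_loop(predict_intention, camera_index=0, window=10):
    # Keep a short rolling clip so the predictor can observe arm motion
    # over time, not just a single frame.
    frames = collections.deque(maxlen=window)
    cap = cv2.VideoCapture(camera_index)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.resize(frame, (128, 128)))
        if len(frames) < window:
            continue  # wait until the clip buffer is full
        intention = predict_intention(list(frames))  # 'grasp', 'release', or 'none'
        if intention == 'grasp':
            send_glove_command('close')
        elif intention == 'release':
            send_glove_command('open')
    cap.release()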

Q: Who benefits from using this technology?

A: As mentioned earlier, this technology detects user intentions from human arm behaviors and hand-object interactions. It can be used by anyone who has lost hand mobility, whether from spinal cord injury, stroke, cerebral palsy, or other injuries, as long as they can move their arm voluntarily.

Q: What are the limitations and future work?

A: Most of the limitations come from the drawbacks of using a monocular camera. For example, if a target object is occluded by another object, the performance of this technology decreases. Also, if the user's hand is not visible in the camera's view, this technology is not usable. To overcome this lack of generality, the algorithm needs to be improved by incorporating other sensor information or existing intention detection methods, such as electromyography sensors or eye-gaze tracking.

Q: To use this technology in daily life, what do you need?

A: For this technology to be used in daily life, three devices are needed: a wearable hand robot with an actuation module, a computing device, and glasses with a mounted camera. We aim to decrease the size and weight of the computing device so that the robot is portable enough for daily use. So far, we have not found a compact computing device that fulfills our requirements, but we expect that neuromorphic chips able to perform deep learning computations will become commercially available.

###

