News Release

“Robotic white cane” uses computer vision to aid blind and visually impaired

Peer-Reviewed Publication

Chinese Association of Automation

Technologies from GPS to artificial intelligence are transforming transportation for sighted travellers, but for those who are blind or visually impaired, navigation technology has changed very little. The white cane and the guide dog, both in use for decades, remain their primary mobility tools. Researchers at Virginia Commonwealth University set out to address this technology gap by developing a “robotic white cane”. Their system is described in a paper published in the August 2021 issue of IEEE/CAA Journal of Automatica Sinica.

Previous efforts to develop robotic navigation aids, or RNAs, have relied on a simultaneous localization and mapping (SLAM) technique. Cameras are used to estimate the position and orientation (more formally, the “pose”) of the RNA with respect to its surroundings and to detect nearby obstacles in a 3D point cloud map that has previously been generated. A point cloud is a set of millions of data points in Cartesian coordinates (up-down, right-left, front-back) representing the 3D shape of objects, typically produced by 3D scanners.
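For readers who want a concrete picture, here is a minimal Python sketch (illustrative only, not the authors' code) of how a point cloud and a pose are commonly represented: the cloud as an N-by-3 array of coordinates, and the pose as a rotation plus a translation mapping sensor readings into the map frame.

    import numpy as np

    # A point cloud: N points, each an (x, y, z) Cartesian coordinate,
    # standing in for the output of a 3D scanner.
    cloud = np.random.rand(1000, 3)

    # A pose combines a rotation R and a translation t. Applying it expresses
    # the sensor's points in map coordinates, which is what SLAM matches
    # against the previously built map to locate the device.
    theta = np.radians(30.0)                       # device yaw in the map frame
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    t = np.array([2.0, 1.0, 0.0])                  # device position in the map frame

    world_points = cloud @ R.T + t                 # cloud expressed in map coordinates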

However, error in pose estimation accrues over time. This is not much of a problem for short journeys, but on long ones the accumulated error can grow large enough to break down an RNA's navigation function. A range of compensating solutions has been proposed, from installing Bluetooth beacons or radio-frequency identification (RFID) chips in the environment to constructing a visual map ahead of time. But building such a map is very time-consuming, and placing beacons or chips is impractical for all but a tiny handful of applications, rendering such ‘improvements’ effectively inferior to a plain cane.
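The drift problem itself is easy to illustrate. The toy model below (a sketch under our own assumptions; the noise level is arbitrary) treats each pose update as adding a small zero-mean error, so the total error behaves like a random walk and grows with journey length:

    import numpy as np

    rng = np.random.default_rng(0)

    def accumulated_drift(steps, noise_std=0.01):
        """Sum of small per-step pose errors; drift grows roughly with
        the square root of the number of steps."""
        errors = rng.normal(0.0, noise_std, size=(steps, 2))  # (x, y) error per step
        return np.linalg.norm(errors.sum(axis=0))

    print(accumulated_drift(100))     # short journey: drift stays small
    print(accumulated_drift(10_000))  # long journey: ~10x larger on average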

To overcome these limitations, the team developed a computer vision technique that uses an RGB-D camera, a gyroscope, and an accelerometer to measure how the RNA moves and rotates in space. The camera produces both a color image and depth data for each image pixel. The system combines depth data for visual features in the environment with the plane of the floor or ground. Because the floor plane remains observable throughout operation, it serves as a stable reference against which all other data can be compared, significantly reducing pose error.
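One common way to recover a floor plane from RGB-D depth data is a RANSAC-style fit. The sketch below is a generic illustration of that idea under our own assumptions, not the paper's exact algorithm; the recovered plane pins down the camera's height and tilt, the pose components the floor constrains.

    import numpy as np

    def fit_floor_plane(points, iters=200, tol=0.02):
        """RANSAC-style plane fit: repeatedly pick 3 points, form a plane,
        and keep the plane with the most inliers. Returns a unit normal n
        and offset d with n . p + d ~ 0 for floor points."""
        rng = np.random.default_rng(1)
        best_n, best_d, best_count = None, None, 0
        for _ in range(iters):
            p1, p2, p3 = points[rng.choice(len(points), size=3, replace=False)]
            n = np.cross(p2 - p1, p3 - p1)
            if np.linalg.norm(n) < 1e-9:
                continue                       # degenerate (collinear) sample
            n = n / np.linalg.norm(n)
            d = -n @ p1
            count = int(np.sum(np.abs(points @ n + d) < tol))
            if count > best_count:
                best_n, best_d, best_count = n, d, count
        return best_n, best_d

    # |d| is then the camera's height above the floor, and the angle between
    # n and the camera's up axis constrains its roll and pitch.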

A statistical method is then deployed to reduce error still further. An initial estimated pose is used as a ‘seed’ for the statistical generation of numerous other probable poses surrounding it. For each candidate pose, the system computes what the onboard sensor should be measuring according to the floor plan map and compares this prediction with the actual sensor measurement; the candidate that agrees best with the measurement is taken as the refined pose estimate.
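This has the flavor of Monte Carlo localization. A hedged Python sketch follows; the candidate count, the noise scales, and the predict_range callback (which would ray-cast against the floor plan map) are all our own illustrative assumptions, not details from the paper:

    import numpy as np

    def refine_pose(seed_pose, measured_range, predict_range, n_candidates=500):
        """Scatter candidate poses around a seed estimate, predict the sensor
        reading at each from the floor plan map, and return the candidate
        whose prediction best matches the actual measurement."""
        rng = np.random.default_rng(2)
        # Perturb (x, y, heading): ~10 cm in position, ~3 degrees in heading.
        noise = rng.normal(0.0, [0.1, 0.1, np.radians(3.0)], size=(n_candidates, 3))
        candidates = np.asarray(seed_pose) + noise
        predicted = np.array([predict_range(pose) for pose in candidates])
        errors = np.abs(predicted - measured_range)
        return candidates[np.argmin(errors)]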

“In essence, the method uses geometric features such as door frames, hallways, junctions, etc., from the 2D floor plan map to reduce pose estimation errors,” says Professor Cang Ye, an engineer specializing in computer vision and the corresponding author of the paper. Ye is a professor in the Department of Computer Science at Virginia Commonwealth University, USA.

Other attempts at technological improvement of the conventional white cane have also stumbled over ineffective human-machine interfaces. Existing devices tend to use a speech interface that delivers turn-by-turn navigational commands.

“These really are not very helpful to the visually impaired,” Ye adds. “They’re humans walking along at a natural pace, not drivers of cars waiting at a stop light.”

So the team abandoned that approach entirely and instead designed a novel ‘robotic roller tip’ interface at the end of an otherwise conventional white cane. It consists of a rolling tip like that of a ballpoint pen, an electromagnetic clutch, and a motor. Engaging the clutch puts the cane in robotic mode; the motor then rotates the rolling tip to steer the cane in the direction calculated by the onboard computer vision system. A vibrator in the cane subtly conveys the desired direction to the user via a coded vibration pattern.
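A simple proportional control loop gives the flavor of how such a tip might be steered. In the sketch below, the clutch, motor, and vibrator objects and the gain value are hypothetical stand-ins, not the device's actual drivers:

    import math

    K_P = 1.5  # proportional steering gain (illustrative value)

    def wrap(angle):
        """Wrap an angle to (-pi, pi]."""
        return math.atan2(math.sin(angle), math.cos(angle))

    def steer_step(desired_heading, current_heading, clutch, motor, vibrator):
        """One control step: engage robotic mode, turn the roller tip toward
        the heading chosen by the vision system, and cue the user."""
        error = wrap(desired_heading - current_heading)
        clutch.engage()                    # couple the motor to the rolling tip
        motor.set_velocity(K_P * error)    # larger error -> faster correction
        vibrator.pulse(abs(error))         # coded vibration hints at the turn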

If the user swings the device or lifts it off the ground, the onboard sensors detect this instantly and the device automatically switches into plain white-cane mode. It stays in that mode until the tip returns to the ground, giving the whole system an automatic mode-switching capability that spares the user from consciously switching back and forth.
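A few lines suffice to sketch that switching logic; the threshold and the ground-contact signal here are assumptions for illustration, not the device's actual parameters:

    GRAVITY = 9.81  # m/s^2

    def select_mode(accel_z, tip_on_ground, lift_threshold=2.0):
        """If the accelerometer departs far from steady gravity, or the tip
        loses ground contact, fall back to plain white-cane mode; otherwise
        remain in robotic mode."""
        lifted = abs(accel_z - GRAVITY) > lift_threshold or not tip_on_ground
        return "white_cane" if lifted else "robotic"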

Having developed the prototype, the team are now working to reduce its weight and cost.

He Zhang, Lingqiu Jin and Cang Ye, "An RGB-D Camera Based Visual Positioning System for Assistive Navigation by a Robotic Navigation Aid," IEEE/CAA J. Autom. Sinica, vol. 8, no. 8, pp. 1389-1400, Aug. 2021. http://www.ieee-jas.net/en/article/doi/10.1109/JAS.2021.1004084

###

IEEE/CAA Journal of Automatica Sinica aims to publish high-quality, high-interest, far-reaching research achievements globally, and provide an international forum for the presentation of original ideas and recent results related to all aspects of automation.

The Impact Factor of IEEE/CAA Journal of Automatica Sinica is 6.171, ranking in the top 11% (7/63, SCI Q1) of the Automation & Control Systems category, according to the latest Journal Citation Reports released by Clarivate Analytics in 2021. In addition, its latest CiteScore is 11.2, and it has ranked in Q1 of all three categories it belongs to (Information Systems, Control and Systems Engineering, Artificial Intelligence) since 2018.

Why publish with us: Fast and high-quality peer review; simple and effective online submission system; widest possible global dissemination of your research; indexed in SCIE, EI, IEEE, Scopus, Inspec. JAS papers can be found at http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=6570654 or www.ieee-jas.net
