Newswise — Researchers from two Johns Hopkins divisions — the Applied Physics Laboratory in Laurel, Maryland, and the Whiting School of Engineering in Baltimore — have collaborated to develop a navigation system that helps blind or visually impaired users move through their surroundings with greater confidence and accuracy.
The system leverages artificial intelligence (AI) to map environments, track users’ locations and provide real-time guidance. It fuses information from depth sensors with RGB data — the red, green and blue channels that imaging sensors use to capture visual information — to produce detailed semantic maps of the environment, allowing the navigation system not only to recognize obstacles but also to identify specific objects and their properties. This capability lets users query the system for guidance to specific objects or features in their surroundings, making navigation more intuitive and effective.
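To make the idea of a semantic map concrete, the sketch below shows in rough terms how per-pixel labels from an RGB segmentation model could be fused with depth readings into a queryable map of labeled object positions. The camera intrinsics, class labels and SemanticMap interface are illustrative assumptions, not a description of APL's actual pipeline.

```python
# Illustrative sketch only: fusing RGB segmentation labels with depth into a
# queryable semantic map. Intrinsics, labels and the API are hypothetical.
import numpy as np

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5  # assumed pinhole camera intrinsics

def backproject(u, v, depth_m):
    """Convert a pixel (u, v) with metric depth into a 3-D camera-frame point."""
    z = depth_m
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

class SemanticMap:
    """Stores labeled 3-D object positions so the user can query by object class."""
    def __init__(self):
        self.objects = {}  # label -> list of 3-D positions

    def update(self, label_image, depth_image):
        # label_image: per-pixel class labels from an RGB segmentation model
        # depth_image: per-pixel depth in meters from the depth sensor
        for label in np.unique(label_image):
            if label == 0:                 # 0 = background in this sketch
                continue
            vs, us = np.nonzero(label_image == label)
            d = depth_image[vs, us]
            valid = d > 0
            if not valid.any():
                continue
            # Use median pixel coordinates and depth as a rough object centroid
            u, v, z = np.median(us[valid]), np.median(vs[valid]), np.median(d[valid])
            self.objects.setdefault(int(label), []).append(backproject(u, v, z))

    def query(self, label):
        """Return the nearest known instance of the requested object class."""
        hits = self.objects.get(label, [])
        return min(hits, key=lambda p: np.linalg.norm(p)) if hits else None

# Example usage with synthetic data:
labels = np.zeros((480, 640), dtype=int)
labels[100:150, 300:360] = 3              # pretend class 3 = "door"
depth = np.full((480, 640), 2.5)
m = SemanticMap()
m.update(labels, depth)
print(m.query(3))                         # approximate 3-D position of the "door"
```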
What makes this system particularly innovative is its ability to significantly enhance the interpretability of the environment for users, explained lead researcher Nicolas Norena Acosta, a robotics research software engineer at APL.
“Traditional navigation systems for the visually impaired often rely on basic sensor-based mapping, which can only distinguish between occupied and unoccupied spaces,” he said. “The new semantic mapping approach, however, provides a much richer understanding of the environment, enabling high-level human-computer interactions.”
Current prosthetic vision devices can only stimulate a small area of vision, providing minimal visual feedback that’s not robust enough for users to navigate their environment safely and independently. Norena Acosta and his team — Chigozie Ewulum, Michael Pekala and Seth Billings from APL and Marin Kobilarov from the Whiting School’s Department of Mechanical Engineering — enhanced this basic visual feedback with additional haptic, visual and auditory sensory inputs to create a more comprehensive navigation system.
The haptic feedback involves an APL-developed headband that vibrates in different places to indicate the direction of obstacles or the path the user should follow. For example, if the path is to the right, the right side of the headband will vibrate. The auditory feedback uses voice prompts and spatial sound to give verbal directions and alerts about the surroundings.
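As a rough illustration of that cueing logic, the sketch below maps a target bearing to one of several vibration motors and builds a matching voice prompt. The motor count, intensity scaling and prompt wording are assumptions made for the example, not APL's hardware interface.

```python
# Minimal sketch: turning a target bearing into a headband vibration cue plus a
# spoken prompt. Motor layout and thresholds are illustrative assumptions.
NUM_MOTORS = 8  # assumed: motors spaced evenly around the headband

def bearing_to_motor(bearing_deg):
    """Map a bearing (0 = straight ahead, positive = right) to a motor index."""
    sector = 360.0 / NUM_MOTORS
    return int(((bearing_deg % 360.0) + sector / 2) // sector) % NUM_MOTORS

def haptic_cue(bearing_deg, distance_m):
    """Closer targets get stronger vibration; the bearing selects the motor."""
    intensity = max(0.2, min(1.0, 1.0 / max(distance_m, 0.5)))
    return {"motor": bearing_to_motor(bearing_deg), "intensity": round(intensity, 2)}

def voice_prompt(bearing_deg, distance_m, label="path"):
    """Build a simple verbal direction to accompany the vibration cue."""
    side = "ahead" if abs(bearing_deg) < 15 else ("to your right" if bearing_deg > 0 else "to your left")
    return f"{label} {side}, about {distance_m:.0f} meters"

print(haptic_cue(45.0, 2.0))    # path to the front-right -> right-side motor vibrates
print(voice_prompt(45.0, 2.0))  # "path to your right, about 2 meters"
```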
The combined sensory inputs to the system are also translated into visual feedback that enhances the user’s ability to perceive obstacles and navigate effectively. The system provides a clear, simplified view of the environment, highlighting only the most critical information needed to avoid obstacles and move safely.
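One simple way to picture that kind of pared-down view is to collapse a dense depth image into a coarse grid that flags only nearby obstacles, as in the sketch below; the grid size and distance threshold are illustrative assumptions rather than the system's actual parameters.

```python
# Sketch of reducing a dense depth image to a coarse "critical obstacles only"
# view. Grid dimensions and the nearness threshold are assumptions.
import numpy as np

def simplified_view(depth_image, grid=(6, 10), near_m=1.5):
    """Return a low-resolution boolean grid: True where a nearby obstacle sits."""
    h, w = depth_image.shape
    gh, gw = grid
    view = np.zeros(grid, dtype=bool)
    for i in range(gh):
        for j in range(gw):
            cell = depth_image[i * h // gh:(i + 1) * h // gh,
                               j * w // gw:(j + 1) * w // gw]
            valid = cell[cell > 0]
            view[i, j] = valid.size > 0 and valid.min() < near_m
    return view

# Example: a synthetic depth frame with one close obstacle on the right side
frame = np.full((480, 640), 4.0)
frame[200:300, 500:600] = 0.8
print(simplified_view(frame).astype(int))
```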
“The challenge was creating a system that could synchronize and process multiple types of sensory data in real time,” Norena Acosta explained. “Accurately integrating the visual, haptic and auditory feedback required sophisticated algorithms and robust computing power, as well as advanced AI techniques.”
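To give a flavor of the synchronization problem Norena Acosta describes, the toy sketch below buffers the most recent timestamped sample from each sensor stream and fuses them only when all streams agree in time; the stream names and timing tolerance are hypothetical.

```python
# Toy sketch of multi-stream synchronization: keep the latest timestamped
# reading per modality and fuse only when all streams are fresh enough.
import time

class StreamSynchronizer:
    """Buffers the most recent sample per sensor stream and fuses aligned sets."""
    def __init__(self, streams, max_skew_s=0.05):
        self.latest = {name: None for name in streams}  # name -> (timestamp, data)
        self.max_skew_s = max_skew_s

    def push(self, name, data, timestamp=None):
        self.latest[name] = (timestamp or time.monotonic(), data)

    def fused(self):
        """Return one sample per stream if all are within max_skew_s, else None."""
        if any(v is None for v in self.latest.values()):
            return None
        times = [t for t, _ in self.latest.values()]
        if max(times) - min(times) > self.max_skew_s:
            return None
        return {name: data for name, (_, data) in self.latest.items()}

sync = StreamSynchronizer(["rgb", "depth", "imu"])
now = time.monotonic()
sync.push("rgb", "frame", now)
sync.push("depth", "cloud", now + 0.01)
sync.push("imu", "pose", now + 0.02)
print(sync.fused())  # all within 50 ms -> fused dict; otherwise None
```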
The research was presented in April at SPIE Defense + Commercial Sensing 2024. The system is currently being tested in a clinical trial, with results expected this summer.
This work is funded by the National Eye Institute to capitalize on recent advances in computer vision — including developments in object recognition, depth sensing and simultaneous localization and mapping technologies — to augment the capabilities of commercial retinal prostheses.
Billings, the principal investigator of the effort, said that a robust, intuitive navigation aid like this system has the potential to significantly improve the independence and mobility of its users.
“The potential impact of this work on patient populations is substantial,” said Billings. “This could lead to greater social inclusion and participation in daily activities, ultimately enhancing the overall quality of life for blind and visually impaired individuals.”
Research reported in this publication was supported by the National Eye Institute of the National Institutes of Health under Award Number R01EY029741. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.