Newswise — In the high-stakes world of aviation, a pilot's ability to perform under stress can mean the difference between a safe flight and disaster. Comprehensive and precise training is crucial to equip pilots with the skills needed to handle these challenging situations.
Pilot trainers rely on augmented reality (AR) systems to guide pilots through various scenarios so they learn the appropriate actions. But those systems work best when they are tailored to the mental state of the individual trainee.
Enter HuBar, a novel visual analytics tool designed to summarize and compare task performance sessions in AR — such as AR-guided simulated flights — through the analysis of performer behavior and cognitive workload.
By providing deep insights into pilot behavior and mental states, HuBar enables researchers and trainers to identify patterns, pinpoint areas of difficulty, and optimize AR-assisted training programs for improved learning outcomes and real-world performance.
HuBar was developed by a research team from the NYU Tandon School of Engineering, which will present it at the 2024 IEEE Visualization and Visual Analytics Conference on October 17, 2024.
“While pilot training is one potential use case, HuBar isn't just for aviation,” explained Claudio Silva, NYU Tandon Institute Professor in the Computer Science and Engineering (CSE) Department, who led the research with collaboration from Northrop Grumman Corporation (NGC). “HuBar visualizes diverse data from AR-assisted tasks, and this comprehensive analysis leads to improved performance and learning outcomes across various complex scenarios.”
“HuBar could help improve training in surgery, military operations and industrial tasks,” said Silva, who is also the co-director of the Visualization and Data Analytics Research Center (VIDA) at NYU.
The team introduced HuBar in a paper that demonstrates its capabilities using aviation as a case study, analyzing data from multiple helicopter co-pilots in an AR flying simulation. The team also produced a video about the system.
Focusing on two pilot subjects, the system revealed striking differences: one subject maintained mostly optimal attention states with few errors, while the other experienced underload states and made frequent mistakes.
HuBar's detailed analysis, including video footage, showed that the underperforming co-pilot often consulted a manual, indicating less familiarity with the task. Ultimately, HuBar can enable trainers to pinpoint specific areas where co-pilots struggle and understand why, providing insights to improve AR-assisted training programs.
What makes HuBar unique is its ability to analyze non-linear tasks where different step sequences can lead to success, while integrating and visualizing multiple streams of complex data simultaneously.
This includes brain activity measured by functional near-infrared spectroscopy (fNIRS), body movements captured by inertial measurement units (IMUs), gaze tracking, task procedures, errors, and mental-workload classifications. HuBar's comprehensive approach allows for a holistic analysis of performer behavior in AR-assisted tasks, enabling researchers and trainers to identify correlations between cognitive states, physical actions, and task performance across various task completion paths.
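To make that idea concrete, here is a minimal Python sketch of how such time-aligned streams might be represented and cross-referenced. The names, fields, and labels are illustrative assumptions for this article, not HuBar's actual data model or code.

```python
from dataclasses import dataclass, field

# Hypothetical container for one AR task session (assumed structure;
# not HuBar's actual data model). Each stream is a list of
# (start_sec, end_sec, label) intervals on the session's shared clock.
@dataclass
class Session:
    performer_id: str
    workload: list = field(default_factory=list)  # fNIRS-derived: "underload"/"optimal"/"overload"
    steps: list = field(default_factory=list)     # procedure steps performed
    errors: list = field(default_factory=list)    # flagged mistakes

def events_during(intervals, t0, t1):
    """Return labels of intervals overlapping the window [t0, t1]."""
    return [lbl for (s, e, lbl) in intervals if s < t1 and e > t0]

# Example: which workload states co-occur with each error?
session = Session(
    performer_id="copilot_A",
    workload=[(0, 120, "optimal"), (120, 200, "underload")],
    steps=[(0, 60, "pre-flight check"), (60, 200, "approach")],
    errors=[(150, 155, "missed callout")],
)
for (s, e, label) in session.errors:
    print(label, "->", events_during(session.workload, s, e))
# Prints: missed callout -> ['underload']
```

Aligning every stream to one clock is what makes the cross-stream correlations described above possible, whatever order the performer tackles the steps in.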
HuBar's interactive visualization system also facilitates comparison across different sessions and performers, making it possible to discern patterns and anomalies in complex, non-sequential procedures that might otherwise go unnoticed in traditional analysis methods.
"We can now see exactly when and why a person might become mentally overloaded or dangerously underloaded during a task," said Sonia Castelo, VIDA Research Engineer, Ph.D. student in VIDA, and the HuBar paper’s lead author. "This kind of detailed analysis has never been possible before across such a wide range of applications. It's like having X-ray vision into a person's mind and body during a task, delivering information to tailor AR assistance systems to meet the needs of an individual user.”
As AR systems – including headsets like the Microsoft HoloLens, Meta Quest and Apple Vision Pro – become more sophisticated and ubiquitous, tools like HuBar will be crucial for understanding how these technologies affect human performance and cognitive load.
"The next generation of AR training systems might adapt in real-time based on a user's mental state," said Joao Rulff, a Ph.D. student in VIDA who worked on the project. "HuBar is helping us understand exactly how that could work across diverse applications and complex task structures."
HuBar is part of the research Silva is pursuing under the Defense Advanced Research Projects Agency (DARPA) Perceptually-enabled Task Guidance (PTG) program. With the support of a $5 million DARPA contract, the NYU group aims to develop AI technologies that help people perform complex tasks, making these users more versatile by expanding their skillsets and more proficient by reducing their errors. The pilot data in this study came from NGC as part of the DARPA PTG program.
In addition to Silva, Castelo and Rulff, the paper’s authors are: Erin McGowan, Ph.D. Researcher, VIDA; Guande Wu, Ph.D. student, VIDA; Iran R. Roman, Postdoctoral Researcher, NYU Steinhardt; Roque López, Research Engineer, VIDA; Bea Steers, Research Engineer, NYU Steinhardt; Qi Sun, Assistant Professor of CSE, NYU; Juan Bello, Professor, NYU Tandon and NYU Steinhardt; Bradley Feest, Lead Data Scientist, Northrop Grumman Corporation; Michael Middleton, Applied AI Software Engineer and Researcher, Northrop Grumman Corporation, and Ph.D. student, NYU Tandon; and Ryan McKendrick, Applied Cognitive Scientist, Northrop Grumman Corporation.
arXiv:2407.12260 [cs.HC]