The mechanics of eating are more complex than they might appear. For about a decade, researchers in a lab at the University of Washington have been working to build a robot that can help feed people who can’t eat on their own.
The researchers’ first breakthrough, back when the lab was at Carnegie Mellon University, was a robotic arm that could use a fork to feed someone a marshmallow. Since then, the robot has graduated from that single marshmallow to feeding people full meals, and the team has investigated other aspects of the feeding task along the way.
Until recently, this work has mostly been evaluated in the lab. But last year, researchers deployed the assistive feeding arm in a pair of studies outside the lab. In the first, six users with motor impairments used the robot to feed themselves a meal in a UW cafeteria, an office or a conference room. In the second study, one of those users, Ko, a community researcher and co-author on the research, used the system at home for five days, having it feed him ten meals.
The team presented its findings March 5 at the ACM/IEEE International Conference on Human-Robot Interaction in Melbourne.
“Our past studies have been in the lab because, if you want to evaluate specific system components in isolation, you need to control all other aspects of the meal,” said lead author Nanavati, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “But that doesn’t capture the diverse meal contexts that exist outside the lab. At the end of the day, the goal is to enable people to feed themselves in real environments, so we should also evaluate the system in those environments.”
The system, which researchers dubbed ADA (Assistive Dexterous Arm), consists of a robotic arm that can be affixed to something nearby, such as a power wheelchair or hospital table. The user specifies what bite they want through a web app, and the system then feeds the user that bite autonomously (though users can stop the arm with a “kill button”). The arm has a force sensor and camera to distinguish between foods and to get the food to the user’s mouth.
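To make that interaction concrete, here is a minimal, hypothetical sketch of the kind of control loop such a system might run. The function and object names (detect_food_items, acquire_bite, stop_pressed and so on) are illustrative assumptions for this article, not the actual ADA implementation.

# Hypothetical sketch of an assistive-feeding control loop.
# All names here are illustrative assumptions; they do not
# reflect the real ADA codebase.

import time

def feed_one_bite(arm, camera, force_sensor, web_app):
    # 1. Perceive: the camera segments the plate into candidate food items.
    items = camera.detect_food_items()

    # 2. User choice: the user picks the bite they want through the web app.
    chosen = web_app.wait_for_selection(items)

    # 3. Acquire: the arm picks up the bite, using force feedback to adapt
    #    to soft versus firm foods and to avoid pressing too hard.
    arm.acquire_bite(chosen, force_feedback=force_sensor)

    # 4. Deliver: move toward the user's mouth, checking the camera and the
    #    web app's stop ("kill") button on every control cycle.
    while not arm.at_mouth(camera):
        if web_app.stop_pressed():
            arm.halt()          # the user can stop the arm at any time
            return False
        arm.step_toward_mouth(camera)
        time.sleep(0.05)        # roughly 20 Hz control loop
    return True

The point of the sketch is the division of labor the article describes: the user makes the choices, the robot handles the motion, and a stop control is checked continuously rather than only at the start of a bite.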
In both studies, users successfully fed themselves their meals. In the first study, the robot acquired entrees with around 80% accuracy, which users in another study had identified as the threshold for success. In the second study, the varied circumstances and environments at home (Ko might be eating while watching TV in low light, or while working in bed) hindered the system’s default functionality. But researchers designed the system to be customizable, so Ko was able to control the robot and still feed himself every meal.
The team plans to continue improving the system’s effectiveness and customizability.
“It was a really important step to take the robot out of the lab,” Ko said. “You eat in different environments, and there are little variables that you don’t think about. If the robot is too heavy, it might tilt a table. Or if the lighting isn’t good, the facial recognition could struggle, but lighting is something you really don’t think about when you’re eating.”
Additional co-authors include other doctoral students and a lecturer in the Allen School; the late president of the Tyler Schrenk Foundation, a community researcher; an occupational therapy clinical research lead at Hello Robot; a high school intern in the Allen School while completing this research; undergraduate students in the Allen School while doing this research; and a research scientist assistant and a research scientist in the Allen School while completing this research. Two professors in the Allen School are senior authors. This research was funded in part by a UW CREATE student grant; a UW Allen School Postdoc Research award; the National Science Foundation; the DARPA RACER program; the National Institute of Biomedical Imaging and Bioengineering; and the Office of Naval Research.
For more information, contact Nanavati at [email protected].