Research

Human-Aware Reinforcement Learning

Understanding the human state is only one component of improving human-robot teams. Robots also need intelligent methods for using human-state information to inform action selection. Typical reinforcement-learning paradigms incorporate human actions into the system when finding robot policies, but not the quality of those actions, namely the factors that impact human performance. This work focuses on augmenting a robot's observation space with informative human-state information to generate adaptation strategies that are individualized to a human. Individualized adaptation strategies are expected to improve trust, transparency, and overall team performance.
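The core idea of augmenting the observation space can be illustrated with a minimal sketch. The wrapper class, the toy environment, and the single workload feature below are all illustrative assumptions, not the actual system described here:

```python
# Sketch: augmenting a robot's observation with an estimated human state.
# HumanStateAugmentedEnv, ToyEnv, and the workload feature are hypothetical.
import numpy as np

class HumanStateAugmentedEnv:
    """Wraps a base environment so the robot's policy also observes
    human-state features (e.g., an estimated workload score)."""

    def __init__(self, base_env, human_state_fn):
        self.base_env = base_env
        self.human_state_fn = human_state_fn  # returns np.ndarray of features

    def reset(self):
        obs = self.base_env.reset()
        return np.concatenate([obs, self.human_state_fn()])

    def step(self, action):
        obs, reward, done = self.base_env.step(action)
        # Append the current human-state estimate so the policy can adapt
        # its action selection to the individual human.
        return np.concatenate([obs, self.human_state_fn()]), reward, done

class ToyEnv:
    """Minimal stand-in task environment with a 3-D observation."""
    def reset(self):
        return np.zeros(3)
    def step(self, action):
        return np.ones(3), 0.0, False

# A fixed workload estimate of 0.7 stands in for a live estimator.
env = HumanStateAugmentedEnv(ToyEnv(), lambda: np.array([0.7]))
obs = env.reset()
print(obs.shape)  # 4-dimensional: 3 task features + 1 human-state feature
```

Any standard RL algorithm can then be trained on the augmented observations, so the learned policy conditions on the human's state rather than on task state alone.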


Robust Human-State Estimation

Robust human modeling is central to providing a robot with an essential understanding of its human teammate. Typical approaches model (estimate) only one modality: behavioral information (e.g., human position), task information (e.g., human-activity recognition), or internal-state information (e.g., workload, emotions), using general machine-learning algorithms that process intrusive sensor data (e.g., EEG). A more robust approach is to develop continual-learning models that consider all three information modalities using non-invasive sensors, while extracting useful context to improve model accuracy.
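The multimodal idea can be sketched as a simple fusion step that combines the three modalities (plus optional context) into one feature vector for a downstream estimator. The feature choices and dimensions below are assumptions for illustration only:

```python
# Sketch: fusing behavioral, task, and internal-state features for a
# human-state estimator. All feature names and sizes are hypothetical.
import numpy as np

def fuse_modalities(behavioral, task, internal, context=None):
    """Concatenate behavioral features (e.g., position), task features
    (e.g., an activity one-hot), and internal-state features (e.g., a
    workload estimate), optionally appending contextual features."""
    parts = [np.asarray(behavioral, dtype=float),
             np.asarray(task, dtype=float),
             np.asarray(internal, dtype=float)]
    if context is not None:
        parts.append(np.asarray(context, dtype=float))
    return np.concatenate(parts)

# Example: 2-D position, 3-class activity one-hot, scalar workload estimate.
x = fuse_modalities([0.4, 1.2], [0, 1, 0], [0.65])
print(x.shape)  # (6,)
```

A continual-learning model would then update on a stream of such fused vectors, rather than being trained once on a single modality.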


Inclusive Social Robotics

Social robots have largely been developed for typically developing individuals, but recent focus has shifted toward making robots more inclusive. The primary research thrusts are to understand how robots can effectively communicate with Deaf and Hard-of-Hearing (DHH) individuals and how robots can be integrated into senior living centers to improve quality of life.