This theme is the cornerstone of our research program. We take a full-stack approach — from perception (multimodal human-state estimation) to cognition (human-aware reinforcement learning, RL) to interaction (inclusive teaming strategies). This includes non-invasive workload estimation, off-policy RL for personalized human-robot teams, and inclusive interaction design for Deaf/Hard-of-Hearing populations. Future work extends to AR-based communication and predictive social robotics.
Funding: NSF AWARE-AI, NSF NRT (IPP)
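To make the perception layer concrete, here is a minimal, purely illustrative sketch of non-invasive workload estimation by late fusion of normalized physiological signals. The signal names, weights, and linear fusion rule are assumptions for illustration, not the lab's actual estimator.

```python
import numpy as np

def workload_score(heart_rate, pupil_diameter, blink_rate,
                   weights=(0.5, 0.3, 0.2)):
    """Fuse physiological signals (each pre-normalized to [0, 1]) into a
    scalar workload estimate via a weighted average (late fusion)."""
    features = np.clip([heart_rate, pupil_diameter, blink_rate], 0.0, 1.0)
    return float(np.dot(weights, features))

# A calm operator vs. a heavily loaded one.
low = workload_score(0.2, 0.1, 0.3)    # low estimated workload
high = workload_score(0.9, 0.8, 0.7)   # high estimated workload
```

In practice the fixed weights would be replaced by a learned model, but the fusion structure is the same.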
Robots that learn with people, not just around them. RL’s application to real-world human-robot systems is constrained by sample inefficiency, distributional shifts, and reward misalignment. We address these barriers through offline policy reuse from human-generated data, anxiety-inspired models for cautious adaptation, and psychologically motivated intrinsic rewards — including vicarious fear — for safer exploration alongside human partners. Our long-term vision is a neuro-cognitive architecture integrating fear, anxiety, curiosity, and trust to regulate adaptive robotic behavior in human environments.
Funding: AFRL Summer Faculty Fellowship, Sophic Computing
Understanding human partners is the foundation of effective teaming. We integrate behavioral, physiological, and environmental signals to build robust models of human state — but multimodal systems face challenges in explainability, sensor robustness, data scarcity, and computational cost. We develop methods to regulate modality reliance in neural networks, compress models during training for edge deployment near human collaborators, and generate synthetic cross-modality data to close the human-data gap.
Funding: NGA, Army Research Lab, DTRA
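One common way to regulate modality reliance in a multimodal network (offered here as a generic sketch, not necessarily the lab's method) is modality dropout: during training, entire sensor streams are randomly zeroed so the fused model cannot over-depend on any single modality and degrades gracefully when a sensor fails.

```python
import numpy as np

def modality_dropout(features, p_drop=0.3, rng=np.random.default_rng(0)):
    """Randomly zero out whole modalities during training.

    `features` maps modality name -> feature vector. Each modality is
    dropped independently with probability `p_drop`, but at least one
    modality is always kept so the fused input is never empty."""
    out = {}
    for name, vec in features.items():
        keep = rng.random() >= p_drop
        out[name] = vec if keep else np.zeros_like(vec)
    if all(not v.any() for v in out.values()):
        survivor = rng.choice(list(features))
        out[survivor] = features[survivor]
    return out

feats = {"audio": np.ones(4), "video": np.ones(4), "physio": np.ones(4)}
train_batch = modality_dropout(feats, p_drop=0.3)
```

At inference time the dropout is disabled (`p_drop=0.0`), and the robustness it induced carries over to real sensor outages.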