Research

The AHRT Lab puts humans at the center of every algorithm we build. We develop human-aware, context-sensitive AI systems for real-world human-robot collaboration — from understanding human partners through multimodal sensing, to adapting robot behavior through psychologically grounded learning, to ensuring inclusive interaction for all. Three interconnected research thrusts drive this vision.

Human-Robot Teaming

Human-robot teaming is the cornerstone of our research program. We take a full-stack approach — from perception (multimodal human-state estimation) to cognition (human-aware RL) to interaction (inclusive teaming strategies). This includes non-invasive workload estimation, off-policy RL for personalized human-robot teams, and inclusive interaction design for Deaf/Hard-of-Hearing populations. Future work extends to AR-based communication and predictive social robotics.

Key topics:

  • Real-time human workload and fatigue estimation
  • Off-policy RL for human-aware decision making
  • Inclusive teaming for d/DHH individuals (with NTID)
  • AR interfaces for search and rescue
  • Predictive modules for natural conversational robotics
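
To make the workload-estimation idea concrete, here is a minimal sketch of a sliding-window estimator built on heart-rate variability (RMSSD over inter-beat intervals), one common non-invasive physiological proxy. The window size, thresholds, and low/medium/high mapping are illustrative placeholders, not calibrated values from the lab's work.

```python
from collections import deque

class WorkloadEstimator:
    """Toy sliding-window workload proxy from inter-beat intervals (IBIs).

    Lower heart-rate variability (here, RMSSD) is commonly associated with
    higher cognitive workload; this sketch maps RMSSD onto a coarse label.
    All thresholds are hypothetical placeholders.
    """

    def __init__(self, window_size=30):
        self.ibis = deque(maxlen=window_size)  # inter-beat intervals, in ms

    def update(self, ibi_ms):
        self.ibis.append(ibi_ms)

    def rmssd(self):
        # Root mean square of successive differences between IBIs.
        vals = list(self.ibis)
        diffs = [b - a for a, b in zip(vals, vals[1:])]
        if not diffs:
            return None
        return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

    def workload_level(self):
        r = self.rmssd()
        if r is None:
            return "unknown"
        if r < 20:        # placeholder threshold (ms)
            return "high"
        if r < 50:        # placeholder threshold (ms)
            return "medium"
        return "low"

est = WorkloadEstimator()
for ibi in [800, 790, 805, 795, 802, 798]:  # steady rhythm -> low variability
    est.update(ibi)
print(est.workload_level())  # low variability reads as elevated workload
```

A real pipeline would fuse several such physiological and behavioral streams; the single-signal version just shows the real-time, windowed structure.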

Funding: NSF AWARE-AI, NSF NRT (IPP)

Human-Aware Reinforcement Learning

Robots that learn with people, not just around them. RL’s application to real-world human-robot systems is constrained by sample inefficiency, distributional shifts, and reward misalignment. We address these barriers through offline policy reuse from human-generated data, anxiety-inspired models for cautious adaptation, and psychologically motivated intrinsic rewards — including vicarious fear — for safer exploration alongside human partners. Our long-term vision is a neuro-cognitive architecture integrating fear, anxiety, curiosity, and trust to regulate adaptive robotic behavior in human environments.

Key topics:

  • Offline policy evaluation and data reuse from human-robot interaction
  • Distributional shift detection via human anxiety models
  • Intrinsic motivation and vicarious conditioning for safe RL
  • Neuro-cognitive architectures grounded in active inference
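
One way to picture the fear-like intrinsic signals described above is as a reward-shaping term that penalizes proximity to a human partner before any unsafe contact occurs. The sketch below is a crude stand-in for that idea; the Gaussian penalty shape, the `fear_weight`, and the `safe_radius` are all hypothetical choices for illustration, not the lab's actual models.

```python
import math

def shaped_reward(extrinsic, robot_pos, human_pos,
                  fear_weight=1.0, safe_radius=1.5):
    """Illustrative fear-like penalty for cautious exploration.

    The penalty is near zero when the robot is far from the human and
    rises steeply inside the (hypothetical) safe radius, discouraging
    risky states before a collision can happen.
    """
    dist = math.dist(robot_pos, human_pos)
    fear = math.exp(-(dist / safe_radius) ** 2)  # smooth proximity risk
    return extrinsic - fear_weight * fear

# Far from the human partner: shaping is nearly neutral.
far = shaped_reward(1.0, (5.0, 0.0), (0.0, 0.0))
# Close to the human partner: the fear term dominates.
near = shaped_reward(1.0, (0.3, 0.0), (0.0, 0.0))
print(far > near)  # the shaped reward falls as proximity increases
```

In an actual agent this term would be learned or conditioned (e.g., vicariously, from observed human reactions) rather than hand-coded, but the shaping mechanics are the same.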

Funding: AFRL Summer Faculty Fellowship, Sophic Computing

Human-Centered Sensing & Modeling

Understanding human partners is the foundation of effective teaming. We integrate behavioral, physiological, and environmental signals to build robust models of human state — but multimodal systems face challenges in explainability, sensor robustness, data scarcity, and computational cost. We develop methods to regulate modality reliance in neural networks, compress models during training for edge deployment near human collaborators, and generate synthetic cross-modality data to close the human-data gap.

Key topics:

  • Quantifying and regulating modality dependence in human-signal fusion networks
  • Training-time model compression with structural regularization
  • Cross-modality synthetic data generation (e.g., SAR from near-infrared)
  • GAN architectures with neural architecture search
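
As a concrete (if simplified) picture of quantifying modality dependence, the ablation-style sketch below zeroes out each modality in a toy linear fusion model and measures how much the fused output shifts. The linear fusion and the normalized dependence score are hypothetical stand-ins for illustration, not the lab's actual measures.

```python
import numpy as np

def fused_score(weights, modalities):
    """Toy linear fusion: one weight vector per modality, scores summed."""
    return sum(w @ m for w, m in zip(weights, modalities))

def modality_dependence(weights, modalities):
    """Ablate (zero out) each modality in turn and record how much the
    fused output changes; normalize so the scores sum to one."""
    full = fused_score(weights, modalities)
    deps = []
    for i in range(len(modalities)):
        ablated = [np.zeros_like(m) if j == i else m
                   for j, m in enumerate(modalities)]
        deps.append(abs(full - fused_score(weights, ablated)))
    total = sum(deps)
    return [d / total for d in deps] if total else deps

# Two synthetic modalities (say, physiological vs. behavioral features);
# the first weight vector is deliberately larger, so the fused model
# should lean heavily on modality 0.
w = [np.array([2.0, 2.0]), np.array([0.1, 0.1])]
x = [np.array([1.0, 1.0]), np.array([1.0, 1.0])]
print(modality_dependence(w, x))  # modality 0 carries most of the dependence
```

Once dependence can be measured this way, regulating it (e.g., penalizing over-reliance on a fragile sensor during training) becomes an optimization target rather than an afterthought.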

Funding: NGA, Army Research Lab, DTRA