1. ARMCHAIR: integrated inverse reinforcement learning and model predictive control for human-robot collaboration
- Authors
Caregnato-Neto, Angelo; Siebert, Luciano Cavalcante; Zgonnikov, Arkady; Maximo, Marcos Ricardo Omena de Albuquerque; and Afonso, Rubens Junqueira Magalhães
- Subjects
Computer Science - Robotics; Computer Science - Human-Computer Interaction; Computer Science - Multiagent Systems; Electrical Engineering and Systems Science - Systems and Control
- Abstract
One of the key issues in human-robot collaboration is the development of computational models that allow robots to predict and adapt to human behavior. Much progress has been achieved in developing such models, as well as control techniques that address the autonomy problems of motion planning and decision-making in robotics. However, integrating computational models of human behavior with such control techniques still poses a major challenge, resulting in a bottleneck for efficient collaborative human-robot teams. In this context, we present a novel architecture for human-robot collaboration: Adaptive Robot Motion for Collaboration with Humans using Adversarial Inverse Reinforcement Learning (ARMCHAIR). Our solution leverages adversarial inverse reinforcement learning and model predictive control to compute optimal trajectories and decisions for a mobile multi-robot system that collaborates with a human in an exploration task. During the mission, ARMCHAIR operates without human intervention, autonomously identifying when the human needs support and acting accordingly. Our approach also explicitly addresses the network connectivity requirement of the human-robot team. Extensive simulation-based evaluations demonstrate that ARMCHAIR allows a group of robots to safely support a simulated human in an exploration scenario, preventing collisions and network disconnections, and improving the overall performance of the task.
- Published
2024
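The core idea of combining a reward recovered by inverse reinforcement learning with model predictive control can be sketched roughly as follows. This is an illustrative toy, not the authors' implementation: the reward function below is a hypothetical stand-in for one learned via adversarial IRL, and the dynamics are a trivial single integrator.

```python
import numpy as np

def learned_reward(state, goal):
    """Stand-in for an AIRL-recovered reward: higher when closer to the goal."""
    return -np.linalg.norm(state - goal)

def mpc_step(state, goal, candidate_actions, horizon=5):
    """Pick the action whose constant-action rollout maximizes cumulative reward.

    A real MPC would optimize a full action sequence under dynamics and
    constraints (e.g., collision avoidance, network connectivity); here we
    enumerate a few constant-action rollouts to keep the sketch minimal.
    """
    best_action, best_value = None, -np.inf
    for action in candidate_actions:
        s, value = state.copy(), 0.0
        for _ in range(horizon):
            s = s + action  # trivial single-integrator dynamics
            value += learned_reward(s, goal)
        if value > best_value:
            best_action, best_value = action, value
    return best_action

state = np.array([0.0, 0.0])
goal = np.array([3.0, 0.0])
actions = [np.array([1.0, 0.0]), np.array([-1.0, 0.0]), np.array([0.0, 1.0])]
print(mpc_step(state, goal, actions))  # selects the action moving toward the goal
```

In the paper's setting, the learned reward would encode a model of the human's behavior, letting the MPC anticipate and support the human rather than merely track a fixed objective.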