101. MDP-Based Adaptive Motion Planning for Autonomous Robot Operations Under Degraded Conditions
- Subjects
Unmanned Ground Vehicles, Robotics, Fault-tolerant Planning, Motion Planning, Markov Decision Process
- Abstract
Autonomous mobile robots (AMRs), such as ground and aerial vehicles, may encounter internal failures and external disturbances when deployed in real-world scenarios, compromising the success of a mission. This thesis proposes an online learning method that adapts the motion planner so the robot can recover and continue an operation after a change in its dynamics. The proposed framework builds on the Markov Decision Process (MDP) and leverages the residual, defined in this work as the difference between the predicted and the actual state, to update the transition probabilities online and, in turn, update the optimal MDP policy. To keep the system safe during learning, we propose a chi-squared-based dynamic learning rate that is event-triggered when the robot approaches an unsafe region of the workspace. The framework can also distinguish between external disturbances and internal failures by tracking the robot's state in both a local and a fixed frame. Finally, we propose a state-machine-based resetting procedure that returns to a previous MDP model once the fault or disturbance is no longer present. This framework for resilient planning of impaired vehicles is validated both in simulations and in experiments on unmanned ground vehicles (UGVs) in a cluttered environment. We also show an extension of the framework to multitask cooperative missions, in which tasks are rebalanced across the robot network based on each robot's impaired dynamics.
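The abstract describes the adaptation loop only at a high level. The sketch below is one possible reading of it, not the thesis implementation: it assumes a tabular MDP with a transition tensor T[s, a, s'], substitutes a simple distance trigger for the chi-squared test mentioned above, and all function and parameter names (residual, dynamic_learning_rate, update_transitions, trigger_distance) are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the thesis code): update MDP transition probabilities
# online from the residual between predicted and observed states, with a
# learning rate that grows as the robot nears an unsafe region.

def residual(predicted_state, actual_state):
    """Residual as defined in the abstract: predicted minus actual state."""
    return np.asarray(predicted_state) - np.asarray(actual_state)

def dynamic_learning_rate(base_rate, distance_to_unsafe, trigger_distance=1.0):
    """Event-triggered rate: speed up adaptation near unsafe regions.
    (A plain distance trigger stands in for the chi-squared test here.)"""
    if distance_to_unsafe < trigger_distance:
        return min(1.0, base_rate * trigger_distance / max(distance_to_unsafe, 1e-6))
    return base_rate

def update_transitions(T, s, a, s_observed, alpha):
    """Shift probability mass in T[s, a, :] toward the observed next state."""
    target = np.zeros(T.shape[2])
    target[s_observed] = 1.0
    T[s, a, :] = (1.0 - alpha) * T[s, a, :] + alpha * target
    T[s, a, :] /= T[s, a, :].sum()  # keep a valid probability distribution
    return T

# Toy usage with a 3-state, 2-action MDP (illustrative only).
T = np.full((3, 2, 3), 1.0 / 3.0)
r = residual([1.0, 0.0], [0.8, 0.1])                     # predicted vs. observed pose
alpha = dynamic_learning_rate(0.1, distance_to_unsafe=0.4)
if np.linalg.norm(r) > 0.05:                             # adapt only on significant model error
    T = update_transitions(T, s=0, a=1, s_observed=2, alpha=alpha)
```

After such an update, the optimal policy would be recomputed (e.g., by value iteration) on the adapted transition model, which is the step the abstract refers to as updating the optimal MDP policy.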
- Published
2021