
Human locomotion with reinforcement learning using bioinspired reward reshaping strategies.

Authors :
Nowakowski, Katharine
Carvalho, Philippe
Six, Jean-Baptiste
Maillet, Yann
Nguyen, Anh Tu
Seghiri, Ismail
M'Pemba, Loick
Marcille, Theo
Ngo, Sy Toan
Dao, Tien-Tuan
Source :
Medical & Biological Engineering & Computing; Jan2021, Vol. 59 Issue 1, p243-256, 14p, 3 Color Photographs, 2 Black and White Photographs, 13 Graphs
Publication Year :
2021

Abstract

Recent learning strategies such as reinforcement learning (RL) have favored the transition from applied artificial intelligence to general artificial intelligence. One of the current challenges of RL in healthcare relates to the development of a controller to teach a musculoskeletal model to perform dynamic movements. Several solutions have been proposed. However, there is still a lack of investigations exploring the muscle control problem from a biomechanical point of view. Moreover, no studies using biological knowledge to develop plausible motor control models for pathophysiological conditions make use of reward reshaping. Consequently, the objective of the present work was to design and evaluate specific bioinspired reward function strategies for human locomotion learning within an RL framework. The deep deterministic policy gradient (DDPG) method for a single-agent RL problem was applied. A 3D musculoskeletal model (8 DoF and 22 muscles) of a healthy adult was used. A virtual interactive environment was developed and simulated using the opensim-rl library. Three reward functions were defined for walking, forward falls, and side falls. The training process was performed with Google Cloud Compute Engine. The obtained outcomes were compared to the NIPS 2017 challenge outcomes, experimental observations, and literature data. Regarding learning to walk, simulated musculoskeletal models were able to walk from 18 to 20.5 m for the best solutions. A compensation strategy of muscle activations was revealed. The soleus, tibialis anterior, and vastii muscles are the main actors of the simple forward fall. A higher intensity of muscle activations was also noted after the fall. All kinematics and muscle patterns were consistent with experimental observations and literature data. Regarding the side fall, an intensive level of muscle activation on the expected fall side to unbalance the body was noted.
The obtained outcomes suggest that computational and human resources as well as biomechanical knowledge are needed together to develop and evaluate an efficient and robust RL solution. As perspectives, current solutions will be extended to a larger parameter space in 3D. Furthermore, a stochastic reinforcement learning model will be investigated to account for the uncertainties of the musculoskeletal model and its associated environment, in order to provide a general artificial intelligence solution for human locomotion learning. [ABSTRACT FROM AUTHOR]
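The abstract does not specify the exact form of the three bioinspired reward functions. As an illustration only, the sketch below shows what a shaped walking reward of the kind described might look like: a hypothetical combination of a forward-progress term, an upright-posture penalty, and a muscle-effort penalty as a metabolic-cost proxy. All function names, weights, and the target pelvis height are assumptions, not the authors' actual formulation.

```python
import numpy as np

def shaped_walking_reward(pelvis_velocity_x, pelvis_height, muscle_activations,
                          target_height=0.94,
                          w_progress=1.0, w_posture=2.0, w_effort=0.05):
    """Hypothetical bioinspired shaped reward for a walking task.

    Combines three terms commonly used in locomotion RL:
      - forward progress: reward proportional to pelvis forward velocity
      - posture: quadratic penalty for pelvis height deviating from an
        upright target (assumed 0.94 m here)
      - effort: penalty on summed squared muscle activations, a rough
        proxy for metabolic cost (22 muscles in the paper's model)
    """
    progress = w_progress * pelvis_velocity_x
    posture_penalty = w_posture * (pelvis_height - target_height) ** 2
    effort_penalty = w_effort * float(np.sum(np.square(muscle_activations)))
    return progress - posture_penalty - effort_penalty

# Example: walking forward upright scores higher than standing still
# with the same muscle effort.
activations = np.full(22, 0.3)
r_walk = shaped_walking_reward(1.2, 0.94, activations)
r_stand = shaped_walking_reward(0.0, 0.94, activations)
```

Under this sketch, `r_walk` exceeds `r_stand` because the progress term dominates while posture and effort penalties are identical; the fall-task rewards in the paper would instead reward controlled descent and targeted activation on one side.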

Details

Language :
English
ISSN :
0140-0118
Volume :
59
Issue :
1
Database :
Complementary Index
Journal :
Medical & Biological Engineering & Computing
Publication Type :
Academic Journal
Accession number :
148139445
Full Text :
https://doi.org/10.1007/s11517-020-02309-3