Learning Task-Agnostic Action Spaces for Movement Optimization.
- Source :
- IEEE Transactions on Visualization & Computer Graphics; Dec 2022, Vol. 28 Issue 12, p4700-4712, 13p
- Publication Year :
- 2022
Abstract
- We propose a novel method for exploring the dynamics of physically based animated characters, and learning a task-agnostic action space that makes movement optimization easier. Like several previous works, we parameterize actions as target states, and learn a short-horizon goal-conditioned low-level control policy that drives the agent's state towards the targets. Our novel contribution is that with our exploration data, we are able to learn the low-level policy in a generic manner and without any reference movement data. Trained once for each agent or simulation environment, the policy improves the efficiency of optimizing both trajectories and high-level policies across multiple tasks and optimization algorithms. We also contribute novel visualizations that show how using target states as actions makes optimized trajectories more robust to disturbances; this manifests as wider optima that are easy to find. Due to its simplicity and generality, our proposed approach should provide a building block that can improve a large variety of movement optimization methods and applications. [ABSTRACT FROM AUTHOR]
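- The sketch below illustrates the general idea described in the abstract: a pretrained, goal-conditioned low-level policy executes each high-level action, where that action is interpreted as a target state rather than a raw control signal (e.g. joint torques). It is not the authors' code; the class name, the `env.state`/`env.step()` interface, and the `low_level_policy(state, target)` signature are hypothetical placeholders chosen for illustration.

```python
class TargetStateActionWrapper:
    """Illustrative sketch (not the paper's implementation): expose a
    task-agnostic action space in which each high-level action is a
    target state, executed by a pretrained short-horizon
    goal-conditioned low-level policy."""

    def __init__(self, env, low_level_policy, horizon=10):
        # Assumptions: `env` offers reset()/step(low_level_action) and
        # exposes its current state as `env.state`;
        # `low_level_policy(state, target)` returns a low-level action
        # (e.g. torques). Both interfaces are hypothetical.
        self.env = env
        self.low_level_policy = low_level_policy
        self.horizon = horizon

    def reset(self):
        return self.env.reset()

    def step(self, target_state):
        # Roll out the low-level policy for a short horizon, driving the
        # agent's state toward the target; accumulate the task reward.
        total_reward, done = 0.0, False
        state = self.env.state
        for _ in range(self.horizon):
            low_level_action = self.low_level_policy(state, target_state)
            state, reward, done, info = self.env.step(low_level_action)
            total_reward += reward
            if done:
                break
        return state, total_reward, done, {}
```

- With such a wrapper, a trajectory optimizer or high-level policy searches over target states instead of low-level controls, which, per the abstract, yields wider and more easily found optima.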
Details
- Language :
- English
- ISSN :
- 1077-2626
- Volume :
- 28
- Issue :
- 12
- Database :
- Complementary Index
- Journal :
- IEEE Transactions on Visualization & Computer Graphics
- Publication Type :
- Academic Journal
- Accession number :
- 160687537
- Full Text :
- https://doi.org/10.1109/TVCG.2021.3100095