Automatic Hierarchical Reinforcement Learning for Efficient Large-Scale Service Composition
- Authors
Guicheng Huang, Hongbing Wang, and Qi Yu
- Subjects
Computer science, Distributed computing, Services computing, Service composition, Machine learning, Adaptability, Reinforcement learning, Web services - Abstract
Developing efficient solutions for automatic service composition has drawn significant attention in services computing. As a composite service typically runs in a dynamic environment, the adaptability of the composition solution is a central concern. Reinforcement Learning (RL) is a commonly used approach for achieving self-adaptability in service composition. However, traditional RL methods cannot guarantee good efficiency on large-scale composition problems. Hierarchical RL (HRL) has emerged as a viable way to address this efficiency issue. Existing HRL methods (e.g., MAXQ) require a task graph, which can be generated by decomposing a composition plan into a task hierarchy. Current approaches that apply HRL to service composition construct this task graph manually, which does not scale to large composition problems. In this paper, we address this issue by systematically integrating automatic task decomposition with MAXQ HRL, resulting in an adaptive composition solution with good efficiency. Our experimental results demonstrate the effectiveness of the proposed service composition approach.
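The task graph the abstract refers to can be pictured as a hierarchy in which composite tasks decompose into subtasks and the leaves are primitive service invocations. The sketch below is purely illustrative, not the paper's algorithm: the `TASK_GRAPH` structure, task names, and the `primitive_subtasks` helper are all hypothetical, chosen only to show the kind of hierarchy a MAXQ-style learner consumes (and which the paper proposes to derive automatically rather than by hand).

```python
# Hypothetical MAXQ-style task hierarchy for a travel-booking composite
# service. Keys are composite tasks; values are their ordered subtasks.
# Tasks absent from the dict are primitive, i.e. concrete service calls.
TASK_GRAPH = {
    "BookTrip": ["ArrangeTravel", "ArrangeStay"],
    "ArrangeTravel": ["SearchFlights", "BookFlight"],
    "ArrangeStay": ["SearchHotels", "BookHotel"],
}

def primitive_subtasks(task, graph):
    """Recursively flatten a composite task into its primitive services."""
    children = graph.get(task)
    if children is None:
        # Primitive task: a single concrete service invocation.
        return [task]
    result = []
    for child in children:
        result.extend(primitive_subtasks(child, graph))
    return result

print(primitive_subtasks("BookTrip", TASK_GRAPH))
# prints the four primitive services in execution order
```

In MAXQ, each composite node in such a hierarchy gets its own subpolicy and value decomposition, so learning happens over the smaller subtask state spaces rather than over the flat composition problem.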
- Published
- 2016