
Reinforcement learning for Hybrid Disassembly Line Balancing Problems.

Authors :
Wang, Jiacun
Xi, GuiPeng
Guo, XiWang
Liu, Shixin
Qin, ShuJin
Han, Henry
Source :
Neurocomputing, Feb. 2024, Vol. 569.
Publication Year :
2024

Abstract

With the rapid development of the economy and technology, the rate of product replacement has accelerated, resulting in a large number of products being discarded. Disassembly is an important way to recycle waste products, and it also helps reduce manufacturing costs and environmental pollution. The combination of a single-row linear disassembly line and a U-shaped disassembly line presents distinctive advantages in various application scenarios. This paper addresses the Hybrid Disassembly Line Balancing Problem (HDLBP), which considers the requirement of multi-skilled workers. A mathematical model is established to maximize the recovery profit according to the characteristics of the proposed problem. To facilitate the search for an optimal solution, a new strategy is developed for reinforcement learning agents to interact with complex and changeable environments in real time, and deep reinforcement learning is used to assign multiple products and disassembly tasks. On this basis, we propose a Soft Actor–Critic (SAC) algorithm to effectively address this problem. Compared with the Deep Deterministic Policy Gradient (DDPG), Advantage Actor–Critic (A2C), and Proximal Policy Optimization (PPO) algorithms, the results show that SAC obtains near-optimal results on small-scale cases. SAC also outperforms DDPG, PPO, and A2C on large-scale disassembly cases. [ABSTRACT FROM AUTHOR]
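The SAC algorithm named in the abstract is distinguished from DDPG, A2C, and PPO by its entropy-regularized objective: the agent maximizes expected return plus a temperature-weighted policy entropy, which encourages exploration of alternative task assignments. The sketch below illustrates only that core quantity, the soft (entropy-regularized) state value for a discrete action set; the function name, the discrete-action setting, and the temperature value are illustrative assumptions, not the paper's actual formulation or environment.

```python
import numpy as np

def soft_state_value(q_values, alpha):
    """Soft state value used in SAC-style methods:
    V(s) = E_{a~pi}[ Q(s, a) - alpha * log pi(a|s) ],
    where pi is the softmax of Q/alpha over a discrete action set.
    For this choice of pi, V(s) = alpha * logsumexp(Q/alpha)."""
    logits = q_values / alpha
    logits = logits - logits.max()          # shift for numerical stability
    pi = np.exp(logits) / np.exp(logits).sum()
    return float(np.sum(pi * (q_values - alpha * np.log(pi))))
```

As the temperature `alpha` shrinks, the soft value approaches `max(Q)` and the policy becomes greedy; larger `alpha` keeps the policy stochastic, which is the exploration mechanism the abstract credits for SAC's edge on large-scale cases.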

Details

Language :
English
ISSN :
0925-2312
Volume :
569
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession Number :
174470034
Full Text :
https://doi.org/10.1016/j.neucom.2023.127145