
Optimizing warehouse logistics scheduling strategy using soft computing and advanced machine learning techniques.

Authors :
Li, Kuigang
Source :
Soft Computing - A Fusion of Foundations, Methodologies & Applications; Dec2023, Vol. 27 Issue 23, p18077-18092, 16p
Publication Year :
2023

Abstract

In recent years, with the improvement of people's living standards, online shopping has become an indispensable part of daily life. The rapid development of e-commerce has brought unprecedented opportunities to the express delivery industry. Therefore, modern manufacturing enterprises must shorten the cycle from order to delivery to be successful. The study of machine learning (ML), which integrates computer science, statistics, pattern recognition, data mining, and predictive analytics, has become one of the most significant areas of research in the last few decades. It has also established itself as a cornerstone of applications, driving significant progress in modern information technology and practice. This paper uses reinforcement learning (RL), one of the most powerful paradigms of ML, together with soft computing to improve the warehouse automation process while taking market demands into account. Since stackers and Automatic Guided Vehicles (AGVs) are the main participants in this automation process, our research focuses on these two to enhance the warehouse logistics scheduling process as a whole. To accomplish this, we collected historical data on warehouse operation from the warehouse environment, such as AGV and stacker movements, inventory levels, job execution times, and other pertinent factors. We first created an RL-based model using the Q-learning technique, one of the RL approaches, before using these data for model training. The model is designed by first formulating the logistics scheduling problem as a Markov Decision Process (MDP), in which the warehouse system transitions between states and takes actions to maximize a cumulative reward over time. We then performed a number of operations, including state representation, action space definition, and reward design, to transform the problem into a format that the Q-learning approach can handle. In four experiments, the designed model is trained on the collected data for up to 100 episodes. The proposed model is further improved with soft computing approaches such as fuzzy control methods. We utilized MATLAB and Plant Simulation software to conduct the experiments. The results of the proposed model are thoroughly evaluated and compared with existing approaches. [ABSTRACT FROM AUTHOR]
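For readers unfamiliar with the approach described in the abstract, the sketch below illustrates how a warehouse dispatch decision can be framed as an MDP and trained with tabular Q-learning. It is a minimal Python illustration only: the state and action encodings, reward, episode count, and simulator stub are assumptions made for this sketch, not the paper's actual state representation, action space, or reward design, and the paper's own experiments are run in MATLAB and Plant Simulation rather than Python.

```python
import random
from collections import defaultdict

# Illustrative (assumed) state/action spaces: a state might encode AGV zone,
# stacker status, and an inventory band; an action picks which pending job
# class to dispatch next. Sizes and meanings are placeholders for this sketch.
N_STATES = 60
N_ACTIONS = 4

ALPHA = 0.1      # learning rate
GAMMA = 0.95     # discount factor
EPSILON = 0.1    # exploration rate (epsilon-greedy)
EPISODES = 100   # the abstract reports training for up to 100 episodes

Q = defaultdict(lambda: [0.0] * N_ACTIONS)

def step(state, action):
    """Stand-in for the warehouse simulator (Plant Simulation in the paper).
    Returns (next_state, reward, done); here the dynamics are random, purely
    for illustration of the training loop."""
    next_state = random.randrange(N_STATES)
    reward = -random.random()          # e.g. penalize job completion time
    done = random.random() < 0.05      # episode terminates occasionally
    return next_state, reward, done

for episode in range(EPISODES):
    state = random.randrange(N_STATES)
    done = False
    while not done:
        # epsilon-greedy action selection over the current Q estimates
        if random.random() < EPSILON:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        td_target = reward + GAMMA * max(Q[next_state])
        Q[state][action] += ALPHA * (td_target - Q[state][action])
        state = next_state
```

The fuzzy-control refinement mentioned in the abstract is not reproduced here; in the paper it is applied on top of the learned scheduling policy as a soft computing enhancement.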

Details

Language :
English
ISSN :
14327643
Volume :
27
Issue :
23
Database :
Complementary Index
Journal :
Soft Computing - A Fusion of Foundations, Methodologies & Applications
Publication Type :
Academic Journal
Accession number :
172972054
Full Text :
https://doi.org/10.1007/s00500-023-09269-4