
Self-Improved Learning for Scalable Neural Combinatorial Optimization

Authors: Luo, Fu; Lin, Xi; Wang, Zhenkun; Tong, Xialiang; Yuan, Mingxuan; Zhang, Qingfu
Publication Year: 2024

Abstract

End-to-end neural combinatorial optimization (NCO) methods show promising performance in solving complex combinatorial optimization problems without expert-designed heuristics. However, existing methods struggle with large-scale problems, which hinders their practical applicability. To overcome this limitation, this work proposes a novel Self-Improved Learning (SIL) method for better scalability of neural combinatorial optimization. Specifically, we develop an efficient self-improved mechanism that enables direct model training on large-scale problem instances without any labeled data. Powered by an innovative local reconstruction approach, the model iteratively generates better solutions by itself, which serve as pseudo-labels to guide efficient model training. In addition, we design a linear-complexity attention mechanism that allows the model to handle large-scale combinatorial problem instances with low computational overhead. Comprehensive experiments on the Travelling Salesman Problem (TSP) and the Capacitated Vehicle Routing Problem (CVRP) with up to 100K nodes, under both uniform and real-world distributions, demonstrate the superior scalability of our method.
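The abstract's linear-complexity attention can be illustrated with a generic kernelized linear attention, where the softmax kernel is replaced by a positive feature map so that keys and values are aggregated once in O(n·d²) rather than forming the O(n²) attention matrix. This is a minimal sketch of that general idea; the feature map phi(x) = elu(x) + 1 and all names here are illustrative assumptions, not the paper's actual mechanism.

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """Kernelized attention in O(n * d^2) instead of O(n^2 * d).

    Illustrative sketch: uses the feature map phi(x) = elu(x) + 1,
    a common choice in the linear-attention literature; the SIL
    paper's actual attention design may differ.
    """
    def phi(x):
        # elu(x) + 1: positive features keep the normalizer well-defined.
        return np.where(x > 0, x + 1.0, np.exp(x))

    Qf, Kf = phi(Q), phi(K)               # feature-mapped queries/keys, (n, d)
    KV = Kf.T @ V                         # (d, d): aggregate all keys/values once
    Z = Qf @ Kf.sum(axis=0) + eps         # (n,): per-query normalizer
    return (Qf @ KV) / Z[:, None]         # (n, d): normalized attention output

# Usage on a large instance: cost grows linearly in n, not quadratically.
rng = np.random.default_rng(0)
n, d = 1000, 16
Q, K, V = rng.normal(size=(3, n, d))
out = linear_attention(Q, K, V)
print(out.shape)  # (1000, 16)
```

Because the (d, d) summary `KV` is independent of the number of queries, the n×n attention matrix is never materialized, which is what makes attention over instances with up to 100K nodes tractable.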

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2403.19561
Document Type: Working Paper