1. Adapting Large Language Models to Log Analysis with Interpretable Domain Knowledge
- Authors
Yuhe Ji, Yilun Liu, Feiyu Yao, Minggui He, Shimin Tao, Xiaofeng Zhao, Su Chang, Xinhua Yang, Weibin Meng, Yuming Xie, Boxing Chen, and Hao Yang
- Subjects
Computer Science - Computation and Language; Computer Science - Software Engineering
- Abstract
The increasing complexity of computer systems necessitates innovative approaches to fault and error management, going beyond traditional manual log analysis. While existing solutions using large language models (LLMs) show promise, they are limited by a gap between natural and domain-specific languages, which restricts their effectiveness in real-world applications. Our approach addresses these limitations by integrating interpretable domain knowledge into open-source LLMs through continual pre-training (CPT), enhancing performance on log tasks while retaining natural language processing capabilities. We created a comprehensive dataset, NLPLog, with over 250,000 question-answer pairs to facilitate this integration. Our model, SuperLog, trained with this dataset, achieves the best performance across four log analysis tasks, surpassing the second-best model by an average of 12.01%. Our contributions include a novel CPT paradigm that significantly improves model performance, the development of SuperLog with state-of-the-art results, and the release of a large-scale dataset to support further research in this domain.
- Published
2024
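The CPT recipe the abstract describes — rendering interpretable domain knowledge as question–answer pairs and training on them as ordinary text — can be sketched as follows. This is a minimal illustration only: the pair format, the template, and the sample entries are assumptions, not the paper's actual NLPLog schema.

```python
# Sketch: serializing domain-knowledge QA pairs into plain text suitable for
# continual pre-training (CPT) of an LLM. The template and sample pairs below
# are hypothetical, not the NLPLog dataset's real format.

def qa_to_cpt_text(pairs):
    """Render question-answer pairs as one text corpus, so a language model
    can absorb log-domain knowledge through its standard LM objective."""
    docs = []
    for qa in pairs:
        docs.append(f"Question: {qa['question']}\nAnswer: {qa['answer']}")
    return "\n\n".join(docs)

# Illustrative entries in the spirit of a log-analysis QA dataset.
sample_pairs = [
    {"question": "What does 'kernel: Out of memory: Kill process 1234' indicate?",
     "answer": "The OOM killer terminated process 1234 because the system "
               "exhausted available memory."},
    {"question": "Which component emitted 'sshd[802]: Failed password for root'?",
     "answer": "The OpenSSH daemon (sshd), reporting a failed login attempt "
               "for the root account."},
]

corpus = qa_to_cpt_text(sample_pairs)
print(corpus)
```

The point of serializing QA pairs as free text, rather than using an instruction-tuning loss, is that CPT keeps the model's general language-modeling behavior intact while injecting domain knowledge.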