
Benchmarking Large Language Models for Log Analysis, Security, and Interpretation.

Authors :
Karlsen, Egil
Luo, Xiao
Zincir-Heywood, Nur
Heywood, Malcolm
Source :
Journal of Network & Systems Management. Jul 2024, Vol. 32, Issue 3, p1-27. 27p.
Publication Year :
2024

Abstract

Large Language Models (LLMs) continue to demonstrate emergent capabilities across a variety of fields. One area of cybersecurity that could benefit from effective language understanding is the analysis of log files. This work benchmarks LLMs of different architectures (BERT, RoBERTa, DistilRoBERTa, GPT-2, and GPT-Neo) on their capacity to analyze application and system log files for security. Specifically, 60 fine-tuned language models for log analysis are deployed and benchmarked. The resulting models demonstrate that they can perform log analysis effectively, with fine-tuning proving particularly important for domain adaptation to specific log types. The best-performing fine-tuned sequence classification model (DistilRoBERTa) outperforms the current state of the art, achieving an average F1-score of 0.998 across six datasets drawn from both web application and system log sources. To achieve this, we propose and implement a new experimentation pipeline (LLM4Sec) that leverages LLMs for log analysis experimentation, evaluation, and analysis. [ABSTRACT FROM AUTHOR]
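As a rough illustration of the kind of approach the abstract describes (and not the authors' LLM4Sec pipeline), the following minimal Python sketch shows how a DistilRoBERTa sequence classifier might be fine-tuned on labelled log lines with the Hugging Face transformers library. The example log lines, label mapping, and hyperparameters are assumptions made for illustration only and are not taken from the paper.

# Illustrative sketch only -- not the authors' LLM4Sec pipeline.
# Assumes labelled log lines; 0 = benign, 1 = attack are hypothetical labels.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

log_lines = [
    'GET /index.html HTTP/1.1 200',          # assumed benign request
    "GET /item.php?id=1' OR '1'='1 500",     # assumed injection attempt
]
labels = [0, 1]

tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilroberta-base", num_labels=2)

def tokenize(batch):
    # Treat each raw log line as a short text sequence for classification.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

ds = Dataset.from_dict({"text": log_lines, "label": labels})
ds = ds.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="log-clf-demo",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=ds,
)
trainer.train()

In practice one would train on the full labelled log datasets and evaluate with metrics such as the F1-score reported in the abstract; the snippet above only sketches the fine-tuning mechanics.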

Details

Language :
English
ISSN :
1064-7570
Volume :
32
Issue :
3
Database :
Academic Search Index
Journal :
Journal of Network & Systems Management
Publication Type :
Academic Journal
Accession number :
177917669
Full Text :
https://doi.org/10.1007/s10922-024-09831-x