
AD-LLM: Benchmarking Large Language Models for Anomaly Detection

Authors:
Yang, Tiankai
Nian, Yi
Li, Shawn
Xu, Ruiyao
Li, Yuangang
Li, Jiaqi
Xiao, Zhuo
Hu, Xiyang
Rossi, Ryan
Ding, Kaize
Hu, Xia
Zhao, Yue
Publication Year:
2024

Abstract

Anomaly detection (AD) is an important machine learning task with many real-world uses, including fraud detection, medical diagnosis, and industrial monitoring. Within natural language processing (NLP), AD helps detect issues like spam, misinformation, and unusual user activity. Although large language models (LLMs) have had a strong impact on tasks such as text generation and summarization, their potential in AD has not been studied enough. This paper introduces AD-LLM, the first benchmark that evaluates how LLMs can help with NLP anomaly detection. We examine three key tasks: (i) zero-shot detection, using LLMs' pre-trained knowledge to perform AD without task-specific training; (ii) data augmentation, generating synthetic data and category descriptions to improve AD models; and (iii) model selection, using LLMs to suggest unsupervised AD models. Through experiments with different datasets, we find that LLMs can work well in zero-shot AD, that carefully designed augmentation methods are useful, and that explaining model selection for specific datasets remains challenging. Based on these results, we outline six future research directions on LLMs for AD.
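The zero-shot setting described in the abstract relies only on prompting: the LLM judges whether a text is anomalous given a description of what "normal" looks like, with no task-specific training. A minimal sketch of such a prompting setup is below; the function names, prompt wording, and answer parsing are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of zero-shot anomaly detection via LLM prompting.
# The prompt template and parsing rule are assumptions for illustration,
# not the prompts used in the AD-LLM benchmark.

def build_zero_shot_ad_prompt(normal_description: str, text: str) -> str:
    """Assemble a zero-shot AD prompt: describe the normal category,
    show the candidate text, and ask for a binary judgment."""
    return (
        f"Normal samples in this dataset are: {normal_description}\n"
        f"Text: {text}\n"
        "Is this text an anomaly? Answer 'yes' or 'no'."
    )

def parse_ad_answer(answer: str) -> bool:
    """Map the LLM's free-form reply to a binary anomaly label."""
    return answer.strip().lower().startswith("yes")

# Usage: the prompt would be sent to an LLM; here we only build and parse.
prompt = build_zero_shot_ad_prompt(
    "news articles about sports",
    "Win a free iPhone now!!!",
)
is_anomaly = parse_ad_answer("Yes, this looks like spam.")  # → True
```

In a real pipeline, `prompt` would be sent to an LLM API and the reply fed to `parse_ad_answer`; the split into a prompt builder and a parser keeps both pieces testable without a model call.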

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2412.11142
Document Type:
Working Paper