Appearance Blur-driven AutoEncoder and Motion-guided Memory Module for Video Anomaly Detection

Authors:
Lyu, Jiahao
Zhao, Minghua
Hu, Jing
Huang, Xuewen
Du, Shuangli
Shi, Cheng
Lv, Zhiyong
Publication Year:
2024

Abstract

Video anomaly detection (VAD) typically learns the distribution of normal samples and detects anomalies by measuring significant deviations from it, but undesired generalization may reconstruct some anomalies and thus suppress those deviations. Meanwhile, most VAD methods cannot cope with cross-dataset validation on new target domains, and few-shot methods must laboriously tune the model on target-domain data to complete domain adaptation. To address these problems, we propose a novel VAD method with a motion-guided memory module that achieves zero-shot cross-dataset validation. First, we apply Gaussian blur to the raw appearance images, constructing a global pseudo-anomaly that serves as the input to the network. Then, we propose multi-scale residual channel attention to deblur the pseudo-anomaly in normal samples. Next, memory items are obtained by recording motion features during training; they are used to retrieve motion features from the raw information during testing. Lastly, the attention allows our method to ignore blurred real anomalies, while the motion memory items widen the normality gap between normal and abnormal motion. Extensive experiments on three benchmark datasets demonstrate the effectiveness of the proposed method. Compared with cross-domain methods, our method achieves competitive performance without adaptation during testing.

Comment: 13 pages, 11 figures
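As a reading aid, the two core operations described in the abstract (blurring the appearance input into a global pseudo-anomaly, and retrieving motion features from recorded memory items) can be sketched in a few lines of PyTorch. This is a minimal illustrative sketch, not the authors' implementation: the kernel size, sigma, memory dimensions, and the cosine-similarity-with-softmax addressing are assumptions chosen for clarity.

```python
# Illustrative sketch only (not the authors' released code).
import torch
import torch.nn.functional as F
from torchvision.transforms import GaussianBlur


def make_pseudo_anomaly(frames: torch.Tensor, kernel_size: int = 15, sigma: float = 3.0) -> torch.Tensor:
    """Blur raw appearance frames (B, C, H, W) to form the global pseudo-anomaly input.

    Kernel size and sigma are placeholder values, not the paper's settings.
    """
    blur = GaussianBlur(kernel_size=kernel_size, sigma=sigma)
    return blur(frames)


def read_memory(query: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
    """Retrieve motion features from memory items.

    query:  (B, D) motion-feature queries from the current input
    memory: (M, D) memory items recorded during training
    Returns (B, D) features reconstructed as a weighted sum of memory items.
    """
    # Cosine-similarity addressing followed by softmax weighting: a common
    # memory-module formulation; the paper's exact addressing may differ.
    sim = F.normalize(query, dim=1) @ F.normalize(memory, dim=1).t()  # (B, M)
    weights = F.softmax(sim, dim=1)
    return weights @ memory


if __name__ == "__main__":
    frames = torch.rand(4, 3, 256, 256)   # toy appearance frames
    pseudo = make_pseudo_anomaly(frames)   # blurred global pseudo-anomaly
    memory = torch.randn(10, 128)          # 10 recorded memory items (toy values)
    queries = torch.randn(4, 128)          # motion-feature queries (toy values)
    retrieved = read_memory(queries, memory)
    print(pseudo.shape, retrieved.shape)   # torch.Size([4, 3, 256, 256]) torch.Size([4, 128])
```

Because the memory items are recorded only from normal training motion, abnormal motion should be reconstructed poorly by such a weighted sum, which is what widens the normality gap described in the abstract; the sketch shows only the addressing mechanics.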

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2409.17608
Document Type:
Working Paper