
Detection Anomaly in Video Based on Deep Support Vector Data Description.

Authors :
Wang, Bokun
Yang, Caiqian
Chen, Yaojing
Source :
Computational Intelligence & Neuroscience. 5/4/2022, p1-6. 6p.
Publication Year :
2022

Abstract

Video surveillance systems have been widely deployed in public places such as shopping malls, hospitals, banks, and streets to improve the safety of public life and assets. In most cases, detecting abnormal video events in a timely and accurate manner is the main goal of public safety risk prevention and control. Because of the ambiguity of the anomaly definition, the scarcity of anomalous data, and the complexity of environmental backgrounds and human behavior, video anomaly detection remains a major problem in computer vision. Existing deep-learning-based anomaly detection methods often use pretrained networks to extract features; they rely on existing network structures rather than designing networks specifically for anomaly detection. This paper proposes a method based on Deep Support Vector Data Description (DSVDD). By training a deep neural network, the space of normal input samples is mapped into the smallest enclosing hypersphere. Through DSVDD, not only can the minimum-volume data hypersphere be found to establish SVDD, but useful feature representations and a model of normality can also be learned. At test time, samples mapped inside the hypersphere are judged normal, while samples mapped outside it are judged abnormal. The proposed method achieves 86.84% and 73.2% frame-level AUC on the CUHK Avenue and ShanghaiTech Campus datasets, respectively, outperforming existing state-of-the-art methods. [ABSTRACT FROM AUTHOR]
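The abstract's core mechanism can be illustrated with a minimal sketch of the one-class Deep SVDD objective: a network maps normal samples near a fixed center, and anomaly scores are distances from that center. The encoder architecture, center initialization, and feature dimensions below are illustrative assumptions, not the network proposed in the paper.

```python
# Minimal Deep SVDD sketch in PyTorch (illustrative only; not the paper's architecture).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy feature extractor phi(x; W) mapping flattened frames to a low-dimensional space."""
    def __init__(self, in_dim=1024, feat_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

def dsvdd_loss(features, center):
    # One-class objective: mean squared distance of mapped normal samples
    # to the hypersphere center c, which shrinks the enclosing hypersphere.
    return ((features - center) ** 2).sum(dim=1).mean()

def anomaly_scores(model, x, center):
    # Score = squared distance to the center; samples mapped far outside
    # the learned hypersphere are judged abnormal.
    with torch.no_grad():
        return ((model(x) - center) ** 2).sum(dim=1)

if __name__ == "__main__":
    torch.manual_seed(0)
    model = Encoder()
    normal_frames = torch.randn(64, 1024)                # stand-in for normal training frames
    center = model(normal_frames).mean(dim=0).detach()   # fix c as the mean of initial features

    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(50):                                   # train on normal data only
        opt.zero_grad()
        loss = dsvdd_loss(model(normal_frames), center)
        loss.backward()
        opt.step()

    test_frames = torch.randn(8, 1024)
    print(anomaly_scores(model, test_frames, center))     # higher score = more anomalous
```

In practice a distance threshold (or the learned hypersphere radius) separates normal from abnormal frames; the paper reports frame-level AUC, which evaluates the scores across all thresholds.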

Details

Language :
English
ISSN :
1687-5265
Database :
Academic Search Index
Journal :
Computational Intelligence & Neuroscience
Publication Type :
Academic Journal
Accession number :
156672554
Full Text :
https://doi.org/10.1155/2022/5362093