Explainable Anomaly Detection in Surveillance Videos: Autoencoder-based Reconstruction and Error Map Visualization

Date

2024

Abstract

The ever-increasing volume of surveillance video data makes manual monitoring impractical and poses a challenge for security applications. Existing automatic anomaly detection methods often rely on computationally expensive processing steps, require substantial labeled training data, and lack interpretability. This project addresses these limitations with an unsupervised, end-to-end deep learning framework for video anomaly detection that has built-in explainability. Central to the approach is the autoencoder, which reconstructs video frames and flags abnormal patterns through analysis of reconstruction errors. Five lightweight autoencoder architectures are investigated, exploring the effectiveness of 2D and 3D convolutions, denoising techniques, and spatio-temporal layers for capturing both spatial and temporal features directly from raw video data. These models achieve promising performance, with Area Under the Curve (AUC) values ranging from 70% to 95% on the benchmark UCSD Pedestrian datasets, demonstrating the potential of lightweight architectures for efficient deployment in diverse environments. Beyond efficient anomaly detection, the framework extracts spatial and temporal features directly from raw video, simplifying system design and eliminating complex preprocessing steps. Interpretability is inherent: error maps generated during reconstruction make the model's decisions transparent and localize anomalies accurately for human oversight, which is crucial for building trust in anomaly detection systems deployed in real-world surveillance applications. This research establishes a foundation for robust and ethical anomaly detection systems built on lightweight, explainable models.
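The reconstruction-error scoring and error-map idea described in the abstract can be sketched in a few lines. This is a minimal illustration only, not the thesis's implementation: it assumes a trained autoencoder has already produced a reconstruction of each frame, and the arrays and threshold below are hypothetical stand-ins.

```python
import numpy as np

def error_map(frame, reconstruction):
    """Per-pixel squared reconstruction error: the explainability map
    that localizes where the model failed to reconstruct the input."""
    diff = frame.astype(np.float64) - reconstruction.astype(np.float64)
    return diff ** 2

def anomaly_score(frame, reconstruction):
    """Frame-level anomaly score: mean reconstruction error over pixels."""
    return error_map(frame, reconstruction).mean()

def is_anomalous(frame, reconstruction, threshold):
    """Flag a frame when its score exceeds a threshold, which in practice
    would be calibrated on reconstruction errors of normal training data."""
    return anomaly_score(frame, reconstruction) > threshold

# Toy example: a normal frame reconstructs well; a frame with an
# unexpected bright patch (which the autoencoder has never seen) does not.
normal = np.zeros((8, 8))
recon = np.zeros((8, 8))          # stand-in for autoencoder output
anomalous = normal.copy()
anomalous[2:4, 2:4] = 1.0         # unexpected object

print(anomaly_score(normal, recon))     # 0.0
print(anomaly_score(anomalous, recon))  # 0.0625 (4 bad pixels / 64)
# The error map pinpoints the anomalous region for human oversight:
print(np.argwhere(error_map(anomalous, recon) > 0.5))
```

The per-pixel map is what gives the approach its interpretability: a human operator can see not just that a frame was flagged, but where in the frame the reconstruction broke down.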

Keywords

Machine Learning, Anomaly Detection, Explainability
