1. Learning Compact and Robust Representations for Anomaly Detection
- Authors
Lunardi, Willian T., Banabila, Abdulrahman, Herzalla, Dania, and Andreoni, Martin
- Subjects
Computer Science - Machine Learning
- Abstract
Distance-based anomaly detection methods rely on compact and separable in-distribution (ID) embeddings to effectively delineate anomaly boundaries. Single-positive contrastive formulations suffer from class collision, promoting unnecessary intra-class variance within ID samples. While multi-positive formulations can improve inlier compactness, they fail to preserve the diversity among synthetic outliers. We address these limitations by proposing a contrastive pretext task for anomaly detection that enforces three key properties: (1) compact ID clustering to reduce intra-class variance, (2) inlier-outlier separation to enhance inter-class separation, and (3) outlier-outlier separation to maintain diversity among synthetic outliers and prevent representation collapse. These properties work together to ensure a more robust and discriminative feature space for anomaly detection. Our approach achieves approximately 12x faster convergence than NT-Xent and 7x faster than Rot-SupCon, with superior performance. On CIFAR-10, it delivers an average performance boost of 6.2% over NT-Xent and 2% over Rot-SupCon, with class-specific improvements of up to 16.9%. Our code is available at https://anonymous.4open.science/r/firm-98B6.
- Published
2025
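The abstract names the three properties the pretext task enforces but not the loss itself; the sketch below is one plausible PyTorch instantiation, not the authors' objective (their repository linked above holds the real implementation). The function name `firm_style_loss`, the outlier-repulsion weight `lam`, and the exact masking scheme are illustrative assumptions: a SupCon-style multi-positive term over inliers combined with a logsumexp repulsion term among synthetic outliers.

```python
import torch
import torch.nn.functional as F

def firm_style_loss(z, is_outlier, tau=0.1, lam=1.0):
    """Hypothetical contrastive loss combining the three properties.

    z:          (N, D) embeddings of a mixed batch.
    is_outlier: (N,) bool mask, True for synthetic outliers.
    tau:        temperature; lam weights the outlier-repulsion term.
    """
    z = F.normalize(z, dim=1)                        # cosine-similarity space
    sim = (z @ z.T) / tau                            # (N, N) pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude self-pairs

    # The softmax denominator per anchor spans every other sample, so
    # outliers act as negatives for inlier anchors (inlier-outlier separation).
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    inlier = ~is_outlier
    # Multi-positive term: every inlier-inlier pair is a positive, pulling
    # the in-distribution cluster tight (compact ID clustering).
    pos_mask = inlier.unsqueeze(0) & inlier.unsqueeze(1) & ~self_mask
    n_pos = pos_mask.sum(dim=1).clamp(min=1)
    loss_id = -((log_prob * pos_mask).sum(dim=1) / n_pos)[inlier].mean()

    # Repulsion-only term over synthetic outliers: penalizing high
    # outlier-outlier similarity keeps them spread out and prevents
    # their representations from collapsing (outlier-outlier separation).
    if is_outlier.sum() > 1:
        sim_oo = sim[is_outlier][:, is_outlier]      # diagonal already -inf
        loss_oo = torch.logsumexp(sim_oo, dim=1).mean()
    else:
        loss_oo = z.new_zeros(())

    return loss_id + lam * loss_oo
```

In use, `z` would come from an encoder applied to a batch mixing inlier views with synthetic outliers, e.g. `firm_style_loss(encoder(x), outlier_mask)`. Giving outliers no positives of their own, only repulsion, is one simple way to realize the third property without forcing an arbitrary structure on the outlier set.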