
The Hidden Influence of Latent Feature Magnitude When Learning with Imbalanced Data

Authors:
Dablain, Damien A.
Chawla, Nitesh V.
Publication Year:
2024

Abstract

Machine learning (ML) models have difficulty generalizing when the number of training instances per class is imbalanced. The problem of generalization in the face of data imbalance has largely been attributed to a lack of training data for under-represented classes and to feature overlap. The typical remedies are to augment the data of classes with fewer instances, to assign a higher cost to minority-class prediction errors, or to undersample the prevalent class. However, we show that one of the central causes of impaired generalization when learning with imbalanced data is the inherent manner in which ML models perform inference: these models rely heavily on the magnitude of encoded signals. During inference, a model predicts the class whose encoded signal magnitudes linearly sum to the largest scalar. We demonstrate that even with aggressive data augmentation, which generally improves minority-class prediction accuracy, parametric ML models still associate a class label with a limited number of feature combinations that sum to a prediction, which can impair generalization.
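The inference mechanism the abstract points to can be made concrete with a small sketch. The following Python snippet is not the authors' code; it is a minimal illustration, under the assumption of a standard linear classification head, of how each class score is a linear sum of latent feature magnitudes and how the largest scalar determines the prediction.

```python
# Minimal sketch (illustrative only, not the paper's implementation) of
# magnitude-driven inference: a parametric classifier scores each class as
# a linear sum over encoded (latent) features and predicts the largest scalar.
import numpy as np

rng = np.random.default_rng(0)

n_features, n_classes = 8, 3
W = rng.normal(size=(n_classes, n_features))  # classification-head weights
b = np.zeros(n_classes)                       # per-class biases

z = rng.normal(size=n_features)               # encoded features for one input

logits = W @ z + b                            # per-class linear sums
pred = int(np.argmax(logits))                 # class with the largest scalar
print("logits:", np.round(logits, 3), "-> predicted class", pred)

# Scaling the latent magnitudes rescales every class score, so classes whose
# weights align with a few high-magnitude features dominate the argmax --
# the reliance on encoded signal magnitude that the abstract highlights.
print("scaled logits:", np.round(W @ (5 * z) + b, 3))
```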

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2407.10165
Document Type:
Working Paper