
A General Framework for Learning under Corruption: Label Noise, Attribute Noise, and Beyond

Authors:
Iacovissi, Laura
Lu, Nan
Williamson, Robert C.
Publication Year:
2023

Abstract

Corruption is frequently observed in collected data and has been extensively studied in machine learning under different corruption models. Despite this, there remains a limited understanding of how these models relate to one another, and a unified view of corruptions and their consequences for learning is still lacking. In this work, we formally analyze corruption models at the distribution level through a general, exhaustive framework based on Markov kernels. We highlight the existence of intricate joint and dependent corruptions on both labels and attributes, which are rarely touched upon by existing research. Further, we show how these corruptions affect standard supervised learning by analyzing the resulting changes in Bayes Risk. Our findings offer qualitative insights into the consequences of "more complex" corruptions on the learning problem, and provide a foundation for future quantitative comparisons. Applications of the framework include corruption-corrected learning, a subcase of which we study in this paper by theoretically analyzing loss correction with respect to different corruption instances.

42 pages
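To make the kernel view of corruption concrete, the following sketch models class-conditional label noise as a Markov kernel (a row-stochastic transition matrix) and applies the standard backward loss correction, in which per-class losses are reweighted by the inverse kernel so that the corrected loss is unbiased for the clean loss. The transition matrix values and the two-class setup are hypothetical illustrations, not taken from the paper; the paper's framework covers far more general joint and attribute corruptions.

```python
import numpy as np

# Markov kernel for class-conditional label noise:
# T[i, j] = P(noisy label = j | clean label = i).
# Hypothetical 2-class example with 20% / 30% flip rates.
T = np.array([[0.8, 0.2],
              [0.3, 0.7]])

def clean_loss(probs, label):
    """Cross-entropy loss of predicted class probabilities at the clean label."""
    return -np.log(probs[label])

def backward_corrected_loss(probs, noisy_label, T):
    """Backward correction: weight the per-class losses by the inverse kernel.

    In expectation over the noise process, this recovers the clean loss,
    illustrating corruption-corrected learning for an invertible kernel."""
    losses = -np.log(probs)          # loss vector over all possible clean labels
    T_inv = np.linalg.inv(T)
    return T_inv[noisy_label] @ losses

# Unbiasedness check: E_{noisy | clean=i}[corrected loss] equals the clean loss.
probs = np.array([0.6, 0.4])
for clean in (0, 1):
    expected = sum(T[clean, noisy] * backward_corrected_loss(probs, noisy, T)
                   for noisy in (0, 1))
    assert np.isclose(expected, clean_loss(probs, clean))
```

The check works because summing `T[i, j] * (T^{-1} @ losses)[j]` over `j` telescopes to `(T @ T^{-1} @ losses)[i] = losses[i]`; it relies on the kernel being invertible, which is the simple subcase this illustration assumes.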

Details

Language :
English
Database :
OpenAIRE
Accession number :
edsair.doi.dedup.....4c6273a3b10c15d9339ecbac22e1d40d