Generalized robust loss functions for machine learning.
- Source :
- Neural networks : the official journal of the International Neural Network Society [Neural Netw] 2024 Mar; Vol. 171, pp. 200-214. Date of Electronic Publication: 2023 Dec 09.
- Publication Year :
- 2024
Abstract
- The loss function is a critical component of machine learning. Several robust loss functions have been proposed to mitigate the adverse effects of noise, but they still face many challenges. First, there is currently no unified framework for building robust loss functions in machine learning. Second, most of them focus only on the noisy points and pay little attention to normal points. Third, the resulting performance gain is limited. To this end, we put forward a general framework of robust loss functions for machine learning (RML) with rigorous theoretical analyses, which can smoothly and adaptively flatten any unbounded loss function and applies to various machine learning problems. In RML, an unbounded loss function serves as the target to be flattened. A scale parameter limits the maximum loss value of noise points, while a shape parameter controls both the compactness and the growth rate of the flattened loss function. This framework is then employed to flatten the Hinge loss function and the Square loss function. On this basis, we build two robust kernel classifiers, FHSVM and FLSSVM, which can distinguish different types of data. The stochastic variance reduced gradient (SVRG) approach is used to optimize FHSVM and FLSSVM. Extensive experiments demonstrate their superiority: both consistently occupy the top two positions among all evaluated methods, with FHSVM achieving an average accuracy of 81.07% (F-score of 73.25%) and FLSSVM 81.54% (F-score of 75.71%).
- Competing Interests: Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
- (Copyright © 2023 Elsevier Ltd. All rights reserved.)
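To make the abstract's idea concrete, here is a minimal illustrative sketch of flattening an unbounded loss with a scale and a shape parameter. The specific transform `lam * (1 - exp(-(L/lam)**theta))`, the parameter names `lam`/`theta`, and the function names are assumptions for illustration only, not the RML formulation from the paper; they merely show how a scale parameter caps the loss contribution of noisy points while a shape parameter tunes the growth rate.

```python
import numpy as np

def hinge(margins):
    """Unbounded Hinge loss: max(0, 1 - y * f(x))."""
    return np.maximum(0.0, 1.0 - np.asarray(margins, dtype=float))

def flatten_loss(loss, lam=2.0, theta=1.0):
    """Hypothetical flattening transform (NOT the paper's exact formula).

    Maps an unbounded loss value to a bounded one:
      - lam (scale) caps the flattened loss below lam, limiting the
        influence of noisy points with very large raw loss;
      - theta (shape) controls how fast the flattened loss approaches
        that cap as the raw loss grows.
    """
    loss = np.asarray(loss, dtype=float)
    return lam * (1.0 - np.exp(-((loss / lam) ** theta)))

# A clean point, a borderline point, and a point that looks like label noise.
margins = np.array([2.0, 0.5, -5.0])
raw = hinge(margins)                        # -> [0.0, 0.5, 6.0], unbounded
flat = flatten_loss(raw, lam=2.0, theta=1.0)
# The noisy point's raw loss (6.0) is flattened to stay below lam = 2.0,
# so it cannot dominate the training objective.
```

The transform is monotone in the raw loss, so the ordering of points by loss is preserved; only the magnitude of extreme losses is damped.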
- Subjects :
- Algorithms
- Machine Learning
Details
- Language :
- English
- ISSN :
- 1879-2782
- Volume :
- 171
- Database :
- MEDLINE
- Journal :
- Neural networks : the official journal of the International Neural Network Society
- Publication Type :
- Academic Journal
- Accession number :
- 38096649
- Full Text :
- https://doi.org/10.1016/j.neunet.2023.12.013