
Adjacent Inputs With Different Labels and Hardness in Supervised Learning

Authors :
Sebastian A. Grillo
Julio Cesar Mello Roman
Jorge Daniel Mello-Roman
Jose Luis Vazquez Noguera
Miguel Garcia-Torres
Federico Divina
Pedro Esteban Gardel Sotomayor
Source :
IEEE Access, Vol. 9, pp. 162487-162498 (2021)
Publication Year :
2021
Publisher :
IEEE, 2021.

Abstract

An important aspect of the design of effective machine learning algorithms is the complexity analysis of classification problems. In this paper, we propose a study aimed at determining the relation between the number of adjacent inputs with different labels and the number of examples required to induce a classification model. To this end, we first quantified the adjacent inputs with different labels as a property, using a measure denoted Neighbour Input Variation (NIV). We analyzed the relation between NIV, random data, and overfitting. We then demonstrated that a threshold on NIV may determine whether a classification model can generalize to unseen data. We also presented a case study analyzing threshold neural networks and the required size of the first hidden layer as a function of NIV. Finally, we performed experiments with five popular algorithms, analyzing the relation between NIV and the classification error on problems with few dimensions. We conclude that functions for which similar inputs have different outputs with high probability considerably reduce the generalization capacity of classification algorithms.
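To make the idea of "adjacent inputs with different labels" concrete, the sketch below computes a NIV-like quantity for Boolean-input functions, assuming that two inputs are neighbours when they differ in exactly one bit and that the measure is the fraction of such neighbouring pairs whose labels differ. The paper's exact definition of NIV may differ; the function name `niv` and this formulation are illustrative assumptions, not the authors' implementation.

```python
from itertools import product

def niv(label_fn, n_bits):
    """Illustrative sketch (not the paper's exact definition): fraction of
    Hamming-distance-1 input pairs over {0,1}^n_bits whose labels differ."""
    differing = 0
    total = 0
    for x in product((0, 1), repeat=n_bits):
        for i in range(n_bits):
            if x[i] == 0:  # count each unordered neighbouring pair once
                y = list(x)
                y[i] = 1
                total += 1
                if label_fn(x) != label_fn(tuple(y)):
                    differing += 1
    return differing / total

# Parity (XOR) flips its label on every neighbouring pair, so the measure is 1.0;
# a constant labelling function never changes label, so the measure is 0.0.
print(niv(lambda x: sum(x) % 2, n_bits=4))  # -> 1.0
print(niv(lambda x: 0, n_bits=4))           # -> 0.0
```

Under this reading, high-NIV functions such as parity change label between many similar inputs, which matches the abstract's conclusion that such functions are hard for classification algorithms to generalize.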

Details

Language :
English
ISSN :
21693536
Volume :
9
Database :
Directory of Open Access Journals
Journal :
IEEE Access
Publication Type :
Academic Journal
Accession number :
edsdoj.381b409052e45c99e0c6fa5c96bb645
Document Type :
article
Full Text :
https://doi.org/10.1109/ACCESS.2021.3131150