
The Loss Rank Principle for Model Selection.

Authors :
Hutter, Marcus
Editors :
Bshouty, Nader H.
Gentile, Claudio
Series Editors :
Carbonell, Jaime G.
Siekmann, Jörg
Source :
Learning Theory (9783540729259); 2007, p589-603, 15p
Publication Year :
2007

Abstract

A key issue in statistics and machine learning is to automatically select the "right" model complexity, e.g. the number of neighbors to be averaged over in k nearest neighbor (kNN) regression or the polynomial degree in regression with polynomials. We suggest a novel principle (LoRP) for model selection in regression and classification. It is based on the loss rank, which counts how many other (fictitious) data would be fitted better. LoRP selects the model that has minimal loss rank. Unlike most penalized maximum likelihood variants (AIC, BIC, MDL), LoRP only depends on the regression functions and the loss function. It works without a stochastic noise model, and is directly applicable to any non-parametric regressor, like kNN. [ABSTRACT FROM AUTHOR]
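
For a regressor that is linear in the targets, ŷ = My (the kNN average is one, with M_ij = 1/k when x_j is among the k nearest neighbors of x_i), the empirical loss y'Sy with S = (I − M)'(I − M) is quadratic in y, so the log-volume of fictitious targets fitted at least as well is n/2 · ln(y'Sy) − 1/2 · ln det S up to a constant, and LoRP picks the model minimizing this quantity. Below is a minimal Python sketch of that computation for selecting k in kNN regression; the function names and the small ridge term alpha are illustrative assumptions of this sketch (the plain kNN smoother matrix makes S singular, a degeneracy the paper handles with its own regularized variant), not code from the paper.

```python
import numpy as np

def knn_matrix(x, k):
    """kNN smoother matrix: M[i, j] = 1/k iff x_j is among the k nearest
    points to x_i (the point itself included)."""
    n = len(x)
    M = np.zeros((n, n))
    for i in range(n):
        nearest = np.argsort(np.abs(x - x[i]))[:k]
        M[i, nearest] = 1.0 / k
    return M

def loss_rank(x, y, k, alpha=1e-3):
    """Log loss rank  n/2 * ln(y' S y) - 1/2 * ln det(S),
    with S = (I - M)'(I - M) + alpha*I.  The alpha*I ridge term is an
    assumption of this sketch: the rows of the kNN smoother sum to one,
    which makes the unregularized S singular."""
    n = len(x)
    I = np.eye(n)
    M = knn_matrix(x, k)
    S = (I - M).T @ (I - M) + alpha * I
    _, logdet = np.linalg.slogdet(S)  # S is positive definite for alpha > 0
    return 0.5 * n * np.log(y @ S @ y) - 0.5 * logdet

# Pick k by minimal loss rank on noisy samples of a smooth 1-D function.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 50))
y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=50)
best_k = min(range(1, 16), key=lambda k: loss_rank(x, y, k))
print("k selected by LoRP:", best_k)
```

Note how the two terms trade off: the first rewards a small empirical loss, while the −1/2 · ln det S term penalizes flexible smoothers that could fit many fictitious data sets equally well, so no explicit noise model or parameter count is needed.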

Details

Language :
English
ISBNs :
9783540729259
Database :
Supplemental Index
Book Title :
Learning Theory (9783540729259)
Publication Type :
Book
Accession number :
33215245
Full Text :
https://doi.org/10.1007/978-3-540-72927-3_42