Non-Euclidean or Non-metric Measures Can Be Informative.
- Source :
- Structural, Syntactic & Statistical Pattern Recognition; 2006, pp. 871-880 (10 pages)
- Publication Year :
- 2006
Abstract
- Statistical learning algorithms often rely on the Euclidean distance. In practice, however, non-Euclidean or non-metric dissimilarity measures arise when contours, spectra, or shapes are compared by edit distances, or as a consequence of robust object matching [1,2]. It is an open issue whether such measures are advantageous for statistical learning or whether they should be constrained to obey the metric axioms. The k-nearest neighbor (NN) rule, as the most natural approach, is widely applied directly to general dissimilarity data. Alternative methods embed such data into suitable representation spaces in which statistical classifiers are constructed [3]. In this paper, we investigate the relation between the non-Euclidean aspects of dissimilarity data and the classification performance of the direct NN rule and of classifiers trained in representation spaces. The evaluation uses a parameterized family of edit distances, in which the parameter values control the strength of the non-Euclidean behavior. We find that the discriminative power of the measure increases as its non-Euclidean and non-metric character grows, until an optimum is reached. We conclude that statistical classifiers perform well and that the optimal parameter values characterize a non-Euclidean and somewhat non-metric measure.
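
The abstract turns on two operations: detecting whether a dissimilarity matrix is Euclidean, and applying the NN rule directly to dissimilarity data. The sketch below is a minimal illustration of both, not the paper's actual procedure; the function names and toy data are my own, and the power transform stands in for the parameterized edit distances used in the paper's experiments. It uses the standard classical-scaling diagnostic: a dissimilarity matrix is Euclidean exactly when double-centering its squared entries yields a positive semi-definite Gram matrix, so the mass of negative eigenvalues measures the non-Euclidean character.

```python
import numpy as np

def gram_from_dissimilarity(D):
    """Double-center the squared dissimilarities: G = -0.5 * J @ D**2 @ J,
    with J = I - (1/n) * ones((n, n)). D is a Euclidean distance matrix
    iff G is positive semi-definite (classical-scaling criterion)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return -0.5 * J @ (D ** 2) @ J

def negative_eigenfraction(D):
    """Fraction of the spectral mass of G carried by negative eigenvalues:
    0 for a perfectly Euclidean dissimilarity matrix, > 0 otherwise."""
    evals = np.linalg.eigvalsh(gram_from_dissimilarity(D))
    return np.abs(evals[evals < 0]).sum() / np.abs(evals).sum()

def nn_classify(D_test_train, y_train):
    """1-NN rule applied directly to dissimilarities: row i holds the
    dissimilarities from test object i to every training object."""
    return y_train[np.argmin(D_test_train, axis=1)]

# Toy demo: Euclidean distances vs. a non-Euclidean transform of them.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))

print(negative_eigenfraction(D))       # ~0: plain Euclidean distances
print(negative_eigenfraction(D ** 2))  # > 0: squared distances are non-Euclidean
                                       # and violate the triangle inequality

y = np.arange(20) % 2                    # arbitrary labels for illustration
print(nn_classify(D[15:, :15], y[:15]))  # NN rule needs only the dissimilarity
                                         # sub-matrix, no vector representation
```

Note that `nn_classify` never touches a vector space, which is why the NN rule applies to arbitrary dissimilarity data; the eigenvalue check, by contrast, is what the embedding-based classifiers in the paper's representation-space approach must contend with when negative eigenvalues appear.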
Details
- Language :
- English
- ISBN :
- 9783540372363
- Database :
- Complementary Index
- Journal :
- Structural, Syntactic & Statistical Pattern Recognition
- Publication Type :
- Book
- Accession number :
- 32910394
- Full Text :
- https://doi.org/10.1007/11815921_96