7 results for "Duin, Robert P.W."
Search Results
2. FIDOS: A generalized Fisher based feature extraction method for domain shift
- Author
- Dinh, Cuong V., Duin, Robert P.W., Piqueras-Salazar, Ignacio, and Loog, Marco
- Subjects
- FEATURE extraction, GENERALIZATION, PATTERN recognition systems, DATA analysis, PRESUPPOSITION (Logic), COMPARATIVE studies
- Abstract
Traditional pattern recognition techniques often assume that the data sets used for training and testing follow the same distribution. However, this assumption rarely holds for real-world problems: data from the same classes but different domains, e.g., collected under different conditions, may show different characteristics. We introduce FIDOS, a generalized FIsher-based method for the DOmain Shift problem, which learns invariant features across domains in a supervised manner. Unlike classical Fisher feature extraction, FIDOS aims to minimize not only the within-class scatter but also the difference in distributions between domains. The subspace constructed by FIDOS therefore reduces the drift in distributions among different domains while preserving the discriminants across classes. Another advantage of FIDOS over classical Fisher analysis is that it extracts more features when multiple source domains are available in the training set; this is essential for good classification, especially when the number of classes is small. Experimental results on both artificial and real data, and comparisons with other methods, demonstrate the efficiency of our method in classifying objects under domain shift. [Copyright © Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
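The FIDOS abstract above describes augmenting the Fisher criterion with a between-domain scatter term. A minimal numerical sketch of that general idea follows; the function name, the simple additive weighting `alpha`, and the pseudo-inverse solution are all illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def fisher_domain_shift_directions(X, y, d, n_dims=1, alpha=1.0):
    """Sketch of a Fisher-style objective with a domain-shift penalty:
    maximize between-class scatter while penalizing both within-class
    scatter and the scatter between domain means (alpha is an assumed
    weight; the paper's criterion may combine the terms differently)."""
    mu = X.mean(axis=0)
    n_feat = X.shape[1]
    S_b = np.zeros((n_feat, n_feat))   # between-class scatter
    S_w = np.zeros((n_feat, n_feat))   # within-class scatter
    S_d = np.zeros((n_feat, n_feat))   # between-domain scatter
    for c in np.unique(y):
        Xc = X[y == c]
        diff = (Xc.mean(axis=0) - mu)[:, None]
        S_b += len(Xc) * diff @ diff.T
        S_w += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))
    for dom in np.unique(d):
        diff = (X[d == dom].mean(axis=0) - mu)[:, None]
        S_d += (d == dom).sum() * diff @ diff.T
    # Generalized eigenproblem: maximize w'S_b w / w'(S_w + alpha*S_d) w
    evals, evecs = np.linalg.eig(np.linalg.pinv(S_w + alpha * S_d) @ S_b)
    order = np.argsort(-evals.real)
    return evecs[:, order[:n_dims]].real
```

With `alpha = 0` this reduces to classical Fisher discriminant directions, which is the sense in which the method generalizes it.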
3. The dissimilarity space: Bridging structural and statistical pattern recognition
- Author
- Duin, Robert P.W. and Pękalska, Elżbieta
- Subjects
- PATTERN recognition systems, LATTICE theory, EXPERT systems, TOPOLOGICAL spaces, VECTOR analysis, MACHINE learning
- Abstract
Human experts constitute pattern classes of natural objects based on their observed appearance. Automatic systems for pattern recognition may be designed on a structural description derived from sensor observations. Alternatively, training sets of examples can be used in statistical learning procedures, which are most powerful for vectorial object representations. Unfortunately, structural descriptions do not match well with vectorial representations; consequently, it is difficult to combine the structural and statistical approaches to pattern recognition. Structural descriptions may, however, be used to compare objects. This leads to a set of pairwise dissimilarities from which vectors can be derived for the purpose of statistical learning. The resulting dissimilarity representation thereby bridges the structural and statistical approaches. The dissimilarity space is one of the possible spaces resulting from this representation; it is very general and easy to implement. This paper gives a historical review and discusses the properties of dissimilarity space approaches, illustrated by a set of examples on real-world datasets. [Copyright © Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
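The dissimilarity-space abstract above turns pairwise dissimilarities into vectors: each object is represented by its dissimilarities to a set of prototypes, so any structural comparison yields features a standard statistical classifier can use. A minimal sketch, in which the toy string dissimilarity is purely illustrative and not from the paper:

```python
import numpy as np

def dissimilarity_representation(objects, prototypes, dissim):
    """Represent each object by its vector of dissimilarities to a set of
    prototype objects. `dissim` is any user-supplied pairwise dissimilarity
    function, e.g. one computed on structural descriptions."""
    return np.array([[dissim(o, p) for p in prototypes] for o in objects])

def toy_dissim(a, b):
    """Illustrative string dissimilarity (an assumption for this sketch):
    length difference plus positionwise character mismatches."""
    return abs(len(a) - len(b)) + sum(x != y for x, y in zip(a, b))

# Each row is now an ordinary feature vector in the dissimilarity space.
X = dissimilarity_representation(["cat", "cart", "dog"],
                                 ["cat", "dog"], toy_dissim)
```

The rows of `X` can then be fed to any vector-space classifier, which is the bridge the paper describes.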
4. Approximating the multiclass ROC by pairwise analysis
- Author
- Landgrebe, Thomas C.W. and Duin, Robert P.W.
- Subjects
- COMPUTATIONAL complexity, PATTERN perception, PATTERN recognition systems, MACHINE theory, COMPUTER vision
- Abstract
The use of Receiver Operator Characteristic (ROC) analysis for model selection and threshold optimisation has become standard practice in the design of two-class pattern recognition systems. Advantages include decision boundary adaptation to imbalanced misallocation costs, the ability to fix some classification errors, and performance evaluation in imprecise, ill-defined conditions where costs or prior probabilities may vary. Extending this to the multiclass case has recently become a topic of interest. The primary challenge is the computational complexity, which grows exponentially with the number of classes, rendering many problems intractable. In this paper the multiclass ROC is formalised and the computational complexities exposed. A pairwise approach is proposed that approximates the multi-dimensional operating characteristic by discounting some interactions, resulting in an algorithm that is tractable and extensible to large numbers of classes. Two additional multiclass optimisation techniques are also proposed that provide a benchmark for the pairwise algorithm. Experiments compare the various approaches in a variety of practical situations, demonstrating the efficacy of the pairwise approach. [Copyright © Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
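The abstract above motivates replacing the intractable multi-dimensional ROC with pairwise analysis. A minimal sketch of one common pairwise reduction, averaging one-vs-one AUCs over all class pairs; the score difference used here and the plain averaging are assumptions for illustration, not necessarily the paper's exact algorithm:

```python
import numpy as np

def pairwise_auc(scores, y, a, b):
    """AUC for discriminating class a from class b, using the score
    difference scores[:, a] - scores[:, b] on samples of those two classes
    (classes are assumed to be integer column indices)."""
    mask = (y == a) | (y == b)
    s = scores[mask, a] - scores[mask, b]
    pos, neg = s[y[mask] == a], s[y[mask] == b]
    # AUC = P(random positive outscores random negative), ties count half
    return ((pos[:, None] > neg[None, :]).mean()
            + 0.5 * (pos[:, None] == neg[None, :]).mean())

def multiclass_auc_pairwise(scores, y):
    """Summarize multiclass performance by averaging over all class pairs,
    sidestepping the full multi-dimensional operating characteristic."""
    classes = np.unique(y)
    aucs = [pairwise_auc(scores, y, a, b)
            for i, a in enumerate(classes) for b in classes[i + 1:]]
    return float(np.mean(aucs))
```

The pairwise loop costs O(K²) in the number of classes K, versus the exponential cost the abstract attributes to the full multiclass ROC.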
5. Statistical Pattern Recognition: A Review.
- Author
- Jain, Anil K. and Duin, Robert P.W.
- Subjects
- PATTERN recognition systems, ARTIFICIAL neural networks
- Abstract
Presents a study summarizing and comparing methods used in the various stages of a pattern recognition system. Topics include template matching; the statistical approach; neural network models; and the curse of dimensionality and peaking phenomena.
- Published
- 2000
- Full Text
- View/download PDF
6. Integration of prior knowledge of measurement noise in kernel density classification
- Author
- Li, Yunlei, de Ridder, Dick, Duin, Robert P.W., and Reinders, Marcel J.T.
- Subjects
- PATTERN recognition systems, NOISE measurement, KERNEL functions, PATTERN perception
- Abstract
Samples can be measured with different precisions and reliabilities in different experiments, or even within the same experiment. These varying levels of measurement noise may deteriorate the performance of a pattern recognition system if not treated with care. Here we investigate the benefit of incorporating prior knowledge about measurement noise into system construction. We propose a kernel density classifier which integrates such prior knowledge: instead of using an identical kernel for each sample, we transform the prior knowledge into a distinct kernel for each sample. The integration procedure is straightforward and easy to interpret. In addition, we show how to estimate the diverse measurement noise levels in a real-world dataset. Compared to the basic methods, the new kernel density classifier can give significantly better classification performance. As expected, this improvement is more pronounced for small-sample-size datasets with large numbers of features. [Copyright © Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
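The abstract above describes giving each training sample its own kernel based on its known measurement noise. A minimal sketch of that idea for Gaussian kernels; the base bandwidth `base_h` and the variance-addition rule for combining it with the noise level are assumptions, not the paper's derivation:

```python
import numpy as np

def noisy_kde_classify(x, X_train, y_train, noise_std, base_h=0.5):
    """Kernel density classifier with per-sample kernels: each training
    sample's Gaussian kernel width combines an assumed base bandwidth with
    that sample's known measurement-noise level. Returns the class whose
    density estimate at x is largest."""
    scores = {}
    dim = X_train.shape[1]
    for c in np.unique(y_train):
        idx = y_train == c
        h = np.sqrt(base_h**2 + noise_std[idx]**2)       # per-sample widths
        dist2 = np.sum((X_train[idx] - x) ** 2, axis=1)  # squared distances
        # Average of per-sample Gaussian kernels, each properly normalized
        scores[c] = np.mean(np.exp(-dist2 / (2 * h**2))
                            / (np.sqrt(2 * np.pi) * h) ** dim)
    return max(scores, key=scores.get)
```

Noisier samples thus get wider, flatter kernels and contribute less sharply to the class densities, which is the intuition behind the integration.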
7. Spherical and Hyperbolic Embeddings of Data.
- Author
- Wilson, Richard C., Hancock, Edwin R., Pekalska, Elzbieta, and Duin, Robert P.W.
- Subjects
- EMBEDDINGS (Mathematics), COMPUTER vision, HYPERBOLIC geometry, PATTERN recognition systems, GEODESIC distance, EIGENVALUES, MANIFOLDS (Mathematics), CURVATURE
- Abstract
Many computer vision and pattern recognition problems may be posed as the analysis of a set of dissimilarities between objects. For many types of data, these dissimilarities are not euclidean (i.e., they do not represent the distances between points in a euclidean space), and therefore cannot be isometrically embedded in a euclidean space. Examples include shape-dissimilarities, graph distances and mesh geodesic distances. In this paper, we provide a means of embedding such non-euclidean data onto surfaces of constant curvature. We aim to embed the data on a space whose radius of curvature is determined by the dissimilarity data. The space can be either of positive curvature (spherical) or of negative curvature (hyperbolic). We give an efficient method for solving the spherical and hyperbolic embedding problems on symmetric dissimilarity data. Our approach gives the radius of curvature and a method for approximating the objects as points on a hyperspherical manifold without optimisation. For objects which do not reside exactly on the manifold, we develop an optimisation-based procedure for approximate embedding on a hyperspherical manifold. We use the exponential map between the manifold and its local tangent space to solve the optimisation problem locally in the euclidean tangent space. This process is efficient enough to allow us to embed data sets of several thousand objects. We apply our method to a variety of data including time warping functions, shape similarities, graph similarity and gesture similarity data. In each case the embedding maintains the local structure of the data while placing the points in a metric space. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
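The embedding described above exploits the fact that, on a sphere of radius r, a geodesic distance d_ij corresponds to the inner product r² cos(d_ij / r), so an eigendecomposition of that matrix recovers approximate coordinates. A minimal sketch of the spherical case; here r is taken as given, whereas the paper derives the radius of curvature from the data itself:

```python
import numpy as np

def spherical_embed(D, r, n_dims=3):
    """Embed objects with symmetric geodesic dissimilarities D onto a
    sphere of radius r: build the target Gram matrix r^2 cos(D / r),
    eigendecompose it, and keep the leading non-negative components."""
    Z = r**2 * np.cos(D / r)               # target inner-product matrix
    evals, evecs = np.linalg.eigh(Z)       # ascending eigenvalues
    top = np.argsort(-evals)[:n_dims]      # leading components
    lam = np.clip(evals[top], 0, None)     # discard negative eigenvalues
    return evecs[:, top] * np.sqrt(lam)    # rows = embedded coordinates
```

When the dissimilarities are exactly spherical geodesics, the recovered Gram matrix matches r² cos(D / r); for real non-euclidean data the truncation to non-negative eigenvalues gives only an approximation.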
Discovery Service for Jio Institute Digital Library