33 results for "Duin, Robert P.W."
Search Results
2. FIDOS: A generalized Fisher based feature extraction method for domain shift
- Author
- Dinh, Cuong V., Duin, Robert P.W., Piqueras-Salazar, Ignacio, and Loog, Marco
- Subjects
- *FEATURE extraction, *GENERALIZATION, *PATTERN recognition systems, *DATA analysis, *PRESUPPOSITION (Logic), *COMPARATIVE studies
- Abstract
Traditional pattern recognition techniques often assume that the data sets used for training and testing follow the same distribution. However, this assumption usually does not hold for many real-world problems, as data from the same classes but different domains (e.g., data collected under different conditions) may show different characteristics. We introduce FIDOS, a generalized FIsher-based method for the DOmain Shift problem, that aims at learning invariant features across domains in a supervised manner. Unlike classical Fisher feature extraction, FIDOS aims to minimize not only the within-class scatter but also the difference in distributions between domains. Therefore, the subspace constructed by FIDOS reduces the drift in distributions among different domains and at the same time preserves the discriminants across classes. Another advantage of FIDOS over classical Fisher is that it extracts more features when multiple source domains are available in the training set; this is essential for good classification, especially when the number of classes is small. Experimental results on both artificial and real data, and comparisons with other methods, demonstrate the efficiency of our method in classifying objects under domain shift. [Copyright © Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
3. The dissimilarity space: Bridging structural and statistical pattern recognition
- Author
- Duin, Robert P.W. and Pękalska, Elżbieta
- Subjects
- *PATTERN recognition systems, *LATTICE theory, *EXPERT systems, *TOPOLOGICAL spaces, *VECTOR analysis, *MACHINE learning
- Abstract
Human experts constitute pattern classes of natural objects based on their observed appearance. Automatic systems for pattern recognition may be designed on a structural description derived from sensor observations. Alternatively, training sets of examples can be used in statistical learning procedures. These are most powerful for vectorial object representations. Unfortunately, structural descriptions do not match well with vectorial representations, and consequently it is difficult to combine the structural and statistical approaches to pattern recognition. Structural descriptions may, however, be used to compare objects. This leads to a set of pairwise dissimilarities from which vectors can be derived for the purpose of statistical learning. The resulting dissimilarity representation thereby bridges the structural and statistical approaches. The dissimilarity space is one of the possible spaces resulting from this representation. It is very general and easy to implement. This paper gives a historical review and discusses the properties of the dissimilarity space approach, illustrated by a set of examples on real-world datasets. [Copyright © Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
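The dissimilarity-space construction this abstract describes can be sketched in a few lines: each object becomes a vector of its dissimilarities to a small representation set, after which any ordinary vector-space classifier applies. The Euclidean stand-in metric, the toy data and the function name below are illustrative assumptions, not code from the paper.

```python
import numpy as np

def dissimilarity_space(X, R, metric=None):
    """Map objects X to vectors of dissimilarities to a representation set R.

    Any pairwise dissimilarity (even a non-metric one) can be plugged in via
    `metric`; Euclidean distance is used here purely as a stand-in.
    """
    if metric is None:
        metric = lambda a, b: np.linalg.norm(a - b)
    return np.array([[metric(x, r) for r in R] for x in X])

# Toy data: four 1-D objects and a representation set of three prototypes.
X = np.array([[0.0], [0.1], [1.0], [1.1]])
R = np.array([[0.0], [1.0], [2.0]])
D = dissimilarity_space(X, R)
# D is a 4x3 matrix; row i holds the distances from object i to each
# prototype, and a standard linear classifier can be trained on these rows.
```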
4. Classification of three-way data by the dissimilarity representation
- Author
- Porro-Muñoz, Diana, Duin, Robert P.W., Talavera, Isneri, and Orozco-Alzate, Mauricio
- Subjects
- *DATA analysis, *CHEMOMETRICS, *DIGITAL signal processing, *IMAGE analysis, *DATA structures, *COMPUTATIONAL complexity
- Abstract
Representation of objects by multi-dimensional data arrays has become very common in many research areas, e.g., image analysis, signal processing and chemometrics. In most cases, it is the straightforward representation obtained from sophisticated measurement equipment, e.g., in radar signal processing. Although the use of this complex data structure could be advantageous for a better discrimination between different classes of objects, it is usually ignored; classification tools that take this structure into account have hardly been developed yet. Meanwhile, the dissimilarity representation has demonstrated advantages in the solution of classification problems, e.g., for spectral data. Dissimilarities also allow the representation of multi-dimensional objects in a way that exploits the data structure. This paper introduces their use as a tool for classifying objects originally represented by two-dimensional (2D) arrays. 2D measures can be useful to achieve this representation, and a 2D measure to compute the dissimilarity representation from spectral data with this kind of structure is proposed. It is compared to existing 2D measures in terms of the information that is taken into account and computational complexity. [Copyright © Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
5. An experimental study of one- and two-level classifier fusion for different sample sizes
- Author
- Zhang, Chun-Xia and Duin, Robert P.W.
- Subjects
- *SAMPLE size (Statistics), *PERFORMANCE evaluation, *EXPERIMENTAL design, *MACHINE learning
- Abstract
Due to the wide variety of fusion techniques available for combining multiple classifiers into a more accurate classifier, a number of good studies have been devoted to determining in what situations some fusion methods should be preferred over others. However, the sample size behavior of the various fusion methods has hitherto received little attention in the literature on multiple classifier systems. The main contribution of this paper is thus to investigate the effect of training sample size on their relative performance and to gain more insight into the conditions for the superiority of some combination rules. A large experiment is conducted to study the performance of some fixed and trainable combination rules for executing one- and two-level classifier fusion for different training sample sizes. The experimental results yield the following conclusions: when implementing one-level fusion to combine homogeneous or heterogeneous base classifiers, fixed rules outperform trainable ones in nearly all cases, the only exception being the merging of heterogeneous classifiers for large sample sizes. Moreover, the best classification for any considered sample size is generally achieved by a second level of combination (namely, utilizing one fusion rule to further combine a set of ensemble classifiers, each of them constructed by fusing base classifiers). Under these circumstances, it seems appropriate to adopt different types of fusion rules (fixed or trainable) as the combiners for the two levels of fusion. [Copyright © Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
6. Dissimilarity-based detection of schizophrenia.
- Author
- Ulaş, Aydın, Duin, Robert P.W., Castellani, Umberto, Loog, Marco, Mirtuono, Pasquale, Bicego, Manuele, Murino, Vittorio, Bellani, Marcella, Cerruti, Stefania, Tansella, Michele, and Brambilla, Paolo
- Subjects
- *DIAGNOSIS of schizophrenia, *MAGNETIC resonance imaging, *CLASSIFICATION, *MODAL logic, *MEDICAL care, *IMAGING systems
- Abstract
In this article, a novel approach to schizophrenia classification using magnetic resonance images (MRI) is proposed. The presented method is based on dissimilarity-based classification techniques applied to morphological MRIs and diffusion-weighted images (DWI). Instead of working with features directly, pairwise dissimilarities between expert-delineated regions of interest (ROIs) are considered as representations on which learning and classification can be performed. Experiments are carried out on a set of 59 patients and 55 controls, and several pairwise dissimilarity measurements are analyzed. We demonstrate that significant improvements can be obtained when combining over different ROIs and different dissimilarity measures. We show that by combining ROIs in the dissimilarity-based representation, we achieve higher accuracies; the dissimilarity-based representation outperforms the feature-based representation in all cases, and the best results are obtained by combining the two modalities. In summary, our contribution is threefold: (i) we introduce dissimilarity-based classification for schizophrenia detection and show that it achieves better results than standard feature-based classification, (ii) we use dissimilarity combination to achieve better accuracies when carefully selected ROIs and dissimilarity measures are considered, and (iii) we show that by combining multiple modalities we can achieve even better results. © 2011 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 21, 179-192, 2011 [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
7. A generalization of dissimilarity representations using feature lines and feature planes
- Author
- Orozco-Alzate, Mauricio, Duin, Robert P.W., and Castellanos-Domínguez, Germán
- Subjects
- *GENERALIZATION, *REPRESENTATIONS of graphs, *PROTOTYPES, *STATISTICAL correlation, *PLANE geometry, *BAYESIAN analysis
- Abstract
Under representational restrictions, the nearest feature rules and dissimilarity-based classifiers are feasible alternatives to the nearest neighbor method; individually, however, they may not be sufficiently powerful if a very small set of prototypes is required, e.g., when it is computationally expensive to deal with larger sets of prototypes. In this paper, we show that combining both strategies, taking advantage of their individual properties, provides an improvement, particularly for correlated data sets. The combined strategy consists of deriving an enriched (generalized) dissimilarity representation by using the nearest feature rules, namely feature lines and feature planes. On top of that enriched representation, Bayesian classifiers can be constructed in order to obtain good generalization. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
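The feature-line rule behind this enriched representation reduces to a point-to-line distance: a query is compared not to individual prototypes but to the line spanned by a pair of same-class prototypes. A minimal sketch, with illustrative names and toy data not taken from the paper:

```python
import numpy as np

def feature_line_distance(q, p1, p2):
    """Distance from a query q to the feature line through prototypes p1 and p2."""
    d = p2 - p1
    t = np.dot(q - p1, d) / np.dot(d, d)   # orthogonal projection parameter
    return np.linalg.norm(q - (p1 + t * d))

# A query near the segment joining two same-class prototypes gets a small
# feature-line distance even when both prototypes themselves are far away.
p1, p2 = np.array([0.0, 0.0]), np.array([2.0, 0.0])
dist = feature_line_distance(np.array([1.0, 0.5]), p1, p2)
```

In the generalized dissimilarity representation, such feature-line (and feature-plane) distances replace or augment the plain object-to-prototype distances.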
8. Approximating the multiclass ROC by pairwise analysis
- Author
- Landgrebe, Thomas C.W. and Duin, Robert P.W.
- Subjects
- *COMPUTATIONAL complexity, *PATTERN perception, *PATTERN recognition systems, *MACHINE theory, *COMPUTER vision
- Abstract
The use of Receiver Operator Characteristic (ROC) analysis for model selection and threshold optimisation has become standard practice in the design of two-class pattern recognition systems. Advantages include decision boundary adaptation to imbalanced misallocation costs, the ability to fix some classification errors, and performance evaluation in imprecise, ill-defined conditions where costs or prior probabilities may vary. Extending this to the multiclass case has recently become a topic of interest. The primary challenge involved is the computational complexity, which increases to the power of the number of classes, rendering many problems intractable. In this paper the multiclass ROC is formalised and the computational complexities exposed. A pairwise approach is proposed that approximates the multi-dimensional operating characteristic by discounting some interactions, resulting in an algorithm that is tractable and extensible to large numbers of classes. Two additional multiclass optimisation techniques are also proposed that provide a benchmark for the pairwise algorithm. Experiments compare the various approaches in a variety of practical situations, demonstrating the efficacy of the pairwise approach. [Copyright © Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
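The pairwise idea can be illustrated by evaluating one ordinary two-class characteristic per class pair, C(C-1)/2 of them in total, instead of the full multi-dimensional one. The sketch below summarises each pair by its empirical AUC; the function names and scoring scheme are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from itertools import combinations

def auc(pos, neg):
    """Empirical two-class AUC: chance a positive outscores a negative (ties count 1/2)."""
    p = np.asarray(pos)[:, None]
    n = np.asarray(neg)[None, :]
    return float((p > n).mean() + 0.5 * (p == n).mean())

def pairwise_auc(scores, labels):
    """One two-class AUC per class pair -- a tractable pairwise stand-in for
    the full multiclass operating characteristic."""
    result = {}
    for a, b in combinations(sorted(set(labels)), 2):
        result[(a, b)] = auc([s for s, l in zip(scores, labels) if l == a],
                             [s for s, l in zip(scores, labels) if l == b])
    return result
```

The number of pairs grows only quadratically in the number of classes, which is what keeps the approximation tractable.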
9. Prototype selection for dissimilarity-based classifiers
- Author
- Pękalska, Elżbieta, Duin, Robert P.W., and Paclík, Pavel
- Subjects
- *PROTOTYPES, *INDUSTRIAL design, *ENGINEERING design, *PATTERN perception
- Abstract
A conventional way to discriminate between objects represented by dissimilarities is the nearest neighbor method. A more efficient, and sometimes more accurate, solution is offered by other dissimilarity-based classifiers. They construct a decision rule based on the entire training set, but need just a small set of prototypes, the so-called representation set, as a reference for classifying new objects. Such alternative approaches may be especially advantageous for non-Euclidean or even non-metric dissimilarities. The choice of a proper representation set for dissimilarity-based classifiers has not yet been fully investigated; it appears that a random selection may work well. In this paper, a number of experiments have been conducted on various metric and non-metric dissimilarity representations and prototype selection methods. Several procedures, such as traditional feature selection methods (here effectively searching for prototypes), mode seeking and linear programming, are compared to random selection. In general, we find that systematic approaches lead to better results than random selection, especially for a small number of prototypes. Although there is no single winner, as the outcome depends on data characteristics, k-centres generally works well. For two-class problems, an important observation is that our dissimilarity-based discrimination functions relying on significantly reduced prototype sets (3–10% of the training objects) offer a similar or much better classification accuracy than the best k-NN rule on the entire training set. This may be reached for multi-class data as well; however, such problems are more difficult. [Copyright © Elsevier]
- Published
- 2006
- Full Text
- View/download PDF
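A systematic selection scheme of the kind compared against random picking can be sketched as a greedy farthest-first pass over the dissimilarity matrix. This is a simple stand-in in the spirit of k-centres; the exact procedure used in the paper may differ, and the names and toy data are illustrative.

```python
import numpy as np

def k_centres(D, k, start=0):
    """Greedy farthest-first prototype selection on a dissimilarity matrix D."""
    chosen = [start]
    while len(chosen) < k:
        nearest = D[:, chosen].min(axis=1)    # distance to the closest prototype so far
        chosen.append(int(nearest.argmax()))  # add the worst-covered object
    return chosen

# Objects at positions 0, 1 and 10 on a line: the outlier at 10 is picked
# second, whereas a random choice might waste a prototype on the pair 0/1.
pos = np.array([0.0, 1.0, 10.0])
D = np.abs(pos[:, None] - pos[None, :])
```

Note that the selection needs only the dissimilarity matrix, so it works equally for non-metric data.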
10. Linear Dimensionality Reduction via a Heteroscedastic Extension of LDA: The Chernoff Criterion.
- Author
- Loog, Marco and Duin, Robert P.W.
- Subjects
- *EIGENVECTORS, *VECTOR spaces, *HETEROSCEDASTICITY, *ANALYSIS of variance, *MATRICES (Mathematics), *REGRESSION analysis
- Abstract
We propose an eigenvector-based heteroscedastic linear dimension reduction (LDR) technique for multiclass data. The technique is based on a heteroscedastic two-class technique which utilizes the so-called Chernoff criterion, and successfully extends the well-known linear discriminant analysis (LDA). The latter, which is based on the Fisher criterion, is incapable of dealing with heteroscedastic data in a proper way. For the two-class case, the between-class scatter is generalized so as to capture differences in (co)variances. It is shown that the classical notion of between-class scatter can be associated with Euclidean distances between class means. From this viewpoint, the between-class scatter is generalized by employing the Chernoff distance measure, leading to our proposed heteroscedastic measure. Finally, using the results from the two-class case, a multiclass extension of the Chernoff criterion is proposed. This criterion combines separation information present in the class means as well as the class covariance matrices. Extensive experiments and a comparison with similar dimension reduction techniques are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2004
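The key quantity here, the Chernoff distance between two Gaussian class models, reduces to a closed-form expression that, unlike the Euclidean between-class distance of classical LDA, also reacts to covariance differences. A minimal sketch of that distance alone (not the paper's full eigenvector-based reduction):

```python
import numpy as np

def chernoff_distance(m1, S1, m2, S2, s=0.5):
    """Chernoff distance between two Gaussians (s = 0.5 gives Bhattacharyya).

    The quadratic term generalizes the Euclidean distance between class means;
    the log-determinant term is nonzero exactly when covariances differ --
    the heteroscedastic information the Chernoff criterion adds to LDA.
    """
    S = s * S1 + (1 - s) * S2
    dm = m1 - m2
    quad = 0.5 * s * (1 - s) * dm @ np.linalg.solve(S, dm)
    logdet = 0.5 * (np.log(np.linalg.det(S))
                    - s * np.log(np.linalg.det(S1))
                    - (1 - s) * np.log(np.linalg.det(S2)))
    return quad + logdet
```

For identical class distributions the distance is zero, and for equal covariances only the mean-difference term survives, recovering Fisher-like behaviour.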
11. The MDF discrimination measure: Fisher in disguise
- Author
- Loog, Marco, Duin, Robert P.W., and Viergever, Max A.
- Subjects
- *ARTIFICIAL neural networks, *ARTIFICIAL intelligence, *STATISTICAL correlation, *PROBABILITY theory
- Abstract
Recently, a discrimination measure for feature extraction for two-class data, called the maximum discriminating (MDF) measure (Talukder and Casasent [Neural Networks 14 (2001) 1201–1218]), was introduced. In the present paper, it is shown that the MDF discrimination measure produces exactly the same results as the classical Fisher criterion, on the condition that the two prior probabilities are chosen to be equal. The effect of unequal priors on the efficiency of the measures is also discussed. [Copyright © Elsevier]
- Published
- 2004
- Full Text
- View/download PDF
12. Dissimilarity-based classification of spectra: computational issues
- Author
- Paclík, Pavel and Duin, Robert P.W.
- Subjects
- *SPECTRUM analysis, *IMAGING systems, *OPTOELECTRONIC devices
- Abstract
For the sake of classification, spectra are traditionally represented by points in a high-dimensional feature space, spanned by spectral bands. An alternative approach is to represent spectra by dissimilarities to other spectra. This relational representation enables one to treat spectra as connected entities and to emphasize characteristics such as shape, which are difficult to handle in the traditional approach. Several classification methods for relational representations have been developed and found to outperform the nearest-neighbor rule. Existing studies focus only on the performance measured by the classification error. However, for real-time spectral imaging applications, classification speed is of crucial importance. Therefore, in this paper, we focus on the computational aspects of the on-line classification of spectra. We show that classifiers built in dissimilarity spaces may also be applied significantly faster than the nearest-neighbor rule. [Copyright © Elsevier]
- Published
- 2003
- Full Text
- View/download PDF
13. Dissimilarity representations allow for building good classifiers
- Author
- Pękalska, Elżbieta and Duin, Robert P.W.
- Subjects
- *REPRESENTATIONS of graphs, *DENSITY functionals
- Abstract
In this paper, a classification task on dissimilarity representations is considered. A traditional way to discriminate between objects represented by dissimilarities is the nearest neighbor method. It suffers, however, from a number of limitations, namely high computational complexity, a potential loss of accuracy when a small set of prototypes is used, and sensitivity to noise. To overcome these shortcomings, we propose to use a normal density-based classifier constructed on the same representation. We show that such a classifier, based on a weighted combination of dissimilarities, can significantly improve on the nearest neighbor rule with respect to both recognition accuracy and computational effort. [Copyright © Elsevier]
- Published
- 2002
14. Uniform Object Generation for Optimizing One-class Classifiers.
- Author
- Tax, David M.J. and Duin, Robert P.W.
- Subjects
- *AUTOMATIC classification, *FUNCTION spaces
- Abstract
Examines uniform object generation for optimizing one-class classifiers. Topics include the identification of the target class, the single class of data to be separated from the rest of feature space in one-class classification; the proposal of the support vector data description to solve the one-class classification problem; and the presentation of results for artificial and real-world data.
- Published
- 2002
15. A note on core research issues for statistical pattern recognition
- Author
- Duin, Robert P.W., Roli, Fabio, and de Ridder, Dick
- Published
- 2002
- Full Text
- View/download PDF
16. Statistical Pattern Recognition: A Review.
- Author
- Jain, Anil K. and Duin, Robert P.W.
- Subjects
- *PATTERN recognition systems, *ARTIFICIAL neural networks
- Abstract
Presents a study which summarizes and compares some of the methods used in the various stages of a pattern recognition system. Topics include general information on pattern recognition; the template matching method; the statistical approach; neural network models; and the curse of dimensionality and peaking phenomena.
- Published
- 2000
- Full Text
- View/download PDF
17. Recent submissions in linear dimensionality reduction and face recognition
- Author
- Duin, Robert P.W., Loog, Marco, and Ho, Tin Kam
- Published
- 2006
- Full Text
- View/download PDF
18. Award winning papers from the 19th International Conference on Pattern Recognition (ICPR)
- Author
- Duin, Robert P.W., Laurendeau, Denis, and Lovell, Brian
- Published
- 2010
- Full Text
- View/download PDF
19. Multiple-instance learning as a classifier combining problem
- Author
- Li, Yan, Tax, David M.J., Duin, Robert P.W., and Loog, Marco
- Subjects
- *UNCERTAINTY (Information theory), *DISTRIBUTION (Probability theory), *DECISION making, *ESTIMATION theory, *PARAMETER estimation, *DATA analysis
- Abstract
In multiple-instance learning (MIL), an object is represented as a bag consisting of a set of feature vectors called instances. In the training set, the labels of bags are given, while the uncertainty comes from the unknown labels of instances in the bags. In this paper, we study MIL with the assumption that instances are drawn from a mixture distribution of the concept and the non-concept, which leads to a convenient way to solve MIL as a classifier combining problem. It is shown that instances can be classified with any standard supervised classifier by re-weighting the classification posteriors. Given the instance labels, the label of a bag can be obtained as a classifier combining problem. An optimal decision rule is derived that determines the threshold on the fraction of instances in a bag that is assigned to the concept class. We provide estimators for the two parameters in the model. The method is tested on a toy data set and various benchmark data sets, and shown to provide results comparable to state-of-the-art MIL methods. [Copyright © Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
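The bag-level decision rule described in this abstract amounts to thresholding the fraction of instances assigned to the concept class. A minimal sketch of that combining step only; the paper derives the optimal threshold, which is left here as a free parameter, and the names and toy posteriors are illustrative.

```python
import numpy as np

def classify_bag(instance_posteriors, threshold):
    """Combining-rule view of MIL: label the bag positive when the fraction
    of instances assigned to the concept class exceeds a threshold."""
    votes = (np.asarray(instance_posteriors) > 0.5).mean()
    return int(votes > threshold)

# Per-instance concept posteriors, e.g. from any standard supervised
# classifier applied to the individual instances of one bag.
bag = [0.9, 0.8, 0.1, 0.2, 0.7]
```

With three of five instances voting for the concept, the bag is positive at threshold 0.5 but negative at threshold 0.7.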
20. Integration of prior knowledge of measurement noise in kernel density classification
- Author
- Li, Yunlei, de Ridder, Dick, Duin, Robert P.W., and Reinders, Marcel J.T.
- Subjects
- *PATTERN recognition systems, *NOISE measurement, *KERNEL functions, *PATTERN perception
- Abstract
Samples can be measured with different precisions and reliabilities in different experiments, or even within the same experiment. These varying levels of measurement noise may deteriorate the performance of a pattern recognition system if not treated with care. Here we investigate the benefit of incorporating prior knowledge about measurement noise into system construction. We propose a kernel density classifier which integrates such prior knowledge. Instead of using an identical kernel for each sample, we transform the prior knowledge into a distinct kernel for each sample. The integration procedure is straightforward and easy to interpret. In addition, we show how to estimate the diverse measurement noise levels in a real-world dataset. Compared to the basic methods, the new kernel density classifier can give a significantly better classification performance. As expected, this improvement is more obvious for datasets with small sample sizes and large numbers of features. [Copyright © Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
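The per-sample kernel idea can be sketched as a Parzen-style classifier whose Gaussian kernel width differs per training sample, standing in for known per-sample measurement noise. A 1-D sketch under those assumptions (not the paper's estimator); a fixed-width Parzen classifier is the constant-`sigmas` special case.

```python
import numpy as np

def kde_posteriors(x, X_train, y_train, sigmas):
    """Kernel density classification with a distinct Gaussian kernel width
    per training sample (`sigmas`), reflecting per-sample measurement noise."""
    x, X = np.atleast_1d(x), np.asarray(X_train, float)
    # One 1-D Gaussian kernel per training sample, each with its own width.
    k = np.exp(-0.5 * ((x - X) / sigmas) ** 2) / sigmas
    classes = sorted(set(y_train))
    dens = np.array([k[np.asarray(y_train) == c].sum() for c in classes])
    return dens / dens.sum()   # class posteriors at x
```

A noisy sample thus gets a wide, flat kernel that contributes little sharp evidence, while a precise sample gets a narrow, peaked one.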
21. Dimensionality reduction of image features using the canonical contextual correlation projection
- Author
- Loog, Marco, van Ginneken, Bram, and Duin, Robert P.W.
- Subjects
- *STATISTICAL correlation, *MULTIVARIATE analysis, *LINEAR free energy relationship, *DIAGNOSTIC imaging
- Abstract
A linear, discriminative, supervised technique for reducing feature vectors extracted from image data to a lower-dimensional representation is proposed. It is derived from classical linear discriminant analysis (LDA), extending this technique to cases where there is dependency between the output variables, i.e., the class labels, and not only between the input variables. (The latter can readily be dealt with in standard LDA.) The novel method is useful, for example, in supervised segmentation tasks in which high-dimensional feature vectors describe the local structure of the image. The principal idea is that where standard LDA merely takes into account a single class label for every feature vector, the new technique incorporates the class labels of its neighborhood in the analysis as well. In this way, the spatial class label configuration in the vicinity of every feature vector is accounted for, resulting in a technique suitable for, e.g., image data. This extended LDA, which takes spatial label context into account, is derived from a formulation of standard LDA in terms of canonical correlation analysis. The novel technique is called the canonical contextual correlation projection (CCCP). An additional drawback of LDA is that it cannot extract more features than the number of classes minus one. In the two-class case this means that only a reduction to one dimension is possible. Our contextual LDA approach can avoid such extreme deterioration of the classification space and retain more than one dimension. The technique is exemplified on a pixel-based medical image segmentation problem in which it is shown to give a significant improvement in segmentation accuracy. [Copyright © Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
22. Almost autonomous training of mixtures of principal component analyzers
- Author
- Musa, Mohamed E.M., de Ridder, Dick, Duin, Robert P.W., and Atalay, Volkan
- Subjects
- *ALGORITHMS, *TRAINING, *FOUNDATIONS of arithmetic, *COMPUTER programming
- Abstract
In recent years, a number of mixtures of local PCA models have been proposed. Most of these models require the user to set the number of submodels (local models) in the mixture as well as the dimensionality of the submodels (i.e., the number of PCs). To make the model free of these parameters, we propose a greedy expectation-maximization algorithm to find a suboptimal number of submodels. For a given retained variance ratio, the proposed algorithm estimates for each submodel the dimensionality that retains this given ratio. We test the proposed method on two different classification problems: handwritten digit recognition and two-class ionosphere data classification. The results show that the proposed method performs well. [Copyright © Elsevier]
- Published
- 2004
- Full Text
- View/download PDF
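The per-submodel dimensionality rule mentioned in the abstract, keeping just enough principal components to reach a requested retained-variance ratio, can be sketched from the eigenvalues of a submodel's covariance matrix. A minimal sketch with illustrative names and toy eigenvalues:

```python
import numpy as np

def dims_for_variance(eigenvalues, ratio):
    """Smallest number of principal components whose cumulative share of the
    total variance reaches the requested retained-variance ratio."""
    ev = np.sort(np.asarray(eigenvalues, float))[::-1]   # largest first
    frac = np.cumsum(ev) / ev.sum()                      # cumulative variance share
    return int(np.searchsorted(frac, ratio) + 1)
```

For eigenvalues 4, 3, 2, 1 the cumulative shares are 0.4, 0.7, 0.9, 1.0, so a 50% target needs two components and a 95% target needs all four.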
23. A Generalized Kernel Approach to Dissimilarity-based Classification.
- Author
- Pekalska, Elżbieta, Paclík, Pavel, and Duin, Robert P.W.
- Subjects
- *MACHINE learning, *KERNEL functions
- Abstract
Usually, objects to be classified are represented by features. In this paper, we discuss an alternative object representation based on dissimilarity values. If such distances separate the classes well, the nearest neighbor method offers a good solution. However, dissimilarities used in practice are usually far from ideal and the performance of the nearest neighbor rule suffers from its sensitivity to noisy examples. We show that other, more global classification techniques are preferable to the nearest neighbor rule in such cases. For classification purposes, two different ways of using generalized dissimilarity kernels are considered. In the first one, distances are isometrically embedded in a pseudo-Euclidean space and the classification task is performed there. In the second approach, classifiers are built directly on distance kernels. Both approaches are described theoretically and then compared using experiments with different dissimilarity measures and datasets, including degraded data simulating the problem of missing values. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
24. On combining classifiers.
- Author
- Kittler, Josef, Hatef, Mohamad, Duin, Robert P.W., and Matas, Jiri
- Subjects
- *STRUCTURAL frames, *CLASSIFICATION
- Abstract
Provides information on the development of a theoretical framework for combining classifiers which use distinct pattern representations. Topics include a comparison of the different classifier combination schemes; an investigation of the sensitivity analysis of the schemes; and the results of the analysis.
- Published
- 1998
- Full Text
- View/download PDF
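The fixed combining rules studied in this framework (sum, product, max, min and relatives) all operate on a stack of per-classifier posterior estimates. A minimal sketch, with an illustrative toy matrix of posteriors rather than any data from the paper:

```python
import numpy as np

def combine(posteriors, rule="sum"):
    """Fixed combining rules applied to a (classifiers x classes) stack of
    posterior estimates; the combined class is the argmax of the result."""
    P = np.asarray(posteriors, float)
    if rule == "sum":
        return P.mean(axis=0)
    if rule == "product":
        return P.prod(axis=0)
    if rule == "max":
        return P.max(axis=0)
    if rule == "min":
        return P.min(axis=0)
    raise ValueError(rule)

# Three classifiers, two classes. The sum rule averages the posteriors and is
# noted in the paper to be remarkably robust to estimation errors.
P = [[0.6, 0.4], [0.7, 0.3], [0.3, 0.7]]
```

Here both the sum and product rules favour class 0, even though one of the three classifiers disagrees.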
25. Spherical and Hyperbolic Embeddings of Data.
- Author
- Wilson, Richard C., Hancock, Edwin R., Pekalska, Elzbieta, and Duin, Robert P.W.
- Subjects
- *EMBEDDINGS (Mathematics), *COMPUTER vision, *HYPERBOLIC geometry, *PATTERN recognition systems, *GEODESIC distance, *EIGENVALUES, *MANIFOLDS (Mathematics), *CURVATURE
- Abstract
Many computer vision and pattern recognition problems may be posed as the analysis of a set of dissimilarities between objects. For many types of data, these dissimilarities are not euclidean (i.e., they do not represent the distances between points in a euclidean space), and therefore cannot be isometrically embedded in a euclidean space. Examples include shape dissimilarities, graph distances and mesh geodesic distances. In this paper, we provide a means of embedding such non-euclidean data onto surfaces of constant curvature. We aim to embed the data in a space whose radius of curvature is determined by the dissimilarity data. The space can be either of positive curvature (spherical) or of negative curvature (hyperbolic). We give an efficient method for solving the spherical and hyperbolic embedding problems on symmetric dissimilarity data. Our approach gives the radius of curvature and a method for approximating the objects as points on a hyperspherical manifold without optimisation. For objects which do not reside exactly on the manifold, we develop an optimisation-based procedure for approximate embedding on a hyperspherical manifold. We use the exponential map between the manifold and its local tangent space to solve the optimisation problem locally in the euclidean tangent space. This process is efficient enough to allow us to embed data sets of several thousand objects. We apply our method to a variety of data, including time warping functions, shape similarities, graph similarity and gesture similarity data. In each case the embedding maintains the local structure of the data while placing the points in a metric space. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
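The optimisation-free spherical case rests on one identity: for points on a radius-r hypersphere, the inner products satisfy <x_i, x_j> = r^2 cos(d_ij / r) for geodesic distances d_ij, so an eigendecomposition of that matrix recovers coordinates. A bare-bones sketch of the spherical case only, with a fixed radius rather than the paper's data-driven estimate of the radius of curvature:

```python
import numpy as np

def spherical_embedding(D, r, dim):
    """Embed symmetric geodesic dissimilarities D on a sphere of radius r
    via the Gram-like matrix r^2 * cos(D / r)."""
    G = r ** 2 * np.cos(D / r)
    w, V = np.linalg.eigh(G)
    top = np.argsort(w)[::-1][:dim]              # largest eigenvalues first
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))

# Three points on a unit circle at angles 0, 90 and 180 degrees, described
# only by their pairwise geodesic (arc-length) distances.
D = np.array([[0.0, np.pi / 2, np.pi],
              [np.pi / 2, 0.0, np.pi / 2],
              [np.pi, np.pi / 2, 0.0]])
X = spherical_embedding(D, 1.0, 2)   # rows lie on the radius-1 circle
```

Data that fit the sphere exactly, as here, are recovered isometrically; otherwise the discarded eigenvalues measure how far the points fall off the manifold.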
26. Multi-spectral video endoscopy system for the detection of cancerous tissue
- Author
- Leitner, Raimund, Biasio, Martin De, Arnold, Thomas, Dinh, Cuong Viet, Loog, Marco, and Duin, Robert P.W.
- Subjects
- *VIDEO endoscopy, *CANCER diagnosis, *IMAGE processing, *SUPPORT vector machines, *IMAGE registration, *ACCURACY
- Abstract
Multi-spectral video endoscopy provides considerable potential for early-stage cancer detection. Previous multi-spectral image acquisition systems were of limited use for endoscopy due to (i) the necessary spatial scanning of push-broom approaches or (ii) the impractically long switching times of liquid crystal tunable filters. Recent technological advances in the field of tunable filters, in particular fast acousto-optical tunable filters (AOTF), make switching times below 1 ms feasible. Thus, AOTFs represent a suitable technology for the acquisition of hyper-spectral image and multi-spectral video data with excellent spatial and temporal resolution. In this paper, we propose a hyper-spectral imaging endoscope using a fast AOTF synchronized with a highly sensitive EMCCD camera for the detection of cancerous tissue. The setup demonstrates that the acquisition of hyper-spectral image and multi-spectral video data is feasible and enables the augmentation of endoscopic videos with overlays indicating cancerous tissue regions. Using hyper-spectral measurements from biopsies acquired with the setup in a clinical environment, it is shown that the spectral characteristic of cancerous regions is tissue dependent. Even a sophisticated classifier such as a Support Vector Machine (SVM) or a Mixture of Gaussians Classifier (MOGC) cannot generalize the discriminative information if the training set contains measurements from different tissue types (e.g. larynx vs. parotid). In contrast, a training data selection scheme that chooses similar training sets for a given test set achieves a better prediction accuracy using an approach based on a Quadratic Discriminant Classifier (QDC), with the important advantages of improved robustness and less liability to overtraining. Combined with an image registration removing motion-based acquisition artefacts, the spectral information allows the augmentation of the video stream with overlays indicating cancerous tissue regions. [Copyright © Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
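The selection-plus-QDC idea in the abstract above can be illustrated with a minimal NumPy sketch. All function names, the nearest-to-test-mean selection rule, and the regularization constant are our own illustrative choices, not the authors' implementation:

```python
import numpy as np

def fit_qdc(X, y):
    """Fit a quadratic discriminant classifier: one Gaussian per class."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularize
        params[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1],
                     np.log(len(Xc) / len(X)))
    return params

def predict_qdc(params, X):
    """Assign each row of X to the class with the highest Gaussian score."""
    scores = []
    for c, (mu, inv_cov, logdet, logprior) in sorted(params.items()):
        d = X - mu
        # Gaussian log-density up to an additive constant, plus log prior
        scores.append(-0.5 * np.einsum('ij,jk,ik->i', d, inv_cov, d)
                      - 0.5 * logdet + logprior)
    classes = sorted(params)
    return np.array(classes)[np.argmax(scores, axis=0)]

def select_similar_training(X_train, y_train, X_test, k=50):
    """Keep the k training spectra closest (Euclidean) to the test-set mean,
    a stand-in for the paper's training data selection scheme."""
    dist = np.linalg.norm(X_train - X_test.mean(axis=0), axis=1)
    idx = np.argsort(dist)[:k]
    return X_train[idx], y_train[idx]
```

The QDC only needs per-class means and covariances, which is one reason such a classifier can be less liable to overtraining than more flexible models on small spectral training sets.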
27. SEDMI: Saliency based edge detection in multispectral images
- Author
-
Dinh, Cuong V., Leitner, Raimund, Paclik, Pavel, Loog, Marco, and Duin, Robert P.W.
- Subjects
- *
IMAGE analysis , *CLUSTER analysis (Statistics) , *SPECTRUM analysis , *COMPARATIVE studies , *COMPUTER vision , *EMBEDDED computer systems - Abstract
Abstract: Detecting edges in multispectral images is difficult because different spectral bands may contain different edges. Existing approaches calculate the edge strength of a pixel locally, based on the variation in intensity between this pixel and its neighbors. Thus, they often fail to detect the edges of objects embedded in background clutter or objects which appear in only some of the bands. We propose SEDMI, a method that aims to overcome this problem by considering the salient properties of edges in an image. Based on the observation that edges are rare events in the image, we recast the problem of edge detection into the problem of detecting events that have a small probability in a newly defined feature space. The feature space is constructed by the spatial gradient magnitude in all spectral channels. As edges are often confined to small, isolated clusters in this feature space, the edge strength of a pixel, or the confidence value that this pixel is an event with a small probability, can be calculated based on the size of the cluster to which it belongs. Experimental results on a number of multispectral data sets and a comparison with other methods demonstrate the robustness of the proposed method in detecting objects embedded in background clutter or appearing only in a few bands. [Copyright © Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
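The rarity-based edge strength described in the SEDMI abstract can be sketched in a few lines of NumPy. This is an illustrative toy only: coarse histogram binning stands in for the paper's clustering, and the function name and bin count are our own assumptions:

```python
import numpy as np

def sedmi_edge_strength(img, n_bins=8):
    """Toy version of the SEDMI idea: pixels whose multi-band
    gradient-magnitude vector falls in a sparsely populated region of the
    feature space are likely edges."""
    h, w, bands = img.shape
    # Per-band spatial gradient magnitude -> one feature vector per pixel.
    feats = np.zeros((h, w, bands))
    for b in range(bands):
        gy, gx = np.gradient(img[:, :, b].astype(float))
        feats[:, :, b] = np.hypot(gx, gy)
    flat = feats.reshape(-1, bands)
    # Coarse histogram binning stands in for clustering: the fewer pixels
    # share a bin, the rarer (more edge-like) the pixel is.
    edges = [np.linspace(flat[:, b].min(), flat[:, b].max() + 1e-9, n_bins + 1)
             for b in range(bands)]
    bin_idx = np.stack(
        [np.clip(np.digitize(flat[:, b], edges[b]) - 1, 0, n_bins - 1)
         for b in range(bands)], axis=1)
    keys = np.ravel_multi_index(bin_idx.T, (n_bins,) * bands)
    counts = np.bincount(keys, minlength=n_bins ** bands)
    strength = 1.0 / counts[keys]  # small cluster -> high edge strength
    return strength.reshape(h, w)
```

Because the rarity is computed in the joint feature space over all bands, an edge present in only one band still yields an unusual feature vector, which is the property the abstract emphasizes.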
28. A multi-classifier for grading knee osteoarthritis using gait analysis
- Author
-
Şen Köktaş, Nigar, Yalabik, Neşe, Yavuzer, Güneş, and Duin, Robert P.W.
- Subjects
- *
CLASSIFICATION , *OSTEOARTHRITIS , *ANIMAL locomotion , *KNEE diseases , *DATA analysis , *PATTERN perception , *NUMERICAL analysis , *DECISION making - Abstract
Abstract: This study presents a system for detecting and scoring a knee disorder, namely osteoarthritis (OA). The data used for training and recognition are mainly obtained through computerized gait analysis, which is a numerical representation of the mechanical measurements of human walking patterns. History and clinical characteristics of the subjects, such as age, body mass index and pain level, are also included in decision-making. Subjects are allocated into four OA-severity categories, formed in accordance with the Kellgren–Lawrence scale: “Normal”, “Mild”, “Moderate”, and “Severe”. Different types of classifiers are combined to incorporate the different types of data and to take best advantage of each classifier for better accuracy. A decision tree is developed with Multilayer Perceptrons (MLP) at the leaves. This gives an opportunity to use neural networks to extract hidden (i.e. implicit) knowledge in gait measurements and feed it back in the explicit form of the decision tree for reasoning. The approach is similar to the Mixture of Experts method. Individual feature selection is applied using the Mahalanobis distance measure, and the most discriminatory features are used for each expert MLP. The system is tested on a separate set and achieves an average success rate of about 80%. [Copyright © Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
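The per-expert Mahalanobis-based feature selection mentioned in the abstract can be sketched as a simple per-feature ranking. This is a hedged simplification (a univariate two-class distance with a pooled variance); the function name is ours and the original work uses its own selection procedure over its gait features:

```python
import numpy as np

def mahalanobis_feature_ranking(X, y):
    """Rank individual features by the (per-feature) Mahalanobis distance
    between the two class means: |mu0 - mu1| / pooled standard deviation.
    Returns feature indices, most discriminatory first."""
    c0, c1 = np.unique(y)
    X0, X1 = X[y == c0], X[y == c1]
    # Pooled within-class variance per feature.
    pooled_var = (X0.var(axis=0, ddof=1) * (len(X0) - 1)
                  + X1.var(axis=0, ddof=1) * (len(X1) - 1)) / (len(X) - 2)
    score = np.abs(X0.mean(axis=0) - X1.mean(axis=0)) / np.sqrt(pooled_var + 1e-12)
    return np.argsort(score)[::-1]
```

Each expert MLP would then be trained only on the top-ranked features for its own sub-problem, in the spirit of the Mixture of Experts arrangement the abstract describes.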
29. Component-based discriminative classification for hidden Markov models
- Author
-
Bicego, Manuele, Pękalska, Elżbieta, Tax, David M.J., and Duin, Robert P.W.
- Subjects
- *
MARKOV processes , *DISCRIMINANT analysis , *CLASSIFICATION , *SET theory , *PROBABILITY theory , *ESTIMATION theory , *EMBEDDINGS (Mathematics) , *KERNEL functions - Abstract
Abstract: Hidden Markov models (HMMs) have been successfully applied to a wide range of sequence modeling problems. In the classification context, one of the simplest approaches is to train a single HMM per class. A test sequence is then assigned to the class whose HMM yields the maximum a posteriori (MAP) probability. This generative scenario works well when the models are correctly estimated. However, the results can become poor when improper models are employed, due to the lack of prior knowledge, poor estimates, violated assumptions or insufficient training data. To improve the results in these cases we propose to combine the descriptive strengths of HMMs with discriminative classifiers. This is achieved by training feature-based classifiers in an HMM-induced vector space defined by specific components of individual hidden Markov models. We introduce four major ways of building such vector spaces and study which trained combiners are useful in which context. Moreover, we motivate and discuss the merit of our method in comparison to dynamic kernels, in particular, to the Fisher Kernel approach. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
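The simplest HMM-induced embedding is to represent each sequence by its log-likelihood under every class-wise model. The sketch below uses whole-model likelihoods; the paper builds richer spaces from specific model components, so treat this purely as a minimal illustration (discrete emissions, our own function names):

```python
import numpy as np

def hmm_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    via the scaled forward algorithm (pi: initial probs, A: transitions,
    B: emission probs, rows = states, columns = symbols)."""
    alpha = pi * B[:, obs[0]]
    s = alpha.sum()
    alpha = alpha / s
    logp = np.log(s)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        alpha = alpha / s
        logp += np.log(s)
    return logp

def hmm_score_space(sequences, models):
    """Embed each sequence as the vector of log-likelihoods under the
    per-class HMMs; a discriminative classifier (SVM, Fisher, ...) can
    then be trained in this fixed-dimensional space."""
    return np.array([[hmm_loglik(seq, *m) for m in models] for seq in sequences])
```

Once sequences live in this vector space, any feature-based classifier applies, which is exactly the bridge between generative HMMs and discriminative training that the abstract advocates.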
30. Minimum spanning tree based one-class classifier
- Author
-
Juszczak, Piotr, Tax, David M.J., Pękalska, Elżbieta, and Duin, Robert P.W.
- Subjects
- *
SPANNING trees , *BASES (Linear topological spaces) , *FACE perception , *BIOMETRY , *COMPUTER simulation , *NUMERICAL analysis - Abstract
Abstract: In the problem of one-class classification one of the classes, called the target class, has to be distinguished from all other possible objects. These are considered as non-targets. The need for solving such a task arises in many practical applications, e.g. in machine fault detection, face recognition, authorship verification, fraud recognition or person identification based on biometric data. This paper proposes a new one-class classifier, the minimum spanning tree class descriptor (MST_CD). This classifier builds on the structure of the minimum spanning tree constructed on the target training set only. The classification of test objects relies on their distances to the closest edge of that tree, hence the proposed method is an example of a distance-based one-class classifier. Our experiments show that the MST_CD performs especially well in the case of small sample size problems and in high-dimensional spaces. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
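The core of MST_CD, as the abstract describes it, is a minimum spanning tree on the target set plus a distance from a test object to the closest tree edge. A minimal sketch with SciPy (function names are ours; the paper's threshold selection and any edge pruning are omitted):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def fit_mst_cd(X_target):
    """Build the minimum spanning tree on the target class only and
    return the training points with the tree's edges as index pairs."""
    D = cdist(X_target, X_target)
    mst = minimum_spanning_tree(D).tocoo()
    edges = list(zip(mst.row, mst.col))
    return X_target, edges

def mst_cd_distance(model, Z):
    """Distance of each test object in Z to the closest MST edge,
    treating each edge as a line segment between its two endpoints."""
    X, edges = model
    out = np.full(len(Z), np.inf)
    for i, j in edges:
        a, b = X[i], X[j]
        ab = b - a
        # Project onto the segment and clamp to its endpoints.
        t = np.clip((Z - a) @ ab / (ab @ ab), 0.0, 1.0)
        proj = a + t[:, None] * ab
        out = np.minimum(out, np.linalg.norm(Z - proj, axis=1))
    return out
```

A test object would then be accepted as a target when this distance falls below a threshold, e.g. one chosen so that a desired fraction of the target training set is accepted.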
31. The interaction between classification and reject performance for distance-based reject-option classifiers
- Author
-
Landgrebe, Thomas C.W., Tax, David M.J., Paclík, Pavel, and Duin, Robert P.W.
- Subjects
- *
LINE receivers (Integrated circuits) , *COMPUTER operating systems , *PERFORMANCE evaluation , *SYSTEMS theory - Abstract
Abstract: Consider the class of problems in which a target class is well-defined, and an outlier class is ill-defined. In these cases new outlier classes can appear, or the class-conditional distribution of the outlier class itself may be poorly sampled. A strategy to deal with this problem involves a two-stage classifier, in which one stage is designed to perform discrimination between known classes, and the other stage encloses known data to protect against changing conditions. The two stages are, however, interrelated, implying that optimising one may compromise the other. In this paper the relation between the two stages is studied within an ROC analysis framework. We show how the operating characteristics can be used both for model selection and for choosing the reject threshold. An analytic study on a controlled experiment is performed, followed by some experiments on real-world datasets with the distance-based reject-option classifier. [Copyright © Elsevier]
- Published
- 2006
- Full Text
- View/download PDF
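The trade-off the abstract analyses can be made concrete by sweeping the reject threshold of a distance-based reject-option classifier and recording one operating point per setting. A hedged sketch (names and the two reported rates are our own simplification of a full ROC analysis):

```python
import numpy as np

def reject_operating_curve(d_target, d_outlier, thresholds):
    """For each reject threshold t, report the fraction of target objects
    accepted for classification (d <= t) and the fraction of outliers
    rejected (d > t): one operating point per threshold."""
    d_target = np.asarray(d_target)
    d_outlier = np.asarray(d_outlier)
    curve = []
    for t in thresholds:
        tar_acc = (d_target <= t).mean()  # targets passed to stage one
        out_rej = (d_outlier > t).mean()  # outliers caught by stage two
        curve.append((t, tar_acc, out_rej))
    return curve
```

Moving the threshold trades target acceptance against outlier rejection, which is exactly why optimising the discrimination stage and the enclosing stage in isolation can compromise one another.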
32. Improving the specificity of fluorescence bronchoscopy for the analysis of neoplastic lesions of the bronchial tree by combination with optical spectroscopy: preliminary communication
- Author
-
Bard, Martin P.L., Amelink, Arjen, Skurichina, Marina, den Bakker, Michael, Burgers, Sjaak A., van Meerbeeck, Jan P., Duin, Robert P.W., Aerts, Joachim G.J.V., Hoogsteden, Henk C., and Sterenborg, Henricus J.C.M.
- Subjects
- *
ENDOSCOPY , *LUNG cancer , *DIAGNOSIS , *AMNIOSCOPY - Abstract
Summary: Detection of malignancies of the bronchial tree in an early stage, such as carcinoma in situ (CIS), augments the cure rate considerably. It has been shown that the sensitivity of autofluorescence bronchoscopy is better than that of white light bronchoscopy for the detection of CIS and dysplastic lesions. Autofluorescence bronchoscopy is, however, characterized by a low specificity with a high rate of false positive findings. In the present paper we propose to combine autofluorescence bronchoscopy with optical spectroscopy to improve the specificity of autofluorescence imaging, while maintaining the high sensitivity. Standard autofluorescence bronchoscopy was used to find suspect lesions in the upper bronchial tree, and these lesions were subsequently characterized spectroscopically using a custom-made fiberoptic probe. Autofluorescence spectra of the lesions as well as reflectance spectra were measured. We show in this preliminary report that the addition of either of these spectroscopic techniques decreases the rate of false positive findings, with the best results obtained when both spectroscopic modalities are combined. [Copyright © Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
33. Selecting feature lines in generalized dissimilarity representations for pattern recognition
- Author
-
Plasencia-Calaña, Yenisel, Orozco-Alzate, Mauricio, García-Reyes, Edel, and Duin, Robert P.W.
- Subjects
- *
PATTERN perception , *GENERALIZATION , *INFORMATION theory , *STATISTICAL correlation , *DATA analysis , *DIMENSIONAL analysis , *SET theory - Abstract
Abstract: Recently, generalized dissimilarity representations have shown their potential for small sample size problems. In generalizations by feature lines, instead of dissimilarities with objects, we have dissimilarities with feature lines. One drawback of such a generalization is the large number of generated lines, which increases computational costs and may provide redundant information. To overcome this, the selection of lines based on the length of the line segments has been considered in previous works, showing good results for correlated data. In this paper, we propose a new supervised criterion for the selection of feature lines. Experimental results show that the proposed criterion obtains competitive or better results than those obtained by previous criteria, especially for data with high intrinsic dimension, spherical data and data with outliers. As our proposal provides better results for small representation sets, it allows one to obtain a good trade-off between classification accuracy and computational efficiency. [Copyright © Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
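The basic building block of the representation above is the dissimilarity between an object and a feature line, i.e. the unbounded line through two prototypes. A minimal sketch (function name ours; the paper's supervised line-selection criterion is not reproduced here):

```python
import numpy as np

def feature_line_dissimilarity(z, a, b):
    """Dissimilarity of object z to the feature line through prototypes
    a and b: the distance to z's orthogonal projection onto the line.
    Unlike a segment, the line extrapolates beyond both prototypes."""
    ab = b - a
    t = (z - a) @ ab / (ab @ ab)  # unclamped: the line is unbounded
    return np.linalg.norm(z - (a + t * ab))
```

Because every pair of prototypes defines a line, the number of dissimilarities grows quadratically with the representation set, which is precisely why line selection, whether by segment length or by a supervised criterion, is needed.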