Search Results (442 results)
2. Mode seeking on graphs for geometric model fitting via preference analysis
- Author
- Liming Zhang, Guobao Xiao, Hanzi Wang, and Yan Yan
- Subjects
- Preference analysis, 02 engineering and technology, Machine learning, computer.software_genre, Residual, 01 natural sciences, Synthetic data, Artificial Intelligence, 0103 physical sciences, 0202 electrical engineering, electronic engineering, information engineering, 010306 general physics, Cluster analysis, Mathematics, Complex data type, business.industry, Random walk, Real image, Signal Processing, 020201 artificial intelligence & image processing, Computer Vision and Pattern Recognition, Artificial intelligence, business, Geometric modeling, Algorithm, computer, Software
- Abstract
We propose a graph-based mode-seeking method to fit multi-structural data. The proposed method combines mode-seeking with preference analysis. The proposed method exploits the global structure of graphs by random walks. Experiments show the proposed method is superior to some other fitting methods. In this paper, we propose a novel graph-based mode-seeking fitting method to fit and segment multiple-structure data. Mode-seeking is a simple and effective data analysis technique for clustering and filtering. However, conventional mode-seeking based fitting methods are very sensitive to the proportion of good/bad hypotheses, while most sampling techniques may generate a large proportion of bad hypotheses. In this paper, we show that the proposed graph-based mode-seeking method has significant advantages for geometric model fitting. We intrinsically combine mode seeking with preference analysis. This enables mode seeking to reduce the influence of bad hypotheses, since bad hypotheses usually have larger residual values than good ones. In addition, the proposed method exploits the global structure of graphs by random walks to alleviate sensitivity to unbalanced data. Experimental results on both synthetic data and real images demonstrate that the proposed method outperforms several other competing fitting methods, especially for complex data.
- Published
- 2016
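To make the preference-analysis idea above concrete: each data point is described by its preferences over sampled model hypotheses, computed from residuals, and points with similar preferences are linked in a graph. A minimal numpy sketch, assuming an exponential residual weighting and Tanimoto similarity; these are illustrative choices, not the paper's exact formulation, and the residual matrix is made up:

```python
import numpy as np

def preference_matrix(residuals, sigma=0.1):
    """Map a (points x hypotheses) residual matrix to soft preferences.

    Small residuals -> preference near 1; large -> near 0. The exponential
    weighting is one common choice, not necessarily the paper's own.
    """
    return np.exp(-(residuals / sigma) ** 2)

def preference_graph(P):
    """Adjacency from Tanimoto (extended Jaccard) similarity of preference rows."""
    inner = P @ P.T
    sq = np.sum(P ** 2, axis=1)
    return inner / (sq[:, None] + sq[None, :] - inner + 1e-12)

# Toy example: residuals of 4 points against 3 line hypotheses.
R = np.array([[0.01, 0.90, 0.80],
              [0.02, 0.85, 0.95],
              [0.70, 0.03, 0.60],
              [0.80, 0.02, 0.75]])
W = preference_graph(preference_matrix(R))
print(np.round(W, 2))  # points 0-1 and 2-3 form two strongly linked pairs
```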
3. Minimum cost subgraph matching using a binary linear program
- Author
- Sébastien Adam, Julien Lerouge, Pierre Héroux, and Maroua Hammami
- Subjects
- Factor-critical graph, Mathematical optimization, Matching (graph theory), Linear programming, Substitution (logic), Subgraph isomorphism problem, Binary number, 02 engineering and technology, 01 natural sciences, [INFO.INFO-TT]Computer Science [cs]/Document and Text Processing, Artificial Intelligence, 0103 physical sciences, Signal Processing, 0202 electrical engineering, electronic engineering, information engineering, Graph (abstract data type), 020201 artificial intelligence & image processing, Induced subgraph isomorphism problem, Computer Vision and Pattern Recognition, 010306 general physics, Algorithm, Software, MathematicsofComputing_DISCRETEMATHEMATICS, Mathematics
- Abstract
Minimum Cost Subgraph Matching (MCSM) is an adaptation of Graph Edit Distance. The paper proposes a Binary Linear Program that solves the MCSM problem. The proposed formulation is very general and can tackle a large range of graphs. MCSM is more efficient and faster than a Substitution-Only Tolerant Subgraph Matching (SOTSM). This paper presents a binary linear program for the Minimum Cost Subgraph Matching (MCSM) problem. MCSM is an extension of the subgraph isomorphism problem where the matching tolerates substitutions of attributes and modifications of the graph structure. The objective function proposed in the formulation can take into account rich attributes (e.g. vectors mixing nominal and numerical values) on both vertices and edges. Experimental results obtained on an application-dependent dataset concerning the spotting of symbols in technical drawings show that the approach outperforms a previous approach that is only substitution-tolerant.
- Published
- 2016
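A hedged sketch of posing a matching problem as a binary linear program: only the vertex-mapping core with substitution costs is shown (the paper's MCSM formulation also prices edge substitutions and structural edits), and the cost matrix is made up for illustration:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# cost[i, j] = cost of mapping pattern vertex i to target vertex j (toy values).
cost = np.array([[1.0, 4.0, 3.0],
                 [4.0, 1.0, 2.0]])
n, m = cost.shape  # binary variables x[i, j], flattened row-major

# Each pattern vertex maps to exactly one target vertex.
A_row = np.zeros((n, n * m))
for i in range(n):
    A_row[i, i * m:(i + 1) * m] = 1
# Each target vertex receives at most one pattern vertex (injective mapping).
A_col = np.zeros((m, n * m))
for j in range(m):
    A_col[j, j::m] = 1

res = milp(c=cost.ravel(),
           constraints=[LinearConstraint(A_row, 1, 1),
                        LinearConstraint(A_col, 0, 1)],
           integrality=np.ones(n * m),
           bounds=Bounds(0, 1))
print(res.x.reshape(n, m).round())  # optimal 0/1 mapping matrix
```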
4. Kernel subspace pursuit for sparse regression
- Author
- Ioannis N. Psaromiligkos and Jad Kabbara
- Subjects
- business.industry, 020206 networking & telecommunications, Pattern recognition, 010103 numerical & computational mathematics, 02 engineering and technology, 01 natural sciences, Kernel principal component analysis, Kernel method, Artificial Intelligence, Kernel embedding of distributions, Polynomial kernel, Variable kernel density estimation, Kernel (statistics), Signal Processing, Radial basis function kernel, 0202 electrical engineering, electronic engineering, information engineering, Computer Vision and Pattern Recognition, Artificial intelligence, 0101 mathematics, Tree kernel, business, Algorithm, Software, Mathematics
- Abstract
This paper introduces a kernel version of the Subspace Pursuit algorithm. The proposed method, KSP, is a new iterative method for sparse regression. KSP outperforms and is less computationally intensive than related kernel methods. Recently, results from sparse approximation theory have been considered as a means to improve the generalization performance of kernel-based machine learning algorithms. In this paper, we present Kernel Subspace Pursuit (KSP), a new method for sparse non-linear regression. KSP is a low-complexity method that iteratively approximates target functions in the least-squares sense as a linear combination of a limited number of elements selected from a kernel-based dictionary. Unlike other kernel methods, by virtue of KSP's algorithmic design, the number of KSP iterations needed to reach the final solution depends neither on the number of basis functions used nor on the number of elements in the dictionary. We experimentally show that, in many scenarios involving learning from synthetic and real data, KSP is computationally less complex and outperforms other kernel methods that solve the same problem, namely Kernel Matching Pursuit and Kernel Basis Pursuit.
- Published
- 2016
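KSP's exact selection rule is not reproduced here; the sketch below is a generic greedy kernel-dictionary regression in the matching-pursuit family (the baseline the abstract compares against), just to make the "select a few dictionary elements, refit by least squares" idea concrete. All data and parameters are made up:

```python
import numpy as np

def rbf_dictionary(X, centers, gamma=1.0):
    # Column k is the kernel centered at centers[k], evaluated on all of X.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def greedy_kernel_fit(K, y, n_atoms=8):
    """Pick dictionary columns greedily by residual correlation, refit by LS."""
    chosen, r = [], y.copy()
    for _ in range(n_atoms):
        scores = np.abs(K.T @ r)
        scores[chosen] = -np.inf          # do not reselect an atom
        chosen.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(K[:, chosen], y, rcond=None)
        r = y - K[:, chosen] @ coef       # residual after refitting
    return chosen, coef

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(2 * X[:, 0]) + 0.05 * rng.standard_normal(200)
K = rbf_dictionary(X, X)                  # dictionary: one kernel per sample
atoms, coef = greedy_kernel_fit(K, y)
print(atoms, np.round(coef, 2))
```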
5. An improved global lower bound for graph edit similarity search
- Author
- Karam Gouda and Mona M. Arafa
- Subjects
- Discrete mathematics, Comparability graph, Strength of a graph, Upper and lower bounds, Artificial Intelligence, Signal Processing, Computer Vision and Pattern Recognition, Graph property, Graph operations, Null graph, Lattice graph, Algorithm, Software, Complement graph, Mathematics
- Abstract
New global lower bound on the edit distance between graphs. An efficient preliminary filter for similarity search in graph databases. Almost-for-free improvement on the previous global lower bounds. The new bound is at least as tight as the previous global ones. Experiments show the effectiveness of the new bound. Graph similarity search retrieves data graphs that are similar to a given query graph. It has become an essential operation in many application areas. In this paper, we investigate the problem of graph similarity search with edit distance constraints. Existing solutions adopt the filter-and-verify strategy to speed up the search, where lower and upper bounds of graph edit distance are employed as pruning and validation rules. The main problem with existing lower bounds is that they perform differently on different data graphs. An interesting group of lower bounds is the global counting ones. These bounds come almost for free and can be injected into any filtering methodology to work as preliminary filters. In this paper, we present an improvement upon these bounds without adding any computational overhead. We show that the new bound is tighter than the previous global ones except for a few cases where they evaluate identically. Via experiments, we show how the new bound, when incorporated into previous lower-bounding methods, increases performance significantly.
- Published
- 2015
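For orientation, one classical global counting bound of the kind the abstract improves, assuming unit edit costs: vertex and edge label multisets are compared and graph structure is ignored, which is what makes it an almost-free preliminary filter. The toy labels are made up:

```python
from collections import Counter

def counting_lower_bound(v1, e1, v2, e2):
    """Label-multiset lower bound on unit-cost graph edit distance.

    v1, v2: multisets of vertex labels; e1, e2: multisets of edge labels.
    Vertices needing relabel/insert/delete: max(|V1|,|V2|) minus the multiset
    intersection size, and similarly for edges; the two counts involve
    disjoint edit operations, so they can be summed.
    """
    vi = sum((Counter(v1) & Counter(v2)).values())
    ei = sum((Counter(e1) & Counter(e2)).values())
    return (max(len(v1), len(v2)) - vi) + (max(len(e1), len(e2)) - ei)

print(counting_lower_bound(list("AAB"), ["x", "y"], list("ABB"), ["x", "x"]))  # 2
```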
6. A new extracting algorithm of k nearest neighbors searching for point clouds
- Author
- Shengfeng Qin, Zisheng Li, Rong Li, and Guofu Ding
- Subjects
- business.industry, Computation, Point cloud, Pattern recognition, k-nearest neighbors algorithm, Set (abstract data type), Euclidean distance, Artificial Intelligence, Search algorithm, Nearest-neighbor chain algorithm, Signal Processing, Point (geometry), Computer Vision and Pattern Recognition, Artificial intelligence, business, Algorithm, Software, Mathematics
- Abstract
We propose an extraction algorithm for k nearest neighbors searching. Vector inner products replace distance calculations for distance comparison. The extraction algorithm can be integrated with any other algorithm as a plug-in. Two prominent algorithms and seven models are employed in the experiments. The proposed algorithm is open-sourced, using dynamic memory allocation. The k nearest neighbors (kNN) searching algorithm is widely used to find the k nearest neighbors of each point in a point cloud model for noise removal and surface curvature computation. When the number of points and their density in a point cloud model increase significantly, the efficiency of a kNN searching algorithm becomes critical to various applications, so a better kNN approach is needed. To improve the efficiency of kNN searching, this paper develops a new strategy and a corresponding algorithm that reduce the number of target points in a given data set by extracting nearest neighbors before the search begins. The nearest neighbors of a reverse nearest neighborhood are used to extract nearest points of a query point, avoiding repetitive Euclidean distance calculation during extraction to save time and memory. For any point in the model, its initial nearest neighbors can be extracted from its reverse neighborhood using an inner product of two related vectors rather than direct Euclidean distance calculations and comparisons. The initial neighbors can be the full or a partial set of all nearest neighbors. If it is a partial set, the rest can be obtained by other fast searching algorithms, which can be integrated with the proposed approach. Experimental results show that integrating the extraction algorithm proposed in this paper with other leading algorithms yields better performance than those algorithms alone.
- Published
- 2014
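The standard identity behind replacing distance calculations with inner products when ranking the neighbours of a fixed query (the paper's extraction scheme is more elaborate, but rests on tricks of this kind); the data here are random:

```python
import numpy as np

# Ranking neighbours of q by squared Euclidean distance,
#   ||q - p||^2 = ||q||^2 - 2 q.p + ||p||^2,
# needs no per-pair distance: ||q||^2 is shared across candidates,
# so comparing ||p||^2 - 2 q.p gives the same ordering.
rng = np.random.default_rng(1)
pts = rng.standard_normal((10000, 3))   # point cloud
q = rng.standard_normal(3)              # query point

scores = (pts ** 2).sum(axis=1) - 2.0 * pts @ q   # inner products only
knn = np.argsort(scores)[:8]                      # indices of the 8 nearest points

ref = np.argsort(((pts - q) ** 2).sum(axis=1))[:8]  # explicit distances
assert set(knn) == set(ref)
```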
7. Speeding up correlation search for binary data
- Author
- W. Nick Street, Yanchi Liu, and Lian Duan
- Subjects
- Discrete mathematics, Property (programming), Measure (mathematics), Upper and lower bounds, Pearson product-moment correlation coefficient, Correlation, symbols.namesake, Monotone polygon, Artificial Intelligence, Search algorithm, Signal Processing, Binary data, symbols, Computer Vision and Pattern Recognition, Algorithm, Software, Mathematics
- Abstract
Searching correlated pairs in a collection of items is essential for many problems in commercial, medical, and scientific domains. Recently, a lot of progress has been made to speed up the search for pairs that have a high Pearson correlation (φ-coefficient). However, the φ-coefficient is not the only or the best correlation measure. In this paper, we aim at an alternative task: finding correlated pairs under any "good" correlation measure that satisfies the three widely accepted correlation properties in Section 2.1. We identify a 1-dimensional monotone property of the upper bound of any "good" correlation measure, and different 2-dimensional monotone properties for different types of correlation measures. We can either use the 2-dimensional search algorithm to retrieve correlated pairs above a certain threshold, or our new token-ring algorithm to find top-k correlated pairs, pruning many pairs without computing their correlations. The experimental results show that our robust algorithm can efficiently search correlated pairs under different situations and is an order of magnitude faster than the brute-force method.
- Published
- 2013
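For reference, the φ-coefficient of two binary vectors (Pearson correlation computed from the 2x2 contingency counts); the toy vectors are made up:

```python
import numpy as np

def phi_coefficient(a, b):
    """Pearson correlation (phi-coefficient) of two binary vectors."""
    a = np.asarray(a, bool); b = np.asarray(b, bool)
    n11 = np.sum(a & b); n10 = np.sum(a & ~b)
    n01 = np.sum(~a & b); n00 = np.sum(~a & ~b)
    num = n11 * n00 - n10 * n01
    den = np.sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return num / den

a = [1, 1, 0, 1, 0, 0, 1, 0]
b = [1, 1, 0, 0, 0, 0, 1, 1]
print(round(float(phi_coefficient(a, b)), 3))  # 0.5
```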
8. Algorithms for maximum-likelihood bandwidth selection in kernel density estimators
- Author
- Antonio Artés-Rodríguez and J.M. Leiva-Murillo
- Subjects
- Mathematical optimization, Kernel density estimation, Kernel Bandwidth, Kernel principal component analysis, Multivariate kernel density estimation, Kernel method, Artificial Intelligence, Kernel embedding of distributions, Variable kernel density estimation, Signal Processing, Radial basis function kernel, Computer Vision and Pattern Recognition, Algorithm, Software, Mathematics
- Abstract
In machine learning and statistics, kernel density estimators are rarely used on multivariate data due to the difficulty of finding an appropriate kernel bandwidth to overcome overfitting. However, recent advances in information-theoretic learning have revived interest in these models. With this motivation, in this paper we revisit the classical statistical problem of data-driven bandwidth selection by cross-validation maximum likelihood for Gaussian kernels. We find a solution to the optimization problem under both the spherical case and the general case where a full covariance matrix is considered for the kernel. The fixed-point algorithms proposed in this paper obtain the maximum likelihood bandwidth in a few iterations, without performing an exhaustive bandwidth search, which is infeasible in the multivariate case. The convergence of the proposed methods is proved. A set of classification experiments is performed to demonstrate the usefulness of the obtained models in pattern recognition.
- Published
- 2012
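A sketch of the leave-one-out maximum-likelihood objective being optimized, evaluated here on a bandwidth grid for a spherical Gaussian kernel; the paper's contribution is a fixed-point iteration that avoids such a grid, which this sketch does not reproduce. Data and grid are made up:

```python
import numpy as np

def loo_log_likelihood(X, h):
    """Leave-one-out log-likelihood of a spherical Gaussian KDE with bandwidth h."""
    n, d = X.shape
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * h * h)) / ((2 * np.pi) ** (d / 2) * h ** d)
    np.fill_diagonal(K, 0.0)            # leave each point out of its own density
    dens = K.sum(axis=1) / (n - 1)
    return np.log(dens).sum()

rng = np.random.default_rng(2)
X = rng.standard_normal((300, 2))
grid = np.logspace(-1, 0.5, 25)
best = grid[np.argmax([loo_log_likelihood(X, h) for h in grid])]
print(f"ML bandwidth ~ {best:.3f}")
```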
9. Testing for the absence of correlation between two spatial or temporal sequences
- Author
- Ronny Vallejos
- Subjects
- Mahalanobis distance, Signal processing, Wilcoxon signed-rank test, Stochastic process, Artificial Intelligence, Signal Processing, Statistics, Test statistic, Computer Vision and Pattern Recognition, Power function, Null hypothesis, Algorithm, Software, Statistical hypothesis testing, Mathematics
- Abstract
The purpose of this paper is to elucidate the problem of testing for the absence of correlation between the trajectories of two stochastic processes. It is assumed that the process is homogeneous on a pre-specified partition of the index set. The hypothesis testing methodology developed in this article consists of estimating codispersion coefficients on each subset of the partition and testing for the simultaneous nullity of the coefficients. To this aim, the Mahalanobis distance between the observed and theoretical codispersion vectors is used to define a test statistic, which converges to a chi-square distribution under the null hypothesis. Three examples in the context of signal processing and spatial models are discussed to point out the advantages and limitations of our proposal. Simulation studies are carried out to explore both the distribution of the test statistic under the null hypothesis and its power function. The method introduced in this paper has potential applications in time series where it is of interest to measure the comovement of two temporal sequences. The proposed test is illustrated with a real data set: two signals are compared in terms of comovement to validate two confocal sensors in the context of biotechnology. The analysis carried out using this technique is more appropriate than previous validation tests, in which mean values were compared via the t-test and the Wilcoxon signed-rank test while ignoring the correlation within and across the series.
- Published
- 2012
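In symbols (notation assumed here, not taken from the paper): writing $\hat{\boldsymbol{\rho}} = (\hat{\rho}_1, \ldots, \hat{\rho}_k)^{\top}$ for the codispersion coefficients estimated on the $k$ subsets of the partition and $\hat{\Sigma}$ for their estimated covariance, the test of simultaneous nullity uses a squared Mahalanobis norm:

```latex
H_0:\ \rho_1 = \cdots = \rho_k = 0, \qquad
T \;=\; \hat{\boldsymbol{\rho}}^{\top}\, \hat{\Sigma}^{-1}\, \hat{\boldsymbol{\rho}}
\;\xrightarrow{\; d \;}\; \chi^2_k \quad \text{under } H_0 .
```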
10. ISE-bounded polygonal approximation of digital curves
- Author
- Alexander Kolesnikov
- Subjects
- Discrete mathematics, Mean squared error, Dynamic programming, Artificial Intelligence, Polygonal chain, Bounded function, Signal Processing, Shortest path problem, Vector map, Segmentation, Computer Vision and Pattern Recognition, Algorithm, Software, Shape analysis (digital geometry), Mathematics
- Abstract
In this paper we consider the problem of polygonal approximation of digital curves with a minimum number of approximation segments for a given error bound under the L2-norm. The Integral Square Error (ISE) bound is defined by the number of vertices in the curve and by a constraint on the Root-Mean-Squared Error (RMSE) of the polygonal approximation. This paper proposes a new, fast and efficient algorithm for solving the problem. The proposed algorithm is based on searching for the shortest path in a feasibility graph constructed on the vertices of the input curve. It provides a solution with 97% optimality on average in practically real time. The algorithm can also be used in combination with the Reduced-Search Dynamic Programming algorithm as a preliminary step for finding a near-optimal result in an acceptable time. Experiments conducted with large vector data sets demonstrate both the high efficiency and the fast performance of the proposed algorithms. These algorithms can be used in practical applications for image vectorization and segmentation, the analysis of shapes and time series, the simplification of vector maps, and the compression of vector data.
- Published
- 2012
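A sketch of the quantity being bounded: the ISE of one approximation segment as the sum of squared distances from the covered points to the chord's supporting line, with the RMSE constraint derived from it. In a feasibility graph, an edge (i, j) would be admitted when this error is within the bound. The toy curve is made up:

```python
import numpy as np

def segment_ise(points, a, b):
    """Integral Square Error of approximating `points` by segment a-b:
    sum of squared perpendicular distances to the chord's supporting line."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = b - a
    t = (points - a) @ d / (d @ d)      # projection parameter per point
    proj = a + np.outer(t, d)
    return float(((points - proj) ** 2).sum())

curve = np.array([[0, 0], [1, 0.3], [2, -0.2], [3, 0.1], [4, 0]], float)
ise = segment_ise(curve, curve[0], curve[-1])
rmse = np.sqrt(ise / len(curve))        # the RMSE constraint from the abstract
print(round(ise, 3), round(rmse, 3))
```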
11. Consistency of functional learning methods based on derivatives
- Author
- Nathalie Villa-Vialaneix and Fabrice Rossi
- Subjects
- SVM, Mathematics - Statistics Theory, Statistics Theory (math.ST), 02 engineering and technology, Machine learning, computer.software_genre, 01 natural sciences, 010104 statistics & probability, Smoothing spline, symbols.namesake, [MATH.MATH-ST]Mathematics [math]/Statistics [math.ST], Artificial Intelligence, FOS: Mathematics, 0202 electrical engineering, electronic engineering, information engineering, 0101 mathematics, Mathematics, Smoothing splines, business.industry, Nonparametric statistics, Hilbert space, Functional data analysis, [STAT.TH]Statistics [stat]/Statistics Theory [stat.TH], Statistical learning, Support vector machine, Spline (mathematics), Kernel method, RKHS, Functional Data Analysis, Signal Processing, symbols, 020201 artificial intelligence & image processing, Consistency, Computer Vision and Pattern Recognition, Artificial intelligence, business, computer, Algorithm, Derivatives, Software, Reproducing kernel Hilbert space
- Abstract
In some real-world applications, such as spectrometry, functional models achieve better predictive performance if they work on the derivatives of order m of their inputs rather than on the original functions. As a consequence, the use of derivatives is a common practice in Functional Data Analysis, despite a lack of theoretical guarantees on the asymptotically achievable performance of a derivative-based model. In this paper, we show that a smoothing spline approach can be used to preprocess multivariate observations obtained by sampling functions on a discrete and finite sampling grid in a way that leads to a consistent scheme on the original infinite-dimensional functional problem. This work extends (Mas and Pumo, 2009) to nonparametric approaches and incomplete knowledge. To be more precise, the paper tackles two difficulties in a nonparametric framework: the information loss due to the use of derivatives instead of the original functions, and the information loss due to the fact that the functions are observed through a discrete sampling and are thus imperfectly known; the use of a smoothing-spline-based approach solves both problems. Finally, the proposed approach is tested on two real-world datasets and is experimentally shown to be a good solution in the case of noisy functional predictors.
- Published
- 2011
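A sketch of the preprocessing step the abstract describes, using scipy's smoothing splines to turn a discretely sampled noisy curve into derivative features for a downstream kernel method; the curve, noise level, and smoothing level are assumed values:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

t = np.linspace(0.0, 1.0, 100)                     # discrete sampling grid
rng = np.random.default_rng(3)
curve = np.sin(4 * np.pi * t) + 0.05 * rng.standard_normal(t.size)

# Smooth the sampled function, then differentiate the spline.
spline = UnivariateSpline(t, curve, s=t.size * 0.05 ** 2)  # s: assumed residual target
d1 = spline.derivative(n=1)(t)                     # first-derivative feature vector
print(d1[:5].round(2))   # would be fed to an SVM / RKHS method downstream
```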
12. Characteristic analysis of Otsu threshold and its applications
- Author
- Xiangyang Xu, Enmin Song, Lianghai Jin, and Shengzhou Xu
- Subjects
- Pixel, Image processing, Variance (accounting), Image segmentation, Otsu's method, symbols.namesake, Artificial Intelligence, Computer Science::Computer Vision and Pattern Recognition, Signal Processing, symbols, Range (statistics), Computer Vision and Pattern Recognition, Algorithm, Software, Mathematics
- Abstract
This paper proves that the Otsu threshold is equal to the average of the mean levels of the two classes partitioned by this threshold. Therefore, when the within-class variances of the two classes differ, the threshold is biased toward the class with the larger variance. As a result, some pixels belonging to this class will be misclassified into the other class with the smaller variance. To address this problem, and based on the analysis of the Otsu threshold, this paper proposes an improved Otsu algorithm that constrains the search range of gray levels. Experimental results demonstrate the superiority of the new algorithm compared with Otsu's method.
- Published
- 2011
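A compact implementation of classical Otsu thresholding that also illustrates the paper's observation: the selected threshold sits at the average of the two class means, and with unequal variances it drifts toward the wider class. The synthetic two-class histogram is made up:

```python
import numpy as np

def otsu_threshold(hist):
    """Classical Otsu: maximize between-class variance over thresholds t."""
    p = hist / hist.sum()
    levels = np.arange(hist.size)
    w0 = np.cumsum(p)                      # class 0 = levels <= t
    w1 = 1.0 - w0
    m = np.cumsum(p * levels)              # cumulative first moment
    mu0 = m / np.where(w0 > 0, w0, 1)
    mu1 = (m[-1] - m) / np.where(w1 > 0, w1, 1)
    sigma_b = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
    t = int(np.argmax(sigma_b))
    return t, mu0[t], mu1[t]

rng = np.random.default_rng(4)
# Two classes with very different variances, as discussed in the abstract:
img = np.concatenate([rng.normal(70, 8, 40000), rng.normal(170, 35, 40000)])
hist, _ = np.histogram(np.clip(img, 0, 255).astype(int), bins=256, range=(0, 256))
t, mu0, mu1 = otsu_threshold(hist)
print(t, (mu0 + mu1) / 2)   # t is (approximately) the mean of the two class means
```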
13. Comments on: A locally constrained radial basis function for registration and warping of images
- Author
- Antonio Tristán-Vega and Verónica García-Pérez
- Subjects
- Mathematical optimization, Radial basis function network, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Image registration, Function (mathematics), Positive definiteness, Artificial Intelligence, Signal Processing, Radial basis function, Computer Vision and Pattern Recognition, Image warping, Algorithm, Software, Bochner's theorem, Mathematics, Interpolation
- Abstract
In a recent paper, Siddiqui et al. introduced a kernel function to be used as a radial basis function (RBF) in image registration tasks. This function is mainly designed so that the resulting deformation is fairly distributed inside its support. The important property of positive definiteness is checked erroneously in that paper, so the conclusions inferred are wrong. In this communication, we discuss this point and some other methodological errors in the formulation. In addition, we provide some insights into the importance of positive definiteness, concluding that this property may not be critical, or may even be worthless, in certain interpolation problems.
- Published
- 2011
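An empirical (necessary, not sufficient) way to probe the positive definiteness at issue here: a valid RBF must produce a positive semi-definite kernel matrix for every point configuration, so a single configuration with a negative eigenvalue is a disproof. The Gaussian passes; the top-hat kernel, known not to be positive definite, typically fails. Point sets are random:

```python
import numpy as np

def min_kernel_eigenvalue(kernel, points):
    """Smallest eigenvalue of the kernel matrix on a 1-D point set.

    A negative value proves the radial function is not positive definite;
    non-negative values on one set prove nothing by themselves.
    """
    r = np.abs(points[:, None] - points[None, :])
    return float(np.linalg.eigvalsh(kernel(r)).min())

rng = np.random.default_rng(5)
x = rng.uniform(0, 4, 40)
print(min_kernel_eigenvalue(lambda r: np.exp(-r ** 2), x))       # Gaussian: ~>= 0
print(min_kernel_eigenvalue(lambda r: (r < 1).astype(float), x)) # top-hat: often < 0
```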
14. Efficient approximate Regularized Least Squares by Toeplitz matrix
- Author
- Rodolfo Zunino, Paolo Gastaldo, and Sergio Decherchi
- Subjects
- Machine Learning, Regularized Least-Squares Algorithm, Computational complexity theory, Iterative method, Direct method, System of linear equations, Square matrix, Toeplitz matrix, Support vector machine, Artificial Intelligence, Signal Processing, Computer Vision and Pattern Recognition, Algorithm, Software, Linear equation, Mathematics
- Abstract
Machine learning based on the Regularized Least Squares (RLS) model requires one to solve a system of linear equations. Direct-solution methods exhibit predictable complexity and storage, but often prove impractical for large-scale problems; iterative methods attain approximate solutions at lower complexity, but depend heavily on learning parameters. This paper shows that applying the properties of Toeplitz matrices to RLS yields two benefits: first, both the computational cost and the memory space required to train an RLS-based machine drop dramatically; second, timing and storage requirements can be defined analytically. The paper proves this result formally for the one-dimensional case and gives an analytical criterion for an effective approximation in multidimensional domains. The validity of the approach is demonstrated on several real-world problems involving huge data sets with high-dimensional data.
- Published
- 2011
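For the one-dimensional case the abstract mentions: a stationary kernel evaluated on a uniformly spaced grid gives a Toeplitz Gram matrix, so the RLS system (K + λI)α = y admits a Levinson-type solve described by the first column alone. A sketch with scipy (the paper's own algorithm and analysis may differ; kernel width and λ are assumed values):

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

n, lam = 400, 1e-1
x = np.linspace(0.0, 1.0, n)                       # uniform 1-D grid
y = np.sin(6 * x) + 0.1 * np.random.default_rng(6).standard_normal(n)

# Stationary Gaussian kernel on a uniform grid: Gram matrix is Toeplitz.
col = np.exp(-(x - x[0]) ** 2 / 0.02)
col[0] += lam                                      # (K + lam*I) stays Toeplitz

alpha = solve_toeplitz(col, y)                     # O(n^2) Levinson-type solve
dense = np.linalg.solve(toeplitz(col), y)          # O(n^3) dense reference
print(np.allclose(alpha, dense))
```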
15. Efficient computation of new extinction values from extended component tree
- Author
- Roberto de Alencar Lotufo and Alexandre Gonçalves Silva
- Subjects
- Surface (mathematics), Connected component, Extinction, Computation, Diagonal, Tree (data structure), Artificial Intelligence, Minimum bounding box, Component (UML), Signal Processing, Computer Vision and Pattern Recognition, Algorithm, Software, Mathematics
- Abstract
A gray-scale image can be interpreted as a topographical surface and represented by a component tree, based on the inclusion relation of connected components obtained by threshold decomposition. Relations between the plateaus, valleys, and mountains of this relief are useful in computer vision systems. An important definition for characterizing the topographical surface is the dynamics, introduced by Grimaud (1992), associated with each regional minimum. This concept was extended by Vachier and Meyer (1995) through the definition of extinction values associated with each extremum of the image. This paper proposes three new extinction values: two based on the topology of the component tree, (i) the number of descendants and (ii) the sub-tree height; and one geometric, (iii) the level-component bounding box (subdivided into extinctions of height, width, or diagonal). This paper describes an efficient computation of these extinction values based on the incremental determination of attributes during component tree construction in quasi-linear time, compares the computation time of the method, and illustrates the usefulness of these new extinction values on real examples.
- Published
- 2011
16. Affine iterative closest point algorithm for point set registration
- Author
- Shaoyi Du, Shihui Ying, Jianyi Liu, and Nanning Zheng
- Subjects
- Harris affine region detector, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Point set registration, Affine shape adaptation, Affine coordinate system, Affine combination, Artificial Intelligence, Affine hull, Signal Processing, Affine group, Computer Vision and Pattern Recognition, Affine transformation, Algorithm, Software, Mathematics
- Abstract
The traditional iterative closest point (ICP) algorithm is accurate and fast for rigid point set registration, but it is unable to handle the affine case. This paper introduces a novel generalized ICP algorithm based on Lie groups for affine registration of m-D point sets. First, with the singular value decomposition technique applied, this paper decomposes the affine transformation into three special matrices which are then constrained. Then, these matrices are expressed by exponential mappings of Lie groups and their Taylor approximations at each iterative step of the affine ICP algorithm. In this way, the affine registration problem is ultimately simplified to a quadratic programming problem. By solving this quadratic problem, the new algorithm converges monotonically to a local minimum from any given initial parameters. Hence, to reach the desired minimum, good initial parameters and constraints are required, and these are successfully estimated by independent component analysis. The new algorithm is independent of shape representation and feature extraction, and is thereby a general framework for affine registration of m-D point sets. Experimental results demonstrate its robustness and efficiency compared with the traditional ICP algorithm and state-of-the-art methods.
- Published
- 2010
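A toy affine ICP iteration, alternating closest-point correspondences with an unconstrained least-squares affine update; the paper's Lie-group parametrization, SVD-based constraints, and ICA initialization are all omitted, and the example transform is made up:

```python
import numpy as np
from scipy.spatial import cKDTree

def affine_icp(src, dst, iters=30):
    """Toy affine ICP: nearest-neighbour correspondences + LS affine refit."""
    A, t = np.eye(src.shape[1]), np.zeros(src.shape[1])
    tree = cKDTree(dst)
    for _ in range(iters):
        cur = src @ A.T + t
        _, idx = tree.query(cur)                     # closest-point matches
        X = np.hstack([src, np.ones((len(src), 1))])
        sol, *_ = np.linalg.lstsq(X, dst[idx], rcond=None)
        A, t = sol[:-1].T, sol[-1]                   # affine least-squares update
    return A, t

rng = np.random.default_rng(7)
P = rng.standard_normal((300, 2))
A_true = np.array([[1.2, 0.3], [-0.1, 0.8]])
Q = P @ A_true.T + np.array([0.5, -0.2])             # easy case for convergence
A_est, t_est = affine_icp(P, Q)
print(np.round(A_est, 3), np.round(t_est, 3))
```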
17. An incremental learning algorithm for Lagrangian support vector machines
- Author
- Guoping He, Weizhen Hou, Hua Duan, Qingtian Zeng, and Xiaojian Shao
- Subjects
- Iterative method, Population-based incremental learning, Support vector machine, Matrix (mathematics), Rate of convergence, Artificial Intelligence, Signal Processing, Convex optimization, Computer Vision and Pattern Recognition, Minification, Algorithm, Software, Mathematics, Curse of dimensionality
- Abstract
Incremental learning has attracted more and more attention recently, both in theory and in application. In this paper, incremental learning algorithms for the Lagrangian support vector machine (LSVM) are proposed. LSVM is an improvement of the standard linear SVM for classification, which leads to the minimization of an unconstrained differentiable convex program. The solution of this program is obtained by an iteration scheme with simple linear convergence. The matrix inversion in the solution algorithm is reduced to the order of the original input space's dimensionality plus one at the beginning of the algorithm. The algorithm uses the Sherman-Morrison-Woodbury identity to reduce computation time. The incremental learning algorithms for LSVM presented in this paper cover two cases, namely online and batch incremental learning. Because the inversion of the matrix after an increment is solved from previously computed information, it is unnecessary to repeat the computing process. Experimental results show that the algorithms are superior to others.
- Published
- 2009
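The Sherman-Morrison-Woodbury step the abstract relies on, in an LSVM-style setting where only a small (d+1)x(d+1) system is ever formed instead of an n x n one; a numerical check of the identity with made-up data:

```python
import numpy as np

rng = np.random.default_rng(8)
n, k = 2000, 11                       # n samples, k = input dimension + 1
H = rng.standard_normal((n, k))
nu = 10.0

# SMW: (I/nu + H H^T)^{-1} = nu * (I - H (I/nu + H^T H)^{-1} H^T),
# so only a k x k matrix is inverted.
small = np.linalg.inv(np.eye(k) / nu + H.T @ H)      # k x k
y = rng.standard_normal(n)
fast = nu * (y - H @ (small @ (H.T @ y)))            # O(n k^2)

slow = np.linalg.solve(np.eye(n) / nu + H @ H.T, y)  # direct n x n solve
print(np.allclose(fast, slow))
```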
18. A note on two-dimensional linear discriminant analysis
- Author
- Youfu Li, Pengfei Shi, and Zhizheng Liang
- Subjects
- Feature vector, Feature extraction, Linear discriminant analysis, Upper and lower bounds, Matrix (mathematics), Discriminant, Artificial Intelligence, Optimal discriminant analysis, Signal Processing, Computer Vision and Pattern Recognition, Algorithm, Software, Curse of dimensionality, Mathematics
- Abstract
2DLDA and its variants have recently attracted much attention from researchers due to their advantages over LDA regarding the singularity problem and computational cost. In this paper, we further analyze the 2DLDA method and derive the upper bound of its criterion. Based on this upper bound, we show that the discriminant power of two-dimensional discriminant analysis is not stronger than that of LDA under the assumption that the same dimensionality is considered. In the experiments, we confirm the validity of our claim and show that matrix-based methods are not always better than vector-based methods in the small sample size problem; we also compare several distance measures when feature matrices and feature vectors are applied. The MATLAB code used in this paper is available at http://www.mathworks.com/matlabcentral/fileexchange/loadCategory.do?objectType=category&objectId=127&objectName=Application.
- Published
- 2008
19. An efficient k′-means clustering algorithm
- Author
- Krista Rizman Žalik
- Subjects
- Correlation clustering, Determining the number of clusters in a data set, Data stream clustering, Artificial Intelligence, CURE data clustering algorithm, Ramer–Douglas–Peucker algorithm, Signal Processing, Canopy clustering algorithm, Computer Vision and Pattern Recognition, Cluster analysis, Algorithm, Software, k-medians clustering, Mathematics
- Abstract
This paper introduces a k′-means algorithm that performs correct clustering without pre-assigning the exact number of clusters. This is achieved by minimizing a suggested cost function, which extends the mean-square-error cost function of k-means. The algorithm consists of two separate steps. The first is a pre-processing procedure that performs initial clustering and assigns at least one seed point to each cluster. During the second step, the seed points are adjusted to minimize the cost function. The algorithm automatically penalizes any possible winning chances of all rival seed points in subsequent iterations. When the cost function reaches a global minimum, the correct number of clusters is determined and the remaining seed points are located near the centres of the actual clusters. The simulated experiments described in this paper confirm the good performance of the proposed algorithm.
- Published
- 2008
20. Factoring Gaussian precision matrices for linear dynamic models
- Author
- Simon King and Joe Frankel
- Subjects
- Covariance matrix, Diagonal, MathematicsofComputing_NUMERICALANALYSIS, Covariance, Matrix (mathematics), Estimation of covariance matrices, symbols.namesake, speech technology, Artificial Intelligence, Gaussian noise, Signal Processing, Diagonal matrix, Calculus, symbols, Computer Vision and Pattern Recognition, Gaussian process, Algorithm, Software, Mathematics
- Abstract
The linear dynamic model (LDM), also known as the Kalman filter model, has been the subject of research in the engineering, control, and more recently, machine learning and speech technology communities. The Gaussian noise processes are usually assumed to have diagonal, or occasionally full, covariance matrices. A number of recent papers have considered modelling the precision rather than covariance matrix of a Gaussian distribution, and this work applies such ideas to the LDM. A Gaussian precision matrix P can be factored into the form P = UᵀSU, where U is a transform and S a diagonal matrix. By varying the form of U, the covariance can be specified as being diagonal or full, or used to model a given set of spatial dependencies. Furthermore, the transform and scaling components can be shared between models, allowing richer distributions with only marginally more parameters than required to specify diagonal covariances. The method described in this paper allows the construction of models with an appropriate number of parameters for the amount of available training data. We provide illustrative experimental results on synthetic and real speech data in which models with factored precision matrices and automatically-selected numbers of parameters are as good as or better than models with diagonal covariances on small data sets and as good as models with full covariance matrices on larger data sets.
- Published
- 2007
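For a symmetric positive definite precision, a factorization P = UᵀSU with unit-triangular U and diagonal S is an LDLᵀ decomposition; a minimal numpy/scipy illustration with a synthetic precision matrix (the paper's structured and shared transforms are not modelled here):

```python
import numpy as np
from scipy.linalg import ldl

rng = np.random.default_rng(9)
M = rng.standard_normal((4, 4))
P = M @ M.T + 4 * np.eye(4)     # synthetic precision (inverse covariance) matrix

L, S, perm = ldl(P)             # P = L S L^T; for SPD input, S is diagonal
U = L.T                         # so P = U^T S U in the factored-precision notation
print(np.allclose(P, U.T @ S @ U))          # True
print(np.allclose(S, np.diag(np.diag(S))))  # no 2x2 pivot blocks for SPD P
```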
21. Optimization based grayscale image colorization
- Author
- Dongdong Nie, Lizhuang Ma, Qinyong Ma, and Shuangjiu Xiao
- Subjects
- Similarity (geometry), Pixel, business.industry, Image quality, Computation, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Function (mathematics), Grayscale, Weighting, Sampling (signal processing), Artificial Intelligence, Signal Processing, Computer vision, Computer Vision and Pattern Recognition, Artificial intelligence, business, Algorithm, Software, Mathematics
- Abstract
An optimization-based interactive grayscale image colorization method is presented in this paper. It is an interactive colorization method: the only thing the user needs to do is provide some color hints by scribbles or seed pixels. The main contribution of this paper is that the colorization method greatly reduces computation time, with equally good image quality, through quadtree-decomposition-based non-uniform sampling. Moreover, by introducing a new simple weighting function to represent intensity similarity in the cost function, annoying color diffusion among different regions is alleviated. Experiments show that this method gives the same good quality of colorized images as the method of Levin et al. at a fraction of the computational cost.
- Published
- 2007
22. Structure and motion of nonrigid object under perspective projection
- Author
- Hung-Tat Tsui, Zhanyi Hu, and Guanghui Wang
- Subjects
- Mathematical optimization, Perspective (graphical), ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Iterative reconstruction, Real image, Nonlinear programming, Artificial Intelligence, Motion estimation, Signal Processing, Singular value decomposition, Structure from motion, Computer Vision and Pattern Recognition, Affine transformation, Algorithm, Software, Mathematics
- Abstract
This paper focuses on the problem of recovering the structure and motion of a nonrigid object from an image sequence under perspective projection. Many previous methods for this problem extend SVD factorization with a rank constraint on the tracking matrix, where the 3D shape of the nonrigid object is expressed as a weighted combination of a set of shape bases. All these solutions assume an affine camera model. This assumption becomes invalid and causes large reconstruction errors when the object is close to the camera. In this paper, we propose two algorithms, namely linear recursive estimation and nonlinear optimization, to extend these methods to the general perspective camera model. Both algorithms build on the shape and motion recovered under weak perspective projection. The former updates the solution from weak perspective to perspective projection by recursively refining the scalars corresponding to the projective depths. The latter is based on nonlinear optimization, minimizing the perspective reprojection residuals. Extensive experiments on simulated data and real image sequences validate the effectiveness of the new algorithms, with noticeable improvements over previous solutions.
- Published
- 2007
23. Adaptive Hausdorff distances and dynamic clustering of symbolic interval data
- Author
- Renata M. C. R. de Souza, Yves Lechevallier, Francisco de A. T. de Carvalho, and Marie Chavent
- Subjects
- Mathematical optimization, Rand index, Single-linkage clustering, 02 engineering and technology, Interval (mathematics), [STAT.OT]Statistics [stat]/Other Statistics [stat.ML], 01 natural sciences, Symbolic data analysis, 010104 statistics & probability, [INFO.INFO-LG]Computer Science [cs]/Machine Learning [cs.LG], Artificial Intelligence, interval data, 0202 electrical engineering, electronic engineering, information engineering, adaptive distances, 0101 mathematics, Cluster analysis, k-medians clustering, Mathematics, k-medoids, Hausdorff distance, dynamic clustering, ComputingMethodologies_PATTERNRECOGNITION, Signal Processing, 020201 artificial intelligence & image processing, Computer Vision and Pattern Recognition, Algorithm, Software
- Abstract
This paper presents a partitional dynamic clustering method for interval data based on adaptive Hausdorff distances. Dynamic clustering algorithms are iterative two-step relocation algorithms involving the construction of the clusters at each iteration and the identification of a suitable representation or prototype (means, axes, probability laws, groups of elements, etc.) for each cluster by locally optimizing an adequacy criterion that measures the fitting between the clusters and their corresponding representatives. In this paper, each pattern is represented by a vector of intervals. Adaptive Hausdorff distances are the measures used to compare two interval vectors. Adaptive distances at each iteration change for each cluster according to its intra-class structure. The advantage of these adaptive distances is that the clustering algorithm is able to recognize clusters of different shapes and sizes. To evaluate this method, experiments with real and synthetic interval data sets were performed. The evaluation is based on an external cluster validity index (corrected Rand index) in a framework of a Monte Carlo experiment with 100 replications. These experiments showed the usefulness of the proposed method.
- Published
- 2006
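The Hausdorff distance between two real intervals has the closed form max(|difference of lower bounds|, |difference of upper bounds|). A sketch of an interval-vector distance built from it; the adaptive per-cluster weights, which the paper learns from the intra-class structure, are just supplied as fixed numbers here:

```python
import numpy as np

def interval_hausdorff(u, v):
    """Hausdorff distance between real intervals [u0, u1] and [v0, v1]."""
    return max(abs(u[0] - v[0]), abs(u[1] - v[1]))

def pattern_distance(x, y, weights=None):
    """Weighted sum of per-variable interval Hausdorff distances."""
    d = np.array([interval_hausdorff(a, b) for a, b in zip(x, y)])
    w = np.ones(len(d)) if weights is None else np.asarray(weights)
    return float(d @ w)

x = [(1.0, 3.0), (10.0, 12.0)]   # e.g. daily min-max of two measured variables
y = [(2.0, 3.5), (11.0, 15.0)]
print(pattern_distance(x, y), pattern_distance(x, y, weights=[2.0, 0.5]))
```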
24. An invariant scheme for exact match retrieval of symbolic images: Triangular spatial relationship based approach
- Author
- P. Punitha and Devanur S. Guru
- Subjects
- Binary search algorithm, business.industry, Pattern recognition, Standard deviation, Artificial Intelligence, Signal Processing, The Symbolic, Computer Vision and Pattern Recognition, Artificial intelligence, Invariant (mathematics), business, Spatial relationship, Algorithm, Software, Unique key, SIMPLE algorithm, Exact match, Mathematics
- Abstract
In this paper, a novel method of representing symbolic images in a symbolic image database (SID), invariant to image transformations and useful for exact match retrieval, is presented. The proposed model is based on the Triangular Spatial Relationship (TSR) [Guru, D.S., Nagabhushan, P., 2001. Triangular spatial relationship: A new approach for spatial knowledge representation, Pattern Recognition Lett. 22, 999-1006]. The model preserves the TSR among the components of a symbolic image by the use of quadruples. A distinct and unique key, called a TSR key, is computed for each distinct quadruple. The mean and standard deviation of the set of TSR keys computed for a symbolic image are stored, along with the total number of TSR keys, as the representatives of the symbolic image. An exact match retrieval scheme based on the modified binary search technique [Guru, D.S., Raghavendra, H.J., Suraj, M.G., 2000. An adaptive binary search based sorting by insertion: An efficient and simple algorithm, Statist. Appl., 2, 85-96] is also presented. The retrieval scheme requires O(log n) search time in the worst case, where n is the total number of symbolic images in the SID. Extensive experimentation on a large database of 13,680 symbolic images corroborates the superiority of the model.
- Published
- 2005
25. A robust algorithm for image principal curve detection
- Author
- Mang Chen, Yuncai Liu, and Zhiguo Cheng
- Subjects
- business.industry, Principal curves, Pattern recognition, Document processing, Artificial Intelligence, Feature (computer vision), Computer Science::Computer Vision and Pattern Recognition, Map symbolization, Graph domain, Signal Processing, Shortest path problem, Graph (abstract data type), Segmentation, Computer vision, Computer Vision and Pattern Recognition, Artificial intelligence, business, Algorithm, Software, Mathematics
- Abstract
Principal curve detection is an essential processing step in computer vision and pattern recognition with many important applications. In this paper, we present a new method to detect principal curves in complicated feature images. Based on the criteria of the shortest path of curves and the directional deviation of paths, principal curve detection is carried out in the graph domain. A DFS searching scheme is adopted to explore the graph network. The motivation of this research is to find road boundaries and house contours in printed map images. Since characters and map symbols often overlap with useful image features, the principal curve detection algorithm aims to obtain "clean" feature images from the original maps. Extensive experiments show that the algorithm achieves good efficiency and robustness on real map images. The technique described in this paper can also be used in other applications, such as character recognition, to separate characters from unwanted document components lying on them.
- Published
- 2004
26. New visual secret sharing schemes using probabilistic method
- Author
- Ching-Nung Yang
- Subjects
- Homomorphic secret sharing, business.industry, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Cryptography, Secret sharing, Visual cryptography, Shamir's Secret Sharing, Artificial Intelligence, Signal Processing, Human visual system model, Secure multi-party computation, Verifiable secret sharing, Computer Vision and Pattern Recognition, business, Algorithm, Software, Mathematics
- Abstract
A visual secret sharing (VSS) scheme is a perfectly secure method that protects a secret image by breaking it into shadow images (called shadows). Unlike other threshold schemes, a VSS scheme can be easily decoded by the human visual system without knowledge of cryptography or cryptographic computations. However, the size of the shadow images (i.e., the number of columns of the black and white matrices in a VSS scheme [Naor, Shamir, Visual cryptography, Advances in Cryptology-EUROCRYPT'94, Lecture Notes in Computer Science, vol. 950, Springer-Verlag, 1995, p. 1]) is expanded. Most recent papers about VSS schemes are dedicated to achieving higher contrast or smaller shadow size. In this paper, we use the frequency of white pixels to show the contrast of the recovered image. Our scheme is non-expansible and can easily be implemented on the basis of a conventional VSS scheme. Non-expansible means that the sizes of the original image and the shadows are the same.
- Published
- 2004
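A hedged sketch of a non-expansible probabilistic (2,2) scheme in the spirit the abstract describes, where contrast comes from the frequency of white pixels rather than pixel expansion; the paper's general construction differs in its details, and the checkerboard secret is made up:

```python
import numpy as np

def prob_vss_22(secret):
    """Probabilistic (2,2) visual secret sharing with no pixel expansion.

    Per pixel (1 = black): a white secret pixel gets the same random bit on
    both shares (stacks to white half the time); a black secret pixel gets
    complementary bits (always stacks to black). Stacking = pixelwise OR.
    """
    rng = np.random.default_rng()
    r = rng.integers(0, 2, size=secret.shape)
    s1 = r
    s2 = np.where(secret == 1, 1 - r, r)
    return s1, s2

secret = np.add.outer(np.arange(8), np.arange(8)) % 2   # toy checkerboard image
s1, s2 = prob_vss_22(secret)
stacked = s1 | s2
# Black regions are fully black; white regions are white with frequency ~1/2.
print((stacked[secret == 1] == 1).all(), (stacked[secret == 0] == 0).mean())
```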
27. A new robust circular Gabor based object matching by using weighted Hausdorff distance
- Author
- Zhenfeng Zhu, Ming Tang, and Hanqing Lu
- Subjects
- business.industry, Binary image, Feature vector, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Gabor transform, Geometric shape, Edge detection, Hausdorff distance, Gabor filter, Artificial Intelligence, Computer Science::Computer Vision and Pattern Recognition, Signal Processing, Computer vision, Computer Vision and Pattern Recognition, Artificial intelligence, Invariant (mathematics), business, Algorithm, Software, Mathematics
- Abstract
This paper describes a new and efficient circular Gabor filter-based method for object matching using a version of the weighted modified Hausdorff distance. An improved Gabor odd filter-based edge detector is used to obtain edge maps. A rotation-invariant circular Gabor filter, which differs from the conventional Gabor filter, is used to extract rotation-invariant features. The Hausdorff distance (HD) has been shown to be an effective measure for determining the degree of resemblance between binary images. A version of the weighted modified Hausdorff distance (WMHD) in the circular Gabor feature space is introduced to determine which positions are possible object model locations, which we call 'coarse' locations, and at the same time to obtain corresponding pairs of edge pixels for the object model and the input test image. We then introduce geometric shape information derived from these correspondence pairs to find the 'fine' location. The experimental results given in this paper show that the proposed algorithm is robust to rotation, scale, occlusion, and noise.
- Published
- 2004
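For reference, the unweighted modified Hausdorff distance (Dubuisson-Jain) that the paper's weighted variant builds on; the point sets stand in for edge maps and are made up:

```python
import numpy as np
from scipy.spatial import cKDTree

def modified_hausdorff(A, B):
    """Dubuisson-Jain modified Hausdorff distance between two point sets."""
    dAB = cKDTree(B).query(A)[0].mean()   # mean nearest-neighbour distance A -> B
    dBA = cKDTree(A).query(B)[0].mean()
    return max(dAB, dBA)

rng = np.random.default_rng(10)
model = rng.standard_normal((200, 2))                 # edge pixels of the model
scene = model + 0.05 * rng.standard_normal((200, 2))  # noisy occurrence in image
print(round(modified_hausdorff(model, scene), 3))
```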
28. Experimental comparison of superquadric fitting objective functions
- Author
- Yan Zhang
- Subjects
- Euclidean distance, Mathematical optimization, Image representation, Artificial Intelligence, Robustness (computer science), Signal Processing, Superquadrics, Curve fitting, Computer Vision and Pattern Recognition, Algorithm, Software, Mathematics
- Abstract
Most superquadric-based three-dimensional (3D) image representation methods recover superquadric models by minimizing an appropriately defined objective function. The objective function serves as an error metric to evaluate how accurately the recovered model fits the data. Both the accuracy of the recovered superquadric model and the efficiency of the data fitting process heavily depend on the objective function used. In this paper, an experimental comparison of two primarily used objective functions in superquadric model recovery is presented. The first objective function is based on the implicit definition of superquadrics, and the other on radial Euclidean distance. A variety of synthetic and real 3D range data of both regular and globally deformed superquadrics are used in experiments. The two objective functions are compared with respect to the accuracy of the recovered parameters, corresponding fitting errors, robustness against noise, sensitivity to viewpoints, and the convergence speed. The conclusion derived in this paper provides a convincing guidance for selecting the optimal objective function in superquadric representation tasks.
- Published
- 2003
29. Evaluation of genetic operators and solution representations for shape recognition by genetic algorithms
- Author
- K. G. Khoo and Ponnuthurai Nagaratnam Suganthan
- Subjects
- Matching (graph theory), business.industry, Crossover, Pattern recognition, Genetic operator, Operator (computer programming), Artificial Intelligence, Signal Processing, Genetic algorithm, Pattern recognition (psychology), Computer Vision and Pattern Recognition, Genetic representation, Artificial intelligence, Representation (mathematics), business, Algorithm, Software, Mathematics
- Abstract
In this paper, we investigate a genetic algorithm based optimization procedure for structural pattern recognition in a model-based recognition system using an attributed relational graph matching technique. In this study, potential solutions indicating the mapping between scene and model vertices are represented by integer strings. The test scene may contain multiple occurrences of different or the same model object. Khoo and Suganthan [Proc. IEEE Congr. Evolutionary Comput. Conf. 2001, p. 727] proposed a solution string representation scheme for multiple mappings between a test scene and all model objects, used with the uniform crossover operator. In this paper, we evaluate this solution string representation scheme against another representation scheme commonly used to solve the problem. In addition, a comparison between the uniform, one-point, and two-point crossover operators is made. An efficient pose-clustering algorithm is used to eliminate wrong mappings and to determine the presence/pose of the model in the scene. Simulations are carried out to evaluate the various solution representations and genetic operators.
- Published
- 2002
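Uniform crossover on integer solution strings, one of the operators compared above; the parent strings are made up, and the repair/validation of mappings that a real GA for graph matching would need is omitted:

```python
import numpy as np

def uniform_crossover(p1, p2, rng):
    """Uniform crossover for integer strings: each gene is taken from either
    parent with probability 1/2 (one child shown for brevity)."""
    mask = rng.random(p1.size) < 0.5
    return np.where(mask, p1, p2)

# Toy solution strings mapping scene vertices 0..5 to model vertex indices:
rng = np.random.default_rng(11)
parent1 = np.array([3, 1, 4, 0, 2, 5])
parent2 = np.array([0, 1, 2, 3, 4, 5])
print(uniform_crossover(parent1, parent2, rng))
```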
30. Detecting line segments in an image – a new implementation for Hough Transform
- Author
- Yu-Tai Ching
- Subjects
- Line segment intersection, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Geometry, Parameter space, Intersection (Euclidean geometry), Hough transform, law.invention, Transformation (function), Line segment, Artificial Intelligence, law, Computer Science::Computer Vision and Pattern Recognition, Signal Processing, Line (geometry), Point (geometry), Computer Vision and Pattern Recognition, Algorithm, Software, Mathematics
- Abstract
The conventional Hough Transform is a technique for detecting line segments in an image. It transforms image points into lines in the parameter space: if image points are collinear, the lines transformed from those points intersect at a single point in the parameter space. Determining the intersection is generally carried out through the “voting method”, which partitions the parameter space into squared meshes. A problem with the voting method is determining the resolution required for partitioning the parameter space. In this paper, we present a solution to this problem. We propose to transform an image point into a belt whose width is a function of the width of a line in the image, and then determine the intersection of numerous belts to detect a line segment. An iterative algorithm based on this transformation for detecting line segments is presented.
- Published
- 2001
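The conventional rho-theta Hough voting baseline that the belt-intersection scheme refines; note that the accumulator resolution (n_theta, n_rho, chosen arbitrarily here) is exactly the parameter the paper's method avoids having to pick. The test points are made up:

```python
import numpy as np

def hough_votes(points, n_theta=180, n_rho=200):
    """Classic rho-theta Hough voting: each point votes for every
    (theta, rho = x cos t + y sin t) accumulator bin it lies on."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.abs(points).sum(axis=1).max()   # |rho| <= |x| + |y|
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        bins = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[np.arange(n_theta), bins] += 1
    return acc, thetas

# Collinear points on y = 0.5 x + 1, plus uniform noise points:
xs = np.linspace(0, 10, 30)
pts = np.vstack([np.column_stack([xs, 0.5 * xs + 1]),
                 np.random.default_rng(12).uniform(0, 10, (30, 2))])
acc, thetas = hough_votes(pts)
ti, ri = np.unravel_index(acc.argmax(), acc.shape)
print(acc.max(), np.degrees(thetas[ti]))   # strong peak near theta ~ 117 degrees
```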
31. Robust direct motion estimation considering discontinuity
- Author
- Jong-Eun Ha and In So Kweon
- Subjects
- Motion compensation, business.industry, Direct method, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Regularization (mathematics), Discontinuity (linguistics), Motion field, Artificial Intelligence, Motion estimation, Signal Processing, Segmentation, Computer vision, Computer Vision and Pattern Recognition, Artificial intelligence, business, Algorithm, Software, Smoothing, ComputingMethodologies_COMPUTERGRAPHICS, Mathematics
- Abstract
In this paper, we propose a robust motion estimation algorithm using an uncalibrated 3D motion model that takes depth discontinuity into account. Most previous direct motion estimation algorithms with a 3D motion model compute the depth value through local smoothing, which produces erroneous results at depth discontinuities. In this paper, we overcome this problem by adding a discontinuity-preserving regularization term to the original equation. Robust estimation enables motion segmentation through dominant camera motion compensation. Experimental results show improved estimates at depth discontinuities.
- Published
- 2000
32. Corner detection via topographic analysis of vector-potential
- Author
- Bin Luo, Andrew D. J. Cross, and Edwin R. Hancock
- Subjects
- Detector, Corner detection, Geometry, Image plane, Real image, Artificial Intelligence, Feature (computer vision), Signal Processing, Computer Vision and Pattern Recognition, Sensitivity (control systems), Symmetry (geometry), Algorithm, Software, Mathematics, Vector potential
- Abstract
This paper describes how corner detection can be realised using a new feature representation based on a magneto-static analogy. The idea is to compute a vector-potential by appealing to an analogy in which the Canny edge-map is regarded as an elementary current density residing on the image plane. In this paper, we demonstrate that corners are located at the saddle-points of the magnitude of the vector-potential. These points correspond to the intersections of saddle-ridge and saddle-valley structures, i.e. to junctions of the edge and symmetry lines. We describe a template-based method for locating the saddle-points. This involves performing a non-minimum suppression test in the direction of the vector-potential and a non-maximum suppression test in the orthogonal direction. Experimental results using both synthetic and real images are given. We investigate the angle and scale sensitivity of the new corner detector and compare it with a number of alternative corner detectors.
- Published
- 1999
33. Multiple graph matching with Bayesian inference
- Author
- Mark L. Williams, Edwin R. Hancock, and Richard C. Wilson
- Subjects
- Synthetic aperture radar, Matching (graph theory), business.industry, Bayesian probability, Process (computing), Inference, Pattern recognition, Bayesian inference, Sensor fusion, Consistency (database systems), Artificial Intelligence, Signal Processing, Computer Vision and Pattern Recognition, Artificial intelligence, business, Algorithm, Software, Mathematics
- Abstract
This paper describes the development of a Bayesian framework for multiple graph matching. The study is motivated by the plethora of multi-sensor fusion problems which can be abstracted as multiple graph matching tasks. The study uses as its starting point the Bayesian consistency measure recently developed by Wilson and Hancock. Hitherto, the consistency measure has been used exclusively in the matching of graph-pairs. In the multiple graph matching study reported in this paper, we use the Bayesian framework to construct an inference matrix which can be used to gauge the mutual consistency of multiple graph-matches. The multiple graph-matching process is realised as an iterative discrete relaxation process which aims to maximise the elements of the inference matrix. We experiment with our multiple graph matching process using an application vehicle furnished by the matching of aerial imagery. Here we are concerned with the simultaneous fusion of optical, infra-red and synthetic aperture radar images in the presence of digital map data.
- Published
- 1997
34. Parametric shape recognition using a probabilistic inverse theory
- Author
-
Tal Arbel, P. Whaite, and Frank P. Ferrie
- Subjects
business.industry ,Probabilistic logic ,Conditional probability ,Probability density function ,Statistical model ,Conditional probability distribution ,Machine learning ,computer.software_genre ,Artificial Intelligence ,Signal Processing ,Pattern recognition (psychology) ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Algorithm ,computer ,Reference model ,Software ,Parametric statistics ,Mathematics - Abstract
This paper describes a new framework for parametric shape recognition based on a probabilistic model of inverse theory first introduced by Tarantola. The key result is a method for generating classifiers in the form of conditional probability densities for recognizing an unknown from a set of reference models. Our procedure is automatic. Off-line, it invokes an autonomous process to estimate reference model parameters and their statistics. On-line, during measurement, it combines these with a priori context-dependent information, as well as the parameters and statistics estimated for an unknown object, into a single description. That description, a conditional probability density function, represents the likelihood of correspondence between the unknown and a particular reference model. The paper also describes the implementation of this procedure in a system for automatically generating and recognizing 3-D part-oriented models. Specifically we show that recognition performance is near perfect for cases in which complete surface information is accessible to the algorithm, and that it falls off gracefully (minimal false-positive response) when only partial information is available. This leads to the possibility of an active recognition strategy in which the belief measures associated with each classification can be used as feedback for the acquisition of further evidence as required.
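A minimal sketch of the recognition step under a Gaussian assumption: each reference model contributes a conditional density, and the unknown's estimated parameters are scored against all of them. The Gaussian form and the names below are our assumption, standing in for the Tarantola-style conditional densities.

import numpy as np
from scipy.stats import multivariate_normal

def classify(unknown_params, models, priors):
    # models: list of (mean, cov) estimated off-line for each reference model.
    # priors: a priori context-dependent weights, one per model.
    # Returns normalized per-model beliefs for the unknown object.
    likes = np.array([multivariate_normal.pdf(unknown_params, m, c)
                      for m, c in models])
    post = likes * np.asarray(priors)
    return post / post.sum()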
- Published
- 1996
35. A novel single-pass thinning algorithm and an effective set of performance criteria
- Author
-
R. W. Zhou, Geok See Ng, and Chai Quek
- Subjects
Pixel ,Thinning ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Boundary (topology) ,computer.file_format ,Set (abstract data type) ,Template ,Artificial Intelligence ,Signal Processing ,Bitmap ,Computer Vision and Pattern Recognition ,computer ,Algorithm ,Software ,Smoothing ,ComputingMethodologies_COMPUTERGRAPHICS ,Mathematics ,Flag (geometry) - Abstract
This paper proposes a new single-pass thinning algorithm that consults both a flag map and the bitmap simultaneously to decide whether a boundary pixel can be deleted, and that incorporates smoothing templates to smooth the final skeleton. Three performance measurements are proposed for an objective evaluation of the novel algorithm against a set of well-established techniques. Extensive comparisons and analysis of the results are presented for discussion.
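A toy sketch of the flag-map idea: the pass consults both the bitmap and a flag map of pixels already marked deleted. The deletion condition here is a crude 8-neighbour count, not the paper's template set.

import numpy as np

def thin_once(img, flags):
    # img: binary bitmap; flags: boolean map of pixels marked deleted,
    # initially np.zeros_like(img, dtype=bool).
    out = img.copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            if img[y, x] and not flags[y, x]:
                nb = img[y-1:y+2, x-1:x+2].sum() - 1
                if 2 <= nb <= 6:          # crude boundary/deletability test
                    out[y, x] = 0
                    flags[y, x] = True    # flag map consulted on later pixels
    return out, flags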
- Published
- 1995
36. Experimental study of performance of pattern classifiers and the size of design samples
- Author
-
Tetsuo Takeshita and Jun-ichiro Toriwaki
- Subjects
education.field_of_study ,Population ,Estimator ,Multivariate normal distribution ,Covariance ,Linear discriminant analysis ,Bayes' theorem ,Quadratic equation ,Discriminant function analysis ,Artificial Intelligence ,Signal Processing ,Statistics ,Computer Vision and Pattern Recognition ,education ,Algorithm ,Software ,Mathematics - Abstract
This paper presents results of simulation experiments concerning the two-class pattern classification problem, assuming a multivariate normal population. Both a quadratic and a linear discriminant function designed using estimated covariance matrices and mean vectors have limited performance compared to the optimal Bayes decision. Two approximate estimators of the amount of degradation in the recognition rate were proposed by Raudys and Fukunaga, respectively. This paper presents an experimental evaluation of the goodness of these estimators. We show quantitatively how well those estimators work and also confirm that a modified classifier designed using Stein's estimator achieves better performance than the conventional one.
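A toy version of such an experiment, assuming two spherical Gaussian classes: a plug-in rule built from estimated means is compared with the optimal Bayes rule built from the true means, and the difference estimates the degradation caused by finite design samples.

import numpy as np

def degradation(dim=8, n_train=50, n_test=20000, rng=np.random.default_rng(0)):
    m0, m1 = np.zeros(dim), np.ones(dim) * 0.5
    X0 = rng.normal(m0, 1.0, (n_train, dim))
    X1 = rng.normal(m1, 1.0, (n_train, dim))
    mu0, mu1 = X0.mean(0), X1.mean(0)        # design-sample estimates
    T = rng.normal(m1, 1.0, (n_test, dim))   # test data drawn from class 1
    plug_in = np.mean(np.linalg.norm(T - mu1, axis=1)
                      < np.linalg.norm(T - mu0, axis=1))
    bayes = np.mean(np.linalg.norm(T - m1, axis=1)
                    < np.linalg.norm(T - m0, axis=1))
    return bayes - plug_in   # one-sided recognition-rate loss from estimation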
- Published
- 1995
37. An optimal algorithm for extracting the regions of a plane graph
- Author
-
Horst Bunke and Xiaoyi Jiang
- Subjects
Surface (mathematics) ,Computation ,Upper and lower bounds ,Planar graph ,Combinatorics ,symbols.namesake ,Polyhedron ,Artificial Intelligence ,Signal Processing ,symbols ,Computer Vision and Pattern Recognition ,Algebraic number ,Algorithm ,Time complexity ,Software ,Blossom algorithm ,MathematicsofComputing_DISCRETEMATHEMATICS ,Mathematics - Abstract
In a recent paper, Fan and Chang (1991) presented an algorithm for extracting all regions of a plane graph. It is shown in this paper that their algorithm has quadratic time and space complexity. We propose an optimal algorithm which takes O(m log m) computation time and uses O(m) space, where m is the number of edges of the plane graph. The optimality of our algorithm is established by proving an Ω(m log m) lower bound under the algebraic decision-tree model.
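A sketch of the standard embedding-based face traversal that achieves the stated bound: sorting the darts around each vertex by angle costs O(m log m), after which every face is traced in O(m) total. This follows the usual doubly-connected-edge-list construction, not necessarily the authors' exact algorithm.

import math
from collections import defaultdict

def regions(vertices, edges):
    # vertices: {v: (x, y)}; edges: iterable of undirected pairs (u, v).
    around = defaultdict(list)
    for u, v in edges:
        around[u].append(v); around[v].append(u)
    for u in around:                      # O(m log m) angular sort of darts
        around[u].sort(key=lambda w: math.atan2(vertices[w][1] - vertices[u][1],
                                                vertices[w][0] - vertices[u][0]))
    nxt = {}
    for u in around:
        ring = around[u]
        for i, w in enumerate(ring):
            # after arriving at u from w, leave along the next dart in
            # rotational order, which keeps tracing the same face
            nxt[(w, u)] = (u, ring[(i - 1) % len(ring)])
    seen, faces = set(), []
    for dart in nxt:                      # O(m): each dart visited once
        if dart not in seen:
            face, d = [], dart
            while d not in seen:
                seen.add(d); face.append(d[0]); d = nxt[d]
            faces.append(face)
    return faces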
- Published
- 1993
38. Characterizing planar outlines
- Author
-
Jose A. García, Joaquín Fdez-Valdivia, and N. Pérez de la Blanca
- Subjects
Landmark ,business.industry ,Point set ,Curvature ,computer.software_genre ,Spline (mathematics) ,Information extraction ,Planar ,Artificial Intelligence ,Signal Processing ,Curvature estimator ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,computer ,Algorithm ,Software ,Interpolation ,Mathematics - Abstract
Finding the most perceptually outstanding point set on a planar closed outline is the first step in the shape characterization of such curves. In this paper we present an approach to this problem based on the joint information provided by a set of outstanding points and an interpolation procedure defining the shape between them. The two main features of the paper are the optimization criterion for determining the class of outstanding points and the spline used for the interpolation. The algorithm first computes the curvature graph and determines its local extremes; it then identifies the landmark points according to an importance criterion; finally, it computes the interpolated curve from the landmark points and measures the fitting error between the interpolated curve and the observed outline.
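A minimal sketch of the first step, assuming the outline is given as sampled coordinates: discrete curvature is computed from derivatives, and its local extremes are returned as candidate landmark points. The importance criterion and the spline fit are omitted.

import numpy as np

def curvature_extrema(x, y):
    # x, y: coordinate samples of a closed outline.
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    k = (dx * ddy - dy * ddx) / ((dx**2 + dy**2) ** 1.5 + 1e-12)
    a = np.abs(k)
    # indices where |curvature| is a local maximum (cyclic comparison)
    return np.where((a > np.roll(a, 1)) & (a > np.roll(a, -1)))[0]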
- Published
- 1993
39. Methods of signal classification using the images produced by the Wigner-Ville distribution
- Author
-
Boualem Boashash and Saman S. Abeysekera
- Subjects
Wigner ville ,Signal ,Image (mathematics) ,Signal classification ,Distribution (mathematics) ,Artificial Intelligence ,Salient ,Signal Processing ,Pattern recognition (psychology) ,Computer Vision and Pattern Recognition ,Representation (mathematics) ,Algorithm ,Software ,Mathematics - Abstract
The Wigner-Ville distribution (WVD) can be considered as a two-dimensional (time and frequency) representation of a one-dimensional signal. The time-frequency image depicts certain salient features which can be used in recognition and classification of the signals. Three classification schemes, which are different in principle and can be adopted for the processing of these images, are presented and compared in this paper. Of these, the decision-theoretic pattern recognition approach is suggested for proper classification of signals. Efficient computational schemes are also proposed in the paper.
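For reference, a simplified discrete pseudo-WVD can be computed by forming the lag product at each time instant and taking an FFT over the lag index; windowing and boundary handling vary between implementations, so this is only an illustrative baseline.

import numpy as np

def wigner_ville(x):
    # x: analytic signal (complex 1-D array). For each time t, form the
    # lag product x[t+m] * conj(x[t-m]) and FFT over the lag index m
    # (negative lags wrapped around), yielding a time-frequency image.
    x = np.asarray(x, complex)
    n = len(x)
    w = np.zeros((n, n), complex)
    for t in range(n):
        m = min(t, n - 1 - t)
        lags = np.arange(-m, m + 1)
        w[t, lags % n] = x[t + lags] * np.conj(x[t - lags])
    return np.real(np.fft.fft(w, axis=1)).T   # rows: frequency, cols: time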
- Published
- 1991
40. Reducing the expected computational cost of template matching using run length representation
- Author
-
Azriel Rosenfeld and Avraham Margalit
- Subjects
Speedup ,Pixel ,Template matching ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Real image ,Digital image ,Template ,Artificial Intelligence ,Signal Processing ,Computer Vision and Pattern Recognition ,Pattern matching ,Algorithm ,Software ,Mathematics - Abstract
Template matching of two digital images, represented as arrays of pixels, is computationally expensive, because it requires a pixel-by-pixel comparison of the pixels in the image and in the template for every location in the image. In this paper we present an algorithm to reduce the computational cost of template matching by using run length representation of the image and the template. Using this technique we compare only locations in the image and the template where the total mismatch accumulation may change. This method works best for images and templates with long runs. In the paper we present the algorithm, discuss conditions for its being efficient, and show experimental results on both randomly generated and real images. We present some results in which using this method yields more than 20-fold speedup.
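A sketch of the core trick for one pair of rows, assuming binary rows encoded as (value, length) runs with equal total length: the mismatch count advances only at positions where either row changes value, so the cost scales with the number of runs rather than the number of pixels.

def row_mismatch(runs_a, runs_b):
    # runs_a, runs_b: lists of (value, length) runs covering equal lengths.
    ia = ib = mism = 0
    va, la = runs_a[0]
    vb, lb = runs_b[0]
    while True:
        step = min(la, lb)        # advance to the next run boundary
        if va != vb:
            mism += step          # mismatch accumulates over the whole span
        la -= step; lb -= step
        if la == 0:
            ia += 1
            if ia == len(runs_a):
                break
            va, la = runs_a[ia]
        if lb == 0:
            ib += 1
            if ib == len(runs_b):
                break
            vb, lb = runs_b[ib]
    return mism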
- Published
- 1990
41. Rotation-invariant NCC for 2D color matching of arbitrary shaped fragments of a fresco
- Author
-
Dimo Dimov
- Subjects
Curvilinear coordinates ,Cross-correlation ,Template matching ,02 engineering and technology ,HSL and HSV ,Invariant (physics) ,01 natural sciences ,Artificial Intelligence ,0103 physical sciences ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,RGB color model ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,010306 general physics ,Spurious relationship ,Fresco ,Algorithm ,Software ,Mathematics - Abstract
The proposed RINCCAS method, an abbreviation of the paper title, was originally developed for the DAFNE (Digital Anastylosis of Frescos challeNgE) contest, June-July 2019. The method consists of two phases. Phase 1 extends the classic Normalized Cross Correlation (NCC) for template matching of arbitrary curvilinear 2D shapes of fragments assumed to belong to a fresco as perceived in a photograph. For this purpose, each fragment is approximated by one or more non-overlapping Maximal & Axes-Collinear Inner Rectangles (MACIRs). The extension also includes rotation invariance and vector compatibility of NCC with respect to the three (RGB) color channels. The high positioning accuracy makes it possible to identify spurious fragments in Phase 2 of RINCCAS, first by an HSV scheme for color differences and then by accurate detection of overlaps among the fragments. The first phase has 'log-cubic' time complexity, estimated in the average size of the fragments' MACIRs. For some DAFNE tasks, the first phase of RINCCAS requires high-performance computing (HPC) resources, while a conventional PC is sufficient for the second phase, even in the case of multiple interactive optimization of the ratio between true and spurious fragments.
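Phase 1's vector-compatible NCC can be illustrated as below, treating each pixel as an RGB triple and normalizing jointly over the three channels; the MACIR decomposition and the rotation sampling that make the full method rotation-invariant are omitted, so this is only the innermost score.

import numpy as np

def color_ncc(patch, window):
    # patch, window: H x W x 3 arrays of equal shape (one MACIR vs. one
    # candidate image location). NCC is computed vector-wise over RGB.
    a = patch.reshape(-1, 3).astype(float)
    b = window.reshape(-1, 3).astype(float)
    a -= a.mean(0)
    b -= b.mean(0)
    return float((a * b).sum()
                 / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))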
- Published
- 2020
42. Are twin hyperplanes necessary?
- Author
-
Shigeo Abe
- Subjects
0209 industrial biotechnology ,Class (set theory) ,Generalization ,02 engineering and technology ,Computer experiment ,Least squares ,Support vector machine ,020901 industrial engineering & automation ,Hyperplane ,Artificial Intelligence ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Decision boundary ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Equivalence (measure theory) ,Algorithm ,Software ,Mathematics - Abstract
In a twin support vector machine (TSVM), the separating hyperplane associated with one class is determined such that it is near the data belonging to that class and away from the other class. A data sample is classified into the class with the nearer hyperplane. In this paper we discuss whether the twin hyperplanes are necessary for the TSVM. By theoretical analysis we first show the equivalence conditions under which one of the two decision boundaries of the TSVM coincides with the decision boundary of the SVM. Then, for the least squares (LS) version of the TSVM, we clarify the conditions for equivalence with the LS SVM, and with an LS SVM having two hyperparameters for imbalanced data (one for each class). A comparison of the LS TSVM with the LS SVMs by computer experiments shows that the generalization abilities of the LS TSVM are comparable but not superior for 13 two-class problems and an imbalanced two-class problem.
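For orientation, the standard closed-form LS TSVM (as introduced by Kumar and Gopal) computes each hyperplane [w; b] from one linear system; the sketch below follows that formulation and is not specific to the paper's equivalence analysis.

import numpy as np

def ls_tsvm_planes(A, B, c1=1.0, c2=1.0):
    # A: samples of class +1 (nA x d); B: samples of class -1 (nB x d).
    # Each hyperplane is near its own class and pushed unit distance
    # from the other class; classify by the nearer normalized hyperplane.
    G = np.hstack([A, np.ones((len(A), 1))])
    H = np.hstack([B, np.ones((len(B), 1))])
    e1, e2 = np.ones(len(A)), np.ones(len(B))
    z1 = -c1 * np.linalg.solve(G.T @ G + c1 * H.T @ H, H.T @ e2)
    z2 =  c2 * np.linalg.solve(H.T @ H + c2 * G.T @ G, G.T @ e1)
    return z1, z2   # each z = [w; b]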
- Published
- 2018
43. Magnetic optimization algorithm for data clustering
- Author
-
Surya Kant, Millie Pant, Vinay Kumar Jain, and Neetu Kushwaha
- Subjects
Fuzzy clustering ,Correlation clustering ,020206 networking & telecommunications ,02 engineering and technology ,computer.software_genre ,Determining the number of clusters in a data set ,ComputingMethodologies_PATTERNRECOGNITION ,Data stream clustering ,Artificial Intelligence ,CURE data clustering algorithm ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Canopy clustering algorithm ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Data mining ,Cluster analysis ,Algorithm ,computer ,Software ,k-medians clustering ,Mathematics - Abstract
In this paper, a new clustering algorithm inspired by magnetic force is proposed. The algorithm is not sensitive to the initialization of cluster centroids. Each centroid particle changes its position according to the total magnetic force applied by the data points, using the resultant force to find the best centroid position for clustering. To evaluate the performance of the proposed algorithm, numerical experiments are conducted on eleven benchmark data sets taken from the UCI repository and compared with five different clustering algorithms. The results show that the proposed algorithm is more accurate, efficient and robust than the other clustering algorithms.
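The abstract does not give the force law; a hypothetical update in the described spirit moves each centroid along the resultant of inverse-square attractions exerted by the data points (the paper's actual law and step control may differ).

import numpy as np

def update_centroids(X, C, lr=0.1):
    # X: data points (n x d); C: centroid particles (k x d, float).
    for j in range(len(C)):
        d = X - C[j]
        r2 = (d ** 2).sum(1, keepdims=True) + 1e-9
        force = (d / r2).sum(0)           # resultant "magnetic" pull on C[j]
        C[j] += lr * force / len(X)       # move centroid along the resultant
    return C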
- Published
- 2018
44. Parameter k search strategy in outlier detection
- Author
-
Chuan Zhou, Jin Ning, Leiting Chen, and Yang Wen
- Subjects
Field (mathematics) ,02 engineering and technology ,Artificial Intelligence ,Search algorithm ,020204 information systems ,Signal Processing ,Outlier ,0202 electrical engineering, electronic engineering, information engineering ,Graph (abstract data type) ,Order (group theory) ,020201 artificial intelligence & image processing ,Anomaly detection ,Computer Vision and Pattern Recognition ,Algorithm ,Software ,Selection (genetic algorithm) ,Mathematics - Abstract
Selecting the parameter k (the number of nearest neighbors) is an important problem in the field of outlier detection. If the selected k is too small, outlier clusters may go undetected; if it is too large, normal points may be detected as outliers. To solve this parameter selection problem, recent studies choose k by searching for a natural or stable relative neighborhood. However, these studies choose k intuitively and do not explain why the chosen k is appropriate. In this paper, we analyze the above questions and present a mutual neighbor graph (MNG) based parameter search algorithm for k. Furthermore, we justify the chosen k from three perspectives. Experiments on synthetic and real data sets demonstrate that the proposed method achieves better performance than the alternatives.
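A plausible stand-in for the MNG construction (the paper's exact definition may differ): points u and v are joined when each appears in the other's k-nearest-neighbour list.

import numpy as np

def mutual_neighbor_graph(X, k):
    # X: n x d data matrix; toy O(n^2) distance computation for clarity.
    D = np.linalg.norm(X[:, None] - X[None], axis=2)
    knn = np.argsort(D, axis=1)[:, 1:k + 1]   # skip index 0 (self)
    return {(u, v) for u in range(len(X)) for v in knn[u] if u in knn[v]}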
- Published
- 2018
45. Comparison of bubble detectors and size distribution estimators
- Author
-
Roman Juránek, Jarmo Ilonen, Pavel Zemcik, Markéta Dubská, Tuomas Eerola, Heikki Kälviäinen, and Lasse Lensu
- Subjects
Boosting (machine learning) ,Bubble ,Detector ,Spectral density ,Estimator ,02 engineering and technology ,Convolutional neural network ,Physics::Fluid Dynamics ,020401 chemical engineering ,Artificial Intelligence ,Signal Processing ,Statistics ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,0204 chemical engineering ,Algorithm ,Software ,Mathematics - Abstract
Detection, counting and characterization of bubbles, that is, transparent objects in a liquid, is important in many industrial applications. These applications include monitoring of pulp delignification and multiphase dispersion processes common in the chemical, pharmaceutical, and food industries. Typically the aim is to measure the bubble size distribution. In this paper, we present a comprehensive comparison of bubble detection methods for challenging industrial image data. Moreover, we compare the detection-based methods to a direct bubble size distribution estimation method that does not require the detection of individual bubbles. The experiments showed that the approach based on a convolutional neural network (CNN) outperforms the other methods in detection accuracy. However, the boosting-based approaches were remarkably faster to compute. The power spectrum approach for direct bubble size distribution estimation produced accurate distributions and it is fast to compute, but it does not provide the spatial locations of the bubbles. Selecting the most suitable method depends on the specific application.
- Published
- 2018
46. An unsupervised 2D point-set registration algorithm for unlabeled feature points: Application to fingerprint matching
- Author
-
A. Pasha Hosseinbor, Alex Ushveridze, and Renat Zhdanov
- Subjects
Minutiae ,021110 strategic, defence & security studies ,Matching (graph theory) ,business.industry ,Fingerprint (computing) ,0211 other engineering and technologies ,Point set registration ,Pattern recognition ,02 engineering and technology ,Fingerprint recognition ,Least squares ,Artificial Intelligence ,Computer Science::Computer Vision and Pattern Recognition ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Unsupervised learning ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Algorithm ,Software ,Linear least squares ,Mathematics - Abstract
An unsupervised, iterative 2D point-set registration algorithm for unlabeled data, based on linear least squares, is proposed and subsequently utilized for minutia-based fingerprint matching. The matcher considers all possible minutia pairings and iteratively aligns the two sets until the number of minutia pairs does not exceed the maximum number of allowable one-to-one pairings. The first alignment establishes a region of overlap between the two minutia sets, which is then iteratively refined by each successive alignment. After each alignment, minutia pairs that exhibit weak correspondence are discarded, and the process is repeated until the number of remaining pairs no longer exceeds the maximum number of allowable one-to-one pairings. The proposed algorithm is tested on both the FVC2000 and FVC2002 databases, and the results indicate that the proposed matcher is both effective and efficient for fingerprint authentication; it is fast and deliberately uses as few computationally expensive mathematical functions (e.g. trigonometric, exponential) as possible. In addition to the proposed matcher, another contribution of the paper is the analytical derivation of the least squares solution for the optimal alignment parameters for two point-sets lacking one-to-one correspondence.
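The paper derives its own linear-least-squares solution; the sketch below solves the same rigid 2D alignment subproblem in closed form via the familiar SVD (Kabsch/Procrustes) route, given the currently hypothesized minutia pairs as matched rows.

import numpy as np

def align_2d(P, Q):
    # P, Q: n x 2 arrays of tentatively paired points; returns (R, t)
    # minimizing sum ||R @ p + t - q||^2 over the pairs.
    cp, cq = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # keep a proper rotation (no reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp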
- Published
- 2017
47. Enhancement of the Box-Counting Algorithm for fractal dimension estimation
- Author
-
Gang-Gyoo Jin, Gun-Baek So, and Hye-Rim So
- Subjects
0209 industrial biotechnology ,Correlation dimension ,Fractal dimension on networks ,Fractal transform ,Fractal landscape ,02 engineering and technology ,01 natural sciences ,Measure (mathematics) ,Fractal dimension ,010305 fluids & plasmas ,Box counting ,020901 industrial engineering & automation ,Fractal ,Artificial Intelligence ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Mathematics ,Pixel ,Applied Mathematics ,Multifractal system ,021001 nanoscience & nanotechnology ,Fractal analysis ,Data point ,Control and Systems Engineering ,Signal Processing ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,0210 nano-technology ,Algorithm ,Software ,Integer (computer science) - Abstract
The box-counting (BC) method is frequently used as a measure of the irregularity and roughness of fractals with the self-similarity property, owing to its simplicity and high reliability. It requires a proper choice of the number of box sizes, the corresponding sizes, and the size limits to guarantee the accuracy of the fractal dimension estimate. Most existing BC methods use the geometric-step method, which causes a lack of fitting data points and wasted pixels for images of large and/or arbitrary size. This paper presents a BC algorithm that combines a novel sampling method with a fractional box-counting method to overcome some of the limitations evident in the conventional BC method. The new sampling method introduces a partial competition based on the coverage of box sizes and uses more box sizes than the geometric-step method. To circumvent the border problem occurring for images of arbitrary size, the fractional box-counting method allows the number of boxes to be real rather than integer. To show its feasibility, the proposed method is applied to a set of fractal images with exactly known fractal dimension.
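For contrast with the proposed refinements, the conventional integer box count looks as follows: occupied boxes are counted at several sizes and the dimension is the slope of log N(s) versus log(1/s). The fractional counting and the new sampling scheme are not reproduced here; the size list is illustrative.

import numpy as np

def box_count_dimension(img, sizes=(2, 3, 4, 6, 8, 12, 16, 24, 32)):
    # img: binary 2D array; boxes that fall off the border are discarded,
    # which is exactly the waste the paper's fractional method addresses.
    counts = []
    for s in sizes:
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    # slope of the log-log fit estimates the fractal dimension
    return np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)[0]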
- Published
- 2017
48. A CUDA-based hill-climbing algorithm to find irreducible testors from a training matrix
- Author
-
Guillermo Sánchez-Díaz, Ivan Piza-Davila, Luis Rizo-Dominguez, and Manuel S. Lazo-Cortés
- Subjects
Theoretical computer science ,Computation ,Feature selection ,02 engineering and technology ,Bloom filter ,Matrix (mathematics) ,CUDA ,Artificial Intelligence ,020204 information systems ,Signal Processing ,Pattern recognition (psychology) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Graphics ,Algorithm ,Hill climbing ,Software ,Mathematics - Abstract
Irreducible testors have been used to solve feature selection problems. All the exhaustive algorithms reported for generating irreducible testors have exponential complexity; however, several problems require only a portion of the irreducible testors (a subset rather than the complete set). The hill-climbing algorithm is the latest approach for finding such a subset. This paper therefore introduces a parallel version of the hill-climbing algorithm which, being developed on the CUDA platform, takes advantage of all the cores available on the graphics card. The proposed algorithm incorporates a novel mechanism that improves the exploration capability without adding extra computation at the mutation step, thus increasing the rate of irreducible testors found. In addition, a Bloom filter is incorporated for efficient handling of duplicate irreducible testors. Several experiments with synthetic and real data, and a comparison with other state-of-the-art algorithms, are presented in this work.
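The duplicate-handling component can be illustrated with a minimal Bloom filter (a bit array probed by k hashes); the sizes and hash construction below are illustrative, not the paper's, and found testors are assumed to be encoded as strings.

import hashlib

class BloomFilter:
    # Probabilistic set membership: no false negatives, rare false positives.
    def __init__(self, m=1 << 20, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m // 8)

    def _probes(self, item):
        for i in range(self.k):
            h = hashlib.sha1(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._probes(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._probes(item))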
- Published
- 2017
49. Partially collapsed parallel Gibbs sampler for Dirichlet process mixture models
- Author
-
Murat Dundar and Halid Ziya Yerebakan
- Subjects
Posterior probability ,02 engineering and technology ,Machine learning ,computer.software_genre ,01 natural sciences ,010104 statistics & probability ,symbols.namesake ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,0101 mathematics ,Cluster analysis ,Mathematics ,business.industry ,Model selection ,020207 software engineering ,Markov chain Monte Carlo ,Mixture model ,Statistics::Computation ,Dirichlet process ,Signal Processing ,symbols ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Chinese restaurant process ,business ,Algorithm ,computer ,Software ,Gibbs sampling - Abstract
Proposed representation for DP is ideally suited for distributed computing.Proposed sampler offers advantages in terms of both scalability and run-time.Proposed sampler outperforms existing parallel samplers for Dirichlet process mixtures in terms of clustering accuracy. Dirichlet Process (DP) is commonly used as a non-parametric prior on mixture models. It has an adaptive model selection capability which is useful in clustering applications. Although exact inference is not tractable for this prior, Markov Chain Monte Carlo (MCMC) samplers have been used to approximate the target posterior distribution. These samplers often do not scale well, so recent studies have focused on improving run-time efficiency through parallelization. In this paper, we introduce a new sampling method for DP by combining the Chinese Restaurant Process (CRP) with the stick-breaking construction, allowing for parallelization through conditional independence at the data point level. The stick-breaking part uses an uncollapsed sampler, providing a high level of parallelization, while the CRP part uses a collapsed sampler, allowing more accurate clustering. We show that this partially collapsed Gibbs sampler has significant advantages over the collapsed-only version in terms of scalability. We also provide results on real-world data sets that compare the proposed inference algorithm favorably against recently introduced parallel Dirichlet process samplers in terms of F1 scores, while maintaining comparable run-time performance.
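The CRP half of the construction can be illustrated with a plain (non-parallel) table-assignment draw: each point joins an existing cluster with probability proportional to its size, or opens a new one with probability proportional to the concentration alpha. This is the textbook prior draw, not the paper's partially collapsed sampler.

import numpy as np

def crp_assignments(n, alpha, rng=np.random.default_rng(0)):
    # Returns a cluster label for each of n points drawn from a CRP(alpha).
    tables, z = [], []
    for _ in range(n):
        p = np.array(tables + [alpha], float)
        t = rng.choice(len(p), p=p / p.sum())
        if t == len(tables):
            tables.append(1)      # open a new table
        else:
            tables[t] += 1        # join an existing table
        z.append(int(t))
    return z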
- Published
- 2017
50. A global-local affinity matrix model via EigenGap for graph-based subspace clustering
- Author
-
Junbin Gao, Daming Shi, Dansong Cheng, and Jun Wang
- Subjects
Clustering high-dimensional data ,Mathematical optimization ,020206 networking & telecommunications ,02 engineering and technology ,Spectral clustering ,Eigengap ,Artificial Intelligence ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Graph (abstract data type) ,020201 artificial intelligence & image processing ,Pairwise comparison ,Computer Vision and Pattern Recognition ,Cluster analysis ,Algorithm ,Software ,Subspace topology ,Eigenvalues and eigenvectors ,Mathematics - Abstract
We propose a Global-Local Affinity Matrix Model for graph-based subspace clustering.We propose a criterion called Fractional Eigenvalues Sum (FEVS) for the global scheme.Our proposed model is solved by the Alternating Direction Method (ADM).We evaluate our proposed model on low-dimensional data.The GLAM model has excellent performance on face clustering and motion segmentation. In this paper, we address the spectral clustering problem by effectively constructing an affinity matrix with a large EigenGap. Although a faultless block-diagonal structure is highly desirable for accurate spectral clustering, a relaxed block-diagonal affinity matrix with a large EigenGap is more effective and easier to obtain. A global EigenGap scheme is proposed by utilizing the Fractional Eigenvalues Sum (FEVS) penalty, which maximizes the top eigenvalues and minimizes the residual. The closed-form solution of the FEVS term and the proximity term is also presented. We then propose a Global-Local Affinity Matrix model that integrates the global EigenGap scheme with a local pairwise distance measure for graph construction. Furthermore, we combine state-of-the-art subspace recovery methods such as LRR and RSIM with our proposed model to learn an effective affinity matrix for high-dimensional data. To the best of our knowledge, this is the first research that attempts to pursue such a relaxed block-diagonal structure with a large EigenGap. Extensive experiments on face clustering and motion segmentation clearly demonstrate the significant advantages of the novel methods.
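The EigenGap quantity itself is easy to state: for an affinity matrix whose normalized-Laplacian spectrum has k near-zero eigenvalues, the gap after the k-th smallest eigenvalue signals a near block-diagonal structure with k clusters. A minimal sketch:

import numpy as np

def eigengap(affinity, k):
    # affinity: symmetric non-negative n x n matrix; returns the gap
    # between the k-th and (k+1)-th smallest normalized-Laplacian eigenvalues.
    d = affinity.sum(1) + 1e-12
    L = np.eye(len(affinity)) - affinity / np.sqrt(np.outer(d, d))
    vals = np.sort(np.linalg.eigvalsh(L))
    return vals[k] - vals[k - 1]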
- Published
- 2017