20,041 results
Search Results
152. Comparison of different image denoising algorithms for Chinese calligraphy images
- Author
-
Zhi-Hong Li, Ling-Ying Hou, Zhi-Biao Li, Han Huang, and Zhi-Kai Huang
- Subjects
business.industry ,Cognitive Neuroscience ,Wiener filter ,Facsimile ,020207 software engineering ,02 engineering and technology ,Non-local means ,Computer Science Applications ,Rubbing ,symbols.namesake ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,020201 artificial intelligence & image processing ,Computer vision ,Video denoising ,Artificial intelligence ,Noise (video) ,Bilateral filter ,business ,Algorithm ,Image restoration ,Mathematics - Abstract
Rubbing is one of the most universal and perhaps the oldest of the techniques used in printmaking. A carefully made rubbing provides an accurate, full-scale facsimile of the surface reproduced. However, many rubbings have been destroyed or lack a reliable means of identification, while others contain large white backgrounds or have become illegible due to erosion. In order to interpret these images correctly, image restoration techniques are employed. Image denoising is one of the important fields in the restoration arena, but a great challenge in image denoising is how to preserve the edges and fine details of a rubbing image while reducing the noise. This paper presents a comprehensive comparative study of image denoising techniques relying on the Anisotropic Diffusion filter, the Wiener filter, TV (Total Variation), NLM (Non-Local Means), and Bilateral filtering. A quantitative measure of comparison is provided by the PSNR, MSE, SNR, UQI and SSIM of the image. Finally, the paper analyzes the denoising effect of each algorithm on rubbings and points out their advantages and disadvantages in application.
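As a rough illustration of the kind of comparison the abstract describes, the sketch below (a minimal example assuming scikit-image and SciPy; a standard grayscale test image stands in for an actual rubbing) applies several of the listed filters and scores them with PSNR and SSIM.

```python
# Hedged sketch: compare a few of the listed denoising filters on a noisy
# grayscale image and score them with PSNR / SSIM.
import numpy as np
from scipy.signal import wiener
from skimage import data, img_as_float
from skimage.util import random_noise
from skimage.restoration import denoise_tv_chambolle, denoise_nl_means, denoise_bilateral
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

clean = img_as_float(data.camera())              # stand-in for a rubbing image
noisy = random_noise(clean, mode='gaussian', var=0.01)

candidates = {
    'Wiener':    wiener(noisy, mysize=5),
    'TV':        denoise_tv_chambolle(noisy, weight=0.1),
    'NLM':       denoise_nl_means(noisy, h=0.08, fast_mode=True),
    'Bilateral': denoise_bilateral(noisy, sigma_color=0.1, sigma_spatial=3),
}
for name, out in candidates.items():
    out = np.clip(out, 0.0, 1.0)
    psnr = peak_signal_noise_ratio(clean, out, data_range=1.0)
    ssim = structural_similarity(clean, out, data_range=1.0)
    print(f'{name:9s}  PSNR={psnr:5.2f} dB  SSIM={ssim:.3f}')
```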
- Published
- 2016
153. Construction method of concept lattice based on improved variable precision rough set
- Author
-
Zhong Chen, Shengwu Xiong, and Ruiling Zhang
- Subjects
Cognitive Neuroscience ,05 social sciences ,Improved algorithm ,050301 education ,02 engineering and technology ,computer.software_genre ,Computer Science Applications ,Construction method ,Artificial Intelligence ,Lattice (order) ,0202 electrical engineering, electronic engineering, information engineering ,Formal concept analysis ,Preprocessor ,020201 artificial intelligence & image processing ,Lattice Miner ,Data mining ,Rough set ,0503 education ,Algorithm ,computer ,Variable precision ,Mathematics - Abstract
This paper mainly focuses on how to construct concept lattices effectively and efficiently based on an improved variable precision rough set. On the basis of preprocessing the formal context, an algorithm is proposed that determines the value range of the variable precision parameter β according to the approximate classification quality. An improved β-upper and lower distribution attribute reduction algorithm is also proposed based on the improved variable precision rough set; it can be used for attribute reduction on the original data of the concept lattice and for eliminating redundant knowledge or noise in the formal context. For the reduced formal context, the paper combines the concept construction algorithm seamlessly with an improved rule acquisition algorithm and proposes a novel approach to concept lattice construction based on the improved variable precision rough set. Finally, a concept lattice generation prototype system is developed, comprehensive experiments are performed, and the effectiveness of the improved algorithm is demonstrated by the experimental results.
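As a minimal illustration of the variable precision idea the abstract builds on (a sketch only, using the common inclusion-degree definition; the paper's β-distribution reduction and lattice construction are not reproduced), the β-lower and β-upper approximations of a target set under an equivalence relation can be computed as follows.

```python
# Hedged sketch of variable precision rough set approximations (common
# inclusion-degree form; not the paper's full reduction/lattice algorithm).
from collections import defaultdict

def vprs_approximations(universe, equiv_class_of, target, beta):
    """beta in (0.5, 1]; lower: inclusion degree >= beta, upper: degree > 1 - beta."""
    blocks = defaultdict(set)
    for x in universe:
        blocks[equiv_class_of(x)].add(x)
    lower, upper = set(), set()
    for block in blocks.values():
        degree = len(block & target) / len(block)   # inclusion degree of the block in target
        if degree >= beta:
            lower |= block
        if degree > 1.0 - beta:
            upper |= block
    return lower, upper

# toy example: objects grouped by an attribute value
U = set(range(10))
X = {0, 1, 2, 3, 4, 6}                 # target concept
lo, up = vprs_approximations(U, lambda x: x // 3, X, beta=0.6)
print(sorted(lo), sorted(up))
```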
- Published
- 2016
154. NMF-Based Spectral Reflectance Estimation From Image Pairs Including Near-Infrared Components
- Author
-
Takahiro Ogawa, Yuta Igarashi, and Miki Haseyama
- Subjects
spectral reflectance estimation ,green plant detection ,near-infrared (NIR) components ,02 engineering and technology ,01 natural sciences ,Non-negative matrix factorization ,010309 optics ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Computer vision ,Electrical and Electronic Engineering ,Image sensor ,Mathematics ,Spectral signature ,business.industry ,Color correction ,Filter (signal processing) ,Inverse problem ,nonnegative matrix factorization (NMF) ,Spectral sensitivity ,RGB color model ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Algorithm - Abstract
In this paper, a novel spectral reflectance estimation method from image pairs including near-infrared (NIR) components based on nonnegative matrix factorization (NMF) is presented. The proposed method enables estimation of spectral reflectance from only two kinds of input images: 1) an image including both visible light components and NIR components and 2) an image including only NIR components. These two images can be easily obtained using a general digital camera without an infrared-cut filter and one with a visible light-cut filter, respectively. Since RGB values of these images are obtained according to spectral sensitivity of the image sensor, the spectrum power distribution of the light source and the spectral reflectance, we have to solve the inverse problem for estimating the spectral reflectance. Therefore, our method approximates spectral reflectance by a linear combination of several bases obtained by applying NMF to a known spectral reflectance data set. Then estimation of the optimal solution to the above problem becomes feasible based on this approximation. In the proposed method, NMF is used for obtaining the bases used in this approximation from a characteristic that the spectral reflectance is a nonnegative component. Furthermore, the proposed method realizes simple approximation of the spectrum power distribution of the light source with direct and scattered light components. Therefore, estimation of spectral reflectance becomes feasible using the spectrum power distribution of the light source in our method. In the last part of this paper, we show some simulation results to verify the performance of the proposed method. The effectiveness of the proposed method is also shown using the method for several applications that are closely related to spectral reflectance estimation. Although our method is based on a simple scheme, it is the first method that realizes the estimation of the spectral reflectance and the spectrum power distribution of the light source from the above two kinds of images taken by general digital cameras and provides breakthroughs to several fundamental applications.
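A compressed sketch of the basis-plus-nonnegative-fit idea the abstract relies on (assuming synthetic reflectance data; the paper's NIR image model, sensor sensitivities and light-source estimation are not reproduced): learn nonnegative bases with NMF, then approximate a new reflectance as a nonnegative combination of them.

```python
# Hedged sketch: NMF bases from a (synthetic) reflectance data set, then a
# nonnegative least-squares fit of a new spectrum onto those bases.
# Only the basis/approximation step is illustrated, not the full NIR pipeline.
import numpy as np
from sklearn.decomposition import NMF
from scipy.optimize import nnls

rng = np.random.default_rng(0)
wavelengths = 61                                    # e.g. 400..700 nm in 5 nm steps
dataset = rng.random((200, wavelengths))            # stand-in for known reflectances

model = NMF(n_components=8, init='nndsvda', max_iter=500, random_state=0)
model.fit(dataset)
bases = model.components_                           # shape (8, wavelengths), nonnegative

target = rng.random(wavelengths)                    # spectrum to approximate
coeffs, _ = nnls(bases.T, target)                   # nonnegative combination weights
approx = bases.T @ coeffs
print('reconstruction RMSE:', np.sqrt(np.mean((approx - target) ** 2)))
```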
- Published
- 2016
155. An extension of rough fuzzy set
- Author
-
Junhai Zhai, Yao Zhang, and Sufang Zhang
- Subjects
Statistics and Probability ,0209 industrial biotechnology ,Fuzzy classification ,Fuzzy set ,Dominance-based rough set approach ,General Engineering ,02 engineering and technology ,Fuzzy subalgebra ,Type-2 fuzzy sets and systems ,020901 industrial engineering & automation ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Fuzzy set operations ,Fuzzy number ,020201 artificial intelligence & image processing ,Rough set ,Algorithm ,Mathematics - Abstract
In the rough fuzzy set model, an equivalence relation is used to approximate the target concept, which is a fuzzy set. In this paper, we extend the equivalence relation in the rough fuzzy set to a tolerance relation and propose an extended model named the tolerance rough fuzzy set. Furthermore, the properties of the extended model are investigated, and proofs of the properties are given in the paper. The advantage of the extended model is that it can deal with continuous-valued attributes directly and does not require a discretization step.
- Published
- 2016
156. Image encryption using 2D Logistic-adjusted-Sine map
- Author
-
Zhongyun Hua and Yicong Zhou
- Subjects
Security analysis ,Information Systems and Management ,Theoretical computer science ,business.industry ,Ergodicity ,Chaotic ,020207 software engineering ,Cryptography ,02 engineering and technology ,Encryption ,Computer Science Applications ,Theoretical Computer Science ,Image (mathematics) ,Artificial Intelligence ,Control and Systems Engineering ,Probabilistic encryption ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Confusion and diffusion ,business ,Algorithm ,Software ,Computer Science::Cryptography and Security ,Mathematics - Abstract
With complex properties of ergodicity, unpredictability and sensitivity to initial states, chaotic systems are widely used in cryptography. This paper proposes a two-dimensional Logistic-adjusted-Sine map (2D-LASM). Performance evaluations show that it has better ergodicity and unpredictability, and a wider chaotic range, than many existing chaotic maps. Using the proposed map, this paper further designs a 2D-LASM-based image encryption scheme (LAS-IES). The principles of diffusion and confusion are strictly fulfilled, and a mechanism of adding random values to the plain-image is designed to enhance the security of the cipher-image. Simulation results and security analysis show that LAS-IES can efficiently encrypt different kinds of images into random-like ones with a strong ability to resist various security attacks.
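For orientation, a small sketch of iterating a 2D Logistic-adjusted-Sine style map and using the resulting chaotic sequences to permute and mask pixel values. The map equations and parameters below follow a commonly cited 2D-LASM form but should be treated as assumptions; the paper's full LAS-IES scheme, including the random-value insertion step, is not reproduced here.

```python
# Hedged sketch of a 2D-LASM-style chaotic sequence driving a toy
# permutation + XOR masking step. Map form and parameters are assumptions.
import numpy as np

def lasm_sequence(n, x=0.3, y=0.4, mu=0.9):
    xs, ys = np.empty(n), np.empty(n)
    for i in range(n):
        x = np.sin(np.pi * mu * (y + 3.0) * x * (1.0 - x))   # assumed 2D-LASM update
        y = np.sin(np.pi * mu * (x + 3.0) * y * (1.0 - y))
        xs[i], ys[i] = x, y
    return xs, ys

img = np.arange(64, dtype=np.uint8).reshape(8, 8)             # toy "plain-image"
xs, ys = lasm_sequence(img.size)
perm = np.argsort(xs)                                         # confusion: pixel permutation
mask = (np.abs(ys) * 255).astype(np.uint8)                    # diffusion-like keystream
cipher = (img.ravel()[perm] ^ mask).reshape(img.shape)
print(cipher)
```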
- Published
- 2016
157. An interval-valued inversion of the non-additive interval-valued F-transform: Use for upsampling a signal
- Author
-
Olivier Strauss, Fares Graba, Image & Interaction (ICAR), Laboratoire d'Informatique de Robotique et de Microélectronique de Montpellier (LIRMM), and Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM)-Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM)
- Subjects
Mathematical optimization ,Logic ,Non-uniform discrete Fourier transform ,Convex set ,Deconvolution ,02 engineering and technology ,Fuzzy logic ,Upsampling ,Capacities ,[INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Continuous signal ,Sampling ,S transform ,Mathematics ,Fuzzy transform ,Intervals ,020206 networking & telecommunications ,16. Peace & justice ,Coherent sampling ,020201 artificial intelligence & image processing ,Algorithm ,Summative and maxitive kernels - Abstract
As shown in a recent paper, the fuzzy transform can be reinterpreted as a sampling process with an ill-known sampling kernel (or point spread function). This reinterpretation leads to an interval-valued direct fuzzy transform: the non-additive fuzzy transform. It provides the convex set of all sampled values that should have been obtained by sampling a continuous signal with a convex set of sampling kernels. Its dual transform, the non-additive inverse fuzzy transform, allows computation of an interval-valued reconstruction of a sampled signal, which is the set of all values that would have been obtained by interpolating or approximating this sampled signal with a convex set of reconstruction kernels. In this paper, we consider using a new interval-valued inversion process within this framework to upsample a signal, i.e. reconstruct a high-resolution signal from a low-resolution signal, in a semi-blind context. As illustrated in the experimental part of the paper, the main advantage of this method, compared to previous techniques, is its inherent robustness with respect to modeling the sampling process.
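As background for the transform being inverted here, a compact sketch of the ordinary (additive) discrete fuzzy transform with a uniform triangular partition, together with its standard inverse as a crude reconstruction. The interval-valued, non-additive version proposed in the paper is not reproduced.

```python
# Hedged sketch: classical direct/inverse discrete F-transform with a uniform
# triangular fuzzy partition (additive version only).
import numpy as np

def triangular_partition(n_samples, n_nodes):
    """Membership matrix A[k, j] of n_nodes triangular basic functions over n_samples points."""
    x = np.linspace(0.0, 1.0, n_samples)
    nodes = np.linspace(0.0, 1.0, n_nodes)
    h = nodes[1] - nodes[0]
    return np.clip(1.0 - np.abs(x[None, :] - nodes[:, None]) / h, 0.0, None)

def f_transform(signal, A):
    return (A @ signal) / A.sum(axis=1)                       # components F_k

def inverse_f_transform(components, A):
    return (A * components[:, None]).sum(axis=0) / A.sum(axis=0)

t = np.linspace(0.0, 1.0, 200)
signal = np.sin(2 * np.pi * 3 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
A = triangular_partition(t.size, 25)
F = f_transform(signal, A)
reconstruction = inverse_f_transform(F, A)
print('max reconstruction error:', np.max(np.abs(reconstruction - signal)))
```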
- Published
- 2016
158. Error bounds for joint detection and estimation of multiple unresolved target-groups
- Author
-
Xianghui Yuan, Feng Lian, Chongzhao Han, and Zhansheng Duan
- Subjects
020301 aerospace & aeronautics ,Mathematical optimization ,Applied Mathematics ,020206 networking & telecommunications ,02 engineering and technology ,Statistical power ,Euclidean distance ,Cardinality ,0203 mechanical engineering ,Computational Theory and Mathematics ,Artificial Intelligence ,Signal Processing ,Euclidean geometry ,Metric (mathematics) ,0202 electrical engineering, electronic engineering, information engineering ,Maximum a posteriori estimation ,A priori and a posteriori ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,Finite set ,Algorithm ,Mathematics - Abstract
Highlights: An error bound for JDE of multiple unresolved target-groups is derived via RFS. The bound is based on the OSPA distance rather than the Euclidean distance. The bound is discussed for the special cases when the group number is known or is at most one. Three examples are presented to verify the effectiveness of the bound. According to random finite set (RFS) and information inequality, this paper derives an error bound for joint detection and estimation (JDE) of multiple unresolved target-groups in the presence of clutter and missed detections. The JDE here refers to determining the number of unresolved target-groups and estimating their states. In order to obtain the results of this paper, the states of the unresolved target-groups are first modeled as a multi-Bernoulli RFS. The point cluster model proposed by Mahler is used to describe the observation likelihood of each group. Then, the error metric between the true and estimated state sets of the groups is defined by the optimal sub-pattern assignment (OSPA) distance rather than the usual Euclidean distance. The maximum a posteriori detection and unbiased estimation criteria are used in deriving the bound. Finally, we discuss some special cases of the bound when the number of unresolved target-groups is known a priori or is at most one. Example 1 shows the variation of the bound with respect to the probability of detection and the clutter density. Example 2 verifies the effectiveness of the bound by indicating the performance limitations of the cardinalized probability hypothesis density and cardinality balanced multi-target multi-Bernoulli filters for unresolved target-groups. Example 3 compares the bound of this paper with the (single-sensor) bound of [4] for the case of JDE of a single unresolved target-group. At present, this paper only addresses the static JDE problem of multiple unresolved target-groups. Our future work will study the recursive extension of the bound proposed in this paper to filtering problems by considering the group state evolutions.
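Since the bound is expressed through the OSPA metric rather than a Euclidean error, a small sketch of computing the OSPA distance between a true and an estimated state set may help fix ideas. This is the standard definition with cutoff c and order p; nothing specific to the paper's bound is implemented.

```python
# Hedged sketch: OSPA distance between two finite point sets (standard
# definition with cutoff c and order p), shown only to fix notation.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def ospa(X, Y, c=10.0, p=2):
    X, Y = np.atleast_2d(X), np.atleast_2d(Y)
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m == 0 or n == 0:
        return c
    if m > n:                                   # ensure m <= n
        X, Y, m, n = Y, X, n, m
    D = np.minimum(cdist(X, Y), c) ** p
    rows, cols = linear_sum_assignment(D)       # optimal sub-pattern assignment
    cost = D[rows, cols].sum() + (c ** p) * (n - m)
    return (cost / n) ** (1.0 / p)

truth = np.array([[0.0, 0.0], [5.0, 5.0]])
estimate = np.array([[0.2, -0.1], [4.5, 5.3], [20.0, 20.0]])
print('OSPA:', ospa(truth, estimate))
```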
- Published
- 2016
159. Introducing locally affine-invariance constraints into lunar surface image correspondence
- Author
-
Hong Qiao, Chuankai Liu, Zhiyong Liu, Yuren Zhang, and Xu Yang
- Subjects
Matching (graph theory) ,Cognitive Neuroscience ,media_common.quotation_subject ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,0211 other engineering and technologies ,02 engineering and technology ,Affine invariance ,Affine combination ,Discriminative model ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Computer vision ,Invariant (mathematics) ,Correspondence problem ,021101 geological & geomatics engineering ,media_common ,Mathematics ,business.industry ,Ambiguity ,Computer Science Applications ,Computer Science::Computer Vision and Pattern Recognition ,Outlier ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Algorithm - Abstract
This paper aims to solve the keypoint correspondence problem in lunar surface images, a typical correspondence task under point ambiguity. Point ambiguity may be caused by repetitive patterns, cluttered scenes, and outliers in the images, which makes the local descriptors less discriminative. In this paper we introduce locally affine-invariance constraints on graphs to tackle the keypoint correspondence problem under point ambiguity. The key idea is that each point can be represented with the affine combination of its neighbors. It is suitable for our problem because it is not only invariant to scale and rotational change, but also more resistant to outliers. Specifically, we introduce the locally affine-invariance constraints into the subgraph matching problem and the common subgraph matching problem. The locally affine-invariance constraint is not directly applicable on common subgraph matching due to its dependency on awareness of selected keypoints. This problem is approximately addressed by solving a series of reliable matching identification and rematching problems. In the experiments, we first apply the proposed method on standard graph matching datasets to evaluate its effectiveness on general correspondence problem under point ambiguity, and second validate the applicability on the lunar surface image dataset.
- Published
- 2016
160. Grid-based scan-to-map matching for accurate 2D map building
- Author
-
Lakshitha Dantanarayana, Gamini Dissanayake, Tomonari Furukawa, and Kunjin Ryu
- Subjects
0209 industrial biotechnology ,Matching (statistics) ,Kullback–Leibler divergence ,business.industry ,02 engineering and technology ,Map matching ,Grid ,Grid based ,Computer Science Applications ,Human-Computer Interaction ,Normal distribution ,Industrial Engineering & Automation ,020901 industrial engineering & automation ,Hardware and Architecture ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,Grid reference ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,Representation (mathematics) ,business ,Algorithm ,Software ,Mathematics - Abstract
© 2016 Taylor & Francis and The Robotics Society of Japan. This paper presents a grid-based scan-to-map matching technique for accurate 2D map building. At every acquisition of a new scan, the proposed technique matches the new scan to the previous scan similarly to the conventional techniques, but further corrects the error by matching the new scan to the globally defined map. In order to achieve best scan-to-map matching at each acquisition, the map is represented as a grid map with multiple normal distributions (NDs) in each cell, which is one contribution of this paper. Additionally, the new scan is also represented by NDs, developing a novel ND-to-ND matching technique. This ND-to-ND matching technique has significant potential in the enhancement of the global matching as well as the computational efficiency. Experimental results first show that the proposed technique accumulates very small errors after consecutive matchings and identifies that the scans are matched better to the map with the multi-ND representation than one ND representation. The proposed technique is then tested in a number of large indoor environments, including public domain datasets and the applicability to real world problems is demonstrated.
- Published
- 2016
161. Generation of fiducial marker dictionaries using Mixed Integer Linear Programming
- Author
-
Rafael Muñoz-Salinas, F. J. Madrid-Cuevas, Rafael Medina-Carnicer, and Sergio Garrido-Jurado
- Subjects
0209 industrial biotechnology ,Mathematical optimization ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Binary number ,02 engineering and technology ,Square (algebra) ,020901 industrial engineering & automation ,Artificial Intelligence ,Robustness (computer science) ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,State (computer science) ,Error detection and correction ,Fiducial marker ,Algorithm ,Integer programming ,Pose ,Software ,Mathematics - Abstract
Square-based fiducial markers are one of the most popular approaches for camera pose estimation due to their fast detection and robustness. In order to maximize their error correction capabilities, it is required to use an inner binary codification with a large inter-marker distance. This paper proposes two Mixed Integer Linear Programming (MILP) approaches to generate configurable square-based fiducial marker dictionaries maximizing their inter-marker distance. The first approach guarantees the optimal solution; however, it can only be applied to relatively small dictionaries and numbers of bits, since the computing times are too long for many situations. The second approach is an alternative formulation to obtain suboptimal dictionaries within restricted time, achieving results that still surpass the current state-of-the-art methods significantly. Highlights: The paper proposes two methods to obtain fiducial markers based on the MILP paradigm. The first model guarantees optimality in terms of inter-marker distance. The second model generates suboptimal markers within restricted time. The markers generated allow the correction of a higher number of erroneous bits.
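To make the "inter-marker distance" objective concrete, here is a small sketch of how such a distance is typically evaluated for square binary markers: the Hamming distance between two markers minimized over the four rotations of one of them. The MILP generation models themselves are not reproduced, and the treatment of a marker's distance to its own rotations is omitted.

```python
# Hedged sketch: inter-marker distance for square binary fiducial markers,
# i.e. the Hamming distance minimized over the four 90-degree rotations.
import numpy as np
from itertools import combinations

def marker_distance(a, b):
    return min(int(np.sum(np.rot90(a, k) != b)) for k in range(4))

def dictionary_distance(markers):
    """Minimum pairwise inter-marker distance of a candidate dictionary."""
    return min(marker_distance(a, b) for a, b in combinations(markers, 2))

rng = np.random.default_rng(1)
dictionary = [rng.integers(0, 2, size=(6, 6)) for _ in range(4)]   # toy 6x6 markers
print('inter-marker distance of toy dictionary:', dictionary_distance(dictionary))
```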
- Published
- 2016
162. A mean approximation based bidimensional empirical mode decomposition with application to image fusion
- Author
-
Jianjia Pan and Yuan Yan Tang
- Subjects
Image fusion ,Mathematical optimization ,Delaunay triangulation ,Applied Mathematics ,Computation ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Centroid ,020206 networking & telecommunications ,Image processing ,02 engineering and technology ,Hilbert–Huang transform ,Maxima and minima ,Computational Theory and Mathematics ,Artificial Intelligence ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Decomposition method (constraint satisfaction) ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,Algorithm ,Mathematics - Abstract
Empirical mode decomposition (EMD) is an adaptive decomposition method, which is widely used in time-frequency analysis. As a bidimensional extension of EMD, bidimensional empirical mode decomposition (BEMD) presents many useful applications in image processing and computer vision. In this paper, we define the mean points in BEMD 'sifting' processing as the centroid points of neighbouring extrema points in a Delaunay triangulation and propose using this mean approximation instead of the envelope mean in 'sifting'. The proposed method improves the decomposition result and reduces the average computation time of 'sifting' processing. Furthermore, a BEMD-based image fusion approach is presented in this paper. Experimental results show our method can achieve more orthogonal and physically meaningful components and more effective results in the image fusion application. Highlights: We define the mean points in BEMD 'sifting' processing as centroid points of neighbouring extrema points in a Delaunay triangulation. Mean approximation is used instead of the envelope mean in BEMD 'sifting' processing. The proposed method improves the decomposition result and reduces the average computation time of 'sifting' processing. A BEMD-based image fusion approach is proposed.
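A rough sketch of the mean-approximation step described above, under the assumption that local extrema have already been detected: the extrema are triangulated with Delaunay, and each pixel's "mean" is taken as the average value of the vertices of its enclosing triangle. Extrema detection, boundary handling and the full BEMD sifting loop are omitted.

```python
# Hedged sketch of the mean-approximation idea: Delaunay-triangulate detected
# extrema and take, at each pixel, the average value of the enclosing triangle's
# vertices. Not the full BEMD sifting procedure.
import numpy as np
from scipy.spatial import Delaunay

def mean_approximation(shape, extrema_xy, extrema_values):
    tri = Delaunay(extrema_xy)
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    pixels = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    simplex = tri.find_simplex(pixels)                  # enclosing triangle per pixel (-1 outside hull)
    mean = np.full(pixels.shape[0], np.nan)
    inside = simplex >= 0
    verts = tri.simplices[simplex[inside]]              # (n_inside, 3) vertex indices
    mean[inside] = extrema_values[verts].mean(axis=1)   # centroid value of neighbouring extrema
    return mean.reshape(shape)

rng = np.random.default_rng(0)
pts = rng.uniform(0, 63, size=(40, 2))                  # toy extrema locations (x, y)
vals = rng.normal(size=40)                              # toy extrema values
m = mean_approximation((64, 64), pts, vals)
print('fraction of pixels covered by the mean surface:', np.isfinite(m).mean())
```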
- Published
- 2016
163. A Fast Superresolution Image Reconstruction Algorithm
- Author
-
Mário Sarcinelli Filho, Evandro Ottoni Teatini Salles, and Marcelo Oliveira Camponez
- Subjects
General Computer Science ,business.industry ,Computation ,Approximation algorithm ,020206 networking & telecommunications ,02 engineering and technology ,Iterative reconstruction ,Bayesian inference ,Reduction (complexity) ,Laplace's method ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Algorithm ,Image resolution ,Mathematics ,Feature detection (computer vision) - Abstract
In a previous paper we proposed two new superresolution image reconstruction algorithms based on a non-parametric numerical integration Bayesian inference method, the Integrated Nested Laplace Approximation (INLA). Despite achieving superior image reconstruction results compared to other state-of-the-art methods, such algorithms manipulate huge (although sparse) matrices. Therefore, the demand for memory and computation is high. In this paper, we revise those algorithms, solving these problems by relaxing one equation in the original mathematical model and embedding the high-resolution (HR) image in a torus. The result is a meaningful reduction in the computational cost of the algorithms and in the dimensions of the matrices handled (from n²-by-n² to n-by-n, the size of the HR image). The outcome is a new algorithm that is much faster than its previous version and other meaningful state-of-the-art algorithms.
- Published
- 2016
164. Constrained min–max optimization via the improved constraint-activated differential evolution with escape vectors
- Author
-
Shu-Mei Guo, Chin-Chang Yang, Jason Sheng Hong Tsai, and Pang-Han Hsu
- Subjects
0209 industrial biotechnology ,Mathematical optimization ,Optimization problem ,General Engineering ,Constrained optimization ,02 engineering and technology ,Computer Science Applications ,Constraint (information theory) ,020901 industrial engineering & automation ,Artificial Intelligence ,Differential evolution ,0202 electrical engineering, electronic engineering, information engineering ,Test functions for optimization ,Systems design ,020201 artificial intelligence & image processing ,Differential (infinitesimal) ,Algorithm ,Premature convergence ,Mathematics - Abstract
Highlights: A new scheme to solve the issue of premature convergence of differential evolution. New test functions to evaluate constrained min-max optimization algorithms (CMMOA). An improved CMMOA achieving a satisfactory success rate. In system design, the best system designed under a simple experimental environment may not be suitable for application in the real world if dramatic changes caused by the uncertainties contained in the real world are considered. To deal with the problems caused by uncertainties, designers should seek the most robust solution, which can be obtained by constrained min-max optimization algorithms. In this paper, a scheme of generating escape vectors is proposed to solve the problem of premature convergence of differential evolution. After applying the proposed scheme to the constrained min-max optimization algorithm, the performance of the algorithm is greatly improved. To evaluate the performance of constrained min-max optimization algorithms, more complex test problems are also proposed in this paper. Experimental results show that the improved constrained min-max optimization algorithm is able to achieve a satisfactory success rate on all considered test problems under limited accuracy.
- Published
- 2016
165. Discussion of the community detection algorithm based on statistical inference
- Author
-
Liangxun Shuo and Bianfang Chai
- Subjects
Computer science ,Machine learning ,computer.software_genre ,01 natural sciences ,010305 fluids & plasmas ,Frequentist inference ,0103 physical sciences ,Statistical inference ,Statistical theory ,lcsh:Science ,lcsh:Science (General) ,010306 general physics ,Community detection ,business.industry ,Probabilistic model ,Statistical model ,Computer Science, Software Engineering ,Fiducial inference ,lcsh:Q ,Artificial intelligence ,business ,computer ,Algorithm ,Mathematics ,lcsh:Q1-390 - Abstract
Summary: This paper discusses community detection algorithms based on statistical inference, examining the characteristics of each model, analyzing each algorithm and how it is solved, and outlining the difficulties, open problems and directions for development. The paper attempts to analyze and summarize how these algorithms and their solution strategies have evolved. Some improvements to existing community detection algorithms based on statistical inference are also given.
- Published
- 2016
166. Branch pipe routing based on 3D connection graph and concurrent ant colony optimization algorithm
- Author
-
Dan Jiang, Yanfeng Qu, and Qingyan Yang
- Subjects
Optimal design ,0209 industrial biotechnology ,Mathematical optimization ,Optimization problem ,Ant colony optimization algorithms ,System optimization ,02 engineering and technology ,Industrial and Manufacturing Engineering ,Pipeline transport ,020901 industrial engineering & automation ,Pipe routing ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Graph (abstract data type) ,020201 artificial intelligence & image processing ,Algorithm ,Software ,Branch point ,Mathematics - Abstract
Pipe routing, in particular the routing of branch pipes with multiple terminals, has an important influence on product performance and reliability. This paper develops a new rectilinear branch pipe routing approach for automatic generation of optimal rectilinear branch pipe routes in constrained spaces. Firstly, this paper presents a new 3D connection graph, which is constructed by extending a new 2D connection graph. The new 2D connection graph is constructed according to five criteria in discrete Manhattan spaces. The 3D connection graph can model the 3D constrained layout space efficiently. The length of pipelines and the number of bends are modeled as the optimization goal, considering the number of branch points and three types of engineering constraints. The three types of engineering constraints are modeled by this 3D graph and a potential value. Secondly, a new concurrent Max–Min Ant System optimization algorithm, which adopts a concurrent search strategy and a dynamic update mechanism, is used to solve the rectilinear branch pipe routing optimization problem. This algorithm improves the search efficiency in the 3D constrained layout space. Numerical comparisons with other current approaches in the literature demonstrate the efficiency and effectiveness of the proposed approach. Finally, a case study of pipe routing for aero-engines is conducted to validate the approach.
- Published
- 2016
167. Multiple straight-line fitting using a Bayes factor
- Author
-
Leonardo Romero, Cuauhtemoc Gomez, and Carlos Lara-Alvarez
- Subjects
Statistics and Probability ,Observational error ,business.industry ,Applied Mathematics ,Bayesian probability ,0211 other engineering and technologies ,020302 automobile design & engineering ,Robotics ,Bayes factor ,02 engineering and technology ,Machine learning ,computer.software_genre ,Computer Science Applications ,Set (abstract data type) ,0203 mechanical engineering ,Artificial intelligence ,business ,computer ,Algorithm ,021101 geological & geomatics engineering ,Mathematics - Abstract
This paper introduces a Bayesian approach to the problem of fitting multiple straight lines to a set of 2D points. Whereas other approaches use many arbitrary parameters and threshold values, the proposed criterion uses only the parameters of the measurement errors. Models with multiple lines are useful in many applications; this paper analyzes the performance of the new approach on a classical problem in robotics: finding a map of lines from laser measurements. Tests show that the Bayesian approach obtains reliable models.
- Published
- 2016
168. Semiparametric Bayesian inference on generalized linear measurement error models
- Author
-
Nian-Sheng Tang, De-Wang Li, and An-Min Tang
- Subjects
Statistics and Probability ,Kullback–Leibler divergence ,Bayesian probability ,Multivariate normal distribution ,Bayesian inference ,01 natural sciences ,010104 statistics & probability ,symbols.namesake ,0504 sociology ,Statistics::Methodology ,0101 mathematics ,Cook's distance ,Mathematics ,business.industry ,05 social sciences ,050401 social sciences methods ,Pattern recognition ,Variable-order Bayesian network ,Statistics::Computation ,symbols ,Artificial intelligence ,Statistics, Probability and Uncertainty ,Bayesian linear regression ,business ,Algorithm ,Gibbs sampling - Abstract
The classical assumption in generalized linear measurement error models (GLMEMs) is that measurement errors (MEs) for covariates are distributed as a fully parametric distribution such as the multivariate normal distribution. This paper uses a centered Dirichlet process mixture model to relax the fully parametric distributional assumption of MEs, and develops a semiparametric Bayesian approach to simultaneously obtain Bayesian estimations of parameters and covariates subject to MEs by combining the stick-breaking prior and the Gibbs sampler together with the Metropolis–Hastings algorithm. Two Bayesian case-deletion diagnostics are proposed to identify influential observations in GLMEMs via the Kullback–Leibler divergence and Cook’s distance. Computationally feasible formulae for evaluating Bayesian case-deletion diagnostics are presented. Several simulation studies and a real example are used to illustrate our proposed methodologies.
- Published
- 2016
169. Minimum cost subgraph matching using a binary linear program
- Author
-
Sébastien Adam, Julien Lerouge, Pierre Héroux, Maroua Hammami, Equipe Apprentissage (DocApp - LITIS), Laboratoire d'Informatique, de Traitement de l'Information et des Systèmes (LITIS), Institut national des sciences appliquées Rouen Normandie (INSA Rouen Normandie), Institut National des Sciences Appliquées (INSA)-Normandie Université (NU)-Institut National des Sciences Appliquées (INSA)-Normandie Université (NU)-Université de Rouen Normandie (UNIROUEN), Normandie Université (NU)-Université Le Havre Normandie (ULH), Normandie Université (NU)-Institut national des sciences appliquées Rouen Normandie (INSA Rouen Normandie), and Normandie Université (NU)
- Subjects
Factor-critical graph ,Mathematical optimization ,Matching (graph theory) ,Linear programming ,Substitution (logic) ,Subgraph isomorphism problem ,Binary number ,02 engineering and technology ,01 natural sciences ,[INFO.INFO-TT]Computer Science [cs]/Document and Text Processing ,Artificial Intelligence ,0103 physical sciences ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Graph (abstract data type) ,020201 artificial intelligence & image processing ,Induced subgraph isomorphism problem ,Computer Vision and Pattern Recognition ,010306 general physics ,Algorithm ,Software ,MathematicsofComputing_DISCRETEMATHEMATICS ,Mathematics - Abstract
Minimum Cost Subgraph Matching (MCSM) is an adaptation of Graph Edit Distance. The paper proposes a Binary Linear Program that solves the MCSM problem. The proposed formulation is very general and can tackle a large range of graphs. MCSM is more efficient and faster than a Substitution Only Tolerant Subgraph Matching (SOTSM). This paper presents a binary linear program for the Minimum Cost Subgraph Matching (MCSM) problem. MCSM is an extension of the subgraph isomorphism problem where the matching tolerates substitutions of attributes and modifications of the graph structure. The objective function proposed in the formulation can take into account rich attributes (e.g. vectors mixing nominal and numerical values) on both vertices and edges. Some experimental results obtained on an application-dependent dataset concerning the spotting of symbols on technical drawings show that the approach obtains better performance than a previous approach which is only substitution-tolerant.
- Published
- 2016
170. Nonsmooth Neural Network for Convex Time-Dependent Constraint Satisfaction Problems
- Author
-
Mauro Forti, Mauro Di Marco, Luca Pancioni, and Paolo Nistri
- Subjects
0209 industrial biotechnology ,State variable ,Linear programming ,Computer Networks and Communications ,finite-time convergence ,Context (language use) ,02 engineering and technology ,Subderivative ,020901 industrial engineering & automation ,Artificial Intelligence ,time-dependent (TD) constraints ,0202 electrical engineering, electronic engineering, information engineering ,Applied mathematics ,Constraint satisfaction problem ,Mathematics ,Artificial neural network ,Convex functions ,nonsmooth (NS) neural networks ,Regular polygon ,subdifferential ,Computer Science Applications ,Convex functions, finite-time convergence, nonsmooth (NS) neural networks, subdifferential, time-dependent (TD) constraints ,020201 artificial intelligence & image processing ,Convex function ,Algorithm ,Software - Abstract
This paper introduces a nonsmooth (NS) neural network that is able to operate in a time-dependent (TD) context and is potentially useful for solving some classes of NS-TD problems. The proposed network is named nonsmooth time-dependent network (NTN) and is an extension to a TD setting of a previous NS neural network for programming problems. Suppose $C(t)$, $t \ge 0$, is a nonempty TD convex feasibility set defined by TD inequality constraints. The constraints are in general NS (nondifferentiable) functions of the state variables and time. NTN is described by the subdifferential with respect to the state variables of an NS-TD barrier function and a vector field corresponding to the unconstrained dynamics. This paper shows that for suitable values of the penalty parameter, the NTN dynamics displays two main phases. In the first phase, any solution of NTN not starting in $C(0)$ at $t=0$ is able to reach the moving set $C(\cdot)$ in finite time $t_{h}$, whereas in the second phase, the solution tracks the moving set, i.e., it stays within $C(t)$ for all subsequent times $t \ge t_{h}$. NTN is thus able to find an exact feasible solution in finite time and also to provide an exact feasible solution for subsequent times. This new and peculiar dynamics displayed by NTN is potentially useful for addressing some significant TD signal processing tasks. As an illustration, this paper discusses a number of examples where NTN is applied to the solution of NS-TD convex feasibility problems.
- Published
- 2016
171. Ensemble clustering using factor graph
- Author
-
Jian-Huang Lai, Dong Huang, and Chang-Dong Wang
- Subjects
Mathematical optimization ,Fuzzy clustering ,Optimization problem ,Correlation clustering ,Constrained clustering ,020206 networking & telecommunications ,02 engineering and technology ,Determining the number of clusters in a data set ,Artificial Intelligence ,Signal Processing ,Consensus clustering ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Cluster analysis ,Algorithm ,Software ,k-medians clustering ,Mathematics - Abstract
In this paper, we propose a new ensemble clustering approach termed ensemble clustering using factor graph (ECFG). Compared to the existing approaches, our approach has three main advantages: (1) the cluster number is obtained automatically and need not be specified in advance; (2) the reliability of each base clustering can be estimated in an unsupervised manner and exploited in the consensus process; (3) our approach is efficient for processing ensembles with large data sizes and large ensemble sizes. In this paper, we introduce the concept of the super-object, which serves as a compact and adaptive representation for the ensemble data and significantly facilitates the computation. Through the probabilistic formulation, we cast the ensemble clustering problem into a binary linear programming (BLP) problem. The BLP problem is NP-hard. To solve this optimization problem, we propose an efficient solver based on factor graph. The constrained objective function is represented as a factor graph and max-product belief propagation is utilized to generate a solution that is insensitive to initialization and converges to the neighborhood maximum. Extensive experiments are conducted on multiple real-world datasets, which demonstrate the effectiveness and efficiency of our approach against the state-of-the-art approaches. Highlights: Introduce the super-object representation to facilitate the consensus process. Probabilistically formulate the ensemble clustering problem into a BLP problem. Propose an efficient solver for the BLP problem based on factor graph. The cluster number of the consensus clustering is estimated automatically. Our method achieves state-of-the-art performance in effectiveness and efficiency.
- Published
- 2016
172. Reaching optimized parameter set: protein secondary structure prediction using neural network
- Author
-
Jyotshna Dongardive and Siby Abraham
- Subjects
0301 basic medicine ,030102 biochemistry & molecular biology ,Artificial neural network ,Correlation coefficient ,Biomolecules (q-bio.BM) ,68T45 ,Quantitative Biology - Quantitative Methods ,Set (abstract data type) ,03 medical and health sciences ,030104 developmental biology ,Quantitative Biology - Biomolecules ,Rate of convergence ,Artificial Intelligence ,FOS: Biological sciences ,Encoding (memory) ,Convergence (routing) ,Sensitivity (control systems) ,Layer (object-oriented design) ,Algorithm ,Quantitative Methods (q-bio.QM) ,Software ,Mathematics - Abstract
We propose an optimized parameter set for protein secondary structure prediction using a three-layer feed-forward back-propagation neural network. The methodology uses four parameters, viz. encoding scheme, window size, number of neurons in the hidden layer, and type of learning algorithm. The input layer of the network consists of neurons corresponding to window sizes varying from 3 to 19. The hidden layer uses a number of neurons between 1 and 20. The output layer consists of three neurons, each corresponding to one of the known secondary structural classes, viz. alpha helix, beta strand and coil. The study also uses eight different learning algorithms and nine encoding schemes. Exhaustive experiments were performed using a non-homologous dataset. The experimental results were compared using performance measures such as Q3, sensitivity, specificity, Matthews correlation coefficient and accuracy. The paper also discusses the process of obtaining a stabilized cluster of 2530 records from a collection of 11340 records. The accuracy graphs of these stabilized clusters of records are concave, their convergence is monotonically increasing, and the rate of convergence is uniform. The paper identifies BLOSUM62 as the encoding scheme, 19 as the window size, 19 as the number of neurons in the hidden layer, and One-Step Secant as the learning algorithm, with the highest accuracy of 78%. These parameter values are proposed as the optimized parameter set for the three-layer feed-forward back-propagation neural network for protein secondary structure prediction. (Comment: 22 pages, 9 figures, 28 tables)
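A minimal sketch of the windowed three-class setup the abstract describes, assuming a plain one-hot residue encoding and synthetic sequences in place of the BLOSUM62 scheme and the real non-homologous dataset; the window length (19) and hidden-layer size (19) follow the values reported above.

```python
# Hedged sketch: sliding-window, three-class secondary structure classifier with
# one hidden layer. One-hot encoding and synthetic data are stand-ins only.
import numpy as np
from sklearn.neural_network import MLPClassifier

AMINO = 'ACDEFGHIKLMNPQRSTVWY'
WINDOW = 19
HALF = WINDOW // 2

def windows(sequence, labels):
    X, y = [], []
    padded = 'X' * HALF + sequence + 'X' * HALF
    for i, lab in enumerate(labels):
        onehot = np.zeros((WINDOW, len(AMINO)))
        for j, aa in enumerate(padded[i:i + WINDOW]):
            if aa in AMINO:
                onehot[j, AMINO.index(aa)] = 1.0
        X.append(onehot.ravel())
        y.append(lab)                      # 'H' (helix), 'E' (strand) or 'C' (coil)
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
seq = ''.join(rng.choice(list(AMINO), size=300))
lab = rng.choice(list('HEC'), size=300)    # synthetic labels, illustration only
X, y = windows(seq, lab)

clf = MLPClassifier(hidden_layer_sizes=(19,), max_iter=500, random_state=0)
clf.fit(X, y)
print('training accuracy (synthetic data):', clf.score(X, y))
```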
- Published
- 2016
173. Fringe Pattern Analysis With Message Passing Based Expectation Maximization for Fringe Projection Profilometry
- Author
-
Xiang Peng, Qinghua Guo, Yongkai Yin, Yanguang Sunny Yu, Limei Song, and Jiangtao Xi
- Subjects
Surface (mathematics) ,Fringe projection profilometry (FPP) ,General Computer Science ,Scale (ratio) ,message passing ,expectation maximization (EM) ,02 engineering and technology ,01 natural sciences ,010309 optics ,0103 physical sciences ,Expectation–maximization algorithm ,0202 electrical engineering, electronic engineering, information engineering ,General Materials Science ,Computer vision ,phase shifting profilometry (PSP) ,Mathematics ,business.industry ,Message passing ,General Engineering ,Object (computer science) ,Autoregressive model ,Parallel processing (DSP implementation) ,020201 artificial intelligence & image processing ,Artificial intelligence ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,Performance improvement ,business ,Algorithm ,lcsh:TK1-9971 - Abstract
Fringe projection profilometry (FPP) is a popular optical 3-D imaging approach, in which the images of deformed fringe patterns are analyzed to extract object surfaces (i.e., height maps of object surfaces). As an object surface normally does not change independently, height correlations of an object surface can be used to denoise and improve the measurement performance of the FPP. This paper investigates the issue of exploiting height correlations in FPP fringe pattern analysis. The challenge lies in that height correlations are unknown and they are different from object to object. In addition, the problem of interest is normally in a large scale. In this paper, we use autoregressive (AR) models with unknown parameters to model the unknown height correlations and formulate the FPP analysis problem (with height correlations exploited) under the framework of expectation maximization (EM). With EM, the unknown AR model parameters are determined based on observations, and the estimates of the heights with their correlations exploited can also be extracted. To deal with the large-scale problem, a message passing-based implementation of the formulated EM problem is studied and the relevant message updating rules are developed. The proposed approach has a linear complexity and it allows parallel processing due to the nature of message passing. Simulation and experimental results demonstrate a significant performance improvement by the proposed approach.
- Published
- 2016
174. A novel adaptive cuckoo search algorithm for intrinsic discriminant analysis based face recognition
- Author
-
Rutuparna Panda and Manoj Kumar Naik
- Subjects
0209 industrial biotechnology ,Fitness function ,business.industry ,Dimensionality reduction ,Feature vector ,Evolutionary algorithm ,Pattern recognition ,02 engineering and technology ,Linear discriminant analysis ,Facial recognition system ,020901 industrial engineering & automation ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,Cuckoo search ,business ,Algorithm ,Software ,FSA-Red Algorithm ,Mathematics - Abstract
Highlights: A novel adaptive cuckoo search algorithm, without using a Levy step, is proposed. The new algorithm improves the objective function values at a faster rate. Our evolutionary face recognition algorithm provides an improved recognition rate. Optimal dimension reduction is achieved using the PCA+IDA algorithm. The ACS-IDA algorithm searches for optimal eigenvectors to improve accuracy. This paper presents a novel adaptive cuckoo search (ACS) algorithm for optimization. The step size is made adaptive using the knowledge of its fitness function value and its current position in the search space. The other important feature of the ACS algorithm is its speed, which is faster than that of the CS algorithm. Here, an attempt is made to make the cuckoo search (CS) algorithm parameter free, without a Levy step. The proposed algorithm is validated using twenty-three standard benchmark test functions. The second part of the paper proposes an efficient face recognition algorithm using ACS, principal component analysis (PCA) and intrinsic discriminant analysis (IDA). The proposed algorithms are named PCA+IDA and ACS-IDA. Interestingly, PCA+IDA offers a perturbation-free algorithm for dimension reduction, while ACS-IDA is used to find the optimal feature vectors for classification of the face images based on IDA. For the performance analysis, we use three standard face databases: YALE, ORL, and FERET. A comparison of the proposed method with the state-of-the-art methods reveals the effectiveness of our algorithm.
- Published
- 2016
175. Kernel subspace pursuit for sparse regression
- Author
-
Ioannis N. Psaromiligkos and Jad Kabbara
- Subjects
business.industry ,020206 networking & telecommunications ,Pattern recognition ,010103 numerical & computational mathematics ,02 engineering and technology ,01 natural sciences ,Kernel principal component analysis ,Kernel method ,Artificial Intelligence ,Kernel embedding of distributions ,Polynomial kernel ,Variable kernel density estimation ,Kernel (statistics) ,Signal Processing ,Radial basis function kernel ,0202 electrical engineering, electronic engineering, information engineering ,Computer Vision and Pattern Recognition ,Artificial intelligence ,0101 mathematics ,Tree kernel ,business ,Algorithm ,Software ,Mathematics - Abstract
This paper introduces a kernel version of the Subspace Pursuit algorithm. The proposed method, KSP, is a new iterative method for sparse regression. KSP outperforms and is less computationally intensive than related kernel methods. Recently, results from sparse approximation theory have been considered as a means to improve the generalization performance of kernel-based machine learning algorithms. In this paper, we present Kernel Subspace Pursuit (KSP), a new method for sparse non-linear regression. KSP is a low-complexity method that iteratively approximates target functions in the least-squares sense as a linear combination of a limited number of elements selected from a kernel-based dictionary. Unlike other kernel methods, by virtue of KSP's algorithmic design, the number of KSP iterations needed to reach the final solution does not depend on the number of basis functions used nor that of elements in the dictionary. We experimentally show that, in many scenarios involving learning synthetic and real data, KSP is less complex computationally and outperforms other kernel methods that solve the same problem, namely, Kernel Matching Pursuit and Kernel Basis Pursuit.
- Published
- 2016
176. Harmony Search Algorithm: A Unique Music-inspired Algorithm
- Author
-
Joong Hoon Kim
- Subjects
Structure (mathematical logic) ,education.field_of_study ,Similarity (geometry) ,business.industry ,020209 energy ,Population ,Harmony search algorithm ,02 engineering and technology ,General Medicine ,Random search ,Evolution strategy ,Metaheuristic algorithm ,Rebutal ,0202 electrical engineering, electronic engineering, information engineering ,Harmony search ,020201 artificial intelligence & image processing ,Artificial intelligence ,education ,business ,Adaptation (computer science) ,Metaheuristic ,Algorithm ,Engineering(all) ,Mathematics - Abstract
Since the Harmony Search Algorithm (HSA) was first introduced in 2001, it has drawn worldwide attention, mainly because of its balanced combination of exploration and exploitation and its ease of application. The HSA, inspired by the musical performance process, consists of three operators: random search, the harmony memory considering rule, and the pitch adjusting rule. The way these three operators handle exploration and exploitation makes the HSA a unique metaheuristic algorithm. However, a series of papers recently published by one author insisted that the HSA is equivalent to an evolution strategy (ES). The ES, based on ideas of adaptation and evolution, consists of two operators: recombination and mutation. Apart from the similarity of generating a single new solution at each iteration, which can replace the worst solution in the population, the other components (e.g., their exploration and exploitation strategies and structure) are totally different between the HSA and the ES. This paper is written to rebut those papers and point out their academic flaws.
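Since the entry is about the structure of the HSA itself, a compact sketch of its three operators (random selection, harmony memory consideration, pitch adjustment) on a toy continuous minimization problem may be useful; the HMCR, PAR and bandwidth values below are illustrative assumptions, not prescriptions.

```python
# Hedged sketch of the basic Harmony Search loop: harmony memory consideration,
# pitch adjustment and random selection, applied to a toy minimization problem.
import numpy as np

def harmony_search(f, dim, lower, upper, hms=20, hmcr=0.9, par=0.3,
                   bandwidth=0.05, iterations=2000, seed=0):
    rng = np.random.default_rng(seed)
    memory = rng.uniform(lower, upper, size=(hms, dim))       # harmony memory
    fitness = np.apply_along_axis(f, 1, memory)
    for _ in range(iterations):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:                           # memory consideration
                new[d] = memory[rng.integers(hms), d]
                if rng.random() < par:                        # pitch adjustment
                    new[d] += bandwidth * (upper - lower) * rng.uniform(-1, 1)
            else:                                             # random selection
                new[d] = rng.uniform(lower, upper)
        new = np.clip(new, lower, upper)
        value = f(new)
        worst = np.argmax(fitness)
        if value < fitness[worst]:                            # replace the worst harmony
            memory[worst], fitness[worst] = new, value
    best = np.argmin(fitness)
    return memory[best], fitness[best]

sphere = lambda x: float(np.sum(x ** 2))
best_x, best_f = harmony_search(sphere, dim=5, lower=-5.0, upper=5.0)
print('best value found:', best_f)
```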
- Published
- 2016
177. Coherent summation of multiple short-time signals for direct positioning of a wideband source based on delay and Doppler
- Author
-
Le Yang, Jinzhou Li, Wenli Jiang, and Fucheng Guo
- Subjects
0209 industrial biotechnology ,02 engineering and technology ,law.invention ,symbols.namesake ,020901 industrial engineering & automation ,Artificial Intelligence ,Robustness (computer science) ,law ,Statistics ,0202 electrical engineering, electronic engineering, information engineering ,Waveform ,Electrical and Electronic Engineering ,Radar ,Wideband ,Mathematics ,Applied Mathematics ,Transmitter ,Estimator ,020206 networking & telecommunications ,Computational Theory and Mathematics ,Signal Processing ,symbols ,Computer Vision and Pattern Recognition ,Statistics, Probability and Uncertainty ,Doppler effect ,Cramér–Rao bound ,Algorithm - Abstract
We consider identifying the source position directly from the received source signals. This direct position determination (DPD) approach has been shown to be superior in terms of better estimation accuracy and improved robustness to low signal-to-noise ratios (SNRs) to the conventional two-step localization technique, where signal measurements are extracted first and the source position is then estimated from them. The localization of a wideband source such as a communication transmitter or a radar whose signal should be considered deterministic is investigated in this paper. Both passive and active localization scenarios, which correspond to the source signal waveform being unknown and being known respectively, are studied. In both cases, the source signal received at each receiver is partitioned into multiple non-overlapping short-time signal segments for the DPD task. This paper proposes the use of coherent summation that takes into account the coherency among the short-time signals received at the same receiver. The study begins with deriving the Cramer-Rao lower bounds (CRLBs) of the source position under coherent summation-based and non-coherent summation-based DPDs. Interestingly, we show analytically that with coherent summation, the localization accuracy of the DPD improves as the time interval between two short-time signals increases. This paper also develops approximate maximum likelihood (ML) estimators for DPDs with coherent and non-coherent summations. The CRLB results and the performance of the proposed source position estimators are illustrated via simulations.
- Published
- 2016
178. Fast and exact unidimensional L2–L1 optimization as an accelerator for iterative reconstruction algorithms
- Author
-
Daniel R. Pipa, Alvaro R. De Pierro, and Marcelo V. W. Zibetti
- Subjects
Deblurring ,Line search ,Signal reconstruction ,Applied Mathematics ,020206 networking & telecommunications ,02 engineering and technology ,Iterative reconstruction ,Iteratively reweighted least squares ,Nonlinear conjugate gradient method ,Compressed sensing ,Computational Theory and Mathematics ,Artificial Intelligence ,Search algorithm ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,Algorithm ,Mathematics - Abstract
This paper studies the use of fast and exact unidimensional L2-L1 minimization as a line search for accelerating iterative reconstruction algorithms. In L2-L1 minimization reconstruction problems, the squared Euclidean (L2) norm measures signal-data discrepancy and the L1 norm stands for a sparsity-preserving regularization term. Functionals of this kind arise in important applications such as compressed sensing and deconvolution. Optimal unidimensional L2-L1 minimization has only recently been studied by Li and Osher for denoising problems and by Wen et al. for line search. A fast L2-L1 optimization procedure can be adapted for line search and used in iterative algorithms, improving convergence speed with little increase in computational cost. This paper proposes a new method for exact L2-L1 line search and compares it with Li and Osher's method, Wen et al.'s method, and a standard line search algorithm, the method of false position. The use of the proposed line search improves the convergence speed of different iterative algorithms for L2-L1 reconstruction such as iterative shrinkage, iteratively reweighted least squares, and nonlinear conjugate gradient. This assertion is validated experimentally in applications to signal reconstruction in compressed sensing and sparse signal deblurring.
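To see why the one-dimensional subproblem admits an exact solution, a worked scalar case helps: along a fixed direction the objective is piecewise quadratic in the step size, and in the simplest separable, unit-curvature form the minimizer is just a soft threshold. The sketch below checks that closed form against a brute-force grid; it illustrates the scalar L2-L1 problem only, not the paper's full line-search procedure.

```python
# Hedged sketch: exact scalar L2-L1 minimization, min_t 0.5*(t - a)^2 + lam*|t|,
# whose closed-form solution is the soft threshold. This is only the simplest
# building block behind exact L2-L1 line searches, not the paper's algorithm.
import numpy as np

def soft_threshold(a, lam):
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

a, lam = 1.3, 0.5
t_star = soft_threshold(a, lam)

# brute-force check on a fine grid
grid = np.linspace(-3, 3, 200001)
objective = 0.5 * (grid - a) ** 2 + lam * np.abs(grid)
t_grid = grid[np.argmin(objective)]

print('closed form:', t_star, ' grid search:', t_grid)
```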
- Published
- 2016
179. A Study on the Convergence Technique enhanced GrabCut Algorithm Using Color Histogram and modified Sharpening filter
- Author
-
Jong-Hun Park, Gang-Seong Lee, and Sang-Hun Lee
- Subjects
Color histogram ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Pattern recognition ,Sharpening ,Filter (signal processing) ,Object detection ,GrabCut ,Video tracking ,Computer vision ,Artificial intelligence ,business ,Algorithm ,Histogram equalization ,Mathematics - Abstract
In this paper, we propose an image enhancement method using a sharpening filter to improve the accuracy of object detection with the existing GrabCut algorithm. The GrabCut algorithm shows excellent performance when extracting an object within a rectangular window, but its performance is poor on images with no clear distinction between background and object. Therefore, in this paper, the brightness and clarity are reinforced through histogram equalization and the border of the object is tightened using a sharpening filter, which yields better extraction results than the existing GrabCut algorithm on images where the object and the background are similar. Based on the improved GrabCut algorithm, improved results can be obtained in convergence applications of image processing such as character recognition and real-time object tracking.
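A minimal OpenCV sketch of the pipeline described above, under assumed parameter values: equalize the histogram, sharpen with a small convolution kernel, then run GrabCut inside a rectangle. The file path, rectangle, kernel and iteration count are placeholders, not the paper's exact settings.

```python
# Hedged sketch of the described pipeline: histogram equalization, a simple
# sharpening convolution, then GrabCut inside a rectangle (OpenCV).
import numpy as np
import cv2

img = cv2.imread('input.png')                       # placeholder path (BGR image)

# 1) brightness/clarity enhancement via histogram equalization on the Y channel
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
enhanced = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

# 2) edge tightening with a basic sharpening kernel
kernel = np.array([[0, -1, 0],
                   [-1, 5, -1],
                   [0, -1, 0]], dtype=np.float32)
sharpened = cv2.filter2D(enhanced, -1, kernel)

# 3) GrabCut inside a rectangle assumed to contain the object
mask = np.zeros(sharpened.shape[:2], np.uint8)
bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
rect = (10, 10, sharpened.shape[1] - 20, sharpened.shape[0] - 20)   # placeholder window
cv2.grabCut(sharpened, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

foreground = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
cv2.imwrite('segmented.png', cv2.bitwise_and(sharpened, sharpened, mask=foreground))
```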
- Published
- 2015
180. General relation-based variable precision rough fuzzy set
- Author
- Weimin Ma, Eric C. C. Tsang, and Bingzhen Sun
- Subjects
0209 industrial biotechnology ,Fuzzy classification ,Fuzzy set ,Dominance-based rough set approach ,02 engineering and technology ,Type-2 fuzzy sets and systems ,computer.software_genre ,Fuzzy logic ,020901 industrial engineering & automation ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Fuzzy number ,Fuzzy set operations ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Rough set ,Data mining ,Algorithm ,computer ,Software ,Mathematics - Abstract
In order to effectively handle real-valued data sets in practice, it is valuable from both theoretical and practical perspectives to combine the fuzzy rough set and the variable precision rough set so that a more powerful tool can be developed, namely a fuzzy variable precision rough set model that not only handles numerical data but is also less sensitive to misclassification and perturbation. In this paper, we propose a new variable precision rough fuzzy set by introducing the variable precision parameter into the generalized rough fuzzy set, i.e., the variable precision rough fuzzy set based on a general relation. We define, by a constructive approach, the variable precision rough lower and upper approximations of any fuzzy set and its level set with respect to the variable precision parameter. We also present the properties of the proposed model in detail. Meanwhile, we establish the relationship between the variable precision rough approximation of a fuzzy set and the rough approximation of the level set of that fuzzy set. Furthermore, we give a new approach to uncertainty measurement for the variable precision rough fuzzy set established in this paper, in order to overcome the limitations of traditional methods. Finally, some numerical examples are used to illustrate the validity of the conclusions given in this paper.
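For background, the classical (crisp, partition-based) variable precision rough set that the model above generalizes can be sketched as follows; the inclusion-degree thresholds follow one common convention with 0 <= beta < 0.5, and none of this reflects the paper's general-relation fuzzy construction:

def vprs_approximations(classes, target, beta):
    # 'classes' is a partition of the universe; a class E goes into the
    # beta-lower approximation of X when |E & X| / |E| >= 1 - beta and into
    # the beta-upper approximation when the inclusion degree exceeds beta.
    lower, upper = set(), set()
    X = set(target)
    for E in map(set, classes):
        incl = len(E & X) / len(E)
        if incl >= 1 - beta:
            lower |= E
        if incl > beta:
            upper |= E
    return lower, upper

# beta = 0.2 tolerates up to 20% misclassification inside each class.
print(vprs_approximations([{0, 1, 2, 3}, {4, 5}, {6, 7}], {0, 1, 2, 4, 5}, 0.2))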
- Published
- 2015
181. Diversified learning for continuous hidden Markov models with application to fault diagnosis
- Author
- Huajing Fang, Zefang Li, and Ming Huang
- Subjects
Mathematical optimization ,Estimation theory ,General Engineering ,Estimator ,Fault (power engineering) ,Bearing (navigation) ,Computer Science Applications ,Maxima and minima ,ComputingMethodologies_PATTERNRECOGNITION ,Artificial Intelligence ,Likelihood function ,Hidden Markov model ,Gradient descent ,Algorithm ,Mathematics - Abstract
The diversified learning formulas of CHMM parameters are derived. A likelihood-based model averaging estimator is developed. Bearing fault diagnosis is effectively performed. The learning problem of continuous hidden Markov models (CHMMs) is the most critical and challenging one for the application of CHMMs. This paper aims to attack the learning problem of CHMMs by using the diversified gradient descent (DGD) algorithm. The novel learning formula of CHMM parameters, requiring no special form of the objective function and yielding various parameter estimates with different degrees of diversity, is derived by dynamically adjusting the iterative procedure according to the gradient change of each parameter. This is the first work for the standard CHMM that attempts to obtain more local maxima so that the global maximum of the likelihood function of the CHMM can be better approximated or even discovered. Hence this paper takes an important step forward in solving the learning problem of CHMMs. Furthermore, a likelihood-based model averaging (LBMA) estimator is developed to achieve robust parameter estimation of CHMMs based upon the diverse models attained by the DGD algorithm. The proposed methods are tested on simulated data and a real-life bearing fault diagnosis problem. The results show that the proposed methods perform better in parameter estimation and bearing fault diagnosis than the conventional methods.
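A schematic numpy sketch of a likelihood-based model averaging step of the general kind mentioned above (weights from normalized likelihoods; not the paper's exact estimator):

import numpy as np

def likelihood_based_average(param_sets, log_likelihoods):
    # Each row of param_sets is one candidate estimate (e.g., one CHMM fit per
    # diversified run); weights are proportional to the model likelihoods.
    ll = np.asarray(log_likelihoods, dtype=float)
    w = np.exp(ll - ll.max())           # subtract the max for numerical stability
    w /= w.sum()
    params = np.asarray(param_sets, dtype=float)
    return w @ params, w

theta_avg, weights = likelihood_based_average(
    [[0.40, 1.2], [0.50, 1.0], [0.45, 1.1]], [-120.3, -118.7, -119.4])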
- Published
- 2015
182. Generalized type-2 fuzzy weight adjustment for backpropagation neural networks in time series prediction
- Author
- Patricia Melin, Oscar Castillo, Fevrier Valdez, and Fernando Gaxiola
- Subjects
Mathematical optimization ,Adaptive neuro fuzzy inference system ,Information Systems and Management ,Fuzzy classification ,Artificial neural network ,Neuro-fuzzy ,Defuzzification ,Fuzzy logic ,Computer Science Applications ,Theoretical Computer Science ,Artificial Intelligence ,Control and Systems Engineering ,Fuzzy set operations ,Fuzzy number ,Algorithm ,Software ,Mathematics - Abstract
In this paper the comparison of a proposed neural network with generalized type-2 fuzzy weights (NNGT2FW) with respect to the monolithic neural network (NN) and the neural network with interval type-2 fuzzy weights (NNIT2FW) is presented. Generalized type-2 fuzzy inference systems are used to obtain the generalized type-2 fuzzy weights and are designed by a strategy of increasing and decreasing an epsilon variable for obtaining the different sizes of the footprint of uncertainty (FOU) for the generalized membership functions. The proposed method is based on recent approaches that handle weight adaptation using type-1 and type-2 fuzzy logic. The approach is applied to the prediction of the Mackey-Glass time series, and results are shown to outperform the results produced by other neural models. Gaussian noise was applied to the test data of the Mackey-Glass time series for finding out which of the presented methods in this paper shows better performance and tolerance to noise.
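The Mackey-Glass benchmark referred to above can be reproduced with a crude Euler discretization of dx/dt = beta*x(t-tau)/(1 + x(t-tau)^n) - gamma*x(t); the parameter values and the lag/horizon layout below are a common setup, not necessarily the one used in the paper:

import numpy as np

def mackey_glass(n_steps, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0, x0=1.2):
    hist = int(tau / dt)
    x = np.full(n_steps + hist, x0)
    for t in range(hist, n_steps + hist - 1):
        x_tau = x[t - hist]
        x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau ** n) - gamma * x[t])
    return x[hist:]

series = mackey_glass(1000)
# Lagged inputs [x(t-18), x(t-12), x(t-6), x(t)] predicting x(t+6).
lags, horizon = [18, 12, 6, 0], 6
T = np.arange(max(lags), len(series) - horizon)
X = np.stack([series[T - l] for l in lags], axis=1)
y = series[T + horizon]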
- Published
- 2015
183. A lagrangian propagator for artificial neural networks in constraint programming
- Author
- Michele Lombardi and Stefano Gualandi
- Subjects
Mathematical optimization ,02 engineering and technology ,symbols.namesake ,Artificial Intelligence ,Computational Theory and Mathematic ,Constraint logic programming ,0202 electrical engineering, electronic engineering, information engineering ,Constraint programming ,Discrete Mathematics and Combinatorics ,Overhead (computing) ,Subgradient method ,Discrete Mathematics and Combinatoric ,Mathematics ,Lagrangian relaxation ,Artificial neural network ,020208 electrical & electronic engineering ,Constraint satisfaction ,Neural network ,Constraint (information theory) ,Computational Theory and Mathematics ,symbols ,020201 artificial intelligence & image processing ,Algorithm ,Software - Abstract
This paper discusses a new method to perform propagation over a (two-layer, feed-forward) Neural Network embedded in a Constraint Programming model. The method is meant to be employed in Empirical Model Learning, a technique designed to enable optimal decision making over systems that cannot be modeled via conventional declarative means. The key step in Empirical Model Learning is to embed a Machine Learning model into a combinatorial model. It has been shown that Neural Networks can be embedded in a Constraint Programming model by simply encoding each neuron as a global constraint, which is then propagated individually. Unfortunately, this decomposition approach may lead to weak bounds. To overcome such a limitation, we propose a new network-level propagator based on a non-linear Lagrangian relaxation that is solved with a subgradient algorithm. The method proved capable of dramatically reducing the search tree size on a thermal-aware dispatching problem on multicore CPUs. The overhead for optimizing the Lagrangian multipliers is kept within a reasonable level via a few simple techniques. This paper is an extended version of [27], featuring an improved structure, a new filtering technique for the network inputs, a set of overhead reduction techniques, and a thorough experimentation.
- Published
- 2015
184. An efficient pruning strategy for approximate string matching over suffix tree
- Author
- Jianzhong Li, Hong Gao, Huan Hu, and Hongzhi Wang
- Subjects
Compressed suffix array ,Suffix tree ,String (computer science) ,Generalized suffix tree ,Commentz-Walter algorithm ,020207 software engineering ,02 engineering and technology ,String searching algorithm ,Approximate string matching ,law.invention ,Human-Computer Interaction ,Artificial Intelligence ,Hardware and Architecture ,law ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Depth-first search ,Algorithm ,Software ,Information Systems ,Mathematics - Abstract
Approximate string matching over a suffix tree with depth-first search (ASM_ST_DFS), a classical algorithm in the field of approximate string matching, was originally proposed by Ricardo A. Baeza-Yates and Gaston H. Gonnet in 1990. The algorithm is one of the most effective for approximate string matching when combined with other indexing techniques. However, its time complexity is sensitive to the length of the pattern string because it searches m+k characters on each path from the root before backtracking. In this paper, we propose an efficient pruning strategy to solve this problem. We prove its correctness and efficiency in theory. In particular, we prove that if the pruning strategy is adopted, the algorithm searches O(k) characters on each path before backtracking, on average, instead of O(m). Considering that each internal node of a suffix tree has multiple branches, the pruning strategy should work very well. We also show experimentally that when k is much smaller than m, the efficiency improves by hundreds of times, and when k is not much smaller than m, it is still several times faster. This is the first paper that tries to solve the backtracking problem of ASM_ST_DFS in both theory and practice.
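The flavour of the k-errors depth-first search with column pruning can be sketched as follows; for brevity this uses an uncompressed suffix trie rather than a suffix tree, and it is not the authors' specific pruning strategy:

def build_suffix_trie(text):
    # Each node stores its children and the start positions of the suffixes
    # passing through it (a real implementation would use a suffix tree).
    root = {"children": {}, "starts": []}
    for i in range(len(text)):
        node = root
        for ch in text[i:]:
            node["starts"].append(i)
            node = node["children"].setdefault(ch, {"children": {}, "starts": []})
        node["starts"].append(i)
    return root

def approx_search(text, pattern, k):
    # Returns positions i such that some prefix of text[i:] is within edit
    # distance k of the pattern. One dynamic-programming column is kept per
    # visited trie node; a branch is abandoned (pruned) as soon as every entry
    # of its column exceeds k, and no path longer than m + k is explored.
    m = len(pattern)
    hits = set()

    def dfs(node, prev_col, depth):
        if depth > m + k:
            return
        for ch, child in node["children"].items():
            col = [prev_col[0] + 1]
            for j in range(1, m + 1):
                cost = 0 if pattern[j - 1] == ch else 1
                col.append(min(col[j - 1] + 1,          # insertion
                               prev_col[j] + 1,          # deletion
                               prev_col[j - 1] + cost))  # match / substitution
            if col[m] <= k:
                hits.update(child["starts"])
            if min(col) <= k:
                dfs(child, col, depth + 1)

    dfs(build_suffix_trie(text), list(range(m + 1)), 0)
    return sorted(hits)

print(approx_search("abracadabra", "brac", k=1))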
- Published
- 2015
185. Recurrent Neural Network for Computing the Drazin Inverse
- Author
- Yimin Wei, Predrag S. Stanimirović, and Ivan S. Zivkovic
- Subjects
Network architecture ,Theoretical computer science ,Artificial neural network ,Computer Networks and Communications ,Time delay neural network ,Computer Science::Neural and Evolutionary Computation ,Drazin inverse ,Stability (learning theory) ,Mathematical Concepts ,Computer Science Applications ,Matrix (mathematics) ,Recurrent neural network ,Nonlinear Dynamics ,Artificial Intelligence ,Convergence (routing) ,Humans ,Computer Simulation ,Neural Networks, Computer ,Algorithm ,Algorithms ,Software ,Mathematics - Abstract
This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. The RNN is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved. In this way, computational advantages over the existing sequential algorithms can be attained in real-time applications. The RNN defined in this paper is convenient for implementation in an electronic circuit. The number of neurons in the neural network is the same as the number of elements in the output matrix, which represents the Drazin inverse. The difference between the proposed RNN and the existing ones for Drazin inverse computation lies in their network architecture and dynamics. The conditions that ensure the stability of the defined RNN, as well as its convergence toward the Drazin inverse, are considered. In addition, illustrative examples and examples of application to practical engineering problems are discussed to show the efficacy of the proposed neural network.
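A conventional (non-neural) baseline that such a network can be checked against is the pseudoinverse identity A_D = A^k (A^(2k+1))^+ A^k, where k is the index of A; a small numpy sketch, offered only as a reference computation:

import numpy as np

def drazin_inverse(A, tol=1e-10):
    # Find the index k with rank(A^k) == rank(A^(k+1)), then apply
    # A_D = A^k @ pinv(A^(2k+1)) @ A^k.
    A = np.asarray(A, dtype=float)
    k, Ak = 0, np.eye(A.shape[0])
    while np.linalg.matrix_rank(Ak, tol) != np.linalg.matrix_rank(Ak @ A, tol):
        Ak, k = Ak @ A, k + 1
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])
AD = drazin_inverse(A)
# Defining properties to verify: A @ AD == AD @ A, AD @ A @ AD == AD,
# and A^(k+1) @ AD == A^k for the index k.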
- Published
- 2015
186. An accelerating scheme for destructive parsimonious extreme learning machine
- Author
- Bing Li, Yong-Ping Zhao, and Ye-Bo Li
- Subjects
Scheme (programming language) ,Computational complexity theory ,business.industry ,Cognitive Neuroscience ,Process (computing) ,Training (meteorology) ,Machine learning ,computer.software_genre ,Measure (mathematics) ,Constructive ,Computer Science Applications ,Artificial Intelligence ,Enhanced Data Rates for GSM Evolution ,Artificial intelligence ,business ,Algorithm ,computer ,Mathematics ,computer.programming_language ,Extreme learning machine - Abstract
Constructive and destructive parsimonious extreme learning machines (CP-ELM and DP-ELM) were recently proposed to sparsify ELM. In comparison with CP-ELM, DP-ELM has the advantage in the number of hidden nodes, but it loses the edge with respect to training time. Hence, in this paper an equivalent measure is proposed to accelerate DP-ELM (ADP-ELM). As a result, ADP-ELM not only keeps the same hidden nodes as DP-ELM but also needs less training time than CP-ELM, which is especially important for training-time-sensitive scenarios. The same idea is extended to regularized ELM (RELM), yielding ADP-RELM. ADP-RELM accelerates the training process of DP-RELM further, and it works better than CP-RELM in terms of the number of hidden nodes and the training time. In addition, the computational complexity of the proposed accelerating scheme is analyzed in theory. Reported results on ten benchmark data sets confirm the effectiveness and usefulness of the proposed accelerating scheme experimentally.
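For orientation, the plain regularized ELM that the CP/DP/ADP variants start from can be sketched in a few lines (random hidden layer, ridge-regression output weights); the pruning and acceleration steps themselves are not shown:

import numpy as np

def train_relm(X, y, n_hidden=100, C=1.0, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))     # random input weights
    b = rng.normal(size=n_hidden)                   # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))          # hidden-layer output matrix
    # Ridge (regularized) output weights: beta = (H'H + I/C)^(-1) H'y.
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ y)
    return W, b, beta

def predict_relm(X, W, b, beta):
    return (1.0 / (1.0 + np.exp(-(X @ W + b)))) @ beta

X = np.linspace(0, 2 * np.pi, 200)[:, None]
y = np.sin(X[:, 0]) + 0.05 * np.random.default_rng(1).normal(size=200)
W, b, beta = train_relm(X, y, n_hidden=50, C=100.0)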
- Published
- 2015
187. Towards solving practical problems of large solution space using a novel pattern searching hybrid evolutionary algorithm – An elastic optical network optimization case study
- Author
- Roza Goscien, Michał Witold Przewoźniczek, Miroslaw Klinkowski, and Krzysztof Walkowiak
- Subjects
Mathematical optimization ,Optimization problem ,Fitness function ,Computational complexity theory ,General Engineering ,Evolutionary algorithm ,Computer Science Applications ,Test case ,Artificial Intelligence ,Genetic algorithm ,Convergence (routing) ,Unicast ,Algorithm ,Mathematics - Abstract
The proposition of a novel evolutionary algorithm for flow design in elastic networks. Hybridization of evolutionary methods. Problem-dedicated mechanisms. A detailed analysis of the dependence between the number of fitness function evaluations and the computation load. Successful solving of a problem with a very large solution space. The fast social and economic development observed in recent years brings up new challenging optimization problems. These problems are often very hard not only because of their computational complexity, but also due to their enormous solution space size. Therefore, this paper proposes an effective optimization method, based on the novel Multi Population Pattern Searching (MuPPetS) Algorithm, to solve optimization problems characterized by very large solution spaces. As a case study problem, we focus on the problem of routing and spectrum allocation with joint anycast and unicast traffic demands that arises in the field of optical network optimization. The proposed method is adjusted to the problem with proper solution encoding, hybridization using a local search algorithm, and dedicated mechanisms necessary to improve method convergence. The above adjustments are required to make the method effective on test cases with solution space sizes of up to 10^3700 points (sets of values of the choice variables). The paper compares the performance of the proposed method with other reference methods known from the literature. Another key contribution of this paper is the presentation of the complicated dependency between the number of fitness function evaluations (FFE) and the real computation load, which are used to evaluate the effectiveness of the proposed technique. The analysis is supported with proper empirical tests and their analysis.
- Published
- 2015
188. A metaheuristic algorithm to solve satellite broadcast scheduling problem
- Author
- Ayed A. Salman, Mahamed G. H. Omran, and Imtiaz Ahmad
- Subjects
education.field_of_study ,Mathematical optimization ,Information Systems and Management ,Optimization problem ,Job shop scheduling ,Population ,Stochastic diffusion search ,Computer Science Applications ,Theoretical Computer Science ,Artificial Intelligence ,Control and Systems Engineering ,Nurse scheduling problem ,Differential evolution ,Heuristics ,education ,Metaheuristic ,Algorithm ,Software ,Mathematics - Abstract
In this paper a new effective and scalable differential evolution algorithm is proposed for optimizing the Satellite Broadcast Scheduling problem (SBS). The satellite broadcast scheduling optimization problem is known to be NP-complete; the aim is to find a valid broadcasting pattern to earth-stationed terminals that maximizes the number of timeslots utilized for broadcasting under certain constraints. The proposed algorithm, SD-BDE, is a binary version of Differential Evolution hybridized with ideas extracted from Stochastic Diffusion search. On top of that, a repair heuristic mechanism is added to resolve constraint violations. Preliminary analysis shows that the repair heuristic is very effective compared to other versions which include other heuristics. Further, the performance of the proposed algorithm is tested thoroughly against published work on solving the SBS problem as well as state-of-the-art binary-coded population-based algorithms. In this paper we used, along with instances reported in the literature for this problem, randomly generated benchmark instances of varying sizes in order to create a unified large-scale set against which different algorithms can be compared. Experimental results show that the proposed algorithm outperformed the existing algorithms by finding better or optimal solutions for almost all tested benchmarks.
- Published
- 2015
189. A new rotating machinery fault diagnosis method based on improved local mean decomposition
- Author
- Yu Wei, Yongbo Li, Wenhu Huang, Zhao Haiyang, and Minqiang Xu
- Subjects
Rank (linear algebra) ,Applied Mathematics ,Bandwidth (signal processing) ,Fault (power engineering) ,Computational Theory and Mathematics ,Orthogonality ,Artificial Intelligence ,Hermite interpolation ,Moving average ,Signal Processing ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,Envelope (mathematics) ,Algorithm ,Smoothing ,Mathematics - Abstract
A demodulation technique based on improved local mean decomposition (LMD) is investigated in this paper. LMD heavily depends on the local mean and envelope estimate functions in the sifting process. It is well known that the moving average (MA) approach used for this purpose suffers from several problems, such as step size selection, inaccurate results and high time consumption. Aiming at these drawbacks of MA in the smoothing process, this paper proposes a new self-adaptive analysis algorithm called optimized LMD (OLMD). In the OLMD method, an alternative approach called rational Hermite interpolation is proposed to calculate the local mean and envelope estimate functions using the upper and lower envelopes of a signal. Meanwhile, a reasonable bandwidth criterion is introduced to select the optimum product function (OPF) from pre-OPFs derived from rational Hermite interpolation with different shape-controlling parameters in each rank. Subsequently, the orthogonality criterion (OC) is taken as the product function (PF) iteration stopping condition. The effectiveness of the OLMD method is validated by numerical simulations and applications to gearbox and roller bearing fault diagnosis. Results demonstrate that the OLMD method has better fault identification capacity and is effective in rotating machinery fault diagnosis. A novel time-frequency analysis method called OLMD is presented in this paper. OLMD can weaken the mode mixing problem in traditional LMD. The simulation and experimental results validate the reliability and feasibility of the proposed methodology.
- Published
- 2015
190. Globally consistent registration of terrestrial laser scans via graph optimization
- Author
- P. W. Theiler, Jan Dirk Wegner, and Konrad Schindler
- Subjects
Loop (graph theory) ,Orientation (computer vision) ,business.industry ,Point cloud ,Residual ,Energy minimization ,Atomic and Molecular Physics, and Optics ,Computer Science Applications ,Task (project management) ,Set (abstract data type) ,Computer vision ,Pairwise comparison ,Artificial intelligence ,Computers in Earth Sciences ,business ,Engineering (miscellaneous) ,Algorithm ,Mathematics - Abstract
In this paper we present a framework for the automatic registration of multiple terrestrial laser scans. The proposed method can handle arbitrary point clouds with reasonable pairwise overlap, without knowledge about their initial orientation and without the need for artificial markers or other specific objects. The framework is divided into a coarse and a fine registration part, which each start with pairwise registration and then enforce consistent global alignment across all scans. While we put forward a complete, functional registration system, the novel contribution of the paper lies in the coarse global alignment step. Merging multiple scans into a consistent network creates loops along which the relative transformations must add up. We pose the task of finding a global alignment as picking the best candidates from a set of putative pairwise registrations, such that they satisfy the loop constraints. This yields a discrete optimization problem that can be solved efficiently with modern combinatorial methods. Having found a coarse global alignment in this way, the framework proceeds by pairwise refinement with standard ICP, followed by global refinement to evenly spread the residual errors. The framework was tested on six challenging, real-world datasets. The discrete global alignment step effectively detects, removes and corrects failures of the pairwise registration procedure, finally producing a globally consistent coarse scan network which can be used as an initial guess for the highly non-convex refinement. Our overall system reaches success rates close to 100% at acceptable runtimes of less than 1 h, even in challenging conditions such as scanning in the forest.
- Published
- 2015
191. Inconsistency in the use of the term 'validation' in studies reporting the performance of deep learning algorithms in providing diagnosis from medical imaging
- Author
- Dong Wook Kim, Joon Seo Lim, Yousun Ko, Seon-Ok Kim, Jung Hee Son, Seong Ho Park, Hye-Young Jang, and Pyeong Hwa Kim
- Subjects
Diagnostic Imaging ,Computer and Information Sciences ,Medical Journals ,Systematic Reviews ,Computer science ,Science ,Validation Studies as Topic ,Research and Analysis Methods ,030218 nuclear medicine & medical imaging ,Machine Learning ,Machine Learning Algorithms ,03 medical and health sciences ,Deep Learning ,0302 clinical medicine ,Artificial Intelligence ,Diagnostic Medicine ,Medicine and Health Sciences ,Humans ,030212 general & internal medicine ,Scientific Publishing ,Multidisciplinary ,Applied Mathematics ,Simulation and Modeling ,Research Assessment ,Term (time) ,Systematic review ,Bibliometrics ,Physical Sciences ,Medicine ,Journal Impact Factor ,Periodicals as Topic ,Medical Humanities ,Algorithm ,Mathematics ,Algorithms ,Research Article ,Test data - Abstract
Background: The development of deep learning (DL) algorithms is a three-step process: training, tuning, and testing. Studies are inconsistent in the use of the term "validation", with some using it to refer to tuning and others testing, which hinders accurate delivery of information and may inadvertently exaggerate the performance of DL algorithms. We investigated the extent of inconsistency in usage of the term "validation" in studies on the accuracy of DL algorithms in providing diagnosis from medical imaging. Methods and findings: We analyzed the full texts of research papers cited in two recent systematic reviews. The papers were categorized according to whether the term "validation" was used to refer to tuning alone, both tuning and testing, or testing alone. We analyzed whether paper characteristics (i.e., journal category, field of study, year of print publication, journal impact factor [JIF], and nature of test data) were associated with the usage of the terminology using multivariable logistic regression analysis with generalized estimating equations. Of 201 papers published in 125 journals, 118 (58.7%), 9 (4.5%), and 74 (36.8%) used the term to refer to tuning alone, both tuning and testing, and testing alone, respectively. A weak association was noted between higher JIF and using the term to refer to testing (i.e., testing alone or both tuning and testing) instead of tuning alone (vs. JIF 10: adjusted odds ratio 2.41, P = 0.089). Journal category, field of study, year of print publication, and nature of test data were not significantly associated with the terminology usage. Conclusions: Existing literature has a significant degree of inconsistency in using the term "validation" when referring to the steps in DL algorithm development. Efforts are needed to improve the accuracy and clarity in the terminology usage.
- Published
- 2020
192. Support vector machine with quantile hyper-spheres for pattern classification
- Author
- Maoxiang Chu, Rongfen Gong, Jie Zhao, and Xiaoping Liu
- Subjects
0209 industrial biotechnology ,Support Vector Machine ,Computer science ,Kernel Functions ,Thyroid Gland ,Twins ,Density ,02 engineering and technology ,Pattern Recognition, Automated ,Machine Learning ,020901 industrial engineering & automation ,Materials Physics ,0202 electrical engineering, electronic engineering, information engineering ,Medicine and Health Sciences ,Cluster Analysis ,Breast ,Operator Theory ,Multidisciplinary ,Physics ,Heart ,Density estimation ,Kernel (statistics) ,Physical Sciences ,Decision boundary ,Medicine ,Engineering and Technology ,020201 artificial intelligence & image processing ,Female ,Anatomy ,Algorithm ,Research Article ,Optimization ,Computer and Information Sciences ,Science ,Materials Science ,Material Properties ,Robustness (computer science) ,Artificial Intelligence ,Support Vector Machines ,Hinge loss ,Humans ,Quadratic programming ,Cluster analysis ,Biology and Life Sciences ,Noise Reduction ,Support vector machine ,Signal Processing ,Cardiovascular Anatomy ,Mathematics ,Quantile ,Developmental Biology - Abstract
This paper formulates a support vector machine with quantile hyper-spheres (QHSVM) for pattern classification. The idea of QHSVM is to build two quantile hyper-spheres with the same center for positive or negative training samples. Every quantile hyper-sphere is constructed by using pinball loss instead of hinge loss, which makes the new classification model be insensitive to noise, especially the feature noise around the decision boundary. Moreover, the robustness and generalization of QHSVM are strengthened through maximizing the margin between two quantile hyper-spheres, maximizing the inner-class clustering of samples and optimizing the independent quadratic programming for a target class. Besides that, this paper proposes a novel local center-based density estimation method. Based on it, ρ-QHSVM with surrounding and clustering samples is given. Under the premise of high accuracy, the execution speed of ρ-QHSVM can be adjusted. The experimental results in artificial, benchmark and strip steel surface defects datasets show that the QHSVM model has distinct advantages in accuracy and the ρ-QHSVM model is fit for large-scale datasets.
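The pinball loss that replaces the hinge loss above is simple to state; a small sketch of the two losses on the classification margin (the quantile hyper-sphere quadratic program itself is not reproduced here):

import numpy as np

def hinge_loss(margin):
    # Standard SVM hinge: only margins below 1 are penalized.
    return np.maximum(0.0, 1.0 - margin)

def pinball_loss(margin, tau=0.5):
    # Pinball (quantile) loss on u = 1 - margin: u for u >= 0, -tau*u otherwise.
    # Well-classified points also receive a small penalty, which is what makes
    # pinball-based models less sensitive to feature noise near the boundary.
    u = 1.0 - margin
    return np.where(u >= 0, u, -tau * u)

margins = np.linspace(-2, 3, 6)
print(hinge_loss(margins))
print(pinball_loss(margins, tau=0.3))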
- Published
- 2018
193. Influence of Variable Selection and Forest Type on Forest Aboveground Biomass Estimation Using Machine Learning Algorithms
- Author
- Mingyang Li, Zhenzhen Liu, Chao Li, and Yingchang Li
- Subjects
010504 meteorology & atmospheric sciences ,0211 other engineering and technologies ,Feature selection ,02 engineering and technology ,Machine learning ,computer.software_genre ,01 natural sciences ,Forest ecology ,Linear regression ,021101 geological & geomatics engineering ,0105 earth and related environmental sciences ,Mathematics ,forest type ,Biomass (ecology) ,business.industry ,Forestry ,Stepwise regression ,Random forest ,Variable (computer science) ,machine learning ,Perpetual inventory ,Artificial intelligence ,subtropical forests ,aboveground biomass ,business ,computer ,Algorithm ,variable selection - Abstract
Forest biomass is a major store of carbon and plays a crucial role in the regional and global carbon cycle. Accurate forest biomass assessment is important for monitoring and mapping the status of and changes in forests. However, while remote sensing-based forest biomass estimation in general is well developed and extensively used, improving the accuracy of biomass estimation remains challenging. In this paper, we used China's National Forest Continuous Inventory data and Landsat 8 Operational Land Imager data in combination with three algorithms, linear regression (LR), random forest (RF), and extreme gradient boosting (XGBoost), to establish biomass estimation models based on forest type. In the modeling process, two methods of variable selection, stepwise regression and a variable importance-based method, were used to select optimal variable subsets for LR and the machine learning algorithms (RF and XGBoost), respectively. The accuracy of the models was significantly improved, and the following conclusions were drawn: (1) Variable selection is very important for improving the performance of models, especially for machine learning algorithms, and the influence of variable selection on XGBoost is significantly greater than that on RF. (2) Machine learning algorithms have advantages in aboveground biomass (AGB) estimation, and the XGBoost and RF models significantly improved the estimation accuracy compared with the LR models. Although the problems of overestimation and underestimation were not fully eliminated, the XGBoost algorithm worked well and reduced these problems to a certain extent. (3) The approach of AGB modeling based on forest type is a very advantageous method for improving the performance at the lower and higher values of AGB. Some conclusions in this paper might differ if the study area changed. The methods used in this paper provide an optional and useful approach for improving the accuracy of AGB estimation based on remote sensing data, and the AGB estimates provide a reference basis for monitoring the forest ecosystem of the study area.
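A minimal scikit-learn sketch of the variable-importance-based selection step for the machine learning models (the synthetic predictors stand in for spectral and texture variables; an XGBoost version would be analogous):

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def importance_based_selection(X, y, keep=10, n_estimators=500, seed=42):
    # Fit a forest on all predictors, rank them by impurity importance,
    # and keep only the top `keep` variables for the final model.
    rf = RandomForestRegressor(n_estimators=n_estimators, random_state=seed)
    rf.fit(X, y)
    return np.argsort(rf.feature_importances_)[::-1][:keep]

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))                                    # stand-in predictors
y = 3 * X[:, 0] - 2 * X[:, 5] + rng.normal(scale=0.5, size=300)   # stand-in AGB target
top = importance_based_selection(X, y, keep=5)
r2 = cross_val_score(RandomForestRegressor(300, random_state=0),
                     X[:, top], y, cv=5, scoring="r2").mean()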
- Published
- 2019
194. Graph Matching with Anchor Nodes: A Learning Approach
- Author
- Nan Hu, Raif M. Rustamov, and Leonidas J. Guibas
- Subjects
Factor-critical graph ,FOS: Computer and information sciences ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,02 engineering and technology ,010501 environmental sciences ,Strength of a graph ,01 natural sciences ,law.invention ,law ,Line graph ,0202 electrical engineering, electronic engineering, information engineering ,Adjacency matrix ,Random geometric graph ,0105 earth and related environmental sciences ,Mathematics ,Discrete mathematics ,business.industry ,Graph theory ,Directed graph ,020201 artificial intelligence & image processing ,Artificial intelligence ,Null graph ,business ,Algorithm ,MathematicsofComputing_DISCRETEMATHEMATICS - Abstract
In this paper, we consider the weighted graph matching problem with partially disclosed correspondences between a number of anchor nodes. Our construction exploits recently introduced node signatures based on graph Laplacians, namely the Laplacian family signature (LFS) on the nodes, and the pairwise heat kernel map on the edges. Without assuming an explicit form of parametric dependence nor a distance metric between node signatures, we formulate an optimization problem which incorporates the knowledge of anchor nodes. Solving this problem gives us an optimized proximity measure specific to the graphs under consideration. Using this as a first-order compatibility term, we then set up an integer quadratic program (IQP) to solve for a near-optimal graph matching. Our experiments demonstrate the superior performance of our approach on randomly generated graphs and on two widely used image sequences, when compared with other existing signature- and adjacency-matrix-based graph matching methods. (Comment: final version for CVPR 2013.)
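One member of the Laplacian-based signature family referred to above is the heat kernel signature, which is easy to compute from the graph Laplacian spectrum; a small numpy sketch (illustrative only, not the paper's learned signature):

import numpy as np

def heat_kernel_signature(adjacency, times):
    # HKS_t(v) = sum_i exp(-lambda_i * t) * phi_i(v)^2 over Laplacian eigenpairs.
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A                 # combinatorial Laplacian
    lam, phi = np.linalg.eigh(L)
    return np.stack([(np.exp(-lam * t) * phi ** 2).sum(axis=1) for t in times],
                    axis=1)                        # shape: (n_nodes, n_times)

# Toy usage: a 5-node path graph; signatures of mirror-symmetric nodes coincide.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1
sig = heat_kernel_signature(A, times=np.geomspace(0.1, 10, 8))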
- Published
- 2018
- Full Text
- View/download PDF
195. Deep neural networks algorithms for stochastic control problems on finite horizon: convergence analysis
- Author
- Achref Bachouch, Côme Huré, Huyên Pham, and Nicolas Langrené
- Subjects
FOS: Computer and information sciences ,Machine Learning (stat.ML) ,010103 numerical & computational mathematics ,01 natural sciences ,Approximation error ,Statistics - Machine Learning ,Bellman equation ,FOS: Mathematics ,Reinforcement learning ,0101 mathematics ,Mathematics - Optimization and Control ,Mathematics ,Stochastic control ,Numerical Analysis ,Artificial neural network ,business.industry ,Applied Mathematics ,Deep learning ,Probability (math.PR) ,Recursion (computer science) ,Computational Mathematics ,Rate of convergence ,Optimization and Control (math.OC) ,Artificial intelligence ,business ,Algorithm ,65C05, 90C39, 93E35, 68T07 ,Mathematics - Probability - Abstract
This paper develops algorithms for high-dimensional stochastic control problems based on deep learning and dynamic programming. Unlike classical approximate dynamic programming approaches, we first approximate the optimal policy by means of neural networks in the spirit of deep reinforcement learning, and then the value function by Monte Carlo regression. This is achieved in the dynamic programming recursion by performance or hybrid iteration, and regress-now methods from numerical probabilities. We provide a theoretical justification of these algorithms. Consistency and rate of convergence for the control and value function estimates are analyzed and expressed in terms of the universal approximation error of the neural networks and of the statistical error when estimating the network function, leaving aside the optimization error. Numerical results on various applications are presented in a companion paper (arxiv.org/abs/1812.05916) and illustrate the performance of the proposed algorithms. (Comment: to appear in SIAM Journal on Numerical Analysis.)
- Published
- 2018
- Full Text
- View/download PDF
196. Rule-based fuzzy classifier based on quantum ant optimization algorithm
- Author
- Changjiang Zhang, Lei Yang, Zhihui Li, Tianrui Li, and Jue Wu
- Subjects
Statistics and Probability ,Set (abstract data type) ,Meta-optimization ,Fuzzy rule ,Fuzzy classification ,Artificial Intelligence ,Ramer–Douglas–Peucker algorithm ,Population-based incremental learning ,General Engineering ,Rule-based system ,Algorithm ,Mathematics ,FSA-Red Algorithm - Abstract
Fuzzy rule-based classification systems have been used extensively in data mining. This paper proposes a fuzzy rule-based classification algorithm based on a quantum ant optimization algorithm. A method of generating hierarchical rules with different granularity hybridization is used to generate the initial rule set; this method can obtain an original rule set with a smaller number of rules. The modified quantum ant optimization algorithm is used to generate the optimal individual. Compared to other similar algorithms, the algorithm proposed in this paper demonstrates higher classification accuracy and a higher convergence rate. The algorithm is proved to be convergent in theory. Experiments have been conducted on the algorithm, and the results show that it is feasible.
- Published
- 2015
197. Fast algorithms of attribute reduction for covering decision systems with minimal elements in discernibility matrix
- Author
- Ming Sun, Yanyan Yang, and Ze Dong
- Subjects
0209 industrial biotechnology ,Reduction (recursion theory) ,Relation (database) ,Computation ,Computational intelligence ,02 engineering and technology ,Matrix (mathematics) ,020901 industrial engineering & automation ,Artificial Intelligence ,Pattern recognition (psychology) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Rough set ,Algorithm ,Software ,Maximal element ,Mathematics - Abstract
Covering rough sets, which generalize traditional rough sets by considering coverings instead of partitions, are introduced to deal with set-valued, missing-valued or real-valued data sets. For decision systems with such kinds of data sets, attribute reduction with covering rough sets aims to delete superfluous attributes, and fast algorithms for finding reducts are clearly meaningful for practical problems. In the existing study of attribute reduction with covering rough sets, the discernibility matrix approach is the theoretical foundation. However, it entails a heavy computational load and large storage space because all elements in the discernibility matrix must be found and stored. In this paper, we find that the minimal elements in the discernibility matrix alone are sufficient to find reducts. This fact motivates us to develop algorithms that find reducts by employing only the minimal elements, without computing the other elements of the discernibility matrix. We first define the relative discernible relation of a covering to characterize the relationship between minimal elements in the discernibility matrix and particular sample pairs in the covering decision system. By employing this relative discernible relation, we then develop algorithms to search for the minimal elements in the discernibility matrix and find reducts for the covering decision system. Finally, experimental comparisons with other existing algorithms of covering rough sets on several data sets demonstrate that the proposed algorithms can greatly reduce the running time of finding reducts.
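The role of minimal elements can already be seen in the classical (Pawlak-style) discernibility matrix; the sketch below, which is not the covering generalization used in the paper, keeps only the minimal entries and then extracts a reduct greedily:

from itertools import combinations

def minimal_discernibility_elements(table, decision):
    # Attribute sets that discern object pairs with different decisions;
    # entries that are supersets of another entry are dropped as redundant.
    n_attr = len(table[0])
    elems = set()
    for (r1, d1), (r2, d2) in combinations(list(zip(table, decision)), 2):
        if d1 != d2:
            diff = frozenset(a for a in range(n_attr) if r1[a] != r2[a])
            if diff:
                elems.add(diff)
    return {e for e in elems if not any(f < e for f in elems)}

def greedy_reduct(min_elems):
    # Greedy set cover: repeatedly pick the attribute that hits the most
    # still-uncovered minimal elements.
    uncovered, reduct = set(min_elems), []
    while uncovered:
        attrs = {a for e in uncovered for a in e}
        best = max(attrs, key=lambda a: sum(a in e for e in uncovered))
        reduct.append(best)
        uncovered = {e for e in uncovered if best not in e}
    return sorted(reduct)

table = [(0, 1, 1), (0, 1, 0), (1, 0, 1), (1, 1, 1)]
decision = [0, 0, 1, 1]
m = minimal_discernibility_elements(table, decision)
print(m, greedy_reduct(m))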
- Published
- 2015
198. Covering-based multi-granulation fuzzy rough sets
- Author
- Caihui Liu and Witold Pedrycz
- Subjects
Statistics and Probability ,Reduct ,Discrete mathematics ,Approximations of π ,Dominance-based rough set approach ,General Engineering ,Extension (predicate logic) ,Space (mathematics) ,Artificial Intelligence ,Fuzzy set operations ,Rough set ,Fuzzy rough sets ,Algorithm ,Mathematics - Abstract
As a new and meaningful extension of the Pawlak rough set, multi-granulation rough sets (MGRSs) have attracted much attention and fruitful achievements have been reported in different aspects. By combining with fuzzy rough set, the paper introduces multi-granulation fuzzy rough sets in the covering approximation space, namely, covering-based multi-granulation fuzzy rough sets (CMFRS), which form the extension of fuzzy rough sets. We first investigate several important properties of lower and upper approximations of concepts in covering-based multi-granulation fuzzy rough sets and elaborate on the differences between the proposed models and the existing ones in literature. By employing the notions of reduct and exclusion of a covering, the paper studies the necessary and sufficient conditions for two CMFRS to generate identical lower and upper approximations of a target concept in the given covering approximation space. Finally, the relationships between the new models are explored in the paper.
- Published
- 2015
199. Nonlinear and adaptive undecimated hierarchical multiresolution analysis for real valued discrete time signals via empirical mode decomposition approach
- Author
- Charlotte Yuk-Fan Ho, Zhijing Yang, Weichao Kuang, Bingo Wing-Kuen Ling, and Qingyun Dai
- Subjects
Mathematical optimization ,Applied Mathematics ,Multiresolution analysis ,Filter (signal processing) ,Filter bank ,Discrete Fourier transform ,Hilbert–Huang transform ,Wavelet ,Computational Theory and Mathematics ,Artificial Intelligence ,Frequency domain ,Signal Processing ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,Algorithm ,Spectral leakage ,Mathematics - Abstract
Hierarchical multiresolution analysis is an important tool for the analysis of signals. Since this multiresolution representation provides a pyramid-like framework for representing signals, it can extract signal information effectively, level by level. On the other hand, a signal can be nonlinearly and adaptively represented as a sum of intrinsic mode functions (IMFs) via the empirical mode decomposition (EMD) algorithm. Nevertheless, as the IMFs are obtained only when the EMD algorithm converges, no further iterative sifting process will be performed directly when the EMD algorithm is applied to an IMF. As a result, the same IMF is returned, and further level decompositions of the IMFs cannot be obtained directly by the EMD algorithm. In other words, hierarchical multiresolution analysis cannot be performed via the EMD algorithm directly. This paper addresses this issue by performing a nonlinear and adaptive hierarchical multiresolution analysis based on the EMD algorithm via a frequency domain approach. In the beginning, an IMF is expressed in the frequency domain by applying the discrete Fourier transform (DFT) to it. Next, zeros are inserted into the DFT sequence and a conjugate symmetric zero-padded DFT sequence is obtained. Then, the inverse discrete Fourier transform (IDFT) is applied to the zero-padded DFT sequence and a new signal expressed in the time domain is obtained. The next-level IMFs can then be obtained by applying the EMD algorithm to this signal. However, the lengths of these next-level IMFs are increased. To reduce these lengths, the DFT is first applied to each next-level IMF. Second, the DFT coefficients of each next-level IMF at the positions where the zeros were previously inserted are removed. Finally, by applying the IDFT to the shortened DFT sequence of each next-level IMF, the final set of next-level IMFs is obtained. It is shown in this paper that the original IMF can be perfectly reconstructed. Moreover, computer numerical simulation results show that our proposed method can reach a component with a smaller number of decomposition levels compared to the conventional linear and nonadaptive wavelet and filter bank approaches. Also, as no filter is involved in our proposed method, there is no spectral leakage introduced at the various levels of decomposition, whereas the wavelet and filter bank approaches can introduce significant leakage components at their various levels of decomposition.
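The frequency-domain padding and un-padding steps can be sketched with numpy's real FFT, where appending zeros to the one-sided spectrum is equivalent to the conjugate-symmetric zero insertion described above; the EMD call itself (e.g., via the PyEMD package) is omitted, and the sketch assumes the IMF has negligible energy in its top (Nyquist) bin:

import numpy as np

def dft_zero_pad(x, factor=2):
    # Lengthen x by zero-padding its spectrum, preserving amplitudes.
    N = len(x)
    X = np.fft.rfft(x)
    Y = np.zeros(factor * N // 2 + 1, dtype=complex)
    Y[:len(X)] = X                      # appended high-frequency bins stay zero
    return np.fft.irfft(Y, n=factor * N) * factor

def dft_truncate(y, N):
    # Undo the padding: drop the inserted bins and return to length N.
    Y = np.fft.rfft(y)
    return np.fft.irfft(Y[:N // 2 + 1], n=N) * (N / len(y))

t = np.linspace(0, 1, 128, endpoint=False)
imf = np.cos(2 * np.pi * 5 * t) * (1 + 0.3 * np.sin(2 * np.pi * t))
up = dft_zero_pad(imf, factor=2)        # apply EMD to `up` to get next-level IMFs
rec = dft_truncate(up, len(imf))        # each next-level IMF is shortened this way
assert np.allclose(rec, imf, atol=1e-8)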
- Published
- 2015
200. On mimicry among sequential sampling models
- Author
- Arash Khodadadi and James T. Townsend
- Subjects
Process (engineering) ,Stochastic process ,business.industry ,Applied Mathematics ,Computation ,Applied probability ,Range (mathematics) ,symbols.namesake ,Wiener process ,Decision boundary ,symbols ,Artificial intelligence ,First-hitting-time model ,business ,Algorithm ,General Psychology ,Mathematics - Abstract
Sequential sampling models are widely used in modeling the empirical data obtained from different decision making experiments. Since the 1960s, several instantiations of these models have been proposed. A common assumption among these models is that the subject accumulates noisy information during the time course of a decision. The decision is made when the accumulated information favoring one of the responses reaches a decision boundary. Different models, however, make different assumptions about the information accumulation process and the implementation of the decision boundaries. Comparison among these models has proven to be challenging. In this paper we investigate the relationship between several of these models using a theoretical framework called the inverse first passage time problem. This framework has been used in the applied probability literature to investigate the range of first passage time distributions that can be produced by a stochastic process. We use this framework to prove that any Wiener process model with two time-constant boundaries can be mimicked by an independent race model with time-varying boundaries. We also examine the numerical computation of the mimicking boundaries. We show that the mimicking boundaries of the race model are not symmetric. We then propose an equivalent race model in which the boundaries are symmetric and time-constant but the drift coefficients are time-varying.
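A simple Euler simulation of the two-boundary Wiener (diffusion) model discussed above, which yields the pair of first-passage-time distributions that a mimicking race model must reproduce (the mimicking boundaries themselves require the inverse first-passage-time machinery and are not shown):

import numpy as np

def simulate_wiener_2boundary(drift, sigma, a_upper, a_lower,
                              dt=1e-3, t_max=5.0, n_trials=20000, seed=0):
    # X(t) = drift*t + sigma*W(t), absorbed at a_upper (choice +1) or a_lower (-1).
    rng = np.random.default_rng(seed)
    rt = np.full(n_trials, np.nan)
    choice = np.zeros(n_trials, dtype=int)
    x = np.zeros(n_trials)
    alive = np.ones(n_trials, dtype=bool)
    for step in range(1, int(t_max / dt) + 1):
        x[alive] += drift * dt + sigma * np.sqrt(dt) * rng.normal(size=alive.sum())
        up = alive & (x >= a_upper)
        dn = alive & (x <= a_lower)
        rt[up | dn] = step * dt
        choice[up], choice[dn] = 1, -1
        alive &= ~(up | dn)
        if not alive.any():
            break
    return rt, choice

rt, choice = simulate_wiener_2boundary(0.5, 1.0, a_upper=1.0, a_lower=-1.0)
upper_rts, lower_rts = rt[choice == 1], rt[choice == -1]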
- Published
- 2015