3,073 results
Search Results
2. A correction method for the near field approximated model based localization techniques
- Author
-
Pascal Charge, Yide Wang, Parth Raj Singh, Institut d'Électronique et des Technologies du numéRique (IETR), Université de Nantes (UN)-Université de Rennes 1 (UR1), Université de Rennes (UNIV-RENNES)-Université de Rennes (UNIV-RENNES)-Institut National des Sciences Appliquées - Rennes (INSA Rennes), Institut National des Sciences Appliquées (INSA)-Université de Rennes (UNIV-RENNES)-Institut National des Sciences Appliquées (INSA)-CentraleSupélec-Centre National de la Recherche Scientifique (CNRS), Charlier, Sandrine, Université de Nantes (UN)-Université de Rennes (UR)-Institut National des Sciences Appliquées - Rennes (INSA Rennes), Institut National des Sciences Appliquées (INSA)-Institut National des Sciences Appliquées (INSA)-CentraleSupélec-Centre National de la Recherche Scientifique (CNRS), and Nantes Université (NU)-Université de Rennes 1 (UR1)
- Subjects
Systematic error ,Mathematical optimization ,Correction method ,Computational complexity theory ,[SPI] Engineering Sciences [physics] ,Near and far field ,02 engineering and technology ,Signal ,[SPI]Engineering Sciences [physics] ,Artificial Intelligence ,Angle of arrival ,0202 electrical engineering, electronic engineering, information engineering ,Range (statistics) ,Approximated model ,Electrical and Electronic Engineering ,Mathematics ,Wavefront ,Applied Mathematics ,020208 electrical & electronic engineering ,020206 networking & telecommunications ,Fresnel approximation error ,near field sources localization ,[SPI.TRON] Engineering Sciences [physics]/Electronics ,[SPI.TRON]Engineering Sciences [physics]/Electronics ,Computational Theory and Mathematics ,Signal Processing ,Computer Vision and Pattern Recognition ,Statistics, Probability and Uncertainty ,Algorithm - Abstract
International audience; Almost all of the existing near field sources localization techniques use an approximated model of the spherical wavefront to reduce the computational complexity. This approximation adds a systematic bias to the estimated range and angle of arrival (AOA) when the received signal has a spherical wavefront. In this paper, we propose an efficient correction method to mitigate the systematic error introduced by the use of the approximated model in the existing near field sources localization techniques. The performance of the correction method is shown by simulation results.
- Published
- 2017
3. Angular Radon spectrum for rotation estimation
- Author
-
Dario Lodi Rizzini
- Subjects
Radon transform ,020207 software engineering ,02 engineering and technology ,Correlation function (astronomy) ,Mixture model ,Translation (geometry) ,Parallel ,Point distribution model ,Artificial Intelligence ,Orientation (geometry) ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Algorithm ,Rotation (mathematics) ,Software ,Mathematics - Abstract
This paper presents a robust method for rotation estimation of planar point sets using the Angular Radon Spectrum (ARS). Given a Gaussian Mixture Model (GMM) representing the point distribution, the ARS is a continuous function derived from the Radon Transform of that distribution. The ARS characterizes the orientation of a point distribution by measuring its alignment w.r.t. a pencil of parallel lines. By exploiting its translation and angular-shift invariance, the rotation angle between two point sets can be estimated through the correlation of the corresponding spectra. Besides its definition, the novel contributions of this paper include the efficient computation of the ARS and of the correlation function through their Fourier expansion, and a new algorithm for assessing the rotation between two point sets. Moreover, experiments with standard benchmark datasets assess the performance of the proposed algorithm and other state-of-the-art methods in the presence of noisy and incomplete data.
- Published
- 2018
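For orientation, a minimal sketch of the spectrum-correlation idea summarized above, under a simplifying assumption: the continuous ARS is stood in for by a histogram of pairwise point-difference directions (translation invariant for the same reason the ARS is), correlated circularly via the FFT. The function names and bin count are illustrative, not from the paper.

```python
import numpy as np

def angular_spectrum(points, n_bins=360):
    # Fold directions of all pairwise differences into [0, pi); using
    # differences makes the statistic translation invariant.
    d = points[:, None, :] - points[None, :, :]
    iu = np.triu_indices(len(points), k=1)
    ang = np.mod(np.arctan2(d[iu][:, 1], d[iu][:, 0]), np.pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, np.pi))
    return hist.astype(float)

def estimate_rotation(p, q, n_bins=360):
    # Circular cross-correlation via FFT; the peak lag estimates the
    # angular shift between the two spectra (modulo pi).
    a, b = angular_spectrum(p, n_bins), angular_spectrum(q, n_bins)
    corr = np.fft.ifft(np.fft.fft(b) * np.conj(np.fft.fft(a))).real
    return np.argmax(corr) * np.pi / n_bins
```

Rotating a point set by, say, 30 degrees and passing both sets to estimate_rotation should return roughly pi/6, up to the histogram bin width.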
4. Scalar Quantization as Sparse Least Square Optimization
- Author
-
Chen Wang, Ruisen Luo, Xiaofeng Gong, Fei Shaomin, Xiaomei Yang, Miao Du, and Kai Zhou
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Computational complexity theory ,Iterative method ,Computer Science - Artificial Intelligence ,Machine Learning (stat.ML) ,02 engineering and technology ,Information loss ,Machine Learning (cs.LG) ,Artificial Intelligence ,Statistics - Machine Learning ,FOS: Mathematics ,0202 electrical engineering, electronic engineering, information engineering ,Mathematics - Numerical Analysis ,Cluster analysis ,Time complexity ,Mathematics ,Artificial neural network ,Scalar quantization ,business.industry ,Applied Mathematics ,Quantization (signal processing) ,Numerical Analysis (math.NA) ,Artificial Intelligence (cs.AI) ,Computational Theory and Mathematics ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Algorithm ,Software - Abstract
Quantization can be used to form new vectors/matrices whose shared values are close to the original. In recent years, the popularity of scalar quantization for value-sharing applications has been soaring, as it has proven very useful for reducing the complexity of neural networks. Existing clustering-based quantization techniques, while well developed, have multiple drawbacks, including dependence on the random seed, empty or out-of-range clusters, and high time complexity for a large number of clusters. To overcome these problems, this paper examines the problem of scalar quantization from a new perspective, namely sparse least square optimization. Specifically, inspired by the properties of sparse least square regression, several quantization algorithms based on $l_1$ least squares are proposed. In addition, similar schemes with $l_1 + l_2$ and $l_0$ regularization are proposed. Furthermore, to compute quantization results with a given number of values/clusters, the paper designs an iterative method and a clustering-based method, both built on sparse least squares. The paper shows that the latter method is mathematically equivalent to an improved version of the k-means clustering-based quantization algorithm, although the two algorithms originated from different intuitions. The proposed algorithms were tested with three types of data, and their computational performance, including information loss, time consumption, and the distribution of the values of the sparse vectors, was compared and analyzed. The paper offers a new perspective from which to probe the area of quantization, and the proposed algorithms can outperform existing methods, especially in bit-width reduction scenarios where the required post-quantization resolution (number of values) is not significantly lower than the original number.
- Published
- 2019
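Since the abstract notes that its clustering-based scheme is mathematically equivalent to an improved k-means quantizer, a plain 1-D Lloyd (k-means) scalar quantizer is a useful baseline to keep in mind. The sketch below is that baseline only, not the paper's sparse least-squares algorithms; the quantile initialization is an illustrative choice.

```python
import numpy as np

def kmeans_quantize_1d(w, k, iters=50):
    # Quantize a 1-D array to k shared values with Lloyd iterations.
    w = np.asarray(w, dtype=float)
    levels = np.quantile(w, np.linspace(0.0, 1.0, k))  # spread initial levels
    for _ in range(iters):
        idx = np.argmin(np.abs(w[:, None] - levels[None, :]), axis=1)
        for j in range(k):
            members = w[idx == j]
            if members.size:        # leave a level unchanged if its cluster empties
                levels[j] = members.mean()
    idx = np.argmin(np.abs(w[:, None] - levels[None, :]), axis=1)
    return levels[idx], levels
```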
5. Iterative joint integrated probabilistic data association filter for multiple-detection multiple-target tracking
- Author
-
Taek Lyul Song, Yifan Xie, and Yuan Huang
- Subjects
Initialization ,02 engineering and technology ,Joint Probabilistic Data Association Filter ,Machine learning ,computer.software_genre ,Tracking (particle physics) ,0203 mechanical engineering ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Electrical and Electronic Engineering ,Event (probability theory) ,Mathematics ,020301 aerospace & aeronautics ,business.industry ,Applied Mathematics ,Probabilistic logic ,020206 networking & telecommunications ,Tracking system ,Filter (signal processing) ,Computational Theory and Mathematics ,Transmission (telecommunications) ,Signal Processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Statistics, Probability and Uncertainty ,business ,computer ,Algorithm - Abstract
Most classical target tracking algorithms assume that one target generates one measurement per scan. However, in situations where the target's physical size exceeds one sensor resolution cell, or where the transmitted and received signals undergo multi-path propagation, one target generates multiple detections, which is termed the multiple-detection problem. Multiple-detection multiple-target tracking algorithms usually suffer from an intractable computational load and cannot operate in real time when multiple targets are closely spaced or too many detections are generated. In addition, automatic track initialization in cluttered environments produces both true and false tracks, so that false track discrimination is required. These two issues are critical for real tracking systems but largely neglected in the published literature on the multiple-detection problem. In this paper, the authors propose an algorithm, called the Multiple-detection Iterative Joint Integrated Probabilistic Data Association (MD-iJIPDA) filter, that iteratively adjusts the computational load as well as the tracking performance so that desirable performance can be achieved within manageable time. By integrating the target existence probability as a track quality measure, the MD-iJIPDA becomes capable of false track discrimination. The MD-iJIPDA is built on the arithmetic structure of the MD-JIPDA algorithm. Therefore, we first incorporate the target existence probability into the calculation of the joint event probability in MD-JPDA to derive the MD-JIPDA algorithm, and then derive the MD-iJIPDA algorithm. Simulations of scenarios with a large number of targets with multiple detections are investigated to validate the effectiveness of the proposed algorithm in terms of tracking performance and computational efficiency.
- Published
- 2018
6. New binary linear programming formulation to compute the graph edit distance
- Author
-
Julien Lerouge, Romain Raveaux, Sébastien Adam, Zeina Abu-Aisheh, Pierre Héroux, Laboratoire d'Informatique, de Traitement de l'Information et des Systèmes (LITIS), Université Le Havre Normandie (ULH), Normandie Université (NU)-Normandie Université (NU)-Université de Rouen Normandie (UNIROUEN), Normandie Université (NU)-Institut national des sciences appliquées Rouen Normandie (INSA Rouen Normandie), Institut National des Sciences Appliquées (INSA)-Normandie Université (NU)-Institut National des Sciences Appliquées (INSA), Equipe Apprentissage (DocApp - LITIS), Institut National des Sciences Appliquées (INSA)-Normandie Université (NU)-Institut National des Sciences Appliquées (INSA)-Université Le Havre Normandie (ULH), Laboratoire d'Informatique Fondamentale et Appliquée de Tours (LIFAT), Université de Tours (UT)-Institut National des Sciences Appliquées - Centre Val de Loire (INSA CVL), Institut National des Sciences Appliquées (INSA)-Institut National des Sciences Appliquées (INSA)-Centre National de la Recherche Scientifique (CNRS), Laboratoire Informatique, Image et Interaction - EA 2118 (L3I), Université de La Rochelle (ULR), Centre National de la Recherche Scientifique (CNRS)-Université de Tours-Institut National des Sciences Appliquées - Centre Val de Loire (INSA CVL), and Institut National des Sciences Appliquées (INSA)-Institut National des Sciences Appliquées (INSA)
- Subjects
Comparability graph ,02 engineering and technology ,01 natural sciences ,Upper and lower bounds ,law.invention ,Pattern Matching ,Pathwidth ,Artificial Intelligence ,law ,0103 physical sciences ,Line graph ,0202 electrical engineering, electronic engineering, information engineering ,Pattern matching ,010306 general physics ,Integer programming ,Mathematics ,Integer Linear Programming ,Graph Matching ,[INFO.INFO-CV]Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV] ,1-planar graph ,Graph Edit Distance ,[INFO.INFO-TI]Computer Science [cs]/Image Processing [eess.IV] ,Signal Processing ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Algorithm ,Software ,Graph product - Abstract
International audience; Graph edit distance (GED) is a powerful and flexible graph matching paradigm that can be used to address different tasks in structural pattern recognition, machine learning, and data mining. In this paper, some new binary linear programming formulations for computing the exact GED between two graphs are proposed. A major strength of the formulations lies in their genericity, since the GED can be computed between directed or undirected fully attributed graphs (i.e. with attributes on both vertices and edges). Moreover, a relaxation of the domain constraints in the formulations provides efficient lower bound approximations of the GED. A complete experimental study comparing the proposed formulations with 4 state-of-the-art algorithms for exact and approximate graph edit distances is provided. By considering both the quality of the proposed solution and the efficiency of the algorithms as performance criteria, the results show that none of the compared methods dominates the others in the Pareto sense. As a consequence, faced with a given real-world problem, a trade-off between quality and efficiency has to be chosen w.r.t. the application constraints. In this context, this paper provides a guide that can be used to choose the appropriate method.
- Published
- 2017
7. Fast procedures for accurate parameter estimation of sine-waves affected by noise and harmonic distortion
- Author
-
Dario Petri and Daniel Belega
- Subjects
Total harmonic distortion ,Estimation theory ,Noise (signal processing) ,Applied Mathematics ,Estimator ,020206 networking & telecommunications ,02 engineering and technology ,Discrete Fourier transform ,Sine wave ,Computational Theory and Mathematics ,Artificial Intelligence ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,Spectral leakage ,Algorithm ,Hann function ,Mathematics - Abstract
In this paper, two procedures are proposed for the estimation of the frequency, amplitude, and phase of sine-waves affected by either wideband noise or by both noise and harmonic distortion. The procedures are very simple and suitable for real-time applications. First, the signal parameters are estimated by means of the new Corrected Interpolated Discrete Fourier Transform (IpDFTc) algorithm, which compensates the contribution of spectral interference from the fundamental image component and harmonics to the parameter estimates returned by the classical IpDFT algorithm. Then, a linear sine-fit algorithm is applied to optimize the noise robustness of the estimators, which is worsened by the signal windowing employed to reduce the effect of spectral leakage on the IpDFTc estimates. In the paper, the Hann window is applied since it exhibits both the highest noise robustness among the Maximum Sidelobe Decay (MSD) windows and a good reduction of long-range spectral leakage. It is shown that both proposed procedures almost attain the Cramer-Rao Lower Bounds (CRLBs) for unbiased estimators when at least about 1.5 sine-wave cycles are observed, so that the effect of interfering tones on the IpDFTc estimates can be effectively compensated by windowing. The performance of both procedures is analysed through computer simulations and experimental results.
- Published
- 2021
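For context, the classical two-point Hann-window IpDFT estimator that the proposed IpDFTc corrects can be written in a few lines; this sketch is the textbook form (fractional bin offset delta = (2*alpha - 1)/(alpha + 1) for the Hann window), not the paper's corrected algorithm, and the function name is illustrative.

```python
import numpy as np

def hann_ipdft_freq(x, fs):
    # Coarse DFT peak plus two-point interpolation for a Hann-windowed tone.
    N = len(x)
    X = np.abs(np.fft.rfft(np.asarray(x, dtype=float) * np.hanning(N)))
    k = int(np.argmax(X[1:len(X) - 1])) + 1        # skip DC and Nyquist bins
    side = 1 if X[k + 1] >= X[k - 1] else -1       # larger neighbour of the peak
    alpha = X[k + side] / X[k]
    delta = side * (2.0 * alpha - 1.0) / (alpha + 1.0)
    return (k + delta) * fs / N
```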
8. GMAT: Glottal closure instants detection based on the Multiresolution Absolute Teager–Kaiser energy operator
- Author
-
Kebin Wu, David Zhang, and Guangming Lu
- Subjects
Speech production ,Applied Mathematics ,Speech recognition ,Pooling ,020206 networking & telecommunications ,02 engineering and technology ,Glottal closure ,Residual ,Energy operator ,Identification rate ,030507 speech-language pathology & audiology ,03 medical and health sciences ,Nonlinear system ,Computational Theory and Mathematics ,Artificial Intelligence ,Robustness (computer science) ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,0305 other medical science ,Algorithm ,Mathematics - Abstract
Glottal Closure Instant (GCI) detection is important in many speech applications. However, most existing algorithms cannot achieve computational efficiency and accuracy simultaneously. In this paper, we present GMAT, Glottal closure instant detection based on the Multiresolution Absolute Teager–Kaiser energy operator, which can detect GCIs with high accuracy and low computational cost. Considering the nonlinearity in speech production, the Teager–Kaiser Energy Operator (TKEO) is utilized to detect GCIs, and an instant with a high absolute TKEO value often indicates a GCI. To enhance robustness, three multiscale pooling techniques, namely max pooling, multiscale product, and mean pooling, are applied to fuse absolute TKEOs at several scales. Finally, GCIs are detected based on the fused results. In the performance evaluation, GMAT is compared with three state-of-the-art methods: MSM (Most Singular Manifold-based approach), ZFR (Zero Frequency Resonator-based method), and SEDREAMS (Speech Event Detection using the Residual Excitation And a Mean-based Signal). On clean speech, experiments show that GMAT attains a higher identification rate and accuracy than MSM. Compared with ZFR and SEDREAMS, GMAT gives almost the same reliability and higher accuracy. In addition, on noisy speech, GMAT demonstrates the highest robustness for most SNR levels. Additional comparison shows that GMAT is less sensitive to the choice of scale in multiscale processing and has low computational cost. Finally, pathological speech identification, a concrete application of GCIs, is included to show the efficacy of GMAT in practice. Through this paper, we investigate the potential of TKEO for GCI detection; the proposed GMAT algorithm detects GCIs with high accuracy and low computational cost. Due to this superiority, GMAT is a promising choice for GCI detection, particularly in real-time scenarios. Hence, this work may contribute to systems relying on GCIs, where both accuracy and computational cost are crucial.
- Published
- 2017
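The basic quantity GMAT builds on is easy to state: the discrete Teager–Kaiser energy operator and its absolute value. The sketch below shows only that standard operator; the multiresolution pooling (max pooling, multiscale product, mean pooling) described in the abstract is omitted, and gci_candidates is an illustrative helper, not the paper's detector.

```python
import numpy as np

def tkeo(x):
    # Discrete Teager-Kaiser energy: psi[n] = x[n]^2 - x[n-1] * x[n+1].
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def gci_candidates(x, top=10):
    # Crude single-scale proxy: instants with the largest absolute TKEO.
    e = np.abs(tkeo(x))
    return np.sort(np.argsort(e)[-top:])
```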
9. ML-based single-step estimation of the locations of strictly noncircular sources
- Author
-
Ying Wu, Xiangwen Yao, Ding Wang, and Jiexin Yin
- Subjects
0209 industrial biotechnology ,Mathematical optimization ,Iterative method ,Applied Mathematics ,Estimator ,020206 networking & telecommunications ,02 engineering and technology ,Function (mathematics) ,020901 industrial engineering & automation ,Computational Theory and Mathematics ,Rate of convergence ,Artificial Intelligence ,Position (vector) ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Range (statistics) ,Computer Vision and Pattern Recognition ,Time domain ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,Focus (optics) ,Algorithm ,Mathematics - Abstract
This paper concentrates on location methods for strictly noncircular sources using widely separated arrays. The conventional two-step methods extract measurement parameters and then estimate the positions from them. Compared with the conventional two-step methods, direct position determination (DPD) is a promising technique, which locates transmitters directly from the original sensor outputs in a single step, without estimating intermediate parameters, and thus improves the location accuracy and avoids the data association problem. However, existing DPD methods mainly focus on complex circular sources without considering noncircular signals, which can be exploited to enhance the localization accuracy. This paper proposes a maximum likelihood (ML)-based DPD algorithm for strictly noncircular sources whose waveforms are unknown. By exploiting the noncircularity of sources, we establish an ML-based function in the time domain under a constraint on the waveforms of the signals. A decoupled iterative method is developed to solve the prescribed ML estimator with moderate complexity. In addition, we derive the deterministic Cramer–Rao Bound (CRB) for strictly noncircular sources, and prove that this CRB is upper bounded by the associated CRB for circular signals. Simulation results demonstrate that the proposed algorithm has a fast convergence rate, and outperforms the other location methods in a wide range of scenarios.
- Published
- 2017
10. Blind separation of partially overlapping data packets
- Author
-
Alle-Jan van der Veen and Mu Zhou
- Subjects
Beamforming ,Applied Mathematics ,Speech recognition ,020206 networking & telecommunications ,010103 numerical & computational mathematics ,02 engineering and technology ,Communications system ,01 natural sciences ,Blind signal separation ,Computational Theory and Mathematics ,Artificial Intelligence ,Asynchronous communication ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Computer Vision and Pattern Recognition ,0101 mathematics ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,Generalized singular value decomposition ,Secondary surveillance radar ,Algorithm ,Subspace topology ,Mathematics ,Transponder - Abstract
The paper discusses the separation of partially overlapping data packets by an antenna array in narrowband communication systems. This problem occurs in asynchronous communication systems and several transponder systems such as Radio Frequency Identification (RFID) for wireless tags, the Automatic Identification System (AIS) for ships, and Secondary Surveillance Radar (SSR) and Automatic Dependent Surveillance—Broadcast (ADS—B) for aircraft. Partially overlapping data packets also occur as inter-cell interference in mutually unsynchronized communication systems. Arbitrary arrival times of the overlapping packets cause nonstationary scenarios and make it difficult to identify the signals using standard blind beamforming techniques. After selecting an observation interval, we propose subspace-based algorithms to suppress partially present (interfering) packets, as a preprocessing step for existing blind beamforming algorithms that assume stationary (fully overlapping) sources. The proposed algorithms are based on a subspace intersection, computed using a generalized singular value decomposition (GSVD) or a generalized eigenvalue decomposition (GEVD). In the second part of the paper, the algorithm is refined using a recently developed subspace estimation tool, the Signed URV algorithm, which is closely related to the GSVD but can be computed non-iteratively and allows for efficient subspace tracking. Simulation results show that the proposed algorithms significantly improve the performance of classical algorithms designed for block-stationary scenarios in cases where asynchronous co-channel interference is present. An example on experimental data from the AIS ship transponder system confirms the effectiveness of the proposed algorithms in a real application.
- Published
- 2017
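A rough sketch of the GEVD flavour of the subspace step, under strong simplifying assumptions: covariances are estimated from two observation windows, and a generalized eigen-decomposition ranks directions by how dominant they are in one window relative to the other. This conveys only the spirit of the preprocessing described above; the paper's actual subspace-intersection and Signed URV machinery is not reproduced, and all names are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def dominant_in_first(X1, X2, n_keep, load=1e-6):
    # X1, X2: (antennas x samples) snapshots from two sub-intervals.
    R1 = X1 @ X1.conj().T / X1.shape[1]
    R2 = X2 @ X2.conj().T / X2.shape[1]
    # Diagonal loading keeps R2 positive definite for the GEVD.
    R2 = R2 + load * (np.trace(R2).real / len(R2)) * np.eye(len(R2))
    w, V = eigh(R1, R2)            # generalized eigenpairs of (R1, R2)
    order = np.argsort(w)[::-1]    # large ratio => strong in interval 1
    return V[:, order[:n_keep]]
```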
11. Efficient fuzzy composite predictive scheme for effectual 2-D up-sampling of images for multimedia applications
- Author
-
Aditya Acharya and Sukadev Meher
- Subjects
business.industry ,Process (computing) ,020206 networking & telecommunications ,02 engineering and technology ,Fuzzy logic ,Image (mathematics) ,Nonlinear system ,Sampling (signal processing) ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,020201 artificial intelligence & image processing ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Enhanced Data Rates for GSM Evolution ,Electrical and Electronic Engineering ,business ,Laplace operator ,Algorithm ,Interpolation ,Mathematics - Abstract
In this paper, a highly nonlinear, fuzzy-based composite scheme is proposed. In the pre-processing part, a newly designed 5×5 HPF is used recursively. A fuzzy-based post-processing scheme is also developed based on local statistics. The nonlinearity of the FIS is enhanced by varying its parameters for better HF restoration. The proposed scheme is meant for both online and offline applications. In this paper, a highly nonlinear, fuzzy logic based composite scheme is proposed by combining a pre-processing and a post-processing operation to efficiently restore high frequency (HF) and very high frequency (VHF) details in an up-scaled image. The blurring in an up-sampled image is caused by the degradation, during the up-sampling process, of the HF and VHF image details that correspond to fine details and edge regions. The degradation of HF and VHF image details is more significant than that of the flat and slowly varying regions. In order to resolve this problem effectively, a fuzzy composite scheme is developed which is based on the inverse modeling approach of HF degradation. During the pre-processing operation, the VHF components of an image are boosted using a recursive Laplacian of Laplacian (LOL) operator prior to image up-scaling. Subsequent to the image up-scaling, a fuzzy local adaptive Laplacian post-processing scheme is used which enhances the HF image details more than the low frequency image details, based on local statistics in the up-scaled image. The HF restoration performance of the fuzzy composite scheme is enhanced by improving its nonlinearity through variations of different parameters of the fuzzy inference system (FIS), such as the slope, width, and number of input and output membership functions. The effective fusion of pre-processing and post-processing operations makes the proposed scheme much more effective in tackling non-uniform blurring than standalone pre-processing and post-processing techniques. Experimental results reveal that the proposed composite scheme gives much less blurring in comparison to the standalone schemes and performs better than most of the widely used interpolation schemes in terms of objective and subjective measures.
- Published
- 2017
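A minimal sketch of the Laplacian-of-Laplacian (LOL) pre-emphasis idea described above, assuming a generic discrete Laplacian; the paper's own 5×5 filter, its recursion schedule, and the fuzzy post-processing stage are not reproduced, and the gain and sign are empirical choices.

```python
import numpy as np
from scipy import ndimage

def lol_preemphasis(img, gain=0.25):
    # Applying the Laplacian twice emphasizes very-high-frequency detail;
    # the boosted image is then handed to the up-scaling step.
    f = img.astype(float)
    lol = ndimage.laplace(ndimage.laplace(f))
    return f - gain * lol
```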
12. Cramér–Rao bounds for coprime and other sparse arrays, which find more sources than sensors
- Author
-
P.P. Vaidyanathan and Chun-Lin Liu
- Subjects
Coprime integers ,Applied Mathematics ,Array processing ,020206 networking & telecommunications ,Context (language use) ,02 engineering and technology ,Upper and lower bounds ,Expression (mathematics) ,Combinatorics ,symbols.namesake ,Computational Theory and Mathematics ,Artificial Intelligence ,Rank condition ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,Fisher information ,Algorithm ,Cramér–Rao bound ,Mathematics - Abstract
The Cramer–Rao bound (CRB) offers a lower bound on the variances of unbiased estimates of parameters, e.g., directions of arrival (DOA) in array processing. While there exist landmark papers on the study of the CRB in the context of array processing, the closed-form expressions available in the literature are not easy to use in the context of sparse arrays (such as minimum redundancy arrays (MRAs), nested arrays, or coprime arrays) for which the number of identifiable sources D exceeds the number of sensors N. Under such situations, the existing literature does not spell out the conditions under which the Fisher information matrix is nonsingular, or the conditions under which specific closed-form expressions for the CRB remain valid. This paper derives a new expression for the CRB to fill this gap. The conditions for validity of this expression are expressed as a rank condition on a matrix defined based on the difference coarray. The rank condition and the closed-form expression lead to a number of new insights. For example, it is possible to prove the previously known experimental observation that, when there are more sources than sensors, the CRB stagnates to a constant value as the SNR tends to infinity. It is also possible to precisely specify the relation between the number of sensors and the number of uncorrelated sources such that these conditions are valid. In particular, for nested arrays, coprime arrays, and MRAs, the new expressions remain valid for $D = O(N^2)$, the precise detail depending on the specific array geometry.
- Published
- 2017
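The paper's coarray-based closed form is not quoted in the abstract; for orientation, the standard relation it builds on is the Fisher-information form of the CRB, valid for any unbiased estimator with a nonsingular Fisher information matrix:

```latex
\operatorname{Cov}\!\left(\hat{\boldsymbol{\theta}}\right) \succeq \mathbf{F}^{-1}(\boldsymbol{\theta}),
\qquad
[\mathbf{F}(\boldsymbol{\theta})]_{ij}
  = \mathbb{E}\!\left[
    \frac{\partial \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial \theta_i}\,
    \frac{\partial \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial \theta_j}
  \right].
```

The paper's contribution is precisely a usable expression for this bound in the sparse-array regime, with a rank condition on a coarray-derived matrix replacing the blanket nonsingularity assumption.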
13. Stochastic discretized learning-based weak estimation: a novel estimation method for non-stationary environments
- Author
-
Geir Horn, Anis Yazidi, Ole-Christoffer Granmo, and B. John Oommen
- Subjects
Learning automata ,Estimator ,020206 networking & telecommunications ,02 engineering and technology ,Binomial distribution ,Univariate distribution ,Efficient estimator ,Artificial Intelligence ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Multinomial distribution ,Computer Vision and Pattern Recognition ,Minimax estimator ,Algorithm ,Software ,Invariant estimator ,Mathematics - Abstract
The task of designing estimators that are able to track time-varying distributions has found promising applications in many real-life problems. Existing approaches resort to sliding windows that track changes by discarding old observations. In this paper, we report a novel estimator, referred to as the Stochastic Discretized Weak Estimator (SDWE), that is based on the principles of discretized Learning Automata (LA). In brief, the estimator is able to estimate the parameters of a time-varying binomial distribution using finite memory. The estimator tracks changes in the distribution by operating a controlled random walk in a discretized probability space. The steps of the estimator are discretized so that the updates are done in jumps, and thus the convergence speed is increased. Further, the state transitions are both state-dependent and randomized. As far as we know, such a scheme is both novel and pioneering. The results, which were first proven for binomial distributions, have subsequently been extended to the multinomial case, from which they can be applied to any univariate distribution using a histogram-based scheme. The most outstanding and pioneering contribution of our work is that of achieving multinomial estimation without relying on a set of binomial estimators, and where the underlying strategy is truly randomized. Interestingly, the estimator possesses a low computational complexity that is independent of the number of parameters of the multinomial distribution. The generalization of these results to other distributions has also been alluded to. The paper briefly reports conclusive experimental results that prove the ability of the SDWE to cope with non-stationary environments with high adaptation and accuracy. Highlights: We present finite-memory estimation techniques for non-stationary environments. These new techniques use the principles of discretized Learning Automata. The results have been tested for many time-varying problems and applications.
- Published
- 2016
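A loudly-labeled illustration, not the paper's transition rule: one plausible discretized random-walk estimator for a Bernoulli parameter, with state-dependent, randomized jumps in the spirit of the abstract. The specific acceptance probabilities below are assumptions made for the sketch.

```python
import random

class DiscretizedWeakEstimator:
    # Walks on the grid {0, 1/n, ..., 1}; the state encodes the estimate.
    def __init__(self, n_states=100):
        self.n = n_states
        self.state = n_states // 2      # start at p_hat = 0.5

    @property
    def p_hat(self):
        return self.state / self.n

    def update(self, x):
        # State-dependent randomized jumps (illustrative rule, see lead-in).
        if x == 1 and self.state < self.n and random.random() < 1.0 - self.p_hat:
            self.state += 1
        elif x == 0 and self.state > 0 and random.random() < self.p_hat:
            self.state -= 1
```

Because the walk has finite memory and no decaying step size, it keeps adapting when the underlying distribution changes, which is the property the paper targets.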
14. Estimating the order of sinusoidal models using the adaptively penalized likelihood approach: Large sample consistency properties
- Author
-
Petre Stoica, Amit Mitra, Khushboo Surana, and Sharmishtha Mitra
- Subjects
Penalized likelihood ,02 engineering and technology ,01 natural sciences ,010104 statistics & probability ,Consistency (statistics) ,0202 electrical engineering, electronic engineering, information engineering ,Order (group theory) ,0101 mathematics ,Electrical and Electronic Engineering ,Astrophysics::Galaxy Astrophysics ,Mathematics ,Signal processing ,business.industry ,SIGNAL (programming language) ,Estimator ,020206 networking & telecommunications ,Pattern recognition ,Sinusoidal model ,Nonlinear system ,Control and Systems Engineering ,Signal Processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Algorithm ,Software - Abstract
Recently, the paper [2] introduced a method for model order estimation based on penalizing the likelihood adaptively (PAL). In this paper, we use the PAL-based order estimation method for a nonlinear sinusoidal model and study its asymptotic statistical properties. We prove that the estimator of the model order using the PAL rule is consistent. Simulation examples are presented to illustrate the performance of the PAL method for small sample sizes and to compare it with that of three information criterion-based methods. In this paper, we use the PAL-based order estimation method for estimating the number of components of a superimposed nonlinear sinusoidal model. We have proved that the estimator of the model order using the PAL rule is large-sample consistent. Simulation examples are presented to illustrate the performance of the PAL method for small sample sizes and to compare it with three standard methods, namely the usual AIC, the usual BIC, and the asymptotic MAP rule. The PAL rule can also be extended to estimating the number of components for similar nested superimposed nonlinear 1-d and 2-d signal models.
- Published
- 2016
15. Mode seeking on graphs for geometric model fitting via preference analysis
- Author
-
Liming Zhang, Guobao Xiao, Hanzi Wang, and Yan Yan
- Subjects
Preference analysis ,02 engineering and technology ,Machine learning ,computer.software_genre ,Residual ,01 natural sciences ,Synthetic data ,Artificial Intelligence ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,010306 general physics ,Cluster analysis ,Mathematics ,Complex data type ,business.industry ,Random walk ,Real image ,Signal Processing ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Geometric modeling ,Algorithm ,computer ,Software - Abstract
We propose a graph-based mode-seeking method to fit multi-structural data. The proposed method combines mode-seeking with preference analysis. The proposed method exploits the global structure of graphs by random walks. Experiments show the proposed method is superior to some other fitting methods. In this paper, we propose a novel graph-based mode-seeking fitting method to fit and segment multiple-structure data. Mode-seeking is a simple and effective data analysis technique for clustering and filtering. However, conventional mode-seeking based fitting methods are very sensitive to the proportion of good/bad hypotheses, while most sampling techniques may generate a large proportion of bad hypotheses. In this paper, we show that the proposed graph-based mode-seeking method has significant superiority for geometric model fitting. We intrinsically combine mode-seeking with preference analysis. This enables mode-seeking to reduce the influence of bad hypotheses, since bad hypotheses usually have larger residual values than good ones. In addition, the proposed method exploits the global structure of graphs by random walks to alleviate the sensitivity to unbalanced data. Experimental results on both synthetic data and real images demonstrate that the proposed method outperforms several other competing fitting methods, especially for complex data.
- Published
- 2016
16. Second-order autoregressive model-based Kalman filter for the estimation of a slow fading channel described by the Clarke model: Optimal tuning and interpretation
- Author
-
Laurent Ros, Ali Houssam El Husseini, Eric Pierre Simon, Institut d’Électronique, de Microélectronique et de Nanotechnologie - UMR 8520 (IEMN), Centrale Lille-Institut supérieur de l'électronique et du numérique (ISEN)-Université de Valenciennes et du Hainaut-Cambrésis (UVHC)-Université de Lille-Centre National de la Recherche Scientifique (CNRS)-Université Polytechnique Hauts-de-France (UPHF), Télécommunication, Interférences et Compatibilité Electromagnétique - IEMN (TELICE - IEMN), Centrale Lille-Institut supérieur de l'électronique et du numérique (ISEN)-Université de Valenciennes et du Hainaut-Cambrésis (UVHC)-Université de Lille-Centre National de la Recherche Scientifique (CNRS)-Université Polytechnique Hauts-de-France (UPHF)-Centrale Lille-Institut supérieur de l'électronique et du numérique (ISEN)-Université de Valenciennes et du Hainaut-Cambrésis (UVHC)-Université de Lille-Centre National de la Recherche Scientifique (CNRS)-Université Polytechnique Hauts-de-France (UPHF), GIPSA - Communication Information and Complex Systems (GIPSA-CICS), Département Images et Signal (GIPSA-DIS), Grenoble Images Parole Signal Automatique (GIPSA-lab ), Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Institut Polytechnique de Grenoble - Grenoble Institute of Technology-Centre National de la Recherche Scientifique (CNRS)-Université Grenoble Alpes [2016-2019] (UGA [2016-2019])-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Institut Polytechnique de Grenoble - Grenoble Institute of Technology-Centre National de la Recherche Scientifique (CNRS)-Université Grenoble Alpes [2016-2019] (UGA [2016-2019])-Grenoble Images Parole Signal Automatique (GIPSA-lab ), Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Institut Polytechnique de Grenoble - Grenoble Institute of Technology-Centre National de la Recherche Scientifique (CNRS)-Université Grenoble Alpes [2016-2019] (UGA [2016-2019])-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Institut Polytechnique de Grenoble - Grenoble Institute of Technology-Centre National de la Recherche Scientifique (CNRS)-Université Grenoble Alpes [2016-2019] (UGA [2016-2019]), and Télécommunication, Interférences et Compatibilité Electromagnétique (IEMN-TELICE)
- Subjects
Mean squared error ,Applied Mathematics ,020206 networking & telecommunications ,02 engineering and technology ,Kalman filter ,Upper and lower bounds ,Delta method ,Signal-to-noise ratio ,Computational Theory and Mathematics ,Autoregressive model ,[INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing ,Artificial Intelligence ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Fading ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,Algorithm ,Communication channel ,Mathematics - Abstract
This paper treats the estimation of a flat fading Rayleigh channel with Jakes' Doppler spectrum model and slow fading variations. A common method is to use a Kalman filter (KF) based on an auto-regressive model of order p (AR(p)). The parameters of the AR model can be simply tuned by using the correlation matching (CM) criterion. However, the major drawback of this method is that high orders are required to approach the Bayesian Cramer–Rao lower bound. The choice of p together with the tuning of the model parameters is thus critical, and a tradeoff must be found between the numerical complexity and the performance. The reasonable tradeoff arising from setting p = 2 has received much attention in the literature. However, the methods proposed for tuning the model parameters are either based on an extensive grid-search analysis or on experimental results, which limits their applicability. A general solution for any scenario is simply missing for p = 2 and this paper aims at filling this gap. We propose using a Minimization of Asymptotic Variance (MAV) criterion, for which a general closed-form formula has been derived for the optimal tuning of the model and the mean square error. This provides deeper insight into the behavior of the KF with respect to the channel state (Doppler frequency and signal-to-noise ratio). Moreover, the paper interprets the proposed solution, especially the dependence of the shape of the optimal AR(2) spectrum on the channel state. Analytic and numerical comparisons with first- and second-order algorithms in the literature are also performed. Simulation results show that the proposed AR(2)-MAV model performs better than those in the literature and similarly to AR(p)-CM models with p ≥ 15.
- Published
- 2019
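A minimal sketch of the kind of AR(2)-based Kalman tracker the paper tunes: a scalar complex channel gain followed over time with a two-dimensional state. The coefficients a1, a2 and the noise variances are placeholders to be set by some tuning rule; the paper's MAV closed-form tuning is not reproduced here.

```python
import numpy as np

def ar2_kalman_track(y, a1, a2, sigma_w2, sigma_v2):
    # State x = [g(n), g(n-1)]; AR(2): g(n) = a1*g(n-1) + a2*g(n-2) + w(n).
    F = np.array([[a1, a2], [1.0, 0.0]])
    Q = np.array([[sigma_w2, 0.0], [0.0, 0.0]])
    H = np.array([[1.0, 0.0]])
    x = np.zeros((2, 1), dtype=complex)
    P = np.eye(2)
    estimates = []
    for yk in y:
        x, P = F @ x, F @ P @ F.T + Q           # predict
        S = (H @ P @ H.T)[0, 0] + sigma_v2      # innovation variance
        K = P @ H.T / S                         # Kalman gain
        x = x + K * (yk - (H @ x)[0, 0])        # update with observation yk
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0, 0])
    return np.array(estimates)
```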
17. Instance-Dependent Positive and Unlabeled Learning with Labeling Bias Estimation
- Author
-
Bo Han, Dacheng Tao, Jane J. You, Qizhou Wang, Chen Gong, Jian Yang, and Tongliang Liu
- Subjects
business.industry ,Applied Mathematics ,Linear model ,02 engineering and technology ,Maximization ,Upper and lower bounds ,Range (mathematics) ,Data point ,Computational Theory and Mathematics ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Graphical model ,Artificial intelligence ,Likelihood function ,business ,Random variable ,Algorithm ,Software ,Mathematics - Abstract
This paper studies instance-dependent Positive and Unlabeled (PU) classification, where whether a positive example will be labeled (indicated by s) is not only related to the class label y, but also depends on the observation x. Therefore, the labeling probability on positive examples is not uniform as previous works assumed, but is biased to some simple or critical data points. To depict the above dependency relationship, a graphical model is built in this paper which further leads to a maximization problem on the induced likelihood function regarding P(s,y|x). By utilizing the well-known EM and Adam optimization techniques, the labeling probability of any positive example P(s=1|y=1,x) as well as the classifier induced by P(y|x) can be acquired. Theoretically, we prove that the critical solution always exists, and is locally unique for linear model if some sufficient conditions are met. Moreover, we upper bound the generalization error for both linear logistic and non-linear network instantiations of our algorithm. Empirically, we compare our method with state-of-the-art instance-independent and instance-dependent PU algorithms on a wide range of synthetic, benchmark and real-world datasets, and the experimental results firmly demonstrate the advantage of the proposed method over the existing PU approaches.
- Published
- 2021
18. Extensions of the CBMeMBer filter for joint detection, tracking, and classification of multiple maneuvering targets
- Author
-
Ping Wei, Lin Gao, and Wen Sun
- Subjects
Bayesian probability ,02 engineering and technology ,Kinematics ,Set (abstract data type) ,Cardinality ,0203 mechanical engineering ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Computer vision ,Electrical and Electronic Engineering ,Finite set ,Mathematics ,020301 aerospace & aeronautics ,business.industry ,Applied Mathematics ,020206 networking & telecommunications ,Computational Theory and Mathematics ,Filter (video) ,Signal Processing ,Clutter ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Statistics, Probability and Uncertainty ,Particle filter ,business ,Algorithm - Abstract
This paper addresses the problem of joint detection, tracking and classification (JDTC) of multiple maneuvering targets in clutter. The multiple model cardinality balanced multi-target multi-Bernoulli (MM-CBMeMBer) filter is a promising algorithm for tracking an unknown and time-varying number of multiple maneuvering targets by utilizing a fixed set of models to match the possible motions of targets, while it exploits only the kinematic information. In this paper, the MM-CBMeMBer filter is extended to incorporate the class information and the class-dependent kinematic model sets. By following the rules of Bayesian theory and Random Finite Set (RFS), the extended multi-Bernoulli distribution is propagated recursively through prediction and update. The Sequential Monte Carlo (SMC) method is adopted to implement the proposed filter. At last, the performance of the proposed filter is examined via simulations.
- Published
- 2016
19. Data stream clustering based on Fuzzy C-Mean algorithm and entropy theory
- Author
-
Lei Xue, Shan Qin, Dan Wang, Baoju Zhang, and Wei Wang
- Subjects
Fuzzy clustering ,Correlation clustering ,02 engineering and technology ,010501 environmental sciences ,computer.software_genre ,01 natural sciences ,CURE data clustering algorithm ,0202 electrical engineering, electronic engineering, information engineering ,Electrical and Electronic Engineering ,Cluster analysis ,0105 earth and related environmental sciences ,Mathematics ,business.industry ,Constrained clustering ,Pattern recognition ,ComputingMethodologies_PATTERNRECOGNITION ,Data stream clustering ,Control and Systems Engineering ,Signal Processing ,Canopy clustering algorithm ,FLAME clustering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Data mining ,Artificial intelligence ,business ,computer ,Algorithm ,Software - Abstract
In data stream clustering studies, the majority of methods are traditional hard clustering; the literature on fuzzy clustering of data streams is scarce. In this paper, a fuzzy clustering algorithm is used to study data stream clustering, and the clustering results can truly reflect the actual relationship between objects and classes. This overcomes the either-or shortcoming of hard clustering. The paper presents a new method to detect concept drift: the membership degrees of fuzzy clustering are used to calculate the information entropy of the data, and concept drift is detected according to this entropy. The experimental results show that entropy-based detection of concept drift is effective and sensitive.
- Published
- 2016
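A small sketch of the entropy signal described above, assuming the Fuzzy C-Means step elsewhere supplies a membership matrix U (samples by clusters, rows summing to one); the threshold and window policy are illustrative, not from the paper.

```python
import numpy as np

def membership_entropy(U, eps=1e-12):
    # Per-sample Shannon entropy of the fuzzy membership degrees.
    return -np.sum(U * np.log(U + eps), axis=1)

def drift_detected(U_window, threshold):
    # Ambiguous memberships raise the average entropy, flagging possible
    # concept drift in the current window.
    return float(membership_entropy(U_window).mean()) > threshold
```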
20. An evidence clustering DSmT approximate reasoning method for more than two sources
- Author
-
Haipeng Wang, You He, Qiang Guo, Tao Jian, and Xia Shutao
- Subjects
Normalization (statistics) ,Theoretical computer science ,Applied Mathematics ,Regular polygon ,020206 networking & telecommunications ,02 engineering and technology ,Dezert–Smarandache Theory ,Information fusion ,Computational Theory and Mathematics ,Artificial Intelligence ,Signal Processing ,Computation complexity ,0202 electrical engineering, electronic engineering, information engineering ,Approximate reasoning ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,Cluster analysis ,Algorithm ,Mathematics - Abstract
Due to the huge computational complexity of Dezert–Smarandache Theory (DSmT), its applications, especially to multi-source (more than two sources) complex fusion problems, have been limited. To obtain approximate reasoning results highly similar to those of the Proportional Conflict Redistribution 6 (PCR6) rule in the DSmT framework (DSmT+PCR6) while keeping the computational complexity low, an evidence clustering DSmT approximate reasoning method for more than two sources is proposed. Firstly, the focal elements of the evidences are clustered into two sets according to their mass assignments. Secondly, convex approximate fusion results are obtained by the new DSmT approximation formula for more than two sources. Thirdly, the final approximate fusion results are obtained by a normalization step. Analysis of the computational complexity shows that the proposed method costs far less computation than DSmT+PCR6. Simulation experiments show that the method obtains very similar approximate fusion results while needing much less computing time than DSmT+PCR6; in particular, when the numbers of sources and focal elements are large, the advantages of the method are remarkable.
- Published
- 2016
21. Multi-level interval-valued fuzzy concept lattices and their attribute reduction
- Author
-
Lifeng Li
- Subjects
Fuzzy classification ,Theoretical computer science ,02 engineering and technology ,Type-2 fuzzy sets and systems ,Defuzzification ,Fuzzy logic ,Artificial Intelligence ,020204 information systems ,Fuzzy mathematics ,0202 electrical engineering, electronic engineering, information engineering ,Fuzzy number ,Fuzzy set operations ,020201 artificial intelligence & image processing ,Fuzzy associative matrix ,Computer Vision and Pattern Recognition ,Algorithm ,Software ,Mathematics - Abstract
The paper introduces the multi-level interval-valued fuzzy concept lattices in an interval-valued fuzzy formal context. It introduces the notion of multi-level attribute reductions in an interval-valued fuzzy formal context and investigates related properties. In addition, the paper formulates a corresponding attribute reduction method by constructing a discernibility matrix and its associated Boolean function. The paper also proposes the multi-level granule representation in interval-valued fuzzy formal contexts.
- Published
- 2016
22. Minimum class variance support vector ordinal regression
- Author
-
Zengxi Huang, Jinrong Hu, and Xiaoming Wang
- Subjects
0301 basic medicine ,Ordinal data ,business.industry ,Generalization ,Contrast (statistics) ,02 engineering and technology ,Variance (accounting) ,Machine learning ,computer.software_genre ,Ordinal regression ,Support vector machine ,Ordinal optimization ,03 medical and health sciences ,030104 developmental biology ,Artificial Intelligence ,Kernelization ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,computer ,Algorithm ,Software ,Mathematics - Abstract
The support vector ordinal regression (SVOR) method is derived from the support vector machine and was developed to tackle ordinal regression problems. However, it ignores the distribution characteristics of the data. In this paper, we propose a novel method to handle ordinal regression problems, referred to as minimum class variance support vector ordinal regression (MCVSVOR). In contrast with SVOR, MCVSVOR explicitly takes into account the distribution of the categories and achieves better generalization performance. Moreover, the problem of MCVSVOR can be transformed into one of SVOR; thus, existing SVOR software can be used to solve the MCVSVOR problem. In the paper, we first discuss the linear case of MCVSVOR and then develop the nonlinear MCVSVOR by using the kernel trick. Comprehensive experimental results show that the proposed method is effective and achieves better generalization performance than SVOR.
- Published
- 2016
23. Unscented Transformation based estimation of parameters of nonlinear models using heteroscedastic data
- Author
-
Tarunraj Singh and Ehsan Dehghan Niri
- Subjects
Heteroscedasticity ,Mahalanobis distance ,Mathematical optimization ,Estimation theory ,System identification ,Triangulation (social science) ,02 engineering and technology ,Covariance ,01 natural sciences ,010104 statistics & probability ,Transformation (function) ,Artificial Intelligence ,Signal Processing ,Metric (mathematics) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,0101 mathematics ,Algorithm ,Software ,Mathematics - Abstract
This paper addresses the issue of estimating the parameters of nonlinear models using heteroscedastic data. A weighted least squares problem formulation, where the sum of the Mahalanobis distances over all measurements is minimized, forms the framework of this paper. Determining the Mahalanobis distance requires the gradient of the cost function with respect to the noisy measurements, which can be computationally expensive and infeasible for models that are discontinuous. A derivative-free approach to determining the Mahalanobis distance as an error metric is proposed using the Unscented Transformation. The advantages of the proposed approach include a black-box evaluation of the gradient-weighted objective function, precluding the need for analytical gradients, and an improved estimation of the covariance. Numerical results for various applications, such as triangulation using radar measurements and ellipse and superellipse fitting, demonstrate the benefits of the proposed approach. Heteroscedastic data resulting from real X-ray images are also used to illustrate its potential. Highlights: Derivative-free approach to solving gradient-weighted least squares problems. Improved approximation of the geometric distance for curve fitting. Improved performance of the proposed technique demonstrated on three benchmarks. Monte Carlo simulations used to benchmark the reduction in bias of the estimated parameters. Relevant to model fitting problems with heteroscedastic data.
- Published
- 2016
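The building block named in the title is the standard unscented transform, which propagates a mean and covariance through a nonlinear function via sigma points. The sketch below is that textbook transform, not the paper's full weighted-least-squares estimator; kappa is the usual spread parameter.

```python
import numpy as np

def unscented_transform(h, mean, cov, kappa=1.0):
    # Deterministic sigma points from the Cholesky factor of (n+kappa)*cov.
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)
    sigma = [mean] + [mean + L[:, i] for i in range(n)] \
                   + [mean - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    Y = np.array([h(s) for s in sigma])    # push each sigma point through h
    y_mean = w @ Y
    d = Y - y_mean
    y_cov = (w[:, None] * d).T @ d
    return y_mean, y_cov
```

The transformed covariance is what lets a weighted least squares problem use Mahalanobis-type distances without differentiating h, which is the derivative-free property the abstract emphasizes.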
24. Relevance vector machines using weighted expected squared distance for ore grade estimation with incomplete data
- Author
-
Keyou You, Shiji Song, Xunan Zhang, Yukui Zhang, and Cheng Wu
- Subjects
business.industry ,Inverse ,Computational intelligence ,Regression analysis ,02 engineering and technology ,010502 geochemistry & geophysics ,Machine learning ,computer.software_genre ,01 natural sciences ,Weighting ,Relevance vector machine ,Artificial Intelligence ,Spatial reference system ,0202 electrical engineering, electronic engineering, information engineering ,Entropy (information theory) ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Algorithm ,computer ,Software ,0105 earth and related environmental sciences ,Mathematics ,Extreme learning machine - Abstract
Accurate ore grade estimation is crucial to mineral resources evaluation and exploration. In this paper, we consider the borehole data collected from the Solwara 1 deposit, where the hydrothermal sulfide ore body is quite complicated and the ore grade values are incomplete. To solve this estimation problem, the relevance vector machine (RVM) and the expected squared distance (ESD) algorithm are incorporated into one regression model. Moreover, we improve the ESD algorithm by weighting the attributes of the data set, and propose the weighted expected squared distance (WESD). In this paper, we uncover the symbiosis characteristics among different elements of the deposits by statistical analysis, which leads to estimating a certain metal based on the data of other elements instead of on geographical position. The proposed WESD-RVM features high sparsity and accuracy, as well as the capability of handling incomplete data. The effectiveness of the proposed model is demonstrated by comparing it with other estimation algorithms: the inverse distance weighted method and the Kriging algorithm, which utilize only geographical spatial coordinates as inputs; the extreme learning machine, which is unable to deal with incomplete data; and an ordinary ESD-based RVM regression model without the entropy-weighted distance. The experimental results show that the proposed WESD-RVM outperforms the other methods with considerable predictive and generalizing ability.
- Published
- 2016
25. Error bounds for joint detection and estimation of multiple unresolved target-groups
- Author
-
Xianghui Yuan, Feng Lian, Chongzhao Han, and Zhansheng Duan
- Subjects
020301 aerospace & aeronautics ,Mathematical optimization ,Applied Mathematics ,020206 networking & telecommunications ,02 engineering and technology ,Statistical power ,Euclidean distance ,Cardinality ,0203 mechanical engineering ,Computational Theory and Mathematics ,Artificial Intelligence ,Signal Processing ,Euclidean geometry ,Metric (mathematics) ,0202 electrical engineering, electronic engineering, information engineering ,Maximum a posteriori estimation ,A priori and a posteriori ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,Finite set ,Algorithm ,Mathematics - Abstract
An error bound for JDE of multiple unresolved target-groups is derived via RFS. The bound is based on the OSPA distance rather than the Euclidean distance. The bound is discussed for the special cases when the group number is known or is at most one. Three examples are presented to verify the effectiveness of the bound. According to random finite set (RFS) theory and the information inequality, this paper derives an error bound for joint detection and estimation (JDE) of multiple unresolved target-groups in the presence of clutter and missed detections. JDE here refers to determining the number of unresolved target-groups and estimating their states. To obtain the results of this paper, the states of the unresolved target-groups are first modeled as a multi-Bernoulli RFS. The point cluster model proposed by Mahler is used to describe the observation likelihood of each group. Then, the error metric between the true and estimated state sets of the groups is defined by the optimal sub-pattern assignment (OSPA) distance rather than the usual Euclidean distance. The maximum a posteriori detection and unbiased estimation criteria are used in deriving the bound. Finally, we discuss some special cases of the bound when the number of unresolved target-groups is known a priori or is at most one. Example 1 shows the variation of the bound with respect to the probability of detection and the clutter density. Example 2 verifies the effectiveness of the bound by indicating the performance limitations of the cardinalized probability hypothesis density and cardinality balanced multi-target multi-Bernoulli filters for unresolved target-groups. Example 3 compares the bound of this paper with the (single-sensor) bound of [4] for the case of JDE of a single unresolved target-group. At present, this paper only addresses the static JDE problem of multiple unresolved target-groups. Our future work will study the recursive extension of the bound proposed in this paper to filtering problems by considering the group state evolutions.
- Published
- 2016
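The bound is expressed in the optimal sub-pattern assignment (OSPA) metric. The following sketch computes the standard OSPA distance (Schuhmacher et al.) between two finite sets of state vectors, with cutoff c and order p, using SciPy's assignment solver; it illustrates the error metric itself, not the derived bound.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=10.0, p=2):
    """OSPA distance between two finite sets given as rows of X and Y."""
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m > n:                                    # keep m <= n (the metric is symmetric)
        X, Y, m, n = Y, X, n, m
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    D = np.minimum(D, c) ** p                    # cutoff distances
    row, col = linear_sum_assignment(D)          # optimal sub-pattern assignment
    cost = D[row, col].sum() + c ** p * (n - m)  # plus cardinality penalty
    return (cost / n) ** (1.0 / p)
```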
26. Generation of fiducial marker dictionaries using Mixed Integer Linear Programming
- Author
-
Rafael Muñoz-Salinas, F. J. Madrid-Cuevas, Rafael Medina-Carnicer, and Sergio Garrido-Jurado
- Subjects
0209 industrial biotechnology ,Mathematical optimization ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Binary number ,02 engineering and technology ,Square (algebra) ,020901 industrial engineering & automation ,Artificial Intelligence ,Robustness (computer science) ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,State (computer science) ,Error detection and correction ,Fiducial marker ,Algorithm ,Integer programming ,Pose ,Software ,Mathematics - Abstract
Square-based fiducial markers are one of the most popular approaches for camera pose estimation due to their fast detection and robustness. To maximize their error correction capabilities, an inner binary codification with a large inter-marker distance is required. This paper proposes two Mixed Integer Linear Programming (MILP) approaches to generate configurable square-based fiducial marker dictionaries maximizing their inter-marker distance (see the illustrative sketch after this entry). The first approach guarantees the optimal solution; however, it can only be applied to relatively small dictionaries and numbers of bits, since the computing times are too long for many situations. The second approach is an alternative formulation that obtains suboptimal dictionaries within a restricted time, achieving results that still significantly surpass the current state-of-the-art methods. Highlights: The paper proposes two methods to obtain fiducial markers based on the MILP paradigm. The first model guarantees optimality in terms of inter-marker distance. The second model generates suboptimal markers within a restricted time. The markers generated allow the correction of a larger number of erroneous bits.
- Published
- 2016
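The quantity the two MILP formulations maximize is the dictionary's inter-marker distance: the smallest Hamming distance between any two markers under the four 90-degree rotations, and between each marker and its own non-trivial rotations. A small sketch of that distance, under the assumption that markers are square binary numpy arrays:

```python
import numpy as np

def marker_distance(a, b):
    """Minimum Hamming distance between marker a and all rotations of b."""
    return min(int(np.sum(a != np.rot90(b, k))) for k in range(4))

def dictionary_distance(markers):
    """Inter-marker distance of a dictionary: worst pairwise distance over
    rotations, and each marker's distance to its own non-trivial rotations."""
    pair = min((marker_distance(a, b)
                for i, a in enumerate(markers) for b in markers[i + 1:]),
               default=float("inf"))
    self_rot = min(min(int(np.sum(m != np.rot90(m, k))) for k in (1, 2, 3))
                   for m in markers)
    return min(pair, self_rot)
```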
27. A mean approximation based bidimensional empirical mode decomposition with application to image fusion
- Author
-
Jianjia Pan and Yuan Yan Tang
- Subjects
Image fusion ,Mathematical optimization ,Delaunay triangulation ,Applied Mathematics ,Computation ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Centroid ,020206 networking & telecommunications ,Image processing ,02 engineering and technology ,Hilbert–Huang transform ,Maxima and minima ,Computational Theory and Mathematics ,Artificial Intelligence ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Decomposition method (constraint satisfaction) ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,Algorithm ,Mathematics - Abstract
Empirical mode decomposition (EMD) is an adaptive decomposition method which is widely used in time-frequency analysis. As a bidimensional extension of EMD, bidimensional empirical mode decomposition (BEMD) has many useful applications in image processing and computer vision. In this paper, we define the mean points in the BEMD 'sifting' process as the centroids of neighbouring extrema points in a Delaunay triangulation, and propose using this mean approximation instead of the envelope mean in 'sifting' (see the illustrative sketch after this entry). The proposed method improves the decomposition result and reduces the average computation time of the 'sifting' process. Furthermore, a BEMD-based image fusion approach is presented in this paper. Experimental results show that our method achieves more orthogonal and physically meaningful components and more effective results in the image fusion application. Highlights: We define the mean points in BEMD 'sifting' as centroids of neighbouring extrema points in a Delaunay triangulation. Mean approximation is used instead of the envelope mean in BEMD 'sifting'. The proposed method improves the decomposition result and reduces the average computation time of 'sifting'. A BEMD-based image fusion approach is proposed.
- Published
- 2016
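A rough sketch of the proposed mean estimate for one 'sifting' step: the image extrema are triangulated with Delaunay, each triangle contributes its centroid (mean position and mean value of its three extrema), and the centroids are interpolated back onto the grid. The neighborhood size, boundary fallback, and SciPy helpers are simplifying assumptions:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter
from scipy.spatial import Delaunay
from scipy.interpolate import griddata

def mean_surface(img, size=3):
    """Mean surface from centroids of Delaunay triangles of the extrema."""
    maxima = (img == maximum_filter(img, size))
    minima = (img == minimum_filter(img, size))
    ys, xs = np.nonzero(maxima | minima)
    pts = np.column_stack([xs, ys]).astype(float)
    vals = img[ys, xs].astype(float)
    tri = Delaunay(pts)
    cent = pts[tri.simplices].mean(axis=1)       # centroid of each triangle
    cval = vals[tri.simplices].mean(axis=1)      # mean of its three extrema
    gy, gx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    mean = griddata(cent, cval, (gx, gy), method='linear')
    return np.where(np.isnan(mean), img, mean)   # fall back outside the hull
```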
28. Minimum cost subgraph matching using a binary linear program
- Author
-
Sébastien Adam, Julien Lerouge, Pierre Héroux, Maroua Hammami, Equipe Apprentissage (DocApp - LITIS), Laboratoire d'Informatique, de Traitement de l'Information et des Systèmes (LITIS), Institut national des sciences appliquées Rouen Normandie (INSA Rouen Normandie), Institut National des Sciences Appliquées (INSA)-Normandie Université (NU)-Institut National des Sciences Appliquées (INSA)-Normandie Université (NU)-Université de Rouen Normandie (UNIROUEN), Normandie Université (NU)-Université Le Havre Normandie (ULH), Normandie Université (NU)-Institut national des sciences appliquées Rouen Normandie (INSA Rouen Normandie), and Normandie Université (NU)
- Subjects
Factor-critical graph ,Mathematical optimization ,Matching (graph theory) ,Linear programming ,Substitution (logic) ,Subgraph isomorphism problem ,Binary number ,02 engineering and technology ,01 natural sciences ,[INFO.INFO-TT]Computer Science [cs]/Document and Text Processing ,Artificial Intelligence ,0103 physical sciences ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Graph (abstract data type) ,020201 artificial intelligence & image processing ,Induced subgraph isomorphism problem ,Computer Vision and Pattern Recognition ,010306 general physics ,Algorithm ,Software ,MathematicsofComputing_DISCRETEMATHEMATICS ,Mathematics - Abstract
Minimum Cost Subgraph Matching (MCSM) is an adaptation of Graph Edit Distance. The paper proposes a Binary Linear Program that solves the MCSM problem. The proposed formulation is very general and can tackle a large range of graphs. MCSM is more efficient and faster than Substitution-Only Tolerant Subgraph Matching (SOTSM). This paper presents a binary linear program for the Minimum Cost Subgraph Matching (MCSM) problem. MCSM is an extension of the subgraph isomorphism problem in which the matching tolerates substitutions of attributes and modifications of the graph structure (see the illustrative sketch after this entry). The objective function proposed in the formulation can take into account rich attributes (e.g. vectors mixing nominal and numerical values) on both vertices and edges. Experimental results obtained on an application-dependent dataset concerning the spotting of symbols on technical drawings show that the approach achieves better performance than a previous approach which is only substitution-tolerant.
- Published
- 2016
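To show the general shape of such a formulation, here is a toy binary linear program (using the PuLP modeling library) for a cut-down version of the problem: pattern vertices must be mapped injectively into the target graph, and each pattern edge is either matched consistently with the vertex mapping or deleted at a fixed cost. Vertex insertions/deletions and the paper's richer attribute costs are omitted, so this is only a sketch of the idea, with costs given as nested dicts:

```python
import pulp

def mcsm_blp(V1, E1, V2, E2, cv, ce, cdel):
    """Toy BLP: x[i,k] maps pattern vertex i to target vertex k, y[e,f] maps
    pattern edge e to target edge f; unmatched pattern edges pay cdel[e]."""
    prob = pulp.LpProblem("MCSM", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", [(i, k) for i in V1 for k in V2], cat="Binary")
    y = pulp.LpVariable.dicts("y", [(e, f) for e in E1 for f in E2], cat="Binary")
    prob += (pulp.lpSum(cv[i][k] * x[i, k] for i in V1 for k in V2)
             + pulp.lpSum(ce[e][f] * y[e, f] for e in E1 for f in E2)
             + pulp.lpSum(cdel[e] * (1 - pulp.lpSum(y[e, f] for f in E2))
                          for e in E1))
    for i in V1:                                   # each pattern vertex mapped once
        prob += pulp.lpSum(x[i, k] for k in V2) == 1
    for k in V2:                                   # target vertices used at most once
        prob += pulp.lpSum(x[i, k] for i in V1) <= 1
    for (i, j) in E1:
        prob += pulp.lpSum(y[(i, j), f] for f in E2) <= 1  # matched or deleted
        for (k, l) in E2:                          # edge map follows vertex map
            prob += y[(i, j), (k, l)] <= x[i, k]
            prob += y[(i, j), (k, l)] <= x[j, l]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {(i, k) for (i, k) in x if x[i, k].value() > 0.5}
```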
29. Ensemble clustering using factor graph
- Author
-
Jian-Huang Lai, Dong Huang, and Chang-Dong Wang
- Subjects
Mathematical optimization ,Fuzzy clustering ,Optimization problem ,Correlation clustering ,Constrained clustering ,020206 networking & telecommunications ,02 engineering and technology ,Determining the number of clusters in a data set ,Artificial Intelligence ,Signal Processing ,Consensus clustering ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Cluster analysis ,Algorithm ,Software ,k-medians clustering ,Mathematics - Abstract
In this paper, we propose a new ensemble clustering approach termed ensemble clustering using factor graph (ECFG). Compared to existing approaches, our approach has three main advantages: (1) the cluster number is obtained automatically and need not be specified in advance; (2) the reliability of each base clustering can be estimated in an unsupervised manner and exploited in the consensus process; (3) our approach is efficient for processing ensembles with large data sizes and large ensemble sizes. We introduce the concept of the super-object, which serves as a compact and adaptive representation of the ensemble data and significantly facilitates the computation. Through a probabilistic formulation, we cast the ensemble clustering problem into a binary linear programming (BLP) problem, which is NP-hard. To solve this optimization problem, we propose an efficient solver based on factor graphs. The constrained objective function is represented as a factor graph, and max-product belief propagation is utilized to generate a solution that is insensitive to initialization and converges to a neighborhood maximum. Extensive experiments are conducted on multiple real-world datasets, demonstrating the effectiveness and efficiency of our approach against state-of-the-art approaches. Highlights: Introduce the super-object representation to facilitate the consensus process. Probabilistically formulate the ensemble clustering problem as a BLP problem. Propose an efficient factor-graph-based solver for the BLP problem. The cluster number of the consensus clustering is estimated automatically. Our method achieves state-of-the-art performance in effectiveness and efficiency.
- Published
- 2016
30. Kernel subspace pursuit for sparse regression
- Author
-
Ioannis N. Psaromiligkos and Jad Kabbara
- Subjects
business.industry ,020206 networking & telecommunications ,Pattern recognition ,010103 numerical & computational mathematics ,02 engineering and technology ,01 natural sciences ,Kernel principal component analysis ,Kernel method ,Artificial Intelligence ,Kernel embedding of distributions ,Polynomial kernel ,Variable kernel density estimation ,Kernel (statistics) ,Signal Processing ,Radial basis function kernel ,0202 electrical engineering, electronic engineering, information engineering ,Computer Vision and Pattern Recognition ,Artificial intelligence ,0101 mathematics ,Tree kernel ,business ,Algorithm ,Software ,Mathematics - Abstract
This paper introduces a kernel version of the Subspace Pursuit algorithm. The proposed method, KSP, is a new iterative method for sparse regression. KSP outperforms, and is less computationally intensive than, related kernel methods. Recently, results from sparse approximation theory have been considered as a means to improve the generalization performance of kernel-based machine learning algorithms. In this paper, we present Kernel Subspace Pursuit (KSP), a new method for sparse non-linear regression. KSP is a low-complexity method that iteratively approximates target functions in the least-squares sense as a linear combination of a limited number of elements selected from a kernel-based dictionary (see the illustrative sketch after this entry). Unlike other kernel methods, by virtue of KSP's algorithmic design, the number of KSP iterations needed to reach the final solution depends neither on the number of basis functions used nor on the number of elements in the dictionary. We experimentally show that, in many scenarios involving learning synthetic and real data, KSP is computationally less complex and outperforms other kernel methods that solve the same problem, namely Kernel Matching Pursuit and Kernel Basis Pursuit.
- Published
- 2016
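A simplified sketch of the kernel subspace-pursuit idea, following the generic subspace-pursuit template (grow the support to 2s candidates, least-squares fit, prune back to s atoms) applied to the columns of a kernel Gram matrix. Initialization and stopping details of the authors' exact variant may differ:

```python
import numpy as np

def ksp(K, y, s, iters=10):
    """Simplified kernel subspace pursuit: K is the n x n kernel (Gram)
    matrix, y the targets, s the desired number of dictionary atoms."""
    support = np.argsort(np.abs(K.T @ y))[-s:]          # initial support
    for _ in range(iters):
        coef = np.linalg.lstsq(K[:, support], y, rcond=None)[0]
        r = y - K[:, support] @ coef                    # current residual
        cand = np.union1d(support, np.argsort(np.abs(K.T @ r))[-s:])
        coef = np.linalg.lstsq(K[:, cand], y, rcond=None)[0]
        new = cand[np.argsort(np.abs(coef))[-s:]]       # prune back to s atoms
        if set(new) == set(support):
            break                                       # support has stabilized
        support = new
    coef = np.linalg.lstsq(K[:, support], y, rcond=None)[0]
    return support, coef
```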
31. Coherent summation of multiple short-time signals for direct positioning of a wideband source based on delay and Doppler
- Author
-
Le Yang, Jinzhou Li, Wenli Jiang, and Fucheng Guo
- Subjects
0209 industrial biotechnology ,02 engineering and technology ,law.invention ,symbols.namesake ,020901 industrial engineering & automation ,Artificial Intelligence ,Robustness (computer science) ,law ,Statistics ,0202 electrical engineering, electronic engineering, information engineering ,Waveform ,Electrical and Electronic Engineering ,Radar ,Wideband ,Mathematics ,Applied Mathematics ,Transmitter ,Estimator ,020206 networking & telecommunications ,Computational Theory and Mathematics ,Signal Processing ,symbols ,Computer Vision and Pattern Recognition ,Statistics, Probability and Uncertainty ,Doppler effect ,Cramér–Rao bound ,Algorithm - Abstract
We consider identifying the source position directly from the received source signals. This direct position determination (DPD) approach has been shown to be superior to the conventional two-step localization technique, where signal measurements are extracted first and the source position is then estimated from them, in terms of estimation accuracy and robustness to low signal-to-noise ratios (SNRs). This paper investigates the localization of a wideband source, such as a communication transmitter or a radar, whose signal should be considered deterministic. Both passive and active localization scenarios, corresponding to the source signal waveform being unknown or known respectively, are studied. In both cases, the source signal received at each receiver is partitioned into multiple non-overlapping short-time signal segments for the DPD task. This paper proposes the use of coherent summation, which takes into account the coherency among the short-time signals received at the same receiver. The study begins by deriving the Cramer-Rao lower bounds (CRLBs) of the source position under coherent summation-based and non-coherent summation-based DPD. Interestingly, we show analytically that with coherent summation, the localization accuracy of DPD improves as the time interval between two short-time signals increases. This paper also develops approximate maximum likelihood (ML) estimators for DPD with coherent and non-coherent summations. The CRLB results and the performance of the proposed source position estimators are illustrated via simulations.
- Published
- 2016
32. Fast and exact unidimensional L2–L1 optimization as an accelerator for iterative reconstruction algorithms
- Author
-
Daniel R. Pipa, Alvaro R. De Pierro, and Marcelo V. W. Zibetti
- Subjects
Deblurring ,Line search ,Signal reconstruction ,Applied Mathematics ,020206 networking & telecommunications ,02 engineering and technology ,Iterative reconstruction ,Iteratively reweighted least squares ,Nonlinear conjugate gradient method ,Compressed sensing ,Computational Theory and Mathematics ,Artificial Intelligence ,Search algorithm ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,Algorithm ,Mathematics - Abstract
This paper studies the use of fast and exact unidimensional L2-L1 minimization as a line search for accelerating iterative reconstruction algorithms. In L2-L1 minimization reconstruction problems, the squared Euclidean (L2) norm measures signal-data discrepancy and the L1 norm stands for a sparsity-preserving regularization term. Functionals such as these arise in important applications such as compressed sensing and deconvolution. Optimal unidimensional L2-L1 minimization has only recently been studied, by Li and Osher for denoising problems and by Wen et al. for line search. A fast L2-L1 optimization procedure can be adapted for line search and used in iterative algorithms, improving convergence speed with little increase in computational cost. This paper proposes a new method for exact L2-L1 line search and compares it with Li and Osher's and Wen et al.'s methods, as well as with a standard line search algorithm, the method of false position (see the illustrative sketch after this entry). The proposed line search improves the convergence speed of different iterative algorithms for L2-L1 reconstruction, such as iterative shrinkage, iteratively reweighted least squares, and nonlinear conjugate gradient. This assertion is validated experimentally in applications to signal reconstruction in compressed sensing and sparse signal deblurring.
- Published
- 2016
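The key primitive is the exact scalar minimization of phi(t) = 0.5*||A(x + t*p) - b||^2 + lam*||x + t*p||_1 along a search direction p. Since phi'(t) is nondecreasing and piecewise linear with breakpoints at t_i = -x_i / p_i, its zero crossing can be located exactly by scanning the sorted breakpoints. The sketch below is one such exact search (assuming ||Ap|| > 0 and at least one nonzero p_i), not necessarily the authors' bookkeeping:

```python
import numpy as np

def exact_l2l1_linesearch(A, b, lam, x, p):
    """Exact minimizer over t of 0.5||A(x+tp)-b||^2 + lam||x+tp||_1."""
    u = A @ p
    a = float(u @ u)                        # slope of the quadratic part
    c = float(u @ (A @ x - b))
    nz = p != 0
    ts = np.unique(-x[nz] / p[nz])          # sorted breakpoints of phi'

    def dphi(t):                            # phi'(t), with sign(0) = 0
        return a * t + c + lam * float(p @ np.sign(x + t * p))

    prev = ts[0] - 1.0
    if dphi(prev) >= 0:                     # crossing left of every breakpoint
        return -(c + lam * float(p @ np.sign(x + prev * p))) / a
    for t in ts:
        if dphi(t) >= 0:                    # crossing lies in (prev, t]
            mid = 0.5 * (prev + t)          # sign pattern constant on interval
            t_star = -(c + lam * float(p @ np.sign(x + mid * p))) / a
            return min(max(t_star, prev), t)  # clamp handles breakpoint jumps
        prev = t
    return -(c + lam * float(p @ np.sign(x + (prev + 1.0) * p))) / a
```

Inside an iterative-shrinkage or conjugate-gradient loop, this replaces an inexact backtracking step at the cost of one extra matrix-vector product.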
33. General relation-based variable precision rough fuzzy set
- Author
-
Weimin Ma, Eric C. C. Tsang, and Bingzhen Sun
- Subjects
0209 industrial biotechnology ,Fuzzy classification ,Fuzzy set ,Dominance-based rough set approach ,02 engineering and technology ,Type-2 fuzzy sets and systems ,computer.software_genre ,Fuzzy logic ,020901 industrial engineering & automation ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Fuzzy number ,Fuzzy set operations ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Rough set ,Data mining ,Algorithm ,computer ,Software ,Mathematics - Abstract
To handle real-valued data sets effectively in practice, it is valuable from both theoretical and practical standpoints to combine the fuzzy rough set and the variable precision rough set into one powerful tool: a model of a fuzzy variable precision rough set that can handle numerical data while being less sensitive to misclassification and perturbation. In this paper, we propose a new variable precision rough fuzzy set by introducing a variable precision parameter into the generalized rough fuzzy set, i.e., a variable precision rough fuzzy set based on a general relation. Using a constructive approach, we define the variable precision rough lower and upper approximations of any fuzzy set and of its level set with the variable precision parameter, and we present the properties of the proposed model in detail. Meanwhile, we establish the relationship between the variable precision rough approximation of a fuzzy set and the rough approximation of the level set of a fuzzy set. Furthermore, we give a new approach to the uncertainty measure for the variable precision rough fuzzy set established in this paper, in order to overcome the limitations of traditional methods. Finally, some numerical examples are used to illustrate the validity of the conclusions given in this paper.
- Published
- 2015
34. A new rotating machinery fault diagnosis method based on improved local mean decomposition
- Author
-
Yu Wei, Yongbo Li, Wenhu Huang, Zhao Haiyang, and Minqiang Xu
- Subjects
Rank (linear algebra) ,Applied Mathematics ,Bandwidth (signal processing) ,Fault (power engineering) ,Computational Theory and Mathematics ,Orthogonality ,Artificial Intelligence ,Hermite interpolation ,Moving average ,Signal Processing ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,Envelope (mathematics) ,Algorithm ,Smoothing ,Mathematics - Abstract
A demodulation technique based on improved local mean decomposition (LMD) is investigated in this paper. LMD depends heavily on the local mean and envelope estimate functions used in the sifting process. The moving average (MA) approach commonly used for this purpose suffers from several problems, such as step-size selection, inaccurate results, and long computation times. Aiming at these drawbacks of MA in the smoothing process, this paper proposes a new self-adaptive analysis algorithm called optimized LMD (OLMD). In the OLMD method, an alternative approach called rational Hermite interpolation is proposed to calculate the local mean and envelope estimate functions from the upper and lower envelopes of a signal. Meanwhile, a reasonable bandwidth criterion is introduced to select the optimum product function (OPF) from pre-OPFs derived from rational Hermite interpolation with different shape-controlling parameters at each rank. Subsequently, the orthogonality criterion (OC) is taken as the product function (PF) iteration stopping condition. The effectiveness of the OLMD method is validated by numerical simulations and by applications to gearbox and roller bearing fault diagnosis. Results demonstrate that the OLMD method has better fault identification capacity and is effective in rotating machinery fault diagnosis. Highlights: A novel time-frequency analysis method called OLMD is presented. OLMD can weaken the mode mixing problem in traditional LMD. The simulation and experimental results validate the reliability and feasibility of the proposed methodology.
- Published
- 2015
35. Fast algorithms of attribute reduction for covering decision systems with minimal elements in discernibility matrix
- Author
-
Ming Sun, Yanyan Yang, and Ze Dong
- Subjects
0209 industrial biotechnology ,Reduction (recursion theory) ,Relation (database) ,Computation ,Computational intelligence ,02 engineering and technology ,Matrix (mathematics) ,020901 industrial engineering & automation ,Artificial Intelligence ,Pattern recognition (psychology) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Rough set ,Algorithm ,Software ,Maximal element ,Mathematics - Abstract
Covering rough sets, which generalize traditional rough sets by considering coverings instead of partitions, are introduced to deal with set-valued, missing-valued, or real-valued data sets. For decision systems with such data sets, attribute reduction with covering rough sets aims to delete superfluous attributes, and fast algorithms for finding reducts are clearly meaningful for practical problems. In the existing study of attribute reduction with covering rough sets, the discernibility matrix approach is the theoretical foundation. However, it incurs a heavy computation load and large storage cost because all elements of the discernibility matrix must be found and stored. In this paper, we show that the minimal elements of the discernibility matrix alone are sufficient to find reducts. This fact motivates us to develop algorithms that find reducts by employing only the minimal elements, without computing the other elements of the discernibility matrix (see the illustrative sketch after this entry). We first define the relative discernible relation of a covering to characterize the relationship between minimal elements of the discernibility matrix and particular sample pairs in the covering decision system. Employing this relative discernible relation, we then develop algorithms that search for the minimal elements of the discernibility matrix and find reducts for the covering decision system. Finally, experimental comparisons with other existing covering rough set algorithms on several data sets demonstrate that the proposed algorithms can greatly reduce the running time of finding reducts.
- Published
- 2015
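The observation that minimal discernibility elements suffice is easiest to state for a plain partition-based decision table. The sketch below computes the discernibility element for every sample pair with different decisions, keeps only the set-inclusion-minimal ones, and runs a greedy cover over them; the covering-rough-set machinery of the paper is more general than this classical analogue:

```python
def minimal_discernibility_elements(X, d):
    """For each pair of rows of X with different decisions d, collect the set
    of attributes distinguishing them; keep only the minimal sets."""
    elems = set()
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            if d[i] != d[j]:
                e = frozenset(a for a in range(len(X[0])) if X[i][a] != X[j][a])
                if e:
                    elems.add(e)
    return {e for e in elems if not any(f < e for f in elems)}

def greedy_reduct(elems):
    """Greedy hitting set over the minimal elements: repeatedly pick the
    attribute occurring in the most uncovered elements."""
    reduct, uncovered = set(), set(elems)
    while uncovered:
        counts = {}
        for e in uncovered:
            for a in e:
                counts[a] = counts.get(a, 0) + 1
        best = max(counts, key=counts.get)
        reduct.add(best)
        uncovered = {e for e in uncovered if best not in e}
    return reduct
```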
36. Nonlinear and adaptive undecimated hierarchical multiresolution analysis for real valued discrete time signals via empirical mode decomposition approach
- Author
-
Charlotte Yuk-Fan Ho, Zhijing Yang, Weichao Kuang, Bingo Wing-Kuen Ling, and Qingyun Dai
- Subjects
Mathematical optimization ,Applied Mathematics ,Multiresolution analysis ,Filter (signal processing) ,Filter bank ,Discrete Fourier transform ,Hilbert–Huang transform ,Wavelet ,Computational Theory and Mathematics ,Artificial Intelligence ,Frequency domain ,Signal Processing ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,Algorithm ,Spectral leakage ,Mathematics - Abstract
Hierarchical multiresolution analysis is an important tool for the analysis of signals. Since this multiresolution representation provides a pyramid-like framework for representing signals, it can extract signal information effectively, level by level. On the other hand, a signal can be nonlinearly and adaptively represented as a sum of intrinsic mode functions (IMFs) via the empirical mode decomposition (EMD) algorithm. Nevertheless, as the IMFs are obtained only when the EMD algorithm converges, no further iterative sifting is performed when the EMD algorithm is applied to an IMF: the same IMF is returned, and further level decompositions of the IMFs cannot be obtained directly by the EMD algorithm. In other words, hierarchical multiresolution analysis cannot be performed via the EMD algorithm directly. This paper addresses this issue by performing a nonlinear and adaptive hierarchical multiresolution analysis based on the EMD algorithm via a frequency-domain approach. First, an IMF is expressed in the frequency domain by applying the discrete Fourier transform (DFT) to it. Next, zeros are inserted into the DFT sequence so that a conjugate-symmetric zero-padded DFT sequence is obtained. Then, the inverse discrete Fourier transform (IDFT) is applied to the zero-padded DFT sequence, giving a new signal in the time domain; the next-level IMFs can be obtained by applying the EMD algorithm to this signal. However, the lengths of these next-level IMFs are increased. To reduce these lengths, the DFT is applied to each next-level IMF, the DFT coefficients at the positions where the zeros were previously inserted are removed, and the IDFT is applied to the shortened DFT sequence, yielding the final set of next-level IMFs (see the illustrative sketch after this entry). It is shown in this paper that the original IMF can be perfectly reconstructed. Moreover, computer numerical simulation results show that our proposed method can reach a component with fewer levels of decomposition than conventional linear and nonadaptive wavelet and filter bank approaches. Also, as no filter is involved in our proposed method, no spectral leakage is introduced at any level of decomposition, whereas the wavelet and filter bank approaches can introduce significant leakage components at various levels of their decompositions.
- Published
- 2015
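The two frequency-domain steps, conjugate-symmetric zero insertion (to lengthen an IMF before re-running EMD) and removal of the inserted bins (to shorten the next-level IMFs again), can be sketched as follows; splitting the Nyquist bin for even lengths is the usual way to keep the padded spectrum conjugate symmetric:

```python
import numpy as np

def dft_upsample(x, L):
    """Insert zeros into the middle of the DFT and invert, stretching x by L."""
    N, M = len(x), L * len(x)
    X = np.fft.fft(x)
    Y = np.zeros(M, dtype=complex)
    if N % 2 == 0:
        Y[:N // 2] = X[:N // 2]
        Y[N // 2] = X[N // 2] / 2            # split the Nyquist bin ...
        Y[M - N // 2] = X[N // 2] / 2        # ... to keep conjugate symmetry
        Y[M - N // 2 + 1:] = X[N // 2 + 1:]
    else:
        h = (N + 1) // 2
        Y[:h] = X[:h]
        Y[M - (N - h):] = X[h:]
    return L * np.fft.ifft(Y).real

def dft_downsample(y, L):
    """Remove the inserted zero-bin positions again, shrinking y by L."""
    M = len(y)
    N = M // L
    Y = np.fft.fft(y)
    X = np.concatenate([Y[:(N + 1) // 2], Y[M - N // 2:]])
    if N % 2 == 0:
        X[N // 2] += Y[N // 2]               # re-merge the split Nyquist bin
    return np.fft.ifft(X).real / L
```

Round-tripping dft_downsample(dft_upsample(x, L), L) returns x up to numerical precision, matching the perfect-reconstruction claim.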
37. A multi-parameter regularization model for image restoration
- Author
-
Qibin Fan, Yuling Jiao, and Dandan Jiang
- Subjects
business.industry ,Regularization (mathematics) ,Effective algorithm ,Wavelet ,Control and Systems Engineering ,Signal Processing ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Multi parameter ,Algorithm ,Software ,Image restoration ,Mathematics - Abstract
This paper presents a new multi-parameter regularization model for image restoration (IR) based on total variation (TV) and wavelet frames (WF). On one hand, the Rudin-Osher-Fatemi (ROF) model, which uses TV as the regularization term, has proven very effective in preserving sharp edges and object boundaries, usually the most important features to recover. On the other hand, adaptively exploiting the regularity of natural images has led to successful WF approaches for IR. In this paper, we propose a novel model that combines these two approaches to restore images from blurry, noisy, and partial observations. Computationally, we use the alternating direction method of multipliers (ADMM) to solve the new model and provide its convergence analysis in the appendix. Numerical experiments on a set of IR benchmark problems show that the proposed model and algorithm outperform several state-of-the-art approaches in terms of restoration quality. Highlights: Based on total variation and wavelet frames, a multi-parameter regularization model for image restoration is proposed. An effective ADMM-based algorithm, TVframe, is given for solving the new model. Convergence analysis of the new algorithm is given. Numerical experiments show that TVframe outperforms several state-of-the-art image restoration approaches.
- Published
- 2015
38. Symbol coding of Laplacian distributed prediction residuals
- Author
-
Mortuza Ali and Manzur Murshed
- Subjects
Theoretical computer science ,Applied Mathematics ,Tunstall coding ,Variable-length code ,Adaptive predictive coding ,Huffman coding ,symbols.namesake ,Shannon–Fano coding ,Computational Theory and Mathematics ,Artificial Intelligence ,Signal Processing ,symbols ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,Algorithm ,Context-adaptive binary arithmetic coding ,Harmonic Vector Excitation Coding ,Mathematics ,Context-adaptive variable-length coding - Abstract
Predictive coding schemes proposed in the literature essentially model the residuals with discrete distributions. However, real-valued residuals can arise in predictive coding, for example from the use of an r-th order linear predictor specified by r real-valued coefficients. In this paper, we propose a symbol-by-symbol coding scheme for the Laplace distribution, which closely models the distribution of real-valued residuals in practice. To efficiently exploit the real-valued predictions at a given precision, the proposed scheme combines the processes of residual computation and coding, in contrast to conventional schemes that separate the two. In the context of the adaptive predictive coding framework, where the source statistics must be learnt from the data, the proposed scheme has the advantage of a lower 'model cost', as it involves learning only one parameter. We also analyze the proposed parametric coding scheme to establish the relationship between the optimal value of the coding parameter and the scale parameter of the Laplace distribution. Our experimental results demonstrate the compression efficiency and computational simplicity of the proposed scheme in adaptive coding of residuals against the widely used arithmetic coding, Rice-Golomb coding (see the illustrative sketch after this entry), and the Merhav-Seroussi-Weinberger scheme adopted in JPEG-LS.
- Published
- 2015
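For reference, a Rice-Golomb coder of the kind the proposed scheme is compared against: signed residuals are zigzag-mapped to non-negative integers, the quotient is coded in unary and the remainder in k bits. This is the baseline, not the paper's symbol-by-symbol scheme:

```python
def rice_encode(r, k):
    """Golomb-Rice code of one signed integer residual with parameter k."""
    n = 2 * r if r >= 0 else -2 * r - 1     # zigzag: 0,-1,1,-2,... -> 0,1,2,3,...
    q, rem = n >> k, n & ((1 << k) - 1)
    bits = '1' * q + '0'                    # unary quotient with terminator
    if k > 0:
        bits += format(rem, '0{}b'.format(k))
    return bits

def rice_decode(bits, k):
    """Inverse of rice_encode for a single codeword."""
    q = bits.index('0')                     # count the leading ones
    rem = int(bits[q + 1:q + 1 + k], 2) if k > 0 else 0
    n = (q << k) | rem
    return n // 2 if n % 2 == 0 else -(n + 1) // 2
```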
39. Human movement analysis around a view circle using time-order similarity distributions
- Author
-
Hui-Fen Chiang, Jun-Wei Hsieh, Yi-Da Chiou, and Chi-Hung Chuang
- Subjects
Similarity (geometry) ,Kullback–Leibler divergence ,Matching (graph theory) ,Property (programming) ,business.industry ,String searching algorithm ,Domain (software engineering) ,Dynamic programming ,Signal Processing ,Media Technology ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Hidden Markov model ,Algorithm ,Mathematics - Abstract
We propose a novel scheme for view-changeable action event analysis. A view alignment scheme is proposed for action analysis around a view circle. A mirror-symmetry property is used to reduce the whole view space. A novel time-order similarity distribution matrix is proposed for robust event analysis. This paper presents a new behavior classification system that analyzes human movements around a view circle using time-order similarity distributions. To maintain view invariance, an action is represented in both its spatial and temporal domains. A novel alignment scheme is then proposed to align each action to a fixed view. Given the best view, the task of behavior analysis becomes a string matching problem. One novel idea proposed in this paper is to code a posture using not only its best-matched key posture but also the other unmatched key postures, forming various similarity distributions. Recognizing two actions then becomes a problem of matching two time-order distributions, which can be solved very effectively by comparing their KL distance via a dynamic programming scheme (see the illustrative sketch after this entry).
- Published
- 2015
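One plausible realization of "matching two time-order distributions via dynamic programming with a KL cost" is a DTW-style recursion over the two sequences of similarity distributions; the recursion below is illustrative, not necessarily the authors' exact one:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete similarity distributions."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def match_actions(P, Q):
    """Align two time-ordered sequences of distributions (rows of P and Q)
    with a DTW-style dynamic program under a KL cost."""
    m, n = len(P), len(Q)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            c = kl(P[i - 1], Q[j - 1])
            D[i, j] = c + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[m, n]
```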
40. An improved global lower bound for graph edit similarity search
- Author
-
Karam Gouda and Mona M. Arafa
- Subjects
Discrete mathematics ,Comparability graph ,Strength of a graph ,Upper and lower bounds ,Artificial Intelligence ,Signal Processing ,Computer Vision and Pattern Recognition ,Graph property ,Graph operations ,Null graph ,Lattice graph ,Algorithm ,Software ,Complement graph ,Mathematics - Abstract
New global lower bound on the edit distance between graphs. An efficient preliminary filter for similarity search in graph databases. An almost-for-free improvement on the previous global lower bounds. The new bound is at least as tight as the previous global ones. Experiments show the effectiveness of the new bound. Graph similarity search retrieves data graphs that are similar to a given query graph and has become an essential operation in many application areas. In this paper, we investigate the problem of graph similarity search with edit distance constraints. Existing solutions adopt the filter-and-verify strategy to speed up the search, where lower and upper bounds on the graph edit distance are employed as pruning and validation rules. The main problem with existing lower bounds is that they perform differently on different data graphs. An interesting group of lower bounds are the global counting ones: these bounds come almost for free and can be injected into any filtering methodology to work as preliminary filters (see the illustrative sketch after this entry). In this paper, we present an improvement upon these bounds without adding any computational overhead. We show that the new bound is tighter than the previous global ones except for a few cases in which they evaluate identically. Via experiments, we show how the new bound, when incorporated into previous lower bounding methods, increases performance significantly.
- Published
- 2015
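Global counting bounds have the following general shape: any vertex or edge of one graph whose label has no partner in the other graph costs at least one edit operation, as does any surplus in size. A sketch under unit edit costs, with graphs given as dicts of label lists (a hypothetical representation); the paper's improved bound refines this with a finer case analysis at no extra cost:

```python
from collections import Counter

def counting_lower_bound(g1, g2):
    """Label-multiset counting lower bound on the graph edit distance.
    g1, g2 are dicts with 'vlabels' and 'elabels' lists (assumed encoding)."""
    vl1, vl2 = Counter(g1['vlabels']), Counter(g2['vlabels'])
    el1, el2 = Counter(g1['elabels']), Counter(g2['elabels'])
    common_v = sum((vl1 & vl2).values())      # matchable vertex labels
    common_e = sum((el1 & el2).values())      # matchable edge labels
    v_cost = max(sum(vl1.values()), sum(vl2.values())) - common_v
    e_cost = max(sum(el1.values()), sum(el2.values())) - common_e
    return v_cost + e_cost
```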
41. Near optimum OSIC-based ML algorithm in a quantized space for LTE-A downlink physical layer
- Author
-
Sayed El-Rabaie and Mohamed G. El-Mashed
- Subjects
Propagation of uncertainty ,Computational complexity theory ,Applied Mathematics ,MIMO ,Physical layer ,LTE Advanced ,Computational Theory and Mathematics ,Dimension (vector space) ,Artificial Intelligence ,Signal Processing ,Telecommunications link ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,Algorithm ,Mathematics ,Block (data storage) - Abstract
In this paper, we propose a scalable, implementation-efficient OSIC-based ML algorithm in a quantized space with higher performance for MIMO detection, which can be applied to the LTE-A downlink physical layer. It is characterized by dividing the overall OSIC detector into small-dimension blocks to reduce complexity. The proposed algorithm uses an ML search in a quantized space to detect the first data streams and overcome the error propagation problem; it then applies small-dimension OSIC blocks to detect the remaining data streams (see the illustrative sketch after this entry). The mathematical analysis is illustrated and derived. This paper reports the BER performance of the proposed algorithm and compares it with other algorithms. It also analyzes the computational complexity, showing that the proposed algorithm attains performance close to the optimal ML algorithm at lower complexity. Simulation results show that the proposed algorithm provides better performance and lower BER values than the OSIC algorithm. Finally, the proposed algorithm enhances detection in the LTE-A system and gives results close to the optimum ML.
- Published
- 2015
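For orientation, the OSIC backbone that the proposal builds on looks as follows: repeatedly null the remaining streams with a zero-forcing pseudo-inverse, detect the stream with the best post-detection SNR, slice it to the nearest constellation point, and cancel it from the received vector. The paper replaces the hard slicing of the first stream(s) with an ML search over a quantized space, which is not shown here:

```python
import numpy as np

def osic_detect(H, y, constellation):
    """Plain zero-forcing OSIC: H is the channel matrix, y the received
    vector, constellation a complex array of candidate symbols."""
    y = y.astype(complex).copy()
    remaining = list(range(H.shape[1]))
    s_hat = np.zeros(H.shape[1], dtype=complex)
    while remaining:
        W = np.linalg.pinv(H[:, remaining])
        k = int(np.argmin(np.sum(np.abs(W) ** 2, axis=1)))  # best post-SNR stream
        z = W[k] @ y
        s = constellation[np.argmin(np.abs(constellation - z))]  # hard slice
        idx = remaining.pop(k)
        s_hat[idx] = s
        y -= H[:, idx] * s                     # cancel the detected stream
    return s_hat
```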
42. SPiraL Aggregation Map (SPLAM): A new descriptor for robust template matching with fast algorithm
- Author
-
Huang-Chia Shih and Kuan-Chun Yu
- Subjects
Pixel ,business.industry ,Template matching ,Search engine indexing ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Projection model ,Artificial Intelligence ,Signal Processing ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Invariant (mathematics) ,Image warping ,business ,Algorithm ,Software ,Parametric statistics ,Mathematics - Abstract
This paper describes a robust template matching algorithm that handles rotation-scaling-translation (RST) variations via our proposed SPiraL Aggregation Map (SPLAM), a novel image warping scheme. It not only provides an efficient method for generating the desired projection profiles for matching, but also enables us to determine the rotation angle, and it is invariant to scale changes. Compared to other model-based methods, the proposed spiral projection model (SPM) provides the structural and statistical information about the template in a more general and easier-to-comprehend format. The SPM is a model-based texture-description scheme that enables the simultaneous representation of each value of the projection profile. The profile, a set of parametric projection values indexed by angle, is aggregated from a group of spiral sampling pixels (see the illustrative sketch after this entry). The experimental evaluation shows that the algorithm achieves very attractive results. Highlights: This paper describes a robust and fast template matching algorithm. We introduce a novel image warping scheme called the SPiraL Aggregation Map (SPLAM). This descriptor is capable of expressing the template with high-order texture characteristics.
- Published
- 2015
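A toy version of an angle-indexed spiral profile: gray values are sampled along an Archimedean spiral around the template center and averaged per angular bin, so that rotating the image circularly shifts the profile. The spiral parameters and the plain mean aggregation are illustrative assumptions, not the exact SPM construction:

```python
import numpy as np

def spiral_profile(img, center, r_max, n_bins=360, turns=30):
    """Aggregate pixels along an Archimedean spiral into an angular profile."""
    cy, cx = center
    t = np.linspace(0.0, 2 * np.pi * turns, 20000)
    r = r_max * t / t[-1]                           # radius grows linearly
    ys = np.clip((cy + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip((cx + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    bins = ((t % (2 * np.pi)) / (2 * np.pi) * n_bins).astype(int) % n_bins
    profile = np.zeros(n_bins)
    counts = np.bincount(bins, minlength=n_bins)
    np.add.at(profile, bins, img[ys, xs].astype(float))
    return profile / np.maximum(counts, 1)          # mean value per angular bin
```

Matching a rotated instance then reduces to finding the circular shift that best correlates two such profiles.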
43. A Maximum Entropy inspired model for the convolutional noise PDF
- Author
-
Monika Pinchas and Adiel Freiman
- Subjects
Blind deconvolution ,Applied Mathematics ,Gaussian ,Speech recognition ,Noise ,symbols.namesake ,Signal-to-noise ratio ,Computational Theory and Mathematics ,Artificial Intelligence ,Gaussian noise ,Signal Processing ,symbols ,Computer Vision and Pattern Recognition ,Deconvolution ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,Algorithm ,Quadrature amplitude modulation ,Linear filter ,Mathematics - Abstract
In this paper we consider a blind adaptive deconvolution problem in which we observe the output of an unknown linear system (channel) from which we want to recover its input using an adaptive blind equalizer (adaptive linear filter). Since the channel coefficients are unknown, the optimal equalizer coefficients are also unknown. Thus, the equalizer coefficients used in the deconvolution process are only approximated values, leading to an error signal in addition to the source signal at the output of the deconvolution process. Throughout the paper we call this error signal the convolutional noise. It is well known that the convolutional noise probability density function (pdf) is not Gaussian at the early stages of the deconvolution process; only at the later stages does it tend to be approximately Gaussian. Despite this knowledge, the convolutional noise pdf was until recently modeled as a Gaussian pdf, both because this simplifies the Bayesian calculations when carrying out the conditional expectation of the source input given the equalized (deconvolved) output and because no other model had been suggested. Recently, the same author suggested a new model for the convolutional noise pdf based on the Edgeworth expansion series; that model leads to improved deconvolution performance for the 16 Quadrature Amplitude Modulation (QAM) input at a signal-to-noise ratio (SNR) of 30 dB. The question that arises is whether another model for the convolutional noise pdf can also give improved deconvolution performance compared to the Gaussian model. In this paper, we propose a new model for the convolutional noise pdf inspired by the Maximum Entropy density approximation technique. We derive the relevant Lagrange multipliers and obtain, as a by-product, new closed-form approximated expressions for the conditional expectation and mean square error (MSE). Simulation results indicate that, from the residual ISI point of view for the 16QAM input, improved system performance is obtained with our new model for the convolutional noise pdf compared to the Gaussian model or the Edgeworth expansion series. For two other chosen input sources, a faster convergence rate is observed with the algorithm using our new model compared to the Maximum Entropy and Godard's algorithms.
- Published
- 2015
44. Improving bipartite graph edit distance approximation using various search strategies
- Author
-
Kaspar Riesen and Horst Bunke
- Subjects
Matching (graph theory) ,Butterfly graph ,law.invention ,Distance matrix ,Artificial Intelligence ,law ,Signal Processing ,Line graph ,Edit distance ,Computer Vision and Pattern Recognition ,Adjacency matrix ,Graph operations ,Lattice graph ,Algorithm ,Software ,Mathematics - Abstract
Recently, the authors of the present paper introduced an approximation framework for the graph edit distance problem. The basic idea of this approximation is to first build a square cost matrix C = (c_ij), where each entry c_ij reflects the cost of a node substitution, deletion, or insertion plus the matching cost arising from the local edge structure. Based on C, an optimal assignment of the nodes and their local structure can be established in polynomial time (see the illustrative sketch after this entry). Since this approach considers only local, rather than global, structural properties of the graphs, the graph edit distance derived from the optimal node assignment generally overestimates the true edit distance. The present paper pursues the idea of applying additional search strategies that build upon the initial assignment in order to reduce this overestimation. To this end, six different search strategies are investigated. In an exhaustive experimental evaluation on five real-world graph data sets, we empirically verify a substantial gain in distance accuracy for all search methods while the run time remains remarkably low. Highlights: We show how the well-known bipartite graph edit distance approximation can be substantially improved with respect to distance accuracy. We introduce and compare six different methodologies for extending the graph matching framework. We empirically verify a substantial gain in distance accuracy for all methods while run time remains remarkably low. The benefit of the improved distance quality is also verified in a clustering application.
- Published
- 2015
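The initial assignment step of this framework is easy to sketch: build the (n+m) x (n+m) cost matrix with substitutions in the top-left block and deletions/insertions on diagonals, then solve it with the Hungarian algorithm. The entries are assumed to already include the local edge-structure costs; the six search strategies of the paper then post-process the resulting assignment:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def bipartite_ged(c_sub, c_del, c_ins):
    """Initial node assignment of the bipartite GED approximation.
    c_sub: n x m substitution costs, c_del: n deletion costs,
    c_ins: m insertion costs (all including local edge costs)."""
    n, m = c_sub.shape
    big = 1e9                                    # forbidden pairings
    C = np.full((n + m, n + m), big)
    C[:n, :m] = c_sub
    C[n:, m:] = 0.0                              # dummy-to-dummy is free
    for i in range(n):
        C[i, m + i] = c_del[i]                   # delete node i of g1
    for j in range(m):
        C[n + j, j] = c_ins[j]                   # insert node j of g2
    rows, cols = linear_sum_assignment(C)
    return C[rows, cols].sum(), list(zip(rows, cols))
```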
45. The theoretical analysis for an iterative envelope algorithm
- Author
-
Lihua Yang, Lijun Yang, and Zhihua Yang
- Subjects
Iterative and incremental development ,Applied Mathematics ,MathematicsofComputing_NUMERICALANALYSIS ,Monotone cubic interpolation ,Exponential function ,Monotone polygon ,Computational Theory and Mathematics ,Rate of convergence ,Artificial Intelligence ,Signal Processing ,Piecewise ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Statistics, Probability and Uncertainty ,Spline interpolation ,Algorithm ,ComputingMethodologies_COMPUTERGRAPHICS ,Mathematics ,Envelope (motion) - Abstract
Cubic splines interpolating the local maximal/minimal points are often employed to approximate the envelopes of a signal. However, undershoots occur frequently in cubic spline envelopes. To improve on this, in our previous paper we proposed a new envelope algorithm, an iterative process using Monotone Piecewise Cubic Interpolation, and experiments showed very satisfying results; however, the theoretical analysis of why and how it works well was not given there. This paper establishes the theoretical foundation for the algorithm (see the illustrative sketch after this entry). We study the structure of undershoots and prove rigorously that the algorithm converges, with an exponential rate of convergence, to an envelope without undershoots; this result can be used to determine the number of iterations the algorithm needs to produce a good envelope in applications.
- Published
- 2015
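A compact sketch of the kind of iterative envelope algorithm analyzed here: fit a monotone piecewise cubic (PCHIP) interpolant through the local maxima, then repeatedly add every sample where the signal still exceeds the current curve as a new knot and refit. Stopping rule and endpoint handling are simplified assumptions:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator
from scipy.signal import argrelextrema

def iterative_upper_envelope(t, x, max_iter=20):
    """Iteratively refit a PCHIP curve through the maxima until no sample
    of the signal remains above it (no undershoot left)."""
    knots = set(argrelextrema(x, np.greater)[0]) | {0, len(x) - 1}
    for _ in range(max_iter):
        idx = np.array(sorted(knots))
        env = PchipInterpolator(t[idx], x[idx])(t)
        above = np.nonzero(x > env + 1e-12)[0]
        if len(above) == 0:
            break                       # envelope lies on or above the signal
        knots |= set(above.tolist())    # promote violating samples to knots
    return env
```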
46. A new extracting algorithm of k nearest neighbors searching for point clouds
- Author
-
Shengfeng Qin, Zisheng Li, Rong Li, and Guofu Ding
- Subjects
business.industry ,Computation ,Point cloud ,Pattern recognition ,k-nearest neighbors algorithm ,Set (abstract data type) ,Euclidean distance ,Artificial Intelligence ,Search algorithm ,Nearest-neighbor chain algorithm ,Signal Processing ,Point (geometry) ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Algorithm ,Software ,Mathematics - Abstract
We propose an extracting algorithm for k nearest neighbors searching. A vector inner product replaces distance calculation for distance comparison. The extracting algorithm can be integrated with any other algorithm as a plug-in. Two prominent algorithms and seven models are employed in the experiments. The proposed algorithm is released as open source and uses dynamic memory allocation. The k nearest neighbors (kNN) searching algorithm is widely used for finding the k nearest neighbors of each point in a point cloud model for noise removal and surface curvature computation. When the number of points and their density in a point cloud model increase significantly, the efficiency of a kNN searching algorithm becomes critical to various applications, so a better kNN approach is needed. To improve the efficiency of kNN searching, this paper develops a new strategy and a corresponding algorithm that reduce the number of target points in a given data set by extracting nearest neighbors before the search begins. The nearest neighbors of a reverse nearest neighborhood are used to extract nearest points of a query point, avoiding repetitive Euclidean distance calculations in the extraction process to save time and memory. For any point in the model, its initial nearest neighbors can be extracted from its reverse neighborhood using an inner product of two related vectors rather than direct Euclidean distance calculations and comparisons (see the illustrative sketch after this entry). The initial neighbors can be the full set or a partial set of all the nearest neighbors; if a partial set, the rest can be obtained by other fast searching algorithms, which can be integrated with the proposed approach. Experimental results show that integrating the extracting algorithm proposed in this paper with other excellent algorithms yields better performance than those algorithms alone.
- Published
- 2014
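The inner-product test that replaces explicit distance evaluations rests on the identity ||a - q||^2 - ||b - q||^2 = (a - b) . (a + b - 2q), so a single dot product decides which of two candidate points is nearer to the query. A minimal sketch (the paper's extraction procedure applies tests of this kind inside reverse neighborhoods):

```python
import numpy as np

def closer_to_query(a, b, q):
    """Return whichever of a, b is nearer to q, using one inner product
    instead of two squared-distance evaluations."""
    return a if np.dot(a - b, a + b - 2 * q) < 0 else b
```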
47. Fisher discrimination based low rank matrix recovery for face recognition
- Author
-
Daohong Xiang, Huawen Liu, Jie Yang, Jiong Jia, Zhonglong Zheng, Xiaoqiao Huang, and Mudan Yu
- Subjects
Signal processing ,business.industry ,Statistical learning ,Low-rank approximation ,Machine learning ,computer.software_genre ,Regularization (mathematics) ,Facial recognition system ,Matrix (mathematics) ,symbols.namesake ,Artificial Intelligence ,Lagrange multiplier ,Signal Processing ,symbols ,Fisher criterion ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,computer ,Algorithm ,Software ,Mathematics - Abstract
In this paper, we consider the problem of low rank (LR) matrix recovery with sparse errors. Building on the success of low rank matrix recovery in statistical learning, computer vision, and signal processing, a novel low rank matrix recovery algorithm with Fisher discrimination regularization (FDLR) is proposed. A standard low rank matrix recovery algorithm decomposes the original matrix into a set of representative bases with corresponding sparse errors for modeling the raw data. Motivated by the Fisher criterion, the proposed FDLR performs low rank matrix recovery in a supervised manner, i.e., taking the within-class scatter and between-class scatter into account when full label information is available. The paper shows that the formulated model can be solved by the augmented Lagrange multipliers method and provides additional discriminating power over standard low rank recovery models. The representative bases learned by the proposed method are encouraged to be closer within the same class and as far apart as possible between different classes. Meanwhile, the sparse error recovered by FDLR is not discarded as usual, but treated as feedback in the subsequent classification tasks. Numerical simulations demonstrate that the proposed algorithm achieves state-of-the-art results.
- Published
- 2014
48. Blind noisy mixture separation for independent/dependent sources through a regularized criterion on copulas
- Author
-
H. Fenniri, Abdelilah Hakim, M. El Rhabi, Abdelghani Ghazdali, Amor Keziou, Département Ingénierie Mathématique et Informatique, Ecole des Ponts ParisTech ( IMI ), Centre de Recherche en Sciences et Technologies de l'Information et de la Communication - EA 3804 ( CRESTIC ), Université de Reims Champagne-Ardenne ( URCA ), Laboratoire Ingénierie des Procédés et Optimisation des Systèmes Industriels, Ecole Nationale des Sciences Appliquées, Université Hassan I ( LIPOSI ), Laboratoire de Mathématiques Appliquées, Faculté des Sciences et Techniques, Université Cadi Ayyad ( LAMAI ), Département Ingénierie Mathématique et Informatique, École des Ponts ParisTech (IMI), École des Ponts ParisTech (ENPC), Centre de Recherche en Sciences et Technologies de l'Information et de la Communication - EA 3804 (CRESTIC), Université de Reims Champagne-Ardenne (URCA), Laboratoire Ingénierie des Procédés et Optimisation des Systèmes Industriels, Ecole Nationale des Sciences Appliquées, Université Hassan I (LIPOSI), and Laboratoire de Mathématiques Appliquées, Faculté des Sciences et Techniques, Université Cadi Ayyad (LAMAI)
- Subjects
Noisy instantaneous mixtures ,[ MATH ] Mathematics [math] ,[ INFO.INFO-TS ] Computer Science [cs]/Signal and Image Processing ,Noise reduction ,Copula (linguistics) ,02 engineering and technology ,01 natural sciences ,Blind signal separation ,010104 statistics & probability ,[ INFO.INFO-IT ] Computer Science [cs]/Information Theory [cs.IT] ,[INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing ,[MATH.MATH-ST]Mathematics [math]/Statistics [math.ST] ,0202 electrical engineering, electronic engineering, information engineering ,Copulas ,[ MATH.MATH-ST ] Mathematics [math]/Statistics [math.ST] ,0101 mathematics ,Electrical and Electronic Engineering ,[MATH]Mathematics [math] ,Mathematics ,Total variation ,business.industry ,Kullback-Leibler divergence between ,020206 networking & telecommunications ,Pattern recognition ,Mutual information ,ComputingMethodologies_PATTERNRECOGNITION ,Dependent source ,Control and Systems Engineering ,[INFO.INFO-IT]Computer Science [cs]/Information Theory [cs.IT] ,Kullback-Leibler divergence ,Signal Processing ,Blind source separation ,Computer Vision and Pattern Recognition ,Minification ,Artificial intelligence ,business ,Algorithm ,Software
The paper introduces a new method for Blind Source Separation (BSS) in noisy instantaneous mixtures of both independent and dependent source component signals. The approach is based on the minimization of a regularized criterion: it combines the total variation method for denoising with the Kullback-Leibler divergence between copula densities, the latter taking advantage of copulas to model the dependence structure between the signal components. The obtained algorithm achieves separation in noisy contexts where standard BSS methods fail. The efficiency and robustness of the proposed approach are illustrated by numerical simulations. Highlights: The paper provides a new blind source separation method for noisy mixtures of independent/dependent sources. The proposed approach combines TV regularization for denoising with separation by minimizing a TV-regularized Kullback-Leibler divergence between copulas. The efficiency and robustness of the proposed approach are illustrated by simulations.
- Published
- 2017
49. Analysis of Noisy Digital Contours with Adaptive Tangential Cover
- Author
-
Hayat Nasser, Bertrand Kerautret, Isabelle Debled-Rennesson, Phuc Ngo, Applying Discrete Algorithms to Genomics and Imagery (ADAGIO), Department of Algorithms, Computation, Image and Geometry (LORIA - ALGO), Laboratoire Lorrain de Recherche en Informatique et ses Applications (LORIA), Institut National de Recherche en Informatique et en Automatique (Inria)-Université de Lorraine (UL)-Centre National de la Recherche Scientifique (CNRS)-Institut National de Recherche en Informatique et en Automatique (Inria)-Université de Lorraine (UL)-Centre National de la Recherche Scientifique (CNRS)-Laboratoire Lorrain de Recherche en Informatique et ses Applications (LORIA), Institut National de Recherche en Informatique et en Automatique (Inria)-Université de Lorraine (UL)-Centre National de la Recherche Scientifique (CNRS)-Institut National de Recherche en Informatique et en Automatique (Inria)-Université de Lorraine (UL)-Centre National de la Recherche Scientifique (CNRS), and Institut National de Recherche en Informatique et en Automatique (Inria)-Université de Lorraine (UL)-Centre National de la Recherche Scientifique (CNRS)
- Subjects
Statistics and Probability ,Geometric analysis ,normal vectors ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,noise level ,[SCCO.COMP]Cognitive science/Computer science ,dominant points ,02 engineering and technology ,length contour estimator ,Digital image ,0202 electrical engineering, electronic engineering, information engineering ,concave/convex parts ,maximal blurred segment ,Computer vision ,Point (geometry) ,Mathematics ,ComputingMethodologies_COMPUTERGRAPHICS ,geometrical parameters ,business.industry ,Applied Mathematics ,Regular polygon ,Tangent ,Estimator ,020207 software engineering ,Condensed Matter Physics ,Noise ,Cover (topology) ,Modeling and Simulation ,tangent ,contour ,020201 artificial intelligence & image processing ,Geometry and Topology ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Algorithm
International audience; The notion of tangential cover, based on maximal segments, is a well-known tool for studying the geometrical characteristics of a discrete curve. However, it is not robust to noise, while contours extracted from digital images typically contain noise, which makes geometric analysis tasks on such contours difficult. To tackle this issue, we investigate in this paper a discrete structure, named the Adaptive Tangential Cover (ATC), which is based on the notion of tangential cover and on a local noise estimator. More specifically, the ATC is composed of maximal segments with different widths deduced from the local noise values estimated at each point of the contour. Furthermore, a parameter-free algorithm is presented to compute the ATC. This study leads to several applications of the ATC to noisy digital contours: dominant point detection, contour length estimation, tangent/normal estimation, and detection of convex and concave parts. An extension of the ATC to 3D curves is also proposed in this paper. The experimental results demonstrate the efficiency of this new notion.
- Published
- 2017
50. Gaussian Process Morphable Models
- Author
-
Christoph Jud, Thomas Vetter, Thomas Gerig, and Marcel Lüthi
- Subjects
FOS: Computer and information sciences ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,Image registration ,02 engineering and technology ,Machine learning ,computer.software_genre ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,symbols.namesake ,0302 clinical medicine ,Artificial Intelligence ,Active shape model ,0202 electrical engineering, electronic engineering, information engineering ,Gaussian process ,Mathematics ,business.industry ,Applied Mathematics ,Statistical model ,Image segmentation ,Spline (mathematics) ,Computational Theory and Mathematics ,Point distribution model ,symbols ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,computer ,Algorithm ,Software ,Shape analysis (digital geometry) - Abstract
Models of shape variations have become a central component for the automated analysis of images. An important class of shape models are point distribution models (PDMs). These models represent a class of shapes as a normal distribution of point variations, whose parameters are estimated from example shapes. Principal component analysis (PCA) is applied to obtain a low-dimensional representation of the shape variation in terms of the leading principal components. In this paper, we propose a generalization of PDMs, which we refer to as Gaussian Process Morphable Models (GPMMs). We model the shape variations with a Gaussian process, which we represent using the leading components of its Karhunen-Loeve expansion. To compute the expansion, we make use of an approximation scheme based on the Nystrom method. The resulting model can be seen as a continuous analog of a standard PDM (see the illustrative sketch after this entry). However, while for PDMs the shape variation is restricted to the linear span of the example data, with GPMMs we can define the shape variation using any Gaussian process. For example, we can build shape models that correspond to classical spline models and thus do not require any example data. Furthermore, Gaussian processes make it possible to combine different models. For example, a PDM can be extended with a spline model, to obtain a model that incorporates learned shape characteristics but is flexible enough to explain shapes that cannot be represented by the PDM. We introduce a simple algorithm for fitting a GPMM to a surface or image. This results in a non-rigid registration approach whose regularization properties are defined by a GPMM. We show how we can obtain different registration schemes, including methods for multi-scale or hybrid registration, by constructing an appropriate GPMM. As our approach strictly separates modeling from the fitting process, this is all achieved without changes to the fitting algorithm. To demonstrate the applicability and versatility of GPMMs, we perform a set of experiments in typical usage scenarios in medical image analysis and computer vision: the model-based segmentation of 3D forearm images and the building of a statistical model of the face. To complement the paper, we have made all our methods available as open source.
- Published
- 2017
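A minimal numerical sketch of the GPMM idea: eigendecompose a kernel matrix over the reference points (a dense stand-in for the Nystrom-approximated Karhunen-Loeve expansion), keep the leading components, and sample shapes as the reference plus random combinations of the eigen-deformations. Treating each coordinate independently with one scalar kernel is a simplification of the full model:

```python
import numpy as np

def gpmm_basis(points, kernel, rank):
    """Leading Karhunen-Loeve components of a GP over the reference points.
    Returns sqrt-eigenvalues and eigenvectors of the n x n kernel matrix."""
    K = kernel(points[:, None, :], points[None, :, :])   # n x n Gram matrix
    w, U = np.linalg.eigh(K)
    order = np.argsort(w)[::-1][:rank]                   # keep leading modes
    return np.sqrt(np.maximum(w[order], 0.0)), U[:, order]

def sample_shape(points, sqrt_w, U, rng):
    """Draw a random shape: standard-normal coefficients scale the
    eigen-deformations, applied per coordinate for simplicity."""
    alpha = rng.standard_normal((U.shape[1], points.shape[1]))
    return points + U @ (sqrt_w[:, None] * alpha)

# A Gaussian (squared-exponential) kernel as an example model choice:
gauss = lambda a, b: np.exp(-np.sum((a - b) ** 2, axis=-1) / (2 * 25.0 ** 2))
```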