321 results for "MINIMIZATION"
Search Results
2. AMG by element agglomeration and constrained energy minimization interpolation
- Author
- Vassilevski, P
- Published
- 2006
- Full Text
- View/download PDF
3. UNBIASED MOMENT-RATE SPECTRA AND ABSOLUTE SITE EFFECTS IN THE KACHCHH BASIN, INDIA, FROM THE ANALYSIS OF THE AFTERSHOCKS OF THE 2001 Mw 7.6 BHUJ EARTHQUAKE
- Author
- Akinci, A
- Published
- 2005
4. iTOUGH2 Universal Optimization Using the PEST Protocol
- Author
- Finsterle, S
- Published
- 2010
- Full Text
- View/download PDF
5. Linearized Functional Minimization for Inverse Modeling
- Author
- Dentz, Marco [Institute of Environmental Assessment and Water Research, Barcelona, Spain]
- Published
- 2012
6. The Determination of Pseudoscalar Meson Photoproduction Amplitudes from Complete Experiments
- Published
- 2011
- Full Text
- View/download PDF
7. Synthesis for Width Minimization in the Single-Electron Transistor Array.
- Author
- Liu, Chian-Wei, Chiang, Chang-En, Huang, Ching-Yi, Chen, Yung-Chih, Wang, Chun-Yao, Datta, Suman, and Narayanan, Vijaykrishnan
- Subjects
SINGLE electron transistors, ENERGY consumption, TRANSISTORS, BOOLEAN functions, AUTOMATION
- Abstract
Power consumption has become one of the primary challenges to meeting Moore's law. For reducing power consumption, the single-electron transistor (SET), operating at room temperature, has been demonstrated as a promising device for extending Moore's law due to its ultralow power consumption in operation. Previous works have proposed automated mapping approaches for SET arrays that focused on minimizing the number of hexagons in the SET arrays. However, the area of an SET array is the product of the bounded height and the bounded width, and the height usually equals the number of inputs in the Boolean function. Consequently, in this paper, we focus on width minimization to reduce the overall area in the mapping of SET arrays. Our approach combines product term minimization, branch-then-share (BTS)-aware variable reordering, SET array architecture relaxation, and BTS-aware product term reordering. The experimental results on a set of MCNC and IWLS 2005 benchmarks show that the proposed approach saves 45% of width compared with the work by Chiang et al., which focused on hexagon count minimization, and also saves 13% of width compared with the work by Chen et al., which focused on width minimization. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
8. Robust Visual Tracking Using Structurally Random Projection and Weighted Least Squares.
- Author
- Zhang, Shengping, Zhou, Huiyu, Jiang, Feng, and Li, Xuelong
- Subjects
LEAST squares, VARIANCE inflation factors (Statistics), MULTILEVEL models, RANDOM effects model, PROBABILITY theory
- Abstract
Sparse representation-based visual tracking approaches have attracted increasing interest in the community in recent years. The main idea is to linearly represent each target candidate using a set of target and trivial templates, while imposing a sparsity constraint onto the representation coefficients. After we obtain the coefficients using ℓ1-norm minimization methods, the candidate with the lowest error, when it is reconstructed using only the target templates and the associated coefficients, is considered as the tracking result. In spite of the promising performance widely reported, it is unclear if the performance of these trackers can be maximized. In addition, the computational complexity caused by the dimensionality of the feature space limits these algorithms in real-time applications. In this paper, we propose a real-time visual tracking method based on structurally random projection (RP) and weighted least squares (WLS) techniques. In particular, to enhance the discriminative capability of the tracker, we introduce background templates to the linear representation framework. To handle appearance variations over time, we relax the sparsity constraint using a WLS method to obtain the representation coefficients. To further reduce the computational complexity, structurally RP is used to reduce the dimensionality of the feature space, while preserving the pairwise distances between the data points in the feature space. Experimental results show that the proposed approach outperforms several state-of-the-art tracking methods. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
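As a rough illustration of the random-projection step described in entry 8, the sketch below uses a dense Gaussian projection as a stand-in for the structured projection in the paper; all sizes, the seed, and variable names are illustrative, not taken from the work:

```python
import numpy as np

rng = np.random.default_rng(0)

# High-dimensional feature vectors (e.g. flattened target candidates).
n, d, k = 50, 1024, 128
X = rng.standard_normal((n, d))

# Dense Gaussian random projection; the 1/sqrt(k) scaling makes pairwise
# distances approximately preserved (Johnson-Lindenstrauss flavor).
P = rng.standard_normal((k, d)) / np.sqrt(k)
Y = X @ P.T

def pairwise_dists(M):
    # Upper-triangular pairwise Euclidean distances.
    diff = M[:, None, :] - M[None, :, :]
    D = np.sqrt((diff ** 2).sum(-1))
    return D[np.triu_indices(len(M), 1)]

# Ratios of projected to original distances cluster around 1.
ratios = pairwise_dists(Y) / pairwise_dists(X)
```

The paper's structured projection is designed to achieve a similar distance-preservation property at lower cost than this dense matrix.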
9. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.
- Author
- Liu, Yuanyuan, Shang, Fanhua, Jiao, Licheng, Cheng, James, and Cheng, Hong
- Abstract
In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function, arriving at a convex combination of much smaller-scale matrix trace norm minimization problems. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
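The mode-n rank relation in entry 9 rests on tensor unfoldings. A minimal numpy sketch (shapes and names are hypothetical) of the mode-n unfolding and the trace norm of each unfolding:

```python
import numpy as np

rng = np.random.default_rng(0)

# A rank-1 third-order tensor: outer product of three vectors.
a, b, c = (rng.standard_normal(s) for s in (4, 5, 6))
T = np.einsum('i,j,k->ijk', a, b, c)

def unfold(tensor, mode):
    # Mode-n unfolding: mode-n fibers become the columns of a matrix.
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# Trace (nuclear) norm of each unfolding = sum of its singular values;
# for a rank-1 tensor every unfolding has a single nonzero singular value.
trace_norms = [np.linalg.svd(unfold(T, m), compute_uv=False).sum()
               for m in range(3)]
```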
10. An Interval Type-2 Neural Fuzzy Classifier Learned Through Soft Margin Minimization and its Human Posture Classification Application.
- Author
- Juang, Chia-Feng and Wang, Po-Hsuan
- Subjects
INTERVAL analysis, FUZZY systems, SUPPORT vector machines, LINEAR systems, MACHINE learning
- Abstract
This paper proposes an interval type-2 neural fuzzy classifier learned through soft margin minimization (IT2NFC-SMM) and applies it to human body posture classification. The IT2NFC-SMM consists of interval type-2, zero-order Takagi–Sugeno (T–S) fuzzy rules established through online structure learning. The antecedent part of the IT2NFC-SMM uses interval type-2 fuzzy sets to decrease the number of rules and manage noisy data. For parameter learning, the consequent parameters are learned through a linear support vector machine (SVM) for soft margin minimization to improve the generalization ability. The proposed SVM-based learning addresses the problem that the orders of the fuzzy rules in computing the outputs of an interval type-2 fuzzy system depend on the consequent values that are unknown in advance. To address this problem, the IT2NFC-SMM uses weighted bound-set boundaries to simplify the type-reduction operation and a novel crisp-to-interval linear SVM learning algorithm. Based on the soft margin minimization, the antecedent parameters are tuned using the gradient descent algorithm. The IT2NFC-SMM is applied to a vision-based human body posture classification system. The system uses two cameras and novel classification features extracted from a silhouette of the human body to classify the four postures of standing, bending, sitting, and lying. The classification performance of the IT2NFC-SMM is verified through results in clean and noisy classification examples and through the posture classification problem, as well as through comparisons with various type-1 and type-2 fuzzy classifiers. The overall result shows that the IT2NFC-SMM achieves higher classification rates with a smaller or similar model size than the classifiers used for comparison, especially for noisy classification problems. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
11. Preconditioning for Underdetermined Linear Systems with Sparse Solutions.
- Author
- Tsiligianni, Evaggelia, Kondi, Lisimachos P., and Katsaggelos, Aggelos K.
- Subjects
LINEAR systems, SPARSE approximations, COMPRESSED sensing, SIGNAL processing, SPARSE matrices
- Abstract
Performance guarantees for the algorithms deployed to solve underdetermined linear systems with sparse solutions are based on the assumption that the involved system matrix has the form of an incoherent unit norm tight frame. Learned dictionaries, which are popular in sparse representations, often do not meet the necessary conditions for signal recovery. In compressed sensing (CS), recovery rates have been improved substantially with optimized projections; however, these techniques do not produce binary matrices, which are more suitable for hardware implementation. In this paper, we consider an underdetermined linear system with sparse solutions and propose a preconditioning technique that yields a system matrix having the properties of an incoherent unit norm tight frame. While existing work in preconditioning concerns greedy algorithms, the proposed technique is based on recent theoretical results for standard numerical solvers such as BP and OMP. Our simulations show that the proposed preconditioning improves the recovery rates both in sparse representations and CS; the results for CS are comparable to optimized projections. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
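Entry 11's target structure, an incoherent unit norm tight frame, can be illustrated with a harmonic frame built from DFT rows; this is a textbook example, not the paper's preconditioning construction:

```python
import numpy as np

def coherence(A):
    # Mutual coherence: max |inner product| between distinct columns.
    G = np.abs(A.conj().T @ A)
    np.fill_diagonal(G, 0.0)
    return G.max()

# Harmonic frame: n rows of the N-point DFT matrix, rescaled so every
# column has unit norm -- a classic unit norm tight frame.
n, N = 8, 32
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(N)) / N) / np.sqrt(n)

gram = F @ F.conj().T          # tight frame property: F F^H = (N / n) I
col_norms = np.linalg.norm(F, axis=0)
mu = coherence(F)
```

A preconditioner in the spirit of the paper would map a given system matrix toward a matrix with these two properties (unit-norm columns, tight row Gram).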
12. Universality of the Local Marginal Polytope.
- Author
- Prusa, Daniel and Werner, Tomas
- Subjects
POLYTOPES, ENCODING, GRAPHICAL modeling (Statistics), MARKOV random fields, LINEAR programming
- Abstract
We show that solving the LP relaxation of the min-sum labeling problem (also known as MAP inference problem in graphical models, discrete energy minimization, or valued constraint satisfaction) is not easier than solving any linear program. Precisely, every polytope is linear-time representable by a local marginal polytope and every LP can be reduced in linear time to a linear optimization (allowing infinite costs) over a local marginal polytope. The reduction can be done (though with a higher time complexity) even if the local marginal polytope is restricted to have a planar structure. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
13. Distributed Energy Trading: The Multiple-Microgrid Case.
- Author
- Gregoratti, David and Matamoros, Javier
- Subjects
SMART power grids, ALGORITHMS, POWER resources, ELECTRIC power distribution, TOPOLOGY
- Abstract
In this paper, a distributed convex optimization framework is developed for energy trading between islanded microgrids. More specifically, the problem consists of several islanded microgrids that exchange energy flows by means of an arbitrary topology. Due to scalability issues and in order to safeguard local information on cost functions, a subgradient-based cost minimization algorithm that converges to the optimal solution in a practical number of iterations and with limited communication overhead is proposed. Furthermore, this approach allows for a very intuitive economics interpretation that explains the algorithm iterations in terms of a “supply–demand model” and “market clearing.” Numerical results are given in terms of the convergence rate of the algorithm and the attained costs for different network topologies. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
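The "supply-demand model" and "market clearing" reading in entry 13 can be sketched with a toy two-producer dual subgradient iteration; the quadratic costs, demand, and step size below are invented for illustration and are not the paper's algorithm:

```python
import numpy as np

# Two producers with private quadratic costs c_i(g) = a_i * g^2 must jointly
# supply demand D. A coordinator adjusts a price by the excess demand
# (a dual subgradient step); each agent replies with its own best response.
a = np.array([1.0, 2.0])     # private cost coefficients (hypothetical)
D = 10.0                     # total demand to be cleared
price, step = 0.0, 0.1

for _ in range(500):
    g = price / (2 * a)              # agent i maximizes price*g - a_i g^2
    price += step * (D - g.sum())    # "market clearing" price update

marginal_costs = 2 * a * g           # equal across agents at the optimum
```

Only the price and aggregate supply are exchanged, which mirrors how the paper's scheme safeguards local cost information.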
14. Sparsity-Based Recovery of Finite Alphabet Solutions to Underdetermined Linear Systems.
- Author
- Aissa-El-Bey, Abdeldjalil, Pastor, Dominique, Sbai, Si Mohamed Aziz, and Fadlallah, Yasser
- Subjects
LINEAR systems, PROBABILITY theory, CONVEX programming, SPARSE matrices, INFORMATION theory
- Abstract
We consider the problem of estimating a deterministic finite alphabet vector f from underdetermined measurements y = Af, where A is a given (random) n × N matrix. Two new convex optimization methods are introduced for the recovery of finite alphabet signals via ℓ1-norm minimization. The first method is based on regularization. In the second approach, the problem is formulated as the recovery of sparse signals after a suitable sparse transform. The regularization-based method is less complex than the transform-based one. When the alphabet size p equals 2 and (n, N) grows proportionally, the conditions under which the signal will be recovered with high probability are the same for the two methods. When p > 2, the behavior of the transform-based method is established. Experimental results support this theoretical result and show that the transform method outperforms the regularization-based one. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
15. On the State Minimization of Fuzzy Automata.
- Author
- Li, Lvzhou and Qiu, Daowen
- Subjects
FUZZY automata, EQUIVALENCE classes (Set theory), FINITE state machines, ARTIFICIAL neural networks, SEARCH algorithms
- Abstract
This paper investigates the minimization problem of fuzzy automata, aiming to obtain a procedure for finding a minimal state fuzzy automaton equivalent to a given one. The decision version of the minimization problem is as follows: Given a fuzzy automaton A and a natural number k, i.e., a pair ⟨A, k⟩, is there a k-state fuzzy automaton equivalent to A? We prove that the above problem is decidable for fuzzy automata over totally ordered lattices and then obtain a procedure for minimizing a given fuzzy automaton. To this end, we introduce the concept of systems of fuzzy polynomial equations, present a procedure for finding solutions of these systems and, finally, reduce the above decision problem to finding a solution of a system of fuzzy polynomial equations. It is worth pointing out that although some algorithms in the literature were claimed to be minimization algorithms, the term “minimization” there did not mean state minimization in our sense, since these algorithms did not aim at a minimal fuzzy automaton but found “reasonably” small fuzzy automata. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
16. Progressive Band Processing of Constrained Energy Minimization for Subpixel Detection.
- Author
- Chein-I Chang, Schultz, Robert C., Hobbs, Marissa C., Shih-Yu Chen, Yulei Wang, and Chunhong Liu
- Subjects
ENERGY bands, REMOTE sensing, SIGNAL processing, SATELLITE telemetry, DATA transmission systems
- Abstract
Constrained energy minimization (CEM) has been widely used for subpixel detection. It takes advantage of inverting the global sample correlation matrix R to suppress the background and enhance detection of targets of interest. This paper presents progressive band processing of CEM (PBP-CEM), which performs CEM for target detection progressively, band by band, according to the band-sequential format. In doing so, a new concept, called the causal band correlation matrix (CBCM), is introduced to replace the global sample correlation matrix R. It is a global correlation matrix formed only from the bands already visited up to the band currently being processed, excluding bands yet to be visited. The proposed PBP-CEM allows CEM to be processed whenever bands become available, without waiting for band collection to complete. With such an advantage, CEM has potential in data transmission and communication, specifically in satellite data processing. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
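The CEM filter inverted in entry 16 has a standard closed form, w = R⁻¹d / (dᵀR⁻¹d): minimum output energy subject to a unit response on the target signature. A minimal sketch on simulated data (band and pixel counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

L, n = 20, 500                       # spectral bands, pixels (hypothetical)
X = rng.standard_normal((L, n))      # background pixels as columns
d = rng.standard_normal(L)           # desired target signature

# Global sample correlation matrix R, the matrix CEM inverts.
R = X @ X.T / n

# CEM filter: minimize w^T R w subject to w^T d = 1.
Rinv_d = np.linalg.solve(R, d)
w = Rinv_d / (d @ Rinv_d)

target_response = w @ d              # constrained to exactly 1
background_level = np.abs(w @ X).mean()
```

The progressive variant in the paper replaces R by the causal band correlation matrix, rebuilt as each new band arrives.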
17. Time Invariant Error Bounds for Modified-CS-Based Sparse Signal Sequence Recovery.
- Author
- Zhan, Jinchun and Vaswani, Namrata
- Subjects
NOISE measurement, IMAGE reconstruction, COMPUTER simulation, SPARSE approximations, ERROR correction (Information theory)
- Abstract
In this paper, we obtain performance guarantees for modified-CS and for its improved version, modified-CS-Add-LS-Del, for recursive reconstruction of a time sequence of sparse signals from a reduced set of noisy measurements available at each time. Under mild assumptions, we show that the support recovery error of both algorithms is bounded by a time-invariant and small value at all times. The same is also true for the reconstruction error. Under a slow support change assumption: 1) the support recovery error bound is small compared with the support size and 2) our results hold under weaker assumptions on the number of measurements than what ℓ1 minimization for noisy data needs. We first give a general result that only assumes a bound on support size, number of support changes, and number of small magnitude nonzero entries at each time. Later, we specialize the main idea of these results for two sets of signal change assumptions that model the class of problems in which a new element that is added to the support either gets added at a large initial magnitude or its magnitude slowly increases to a large enough value within a finite delay. Simulation experiments are shown to back up our claims. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
18. Cross-term-free time-frequency distribution reconstruction via lifted projections.
- Author
- Deprem, Zeynel and Cetin, A.
- Subjects
TIME-frequency analysis, RADAR, ELECTROENCEPHALOGRAPHY, SPATIAL distribution (Quantum optics), DOPPLER effect, DIRECT currents
- Abstract
A crucial aspect of time-frequency (TF) analysis is the identification of separate components in a multicomponent signal. The Wigner-Ville distribution is the classical tool for representing such signals, but it suffers from cross-terms. Other methods, which are members of Cohen's class of distributions, also aim to remove the cross-terms by masking the ambiguity function (AF), but they result in reduced resolution. Most practical time-varying signals are in the form of weighted trajectories on the TF plane, and many others are sparse in nature. Therefore, in recent studies the problem is cast as TF distribution reconstruction using a subset of AF domain coefficients and a sparsity assumption. Sparsity can be achieved by constraining or minimizing the ℓ1 norm. In this article, an ℓ1 minimization approach based on projections onto convex sets is proposed to obtain a high-resolution, cross-term-free TF distribution for a given signal. The new method does not require any parameter adjustment to obtain a solution. Experimental results are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
19. Gentle Nearest Neighbors Boosting over Proper Scoring Rules.
- Author
- Nock, Richard, Ali, Wafa Bel Haj, D'Ambrosio, Roberto, Nielsen, Frank, and Barlaud, Michel
- Subjects
ARTIFICIAL intelligence, PATTERN recognition systems, IMAGE processing, SUN computer peripherals, ELECTRONIC records
- Abstract
Tailoring nearest neighbors algorithms to boosting is an important problem. Recent papers study an approach, unn, which provably minimizes particular convex surrogates under weak assumptions. However, numerical issues make it necessary to experimentally tweak parts of the unn algorithm, at the possible expense of the algorithm’s convergence and performance. In this paper, we propose a lightweight Newton-Raphson alternative optimizing proper scoring rules from a very broad set, and establish formal convergence rates under the boosting framework that compete with those known for unn. To the best of our knowledge, no such boosting-compliant convergence rates were previously known in the popular Gentle Adaboost’s lineage. We provide experiments on a dozen domains, including Caltech and SUN computer vision databases, comparing our approach to major families including support vector machines, (Ada)boosting and stochastic gradient descent. They support three major conclusions: (i) gnnb significantly outperforms unn , in terms of convergence rate and quality of the outputs, (ii) gnnb performs on par with or better than computationally intensive large margin approaches, (iii) on large domains that rule out those latter approaches for computational reasons, gnnb provides a simple and competitive contender to stochastic gradient descent. Experiments include a divide-and-conquer improvement of gnnb exploiting the link with proper scoring rules optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
20. Matrix Completion for Weakly-Supervised Multi-Label Image Classification.
- Author
- Cabral, Ricardo, Torre, Fernando De la, Costeira, Joao Paulo, and Bernardino, Alexandre
- Subjects
MATRICES (Mathematics), ART students, ART publishing, ACOUSTIC transients, DATA analysis, MATHEMATICAL programming
- Abstract
In the last few years, image classification has become an incredibly active research topic, with widespread applications. Most methods for visual recognition are fully supervised, as they make use of bounding boxes or pixelwise segmentations to locate objects of interest. However, this type of manual labeling is time consuming, error prone and it has been shown that manual segmentations are not necessarily the optimal spatial enclosure for object classifiers. This paper proposes a weakly-supervised system for multi-label image classification. In this setting, training images are annotated with a set of keywords describing their contents, but the visual concepts are not explicitly segmented in the images. We formulate the weakly-supervised image classification as a low-rank matrix completion problem. Compared to previous work, our proposed framework has three advantages: (1) Unlike existing solutions based on multiple-instance learning methods, our model is convex. We propose two alternative algorithms for matrix completion specifically tailored to visual data, and prove their convergence. (2) Unlike existing discriminative methods, our algorithm is robust to labeling errors, background noise and partial occlusions. (3) Our method can potentially be used for semantic segmentation. Experimental validation on several data sets shows that our method outperforms state-of-the-art classification algorithms, while effectively capturing each class appearance. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
21. A Convolutive Bounded Component Analysis Framework for Potentially Nonstationary Independent and/or Dependent Sources.
- Author
- Inan, Huseyin A. and Erdogan, Alper T.
- Subjects
MIMO systems, BLIND source separation, SIGNAL separation, SIGNAL processing, MACHINE learning
- Abstract
Bounded Component Analysis (BCA) is a recent framework which enables development of methods for the separation of dependent as well as independent sources from their mixtures. This paper extends a recent geometric BCA approach introduced for the instantaneous mixing problem to the convolutive mixing problem. The paper proposes novel deterministic convolutive BCA frameworks for the blind source extraction and blind source separation of convolutive mixtures of sources which allow the sources to be potentially nonstationary. The global maximizers of the proposed deterministic BCA optimization settings are proved to be perfect separators. The paper also illustrates that the iterative algorithms corresponding to these frameworks are capable of extracting/separating convolutive mixtures of not only independent sources but also dependent (even correlated) sources in both component (space) and sample (time) dimensions through simulations based on a Copula distributed source system. In addition, even when the sources are independent, it is shown that the proposed BCA approach has the potential to improve separation performance, especially for short data records, in setups involving convolutive mixtures of digital communication sources. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
22. Compressed Sensing Matrices From Fourier Matrices.
- Author
- Xu, Guangwu and Xu, Zhiqiang
- Subjects
COMPRESSED sensing, SPARSE matrices, QUANTUM information theory, DETECTORS, TELECOMMUNICATION
- Abstract
The class of Fourier matrices is of special importance in compressed sensing (CS). This paper concerns deterministic construction of CS matrices from Fourier matrices. Using Katz’ character sum estimation, we are able to design a deterministic procedure to select rows from a Fourier matrix to form a good CS matrix for sparse recovery. The sparsity bound in our construction is similar to that of binary CS matrices constructed by DeVore, which greatly improves previous results for CS matrices from Fourier matrices. Our approach also provides more flexibility in terms of the dimension of CS matrices. This paper also contains a useful improvement to Katz’ character sum estimation for quadratic extensions, with an elementary and transparent proof. Based on this improvement, we construct a class of special CS matrices consisting of partial Fourier matrices whose columns are a union of orthonormal bases. As a consequence, our construction yields approximately mutually unbiased bases from Fourier matrices, which is of particular interest to quantum information theory. Some numerical examples are also included. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
23. Energy-Minimizing Error-Correcting Codes.
- Author
- Cohn, Henry and Zhao, Yufei
- Subjects
ERROR-correcting codes, LINEAR programming, POTENTIAL energy, REED-Solomon codes, HAMMING codes, POLYNOMIALS
- Abstract
We study a discrete model of repelling particles, and we show using linear programming bounds that many familiar families of error-correcting codes minimize a broad class of potential energies when compared with all other codes of the same size and block length. Examples of these universally optimal codes include Hamming, Golay, and Reed-Solomon codes, among many others, and this helps to explain their robustness as the channel model varies. Universal optimality of these codes is equivalent to minimality of their binomial moments, which has been proved in many cases by Ashikhmin and Barg. We highlight connections with mathematical physics and the analogy between these results and previous work by Cohn and Kumar in the continuous setting, and we develop a framework for optimizing the linear programming bounds. Furthermore, we show that if these bounds prove a code is universally optimal, then the code remains universally optimal even if one codeword is removed. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
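The potential-energy criterion of entry 23 can be illustrated on tiny binary codes; the pair potential and the two codes below are toy choices, not examples from the paper:

```python
from itertools import combinations

def energy(code, f):
    # Potential energy of a code: sum of f(Hamming distance) over all
    # unordered pairs of distinct codewords (the repelling-particle model).
    return sum(f(sum(a != b for a, b in zip(u, v)))
               for u, v in combinations(code, 2))

f = lambda dist: 2.0 ** (-dist)          # a decreasing pair potential

# Even-weight (single parity check) code: all pairwise distances are 2.
spread = ["000", "011", "101", "110"]
# A "clumped" code of the same size and length, minimum distance 1.
clumped = ["000", "001", "010", "011"]
```

Spreading codewords out lowers the energy under any decreasing potential, which is the intuition behind the universal optimality results.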
24. An Inner-Product-Based Discriminative IRLS Algorithm for Sparse Hyperspectral Detection.
- Author
- Huang, Zhongwei, Shi, Zhenwei, and Zhang, Yun
- Abstract
Automatic target detection is an important application in the hyperspectral image processing field. It is currently known that any test pixel in hyperspectral images can be represented within a spectral dictionary using appropriate sparse coefficients. Based on this assumption, some sparsity-based algorithms are developed for hyperspectral detection. This kind of sparse learning method attempts to find the sparse representation from a spectral library, i.e., a dictionary data set from which useful information is extracted. Among these algorithms, the iteratively reweighted least squares (IRLS) strategy is believed to be a simple and useful tool for sparse representation. However, when dealing with hyperspectral data, the dictionary for sparse learning is usually high-dimensional, which dramatically increases the scale and complexity of sparse learning. In such cases, most sparsity-based algorithms, including the IRLS strategy, lose their efficacy. To deal with this situation, we propose a discriminative IRLS algorithm, called the inner-product-based discriminative IRLS detector (IDIRLSD), which decreases the scale and complexity problem by discriminatively seeking a subdictionary that retains the most critical information. Also, IDIRLSD applies a convex minimization for approximately solving the sparse recovery problem. A weighted ℓ1 minimization is relaxed and solved by the IRLS strategy. The proposed algorithm applies an inner-product-based function for constructing the small-scale weighted ℓ1 minimization with respect to the subdictionary. The solution provided by IDIRLSD is then applied to label the test pixel as target or background. Experimental results from both synthetic and real hyperspectral data demonstrate the improved efficacy of the proposed algorithm. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
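The IRLS strategy relied on in entry 24 can be sketched for plain ℓ1 minimization (a generic textbook variant, not IDIRLSD itself); problem sizes, the seed, and the annealing schedule are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Underdetermined system y = A x0 with a k-sparse ground truth.
m, n, k = 32, 64, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x0[support] = rng.choice([-1.0, 1.0], size=k) * (1.0 + rng.random(k))
y = A @ x0

# IRLS for l1 minimization: each iteration solves the weighted least-squares
# problem  min x^T D^{-1} x  s.t.  A x = y  with  D = diag(|x| + eps),
# whose closed form is  x = D A^T (A D A^T)^{-1} y;  eps is annealed toward 0.
x = A.T @ np.linalg.solve(A @ A.T, y)    # minimum-norm initialization
eps = 1.0
for _ in range(200):
    D = np.abs(x) + eps
    x = D * (A.T @ np.linalg.solve((A * D) @ A.T, y))
    eps = max(0.9 * eps, 1e-9)
```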
25. Reweighted Low-Rank Matrix Recovery and its Application in Image Restoration.
- Author
- Peng, Yigang, Suo, Jinli, Dai, Qionghai, and Xu, Wenli
- Abstract
In this paper, we propose a reweighted low-rank matrix recovery method and demonstrate its application for robust image restoration. In the literature, principal component pursuit solves the low-rank matrix recovery problem via a convex program mixing the nuclear norm and the ℓ1 norm. Inspired by reweighted ℓ1 minimization for sparsity enhancement, we propose reweighting singular values to enhance the low rank of a matrix. An efficient iterative reweighting scheme is proposed for enhancing low rank and sparsity simultaneously, and the performance of low-rank matrix recovery is improved greatly. We demonstrate the utility of the proposed method on both numerical simulations and real image/video restoration, including single image restoration, hyperspectral image restoration, and background modeling from corrupted observations. All of these experiments give empirical evidence of significant improvements of the proposed algorithm over previous work on low-rank matrix recovery. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
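Entry 25's reweighting of singular values can be sketched as a single shrinkage step in which each singular value is shrunk by an amount inversely proportional to its size, so signal components survive while noise components are suppressed; the matrix sizes, noise level, and threshold are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-2 matrix observed under small dense noise.
m, n, r = 30, 30, 2
Ltrue = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
M = Ltrue + 0.01 * rng.standard_normal((m, n))

def reweighted_shrink(M, tau, eps=1e-2):
    # Reweighted singular-value shrinkage: weight w_i = 1 / (s_i + eps), so
    # large singular values (signal) are shrunk far less than small ones.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_new = np.maximum(s - tau / (s + eps), 0.0)
    return U @ (s_new[:, None] * Vt), s_new

X, s_new = reweighted_shrink(M, tau=0.05)
rank_est = int((s_new > 1e-6).sum())
```

Plain nuclear-norm shrinkage would subtract the same constant from every singular value; the reweighting is what biases the step toward keeping the large ones intact.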
26. Sparse representation and recovery of a class of signals using information theoretic measures.
- Author
- Meena, V. and Abhilash, G.
- Abstract
In this paper, we discuss a novel scheme for arriving at a sparse representation and recovery of a class of signals using information theoretic measures. Constituent components carrying distinct features of a signal belonging to a specific class are separated and represented sparsely in an appropriate fixed basis. The morphological correlation between each of the constituent components and a subset of the basis leads to a sparse representation of the signal in that basis. The basis is selected using an entropy-minimization-based method, which is known to result in coefficient concentration. Simulation studies on speech signals show that, in the presence of input noise, the proposed method outperforms conventional methods. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
27. CANDECOMP/PARAFAC (CP) direction finding with multi-scale array.
- Author
- Miron, Sebastian, Yang Song, Brie, David, and Wong, Kainam Thomas
- Abstract
In this paper, we introduce a novel direction of arrival (DOA) estimation algorithm for an array presenting multiple scales of invariance, based on a CANDECOMP/PARAFAC (CP) model of the data. The proposed approach is a generalization of the results given in [1] to an array presenting an arbitrary number of spatial invariances. We show, on a particular array geometry, that our method can outperform the ESPRIT-based approach introduced in [2]. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
28. To convexify or not? Regression with clustering penalties on graphs.
- Author
- El Halabi, Marwa, Baldassarre, Luca, and Cevher, Volkan
- Abstract
We consider minimization problems that are compositions of convex functions of a vector x ∈ ℝ^N with submodular set functions of its support (i.e., indices of the non-zero coefficients of x). Such problems are in general difficult for large N due to their combinatorial nature. In this setting, existing approaches rely on “convexifications” of the submodular set function based on the Lovász extension for tractable approximations. In this paper, we first demonstrate that such convexifications can fundamentally change the nature of the underlying submodular regularization. We then provide a majorization-minimization framework for the minimization of such composite objectives. For concreteness, we use the Ising model to motivate a submodular regularizer, establish the total variation semi-norm as its Lovász extension, and numerically illustrate our new optimization framework. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
29. Unsupervised nearest regularized subspace for anomaly detection in hyperspectral imagery.
- Author
-
Li, Wei and Du, Qian
- Abstract
A method of unsupervised nearest regularized subspace is proposed for anomaly detection in hyperspectral imagery. Based on a dual window, each testing pixel is approximated as a linear combination of the surrounding data, with the weight vector calculated by distance-weighted Tikhonov regularization. The proposed detector returns the similarity measurement between the testing pixel and its approximation. Experimental results of the proposed approach on real hyperspectral data are demonstrated and compared with traditional detection techniques. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
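The distance-weighted Tikhonov step described in the abstract above has a closed form. A minimal NumPy sketch, assuming a regularization parameter `lam` and treating the dual-window background pixels as the columns of `X` (both names are hypothetical, not from the paper):

```python
import numpy as np

def nrs_anomaly_score(y, X, lam=1.0):
    """Approximate a test pixel y (D,) as a linear combination of the
    surrounding dual-window pixels, the columns of X (D, K), with a
    distance-weighted Tikhonov-regularized weight vector; the residual
    norm serves as the anomaly score.  `lam` is a hypothetical
    regularization parameter."""
    # Diagonal Tikhonov matrix weighted by distances to the test pixel:
    # dissimilar neighbors are penalized more heavily.
    Gamma = np.diag(np.linalg.norm(X - y[:, None], axis=0))
    # Closed-form regularized least squares for the weight vector.
    w = np.linalg.solve(X.T @ X + lam * Gamma.T @ Gamma, X.T @ y)
    # Similarity measured as the residual between pixel and approximation.
    return np.linalg.norm(y - X @ w)
```

A background-like pixel is well reconstructed from its neighbors and scores near zero; an anomalous pixel leaves a large residual.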
30. Ridge Regression based classifiers for large scale class imbalanced datasets.
- Author
-
Arpit, Devansh, Wu, Shuang, Natarajan, Pradeep, Prasad, Rohit, and Natarajan, Premkumar
- Abstract
Large scale, class imbalanced data classification is a challenging task that occurs frequently in several computer vision tasks such as web video retrieval. A number of algorithms have been proposed in the literature that approach this problem from different perspectives (e.g. sampling, cost-sensitive learning, active learning). The challenge in this task is twofold: first, the data imbalance causes many classification algorithms to learn trivial classifiers that declare all test examples to be from the majority class; second, many algorithms do not scale to large dataset sizes. We address these two issues by using two different cost-sensitive versions of Ridge Regression as our binary classifiers. We demonstrate our approach for retrieving unstructured web videos from 10 events on the benchmark TRECVID MED 12 dataset containing ≈47000 videos. We empirically show that they perform on par with state-of-the-art support vector machine based classifiers using χ2 kernels while being 30 to 60 times faster. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
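A cost-sensitive ridge regression of the kind the abstract describes admits a closed-form solution, which is what makes it fast relative to iteratively trained kernel classifiers. A sketch, assuming per-sample costs `sample_weight` that up-weight the minority class (the paper's two exact variants are not reproduced):

```python
import numpy as np

def weighted_ridge(X, y, sample_weight, lam=1.0):
    """Closed-form cost-sensitive ridge regression (a sketch):
        min_w  sum_i c_i (y_i - x_i^T w)^2 + lam ||w||^2,
    where larger c_i up-weight minority-class examples.  X is (n, d),
    y is (n,), sample_weight is (n,)."""
    C = np.diag(sample_weight)
    d = X.shape[1]
    return np.linalg.solve(X.T @ C @ X + lam * np.eye(d), X.T @ C @ y)
```

In a binary imbalance setting one would typically set each cost inversely proportional to the frequency of the example's class.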
31. Different-level simultaneous resolution of robot redundancy with end-effector path tracked and with joint velocity and acceleration both minimized.
- Author
-
Yunong, Zhang, Kene, Li, Dongsheng, Guo, and Binghuang, Cai
- Abstract
To remedy the phenomena of high joint-velocity/acceleration as well as nonzero end-motion joint velocity, a different-level bi-criteria minimization scheme is proposed for the redundancy resolution and end-effector path tracking of robot manipulators. The scheme combines the minimum two-norm joint-velocity and joint-acceleration solutions via two weighting factors. Physical constraints (e.g., joint-angle limits, joint-velocity limits, and joint-acceleration limits) are also incorporated simultaneously into the scheme formulation. More importantly, the proposed different-level bi-criteria scheme is finally formulated as one quadratic program (QP) to be solved at the joint-acceleration level. A simplified primal-dual neural network based on linear variational inequalities (LVI) is then employed for solving such a quadratic program and its original robotic scheme. Computer simulations based on a PA10-type manipulator and a three-link planar robot arm verify the efficacy, unification and flexibility of the proposed different-level bi-criteria minimization scheme. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
32. Positive realness and optimality problem for rectangular descriptor systems.
- Author
-
Lei, Liu, Ying, Yang, and Guoshan, Zhang
- Abstract
The inverse linear quadratic optimal problem based on dynamic compensation for rectangular descriptor systems is considered in this paper. First, a dynamic compensator with a proper dynamic order is given such that the closed-loop system is admissible and extended strictly positive real (ESPR), in terms of a Bilinear Matrix Inequality (BMI). In this case, a sufficient condition for the existence of the optimal solution is presented. Then the weight matrices of the linear quadratic performance index are derived in a parameterized form. In order to solve the inverse optimal control problem for the system, an algorithm for the minimization problem with BMI constraints is proposed based on a path-following algorithm, from which an optimal dynamic compensator and the weight matrices of the linear quadratic performance index can be obtained. Finally, a numerical example is provided to demonstrate the effectiveness and correctness of the proposed results. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
33. A splitting algorithm for directional regularization and sparsification.
- Author
-
Raket, Lars Lau and Nielsen, Mads
- Abstract
We present a new split-type algorithm for the minimization of a p-harmonic energy with an added data fidelity term. The half-quadratic splitting reduces the original problem to two straightforward subproblems that can be minimized efficiently. The minimizers of the two subproblems can typically be computed pointwise and are easily implemented on massively parallel processors. Furthermore, the splitting method allows for the computation of solutions to a large number of more advanced directional regularization problems. In particular, we are able to handle robust, non-convex data terms, and to define a 0-harmonic regularization energy where we sparsify directions by means of an L0 norm. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
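To illustrate the half-quadratic splitting idea, here is a hedged 1-D sketch for the anisotropic case p = 1 (total variation): an auxiliary variable d stands in for the gradient Du, the d-subproblem is pointwise soft-thresholding, and the u-subproblem is a single linear solve. The coupling weight `theta` and the alternation schedule are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def soft_threshold(x, t):
    """Pointwise shrinkage: the closed-form minimizer of t|d| + (d - x)^2 / 2."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_denoise_1d(f, lam=1.0, theta=10.0, iters=100):
    """Half-quadratic-splitting sketch for the 1-D problem
        min_u ||u - f||^2 + lam ||D u||_1
    via the surrogate ||u - f||^2 + theta ||D u - d||^2 + lam ||d||_1,
    alternating over d and u.  D is the forward-difference operator."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)          # (n-1, n) forward differences
    A = np.eye(n) + theta * D.T @ D         # fixed matrix of the u-step
    u = f.copy()
    for _ in range(iters):
        # d-subproblem: pointwise soft-thresholding of the gradient.
        d = soft_threshold(D @ u, lam / (2.0 * theta))
        # u-subproblem: quadratic, solved by one linear system.
        u = np.linalg.solve(A, f + theta * D.T @ d)
    return u
```

Both subproblems are exactly the "straightforward" pieces the abstract refers to: the d-step is pointwise (hence parallelizable), and the u-step is a sparse linear solve.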
34. Manhattan-Pyramid Distance: A solution to an anomaly in pyramid matching by minimization.
- Author
-
Chauhan, Aneesh and Lopes, Luis Seabra
- Abstract
In the field of computer vision, pyramid matching by minimization has gained increasing popularity. This paper points out and discusses an inherent anomaly in pyramid matching by minimization that can affect the performance of classification approaches based on this type of matching. As a solution, a new multiresolution measure, called Manhattan-Pyramid Distance (MPD), is proposed. Systematic evaluations are carried out at the task of instance-based object classification on four object image datasets. Results show that MPD improves object classification performance with respect to a standard approach based on pyramid matching by minimization. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
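The abstract does not give the Manhattan-Pyramid Distance formula, so as background here is a sketch of the classical pyramid matching by minimization that it amends: histograms are compared by bin-wise minima (intersections) at several resolutions, and matches newly formed at coarser levels are discounted by 1/2^level. The MPD correction itself is not reproduced:

```python
import numpy as np

def pyramid_match(h1, h2, levels=3):
    """Classical pyramid match score between two histograms: intersections
    at the finest level count fully; matches that first appear after
    merging adjacent bins at level l are credited with weight 1/2^l.
    Bin counts must be divisible by 2**(levels - 1)."""
    a, b = np.asarray(h1, float), np.asarray(h2, float)
    matches = np.minimum(a, b).sum()          # finest-level intersections
    score = matches
    for level in range(1, levels):
        # Merge adjacent bins to form the next, coarser level.
        a = a.reshape(-1, 2).sum(axis=1)
        b = b.reshape(-1, 2).sum(axis=1)
        new = np.minimum(a, b).sum()
        # Credit only matches newly formed at this coarser level.
        score += (new - matches) / 2 ** level
        matches = new
    return score
```

Identical histograms score their total mass; histograms that only overlap at coarse resolutions receive a discounted score, which is the behavior the paper's anomaly analysis concerns.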
35. An improved entropy-based multiple kernel learning.
- Author
-
Hino, Hideitsu and Ogawa, Tetsuji
- Abstract
Kernel methods have been successfully used in many practical machine learning problems. However, the problem of choosing a suitable kernel is left to practitioners. One method to select the optimal kernel is to learn a linear combination of element kernels. A framework of multiple kernel learning based on a conditional entropy minimization criterion (MCEM) has been proposed, and it has been shown to work well for, e.g., speaker recognition tasks. In this paper, a computationally efficient implementation for MCEM, which utilizes sequential quadratic programming, is formulated. Through a comparative experiment with the conventional MCEM algorithm on a speaker verification task, the proposed method is shown to offer comparable verification accuracy with considerable improvement in computational speed. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
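The "linear combination of element kernels" mentioned above is simply a non-negative weighted sum of Gram matrices; choosing the weights is the actual learning problem, solved in the paper by conditional entropy minimization via sequential quadratic programming (not shown here). A minimal sketch of the combination step:

```python
import numpy as np

def combine_kernels(grams, beta):
    """Combined kernel of multiple kernel learning: K = sum_m beta_m K_m,
    where grams is a list of element Gram matrices of equal shape and
    beta holds the (here given, not learned) combination weights."""
    beta = np.asarray(beta, dtype=float)
    # Non-negative weights keep the combined Gram matrix positive
    # semidefinite, i.e. a valid kernel.
    assert np.all(beta >= 0), "weights must be non-negative"
    return sum(b * K for b, K in zip(beta, grams))
```

Any kernel method (SVM, kernel ridge, etc.) can then be run unchanged on the combined Gram matrix.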
36. Unconstrained scalable test problems for single-objective bilevel optimization.
- Author
-
Sinha, Ankur, Malo, Pekka, and Deb, Kalyanmoy
- Abstract
In this paper, we propose a set of six test problems for single-objective bilevel optimization. The test-collection represents various difficulties which are commonly encountered in practical bilevel optimization problems. To support experiments with problems of different size, all of the test problems are scalable in terms of the number of variables. The problem set is also accompanied by a construction procedure, which helps to generate new test problems with controlled difficulties in convergence and interaction patterns between the two optimization levels. To provide a baseline result for easy comparisons, we have solved a 10 variable instance for each of the test problems using a simple bilevel evolutionary algorithm. The results presented may be used as a benchmark while evaluating the performance of any bilevel optimization algorithm. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
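The nested structure of single-objective bilevel optimization can be sketched as follows: for each candidate upper-level decision x, the lower level returns its optimal response y*(x), and the upper level minimizes f(x, y*(x)). The grid search and the toy objectives below are illustrative assumptions, not the paper's test problems or its evolutionary algorithm:

```python
import numpy as np

def solve_bilevel(f_upper, argmin_lower, xs):
    """Brute-force sketch of single-objective bilevel optimization:
    evaluate the upper-level objective at the lower level's optimal
    response for each candidate x in the grid xs."""
    best_x, best_val = None, np.inf
    for x in xs:
        y = argmin_lower(x)        # lower-level optimal response y*(x)
        val = f_upper(x, y)        # upper-level objective at (x, y*(x))
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Toy instance: lower level  min_y (y - x)^2      =>  y*(x) = x,
# upper level  min_x x^2 + (y*(x) - 1)^2          =>  x* = 0.5.
x_star, v = solve_bilevel(lambda x, y: x**2 + (y - 1)**2,
                          lambda x: x,
                          np.linspace(-2, 2, 401))
```

The coupling through y*(x) is what makes these problems hard: the upper level cannot be optimized without repeatedly solving the lower level, which is why scalable test problems with controlled difficulty are useful.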
37. l2, 1 Regularized correntropy for robust feature selection.
- Author
-
He, Ran, Tan, Tieniu, Wang, Liang, and Zheng, Wei-Shi
- Abstract
In this paper, we study the problem of robust feature extraction based on l2, 1 regularized correntropy in both a theoretical and an algorithmic manner. In the theoretical part, we point out that l2, 1-norm minimization can be justified from the viewpoint of half-quadratic (HQ) optimization, which facilitates convergence study and algorithmic development. In particular, a general formulation is accordingly proposed to unify l1-norm and l2, 1-norm minimization within a common framework. In the algorithmic part, we propose an l2, 1 regularized correntropy algorithm to extract informative features while removing outliers from training data. A new alternate minimization algorithm is also developed to optimize the non-convex correntropy objective. In terms of face recognition, we apply the proposed method to obtain an appearance-based model, called Sparse-Fisherfaces. Extensive experiments show that our method can select robust and sparse features, and outperforms several state-of-the-art subspace methods on large-scale and open face recognition datasets. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
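For reference, the l2,1 norm that drives the row-sparse feature selection above is just the sum of the l2 norms of the matrix rows:

```python
import numpy as np

def l21_norm(W):
    """l2,1 norm of W: sum over rows of the row-wise l2 norm.  As a
    regularizer it drives entire rows (features) to zero jointly across
    all columns (tasks or classes), which is what yields feature
    selection rather than unstructured sparsity."""
    return np.sqrt((W ** 2).sum(axis=1)).sum()
```

Contrast with the l1 norm, which zeroes individual entries independently; the l2,1 norm instead couples all uses of a feature, so a zero row removes that feature everywhere.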
38. Fast algorithms for structured robust principal component analysis.
- Author
-
Ayazoglu, Mustafa, Sznaier, Mario, and Camps, Octavia I.
- Abstract
A large number of problems arising in computer vision can be reduced to the problem of minimizing the nuclear norm of a matrix, subject to additional structural and sparsity constraints on its elements. Examples of relevant applications include, among others, robust tracking in the presence of outliers, manifold embedding, event detection, in-painting and tracklet matching across occlusion. In principle, these problems can be reduced to a convex semi-definite optimization form and solved using interior point methods. However, the poor scaling properties of these methods limit the use of this approach to relatively small sized problems. The main result of this paper shows that structured nuclear norm minimization problems can be efficiently solved by using an iterative Augmented Lagrangian Type (ALM) method that only requires performing at each iteration a combination of matrix thresholding and matrix inversion steps. As we illustrate in the paper with several examples, the proposed algorithm results in a substantial reduction of computational time and memory requirements when compared against interior-point methods, opening up the possibility of solving realistic, large sized problems. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
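The "matrix thresholding" step performed at each ALM iteration is singular value thresholding, the proximal operator of the nuclear norm. A minimal sketch of that step alone; the structural and sparsity constraints and the full ALM loop are omitted:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink the singular values of M by
    tau, zeroing those below it.  This is the closed-form minimizer of
    tau * ||X||_* + (1/2) ||X - M||_F^2, i.e. the proximal operator of
    the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Because each iteration needs only an SVD (plus cheap elementwise updates) rather than solving a semi-definite program, memory and time scale far better than interior-point methods, which is the paper's point.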
39. Fixed-rank representation for unsupervised visual learning.
- Author
-
Liu, Risheng, Lin, Zhouchen, De la Torre, Fernando, and Su, Zhixun
- Abstract
Subspace clustering and feature extraction are two of the most commonly used unsupervised learning techniques in computer vision and pattern recognition. State-of-the-art techniques for subspace clustering make use of recent advances in sparsity and rank minimization. However, existing techniques are computationally expensive and may result in degenerate solutions that degrade clustering performance in the case of insufficient data sampling. To partially solve these problems, and inspired by existing work on matrix factorization, this paper proposes fixed-rank representation (FRR) as a unified framework for unsupervised visual learning. FRR is able to reveal the structure of multiple subspaces in closed-form when the data is noiseless. Furthermore, we prove that under some suitable conditions, even with insufficient observations, FRR can still reveal the true subspace memberships. To achieve robustness to outliers and noise, a sparse regularizer is introduced into the FRR framework. Beyond subspace clustering, FRR can be used for unsupervised feature extraction. As a non-trivial byproduct, a fast numerical solver is developed for FRR. Experimental results on both synthetic data and real applications validate our theoretical analysis and demonstrate the benefits of FRR for unsupervised visual learning. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
40. Robust photometric stereo using sparse regression.
- Author
-
Ikehata, Satoshi, Wipf, David, Matsushita, Yasuyuki, and Aizawa, Kiyoharu
- Abstract
This paper presents a robust photometric stereo method that effectively compensates for various non-Lambertian corruptions such as specularities, shadows, and image noise. We construct a constrained sparse regression problem that enforces both Lambertian, rank-3 structure and sparse, additive corruptions. A solution method is derived using a hierarchical Bayesian approximation to accurately estimate the surface normals while simultaneously separating the non-Lambertian corruptions. Extensive evaluations are performed that show state-of-the-art performance using both synthetic and real-world images. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
41. Sparse multi-target localization using cooperative access points.
- Author
-
Jamali-Rad, Hadi, Ramezani, Hamid, and Leus, Geert
- Abstract
In this paper, a novel multi-target sparse localization (SL) algorithm based on compressive sampling (CS) is proposed. Different from the existing literature for target counting and localization where signal/received-signal-strength (RSS) readings at different access points (APs) are used separately, we propose to reformulate the SL problem so that we can make use of the cross-correlations of the signal readings at different APs. We analytically show that this new framework can provide a considerable amount of extra information compared to classical SL algorithms. We further highlight that in some cases this extra information converts the under-determined problem of SL into an over-determined problem for which we can use ordinary least-squares (LS) to efficiently recover the target vector even if it is not sparse. Our simulation results illustrate that compared to classical SL this extra information leads to a considerable improvement in terms of number of localizable targets as well as localization accuracy. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
42. Analysis of l2-sensitivity for canonical forms in 1-D and 2-D separable-denominator digital filters.
- Author
-
Hinamoto, Yoichi and Doi, Akimitsu
- Abstract
Based on a pure l2 norm, the l2-sensitivity for canonical forms in one-dimensional (1-D) and two-dimensional (2-D) separable-denominator state-space digital filters is analyzed more precisely by taking into account 0 and 1 elements. First, l2-sensitivity measures are explored for a controllable canonical form as well as an observable canonical form in state-space digital filters. Next, an l2-sensitivity measure is investigated for a canonical form in 2-D separable-denominator state-space digital filters. Finally, numerical examples are presented to compare the resulting l2-sensitivity measures with the conventional ones. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
43. An algorithm for fast constrained nuclear norm minimization and applications to systems identification.
- Author
-
Ayazoglu, Mustafa and Sznaier, Mario
- Abstract
This paper presents a novel algorithm for efficiently minimizing the nuclear norm of a matrix subject to structural and semi-definite constraints. It requires performing only thresholding and eigenvalue decomposition steps and converges Q-superlinearly to the optimum. Thus, this algorithm offers substantial advantages, both in terms of memory requirements and computational time over conventional semi-definite programming solvers. These advantages are illustrated using as an example the problem of finding the lowest order system that interpolates a collection of noisy measurements. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
44. A new approach to rigid body minimization with application to molecular docking.
- Author
-
Mirzaei, Hanieh, Kozakov, Dima, Beglov, Dmitri, Paschalidis, Ioannis Ch., Vajda, Sandor, and Vakili, Pirooz
- Abstract
Our work is motivated by energy minimization in the space of rigid affine transformations of macromolecules, an essential step in computational protein-protein docking. We introduce a novel representation of rigid body motion that leads to a natural formulation of the energy minimization problem as an optimization on the SO(3)×ℜ3 manifold, rather than the commonly used SE(3). The new representation avoids the complications associated with optimization on the SE(3) manifold and provides additional flexibilities for optimization not available in that formulation. The approach is applicable to general rigid body minimization problems. Our computational results for a local optimization algorithm developed based on the new approach show that it is about an order of magnitude faster than state-of-the-art local minimization algorithms for computational protein-protein docking. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
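A point in the SO(3)×ℝ³ parameterization can be applied to coordinates via the exponential map (the Rodrigues formula): an axis-angle vector yields the rotation, plus a translation. This sketch shows only the motion model being optimized over, not the paper's representation or optimization scheme:

```python
import numpy as np

def rigid_transform(points, omega, t):
    """Apply a rigid motion from SO(3) x R^3 to an (N, 3) array of
    points: omega is an axis-angle vector mapped to a rotation matrix
    by the Rodrigues (exponential map) formula, t is a translation."""
    theta = np.linalg.norm(omega)
    if theta < 1e-12:
        R = np.eye(3)                      # near-zero rotation
    else:
        k = omega / theta                  # unit rotation axis
        # Cross-product (skew-symmetric) matrix of k.
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    return points @ R.T + t
```

An energy minimizer then treats the six numbers (omega, t) as the decision variables, keeping the rotation on the manifold by construction.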
45. On Games with Coupled Constraints.
- Author
-
Arslan, Gurdal, Demirkol, M. Fatih, and Yuksel, Serdar
- Abstract
We study the problem of cost minimization in competitive resource allocation problems, motivated by our previous work on power minimization in MIMO interference systems. Our setup leads to a general cost minimization game in which each player wishes to minimize the cost of its resource consumption while achieving a target utility level. In general, the player strategies are coupled through both their cost functions and their utility functions. Equilibrium exists only for a certain set of target utility levels which in general is a proper set of all achievable utility levels. To characterize the set of equilibrium utility levels, we introduce the dual of a cost minimization game called a utility maximization game in which each player wishes to maximize its utility while keeping the cost of its resource consumption below a cost threshold. We associate the set of equilibrium utility levels with the set of equilibrium of the dual game corresponding to all cost thresholds, and show that the dual game always possesses an equilibrium. We also obtain an inner estimate of the set of equilibrium utility levels in the case of decoupled cost functions by a minimax approach. We then relax the hard constraint on achieving a target utility level, and introduce a weighted cost minimization game which always possesses an equilibrium. We recover the original equilibria through the equilibria of the weighted cost minimization game as the penalty on not achieving the target utility levels increases. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
46. New Form of Lagrangian Multiplier Methods.
- Author
-
Feng, Aifen, Xu, Cuixia, and Pu, Dingguo
- Abstract
The Lagrangian multiplier method is one of the important methods for solving nonlinear constrained programming. In this paper, a new class of augmented Lagrangian functions with a new NCP function is proposed for the minimization of a smooth function subject to smooth equality and inequality constraints. Under certain conditions, we prove a one-to-one correspondence between optimal solutions of the primal constrained problem and the new unconstrained problem. An algorithm is then constructed to solve the nonlinear constrained problem, and its convergence is proved. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
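For background, the classical augmented Lagrangian iteration for a single equality constraint looks as follows; the paper's NCP-function-based variant for inequality constraints is not reproduced, and the gradient-descent inner solver with fixed step size is an assumption made for the sketch:

```python
import numpy as np

def augmented_lagrangian(f_grad, h, h_grad, x0,
                         rho=10.0, outer=20, inner=200, step=0.01):
    """Classical augmented Lagrangian method for min f(x) s.t. h(x) = 0
    in one variable: repeatedly minimize
        L(x) = f(x) + lam * h(x) + (rho / 2) * h(x)**2
    over x (here by plain gradient descent), then update the multiplier
    lam += rho * h(x)."""
    x, lam = float(x0), 0.0
    for _ in range(outer):
        for _ in range(inner):
            # Gradient of the augmented Lagrangian in x.
            g = f_grad(x) + (lam + rho * h(x)) * h_grad(x)
            x -= step * g
        lam += rho * h(x)          # multiplier (dual) update
    return x
```

Unlike a pure penalty method, the multiplier update lets the constraint be satisfied exactly in the limit without sending rho to infinity.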
47. Broyden restricted class of variable metric methods and oblique projections.
- Author
-
Stachurski, Andrzej
- Abstract
In this paper, a new formulation of the Broyden restricted convex class of updates involving oblique projections is introduced. It is a sum of two terms: the first containing a special oblique projection, and the second a standard term ensuring satisfaction of the quasi-Newton condition (also an oblique projection multiplied by an appropriate scalar). The applied oblique projection involves a vector defined as a convex, linear combination of the difference between consecutive iterates and the image under the previous inverse Hessian approximation of the corresponding difference of gradients. A formula relating the coefficient in this convex combination to its counterpart in the standard representation of the Broyden convex class is presented. Some preliminary numerical results on two twice continuously differentiable, strictly convex functions with increasing dimension are included. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
48. A sensitivity-based construction approach to sample-path variance minimization of Markov decision processes.
- Author
-
Huang, Yonghao and Chen, Xi
- Abstract
We study the limiting average variance along the sample path as the secondary criterion for Markov decision processes, with the long-run average performance as the primary criterion. By applying the sensitivity-based approach, we intuitively construct the difference formula for the sample-path variance under different policies. Thereby, a sufficient condition for sample-path variance optimality can be easily derived. This work extends the sensitivity-based construction approach to Markov decision processes with a nonstandard performance criterion. Compared with pure mathematical verification, the sensitivity-based construction approach shows more intuition and provides insights on the sample-path structure of Markov decision processes. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
49. Error minimization in Phase-Based Neurons.
- Author
-
Pavaloiu, Ionel-Bujorel and Dan Cristea, Paul
- Abstract
Complex-Valued Neural Networks are extensions of classical Neural Networks. They have complex-valued weights, accept complex inputs, and have more computational power than the classical ones. In this paper we discuss the training of Phase-Based Neurons, neural processing elements similar to Universal Binary Neurons, which use complex numbers of unit magnitude as weights and bias, the phase being the only tunable parameter. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
50. A new evidential c-means clustering method.
- Author
-
Liu, Zhun-ga, Dezert, Jean, Pan, Quan, and Cheng, Yong-mei
- Abstract
Data clustering methods integrating information fusion techniques have been recently developed in the framework of belief functions. More precisely, the evidential version of fuzzy c-means (ECM) has been proposed to deal with the clustering of proximity data, based on an extension of the popular fuzzy c-means (FCM) clustering method. In fact, ECM does not perform very well for proximity data because it is based only on the distances between the object and the cluster centers to determine the mass of belief of the object's commitment. As a result, different clusters can overlap when their centers are close, which is not very efficient for data clustering. To overcome this problem, we propose a new clustering method called belief functions c-means (BFCM) in this work. In BFCM, both the distance between the object and the imprecise cluster's center, and the distances between the object and the centers of the involved specific clusters, are taken into account in the mass determination. The object is considered as belonging to a specific cluster if it is very close to this cluster's center, to an imprecise cluster if it lies in the middle (overlapped zone) of some specific clusters, or to the outlier cluster if it is too far from the data set. Pignistic probability can be applied for hard decision-making support in BFCM. Several examples are given to illustrate how BFCM works, and to show how it outperforms ECM and FCM for proximity data. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
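As background for BFCM, here is a compact sketch of the FCM baseline that ECM and BFCM extend; BFCM's imprecise and outlier clusters, and the belief-mass machinery, are not reproduced. The farthest-point initialization is an assumption added to keep the sketch deterministic:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50):
    """Fuzzy c-means sketch.  X is (n, d); returns (centers (c, d),
    memberships U (n, c)) with each row of U summing to 1."""
    X = np.asarray(X, float)
    # Farthest-point initialization of the centers (deterministic).
    idx = [0]
    for _ in range(c - 1):
        d = np.min(np.linalg.norm(X[:, None] - X[idx][None], axis=2), axis=1)
        idx.append(int(np.argmax(d)))
    centers = X[idx].copy()
    for _ in range(iters):
        dist = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        # Membership update: inverse distances with exponent 2/(m-1),
        # normalized so each object's memberships sum to 1.
        inv = dist ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
        # Center update: membership-weighted means of the data.
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
    return centers, U
```

BFCM's departure is precisely in the membership/mass step: objects near the midpoint of two specific clusters get assigned to an imprecise (meta-)cluster instead of being split fuzzily, and objects far from all data go to an outlier cluster.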