334 results for "Majorization-Minimization"
Search Results
2. PointSGRADE: Sparse learning with graph representation for anomaly detection by using unstructured 3D point cloud data.
- Author
- Tao, Chengyu and Du, Juan
- Subjects
- DATA structures, ANOMALY detection (Computer security), REPRESENTATIONS of graphs, POINT cloud, SPARSE graphs, OPTICAL scanners
- Abstract
Surface anomaly detection by using 3D point cloud data has recently received significant attention. To completely measure the common free-form surfaces without loss of details, advanced 3D scanning technologies, such as 3D laser scanners, can be applied and will produce an unstructured point cloud. However, this irregular data structure poses challenges to anomaly detection, in that the existing methods based on regular data, e.g., 2D image, cannot be directly applied. This article proposes a sparse learning framework with a graph representation of the unstructured point cloud for anomaly detection (PointSGRADE). Specifically, the free-form surface is assumed to be smooth. Then, the associated point cloud can be represented as a graph. Subsequently, considering the sparse anomalies, we propose a sparse learning framework and formulate the anomaly detection problem as a penalized optimization problem, which is further solved by a computationally efficient majorization-minimization framework. Case studies demonstrate the accuracy and robustness of the proposed method. This article proposes a novel methodology for sparse anomaly detection on smooth free-form surfaces represented by unstructured point cloud, which is critical for quality inspection in manufacturing and other application areas. [ABSTRACT FROM AUTHOR]
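The smooth-surface-plus-sparse-anomaly idea can be sketched in a few lines. This is a toy decomposition, not the authors' PointSGRADE model: the graph here is a simple chain rather than one built from a 3D point cloud, the penalty and solver are simplified, and the names `sparse_anomaly_graph`, `alpha`, `beta` are illustrative.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_anomaly_graph(z, L, alpha=5.0, beta=0.5, iters=100):
    """Decompose z ~ s + a, with s smooth w.r.t. a graph Laplacian L and a sparse:
    min_{s,a} 0.5*||z - s - a||^2 + (alpha/2)*s^T L s + beta*||a||_1,
    by alternating a ridge step for s and soft-thresholding for a.
    """
    n = len(z)
    a = np.zeros(n)
    M = np.eye(n) + alpha * L          # SPD system matrix for the smooth part
    for _ in range(iters):
        s = np.linalg.solve(M, z - a)  # smooth surface given current anomalies
        a = soft(z - s, beta)          # sparse anomalies given smooth surface
    return s, a
```

The nonzero entries of `a` flag the anomalous points; everything explained by graph smoothness is absorbed into `s`.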
- Published
- 2025
- Full Text
- View/download PDF
3. The stochastic proximal distance algorithm.
- Author
- Jiang, Haoyu and Xu, Jason
- Abstract
Stochastic versions of proximal methods have gained much attention in statistics and machine learning. These algorithms tend to admit simple, scalable forms, and enjoy numerical stability via implicit updates. In this work, we propose and analyze a stochastic version of the recently proposed proximal distance algorithm, a class of iterative optimization methods that recover a desired constrained estimation problem as the penalty parameter ρ → ∞. By uncovering connections to related stochastic proximal methods and interpreting the penalty parameter as the learning rate, we justify heuristics used in practical manifestations of the proximal distance method, establishing their convergence guarantees for the first time. Moreover, we extend recent theoretical devices to establish finite error bounds and a complete characterization of convergence rate regimes. We validate our analysis via a thorough empirical study, also showing that, unsurprisingly, the proposed method outpaces batch versions on popular learning tasks. [ABSTRACT FROM AUTHOR]
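The "implicit update" behind the numerical stability claim is easy to see in the least-squares case, where each stochastic proximal step has a closed form. This is a generic sketch of proximal (implicit) SGD, not the paper's proximal distance iteration with its ρ schedule; `implicit_sgd_lsq` and `eta` are illustrative names.

```python
import numpy as np

def implicit_sgd_lsq(A, b, eta=0.5, epochs=20, seed=0):
    """Implicit (proximal) SGD for least squares. Each step solves
        x_{k+1} = argmin_x 0.5*(a_i^T x - b_i)^2 + ||x - x_k||^2 / (2*eta),
    which for a single squared residual has the closed form used below.
    Unlike explicit SGD, the implicit step is stable for any step size eta.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(epochs):
        for i in rng.permutation(m):
            a = A[i]
            r = a @ x - b[i]
            # solving the one-sample proximal subproblem exactly:
            x = x - (eta * r / (1.0 + eta * a @ a)) * a
    return x
```

The damping factor 1/(1 + eta*||a||^2) is what distinguishes the implicit step from a plain gradient step and prevents divergence at large learning rates.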
- Published
- 2024
- Full Text
- View/download PDF
4. Min-Max Framework for Majorization-Minimization Algorithms in Signal Processing Applications: An Overview.
- Author
- Saini, Astha, Stoica, Petre, Babu, Prabhu, and Arora, Aakash
- Abstract
This monograph presents a theoretical background and a broad introduction to the Min-Max Framework for Majorization-Minimization (MM4MM), an algorithmic methodology for solving minimization problems by formulating them as min-max problems and then employing majorization-minimization. The monograph lays out the mathematical basis of the approach used to reformulate a minimization problem as a min-max problem. With the prerequisites covered, including multiple illustrations of the formulations for convex and non-convex functions, this work serves as a guide for developing MM4MM-based algorithms for solving non-convex optimization problems in various areas of signal processing. As special cases, we discuss using the majorization-minimization technique to solve min-max problems encountered in signal processing applications and min-max problems formulated using the Lagrangian. Lastly, we present detailed examples of using MM4MM in ten signal processing applications such as phase retrieval, source localization, independent vector analysis, beamforming and optimal sensor placement in wireless sensor networks. The devised MM4MM algorithms are free of hyper-parameters and enjoy the advantages inherited from the use of the majorization-minimization technique such as monotonicity. [ABSTRACT FROM AUTHOR]
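As a minimal illustration of the majorization-minimization technique underlying MM4MM (a generic sketch, not one of the monograph's ten applications; `mm_lasso` and its parameters are illustrative names), the ℓ1 term in lasso regression admits the quadratic majorizer |x| ≤ x²/(2|x_k|) + |x_k|/2, so each MM step reduces to a ridge-type linear solve:

```python
import numpy as np

def mm_lasso(A, y, lam, iters=100, eps=1e-8):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by majorization-minimization.

    Each |x_j| is majorized at the current iterate x_j^k by the quadratic
    x_j^2/(2|x_j^k|) + |x_j^k|/2, which touches |x_j| at x_j^k (for nonzero
    x_j^k; eps guards zeros) and dominates it elsewhere, so minimizing the
    surrogate is a weighted ridge problem.
    """
    n = A.shape[1]
    x = np.ones(n)
    AtA, Aty = A.T @ A, A.T @ y
    for _ in range(iters):
        W = np.diag(lam / (np.abs(x) + eps))  # curvatures of the majorizer
        x = np.linalg.solve(AtA + W, Aty)
    return x
```

Each iteration decreases the penalized objective, since the surrogate touches it at the current iterate and lies above it everywhere else; this monotonicity is the property the abstract refers to.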
- Published
- 2024
- Full Text
- View/download PDF
5. A new sufficient dimension reduction method via rank divergence.
- Author
- Liu, Tianqing, Li, Danning, Ren, Fengjiao, Sun, Jianguo, and Yuan, Xiaohui
- Abstract
Sufficient dimension reduction is commonly performed to achieve data reduction and help data visualization. Its main goal is to identify functions of the predictors that are smaller in number than the predictors and contain the same information as the predictors for the response. In this paper, we are concerned with the linear functions of the predictors, which determine a central subspace that preserves sufficient information about the conditional distribution of a response given covariates. Many methods have been developed in the literature for the estimation of the central subspace. However, most of the existing sufficient dimension reduction methods are sensitive to outliers and require some strict restrictions on both covariates and response. To address this, we propose a novel dependence measure, rank divergence, and develop a rank divergence-based sufficient dimension reduction approach. The new method only requires some mild conditions on the covariates and response and is robust to outliers or heavy-tailed distributions. Moreover, it applies to discrete or categorical covariates as well as multivariate responses. The consistency of the resulting estimator of the central subspace is established, and numerical studies suggest that it works well in practical situations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Revisiting Possibilistic Fuzzy C-Means Clustering Using the Majorization-Minimization Method.
- Author
- Chen, Yuxue and Zhou, Shuisheng
- Subjects
- COMPUTATIONAL complexity, PROBLEM solving, MEMORY
- Abstract
Possibilistic fuzzy c-means (PFCM) clustering is a kind of hybrid clustering method based on fuzzy c-means (FCM) and possibilistic c-means (PCM), which not only has the stability of FCM but also partly inherits the robustness of PCM. However, as an extension of FCM on the objective function, PFCM tends to find a suboptimal local minimum, which affects its performance. In this paper, we rederive PFCM using the majorization-minimization (MM) method, which is a new derivation approach not seen in other studies. In addition, we propose an effective optimization method to solve the above problem, called MMPFCM. Firstly, by eliminating the variable V ∈ R^(p×c), the original optimization problem is transformed into a simplified model with fewer variables but a proportional term. Therefore, we introduce a new intermediate variable s ∈ R^c to convert the model with the proportional term into an easily solvable equivalent form. Subsequently, we design an iterative sub-problem using the MM method. The complexity analysis indicates that MMPFCM and PFCM share the same computational complexity. However, MMPFCM requires less memory per iteration. Extensive experiments, including objective function value comparison and clustering performance comparison, demonstrate that MMPFCM converges to a better local minimum compared to PFCM. [ABSTRACT FROM AUTHOR]
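For orientation, the plain fuzzy c-means baseline that PFCM (and hence MMPFCM) builds on alternates closed-form updates of the membership matrix and the centers. This is a textbook sketch with a naive initialization, not the MMPFCM algorithm or its variable-elimination step; all names are illustrative.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100):
    """Standard fuzzy c-means: alternate closed-form updates of the
    membership matrix U and the centers V. Naive initialization: the
    first c data points serve as initial centers.
    """
    V = X[:c].astype(float).copy()
    for _ in range(iters):
        # squared distances from every point to every center
        d2 = ((X[:, None, :] - V[None]) ** 2).sum(-1) + 1e-12
        U = d2 ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships, rows sum to 1
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]   # membership-weighted means
    return U, V
```

Each of the two updates minimizes the FCM objective exactly in one block of variables, so the objective decreases monotonically, the same descent property the MM rederivation in the paper formalizes.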
- Published
- 2024
- Full Text
- View/download PDF
7. The appeals of quadratic majorization–minimization.
- Author
- Robini, Marc C., Wang, Lihui, and Zhu, Yuemin
- Subjects
- MULTIDIMENSIONAL scaling, MATHEMATICAL optimization, POSITIVE systems, DIFFERENTIABLE dynamical systems, CONJUGATE gradient methods, INVERSE problems, TOMOGRAPHY
- Abstract
Majorization–minimization (MM) is a versatile optimization technique that operates on surrogate functions satisfying tangency and domination conditions. Our focus is on differentiable optimization using inexact MM with quadratic surrogates, which amounts to approximately solving a sequence of symmetric positive definite systems. We begin by investigating the convergence properties of this process, from subconvergence to R-linear convergence, with emphasis on tame objectives. Then we provide a numerically stable implementation based on truncated conjugate gradient. Applications to multidimensional scaling and regularized inversion are discussed and illustrated through numerical experiments on graph layout and X-ray tomography. In the end, quadratic MM not only offers solid guarantees of convergence and stability, but is robust to the choice of its control parameters. [ABSTRACT FROM AUTHOR]
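The inexact scheme described here, quadratic surrogates solved only approximately by truncated conjugate gradient, can be sketched on a simple smooth objective. The hybrid penalty Σᵢ √(1 + rᵢ²) is a stand-in example chosen for its easy quadratic majorizer, not an objective from the paper; all names are illustrative.

```python
import numpy as np

def cg(H, g, x0, iters=10):
    """A few conjugate-gradient steps on H x = g (truncated CG)."""
    x = x0.copy()
    r = g - H @ x
    p, rs = r.copy(), r @ r
    if rs < 1e-20:
        return x
    for _ in range(iters):
        Hp = H @ p
        alpha = rs / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if rs_new < 1e-20:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def quadratic_mm(A, b, iters=30):
    """Inexact quadratic MM for f(x) = sum_i sqrt(1 + (Ax - b)_i^2).

    Concavity of sqrt gives the quadratic majorizer
        sqrt(1 + r^2) <= phi_k + (r^2 - r_k^2)/(2*phi_k),  phi_k = sqrt(1 + r_k^2),
    so each MM step is a weighted least-squares problem, solved here only
    approximately by a handful of warm-started CG iterations.
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        r = A @ x - b
        w = 1.0 / np.sqrt(1.0 + r * r)   # majorizer curvatures
        H = A.T @ (w[:, None] * A)       # SPD surrogate Hessian
        g = A.T @ (w * b)
        x = cg(H, g, x, iters=5)         # truncated, warm-started inner solve
    return x
```

Warm-starting the inner CG solve at the current iterate is what keeps the inexact scheme cheap while preserving descent, the pattern the paper analyzes in depth.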
- Published
- 2024
- Full Text
- View/download PDF
8. Convergence analysis of block majorize-minimize subspace approach.
- Author
- Chouzenoux, Emilie and Fest, Jean-Baptiste
- Abstract
We consider the minimization of a differentiable function F defined on R^N, with Lipschitz-continuous gradient but not necessarily convex. We propose an accelerated gradient descent approach which combines three strategies, namely (i) a variable metric derived from the majorization-minimization principle; (ii) a subspace strategy incorporating information from the past iterates; (iii) a block alternating update. Under the assumption that F satisfies the Kurdyka–Łojasiewicz property, we give conditions under which the sequence generated by the resulting block majorize-minimize subspace algorithm converges to a critical point of the objective function, and we exhibit convergence rates for its iterates. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Functional principal component analysis for incomplete space–time data.
- Author
- Palummo, Alessandro, Arnone, Eleonora, Formaggia, Luca, and Sangalli, Laura M.
- Subjects
- PRINCIPAL components analysis, BODIES of water, SPACE-time block codes, FUNCTIONAL analysis, REMOTE sensing, WATER temperature, MISSING data (Statistics)
- Abstract
Environmental signals, acquired, e.g., by remote sensing, often present large gaps of missing observations in space and time. In this work, we present an innovative approach to identify the main variability patterns in space–time data, when data may be affected by complex missing data structures. We formalize the problem in the framework of functional data analysis, proposing an innovative method of functional principal component analysis (fPCA) for incomplete space–time data. The functional nature of the proposed method makes it possible to borrow information from measurements observed at nearby spatio-temporal locations. The resulting functional principal components are smooth fields over the considered spatio-temporal domain, and can lead to interesting insights into the spatio-temporal dynamics of the phenomenon under study. Moreover, they can be used to provide a reconstruction of the missing entries, also under severe missing data patterns. The proposed model combines a weighted rank-one approximation of the data matrix with a roughness penalty. We show that the estimation problem can be solved using a majorize–minimization approach, and provide a numerically efficient algorithm for its solution. Thanks to a discretization based on finite elements in space and B-splines in time, the proposed method can handle multidimensional spatial domains with complex shapes, such as water bodies with complicated shorelines, or curved spatial regions with complex orography. As shown by simulation studies, the proposed space–time fPCA is superior to alternative techniques for principal component analysis with missing data. We further highlight the potential of the proposed method for environmental problems, by applying space–time fPCA to the study of the lake water surface temperature (LWST) of Lake Victoria, in Central Africa, starting from satellite measurements with large gaps.
LWST is considered one of the fundamental indicators of how climate change is affecting the environment, and is recognized as an essential climate variable. [ABSTRACT FROM AUTHOR]
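The weighted rank-one building block can be illustrated, in heavily simplified form, by alternating least squares on a rank-one model with missing entries. This sketch drops the roughness penalty and the finite-element/B-spline discretization that define the actual method; `rank_one_missing` and its arguments are illustrative names.

```python
import numpy as np

def rank_one_missing(Y, M, iters=300):
    """Rank-one approximation Y ~ f g^T from partially observed data
    (M is a 0/1 mask of observed entries), by alternating least squares.
    Each update minimizes the squared error over observed entries only.
    """
    W = M.astype(float)
    Z = W * Y                    # observed entries, zeros elsewhere
    g = Z.mean(axis=0)           # crude but deterministic initialization
    for _ in range(iters):
        f = (Z @ g) / (W @ (g * g) + 1e-12)       # scores
        g = (Z.T @ f) / (W.T @ (f * f) + 1e-12)   # loadings
    return f, g
```

Once `f` and `g` are estimated, the product `np.outer(f, g)` fills in the missing entries, the same reconstruction idea the abstract describes, minus the smoothness machinery.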
- Published
- 2024
- Full Text
- View/download PDF
10. Convergence analysis of stochastic higher-order majorization–minimization algorithms.
- Author
- Lupu, Daniela and Necoara, Ion
- Subjects
- STOCHASTIC analysis, STOCHASTIC convergence, NONSMOOTH optimization, SMOOTHNESS of functions, ERROR functions, ALGORITHMS
- Abstract
Majorization–minimization schemes are a broad class of iterative methods targeting general optimization problems, including nonconvex, nonsmooth and stochastic ones. These algorithms minimize successively a sequence of upper bounds of the objective function so that the objective value decreases along the iterations. We present a stochastic higher-order algorithmic framework for minimizing the average of a very large number of sufficiently smooth functions. Our stochastic framework is based on the notion of stochastic higher-order upper bound approximations of the finite-sum objective function and minibatching. We derive convergence results for nonconvex and convex optimization problems when the higher-order approximation of the objective function yields an error that is p times differentiable and has a Lipschitz continuous p-th derivative. More precisely, for general nonconvex problems we present asymptotic stationary point guarantees, and under the Kurdyka–Łojasiewicz property we derive local convergence rates ranging from sublinear to linear. For convex problems with uniformly convex objective function, we derive local (super)linear convergence results for our algorithm. Compared to existing stochastic (first-order) methods, our algorithm adapts to the problem's curvature and allows using any batch size. Preliminary numerical tests support the effectiveness of our algorithmic framework. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. High-dimensional sparse single–index regression via Hilbert–Schmidt independence criterion.
- Author
- Chen, Xin, Deng, Chang, He, Shuaida, Wu, Runxiong, and Zhang, Jia
- Abstract
Hilbert-Schmidt Independence Criterion (HSIC) has recently been introduced to the field of single-index models to estimate the directions. Compared with other well-established methods, the HSIC based method requires relatively weak conditions. However, its performance has not yet been studied in the prevalent high-dimensional scenarios, where the number of covariates can be much larger than the sample size. In this article, based on HSIC, we propose to estimate the possibly sparse directions in the high-dimensional single-index models through a parameter reformulation. Our approach estimates the subspace of the direction directly and performs variable selection simultaneously. Due to the non-convexity of the objective function and the complexity of the constraints, a majorize-minimize algorithm together with the linearized alternating direction method of multipliers is developed to solve the optimization problem. Since it does not involve the inverse of the covariance matrix, the algorithm can naturally handle large p small n scenarios. Through extensive simulation studies and a real data analysis, we show that our proposal is efficient and effective in the high-dimensional settings. The Matlab codes for this method are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. A majorization–minimization-based method for nonconvex inverse rig problems in facial animation: algorithm derivation.
- Author
- Racković, Stevo, Soares, Cláudia, Jakovetić, Dušan, and Desnica, Zoranka
- Abstract
Automated methods for facial animation are a necessary tool in the modern industry since the standard blendshape head models consist of hundreds of controllers, and a manual approach is painfully slow. Different solutions have been proposed that produce output in real-time or generalize well for different face topologies. However, all these prior works consider a linear approximation of the blendshape function and hence do not provide a high-enough level of detail for modern realistic human face reconstruction. A second-order blendshape approximation leads to higher fidelity facial animation but generates a non-linear least squares optimization problem with high dimensionality. We derive a method for solving the inverse rig in blendshape animation using quadratic corrective terms, which increases accuracy. At the same time, due to the proposed construction of the objective function, it yields a sparser estimated weight vector compared to the state-of-the-art methods. The former feature means lower demand for subsequent manual corrections of the solution, while the latter indicates that the manual modifications are also easier to include. Our algorithm is iterative and employs a Majorization–Minimization paradigm to cope with the increased complexity produced by adding corrective terms. The surrogate function is easy to solve and allows for further parallelization on the component level within each iteration. This paper is complementary to an accompanying paper (Racković et al. arxiv preprint. https://arxiv.org/abs/2302.04843, 2023) where we provide detailed experimental results and discussion, including highly-realistic animation data, and show a clear superiority of the results compared to the state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Infimal convolution and AM-GM majorized total variation-based integrated approach for biosignal denoising.
- Author
- Kumar, Vinit and Muduli, Priya Ranjan
- Abstract
Biomedical measurements are generally contaminated with substantial noise from various sources, including thermal noise, interference from other physiological signals, environment, or electrode movements. Complete restoration of biosignals from noisy measurements using linear filtering techniques is not feasible owing to the spectral overlapping problem. This paper proposes an arithmetic–geometric mean inequality-based robust denoising method. The proposed method incorporates a novel convexified cost function using the concept of majorization-minimization. A two-step algorithm is derived using the forward–backward splitting technique. An optimality condition is derived to set the hyperparameters of the new algorithm. The proposed convex optimization-based method effectively denoises cardiovascular signals, including both electrocardiogram and photoplethysmography. Furthermore, the efficacy of the proposed approach is verified over different datasets. The quantitative and qualitative results obtained using the proposed method demonstrate the superiority of the proposed method in biosignal denoising concerning state-of-the-art techniques. [ABSTRACT FROM AUTHOR]
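The AM-GM inequality |d| ≤ d²/(2c) + c/2 (valid for any c > 0, with equality at |d| = c) is the standard route to a quadratic majorizer of total variation. The following is a minimal 1-D sketch of MM-based TV denoising, not the paper's infimal-convolution model or its forward–backward algorithm; names are illustrative.

```python
import numpy as np

def tv_denoise_mm(y, lam, iters=50, eps=1e-8):
    """1-D total-variation denoising by majorization-minimization:
        minimize 0.5*||x - y||^2 + lam * sum_i |x_{i+1} - x_i|.

    By AM-GM, |d| <= d^2/(2c) + c/2 with c = |d_k| + eps, so each MM step
    reduces to a tridiagonal linear system (solved densely here for brevity).
    """
    n = len(y)
    D = np.diff(np.eye(n), axis=0)        # forward-difference operator
    x = y.copy()
    for _ in range(iters):
        w = lam / (np.abs(D @ x) + eps)   # majorizer weights
        x = np.linalg.solve(np.eye(n) + D.T @ (w[:, None] * D), y)
    return x
```

Because each surrogate dominates the objective and agrees with it (up to O(eps²)) at the current iterate, the objective value is non-increasing across iterations.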
- Published
- 2024
- Full Text
- View/download PDF
14. Deep Learning Aided Secure Transmission in Wirelessly Powered Untrusted Relaying in the Face of Hardware Impairments
- Author
- Vahid Shahiri, Moslem Forouzesh, Hamid Behroozi, Ali Kuhestani, and Kai-Kit Wong
- Subjects
- Deep learning, energy harvesting, hardware impairments, majorization-minimization, physical layer security, untrusted relaying, Telecommunication, TK5101-6720, Transportation and communications, HE1-9990
- Abstract
Limited power and computational resources make the employment of complex classical encryption schemes unrealistic in resource-limited networks, e.g., the Internet of Things (IoT). To this end, physical layer security (PLS) has shown great potential in securing such resource-limited networks. To further combat the power scarcity in IoT nodes, radio frequency (RF) based energy harvesting (EH) is an attractive energy source while relaying can enhance the energy efficiency and extend the range of data transmission. Additionally, due to deploying low-cost hardware, imperfections in the RF chain of IoT transceivers are common. Against this background, in this paper, we investigate an untrusted EH relay-aided secure communication with RF impairments. Specifically, the relay simultaneously receives the desired signal from the source and the jamming from the destination in the first phase. Hence the relay is unable to extract the confidential desired signal. The resultant composite signal is then amplified by the relay in the second phase by using the energy harvested from the composite signal followed by its transmission to the destination. Since the destination is the original source of the jamming, its effect can be readily subtracted from the composite signal to recover the original desired signal of the source. Moreover, in the face of hardware impairments (HWIs) in all nodes, maintaining optimal power management both at the source and destination may impose excessive computations on an IoT node. We solve this problem by deep learning (DL) based optimal power management maximizing the secrecy rate based on the instantaneous channel coefficients. We show that our learning-based scheme can reach the accuracy of the exhaustive search method despite its considerably lower computational complexity. Moreover, we developed an optimization framework for judiciously sharing HWIs across the nodes, so that we attain the maximum secrecy rate. 
To derive an efficient solution, we utilize a majorization-minimization (MM) algorithm, which is a particular instance in the family of successive convex approximation (SCA) methods. The simulation results show that the proposed HWI-aware design considerably improves the secrecy rate.
- Published
- 2024
- Full Text
- View/download PDF
15. User-Centric Joint Beamforming for Multi-Antenna UAV Networks With 3D Blockage Effects
- Author
- Xianming Zhao, Yuting Zhu, and Hongtao Zhang
- Subjects
- Unmanned aerial vehicle, user-centric joint beamforming, 3D blockage effects, majorization-minimization, multi-antenna configuration, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
In unmanned aerial vehicle (UAV) networks, introducing multiple-input/multiple-output (MIMO) technology brings promising capacity gains by exploiting spatial multiplexing. However, for UAV communications in urban scenarios, dense buildings block air-to-ground links, which leads to non-consistent service for users. In this paper, a user-centric joint beamforming design for multi-UAV networks aiming at maximizing the sum-rate performance is proposed, where the distributed antennas are deployed in UAV clusters forming virtual MIMO links to seamlessly guarantee user service under the 3D blockage effect. Specifically, the UAV clusters are constructed dynamically in a user-centric way for cooperative transmissions, which are dominated by the line-of-sight (LoS) connections considering building blockage effects. In addition, the selected user-to-UAV links must be less affected by the building blockages, which are measured through the established channel model from comprehensive dimensions including sizes, locations, and heights. Within the UAV clusters, the proposed joint beamforming is performed based on the majorization-minimization (MM) algorithm for the NP-hard problem under per-antenna power constraints, and the optimal UAV parameters including UAV placement and antenna configuration with a fixed total number of antennas are investigated under different blockage effects. Simulation results demonstrate that the proposed design can improve sum-rate performance by more than 1× compared with single UAV deployment under densely-located building environments.
- Published
- 2024
- Full Text
- View/download PDF
16. VCSEL: PRIORITIZING SNP-SET BY PENALIZED VARIANCE COMPONENT SELECTION.
- Author
- Kim, Juhyun, Shen, Judong, Wang, Anran, Mehrotra, Devan V., Ko, Seyoon, Zhou, Jin J., and Zhou, Hua
- Subjects
- Mathematical Sciences, Statistics, Genetics, Human Genome, 8.4 Research design and methodologies (health services), Rare variants, majorization-minimization, penalized estimation, variance components model, restricted maximum likelihood, group selection, nonconvex penalties, multiple phenotypes, Econometrics, Statistics & Probability
- Abstract
Single nucleotide polymorphism (SNP) set analysis aggregates both common and rare variants and tests for association between phenotype(s) of interest and a set. However, multiple SNP-sets, such as genes, pathways, or sliding windows, are usually investigated across the whole genome, in which all groups are tested separately, followed by multiple testing adjustments. We propose a novel method to prioritize SNP-sets in a joint multivariate variance component model. Each SNP-set corresponds to a variance component (or kernel), and model selection is achieved by incorporating either convex or nonconvex penalties. The uniqueness of this variance component selection framework, which we call VCSEL, is that it naturally encompasses multivariate traits (VCSEL-M) and SNP-set-treatment or -environment interactions (VCSEL-I). We devise an optimization algorithm scalable to many variance components, based on the majorization-minimization (MM) principle. Simulation studies demonstrate the superiority of our methods in model selection performance, as measured by the area under the precision-recall (PR) curve, compared to the commonly used marginal testing and group penalization methods. Finally, we apply our methods to a real pharmacogenomics study and a real whole exome sequencing study. Some top-ranked genes by VCSEL are detected as insignificant by the marginal testing methods, which emphasize formal inference of individual genes with a strict significance threshold. This provides alternative insights for biologists to prioritize follow-up studies and develop polygenic risk score models.
- Published
- 2021
17. Regularized Discrete Optimal Transport for Class-Imbalanced Classifications.
- Author
- Chen, Jiqiang, Wan, Jie, and Ma, Litao
- Subjects
- TRANSPORT theory, DATA distribution, REGULARIZATION parameter, WATER quality, MACHINE learning, CLASSIFICATION, RESAMPLING (Statistics)
- Abstract
Imbalanced class data are commonly observed in pattern analysis, machine learning, and various real-world applications. Conventional approaches often resort to resampling techniques in order to address the imbalance, which inevitably alter the original data distribution. This paper proposes a novel classification method that leverages optimal transport for handling imbalanced data. Specifically, we establish a transport plan between training and testing data without modifying the original data distribution, drawing upon the principles of optimal transport theory. Additionally, we introduce a non-convex interclass regularization term to establish connections between testing samples and training samples with the same class labels. This regularization term forms the basis of a regularized discrete optimal transport model, which is employed to address imbalanced classification scenarios. Subsequently, following the majorization-minimization principle, an iterative algorithm is introduced for regularized discrete optimal transport. Experiments on 17 KEEL datasets with varying levels of imbalance demonstrate the superior performance of the proposed approach compared to 11 other widely used techniques for class-imbalanced classification. Additionally, the application of the proposed approach to water quality evaluation confirms its effectiveness. [ABSTRACT FROM AUTHOR]
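The unregularized transport plan that this model builds on is a small linear program. Below is a minimal sketch using SciPy's LP solver; the paper's non-convex interclass regularizer and its iterative solver are not shown, and `discrete_ot` is an illustrative name.

```python
import numpy as np
from scipy.optimize import linprog

def discrete_ot(mu, nu, C):
    """Solve the (unregularized) discrete optimal transport LP:
        min_P <C, P>  s.t.  P 1 = mu,  P^T 1 = nu,  P >= 0,
    i.e., move mass mu on the source points to mass nu on the target
    points at minimum total cost C[i, j] per unit of mass.
    """
    m, n = C.shape
    # equality constraints on the flattened plan: row sums = mu, column sums = nu
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0
    for j in range(n):
        A_eq[m + j, j::n] = 1.0
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([mu, nu]),
                  bounds=(0, None))
    return res.x.reshape(m, n)
```

The entry P[i, j] of the recovered plan says how much mass of training sample i is matched to testing sample j, which is the coupling that the regularized model then biases toward same-class pairs.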
- Published
- 2024
- Full Text
- View/download PDF
18. Generalized Representation-based Classification by Lp-norm for Face Recognition.
- Author
- Jing Wang, Xiao Xie, Li Zhang, Xusheng Chen, Hongzhou Yue, and Huaping Guo
- Subjects
- FACE perception, HUMAN facial recognition software, CLASSIFICATION, SOURCE code, PROBLEM solving
- Abstract
Representation-based Classification (RC) algorithms have been extensively applied in the domain of face recognition. This paper introduces a novel approach by incorporating arbitrary norms into both the objective and constraint functions of conventional RC algorithms, resulting in the development of the Generalized RC (GRC) algorithm. The primary goal of this enhancement is to fully leverage the unique advantages offered by various norms. Within the majorization-minimization framework, an iterative procedure was designed to solve the optimization problem of GRC. This approach ensures that a closed-form solution is obtained in each iteration and a locally optimal solution can be found upon convergence. Experimental results on four benchmark face databases demonstrated that GRC generally outperforms competing RC algorithms in face recognition. Data and source code of this study are publicly available on the GitHub repository at https://github.com/yuzhounh/GRC. [ABSTRACT FROM AUTHOR]
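Within the majorization-minimization framework, an Lp-norm residual criterion leads to iteratively reweighted least squares. The following is a generic sketch for 1 ≤ p < 2, not the GRC algorithm itself (which also handles norms in the constraint function); `irls_lp` and its parameters are illustrative names.

```python
import numpy as np

def irls_lp(A, y, p=1.0, iters=100, eps=1e-8):
    """Minimize ||A x - y||_p^p (1 <= p < 2) by iteratively reweighted least
    squares, an instance of majorization-minimization: for p <= 2, |r|^p is
    majorized at r_k by (p/2)|r_k|^{p-2} r^2 + (1 - p/2)|r_k|^p, so each MM
    step is a weighted least-squares solve.
    """
    x = np.linalg.lstsq(A, y, rcond=None)[0]   # ordinary LS as the starting point
    for _ in range(iters):
        r = y - A @ x
        w = (np.abs(r) + eps) ** (p - 2.0)     # MM weights (eps guards zeros)
        x = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
    return x
```

With p = 1 the weights downweight large residuals, which is why Lp-norm criteria with small p are robust to gross errors such as occlusions in face images.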
- Published
- 2024
19. Unification of sparse Bayesian learning algorithms for electromagnetic brain imaging with the majorization minimization framework.
- Author
- Hashemi, Ali, Cai, Chang, Kutyniok, Gitta, Müller, Klaus-Robert, Nagarajan, Srikantan S., and Haufe, Stefan
- Subjects
- Brain source imaging, Electro-/magnetoencephalography, Hyperparameter learning, Majorization-Minimization, Noise learning, Non-convex, Type I/II Bayesian learning, Algorithms, Bayes Theorem, Computer Simulation, Electroencephalography, Auditory Evoked Potentials, Humans, Likelihood Functions, Magnetoencephalography, Nonlinear Dynamics, Signal-To-Noise Ratio, Bioengineering, Neurology & Neurosurgery, Medical and Health Sciences, Psychology and Cognitive Sciences
- Abstract
Methods for electro- or magnetoencephalography (EEG/MEG) based brain source imaging (BSI) using sparse Bayesian learning (SBL) have been demonstrated to achieve excellent performance in situations with low numbers of distinct active sources, such as event-related designs. This paper extends the theory and practice of SBL in three important ways. First, we reformulate three existing SBL algorithms under the majorization-minimization (MM) framework. This unification perspective not only provides a useful theoretical framework for comparing different algorithms in terms of their convergence behavior, but also provides a principled recipe for constructing novel algorithms with specific properties by designing appropriate bounds of the Bayesian marginal likelihood function. Second, building on the MM principle, we propose a novel method called LowSNR-BSI that achieves favorable source reconstruction performance in low signal-to-noise-ratio (SNR) settings. Third, precise knowledge of the noise level is a crucial requirement for accurate source reconstruction. Here we present a novel principled technique to accurately learn the noise variance from the data either jointly within the source reconstruction procedure or using one of two proposed cross-validation strategies. Empirically, we could show that the monotonous convergence behavior predicted from MM theory is confirmed in numerical experiments. Using simulations, we further demonstrate the advantage of LowSNR-BSI over conventional SBL in low-SNR regimes, and the advantage of learned noise levels over estimates derived from baseline data. To demonstrate the usefulness of our novel approach, we show neurophysiologically plausible source reconstructions on averaged auditory evoked potential data.
- Published
- 2021
20. A Proximity Operator-Based Method for Denoising Biomedical Measurements.
- Author
- Muduli, Priya Ranjan and Kumar, Vinit
- Subjects
- COST functions, ORTHOGONAL matching pursuit, SIGNAL reconstruction
- Abstract
The reconstruction of biomedical signals from noisy measurements has been an indispensable research topic. A majority of biosignals exhibit typical piecewise characteristics. The recovery of these piecewise biomedical signals embedded in noise through conventional nonlinear filtering schemes fails due to the lack of proper balance between strict sparsity and smoothness-inducing property of regularizers at large noise levels. This work proposes a nonlinear convex optimization-based filtering approach, which incorporates a Moreau envelope-based regularizer using the majorized version of the total variation function. The source signals are restored by exploiting their piecewise characteristics through a majorized cost function. The majorized functions provide some relaxation in solving non-convex functions. The relaxation of the stringent sparsifying penalty provides the balance between the smoothness property and the piecewise features of biosignals. The optimality criterion for the proposed method is analyzed in this work. Furthermore, we evaluate the new method using a standard IoT platform. The recovery performance of this method is found to be superior to various state-of-the-art techniques for piecewise synthetic and real-world physiological signals corrupted by additive noise. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
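The abstract above works with a majorized total-variation-type cost. A classical way to do this (not the paper's Moreau-envelope regularizer — this is a generic sketch with our own toy signal and parameter values) is to majorize each |d| term by d²/(2|dₖ|) + |dₖ|/2, which turns every MM step of 1D TV denoising into a linear solve:

```python
import numpy as np

def tv_denoise_mm(y, lam=1.0, iters=100, eps=1e-8):
    """1D total-variation denoising via majorization-minimization.

    Minimizes 0.5*||x - y||^2 + lam * sum_i |x_{i+1} - x_i|.
    Each |d| is majorized by d^2/(2(|d_k|+eps)) + (|d_k|+eps)/2, so
    every iteration reduces to a symmetric positive-definite solve.
    """
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n first-difference matrix
    x = y.copy()
    for _ in range(iters):
        w = 1.0 / (np.abs(D @ x) + eps)     # surrogate weights at x_k
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, y)           # minimizer of the surrogate
    return x

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(30), np.ones(30)])   # piecewise-constant source
noisy = clean + 0.2 * rng.standard_normal(60)
denoised = tv_denoise_mm(noisy, lam=2.0)
```

Because the TV penalty favors piecewise-constant solutions, the step edge survives while the noise on each plateau is smoothed away.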
21. A super resolution target separation and reconstruction approach for single channel SAR against deceptive jamming
- Author
-
Shi-qi Liu, Bing Li, Bo Zhao, Lei Huang, Yue-zhou Wu, and Wei-min Bao
- Subjects
Deceptive jamming suppression ,Off-grid reconstruction ,Majorization-minimization ,Synthetic aperture radar (SAR) ,Military Science - Abstract
The excellent remote sensing ability of synthetic aperture radar (SAR) can be seriously misled when it encounters deceptive jamming, which possesses high fidelity and strong fraudulence. In this paper, the dynamic synthetic aperture (DSA) scheme is used to extract the difference between the true and false targets. A simultaneous deceptive jamming suppression and target reconstruction method is proposed for a single channel SAR system to guarantee remote sensing ability. The system model is formulated as a sparse signal recovery problem with an unknown parametric dictionary to be estimated. An iterative re-weighted method is employed to jointly handle the dictionary parameter learning and target reconstruction problem in a majorization-minimization framework, where a surrogate function majorizing the Gaussian entropy in the objective function is introduced to circumvent its non-convexity. After dictionary parameter learning, the grid mismatching problem of fixed-grid methods is avoided. Therefore, the proposed method can achieve super-resolution results. Besides, a simple yet effective DSA selection scheme is developed for excerpting the SAR data, in which only two DSAs are required. Experimental results on location error and reconstruction power error reveal that the proposed method achieves good performance in deceptive jamming suppression.
- Published
- 2023
- Full Text
- View/download PDF
22. ℓp-norm minimization for outlier-resistant elliptic positioning in α-stable impulsive interference.
- Author
-
Xiong, Wenxin, He, Jiajun, So, Hing Cheung, Liang, Junli, and Leung, Chi-Sing
- Subjects
- *
LEAST squares , *COMMUNICATION barriers , *COMPUTER simulation - Abstract
This short communication considers the problem of robust elliptic positioning (EP) in α-stable impulsive interference based on the ℓp-norm minimization criterion. We start with converting the associated nonconvex optimization task into the well-established ℓp-minimization framework of iteratively reweighted least squares (IRLS). While the effect of p is taken into account by a weight update step in each IRLS iteration, we employ an assortment of surrogate functions, in order to effectively handle the location update subproblem using the majorization–minimization technique. The superiority of our proposal over a number of existing EP methods in terms of localization accuracy in α-stable impulsive noise is demonstrated via numerical simulations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
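The IRLS-inside-MM idea summarized above can be sketched generically for an unconstrained ℓp regression problem. This is not the paper's EP-specific algorithm; the design matrix, outlier model, and parameter values below are our own illustrative assumptions:

```python
import numpy as np

def irls_lp(A, b, p=1.2, iters=50, eps=1e-8):
    """Iteratively reweighted least squares for min_x sum_i |a_i^T x - b_i|^p.

    For p < 2, each |r|^p is majorized at r_k by a quadratic with weight
    proportional to |r_k|^(p-2), so every MM iteration reduces to a
    weighted least-squares solve (the classical IRLS weight update).
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]     # ordinary LS start
    for _ in range(iters):
        r = A @ x - b
        w = (np.abs(r) + eps) ** (p - 2)         # IRLS weights
        Aw = w[:, None] * A                      # diag(w) @ A
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)  # weighted normal equations
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.05 * rng.standard_normal(200)
b[:10] += 20.0                                   # heavy-tailed outliers
x_irls = irls_lp(A, b, p=1.2)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
```

With p close to 1 the large residuals at the outliers receive tiny weights, so the estimate is far less biased than ordinary least squares.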
23. Unrolled Variational Bayesian Algorithm for Image Blind Deconvolution.
- Author
-
Huang, Yunshi, Chouzenoux, Emilie, and Pesquet, Jean-Christophe
- Subjects
- *
IMAGE reconstruction , *GRAYSCALE model , *MATHEMATICAL optimization , *ALGORITHMS , *DEEP learning , *DECONVOLUTION (Mathematics) , *COLOR - Abstract
In this paper, we introduce a variational Bayesian algorithm (VBA) for image blind deconvolution. Our generic VBA framework incorporates smoothness priors on the unknown blur/image and possible affine constraints (e.g., sum to one) on the blur kernel, integrating the VBA within a neural network paradigm following an unrolling methodology. The proposed architecture is trained in a supervised fashion, which allows us to optimally set two key hyperparameters of the VBA model and leads to further improvements in terms of resulting visual quality. Various experiments involving grayscale/color images and diverse kernel shapes are performed. The numerical examples illustrate the high performance of our approach when compared to state-of-the-art techniques based on optimization, Bayesian estimation, or deep learning. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. A Local MM Subspace Method for Solving Constrained Variational Problems in Image Recovery.
- Author
-
Chouzenoux, Emilie, Martin, Ségolène, and Pesquet, Jean-Christophe
- Abstract
This article introduces a new penalized majorization–minimization subspace algorithm (P-MMS) for solving smooth, constrained optimization problems. In short, our approach consists of embedding a subspace algorithm in an inexact exterior penalty procedure. The subspace strategy, combined with a majorization–minimization step-size search, takes great advantage of the smoothness of the penalized cost function, while the penalty method makes it possible to handle a wide range of constraints. The main drawback of exterior penalty approaches, namely ill-conditioning for large values of the penalty parameter, is overcome by using a trust-region-like technique. The convergence of the resulting algorithm is analyzed. Numerical experiments carried out on two large-scale image recovery applications demonstrate that, compared with state-of-the-art algorithms, the proposed method performs well in terms of computational time. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
25. SABRINA: A Stochastic Subspace Majorization-Minimization Algorithm.
- Author
-
Chouzenoux, Emilie and Fest, Jean-Baptiste
- Subjects
- *
ALGORITHMS , *IMAGE processing , *STOCHASTIC processes , *MACHINE learning , *IMAGE reconstruction , *STOCHASTIC convergence , *DIFFERENTIABLE functions - Abstract
A wide class of problems involves the minimization of a coercive and differentiable function F on R^N whose gradient cannot be evaluated in an exact manner. In such a context, many existing convergence results from the standard gradient-based optimization literature cannot be directly applied and robustness to errors in the gradient is not necessarily guaranteed. This work is dedicated to investigating the convergence of Majorization-Minimization (MM) schemes when stochastic errors affect the gradient terms. We introduce a general stochastic optimization framework, called the StochAstic suBspace majoRIzation-miNimization Algorithm (SABRINA), that encompasses MM quadratic schemes possibly enhanced with a subspace acceleration strategy. New asymptotic results are built for the stochastic process generated by SABRINA. Two sets of numerical experiments in the field of machine learning and image processing are presented to support our theoretical results and illustrate the good performance of SABRINA with respect to state-of-the-art gradient-based stochastic optimization methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
26. A proximal distance algorithm for likelihood-based sparse covariance estimation.
- Author
-
Xu, Jason and Lange, Kenneth
- Subjects
- *
CELL communication , *ALGORITHMS , *COVARIANCE matrices , *EMIGRATION & immigration - Abstract
This paper addresses the task of estimating a covariance matrix under a patternless sparsity assumption. In contrast to existing approaches based on thresholding or shrinkage penalties, we propose a likelihood-based method that regularizes the distance from the covariance estimate to a symmetric sparsity set. This formulation avoids unwanted shrinkage induced by more common norm penalties, and enables optimization of the resulting nonconvex objective by solving a sequence of smooth, unconstrained subproblems. These subproblems are generated and solved via the proximal distance version of the majorization-minimization principle. The resulting algorithm executes rapidly, gracefully handles settings where the number of parameters exceeds the number of cases, yields a positive-definite solution, and enjoys desirable convergence properties. Empirically, we demonstrate that our approach outperforms competing methods across several metrics, for a suite of simulated experiments. Its merits are illustrated on international migration data and a case study on flow cytometry. Our findings suggest that the marginal and conditional dependency networks for the cell signalling data are more similar than previously concluded. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
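The proximal distance idea named above — penalize the squared distance to a constraint set and majorize it using the projection of the current iterate — can be sketched on a simpler problem than covariance estimation. The following is a generic illustration (sparse least squares rather than the paper's likelihood objective; all dimensions, schedules, and parameter values are our own assumptions):

```python
import numpy as np

def project_sparse(x, k):
    """Project onto the set S_k of k-sparse vectors (keep k largest magnitudes)."""
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    z[idx] = x[idx]
    return z

def proximal_distance_sparse_ls(A, b, k, rho=1.0, outer=30, inner=5, rho_mult=1.3):
    """Sparse least squares via a proximal distance scheme.

    Minimizes 0.5*||Ax - b||^2 + (rho/2)*dist(x, S_k)^2 while slowly
    increasing rho. The distance penalty is majorized at x_n by
    (rho/2)*||x - P(x_n)||^2, so each MM step is a ridge-type solve.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(outer):
        for _ in range(inner):
            p = project_sparse(x, k)                      # MM anchor point
            x = np.linalg.solve(AtA + rho * np.eye(len(x)), Atb + rho * p)
        rho *= rho_mult            # drive x toward the constraint set
    return project_sparse(x, k)    # final round-off onto S_k

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 20))
x_true = np.zeros(20)
x_true[[3, 7, 12]] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(100)
x_hat = proximal_distance_sparse_ls(A, b, k=3)
```

Because the penalty parameter grows gradually, the iterates are pulled onto the sparsity set without the shrinkage bias a norm penalty would introduce, mirroring the motivation stated in the abstract.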
27. Variable Selection with Multiply-Imputed Datasets: Choosing Between Stacked and Grouped Methods.
- Author
-
Du, Jiacong, Boss, Jonathan, Han, Peisong, Beesley, Lauren J., Kleinsasser, Michael, Goutman, Stephen A., Batterman, Stuart, Feldman, Eva L., and Mukherjee, Bhramar
- Subjects
- *
POLLUTANTS , *MISSING data (Statistics) , *MATHEMATICAL optimization - Abstract
Penalized regression methods are used in many biomedical applications for variable selection and simultaneous coefficient estimation. However, missing data complicates the implementation of these methods, particularly when missingness is handled using multiple imputation. Applying a variable selection algorithm on each imputed dataset will likely lead to different sets of selected predictors. This article considers a general class of penalized objective functions which, by construction, force selection of the same variables across imputed datasets. By pooling objective functions across imputations, optimization is then performed jointly over all imputed datasets rather than separately for each dataset. We consider two objective function formulations that exist in the literature, which we will refer to as "stacked" and "grouped" objective functions. Building on existing work, we (i) derive and implement efficient cyclic coordinate descent and majorization-minimization optimization algorithms for continuous and binary outcome data, (ii) incorporate adaptive shrinkage penalties, (iii) compare these methods through simulation, and (iv) develop an R package miselect. Simulations demonstrate that the "stacked" approaches are more computationally efficient and have better estimation and selection properties. We apply these methods to data from the University of Michigan ALS Patients Biorepository aiming to identify the association between environmental pollutants and ALS risk. Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
28. Efficient Low-Rank Semidefinite Programming With Robust Loss Functions.
- Author
-
Yao, Quanming, Yang, Hansi, Hu, En-Liang, and Kwok, James T.
- Subjects
- *
ROBUST programming , *SEMIDEFINITE programming , *DATA corruption , *SPARSE matrices , *ALGORITHMS , *MACHINE learning - Abstract
In real-world applications, it is important for machine learning algorithms to be robust against data outliers or corruptions. In this paper, we focus on improving the robustness of a large class of learning algorithms that are formulated as low-rank semi-definite programming (SDP) problems. Traditional formulations use the square loss, which is notorious for being sensitive to outliers. We propose to replace this with more robust noise models, including the ℓ1-loss and other nonconvex losses. However, the resultant optimization problem becomes difficult as the objective is no longer convex or smooth. To alleviate this problem, we design an efficient algorithm based on majorization-minimization. The crux is on constructing a good optimization surrogate, and we show that this surrogate can be efficiently obtained by the alternating direction method of multipliers (ADMM). By properly monitoring ADMM's convergence, the proposed algorithm is empirically efficient and also theoretically guaranteed to converge to a critical point. Extensive experiments are performed on four machine learning applications using both synthetic and real-world data sets. Results show that the proposed algorithm is not only fast but also has better performance than state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
29. Convergence-Guaranteed Parametric Bayesian Distributed Cooperative Localization.
- Author
-
Li, Bin, Wu, Nan, Wu, Yik-Chung, and Li, Yonghui
- Abstract
Belief propagation (BP) is a popular message passing algorithm for distributed cooperative localization. However, due to the nonlinearity of measurement functions, BP implementation has no closed-form expression and requires message approximations. While nonparametric BP can be used, it suffers from a high computational complexity, thus being impractical in energy-constrained networks. In this paper, a parametric Bayesian method with Gaussian BP implementation is proposed for distributed cooperative localization. With linearization of the Euclidean norm in ranging measurements, the joint posterior distribution of agents’ locations is successively approximated with a sequence of high-dimensional Gaussian distributions. At each iteration of the successive Gaussian approximation, vector-valued Gaussian BP is further adopted to compute the marginal distributions of agents’ locations in a distributed way. It is proved by the principle of majorization-minimization that the proposed successive Gaussian approximation is guaranteed to converge, and the sequence of the estimated agents’ locations converges to a stationary point of the objective function of the maximum a posteriori estimation. Furthermore, although cooperative localization involves loopy network topologies, in which the convergence property of Gaussian BP is generally unknown, it is proved in this paper that vector-valued Gaussian BP converges, making the proposed parametric BP-based method the first to achieve a convergence guarantee. Compared to the nonparametric BP counterpart, the proposed method has a much lower computational complexity and communication overhead. Simulation results demonstrate that the proposed method achieves a superior performance in localization accuracy compared to existing cooperative localization methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
30. Truncated Inference for Latent Variable Optimization Problems: Application to Robust Estimation and Learning
- Author
-
Zach, Christopher, Le, Huu, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Vedaldi, Andrea, editor, Bischof, Horst, editor, Brox, Thomas, editor, and Frahm, Jan-Michael, editor
- Published
- 2020
- Full Text
- View/download PDF
31. A Ranking Model Motivated by Nonnegative Matrix Factorization with Applications to Tennis Tournaments
- Author
-
Xia, Rui, Tan, Vincent Y. F., Filstroff, Louis, Févotte, Cédric, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Brefeld, Ulf, editor, Fromont, Elisa, editor, Hotho, Andreas, editor, Knobbe, Arno, editor, Maathuis, Marloes, editor, and Robardet, Céline, editor
- Published
- 2020
- Full Text
- View/download PDF
32. Sparsity-Based Time Delay Estimation Through the Matched Filter Outputs.
- Author
-
Zhang, Yuxuan, Jin, Yuxi, Wu, Yongqing, Hao, Chengpeng, and Orlando, Danilo
- Subjects
TIME delay estimation ,MATCHED filters ,KALMAN filtering ,GAUSSIAN distribution ,SIGNAL-to-noise ratio ,MAXIMUM likelihood statistics - Abstract
In this letter, we deal with the problem of high-resolution time delay estimation (TDE) in multipath environments exploiting the matched filter (MF) output data. To this end, we develop a systematic post-processing framework, consisting of two sparsity-based algorithms and a refining procedure aimed at reducing the computational load. The TDE problem is formulated as a sparse signal recovery problem and efficiently solved by resorting to a majorization-minimization paradigm and a cyclic procedure. At the design stage, we assume a complex-valued Gaussian distribution model for the MF samples and incorporate a module-product prior that promotes the sparsity more significantly than the conventional complex Laplacian distribution. The preliminary performance assessment, conducted on simulated data, shows that, at least for the considered parameter values, the proposed delay estimators approach the Cramér-Rao bound for different signal-to-noise ratios and bandwidths. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
33. A Majorization-Minimization Algorithm for Nonnegative Binary Matrix Factorization.
- Author
-
Magron, Paul and Fevotte, Cedric
- Subjects
MATRIX decomposition ,FACTORIZATION ,NONNEGATIVE matrices ,PROBABILISTIC generative models ,ALGORITHMS ,BAYESIAN field theory ,COMPUTATIONAL complexity - Abstract
This paper tackles the problem of decomposing binary data using matrix factorization. We consider the family of mean-parametrized Bernoulli models, a class of generative models that are well suited for modeling binary data and enable interpretability of the factors. We factorize the Bernoulli parameter and consider an additional Beta prior on one of the factors to further improve the model’s expressive power. While similar models have been proposed in the literature, they only exploit the Beta prior as a proxy to ensure a valid Bernoulli parameter in a Bayesian setting; in practice it reduces to a uniform or uninformative prior. Besides, estimation in these models has focused on costly Bayesian inference. In this paper, we propose a simple yet very efficient majorization-minimization algorithm for maximum a posteriori estimation. Our approach leverages the Beta prior whose parameters can be tuned to improve performance in matrix completion tasks. Experiments conducted on three public binary datasets show that our approach offers an excellent trade-off between prediction performance, computational complexity, and interpretability. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
34. An SQP Algorithm for Structural Topology Optimization Based on Majorization–Minimization Method.
- Author
-
Liao, Weilong, Zhang, Qiliang, and Meng, Huanli
- Subjects
STRUCTURAL optimization ,QUADRATIC programming ,HESSIAN matrices ,QUASI-Newton methods ,ALGORITHMS ,CONVEX programming - Abstract
When applying the sequential quadratic programming (SQP) algorithm to topology optimization, using quasi-Newton methods or calculating the Hessian matrix directly will result in a considerable amount of calculation, making it computationally infeasible when the number of optimization variables is large. To solve the above problems, this paper creatively proposes a method for calculating the approximate Hessian matrix for structural topology optimization with minimum compliance problems. Then, the second-order Taylor expansion transforms the original problem into a series of separable and easy-to-solve convex quadratic programming (QP) subproblems. Finally, the quadratic programming optimality criteria (QPOC) method and the QP solver of MATLAB are used to solve the subproblems. Compared with other sequential quadratic programming methods, the advantage of the proposed method is that the Hessian matrix is diagonally positive definite and its calculation is simple. Numerical experiments on an MBB beam and a cantilever beam verify the feasibility and efficiency of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
35. PointSGRADE: Sparse learning with graph representation for anomaly detection by using unstructured 3D point cloud data
- Author
-
Tao, Chengyu and Du, Juan
- Abstract
Surface anomaly detection by using 3D point cloud data has recently received significant attention. To completely measure the common free-form surfaces without loss of details, advanced 3D scanning technologies, such as 3D laser scanners, can be applied and will produce an unstructured point cloud. However, this irregular data structure poses challenges to anomaly detection, in that the existing methods based on regular data, e.g., 2D image, cannot be directly applied. This article proposes a sparse learning framework with a graph representation of the unstructured point cloud for anomaly detection (PointSGRADE). Specifically, the free-form surface is assumed to be smooth. Then, the associated point cloud can be represented as a graph. Subsequently, considering the sparse anomalies, we propose a sparse learning framework and formulate the anomaly detection problem as a penalized optimization problem, which is further solved by a computationally efficient majorization-minimization framework. Case studies demonstrate the accuracy and robustness of the proposed method. This article proposes a novel methodology for sparse anomaly detection on smooth free-form surfaces represented by unstructured point cloud, which is critical for quality inspection in manufacturing and other application areas. © 2024 IISE.
- Published
- 2024
36. Channel Estimation for Reconfigurable Intelligent Surface Aided MISO Communications: From LMMSE to Deep Learning Solutions
- Author
-
Neel Kanth Kundu and Matthew R. McKay
- Subjects
Reconfigurable intelligent surface ,MISO ,LMMSE ,MMSE ,majorization-minimization ,deep learning ,Telecommunication ,TK5101-6720 ,Transportation and communications ,HE1-9990 - Abstract
We consider multi-antenna wireless systems aided by reconfigurable intelligent surfaces (RIS). RIS presents a new physical layer technology for improving coverage and energy efficiency by intelligently controlling the propagation environment. In practice, however, achieving the anticipated gains of RIS requires accurate channel estimation. Recent attempts to solve this problem have considered the least-squares (LS) approach, which is simple but also sub-optimal. The optimal channel estimator, based on the minimum mean-squared-error (MMSE) criterion, is challenging to obtain and is non-linear due to the non-Gaussianity of the effective channel seen at the receiver. Here we present approaches to approximate the optimal MMSE channel estimator. As a first approach, we analytically develop the best linear estimator, the LMMSE, together with a corresponding majorization-minimization-based algorithm designed to optimize the RIS phase shift matrix during the training phase. This estimator is shown to yield improved accuracy over the LS approach by exploiting second-order statistical properties of the wireless channel and the noise. To further improve performance and better approximate the globally-optimal MMSE channel estimator, we propose data-driven non-linear solutions based on deep learning. Specifically, by posing the MMSE channel estimation problem as an image denoising problem, we propose two convolutional neural network (CNN)-based methods to perform the denoising and approximate the optimal MMSE channel estimation solution. Our numerical results show that these CNN-based estimators give superior performance compared with linear estimation approaches. They also have low computational complexity requirements, thereby motivating their potential use in future RIS-aided wireless communication systems.
- Published
- 2021
- Full Text
- View/download PDF
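The LMMSE estimator discussed above has a standard closed form for a linear-Gaussian pilot model, ĥ = C_h Aᴴ (A C_h Aᴴ + C_n)⁻¹ y. The sketch below shows only that generic estimator (the RIS phase-shift optimization and the CNN denoisers of the paper are not reproduced; the pilot matrix, covariances, and dimensions are our own assumptions):

```python
import numpy as np

def lmmse_estimate(y, A, C_h, C_n):
    """Linear MMSE estimate of h from y = A h + n.

    h is zero mean with covariance C_h; n is zero-mean noise with
    covariance C_n. The LMMSE estimator is
        h_hat = C_h A^H (A C_h A^H + C_n)^{-1} y.
    """
    S = A @ C_h @ A.conj().T + C_n   # covariance of the observation y
    return C_h @ A.conj().T @ np.linalg.solve(S, y)

rng = np.random.default_rng(3)
n, m = 8, 16                         # channel dimension, number of pilot measurements
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
C_h = np.eye(n)                      # assumed channel covariance (prior knowledge)
sigma2 = 0.1
C_n = sigma2 * np.eye(m)
h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
y = A @ h + np.sqrt(sigma2 / 2) * (rng.standard_normal(m) + 1j * rng.standard_normal(m))
h_lmmse = lmmse_estimate(y, A, C_h, C_n)
```

Exploiting the second-order statistics C_h and C_n is precisely what gives the LMMSE its edge over plain least squares, as the abstract notes.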
37. Non-Coherent DOA Estimation via Proximal Gradient Based on a Dual-Array Structure
- Author
-
Zhengyu Wan and Wei Liu
- Subjects
DOA estimation ,phase retrieval ,group sparsity ,dual-arrays ,majorization-minimization ,proximal gradient ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Although the non-coherent direction of arrival (DOA) estimation problem can be solved by sparse phase retrieval algorithms, known reference signals are required to deal with the inherent ambiguity issue of this approach. To avoid the use of reference signals, an effective array structure employing two uniform linear arrays is proposed (although other array structures are possible, such as the circular array), based on which a phase retrieval problem employing group sparsity is formulated. It is then replaced by its convex surrogate alternative by applying the majorization-minimization technique and the proximal gradient method is employed to solve the surrogate problem. The proposed algorithm is referred to as fasT grOup sparsitY Based phAse Retrieval (ToyBar). Unlike the existing phase-retrieval based DOA estimation algorithm GESPAR, it does not need to know the number of incident signals in advance. Simulation results indicate that the proposed algorithm has a fast convergence speed and achieves better estimation performance.
- Published
- 2021
- Full Text
- View/download PDF
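The two building blocks named above — a group-sparsity penalty and the proximal gradient method — combine into the standard group-lasso iteration, whose proximal operator is a block soft-threshold. This is a generic sketch, not ToyBar itself; the groups, data, and regularization weight are our own illustrative choices:

```python
import numpy as np

def block_soft_threshold(x, groups, t):
    """Proximal operator of t * sum_g ||x_g||_2 (group-sparsity penalty)."""
    z = np.zeros_like(x)
    for g in groups:
        ng = np.linalg.norm(x[g])
        if ng > t:
            z[g] = (1.0 - t / ng) * x[g]   # shrink the whole group
    return z                               # groups with ||x_g|| <= t are zeroed

def group_lasso_pg(A, b, groups, lam=3.0, iters=300):
    """Proximal gradient for 0.5*||Ax - b||^2 + lam * sum_g ||x_g||_2."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = block_soft_threshold(x - grad / L, groups, lam / L)
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((80, 12))
groups = [list(range(0, 4)), list(range(4, 8)), list(range(8, 12))]
x_true = np.zeros(12)
x_true[0:4] = [1.0, -1.0, 0.5, 2.0]        # exactly one active group
b = A @ x_true + 0.05 * rng.standard_normal(80)
x_hat = group_lasso_pg(A, b, groups, lam=3.0)
```

The block threshold either keeps or kills a group as a whole, which is what lets group-sparse formulations recover jointly active components.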
38. Designing Sequence Set With Minimal Peak Side-Lobe Level for Applications in High Resolution RADAR Imaging
- Author
-
Surya Prakash Sankuru, R Jyothi, Prabhu Babu, and Mohammad Alaee-Kerahroodi
- Subjects
RADAR waveform design ,peak side-lobe level ,PAPR ,active sensing systems ,Majorization-Minimization ,MIMO RADAR ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
A constant-modulus sequence set with a low peak side-lobe level is a necessity for enhancing the performance of modern active sensing systems like Multiple Input Multiple Output (MIMO) RADARs. In this paper, we consider the problem of designing a constant-modulus sequence set by minimizing the peak side-lobe level, which can be cast as a non-convex minimax problem, and propose a Majorization-Minimization technique based iterative monotonic algorithm named the PSL minimizer. The iterative steps of our algorithm are computationally not very demanding and they can be efficiently implemented via Fast Fourier Transform (FFT) operations. We also establish the convergence of our proposed algorithm and discuss the computational and space complexities of the algorithm. Finally, through numerical simulations, we compare the performance of our method with state-of-the-art methods. To highlight the potential of our approach, we evaluate the performance of the sequence set designed via our approach in the context of probing sequence set design for MIMO RADAR angle-range imaging application and show results exhibiting good performance of our method when compared with other commonly used sequence set design approaches.
- Published
- 2021
- Full Text
- View/download PDF
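The design metric above, and the FFT-based evaluation the abstract alludes to, can be sketched as follows: the aperiodic autocorrelation of a sequence is obtained from a zero-padded FFT, and the peak side-lobe level (PSL) is the largest non-zero-lag correlation relative to the mainlobe. The Golomb polyphase sequence used as a test signal is our own choice, not one of the paper's designs:

```python
import numpy as np

def psl_db(s):
    """Peak side-lobe level (dB) of a sequence's aperiodic autocorrelation,
    computed via zero-padded FFTs to avoid circular wraparound."""
    n = len(s)
    F = np.fft.fft(s, 2 * n)                # zero-pad to length 2n
    r = np.fft.ifft(np.abs(F) ** 2)[:n]     # aperiodic autocorrelation, lags 0..n-1
    side = np.abs(r[1:])                    # exclude the zero-lag mainlobe
    return 20.0 * np.log10(side.max() / np.abs(r[0]))

n = 64
k = np.arange(n)
golomb = np.exp(1j * np.pi * k * (k + 1) / n)    # constant-modulus polyphase sequence
random_seq = np.exp(2j * np.pi * np.random.default_rng(5).random(n))
```

A structured polyphase sequence has markedly lower PSL than a random unimodular sequence, which is the gap the PSL minimizer is designed to close further.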
39. Block-Sparse Signal Recovery via General Total Variation Regularized Sparse Bayesian Learning.
- Author
-
Sant, Aditya, Leinonen, Markus, and Rao, Bhaskar D.
- Subjects
- *
EXPECTATION-maximization algorithms , *CONVEX sets , *COST functions , *MATHEMATICAL optimization , *MATHEMATICAL regularization , *PARALLEL algorithms - Abstract
One of the main challenges in block-sparse signal recovery, as encountered in, e.g., multi-antenna mmWave channel models, is block-patterned estimation without knowledge of block sizes and boundaries. We propose a novel Sparse Bayesian Learning (SBL) method for block-sparse signal recovery under unknown block patterns. Contrary to conventional approaches that impose block-promoting regularization on the signal components, we apply two classes of hyperparameter regularizers for the SBL cost function, inspired by total variation (TV) denoising. The first class relies on a conventional TV difference unit and allows performing the SBL inference iteratively through a set of convex optimization problems, enabling a flexible choice of numerical solvers. The second class incorporates a region-aware TV penalty to penalize the signal and zero blocks in a dissimilar manner, enhancing the performance. We derive an alternating optimization algorithm based on expectation-maximization to perform the SBL inference through computationally efficient parallel updates for both the regularizer classes. The numerical results show that the proposed TV-regularized SBL algorithm is robust to the nature of the block structure and is capable of recovering signals with both block-patterned and isolated components, proving effective for various signal recovery systems. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
40. Optimal Sensor Placement for Source Localization: A Unified ADMM Approach.
- Author
-
Sahu, Nitesh, Wu, Linlong, Babu, Prabhu, M. R., Bhavani Shankar, and Ottersten, Bjorn
- Subjects
- *
SENSOR placement , *LOCALIZATION (Mathematics) , *NOISE measurement , *WIRELESS communications , *RADAR - Abstract
Source localization plays a key role in many applications including radar, wireless and underwater communications. Among various localization methods, the most popular ones are Time-Of-Arrival (TOA), Time-Difference-Of-Arrival (TDOA), Angle-Of-Arrival (AOA) and Received Signal Strength (RSS) based. Since the Cramér-Rao lower bounds (CRLB) of these methods depend on the sensor geometry explicitly, sensor placement becomes a crucial issue in source localization applications. In this paper, we consider finding the optimal sensor placements for the TOA, TDOA, AOA and RSS based localization scenarios. We first unify these localization models by a generalized problem formulation based on the CRLB-related metric. Then a unified optimization framework for optimal sensor placement (UTMOST) is developed through the combination of the alternating direction method of multipliers (ADMM) and majorization-minimization (MM) techniques. Unlike the majority of the state-of-the-art works, the proposed UTMOST neither approximates the design criterion nor considers only uncorrelated noise in the measurements. It can readily adapt to different design criteria (i.e., A-, D- and E-optimality) with slight modifications within the framework and yields the corresponding optimal sensor placements. Extensive numerical experiments are performed to exhibit the efficacy and flexibility of the proposed framework. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
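The CRLB-related design metric that drives placements like those above can be illustrated for the TOA case with i.i.d. Gaussian range noise, where the Fisher information matrix is a sum of outer products of unit bearing vectors. This is only the metric evaluation, not the UTMOST optimizer; the geometries, noise model (uncorrelated, with the propagation speed absorbed into the variance), and A-optimality choice are our own simplifying assumptions:

```python
import numpy as np

def toa_crlb_trace(sensors, source, sigma2=1.0):
    """A-optimality metric trace(FIM^{-1}) for TOA-based 2D localization.

    With i.i.d. Gaussian range-measurement noise of variance sigma2, the
    Fisher information matrix is (1/sigma2) * sum_i u_i u_i^T, where u_i
    is the unit vector from the source to sensor i.
    """
    diff = sensors - source
    U = diff / np.linalg.norm(diff, axis=1, keepdims=True)  # unit bearings
    J = (U.T @ U) / sigma2                                  # Fisher information
    return np.trace(np.linalg.inv(J))                       # CRLB trace

source = np.array([0.0, 0.0])
ang = np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)
spread = np.c_[np.cos(ang), np.sin(ang)] * 10.0               # sensors surround the source
clustered = np.c_[np.cos(ang / 20), np.sin(ang / 20)] * 10.0  # sensors in a narrow sector
```

Surrounding the source yields an isotropic information matrix (trace(J⁻¹) = 2σ²/… is small), while a clustered geometry is nearly rank-deficient and the bound blows up, which is exactly why placement matters.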
41. Non-Coherent DOA Estimation via Majorization-Minimization Using Sign Information.
- Author
-
Delbari, Mohamadreza, Javaheri, Amirhossein, Zayyani, Hadi, and Marvasti, Farrokh
- Subjects
AMBIGUITY ,MEASUREMENT errors ,SIGNAL-to-noise ratio ,PROBLEM solving - Abstract
In this letter, the problem of non-coherent direction of arrival (DOA) estimation is investigated exploiting the sign of the measurements in order to resolve the inherent ambiguity of the problem. Although the phase values are inaccurate, the sign of real and imaginary parts of the measurements will most likely remain correct under limited phase errors. Furthermore, a new approach for solving the problem is proposed employing a modified version of the Majorization-Minimization (MM) technique, without any prior information about the number of incident signals. Some theoretical analyses of our proposed algorithm are also provided in the paper. Finally, the simulation results are presented, demonstrating the efficiency of the proposed algorithm in comparison to some state-of-the-art methods in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
42. Maximum Likelihood Algorithm for Time-Delay Based Multistatic Target Localization.
- Author
-
Panwar, Kuntal, Babu, Prabhu, and Stoica, Petre
- Subjects
SIGNAL-to-noise ratio ,NOISE measurement ,ALGORITHMS ,MAXIMUM likelihood statistics ,TIME delay systems - Abstract
In this letter we address the problem of multistatic target localization using time-delay measurements corrupted by noise with unknown non-uniform variances. More concretely, we consider the problem of joint maximum likelihood (ML) estimation of the target position and the noise variances, for which we propose a majorization-minimization based algorithm. The proposed approach is compared with state-of-the-art algorithms, and the simulation results show the excellent accuracy of our algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
43. Double Intelligent Reflecting Surface-Assisted Multi-User MIMO mmWave Systems With Hybrid Precoding.
- Author
-
Niu, Hehao, Chu, Zheng, Zhou, Fuhui, Pan, Cunhua, Ng, Derrick Wing Kwan, and Nguyen, Huan X
- Subjects
- *
MIMO systems, *HYBRID systems, *PHASE shifters, *TRANSMITTERS (Communication), *RIEMANNIAN manifolds, *QUADRATIC programming - Abstract
This work investigates the effect of a double intelligent reflecting surface (IRS) in improving the spectral efficiency of a multi-user multiple-input multiple-output (MIMO) network operating in the millimeter wave (mmWave) band. Specifically, we aim to solve a weighted sum rate maximization problem by jointly optimizing the digital precoding at the transmitter and the analog phase shifters at the IRS, subject to a minimum achievable rate constraint. To facilitate the design of an efficient solution, we first reformulate the original problem into a tractable one by exploiting the majorization-minimization (MM) method. Then, a block coordinate descent (BCD) method is proposed to obtain a suboptimal solution, where the precoding matrices and the phase shifters are alternately optimized. Specifically, the digital precoding matrix design problem is solved via quadratically constrained quadratic programming (QCQP), while the analog phase shift optimization is solved via Riemannian manifold optimization (RMO). The convergence and computational complexity are analyzed. Finally, simulation results are provided to verify the performance of the proposed design, as well as the effectiveness of the double IRS in improving the spectral efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
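The abstract above alternates between two blocks of variables (precoders and phase shifters), each optimized with the other held fixed. A minimal, generic block coordinate descent sketch of that mechanism is shown below on a toy least-squares objective; `A`, `B`, `c` and the unconstrained blocks are illustrative stand-ins, not the paper's precoder or phase-shifter variables.

```python
import numpy as np

# Block coordinate descent (BCD) sketch: minimize f(x, y) = ||A x + B y - c||^2
# by exactly minimizing over one block while the other is fixed, then swapping.
rng = np.random.default_rng(1)
A, B = rng.standard_normal((8, 3)), rng.standard_normal((8, 3))
c = rng.standard_normal(8)
x, y = np.zeros(3), np.zeros(3)
obj = []
for _ in range(50):
    x, *_ = np.linalg.lstsq(A, c - B @ y, rcond=None)  # x-block update, y fixed
    y, *_ = np.linalg.lstsq(B, c - A @ x, rcond=None)  # y-block update, x fixed
    obj.append(np.sum((A @ x + B @ y - c) ** 2))
# Each block update is an exact minimizer, so the objective never increases.
assert all(obj[i + 1] <= obj[i] + 1e-12 for i in range(len(obj) - 1))
print(obj[-1])
```

In the paper's setting each block subproblem is harder (QCQP and Riemannian manifold optimization rather than a linear least-squares solve), but the monotone alternating structure is the same.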
44. A Novel Wireless Communication Paradigm for Intelligent Reflecting Surface Based Symbiotic Radio Systems.
- Author
-
Hua, Meng, Wu, Qingqing, Yang, Luxi, Schober, Robert, and Poor, H. Vincent
- Subjects
- *
WIRELESS communications, *BIT error rate, *INTELLIGENT transportation systems, *CONVEX programming, *RADIO technology, *RADIO transmitters & transmission, *SIGNAL processing - Abstract
This paper investigates a novel intelligent reflecting surface (IRS)-based symbiotic radio (SR) system architecture consisting of a transmitter, an IRS, and an information receiver (IR). The primary transmitter communicates with the IR and at the same time assists the IRS in forwarding information to the IR. Based on the IRS’s symbol period, we distinguish two scenarios, namely, commensal SR (CSR) and parasitic SR (PSR), where two different techniques for decoding the IRS signals at the IR are employed. We formulate bit error rate (BER) minimization problems for both scenarios by jointly optimizing the active beamformer at the base station and the phase shifts at the IRS, subject to a minimum primary rate requirement. Specifically, for the CSR scenario, a penalty-based algorithm is proposed to obtain a high-quality solution, where semi-closed-form solutions for the active beamformer and the IRS phase shifts are derived based on Lagrange duality and Majorization-Minimization methods, respectively. For the PSR scenario, we apply a bisection search-based method, successive convex approximation, and difference of convex programming to develop a computationally efficient algorithm, which converges to a locally optimal solution. Simulation results demonstrate the effectiveness of the proposed algorithms and show that the proposed SR techniques are able to achieve a lower BER than benchmark schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
45. Efficient Algorithms for Constant-Modulus Analog Beamforming.
- Author
-
Arora, Aakash, Tsinos, Christos G., Shankar, M. R. Bhavani, Chatzinotas, Symeon, and Ottersten, Bjorn
- Subjects
- *
BEAMFORMING, *ALGORITHMS, *ANTENNA arrays, *RADIO frequency, *CHARGE coupled devices, *MIMO systems - Abstract
The use of a large-scale antenna array (LSAA) has become an important characteristic of multi-antenna communication systems to achieve beamforming gains, such as in designing millimeter-wave (mmWave) systems to combat severe propagation losses. In such applications, each antenna element has to be driven by a radio frequency (RF) chain for the implementation of fully-digital beamformers, significantly increasing the hardware cost, complexity, and power consumption. Therefore, constant-modulus analog beamforming (CMAB) becomes a viable solution. In this paper, we consider the scaled analog beamforming (SAB) or constant-modulus analog beamforming (CMAB) architecture and design the system parameters by solving two variants of the beampattern matching problem. In the first case, both the magnitude and phase of the beampattern are matched to the given desired beampattern, whereas in the second case, only the magnitude of the beampattern is matched. Both beampattern matching problems are cast as variants of the constant-modulus least-squares (CLS) problem. We provide efficient algorithms based on the alternating majorization-minimization (AMM) framework, which combines the alternating minimization and MM frameworks, and on conventional cyclic coordinate descent (C-CCD) algorithms to solve the problem in each case. We also propose algorithms based on a new modified-CCD (M-CCD) approach. For all the developed algorithms we prove convergence to a Karush-Kuhn-Tucker (KKT) point (or a stationary point). Numerical results demonstrate that the proposed algorithms converge faster than state-of-the-art solutions. Among all the algorithms, the M-CCD-based algorithms converge faster in terms of the number of iterations, whereas the AMM-based algorithms offer lower complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
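The constant-modulus least-squares (CLS) subproblem named in the abstract above admits a closed-form per-coordinate phase update, which is the core of textbook cyclic coordinate descent. The sketch below is that generic CCD loop under unit-modulus constraints, not the paper's C-CCD or M-CCD variants; the problem sizes and data are illustrative.

```python
import numpy as np

# Generic CCD for the CLS problem: min ||A w - b||^2 s.t. |w_i| = 1.
# With all other coordinates fixed, the optimal w_i is the unit-modulus number
# aligned with a_i^H r_i, where r_i is the residual excluding coordinate i.
rng = np.random.default_rng(2)
m, n = 16, 6
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
b = rng.standard_normal(m) + 1j * rng.standard_normal(m)
w = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n))   # random unit-modulus start
obj = [np.linalg.norm(A @ w - b) ** 2]
for _ in range(30):
    for i in range(n):
        r = b - A @ w + A[:, i] * w[i]                     # residual excluding coord i
        w[i] = np.exp(1j * np.angle(A[:, i].conj() @ r))   # closed-form phase update
    obj.append(np.linalg.norm(A @ w - b) ** 2)
# Each coordinate update is the exact minimizer over that coordinate,
# so the objective is monotonically non-increasing.
assert all(obj[k + 1] <= obj[k] + 1e-9 for k in range(len(obj) - 1))
```

The magnitude-only matching variant in the paper requires extra machinery (the target phase also becomes a variable); this sketch covers only the fixed-target CLS core.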
46. PGPAL: A Monotonic Iterative Algorithm for Phase-Retrieval Under the Presence of Poisson-Gaussian Noise.
- Author
-
Fatima, Ghania and Babu, Prabhu
- Subjects
STANDARD deviations, REMOTE sensing - Abstract
In this paper, we propose an iterative algorithm named PGPAL for the phase-retrieval problem in the scenario where the measurements follow a Poisson plus Gaussian (PG) distribution, which is a more realistic model in applications such as astronomy, microscopy, medical imaging, and remote sensing. The proposed algorithm is based on the majorization-minimization (MM) framework, in which a simple surrogate function that tightly upper-bounds the challenging underlying maximum-likelihood (ML) objective is constructed and minimized iteratively. The proposed algorithm monotonically decreases the ML objective and is guaranteed to converge to a stationary point of the ML objective. Numerical simulations are performed for the one-dimensional phase-retrieval case, and the performance of the proposed method is compared with the recently proposed Poisson phase-retrieval algorithm, which assumes the measurements to be only Poisson distributed. The performance of the proposed MM-based algorithm in terms of the normalized root mean square error (NRMSE) is found to be much better than that of the Poisson phase-retrieval method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
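The MM mechanism described in the abstract above — build a surrogate that upper-bounds the objective and is tight at the current iterate, then minimize the surrogate — can be shown on a one-line toy problem. Nothing below is from the paper: the objective f(x) = (x-1)^2 + lam*|x| and the standard quadratic bound |x| <= (x^2 + x_k^2)/(2|x_k|) are a generic textbook illustration of the monotone-decrease guarantee.

```python
# MM sketch: majorize |x| by a quadratic that touches it at the current iterate,
# then minimize the resulting surrogate in closed form. MM guarantees
# f(x_{k+1}) <= g(x_{k+1}|x_k) <= g(x_k|x_k) = f(x_k): a monotone decrease.
lam = 1.0
f = lambda x: (x - 1.0) ** 2 + lam * abs(x)
x = 1.0                                  # start away from zero (bound needs |x_k| > 0)
values = [f(x)]
for _ in range(100):
    # minimizer of the surrogate (x-1)^2 + lam*(x^2 + x_k^2)/(2|x_k|)
    x = 1.0 / (1.0 + lam / (2.0 * abs(x)))
    values.append(f(x))
assert all(values[k + 1] <= values[k] + 1e-12 for k in range(len(values) - 1))
print(round(x, 4))  # 0.5 — the soft-thresholded solution 1 - lam/2
```

PGPAL applies the same recipe to the much harder PG maximum-likelihood objective; the payoff in both cases is the same monotone-decrease and stationary-point guarantee.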
47. Joint Beamforming Optimization in Multi-Relay Assisted MIMO Over-the-Air Computation for Multi-Modal Sensing Data Aggregation.
- Author
-
Jiang, Miao, Li, Yiqing, Zhang, Guangchi, and Cui, Miao
- Abstract
In this letter, we investigate a multi-relay assisted over-the-air computation network for multi-modal sensing with direct links, where each node is equipped with multiple antennas and all the relays operate in an amplify-and-forward mode. Specifically, the whole transmission is divided into two phases and all the sensors transmit symbols during both phases. In particular, we are interested in minimizing the computation distortion measured by the mean-squared error (MSE) via jointly optimizing the beamforming matrices at all nodes, subject to individual power constraints at the sensors and relays. The major difficulty lies in the strong coupling of the beamforming matrices in the objective function and the non-convex transmit power constraints at the relays. To tackle this problem, a low-complexity locally optimal method based on alternating optimization is proposed, where closed-form expressions are obtained in each iteration. Furthermore, simulation results show that our proposed beamforming design can substantially enhance the computation MSE performance compared to other benchmark schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
48. Unification of sparse Bayesian learning algorithms for electromagnetic brain imaging with the majorization minimization framework
- Author
-
Ali Hashemi, Chang Cai, Gitta Kutyniok, Klaus-Robert Müller, Srikantan S. Nagarajan, and Stefan Haufe
- Subjects
Electro-/magnetoencephalography, Brain source imaging, Type I/II Bayesian learning, Non-convex, Majorization-Minimization, Noise learning, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571 - Abstract
Methods for electro- or magnetoencephalography (EEG/MEG) based brain source imaging (BSI) using sparse Bayesian learning (SBL) have been demonstrated to achieve excellent performance in situations with low numbers of distinct active sources, such as event-related designs. This paper extends the theory and practice of SBL in three important ways. First, we reformulate three existing SBL algorithms under the majorization-minimization (MM) framework. This unification perspective not only provides a useful theoretical framework for comparing different algorithms in terms of their convergence behavior, but also provides a principled recipe for constructing novel algorithms with specific properties by designing appropriate bounds of the Bayesian marginal likelihood function. Second, building on the MM principle, we propose a novel method called LowSNR-BSI that achieves favorable source reconstruction performance in low signal-to-noise-ratio (SNR) settings. Third, precise knowledge of the noise level is a crucial requirement for accurate source reconstruction. Here we present a novel principled technique to accurately learn the noise variance from the data, either jointly within the source reconstruction procedure or using one of two proposed cross-validation strategies. Empirically, we show that the monotonic convergence behavior predicted by MM theory is confirmed in numerical experiments. Using simulations, we further demonstrate the advantage of LowSNR-BSI over conventional SBL in low-SNR regimes, and the advantage of learned noise levels over estimates derived from baseline data. To demonstrate the usefulness of our novel approach, we show neurophysiologically plausible source reconstructions on averaged auditory evoked potential data.
- Published
- 2021
- Full Text
- View/download PDF
49. Wideband MIMO Radar Transmit Beampattern Synthesis via Majorization–Minimization.
- Author
-
Huang, Zhongrui, Tang, Bo, Wang, Hai, and Qin, Lilong
- Subjects
- *
MIMO radar, *MATRIX inversion, *ALGORITHMS, *RADAR - Abstract
We consider the design of constant-modulus waveforms for wideband multiple-input multiple-output radar. The aim is to match a desired transmit beampattern. To tackle the non-convex design problem, we develop an iterative algorithm, which is based on cyclic optimization and majorization–minimization. We prove that the sequence of the objective values of the proposed algorithm has guaranteed convergence. Moreover, we can obtain closed-form solutions for the associated optimization problems at every iteration. Furthermore, the proposed method can be implemented without matrix inversion and hence is computationally efficient. Simulation results demonstrate that the performance of the proposed algorithm is better than existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
50. Probabilistic control and majorisation of optimal control.
- Author
-
Lefebvre, Tom
- Subjects
- *
MAXIMUM likelihood statistics, *CLOSED loop systems, *DECISION making, *PROBLEM solving - Abstract
Probabilistic control design is founded on the principle that a rational agent attempts to match the modelled with an arbitrary desired closed-loop system trajectory density. The framework was originally proposed as a tractable alternative to traditional optimal control design, parametrizing desired behaviour through fictitious transition and policy densities and using the information projection as a proximity measure. In this work we introduce an alternative parametrization of desired closed-loop behaviour and explore alternative proximity measures between densities. It is then illustrated how the associated probabilistic control problems resolve into uncertain or probabilistic policies. Our main result is to show that the probabilistic control objectives majorize conventional, stochastic and risk-sensitive, optimal control objectives. This observation allows us to identify two probabilistic fixed-point iterations that converge to the deterministic optimal control policies, establishing an explicit connection between the two formulations. Further, we demonstrate that the risk-sensitive optimal control formulation is also technically equivalent to a maximum likelihood estimation problem on a probabilistic graphical model where the notion of costs is directly encoded into the model. The associated treatment of the estimation problem is then shown to coincide with the moment-projected probabilistic control formulation. In this way, optimal decision making can be reformulated as an iterative inference problem. Based on these insights we discuss directions for algorithmic development. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF