73 results on '"Sparsity"'
Search Results
2. An improved sparsity‐aware normalized least‐mean‐square scheme for underwater communication.
- Author
-
Kumar, Anand and Kumar, Prashant
- Subjects
CHANNEL estimation, COASTAL surveillance, QUANTITATIVE research, UNDERWATER acoustics, ADAPTIVE filters - Abstract
Underwater communication (UWC) is widely used in coastal surveillance and early warning systems. Precise channel estimation is vital for efficient and reliable UWC. Sparse direct-adaptive filtering algorithms have become popular in UWC. Herein, we present an improved adaptive convex-combination method for the identification of sparse structures using a reweighted normalized least-mean-square (RNLMS) algorithm. Moreover, to make the RNLMS algorithm independent of the reweighted l1-norm parameter, a modified sparsity-aware adaptive zero-attracting RNLMS (AZA-RNLMS) algorithm is introduced to ensure accurate modeling. In addition, we present a quantitative analysis of this algorithm to evaluate its convergence speed and accuracy. Furthermore, we derive an excess mean-square-error expression showing that the AZA-RNLMS algorithm performs better on the harsh underwater channel. Measured data from the SPACE08 experimental channel are used for simulation, and results are presented to verify the performance of the proposed algorithm. The simulation results confirm that the proposed algorithm for underwater channel estimation performs better than earlier schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
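The zero-attracting NLMS family that this entry builds on can be sketched in a few lines. Below is a minimal illustration of a basic zero-attracting NLMS update; the paper's AZA-RNLMS adds reweighting and an adaptive convex combination on top of this idea, and the parameter values here are illustrative, not taken from the paper.

```python
import numpy as np

def za_nlms_step(w, x, d, mu=0.5, rho=1e-4, eps=1e-8):
    """One zero-attracting NLMS update for sparse channel estimation.

    w : current estimate of the (sparse) channel impulse response
    x : most recent input regressor, same length as w
    d : observed desired sample
    mu, rho, eps : step size, zero-attraction strength, regularizer
    """
    e = d - w @ x                        # a-priori estimation error
    w = w + mu * e * x / (x @ x + eps)   # normalized LMS correction
    w = w - rho * np.sign(w)             # zero attractor (l1 subgradient)
    return w, e
```

The zero-attractor term shrinks small coefficients toward zero each step, which is what speeds convergence on sparse channels relative to plain NLMS.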
3. Rotating machine fault diagnosis by a novel fast sparsity-enabled feature-energy-ratio method.
- Author
-
Biao, He, Qin, Yi, Luo, Jun, Wu, Fei, and Xiao, Dengyu
- Subjects
FAULT diagnosis, GEARBOXES, ROTATING machinery, FAST Fourier transforms, SPECTRAL lines, FREQUENCY spectra, MACHINERY - Abstract
To effectively extract the early fault characteristics of rotating machines, a new fast sparsity-enabled feature-energy-ratio method is investigated in this paper. This method includes two stages. In the first stage, the spectrum is adaptively segmented through a coarse-to-fine strategy based on the ordered local maximums, so the fault characteristic band can be divided automatically. A novel index based on sparsity, energy ratio, and kurtosis is constructed to evaluate periodic impulses in each sub-signal, both globally and locally. In the second stage, the Fourier spectrum from the first stage is refined by an improved sparse coding shrinkage denoising (SCSD) method whose parameters can be dynamically determined for each point. Within the improved SCSD approach, the differential result of the amplitude spectrum is used as input to improve the sparsity. Moreover, the ratios between the SCSD output and its input are applied to weight the Fourier spectrum and maintain the phase information. Finally, the inverse fast Fourier transform and squared envelope spectra are applied to detect the fault characteristics. Bearing and gearbox vibration signals are used to validate the proposed methodology. The experimental results show that the proposed method is superior to some typical methods and that the proposed index is robust to interference from aperiodic impulses. Therefore, the proposed method has great potential in the fault diagnosis of rotating machines. • The frequency spectrum is adaptively segmented by a coarse-to-fine strategy. • Potential characteristic spectral lines are estimated by the harmonic product. • An improved SCSD is proposed to suppress same-frequency-band interference. • Prior knowledge of the fault type is not required in this method. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
4. Ultrasound-guided biopsy tracking using data-driven needle identification in application to kidney.
- Author
-
Park, Suhyung, Kim, Dong Joon, Beom, Dong Gyu, Lee, Myeongjin, Bae, Eun Hui, Kim, Soo Wan, and Kim, Chang Seong
- Subjects
NEEDLE biopsy, SPECKLE interference, GABOR filters, ULTRASONIC imaging, ACQUISITION of data, HOUGH transforms - Abstract
Ultrasound-guided biopsy needle identification is a crucial step in clinical treatment planning, but remains challenging due to the difficulty of data acquisition, which includes the ultrasound speckle interference pattern and the presence of strong linear anatomical structures. This paper introduces a real-time needle tracking method for visualizing 2D needle shapes and trajectories during interventions. Based on observations of the needle dynamics within a small fraction of dynamic ultrasound images, the current needle placement was estimated: (1) a subspace-based background suppression technique was used to identify points representing possible needle locations using the consecutive dynamic frames in a sliding-window fashion, and (2) a Hough transform was then used to filter out false positives and fit the remaining points in the Hough space. Evaluation on datasets from 16 subjects demonstrated that the proposed method produced high-quality needle-only images, significantly reducing the mean trajectory error to 1.89° and the tip position error to 5.1 mm, outperforming temporal subtraction (2.65° and 14.3 mm) and Gabor filtering (2.94° and 13.8 mm). The attention-based U-Net achieved a comparable mean trajectory angle error of 1.82° but yielded a higher mean tip position error of 8.2 mm. Qualitative and quantitative analyses consistently indicated that the proposed method offers enhanced accuracy and robustness across subjects compared to competing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Tailoring Multi-omics to Inflammatory Bowel Diseases: All for One and One for All.
- Author
-
Sudhakar, Padhmanand, Alsoud, Dahham, Wellens, Judith, Verstockt, Sare, Arnauts, Kaline, Verstockt, Bram, and Vermeire, Severine
- Abstract
Inflammatory bowel disease [IBD] has a multifactorial origin, arising from a complex interplay of environmental factors with the innate immune system at the intestinal epithelial interface in a genetically susceptible individual. All these factors make its aetiology intricate and largely unknown. Multi-omic datasets obtained from IBD patients are required to gain further insights into IBD biology. We here review the landscape of multi-omic data availability in IBD and identify barriers and gaps for future research. We also outline the various technical and non-technical factors that influence the utility and interpretability of multi-omic datasets, and thereby the study design of any research project generating such datasets. Coordinated generation of multi-omic datasets and their systematic integration with clinical phenotypes and environmental exposures will not only enhance understanding of the fundamental mechanisms of IBD but also improve therapeutic strategies. Finally, we provide recommendations to enable and facilitate the generation of multi-omic datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
6. Rapid outlier detection, model selection and variable selection using penalized likelihood estimation for general spatial models.
- Author
-
Song, Yunquan, Fang, Minglu, Wang, Yuanfeng, and Hou, Yiming
- Abstract
Outliers in a data set have a potential influence on statistical inference and can reveal useful information hidden in the data, so methodology for outlier detection and accommodation has always been an important topic in data analysis. For spatial data, their influence affects not only coefficient estimation but also model selection. Traditional methods carry out outlier detection, model selection and variable selection step by step, so data-processing efficiency is low. To further improve the efficiency and accuracy of data processing, we consider, based on the general spatial model, a technique that achieves outlier detection along with model and variable selection in one step. In the general spatial model, we add a mean-shift parameter for each data point to identify outliers. Penalized likelihood estimation (PLE) is proposed to simultaneously detect outliers and to select spatial models and explanatory variables for spatial data. This method correctly identifies multiple outliers, provides a proper spatial model, and corrects coefficient estimation without removing outliers in numerical simulation and case analysis. Compared to current methods, PLE detects outliers more quickly and solves the optimization problem of selecting spatial models and explanatory variables. Calculation is easy using the optimized solnp function in R software. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. A Generative Model for Anomaly Detection in Time Series Data.
- Author
-
Hoh, Maximilian, Schöttl, Alfred, Schaub, Henry, and Wenninger, Franz
- Subjects
ANOMALY detection (Computer security) ,TIME series analysis ,PROBABILISTIC generative models - Published
- 2022
- Full Text
- View/download PDF
8. Poisson reduced-rank models with sparse loadings.
- Author
-
Lee, Eun Ryung and Park, Seyoung
- Abstract
High-dimensional Poisson reduced-rank models have been considered for statistical inference on low-dimensional locations of individuals based on observations of high-dimensional count vectors. In this study, we assume sparsity on a so-called loading matrix to enhance its interpretability. The sparsity assumption leads to the use of an L1 penalty for the estimation of the loading. We provide novel computational and theoretical analyses for the corresponding penalized Poisson maximum likelihood estimation. We establish theoretical convergence rates for the parameters under weak-dependence conditions; this implies consistency even in large-dimensional problems. To implement the proposed method, which involves several computational issues including non-convex log-likelihoods, the L1 penalty, and orthogonality constraints, we develop an iterative algorithm. Further, we propose a Bayesian-Information-Criterion-based penalty parameter selection, which works well in practice. Some numerical evidence is provided by conducting real-data-based simulation analyses, and the proposed method is illustrated with an analysis of German party manifesto data. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
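The core ingredient of this entry, an L1-penalized Poisson likelihood, can be illustrated with a small proximal-gradient sketch. The paper additionally handles the reduced-rank structure and orthogonality constraints, which are omitted here; the step size, penalty level, and iteration count below are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||z||_1 (elementwise soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def poisson_lasso(X, y, lam=0.05, step=0.05, n_iter=3000):
    """l1-penalized Poisson regression fit by proximal gradient descent.

    Minimizes (1/n) * sum_i [exp(x_i'b) - y_i * x_i'b] + lam * ||b||_1,
    a one-dimensional analogue of the penalized Poisson likelihood used
    for sparse loadings in the paper.
    """
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ b)                  # Poisson mean for each count
        grad = X.T @ (mu - y) / len(y)      # gradient of the averaged NLL
        b = soft_threshold(b - step * grad, step * lam)
    return b
```

The soft-thresholding step is exactly where the L1 penalty induces sparsity: coefficients whose gradient signal stays below the threshold are driven to zero.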
9. Research Issues, Innovation and Associated Approaches for Recommendation on Social Networks.
- Author
-
Arora, Anuja and Taneja, Anu
- Subjects
SOCIAL networks, RECOMMENDER systems, SOCIAL media, INFORMATION overload, SOCIAL systems, FOLKSONOMIES - Abstract
Recommendation systems are well established to reduce the problem of information overload and have become one of the most valuable tools in different domains such as computer science, mathematics, and psychology. Despite their popularity and successful deployment in different commercial environments, this area is still exploratory due to the rapid development of social media, which has accelerated the development of social recommendation systems. This paper addresses the key motivation for social media sites to apply recommendation techniques, unique properties of social recommendation systems, classification of social recommendation systems on the basis of basic models, comparison with existing traditional recommender systems, and key findings from positive and negative experiences in applying social recommendation systems. Consequently, the aim of this paper is to provide research directions to improve the capability of social recommendation systems, including the heterogeneous nature of social networks, understanding the role of negative relations, cold-start problems, integrating cross-domain data, and applicability to a broader range of applications. This study will help researchers and academicians in planning future social recommendation studies for designing a unified and coherent social recommendation system. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
10. Efficient Dark Channel Prior Based Blind Image De-blurring.
- Author
-
AHMAD, Jawad, TOUQIR, Imran, and SIDDIQUI, Adil Masood
- Subjects
DISCRETE wavelet transforms, COMPUTATIONAL complexity, PIXELS - Abstract
Dark channel prior for blind image de-blurring has attained considerable attention in the recent past. An interesting observation of the blurring process is that the value of the dark channel increases after averaging with adjacent high-intensity pixels. L0 regularization is proposed to curtail the value of the dark channel. A half-quadratic splitting method is used to handle the non-convexity of the L0 regularization. Furthermore, the Discrete Wavelet Transform is applied prior to de-blurring to increase the efficiency of the algorithm. The most significant contribution of this paper is a universal blind image de-blurring algorithm with reduced computational complexity. Experiments are performed and their results are compared with state-of-the-art de-blurring methods to evaluate the performance of the algorithm. Experimental results also reveal that wavelet-based dark-channel-prior image de-blurring is efficient for both uniform and non-uniform blur. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
11. Support matrix machine with truncated pinball loss for classification.
- Author
-
Li, Huiyi and Xu, Yitian
- Subjects
CLASSIFICATION, QUANTILES, NOISE, PEDESTRIANS, ELECTROENCEPHALOGRAPHY, INPAINTING - Abstract
With the expansion of vector-based classifiers to matrix-based classifiers, noise insensitivity and sparsity have always been the focal points. The existing SMM and Pin-SMM enjoy the former and the latter separately. To remedy this shortcoming, we propose a support matrix machine based on the truncated pinball loss (TPin-SMM) in this paper, which integrates noise insensitivity and sparsity simultaneously. Thanks to the addition of two quantile parameters, it also possesses valuable properties, including consistency with the Bayes rule and a bound on the misclassification error. Concerning the non-convexity of TPin-SMM, a targeted CCCP-ADMM algorithm is established, which decomposes the problem into three sub-problems in each sub-iteration. To verify the validity of TPin-SMM, we conducted numerical experiments on image datasets, EEG signal sets and the Daimler Pedestrian Classification Benchmark dataset with different noises; all results pass the statistical tests. • An SMM based on the truncated pinball loss is proposed to effectively deal with matrix data. • TPin-SMM yields both noise insensitivity and sparsity during classification. • TPin-SMM obeys the Bayes rule and bounds the misclassification error. • A targeted CCCP-ADMM is established to solve the non-convex optimization problem. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
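The truncated pinball loss that gives TPin-SMM its mix of noise insensitivity and sparsity is simple to write down. The sketch below follows the usual truncated-pinball definition on the margin residual; the parameter names tau and s are ours, and the paper's exact parameterization may differ.

```python
import numpy as np

def truncated_pinball(u, tau=0.5, s=1.0):
    """Truncated pinball loss on the margin residual u = 1 - y*f(x).

    u > 0      : hinge-like penalty u (misclassified or inside margin)
    -s < u <= 0: pinball part -tau*u, which keeps penalizing points in a
                 band past the margin (this is the noise-insensitive part)
    u <= -s    : loss is capped at the constant tau*s, so points far on
                 the correct side stop contributing (restores sparsity)
    """
    u = np.asarray(u, dtype=float)
    return np.where(u > 0, u, np.where(u > -s, -tau * u, tau * s))
```

Setting tau = 0 recovers the plain hinge loss; the cap at tau*s is the non-convex piece that the paper's CCCP-ADMM scheme is designed to handle.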
12. Generalized sparse radial basis function networks for multi-classification problems.
- Author
-
Dai, Yunwei, Wu, Qingbiao, and Zhang, Yuao
- Subjects
RADIAL basis functions, MATRIX inversion, NETWORK performance - Abstract
Over the past decades, the radial basis function network (RBFN) has attracted extensive attention due to its simple network structure and powerful learning ability. Meanwhile, regularization methods have been intensively applied in RBFNs to enhance network performance. A common regularization method is ℓ2 regularization, which improves stability and generalization ability but leads to dense networks. Another common method is ℓ1 regularization, which can successfully improve the sparsity of an RBFN. A better strategy is the elastic-net regularization that combines both ℓ2 and ℓ1 regularization to improve stability and sparsity simultaneously. However, in multi-classification tasks, even the elastic-net regularization can only prune the redundant weights of nodes and cannot ensure sparsity at the node level. In this paper, we propose a generalized sparse RBFN (GS-RBFN) based on an extended elastic-net regularization to handle multi-classification problems. By using the extended elastic-net regularization that integrates the Frobenius norm and the L2,1 norm, we accomplish the stability and sparsity of the RBFN for multi-classification problems, of which the binary classification problem is only a special case. To improve the training efficiency on large-scale tasks, we further propose the parallel GS-RBFN (PGS-RBFN) with the matrix inversion lemma to accelerate the intensive computation. The alternating direction method of multipliers (ADMM) and its consensus variant are applied to train the proposed models, and we demonstrate their convergence in solving the corresponding optimization problems. Experimental results on multi-classification datasets illustrate the effectiveness and advantages of our algorithms in accuracy, sparsity and convergence. • We propose a sparse radial basis function network for the multi-classification case. • The parallel version with the matrix inversion lemma shortens the training time. • We prove that the solving methods converge to the globally optimal solution. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
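The node-level sparsity mechanism described in this entry rests on the L2,1 norm, which sums the l2 norms of the rows of the output-weight matrix; its proximal operator zeroes out whole rows at once, pruning entire nodes. A minimal sketch, where the row-grouping convention and the weights alpha, beta are illustrative assumptions:

```python
import numpy as np

def extended_elastic_net(W, alpha=1.0, beta=1.0):
    """Extended elastic-net penalty on an output-weight matrix W.

    Rows of W are assumed to collect all outgoing weights of one RBF
    node, so the L2,1 term (sum of row norms) can zero whole rows and
    prune nodes, while the Frobenius term keeps the solution stable.
    """
    l21 = np.sum(np.linalg.norm(W, axis=1))   # node-level sparsity term
    fro = np.linalg.norm(W, 'fro') ** 2       # ridge-like stability term
    return alpha * l21 + beta * fro

def prox_l21(W, t):
    """Proximal operator of t * L2,1: row-wise group soft thresholding."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return W * scale   # rows with norm <= t are set entirely to zero
```

This group-thresholding step is the reason the extended penalty prunes nodes rather than individual weights, unlike the plain elastic net.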
13. Quaternion tensor completion with sparseness for color video recovery.
- Author
-
Yang, Liqiao, Kou, Kit Ian, Miao, Jifei, Liu, Yang, and Hoi, Pui Man
- Subjects
QUATERNIONS, INPAINTING, LOGARITHMIC functions, DISCRETE cosine transforms - Abstract
A novel low-rank completion algorithm based on the quaternion tensor is proposed in this paper. This approach uses the TQt-rank of the quaternion tensor to maintain the structure of the RGB channels throughout the entire process. In more detail, the pixels in each frame are encoded on the three imaginary parts of a quaternion as elements of a quaternion matrix, and the quaternion matrices are then stacked into a quaternion tensor. A logarithmic function and the truncated nuclear norm are employed to characterize the rank of the quaternion tensor in order to promote its low-rankness. Moreover, by introducing a newly defined quaternion tensor discrete cosine transform-based (QTDCT) regularization into the low-rank approximation framework, optimized recovery of the local details of color videos can be obtained. In particular, the sparsity of the quaternion tensor is characterized by the l1 norm in the QTDCT domain. This strategy is optimized via a two-step alternating direction method of multipliers (ADMM) framework with convergence analysis. Numerical experimental results for recovering color videos show the obvious advantage of the proposed method over other potential competing approaches. • A novel quaternion tensor completion model is developed in this work. • A sparse regularization of the target quaternion tensor is designed. • The algorithm is optimized via an ADMM-based framework, and a convergence analysis is presented. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Efficient Training of Multi-Layer Neural Networks to Achieve Faster Validation.
- Author
-
Assiri, Adel Saad
- Subjects
ARTIFICIAL neural networks, ARTIFICIAL intelligence, BIOLOGICAL neural networks, NEURONS, ACCURACY - Abstract
Artificial neural networks (ANNs) are one of the hottest topics in computer science and artificial intelligence due to their potential and advantages in analyzing real-world problems in various disciplines, including but not limited to physics, biology, chemistry, and engineering. However, ANNs lack several key characteristics of biological neural networks, such as sparsity, scale-freeness, and small-worldness. The concept of sparse and scale-free neural networks has been introduced to fill this gap. Network sparsity is implemented by removing weak weights between neurons during the learning process and replacing them with random weights. When the network is initialized, the neural network is fully connected, which means the number of weights is four times the number of neurons. In this study, considering that a biological neural network has some degree of initial sparsity, we design an ANN with a prescribed level of initial sparsity. The neural network is tested on handwritten digits, Arabic characters, CIFAR-10, and Reuters newswire topics. Simulations show that it is possible to reduce the number of weights by up to 50% without losing prediction accuracy. Moreover, in both cases, the testing time is dramatically reduced compared with fully connected ANNs. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
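The prune-weak-weights-and-replace-with-random mechanism this entry describes (as in sparse evolutionary training schemes) can be sketched for a single weight matrix. The rewiring fraction, initialization scale, and schedule below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def prune_and_regrow(W, frac=0.3, rng=None):
    """One sparsity-maintaining rewiring step for a weight matrix W.

    Removes the fraction `frac` of the smallest-magnitude nonzero
    weights and regrows the same number of connections at random zero
    positions (possibly including just-pruned slots), so the overall
    number of connections stays constant.
    """
    rng = np.random.default_rng(rng)
    W = W.copy()
    nz = np.flatnonzero(W)
    k = int(frac * nz.size)
    if k == 0:
        return W
    weakest = nz[np.argsort(np.abs(W.flat[nz]))[:k]]   # smallest |w|
    W.flat[weakest] = 0.0                              # prune weak links
    zeros = np.flatnonzero(W == 0)
    grown = rng.choice(zeros, size=k, replace=False)   # regrow elsewhere
    W.flat[grown] = 0.01 * rng.standard_normal(k)      # small random init
    return W
```

Keeping the connection count fixed is what lets such networks train and test faster than fully connected ANNs at a prescribed sparsity level.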
15. A sparse pair-preserving centroid-based supervised learning method for high-dimensional biomedical data or images.
- Author
-
Kalina, Jan and Matonoha, Ctirad
- Subjects
SUPERVISED learning, PERFORMANCE standards, CASE-control method, GENE expression, CENTROID - Abstract
In various biomedical applications designed to compare two groups (e.g. patients and controls in matched case-control studies), it is often desirable to perform a dimensionality reduction in order to learn a classification rule over high-dimensional data. This paper considers a centroid-based classification method for paired data, which at the same time performs a supervised variable selection respecting the matched pairs design. We propose an algorithm for optimizing the centroid (prototype, template). A subsequent optimization of weights for the centroid ensures sparsity, robustness to outliers, and clear interpretation of the contribution of individual variables to the classification task. We apply the method to a simulated matched case-control study dataset, to a gene expression study of acute myocardial infarction, and to mouth localization in 2D facial images. The novel approach yields a comparable performance with standard classifiers and outperforms them if the data are contaminated by outliers; this robustness makes the method relevant for genomic, metabolomic or proteomic high-dimensional data (in matched case-control studies) or medical diagnostics based on images, as (excessive) noise and contamination are ubiquitous in biomedical measurements. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
16. Application of sparsity-oriented VMD for gearbox fault diagnosis based on built-in encoder information.
- Author
-
Miao, Yonghao, Zhao, Ming, Yi, Yinggang, and Lin, Jing
- Subjects
GEARBOXES, DECOMPOSITION method, FAULT location (Engineering), DATA mining, VIDEO coding - Abstract
The encoder signal, as built-in information, is widely used for speed and motion control. Meanwhile, it has remarkable superiority in gearbox fault diagnosis compared with the popular vibration signal. Traditional decomposition methods, such as EMD, gradually lose competitiveness as the complexity of the encoder signal increases. To solve this problem, with the aid of the unique characteristics of the encoder signal and the decomposition performance of variational mode decomposition (VMD), a new sparsity-oriented VMD (SOVMD) is designed and introduced for encoder signal analysis in this paper. Firstly, SOVMD is free from the selection of the mode number and the initial center frequency (ICF), which seriously hampers the application of VMD. Since a prior ICF that coarsely indicates the location of the fault band can enhance the decomposition efficiency of VMD, ICF = 0 is more appropriate and easier for the extraction of fault information concentrated in the low-frequency region. Benefiting from this distribution characteristic, optimization of the mode number is unnecessary, since the fault mode will appear in the first mode. Secondly, with the proposed selection criterion for the balance parameter, SOVMD can decompose the mode with the most fault information more effectively and accurately. Furthermore, a sparsity operation originally designed for encoder signal analysis can further suppress noise and enhance the fault impulses. Through simulation and experimental cases from a planetary gearbox bench, the feasibility and effectiveness of SOVMD are verified. Therefore, it is reasonable to conclude that the proposed SOVMD is an alternative scheme for gearbox fault diagnosis based on built-in encoder information. • This paper designs SOVMD for gearbox fault diagnosis based on built-in encoder information, providing an alternative scheme for encoder signal analysis. • With the fewest input parameters, the proposed SOVMD is more suitable for encoder analysis than traditional decomposition methods. • The sparsity operation is introduced to significantly improve the denoising ability of VMD, which broadens the application range of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
17. Sparse FIR Filter Design Based on Signomial Programming.
- Author
-
Bellotti, Maja Jurisic and Vucic, Mladen
- Subjects
FINITE impulse response filters, IMPULSE response, DIGITAL filters (Mathematics) - Abstract
The goal of sparse FIR filter design is to minimize the number of nonzero filter coefficients while keeping the frequency response within specified boundaries. Such a design can be formally expressed via minimization of the l0-norm of the filter's impulse response. Unfortunately, the corresponding minimization problem has combinatorial complexity. Therefore, many design methods have been developed that either solve the problem approximately or solve an approximate problem exactly. In this paper, we propose an approach based on the approximation of the l0-norm by an lp-norm with 0 < p < 1. We minimize the lp-norm using a recently developed method for signomial programming (SGP). Our design starts with forming an SGP problem that describes the filter specifications. The optimum solution of the problem is then found by an iterative procedure, which solves a geometric program in each iteration. Filters whose magnitude responses are constrained in the minimax sense are considered. Design examples are provided, illustrating that the proposed method, in most cases, results in filters with higher sparsity than those obtained by recently published methods. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
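The l0-to-lp relaxation this design rests on is easy to see numerically: for small p, the sum of |h_i|^p approaches the count of nonzero coefficients while remaining a continuous (though non-convex) function that signomial programming can handle. A minimal illustration:

```python
import numpy as np

def lp_norm_p(h, p):
    """Surrogate ||h||_p^p = sum_i |h_i|^p, used in place of the l0 count.

    As p -> 0, each nonzero term |h_i|^p -> 1 and each zero term stays 0,
    so the sum tends to the number of nonzero coefficients.
    """
    h = np.asarray(h, dtype=float)
    return float(np.sum(np.abs(h) ** p))
```

For an impulse response like [1, -2, 0, 0.5, 0] the l0 count is 3, and with p = 0.1 the surrogate is already within a fraction of a percent of it.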
18. Warm Start of Multi-channel Weighted Nuclear Norm Minimization for Color Image Denoising.
- Author
-
Xue Guo, Feng Liu, Yiting Chen, and Xuetao Tian
- Subjects
SPATIAL filters, COLORS - Abstract
The multi-channel weighted nuclear norm minimization (MCWNNM) performs excellently on color image denoising. Can we further improve the effectiveness of MCWNNM? To answer this question, we propose a warm start of multi-channel weighted nuclear norm minimization (WS-MCWNNM) for color image denoising. In WS-MCWNNM, spatial filtering is utilized to preprocess the noisy image, and the result of the preprocessing is treated as the input of the MCWNNM model to remove noise. Experimental results show that the proposed WS-MCWNNM outperforms MCWNNM in both peak signal-to-noise ratio and structural similarity. [ABSTRACT FROM AUTHOR]
- Published
- 2019
19. A Comparison Evaluation of Demographic and Contextual Information of Movies using Tensor Factorization Model.
- Author
-
Taneja, Anu and Arora, Anuja
- Subjects
INFORMATION modeling, MOTION pictures - Abstract
Recommendation systems have garnered massive attention due to the fast and eruptive expansion of information on the internet. Traditionally, recommendation systems recommend products based only on rating criteria, but nowadays users expect suggestions in accordance with their requirements and may have varying preferences in different circumstances. Thus, this work presents an innovative framework that considers additional information beyond ratings: demographic details, and the situations under which users interact with the system, known as contextual information. This additional information is modelled as varying dimensions of a tensor factorization model. The main motive of this study is to determine which dimensions are more influential, demographic or contextual, and it is observed that contextual dimensions are more influential than demographic dimensions. The results validate that the usage of contextual dimensions mitigates the sparsity and cold-start problems by 16% and 22% respectively, in comparison to demographic information. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
20. The recommendation service of the shareholding for fund companies based on improved collaborative filtering method.
- Author
-
Cui, Beiliang, Feng, Hao, Li, Shuqing, and Liu, Lu
- Subjects
RECOMMENDER systems, FINANCE, BUSINESS enterprises - Abstract
Through effective processing of a financial dataset of listed companies and improvement of the traditional collaborative filtering algorithm, this paper proposes a recommendation service that can predict the shareholding proportion of listed companies more accurately, in order to help fund companies make reasonable decisions when investing in listed companies. This service includes two important methods. To deal with the sparsity of the data, one solution is to integrate the similarity of user ratings and the similarity of common-selected items into the collaborative filtering algorithm. The common-selected-item relationship defined in this paper fully considers users' commonly scored items: the greater the relationship value between two users, the more similar they are. The other solution mainly applies to new companies with very few indicators. The approach proposed in this paper is mainly based on the similarity between items, and generates a different filling value for each unrated item based on re-filling with an item-based collaborative filtering algorithm. From the perspective of the recommendation method itself, the method proposed in this paper is superior to the traditional standard collaborative filtering algorithm and the SVD algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
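The blend of rating similarity with a common-selected-item relationship described in this entry can be sketched as a hybrid user-similarity function. The Pearson/Jaccard choices and the weight w below are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def hybrid_similarity(ru, rv, w=0.5):
    """Blend rating similarity with a common-selection similarity.

    ru, rv : rating vectors of two users (0 = unrated).
    Rating similarity: Pearson correlation on co-rated items.
    Common-selection similarity: Jaccard overlap of the rated-item sets,
    which stays informative even when the rating matrix is sparse.
    """
    ru, rv = np.asarray(ru, float), np.asarray(rv, float)
    mask = (ru > 0) & (rv > 0)                 # items rated by both users
    if mask.sum() >= 2 and ru[mask].std() > 0 and rv[mask].std() > 0:
        rating_sim = float(np.corrcoef(ru[mask], rv[mask])[0, 1])
    else:
        rating_sim = 0.0                       # too few co-rated items
    rated_union = np.sum((ru > 0) | (rv > 0))
    common = mask.sum() / rated_union if rated_union else 0.0
    return w * rating_sim + (1 - w) * common
```

The common-selection term keeps two users comparable even when the rating-based term is undefined or unreliable, which is how such blends mitigate data sparsity.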
21. TOWARDS REFINING UNRATED AND UNINTERESTED ITEMS FOR EFFECTIVE COLLABORATIVE FILTERING RECOMMENDATIONS.
- Author
-
Almu, Abba, Roko, Abubakar, Mohammed, Aminu, and Sa’idu, Ibrahim
- Subjects
PUNISHMENT, RECOMMENDER systems, LIBRARY catalogs - Abstract
Collaborative filtering recommender systems, being the most successful and widely used, play an important role in providing suggestions or recommendations to users for items of interest. However, many of these systems recommend items to individual users based on ratings, which may not work well when the ratings are insufficient, due to the following problems: uninterested popular items already known to the users may be predicted because of the penalty function employed to punish those items; the sparsity of the user-item rating matrix increases, making it difficult to provide accurate recommendations; and users' general preferences on the recommended items, whether they are of interest or not, are ignored. Therefore, uninterested items can often be found in an individual user's recommendation lists, and the user will lose interest in the recommendations if these uninterested predicted items keep appearing in the lists. In this paper, we propose a collaborative filtering recommendation refinement framework that combines solutions to these three identified problems. The framework incorporates a popularise similarity function to reduce the influence of popular items during recommendation, an algorithm to fill in the missing ratings of unwanted recommendations in the user-item rating matrix, thereby reducing the sparsity problem, and finally an algorithm to solicit user feedback on the recommended items to minimise uninterested recommendations. [ABSTRACT FROM AUTHOR]
- Published
- 2019
22. Comment: Empirical Bayes, Compound Decisions and Exchangeability.
- Author
-
Greenshtein, Eitan and Ritov, Ya’acov
- Abstract
We present some personal reflections on empirical Bayes/compound decision (EB/CD) theory following Efron (2019). In particular, we consider the role of exchangeability in EB/CD theory and how it can be achieved when there are covariates. We also discuss the interpretation of EB/CD confidence intervals, the theoretical efficiency of the CD procedure, and the impact of sparsity assumptions. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
23. Investigation and comparison of ECG signal sparsity and features variations due to pre-processing steps.
- Author
-
Khorasani, S. Monem, Hodtani, G.A., and Kakhki, M. Molavi
- Subjects
ELECTROCARDIOGRAPHY ,WAVELET transforms ,DATA transmission systems ,INFORMATION-theoretic security ,COMPRESSED sensing - Abstract
Abstract Pre-processing steps such as filtering, differentiation, and wavelet transforms are necessary in many applications before data transmission, especially in telemedicine; however, pre-processing alters the signal's sparsity, entropy, and compression metrics. In this paper, aiming at an information-theoretic study, we exemplify pre-processing with Savitzky-Golay filtering because of its special properties, and then show that (i) adding noise to an ECG signal decreases its sparsity and increases the diversity index named Gini-Simpson, a special case of Tsallis entropy; (ii) the sparsity of the filtered and wavelet-transformed ECG is increased; (iii) the Gini index of the modified signal is not more than that of the original one, but the number of non-zero elements is decreased; and (iv) compression metrics such as PRD and CR are improved if compressed sensing is performed on the filtered signal. Finally, the theoretical claims are validated by numerical results. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
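The Gini index the abstract uses as a sparsity measure has a simple closed form (Hurley and Rickard's definition is assumed here): sort the magnitudes, then weight each by its rank. The snippet below illustrates the paper's claim (i) that adding noise to a sparse signal lowers its Gini index; the signal shape and noise level are illustrative choices.

```python
import numpy as np

def gini_index(x):
    """Gini sparsity index: close to 1 for a maximally sparse vector,
    0 for a vector of uniform magnitudes (Hurley-Rickard definition)."""
    c = np.sort(np.abs(x))               # ascending magnitudes
    N = c.size
    k = np.arange(1, N + 1)
    return 1 - 2 * np.sum((c / c.sum()) * (N - k + 0.5) / N)

rng = np.random.default_rng(0)
clean = np.zeros(1000)
clean[::100] = 1.0                       # sparse, "peaky" toy signal
noisy = clean + 0.1 * rng.standard_normal(1000)
g_clean, g_noisy = gini_index(clean), gini_index(noisy)
```

With 10 equal spikes out of 1000 samples the index is 0.99; any additive noise spreads magnitude over all samples and pushes it down.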
24. Joint power line interference suppression and ECG signal recovery in transform domains.
- Author
-
Liu, Hongqing, Li, Yong, Zhou, Yi, Jing, Xiaorong, and Truong, Trieu-Kien
- Subjects
ELECTROENCEPHALOGRAPHY ,MATHEMATICAL domains ,ELECTRIC interference ,BASIS pursuit ,MATHEMATICAL optimization - Abstract
This work addresses the electrocardiogram (ECG) recovery problem in the presence of power line interference (PLI) that corrupts the signal quality if it is not effectively suppressed. In this paper, the PLI is modeled as a linear superposition of sinusoidal signals, which has a sparse representation in the frequency domain. To accurately reconstruct the ECG, the time, second-order difference, and wavelet domains are exploited to sparsely represent the ECG. From the reformulations conducted, a novel joint optimization estimation is devised to simultaneously perform the ECG recovery and PLI suppression in the transform domains. Moreover, in order to solve the optimization problem, two efficient schemes based on the greedy algorithm together with the basis pursuit (BP) are developed. Finally, numerical studies demonstrate that the performance of the joint estimation algorithm is superior to the state-of-the-art approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
25. Exploiting Spatial Sparsity in Vibration-based Damage Detection.
- Author
-
Smith, Chandler B. and Hernandez, Eric M.
- Subjects
STRUCTURAL health monitoring ,IMPULSE response ,FINITE element method ,VIBRATION measurements ,STRUCTURAL dynamics - Abstract
One of the main limitations traditionally encountered in vibration-based structural health monitoring (SHM) is detecting, localizing and quantifying localized damage using global response measurements. This paper presents an impulse response sensitivity approach enhanced with LASSO regularization in order to detect spatially sparse (localized) damage. The analytical expression for the impulse response sensitivity was derived using Vetter calculus. The proposed algorithm exploits the fact that when damage is sparse, an l1-norm regularization is more suitable than the common least-squares (l2-norm) minimization. The proposed methodology is successfully applied to a simulated 21-degree-of-freedom non-uniform shear beam with noise-contaminated measurements, limited modal parameters, and single-input single-output and single-input two-output systems. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
26. Sparse models and recursive computations for determining arterial dynamics.
- Author
-
Ganesh, Thendral, Joseph, Jayaraj, Bhikkaji, Bharath, and Sivaprakasam, Mohanasankar
- Subjects
HEART beat ,DIAGNOSTIC imaging ,GAUSSIAN distribution ,IMPULSE response ,ARTERIES - Abstract
Arteries expand and contract in every cardiac cycle, and the arteries of a healthy individual are elastic; increased arterial stiffness is an established marker of vascular health. An estimate of this stiffness may be obtained by measuring the diameter of the Common Carotid Artery (CCA) in each cardiac cycle, typically using image-based systems. ARTSENS (developed at the Healthcare Technology Innovation Centre) is a portable, image-free ultrasound device for evaluating the stiffness of the CCA. ARTSENS emits a sequence of ultrasound pulses and records the reflected echoes, which are then used to identify the CCA, estimate its diameter, and thereby evaluate the arterial stiffness. This paper deals with the development of algorithms for determining the echoes due to the CCA and estimating its diameter. The propagation path of each ultrasound pulse is modeled as an FIR filter, with the Gaussian-modulated sine (GMS) pulse as the input and its reflections from the artery walls and other anatomical structures as the output. The impulse response of the FIR filter is sparse, since the output contains only a few significant echoes. The echoes are reconstructed using the estimated filter coefficients, and the reconstructed signal is observed to be noise free, which enables reliable tracking of the artery walls and evaluation of the lumen (inner) diameter. The filter coefficients (impulse response) are first determined using Matching Pursuit (MP) algorithms, which are then made recursive to enable online filtering of the data. The inner diameter of the CCA was calculated for twenty-seven subjects using the reconstructed (filtered) data; the estimated diameters were found to closely match those obtained from a B-mode imaging system. Furthermore, for a subject, only the non-zero impulse-response coefficients and their sample numbers need to be stored to recover the filtered echoes, leading to significant data compression. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
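The sparse-FIR echo model in the abstract can be sketched with plain matching pursuit on a dictionary of shifted pulses. The pulse shape, delays, and amplitudes below are toy assumptions; the paper's recursive online variant is not reproduced.

```python
import numpy as np

def matching_pursuit(D, y, n_atoms):
    """Greedy matching pursuit: repeatedly pick the dictionary column most
    correlated with the residual and subtract its contribution."""
    r = y.astype(float)
    coefs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ r
        k = np.argmax(np.abs(corr))
        a = corr[k] / (D[:, k] @ D[:, k])   # least-squares coefficient
        coefs[k] += a
        r = r - a * D[:, k]
    return coefs, r

# toy echo model: a Gaussian-modulated sine pulse reflected at two delays,
# i.e. an FIR impulse response with two dominant taps
n = 256
t = np.arange(64)
pulse = np.exp(-((t - 32) / 8.0) ** 2) * np.sin(2 * np.pi * 0.2 * t)
D = np.zeros((n, n - 64))                   # columns = shifted pulses
for k in range(n - 64):
    D[k:k + 64, k] = pulse
h_true = np.zeros(n - 64)
h_true[[20, 120]] = [1.0, 0.6]              # two echo taps (e.g. artery walls)
y = D @ h_true

h_est, res = matching_pursuit(D, y, n_atoms=2)
```

Because the two shifted pulses do not overlap here, two MP iterations recover both tap positions and amplitudes exactly; with overlapping echoes more iterations (or orthogonal MP) would be needed.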
27. MULTISCALE SPARSE APPEARANCE MODELING AND SIMULATION OF PATHOLOGICAL DEFORMATIONS.
- Author
-
Zewail, Rami and Hag-ElSafi, Ahmed
- Subjects
MACHINE learning ,MULTISCALE modeling ,DIAGNOSTIC imaging - Abstract
Machine learning and statistical modeling techniques have drawn much interest within the medical imaging research community. However, clinically relevant modeling of anatomical structures remains a challenging task. This paper presents a novel method for multiscale sparse appearance modeling in medical images, with application to the simulation of pathological deformations in X-ray images of the human spine. The proposed appearance model benefits from the non-linear approximation power of Contourlets and their ability to capture higher-order singularities, achieving a sparse representation while preserving the accuracy of the statistical model. Independent Component Analysis is used to extract statistically independent modes of variation from the sparse Contourlet-based domain. The new model is then used to simulate clinically relevant pathological deformations in radiographic images. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
28. A novel fuzzy based approach for multiple target detection in MIMO radar.
- Author
-
Vignesh, G.J., Vikranth, S., and Ramanathan, R
- Subjects
MIMO radar ,MIMO systems ,FUZZY systems ,SIGNAL-to-noise ratio ,COMPUTATIONAL complexity ,FALSE alarms - Abstract
This paper deals with the problem of multiple-target detection in MIMO radar systems. We propose a novel fuzzy-based approach for detecting multiple targets when conventional compressive sensing algorithms fall short. Interpretability, ease of modeling and implementation, and a limited training-data requirement are among the benefits of the fuzzy-logic based approach. The variation of the probability of detection, the probability of false alarm, and the mean squared error with the signal-to-noise ratio is studied, and the time complexity of the proposed fuzzy-based approach is measured. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
29. Sparse Locally Linear Embedding.
- Author
-
Ziegelmeier, Lori, Kirby, Michael, and Peterson, Chris
- Subjects
COLOR image processing ,COMPUTER algorithms ,NEAREST neighbor analysis (Statistics) ,MACHINE learning ,ROBUST control - Abstract
The Locally Linear Embedding (LLE) algorithm has proven useful for determining structure-preserving, dimension-reducing mappings of data on manifolds. We propose a modification to the LLE optimization problem that minimizes the number of neighbors required to represent each data point. The algorithm is shown to be robust over wide ranges of the sparsity parameter, producing an average number of nearest neighbors consistent with the best-performing parameter selection for LLE. Because the number of non-zero weights may be substantially reduced in comparison to LLE, Sparse LLE can be applied to larger data sets. We provide three numerical examples, including a color image, the standard swiss roll, and a gene expression data set, to illustrate the behavior of the method in comparison to LLE. The resulting algorithm produces comparatively sparse representations that preserve the neighborhood geometry of the data in the spirit of LLE. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
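For context, the weight-computation step that Sparse LLE modifies can be shown in its classical form: each point is reconstructed as an affine combination of its neighbors via the local Gram matrix. The sparse variant in the abstract adds an l1-type penalty on these weights to shrink the effective neighborhood; only the classical closed-form step is sketched here, with illustrative data.

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Standard LLE reconstruction weights for one point: minimize
    ||x - sum_j w_j n_j||^2 subject to sum_j w_j = 1, solved in closed
    form via the (regularized) local Gram matrix."""
    Z = neighbors - x                     # shift neighbors to the query point
    G = Z @ Z.T                           # local Gram matrix
    G = G + reg * np.trace(G) * np.eye(G.shape[0])  # regularize for stability
    w = np.linalg.solve(G, np.ones(G.shape[0]))
    return w / w.sum()                    # enforce the sum-to-one constraint

rng = np.random.default_rng(2)
x = np.array([0.5, 0.5])
N = rng.standard_normal((5, 2))           # 5 toy neighbors in 2-D
w = lle_weights(x, N)
```

Sparse LLE would replace this unconstrained quadratic solve with a penalized one whose solution has many exactly-zero weights, so fewer neighbors contribute per point.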
30. Import Vector Machine Based Hyperspectral Imagery Classification.
- Author
-
Wei, Xiangpo, Yu, Xuchu, Zhang, Pengqiang, Yu, Anzhu, and Li, Runsheng
- Subjects
SUPPORT vector machines ,HYPERSPECTRAL imaging systems ,KERNEL functions ,ALGORITHMS ,REGRESSION analysis - Abstract
Some deficiencies still exist in support vector machine (SVM) based classification: model training takes a long time, and the number of support vectors changes with the number of training samples, resulting in poor stability and sparsity. In this paper, we describe a novel import vector machine (IVM) based approach that achieves sparsity and improves the efficiency and accuracy of hyperspectral imagery classification. On the basis of a kernel logistic regression model, the IVM uses a greedy forward algorithm to choose import vectors from the training samples for model training. The proposed approach is tested on PHI data, and a performance comparison shows better stability and stronger sparsity than SVM-based classification. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
31. An Adaptive Image Compressed Sensing Algorithm Based on Sparsity Fitting.
- Author
-
王晓华, 许雪, 王卫江, and 高东红
- Subjects
IMAGE compression standards ,IMAGE reconstruction ,SIGNAL-to-noise ratio ,IMAGE reconstruction algorithms ,ADAPTIVE sampling (Statistics) ,MATHEMATICAL models - Abstract
Copyright of Transactions of Beijing Institute of Technology is the property of Beijing University of Technology and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2017
- Full Text
- View/download PDF
32. Exploring New Vista of intelligent collaborative filtering: A restaurant recommendation paradigm.
- Author
-
Roy, Arup, Banerjee, Soumya, Sarkar, Manash, Darwish, Ashraf, Elhoseny, Mohamed, and Hassanien, Aboul Ella
- Subjects
ARTIFICIAL intelligence ,RECOMMENDER systems ,PRODUCTION scheduling ,ALGORITHMS ,PROBLEM solving - Abstract
Owing to busy schedules, people depend heavily on various kinds of online recommendations to use their time well. Collaborative filtering is widely used as the recommendation tool in the majority of commercial recommenders. However, its outcome is often jeopardized by the sparsity, cold-start, and grey-sheep problems. To solve these issues more efficiently, a novel collaborative filtering algorithm, Altered Client-based Collaborative Filtering (ACCF), is proposed for group recommendation. ACCF employs the Dragonfly Algorithm to deal with sparsity and neighbor selection. A restaurant recommendation system is used as a test bed for validating ACCF. For performance assessment, a comparative study shows that the proposed algorithm successfully mitigates the sparsity problem. The experimental outcomes show that ACCF provides 37%, 59%, and 53% more Coverage, Precision, and F-Measure, respectively, than user-based collaborative filtering, even for a small data sample. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
33. Brain Imaging using Compressive Sensing.
- Author
-
Noore, Zahra
- Subjects
APPLIED mathematics ,BRAIN imaging ,COMPRESSED sensing ,IMAGE processing ,PERIODICALS - Abstract
Compressive sensing is an efficient way to represent a signal with fewer samples. Shannon's theorem, which states that the sampling rate must be at least twice the maximum frequency present in the signal (the so-called Nyquist rate), is the conventional approach to sampling signals or images. Compressive sensing shows that signals can be sensed or recovered from less data than Shannon's theorem requires. This paper presents a brief historical background, the mathematical foundation and theory behind compressive sensing, and its emerging applications, with special emphasis on communication, network design, signal processing, and image processing. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
34. Dynamic Group Recommendation with Modified Collaborative Filtering and Temporal Factor.
- Author
-
Jinpeng Chen, Yu Liu, and Deyi Li
- Subjects
RECOMMENDER systems ,INFORMATION filtering systems ,COMPUTER users ,PERFORMANCE evaluation ,INFORMATION theory ,COLLECTIVE action - Abstract
Group recommendation, which provides a group of users with information items, has become increasingly important in both the workplace and people's social activities. Because users change their preferences or interests over time, the dynamics and diversity of group members pose a challenging problem for group recommendation. In this article, we introduce a novel group recommendation method that fuses a modified collaborative filtering methodology with a temporal factor in order to solve the dynamics problem. We also put forward a new method for alleviating the sparsity problem so as to improve recommendation accuracy. We have tested our method in the music recommendation domain. Experimental results indicate that the proposed group recommender provides better performance than the original method and gRecs, and efficiency and scalability tests show that our method is practical. [ABSTRACT FROM AUTHOR]
- Published
- 2016
35. A fast algorithm for sparse multichannel blind deconvolution.
- Author
-
Nose-Filho, Kenji, Takahata, André K., Lopes, Renato, and Romano, João M. T.
- Subjects
DECONVOLUTION (Mathematics) ,BAYESIAN analysis ,COMPUTATIONAL complexity ,MINIMUM entropy method ,MATHEMATICAL regularization - Abstract
We have addressed blind deconvolution in a multichannel framework. Recently, a robust solution to this problem based on a Bayesian approach called sparse multichannel blind deconvolution (SMBD) was proposed in the literature with interesting results. However, its computational complexity can be high. We have proposed a fast algorithm based on minimum entropy deconvolution, which is considerably less expensive. We designed the deconvolution filter to minimize a normalized version of the hybrid l1/l2-norm loss function. This is in contrast to SMBD, in which the hybrid l1/l2-norm function is used as a regularization term to directly determine the deconvolved signal. Results with synthetic data determined that the performance of the obtained deconvolution filter was similar to the one obtained in a supervised framework. Similar results were also obtained in a real marine data set for both techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
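The normalized l1/l2 measure that the deconvolution filter above minimizes is easy to state concretely: it is small for spiky (sparse) outputs and large for flat ones, which is why minimizing it drives the filter output toward a sparse reflectivity. The signals below are toy illustrations; the filter design itself is not reproduced.

```python
import numpy as np

def l1_l2_ratio(x):
    """Normalized l1/l2 ratio: equals 1 for a 1-sparse vector and
    sqrt(N) for a flat vector of length N, so smaller means sparser."""
    return np.abs(x).sum() / np.linalg.norm(x)

spiky = np.zeros(100)
spiky[7] = 3.0            # idealized sparse reflectivity: one spike
flat = np.ones(100)       # maximally non-sparse signal of the same length
```

A minimum-entropy-style scheme would search over filter coefficients for the output minimizing this ratio, rather than using it as a regularizer as SMBD does.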
36. Columnar Machine: Fast estimation of structured sparse codes.
- Author
-
Lőrincz, András, Milacski, Zoltán Á., Pintér, Balázs, and Verő, Anita L.
- Abstract
Ever since the discovery of columnar structures, their function has remained enigmatic. As a potential explanation for this puzzling function, we introduce the 'Columnar Machine'. We join two neural network types into one architecture: Structured Sparse Coding (SSC), a generative model exploiting sparse groups of neurons, and Feed-Forward Networks (FFNs). Memories supporting recognition can be quickly loaded into SSC via supervision or can be learned by SSC in a self-organized manner. However, SSC evaluation is slow. We train FFNs to predict the sparse groups, and the representation is then computed by fast undercomplete methods. This two-step procedure enables fast estimation of the overcomplete group-sparse representations. The suggested architecture works fast and is biologically plausible. Beyond the function of the minicolumnar structure, it may shed light on the role of fast feed-forward inhibitory thalamocortical channels and cortico-cortical feedback connections. We demonstrate the method on natural image sequences, where we exploit temporal structure, and on a cognitive task, where we explain the meaning of unknown words from their contexts. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
37. Forward selection and estimation in high dimensional single index models.
- Author
-
Luo, Shikai and Ghosal, Subhashis
- Abstract
We propose a new variable selection and estimation technique for high dimensional single index models with unknown monotone smooth link function. Among many predictors, typically, only a small fraction of them have significant impact on prediction. In such a situation, more interpretable models with better prediction accuracy can be obtained by variable selection. In this article, we propose a new penalized forward selection technique which can reduce high dimensional optimization problems to several one dimensional optimization problems by choosing the best predictor and then iterating the selection steps until convergence. The advantage of optimizing in one dimension is that the location of optimum solution can be obtained with an intelligent search by exploiting smoothness of the criterion function. Moreover, these one dimensional optimization problems can be solved in parallel to reduce computing time nearly to the level of the one-predictor problem. Numerical comparison with the LASSO and the shrinkage sliced inverse regression shows very promising performance of our proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
38. Proximal Algorithms in Statistics and Machine Learning.
- Author
-
Polson, Nicholas G., Scott, James G., and Willard, Brandon T.
- Subjects
PROXIMITY spaces ,MATHEMATICAL optimization ,MACHINE learning ,MATHEMATICAL models - Abstract
Proximal algorithms are useful for obtaining solutions to difficult optimization problems, especially those involving nonsmooth or composite objective functions. A proximal algorithm is one whose basic iterations involve the proximal operator of some function, whose evaluation requires solving a specific optimization problem that is typically easier than the original problem. Many familiar algorithms can be cast in this form, and this "proximal view" turns out to provide a set of broad organizing principles for many algorithms useful in statistics and machine learning. In this paper, we show how a number of recent advances in this area can inform modern statistical practice. We focus on several main themes: (1) variable splitting strategies and the augmented Lagrangian; (2) the broad utility of envelope (or variational) representations of objective functions; (3) proximal algorithms for composite objective functions; and (4) the surprisingly large number of functions for which there are closed-form solutions of proximal operators. We illustrate our methodology with regularized Logistic and Poisson regression incorporating a nonconvex bridge penalty and a fused lasso penalty. We also discuss several related issues, including the convergence of nondescent algorithms, acceleration and optimization for nonconvex functions. Finally, we provide directions for future research in this exciting area at the intersection of statistics and optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
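The "closed-form proximal operator" theme in the abstract above can be made concrete with the best-known example, soft-thresholding for the l1 norm, plugged into a generic proximal-gradient loop. The simplest l1-penalized least-squares case is shown; the paper's logistic/Poisson and bridge/fused-lasso variants are not reproduced, and the problem sizes are illustrative.

```python
import numpy as np

def prox_l1(v, t):
    """Closed-form proximal operator of t*||.||_1 (soft-thresholding):
    argmin_x 0.5*||x - v||^2 + t*||x||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad_f, prox_g, x0, step, n_iter=200):
    """Generic proximal-gradient iteration: x <- prox_g(x - step*grad_f(x))."""
    x = x0
    for _ in range(n_iter):
        x = prox_g(x - step * grad_f(x), step)
    return x

# toy composite problem: min 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(3)
A = rng.standard_normal((50, 20))
b = A @ np.concatenate([np.ones(3), np.zeros(17)])   # sparse ground truth
lam = 0.5
step = 1.0 / np.linalg.norm(A, 2) ** 2               # 1 / Lipschitz constant
x = proximal_gradient(lambda v: A.T @ (A @ v - b),
                      lambda v, t: prox_l1(v, lam * t),
                      np.zeros(20), step)
```

Swapping `prox_l1` for another closed-form proximal operator (group norms, fused penalties, indicator functions) changes the penalty without touching the loop, which is the organizing principle the paper emphasizes.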
39. Dynamic Data Driven Sensor Network Selection and Tracking.
- Author
-
Schizas, Ioannis D. and Maroulas, Vasileios
- Subjects
DYNAMIC data exchange ,SENSOR networks ,STATISTICAL correlation ,DISTRIBUTED network protocols ,HOMOTOPY theory - Abstract
The deployment of sensor networks and the development of pertinent information processing techniques can meet the situational-awareness requirements of many defense/surveillance systems. Sensors allow the collection and distributed processing of information in a variety of environments whose structure is not known and changes dynamically with time. A distributed dynamic data driven (DDDAS-based) framework is developed in this paper to address distributed multi-threat tracking under limited sensor resources. The acquired sensor data are used to control the sensing part of the network, utilizing only the sensing devices that acquire good-quality measurements about the present targets. The DDDAS-based concept enables efficient activation of only those parts of the network located close to a target/object. A novel combination of stochastic filtering techniques, drift homotopy, and sparsity-inducing canonical correlation analysis (S-CCA) is utilized to dynamically identify the target-informative sensors and use them to perform improved drift-based particle filtering, allowing robust, stable, and accurate distributed tracking of multiple objects. Numerical tests demonstrate the effectiveness of the novel framework. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
40. Single Image Super Resolution from Compressive Samples Using Two Level Sparsity Based Reconstruction.
- Author
-
Nath, Aneesh G., Nair, Madhu S., and Rajan, Jeny
- Subjects
IMAGE compression ,DATA compression ,DIGITAL image processing ,PIXELS ,IMAGE processing - Abstract
Super-resolution based on compressed sensing (CS) treats a low-resolution (LR) image patch as the compressive samples of its high-resolution (HR) patch. Compressed sensing based image acquisition systems acquire a small number of random linear measurements without first collecting all the pixel values, but using these compressive measurements directly to reconstruct the image causes quality issues. In this paper, an image super-resolution method with two-level sparsity-based reconstruction, via patch-based image interpolation and dictionary learning, is proposed. The first-level reconstruction generates a low-resolution image from the random samples, and the interpolation scheme used in this algorithm reduces the HR-LR patch coherency due to the neighborhood issue, a major drawback of single-image super-resolution algorithms. The dictionary-based reconstruction phase then generates the high-resolution image from the low-resolution output of the first level. Experimental results show that the proposed two-level reconstruction scheme recovers more image detail and yields improved results from very few samples (around 35-45%) compared with state-of-the-art algorithms that use the low-resolution image itself as input. The results are compared in terms of both PSNR values and visual perception. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
41. Sparse modeling of spatial environmental variables associated with asthma.
- Author
-
Chang, Timothy S., Gangnon, Ronald E., David Page, C., Buckingham, William R., Tandias, Aman, Cowan, Kelly J., Tomasallo, Carrie D., Arndt, Brian G., Hanrahan, Lawrence P., and Guilbert, Theresa W.
- Abstract
Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin’s Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5–50 years over a three-year period. Each patient’s home address was geocoded to one of 3456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin’s geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to Logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
42. Sparse Kernel Canonical Correlation Analysis.
- Author
-
Delin Chu, Li-Zhi Liao, Michael K. Ng, and Xiaowei Zhang
- Subjects
STATISTICAL correlation ,CANONICAL correlation (Statistics) ,CLASSIFICATION algorithms ,KERNEL operating systems ,GENERALIZATION - Abstract
Canonical correlation analysis (CCA) is a multivariate statistical technique for finding the linear relationship between two sets of variables. The kernel generalization of CCA, named kernel CCA, has been proposed to find nonlinear relations between data sets. Despite the wide usage of CCA and kernel CCA, they share one limitation: the lack of sparsity in their solutions. In this paper, we consider sparse kernel CCA and propose a novel sparse kernel CCA algorithm (SKCCA). Our algorithm is based on a relationship between kernel CCA and least squares. Sparsity of the dual transformations is introduced by penalizing the l1-norm of the dual vectors. Experiments demonstrate that our algorithm not only performs well in computing sparse dual transformations but also alleviates the over-fitting problem of kernel CCA. [ABSTRACT FROM AUTHOR]
- Published
- 2013
44. DCT-compressive Sampling of Frequency-sparse Audio Signals.
- Author
-
Moreno-Alvarado, R. G. and Martinez-Garcia, Mauricio
- Subjects
DISCRETE cosine transforms ,MATHEMATICAL transformations ,COMPRESSED sensing ,SIGNAL processing ,SIGNAL theory - Abstract
The discrete cosine transform (DCT) and compressive sampling (CS) are two signal processing techniques with applications in a great number of engineering fields. In this paper, we propose applying both techniques to the compression of audio signals. Using spectral analysis and the properties of the DCT, audio signals can be treated as sparse signals in the frequency domain; this is especially true for sounds representing tones. CS, on the other hand, has traditionally been used to acquire and compress certain sparse images. We propose the use of the DCT and CS to obtain an efficient representation of audio signals, especially when they are sparse in the frequency domain. By using the DCT as a signal preprocessor to obtain a sparse frequency-domain representation, we show that the subsequent application of CS represents our signals with less information than the well-known sampling theorem requires. Our results could thus form the basis of a new compression method for audio and speech signals. [ABSTRACT FROM AUTHOR]
- Published
- 2011
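The abstract's premise, that tones are sparse under the DCT, is easy to check numerically. The sketch below computes an orthonormal DCT-II directly from its definition (fine for short frames; FFT-based routines are faster) and measures how much of a pure tone's energy falls in its largest coefficients. The sampling rate, tone frequency, and frame length are arbitrary illustrative choices.

```python
import numpy as np

def dct_ii(x):
    """Orthonormal DCT-II of a 1-D signal, computed from the definition
    X_k = s_k * sum_n x_n * cos(pi*k*(2n+1)/(2N))."""
    N = x.size
    n = np.arange(N)
    M = np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)  # rows indexed by k
    X = M @ x * np.sqrt(2.0 / N)
    X[0] /= np.sqrt(2.0)          # orthonormal scaling for the DC coefficient
    return X

fs, N = 8000, 512
t = np.arange(N) / fs
tone = np.sin(2 * np.pi * 440.0 * t)      # a pure 440 Hz tone
X = dct_ii(tone)
energy = np.sort(X ** 2)[::-1]            # coefficient energies, descending
```

Most of the tone's energy concentrates in a handful of DCT coefficients near the bin corresponding to 440 Hz, which is exactly the frequency-domain sparsity that makes the subsequent CS step effective.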
45. 2D and 3D prestack seismic data regularization using an accelerated sparse time-invariant Radon transform.
- Author
-
Ying-Qiang Zhang and Wen-Kai Lu
- Subjects
DATA analysis ,INTEGRAL geometry ,RADIOACTIVE substances ,NOBLE gases ,TIME series analysis - Abstract
The time-invariant Radon transform (RT) is commonly used to regularize and interpolate sparsely sampled or irregularly acquired prestack seismic data. The sparseness of the Radon model significantly influences the results of regularization. We have developed an effective and efficient method for the regularization and interpolation of 2D as well as 3D prestack seismic data. We used an accelerated sparse time-invariant RT in the mixed frequency-time domain to improve the performance of RT-based seismic data regularization. This 2D sparse RT incorporated the iterative 2D model shrinkage algorithm instead of the traditional iteratively reweighted least-squares (IRLS) algorithm in the time domain, and we computed the forward and inverse RTs in the frequency domain to solve the sparse inverse problem, which dramatically reduced the computational cost while obtaining a high-resolution result. The 2D synthetic and real data examples revealed that our 2D approach can better interpolate beyond aliasing a 2D prestack seismic record that contains a large gap, compared with the least-squares-based RT and the frequency-domain sparse RT methods. To extend the 2D technique to 3D more efficiently, we first formulate the 3D RT as a problem of solving a special matrix equation. Next, we use the iterative 3D model shrinkage algorithm to obtain a high-resolution 3D Radon model. The proposed 3D sparse RT method can be applied in the regularization of 3D prestack gathers, such as in the cable interpolation in a 3D marine survey. We achieved robustness and effectiveness with our 3D approach with successful applications to 3D synthetic and real data. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
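The iterative model shrinkage at the heart of the method above is, in generic form, iterative soft-thresholding (ISTA) applied to a sparse linear inverse problem. The sketch below substitutes a small random matrix for the Radon operator and toy sizes and regularization weight for the real ones; it illustrates the shrinkage iteration, not the paper's mixed frequency-time domain implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_data, n_model = 80, 120        # toy sizes; L stands in for the Radon operator

L = rng.standard_normal((n_data, n_model))
L /= np.linalg.norm(L, 2)        # normalize the spectral norm so step size 1 is valid

m_true = np.zeros(n_model)
m_true[[10, 55, 90]] = [2.0, -1.5, 1.0]   # sparse Radon model (three "events")
d = L @ m_true                            # observed data

soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
lam = 0.01                                # sparsity weight (assumed)

m = np.zeros(n_model)
for _ in range(1000):
    # Gradient step on the data misfit, then soft-threshold (the "shrinkage").
    m = soft(m + L.T @ (d - L @ m), lam)

err = np.linalg.norm(m - m_true) / np.linalg.norm(m_true)
```

The shrinkage step drives most model components to exactly zero, which is what produces the high-resolution (sparse) Radon model the paper relies on for regularization.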
46. Passive Multistatic Radar Imaging Based on Compressed Sensing Joint Sparse Aperture Autofocusing.
- Author
-
WU Hao, SU Wei-min, and GU Hong
- Published
- 2014
- Full Text
- View/download PDF
47. Classification of High Frequency Oscillations in intracranial EEG signals based on coupled time-frequency and image-related features.
- Author
-
Krikid, Fatma, Karfoul, Ahmad, Chaibi, Sahbi, Kachenoura, Amar, Nica, Anca, Kachouri, Abdennaceur, and Le Bouquin Jeannès, Régine
- Subjects
ELECTROENCEPHALOGRAPHY ,BIOMEDICAL signal processing ,RADIAL basis functions ,SIGNAL-to-noise ratio ,SUPPORT vector machines ,INSPECTION & review ,CLASSIFICATION - Abstract
• Proposal of a multi-classification approach for HFOs in intracranial EEG signals. • Coupled time-frequency and image-related features for efficient HFOs multi-classification. • High performance on simulated and real iEEG signals. High Frequency Oscillations (HFOs) occurring in the range of [30–500 Hz] in epileptic intracranial ElectroEncephaloGraphic (iEEG) signals have recently proven to be good biomarkers for localizing the epileptogenic zone. Identifying these particular cerebral events and discriminating them from other transient events, such as interictal epileptic spikes, are traditionally performed by experts through visual inspection. However, this is laborious, very time-consuming, and subjective. In this paper, a new classification approach for HFOs is proposed. This approach relies mainly on combining raw time-frequency (TF) features, computed from a TF representation of HFOs using the S-transform, with relevant image-based features derived from a binarization of the corresponding TF grayscale image. The resulting feature vector is then used to train a multi-class Radial Basis Function (RBF) based Support Vector Machine (SVM) classifier. The efficiency of the proposed approach, compared to conventional classification schemes based only on time, frequency, or energy-based features, is confirmed using both simulated and real iEEG signals. Using simulated data at a signal-to-noise ratio (SNR) of 15 dB, the proposed classification system achieved a sensitivity, specificity, accuracy, area under the curve, and F1-score of around 0.990, 0.996, 0.995, 0.993, and 0.990, respectively. For real data, the proposed approach attained scores of 0.765, 0.941, 0.906, 0.929, and 0.768 for sensitivity, specificity, accuracy, area under the curve, and F1-score, respectively. These results confirm the relevance of coupling TF and image-related features, in the way proposed in this paper, for higher HFOs classification quality compared to existing approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
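The coupled feature construction described above, raw TF features combined with features from a binarized TF image, can be illustrated on a toy burst signal. The sketch uses a plain STFT magnitude in place of the S-transform and ad-hoc image features; the sampling rate, threshold, and feature choices are assumptions, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 2000                         # assumed sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)

# Toy "HFO": a 120 Hz burst buried in noise (stand-in for a real iEEG event).
x = np.sin(2 * np.pi * 120 * t) * np.exp(-((t - 0.25) ** 2) / 0.002)
x += 0.2 * rng.standard_normal(t.size)

# STFT magnitude as the TF image (the paper uses the S-transform instead).
win, hop = 128, 32
frames = np.array([x[i:i + win] * np.hanning(win)
                   for i in range(0, x.size - win, hop)])
tf = np.abs(np.fft.rfft(frames, axis=1)).T    # (freq bins, time frames)

# Raw TF features: mean energy in the first few frequency bands.
tf_feats = tf.mean(axis=1)[:8]

# Image features from the binarized grayscale TF map.
gray = tf / tf.max()
binary = gray > (gray.mean() + 2 * gray.std())   # crude threshold (assumed)
img_feats = np.array([binary.mean(),             # active-pixel ratio
                      binary.any(axis=1).sum(),  # frequency extent of the blob
                      binary.any(axis=0).sum()]) # time extent of the blob

feature_vector = np.concatenate([tf_feats, img_feats])
```

In the paper's pipeline a vector like this, per detected event, would then be fed to the multi-class RBF SVM classifier.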
48. Electro/magnetoencephalography beamforming with spatial restrictions based on sparsity.
- Author
-
Zaragoza-Martínez, Claudia C. and Gutiérrez, David
- Subjects
ELECTROENCEPHALOGRAPHY ,MAGNETOENCEPHALOGRAPHY ,BEAMFORMING ,SIGNAL-to-noise ratio ,NEURAL stimulation ,NEURAL circuitry - Abstract
Abstract: We present a source localization method for electroencephalographic (EEG) and magnetoencephalographic (MEG) data based on an estimate of sparsity obtained through the eigencanceler (EIG), a spatial filter whose weights are constrained to lie in the noise subspace. The EIG rejects directional interferences while minimizing noise contributions and maintaining specified beam pattern constraints. In our case, the EIG is used to estimate the sparsity of the signal as a function of position; we then use this information to spatially restrict the neural sources to locations outside the sparsity maxima. As a proof of concept, we incorporate this restriction into the “classical” linearly constrained minimum variance (LCMV) source localization approach in order to enhance its performance. We present numerical examples evaluating the proposed method using realistically simulated EEG/MEG data under different signal-to-noise ratio (SNR) conditions and various levels of correlation between sources, as well as real EEG/MEG measurements of median nerve stimulation. Our results show that the proposed method has the potential to reduce the bias in the classical approach's search for neural sources, as well as to make it more effective in localizing correlated sources. [Copyright © Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
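The “classical” LCMV beamformer that the method above builds on computes, for a candidate source with lead field h and sensor covariance C, the weights w = C^{-1}h / (h^T C^{-1} h), which pass the source with unit gain while minimizing output variance. A minimal sketch on simulated data; the lead field, source waveform, and noise level are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_sensors, n_samples = 16, 1000

# Hypothetical lead field for one candidate source location.
h = rng.standard_normal(n_sensors)

# Simulated sensor data: the source time course plus sensor noise.
s = np.sin(2 * np.pi * 10 * np.arange(n_samples) / 250.0)
X = np.outer(h, s) + 0.5 * rng.standard_normal((n_sensors, n_samples))

C = np.cov(X)                     # sensor covariance estimate
Ci_h = np.linalg.solve(C, h)
w = Ci_h / (h @ Ci_h)             # LCMV weights: w @ h == 1 (unit gain)

s_hat = w @ X                     # beamformer estimate of the source time course
corr = np.corrcoef(s_hat, s)[0, 1]
```

Scanning w over candidate locations yields a source map; the paper's contribution is to restrict that scan using the EIG-based sparsity estimate.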
49. A Distributed Compressive Sensing Technique for Data Gathering in Wireless Sensor Networks.
- Author
-
Masoum, Alireza, Meratnia, Nirvana, and Havinga, Paul J.M.
- Subjects
WIRELESS sensor networks ,COMPRESSED sensing ,ACQUISITION of data ,ENERGY consumption ,SIGNAL processing ,COMPUTER algorithms - Abstract
Abstract: Compressive sensing is a new technique for energy-efficient data gathering in wireless sensor networks. It is characterized by simple encoding and complex decoding. The strength of compressive sensing is its ability to reconstruct sparse or compressible signals from a small number of measurements without requiring any a priori knowledge about the signal structure. Since wireless sensor nodes are often deployed densely, the correlation among them can be exploited for further compression. Utilizing this spatial correlation, we propose a joint sparsity-based compressive sensing technique in this paper. Our approach employs Bayesian inference to build a probabilistic model of the signals and then applies a belief propagation algorithm as the decoding method to recover the common sparse signal. The simulation results show that our approach achieves significant gains in signal reconstruction accuracy and energy consumption compared with existing approaches. [Copyright © Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
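Joint sparsity, meaning multiple sensors sharing a common sparse support, can also be recovered greedily rather than by the Bayesian belief-propagation decoder the paper uses. The sketch below uses simultaneous orthogonal matching pursuit (SOMP) as a simpler stand-in; the sizes and signals are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
N, m, k, n_sensors = 100, 30, 4, 5   # toy dimensions

Phi = rng.standard_normal((m, N)) / np.sqrt(m)   # shared sensing matrix

# Joint-sparse signals: a common support, sensor-specific amplitudes.
support_true = [7, 23, 51, 88]
S = np.zeros((N, n_sensors))
S[support_true, :] = 1.0 + 0.3 * rng.standard_normal((k, n_sensors))
Y = Phi @ S                           # one measurement vector per sensor

# Simultaneous OMP: pick the atom most correlated with ALL residuals at once.
support, R = [], Y.copy()
for _ in range(k):
    scores = np.linalg.norm(Phi.T @ R, axis=1)
    support.append(int(np.argmax(scores)))
    coef, *_ = np.linalg.lstsq(Phi[:, support], Y, rcond=None)
    R = Y - Phi[:, support] @ coef

S_hat = np.zeros((N, n_sensors))
S_hat[support, :] = coef
err = np.linalg.norm(S - S_hat) / np.linalg.norm(S)
```

Pooling residual correlations across sensors is what exploits the spatial correlation: each sensor's measurements reinforce the same support decision.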
50. Hybrid attribute-based recommender system for learning material using genetic algorithm and a multidimensional information model.
- Author
-
Salehi, Mojtaba, Pourzaferani, Mohammad, and Razavi, Seyed Amir
- Subjects
RECOMMENDER systems ,HYBRID systems ,GENETIC algorithms ,ONLINE education ,TEACHING aids ,INFORMATION filtering systems ,COMBINATORIAL optimization - Abstract
Abstract: In recent years, the explosion of learning materials in web-based educational systems has made it difficult for learners to locate appropriate materials. Personalized recommendation is an enabling mechanism to overcome the information overload that occurs in these new learning environments and to deliver suitable materials to learners. Since users express their opinions based on specific attributes of items, this paper proposes a hybrid recommender system for learning materials based on their attributes, to improve the accuracy and quality of recommendation. The presented system has two main modules: an explicit attribute-based recommender and an implicit attribute-based recommender. In the first module, the weights of implicit or latent attributes of materials for a learner are encoded as chromosomes in a genetic algorithm, which optimizes the weights according to historical ratings. Recommendations are then generated by the Nearest Neighborhood Algorithm (NNA) using the optimized weight vectors of implicit attributes, which represent the opinions of learners. In the second module, a preference matrix (PM) is introduced that models the interests of a learner based on explicit attributes of learning materials in a multidimensional information model. A new similarity measure between PMs is then introduced, and recommendations are generated by the NNA. The experimental results show that our proposed method outperforms current algorithms on accuracy measures and can alleviate problems such as cold-start and sparsity. [Copyright © Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
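The first module above, a genetic algorithm optimizing attribute weights against historical ratings followed by nearest-neighbour prediction, can be sketched compactly. Everything here (the attribute matrix, rating model, GA operators, and population sizes) is a hypothetical toy setup, not the paper's system:

```python
import numpy as np

rng = np.random.default_rng(5)
n_items, n_attrs = 40, 6

# Hypothetical item attributes and one learner's historical ratings.
A = rng.random((n_items, n_attrs))
w_hidden = np.array([3.0, 0.1, 2.0, 0.1, 1.0, 0.1])   # unknown preferences
ratings = A @ w_hidden + 0.1 * rng.standard_normal(n_items)

def fitness(w):
    """Negative leave-one-out nearest-neighbour rating error under weights w."""
    D = np.abs(A[:, None, :] - A[None, :, :]) @ w     # weighted L1 distances
    np.fill_diagonal(D, np.inf)
    pred = ratings[np.argmin(D, axis=1)]              # neighbour's rating
    return -np.mean((pred - ratings) ** 2)

# Minimal GA: truncation selection, uniform crossover, Gaussian mutation.
pop = 3 * rng.random((30, n_attrs))
for _ in range(40):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]           # keep the 10 fittest
    mates = parents[rng.integers(0, 10, size=(30, 2))]
    mask = rng.random((30, n_attrs)) < 0.5
    pop = np.where(mask, mates[:, 0], mates[:, 1])    # crossover
    pop = np.clip(pop + 0.1 * rng.standard_normal(pop.shape), 0, None)

best = pop[np.argmax([fitness(w) for w in pop])]
```

The evolved weight vector then parameterizes the NNA step: items are recommended from the neighbours that are closest under the learned attribute weighting.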