155 results on '"Krim"'
Search Results
2. Robust Subspace Detectors Based on α-Divergence With Application to Detection in Imaging
- Author
-
Aref Miri Rekavandi, Robin J. Evans, and Abd-Krim Seghouane
- Subjects
Hyperparameter ,02 engineering and technology ,Wald test ,Computer Graphics and Computer-Aided Design ,symbols.namesake ,Gaussian noise ,Robustness (computer science) ,Likelihood-ratio test ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,020201 artificial intelligence & image processing ,Divergence (statistics) ,Algorithm ,Software ,Subspace topology ,Signal subspace - Abstract
Robust variants of Wald, Rao and likelihood ratio (LR) tests for the detection of a signal subspace in a signal interference subspace corrupted by contaminated Gaussian noise are proposed in this paper. They are derived using the $\alpha$-divergence, and the trade-off between the robustness and the power (the probability of detection) of the tests is adjustable using a single hyperparameter $\alpha$. It is shown that when $\alpha \rightarrow 1$, these tests are equivalent to their well-known classical counterparts. For example, the robust LR test coincides with the LR test or the matched subspace detector (MSD). Asymptotic results are provided to support the proposed tests, and robustness to outliers is obtained using values of $\alpha < 1$. Numerical experiments illustrating the performance of these tests on simulated, real functional magnetic resonance imaging (fMRI), hyperspectral and synthetic aperture radar (SAR) data are also presented.
- Published
- 2021
- Full Text
- View/download PDF
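As a point of reference for entry 2 above, the sketch below implements only the classical matched subspace detector (MSD) that the robust tests reduce to as $\alpha \rightarrow 1$; the $\alpha$-divergence variants and the interference-subspace handling described in the abstract are not reproduced. The function names (`projector`, `msd_statistic`) and the toy dimensions are illustrative, not from the paper.

```python
import numpy as np

def projector(A):
    """Orthogonal projector onto the column space of A."""
    Q, _ = np.linalg.qr(A)
    return Q @ Q.T

def msd_statistic(y, H):
    """Classical matched subspace detector: ratio of the observation's energy
    inside the signal subspace H to the energy left outside it."""
    P = projector(H)
    return (y @ P @ y) / (y @ (np.eye(len(y)) - P) @ y)

rng = np.random.default_rng(0)
n, p = 64, 3
H = rng.standard_normal((n, p))                       # known signal subspace basis
y_noise = rng.standard_normal(n)                      # noise-only observation
y_signal = H @ rng.standard_normal(p) + rng.standard_normal(n)
print(msd_statistic(y_noise, H), msd_statistic(y_signal, H))
```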
3. Sparse Principal Component Analysis With Preserved Sparsity Pattern
- Author
-
Navid Shokouhi, Abd-Krim Seghouane, and Inge Koch
- Subjects
Computer science ,business.industry ,Feature vector ,Dimensionality reduction ,Feature extraction ,Sparse PCA ,Feature selection ,Image processing ,Pattern recognition ,02 engineering and technology ,Computer Graphics and Computer-Aided Design ,Blind signal separation ,Principal component analysis ,Pattern recognition (psychology) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Software - Abstract
Principal component analysis (PCA) is widely used for feature extraction and dimension reduction in pattern recognition and data analysis. Despite its popularity, the reduced dimension obtained from the PCA is difficult to interpret due to the dense structure of principal loading vectors. To address this issue, several methods have been proposed for sparse PCA, all of which estimate loading vectors with few non-zero elements. However, when more than one principal component is estimated, the associated loading vectors do not possess the same sparsity pattern. Therefore, it becomes difficult to determine a small subset of variables from the original feature space that have the highest contribution in the principal components. To address this issue, an adaptive block sparse PCA method is proposed. The proposed method is guaranteed to obtain the same sparsity pattern across all principal components. Experiments show that applying the proposed sparse PCA method can help improve the performance of feature selection for image processing applications. We further demonstrate that our proposed sparse PCA method can be used to improve the performance of blind source separation for functional magnetic resonance imaging data.
- Published
- 2019
- Full Text
- View/download PDF
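For context on entry 3 above, here is a minimal, generic sparse-PCA sketch based on soft-thresholded power iteration over the sample covariance. It is not the paper's adaptive block sparse PCA and does not enforce a shared sparsity pattern across components; `sparse_pc`, `soft_threshold`, and the value of `lam` are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_pc(X, lam=0.1, n_iter=200, seed=0):
    """First sparse loading vector via soft-thresholded power iteration
    on the sample covariance matrix (a generic sparse-PCA heuristic)."""
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / (len(Xc) - 1)
    v = np.random.default_rng(seed).standard_normal(C.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = soft_threshold(C @ v, lam)
        nrm = np.linalg.norm(v)
        if nrm == 0.0:          # threshold too aggressive: all loadings vanished
            break
        v /= nrm
    return v
```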
4. Multiphase joint segmentation-registration and object tracking for layered images
- Author
-
Ping-Feng Chen, Krim, H., and Mendoza, O.L.
- Subjects
Image coding -- Methods ,Image segmentation -- Innovations ,Mathematical optimization -- Usage ,Object recognition (Computers) -- Usage ,Pattern recognition -- Usage ,Simulation methods -- Technology application ,Technology application ,Business ,Computers ,Electronics ,Electronics and electrical industries - Published
- 2010
5. Object recognition through topo-geometric shape models using error-tolerant subgraph isomorphisms
- Author
-
Baloch, S. and Krim, H.
- Subjects
Graph theory -- Usage ,Isomorphisms (Mathematics) -- Analysis ,Object recognition (Computers) -- Innovations ,Pattern recognition -- Innovations ,Three-dimensional display systems -- Usage ,3D technology ,Business ,Computers ,Electronics ,Electronics and electrical industries - Published
- 2010
6. Squigraphs for fine and compact modeling of 3-D shapes
- Author
-
Aouada, D. and Krim, H.
- Subjects
Morse theory -- Usage ,Algebraic topology -- Models ,Topology -- Models ,Business ,Computers ,Electronics ,Electronics and electrical industries - Published
- 2010
7. A shearlet approach to edge analysis and detection
- Author
-
Sheng Yi, Labate, Demetrio, Easley, Glenn R., and Krim, Hamid
- Subjects
Machine vision -- Analysis ,Edge detection (Image processing) -- Research ,Wavelet transforms -- Usage ,Business ,Computers ,Electronics ,Electronics and electrical industries - Published
- 2009
8. Flexible skew-symmetric shape model for shape representation, classification, and sampling
- Author
-
Baloch, Sajjad H. and Krim, Hamid
- Subjects
Image processing -- Methods ,Gaussian processes -- Analysis ,Distribution (Probability theory) -- Analysis ,Business ,Computers ,Electronics ,Electronics and electrical industries - Abstract
A novel statistical method for shape modeling, the flexible skew-symmetric shape model (FSSM), is presented; it is derived from an extended class of flexible skew-symmetric distributions, and each shape is represented by a learned distribution. FSSM is formulated as a joint bimodal distribution of angle and distance from the centroid of an aggregate of random points.
- Published
- 2007
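The condensed abstract of entry 8 above describes FSSM as a joint bimodal distribution of angle and distance from the centroid of an aggregate of random points. The sketch below only computes those raw (angle, distance) features for a 2-D point cloud; fitting the flexible skew-symmetric density itself is not shown, and `angle_distance_features` is an assumed name.

```python
import numpy as np

def angle_distance_features(points):
    """(angle, distance)-from-centroid pairs for a 2-D point cloud: the raw
    quantities FSSM models with a joint bimodal distribution."""
    centered = points - points.mean(axis=0)
    radius = np.linalg.norm(centered, axis=1)
    angle = np.arctan2(centered[:, 1], centered[:, 0])
    return angle, radius

# toy example: noisy samples around a unit circle
rng = np.random.default_rng(1)
t = rng.uniform(0.0, 2.0 * np.pi, 500)
pts = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.standard_normal((500, 2))
angle, radius = angle_distance_features(pts)
```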
9. Robust Subspace Detectors Based on α-Divergence With Application to Detection in Imaging.
- Author
-
Rekavandi, Aref Miri, Seghouane, Abd-Krim, and Evans, Robin J.
- Subjects
SYNTHETIC aperture radar ,DETECTORS ,RANDOM noise theory ,SIGNAL detection ,HYPERSPECTRAL imaging systems ,FUNCTIONAL magnetic resonance imaging - Abstract
Robust variants of Wald, Rao and likelihood ratio (LR) tests for the detection of a signal subspace in a signal interference subspace corrupted by contaminated Gaussian noise are proposed in this paper. They are derived using the $\alpha$-divergence, and the trade-off between the robustness and the power (the probability of detection) of the tests is adjustable using a single hyperparameter $\alpha$. It is shown that when $\alpha \rightarrow 1$, these tests are equivalent to their well-known classical counterparts. For example, the robust LR test coincides with the LR test or the matched subspace detector (MSD). Asymptotic results are provided to support the proposed tests, and robustness to outliers is obtained using values of $\alpha < 1$. Numerical experiments illustrating the performance of these tests on simulated, real functional magnetic resonance imaging (fMRI), hyperspectral and synthetic aperture radar (SAR) data are also presented. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
10. Geodesic matching of triangulated surfaces
- Author
-
Hamza, A. Ben and Krim, Hamid
- Subjects
Object recognition (Computers) -- Methods ,Pattern recognition -- Methods ,Three-dimensional graphics -- Analysis ,Image processing -- Methods ,Business ,Computers ,Electronics ,Electronics and electrical industries - Abstract
A new methodology is proposed for three-dimensional (3-D) object matching based on a global geodesic measure. The object matching can be carried out by information-theoretic dissimilarity measure calculations between geodesic shape distributions and is also computationally efficient and inexpensive.
- Published
- 2006
11. Deep Dictionary Learning: A PARametric NETwork Approach.
- Author
-
Mahdizadehaghdam, Shahin, Panahi, Ashkan, Krim, Hamid, and Dai, Liyi
- Subjects
DEEP learning - Abstract
Deep dictionary learning seeks multiple dictionaries at different image scales to capture complementary coherent characteristics. We propose a method for learning a hierarchy of synthesis dictionaries with an image classification goal. The dictionaries and classification parameters are trained by a classification objective, and the sparse features are extracted by reducing a reconstruction loss in each layer. The reconstruction objectives in some sense regularize the classification problem and inject source signal information in the extracted features. The performance of the proposed hierarchical method increases by adding more layers, which consequently makes this model easier to tune and adapt. The proposed algorithm furthermore shows a remarkably lower fooling rate in the presence of adversarial perturbation. The validation of the proposed approach is based on its classification performance using four benchmark datasets and is compared to a Convolutional Neural Network (CNN) of similar size. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
12. Identification of a discrete planar symmetric shape from a single noisy view
- Author
-
Poliannikov, Oleg V. and Krim, Hamid
- Subjects
Machine vision -- Analysis ,Geometry, Projective -- Evaluation ,Business ,Computers ,Electronics ,Electronics and electrical industries - Abstract
The method for identifying a discrete planar symmetric shape from an arbitrary viewpoint is described. It is shown that the proposed method can be extended to the case of noisy data to yield an optimal estimate of a shape in question.
- Published
- 2005
13. Fast incorporation of optical flow into active polygons
- Author
-
Unal, Gozde, Krim, Hamid, and Yezzi, Anthony
- Subjects
Image processing -- Research ,Visual communication -- Research ,Business ,Computers ,Electronics ,Electronics and electrical industries - Abstract
The addition of a prediction step to active contour-based visual tracking using optical flow is considered, and the local computation of the flow along the boundaries of continuous active contours with appropriate regularizers is clarified. The need to add 'ad hoc' regularizing terms to the optical flow computations, together with the inevitably arbitrary associated weighting parameters, is avoided.
- Published
- 2005
14. Multiscale signal enhancement: beyond the normality and independence assumption
- Author
-
Krim, Hamid and He, Yun
- Subjects
Signal processing -- Research ,Filtering (Electronics) -- Research ,Noise reduction systems (Electronics) -- Research ,Electronics industry -- Research ,Imaging technology ,Business ,Computers ,Electronics ,Electronics and electrical industries - Abstract
A new nonlinear smoothness-constrained filtering technique is described. Numerical experiments show that the approach is effective and efficient.
- Published
- 2002
15. Image segmentation and edge enhancement with stabilized inverse diffusion equations
- Author
-
Pollak, Ilya, Willsky, Alan S., and Krim, Hamid
- Subjects
Image processing -- Research ,Synthetic aperture radar -- Research ,Differential equations -- Usage ,Business ,Computers ,Electronics ,Electronics and electrical industries - Abstract
Image segmentation and edge enhancement have been applied to signals and images with very high noise levels and to blurry signals. The approach is based on a new class of evolution equations for the processing of imagery and signals. They are called stabilized inverse diffusion equations (SIDEs) and are a family of first-order multidimensional ordinary differential equations with discontinuous right-hand sides. Such an equation is an inverse diffusion everywhere other than at local extrema, where some stabilization is introduced.
- Published
- 2000
16. Sequential Dictionary Learning From Correlated Data: Application to fMRI Data Analysis
- Author
-
Asif Iqbal and Abd-Krim Seghouane
- Subjects
Computer science ,Matrix norm ,Image processing ,02 engineering and technology ,Regularization (mathematics) ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Matrix (mathematics) ,0302 clinical medicine ,Image Processing, Computer-Assisted ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Humans ,K-SVD ,medicine.diagnostic_test ,business.industry ,Brain ,Approximation algorithm ,Signal Processing, Computer-Assisted ,020206 networking & telecommunications ,Magnetic resonance imaging ,Pattern recognition ,Sparse approximation ,Magnetic Resonance Imaging ,Computer Graphics and Computer-Aided Design ,Independent component analysis ,Data set ,Algorithm design ,Supervised Machine Learning ,Artificial intelligence ,business ,Neural coding ,Functional magnetic resonance imaging ,Algorithms ,Software - Abstract
Sequential dictionary learning via the K-SVD algorithm has been revealed as a successful alternative to conventional data-driven methods, such as independent component analysis, for functional magnetic resonance imaging (fMRI) data analysis. fMRI data sets are, however, structured data matrices with notions of spatio-temporal correlation and temporal smoothness. This prior information has not been included in the K-SVD algorithm when applied to fMRI data analysis. In this paper, we propose three variants of the K-SVD algorithm dedicated to fMRI data analysis by accounting for this prior information. The proposed algorithms differ from the K-SVD in their sparse coding and dictionary update stages. The first two algorithms account for the known correlation structure in the fMRI data by using the squared Q,R-norm instead of the Frobenius norm for matrix approximation. The third and last algorithm accounts for both the known correlation structure in the fMRI data and the temporal smoothness. The temporal smoothness is incorporated in the dictionary update stage via penalized regularization of the dictionary atoms. The performance of the proposed dictionary learning algorithms is illustrated through simulations and applications on real fMRI data.
- Published
- 2017
- Full Text
- View/download PDF
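For entry 16 above, the following is a bare-bones sketch of the plain K-SVD baseline the paper modifies: greedy orthogonal matching pursuit for sparse coding followed by rank-1 SVD atom updates. The correlation-aware (Q,R-norm) and smoothness-regularized variants proposed in the paper are not implemented here; `omp`, `ksvd`, and the parameter defaults are illustrative.

```python
import numpy as np

def omp(D, y, k):
    """Tiny orthogonal matching pursuit: greedily select k atoms of D for y."""
    residual, idx, coef = y.copy(), [], np.zeros(0)
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def ksvd(Y, n_atoms, k, n_iter=10, seed=0):
    """Plain K-SVD: alternate OMP sparse coding with rank-1 SVD atom updates."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        X = np.column_stack([omp(D, y, k) for y in Y.T])
        for j in range(n_atoms):
            users = np.flatnonzero(X[j])
            if users.size == 0:
                continue
            # error of the signals using atom j, with atom j's contribution removed
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j] = U[:, 0]
            X[j, users] = s[0] * Vt[0]
    return D, X
```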
17. Multiscale segmentation and anomaly enhancement of SAR imagery
- Author
-
Fosgate, Charles H., Krim, Hamid, Irving, William W., Karl, William C., and Willsky, Alan S.
- Subjects
Synthetic aperture radar -- Image quality ,Stochastic analysis -- Usage ,Pixels -- Observations ,Relief models -- Research ,Business ,Computers ,Electronics ,Electronics and electrical industries - Abstract
Multiscale stochastic models can be used for segmentation of natural clutters of grass and forest terrains and for enhancement of anomalous pixel regions for man-made object detection in synthetic aperture radar (SAR) imagery. The models take advantage of the coherent nature of SAR sensors and the differences in interscale variability and predictability of different images. The models have a scale-autoregressive nature, and are applicable to image pixel classification and grass-forest boundary detection. The algorithmic calculations involved are simple and efficient.
- Published
- 1997
18. Metric Driven Classification: A Non-Parametric Approach Based on the Henze–Penrose Test Statistic.
- Author
-
Ghanem, Sally, Krim, Hamid, Clouse, Hamilton Scott, and Sakla, Wesam
- Subjects
PATTERN recognition systems ,NONPARAMETRIC statistics ,COMPUTER vision ,PROBABILITY theory ,COMPUTATIONAL complexity - Abstract
Entropy-based divergence measures have proven their effectiveness in many areas of computer vision and pattern recognition. However, the complexity of their implementation might be prohibitive in resource-limited applications, as they require estimates of probability densities which are expensive to compute directly for high-dimensional data. In this paper, we investigate the usage of a non-parametric distribution-free metric, known as the Henze–Penrose test statistic, to obtain bounds for the $k$-nearest neighbors ($k$-NN) classification accuracy. Simulation results demonstrate the effectiveness and the reliability of this metric in estimating the inter-class separability. In addition, the proposed bounds on the $k$-NN classification are exploited for evaluating the efficacy of different pre-processing techniques as well as selecting the least number of features that would achieve the desired classification performance. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
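For entry 18 above, the sketch below builds the Friedman–Rafsky construction behind the Henze–Penrose statistic: a Euclidean minimum spanning tree over the pooled sample, followed by a count of edges joining points from different samples. The final line uses one commonly cited plug-in normalization of the Henze–Penrose divergence; the exact form, and the paper's k-NN accuracy bounds, should be checked against the paper itself.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import minimum_spanning_tree

def henze_penrose_divergence(X, Y):
    """MST over the pooled sample, count cross-sample edges, plug into a
    common estimate of the Henze-Penrose divergence (assumed normalization)."""
    Z = np.vstack([X, Y])
    labels = np.r_[np.zeros(len(X)), np.ones(len(Y))]
    mst = minimum_spanning_tree(cdist(Z, Z)).tocoo()
    cross_edges = np.sum(labels[mst.row] != labels[mst.col])
    m, n = len(X), len(Y)
    return 1.0 - cross_edges * (m + n) / (2.0 * m * n)

rng = np.random.default_rng(0)
A1 = rng.normal(0.0, 1.0, size=(200, 5))   # two samples from the same law
A2 = rng.normal(0.0, 1.0, size=(200, 5))
B = rng.normal(2.0, 1.0, size=(200, 5))    # a well-separated sample
print(henze_penrose_divergence(A1, A2), henze_penrose_divergence(A1, B))
```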
19. Analysis Dictionary Learning Based Classification: Structure for Robustness
- Author
-
Tang, Wen, Panahi, Ashkan, Krim, Hamid, and Dai, Liyi
- Published
- 2019
- Full Text
- View/download PDF
20. An $\alpha$-Divergence-Based Approach for Robust Dictionary Learning
- Author
-
Iqbal, Asif and Seghouane, Abd-Krim
- Published
- 2019
- Full Text
- View/download PDF
21. Sparse Principal Component Analysis With Preserved Sparsity Pattern
- Author
-
Seghouane, Abd-Krim, Shokouhi, Navid, and Koch, Inge
- Published
- 2019
- Full Text
- View/download PDF
22. Subspace Learning of Dynamics on a Shape Manifold: A Generative Modeling Approach
- Author
-
Hamid Krim and Sheng Yi
- Subjects
Manifold alignment ,Invariant manifold ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Fundamental theorem of Riemannian geometry ,Topology ,Computer Graphics and Computer-Aided Design ,Pseudo-Riemannian manifold ,Statistical manifold ,symbols.namesake ,symbols ,Mathematics::Differential Geometry ,Information geometry ,Software ,Subspace topology ,Ricci curvature ,ComputingMethodologies_COMPUTERGRAPHICS ,Mathematics - Abstract
In this paper, we propose a novel subspace learning algorithm for shape dynamics. Compared with previous works, our method is invertible and better characterizes the nonlinear geometry of a shape manifold while retaining good computational efficiency. Using a parallel moving frame on a shape manifold, each path of shape dynamics is uniquely represented in a subspace spanned by the moving frame, given an initial condition (the starting point and starting frame). Mathematically, such a representation may be formulated as solving a manifold-valued differential equation, which provides a generative modeling of high-dimensional shape dynamics in a lower-dimensional subspace. Given the parallelism and a path on a shape manifold, the parallel moving frame along the path is uniquely determined up to the choice of the starting frame. With an initial frame, we minimize the reconstruction error from the subspace to the shape manifold. Such an optimization characterizes well the Riemannian geometry of the manifold by imposing parallelism (equivalent to a Riemannian metric) constraints on the moving frame. The parallelism in this paper is defined by a Levi-Civita connection, which is consistent with the Riemannian metric of the shape manifold. In the experiments, the performance of the subspace learning is extensively evaluated using two scenarios: 1) how the high-dimensional geometry is characterized in the subspace and 2) how the reconstruction compares with the original shape dynamics. The results demonstrate and validate the theoretical advantages of the proposed approach.
- Published
- 2014
- Full Text
- View/download PDF
23. Human Activity as a Manifold-Valued Random Process
- Author
-
L. K. Norris, Sheng Yi, and Hamid Krim
- Subjects
Motor Activity ,Topology ,Curvature ,Models, Biological ,Sensitivity and Specificity ,Pseudo-Riemannian manifold ,symbols.namesake ,Artificial Intelligence ,Image Interpretation, Computer-Assisted ,Humans ,Computer Simulation ,Whole Body Imaging ,Mathematics ,Manifold alignment ,Models, Statistical ,Parallel transport ,Euclidean space ,Stochastic process ,Reproducibility of Results ,Actigraphy ,Computer Graphics and Computer-Aided Design ,Manifold ,Data Interpretation, Statistical ,symbols ,Configuration space ,Algorithms ,Software - Abstract
Most previous shape-based human activity models were built with either a linear assumption or an extrinsic interpretation of the nonlinear geometry of the shape space, both of which proved to be problematic on account of the nonlinear intrinsic geometry of the associated shape spaces. In this paper, we propose an intrinsic stochastic modeling of human activity on a shape manifold. More importantly, within an elegant and theoretically sound framework, our work effectively bridges the nonlinear modeling of human activity on a nonlinear space, with the classic stochastic modeling in a Euclidean space, and thereby provides a foundation for a more effective and accurate analysis of the nonlinear feature space of activity models. From a video sequence, human activity is extracted as a sequence of shapes. Such a sequence is considered as one realization of a random process on a shape manifold. Different activities are then modeled as manifold-valued random processes with different distributions. To address the problem of stochastic modeling on a manifold, we first construct a nonlinear invertible map of a manifold-valued process to a Euclidean process. The resulting process is then modeled as a global or piecewise Brownian motion. The mapping from a manifold to a Euclidean space is known as a stochastic development. The advantage of such a technique is that it yields a one-one correspondence, and the resulting Euclidean process intrinsically captures the curvature on the original manifold. The proposed algorithm is validated on two activity databases [15], [5] and compared with the related works on each of these. The substantiating results demonstrate the viability and high accuracy of our modeling technique in characterizing and classifying different activities.
- Published
- 2012
- Full Text
- View/download PDF
24. A Kullback–Leibler Divergence Approach to Blind Image Restoration
- Author
-
Abd-Krim Seghouane
- Subjects
Kullback–Leibler divergence ,Covariance matrix ,business.industry ,Image processing ,Multivariate normal distribution ,Pattern recognition ,Computer Graphics and Computer-Aided Design ,symbols.namesake ,Kernel (image processing) ,symbols ,Probability distribution ,Artificial intelligence ,business ,Gaussian process ,Software ,Image restoration ,Mathematics - Abstract
A new algorithm for maximum-likelihood blind image restoration is presented in this paper. It is obtained by modeling the original image and the additive noise as multivariate Gaussian processes with unknown covariance matrices. The blurring process is specified by its point spread function, which is also unknown. Estimates of the original image and the blur are derived by alternating minimization of the Kullback-Leibler divergence between a model family of probability distributions defined using the linear image degradation model and a desired family of probability distributions constrained to be concentrated on the observed data. The algorithm has the advantage of providing closed-form expressions for the parameters to be updated and of converging after only a few iterations. A simulation example that illustrates the effectiveness of the proposed algorithm is presented.
- Published
- 2011
- Full Text
- View/download PDF
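Entry 24 above alternates minimization of a Kullback–Leibler divergence under multivariate Gaussian models. As a small supporting piece, the sketch below evaluates the closed-form KL divergence between two multivariate Gaussians; it is only that building block, not the blind restoration algorithm itself, and `gaussian_kl` is an assumed name.

```python
import numpy as np

def gaussian_kl(mu0, S0, mu1, S1):
    """Closed-form KL( N(mu0, S0) || N(mu1, S1) ) for multivariate Gaussians."""
    d = len(mu0)
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(S0)
    _, logdet1 = np.linalg.slogdet(S1)
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - d
                  + logdet1 - logdet0)
```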
25. From Point to Local Neighborhood: Polyp Detection in CT Colonography Using Geodesic Ring Neighborhoods
- Author
-
Ju Lynn Ong and Abd-Krim Seghouane
- Subjects
Geodesic ,Colonic Polyps ,Reproducibility of Results ,Centroid ,Geometry ,Geometry processing ,Curvature ,Sensitivity and Specificity ,Computer Graphics and Computer-Aided Design ,Object detection ,Pattern Recognition, Automated ,Radiographic Image Enhancement ,Artificial Intelligence ,Mesh generation ,Surface roughness ,Humans ,Radiographic Image Interpretation, Computer-Assisted ,Colonography, Computed Tomographic ,Algorithms ,Software ,Mathematics ,Shape analysis (digital geometry) - Abstract
Existing polyp detection methods rely heavily on curvature-based characteristics to differentiate between lesions. These assume that the discrete triangulated surface mesh or volume closely approximates a smooth continuous surface. However, this is often not the case, and because curvature is computed as a local feature and a second-order differential quantity, the presence of noise significantly affects its estimation. For this reason, a more global feature is required to provide an accurate description of the surface at hand. In this paper, a novel method incorporating a local neighborhood around the centroid of a surface patch is proposed. This is done using geodesic rings which accumulate curvature information in a neighborhood around this centroid. This geodesic-ring neighborhood approximates a single smooth, continuous surface upon which curvature and orientation estimation methods can be applied. A new global shape index, S, is also introduced and computed. These curvature and orientation values will be used to classify the surface as either a bulbous polyp, ridge-like fold or semiplanar structure. Experimental results show that this method is promising (100% sensitivity, 100% specificity for lesions >10 mm) for distinguishing between bulbous polyps, folds and planar-like structures in the colon.
- Published
- 2011
- Full Text
- View/download PDF
26. Multiphase Joint Segmentation-Registration and Object Tracking for Layered Images
- Author
-
Olga L. Mendoza, Hamid Krim, and Ping-Feng Chen
- Subjects
Active contour model ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Optical flow ,Image registration ,Image processing ,Image segmentation ,Computer Graphics and Computer-Aided Design ,Edge detection ,Object detection ,Video tracking ,Motion estimation ,Computer vision ,Segmentation ,Artificial intelligence ,Image sensor ,business ,Software - Abstract
In this paper, we propose to jointly segment and register objects of interest in layered images. Layered imaging refers to imageries taken from different perspectives and possibly by different sensors. Registration and segmentation are therefore the two main tasks which contribute to the bottom level, data alignment, of the multisensor data fusion hierarchical structures. Most exploitations of two layered images assumed that scanners are at very high altitudes and that only one transformation ties the two images. Our data are, however, taken at mid-range and therefore require segmentation to assist us in examining different object regions in a divide-and-conquer fashion. Our approach is a combination of a multiphase active contour method with a joint segmentation-registration technique (which we call MPJSR) carried out in a local moving window prior to a global optimization. To further address layered video sequences and tracking objects in frames, we propose a simple adaptation of optical flow calculations along the active contours in a pair of layered image sequences. The experimental results show that the whole integrated algorithm is able to delineate the objects of interest, align them for a pair of layered frames and keep track of the objects over time.
- Published
- 2010
- Full Text
- View/download PDF
27. A Shearlet Approach to Edge Analysis and Detection
- Author
-
Hamid Krim, Sheng Yi, Glenn R. Easley, and Demetrio Labate
- Subjects
business.industry ,Orientation (computer vision) ,Feature extraction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Wavelet transform ,Pattern recognition ,Image processing ,Computer Graphics and Computer-Aided Design ,Edge detection ,Wavelet ,Shearlet ,Curvelet ,Artificial intelligence ,business ,Software ,Mathematics - Abstract
It is well known that the wavelet transform provides a very effective framework for analysis of multiscale edges. In this paper, we propose a novel approach based on the shearlet transform: a multiscale directional transform with a greater ability to localize distributed discontinuities such as edges. Indeed, unlike traditional wavelets, shearlets are theoretically optimal in representing images with edges and, in particular, have the ability to fully capture directional and other geometrical features. Numerical examples demonstrate that the shearlet approach is highly effective at detecting both the location and orientation of edges, and outperforms methods based on wavelets as well as other standard methods. Furthermore, the shearlet approach is useful to design simple and effective algorithms for the detection of corners and junctions.
- Published
- 2009
- Full Text
- View/download PDF
28. Sequential Dictionary Learning From Correlated Data: Application to fMRI Data Analysis
- Author
-
Seghouane, Abd-Krim and Iqbal, Asif
- Published
- 2017
- Full Text
- View/download PDF
29. Flexible Skew-Symmetric Shape Model for Shape Representation, Classification, and Sampling
- Author
-
Sajjad Baloch and Hamid Krim
- Subjects
Information Storage and Retrieval ,Sensitivity and Specificity ,Pattern Recognition, Automated ,symbols.namesake ,Heat kernel signature ,Artificial Intelligence ,Joint probability distribution ,Active shape model ,Image Interpretation, Computer-Assisted ,Computer Simulation ,Gaussian process ,Mathematics ,Models, Statistical ,business.industry ,Reproducibility of Results ,Pattern recognition ,Statistical model ,Image Enhancement ,Computer Graphics and Computer-Aided Design ,Point distribution model ,Skewness ,symbols ,Artificial intelligence ,business ,Algorithm ,Algorithms ,Software ,Shape analysis (digital geometry) - Abstract
Skewness of shape data often arises in applications (e.g., medical image analysis) and is usually overlooked in statistical shape models. In such cases, a Gaussian assumption is unrealistic and a formulation of a general shape model which accounts for skewness is in order. In this paper, we present a novel statistical method for shape modeling, which we refer to as the flexible skew-symmetric shape model (FSSM). The model is sufficiently flexible to accommodate a departure from Gaussianity of the data and is fairly general to learn a "mean shape" (template), with a potential for classification and random generation of new realizations of a given shape. Robustness to skewness results from deriving the FSSM from an extended class of flexible skew-symmetric distributions. In addition, we demonstrate that the model allows us to extract principal curves in a point cloud. The idea is to view a shape as a realization of a spatial random process and to subsequently learn a shape distribution which captures the inherent variability of realizations, provided they remain, with high probability, within a certain neighborhood range around a mean. Specifically, given shape realizations, FSSM is formulated as a joint bimodal distribution of angle and distance from the centroid of an aggregate of random points. Mean shape is recovered from the modes of the distribution, while the maximum likelihood criterion is employed for classification.
- Published
- 2007
- Full Text
- View/download PDF
30. Identification of a discrete planar symmetric shape from a single noisy view
- Author
-
Oleg V. Poliannikov and Hamid Krim
- Subjects
business.industry ,Feature extraction ,Information Storage and Retrieval ,Image processing ,Skeleton (category theory) ,Image Enhancement ,Computer Graphics and Computer-Aided Design ,Pattern Recognition, Automated ,Identification (information) ,Planar ,Artificial Intelligence ,Image Interpretation, Computer-Assisted ,Computer vision ,Artificial intelligence ,Projective invariants ,Artifacts ,business ,Noisy data ,Algorithm ,Algorithms ,Software ,Mathematics - Abstract
In this paper, we propose a method for identifying a discrete planar symmetric shape from an arbitrary viewpoint. Our algorithm is based on a newly proposed notion of a view's skeleton. We show that this concept yields projective invariants which facilitate the identification procedure. It is, furthermore, shown that the proposed method may be extended to the case of noisy data to yield an optimal estimate of a shape in question. Substantiating examples are provided.
- Published
- 2005
- Full Text
- View/download PDF
31. Fast incorporation of optical flow into active polygons
- Author
-
Anthony Yezzi, Gozde Unal, and Hamid Krim
- Subjects
Level set method ,Computer science ,Movement ,Video Recording ,Optical flow ,Information Storage and Retrieval ,Sensitivity and Specificity ,Pattern Recognition, Automated ,Artificial Intelligence ,Robustness (computer science) ,Motion estimation ,Image Interpretation, Computer-Assisted ,Photography ,Computer vision ,Active contour model ,business.industry ,Reproducibility of Results ,Signal Processing, Computer-Assisted ,Image Enhancement ,Computer Graphics and Computer-Aided Design ,Object detection ,Subtraction Technique ,Video tracking ,Polygon ,Artificial intelligence ,business ,Algorithm ,Algorithms ,Software - Abstract
In this paper, we first reconsider, in a different light, the addition of a prediction step to active contour-based visual tracking using an optical flow and clarify the local computation of the latter along the boundaries of continuous active contours with appropriate regularizers. We subsequently detail our contribution of computing an optical flow-based prediction step directly from the parameters of an active polygon, and of exploiting it in object tracking. This is in contrast to an explicitly separate computation of the optical flow and its ad hoc application. It also provides an inherent regularization effect resulting from integrating measurements along polygon edges. As a result, we completely avoid the need of adding ad hoc regularizing terms to the optical flow computations, and the inevitably arbitrary associated weighting parameters. This direct integration of optical flow into the active polygon framework distinguishes this technique from most previous contour-based approaches, where regularization terms are theoretically, as well as practically, essential. The greater robustness and speed due to a reduced number of parameters of this technique are additional and appealing features.
- Published
- 2005
- Full Text
- View/download PDF
32. Stochastic differential equations and geometric flows
- Author
-
Anthony Yezzi, Hamid Krim, and Gozde Unal
- Subjects
Stochastic differential equation ,Partial differential equation ,Differential equation ,Mathematical analysis ,Heat equation ,Image processing ,Image segmentation ,Computer Graphics and Computer-Aided Design ,Software ,Smoothing ,Mathematics ,Shape analysis (digital geometry) - Abstract
In recent years, curve evolution, applied to a single contour or to the level sets of an image via partial differential equations, has emerged as an important tool in image processing and computer vision. Curve evolution techniques have been utilized in problems such as image smoothing, segmentation, and shape analysis. We give a local stochastic interpretation of the basic curve smoothing equation, the so-called geometric heat equation, and show that this evolution amounts to a tangential diffusion movement of the particles along the contour. Moreover, assuming that a priori information about the shapes of objects in an image is known, we present modifications of the geometric heat equation designed to preserve certain features in these shapes while removing noise. We also show how these new flows may be applied to smooth noisy curves without destroying their larger scale features, in contrast to the original geometric heat flow which tends to circularize any closed curve.
- Published
- 2002
- Full Text
- View/download PDF
33. Image segmentation and edge enhancement with stabilized inverse diffusion equations
- Author
-
Alan S. Willsky, Hamid Krim, and Ilya Pollak
- Subjects
Diffusion equation ,Image texture ,Differential equation ,Ordinary differential equation ,Mathematical analysis ,Geometry ,Image processing ,Image segmentation ,Inverse problem ,Computer Graphics and Computer-Aided Design ,Software ,Numerical stability ,Mathematics - Abstract
We introduce a family of first-order multidimensional ordinary differential equations (ODEs) with discontinuous right-hand sides and demonstrate their applicability in image processing. An equation belonging to this family is an inverse diffusion everywhere except at local extrema, where some stabilization is introduced. For this reason, we call these equations "stabilized inverse diffusion equations" (SIDEs). Existence and uniqueness of solutions, as well as stability, are proven for SIDEs. A SIDE in one spatial dimension may be interpreted as a limiting case of a semi-discretized Perona-Malik equation (1990, 1994). In an experiment, SIDEs are shown to suppress noise while sharpening edges present in the input signal. Their application to image segmentation is also demonstrated.
- Published
- 2000
- Full Text
- View/download PDF
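Entry 33 above notes that a one-dimensional SIDE can be interpreted as a limiting case of a semi-discretized Perona-Malik equation. The sketch below implements only that reference point, a semi-discrete 1-D Perona-Malik evolution, not a SIDE with its discontinuous right-hand side; the diffusivity, step size, and other parameters are arbitrary choices.

```python
import numpy as np

def perona_malik_1d(signal, n_steps=200, dt=0.1, K=0.5):
    """Semi-discrete 1-D Perona-Malik evolution with an edge-stopping
    diffusivity g(s) = 1 / (1 + (s/K)^2); end points are held fixed."""
    u = signal.astype(float).copy()
    for _ in range(n_steps):
        d = np.diff(u)                            # forward differences
        flux = d / (1.0 + (d / K) ** 2)           # g(|d|) * d
        u[1:-1] += dt * (flux[1:] - flux[:-1])    # discrete divergence of the flux
    return u

# noisy step edge: flat regions are smoothed while the edge is largely preserved
rng = np.random.default_rng(0)
x = np.r_[np.zeros(100), np.ones(100)] + 0.1 * rng.standard_normal(200)
y = perona_malik_1d(x)
```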
34. Multiscale segmentation and anomaly enhancement of SAR imagery
- Author
-
William Clement Karl, W.W. Irving, Charles Fosgate, Hamid Krim, and Alan S. Willsky
- Subjects
Synthetic aperture radar ,Pixel ,Contextual image classification ,business.industry ,Computer science ,Image processing ,Terrain ,Pattern recognition ,Image segmentation ,Computer Graphics and Computer-Aided Design ,law.invention ,Speckle pattern ,Computer Science::Graphics ,law ,Radar imaging ,Clutter ,Computer vision ,Artificial intelligence ,Radar ,business ,Physics::Atmospheric and Oceanic Physics ,Software - Abstract
We present efficient multiscale approaches to the segmentation of natural clutter, specifically grass and forest, and to the enhancement of anomalies in synthetic aperture radar (SAR) imagery. The methods we propose exploit the coherent nature of SAR sensors. In particular, they take advantage of the characteristic statistical differences in imagery of different terrain types, as a function of scale, due to radar speckle. We employ a class of multiscale stochastic processes that provide a powerful framework for describing random processes and fields that evolve in scale. We build models representative of each category of terrain of interest (i.e., grass and forest) and employ them in directing decisions on pixel classification, segmentation, and anomalous behaviour. The scale-autoregressive nature of our models allows extremely efficient calculation of likelihoods for different terrain classifications over windows of SAR imagery. We subsequently use these likelihoods as the basis for both image pixel classification and grass-forest boundary estimation. In addition, anomaly enhancement is possible with minimal additional computation. Specifically, the residuals produced by our models in predicting SAR imagery from coarser scale images are theoretically uncorrelated. As a result, potentially anomalous pixels and regions are enhanced and pinpointed by noting regions whose residuals display a high level of correlation throughout scale. We evaluate the performance of our techniques through testing on 0.3-m resolution SAR data gathered with Lincoln Laboratory's millimeter-wave SAR.
- Published
- 1997
- Full Text
- View/download PDF
35. Subspace Learning of Dynamics on a Shape Manifold: A Generative Modeling Approach
- Author
-
Yi, Sheng and Krim, Hamid
- Published
- 2014
- Full Text
- View/download PDF
36. Human Activity as a Manifold-Valued Random Process.
- Author
-
Yi, Sheng, Krim, Hamid, and Norris, Larry K.
- Subjects
MANIFOLDS (Mathematics) ,STOCHASTIC processes ,HUMAN activity recognition ,NONLINEAR theories ,STOCHASTIC models ,FEATURE extraction ,WIENER processes - Abstract
Most of the previous shape-based human activity models are built with either a linear assumption or an extrinsic interpretation of the nonlinear geometry of the shape space, both of which proved to be problematic on account of the nonlinear intrinsic geometry of the associated shape spaces. In this paper, we propose an intrinsic stochastic modeling of human activity on a shape manifold. More importantly, within an elegant and theoretically sound framework, our work effectively bridges the nonlinear modeling of human activity on a nonlinear space, with the classic stochastic modeling in a Euclidean space, and thereby provides a foundation for a more effective and accurate analysis of the nonlinear feature space of activity models. From a video sequence, human activity is extracted as a sequence of shapes. Such a sequence is considered as one realization of a random process on a shape manifold. Different activities are then modeled as manifold valued random processes with different distributions. To address the problem of stochastic modeling on a manifold, we first construct a nonlinear invertible map of a manifold valued process to a Euclidean process. The resulting process is then modeled as a global or piecewise Brownian motion. The mapping from a manifold to a Euclidean space is known as a stochastic development. The advantage of such a technique is that it yields a one–one correspondence, and the resulting Euclidean process intrinsically captures the curvature on the original manifold. The proposed algorithm is validated on two activity databases and compared with the related works on each of these. The substantiating results demonstrate the viability and high-accuracy of our modeling technique in characterizing and classifying different activities. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
37. A Kullback–Leibler Divergence Approach to Blind Image Restoration.
- Author
-
Seghouane, Abd-Krim
- Subjects
IMAGE reconstruction ,IMAGE processing ,MAXIMUM likelihood statistics ,MATHEMATICAL optimization ,DISTRIBUTION (Probability theory) ,INFORMATION theory ,ALGORITHMS ,GAUSSIAN processes - Abstract
A new algorithm for maximum-likelihood blind image restoration is presented in this paper. It is obtained by modeling the original image and the additive noise as multivariate Gaussian processes with unknown covariance matrices. The blurring process is specified by its point spread function, which is also unknown. Estimates of the original image and the blur are derived by alternating minimization of the Kullback–Leibler divergence between a model family of probability distributions defined using the linear image degradation model and a desired family of probability distributions constrained to be concentrated on the observed data. The algorithm has the advantage of providing closed-form expressions for the parameters to be updated and of converging after only a few iterations. A simulation example that illustrates the effectiveness of the proposed algorithm is presented. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
38. From Point to Local Neighborhood: Polyp Detection in CT Colonography Using Geodesic Ring Neighborhoods.
- Author
-
Ong, Ju Lynn and Seghouane, Abd-Krim
- Subjects
TOMOGRAPHY ,GEODESICS ,CURVATURE ,COMPUTER-aided design ,SURFACE roughness ,ESTIMATION theory ,COLON cancer - Abstract
Existing polyp detection methods rely heavily on curvature-based characteristics to differentiate between lesions. These assume that the discrete triangulated surface mesh or volume closely approximates a smooth continuous surface. However, this is often not the case, and because curvature is computed as a local feature and a second-order differential quantity, the presence of noise significantly affects its estimation. For this reason, a more global feature is required to provide an accurate description of the surface at hand. In this paper, a novel method incorporating a local neighborhood around the centroid of a surface patch is proposed. This is done using geodesic rings which accumulate curvature information in a neighborhood around this centroid. This geodesic-ring neighborhood approximates a single smooth, continuous surface upon which curvature and orientation estimation methods can be applied. A new global shape index, S, is also introduced and computed. These curvature and orientation values will be used to classify the surface as either a bulbous polyp, ridge-like fold or semiplanar structure. Experimental results show that this method is promising (100% sensitivity, 100% specificity for lesions >10 mm) for distinguishing between bulbous polyps, folds and planar-like structures in the colon. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
39. Stochastic Differential Equations and Geometric Flows.
- Author
-
Unal, Gozde, Krim, Hamid, and Yezzi, Anthony
- Subjects
DIGITAL image processing ,STOCHASTIC difference equations - Abstract
In recent years, curve evolution, applied to a single contour or to the level sets of an image via partial differential equations, has emerged as an important tool in image processing and computer vision. Curve evolution techniques have been utilized in problems such as image smoothing, segmentation, and shape analysis. We give a local stochastic interpretation of the basic curve smoothing equation, the so called geometric heat equation, and show that this evolution amounts to a tangential diffusion movement of the particles along the contour. Moreover, assuming that a priori information about the shapes of objects in an image is known, we present modifications of the geometric heat equation designed to preserve certain features in these shapes while removing noise. We also show how these new flows may be applied to smooth noisy curves without destroying their larger scale features, in contrast to the original geometric heat flow which tends to circularize any closed curve. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
40. Multiphase Joint Segmentation-Registration and Object Tracking for Layered Images
- Author
-
Chen, Ping-Feng, Krim, Hamid, and Mendoza, Olga L.
- Published
- 2010
- Full Text
- View/download PDF
41. Multiscale signal enhancement: beyond the normality and independence assumption
- Author
-
He, Y. and Krim, H.
- Published
- 2002
- Full Text
- View/download PDF
42. Multiscale Signal Enhancement: Beyond the Normality and Independence Assumption.
- Author
-
Yun He and Krim, Hamid
- Subjects
NONLINEAR functional analysis ,SIGNAL processing ,WAVELETS (Mathematics) - Abstract
Presents a study which examined a novel nonlinear filtering technique for denoising or signal enhancement in a wavelet-based framework. Theoretical background; Computational approach used; Findings.
- Published
- 2002
43. The Role of Redundant Bases and Shrinkage Functions in Image Denoising.
- Author
-
Hel-Or, Yacov and Ben-Artzi, Gil
- Subjects
IMAGE denoising ,IMAGE reconstruction ,SET functions ,NOISE measurement ,NOISE control ,WAVELET transforms - Abstract
Wavelet denoising is a classical and effective approach for reducing noise in images and signals. Suggested in 1994, this approach is carried out by rectifying the coefficients of a noisy image, in the transform domain, using a set of shrinkage functions (SFs). A plethora of papers deals with the optimal shape of the SFs and the transform used. For example, it is widely known that applying SFs in a redundant basis improves the results. However, it is barely known that the shape of the SFs should be changed when the transform used is redundant. In this paper, we introduce a complete picture of the interrelations between the transform used, the optimal shrinkage functions, and the domains in which they are optimized. We suggest three schemes for optimizing the SFs and provide bounds of the remaining noise, in each scheme, with respect to the other alternatives. In particular, we show that for subband optimization, where each SF is optimized independently for a particular band, optimizing the SFs in the spatial domain is always better than or equal to optimizing the SFs in the transform domain. Furthermore, for redundant bases, we provide the expected denoising gain that can be achieved, relative to the unitary basis, as a function of the redundancy rate. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
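For entry 43 above, here is the classical unitary-basis wavelet shrinkage baseline against which the paper's analysis is framed: a one-level Haar transform with soft thresholding of the detail coefficients. The redundant-basis and subband-optimized schemes discussed in the abstract are not shown, and the function names are illustrative.

```python
import numpy as np

def haar_analysis(x):
    """One-level orthonormal Haar transform of an even-length 1-D signal."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def haar_synthesis(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft_shrink(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(x, threshold):
    """Unitary-basis wavelet shrinkage: transform, shrink details, invert."""
    a, d = haar_analysis(np.asarray(x, dtype=float))
    return haar_synthesis(a, soft_shrink(d, threshold))
```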
44. Shearlet Enhanced Snapshot Compressive Imaging.
- Author
-
Yang, Peihao, Kong, Linghe, Liu, Xiao-Yang, Yuan, Xin, and Chen, Guihai
- Subjects
IMAGE reconstruction algorithms ,IMAGE representation ,IMAGE reconstruction ,FREQUENCY-domain analysis ,CAMERAS - Abstract
Snapshot compressive imaging (SCI) is a promising approach to capture high-dimensional data with low-dimensional sensors. With modest modifications to off-the-shelf cameras, SCI cameras encode multiple frames into a single measurement frame. These correlated frames can then be retrieved by reconstruction algorithms. Existing reconstruction algorithms suffer from low speed or low fidelity. In this paper, we propose a novel reconstruction algorithm, namely, Shearlet enhanced Snapshot Compressive Imaging (SeSCI), which exploits the sparsity of the image representation in both the frequency domain and the shearlet domain. Towards this end, we first derive our SeSCI algorithm under the alternating direction method of multipliers (ADMM) framework. We then propose an efficient solution of the SeSCI algorithm. Moreover, we prove that the improved SeSCI algorithm converges to a fixed point. Experimental results on both synthetic data and real data captured by SCI cameras demonstrate the significant advantages of SeSCI, which outperforms the conventional algorithms by more than 2 dB in PSNR. At the same time, SeSCI achieves a speed-up of more than $100\times$ over the state-of-the-art algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
45. MDLatLRR: A Novel Decomposition Method for Infrared and Visible Image Fusion.
- Author
-
Li, Hui, Wu, Xiao-Jun, and Kittler, Josef
- Subjects
IMAGE fusion ,DECOMPOSITION method ,INFRARED imaging ,IMAGE processing ,FEATURE extraction ,MATRIX decomposition - Abstract
Image decomposition is crucial for many image processing tasks, as it allows salient features to be extracted from source images. A good image decomposition method can lead to better performance, especially in image fusion tasks. We propose a multi-level image decomposition method based on latent low-rank representation (LatLRR), which is called MDLatLRR. This decomposition method is applicable to many image processing fields. In this paper, we focus on the image fusion task. We build a novel image fusion framework based on MDLatLRR which is used to decompose source images into detail parts (salient features) and base parts. A nuclear-norm based fusion strategy is used to fuse the detail parts, and the base parts are fused by an averaging strategy. Compared with other state-of-the-art fusion methods, the proposed algorithm exhibits better fusion performance in both subjective and objective evaluation. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
46. Domain-Transformable Sparse Representation for Anomaly Detection in Moving-Camera Videos.
- Author
-
Jardim, Eric, Thomaz, Lucas A., da Silva, Eduardo A. B., and Netto, Sergio L.
- Subjects
ANOMALY detection (Computer security) ,MATRIX decomposition ,VIDEOS ,SPARSE matrices ,DATABASES - Abstract
This paper presents a special matrix factorization based on sparse representation that detects anomalies in video sequences generated with moving cameras. Such a representation is made by associating the frames of the target video, that is, a sequence to be tested for the presence of anomalies, with the frames of an anomaly-free reference video, which is a previously validated sequence. This factorization is done by a sparse coefficient matrix, and any target-video anomaly is encapsulated into a residue term. In order to cope with camera trepidations, domain transformations are incorporated into the sparse representation process. Approximations of the transformed-domain optimization problem are introduced to turn it into a feasible iterative process. Results obtained from a comprehensive video database acquired with moving cameras in a visually cluttered environment indicate that the proposed algorithm provides a better geometric registration between reference and target videos, greatly improving the overall performance of the anomaly-detection system. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
47. Joint Learning of Fuzzy k-Means and Nonnegative Spectral Clustering With Side Information.
- Author
-
Zhang, Rui, Nie, Feiping, Guo, Muhan, Wei, Xian, and Li, Xuelong
- Subjects
K-means clustering ,FUZZY logic ,CLUSTER analysis (Statistics) ,ALGORITHMS ,DATA analysis - Abstract
As one of the most widely used clustering techniques, fuzzy $k$-means (FKM) assigns every data point to each cluster with a certain degree of membership. However, the conventional FKM approach relies on a squared data-fitting term, which is sensitive to outliers and ignores prior information. In this paper, we develop a novel and robust fuzzy $k$-means clustering algorithm, namely, joint learning of fuzzy $k$-means and nonnegative spectral clustering with side information. The proposed method combines fuzzy $k$-means and nonnegative spectral clustering into a unified model, which can further exploit the prior knowledge of data pairs such that both the quality of the affinity graph and the clustering performance can be improved. In addition, for the purpose of enhancing the robustness, the adaptive loss function is adopted in the objective function, since it smoothly interpolates between the $\ell_{1}$-norm and the $\ell_{2}$-norm. Finally, experimental results on benchmark datasets verify the effectiveness and the superiority of our clustering method. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
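For entry 47 above, the following is a plain fuzzy k-means (fuzzy c-means) sketch with the standard membership and weighted-centroid updates. The nonnegative spectral clustering term, the pairwise side information, and the adaptive loss of the proposed joint model are not included; `fuzzy_kmeans` is an illustrative name and the fuzzifier m = 2 is an arbitrary choice.

```python
import numpy as np

def fuzzy_kmeans(X, n_clusters, m=2.0, n_iter=100, eps=1e-9, seed=0):
    """Plain fuzzy k-means: alternate soft membership updates and
    membership-weighted centroid updates with fuzzifier m."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
        U = dist ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers
```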
48. Kernel Distance Metric Learning Using Pairwise Constraints for Person Re-Identification.
- Author
-
Nguyen, Bac and De Baets, Bernard
- Subjects
IDENTIFICATION ,KERNEL functions ,VECTOR analysis ,IMAGING systems ,IMAGE processing - Abstract
Person re-identification is a fundamental task in many computer vision and image understanding systems. Due to appearance variations from different camera views, person re-identification still poses an important challenge. In the literature, KISSME has already been introduced as an effective distance metric learning method using pairwise constraints to improve the re-identification performance. Computationally, it only requires two inverse covariance matrix estimations. However, the linear transformation induced by KISSME is not powerful enough for more complex problems. We show that KISSME can be kernelized, resulting in a nonlinear transformation, which is suitable for many real-world applications. Moreover, the proposed kernel method can be used for learning distance metrics from structured objects without having a vectorial representation. The effectiveness of our method is validated on five publicly available data sets. To further apply the proposed kernel method efficiently when data are collected sequentially, we introduce a fast incremental version that learns a dissimilarity function in the feature space without estimating the inverse covariance matrices. The experiments show that the latter variant can obtain competitive results in a computationally efficient manner. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
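For entry 48 above, this sketches the linear KISSME construction the abstract summarizes, a metric built from the two inverse covariance estimates of pairwise feature differences; the kernelized and incremental extensions proposed in the paper are not shown, and the function names are assumptions.

```python
import numpy as np

def kissme_metric(similar_diffs, dissimilar_diffs):
    """Linear KISSME: difference of the inverse covariances of pairwise feature
    differences taken from similar and from dissimilar pairs."""
    cov_sim = np.cov(similar_diffs, rowvar=False)
    cov_dis = np.cov(dissimilar_diffs, rowvar=False)
    return np.linalg.inv(cov_sim) - np.linalg.inv(cov_dis)

def kissme_distance(M, x, y):
    """Squared Mahalanobis-like distance induced by the learned matrix M."""
    diff = x - y
    return diff @ M @ diff
```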
49. Convolutional Sparse and Low-Rank Coding-Based Image Decomposition.
- Author
-
Zhang, He and Patel, Vishal M.
- Subjects
MATHEMATICAL convolutions ,CARICATURES & cartoons ,IMAGE processing ,FILTERING software ,ALGORITHMS - Abstract
We propose novel convolutional sparse and low-rank coding-based methods for cartoon and texture decomposition. In our method, we first learn a set of generic filters that can efficiently represent cartoon- and texture-type images. Then, using these learned filters, we propose two optimization frameworks to decompose a given image into cartoon and texture components: convolutional sparse coding-based image decomposition; and convolutional low-rank coding-based image decomposition. By working directly on the whole image, the proposed image separation algorithms do not need to divide the image into overlapping patches for learning local dictionaries. The shift-invariance property is directly modeled into the objective function for learning filters. Extensive experiments show that the proposed methods perform favorably compared with state-of-the-art image separation methods. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
50. Band Selection for Nonlinear Unmixing of Hyperspectral Images as a Maximal Clique Problem.
- Author
-
Imbiriba, Tales, Bermudez, Jose Carlos Moreira, and Richard, Cedric
- Subjects
HYPERSPECTRAL imaging systems ,NONLINEAR statistical models ,SUBGRAPHS ,KERNEL (Mathematics) ,COMPUTER simulation - Abstract
Kernel-based nonlinear mixing models have been applied to unmix spectral information of hyperspectral images when the type of mixing occurring in the scene is too complex or unknown. Such methods, however, usually require the inversion of matrices of sizes equal to the number of spectral bands. Reducing the computational load of these methods remains a challenge in large-scale applications. This paper proposes a centralized band selection (BS) method for supervised unmixing in the reproducing kernel Hilbert space. It is based upon the coherence criterion, which sets the largest value allowed for correlations between the basis kernel functions characterizing the selected bands in the unmixing model. We show that the proposed BS approach is equivalent to solving a maximum clique problem, i.e., searching for the biggest complete subgraph in a graph. Furthermore, we devise a strategy for selecting the coherence threshold and the Gaussian kernel bandwidth using coherence bounds for linearly independent bases. Simulation results illustrate the efficiency of the proposed method. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
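For entry 50 above, the sketch below renders the coherence-to-clique reduction in its simplest form: candidate bands whose Gaussian-kernel coherence stays at or below a threshold are connected in a graph, and a band set is read off as the largest clique. It is a toy rendering under stated assumptions (band descriptors as plain feature vectors, exhaustive clique enumeration via networkx), not the paper's centralized BS procedure or its threshold and bandwidth selection strategy.

```python
import numpy as np
import networkx as nx

def select_bands(band_features, sigma, mu0):
    """Connect two bands when the Gaussian-kernel coherence of their descriptors
    is at most mu0, then return the largest clique (exact, but exponential in
    the worst case for dense graphs)."""
    n = len(band_features)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            gap = band_features[i] - band_features[j]
            coherence = np.exp(-np.dot(gap, gap) / (2.0 * sigma ** 2))
            if coherence <= mu0:
                G.add_edge(i, j)
    return max(nx.find_cliques(G), key=len)
```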