Search Results (35 results)
2. B-Spline Explicit Active Surfaces: An Efficient Framework for Real-Time 3-D Region-Based Segmentation.
- Author
Barbosa, Daniel, Dietenbeck, Thomas, Schaerer, Joel, D'hooge, Jan, Friboulet, Denis, and Bernard, Olivier
- Subjects
IMAGE segmentation, REAL-time computing, ALGORITHMS, ELECTRONIC data processing, DIAGNOSTIC imaging, IMAGE analysis, IMAGE processing
- Abstract
A new formulation of active contours based on explicit functions has recently been suggested. This novel framework allows real-time 3-D segmentation since it reduces the dimensionality of the segmentation problem. In this paper, we propose a B-spline formulation of this approach, which further improves the computational efficiency of the algorithm. We also show that this framework allows evolving the active contour using local region-based terms, thereby overcoming the limitations of the original method while preserving computational speed. The feasibility of real-time 3-D segmentation is demonstrated using simulated and medical data such as liver computed tomography and cardiac ultrasound images. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
3. A Psychovisual Quality Metric in Free-Energy Principle.
- Author
Zhai, Guangtao, Wu, Xiaolin, Yang, Xiaokang, Lin, Weisi, and Zhang, Wenjun
- Subjects
VISUAL perception, ENTROPY, IMAGE analysis, NEUROSCIENCES, COGNITION, IMAGE processing, ALGORITHMS, IMAGE quality analysis
- Abstract
In this paper, we propose a new psychovisual quality metric of images based on recent developments in brain theory and neuroscience, particularly the free-energy principle. The perception and understanding of an image is modeled as an active inference process, in which the brain tries to explain the scene using an internal generative model. The psychovisual quality is thus closely related to how accurately visual sensory data can be explained by the generative model, and the upper bound of the discrepancy between the image signal and its best internal description is given by the free energy of the cognition process. Therefore, the perceptual quality of an image can be quantified using the free energy. Constructively, we develop a reduced-reference free-energy-based distortion metric (FEDM) and a no-reference free-energy-based quality metric (NFEQM). The FEDM and the NFEQM are nearly invariant to many global systematic deviations in geometry and illumination that hardly affect visual quality, for which existing image quality metrics wrongly predict severe quality degradation. Although they use very limited information about the reference image, or none at all, the FEDM and the NFEQM are highly competitive with the full-reference SSIM image quality metric on images in the popular LIVE database. Moreover, the FEDM and the NFEQM correctly measure the visual quality of the output of some model-based image processing algorithms, for which competing metrics often contradict viewers' opinions. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
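To make the free-energy idea above concrete, here is a loose toy sketch, not the authors' FEDM or NFEQM: each pixel is predicted from its causal neighbors by a single least-squares linear model (a stand-in for the internal generative model), and the mean absolute prediction residual serves as a crude "surprise" score. The function name and the 4-neighbor predictor are assumptions.

```python
import numpy as np

def free_energy_proxy(img: np.ndarray) -> float:
    """Crude no-reference 'surprise' score: fit one global linear
    predictor of each pixel from 4 causal neighbors, return the
    mean absolute prediction residual (higher = harder to explain)."""
    x = img.astype(np.float64)
    # Causal neighbors: left, up-left, up, up-right.
    nbrs = np.stack([x[1:-1, :-2], x[:-2, :-2], x[:-2, 1:-1], x[:-2, 2:]],
                    axis=-1)
    target = x[1:-1, 1:-1]
    A = nbrs.reshape(-1, 4)
    b = target.ravel()
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    residual = b - A @ coef
    return float(np.mean(np.abs(residual)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
    noisy = smooth + 0.1 * rng.standard_normal((64, 64))
    # The noisy image is harder for the model to explain, so it scores higher.
    print(free_energy_proxy(smooth), free_energy_proxy(noisy))
```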
4. Fitting Multiple Connected Ellipses to an Image Silhouette Hierarchically.
- Author
Xu, Richard Yi Da and Kemp, Michael
- Subjects
CURVE fitting, CONIC sections, ALGORITHMS, NUMERICAL analysis, IMAGE analysis, IMAGE processing
- Abstract
In this paper, we seek to fit a model, specified in terms of connected ellipses, to an image silhouette. Previous algorithms for this problem are sensitive to the initial guess and may converge to a wrong solution when they minimize the objective function for the entire ellipse structure in one step. We present an algorithm that overcomes these issues. Our first step is to temporarily ignore the connections and refine the initial guess using unconstrained Expectation-Maximization (EM) for mixture Gaussian densities. Then the ellipses are reconnected linearly. Lastly, we apply the Levenberg-Marquardt algorithm to fine-tune the ellipse shapes to best align with the contour. The fitting is achieved in a hierarchical manner based upon the joints of the model. Experiments show that our algorithm can robustly fit a complex ellipse structure to a corresponding shape for several applications. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
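A minimal sketch of the first stage described above: fit an unconstrained Gaussian mixture to the silhouette's foreground pixels with EM and read one ellipse off each component's covariance. The linear reconnection and Levenberg-Marquardt refinement stages are omitted; the use of scikit-learn and the 1-sigma axis lengths are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_ellipses(silhouette: np.ndarray, n_ellipses: int):
    """First stage only: unconstrained EM over foreground pixel
    coordinates; each Gaussian component yields one ellipse.
    (Reconnection and Levenberg-Marquardt refinement omitted.)"""
    ys, xs = np.nonzero(silhouette)
    pts = np.column_stack([xs, ys]).astype(float)
    gmm = GaussianMixture(n_components=n_ellipses, covariance_type="full",
                          random_state=0).fit(pts)
    ellipses = []
    for mean, cov in zip(gmm.means_, gmm.covariances_):
        evals, evecs = np.linalg.eigh(cov)            # eigenvalues ascending
        angle = np.arctan2(evecs[1, 1], evecs[0, 1])  # major-axis direction
        a, b = 2.0 * np.sqrt(evals[::-1])             # ~1-sigma semi-axes
        ellipses.append((tuple(mean), a, b, angle))
    return ellipses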
5. Constrained and Dimensionality-Independent Path Openings.
- Author
Luengo Hendriks, Cris L.
- Subjects
IMAGE analysis, MORPHOLOGY, IMAGE processing, ALGORITHMS, LENGTH measurement
- Abstract
Path openings and closings are morphological operations with flexible line segments as structuring elements. These line segments have the ability to adapt to local image structures and can be used to detect lines that are not perfectly straight. They are also a convenient and efficient alternative to straight line segments as structuring elements when the exact orientation of lines in the image is not known. These path operations are defined by an adjacency relation, which typically allows for lines that are approximately horizontal, vertical or diagonal. However, because this definition allows zig-zag lines, diagonal paths can be much shorter than the corresponding horizontal or vertical paths. This undoubtedly causes problems when attempting to use path operations for length measurements. This paper 1) introduces a dimensionality-independent implementation of the path opening and closing algorithm by Appleton and Talbot, 2) proposes a constraint on the path operations to improve their ability to perform length measurements, and 3) shows how to use path openings and closings in a granulometry to obtain the length distribution of elongated structures directly from a gray-value image, without a need for binarizing the image and identifying individual objects. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
6. A Theory of Phase Singularities for Image Representation and its Applications to Object Tracking and Image Matching.
- Author
Qiao, Yu, Wang, Wei, Minematsu, Nobuaki, Liu, Jianzhuang, Takeda, Mitsuo, and Tang, Xiaoou
- Subjects
IMAGE analysis, INVARIANTS (Mathematics), NOISE, ALGORITHMS, IMAGE processing
- Abstract
This paper studies phase singularities (PSs) for image representation. We show that PSs calculated with Laguerre-Gauss filters contain important information and provide a useful tool for image analysis. PSs are invariant to image translation and rotation. We introduce several invariant features to characterize the core structures around PSs and analyze the stability of PSs to noise addition and scale change. We also study the characteristics of PSs in a scale space, which lead to a method to select key scales along phase singularity curves. We demonstrate two applications of PSs: object tracking and image matching. In object tracking, we use the iterative closest point algorithm to determine the correspondences of PSs between two adjacent frames. The use of PSs allows us to precisely determine the motions of tracked objects. In image matching, we combine PSs with the scale-invariant feature transform (SIFT) descriptor to deal with the variations between two images and examine the proposed method on a benchmark database. The results indicate that our method can find more correct matching pairs with higher repeatability rates than some well-known methods. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
7. Formulating Face Verification With Semidefinite Programming.
- Author
Yan, Shuicheng, Liu, Jianzhuang, Tang, Xiaoou, and Huang, Thomas S.
- Subjects
IMAGE processing, IMAGING systems, IMAGE analysis, INFORMATION processing, ELECTRONIC data processing, ALGORITHMS, INVARIANT subspaces, FUNCTIONAL analysis, DISCRIMINANT analysis
- Abstract
This paper presents a unified solution to three unsolved problems in face verification with subspace learning techniques: selection of the verification threshold, automatic determination of the subspace dimension, and deduction of feature fusing weights. In contrast to previous algorithms, which search for the projection matrix directly, our new algorithm investigates a similarity metric matrix (SMM). Given a verification threshold, this matrix is learned by a semidefinite programming approach, subject to the constraints that kindred pairs have similarity larger than the threshold and inhomogeneous pairs have similarity smaller than it. Then, the subspace dimension and the feature fusing weights are simultaneously inferred from the singular value decomposition of the derived SMM. In addition, weighted and tensor extensions are proposed to further improve the algorithmic effectiveness and efficiency, respectively. Essentially, the verification is conducted within an affine subspace in this new algorithm and is, hence, called the affine subspace for verification (ASV). Extensive experiments show that the ASV can achieve encouraging face verification accuracy in comparison to other subspace algorithms, even without the need to explore any parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
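A hedged sketch of the core idea above: learning a positive semidefinite similarity metric matrix by semidefinite programming, here with cvxpy. The bilinear similarity form x'My, the fixed threshold b, the unit margin, and the trace objective are assumptions, not the paper's exact formulation; per the abstract, the subspace dimension would then be read off the SVD of the learned matrix.

```python
import numpy as np
import cvxpy as cp

def learn_smm(kindred, inhomogeneous, d, b=0.0, margin=1.0):
    """Learn a PSD similarity metric matrix M such that x'My >= b + margin
    for kindred pairs and x'My <= b - margin for inhomogeneous pairs.
    kindred / inhomogeneous: lists of (x, y) pairs of d-dim numpy vectors."""
    M = cp.Variable((d, d), PSD=True)
    cons = [x @ M @ y >= b + margin for x, y in kindred]
    cons += [x @ M @ y <= b - margin for x, y in inhomogeneous]
    prob = cp.Problem(cp.Minimize(cp.trace(M)), cons)
    prob.solve()
    return M.value

# After solving, np.linalg.svd(M_value) would give the spectrum from which
# a subspace dimension can be chosen (keep the significant singular values).
```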
8. A Primal-Dual Active-Set Method for Non-Negativity Constrained Total Variation Deblurring Problems.
- Author
Krishnan, D., Lin, Ping, and Yip, Andy M.
- Subjects
IMAGE processing, IMAGING systems, IMAGE analysis, MATHEMATICAL analysis, INFORMATION processing, NUMERICAL analysis, ALGORITHMS
- Abstract
This paper studies image deblurring problems using a total variation-based model, with a non-negativity constraint. The addition of the non-negativity constraint improves the quality of the solutions, but makes the solution process a difficult one. The contribution of our work is a fast and robust numerical algorithm to solve the non-negatively constrained problem. To overcome the nondifferentiability of the total variation norm, we formulate the constrained deblurring problem as a primal-dual program which is a variant of the formulation proposed by Chan, Golub, and Mulet for unconstrained problems. Here, dual refers to a combination of the Lagrangian and Fenchel duals. To solve the constrained primal-dual program, we use a semi-smooth Newton's method. We exploit the relationship between the semi-smooth Newton's method and the primal-dual active set method to achieve considerable simplification of the computations. The main advantages of our proposed scheme are: no parameters need significant adjustment, a standard inverse preconditioner works very well, quadratic rate of local convergence (theoretical and numerical), numerical evidence of global convergence, and high accuracy of solving the optimality system. The scheme shows robustness of performance over a wide range of parameters. A comprehensive set of numerical comparisons are provided against other methods to solve the same problem which show the speed and accuracy advantages of our scheme. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
9. Image Denoising Based on Wavelets and Multifractals for Singularity Detection.
- Author
Zhong, Junmei and Ning, Ruola
- Subjects
IMAGE processing, ALGORITHMS, MULTIFRACTALS, IMAGING systems, IMAGE analysis, DIMENSION theory (Topology)
- Abstract
This paper presents a very efficient algorithm for image denoising based on wavelets and multifractals for singularity detection. A challenge of image denoising is how to preserve the edges of an image when reducing noise. By modeling the intensity surface of a noisy image as statistically self-similar multifractal processes and taking advantage of the multiresolution analysis with wavelet transform to exploit the local statistical self-similarity at different scales, the pointwise singularity strength value characterizing the local singularity at each scale was calculated. By thresholding the singularity strength, wavelet coefficients at each scale were classified into two categories: the edge-related and regular wavelet coefficients and the irregular coefficients. The irregular coefficients were denoised using an approximate minimum mean-squared error (MMSE) estimation method, while the edge-related and regular wavelet coefficients were smoothed using the fuzzy weighted mean (FWM) filter aiming at preserving the edges and details when reducing noise. Furthermore, to make the FWM-based filtering more efficient for noise reduction at the lowest decomposition level, the MMSE-based filtering was performed as the first pass of denoising followed by performing the FWM-based filtering. Experimental results demonstrated that this algorithm could achieve both good visual quality and high PSNR for the denoised images. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
10. Image Registration Using Log-Polar Mappings for Recovery of Large-Scale Similarity and Projective Transformations.
- Author
Zokai, Siavash and Wolberg, George
- Subjects
IMAGE processing, MATHEMATICAL transformations, IMAGING systems, IMAGE analysis, ESTIMATION theory, ALGORITHMS
- Abstract
This paper describes a novel technique to recover large similarity transformations (rotation/scale/translation) and moderate perspective deformations among image pairs. We introduce a hybrid algorithm that features log-polar mappings and nonlinear least squares optimization. The use of log-polar techniques in the spatial domain is introduced as a preprocessing module to recover large scale changes (e.g., at least four-fold) and arbitrary rotations. Although log-polar techniques are used in the Fourier-Mellin transform to accommodate rotation and scale in the frequency domain, their use in registering images subjected to very large scale changes has not yet been exploited in the spatial domain. In this paper, we demonstrate the superior performance of the log-polar transform in featureless image registration in the spatial domain. We achieve subpixel accuracy through the use of nonlinear least squares optimization. The registration process yields the eight parameters of the perspective transformation that best aligns the two input images. Extensive testing was performed on uncalibrated real images and an array of 10,000 image pairs with known transformations derived from the Corel Stock Photo Library of royalty-free photographic images. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
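A sketch of the log-polar idea above in its frequency-domain (Fourier-Mellin) variant, which the abstract mentions: rotation and scale become translations in log-polar coordinates of the spectrum magnitude and can be recovered by phase correlation. The paper itself performs the log-polar matching in the spatial domain and refines with nonlinear least squares; grid sizes and sign conventions here are assumptions, and both images are assumed to have the same shape.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def logpolar(img, n_angles=360, n_radii=240):
    """Resample an image onto a log-polar grid about its center."""
    cy, cx = (np.array(img.shape, float) - 1) / 2.0
    r_max = min(cy, cx)
    theta = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    radii = np.exp(np.linspace(0.0, np.log(r_max), n_radii))
    r, t = np.meshgrid(radii, theta, indexing="ij")
    coords = np.array([cy + r * np.sin(t), cx + r * np.cos(t)])
    return map_coordinates(img, coords, order=1, mode="constant")

def rotation_scale(img_a, img_b, n_angles=360, n_radii=240):
    """Fourier-Mellin-style estimate: rotation and scale appear as
    shifts in the log-polar magnitude spectrum; recover them by
    phase correlation."""
    spec_a = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img_a))))
    spec_b = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img_b))))
    la = logpolar(spec_a, n_angles, n_radii)
    lb = logpolar(spec_b, n_angles, n_radii)
    X = np.fft.fft2(la) * np.conj(np.fft.fft2(lb))
    corr = np.fft.ifft2(X / (np.abs(X) + 1e-12)).real
    dr, dt = np.unravel_index(np.argmax(corr), corr.shape)
    dr = dr - n_radii if dr > n_radii // 2 else dr    # wrap to signed shifts
    dt = dt - n_angles if dt > n_angles // 2 else dt
    r_max = (min(img_a.shape) - 1) / 2.0
    scale = np.exp(dr * np.log(r_max) / (n_radii - 1))
    angle_deg = 360.0 * dt / n_angles
    return scale, angle_deg   # sign conventions depend on grid orientation
```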
11. Multiple Exposure Fusion for High Dynamic Range Image Acquisition.
- Author
Jinno, Takao and Okuda, Masahiro
- Subjects
IMAGE analysis, PIXELS, ESTIMATION theory, CALIBRATION, ALGORITHMS, IMAGE processing, IMAGE quality analysis
- Abstract
A multiple exposure fusion to enhance the dynamic range of an image is proposed. The construction of high dynamic range images (HDRIs) is performed by combining multiple images taken with different exposures and estimating the irradiance value for each pixel. This is a common process for HDRI acquisition. During this process, displacements of the images caused by object movements often yield motion blur and ghosting artifacts. To address the problem, this paper presents an efficient and accurate multiple exposure fusion technique for the HDRI acquisition. Our method simultaneously estimates displacements and occlusion and saturation regions by using maximum a posteriori estimation and constructs motion-blur-free HDRIs. We also propose a new weighting scheme for the multiple image fusion. We demonstrate that our HDRI acquisition algorithm is accurate, even for images with large motion. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
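A minimal sketch of the common HDR merging process the abstract builds on: a weighted average of per-exposure irradiance estimates for aligned, static images. The hat-shaped weight and 8-bit input range are assumptions, and the paper's actual contribution (MAP estimation of displacements, occlusion, and saturation) is not implemented here.

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge aligned LDR exposures into an irradiance map.
    Each pixel's irradiance is a weighted mean of I_k / t_k, with a
    hat weight that distrusts under- and over-exposed samples.
    (The paper's MAP motion/occlusion handling is omitted here.)"""
    acc = np.zeros(images[0].shape, np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        x = img.astype(np.float64) / 255.0
        w = 1.0 - np.abs(2.0 * x - 1.0)      # hat weight, peaks at mid-gray
        acc += w * x / t
        wsum += w
    return acc / np.maximum(wsum, 1e-8)
```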
12. Saliency and Gist Features for Target Detection in Satellite Images.
- Author
Li, Zhicheng and Itti, Laurent
- Subjects
REMOTE-sensing images, FEATURE extraction, IMAGE processing, BIOLOGICAL systems, MATHEMATICAL models, DATA visualization, IMAGE analysis, SUPPORT vector machines, ALGORITHMS
- Abstract
Reliably detecting objects in broad-area overhead or satellite images has become an increasingly pressing need, as the capabilities for image acquisition are growing rapidly. The problem is particularly difficult in the presence of large intraclass variability, e.g., finding “boats” or “buildings,” where model-based approaches tend to fail because no good model or template can be defined for the highly variable targets. This paper explores an automatic approach to detect and classify targets in high-resolution broad-area satellite images, which relies on detecting statistical signatures of targets, in terms of a set of biologically-inspired low-level visual features. Broad-area images are cut into small image chips, which are analyzed in two complementary ways: “attention/saliency” analysis exploits local features and their interactions across space, while “gist” analysis focuses on global nonspatial features and their statistics. Both feature sets are used to classify each chip as containing target(s) or not, using a support vector machine. Four experiments were performed to find “boats” (Experiments 1 and 2), “buildings” (Experiment 3) and “airplanes” (Experiment 4). In experiment 1, 14 416 image chips were randomly divided into training (300 boat, 300 nonboat) and test sets (13 816), and classification was performed on the test set (ROC area: 0.977 ± 0.003). In experiment 2, classification was performed on another test set of 11 385 chips from another broad-area image, keeping the same training set as in experiment 1 (ROC area: 0.952 ± 0.006). In experiment 3, 600 training chips (300 for each type) were randomly selected from 108 885 chips, and classification was conducted (ROC area: 0.922 ± 0.005). In experiment 4, 20 training chips (10 for each type) were randomly selected to classify the remaining 2581 chips (ROC area: 0.976 ± 0.003). The proposed algorithm outperformed the state-of-the-art SIFT, HMAX, and hidden-scale salient structure methods, and previous gist-only features in all four experiments. This study shows that the proposed target search method can reliably and effectively detect highly variable target objects in large image datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
13. Perceptual Segmentation: Combining Image Segmentation With Object Tagging.
- Author
Bergman, Ruth and Nachlieli, Hila
- Subjects
IMAGE analysis, PIXELS, ALGORITHMS, IMAGE quality in imaging systems, IMAGE processing, PHOTOGRAPHERS
- Abstract
Human observers understand the content of an image intuitively. Based upon image content, they perform many image-related tasks, such as creating slide shows and photo albums, and organizing their image archives. For example, to select photos for an album, people assess image quality based upon the main objects in the image. They modify colors in an image based upon the color of important objects, such as sky, grass or skin. Serious photographers might modify each object separately. Photo applications, in contrast, use low-level descriptors to guide similar tasks. Typical descriptors, such as color histograms, noise level, JPEG artifacts and overall sharpness, can guide an imaging application and safeguard against blunders. However, there is a gap between the outcome of such operations and the same task performed by a person. We believe that the gap can be bridged by automatically understanding the content of the image. This paper presents algorithms for automatic tagging of perceptual objects in images, including sky, skin, and foliage, which constitutes an important step toward this goal. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
14. Fast and Memory Efficient 2-D Connected Components Using Linked Lists of Line Segments.
- Author
De Bock, Johan and Philips, Wilfried
- Subjects
IMAGE processing, ALGORITHMS, DATA structures, IMAGE analysis, IMAGE compression standards, ELECTRONIC data processing
- Abstract
In this paper we present a more efficient approach to the problem of finding the connected components in binary images. In conventional connected components algorithms, the main data structure to compute and store the connected components is the region label image. We replace the region label image with a singly-linked list of line segments (or runs) for each region. This enables us to design a very fast and memory efficient connected components algorithm. Most conventional algorithms require (at least) two raster scans. Those that only need one raster scan, require irregular and unbounded image access. The proposed algorithm is a single pass regular access algorithm and only requires access to the three most recently processed image lines at any given time. Experimental results demonstrate that our algorithm is considerably faster than the fastest conventional algorithm. Additionally, our novel region coding data structure uses much less memory in typical cases than the traditional region label image. Even in worst case situations the processing time of our algorithm is linear with the number of pixels in an image. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
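A compact sketch of run-based connected components in the spirit of the record above: regions are represented as line segments (runs) rather than a full label image, and only the previous row's runs are kept while scanning. Union-find over run labels stands in for the authors' singly-linked lists of segments, and 4-connectivity is an assumption.

```python
import numpy as np

def _find(parent, i):
    """Union-find root lookup with path halving."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def label_runs(binary):
    """Single-pass, run-based connected components (4-connectivity).
    Returns a list of runs (row, start, end, label); no label image
    is ever built, and only the previous row's runs stay in memory."""
    binary = np.asarray(binary, dtype=bool)
    parent, runs, prev = [], [], []
    for y, row in enumerate(binary):
        cur, x, n = [], 0, len(row)
        while x < n:
            if not row[x]:
                x += 1
                continue
            start = x
            while x < n and row[x]:
                x += 1
            lbl = len(parent)
            parent.append(lbl)
            for ps, pe, pl in prev:          # merge with overlapping runs above
                if ps < x and pe > start:    # half-open intervals intersect
                    ra, rb = _find(parent, lbl), _find(parent, pl)
                    if ra != rb:
                        parent[ra] = rb
            cur.append((start, x, lbl))
            runs.append((y, start, x, lbl))
        prev = cur
    return [(y, s, e, _find(parent, l)) for y, s, e, l in runs]
```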
15. Automatic Parameter Selection for Denoising Algorithms Using a No-Reference Measure of Image Content.
- Author
Zhu, Xiang and Milanfar, Peyman
- Subjects
PARAMETER estimation, ALGORITHMS, IMAGE analysis, INVERSE problems, IMAGE processing, NOISE measurement, DATABASES, NOISE control, SINGULAR value decomposition
- Abstract
Across the field of inverse problems in image and video processing, nearly all algorithms have various parameters which need to be set in order to yield good results. In practice, the choice of such parameters is usually made empirically, with trial and error, if no “ground-truth” reference is available. Some analytical methods such as cross-validation and Stein's unbiased risk estimate (SURE) have been successfully used to set such parameters. However, these methods tend to rely strongly on restrictive assumptions about the noise, and are also computationally heavy. In this paper, we propose a no-reference metric Q which is based upon the singular value decomposition of the local image gradient matrix, and provides a quantitative measure of true image content (i.e., sharpness and contrast as manifested in visually salient geometric features such as edges) in the presence of noise and other disturbances. This measure 1) is easy to compute, 2) reacts reasonably to both blur and random noise, and 3) works well even when the noise is not Gaussian. The proposed measure is used to automatically and effectively set the parameters of two leading image denoising algorithms. Ample simulated and real data experiments support our claims. Furthermore, tests using the TID2008 database show that this measure correlates well with subjective quality evaluations for both blur and noise distortions. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
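A short sketch of the metric Q as the abstract describes it: per patch, form the N-by-2 matrix of gradients, take its singular values s1 >= s2, and score s1(s1 - s2)/(s1 + s2). Averaging over all patches (rather than the paper's selection of anisotropic patches) and the patch size are simplifying assumptions.

```python
import numpy as np

def q_metric(img, patch=8):
    """Sharpness/content measure from the SVD of local gradient
    matrices: per patch, Q = s1 * (s1 - s2) / (s1 + s2), averaged.
    (The paper's anisotropic-patch selection is simplified away.)"""
    gy, gx = np.gradient(img.astype(np.float64))
    h = (img.shape[0] // patch) * patch
    w = (img.shape[1] // patch) * patch
    scores = []
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            G = np.column_stack([gx[y:y+patch, x:x+patch].ravel(),
                                 gy[y:y+patch, x:x+patch].ravel()])
            s = np.linalg.svd(G, compute_uv=False)
            if s[0] + s[1] > 1e-8:
                scores.append(s[0] * (s[0] - s[1]) / (s[0] + s[1]))
    return float(np.mean(scores)) if scores else 0.0
```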
16. Image Segmentation Based on GrabCut Framework Integrating Multiscale Nonlinear Structure Tensor.
- Author
Han, Shoudong, Tao, Wenbing, Wang, Desheng, Tai, Xue-Cheng, and Wu, Xianglin
- Subjects
IMAGE, IMAGE processing, COMPUTER vision, IMAGE analysis, ALGORITHMS, PIXELS
- Abstract
In this paper, we propose an interactive color natural image segmentation method. The method integrates the color feature with a multiscale nonlinear structure tensor texture (MSNST) feature and then uses the GrabCut method to obtain the segmentations. The MSNST feature describes the texture of an image and is integrated into the GrabCut framework to overcome the problem of scale differences in textured images. In addition, we extend the Gaussian Mixture Model (GMM) to the MSNST feature, and a GMM based on MSNST is constructed to describe the energy function, so that the texture feature is suitably integrated into the GrabCut framework and fused with the color feature to achieve better segmentation performance than the original GrabCut method. For easier implementation and more efficient computation, the symmetric KL divergence is chosen to produce the estimates of the tensor statistics instead of the Riemannian structure of the space of tensors. The conjugate norm is employed, using the Locality Preserving Projections (LPP) technique, as the distance measure in the color space for more discriminating power. An adaptive fusing strategy is presented to effectively adjust the mixing factor so that the color and MSNST texture features are efficiently integrated and achieve more robust segmentation performance. Last, an iteration convergence criterion is proposed to dramatically reduce the iteration time of the GrabCut algorithm while maintaining satisfactory segmentation accuracy. Experiments using synthetic texture images and real natural scene images demonstrate the superior performance of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
17. Integrating Concept Ontology and Multitask Learning to Achieve More Effective Classifier Training for Multilevel Image Annotation.
- Author
Fan, Jianping, Gao, Yuli, and Luo, Hangzai
- Subjects
ONTOLOGY, LEARNING, IMAGE analysis, VISUAL perception, KERNEL functions, ALGORITHMS, VISUALIZATION, IMAGE processing
- Abstract
In this paper, we have developed a new scheme for achieving multilevel annotations of large-scale images automatically. To represent the various visual properties of the images more completely, both global and local visual features are extracted for image content representation. To tackle the problem of huge intraconcept visual diversity, multiple types of kernels are integrated to characterize the diverse visual similarity relationships between the images more precisely, and a multiple kernel learning algorithm is developed for SVM image classifier training. To address the problem of huge interconcept visual similarity, a novel multitask learning algorithm is developed to learn the correlated classifiers for the sibling image concepts under the same parent concept and enhance their discrimination and adaptation power significantly. To tackle the problem of huge intraconcept visual diversity for the image concepts at the higher levels of the concept ontology, a novel hierarchical boosting algorithm is developed to learn their ensemble classifiers hierarchically. In order to assist users in selecting more effective hypotheses for image classifier training, we have developed a novel hyperbolic framework for large-scale image visualization and interactive hypothesis assessment. Our experiments on large-scale image collections have also obtained very positive results. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
18. Marginal Fisher Analysis and Its Variants for Human Gait Recognition and Content- Based Image Retrieval.
- Author
Xu, Dong, Yan, Shuicheng, Tao, Dacheng, Lin, Stephen, and Zhang, Hong-Jiang
- Subjects
IMAGE processing, IMAGE retrieval, IMAGING systems, IMAGE analysis, GAIT in humans, PATTERN recognition systems, MULTIMEDIA systems, ALGORITHMS
- Abstract
Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for human gait recognition and content-based image retrieval (CBIR). In this paper, we present extensions of our recently proposed marginal Fisher analysis (MFA) to address these problems. For human gait recognition, we first present a direct application of MFA; then, inspired by recent advances in matrix- and tensor-based dimensionality reduction algorithms, we present matrix-based MFA for directly handling 2-D input in the form of gray-level averaged images. For CBIR, we deal with the relevance feedback problem by extending MFA to marginal biased analysis, in which within-class compactness is characterized only by the distances between each positive sample and its neighboring positive samples. In addition, we present a new technique to acquire a direct optimal solution for MFA without resorting to objective function modification as done in many previous algorithms. We conduct comprehensive experiments on the USF HumanID gait database and the Corel image retrieval database. Experimental results demonstrate that MFA and its extensions outperform related algorithms in both applications. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
19. Texas Two-Step: A Framework for Optimal Multi-Input Single-Output Deconvolution.
- Author
Neelamani, Ramesh, Deffenbaugh, Max, and Baraniuk, Richard G.
- Subjects
IMAGE processing, IMAGING systems, IMAGE analysis, WAVELETS (Mathematics), SUFFICIENT statistics, ALGORITHMS
- Abstract
Multi-input single-output deconvolution (MISO-D) aims to extract a deblurred estimate of a target signal from several blurred and noisy observations. This paper develops a new two-step framework, Texas Two-Step, to solve MISO-D problems with known blurs. Texas Two-Step first reduces the MISO-D problem to a related single-input single-output deconvolution (SISO-D) problem by invoking the concept of sufficient statistics (SSs) and then solves the simpler SISO-D problem using an appropriate technique. The two-step framework enables new MISO-D techniques (both optimal and suboptimal) based on the rich suite of existing SISO-D techniques. In fact, the properties of SSs imply that a MISO-D algorithm is mean-squared-error optimal if and only if it can be rearranged to conform to the Texas Two-Step framework. Using this insight, we construct new wavelet- and curvelet-based MISO-D algorithms with asymptotically optimal performance. Simulated and real data experiments verify that the framework is indeed effective. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
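A sketch of the sufficient-statistic reduction described above, under a white-Gaussian-noise assumption: matched-filtering and summing the observations in the Fourier domain yields a single SISO problem with effective blur sum_i |H_i|^2, to which any SISO deconvolver can then be applied (a Wiener-style inverse is shown as a placeholder, not the paper's wavelet or curvelet methods).

```python
import numpy as np

def miso_sufficient_statistic(observations, blurs):
    """Reduce MISO deconvolution to SISO in the Fourier domain,
    assuming white Gaussian noise: the matched-filtered sum
    S = sum_i conj(H_i) * Y_i is a sufficient statistic for the
    target, and the equivalent SISO blur is H_eff = sum_i |H_i|^2."""
    shape = observations[0].shape
    S = np.zeros(shape, dtype=np.complex128)
    H_eff = np.zeros(shape, dtype=np.float64)
    for y, h in zip(observations, blurs):
        H = np.fft.fft2(h, s=shape)
        S += np.conj(H) * np.fft.fft2(y)
        H_eff += np.abs(H) ** 2
    return S, H_eff

def wiener_from_ss(S, H_eff, nsr=1e-2):
    """Any SISO deconvolver can consume (S, H_eff); the simplest is a
    Wiener-style inverse with a noise-to-signal ratio 'nsr'."""
    return np.fft.ifft2(S / (H_eff + nsr)).real
```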
20. Developing Nonstationary Noise Estimation for Application in Edge and Corner Detection.
- Author
Wyatt, Paul and Nakai, Hiroaki
- Subjects
IMAGE processing, IMAGING systems, IMAGE analysis, INFORMATION processing, NOISE, ALGORITHMS
- Abstract
Accurate estimation of noise and signal power is of fundamental interest in a wide variety of vision applications as it is critical to thresholding and decision processes. This paper proposes two methods for the estimation of nonstationary noise based upon models of image structure which locally separate signal from noise. The resulting algorithms are noniterative and thereby fast. The accuracy of the proposed and existing methods is compared, first separately and then in application to two common image processing tasks: edge and corner detection. It is demonstrated that the proposed model can be used to improve the stability of both, in the presence of contrast change and nonstationary noise. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
21. Fast IIR Isotropic 2-D Complex Gabor Filters With Boundary Initialization.
- Author
Bernardino, Alexandre and Santos-Victor, José
- Subjects
IMAGE analysis, IMAGING systems, IMAGE processing, COMPUTER vision, ALGORITHMS, INFORMATION processing
- Abstract
Gabor filters are widely applied in image analysis and computer vision applications. This paper describes a fast algorithm for isotropic complex Gabor filtering that outperforms existing implementations. The main computational improvement arises from the decomposition of Gabor filtering into more efficient Gaussian filtering and sinusoidal modulations. Appropriate filter initial conditions are derived to avoid boundary transients, without requiring explicit image border extension. Our proposal reduces up to 39% the number of required operations with respect to state-of-the-art approaches. A full C++ implementation of the method is publicly available. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
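A sketch of the decomposition named in the abstract, Gabor filtering as Gaussian filtering plus sinusoidal modulation: gabor(I) = e^{+j<w,x>} gauss(I e^{-j<w,x>}). scipy's FIR gaussian_filter stands in for the paper's recursive IIR Gaussian with boundary initialization, so only the algebra is reproduced here, not the speed advantage.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def isotropic_gabor(img, wavelength, sigma, theta=0.0):
    """Complex Gabor filtering via the modulation identity:
    demodulate the image by the carrier, Gaussian-smooth, remodulate.
    (FIR gaussian_filter replaces the paper's recursive IIR Gaussian.)"""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    w = 2.0 * np.pi / wavelength
    phase = w * (xs * np.cos(theta) + ys * np.sin(theta))
    demod = img * np.exp(-1j * phase)
    smooth = (gaussian_filter(demod.real, sigma)
              + 1j * gaussian_filter(demod.imag, sigma))
    return smooth * np.exp(1j * phase)
```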
22. Semi-Blind Image Restoration Via Mumford--Shah Regularization.
- Author
Bar, Leah, Sochen, Nir, and Kiryati, Nahum
- Subjects
IMAGE reconstruction, MARKOV processes, MARKOV random fields, IMAGE analysis, IMAGE processing, ALGORITHMS
- Abstract
Image restoration and segmentation are both classical problems that are known to be difficult and have attracted major research efforts. This paper shows that the two problems are tightly coupled and can be successfully solved together. Mutual support of the image restoration and segmentation processes within a joint variational framework is theoretically motivated and validated by successful experimental results. The proposed variational method integrates semi-blind image deconvolution (with a parametric blur kernel) and Mumford-Shah segmentation. The functional is formulated using the Γ-convergence approximation and is iteratively optimized via the alternate minimization method. While the major novelty of this work is in the unified treatment of the semi-blind restoration and segmentation problems, the important special case of known blur is also considered and promising results are obtained. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
23. Inverse Halftoning Algorithm Using Edge-Based Lookup Table Approach.
- Author
Chung, Kuo-Liang and Wu, Shih-Tung
- Subjects
IMAGE processing, IMAGE analysis, IMAGING systems, ALGORITHMS, INFORMATION processing, COMPUTER vision
- Abstract
The inverse halftoning algorithm is used to reconstruct a gray image from an input halftone image. Based on the recently published lookup table (LUT) technique, this paper presents a novel edge-based LUT method for inverse halftoning which improves the quality of the reconstructed gray image. The proposed method first uses the LUT-based inverse halftoning method as a preprocessing step to transform the given halftone image to a base gray image, and then the edges are extracted and classified from the base gray image. According to these classified edges, a novel edge-based LUT is built up to reconstruct the gray image. Based on a set of 30 real training images with both low- and high-frequency content, experimental results demonstrate that the proposed method achieves better image quality than two previously published methods, by Chang et al. and by Meşe and Vaidyanathan. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
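A toy sketch of the LUT idea underlying the method above: from training (gray, halftone) pairs, build a table mapping each k x k binary neighborhood pattern to the mean gray value observed at its center, then look patterns up at reconstruction time. The paper's edge extraction and classification stage, its actual contribution, is omitted; k = 4 and the fallback value are assumptions.

```python
import numpy as np
from collections import defaultdict

def build_lut(gray_imgs, halftone_imgs, k=4):
    """Train a pattern -> mean-gray LUT from (gray, halftone) pairs.
    Each k x k binary neighborhood is packed into an integer key.
    (The paper's edge classification stage is omitted here.)"""
    sums, counts = defaultdict(float), defaultdict(int)
    weights = 1 << np.arange(k * k)
    for g, h in zip(gray_imgs, halftone_imgs):
        hb = (np.asarray(h) > 0).astype(np.int64)
        for y in range(hb.shape[0] - k + 1):
            for x in range(hb.shape[1] - k + 1):
                key = int((hb[y:y+k, x:x+k].ravel() * weights).sum())
                sums[key] += float(g[y + k // 2, x + k // 2])
                counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

def inverse_halftone(halftone, lut, k=4):
    """Reconstruct gray values by table lookup; unseen patterns fall
    back to mid-gray (a real method would use a least-squares fill)."""
    hb = (np.asarray(halftone) > 0).astype(np.int64)
    weights = 1 << np.arange(k * k)
    out = np.full(hb.shape, 128.0)
    for y in range(hb.shape[0] - k + 1):
        for x in range(hb.shape[1] - k + 1):
            key = int((hb[y:y+k, x:x+k].ravel() * weights).sum())
            if key in lut:
                out[y + k // 2, x + k // 2] = lut[key]
    return out
```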
24. CLUE: Cluster-Based Retrieval of Images by Unsupervised Learning.
- Author
Chen, Yixin, Wang, James Z., and Krovetz, Robert
- Subjects
IMAGE processing, ALGORITHMS, IMAGING systems, IMAGE analysis, IMAGE reconstruction, MULTIMEDIA systems
- Abstract
In a typical content-based image retrieval (CBIR) system, target images (images in the database) are sorted by feature similarity with respect to the query; similarities among the target images themselves are usually ignored. This paper introduces a new technique, cluster-based retrieval of images by unsupervised learning (CLUE), for improving user interaction with image retrieval systems by fully exploiting this similarity information. CLUE retrieves image clusters by applying a graph-theoretic clustering algorithm to a collection of images in the vicinity of the query. Clustering in CLUE is dynamic: the clusters formed depend on which images are retrieved in response to the query. CLUE can be combined with any real-valued symmetric similarity measure (metric or nonmetric). Thus, it may be embedded in many current CBIR systems, including relevance feedback systems. The performance of an experimental image retrieval system using CLUE is evaluated on a database of around 60,000 images from COREL. Empirical results demonstrate improved performance compared with a CBIR system using the same image similarity measure. In addition, results on images returned by Google's Image Search reveal the potential of applying CLUE to real-world image data and integrating CLUE as a part of the interface for keyword-based image retrieval systems. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
25. Three-Dimensional Surface Reconstruction From Multistatic SAR Images.
- Author
Rigling, Brian D. and Moses, Randolph L.
- Subjects
THREE-dimensional imaging, IMAGING systems, ALGORITHMS, IMAGE processing, PHOTOGRAPHS, IMAGE analysis
- Abstract
This paper discusses reconstruction of three-dimensional surfaces from multiple bistatic synthetic aperture radar (SAR) images. Techniques for surface reconstruction from multiple monostatic SAR images already exist, including interferometric processing and stereo SAR. We generalize these methods to obtain algorithms for bistatic interferometric SAR and bistatic stereo SAR. We also propose a framework for predicting the performance of our multistatic stereo SAR algorithm, and, from this framework, we suggest a metric for use in planning strategic deployment of multistatic assets. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
26. Image Authentication Using Distributed Source Coding.
- Author
Lin, Yao-Chung, Varodayan, David, and Girod, Bernd
- Subjects
CODING theory, IMAGE processing, IMAGE analysis, ROBUST control, IMAGE reconstruction, IMAGE quality analysis, ALGORITHMS
- Abstract
We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian–Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
27. Edge Strength Filter Based Color Filter Array Interpolation.
- Author
Pekkucuksen, Ibrahim and Altunbasak, Yucel
- Subjects
COLOR filter arrays, INTERPOLATION, DIGITAL cameras, IMAGE analysis, PIXELS, ALGORITHMS, IMAGE processing
- Abstract
For economic reasons, most digital cameras use color filter arrays instead of beam splitters to capture image data. As a result of this, only one of the required three color samples becomes available at each pixel location and the other two need to be interpolated. This process is called Color Filter Array (CFA) interpolation or demosaicing. Many demosaicing algorithms have been introduced over the years to improve subjective and objective interpolation quality. We propose an orientation-free edge strength filter and apply it to the demosaicing problem. Edge strength filter output is utilized both to improve the initial green channel interpolation and to apply the constant color difference rule adaptively. This simple edge directed method yields visually pleasing results with high CPSNR. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
28. Total Variation Projection With First Order Schemes.
- Author
Fadili, Jalal M. and Peyre, Gabriel
- Subjects
ALGORITHMS, IMAGE analysis, IMAGE processing, STOCHASTIC convergence, INVERSE problems, NUMERICAL analysis, CONVEX functions, MATHEMATICAL optimization
- Abstract
This article proposes a new algorithm to compute the projection on the set of images whose total variation is bounded by a constant. The projection is computed through a dual formulation that is solved by first order non-smooth optimization methods. This yields an iterative algorithm that applies iterative soft thresholding to the dual vector field, and for which we establish convergence rate on the primal iterates. This projection algorithm can then be used as a building block in a variety of applications such as solving inverse problems under a total variation constraint, or for texture synthesis. Numerical results are reported to illustrate the usefulness and potential applicability of our TV projection algorithm on various examples including denoising, texture synthesis, inpainting, deconvolution and tomography problems. We also show that our projection algorithm competes favorably with state-of-the-art TV projection methods in terms of convergence speed. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
29. An Efficient Two-Phase L¹-TV Method for Restoring Blurred Images with Impulse Noise.
- Author
Chan, Raymond H., Dong, Yiqiu, and Hintermüller, Michael
- Subjects
CURVE fitting, IMAGE analysis, IMAGE processing, IMAGE reconstruction, ALGORITHMS
- Abstract
A two-phase image restoration method based upon total variation regularization combined with an L¹ data-fitting term for impulse noise removal and deblurring is proposed. In the first phase, suitable noise detectors are used to identify image pixels contaminated by noise. Then, in the second phase, based upon the information on the location of noise-free pixels, images are deblurred and denoised simultaneously. For efficiency reasons, in the second phase a superlinearly convergent algorithm based upon Fenchel duality and inexact semismooth Newton techniques is utilized for solving the associated variational problem. Numerical results show the new method to be a significant advance over several state-of-the-art techniques with respect to restoration capability and computational efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
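A sketch of phase one only, under a simple salt-and-pepper assumption: flag pixels that take an extreme value and disagree with their local median. The paper allows more sophisticated adaptive median detectors, and phase two (deblurring with an L1-TV functional restricted to unflagged pixels via semismooth Newton) is not reproduced here.

```python
import numpy as np
from scipy.ndimage import median_filter

def detect_impulses(img, window=3, extreme=(0, 255)):
    """Phase 1 (simplified): flag likely salt-and-pepper pixels.
    A pixel is suspect if it takes an extreme value and differs from
    its local median. Phase 2 would then solve an L1-TV deblurring
    problem using only the unflagged (noise-free) pixels."""
    med = median_filter(img, size=window)
    is_extreme = np.isin(img, extreme)
    return is_extreme & (img != med)
```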
30. The SURE-LET Approach to Image Denoising.
- Author
Blu, Thierry and Luisier, Florian
- Subjects
IMAGE processing, IMAGING systems, IMAGE analysis, WAVELETS (Mathematics), MATHEMATICAL transformations, IMAGE transmission, LINEAR systems, INFORMATION processing, ALGORITHMS
- Abstract
We propose a new approach to image denoising, based on the image-domain minimization of an estimate of the mean squared error, Stein's unbiased risk estimate (SURE). Unlike most existing denoising algorithms, using the SURE makes it needless to hypothesize a statistical model for the noiseless image. A key point of our approach is that, although the (nonlinear) processing is performed in a transformed domain (typically an undecimated discrete wavelet transform, but we also address nonorthonormal transforms), this minimization is performed in the image domain. Indeed, we demonstrate that, when the transform is a ‘tight’ frame (an undecimated wavelet transform using orthonormal filters), separate subband minimization yields substantially worse results. In order for our approach to be viable, we add another principle: that the denoising process can be expressed as a linear combination of elementary denoising processes, a linear expansion of thresholds (LET). Armed with the SURE and LET principles, we show that a denoising algorithm merely amounts to solving a linear system of equations, which is obviously fast and efficient. Quite remarkably, the very competitive results obtained by performing a simple threshold (image-domain SURE optimized) on the undecimated Haar wavelet coefficients show that the SURE-LET principle has a huge potential. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
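A minimal sketch of the SURE-LET recipe above on a single array of noisy orthonormal-transform coefficients with known noise level sigma: the denoiser is the linear combination a1*y + a2*soft(y, T), and minimizing SURE over (a1, a2) reduces to a 2x2 linear system. For an orthonormal transform, coefficient-domain and image-domain squared errors coincide (Parseval), which is what makes this simplified setting legitimate; the paper's redundant (undecimated) transforms are exactly the case where the image-domain treatment matters, and that is not reproduced here.

```python
import numpy as np

def soft(y, t):
    """Soft threshold with threshold t."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def sure_let(y, sigma, t=None):
    """LET denoiser f(y) = a1*y + a2*soft(y, T); the weights solve
    F'F a = F'y - sigma^2 * d, the normal equations of SURE, where
    d_k is the divergence of the k-th elementary denoiser."""
    if t is None:
        t = 3.0 * sigma
    F = np.column_stack([y.ravel(), soft(y, t).ravel()])
    d = np.array([y.size, np.count_nonzero(np.abs(y) > t)], float)
    A = F.T @ F
    b = F.T @ y.ravel() - sigma**2 * d
    a = np.linalg.solve(A, b)
    return (F @ a).reshape(y.shape)
```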
31. Integrating Color Constancy Into JPEG2000.
- Author
Ebner, Marc, Tischler, German, and Albert, Jurgen
- Subjects
IMAGE processing, DIGITAL photography, DIGITAL electronics, JPEG (Image coding standard), IMAGE transmission, IMAGING systems, INFORMATION processing, ALGORITHMS, IMAGE analysis
- Abstract
The human visual system is able to perceive colors as approximately constant. This ability is known as color constancy. In contrast, the colors measured by a sensor vary with the type of illuminant used. Color constancy is very important for digital photography and automatic color-based object recognition. In digital photography, this ability is known under the name automatic white balance. A number of algorithms have been developed for color constancy. We review two well-known color constancy algorithms, the gray world assumption and the Retinex algorithm, and show how a color constancy algorithm may be integrated into the JPEG2000 framework. Since computer images are usually stored in compressed form anyway, little overhead is required to add color constancy to the processing pipeline. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
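The gray world assumption reviewed above is nearly a one-liner; a sketch follows. It rescales each channel so that all channel means agree, discounting a uniform illuminant. The JPEG2000 integration, the paper's contribution, is out of scope here, and the 8-bit RGB layout is an assumption.

```python
import numpy as np

def gray_world(img):
    """Gray world color constancy: scale each channel so its mean
    equals the global mean gray, discounting a uniform illuminant.
    Expects an (H, W, 3) array with values in [0, 255]."""
    x = img.astype(np.float64)
    means = x.reshape(-1, 3).mean(axis=0)
    return np.clip(x * (means.mean() / means), 0, 255).astype(img.dtype)
```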
32. Progressive Quantized Projection Approach to Data Hiding.
- Author
Alghoniemy, Masoud and Tewfik, Ahmed H.
- Subjects
IMAGE processing, INFORMATION processing, INFORMATION science, IMAGE analysis, ALGORITHMS, DATABASE management
- Abstract
A new image data-hiding technique is proposed. The proposed approach modifies blocks of the image after projecting them onto certain directions. By quantizing the projected blocks to even and odd values, one can represent the hidden information properly. The proposed algorithm performs the modification progressively to ensure successful data extraction without the need for the original image at the receiver side. Two techniques are also presented for correcting scaling and rotation attacks. The first approach is exhaustive-search in nature and is based on a training sequence inserted as part of the hidden information. The second approach uses wavelet maxima as image semantics for rotation and scaling estimation. Both algorithms have proved effective in correcting rotation and scaling distortion. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
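A sketch of the even/odd quantized projection idea described above: one bit is hidden per block by quantizing the block's projection onto a chosen direction to an even or odd multiple of a step delta, and the parity of the re-quantized projection recovers the bit without the original image. The progressive ordering of modifications and the rotation/scale corrections are not reproduced; delta and the same-shape direction array are assumptions.

```python
import numpy as np

def embed_bit(block, p, bit, delta=8.0):
    """Hide one bit by quantizing the block's projection onto the
    direction p (same shape as block) to an even (bit=0) or odd
    (bit=1) multiple of delta."""
    p = p / np.linalg.norm(p)
    c = float(block.ravel() @ p.ravel())
    q = np.round(c / delta)
    if int(q) % 2 != bit:
        # Step to the nearer adjacent multiple with the right parity.
        q += 1.0 if c / delta >= q else -1.0
    return block + (q * delta - c) * p.reshape(block.shape)

def extract_bit(block, p, delta=8.0):
    """Recover the bit as the parity of the quantized projection."""
    p = p / np.linalg.norm(p)
    c = float(block.ravel() @ p.ravel())
    return int(np.round(c / delta)) % 2
```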
33. Fast Algorithm for Distortion-Based Error Protection of Embedded Image Codes.
- Author
Hamzaoui, Raouf, Stanković, Vladimir, and Xiong, Zixiang
- Subjects
IMAGE processing, ALGORITHMS, EMBEDDED computer systems, IMAGING systems, INFORMATION processing, IMAGE analysis
- Abstract
We consider a joint source-channel coding system that protects an embedded bitstream using a finite family of channel codes with error detection and error correction capability. The performance of this system may be measured by the expected distortion or by the expected number of correctly decoded source bits. Whereas a rate-based optimal solution can be found in linear time, the computation of a distortion-based optimal solution is prohibitive. Under the assumption of the convexity of the operational distortion-rate function of the source coder, we give a lower bound on the expected distortion of a distortion-based optimal solution that depends only on a rate-based optimal solution. Then, we propose a local search (LS) algorithm that starts from a rate-based optimal solution and converges in linear time to a local minimum of the expected distortion. Experimental results for a binary symmetric channel show that our LS algorithm is near optimal, whereas its complexity is much lower than that of the previous best solution. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
34. Morphological Decomposition of 2-D Binary Shapes into Convex Polygons: A Heuristic Algorithm.
- Author
Xu, Jianning
- Subjects
ALGORITHMS, IMAGE analysis, CONVEX domains, IMAGE processing
- Abstract
Deals with a study which proposed a morphological shape decomposition algorithm that decomposes a two-dimensional binary shape into a collection of convex polygonal components for use in image processing. Topics include the definition of the fundamental morphological operations used in image analysis, the approximation of convex shapes by the algorithm, and the decomposition into convex polygons.
- Published
- 2001
- Full Text
- View/download PDF
35. Memory Efficient Propagation-Based Watershed and Influence Zone Algorithms for Large Images.
- Author
-
Pitas, Ioannis and Cotsaces, Costas I.
- Subjects
IMAGE analysis ,IMAGE processing ,IMAGE reconstruction ,ALGORITHMS - Abstract
Presents information on a study which explored ways to increase the memory efficiency of propagation-based watershed and influence zone algorithms for large images. Topics include the theoretical background of the algorithms, the phases of output image reconstruction, and conclusions.
- Published
- 2000
- Full Text
- View/download PDF