126 results for "Image derivatives"
Search Results
2. The Hausdorff Dimension and Scale-Space Normalisation of Natural Images
- Author
-
Pedersen, Kim Steenstrup, Nielsen, Mads, Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, Nielsen, Mads, editor, Johansen, Peter, editor, Olsen, Ole Fogh, editor, and Weickert, Joachim, editor
- Published
- 1999
3. Local Feature Descriptor and Derivative Filters for Blind Image Quality Assessment
- Author
-
Mariusz Oszust
- Subjects
Image derivatives ,Image quality ,Computer science ,business.industry ,Applied Mathematics ,Deep learning ,Detector ,Feature extraction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Local feature descriptor ,020206 networking & telecommunications ,Pattern recognition ,02 engineering and technology ,Support vector machine ,Kernel (image processing) ,Distortion ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Artificial intelligence ,Electrical and Electronic Engineering ,business - Abstract
In this letter, a novel blind image quality assessment (BIQA) technique is introduced to provide an automatic and reproducible evaluation of distorted images. In the approach, the information carried by image derivatives of different orders is captured by local features and used for image quality prediction. Since a typical local feature descriptor is designed to ensure a robust image patch representation, a novel descriptor is proposed that additionally highlights the local differences enhanced by filtering. Furthermore, a set of derivative kernels is introduced. Finally, the support vector regression technique is used to map statistics of the described local features into subjective scores, providing an objective quality score for an image. Extensive experimental validation on popular IQA image datasets reveals that the proposed method outperforms state-of-the-art handcrafted and deep learning BIQA measures.
- Published
- 2019
4. Experimental Analysis of Appearance Maps as Descriptor Manifolds Approximations
- Author
-
Javier Gonzalez-Jimenez, Francisco-Angel Moreno, and Alberto Jaenal
- Subjects
Structure (mathematical logic) ,Image derivatives ,business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Process (computing) ,Pattern recognition ,Sample (graphics) ,Manifold ,Regression ,Image (mathematics) ,Computer Science::Computer Vision and Pattern Recognition ,Metric (mathematics) ,Artificial intelligence ,business - Abstract
Images of a given environment, coded by a holistic image descriptor, produce a manifold that is articulated by the camera pose in that environment. The correct articulation of this Descriptor Manifold (DM) by the camera poses is the cornerstone of precise Appearance-based Localization (AbL), which requires knowing the corresponding descriptor for any given pose of the camera in the environment. Since such correspondences are only given at sample pairs of the DM (the appearance map), some kind of regression must be applied to predict descriptor values at unmapped locations. This is relevant for AbL because the regression process can be exploited as an observation model for the localization task. This paper analyses the influence of a number of parameters involved in the approximation of the DM from the appearance map, including the sampling density, the method employed to regress values at unvisited poses, and the impact of the image content on the DM structure. We present experimental evaluations of diverse setups and propose an image metric based on image derivatives, which allows us to build appearance maps in the form of grids of variable density. A preliminary use case is presented as an initial step for future research.
- Published
- 2021
5. Texture based blur estimation in a single defocused image
- Author
-
Hamid Reza Pourreza and Mina Masoudifar
- Subjects
Deblurring ,Image derivatives ,Similarity (geometry) ,Logarithm ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020207 software engineering ,02 engineering and technology ,Image segmentation ,Content-based image retrieval ,Computer Science::Graphics ,Computer Science::Computer Vision and Pattern Recognition ,Histogram ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Noise (video) ,Artificial intelligence ,business ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Texture identification has many potential applications, such as image segmentation and content-based image retrieval. In the real world, noise and blur are nuisance factors in texture analysis. In this paper, the robustness of the local similarity pattern (LSP) to these disturbing effects is studied. Then, a method to measure the amount of blur in a defocused and noisy texture is proposed. In this method, derivatives of an image up to some order are computed, the logarithm of these derivatives is calculated, and histograms of the log-derivatives are used for blur estimation. By combining these two methods, we can compute the blur map of a defocused image consisting of various types of textures. This map can be used in image deblurring.
- Published
- 2020
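A minimal sketch of the log-derivative statistic described in the abstract above; the derivative operator, bin count, and epsilon are illustrative assumptions, not the authors' settings:

```python
import numpy as np
from scipy import ndimage

def log_derivative_histogram(patch, bins=64, eps=1e-6):
    """Histogram of log-magnitude first derivatives of a texture patch.

    Defocus blur suppresses large derivative magnitudes, so the histogram
    mass shifts toward smaller values as blur increases, which is the cue
    exploited for blur estimation.
    """
    patch = patch.astype(float)
    dx = ndimage.sobel(patch, axis=1)
    dy = ndimage.sobel(patch, axis=0)
    log_mag = np.log(np.hypot(dx, dy) + eps)   # log of derivative magnitude
    hist, _ = np.histogram(log_mag, bins=bins, density=True)
    return hist
```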
6. Quantitative performance comparison of derivative operators for intervertebral kinematics analysis
- Author
-
Maria Agnese Pirozzi, Antonio Fratini, Emilio Andreozzi, Giuseppe Cesarelli, and Paolo Bifulco
- Subjects
Image derivatives ,business.industry ,Computer science ,Template matching ,0206 medical engineering ,Pattern recognition ,02 engineering and technology ,Derivative ,Kinematics ,Differential operator ,020601 biomedical engineering ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Operator (computer programming) ,spine kinematics, derivative operators, X-ray imaging, fluoroscopy, quantum noise ,Trajectory ,Artificial intelligence ,business ,Instant centre of rotation - Abstract
Comparison of derivative operators via quantitative performance analysis is rarely addressed in medical imaging. Indeed, the main application of such operators is the extraction of edges and, since there is no unequivocal definition of edges, the common trend is to identify the best performing operator based on a qualitative match between the extracted edges and the fickle human perception of object boundaries. This study presents an objective comparison of four first-order derivative operators through quantitative analysis of the results yielded in a specific task, i.e. a spine kinematics application. The application is based on a template matching method, which estimates common kinematic parameters of intervertebral segments from an X-ray fluoroscopy sequence of spine motion by operating on the image derivatives of each frame. Therefore, differences in image derivatives, computed via different derivative operators, may lead to differences in the estimated parameters of intervertebral kinematics. The comparison presented in this study focused on the trajectory of the instantaneous center of rotation (ICR) of an intervertebral segment, as it is particularly sensitive even to very small differences in displacements and velocities. A quantitative analysis of the discrepancies between the ICR trajectories obtained with each of the four considered derivative operators was therefore carried out by defining quantitative measures. The results showed detectable differences in the obtained ICR trajectories, highlighting the need for quantitative analysis of derivative operator performance in applications aimed at providing quantitative results. The clinical significance of such differences remains to be assessed; this is currently not possible, as there is no consensus or sufficient data on the kinematic parameter features associated with specific spinal pathologies.
- Published
- 2020
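For illustration, several common first-order operators can be swapped behind one interface and their downstream outputs compared, in the spirit of the study above; the operator set here is a generic assumption, since the abstract does not list the paper's exact four operators:

```python
import numpy as np
from scipy import ndimage

def first_derivatives(img, operator="sobel"):
    """Return (d/dx, d/dy) of an image under interchangeable operators."""
    img = img.astype(float)
    if operator == "sobel":
        return ndimage.sobel(img, axis=1), ndimage.sobel(img, axis=0)
    if operator == "prewitt":
        return ndimage.prewitt(img, axis=1), ndimage.prewitt(img, axis=0)
    if operator == "central":            # plain central differences
        gy, gx = np.gradient(img)
        return gx, gy
    if operator == "gaussian":           # derivative-of-Gaussian, sigma = 1
        return (ndimage.gaussian_filter(img, 1, order=(0, 1)),
                ndimage.gaussian_filter(img, 1, order=(1, 0)))
    raise ValueError(f"unknown operator: {operator}")
```

Feeding each variant into the same template-matching pipeline and comparing the resulting ICR trajectories mirrors the quantitative protocol the abstract describes.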
7. Towards Learning-based Inverse Subsurface Scattering
- Author
-
Chengqian Che, Shuang Zhao, Fujun Luan, Kavita Bala, and Ioannis Gkioulekas
- Subjects
Image derivatives ,Artificial neural network ,Inverse scattering problem ,Monte Carlo method ,0202 electrical engineering, electronic engineering, information engineering ,Radiative transfer ,Subsurface scattering ,020207 software engineering ,020201 artificial intelligence & image processing ,02 engineering and technology ,Parameter space ,Algorithm ,Encoder - Abstract
Given images of translucent objects, of unknown shape and lighting, we aim to use learning to infer the optical parameters controlling subsurface scattering of light inside the objects. We introduce a new architecture, the inverse transport network (ITN), that aims to improve generalization of an encoder network to unseen scenes by connecting it with a physically accurate, differentiable Monte Carlo renderer capable of estimating image derivatives with respect to scattering material parameters. During training, this combination forces the encoder network to predict parameters that not only match ground-truth values, but also reproduce input images. During testing, the encoder network is used alone, without the renderer, to predict material parameters from a single input image. Drawing insights from the physics of radiative transfer, we additionally use material parameterizations that help reduce estimation errors due to ambiguities in the scattering parameter space. Finally, we augment the training loss with pixelwise weight maps that emphasize the parts of the image most informative about the underlying scattering parameters. We demonstrate that this combination allows neural networks to generalize to scenes with completely unseen geometries and illuminations better than traditional networks, with 38.06% reduced parameter error on average.
- Published
- 2020
8. Tensors, Differential Geometry and Statistical Shading Analysis
- Author
-
Daniel Holtmann-Rice, Benjamin Kunsberg, and Steven W. Zucker
- Subjects
Statistics and Probability ,Image derivatives ,Applied Mathematics ,media_common.quotation_subject ,Scalar (mathematics) ,020207 software engineering ,02 engineering and technology ,Ambiguity ,Rotation matrix ,Condensed Matter Physics ,Differential geometry ,Modeling and Simulation ,0202 electrical engineering, electronic engineering, information engineering ,Applied mathematics ,020201 artificial intelligence & image processing ,Geometry and Topology ,Computer Vision and Pattern Recognition ,Algebraic number ,Normal ,Subspace topology ,media_common ,Mathematics - Abstract
We develop a linear algebraic framework for the shape-from-shading problem, because tensors arise when scalar (e.g., image) and vector (e.g., surface normal) fields are differentiated multiple times. Using this framework, we first investigate when image derivatives exhibit invariance to changing illumination by calculating the statistics of image derivatives under general distributions on the light source. Second, we apply that framework to develop Taylor-like expansions and build a bootstrapping algorithm to find the polynomial surface solutions (under any light source) consistent with a given patch to arbitrary order. A generic constraint on the light source restricts these solutions to a 2-D subspace, plus an unknown rotation matrix. It is this unknown matrix that encapsulates the ambiguity in the problem. Finally, we use the framework to computationally validate the hypothesis that image orientations (derivatives) provide increased invariance to illumination by showing (for a Lambertian model) that a shape-from-shading algorithm matching gradients instead of intensities provides more accurate reconstructions when illumination is incorrectly estimated under a flatness prior.
- Published
- 2018
9. Instantaneous 3D motion from image derivatives using the Least Trimmed Square regression
- Author
-
Dornaika, Fadi and Sappa, Angel
- Subjects
- *
MOTION perception (Vision) , *THREE-dimensional imaging , *IMAGE processing , *PATTERN recognition systems , *ROBUST statistics , *LEAST squares , *REGRESSION analysis - Abstract
This paper presents a new technique for instantaneous 3D motion estimation. The main contributions are as follows. First, we show that the 3D camera or scene velocity can be retrieved from image derivatives only, assuming that the scene contains a dominant plane. Second, we propose a new robust algorithm that simultaneously provides the Least Trimmed Square solution and the percentage of inliers (the non-contaminated data). Experiments on both synthetic and real image sequences demonstrate the effectiveness of the developed method and show that the new robust approach can outperform classical robust schemes.
- Published
- 2009
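A minimal sketch of a Least Trimmed Squares fit using the common concentration-step heuristic; the refit loop, iteration count, and trim fraction are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def least_trimmed_squares(A, b, trim=0.5, iters=20):
    """Fit x to A @ x ~ b using only the h best-fitting observations.

    Each pass refits on the h smallest squared residuals, which
    concentrates the fit on inliers and ignores contaminated data.
    """
    n = len(b)
    h = max(int(trim * n), A.shape[1])          # number of inliers kept
    x = np.linalg.lstsq(A, b, rcond=None)[0]    # ordinary LS initialization
    for _ in range(iters):
        residuals = (A @ x - b) ** 2
        keep = np.argsort(residuals)[:h]        # indices of best-fitting rows
        x = np.linalg.lstsq(A[keep], b[keep], rcond=None)[0]
    return x, h / n                             # solution and inlier fraction
```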
10. The Second Order Local-Image-Structure Solid.
- Author
-
Griffin, L.D.
- Abstract
Characterization of second order local image structure by a 6D vector (or jet) of Gaussian derivative measurements is considered. We consider the effect on jets of a group of transformations (affine intensity scaling, image rotation and reflection, and their compositions) that preserve intrinsic image structure. We show how this group stratifies the jet space into a system of orbits. Considering individual orbits as points, a 3D orbifold is defined. We propose a norm on jet space which we use to induce a metric on the orbifold. The metric tensor shows that the orbifold is intrinsically curved. To allow visualization of the orbifold and numerical computation with it, we present a mildly distorting but volume-preserving embedding of it into Euclidean 3-space. We call the resulting shape, which is like a flattened lemon, the second order local-image-structure solid. As an example use of the solid, we compute the distribution of local structures in noise and natural images. For noise images, analytical results are possible and they agree with the empirical results. For natural images, an excess of locally 1D structure is found.
- Published
- 2007
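The 6D jet itself is straightforward to measure with Gaussian derivative filters; a sketch, where the scale and axis convention are my own choices:

```python
import numpy as np
from scipy import ndimage

def second_order_jet(img, sigma=2.0):
    """Per-pixel 6D jet (L, Lx, Ly, Lxx, Lxy, Lyy) of Gaussian derivatives.

    Axis 0 is y and axis 1 is x, so order=(1, 0) differentiates in y.
    Returns an array of shape img.shape + (6,).
    """
    img = img.astype(float)
    orders = [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
    return np.stack([ndimage.gaussian_filter(img, sigma, order=o)
                     for o in orders], axis=-1)
```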
11. Focal Flow: Velocity and Depth from Differential Defocus Through Motion
- Author
-
Todd Zickler, Qi Guo, Sanjeev J. Koppal, Steven J. Gortler, and Emma Alexander
- Subjects
Image derivatives ,Simple lens ,Aperture ,Computer science ,business.industry ,Gaussian ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020207 software engineering ,02 engineering and technology ,symbols.namesake ,Flow (mathematics) ,Flow velocity ,Artificial Intelligence ,Depth map ,Computer Science::Computer Vision and Pattern Recognition ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,020201 artificial intelligence & image processing ,Computer vision ,Vector field ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Software ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
We present the focal flow sensor. It is an unactuated, monocular camera that simultaneously exploits defocus and differential motion to measure a depth map and a 3D scene velocity field. It does this using an optical-flow-like, per-pixel linear constraint that relates image derivatives to depth and velocity. We derive this constraint, prove its invariance to scene texture, and prove that it is exactly satisfied only when the sensor’s blur kernels are Gaussian. We analyze the inherent sensitivity of the focal flow cue, and we build and test a prototype. Experiments produce useful depth and velocity information for a broader set of aperture configurations, including a simple lens with a pillbox aperture.
- Published
- 2017
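Schematically, a per-pixel linear constraint of this kind can be solved over a small patch by least squares. The feature columns below (Ix, Iy, a radial term, and the Laplacian) and the exact form of the constraint are assumptions for illustration, and the calibrated mapping from the recovered coefficients to metric depth and velocity is omitted:

```python
import numpy as np

def patch_linear_solve(Ix, Iy, It, lap, x, y):
    """Least-squares solve of a focal-flow-style per-pixel constraint
    a*Ix + b*Iy + c*(x*Ix + y*Iy) + d*lap = -It over one patch."""
    A = np.stack([Ix.ravel(),
                  Iy.ravel(),
                  (x * Ix + y * Iy).ravel(),   # radial derivative term
                  lap.ravel()], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return coeffs                              # (a, b, c, d) for this patch
```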
12. Regularised differentiation for image derivatives
- Author
-
Ismail Ben Ayed, Yosra Mathlouthi, and Amar Mitiche
- Subjects
Image derivatives ,Motion analysis ,Computer science ,Iterative method ,Optical flow ,Finite difference ,Context (language use) ,02 engineering and technology ,System of linear equations ,03 medical and health sciences ,0302 clinical medicine ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Algorithm ,030217 neurology & neurosurgery ,Software ,Linear equation - Abstract
This study investigates a regularised differentiation method to estimate image derivatives. The scheme minimises an integral functional containing an anti-differentiation data discrepancy term and a smoothness regularisation term. When discretised, the Euler–Lagrange necessary conditions for a minimum of the functional yield a large scale sparse system of linear equations, which can be solved efficiently by Jacobi/Gauss–Seidel iterations. The authors investigate the impact of the method in the context of two important problems in computer vision: optical flow and scene flow estimation. Quantitative results, using the Middlebury dataset and other real and synthetic images, show that the authors’ regularised differentiation scheme outperforms standard derivative definitions by smoothed finite differences, which are commonly used in motion analysis. The method can be readily used in various other image analysis problems.
- Published
- 2017
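A 1D sketch of the idea: the derivative is the minimizer of an anti-differentiation data term plus a smoothness term. A dense direct solve stands in for the Jacobi/Gauss-Seidel iterations mentioned in the abstract, and the discretization and lambda value are illustrative assumptions:

```python
import numpy as np

def regularized_derivative_1d(f, lam=1.0):
    """Estimate d minimizing ||C d - (f - f[0])||^2 + lam * ||D d||^2,
    where C is the anti-differentiation (cumulative-sum) operator and
    D a first-difference smoothness operator."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    C = np.tril(np.ones((n, n)))           # running sum = discrete anti-derivative
    D = (np.eye(n) - np.eye(n, k=1))[:-1]  # interior first differences
    A = C.T @ C + lam * D.T @ D            # normal equations (sparse in practice)
    return np.linalg.solve(A, C.T @ (f - f[0]))
```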
13. Scale Dependency of Image Derivatives for Feature Measurement in Curvilinear Structures.
- Author
-
Streekstra, G.J., Van Den Boomgaard, R., and Smeulders, A.W.M.
- Abstract
Extraction of image features is a crucial step in many image analysis tasks, and Gaussian derivative kernels are frequently utilized in feature extraction methods. Blurring of the image due to convolution with these kernels gives rise to feature measures that differ from the intended value in the original image. We propose to solve this problem by explicitly modeling the scale dependency of derivatives, combined with measurement of derivatives at multiple scales. This approach is illustrated in methods for feature measurement in curvilinear structures. Results on 3D confocal images confirm that modeling the scale behavior of derivatives improves center line localization in curved line structures and enables curvature and diameter measurement.
- Published
- 2001
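Measuring the same derivative across scales is a one-liner with Gaussian derivative filters. The Lindeberg-style gamma-normalization shown here is a common convention and an assumption on my part, not necessarily the paper's exact scale model:

```python
import numpy as np
from scipy import ndimage

def multiscale_derivative(img, order=(0, 1), sigmas=(1, 2, 4, 8), gamma=1.0):
    """Gamma-normalized Gaussian derivative of a given order at several scales.

    The sigma**(m*gamma) factor compensates for the amplitude decay that
    Gaussian blurring imposes on an m-th order derivative.
    """
    img = img.astype(float)
    m = sum(order)                                    # total derivative order
    return {s: s ** (m * gamma) * ndimage.gaussian_filter(img, s, order=order)
            for s in sigmas}
```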
14. STAR: A Structure and Texture Aware Retinex Model
- Author
-
Fan Zhu, Ling Shao, Jun Xu, Yingkun Hou, Dongwei Ren, Mengyang Yu, Li Liu, and Haoqian Wang
- Subjects
FOS: Computer and information sciences ,Image derivatives ,Color constancy ,Texture (cosmology) ,Color correction ,Computer Vision and Pattern Recognition (cs.CV) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Computer Science - Computer Vision and Pattern Recognition ,02 engineering and technology ,Star (graph theory) ,Computer Graphics and Computer-Aided Design ,Exponential function ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Algorithm ,Texture mapping ,Software ,STAR model ,Mathematics - Abstract
Retinex theory is developed mainly to decompose an image into illumination and reflectance components by analyzing local image derivatives. In this theory, larger derivatives are attributed to changes in reflectance, while smaller derivatives arise in the smooth illumination. In this paper, we utilize exponentiated local derivatives (with an exponent γ) of an observed image to generate its structure map and texture map. The structure map is produced by amplifying the derivatives with γ > 1, while the texture map is generated by shrinking them with γ < 1. To this end, we design exponential filters for the local derivatives, and show how the choice of exponent γ influences their ability to extract accurate structure and texture maps. The extracted structure and texture maps are employed to regularize the illumination and reflectance components in Retinex decomposition. A novel Structure and Texture Aware Retinex (STAR) model is further proposed for illumination and reflectance decomposition of a single image. We solve the STAR model by an alternating optimization algorithm, in which each sub-problem is transformed into a vectorized least squares regression with a closed-form solution. Comprehensive experiments on commonly tested datasets demonstrate that the proposed STAR model produces better quantitative and qualitative performance than previous competing methods on illumination and reflectance decomposition, low-light image enhancement, and color correction. The code is publicly available at https://github.com/csjunxu/STAR.
- Published
- 2019
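A minimal sketch of the exponentiation step; the gradient magnitude as the "local derivative", the normalization, and the exponent values are my own illustrative choices:

```python
import numpy as np

def exponentiated_derivative_maps(img, gamma_structure=1.5, gamma_texture=0.5):
    """Structure and texture maps from exponentiated local derivatives.

    With magnitudes normalized to [0, 1], gamma > 1 suppresses weak
    derivatives (keeping structure), while gamma < 1 boosts weak
    derivatives (exposing texture).
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-8                 # normalize to [0, 1]
    return mag ** gamma_structure, mag ** gamma_texture
```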
15. Quadratic Penalty Method for Intensity-Based Deformable Image Registration and 4DCT Lung Motion Recovery
- Author
-
Edward M. Castillo
- Subjects
Image derivatives ,Computer science ,Movement ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image registration ,Regularization (mathematics) ,Article ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Quadratic equation ,Image Processing, Computer-Assisted ,Humans ,Penalty method ,Four-Dimensional Computed Tomography ,Coordinate descent ,Lung ,Statistical model ,General Medicine ,Inhalation ,Exhalation ,030220 oncology & carcinogenesis ,Moving least squares ,Algorithm ,Smoothing ,Algorithms - Abstract
Intensity-based deformable image registration (DIR) requires minimizing an image dissimilarity metric. Imaged anatomy, such as bones and vasculature, as well as the resolution of the digital grid, can often cause discontinuities in the corresponding objective function. Consequently, the application of a gradient-based optimization algorithm requires a preprocessing image smoothing to ensure the existence of necessary image derivatives. Simple block matching (exhaustive search) methods do not require image derivative approximations, but their general effectiveness is often hindered by erroneous solutions (outliers). Block match methods are therefore often coupled with a statistical outlier detection method to improve results. Purpose: The purpose of this work is to present a spatially accurate, intensity-based DIR optimization formulation that can be solved with a straightforward gradient-free quadratic penalty algorithm and is suitable for 4D thoracic computed tomography (4DCT) registration. Additionally, a novel regularization strategy based on the well-known leave-one-out robust statistical model cross-validation method is introduced. Methods: The proposed Quadratic Penalty DIR (QPDIR) method minimizes both an image dissimilarity term, which is separable with respect to individual voxel displacements, and a regularization term derived from the classical leave-one-out cross-validation statistical method. The resulting DIR problem lends itself to a quadratic penalty function optimization approach, where each subproblem can be solved by straightforward block coordinate descent iteration. Results: The spatial accuracy of the method was assessed using expert-determined landmarks on ten 4DCT datasets available on www.dir-lab.com. The QPDIR algorithm achieved average millimeter spatial errors between 0.69 (0.91) and 1.19 (1.26) on the ten test cases. On all ten 4DCT test cases, the QPDIR method produced spatial accuracies that are superior or equivalent to those produced by current state-of-the-art methods. Moreover, QPDIR achieved accuracies at the resolution of the landmark error assessment (i.e., the interobserver error) on six of the ten cases. Conclusion: The QPDIR algorithm is based on a simple quadratic penalty function formulation and a regularization term inspired by leave-one-out cross validation. The formulation lends itself to a parallelizable, gradient-free, block coordinate descent numerical optimization method. Numerical results indicate that the method achieves a high spatial accuracy on 4DCT inhale/exhale phases.
- Published
- 2019
16. Differentiation of polyps by clinical colonoscopy via integrated color information, image derivatives and machine learning
- Author
-
Edward Sun, Weiguo Cao, Yongfeng Gao, Yi Wang, Zhengrong Liang, Juan Carlos Bucobo, Marc J. Pomeroy, and Samuel L. Stanley
- Subjects
Image derivatives ,medicine.diagnostic_test ,business.industry ,Computer science ,medicine ,Colonoscopy ,Computer vision ,Artificial intelligence ,business - Published
- 2019
17. Segmentation of perivascular spaces in 7 T MR image using auto-context model with orientation-normalized features
- Author
-
Sang-Hyun Park, Weili Lin, Dinggang Shen, Xiaopeng Zong, and Yaozong Gao
- Subjects
Adult ,Male ,Image derivatives ,Cognitive Neuroscience ,Sensitivity and Specificity ,Article ,Pattern Recognition, Automated ,030218 nuclear medicine & medical imaging ,Machine Learning ,03 medical and health sciences ,Imaging, Three-Dimensional ,0302 clinical medicine ,Discriminative model ,Region of interest ,Image Interpretation, Computer-Assisted ,Humans ,Computer vision ,Segmentation ,Mathematics ,Context model ,business.industry ,Reproducibility of Results ,Pattern recognition ,Cerebral Arteries ,Image Enhancement ,Cerebral Veins ,Cerebral Angiography ,Random forest ,Haar-like features ,Neurology ,Female ,Artificial intelligence ,business ,Classifier (UML) ,Algorithms ,Magnetic Resonance Angiography ,030217 neurology & neurosurgery - Abstract
Quantitative study of perivascular spaces (PVSs) in brain magnetic resonance (MR) images is important for understanding the brain lymphatic system and its relationship with neurological diseases. One of the major challenges is the accurate extraction of PVSs that have very thin tubular structures with various directions in three-dimensional (3D) MR images. In this paper, we propose a learning-based PVS segmentation method to address this challenge. Specifically, we first determine a region of interest (ROI) by using the anatomical brain structure and the vesselness information derived from eigenvalues of image derivatives. Then, in the ROI, we extract a number of randomized Haar features which are normalized with respect to the principal directions of the underlying image derivatives. The classifier is trained by the random forest model that can effectively learn both discriminative features and classifier parameters to maximize the information gain. Finally, a sequential learning strategy is used to further enforce various contextual patterns around the thin tubular structures into the classifier. For evaluation, we apply our proposed method to the 7T brain MR images scanned from 17 healthy subjects aged from 25 to 37. The performance is measured by voxel-wise segmentation accuracy, cluster-wise classification accuracy, and similarity of geometric properties, such as volume, length, and diameter distributions between the predicted and the true PVSs. Moreover, the accuracies are also evaluated on the simulation images with motion artifacts and lacunes to demonstrate the potential of our method in segmenting PVSs from elderly and patient populations. The experimental results show that our proposed method outperforms all existing PVS segmentation methods.
- Published
- 2016
18. Fast and Robust 3D Numerical Method for Coronary Artery Vesselness Diffusion from CTA Images
- Author
-
Hengfei Cui
- Subjects
Image derivatives ,Discretization ,Computer science ,Anisotropic diffusion ,business.industry ,Physics::Medical Physics ,Pattern recognition ,02 engineering and technology ,030204 cardiovascular system & hematology ,Background noise ,03 medical and health sciences ,symbols.namesake ,0302 clinical medicine ,Computer Science::Computer Vision and Pattern Recognition ,0202 electrical engineering, electronic engineering, information engineering ,Medical imaging ,Gaussian function ,symbols ,020201 artificial intelligence & image processing ,Segmentation ,Artificial intelligence ,business ,Diffusion MRI - Abstract
Optimized anisotropic diffusion is commonly used in medical imaging to reduce background noise and tissues and to enhance the vessel structures of interest. In this work, a hybrid diffusion tensor is developed, which integrates Frangi's vesselness measure with a continuous switch, making it suitable for filtering both tubular and planar image structures. In addition, a new 3D diffusion discretization scheme is proposed, in which we apply Gaussian kernel decomposition for computing image derivatives. This scheme is rotationally invariant and shows good isotropic filtering properties on both synthetic and real Computed Tomography Angiography (CTA) data. Segmentation is also performed over the filtered images obtained with different schemes; our method is shown to give better segmentation results, with more thin branches detected. In conclusion, the proposed method should garner wider clinical applicability in Computed Tomography Coronary Angiography (CTCA) image preprocessing.
- Published
- 2018
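For reference, an off-the-shelf Hessian-eigenvalue vesselness of the kind the hybrid tensor builds on can be computed with scikit-image; the scale set and normalization below are arbitrary illustrative choices, not the paper's hybrid filter:

```python
from skimage.filters import frangi

def vesselness(volume, sigmas=(1, 2, 3)):
    """Frangi vesselness response for bright tubular structures in a
    3D volume, normalized to [0, 1]."""
    v = frangi(volume.astype(float), sigmas=sigmas, black_ridges=False)
    return v / (v.max() + 1e-12)
```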
19. Efficient multiplicative noise removal method using isotropic second order total variation
- Author
-
Liang Xiao and Pengfei Liu
- Subjects
Image derivatives ,Mathematical optimization ,Iterative method ,Total variation denoising ,Peak signal-to-noise ratio ,Regularization (mathematics) ,Multiplicative noise ,Matrix decomposition ,Computational Mathematics ,Computational Theory and Mathematics ,Modeling and Simulation ,Maximum a posteriori estimation ,Algorithm ,Mathematics - Abstract
To overcome and reduce the undesirable staircase effect commonly met in total variation (TV) regularization based multiplicative noise removal methods, a novel multiplicative noise removal model based on isotropic second order total variation (ISOTV) is proposed under the maximum a posteriori (MAP) framework. Under the spectral decomposition framework, the ISOTV is first transformed into an equivalent formulation as a novel weighted L1-L2 mixed norm of the second order image derivatives. Then an efficient alternating iterative algorithm is designed to solve the proposed model. Finally, we prove in detail the convergence of the proposed algorithm. A set of experiments on both standard and medical images shows that the proposed ISOTV method yields state-of-the-art results in terms of both peak signal to noise ratio (PSNR) and image perception quality. Specifically, the proposed ISOTV method better reduces the staircase effect and preserves image edges more sharply in medical applications.
- Published
- 2015
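As a concrete reading of such a penalty, the isotropic form reduces to an L1 norm over pixels of the L2 norm of the discrete Hessian; the discretization and equal weighting below are illustrative assumptions rather than the paper's exact weights:

```python
import numpy as np

def isotropic_second_order_tv(u):
    """Sum over pixels of the Frobenius norm of the discrete Hessian:
    an L1 (over pixels) of L2 (over second derivatives) mixed norm."""
    u = u.astype(float)
    uy, ux = np.gradient(u)          # first derivatives (axis 0 = y)
    uxy, uxx = np.gradient(ux)       # second derivatives
    uyy, uyx = np.gradient(uy)
    return np.sqrt(uxx**2 + uyy**2 + uxy**2 + uyx**2).sum()
```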
20. A fast higher degree total variation minimization method for image restoration
- Author
-
Pengfei Liu, Liang Xiao, and Jun Zhang
- Subjects
Mathematical optimization ,Deblurring ,Image derivatives ,Applied Mathematics ,Wiener deconvolution ,020206 networking & telecommunications ,02 engineering and technology ,Computer Science Applications ,Matrix decomposition ,Computational Theory and Mathematics ,Rate of convergence ,Convergence (routing) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Minification ,Algorithm ,Image restoration ,Mathematics - Abstract
Based on the spectral decomposition theory, this paper presents a unified analysis of the higher degree total variation (HDTV) model for image restoration. Under this framework, HDTV is reinterpreted as a family of weighted L1-L2 mixed norms of image derivatives. Due to this equivalent formulation of HDTV, we construct a modified functional for HDTV-based image restoration. The minimization of the modified functional can then be decoupled into two separate sub-problems, which correspond to deblurring and denoising. Thus, we design a fast and efficient image restoration algorithm using an iterative Wiener deconvolution with fast projected gradient denoising (IWD-FPGD) scheme. Moreover, we show the convergence of the proposed IWD-FPGD algorithm for the special case of second-degree total variation. Finally, systematic performance comparisons demonstrate the effectiveness of the proposed IWD-FPGD algorithm in terms of peak signal-to-noise ratio, structural similarity and convergence rate.
- Published
- 2015
21. Focal Track: Depth and Accommodation with Oscillating Lens Deformation
- Author
-
Emma Alexander, Qi Guo, and Todd Zickler
- Subjects
Image derivatives ,Monocular ,business.industry ,Computer science ,Track (disk drive) ,Photodetector ,02 engineering and technology ,Frame rate ,01 natural sciences ,010305 fluids & plasmas ,law.invention ,Lens (optics) ,Noise ,Optics ,Optical imaging ,law ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Focal length ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Accommodation - Abstract
The focal track sensor is a monocular and computationally efficient depth sensor that is based on defocus controlled by a liquid membrane lens. It synchronizes small lens oscillations with a photosensor to produce real-time depth maps by means of differential defocus, and it couples these oscillations with bigger lens deformations that adapt the defocus working range to track objects over large axial distances. To create the focal track sensor, we derive a texture-invariant family of equations that relate image derivatives to scene depth when a lens changes its focal length differentially. Based on these equations, we design a feed-forward sequence of computations that: robustly incorporates image derivatives at multiple scales; produces confidence maps along with depth; and can be trained end-to-end to mitigate against noise, aberrations, and other non-idealities. Our prototype with 1-inch optics produces depth and confidence maps at 100 frames per second over an axial range of more than 75 cm.
- Published
- 2017
22. On accurate dense stereo-matching using a local adaptive multi-cost approach
- Author
-
L. Grammatikopoulos, I. Kalisperakis, G. Karras, and Christos Stentoumis
- Subjects
Image derivatives ,Matching (statistics) ,Pixel ,business.industry ,3D reconstruction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Cost approach ,Absolute difference ,Atomic and Molecular Physics, and Optics ,Computer Science Applications ,symbols.namesake ,Gaussian function ,symbols ,Computer vision ,Artificial intelligence ,Computers in Earth Sciences ,business ,Engineering (miscellaneous) ,Algorithm ,Smoothing ,Mathematics - Abstract
Defining pixel correspondences among images is a fundamental process in fully automating image-based 3D reconstruction. In this contribution, we show that an adaptive local stereo-method of high computational efficiency may provide accurate 3D reconstructions under various scenarios, or even outperform global optimizations. We demonstrate that census matching cost on image gradients is more robust, and we exponentially combine it with the absolute difference in colour and in principal image derivatives. An aggregated cost volume is computed by linearly expanded cross skeleton support regions. A novel consideration is the smoothing of the cost volume via a modified 3D Gaussian kernel, which is geometrically constrained; this offers 3D support to cost computation in order to relax the inherent assumption of “fronto-parallelism” in local methods. The above steps are integrated into a hierarchical scheme, which exploits adaptive windows. Hence, failures around surface discontinuities, typical in hierarchical matching, are addressed. Extensive results are presented for datasets from popular benchmarks as well as for aerial and high-resolution close-range images.
- Published
- 2014
23. Two-stage blind deconvolution scheme using useful priors
- Author
-
Hongjun Zhou, Wei Wang, Shi-hai Xu, Shuai Chen, and Jinjin Zheng
- Subjects
Blind deconvolution ,Image derivatives ,business.industry ,Computer science ,Kernel density estimation ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Pattern recognition ,Atomic and Molecular Physics, and Optics ,Electronic, Optical and Magnetic Materials ,Image (mathematics) ,Prior probability ,Artificial intelligence ,Deconvolution ,Electrical and Electronic Engineering ,business ,Selection (genetic algorithm) - Abstract
Blur from hand shake is a common phenomenon in everyday photography. In this paper, a novel blind deconvolution scheme is proposed to restore a single image blurred by hand shake. The algorithm is subdivided into two main stages: kernel estimation and non-blind deconvolution. In the kernel estimation stage, we propose a cost function that takes a selected map into consideration. In the non-blind deconvolution stage, another cost function is designed using an image derivatives prior. We also present an adaptive kernel size selection method instead of traditional manual selection. Extensive experiments on real-world blurry images are conducted to demonstrate the performance of our algorithm.
- Published
- 2014
24. Mixed Higher Order Variational Model for Image Recovery
- Author
-
Liang Xiao, Pengfei Liu, and Liancun Xiu
- Subjects
Mathematical optimization ,Image derivatives ,Current (mathematics) ,Article Subject ,Degree (graph theory) ,lcsh:Mathematics ,General Mathematics ,General Engineering ,lcsh:QA1-939 ,Thresholding ,Regularization (mathematics) ,Signal ,Matrix decomposition ,Monotone polygon ,lcsh:TA1-2040 ,lcsh:Engineering (General). Civil engineering (General) ,Algorithm ,Mathematics - Abstract
A novel mixed higher order regularizer involving the first and second degree image derivatives is proposed in this paper. Using spectral decomposition, we reformulate the new regularizer as a weighted L1-L2 mixed norm of image derivatives. Due to this equivalent formulation, an efficient fast projected gradient algorithm combined with monotone fast iterative shrinkage thresholding, called FPG-MFISTA, is designed to solve the resulting variational image recovery problems under a majorization-minimization framework. Finally, we demonstrate the effectiveness of the proposed regularization scheme through experimental comparisons with the total variation (TV) scheme, the nonlocal TV scheme, and current second degree methods. Specifically, the proposed approach achieves better results than related state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR) and restoration quality.
- Published
- 2014
25. Monocular, boundary preserving joint recovery of scene flow and depth
- Author
-
Ismail Ben Ayed, Yosra Mathlouthi, and Amar Mitiche
- Subjects
Image derivatives ,Computer Networks and Communications ,Computer science ,depth ,Optical flow ,02 engineering and technology ,Classification of discontinuities ,Regularization (mathematics) ,Image sequence analysis ,lcsh:QA75.5-76.95 ,03 medical and health sciences ,0302 clinical medicine ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Computer vision ,Monocular ,3D motion ,business.industry ,regularization ,Nonlinear system ,Variational method ,Hardware and Architecture ,ICT ,020201 artificial intelligence & image processing ,Scene flow ,Artificial intelligence ,Minification ,lcsh:Electronic computers. Computer science ,business ,Algorithm ,030217 neurology & neurosurgery ,Software ,L1 regularization ,Information Systems - Abstract
Variational joint recovery of scene flow and depth from a single image sequence, rather than from a stereo sequence as others required, was investigated in Mitiche et al. (2015) using an integral functional with a term of conformity of scene flow and depth to the image sequence spatiotemporal variations, and L2 regularization terms for smooth depth field and scene flow. The resulting scheme was analogous to the Horn and Schunck optical flow estimation method, except that the unknowns were depth and scene flow rather than optical flow. Several examples were given to show the basic potency of the method: it was able to recover good depth and motion, except at their boundaries, because L2 regularization is blind to discontinuities, which it smooths indiscriminately. The method that we study in this paper generalizes the formulation of Mitiche et al. (2015) to L1 regularization, so that it computes boundary-preserving estimates of both depth and scene flow. The image derivatives, which appear as data in the functional, are computed from the recorded image sequence also by a variational method, which uses L1 regularization to preserve their discontinuities. Although L1 regularization yields nonlinear Euler-Lagrange equations for the minimization of the objective functional, these can be solved efficiently. The advantages of the generalization, namely sharper computed depth and three-dimensional motion, are demonstrated in experiments with real and synthetic images, which compare the results of L1 versus L2 regularization of depth and motion, as well as the results using L1 rather than L2 regularization of image derivatives.
- Published
- 2016
26. No-reference image quality assessment based on high order derivatives
- Author
-
Yuming Fang, Weisi Lin, and Qiaohong Li
- Subjects
Image derivatives ,Computer science ,Image quality ,business.industry ,Feature vector ,Binary image ,Feature extraction ,020206 networking & telecommunications ,Pattern recognition ,02 engineering and technology ,Automatic image annotation ,Image texture ,Feature (computer vision) ,Histogram ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,Image warping ,business ,Image gradient ,Feature detection (computer vision) - Abstract
Research in human visual perception has found that the sense of natural scenes cannot be conveyed only through lines and edges; it also needs knowledge of the texture regions within the image, which can be obtained through the analysis of higher derivatives. Inspired by research in neuroscience showing that high order derivatives can capture the details of image structure, we propose a novel, simple yet effective blind image quality assessment (IQA) metric based on high order derivatives (BHOD). In the proposed metric, we extract multi-scale structural features from image derivatives up to fourth order. Support vector regression (SVR) is used to learn the mapping between the feature space and subjective opinion scores. The proposed method is extensively evaluated on three image databases and shows highly competitive performance against state-of-the-art NR-IQA methods.
- Published
- 2016
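A rough sketch of such a pipeline: pooled statistics of Gaussian derivative responses up to fourth order, mapped to scores with SVR. The pooling (mean/std), single scale, and kernel choice are my simplifications rather than the BHOD specification, and train_images and subjective_scores are hypothetical placeholders:

```python
import numpy as np
from scipy import ndimage
from sklearn.svm import SVR

def high_order_derivative_features(img, sigma=1.0, max_order=4):
    """Mean/std of every Gaussian derivative response up to max_order."""
    img = img.astype(float)
    feats = []
    for m in range(1, max_order + 1):
        for oy in range(m + 1):       # all mixed derivatives of total order m
            d = ndimage.gaussian_filter(img, sigma, order=(oy, m - oy))
            feats += [d.mean(), d.std()]
    return np.array(feats)

# X = np.stack([high_order_derivative_features(im) for im in train_images])
# quality_model = SVR(kernel="rbf").fit(X, subjective_scores)
```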
27. Range Image Derivatives for GRCM on 2.5D Face Recognition
- Author
-
Thian Song Ong, Lee Ying Chong, and Andrew Beng Jin Teoh
- Subjects
Image derivatives ,Computer science ,Covariance matrix ,business.industry ,010401 analytical chemistry ,02 engineering and technology ,01 natural sciences ,Facial recognition system ,0104 chemical sciences ,Range (mathematics) ,Feature (computer vision) ,Tensor (intrinsic definition) ,Face (geometry) ,0202 electrical engineering, electronic engineering, information engineering ,Three-dimensional face recognition ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,business - Abstract
2.5D face recognition, which leverages both texture and range facial images, often outperforms texture-only 2D face recognition, as the former provides additional unique information. However, 2.5D face recognition naturally incurs a higher computational load, since two types of data are involved. In this paper, we investigate the possibility of using the range facial image alone for recognition. The Gabor-based region covariance matrix (GRCM) is a flexible face feature descriptor that is capable of capturing the geometrical and statistical properties of a facial image by fusing diverse facial features into a covariance matrix. Here, we extract several feature derivatives from the range facial image for GRCM. Since GRCM resides on the Tensor manifold, geodesic and re-parameterized distances on the Tensor manifold are used as dissimilarity measures between two GRCMs. The accuracy of range image derivatives with several distance metrics on the Tensor manifold is thus explored. Experimental results show the effectiveness of the range image derivatives and the flexibility of the GRCM in 2.5D face recognition.
- Published
- 2016
28. Focal Flow: Measuring Distance and Velocity with Defocus and Differential Motion
- Author
-
Qi Guo, Emma Alexander, Sanjeev J. Koppal, Todd Zickler, and Steven J. Gortler
- Subjects
Image derivatives ,Simple lens ,Computer science ,business.industry ,Aperture ,Gaussian ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Optical flow ,020207 software engineering ,02 engineering and technology ,law.invention ,symbols.namesake ,Flow (mathematics) ,law ,Depth map ,Computer Science::Computer Vision and Pattern Recognition ,0202 electrical engineering, electronic engineering, information engineering ,Pinhole camera ,symbols ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,business ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
We present the focal flow sensor. It is an unactuated, monocular camera that simultaneously exploits defocus and differential motion to measure a depth map and a 3D scene velocity field. It does so using an optical-flow-like, per-pixel linear constraint that relates image derivatives to depth and velocity. We derive this constraint, prove its invariance to scene texture, and prove that it is exactly satisfied only when the sensor’s blur kernels are Gaussian. We analyze the inherent sensitivity of the ideal focal flow sensor, and we build and test a prototype. Experiments produce useful depth and velocity information for a broader set of aperture configurations, including a simple lens with a pillbox aperture.
- Published
- 2016
29. Boundary Preserving Variational Image Differentiation
- Author
-
Amar Mitiche, Yosra Mathlouthi, and Ismail Ben Ayed
- Subjects
Image derivatives ,Discretization ,Computer science ,Finite difference ,Optical flow ,02 engineering and technology ,Regularization (mathematics) ,03 medical and health sciences ,Nonlinear system ,0302 clinical medicine ,Variational method ,0202 electrical engineering, electronic engineering, information engineering ,Applied mathematics ,Partial derivative ,020201 artificial intelligence & image processing ,030217 neurology & neurosurgery - Abstract
The purpose of this study is to investigate image differentiation by a boundary preserving variational method. The method minimizes a functional composed of an anti-differentiation data discrepancy term and an L1 regularization term. For each partial derivative of the image, the anti-differentiation term biases the minimizer toward a function which integrates to the image up to an additive constant, while the regularization term biases it toward a function smooth everywhere except across image edges. A discretization of the functional's Euler-Lagrange equations gives a large scale system of nonlinear equations that, however, is sparse and "almost" linear, which suggests a resolution by successive linear approximations. The method is investigated in two important computer vision problems, namely optical flow and scene flow estimation, where image differentiation is used and is ordinarily done by local averaging of finite image differences. We present several experiments, which show that motion fields are more accurate when computed using image derivatives evaluated by regularized variational differentiation than with conventional averaging of finite differences.
- Published
- 2016
30. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction
- Author
-
Ailong Cai, Bin Yan, Linyuan Wang, Hanming Zhang, Guoen Hu, and Lei Li
- Subjects
Image derivatives ,Optimization problem ,Computer science ,Proximal point method ,lcsh:Medicine ,02 engineering and technology ,Regularization (mathematics) ,030218 nuclear medicine & medical imaging ,Diagnostic Radiology ,0302 clinical medicine ,0202 electrical engineering, electronic engineering, information engineering ,Image Processing, Computer-Assisted ,Medicine and Health Sciences ,lcsh:Science ,Tomography ,Fast Fourier transforms ,Multidisciplinary ,Fourier Analysis ,Radiology and Imaging ,Applied Mathematics ,Simulation and Modeling ,Bone Imaging ,Physical Sciences ,Engineering and Technology ,020201 artificial intelligence & image processing ,Algorithm ,Algorithms ,Research Article ,Optimization ,Iterative method ,Imaging Techniques ,Fast Fourier transform ,Neuroimaging ,Iterative reconstruction ,Digital Imaging ,Research and Analysis Methods ,03 medical and health sciences ,Diagnostic Medicine ,Humans ,Augmented Lagrangian method ,lcsh:R ,Biology and Life Sciences ,Computed Axial Tomography ,X-Ray Radiography ,Mathematical and statistical techniques ,lcsh:Q ,Tomography, X-Ray Computed ,Mathematics ,Neuroscience - Abstract
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to maximize a sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of inner subproblems. The accuracy and efficiency on simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
- Published
- 2016
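For context, a generalized p-shrinkage mapping in the style of Chartrand is sketched below; whether this matches the paper's exact mapping is an assumption on my part, and the epsilon guard is mine:

```python
import numpy as np

def p_shrinkage(x, lam, p):
    """Generalized p-shrinkage: sign(x) * max(|x| - lam**(2-p) * |x|**(p-1), 0).

    Reduces to ordinary soft thresholding when p = 1; for p < 1 it
    penalizes small coefficients more aggressively, promoting sparsity.
    """
    mag = np.abs(x)
    safe = np.maximum(mag, 1e-12)     # avoid 0 ** (p - 1) at zero entries
    return np.sign(x) * np.maximum(mag - lam ** (2.0 - p) * safe ** (p - 1.0), 0.0)
```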
31. AVA Christmas Meeting, London, UK 18 December 2012
- Author
-
Stephen J. Anderson and Keith Langley
- Subjects
Image derivatives ,Signal processing ,business.industry ,Experimental and Cognitive Psychology ,Invariant (physics) ,Scale invariance ,Sensory Systems ,Ophthalmology ,symbols.namesake ,Subjective constancy ,Artificial Intelligence ,Taylor series ,symbols ,Computer vision ,Artificial intelligence ,business ,Scaling ,Algorithm ,Linear filter ,Mathematics - Abstract
Cohen (IEEE Trans. Signal Processing, 41, 3275-3292, 1993) suggested that a signal's scale can be regarded as a 'physical attribute' that decouples the size of a phenomenon from its shape. His idea, in combination with invariant signal representations, has clear ramifications for the phenomenon of size constancy in vision science. In the visual system, it is hoped that size constancy might be derived from the collected responses of a distribution of isotropic spatial filters whose underlying spatial extents are systematically varied from coarse to fine scales according to a diffusion model of image blur: the so-called scale-space representation (e.g. Koenderink, Biol. Cyb. 50, 363-370, 1984). We demonstrate that this 'blurring' approach is flawed. The reason is that scaling and blurring are fundamentally different image operations: applying linear filters whose degrees of blur differ before attempting to extract scale information can seriously impair one's ability to extract the physical attribute of scale. We show that local scale (and position) invariant signal representations can be derived by finding unknown coefficients that allow one to predict the image intensity signal from a power series expansion. We further show that the inverse of this power series is a Taylor expansion of discrete local image derivatives whose coefficients are invariant of position and scale. The expansion retains the benefit of efficiency when representing 2D shape. Finally, we show how our ideas link scaling, fractional orders of differentiation and pyramid sampling as a means for determining the scale of a 2D shape. We suggest that similar computations underpin position and scale invariant computations in the visual system.
- Published
- 2012
32. Higher Degree Total Variation (HDTV) Regularization for Image Recovery
- Author
-
Mathews Jacob and Yue Hu
- Subjects
Image derivatives ,Mathematical optimization ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Reproducibility of Results ,Wavelet transform ,Directional derivative ,Image Enhancement ,Sensitivity and Specificity ,Computer Graphics and Computer-Aided Design ,Regularization (mathematics) ,Wavelet ,Norm (mathematics) ,Image Interpretation, Computer-Assisted ,Penalty method ,Artifacts ,Algorithm ,Algorithms ,Software ,Smoothing ,ComputingMethodologies_COMPUTERGRAPHICS ,Mathematics - Abstract
We introduce novel image regularization penalties to overcome the practical problems associated with the classical total variation (TV) scheme. Motivated by novel reinterpretations of the classical TV regularizer, we derive two families of functionals involving higher degree partial image derivatives; we term these families isotropic and anisotropic higher degree TV (HDTV) penalties, respectively. The isotropic penalty is the L1-L2 mixed norm of the directional image derivatives, while the anisotropic penalty is the separable L1 norm of the directional derivatives. These functionals inherit the desirable properties of standard TV schemes such as invariance to rotations and translations, preservation of discontinuities, and convexity. The use of mixed norms in the isotropic penalty encourages joint sparsity of the directional derivatives at each pixel, thus encouraging isotropic smoothing. In contrast, the fully separable norm in the anisotropic penalty ensures the preservation of discontinuities while continuing to smooth along line-like features; this scheme thus enhances line-like image characteristics analogously to standard TV. We also introduce efficient majorize-minimize algorithms to solve the resulting optimization problems. Numerical comparison of the proposed scheme with the classical TV penalty, current second-degree methods, and wavelet algorithms clearly demonstrates the performance improvement. Specifically, the proposed algorithms minimize the staircase and ringing artifacts that are common with TV and wavelet schemes, while better preserving the singularities. We also observe that the anisotropic HDTV penalty provides consistently improved reconstructions compared with the isotropic HDTV penalty.
- Published
- 2012
33. Direct and joint estimation of scene flow and depth from a monocular image sequence.
- Author
-
Mathlouthi, Yosra
- Abstract
In this thesis, we study the joint estimation of dense scene flow and relative depth from a monocular image sequence. We begin by developing a basic scheme that states the problem in variational form, via a functional composed of two terms: a term of conformity to the spatiotemporal data of the image sequence and a regularization term. The data term relates the three-dimensional (3D) velocity and the depth to the visual spatiotemporal variations. It is obtained by replacing the coordinates of the optical velocity vector in the Horn and Schunck optical flow gradient constraint with their expressions in terms of scene flow and depth. In this form, our problem statement is analogous to the classical optical flow estimation of Horn and Schunck, although it involves scene flow and depth instead of image motion. First, we use an L2 regularization term, which ensures a solution that is smooth everywhere in the image. Discretizing the Euler-Lagrange equations corresponding to our functional yields a large-scale sparse system of linear equations. We write this system explicitly and order its equations so that its matrix is symmetric positive definite, which implies that Gauss-Seidel iterations converge point by point or block by block, and offers a very efficient means of solving the Euler-Lagrange equations. Second, an improvement of the method is proposed in a version that preserves the boundaries of motion and of objects in the scene. The L1 regularization term allows smoothing of the solution within uniform regions while inhibiting it across motion and depth boundaries. Discretizing the Euler-Lagrange equations corresponding to the L1-type regularization functional yields a large sparse system of nonlinear equations…
- Published
- 2016
34. Image Quality Assessment Based on Multi-Order Visual Comparison
- Author
-
Wen Sun, Fei Zhou, and Qingmin Liao
- Subjects
Image derivatives ,Computer science ,Image quality ,business.industry ,Visual comparison ,Artificial Intelligence ,Hardware and Architecture ,Order (business) ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Software - Published
- 2014
- Full Text
- View/download PDF
35. Evaluating sharpness functions for automated scanning electron microscopy
- Author
-
Joseph M. Maubach, R.M.M. Mattheij, and Maria E. Rudnaya
- Subjects
Autofocus ,Image derivatives ,Histology ,Materials science ,business.industry ,Scanning electron microscope ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Scanning confocal electron microscopy ,Pathology and Forensic Medicine ,law.invention ,symbols.namesake ,Optics ,Fourier transform ,law ,symbols ,business - Abstract
Fast and reliable autofocus techniques are an important topic for automated scanning electron microscopy. In this paper, different autofocus techniques are discussed and applied to a variety of experimental through-focus series of scanning electron microscopy images with different geometries. The quality-evaluation procedure is described, and for a variety of scanning electron microscope samples it is demonstrated that techniques based on image derivatives and Fourier transforms are in general better than statistical, intensity-based and histogram-based techniques. Further, it is shown that varying an extra parameter can dramatically increase the quality of an autofocus technique.
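The paper's conclusion is easy to reproduce in outline. Below is a hedged sketch of one derivative-based and one Fourier-based sharpness score of the general kind it evaluates; the exact functions compared in the paper differ.

```python
import numpy as np

def derivative_sharpness(img):
    """Energy of the first image derivatives."""
    gy, gx = np.gradient(img.astype(float))
    return np.mean(gx**2 + gy**2)

def fourier_sharpness(img, radius=5):
    """Fraction of spectral energy outside a low-frequency disc."""
    F = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    high = (x - w // 2)**2 + (y - h // 2)**2 > radius**2
    return np.abs(F[high]).sum() / np.abs(F).sum()

# Autofocus picks the frame of a through-focus series with the highest score:
# best_frame = max(series, key=derivative_sharpness)
```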
- Published
- 2010
- Full Text
- View/download PDF
36. Change Detection in Optical Remote Sensing Images Using Difference-Based Methods and Spatial Information
- Author
-
Shohreh Kasaei and Rouhollah Dianat
- Subjects
Polynomial regression ,Spatial relation ,Image derivatives ,Pixel ,Pattern recognition (psychology) ,Electrical and Electronic Engineering ,Image sensor ,Geotechnical Engineering and Engineering Geology ,Spatial analysis ,Change detection ,Mathematics ,Remote sensing - Abstract
A new and general framework, called modified polynomial regression (MPR), is introduced in this letter to detect changes in remote sensing images. It is an improvement of the conventional polynomial regression (CPR) method. Most change detection (CD) methods, including CPR, do not consider the spatial relations among image pixels. To improve CPR, our proposed framework incorporates spatial information into the CD process by using linear spatial-oriented image operators. It is proved that MPR preserves the affine invariance property of CPR. A realization of MPR is proposed that employs image derivatives to account for spatial structure. Experimental results show the superiority of the proposed method over the CPR method and three other difference-based CD methods, namely, simple differencing, linear chronochrome CD, and multivariate alteration detection.
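As an illustration of the MPR idea, a hedged sketch follows: augment a per-pixel polynomial regression from the first image to the second with derivative regressors as the spatial operators, and read large residuals as change. This is one possible realization under assumed names, not the authors' exact formulation.

```python
import numpy as np

def mpr_change_map(img1, img2, degree=2):
    """Regress img2 on polynomial terms of img1 plus its derivatives;
    large residuals indicate change."""
    a = img1.astype(float)
    gy, gx = np.gradient(a)
    cols = [a.ravel()**d for d in range(degree + 1)]   # 1, I, I^2, ...
    cols += [gx.ravel(), gy.ravel()]                   # spatial operators
    A = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(A, img2.astype(float).ravel(), rcond=None)
    residual = img2.astype(float).ravel() - A @ coef
    return np.abs(residual).reshape(img1.shape)
```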
- Published
- 2010
- Full Text
- View/download PDF
37. Fast motion deblurring
- Author
-
Sunghyun Cho and Seungyong Lee
- Subjects
Image derivatives ,Deblurring ,business.industry ,Motion blur ,Kernel density estimation ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Computer Graphics and Computer-Aided Design ,Computer Science::Computer Vision and Pattern Recognition ,Conjugate gradient method ,Computer vision ,Artificial intelligence ,Deconvolution ,business ,Algorithm ,Image restoration ,Mathematics - Abstract
This paper presents a fast deblurring method that produces a deblurring result from a single image of moderate size in a few seconds. We accelerate both latent image estimation and kernel estimation in an iterative deblurring process by introducing a novel prediction step and by working with image derivatives rather than pixel values. In the prediction step, we use simple image processing techniques to predict strong edges from an estimated latent image; these edges are then used solely for kernel estimation. With this approach, a computationally efficient Gaussian prior becomes sufficient for the deconvolution that estimates the latent image, as small deconvolution artifacts can be suppressed in the prediction. For kernel estimation, we formulate the optimization function using image derivatives and accelerate the numerical process by reducing the number of Fourier transforms needed for the conjugate gradient method. We also show that this formulation yields a smaller condition number of the numerical system than the use of pixel values, which gives faster convergence. Experimental results demonstrate that our method runs an order of magnitude faster than previous work, while the deblurring quality is comparable. A GPU implementation facilitates further speed-up, making our method fast enough for practical use.
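The derivative-domain kernel estimation step can be sketched in a few lines of FFT algebra: given the blurred image and a predicted sharp image, solve a Tikhonov-regularized least-squares problem over both derivative directions. The closed-form Fourier solution below is illustrative only; the paper uses conjugate gradients and additional terms.

```python
import numpy as np

def estimate_kernel(B, P, alpha=1.0):
    """Closed-form Fourier solution of min_k sum_d ||dB - k * dP||^2 + a||k||^2
    over the x- and y-derivatives d (crop to the kernel support in practice)."""
    B, P = B.astype(float), P.astype(float)
    num, den = 0.0, alpha
    for axis in (0, 1):
        dB = np.diff(B, axis=axis, append=B.take([-1], axis=axis))
        dP = np.diff(P, axis=axis, append=P.take([-1], axis=axis))
        FB, FP = np.fft.fft2(dB), np.fft.fft2(dP)
        num = num + np.conj(FP) * FB
        den = den + np.abs(FP)**2
    k = np.real(np.fft.ifft2(num / den))
    k = np.maximum(np.fft.fftshift(k), 0)   # blur kernels are non-negative
    return k / k.sum()
```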
- Published
- 2009
- Full Text
- View/download PDF
38. Instantaneous 3D motion from image derivatives using the Least Trimmed Square regression
- Author
-
Angel D. Sappa and Fadi Dornaika
- Subjects
Signal processing ,Image derivatives ,Plane (geometry) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Robust statistics ,Image processing ,Real image ,Square (algebra) ,Artificial Intelligence ,Motion estimation ,Signal Processing ,Statistics ,Computer Vision and Pattern Recognition ,Algorithm ,Software ,Mathematics - Abstract
This paper presents a new technique for instantaneous 3D motion estimation. The main contributions are as follows. First, we show that the 3D camera or scene velocity can be retrieved from image derivatives alone, assuming that the scene contains a dominant plane. Second, we propose a new robust algorithm that simultaneously provides the Least Trimmed Square solution and the percentage of inliers (the non-contaminated data). Experiments on both synthetic and real image sequences demonstrate the effectiveness of the developed method and show that the new robust approach can outperform classical robust schemes.
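For reference, here is a minimal sketch of the Least Trimmed Squares idea the method builds on, via concentration steps that refit on the best-fitting fraction of samples. The paper's algorithm additionally estimates the inlier percentage itself, which this sketch takes as a fixed parameter.

```python
import numpy as np

def lts_fit(A, b, h_frac=0.6, n_iters=20, seed=0):
    """Least Trimmed Squares via concentration steps: iterate least squares
    on the h samples with the smallest squared residuals."""
    rng = np.random.default_rng(seed)
    h = int(h_frac * len(b))
    start = rng.choice(len(b), size=A.shape[1], replace=False)
    x, *_ = np.linalg.lstsq(A[start], b[start], rcond=None)
    for _ in range(n_iters):
        r2 = (A @ x - b)**2
        keep = np.argsort(r2)[:h]          # current inlier candidates
        x, *_ = np.linalg.lstsq(A[keep], b[keep], rcond=None)
    return x, keep
```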
- Published
- 2009
- Full Text
- View/download PDF
39. Improving the SIFT descriptor with smooth derivative filters
- Author
-
Plinio Moreno, Alexandre Bernardino, and José Santos-Victor
- Subjects
Image derivatives ,Pixel ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Point set registration ,Image processing ,Non-local means ,Object detection ,Gabor filter ,Artificial Intelligence ,Computer Science::Computer Vision and Pattern Recognition ,Signal Processing ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Software ,Mathematics ,Feature detection (computer vision) - Abstract
Several approaches to object recognition make extensive use of local image information extracted at interest points, known as local image descriptors. State-of-the-art methods perform a statistical analysis of the gradient information around the interest point, which often relies on the computation of image derivatives with pixel-differencing methods. In this paper, we show the advantages of using smooth derivative filters instead of pixel differences on the performance of a well-known local image descriptor. The method is based on the use of odd Gabor functions whose parameters are selectively tuned as a function of the local image properties under analysis. We perform an extensive experimental evaluation showing that our method increases the distinctiveness of local image descriptors for image region matching and object recognition.
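The core substitution is simple to sketch: compute descriptor gradients with smooth derivative filters rather than pixel differences. Below, Gaussian derivative filters stand in for the paper's selectively tuned odd Gabor functions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_gradient(img, sigma=1.2):
    """Gradient magnitude and orientation from smooth derivative filters,
    ready to feed SIFT-style orientation histograms."""
    a = img.astype(float)
    gx = gaussian_filter(a, sigma, order=(0, 1))   # smooth d/dx
    gy = gaussian_filter(a, sigma, order=(1, 0))   # smooth d/dy
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```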
- Published
- 2009
- Full Text
- View/download PDF
40. Edge-Based Color Constancy via Support Vector Regression
- Author
-
De Xu, Bing Li, and Ning Wang
- Subjects
Image derivatives ,Color histogram ,Pixel ,Color constancy ,Computer science ,Color normalization ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Color balance ,Edge detection ,Support vector machine ,Artificial Intelligence ,Hardware and Architecture ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Software - Abstract
Color constancy is the ability to measure the colors of objects independent of the light source color. Various methods have been proposed to handle this problem, most of which depend on the statistical distributions of pixel values. Recent studies show that incorporating image derivatives is more effective than the direct use of pixel values. Based on this idea, a novel edge-based color constancy algorithm using support vector regression (SVR) is proposed. Contrary to the existing SVR color constancy algorithm, which is computed from the zero-order structure of images, our method is based on the higher-order structure of images. The experimental results show that our algorithm is more effective than the zero-order SVR color constancy methods.
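A hedged sketch of the edge-based SVR pipeline described: derivative statistics per channel as features, and one SVR per illuminant component. The feature set, names and training interface are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR

def edge_features(img):
    """Simple per-channel derivative statistics for an H x W x 3 image."""
    feats = []
    for c in range(3):
        gy, gx = np.gradient(img[..., c].astype(float))
        mag = np.hypot(gx, gy)
        feats += [mag.mean(), mag.std(), np.percentile(mag, 95)]
    return np.array(feats)

def train_illuminant_svrs(train_imgs, train_illums):
    """One SVR per RGB illuminant component; train_illums is assumed N x 3."""
    X = np.stack([edge_features(im) for im in train_imgs])
    return [SVR(kernel="rbf").fit(X, train_illums[:, c]) for c in range(3)]
```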
- Published
- 2009
- Full Text
- View/download PDF
41. Corner validation based on extracted corner properties
- Author
-
Yasemin Yardimci and Yalin Bastanlar
- Subjects
Image derivatives ,Orientation (computer vision) ,Covariance matrix ,business.industry ,Corner detection ,Pattern recognition ,Hardware_PERFORMANCEANDRELIABILITY ,Filter (signal processing) ,ComputerApplications_GENERAL ,Signal Processing ,Pattern recognition (psychology) ,Hardware_INTEGRATEDCIRCUITS ,Computer vision ,Computer Vision and Pattern Recognition ,Noise (video) ,Artificial intelligence ,business ,Pose ,Software ,Mathematics - Abstract
We developed a method to validate and filter a large set of previously obtained corner points. We derived the necessary relationships between image derivatives and estimates of corner angle, orientation and contrast. Commonly used cornerness measures, based on auto-correlation matrix estimates of the image derivatives, are expressed in terms of these estimated corner properties. A candidate corner is validated if the cornerness score obtained directly from the image is sufficiently close to the cornerness score of an ideal corner with the estimated orientation, angle and contrast. We tested this algorithm on both real and synthetic images and observed that the procedure significantly improves corner detection rates based on human evaluations. We also tested the accuracy of our corner property estimates under various noise conditions. The extracted corner properties can further be used for tasks such as feature point matching, object recognition and pose estimation.
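The cornerness score the validation starts from is the standard one computed from the auto-correlation (structure) matrix of image derivatives; a minimal Harris-style sketch follows. The paper's contribution, comparing this score against the score of an ideal corner with the estimated properties, is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img, sigma_d=1.0, sigma_i=2.0, k=0.04):
    """Harris cornerness from the auto-correlation matrix of derivatives."""
    a = img.astype(float)
    gx = gaussian_filter(a, sigma_d, order=(0, 1))
    gy = gaussian_filter(a, sigma_d, order=(1, 0))
    axx = gaussian_filter(gx * gx, sigma_i)   # integrated matrix entries
    axy = gaussian_filter(gx * gy, sigma_i)
    ayy = gaussian_filter(gy * gy, sigma_i)
    det = axx * ayy - axy**2
    trace = axx + ayy
    return det - k * trace**2
```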
- Published
- 2008
- Full Text
- View/download PDF
42. Incremental Refinement of Image Salient-Point Detection
- Author
-
Yiannis Andreopoulos and Ioannis Patras
- Subjects
Image derivatives ,Computer science ,Feature extraction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Sensitivity and Specificity ,Edge detection ,Pattern Recognition, Automated ,Artificial Intelligence ,Image Interpretation, Computer-Assisted ,Digital image processing ,Computer vision ,Detection theory ,Image sensor ,Feature detection (computer vision) ,business.industry ,Binary image ,Reproducibility of Results ,Pattern recognition ,Image Enhancement ,Computer Graphics and Computer-Aided Design ,Salient ,Computer Science::Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Algorithms ,Software - Abstract
Low-level image analysis systems typically detect "points of interest", i.e., areas of natural images that contain corners or edges. Most of the robust and computationally efficient detectors proposed for this task use the autocorrelation matrix of the localized image derivatives. Although the performance of such detectors and their suitability for particular applications have been studied in the relevant literature, their behavior under limited input source (image) precision or limited computational or energy resources is largely unknown. All existing frameworks assume that the input image is readily available for processing and that sufficient computational and energy resources exist for the completion of the result. Nevertheless, recent advances in incremental image sensors and compressed sensing, as well as the demand for low-complexity scene analysis in sensor networks, now challenge these assumptions. In this paper, we investigate an approach to compute salient points of images incrementally, i.e., the salient point detector can operate with a coarsely quantized input image representation and successively refine the result (the derived salient points) as the image precision is refined by the sensor. This has the advantage that image sensing and salient point detection can be terminated at any input image precision (e.g., a bound set by the sensory equipment, by computation, or by the salient point accuracy required by the application), with the salient points obtained under that precision readily available. We focus on the popular detector proposed by Harris and Stephens and demonstrate how such an approach can operate when the image samples are refined in a bitwise manner, i.e., the image bitplanes are received one by one from the image sensor. We estimate the energy required for image sensing as well as the computation required for the salient point detection based on stochastic source modeling. The computation and energy required by the proposed incremental refinement approach are compared against a conventional salient-point detector realization that operates directly on each source precision and cannot refine the result. Our experiments demonstrate the feasibility of incremental approaches for salient point detection in various classes of natural images. In addition, a first comparison between the results obtained by the intermediate detectors is presented, along with a novel application for adaptive low-energy image sensing based on points of saliency.
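The refinement loop itself is compact to sketch: rebuild the image one bitplane at a time, most significant first, and rerun a derivative-based detector (for example, the Harris-style response sketched under entry 41) at each precision. Names are illustrative; the paper's termination criteria and energy model are not reproduced.

```python
import numpy as np

def bitplane_refinement(img_uint8, detector, n_planes=8):
    """Accumulate bitplanes from most to least significant and recompute the
    detector response at each precision level."""
    partial = np.zeros(img_uint8.shape)
    responses = []
    for b in range(7, 7 - n_planes, -1):
        partial += ((img_uint8 >> b) & 1).astype(float) * (1 << b)
        responses.append(detector(partial))   # refinable intermediate result
    return responses
```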
- Published
- 2008
- Full Text
- View/download PDF
43. The Second Order Local-Image-Structure Solid
- Author
-
Lewis D. Griffin
- Subjects
Image derivatives ,Euclidean space ,Applied Mathematics ,Mathematical analysis ,Curvature ,Scale space ,Computational Theory and Mathematics ,Artificial Intelligence ,Norm (mathematics) ,Embedding ,Computer Vision and Pattern Recognition ,Affine transformation ,Software ,Orbifold ,Mathematics - Abstract
Characterization of second-order local image structure by a 6D vector (or jet) of Gaussian derivative measurements is considered. We consider the effect on jets of a group of transformations (affine intensity-scaling, image rotation and reflection, and their compositions) that preserve intrinsic image structure. We show how this group stratifies the jet space into a system of orbits. Considering individual orbits as points, a 3D orbifold is defined. We propose a norm on jet space, which we use to induce a metric on the orbifold. The metric tensor shows that the orbifold is intrinsically curved. To allow visualization of the orbifold and numerical computation with it, we present a mildly distorting but volume-preserving embedding of it into Euclidean 3-space. We call the resulting shape, which is like a flattened lemon, the second-order local-image-structure solid. As an example use of the solid, we compute the distribution of local structures in noise and natural images. For noise images, analytical results are possible, and they agree with the empirical results. For natural images, an excess of locally 1D structure is found.
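For concreteness, here is a minimal sketch of the 6D jet of Gaussian derivative measurements the analysis operates on (orders 0, 1 and 2 at a single scale); the function name is illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_jet(img, y, x, sigma=2.0):
    """6D jet (I, Ix, Iy, Ixx, Ixy, Iyy) of Gaussian derivatives at (y, x)."""
    orders = [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
    a = img.astype(float)
    return np.array([gaussian_filter(a, sigma, order=o)[y, x] for o in orders])
```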
- Published
- 2007
- Full Text
- View/download PDF
44. Object Segmentation and Ground Truth in 3D Embryonic Imaging
- Author
-
Andrew C. Oates, Koichiro Uriu, Bhavna Rajasekaran, Guillaume Valentin, and Jean-Yves Tinevez
- Subjects
Image derivatives ,Embryology ,Computer science ,Imaging techniques ,Bioinformatics ,Image analysis ,Morphogenesis ,Segmentation ,Zebrafish ,Ground truth ,Microscopy ,Signal to noise ratio ,Applied Mathematics ,Simulation and Modeling ,Fishes ,Animal Models ,Osteichthyes ,Vertebrates ,Engineering and Technology ,Algorithms ,Scale-space segmentation ,Image processing ,Research and Analysis Methods ,Chimerism ,Model Organisms ,Imaging, Three-Dimensional ,Animals ,Cell Nucleus ,Segmentation-based object categorization ,Embryos ,Organisms ,Biology and Life Sciences ,Pattern recognition ,Morphogenic segmentation ,Signal Processing ,Artificial intelligence ,Mathematics ,Developmental Biology
Erratum in: Correction: Object Segmentation and Ground Truth in 3D Embryonic Imaging. Bhavna R, Uriu K, Valentin G, Tinevez JY, Oates AC. PLoS One. 2016;11(8):e0161550. Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density, and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image datasets.
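Step one can be illustrated with a generic derivative-based blob segmentation: a scale-normalized Laplacian-of-Gaussian response followed by thresholding and connected-component labeling. This is only a sketch in the spirit of the described algorithm; the published method and its post-processing are more involved, and the threshold here is an arbitrary assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, label

def segment_blobs(volume, sigma=3.0, thresh=0.02):
    """Scale-normalized LoG response, thresholded and labeled; bright
    blob-like objects (e.g., nuclei) give positive peaks."""
    response = -sigma**2 * gaussian_laplace(volume.astype(float), sigma)
    labels, n = label(response > thresh)   # connected components = objects
    return labels, n
```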
- Published
- 2015
- Full Text
- View/download PDF
45. Low-rank modeling of local k-space neighborhoods: from phase and support constraints to structured sparsity
- Author
-
Justin P. Haldar
- Subjects
Image derivatives ,Rank (linear algebra) ,medicine.diagnostic_test ,Computer science ,business.industry ,Physics::Medical Physics ,Matrix representation ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Magnetic resonance imaging ,Pattern recognition ,k-space ,Linear prediction ,Matrix (mathematics) ,Wavelet ,medicine ,Embedding ,Artificial intelligence ,business - Abstract
Low-rank modeling of local k-space neighborhoods (LORAKS) is a recently proposed framework for constrained MRI reconstruction. LORAKS relies on embedding MRI data into carefully constructed matrices, which have low-rank structure when the MRI image has sparse support or slowly varying phase. This low-rank matrix representation allows MRI images to be reconstructed from undersampled data using modern low-rank matrix techniques, and it enables data acquisition strategies that are incompatible with more traditional representations. This paper reviews LORAKS and describes extensions that allow it to additionally impose structured transform-domain sparsity constraints (e.g., structured sparsity of the image derivatives or wavelet coefficients).
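The embedding at the heart of LORAKS can be sketched by stacking vectorized local k-space neighborhoods into a matrix; under the support and phase assumptions above, this matrix is approximately low rank. The construction below is a simplified illustration, not the full LORAKS machinery.

```python
import numpy as np

def local_kspace_matrix(kspace, r=2):
    """Stack vectorized (2r+1) x (2r+1) k-space neighborhoods as rows."""
    h, w = kspace.shape
    rows = [kspace[y - r:y + r + 1, x - r:x + r + 1].ravel()
            for y in range(r, h - r) for x in range(r, w - r)]
    return np.array(rows)

# Rank proxy under the support/phase assumptions:
# s = np.linalg.svd(local_kspace_matrix(k), compute_uv=False)
```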
- Published
- 2015
- Full Text
- View/download PDF
46. Fast Second Degree Total Variation Method for Image Compressive Sensing
- Author
-
Liang Xiao, Jun Zhang, and Pengfei Liu
- Subjects
Image derivatives ,Multidisciplinary ,lcsh:R ,lcsh:Medicine ,Image processing ,Iterative reconstruction ,Models, Theoretical ,Data Compression ,Regularization (mathematics) ,Peak signal-to-noise ratio ,Matrix decomposition ,Signal-to-noise ratio ,Compressed sensing ,Image Processing, Computer-Assisted ,lcsh:Q ,lcsh:Science ,Algorithm ,Algorithms ,Research Article ,Mathematics - Abstract
This paper presents a computationally efficient algorithm for image compressive sensing reconstruction using a second-degree total variation (HDTV2) regularization. Firstly, an equivalent formulation of the HDTV2 functional is derived, which can be expressed as a weighted L1-L2 mixed norm of second-degree image derivatives under the spectral decomposition framework. Secondly, using this equivalent formulation, we introduce an efficient forward-backward splitting (FBS) scheme to solve the HDTV2-based image reconstruction model. Furthermore, from the averaged non-expansive operator point of view, we provide a detailed analysis of the convergence of the proposed FBS algorithm. Experiments on medical images demonstrate that the proposed method outperforms several fast algorithms for the TV and HDTV2 reconstruction models in terms of peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and convergence speed.
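The FBS scheme itself is the standard proximal-gradient iteration; a generic sketch follows, with the HDTV2 proximal map left as a placeholder since its spectral-decomposition form is the paper's contribution.

```python
import numpy as np

def fbs(grad_f, prox_reg, x0, step, n_iters=100):
    """Generic forward-backward splitting: gradient step on the data term,
    proximal step on the regularizer."""
    x = x0
    for _ in range(n_iters):
        x = prox_reg(x - step * grad_f(x), step)
    return x

# e.g. for compressive sensing with measurements y and operator A:
# grad_f = lambda x: A.T @ (A @ x - y)
```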
- Published
- 2015
47. Detection and analysis of individual leaf-off tree crowns in small footprint, high sampling density lidar data from the eastern deciduous forest in North America
- Author
-
Tomas Brandtberg
- Subjects
Canopy ,Image derivatives ,Pixel ,Gaussian blur ,Soil Science ,Geology ,Image processing ,symbols.namesake ,Deciduous ,Lidar ,Outlier ,symbols ,Computers in Earth Sciences ,Mathematics ,Remote sensing - Abstract
Leaf-off individual trees in a deciduous forest in the eastern USA are detected and analysed in small footprint, high sampling density lidar data. The data were acquired on February 1, 2001, using a SAAB TopEye laser profiling system, with a sampling density of approximately 12 returns per square meter. The sparse and complex configuration of the branches of the leaf-off forest provides sufficient returns to allow the detection of the trees as individual objects and the analysis of their vertical structures. For the detection of the individual trees, the lidar data are first inserted into a 2D digital image, with height as the pixel value or brightness level. Empty pixels are interpolated, and height outliers are removed. Gaussian smoothing at different scales is performed to create a three-dimensional scale-space structure. Blob signatures based on second-order image derivatives are calculated and then normalised so they can be compared across scale levels. The grey-level blobs with the strongest normalised signatures are selected within the scale-space structure. The support regions of the blobs are marked one at a time in the segmentation result image, with higher priority for stronger blobs. The segmentation results of six individual hectare plots are assessed by a computerised, objective method that makes use of a ground reference dataset of the individual tree crowns. For the analysis of individual trees, a subset of the original laser returns is selected within each tree crown region of the canopy reference map. Indices based on moments of the first four orders, the maximum value, and the number of canopy and ground returns are estimated. The indices are derived separately for height and laser reflectance of branches for the two echoes. Significant differences (p
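The scale-space blob selection can be outlined generically: compute scale-normalized second-derivative (Laplacian) signatures over a set of smoothing levels and keep, per pixel, the strongest response and its scale. A hedged sketch, simplified relative to the paper's grey-level blob machinery:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def strongest_blob_scale(height_img, sigmas=(2, 4, 8, 16)):
    """Per-pixel strongest scale-normalized Laplacian signature and the
    smoothing level at which it occurs."""
    stack = np.stack([-(s**2) * gaussian_laplace(height_img.astype(float), s)
                      for s in sigmas])
    return stack.max(axis=0), stack.argmax(axis=0)
```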
- Published
- 2003
- Full Text
- View/download PDF
48. ImageSURF: An ImageJ Plugin for Batch Pixel-Based Image Segmentation Using Random Forests
- Author
-
James C. Vickers, Aidan O'Mara, Anna E. King, and Matthew T. K. Kirkcaldie
- Subjects
random forests ,0301 basic medicine ,Image derivatives ,Source code ,Computer science ,media_common.quotation_subject ,Scale-space segmentation ,Image processing ,Library and Information Sciences ,03 medical and health sciences ,0302 clinical medicine ,ImageJ ,FIJI ,segmentation ,trainable segmentation ,binary segmentation ,Computer vision ,Segmentation ,media_common ,lcsh:Computer software ,neuroscience, image analysis ,Pixel ,business.industry ,Image segmentation ,Random forest ,lcsh:QA76.75-76.765 ,030104 developmental biology ,Artificial intelligence ,business ,030217 neurology & neurosurgery ,Software ,Information Systems - Abstract
Image segmentation is a necessary step in automated quantitative imaging. ImageSURF is a macro-compatible ImageJ2/FIJI plugin for pixel-based image segmentation that considers a range of image derivatives to train pixel classifiers, which are then applied to image sets of any size to produce segmentations without bias in a consistent, transparent and reproducible manner. The plugin is available from the ImageJ update site http://sites.imagej.net/ImageSURF/ and the source code from https://github.com/omaraa/ImageSURF. Funding statement: This research was supported by an Australian Government Research Training Program Scholarship.
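A hedged sketch of the general recipe ImageSURF implements, written in Python for illustration (the plugin itself is ImageJ/Java): per-pixel features from image derivatives at several scales feeding a random forest. The feature set and names are assumptions, not the plugin's exact features.

```python
import numpy as np
from scipy.ndimage import (gaussian_filter, gaussian_gradient_magnitude,
                           gaussian_laplace)
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img, sigmas=(1, 2, 4)):
    """Per-pixel feature vectors from derivatives at several scales."""
    a = img.astype(float)
    f = [a]
    for s in sigmas:
        f += [gaussian_filter(a, s),
              gaussian_gradient_magnitude(a, s),
              gaussian_laplace(a, s)]
    return np.stack(f, axis=-1).reshape(-1, len(f))

def train_segmenter(img, labels):
    """labels: integer class mask of the same shape as img."""
    return RandomForestClassifier(n_estimators=100).fit(
        pixel_features(img), labels.ravel())
```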
- Published
- 2017
- Full Text
- View/download PDF
49. Local Binary Patterns Calculated over Gaussian Derivative Images
- Author
-
James L. Crowley, Varun Jain, and Augustin Lux
- Subjects
Image derivatives ,Local binary patterns ,Computer science ,business.industry ,Gaussian ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020207 software engineering ,Pattern recognition ,02 engineering and technology ,Binary pattern ,Facial recognition system ,Image (mathematics) ,symbols.namesake ,[INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,Benchmark (computing) ,Three-dimensional face recognition ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,business ,[SPI.SIGNAL]Engineering Sciences [physics]/Signal and Image processing - Abstract
In this paper we present a new static descriptor for facial image analysis. We combine Gaussian derivatives with Local Binary Patterns to provide a robust and powerful descriptor, especially suited to extracting texture from facial images. Gaussian features in the form of image derivatives form the input to the Local Binary Pattern (LBP) operator instead of the original image. The proposed descriptor is tested for face recognition and smile detection. For face recognition we use the CMU-PIE and the YaleB plus extended YaleB databases. Smile detection is performed on the benchmark GENKI-4K database. With minimal machine learning, our descriptor outperforms the state of the art at smile detection and compares favourably with the state of the art at face recognition.
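The combination is straightforward to sketch: compute Gaussian derivative images and feed each to the LBP operator in place of the raw image; histograms of the resulting maps form the descriptor. Parameters and the derivative set below are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import local_binary_pattern

def lbp_on_derivatives(img, sigma=1.5):
    """8-neighbor uniform LBP maps computed over Gaussian derivative images
    (Ix, Iy, Ixx, Iyy) instead of the raw image."""
    maps = []
    for order in [(0, 1), (1, 0), (0, 2), (2, 0)]:
        d = gaussian_filter(img.astype(float), sigma, order=order)
        maps.append(local_binary_pattern(d, P=8, R=1, method="uniform"))
    return maps   # histograms of these maps form the descriptor
```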
- Published
- 2014
- Full Text
- View/download PDF
50. Edge Detection Algorithm For Enhancement of Linear Back Projection Tomographic Images
- Author
-
Shafishuaza Sahlan, Jaysuman Pusppanathan, Usman Ullah Sheikh, Ruzairi Abdul Rahim, Leow Pei Ling, Fatin Aliah Phang, Mahdi Faramarzi, Nor Muzakkir Nor Ayob, Mohd Hafiz Fazalul Rahiman, Khairul Hamimah Abas, and Fazlul Rahman Mohd Yunus
- Subjects
Engineering ,Image derivatives ,Process tomography ,Tomographic reconstruction ,business.industry ,General Engineering ,Image processing ,Iterative reconstruction ,Edge detection ,Computer vision ,Artificial intelligence ,business ,Image gradient ,Feature detection (computer vision) - Abstract
Process tomography (PT) is a leading technique for multiphase flow measurement and flow monitoring systems in various fields. PT has the advantage of interpreting acquired measurement data and transforming it into visual tomographic images. The most common method of image reconstruction uses the linear back projection algorithm, which often results in blurry images. This paper proposes an enhancement of the reconstructed images using an edge detection image processing technique convolved with the original image. The filtering technique calculates approximations of the horizontal and vertical image derivatives, further enhancing image accuracy. Several ultrasonic tomography images were used in a simulation test for validation, and the resulting images were assessed for performance.
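A hedged sketch of the described enhancement: approximate the horizontal and vertical derivatives (Sobel kernels are used here as the assumed derivative approximation) and boost the blurry linear back projection image with the resulting edge map. Names and the weighting are illustrative.

```python
import numpy as np
from scipy.ndimage import sobel

def enhance_lbp_image(lbp_img, weight=0.5):
    """Boost a blurry linear back projection image with an edge map built
    from horizontal and vertical derivative approximations."""
    a = lbp_img.astype(float)
    gx = sobel(a, axis=1)     # horizontal derivative approximation
    gy = sobel(a, axis=0)     # vertical derivative approximation
    return a + weight * np.hypot(gx, gy)
```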
- Published
- 2014
- Full Text
- View/download PDF