227 results
Search Results
2. Evaluation of Lip Prints on Different Supports Using a Batch Image Processing Algorithm and Image Superimposition
- Author
-
Clemente Maia da Silva Fernandes, Mônica Campos Serra, and Lara Maria Herrera
- Subjects
Adult, Male, Paper, Engineering, Biometrics, Matching (graph theory), Adolescent, Image processing, Pathology and Forensic Medicine, Image (mathematics), Young Adult, Software, stomatognathic system, Genetics, Image Processing, Computer-Assisted, Superimposition, Humans, Aged, Forensic Medicine, Middle Aged, Lip, Visualization, stomatognathic diseases, Biometric Identification, LIP PRINTS, Female, Glass, Algorithm, Algorithms - Abstract
This study aimed to develop and assess an algorithm to facilitate lip print visualization, and to digitally analyze lip prints on different supports by superimposition. It also aimed to classify lip prints according to sex. A batch image processing algorithm was developed, which facilitated the identification and extraction of information about lip grooves. However, it performed better for lip print images with a uniform background. Paper and glass slab allowed more correct identifications than glass and both sides of compact disks. There was no significant association between the type of support and the number of matching structures located in the middle area of the lower lip. There was no evidence of association between types of lip grooves and sex. Lip groove patterns of type III and type I were the most common for both sexes. The development of systems for lip print analysis, particularly digital methods, remains necessary.
- Published
- 2016
3. Different Approaches for Extracting Information from the Co-Occurrence Matrix.
- Author
-
Nanni, Loris, Brahnam, Sheryl, Ghidoni, Stefano, Menegatti, Emanuele, and Barrier, Tonya
- Subjects
INFORMATION technology ,WEB-based user interfaces ,COMPUTER simulation ,IMAGE processing ,PRINCIPAL components analysis ,STATISTICS ,COMPARATIVE studies - Abstract
In 1979 Haralick famously introduced a method for analyzing the texture of an image: a set of statistics extracted from the co-occurrence matrix. In this paper we investigate novel sets of texture descriptors extracted from the co-occurrence matrix; in addition, we compare and combine different strategies for extending these descriptors. The following approaches are compared: the standard approach proposed by Haralick, two methods that consider the co-occurrence matrix as a three-dimensional shape, a gray-level run-length set of features and the direct use of the co-occurrence matrix projected onto a lower dimensional subspace by principal component analysis. Texture descriptors are extracted from the co-occurrence matrix evaluated at multiple scales. Moreover, the descriptors are extracted not only from the entire co-occurrence matrix but also from subwindows. The resulting texture descriptors are used to train a support vector machine and ensembles. Results show that our novel extraction methods improve the performance of standard methods. We validate our approach across six medical datasets representing different image classification problems using the Wilcoxon signed rank test. The source code used for the approaches tested in this paper will be available at: http://www.dei.unipd.it/wdyn/?IDsezione=3314&IDgruppo_pass=124&preview=. [ABSTRACT FROM AUTHOR]
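For readers unfamiliar with co-occurrence statistics, a minimal illustrative sketch follows: it builds a gray-level co-occurrence matrix for a single offset and computes a few classic Haralick-style statistics. This is generic background only, not the descriptor extensions proposed in the paper; all names and parameter choices are illustrative.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for a single pixel offset (dx, dy)."""
    # Quantize the 8-bit image to a small number of gray levels.
    q = np.floor(img.astype(float) / 256.0 * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    P = np.zeros((levels, levels), dtype=float)
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[q[y, x], q[y + dy, x + dx]] += 1
    return P / P.sum()              # normalize to joint probabilities

def haralick_like_features(P):
    """A few classic co-occurrence statistics: contrast, energy, homogeneity."""
    i, j = np.indices(P.shape)
    contrast = np.sum((i - j) ** 2 * P)
    energy = np.sum(P ** 2)
    homogeneity = np.sum(P / (1.0 + np.abs(i - j)))
    return np.array([contrast, energy, homogeneity])

# Toy usage on a random 8-bit "texture".
img = np.random.randint(0, 256, size=(64, 64))
print(haralick_like_features(glcm(img)))
```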
- Published
- 2013
4. Time of flight PET reconstruction using nonuniform update for regional recovery uniformity
- Author
-
Kim, Kyungsang, Kim, Donghwan, Yang, Jaewon, Fakhri, Georges El, Seo, Youngho, Fessler, Jeffrey A, and Li, Quanzheng
- Subjects
Engineering ,Biomedical Engineering ,Bioengineering ,Algorithms ,Humans ,Image Processing ,Computer-Assisted ,Pancreas ,Positron-Emission Tomography ,Time Factors ,momentum ,nonuniform update ,NUSQS ,recovery uniformity ,TOF PET reconstruction ,Other Physical Sciences ,Oncology and Carcinogenesis ,Nuclear Medicine & Medical Imaging ,Biomedical engineering ,Medical and biological physics - Abstract
Purpose: Time of flight (TOF) PET reconstruction is well known to statistically improve the image quality compared to non-TOF PET. Although TOF PET can improve the overall signal to noise ratio (SNR) of the image compared to non-TOF PET, the SNR disparity between separate regions in the reconstructed image using TOF data becomes higher than that using non-TOF data. Using the conventional ordered subset expectation maximization (OS-EM) method, the SNR in the low activity regions becomes significantly lower than in the high activity regions due to the different photon statistics of TOF bins. A uniform recovery across different SNR regions is preferred if it can yield an overall good image quality within a small number of iterations in practice. To allow more uniform recovery of regions, a spatially variant update is necessary for different SNR regions. Methods: This paper focuses on designing a spatially variant step size and proposes a TOF-PET reconstruction method that uses a nonuniform separable quadratic surrogates (NUSQS) algorithm, providing straightforward control of the spatially variant step size. To control the noise, a spatially invariant quadratic regularization is incorporated, which by itself does not theoretically affect the recovery uniformity. Nesterov's momentum method with ordered subsets (OS) is also used to accelerate the reconstruction. To evaluate the proposed method, an XCAT simulation phantom and clinical data from a pancreas cancer patient with full (ground truth) and 6× downsampled counts were used, where a Poisson thinning process was employed for downsampling. We selected tumor and cold regions of interest (ROIs) and compared the proposed method with the TOF-based conventional OS-EM and OS-SQS algorithms with an early stopping criterion. Results: In computer simulation, without regularization, hot regions of OS-EM and OS-NUSQS converged similarly, but the cold region of OS-EM was noisier than that of OS-NUSQS after 24 iterations. With regularization, although the overall speeds of OS-EM and OS-NUSQS were similar, recovery ratios of hot and cold regions reconstructed by OS-NUSQS were more uniform compared to those of the conventional OS-SQS and OS-EM. OS-NUSQS with Nesterov's momentum converged faster than the others while preserving uniform recovery. In the clinical example, we demonstrated that OS-NUSQS with Nesterov's momentum provides more uniform recovery ratios of hot and cold ROIs compared to OS-SQS and OS-EM. Although the cost function of all methods is equivalent, the proposed method has higher structural similarity (SSIM) values of hot and cold regions compared to other methods after 24 iterations. Furthermore, our computing time on a graphics processing unit was 80× shorter than on quad-core CPUs. Conclusion: This paper proposes a TOF PET reconstruction method using OS-NUSQS with Nesterov's momentum for uniform recovery of different SNR regions. In particular, the spatially nonuniform step size in the proposed method provides uniform recovery ratios of different SNR regions, and Nesterov's momentum further accelerates overall convergence while preserving uniform recovery. The computer simulation and clinical example demonstrate that the proposed method converges uniformly across ROIs. In addition, tumor contrast and SSIM of the proposed method were higher than those of the conventional OS-EM and OS-SQS in early iterations.
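For context on the expectation-maximization baseline mentioned above, here is a minimal non-TOF, single-subset MLEM sketch in dense matrix form; the system matrix and data are toy stand-ins rather than a realistic scanner model, and the paper's NUSQS step-size design and momentum acceleration are not reproduced.

```python
import numpy as np

def mlem(A, y, n_iters=50, eps=1e-12):
    """Plain MLEM: x <- x / (A^T 1) * A^T (y / (A x)). One subset, no TOF."""
    n_pix = A.shape[1]
    x = np.ones(n_pix)                     # uniform initial image
    sens = A.T @ np.ones(A.shape[0])       # sensitivity image A^T 1
    for _ in range(n_iters):
        proj = A @ x                       # forward projection
        ratio = y / np.maximum(proj, eps)  # measured / estimated counts
        x = x / np.maximum(sens, eps) * (A.T @ ratio)
    return x

# Toy example: random nonnegative system matrix and Poisson data.
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(200, 50))
x_true = rng.uniform(0.5, 2.0, size=50)
y = rng.poisson(A @ x_true)
print(mlem(A, y)[:5])
```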
- Published
- 2019
5. Texture Classification by Texton: Statistical versus Binary.
- Author
-
Guo, Zhenhua, Zhang, Zhongcheng, Li, Xiu, Li, Qin, and You, Jane
- Subjects
STATISTICS ,BINARY number system ,COMPUTER science ,COMPUTER algorithms ,IMAGE processing ,COMPUTER software - Abstract
Using statistical textons for texture classification has shown great success recently. The maximal response 8 (Statistical_MR8), image patch (Statistical_Joint) and locally invariant fractal (Statistical_Fractal) are typical statistical texton algorithms and state-of-the-art texture classification methods. However, there are two limitations when using these methods. First, a training stage is needed to build a texton library, so the recognition accuracy is highly dependent on the training samples; second, during feature extraction, each local feature is assigned to a texton by searching for the nearest texton in the whole library, which is time-consuming when the library is large and the feature dimension is high. To address these two issues, in this paper three binary texton counterpart methods are proposed: Binary_MR8, Binary_Joint, and Binary_Fractal. These methods do not require any training step but encode local features directly into a binary representation. The experimental results on the CUReT, UIUC and KTH-TIPS databases show that binary textons can achieve sound results with fast feature extraction, especially when the image size is not large and the image quality is not poor. [ABSTRACT FROM AUTHOR]
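The idea of encoding a local neighborhood directly into a binary code, with no learned texton library, is in the same spirit as the classic local binary pattern. The sketch below shows that style of encoding only; it is not the paper's Binary_MR8, Binary_Joint, or Binary_Fractal descriptors, and all names are illustrative.

```python
import numpy as np

def lbp_codes(img):
    """8-bit codes: compare each pixel's 8 neighbours to the centre pixel."""
    img = img.astype(float)
    center = img[1:-1, 1:-1]
    # Neighbour offsets in a fixed clockwise order, one bit per neighbour.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=int)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (neigh >= center).astype(int) << bit
    return codes

def lbp_histogram(img):
    """Normalized 256-bin histogram of codes, usable as a texture descriptor."""
    hist = np.bincount(lbp_codes(img).ravel(), minlength=256).astype(float)
    return hist / hist.sum()

img = np.random.randint(0, 256, size=(64, 64))
print(lbp_histogram(img).shape)   # (256,)
```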
- Published
- 2014
6. Improved Minimum Squared Error Algorithm with Applications to Face Recognition.
- Author
-
Zhu, Qi, Li, Zhengming, Liu, Jinxing, Fan, Zizhu, Yu, Lei, and Chen, Yan
- Subjects
HUMAN facial recognition software ,ALGORITHMS ,CLASSIFICATION ,ERROR analysis in mathematics ,EXPERIMENTAL design ,INFORMATION technology ,SIGNAL processing - Abstract
The minimum squared error based classification (MSEC) method establishes a single classification model for all the test samples. However, this classification model may not be optimal for each test sample. This paper proposes an improved MSEC (IMSEC) method, which is tailored for each test sample. The proposed method first roughly identifies the possible classes of the test sample, and then establishes a minimum squared error (MSE) model based on the training samples from these possible classes. We apply our method to face recognition. The experimental results on several datasets show that IMSEC outperforms MSEC and other state-of-the-art methods in terms of accuracy. [ABSTRACT FROM AUTHOR]
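As background, a generic minimum squared error classifier can be sketched as a regularized least-squares mapping from training samples to one-hot class labels, with a test sample assigned to the class of the largest response. This illustrates the standard MSE idea only, not the per-sample class pre-selection of IMSEC; names and the regularization constant are illustrative.

```python
import numpy as np

def train_mse_classifier(X, y, n_classes, reg=1e-3):
    """Least-squares mapping to one-hot labels: W = (X^T X + reg*I)^-1 X^T Y."""
    Y = np.eye(n_classes)[y]                       # one-hot targets, shape (n, C)
    XtX = X.T @ X + reg * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ Y)

def predict(X, W):
    """Assign each sample to the class whose target code it reproduces best."""
    return np.argmax(X @ W, axis=1)

# Toy data: two Gaussian blobs.
rng = np.random.default_rng(0)
X0 = rng.normal(0, 1, size=(50, 10))
X1 = rng.normal(2, 1, size=(50, 10))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)
W = train_mse_classifier(X, y, n_classes=2)
print((predict(X, W) == y).mean())   # training accuracy on the toy data
```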
- Published
- 2013
7. Computationally Efficient Locally Adaptive Demosaicing of Color Filter Array Images Using the Dual-Tree Complex Wavelet Packet Transform
- Author
-
Aelterman, Jan, Goossens, Bart, De Vylder, Jonas, Pižurica, Aleksandra, and Philips, Wilfried
- Subjects
LIGHT filters ,COLOR image processing ,DIGITAL images ,COMPUTER engineering ,SIGNAL processing ,APPLIED mathematics ,COMPUTER algorithms - Abstract
Most digital cameras use an array of alternating color filters to capture the varied colors in a scene with a single sensor chip. Reconstruction of a full color image from such a color mosaic is what constitutes demosaicing. In this paper, a technique is proposed that performs this demosaicing in a way that incurs a very low computational cost. This is done through a (dual-tree complex) wavelet interpretation of the demosaicing problem. By using a novel locally adaptive approach for demosaicing (complex) wavelet coefficients, we show that many of the common demosaicing artifacts can be avoided in an efficient way. Results demonstrate that the proposed method is competitive with respect to the current state of the art, but incurs a lower computational cost. The wavelet approach also allows for computationally effective denoising or deblurring approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2013
8. Computer vision system for high temperature measurements of surface properties.
- Author
-
Fabijańska, Anna and Sankowski, Dominik
- Subjects
COMPUTER vision ,IMAGE processing ,ENGINEERING ,ALGORITHMS ,SURFACE tension ,METALLIC surfaces - Abstract
Recently, computer vision systems have become very popular. They are of great importance in almost every field of science, engineering and industry. Present-day vision systems make it possible to obtain information that is normally not distinguishable by humans, thanks to appropriate digital image processing and analysis algorithms. The paper explains the importance of proper selection of image processing algorithms for computer vision applications. Industrial quantitative image analysis systems in particular are considered. A computerized system for high-temperature measurements of surface properties is used as an example. The system is capable of measuring the wetting angle and surface tension of metals, alloys and other materials (e.g. glass) at temperatures up to 1,800°C. A brief description of the system is given. Particular attention is paid to preprocessing algorithms. They address not only the typical factors that usually accompany digital image analysis but also the specificity of images obtained during the measurement process. Correction of factors arising from the CCD camera electronics and reduction of the aura (the glow that appears around the specimen at high temperatures) result in high-quality image segmentation. As a consequence, the accuracy of surface parameter determination is increased. [ABSTRACT FROM AUTHOR]
- Published
- 2009
9. Neural KEM: A Kernel Method With Deep Coefficient Prior for PET Image Reconstruction
- Author
-
Li, Siqi, Gong, Kuang, Badawi, Ramsey D, Kim, Edward J, Qi, Jinyi, and Wang, Guobao
- Subjects
Computer Vision and Multimedia Computation ,Information and Computing Sciences ,Machine Learning ,Neurosciences ,Biomedical Imaging ,Networking and Information Technology R&D (NITRD) ,Bioengineering ,Humans ,Image Processing ,Computer-Assisted ,Positron-Emission Tomography ,Computer Simulation ,Neural Networks ,Computer ,Algorithms ,Kernel ,Image reconstruction ,Positron emission tomography ,Optimization ,Neural networks ,Electronics packaging ,Standards ,Dynamic PET ,image reconstruction ,kernel methods ,optimization transfer ,deep image prior ,Engineering ,Nuclear Medicine & Medical Imaging ,Information and computing sciences - Abstract
Image reconstruction of low-count positron emission tomography (PET) data is challenging. Kernel methods address the challenge by incorporating image prior information in the forward model of iterative PET image reconstruction. The kernelized expectation-maximization (KEM) algorithm has been developed and demonstrated to be effective and easy to implement. A common approach for a further improvement of the kernel method would be adding an explicit regularization, which however leads to a complex optimization problem. In this paper, we propose an implicit regularization for the kernel method by using a deep coefficient prior, which represents the kernel coefficient image in the PET forward model using a convolutional neural-network. To solve the maximum-likelihood neural network-based reconstruction problem, we apply the principle of optimization transfer to derive a neural KEM algorithm. Each iteration of the algorithm consists of two separate steps: a KEM step for image update from the projection data and a deep-learning step in the image domain for updating the kernel coefficient image using the neural network. This optimization algorithm is guaranteed to monotonically increase the data likelihood. The results from computer simulations and real patient data have demonstrated that the neural KEM can outperform existing KEM and deep image prior methods.
- Published
- 2023
10. An Analytical Algorithm for Tensor Tomography From Projections Acquired About Three Axes
- Author
-
Tao, Weijie, Rohmer, Damien, Gullberg, Grant T, Seo, Youngho, and Huang, Qiu
- Subjects
Information and Computing Sciences ,Graphics ,Augmented Reality and Games ,Biomedical Imaging ,Bioengineering ,Humans ,Algorithms ,Tomography ,X-Ray Computed ,Tomography ,Phantoms ,Imaging ,Imaging ,Three-Dimensional ,Image Processing ,Computer-Assisted ,Tensors ,Image reconstruction ,X-ray imaging ,Ellipsoids ,Three-dimensional displays ,Filtering algorithms ,Biomedical measurement ,Filtered back-projection algorithm ,solenoidal and irrotational components ,tensor tomography ,directional X-ray projections ,Engineering ,Nuclear Medicine & Medical Imaging ,Information and computing sciences - Abstract
Tensor fields are useful for modeling the structure of biological tissues. The challenge to measure tensor fields involves acquiring sufficient data of scalar measurements that are physically achievable and reconstructing tensors from as few projections as possible for efficient applications in medical imaging. In this paper, we present a filtered back-projection algorithm for the reconstruction of a symmetric second-rank tensor field from directional X-ray projections about three axes. The tensor field is decomposed into a solenoidal and irrotational component, each of three unknowns. Using the Fourier projection theorem, a filtered back-projection algorithm is derived to reconstruct the solenoidal and irrotational components from projections acquired around three axes. A simple illustrative phantom consisting of two spherical shells and a 3D digital cardiac diffusion image obtained from diffusion tensor MRI of an excised human heart are used to simulate directional X-ray projections. The simulations validate the mathematical derivations and demonstrate reasonable noise properties of the algorithm. The decomposition of the tensor field into solenoidal and irrotational components provides insight into the development of algorithms for reconstructing tensor fields with sufficient samples in terms of the type of directional projections and the necessary orbits for the acquisition of the projections of the tensor field.
- Published
- 2022
11. Positronium Lifetime Image Reconstruction for TOF PET
- Author
-
Qi, Jinyi and Huang, Bangyan
- Subjects
Information and Computing Sciences ,Computer Vision and Multimedia Computation ,Bioengineering ,Biomedical Imaging ,Detection ,screening and diagnosis ,4.1 Discovery and preclinical testing of markers and technologies ,4.2 Evaluation of markers and technologies ,Algorithms ,Computer Simulation ,Image Processing ,Computer-Assisted ,Phantoms ,Imaging ,Positron-Emission Tomography ,Tomography ,X-Ray Computed ,Positron emission tomography ,Image reconstruction ,Positrons ,Spatial resolution ,Photonics ,Maximum likelihood estimation ,Location awareness ,Positronium lifetime ,lifetime image reconstruction ,penalized maximum likelihood ,positron emission tomography ,Engineering ,Nuclear Medicine & Medical Imaging ,Information and computing sciences - Abstract
Positron emission tomography is widely used in clinical and preclinical applications. Positronium lifetime carries information about the tissue microenvironment where positrons are emitted, but such information has not been captured because of two technical challenges. One challenge is the low sensitivity in detecting triple coincidence events. This problem has been mitigated by the recent developments of PET scanners with long (1-2 m) axial field of view. The other challenge is the low spatial resolution of the positronium lifetime images formed by existing methods that is determined by the time-of-flight (TOF) resolution (200-500 ps) of existing PET scanners. This paper solves the second challenge by developing a new image reconstruction method to generate high-resolution positronium lifetime images using existing TOF PET. Simulation studies demonstrate that the proposed method can reconstruct positronium lifetime images at much better spatial resolution than the limit set by the TOF resolution of the PET scanner. The proposed method opens up the possibility of performing positronium lifetime imaging using existing TOF PET scanners. The lifetime information can be used to understand the tissue microenvironment in vivo which could facilitate the study of disease mechanism and selection of proper treatments.
- Published
- 2022
12. Robust Non-Rigid Point Set Registration Using Student's-t Mixture Model.
- Author
-
Zhou, Zhiyong, Zheng, Jian, Dai, Yakang, Zhou, Zhe, and Chen, Shi
- Subjects
ROBUST control ,POINT set theory ,IMAGE processing ,COMPUTER algorithms ,PROBABILITY density function ,MATHEMATICAL models ,STUDENTS - Abstract
The Student's-t mixture model, which is heavy-tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, first, we consider the alignment of two point sets as a probability density estimation problem and treat one point set as Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we obtain closed-form solutions for the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2014
13. Biview Learning for Human Posture Segmentation from 3D Points Cloud.
- Author
-
Qiao, Maoying, Cheng, Jun, Bian, Wei, and Tao, Dacheng
- Subjects
LEARNING ,IMAGE segmentation ,POSTURE ,THREE-dimensional imaging ,CLOUD computing ,FEATURE extraction ,SOFTWARE engineering - Abstract
Posture segmentation plays an essential role in human motion analysis. The state-of-the-art method extracts sufficiently high-dimensional features from 3D depth images for each 3D point and learns an efficient body part classifier. However, high-dimensional features are memory-consuming and difficult to handle on large-scale training datasets. In this paper, we propose an efficient two-stage dimension reduction scheme, termed biview learning, to encode two independent views, namely depth-difference features (DDF) and relative position features (RPF). Biview learning explores the complementary property of DDF and RPF, and uses two stages to learn a compact yet comprehensive low-dimensional feature space for posture segmentation. In the first stage, discriminative locality alignment (DLA) is applied to the high-dimensional DDF to learn a discriminative low-dimensional representation. In the second stage, canonical correlation analysis (CCA) is used to explore the complementary property of RPF and the dimensionality-reduced DDF. Finally, we train a support vector machine (SVM) over the output of CCA. We carefully validate the effectiveness of DLA and CCA utilized in the two-stage scheme on our 3D human points cloud dataset. Experimental results show that the proposed biview learning scheme significantly outperforms the state-of-the-art method for human posture segmentation. [ABSTRACT FROM AUTHOR]
- Published
- 2014
14. Fully Automated Segmentation of the Pons and Midbrain Using Human T1 MR Brain Images.
- Author
-
Nigro, Salvatore, Cerasa, Antonio, Zito, Giancarlo, Perrotta, Paolo, Chiaravalloti, Francesco, Donzuso, Giulia, Fera, Franceso, Bilotta, Eleonora, Pantano, Pietro, and Quattrone, Aldo
- Subjects
MAGNETIC resonance imaging of the brain ,MESENCEPHALON ,BRAIN stem ,BRAIN diseases ,PONS test ,BRAIN anatomy ,CLINICAL trials ,PATIENTS - Abstract
Purpose: This paper describes a novel method to automatically segment the human brainstem into midbrain and pons, called LABS: Landmark-based Automated Brainstem Segmentation. LABS processes high-resolution structural magnetic resonance images (MRIs) according to a revised landmark-based approach integrated with a thresholding method, without manual interaction. Methods: This method was first tested on morphological T1-weighted MRIs of 30 healthy subjects. Its reliability was further confirmed by including neurological patients (with Alzheimer's Disease) from the ADNI repository, in whom the presence of volumetric loss within the brainstem had been previously described. Segmentation accuracy was evaluated against expert-drawn manual delineation. To evaluate the quality of LABS segmentation we used volumetric, spatial overlap and distance-based metrics. Results: The comparison between the quantitative measurements provided by LABS and the manual segmentations revealed excellent results in healthy controls when considering either the midbrain (DICE measures higher than 0.9, volume ratio around 1 and Hausdorff distance around 3) or the pons (DICE measures around 0.93, volume ratio ranging from 1.024 to 1.05 and Hausdorff distance around 2). Similar performance was detected for AD patients for segmentation of the pons (DICE measures higher than 0.93, volume ratio ranging from 0.97 to 0.98 and Hausdorff distance ranging from 1.07 to 1.33), while LABS performed worse for the midbrain (DICE measures ranging from 0.86 to 0.88, volume ratio around 0.95 and Hausdorff distance ranging from 1.71 to 2.15). Conclusions: Our study represents the first attempt to validate a new fully automated method for in vivo segmentation of two anatomically complex brainstem subregions. We believe that our method may represent a useful tool for future applications in clinical practice. [ABSTRACT FROM AUTHOR]
- Published
- 2014
15. An Image Encryption Algorithm Utilizing Julia Sets and Hilbert Curves.
- Author
-
Sun, Yuanyuan, Chen, Lina, Xu, Rudan, and Kong, Ruiqing
- Subjects
DATA encryption ,COMPUTER algorithms ,JULIA sets ,HILBERT space ,DATA security ,PERTURBATION theory ,INFORMATION technology - Abstract
Image encryption is an important and effective technique to protect image security. In this paper, a novel image encryption algorithm combining Julia sets and Hilbert curves is proposed. The algorithm utilizes Julia sets’ parameters to generate a random sequence as the initial keys and gets the final encryption keys by scrambling the initial keys through the Hilbert curve. The final cipher image is obtained by modulo arithmetic and diffuse operation. In this method, it needs only a few parameters for the key generation, which greatly reduces the storage space. Moreover, because of the Julia sets’ properties, such as infiniteness and chaotic characteristics, the keys have high sensitivity even to a tiny perturbation. The experimental results indicate that the algorithm has large key space, good statistical property, high sensitivity for the keys, and effective resistance to the chosen-plaintext attack. [ABSTRACT FROM AUTHOR]
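The overall pipeline described above (a key sequence derived from a fractal source, followed by scrambling and diffusion) can be illustrated with a heavily simplified sketch: a Julia-set escape-time map serves as a keystream that is added to the pixels modulo 256. The Hilbert-curve scrambling and the paper's full diffusion scheme are omitted, and the constants below are illustrative.

```python
import numpy as np

def julia_keystream(shape, c=complex(-0.8, 0.156), max_iter=64):
    """Escape-time map of z -> z^2 + c over a pixel grid, used as a keystream."""
    h, w = shape
    ys, xs = np.meshgrid(np.linspace(-1.5, 1.5, h), np.linspace(-1.5, 1.5, w),
                         indexing="ij")
    z = xs + 1j * ys
    counts = np.zeros(shape, dtype=np.int64)
    for _ in range(max_iter):
        mask = np.abs(z) <= 2.0            # points that have not escaped yet
        z[mask] = z[mask] ** 2 + c
        counts += mask
    return (counts * 97) % 256             # spread escape counts over 0..255

def encrypt(img, key):
    return (img.astype(np.int64) + key) % 256

def decrypt(enc, key):
    return (enc - key) % 256

img = np.random.randint(0, 256, size=(32, 32))
key = julia_keystream(img.shape)
enc = encrypt(img, key)
assert np.array_equal(decrypt(enc, key), img)   # round-trip check
```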
- Published
- 2014
16. A Part-Based Probabilistic Model for Object Detection with Occlusion.
- Author
-
Zhang, Chunhui, Zhang, Jun, Zhao, Heng, and Liang, Jimin
- Subjects
PROBABILITY theory ,ROBUST control ,ALGORITHMS ,BAYESIAN analysis ,MATHEMATICAL optimization ,COMPUTER graphics ,IMAGE processing - Abstract
The part-based method has been a fast-rising framework for object detection. It is attracting more and more attention for its detection precision and partial robustness to occlusion. However, little research has focused on the problem of occlusion overlapping the part regions, which can reduce the performance of the system. This paper proposes a part-based probabilistic model and the corresponding inference algorithm for the problem of part occlusion. The model is based entirely on Bayesian theory and aims to be robust to large occlusions. In the model construction stage, all of the parts constitute the vertex set of a fully connected graph, and a binary variable is assigned to each part to indicate its occlusion status. In addition, we introduce a penalty term to regularize the argument space of the objective function. Thus, part detection is formulated as an optimization problem, which is divided into two alternating procedures: the outer inference and the inner inference. A stochastic tentative method is employed in the outer inference to determine the occlusion status of each part. In the inner inference, a gradient descent algorithm is employed to find the optimal positions of the parts, given the current occlusion status. Experiments were carried out on the Caltech database. The results demonstrate that the proposed method achieves strong robustness to occlusion. [ABSTRACT FROM AUTHOR]
- Published
- 2014
17. Knowledge-Guided Robust MRI Brain Extraction for Diverse Large-Scale Neuroimaging Studies on Humans and Non-Human Primates.
- Author
-
Wang, Yaping, Nie, Jingxin, Yap, Pew-Thian, Li, Gang, Shi, Feng, Geng, Xiujuan, Guo, Lei, and Shen, Dinggang
- Subjects
MAGNETIC resonance imaging of the brain ,BRAIN imaging ,PRIMATE physiology ,AGING ,BRAIN ,AGE groups ,COMPARATIVE studies - Abstract
Accurate and robust brain extraction is a critical step in most neuroimaging analysis pipelines. In particular, for the large-scale multi-site neuroimaging studies involving a significant number of subjects with diverse age and diagnostic groups, accurate and robust extraction of the brain automatically and consistently is highly desirable. In this paper, we introduce population-specific probability maps to guide the brain extraction of diverse subject groups, including both healthy and diseased adult human populations, both developing and aging human populations, as well as non-human primates. Specifically, the proposed method combines an atlas-based approach, for coarse skull-stripping, with a deformable-surface-based approach that is guided by local intensity information and population-specific prior information learned from a set of real brain images for more localized refinement. Comprehensive quantitative evaluations were performed on the diverse large-scale populations of ADNI dataset with over 800 subjects (55∼90 years of age, multi-site, various diagnosis groups), OASIS dataset with over 400 subjects (18∼96 years of age, wide age range, various diagnosis groups), and NIH pediatrics dataset with 150 subjects (5∼18 years of age, multi-site, wide age range as a complementary age group to the adult dataset). The results demonstrate that our method consistently yields the best overall results across almost the entire human life span, with only a single set of parameters. To demonstrate its capability to work on non-human primates, the proposed method is further evaluated using a rhesus macaque dataset with 20 subjects. Quantitative comparisons with popularly used state-of-the-art methods, including BET, Two-pass BET, BET-B, BSE, HWA, ROBEX and AFNI, demonstrate that the proposed method performs favorably with superior performance on all testing datasets, indicating its robustness and effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2014
18. Rapid Reconstruction of 3D Neuronal Morphology from Light Microscopy Images with Augmented Rayburst Sampling.
- Author
-
Ming, Xing, Li, Anan, Wu, Jingpeng, Yan, Cheng, Ding, Wenxiang, Gong, Hui, Zeng, Shaoqun, and Liu, Qian
- Subjects
NEUROLOGY ,NEURAL circuitry ,MEDICAL microscopy ,IMAGE reconstruction ,COMPUTER-aided diagnosis ,NEURONS ,IMAGE analysis ,COMPUTATIONAL biology ,BRAIN imaging - Abstract
Digital reconstruction of three-dimensional (3D) neuronal morphology from light microscopy images provides a powerful technique for analysis of neural circuits. It is time-consuming to manually perform this process. Thus, efficient computer-assisted approaches are preferable. In this paper, we present an innovative method for the tracing and reconstruction of 3D neuronal morphology from light microscopy images. The method uses a prediction and refinement strategy that is based on exploration of local neuron structural features. We extended the rayburst sampling algorithm to a marching fashion, which starts from a single or a few seed points and marches recursively forward along neurite branches to trace and reconstruct the whole tree-like structure. A local radius-related but size-independent hemispherical sampling was used to predict the neurite centerline and detect branches. Iterative rayburst sampling was performed in the orthogonal plane, to refine the centerline location and to estimate the local radius. We implemented the method in a cooperative 3D interactive visualization-assisted system named flNeuronTool. The source code in C++ and the binaries are freely available at http://sourceforge.net/projects/flneurontool/. We validated and evaluated the proposed method using synthetic data and real datasets from the Digital Reconstruction of Axonal and Dendritic Morphology (DIADEM) challenge. Then, flNeuronTool was applied to mouse brain images acquired with the Micro-Optical Sectioning Tomography (MOST) system, to reconstruct single neurons and local neural circuits. The results showed that the system achieves a reasonable balance between fast speed and acceptable accuracy, which is promising for interactive applications in neuronal image analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2013
19. Effective Moment Feature Vectors for Protein Domain Structures.
- Author
-
Shi, Jian-Yu, Yiu, Siu-Ming, Zhang, Yan-Ning, and Chin, Francis Yuk-Lun
- Subjects
PROTEIN structure ,BIOMEDICAL signal processing ,PROTEOMICS ,VECTOR spaces ,DECISION theory - Abstract
Image processing techniques have been shown to be useful in studying protein domain structures. The idea is to represent the pairwise distances of any two residues of the structure in a 2D distance matrix (DM). Features and/or submatrices are extracted from this DM to represent a domain. Existing approaches, however, may involve a large number of features (100–400) or complicated mathematical operations. Finding fewer but more effective features is always desirable. In this paper, based on some key observations on DMs, we are able to decompose a DM image into four basic binary images, each representing the structural characteristics of a fundamental secondary structure element (SSE) or a motif in the domain. Using the concept of moments in image processing, we further derive 45 structural features based on the four binary images. Together with 4 features extracted from the basic images, we represent the structure of a domain using 49 features. We show that our feature vectors can represent domain structures effectively in terms of the following. (1) We show a higher accuracy for domain classification. (2) We show a clear and consistent distribution of domains using our proposed structural vector space. (3) We are able to cluster the domains according to our moment features and demonstrate a relationship between structural variation and functional diversity. [ABSTRACT FROM AUTHOR]
- Published
- 2013
20. Enlarge the Training Set Based on Inter-Class Relationship for Face Recognition from One Image per Person.
- Author
-
Li, Qin, Wang, Hua Jing, You, Jane, Li, Zhao Ming, and Li, Jin Xue
- Subjects
FACE perception, DRIVERS' licenses, IDENTIFICATION (Psychology), LAW enforcement, IMPLICIT learning, PRINCIPAL components analysis, DISCRIMINANT analysis - Abstract
In some large-scale face recognition tasks, such as driver license identification and law enforcement, the training set contains only one image per person. This situation is referred to as the one sample problem. Because many face recognition techniques implicitly assume that several (at least two) images per person are available for training, they cannot deal with the one sample problem. This paper investigates principal component analysis (PCA), Fisher linear discriminant analysis (LDA), and locality preserving projections (LPP) and shows why they cannot perform well on the one sample problem. It then presents four reasons that make the one sample problem itself difficult: the small sample size problem; the lack of representative samples; the underestimated intra-class variation; and the overestimated inter-class variation. Based on this analysis, the paper proposes to enlarge the training set based on the inter-class relationship. It also extends LDA and LPP to extract features from the enlarged training set. The experimental results show the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2013
21. Face Recognition with Multi-Resolution Spectral Feature Images.
- Author
-
Sun, Zhan-Li, Lam, Kin-Man, Dong, Zhao-Yang, Wang, Han, Gao, Qing-Wei, and Zheng, Chun-Hou
- Subjects
FACE perception ,EXPRESSIVE behavior ,ALGORITHMS ,COMPUTER simulation ,NUMERICAL analysis ,ELECTRICAL engineering ,SIGNAL processing ,IMAGE processing - Abstract
The one-sample-per-person problem has become an active research topic for face recognition in recent years because of its challenges and significance for real-world applications. However, achieving relatively high recognition accuracy remains difficult because usually too few training samples are available and because of variations in illumination and expression. To alleviate the negative effects caused by these unfavorable factors, in this paper we propose a more accurate spectral feature image-based 2DLDA (two-dimensional linear discriminant analysis) ensemble algorithm for face recognition, with one sample image per person. In our algorithm, multi-resolution spectral feature images are constructed to represent the face images; this can greatly enlarge the training set. The proposed method is inspired by our finding that, among these spectral feature images, features extracted from some orientations and scales using 2DLDA are not sensitive to variations of illumination and expression. In order to maintain the positive characteristics of these filters and to make correct category assignments, the strategy of classifier committee learning (CCL) is designed to combine the results obtained from different spectral feature images. Using the above strategies, the negative effects caused by those unfavorable factors can be alleviated efficiently in face recognition. Experimental results on the standard databases demonstrate the feasibility and efficiency of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2013
22. Quasi-interpolation on the Body Centered Cubic Lattice.
- Author
-
Entezari, A., Mirzargar, M., and Kalantari, L.
- Subjects
ENGINEERING ,VISUALIZATION ,ALGORITHMS ,IMAGE processing ,POLYNOMIALS ,LATTICE theory - Abstract
This paper introduces a quasi-interpolation method for reconstruction of data sampled on the Body Centered Cubic (BCC) lattice. The reconstructions based on this quasi-interpolation achieve the optimal approximation order offered by the shifts of the quintic box spline on the BCC lattice. We also present a local FIR filter that is used to filter the data for quasi-interpolation. We document the improved quality and fidelity of reconstructions after employing the introduced quasi-interpolation method. Finally the resulting quasi-interpolation on the BCC sampled data are compared to the corresponding quasi-interpolation method on the Cartesian sampled data. [ABSTRACT FROM AUTHOR]
- Published
- 2009
23. An Incremental K-means algorithm.
- Author
-
Pham, D. T., Dimov, S. S., and Nguyen, C. D.
- Subjects
ALGORITHMS ,IMAGE processing ,MAXIMA & minima ,CLUSTER theory (Nuclear physics) ,IMAGING systems ,ENGINEERING - Abstract
Data clustering is an important data exploration technique with many applications in engineering, including parts family formation in group technology and segmentation in image processing. One of the most popular data clustering methods is K-means clustering because of its simplicity and computational efficiency. The main problem with this clustering method is its tendency to converge at a local minimum. In this paper, the cause of this problem is explained and an existing solution involving a cluster centre jumping operation is examined. The jumping technique alleviates the problem with local minima by enabling cluster centres to move in such a radical way as to reduce the overall cluster distortion. However, the method is very sensitive to errors in estimating distortion. A clustering scheme that is also based on distortion reduction through cluster centre movement but is not so sensitive to inaccuracies in distortion estimation is proposed in this paper. The scheme, which is an incremental version of the K-means algorithm, involves adding cluster centres one by one as clusters are being formed. The paper presents test results to demonstrate the efficacy of the proposed algorithm. [ABSTRACT FROM AUTHOR]
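A minimal sketch of the incremental idea is given below, under the assumption that each new cluster centre is seeded at the point contributing most to the current distortion; the published algorithm's distortion estimation and centre-insertion details are not reproduced, and all names are illustrative.

```python
import numpy as np

def kmeans(X, centers, n_iters=20):
    """Standard Lloyd iterations given initial centres."""
    for _ in range(n_iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return centers, labels

def incremental_kmeans(X, k_max):
    """Add centres one at a time, seeding each at the worst-fit point."""
    centers = X.mean(axis=0, keepdims=True).copy()   # start with one centre
    for _ in range(1, k_max):
        centers, labels = kmeans(X, centers)
        dists = np.linalg.norm(X - centers[labels], axis=1)
        worst = X[np.argmax(dists)]                  # point with max distortion
        centers = np.vstack([centers, worst])
    return kmeans(X, centers)

# Toy data: three 2D blobs along a diagonal.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in (0.0, 3.0, 6.0)])
centers, labels = incremental_kmeans(X, k_max=3)
print(np.round(centers, 2))
```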
- Published
- 2004
24. CLOSED FREE-FORM SURFACE GEOMETRICAL MODELING: A NEW APPROACH WITH GLOBAL AND LOCAL CHARACTERIZATION.
- Author
-
Mari, Jean-Luc and Sequeira, Jean
- Subjects
GEOMETRIC modeling ,ALGORITHMS ,COMPUTER graphics ,COMPUTER science ,IMAGE processing ,ENGINEERING ,INDUSTRIAL engineering - Abstract
In this paper, we present a new approach to geometrical modeling which allows the user to easily characterize and control the shape defined by a closed surface. We focus on dealing with the shape's topological, morphological and geometrical properties separately. To do this, we have based our work on the following observations concerning surfaces defined by control points and implicit surfaces with skeletons: both provide complementary approaches to the surface's deformation, and both have specific advantages and limits. We thus attempted to conceive a model which integrates the local geometrical characterization induced by the control points, as well as the representation of the morphology given by the skeleton. Knowing that the lattice of control points is close to the surface and that the skeleton is centered in the related shape, we devised a 3-layer model. The transition layer separates the local geometrical considerations from those linked to the global morphology. We apply our model to shape design, in order to modify an object in an interactive and ergonomic way, as well as to reconstruction, which allows better shape understanding. To do so, we present the algorithms related to these processes. [ABSTRACT FROM AUTHOR]
- Published
- 2004
25. PET Image Reconstruction Using Deep Image Prior
- Author
-
Gong, Kuang, Catana, Ciprian, Qi, Jinyi, and Li, Quanzheng
- Subjects
Clinical Research ,Networking and Information Technology R&D (NITRD) ,Biomedical Imaging ,Bioengineering ,4.1 Discovery and preclinical testing of markers and technologies ,Detection ,screening and diagnosis ,Algorithms ,Brain ,Deep Learning ,Humans ,Image Processing ,Computer-Assisted ,Phantoms ,Imaging ,Positron-Emission Tomography ,Unsupervised Machine Learning ,Medical image reconstruction ,deep neural network ,unsupervised learning ,positron emission tomography ,Information and Computing Sciences ,Engineering ,Nuclear Medicine & Medical Imaging - Abstract
Recently, deep neural networks have been widely and successfully applied in computer vision tasks and have attracted growing interest in medical imaging. One barrier for the application of deep neural networks to medical imaging is the need for large amounts of prior training pairs, which is not always feasible in clinical practice. This is especially true for medical image reconstruction problems, where raw data are needed. Inspired by the deep image prior framework, in this paper, we proposed a personalized network training method where no prior training pairs are needed, but only the patient's own prior information. The network is updated during the iterative reconstruction process using the patient-specific prior information and measured data. We formulated the maximum-likelihood estimation as a constrained optimization problem and solved it using the alternating direction method of multipliers algorithm. Magnetic resonance imaging guided positron emission tomography reconstruction was employed as an example to demonstrate the effectiveness of the proposed framework. Quantification results based on simulation and real data show that the proposed reconstruction framework can outperform Gaussian post-smoothing and anatomically guided reconstructions using the kernel method or the neural-network penalty.
- Published
- 2019
26. Spatio-Temporally Constrained Reconstruction for Hyperpolarized Carbon-13 MRI Using Kinetic Models
- Author
-
Maidens, John, Gordon, Jeremy W, Chen, Hsin-Yu, Park, Ilwoo, Van Criekinge, Mark, Milshteyn, Eugene, Bok, Robert, Aggarwal, Rahul, Ferrone, Marcus, Slater, James B, Kurhanewicz, John, Vigneron, Daniel B, Arcak, Murat, and Larson, Peder EZ
- Subjects
Bioengineering ,Biomedical Imaging ,Detection ,screening and diagnosis ,4.1 Discovery and preclinical testing of markers and technologies ,Algorithms ,Animals ,Carbon Isotopes ,Humans ,Image Processing ,Computer-Assisted ,Kidney ,Magnetic Resonance Imaging ,Male ,Molecular Imaging ,Prostate ,Prostatic Neoplasms ,Rats ,Rats ,Sprague-Dawley ,Signal-To-Noise Ratio ,Parameter estimation ,linear systems ,inverse problems ,optimization ,magnetic resonance imaging ,carbon ,molecular imaging ,Information and Computing Sciences ,Engineering ,Nuclear Medicine & Medical Imaging - Abstract
We present a method of generating spatial maps of kinetic parameters from dynamic sequences of images collected in hyperpolarized carbon-13 magnetic resonance imaging (MRI) experiments. The technique exploits spatial correlations in the dynamic traces via regularization in the space of parameter maps. Similar techniques have proven successful in other dynamic imaging problems, such as dynamic contrast enhanced MRI. In this paper, we apply these techniques for the first time to hyperpolarized MRI problems, which are particularly challenging due to limited signal-to-noise ratio (SNR). We formulate the reconstruction as an optimization problem and present an efficient iterative algorithm for solving it based on the alternating direction method of multipliers. We demonstrate that this technique improves the qualitative appearance of parameter maps estimated from low SNR dynamic image sequences, first in simulation and then on a number of data sets collected in vivo. The improvement this method provides is particularly pronounced at low SNR levels.
- Published
- 2018
27. Accelerated Cardiac Diffusion Tensor Imaging Using Joint Low-Rank and Sparsity Constraints
- Author
-
Ma, Sen, Nguyen, Christopher T, Christodoulou, Anthony G, Luthringer, Daniel, Kobashigawa, Jon, Lee, Sang-Eun, Chang, Hyuk-Jae, and Li, Debiao
- Subjects
Information and Computing Sciences ,Communications Engineering ,Engineering ,Computer Vision and Multimedia Computation ,Biomedical Imaging ,Bioengineering ,Cardiovascular ,Clinical Research ,Algorithms ,Diffusion Tensor Imaging ,Heart ,Humans ,Image Processing ,Computer-Assisted ,Signal Processing ,Computer-Assisted ,Cardiac diffusion tensor imaging ,phase correction ,low-rank modeling ,compressed sensing ,helix angle ,helix angle transmurality ,mean diffusivity ,eess.SP ,Artificial Intelligence and Image Processing ,Biomedical Engineering ,Electrical and Electronic Engineering ,Biomedical engineering ,Electronics ,sensors and digital hardware ,Computer vision and multimedia computation - Abstract
ObjectiveThe purpose of this paper is to accelerate cardiac diffusion tensor imaging (CDTI) by integrating low-rankness and compressed sensing.MethodsDiffusion-weighted images exhibit both transform sparsity and low-rankness. These properties can jointly be exploited to accelerate CDTI, especially when a phase map is applied to correct for the phase inconsistency across diffusion directions, thereby enhancing low-rankness. The proposed method is evaluated both ex vivo and in vivo, and is compared to methods using either a low-rank or sparsity constraint alone.ResultsCompared to using a low-rank or sparsity constraint alone, the proposed method preserves more accurate helix angle features, the transmural continuum across the myocardium wall, and mean diffusivity at higher acceleration, while yielding significantly lower bias and higher intraclass correlation coefficient.ConclusionLow-rankness and compressed sensing together facilitate acceleration for both ex vivo and in vivo CDTI, improving reconstruction accuracy compared to employing either constraint alone.SignificanceCompared to previous methods for accelerating CDTI, the proposed method has the potential to reach higher acceleration while preserving myofiber architecture features, which may allow more spatial coverage, higher spatial resolution, and shorter temporal footprint in the future.
- Published
- 2018
28. Hybrid Pre-Log and Post-Log Image Reconstruction for Computed Tomography
- Author
-
Wang, Guobao, Zhou, Jian, Yu, Zhou, Wang, Wenli, and Qi, Jinyi
- Subjects
Information and Computing Sciences ,Computer Vision and Multimedia Computation ,Biomedical Imaging ,Bioengineering ,Algorithms ,Computer Simulation ,Humans ,Image Processing ,Computer-Assisted ,Models ,Biological ,Phantoms ,Imaging ,Poisson Distribution ,Shoulder ,Tomography ,X-Ray Computed ,Low-dose CT ,image reconstruction ,iterative algorithm ,noise model ,shifted Poisson ,weighted least squares ,Engineering ,Nuclear Medicine & Medical Imaging ,Information and computing sciences - Abstract
Tomographic image reconstruction for low-dose computed tomography (CT) is increasingly challenging as dose continues to reduce in clinical applications. Pre-log domain methods and post-log domain methods have been proposed individually and each method has its own disadvantage. While having the potential to improve image quality for low-dose data by using an accurate imaging model, pre-log domain methods suffer slow convergence in practice due to the nonlinear transformation from the image to measurements. In contrast, post-log domain methods have fast convergence speed but the resulting image quality is suboptimal for low dose CT data because the log transformation is extremely unreliable for low-count measurements and undefined for negative values. This paper proposes a hybrid method that integrates the pre-log model and post-log model together to overcome the disadvantages of individual pre-log and post-log methods. We divide a set of CT data into high-count and low-count regions. The post-log weighted least squares model is used for measurements in the high-count region and the pre-log shifted Poisson model for measurements in the low-count region. The hybrid likelihood function can be optimized using an existing iterative algorithm. Computer simulations and phantom experiments show that the proposed hybrid method can achieve faster early convergence than the pre-log shifted Poisson likelihood method and better signal-to-noise performance than the post-log weighted least squares method.
- Published
- 2017
29. Developing and Evaluating a Target-Background Similarity Metric for Camouflage Detection.
- Author
-
Lin, Chiuhsiang Joe, Chang, Chi-Chan, and Liu, Bor-Shong
- Subjects
CAMOUFLAGE (Military science) ,STEALTH aircraft ,COMPUTER algorithms ,COMPUTER-aided design ,PSYCHOPHYSICS ,IMAGE quality in imaging systems - Abstract
Background: Measurement of camouflage performance is of fundamental importance for military stealth applications. The goal of camouflage assessment algorithms is to automatically assess the effect of camouflage in agreement with human detection responses. In a previous study, we found that the Universal Image Quality Index (UIQI) correlated well with the psychophysical measures and could potentially serve as a camouflage assessment tool. Methodology: In this study, we quantify the relationship between camouflage similarity indexes and psychophysical results. We compare several image quality indexes for computational evaluation of camouflage effectiveness, and present the results of an extensive human visual experiment conducted to evaluate the performance of several camouflage assessment algorithms and analyze the strengths and weaknesses of these algorithms. Significance: The experimental data demonstrate the effectiveness of the approach, and the correlation coefficient of the UIQI was higher than those of the other methods. This approach was highly correlated with the human target-searching results. It also showed that this method is an objective and effective camouflage performance evaluation method because it considers the human visual system and image structure, which makes it consistent with the subjective evaluation results. [ABSTRACT FROM AUTHOR]
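The Universal Image Quality Index mentioned above has a standard closed form (Wang and Bovik, 2002), Q = 4·σxy·x̄·ȳ / ((σx² + σy²)(x̄² + ȳ²)). A single-window (global) implementation is sketched below; in practice the index is usually computed over sliding windows and averaged, and how it is applied to target/background patches in this study is not reproduced here.

```python
import numpy as np

def uiqi(x, y, eps=1e-12):
    """Universal Image Quality Index over a single window (global version)."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2) + eps)

# Identical images give Q close to 1; added noise lowers the score.
rng = np.random.default_rng(0)
target = rng.uniform(0, 255, size=(64, 64))
print(round(uiqi(target, target), 3))
print(round(uiqi(target, target + rng.normal(0, 25, target.shape)), 3))
```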
- Published
- 2014
30. Mathematical Modeling of Biofilm Structures Using COMSTAT Data
- Author
-
Verotta, Davide, Haagensen, Janus, Spormann, Alfred M, and Yang, Katherine
- Subjects
Chemical Engineering ,Engineering ,Bioengineering ,Algorithms ,Biofilms ,Computer Simulation ,Image Processing ,Computer-Assisted ,Kinetics ,Microscopy ,Confocal ,Models ,Statistical ,Multivariate Analysis ,Pseudomonas aeruginosa ,Software ,Applied Mathematics ,Biomedical Engineering ,Bioinformatics ,Biomedical engineering ,Applied mathematics - Abstract
Mathematical modeling holds great potential for quantitatively describing biofilm growth in the presence or absence of chemical agents used to limit or promote biofilm growth. In this paper, we describe a general mathematical/statistical framework that allows for the characterization of complex data in terms of a few parameters and provides the capability to (i) compare different experiments and exposures to different agents, (ii) test different hypotheses regarding biofilm growth and interaction with different agents, and (iii) simulate arbitrary administrations of agents. The mathematical framework is divided into submodels characterizing the biofilm, including new models characterizing live biofilm growth and dead cell accumulation; the interaction with agents inhibiting or stimulating growth; and the kinetics of the agents. The statistical framework can take into account measurement and interexperiment variation. We demonstrate the application of (some of) the models using confocal microscopy data obtained with the computer program COMSTAT.
- Published
- 2017
31. Optimizing Flip Angles for Metabolic Rate Estimation in Hyperpolarized Carbon-13 MRI
- Author
-
Maidens, John, Gordon, Jeremy W, Arcak, Murat, and Larson, Peder EZ
- Subjects
Engineering ,Algorithms ,Animals ,Carbon Isotopes ,Computer Simulation ,Disease Models ,Animal ,Image Processing ,Computer-Assisted ,Lactic Acid ,Magnetic Resonance Imaging ,Male ,Mice ,Prostatic Neoplasms ,Pyruvic Acid ,Fisher information ,hyperpolarized carbon-13 magnetic resonance imaging ,optimal experiment design ,parameter mapping ,quantitative imaging ,Information and Computing Sciences ,Nuclear Medicine & Medical Imaging ,Information and computing sciences - Abstract
Hyperpolarized carbon-13 magnetic resonance imaging has enabled the real-time observation of perfusion and metabolism in vivo. These experiments typically aim to distinguish between healthy and diseased tissues based on the rate at which they metabolize an injected substrate. However, existing approaches to optimizing flip angle sequences for these experiments have focused on indirect metrics of the reliability of metabolic rate estimates, such as signal variation and signal-to-noise ratio. In this paper we present an optimization procedure that focuses on maximizing the Fisher information about the metabolic rate. We demonstrate through numerical simulation experiments that flip angles optimized based on the Fisher information lead to lower variance in metabolic rate estimates than previous flip angle sequences. In particular, we demonstrate a 20% decrease in metabolic rate uncertainty when compared with the best competing sequence. We then demonstrate appropriateness of the mathematical model used in the simulation experiments with in vivo experiments in a prostate cancer mouse model. While there is no ground truth against which to compare the parameter estimates generated in the in vivo experiments, we demonstrate that our model used can reproduce consistent parameter estimates for a number of flip angle sequences.
- Published
- 2016
32. Fuzzy Nonlinear Proximal Support Vector Machine for Land Extraction Based on Remote Sensing Image.
- Author
-
Zhong, Xiaomei, Li, Jianping, Dou, Huacheng, Deng, Shijun, Wang, Guofei, Jiang, Yu, Wang, Yongjie, Zhou, Zebing, Wang, Li, and Yan, Fei
- Subjects
- *
FUZZY algorithms , *GEOMORPHOLOGY , *NONLINEAR analysis , *SUPPORT vector machines , *REMOTE sensing , *CARTOGRAPHY , *ARTIFICIAL neural networks software , *SIGNAL processing - Abstract
Remote sensing technologies are currently widely employed in the dynamic monitoring of land. This paper presents an algorithm named fuzzy nonlinear proximal support vector machine (FNPSVM), based on ETM+ remote sensing imagery. The algorithm is applied to extract various land types of the city of Da’an in northern China. Two multi-category strategies for this algorithm, namely “one-against-one” and “one-against-rest,” are described in detail and then compared. A fuzzy membership function is introduced to reduce the effects of noise and outliers in the data samples. The approaches to feature extraction and feature selection, along with several key parameter settings, are also given. Numerous experiments were carried out to evaluate its performance, including accuracy (overall accuracy and kappa coefficient), stability, training speed, and classification speed. The FNPSVM classifier was compared with three other classifiers, namely the maximum likelihood classifier (MLC), a back-propagation neural network (BPN), and the proximal support vector machine (PSVM), under different training conditions. The impact of the selection of training samples, testing samples, and features on the four classifiers was also evaluated in these experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
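A minimal sketch of the fuzzy-membership idea mentioned in the abstract above: training samples far from their class centroid are down-weighted so noise and outliers influence the classifier less. The membership formula, the RBF-kernel SVM stand-in (scikit-learn's SVC rather than the paper's proximal SVM), and all numbers are assumptions for illustration.

```python
# Down-weight samples by distance to their class centroid, then train a weighted
# nonlinear SVM. Membership formula and parameters are illustrative only.
import numpy as np
from sklearn.svm import SVC

def fuzzy_memberships(X, y, delta=1e-6):
    m = np.empty(len(y))
    for c in np.unique(y):
        idx = (y == c)
        center = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - center, axis=1)
        m[idx] = 1.0 - d / (d.max() + delta)   # closer to the centroid -> weight near 1
    return m

# toy usage: weight an RBF-kernel SVM by the memberships
X = np.random.randn(200, 6)
y = (X[:, 0] + 0.3 * np.random.randn(200) > 0).astype(int)
clf = SVC(kernel="rbf", C=10.0).fit(X, y, sample_weight=fuzzy_memberships(X, y))
```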
33. A Geometrical Approach for Automatic Shape Restoration of the Left Ventricle.
- Author
-
Tan, May-Ling, Su, Yi, Lim, Chi-Wan, Selvaraj, Senthil Kumar, Zhong, Liang, and Tan, Ru-San
- Subjects
- *
LEFT heart ventricle , *THREE-dimensional imaging , *MAGNETIC resonance imaging , *BIOENGINEERING , *CARDIOVASCULAR system , *QUASI-Newton methods , *ANATOMY - Abstract
This paper describes an automatic algorithm that uses a geometry-driven optimization approach to restore the shape of three-dimensional (3D) left ventricular (LV) models created from magnetic resonance imaging (MRI) data. The basic premise is to restore the LV shape such that the LV epicardial surface is smooth after the restoration and that the general shape characteristic of the LV is not altered. The maximum principal curvature and the minimum principal curvature of the LV epicardial surface are used to construct a shape-based optimization objective function to restore the shape of a motion-affected LV via a dual-resolution semi-rigid deformation process and a free-form geometric deformation process. A limited-memory quasi-Newton algorithm, L-BFGS-B, is then used to solve the optimization problem. The goal of the optimization is to achieve a smooth epicardial shape by iterative in-plane and through-plane translation of vertices in the LV model. We tested our algorithm on 30 sets of LV models with simulated motion artifact generated from a very smooth patient sample, and 20 in vivo patient-specific models which contain significant motion artifacts. In the 30 simulated samples, the Hausdorff distances with respect to the ground truth are significantly reduced after restoration, signifying that the algorithm can restore the geometrical accuracy of motion-affected LV models. In the 20 in vivo patient-specific models, the results show that our method is able to restore the shape of LV models without altering the general shape of the model. The magnitudes of in-plane translations are also consistent with existing registration techniques and experimental findings. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
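A minimal sketch of the bounded quasi-Newton step in the abstract above: vertex translations are optimized with L-BFGS-B to minimize a curvature-like roughness term while a data penalty and box bounds keep each vertex near its measured position. For brevity this uses a 2D closed contour and a discrete Laplacian as the roughness proxy; the paper works on 3D LV meshes with principal-curvature objectives, so everything here is an assumed simplification.

```python
# Restore a noisy closed contour by bounded L-BFGS-B optimization of vertex positions.
import numpy as np
from scipy.optimize import minimize

theta = np.linspace(0, 2 * np.pi, 80, endpoint=False)
measured = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * np.random.randn(80, 2)  # noisy ring

def objective(flat, lam=0.2):
    p = flat.reshape(-1, 2)
    lap = np.roll(p, -1, 0) - 2 * p + np.roll(p, 1, 0)        # discrete Laplacian = roughness
    return np.sum(lap**2) + lam * np.sum((p - measured)**2)   # smoothness + fidelity to data

x0 = measured.ravel()
bounds = [(v - 0.2, v + 0.2) for v in x0]                     # limit per-vertex translation
res = minimize(objective, x0, method="L-BFGS-B", bounds=bounds)
restored = res.x.reshape(-1, 2)
```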
34. Fusion Tensor Subspace Transformation Framework.
- Author
-
Wang, Su-Jing, Zhou, Chun-Guang, and Fu, Xiaolan
- Subjects
- *
APPLICATION software , *NATURAL language processing , *APPLIED mathematics , *INFORMATION technology , *COMPUTER algorithms , *SEMANTICS - Abstract
Tensor subspace transformation, a commonly used subspace transformation technique, has gained more and more popularity over the past few years because many objects in the real world can be naturally represented as multidimensional arrays, i.e. tensors. For example, an RGB facial image can be represented as a three-dimensional array (or 3rd-order tensor). The first two dimensions (or modes) represent the facial spatial information and the third dimension (or mode) represents the color space information. Each mode of the tensor may express a different semantic meaning, so different transformation strategies should be applied to different modes of the tensor according to their semantic meanings to obtain the best performance. To the best of our knowledge, there is no existing tensor subspace transformation algorithm that implements different transformation strategies on different modes of a tensor accordingly. In this paper, we propose a fusion tensor subspace transformation framework, a novel idea where different transformation strategies are implemented on separate modes of a tensor. Under this framework, we propose the Fusion Tensor Color Space (FTCS) model for face recognition. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
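A minimal sketch of the mode-wise transformation idea from the abstract above: an RGB image is treated as a 3rd-order tensor and a different matrix is applied to each mode via mode-n products. The random matrices stand in for learned subspace transforms and are purely illustrative.

```python
# Apply a different transform to each mode of an H x W x 3 tensor (mode-n products).
import numpy as np

def mode_n_product(T, M, mode):
    """Multiply tensor T by matrix M (shape J x I_mode) along the given mode."""
    Tm = np.moveaxis(T, mode, 0)
    shp = Tm.shape
    out = (M @ Tm.reshape(shp[0], -1)).reshape((M.shape[0],) + shp[1:])
    return np.moveaxis(out, 0, mode)

rng = np.random.default_rng(0)
img = rng.random((64, 48, 3))                  # toy RGB "face" tensor
U_rows = rng.standard_normal((20, 64))         # transform strategy for spatial mode 0
U_cols = rng.standard_normal((16, 48))         # a different transform for spatial mode 1
U_color = rng.standard_normal((3, 3))          # yet another strategy for the color mode

core = mode_n_product(mode_n_product(mode_n_product(img, U_rows, 0), U_cols, 1), U_color, 2)
print(core.shape)   # (20, 16, 3): each mode transformed by its own matrix
```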
35. Iterative Nonlocal Total Variation Regularization Method for Image Restoration.
- Author
-
Xu, Huanyu, Sun, Quansen, Luo, Nan, Cao, Guo, and Xia, Deshen
- Subjects
- *
IMAGE reconstruction , *ITERATIVE methods (Mathematics) , *MATHEMATICAL regularization , *ALGORITHMS , *EXPERIMENTAL design , *SIGNAL processing , *NUMBER theory - Abstract
In this paper, a Bregman-iteration-based total variation image restoration algorithm is proposed. Based on the Bregman iteration, the algorithm splits the original total variation problem into sub-problems that are easy to solve. Moreover, non-local regularization is introduced into the proposed algorithm, and a method to choose the non-local filter parameter locally and adaptively is proposed. Experimental results show that the proposed algorithms outperform some other regularization methods. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
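A minimal sketch of the Bregman-splitting mechanism referenced in the abstract above, in its generic textbook form: anisotropic TV denoising by split Bregman, with an FFT-based image update (periodic boundaries) and soft-thresholding of the split gradient variables. This is not the paper's nonlocal, adaptive-parameter method; parameters and boundary handling are assumptions.

```python
# Generic split-Bregman anisotropic TV denoiser (illustrative, periodic boundaries).
import numpy as np

def shrink(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tv_denoise_split_bregman(f, mu=10.0, lam=5.0, n_iter=50):
    gx = lambda v: np.roll(v, -1, 1) - v       # forward differences
    gy = lambda v: np.roll(v, -1, 0) - v
    gxT = lambda v: np.roll(v, 1, 1) - v       # their adjoints (periodic)
    gyT = lambda v: np.roll(v, 1, 0) - v
    ky = np.fft.fftfreq(f.shape[0])[:, None]
    kx = np.fft.fftfreq(f.shape[1])[None, :]
    lap = (2 - 2 * np.cos(2 * np.pi * ky)) + (2 - 2 * np.cos(2 * np.pi * kx))
    u = f.copy()
    dx, dy, bx, by = (np.zeros_like(f) for _ in range(4))
    for _ in range(n_iter):
        rhs = mu * f + lam * (gxT(dx - bx) + gyT(dy - by))
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (mu + lam * lap)))  # quadratic u-subproblem
        dx = shrink(gx(u) + bx, 1.0 / lam)     # soft-threshold the split gradient variables
        dy = shrink(gy(u) + by, 1.0 / lam)
        bx += gx(u) - dx                        # Bregman variable updates
        by += gy(u) - dy
    return u
```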
36. Bayesian Parameter Estimation and Segmentation in the Multi-Atlas Random Orbit Model.
- Author
-
Tang, Xiaoying, Oishi, Kenichi, Faria, Andreia V., Hillis, Argye E., Albert, Marilyn S., Mori, Susumu, and Miller, Michael I.
- Subjects
- *
SEGMENTATION (Biology) , *PARAMETER estimation , *COMPUTATIONAL biology , *BIOMEDICAL engineering , *DIFFERENTIAL geometry , *MORPHOGENESIS , *BIOENGINEERING - Abstract
This paper examines the multiple atlas random diffeomorphic orbit model in Computational Anatomy (CA) for parameter estimation and segmentation of subcortical and ventricular neuroanatomy in magnetic resonance imagery. We assume that there exist multiple magnetic resonance image (MRI) atlases, each atlas containing a collection of locally-defined charts in the brain generated via manual delineation of the structures of interest. We focus on maximum a posteriori estimation of high dimensional segmentations of MR within the class of generative models representing the observed MRI as a conditionally Gaussian random field, conditioned on the atlas charts and the diffeomorphic change of coordinates of each chart that generates it. The charts and their diffeomorphic correspondences are unknown and viewed as latent or hidden variables. We demonstrate that the expectation-maximization (EM) algorithm arises naturally, yielding the likelihood-fusion equation which the a posteriori estimator of the segmentation labels maximizes. The likelihoods being fused are modeled as conditionally Gaussian random fields with mean fields a function of each atlas chart under its diffeomorphic change of coordinates onto the target. The conditional-mean in the EM algorithm specifies the convex weights with which the chart-specific likelihoods are fused. The multiple atlases with the associated convex weights imply that the posterior distribution is a multi-modal representation of the measured MRI. Segmentation results for subcortical and ventricular structures of subjects, within populations of demented subjects, are demonstrated, including the use of multiple atlases across multiple diseased groups. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
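A minimal sketch of the likelihood-fusion idea from the abstract above: each registered atlas proposes a label and a predicted intensity at every voxel, a Gaussian likelihood of the observed intensity gives each atlas a convex weight, and labels are fused by weighted voting. Array shapes, the fixed noise level, and the voting rule are assumed simplifications of the paper's full EM formulation.

```python
# Convex-weight label fusion from per-atlas Gaussian likelihoods (illustrative).
import numpy as np

def fuse_labels(target, atlas_intensities, atlas_labels, sigma=10.0):
    # target: (V,) observed intensities; atlas_*: (A, V) deformed-atlas predictions/labels
    ll = np.exp(-0.5 * ((atlas_intensities - target[None, :]) / sigma) ** 2)   # (A, V)
    w = ll / np.clip(ll.sum(axis=0, keepdims=True), 1e-12, None)               # convex weights
    labels = np.unique(atlas_labels)
    votes = np.stack([(w * (atlas_labels == lab)).sum(axis=0) for lab in labels])
    return labels[np.argmax(votes, axis=0)]

# toy usage with 3 atlases over 1000 voxels
rng = np.random.default_rng(1)
tgt = rng.normal(100, 5, 1000)
atl_int = tgt[None, :] + rng.normal(0, 8, (3, 1000))
atl_lab = rng.integers(0, 2, (3, 1000))
seg = fuse_labels(tgt, atl_int, atl_lab)
```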
37. Accelerated High-Dimensional MR Imaging With Sparse Sampling Using Low-Rank Tensors
- Author
-
He, Jingfei, Liu, Qiegen, Christodoulou, Anthony G, Ma, Chao, Lam, Fan, and Liang, Zhi-Pei
- Subjects
Information and Computing Sciences ,Communications Engineering ,Engineering ,Computer Vision and Multimedia Computation ,Clinical Research ,Biomedical Imaging ,Bioengineering ,Algorithms ,Image Processing ,Computer-Assisted ,Magnetic Resonance Imaging ,High-dimensional MR imaging ,low-rank tensor ,partial separability ,sparse regularization ,sparse sampling ,Nuclear Medicine & Medical Imaging ,Information and computing sciences - Abstract
High-dimensional MR imaging often requires long data acquisition time, thereby limiting its practical applications. This paper presents a low-rank tensor based method for accelerated high-dimensional MR imaging using sparse sampling. This method represents high-dimensional images as low-rank tensors (or partially separable functions) and uses this mathematical structure for sparse sampling of the data space and for image reconstruction from highly undersampled data. More specifically, the proposed method acquires two datasets with complementary sampling patterns, one for subspace estimation and the other for image reconstruction; image reconstruction from highly undersampled data is accomplished by fitting the measured data with a sparsity constraint on the core tensor and a group sparsity constraint on the spatial coefficients jointly using the alternating direction method of multipliers. The usefulness of the proposed method is demonstrated in MRI applications; it may also have applications beyond MRI.
- Published
- 2016
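A deliberately simplified, matrix (rank-L) version of the partial-separability idea behind the record above: one dataset estimates a temporal subspace by SVD, and the other, undersampled dataset is fit to that subspace by least squares. The full method uses low-rank tensors with sparsity constraints solved by ADMM; the sizes and sampling pattern here are toy assumptions.

```python
# Two-dataset, subspace-based reconstruction of a space-time matrix (illustrative).
import numpy as np

rng = np.random.default_rng(0)
V, T, L = 400, 100, 5                                # voxels, frames, model order
X = rng.standard_normal((V, L)) @ rng.standard_normal((L, T))   # rank-L ground truth

# dataset 1: a few fully sampled voxels (navigator-like) -> temporal subspace via SVD
nav = X[:20, :] + 0.01 * rng.standard_normal((20, T))
Phi = np.linalg.svd(nav, full_matrices=False)[2][:L, :]          # (L, T) temporal basis

# dataset 2: each voxel observed at a random 30% of frames -> fit spatial coefficients
mask = rng.random((V, T)) < 0.3
U_hat = np.zeros((V, L))
for v in range(V):
    A = Phi[:, mask[v]].T                            # (n_obs, L) design matrix
    U_hat[v] = np.linalg.lstsq(A, X[v, mask[v]], rcond=None)[0]
X_hat = U_hat @ Phi
print("relative error:", np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```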
38. Ultrashort echo time and zero echo time MRI at 7T
- Author
-
Larson, Peder EZ, Han, Misung, Krug, Roland, Jakary, Angela, Nelson, Sarah J, Vigneron, Daniel B, Henry, Roland G, McKinnon, Graeme, and Kelley, Douglas AC
- Subjects
Engineering ,Biomedical and Clinical Sciences ,Clinical Sciences ,Physical Sciences ,Bioengineering ,Clinical Research ,Biomedical Imaging ,4.1 Discovery and preclinical testing of markers and technologies ,Detection ,screening and diagnosis ,Acoustics ,Algorithms ,Ankle ,Artifacts ,Brain ,Brain Mapping ,Contrast Media ,Healthy Volunteers ,Humans ,Image Enhancement ,Image Interpretation ,Computer-Assisted ,Image Processing ,Computer-Assisted ,Knee ,Magnetic Resonance Imaging ,Multiple Sclerosis ,Phantoms ,Imaging ,Signal-To-Noise Ratio ,Magnetic resonance imaging ,Neuroimaging ,Musculoskeletal system ,Nuclear Medicine & Medical Imaging - Abstract
Objective: Zero echo time (ZTE) and ultrashort echo time (UTE) pulse sequences for MRI offer unique advantages of being able to detect signal from rapidly decaying short-T2 tissue components. In this paper, we applied 3D ZTE and UTE pulse sequences at 7T to assess differences between these methods. Materials and methods: We matched the ZTE and UTE pulse sequences closely in terms of readout trajectories and image contrast. Our ZTE used the water- and fat-suppressed solid-state proton projection imaging method to fill the center of k-space. Images from healthy volunteers obtained at 7T were compared qualitatively, as well as with SNR and CNR measurements for various ultrashort, short, and long-T2 tissues. Results: We measured nearly identical contrast-to-noise and signal-to-noise ratios (CNR/SNR) in similar scan times between the two approaches for ultrashort, short, and long-T2 components in the brain, knee and ankle. In our protocol, we observed gradient fidelity artifacts in UTE, and our chosen flip angle and readout also resulted in shading artifacts in ZTE due to inadvertent spatial selectivity. These can be corrected by advanced reconstruction methods or with different chosen protocol parameters. Conclusion: The applied ZTE and UTE pulse sequences achieved similar contrast and SNR efficiency for volumetric imaging of ultrashort-T2 components. Key differences include that ZTE is limited to volumetric imaging, but has substantially reduced acoustic noise levels during the scan. Meanwhile, UTE has higher acoustic noise levels and greater sensitivity to gradient fidelity, but offers more flexibility in image contrast and volume selection.
- Published
- 2016
39. Designing a product family of meshing tools
- Author
-
Bastarrica, María Cecilia and Hitschfeld-Kahler, Nancy
- Subjects
- *
COMPUTER software development , *ENGINEERING , *ALGORITHMS , *IMAGE processing - Abstract
Abstract: Applying software engineering concepts can improve the quality of any software development, and this is even more dramatic for complex, large and sophisticated software, such as meshing tools. Software product families are series of related products that make intensive reuse of already developed components. Object-oriented design promotes reusability, so it is especially well suited for designing the structure of product families. In this paper we present an object-oriented design of a product family of meshing tools, where all family members share the software structure. By instantiating the structure with particular algorithms and parameters, we can easily produce different tools of the family. A good family design allows us not only to combine existing algorithms but also to easily incorporate new ones, improving software family evolution. We show how the family design is used for the generation of finite element and finite volume meshing tools, as well as a new tool for image processing. [Copyright © Elsevier]
- Published
- 2006
- Full Text
- View/download PDF
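A minimal sketch of the product-family idea from the abstract above: a shared tool structure whose meshing algorithm is an interchangeable component, so new family members are produced by instantiation rather than re-design. Class and method names are hypothetical, not the paper's actual design.

```python
# Shared family structure with pluggable meshing algorithms (illustrative).
from abc import ABC, abstractmethod

class MeshingAlgorithm(ABC):
    @abstractmethod
    def mesh(self, geometry): ...

class DelaunayTriangulation(MeshingAlgorithm):
    def mesh(self, geometry):
        return {"type": "triangles", "from": geometry}     # placeholder result

class OctreeMesher(MeshingAlgorithm):
    def mesh(self, geometry):
        return {"type": "hexahedra", "from": geometry}     # placeholder result

class MeshingTool:
    """Family members share this structure; they differ only in the plugged-in algorithm."""
    def __init__(self, algorithm: MeshingAlgorithm):
        self.algorithm = algorithm
    def run(self, geometry):
        return self.algorithm.mesh(geometry)

finite_element_tool = MeshingTool(DelaunayTriangulation())
image_processing_tool = MeshingTool(OctreeMesher())
```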
40. Edge-Preserving PET Image Reconstruction Using Trust Optimization Transfer
- Author
-
Wang, Guobao and Qi, Jinyi
- Subjects
Information and Computing Sciences ,Communications Engineering ,Engineering ,Computer Vision and Multimedia Computation ,Bioengineering ,Biomedical Imaging ,Algorithms ,Animals ,Brain ,Humans ,Image Processing ,Computer-Assisted ,Phantoms ,Imaging ,Positron-Emission Tomography ,Primates ,Edge-preserving regularization ,image reconstruction ,optimization algorithm ,optimization transfer ,positron emission tomography ,Nuclear Medicine & Medical Imaging ,Information and computing sciences - Abstract
Iterative image reconstruction for positron emission tomography can improve image quality by using spatial regularization. The most commonly used quadratic penalty often oversmoothes sharp edges and fine features in reconstructed images, while nonquadratic penalties can preserve edges and achieve higher contrast recovery. Existing optimization algorithms such as the expectation maximization (EM) and preconditioned conjugate gradient (PCG) algorithms work well for the quadratic penalty, but are less efficient for high-curvature or nonsmooth edge-preserving regularizations. This paper proposes a new algorithm to accelerate edge-preserving image reconstruction by using two strategies: trust surrogate and optimization transfer descent. Trust surrogate approximates the original penalty by a smoother function at each iteration, but guarantees that the algorithm descends monotonically; optimization transfer descent accelerates a conventional optimization transfer algorithm by using conjugate gradient and line search. Results of computer simulations and real 3-D data show that the proposed algorithm converges much faster than the conventional EM and PCG for smooth edge-preserving regularization and can also be more efficient than current state-of-the-art algorithms for the nonsmooth l1 regularization.
- Published
- 2015
41. Reconstruction of 4-D Dynamic SPECT Images from Inconsistent Projections Using a Spline Initialized FADS Algorithm (SIFADS)
- Author
-
Abdalah, Mahmoud, Boutchko, Rostyslav, Mitra, Debasis, and Gullberg, Grant T
- Subjects
Information and Computing Sciences ,Bioengineering ,Cardiovascular ,Algorithms ,Animals ,Blood Physiological Phenomena ,Dogs ,Heart ,Image Processing ,Computer-Assisted ,Liver ,Phantoms ,Imaging ,Rats ,Tomography ,Emission-Computed ,Single-Photon ,Dynamic single photon emission computed tomography ,image reconstruction ,optimization ,regularization ,Engineering ,Nuclear Medicine & Medical Imaging ,Information and computing sciences - Abstract
In this paper, we propose and validate an algorithm for extracting voxel-by-voxel time activity curves directly from the inconsistent projections encountered in dynamic cardiac SPECT. The algorithm was derived from the factor analysis of dynamic structures (FADS) approach and imposes prior information by applying several regularization functions with adaptively changing relative weighting. The anatomical information of the imaged subject was used to apply the proposed regularization functions adaptively in the spatial domain. The algorithm's performance is validated by reconstructing dynamic datasets simulated using the NCAT phantom with a range of different input tissue time-activity curves. The results are compared to the spline-based and FADS methods. The validated algorithm is then applied to reconstruct pre-clinical cardiac SPECT data from canine and murine subjects. Images generated from both simulated and experimentally acquired data confirm the ability of the new algorithm to solve the inverse problem of dynamic SPECT with slow gantry rotation.
- Published
- 2015
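A minimal sketch of the factor-analysis (FADS) decomposition underlying the record above: a dynamic series is split into a few nonnegative spatial factor maps and their time-activity curves, here with plain multiplicative NMF updates. The paper works from inconsistent projections with adaptive anatomical regularization; this sketch only shows the factor model itself, and the initialization and iteration counts are assumptions.

```python
# Nonnegative factorization of dynamic data into spatial factors and time-activity curves.
import numpy as np

def fads_like_nmf(D, n_factors=3, n_iter=300, eps=1e-9):
    # D: (n_voxels, n_frames) nonnegative dynamic data
    rng = np.random.default_rng(0)
    C = rng.random((D.shape[0], n_factors))       # spatial factor coefficients
    F = rng.random((n_factors, D.shape[1]))       # factor time-activity curves
    for _ in range(n_iter):
        C *= (D @ F.T) / (C @ F @ F.T + eps)      # multiplicative updates keep
        F *= (C.T @ D) / (C.T @ C @ F + eps)      # both factors nonnegative
    return C, F
```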
42. PET Image Reconstruction Using Kernel Method
- Author
-
Wang, Guobao and Qi, Jinyi
- Subjects
Information and Computing Sciences ,Communications Engineering ,Engineering ,Computer Vision and Multimedia Computation ,Bioengineering ,Networking and Information Technology R&D (NITRD) ,Biomedical Imaging ,Generic health relevance ,Algorithms ,Computer Simulation ,Humans ,Image Processing ,Computer-Assisted ,Phantoms ,Imaging ,Positron-Emission Tomography ,Software ,Expectation maximization ,image prior ,image reconstruction ,kernel method ,positron emission tomography ,Nuclear Medicine & Medical Imaging ,Information and computing sciences - Abstract
Image reconstruction from low-count positron emission tomography (PET) projection data is challenging because the inverse problem is ill-posed. Prior information can be used to improve image quality. Inspired by the kernel methods in machine learning, this paper proposes a kernel based method that models PET image intensity in each pixel as a function of a set of features obtained from prior information. The kernel-based image model is incorporated into the forward model of PET projection data and the coefficients can be readily estimated by the maximum likelihood (ML) or penalized likelihood image reconstruction. A kernelized expectation-maximization algorithm is presented to obtain the ML estimate. Computer simulations show that the proposed approach can achieve better bias versus variance trade-off and higher contrast recovery for dynamic PET image reconstruction than the conventional maximum likelihood method with and without post-reconstruction denoising. Compared with other regularization-based methods, the kernel method is easier to implement and provides better image quality for low-count data. Application of the proposed kernel method to a 4-D dynamic PET patient dataset showed promising results.
- Published
- 2015
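A minimal sketch of the kernelized EM idea from the abstract above: the image is modeled as x = K·α, where K is a kernel matrix built from prior-image features, and α is updated with an EM-style multiplicative rule on the composite system matrix P·K. The toy system matrix, kernel width, and problem sizes are assumptions, not the paper's implementation.

```python
# Kernelized EM reconstruction on a toy Poisson problem (illustrative sizes/values).
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_bins = 64, 96
P = rng.random((n_bins, n_pix)) * (rng.random((n_bins, n_pix)) < 0.1)   # toy system matrix
prior_feat = rng.random((n_pix, 3))                                     # features from prior images

# Gaussian kernel between pixels, row-normalized
d2 = ((prior_feat[:, None, :] - prior_feat[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / (2 * 0.1 ** 2))
K /= K.sum(axis=1, keepdims=True)

x_true = rng.random(n_pix)
y = rng.poisson(P @ x_true + 0.1)                                       # noisy projection data

PK = P @ K
alpha = np.ones(n_pix)
sens = PK.T @ np.ones(n_bins) + 1e-12
for _ in range(100):                                                    # kernelized EM iterations
    alpha *= (PK.T @ (y / (PK @ alpha + 1e-12))) / sens
x_hat = K @ alpha                                                       # reconstructed image
```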
43. Metric optimization for surface analysis in the Laplace-Beltrami embedding space.
- Author
-
Shi, Yonggang, Lai, Rongjie, Wang, Danny JJ, Pelletier, Daniel, Mohr, David, Sicotte, Nancy, and Toga, Arthur W
- Subjects
Hippocampus ,Cerebral Cortex ,Humans ,Brain Mapping ,Depression ,Algorithms ,Image Processing ,Computer-Assisted ,Adolescent ,Child ,Female ,Cortex ,hippocampus ,Laplace-Beltrami embedding ,metric optimization ,surface mapping ,Image Processing ,Computer-Assisted ,Nuclear Medicine & Medical Imaging ,Information and Computing Sciences ,Engineering - Abstract
In this paper, we present a novel approach for the intrinsic mapping of anatomical surfaces and its application in brain mapping research. Using the Laplace-Beltrami eigen-system, we represent each surface with an isometry invariant embedding in a high dimensional space. The key idea in our system is that we realize surface deformation in the embedding space via the iterative optimization of a conformal metric without explicitly perturbing the surface or its embedding. By minimizing a distance measure in the embedding space with metric optimization, our method generates a conformal map directly between surfaces with highly uniform metric distortion and the ability of aligning salient geometric features. Besides pairwise surface maps, we also extend the metric optimization approach for group-wise atlas construction and multi-atlas cortical label fusion. In experimental results, we demonstrate the robustness and generality of our method by applying it to map both cortical and hippocampal surfaces in population studies. For cortical labeling, our method achieves excellent performance in a cross-validation experiment with 40 manually labeled surfaces, and successfully models localized brain development in a pediatric study of 80 subjects. For hippocampal mapping, our method produces much more significant results than two popular tools on a multiple sclerosis study of 109 subjects.
- Published
- 2014
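A minimal sketch of the embedding step in the record above: a surface is represented by the leading eigenfunctions of (an approximation to) its Laplace-Beltrami operator. Here a plain graph Laplacian built from mesh edges stands in for the cotangent/conformal-metric operator, and the metric-optimization and mapping steps are not shown; the edge list and sizes are toy assumptions.

```python
# Spectral embedding of a mesh from a graph-Laplacian approximation (illustrative).
import numpy as np

def lb_embedding(edges, n_vertices, n_eigs=10):
    W = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        W[i, j] = W[j, i] = 1.0
    L = np.diag(W.sum(axis=1)) - W                 # graph Laplacian (crude LB stand-in)
    vals, vecs = np.linalg.eigh(L)                 # ascending eigenvalues
    return vecs[:, 1:n_eigs + 1]                   # drop the constant mode; embedding coordinates

# toy usage: a cycle "mesh" with 50 vertices
edges = [(k, (k + 1) % 50) for k in range(50)]
emb = lb_embedding(edges, 50, n_eigs=4)
```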
44. 200 MeV Proton Radiography Studies With a Hand Phantom Using a Prototype Proton CT Scanner
- Author
-
Plautz, Tia, Bashkirov, V, Feng, V, Hurley, F, Johnson, RP, Leary, C, Macafee, S, Plumb, A, Rykalin, V, Sadrozinski, HF-W, Schubert, K, Schulte, R, Schultze, B, Steinberg, D, Witt, M, and Zatserklyaniy, A
- Subjects
Information and Computing Sciences ,Engineering ,Biomedical Imaging ,Bioengineering ,4.2 Evaluation of markers and technologies ,Detection ,screening and diagnosis ,Algorithms ,Hand ,Humans ,Image Processing ,Computer-Assisted ,Phantoms ,Imaging ,Protons ,Radiation Dosage ,Tomography ,X-Ray Computed ,Data reduction ,proton imaging ,spatial resolution ,tomographic reconstruction of material properties ,Nuclear Medicine & Medical Imaging ,Information and computing sciences - Abstract
Proton radiography has applications in patient alignment and verification procedures for proton beam radiation therapy. In this paper, we report an experiment which used 200 MeV protons to generate proton energy-loss and scattering radiographs of a hand phantom. The experiment used the first-generation proton computed tomography (CT) scanner prototype, which was installed on the research beam line of the clinical proton synchrotron at Loma Linda University Medical Center. It was found that while both radiographs displayed anatomical details of the hand phantom, the energy-loss radiograph had a noticeably higher resolution. Nonetheless, scattering radiography may yield more contrast between soft tissue and bone than energy-loss radiography; however, this requires further study. This study contributes to the optimization of the performance of the next generation of clinical proton CT scanners. Furthermore, it demonstrates the potential of proton imaging (proton radiography and CT), which is now within reach of becoming available as a new, potentially low-dose medical imaging modality.
- Published
- 2014
45. High-Resolution Cardiovascular MRI by Integrating Parallel Imaging With Low-Rank and Sparse Modeling
- Author
-
Christodoulou, Anthony G, Zhang, Haosen, Zhao, Bo, Hitchens, T Kevin, Ho, Chien, and Liang, Zhi-Pei
- Subjects
Information and Computing Sciences ,Engineering ,Computer Vision and Multimedia Computation ,Biomedical Engineering ,Cardiovascular ,Bioengineering ,Heart Disease ,Biomedical Imaging ,4.2 Evaluation of markers and technologies ,Detection ,screening and diagnosis ,4.1 Discovery and preclinical testing of markers and technologies ,Algorithms ,Animals ,Heart ,Humans ,Image Processing ,Computer-Assisted ,Magnetic Resonance Imaging ,Phantoms ,Imaging ,Rats ,Cardiovascular MRI ,group sparsity ,inverse problems ,low-rank modeling ,partial separability ,Artificial Intelligence and Image Processing ,Electrical and Electronic Engineering ,Biomedical engineering ,Electronics ,sensors and digital hardware ,Computer vision and multimedia computation - Abstract
Magnetic resonance imaging (MRI) has long been recognized as a powerful tool for cardiovascular imaging because of its unique potential to measure blood flow, cardiac wall motion, and tissue properties jointly. However, many clinical applications of cardiac MRI have been limited by low imaging speed. In this paper, we present a novel method to accelerate cardiovascular MRI through the integration of parallel imaging, low-rank modeling, and sparse modeling. This method consists of a novel image model and specialized data acquisition. Of particular novelty is the proposed low-rank model component, which is specially adapted to the particular low-rank structure of cardiovascular signals. Simulations and in vivo experiments were performed to evaluate the method, as well as an analysis of the low-rank structure of a numerical cardiovascular phantom. Cardiac imaging experiments were carried out on both human and rat subjects without the use of ECG or respiratory gating and without breath holds. The proposed method reconstructed 2-D human cardiac images up to 22 fps and 1.0 mm × 1.0 mm spatial resolution and 3-D rat cardiac images at 67 fps and 0.65 mm × 0.65 mm × 0.31 mm spatial resolution. These capabilities will enhance the practical utility of cardiovascular MRI.
- Published
- 2013
46. In vivo estimation of the shoulder joint center of rotation using magneto-inertial sensors: MRI-based accuracy and repeatability assessment
- Author
-
Michele Crabolu, Danilo Pani, Andrea Cereatti, Maurizio Conti, P. Crivelli, and Luigi Raffo
- Subjects
Male ,Engineering ,Accelerometers ,Center of rotation ,Functional method ,Gleno-humeral joint ,Gyroscope ,Human movement ,Magneto-inertial sensing ,Shoulder ,Wearable devices ,Adult ,Algorithms ,Female ,Humans ,Humerus ,Movement ,Phantoms ,Imaging ,Reproducibility of Results ,Shoulder Joint ,Signal-To-Noise Ratio ,Image Processing ,Computer-Assisted ,Magnetic Phenomena ,Magnetic Resonance Imaging ,Range of Motion ,Articular ,Rotation ,Radiological and Ultrasound Technology ,Biomaterials ,Biomedical Engineering ,Radiology ,Nuclear Medicine and Imaging ,02 engineering and technology ,0302 clinical medicine ,Image Processing, Computer-Assisted ,Computer vision ,Range of Motion, Articular ,Phantoms, Imaging ,General Medicine ,Repeatability ,medicine.anatomical_structure ,Range of motion ,Rotation (mathematics) ,Accuracy and precision ,0206 medical engineering ,Angular velocity ,03 medical and health sciences ,Inertial measurement unit ,medicine ,Radiology, Nuclear Medicine and imaging ,Instant centre of rotation ,business.industry ,Research ,020601 biomedical engineering ,Shoulder joint ,Artificial intelligence ,business ,030217 neurology & neurosurgery ,Biomedical engineering - Abstract
Background: The human gleno-humeral joint is normally represented as a spherical hinge, and its center of rotation is used to construct humerus anatomical axes and as the reduction point for the computation of internal joint moments. The position of the gleno-humeral joint center (GHJC) can be estimated by recording ad hoc shoulder joint movements following a functional approach. In recent years, extensive research has been conducted to improve the GHJC estimate as obtained from positioning systems such as stereo-photogrammetry or electromagnetic tracking. Conversely, despite the growing interest in wearable technologies in the field of human movement analysis, no studies have investigated the problem of GHJC estimation using miniaturized magneto-inertial measurement units (MIMUs). The aim of this study was to evaluate both the accuracy and precision of the GHJC estimate obtained using a MIMU-based methodology and a functional approach. Methods: Five different functional methods were implemented and comparatively assessed under different experimental conditions (two types of shoulder motion: cross and star type motion; two joint velocities: ωmax = 90°/s, 180°/s; two ranges of motion: θ = 45°, 90°). Validation was conducted on five healthy subjects, and true GHJC locations were obtained using magnetic resonance imaging. Results: The best-performing methods (NAP and SAC) showed an accuracy in the estimate of the GHJC between 20.6 and 21.9 mm and repeatability values between 9.4 and 10.4 mm. Method performance did not show significant differences for the type of arm motion analyzed or for a reduction of the arm angular velocity (180°/s and 90°/s). In addition, a reduction of the joint range of motion (90° and 45°) did not seem to significantly influence the GHJC position estimate except in a few subject-method combinations. Conclusions: MIMU-based functional methods can be used to estimate the GHJC position in vivo with errors of the same order of magnitude as those obtained using traditional stereo-photogrammetric techniques. The proposed methodology seemed to be robust under different experimental conditions. The present paper was awarded the "SIAMOC Best Methodological Paper 2016" prize.
- Published
- 2017
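A minimal sketch of the functional center-of-rotation idea behind the record above: if a point on the humerus moves on a sphere about the joint center, that center can be recovered by a linear least-squares sphere fit to the point's trajectory. This generic position-based fit is only an illustration; the paper's NAP/SAC methods work from magneto-inertial (angular velocity and acceleration) data.

```python
# Algebraic least-squares sphere fit to a 3D trajectory (illustrative stand-in).
import numpy as np

def fit_sphere_center(points):
    # points: (N, 3) trajectory of a point on the moving segment during shoulder motion
    A = np.c_[2.0 * points, np.ones(len(points))]
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)   # solves |p|^2 = 2 p.c + (r^2 - |c|^2)
    center, k = sol[:3], sol[3]
    radius = np.sqrt(k + center @ center)
    return center, radius
```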
47. An Analytical Algorithm for Tensor Tomography From Projections Acquired About Three Axes
- Author
-
Weijie Tao, Damien Rohmer, Grant T. Gullberg, Youngho Seo, and Qiu Huang
- Subjects
directional X-ray projections ,Ellipsoids ,Image Processing ,Bioengineering ,Tensors ,Phantoms ,Imaging ,Imaging, Three-Dimensional ,Computer-Assisted ,Engineering ,Information and Computing Sciences ,Image Processing, Computer-Assisted ,Filtered back-projection algorithm ,Humans ,Electrical and Electronic Engineering ,Biomedical measurement ,Tomography ,Radiological and Ultrasound Technology ,Phantoms, Imaging ,solenoidal and irrotational components ,X-ray imaging ,X-Ray Computed ,Computer Science Applications ,Nuclear Medicine & Medical Imaging ,Three-Dimensional ,Image reconstruction ,Biomedical Imaging ,Three-dimensional displays ,tensor tomography ,Tomography, X-Ray Computed ,Filtering algorithms ,Algorithms ,Software - Abstract
Tensor fields are useful for modeling the structure of biological tissues. The challenge in measuring tensor fields is to acquire sufficient scalar measurement data in a physically achievable way and to reconstruct tensors from as few projections as possible for efficient application in medical imaging. In this paper, we present a filtered back-projection algorithm for the reconstruction of a symmetric second-rank tensor field from directional X-ray projections about three axes. The tensor field is decomposed into a solenoidal and an irrotational component, each with three unknowns. Using the Fourier projection theorem, a filtered back-projection algorithm is derived to reconstruct the solenoidal and irrotational components from projections acquired around three axes. A simple illustrative phantom consisting of two spherical shells and a 3D digital cardiac diffusion image obtained from diffusion tensor MRI of an excised human heart are used to simulate directional X-ray projections. The simulations validate the mathematical derivations and demonstrate reasonable noise properties of the algorithm. The decomposition of the tensor field into solenoidal and irrotational components provides insight into the development of algorithms for reconstructing tensor fields with sufficient samples, in terms of the type of directional projections and the orbits necessary for acquiring projections of the tensor field.
- Published
- 2022
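A minimal sketch of the scalar filtered back-projection building block that the record above extends to tensor fields: ramp-filter each parallel-beam projection in Fourier space, then smear it back across the image along its acquisition angle. This is the standard textbook FBP, not the paper's solenoidal/irrotational decomposition; the detector geometry and scaling are assumed simplifications.

```python
# Standard parallel-beam filtered back-projection (illustrative).
import numpy as np

def fbp(sinogram, angles_deg):
    # sinogram: (n_angles, n_det) parallel projections; returns an n_det x n_det image
    n_det = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n_det))                               # ideal ramp filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    grid = np.arange(n_det) - (n_det - 1) / 2.0
    X, Y = np.meshgrid(grid, grid)
    recon = np.zeros((n_det, n_det))
    for proj, ang in zip(filtered, np.deg2rad(angles_deg)):
        t = X * np.cos(ang) + Y * np.sin(ang) + (n_det - 1) / 2.0      # detector coordinate
        recon += np.interp(t, np.arange(n_det), proj, left=0.0, right=0.0)
    return recon * np.pi / len(angles_deg)
```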
48. A System Verification Platform for High-Density Epiretinal Prostheses
- Author
-
Mark S. Humayun, Kuanfu Chen, Zhi Yang, Wentai Liu, Yi-Kai Lo, and James D. Weiland
- Subjects
Engineering ,Light ,media_common.quotation_subject ,Retinal implant ,Video Recording ,Biomedical Engineering ,Image processing ,Prosthesis Design ,Retina ,Feedback ,Software ,Image Processing, Computer-Assisted ,Humans ,Telemetry ,Computer Simulation ,Electrical and Electronic Engineering ,Software analysis pattern ,Vision, Ocular ,media_common ,Computers ,business.industry ,Neural Prosthesis ,Retinal Degeneration ,Signal Processing, Computer-Assisted ,Equipment Design ,Electric Stimulation ,Electrodes, Implanted ,Visual Prosthesis ,Debugging ,Visual prosthesis ,Embedded system ,Systems design ,business ,Wireless Technology ,Algorithms - Abstract
Retinal prostheses have restored light perception to people worldwide who have poor or no vision as a consequence of retinal degeneration. To advance the quality of visual stimulation for retinal implant recipients, a higher number of stimulation channels is expected in the next generation of retinal prostheses, which poses a great challenge to system design and verification. This paper presents a system verification platform dedicated to the development of retinal prostheses. The system includes primary processing, dual-band power and data telemetry, a high-density stimulator array, and two methods for output verification. End-to-end system validation and individual functional block characterization can be achieved with this platform through visual inspection and software analysis. Custom-built software running on the computers also provides a convenient way to test new features before they are realized in the ICs. Real-time visual feedback through the video displays makes it easy to monitor and debug the system. The characterization of the wireless telemetry and the demonstration of the visual display are reported in this paper using a 256-channel retinal prosthetic IC as an example.
- Published
- 2013
49. Gyroscope Pivot Bearing Dimension and Surface Defect Detection
- Author
-
Xudong Li, Huijie Zhao, and Wenqian Ge
- Subjects
Engineering ,Surface Properties ,illumination system ,defect detection ,Image processing ,lcsh:Chemical technology ,Curvature ,Biochemistry ,Article ,Analytical Chemistry ,law.invention ,Software Design ,law ,Image Processing, Computer-Assisted ,lcsh:TP1-1185 ,Computer vision ,pulse coupled neural network ,Electrical and Electronic Engineering ,image segmentation ,Instrumentation ,Lighting ,Fitness function ,Bearing (mechanical) ,particle swarm optimization ,Artificial neural network ,business.industry ,Particle swarm optimization ,Gyroscope ,Image segmentation ,Atomic and Molecular Physics, and Optics ,Neural Networks, Computer ,Artificial intelligence ,business ,Algorithms - Abstract
Because of the perceived lack of systematic analysis in illumination system design processes, and the lack of criteria for design methods in vision-based detection, a method for the design of a task-oriented illumination system is proposed. After detecting the micro-defects of a gyroscope pivot bearing with a high-curvature glabrous surface and analyzing the characteristics of the surface detection and reflection model, a complex illumination system with coaxial and ring lights is proposed. The illumination system is then optimized based on a simulated analysis of the illuminance uniformity of the target regions, together with the grey-scale uniformity and articulation calculated from grey-level imagery. Currently, in order to apply the Pulse Coupled Neural Network (PCNN) method, structural parameters must be tested and adjusted repeatedly. Therefore, this paper proposes the use of a particle swarm optimization (PSO) algorithm, in which the maximum between-cluster variance rule is used as the fitness function with a linearly reduced inertia factor. This algorithm is used to adaptively set the PCNN connection coefficients and dynamic threshold, which avoids premature convergence and local oscillation. The proposed method is used for pivot bearing defect image processing. The segmentation results of the maximum entropy method, the minimum error method, and the one described in this paper are compared using buffer region matching, and the experimental results show that the method of this paper is effective.
- Published
- 2011
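A minimal sketch of the PSO mechanics described in the abstract above: particles with a linearly decreasing inertia weight maximize the between-class (Otsu) variance. For brevity the particles optimize a single gray-level threshold rather than the full set of PCNN connection coefficients and dynamic threshold used in the paper; the swarm parameters are illustrative assumptions.

```python
# PSO with linearly reduced inertia, fitness = Otsu between-class variance (illustrative).
import numpy as np

def between_class_variance(img, t):
    fg, bg = img[img > t], img[img <= t]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w1, w2 = fg.size / img.size, bg.size / img.size
    return w1 * w2 * (fg.mean() - bg.mean()) ** 2

def pso_threshold(img, n_particles=20, n_iter=40, w_start=0.9, w_end=0.4):
    rng = np.random.default_rng(0)
    pos = rng.uniform(img.min(), img.max(), n_particles)
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pbest_fit = np.array([between_class_variance(img, p) for p in pos])
    gbest = pbest[np.argmax(pbest_fit)]
    for it in range(n_iter):
        w = w_start - (w_start - w_end) * it / max(n_iter - 1, 1)   # linearly reduced inertia
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, img.min(), img.max())
        fit = np.array([between_class_variance(img, p) for p in pos])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[np.argmax(pbest_fit)]
    return gbest

# toy usage on synthetic bimodal gray levels (background vs. bright defect)
img = np.r_[np.random.normal(60, 5, 5000), np.random.normal(170, 10, 2000)]
print("PSO threshold ~", pso_threshold(img))
```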
50. Automatic method of analysis and measurement of additional parameters of corneal deformation in the Corvis tonometer
- Author
-
Robert Koprowski
- Subjects
Engineering ,Corvis ST ,Scheimpflug principle ,Biomedical Engineering ,Image processing ,Eye ,Biomaterials ,Cornea ,Automation ,Tonometry, Ocular ,Software ,Tonometer ,Median filter ,Scheimpflug camera ,Image Processing, Computer-Assisted ,Humans ,Radiology, Nuclear Medicine and imaging ,Computer vision ,Biomechanics ,Computer Simulation ,MATLAB ,Intraocular Pressure ,computer.programming_language ,Radiological and Ultrasound Technology ,Pixel ,business.industry ,Research ,Corneal deformation ,Reproducibility of Results ,General Medicine ,Filter (signal processing) ,eye diseases ,Biomechanical Phenomena ,Artificial intelligence ,sense organs ,business ,computer ,Algorithms - Abstract
Introduction: The method for measuring intraocular pressure using the Corvis tonometer provides a sequence of images of corneal deformation. Deformations of the cornea are recorded using the ultra-high-speed Scheimpflug camera. This paper presents a new and reproducible method of analysing corneal deformation images that allows the automatic measurement of new features, namely three new parameters unavailable in the original software. Material and method: The images subjected to processing had a resolution of 200 × 576 × 140 pixels. They were acquired from the Corvis tonometer and from simulation. In total, 14,000 2D images were analysed. The image analysis method proposed by the author automatically detects the edges of the cornea and sclera fragments. For this purpose, new image analysis and processing methods proposed by the author, as well as well-known ones such as the Canny filter, binarization, and median filtering, have been used. The presented algorithms were implemented in Matlab (version 7.11.0.584 - R2010b) with the Image Processing Toolbox (version 7.1 - R2010b). Results: Owing to the proposed algorithm it is possible to determine three parameters: (1) the degree of the corneal reaction relative to the static position; (2) the change in corneal length; (3) the ratio of amplitude changes to the corneal deformation length. The corneal reaction is smaller by about 30.40% compared to its static position. The change in corneal length during deformation is very small, approximately 1% of the original length. Parameter (3) enables the applanation points to be determined with a correlation of 92% compared to the conventional method for calculating corneal flattening areas. The proposed algorithm provides reproducible results fully automatically within a few seconds per patient on a Core i7 processor. Conclusions: Using the proposed algorithm, it is possible to measure new, additional parameters of corneal deformation which are not available in the original software. The presented analysis method provides three new parameters of the corneal reaction. Detailed clinical studies based on this method will be presented in subsequent papers.
- Published
- 2014
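A minimal sketch of the per-frame edge extraction step described in the abstract above: median-filter a Scheimpflug frame, run a Canny detector, and take the uppermost edge pixel in each column as the anterior corneal contour; a deformation amplitude then follows by comparing contours across frames. The filter sizes, thresholds, and contour definition are assumptions for illustration; the paper combines several additional custom processing steps.

```python
# Corneal contour extraction and a simple deformation measure (illustrative).
import numpy as np
from scipy.ndimage import median_filter
from skimage.feature import canny

def anterior_contour(frame):
    # frame: 2D grayscale Scheimpflug image (rows = depth, columns = lateral position)
    edges = canny(median_filter(frame.astype(float), size=3), sigma=2.0)
    contour = np.full(frame.shape[1], np.nan)
    rows, cols = np.nonzero(edges)
    for c in np.unique(cols):
        contour[c] = rows[cols == c].min()        # uppermost edge pixel per column
    return contour

def deformation_amplitude(static_frame, deformed_frame):
    d = anterior_contour(deformed_frame) - anterior_contour(static_frame)
    return np.nanmax(d)                            # apex displacement in pixels (toward the eye)
```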