22 results for "Niethammer, Marc"
Search Results
2. Exploring Cycle Consistency Learning in Interactive Volume Segmentation
- Author
-
Liu, Qin, Zheng, Meng, Planche, Benjamin, Gao, Zhongpai, Chen, Terrence, Niethammer, Marc, and Wu, Ziyan
- Subjects
FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition
- Abstract
Interactive volume segmentation can be approached via two decoupled modules: interaction-to-segmentation and segmentation propagation. Given a medical volume, a user first segments a slice (or several slices) via the interaction module and then propagates the segmentation(s) to the remaining slices. The user may repeat this process multiple times until a sufficiently high volume segmentation quality is achieved. However, due to the lack of human correction during propagation, segmentation errors are prone to accumulate in the intermediate slices and may lead to sub-optimal performance. To alleviate this issue, we propose a simple yet effective cycle consistency loss that regularizes an intermediate segmentation by referencing the accurate segmentation in the starting slice. To this end, we introduce a backward segmentation path that propagates the intermediate segmentation back to the starting slice using the same propagation network. With cycle consistency training, the propagation network is better regularized than in standard forward-only training approaches. Evaluation results on challenging benchmarks such as AbdomenCT-1k and OAI-ZIB demonstrate the effectiveness of our method. To the best of our knowledge, we are the first to explore cycle consistency learning in interactive volume segmentation., Comment: Tech Report
- Published
- 2023
- Full Text
- View/download PDF
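The forward–backward training idea above can be illustrated with a toy 1D propagation operator standing in for the learned network; all names and the convolution stand-in are illustrative, not the authors' code:

```python
import numpy as np

def propagate(mask_prob, kernel):
    # Toy stand-in for the learned slice-to-slice propagation network:
    # a 1D convolution of per-pixel foreground probabilities.
    return np.convolve(mask_prob, kernel, mode="same")

def cycle_consistency_loss(start_seg, kernel, n_slices=3):
    # Forward path: propagate the accurate starting segmentation
    # through the intermediate slices.
    cur = start_seg
    for _ in range(n_slices):
        cur = propagate(cur, kernel)
    # Backward path: propagate the intermediate result back with the
    # SAME operator, then compare against the accurate starting slice.
    for _ in range(n_slices):
        cur = propagate(cur, kernel)
    return float(np.mean((cur - start_seg) ** 2))

start = (np.arange(32) > 10).astype(float)   # toy starting-slice mask
identity = np.array([0.0, 1.0, 0.0])         # lossless propagation
blur = np.array([0.25, 0.5, 0.25])           # lossy propagation
```

A lossless propagator incurs zero cycle loss, while a lossy one is penalized, which is the regularization signal the paper exploits.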
3. MRIS: A Multi-modal Retrieval Approach for Image Synthesis on Diverse Modalities
- Author
-
Chen, Boqi and Niethammer, Marc
- Subjects
FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); Computer Science - Computer Vision and Pattern Recognition; FOS: Electrical engineering, electronic engineering, information engineering; Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
Multiple imaging modalities are often used for disease diagnosis, prediction, or population-based analyses. However, not all modalities might be available due to cost, different study designs, or changes in imaging technology. If the differences between the types of imaging are small, data harmonization approaches can be used; for larger changes, direct image synthesis approaches have been explored. In this paper, we develop an approach based on multi-modal metric learning to synthesize images of diverse modalities. We use metric learning via multi-modal image retrieval, resulting in embeddings that can relate images of different modalities. Given a large image database, the learned image embeddings allow us to use k-nearest neighbor (k-NN) regression for image synthesis. Our driving medical problem is knee osteoarthritis (KOA), but our developed method is general after proper image alignment. We test our approach by synthesizing cartilage thickness maps obtained from 3D magnetic resonance (MR) images using 2D radiographs. Our experiments show that the proposed method outperforms direct image synthesis and that the synthesized thickness maps retain information relevant to downstream tasks such as progression prediction and Kellgren-Lawrence grading (KLG). Our results suggest that retrieval approaches can be used to obtain high-quality and meaningful image synthesis results given large image databases.
- Published
- 2023
- Full Text
- View/download PDF
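The retrieval-plus-regression step described above can be sketched as follows; the embedding network is abstracted away and all variable names are hypothetical:

```python
import numpy as np

def knn_synthesize(query_emb, db_embs, db_targets, k=3):
    # Retrieve the k database entries whose learned embeddings are
    # closest to the query embedding (e.g. of a 2D radiograph) ...
    dists = np.linalg.norm(db_embs - query_emb, axis=1)
    idx = np.argsort(dists)[:k]
    # ... and synthesize the target modality (e.g. a cartilage
    # thickness map) as an inverse-distance-weighted average of the
    # retrieved targets.
    w = 1.0 / (dists[idx] + 1e-8)
    w /= w.sum()
    return np.tensordot(w, db_targets[idx], axes=1)

rng = np.random.default_rng(0)
db_embs = rng.normal(size=(50, 8))        # learned embeddings (stand-ins)
db_targets = rng.normal(size=(50, 4, 4))  # paired target-modality images
query = db_embs[7]                        # query identical to entry 7
synth = knn_synthesize(query, db_embs, db_targets, k=1)
```

With k=1 and an exact-match query, the synthesis reduces to retrieving the paired target, which is the degenerate case of the k-NN regression.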
4. SimpleClick: Interactive Image Segmentation with Simple Vision Transformers
- Author
-
Liu, Qin, Xu, Zhenlin, Bertasius, Gedas, and Niethammer, Marc
- Subjects
FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition
- Abstract
Click-based interactive image segmentation aims at extracting objects with a limited number of user clicks. A hierarchical backbone is the de-facto architecture for current methods. Recently, the plain, non-hierarchical Vision Transformer (ViT) has emerged as a competitive backbone for dense prediction tasks. This design allows the original ViT to be a foundation model that can be finetuned for downstream tasks without redesigning a hierarchical backbone for pretraining. Although this design is simple and has been proven effective, it has not yet been explored for interactive image segmentation. To fill this gap, we propose SimpleClick, the first interactive segmentation method that leverages a plain backbone. Based on the plain backbone, we introduce a symmetric patch embedding layer that encodes clicks into the backbone with minor modifications to the backbone itself. With the plain backbone pretrained as a masked autoencoder (MAE), SimpleClick achieves state-of-the-art performance. Remarkably, our method achieves 4.15 NoC@90 on SBD, a 21.8% improvement over the previous best result. Extensive evaluation on medical images demonstrates the generalizability of our method. We further develop an extremely tiny ViT backbone for SimpleClick and provide a detailed computational analysis, highlighting its suitability as a practical annotation tool., Comment: Tech report. Update 03/11/2023: Add results on a tiny model and append supplementary materials
- Published
- 2022
- Full Text
- View/download PDF
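The click-encoding idea above (turning user clicks into a dense map a patch embedding layer can consume) can be sketched as follows; this is illustrative only, not the SimpleClick API:

```python
import numpy as np

def click_map(shape, clicks, radius=3):
    # Rasterize clicks as binary disks; positive and negative clicks
    # would each get their own channel in practice.
    H, W = shape
    yy, xx = np.mgrid[:H, :W]
    out = np.zeros(shape, dtype=float)
    for cy, cx in clicks:
        out[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2] = 1.0
    return out

m = click_map((32, 32), [(10, 10), (20, 25)], radius=2)
```

Such a map can be patch-embedded symmetrically to the image and summed into the backbone's input tokens, which is roughly the role the paper's symmetric patch embedding layer plays.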
5. LiftReg: Limited Angle 2D/3D Deformable Registration
- Author
-
Tian, Lin, Lee, Yueh Z., Estépar, Raúl San José, and Niethammer, Marc
- Subjects
FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); Computer Science - Computer Vision and Pattern Recognition; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; FOS: Electrical engineering, electronic engineering, information engineering; Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
We propose LiftReg, a 2D/3D deformable registration approach. LiftReg is a deep registration framework which is trained using sets of digitally reconstructed radiographs (DRR) and computed tomography (CT) image pairs. By using simulated training data, LiftReg can use a high-quality CT-CT image similarity measure, which helps the network to learn a high-quality deformation space. To further improve registration quality and to address the inherent depth ambiguities of very limited angle acquisitions, we propose to use features extracted from the backprojected 2D images and a statistical deformation model. We test our approach on the DirLab lung registration dataset and show that it outperforms an existing learning-based pairwise registration approach., Comment: MICCAI 2022
- Published
- 2022
- Full Text
- View/download PDF
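A digitally reconstructed radiograph (DRR), as used for training data above, is at its simplest a line integral of CT attenuation along rays; a parallel-beam toy version (real DRRs use the scanner's projection geometry) is:

```python
import numpy as np

def parallel_drr(ct_volume, axis=0):
    # Integrate attenuation along one axis to simulate a radiograph.
    return ct_volume.sum(axis=axis)

vol = np.zeros((8, 16, 16))
vol[:, 4:12, 4:12] = 1.0          # a toy "organ" of uniform attenuation
drr = parallel_drr(vol, axis=0)
```

Pairing such simulated projections with the CT they came from is what lets the method train with a high-quality CT-CT similarity measure.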
6. Harmonization Benchmarking Tool for Neuroimaging Datasets
- Author
-
Osika, Tom, Ebrahim, Ebrahim, Styner, Martin, Niethammer, Marc, Sawyer, Thomas, and Enquobahrie, Andinet
- Subjects
FOS: Biological sciences; Image and Video Processing (eess.IV); FOS: Electrical engineering, electronic engineering, information engineering; Electrical Engineering and Systems Science - Image and Video Processing; Quantitative Biology - Quantitative Methods; Quantitative Methods (q-bio.QM)
- Abstract
A major data pre-processing step for large, multi-site studies is to handle site effects by harmonizing data, generating a dataset that enables more powerful analyses and more robust algorithms. There is a wide variety of data harmonization techniques, but there are few tools that streamline the process of harmonizing data, comparing across techniques, and benchmarking new techniques. In this paper, we introduce the HArmonization BEnchmarking Tool (HABET), an open source tool for generating harmonized images and evaluating the performance of different harmonization algorithms. To demonstrate the capabilities of HABET, we harmonize diffusion MRI images from the Adolescent Brain Cognitive Development (ABCD) Study using two different approaches, and we compare their performance., Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
- Published
- 2022
- Full Text
- View/download PDF
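A minimal location–scale site-harmonization baseline (the kind of algorithm a benchmarking tool like HABET compares; this is not HABET's own API) can be sketched as:

```python
import numpy as np

def harmonize_location_scale(values, sites):
    # Map each site's mean/std to the pooled mean/std, removing
    # first- and second-moment site effects from a scalar feature.
    values = np.asarray(values, dtype=float)
    sites = np.asarray(sites)
    pooled_mean, pooled_std = values.mean(), values.std()
    out = np.empty_like(values)
    for s in np.unique(sites):
        m = sites == s
        out[m] = (values[m] - values[m].mean()) / values[m].std()
        out[m] = out[m] * pooled_std + pooled_mean
    return out

rng = np.random.default_rng(1)
vals = np.concatenate([rng.normal(0, 1, 100), rng.normal(5, 2, 100)])
site = np.array([0] * 100 + [1] * 100)
harmonized = harmonize_location_scale(vals, site)
```

After harmonization the two sites share identical means, which is exactly the kind of before/after statistic a benchmarking tool would report.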
7. Fluid registration between lung CT and stationary chest tomosynthesis images
- Author
-
Tian, Lin, Puett, Connor, Liu, Peirong, Shen, Zhengyang, Aylward, Stephen R., Lee, Yueh Z., and Niethammer, Marc
- Subjects
FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); Computer Science - Computer Vision and Pattern Recognition; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; FOS: Electrical engineering, electronic engineering, information engineering; FOS: Physical sciences; Medical Physics (physics.med-ph); Electrical Engineering and Systems Science - Image and Video Processing; Physics - Medical Physics
- Abstract
Registration is widely used in image-guided therapy and image-guided surgery to estimate spatial correspondences of organs of interest between planning and treatment images. However, while high-quality computed tomography (CT) images are often available at planning time, limited angle acquisitions are frequently used during treatment because of radiation concerns or imaging time constraints. This requires algorithms to register CT images based on limited angle acquisitions. We, therefore, formulate a 3D/2D registration approach which infers a 3D deformation based on measured projections and digitally reconstructed radiographs of the CT. Most 3D/2D registration approaches use simple transformation models or require complex mathematical derivations to formulate the underlying optimization problem. Instead, our approach entirely relies on differentiable operations which can be combined with modern computational toolboxes supporting automatic differentiation. This then allows for rapid prototyping, integration with deep neural networks, and support of a variety of transformation models including fluid flow models. We demonstrate our approach for the registration between CT and stationary chest tomosynthesis (sDCT) images and show how it naturally leads to an iterative image reconstruction approach.
- Published
- 2022
- Full Text
- View/download PDF
8. Aladdin: Joint Atlas Building and Diffeomorphic Registration Learning with Pairwise Alignment
- Author
-
Ding, Zhipeng and Niethammer, Marc
- Subjects
FOS: Computer and information sciences; Physics::Instrumentation and Detectors; Computer Science::Computer Vision and Pattern Recognition; Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); Computer Science - Computer Vision and Pattern Recognition; FOS: Electrical engineering, electronic engineering, information engineering; Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
Atlas building and image registration are important tasks for medical image analysis. Once one or multiple atlases from an image population have been constructed, commonly (1) images are warped into an atlas space to study intra-subject or inter-subject variations or (2) a possibly probabilistic atlas is warped into image space to assign anatomical labels. Atlas estimation and nonparametric transformations are computationally expensive as they usually require numerical optimization. Additionally, previous approaches for atlas building often define similarity measures between a fuzzy atlas and each individual image, which may cause alignment difficulties because a fuzzy atlas does not exhibit clear anatomical structures in contrast to the individual images. This work explores using a convolutional neural network (CNN) to jointly predict the atlas and a stationary velocity field (SVF) parameterization for diffeomorphic image registration with respect to the atlas. Our approach does not require affine pre-registrations and utilizes pairwise image alignment losses to increase registration accuracy. We evaluate our model on 3D knee magnetic resonance images (MRI) from the OAI-ZIB dataset. Our results show that the proposed framework achieves better performance than other state-of-the-art image registration algorithms, allows for end-to-end training, and for fast inference at test time., Comment: Accepted by CVPR-2022
- Published
- 2022
- Full Text
- View/download PDF
9. Accurate Point Cloud Registration with Robust Optimal Transport
- Author
-
Shen, Zhengyang, Feydy, Jean, Liu, Peirong, Curiale, Ariel Hernán, Estepar, Ruben San Jose, Estepar, Raul San Jose, and Niethammer, Marc
- Subjects
Computational Geometry (cs.CG); FOS: Computer and information sciences; I.2.10; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; Computer Science - Computational Geometry
- Abstract
This work investigates the use of robust optimal transport (OT) for shape matching. Specifically, we show that recent OT solvers improve both optimization-based and deep learning methods for point cloud registration, boosting accuracy at an affordable computational cost. This manuscript starts with a practical overview of modern OT theory. We then provide solutions to the main difficulties in using this framework for shape matching. Finally, we showcase the performance of transport-enhanced registration models on a wide range of challenging tasks: rigid registration for partial shapes; scene flow estimation on the KITTI dataset; and nonparametric registration of lung vascular trees between inspiration and expiration. Our OT-based methods achieve state-of-the-art results on KITTI and for the challenging lung registration task, both in terms of accuracy and scalability. We also release PVT1010, a new public dataset of 1,010 pairs of lung vascular trees with densely sampled points. This dataset provides a challenging use case for point cloud registration algorithms with highly complex shapes and deformations. Our work demonstrates that robust OT enables fast pre-alignment and fine-tuning for a wide range of registration models, thereby providing a new key method for the computer vision toolbox. Our code and dataset are available online at: https://github.com/uncbiag/robot., Comment: Accepted in NeurIPS 2021
- Published
- 2021
- Full Text
- View/download PDF
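A minimal entropy-regularized OT solver conveys the core matching step used for registration; the paper relies on more robust and scalable solvers, so this plain Sinkhorn loop is only a sketch:

```python
import numpy as np

def sinkhorn_plan(x, y, eps=0.1, n_iter=500):
    # Squared-distance cost between the two point clouds.
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / eps)
    a = np.full(len(x), 1.0 / len(x))   # uniform source weights
    b = np.full(len(y), 1.0 / len(y))   # uniform target weights
    v = np.ones_like(b)
    for _ in range(n_iter):             # Sinkhorn scaling iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]  # soft correspondence matrix

rng = np.random.default_rng(0)
x = rng.uniform(size=(6, 2))
y = rng.uniform(size=(8, 2))
P = sinkhorn_plan(x, y)
matched = (P @ y) / P.sum(axis=1, keepdims=True)  # soft targets for x
```

The barycentric map `matched` gives each source point a soft target position, which is the kind of correspondence a registration model can then fit.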
10. Dissecting Supervised Contrastive Learning
- Author
-
Graf, Florian, Hofer, Christoph D., Niethammer, Marc, and Kwitt, Roland
- Subjects
FOS: Computer and information sciences; Computer Science - Machine Learning; Statistics - Machine Learning; Machine Learning (stat.ML); Machine Learning (cs.LG)
- Abstract
Minimizing cross-entropy over the softmax scores of a linear map composed with a high-capacity encoder is arguably the most popular choice for training neural networks on supervised learning tasks. However, recent works show that one can directly optimize the encoder instead, to obtain equally (or even more) discriminative representations via a supervised variant of a contrastive objective. In this work, we address the question of whether there are fundamental differences in the sought-for representation geometry in the output space of the encoder at minimal loss. Specifically, we prove, under mild assumptions, that both losses attain their minimum once the representations of each class collapse to the vertices of a regular simplex, inscribed in a hypersphere. We provide empirical evidence that this configuration is attained in practice and that reaching a close-to-optimal state typically indicates good generalization performance. Yet, the two losses show remarkably different optimization behavior. The number of iterations required to perfectly fit the data scales superlinearly with the amount of randomly flipped labels for the supervised contrastive loss. This is in contrast to the approximately linear scaling previously reported for networks trained with cross-entropy., Comment: v4 updates: - updated appendix section S1.3 - this includes fixing an oversight in the proofs (Lemma 1 missed an equality condition, which now appears in Lemma 2) - improved figure quality
- Published
- 2021
- Full Text
- View/download PDF
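The minimizing geometry described above (class representations collapsing to the vertices of a regular simplex on a hypersphere) is easy to verify numerically; this sketches the geometry, not the training procedure:

```python
import numpy as np

def simplex_vertices(num_classes):
    # Center the one-hot basis and renormalize: the rows are then the
    # vertices of a regular simplex inscribed in the unit hypersphere,
    # with pairwise cosine similarity exactly -1/(num_classes - 1).
    M = np.eye(num_classes) - np.ones((num_classes, num_classes)) / num_classes
    return M / np.linalg.norm(M, axis=1, keepdims=True)

V = simplex_vertices(5)
G = V @ V.T   # Gram matrix of the class representations
```

For 5 classes every pair of vertices has cosine similarity -1/4, the maximally separated configuration the paper identifies as the common minimizer.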
11. Robust and Generalizable Visual Representation Learning via Random Convolutions
- Author
-
Xu, Zhenlin, Liu, Deyi, Yang, Junlin, Raffel, Colin, and Niethammer, Marc
- Subjects
FOS: Computer and information sciences; Computer Science - Machine Learning; Computer Science::Computer Vision and Pattern Recognition; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION; Machine Learning (cs.LG)
- Abstract
While successful for various computer vision tasks, deep neural networks have been shown to be vulnerable to texture style shifts and small perturbations to which humans are robust. In this work, we show that the robustness of neural networks can be greatly improved through the use of random convolutions as data augmentation. Random convolutions are approximately shape-preserving and may distort local textures. Intuitively, randomized convolutions create an infinite number of new domains with similar global shapes but random local textures. Therefore, we explore using outputs of multi-scale random convolutions as new images or mixing them with the original images during training. When applying a network trained with our approach to unseen domains, our method consistently improves the performance on domain generalization benchmarks and is scalable to ImageNet. In particular, in the challenging scenario of generalizing to the sketch domain in PACS and to ImageNet-Sketch, our method outperforms state-of-the-art methods by a large margin. More interestingly, our method can benefit downstream tasks by providing a more robust pretrained visual representation., Comment: ICLR 2021. Code is available at https://github.com/wildphoton/RandConv
- Published
- 2020
- Full Text
- View/download PDF
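The augmentation described above can be sketched with a single-channel, single-scale random convolution; the released RandConv code also mixes multiple kernel scales, so this is a simplified stand-in:

```python
import numpy as np

def random_convolution(img, k=3, rng=None):
    # Filter the image with a freshly sampled random kernel: global
    # shape is roughly preserved while local texture is randomized.
    rng = np.random.default_rng(rng)
    w = rng.normal(0.0, 1.0 / k, size=(k, k))
    p = k // 2
    padded = np.pad(img, p, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * w)
    return out

rng = np.random.default_rng(0)
img = rng.uniform(size=(16, 16))
aug1 = random_convolution(img, rng=1)
aug2 = random_convolution(img, rng=2)
mixed = 0.5 * img + 0.5 * aug1   # the paper's mixing variant
```

Each draw of the kernel yields a different "texture domain" of the same image, which is the source of the domain diversity during training.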
12. Perfusion Imaging: A Data Assimilation Approach
- Author
-
Liu, Peirong, Lee, Yueh Z., Aylward, Stephen R., and Niethammer, Marc
- Subjects
FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); Computer Science - Computer Vision and Pattern Recognition; FOS: Electrical engineering, electronic engineering, information engineering; Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
Perfusion imaging (PI) is clinically used to assess strokes and brain tumors. Commonly used PI approaches based on magnetic resonance imaging (MRI) or computed tomography (CT) measure the effect of a contrast agent moving through blood vessels and into tissue. Contrast-agent-free approaches, for example based on intravoxel incoherent motion, also exist, but are so far not routinely used clinically. Contrast-based methods rely on estimating the arterial input function (AIF) to approximately model tissue perfusion, neglecting spatial dependencies; reliably estimating the AIF is also non-trivial, leading to difficulties in standardizing perfusion measures. In this work we therefore propose a data-assimilation approach (PIANO) which estimates the velocity and diffusion fields of an advection-diffusion model that best explains the contrast dynamics. PIANO accounts for spatial dependencies and neither requires estimating the AIF nor relies on a particular contrast agent bolus shape. Specifically, we propose a convenient parameterization of the estimation problem, a numerical estimation approach, and extensively evaluate PIANO. We demonstrate that PIANO can successfully resolve velocity and diffusion field ambiguities and results in sensitive measures for the assessment of stroke, comparing favorably to conventional measures of perfusion., Comment: Submitted to IEEE-TMI 2020
- Published
- 2020
- Full Text
- View/download PDF
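The forward model PIANO fits (contrast transported by advection and diffusion) can be illustrated with one explicit 1D time step; the paper estimates full 3D velocity and diffusion fields, so this toy discretization only shows the dynamics:

```python
import numpy as np

def advection_diffusion_step(c, v, D, dx=1.0, dt=0.1):
    # One explicit Euler step of c_t = -v * c_x + D * c_xx.
    c_x = np.gradient(c, dx)
    c_xx = np.gradient(c_x, dx)
    return c + dt * (-v * c_x + D * c_xx)

grid = np.arange(64, dtype=float)
c0 = np.exp(-0.5 * ((grid - 20.0) / 3.0) ** 2)   # contrast bolus
c1 = advection_diffusion_step(c0, v=0.5, D=0.2)
flat = advection_diffusion_step(np.ones(64), v=0.5, D=0.2)
```

A uniform concentration is a fixed point of the dynamics, and a positive velocity shifts the bolus downstream, which is the behavior the estimated fields must reproduce to explain observed contrast dynamics.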
13. Anatomical Data Augmentation via Fluid-based Image Registration
- Author
-
Shen, Zhengyang, Xu, Zhenlin, Olut, Sahin, and Niethammer, Marc
- Subjects
FOS: Computer and information sciences; I.2.10; Computer Science::Computer Vision and Pattern Recognition; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition
- Abstract
We introduce a fluid-based image augmentation method for medical image analysis. In contrast to existing methods, our framework generates anatomically meaningful images via interpolation from the geodesic subspace underlying given samples. Our approach consists of three steps: 1) given a source image and a set of target images, we construct a geodesic subspace using the Large Deformation Diffeomorphic Metric Mapping (LDDMM) model; 2) we sample transformations from the resulting geodesic subspace; 3) we obtain deformed images and segmentations via interpolation. Experiments on brain (LPBA) and knee (OAI) data illustrate the performance of our approach on two tasks: 1) data augmentation during training and testing for image segmentation; 2) one-shot learning for single atlas image segmentation. We demonstrate that our approach generates anatomically meaningful data and improves performance on these tasks over competing approaches. Code is available at https://github.com/uncbiag/easyreg., Comment: MICCAI 2020
- Published
- 2020
- Full Text
- View/download PDF
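The "sample a transformation, then deform by interpolation" step can be caricatured in 1D. Note the heavy simplification: LDDMM parameterizes a geodesic subspace by composing flows, whereas the convex combination of displacement fields below is only a crude stand-in for that sampling step; all names are illustrative:

```python
import numpy as np

def warp_1d(signal, disp):
    # Warp a 1D signal by a displacement field via linear interpolation.
    x = np.clip(np.arange(len(signal)) + disp, 0, len(signal) - 1)
    i0 = np.floor(x).astype(int)
    i1 = np.minimum(i0 + 1, len(signal) - 1)
    t = x - i0
    return (1 - t) * signal[i0] + t * signal[i1]

def sample_transform(disps_to_targets, rng=None):
    # Draw a random convex combination of source-to-target displacement
    # fields -- a crude stand-in for sampling the geodesic subspace.
    rng = np.random.default_rng(rng)
    w = rng.dirichlet(np.ones(len(disps_to_targets)))
    return np.tensordot(w, disps_to_targets, axes=1)

source = np.sin(np.linspace(0, np.pi, 50))      # toy source "image"
disps = np.stack([np.full(50, 2.0), np.full(50, -1.5)])
aug = warp_1d(source, sample_transform(disps, rng=0))
```

Applying the same sampled transform to the source segmentation yields the paired augmented label, which is what makes this augmentation anatomically consistent.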
14. Local Temperature Scaling for Probability Calibration
- Author
-
Ding, Zhipeng, Han, Xu, Liu, Peirong, and Niethammer, Marc
- Subjects
FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION
- Abstract
For semantic segmentation, label probabilities are often uncalibrated as they are typically only the by-product of a segmentation task. Intersection over Union (IoU) and Dice score are often used as criteria for segmentation success, while metrics related to label probabilities are not often explored. However, probability calibration approaches have been studied, which match probability outputs with experimentally observed errors. These approaches mainly focus on classification tasks, but not on semantic segmentation. Thus, we propose a learning-based calibration method that focuses on multi-label semantic segmentation. Specifically, we adopt a convolutional neural network to predict local temperature values for probability calibration. One advantage of our approach is that it does not change prediction accuracy, hence allowing for calibration as a post-processing step. Experiments on the COCO, CamVid, and LPBA40 datasets demonstrate improved calibration performance for a range of different metrics. We also demonstrate the good performance of our method for multi-atlas brain segmentation from magnetic resonance images., Comment: Accepted by ICCV-2021
- Published
- 2020
- Full Text
- View/download PDF
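The key property claimed above (temperature scaling cannot change prediction accuracy) is easy to demonstrate; the temperature map here is a random stand-in for the CNN-predicted one:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def local_temperature_scale(logits, temperature_map):
    # Divide each pixel's logit vector by its own (positive)
    # temperature before the softmax. Scaling by a positive scalar
    # never changes the per-pixel argmax, so segmentation accuracy is
    # untouched and calibration can run purely as post-processing.
    return softmax(logits / temperature_map[..., None])

rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 8, 4))           # H x W x num_labels
temps = rng.uniform(0.5, 3.0, size=(8, 8))    # stand-in temperature map
probs = local_temperature_scale(logits, temps)
```

High temperatures soften the per-pixel distribution and low ones sharpen it, which is the knob the learned temperature map turns per location.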
15. Joint and individual analysis of breast cancer histologic images and genomic covariates
- Author
-
Carmichael, Iain, Calhoun, Benjamin C., Hoadley, Katherine A., Troester, Melissa A., Geradts, Joseph, Couture, Heather D., Olsson, Linnea, Perou, Charles M., Niethammer, Marc, Hannig, Jan, and Marron, J. S.
- Subjects
Statistics and Probability; FOS: Computer and information sciences; Image and Video Processing (eess.IV); Electrical Engineering and Systems Science - Image and Video Processing; Quantitative Biology - Quantitative Methods; Statistics - Applications; Modeling and Simulation; FOS: Biological sciences; FOS: Electrical engineering, electronic engineering, information engineering; Applications (stat.AP); Statistics, Probability and Uncertainty; Quantitative Methods (q-bio.QM)
- Abstract
A key challenge in modern data analysis is understanding connections between complex and differing modalities of data. For example, two of the main approaches to the study of breast cancer are histopathology (analyzing visual characteristics of tumors) and genetics. While histopathology is the gold standard for diagnostics and there have been many recent breakthroughs in genetics, there is little overlap between these two fields. We aim to bridge this gap by developing methods based on Angle-based Joint and Individual Variation Explained (AJIVE) to directly explore similarities and differences between these two modalities. Our approach exploits Convolutional Neural Networks (CNNs) as a powerful, automatic method for image feature extraction to address some of the challenges presented by statistical analysis of histopathology image data. CNNs raise issues of interpretability that we address by developing novel methods to explore visual modes of variation captured by statistical algorithms (e.g. PCA or AJIVE) applied to CNN features. Our results provide many interpretable connections and contrasts between histopathology and genetics.
- Published
- 2019
- Full Text
- View/download PDF
16. Multiple Instance Learning for Heterogeneous Images: Training a CNN for Histopathology
- Author
-
Couture, Heather D., Marron, J. S., Perou, Charles M., Troester, Melissa A., and Niethammer, Marc
- Subjects
FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition
- Abstract
Multiple instance (MI) learning with a convolutional neural network enables end-to-end training in the presence of weak image-level labels. We propose a new method for aggregating predictions from smaller regions of the image into an image-level classification by using the quantile function. The quantile function provides a more complete description of the heterogeneity within each image, improving image-level classification. We also adapt image augmentation to the MI framework by randomly selecting cropped regions on which to apply MI aggregation during each epoch of training. This provides a mechanism to study the importance of MI learning. We validate our method on five different classification tasks for breast tumor histology and provide a visualization method for interpreting local image classifications that could lead to future insights into tumor heterogeneity.
- Published
- 2018
- Full Text
- View/download PDF
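The quantile aggregation described above is a one-liner; the patch scores here are made-up numbers for illustration:

```python
import numpy as np

def quantile_aggregate(patch_scores, q=0.75):
    # Image-level score as a quantile of patch-level predictions:
    # q=1.0 recovers max-pooling, q=0.5 the median, and intermediate
    # values capture the heterogeneity within the image.
    return float(np.quantile(patch_scores, q))

scores = np.array([0.1, 0.2, 0.2, 0.9, 0.8, 0.3])
```

Compared to max-pooling, an intermediate quantile is less sensitive to a single spuriously confident patch, which is why it better describes heterogeneous tumor images.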
17. One-Shot Learning of Scene Categories via Feature Trajectory Transfer
- Author
-
Kwitt, Roland, Hegenbart, Sebastian, and Niethammer, Marc
- Abstract
Proceedings of the OAGM & ARW Joint Workshop 2016 on Computer Vision and Robotics, 11th–13th May 2016, University of Applied Sciences Upper Austria, Wels Campus
- Published
- 2016
- Full Text
- View/download PDF
18. Fast Predictive Image Registration
- Author
-
Yang, Xiao, Kwitt, Roland, and Niethammer, Marc
- Subjects
FOS: Computer and information sciences; Computer Science::Computer Vision and Pattern Recognition; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition
- Abstract
We present a method to predict image deformations based on patch-wise image appearance. Specifically, we design a patch-based deep encoder-decoder network which learns the pixel/voxel-wise mapping between image appearance and registration parameters. Our approach can predict general deformation parameterizations, however, we focus on the large deformation diffeomorphic metric mapping (LDDMM) registration model. By predicting the LDDMM momentum-parameterization we retain the desirable theoretical properties of LDDMM, while reducing computation time by orders of magnitude: combined with patch pruning, we achieve a 1500x/66x speed up compared to GPU-based optimization for 2D/3D image registration. Our approach has better prediction accuracy than predicting deformation or velocity fields and results in diffeomorphic transformations. Additionally, we create a Bayesian probabilistic version of our network, which allows evaluation of deformation field uncertainty through Monte Carlo sampling using dropout at test time. We show that deformation uncertainty highlights areas of ambiguous deformations. We test our method on the OASIS brain image dataset in 2D and 3D.
- Published
- 2016
- Full Text
- View/download PDF
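The test-time uncertainty mechanism above (Monte Carlo sampling with dropout kept on) can be sketched with a toy stochastic predictor; the linear "network" and all names are illustrative, not the paper's model:

```python
import numpy as np

def mc_dropout_predict(stochastic_forward, x, n_samples=50):
    # Run the dropout-enabled network n_samples times: the sample mean
    # is the prediction, the sample std a per-output uncertainty.
    samples = np.stack([stochastic_forward(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 5))

def toy_net(x, p=0.5):
    # Toy "network": a linear map with dropout kept on at test time.
    mask = rng.random(x.shape) > p
    return (x * mask / (1 - p)) @ W

x = rng.normal(size=10)
mean_pred, uncertainty = mc_dropout_predict(toy_net, x, n_samples=200)
```

Outputs whose value swings strongly across dropout samples get a large std, which is how the paper flags areas of ambiguous deformation.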
19. Multi-modal Image Registration for Correlative Microscopy
- Author
-
Cao, Tian, Zach, Christopher, Modla, Shannon, Powell, Debbie, Czymmek, Kirk, and Niethammer, Marc
- Subjects
FOS: Computer and information sciences; Computer Vision and Pattern Recognition (cs.CV); Computer Science - Computer Vision and Pattern Recognition; ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION
- Abstract
Correlative microscopy is a methodology combining the functionality of light microscopy with the high resolution of electron microscopy and other microscopy technologies. Image registration for correlative microscopy is quite challenging because it is a multi-modal, multi-scale and multi-dimensional registration problem. In this report, I introduce two methods of image registration for correlative microscopy. The first method is based on fiducials (beads). I generate landmarks from the fiducials and compute the similarity transformation matrix based on three pairs of nearest corresponding landmarks. A least-squares matching process is applied afterwards to further refine the registration. The second method is inspired by the image analogies approach. I introduce the sparse representation model into image analogies. I first train representative image patches (dictionaries) for pre-registered datasets from two different modalities, and then I use the sparse coding technique to transfer a given image to a predicted image from one modality to another based on the learned dictionaries. The final image registration is between the predicted image and the original image corresponding to the given image in the different modality. The method transforms a multi-modal registration problem to a mono-modal one. I test my approaches on Transmission Electron Microscopy (TEM) and confocal microscopy images. Experimental results of the methods are also shown in this report., Comment: 24 pages
- Published
- 2014
- Full Text
- View/download PDF
20. Spatio-temporal Image Analysis for Longitudinal and Time-Series Image Data
- Author
-
Pennec, Xavier, Durrleman, Stanley, Niethammer, Marc, Gerig, Guido, and Fletcher, Tom (Inria; Institut du Cerveau et de la Moëlle Epinière (ICM); Scientific Computing and Imaging Institute (SCI Institute), University of Utah; Department of Computer Science, University of North Carolina at Chapel Hill (UNC))
- Subjects
Computer science ,Computer vision ,Artificial intelligence ,Image (mathematics) ,Series (mathematics) ,[INFO.INFO-IM]Computer Science [cs]/Medical Imaging - Abstract
International audience
- Published
- 2015
21. Using the Fourth Dimension to Distinguish Between Structures for Anisotropic Diffusion Filtering in 4D CT Perfusion Scans
- Author
-
Evert-Jan Vonken, Max A. Viergever, Theo D. Witkamp, Adriënne M. Mendrik, Mathias Prokop, Bram van Ginneken, Durrleman, Stanley, Fletcher, Tom, Gerig, Guido, Niethammer, Marc, and Pennec, Xavier
- Subjects
Anisotropic diffusion ,Anisotropic diffusion filtering ,Perfusion ,Perfusion scanning ,Radiation dose ,Structure tensor ,Time intensity ,Fourth Dimension ,Nuclear medicine ,Biomedical engineering ,Theoretical Computer Science ,Computer Science(all) - Abstract
High-resolution 4D (3D+time) cerebral CT perfusion (CTP) scans can be used to create 3D arteriograms (showing only arteries) and venograms (showing only veins). However, because of the low X-ray radiation dose used to acquire them, CTP scans are inherently noisy. In this paper, we propose a time-intensity profile similarity (TIPS) anisotropic diffusion method that uses the fourth dimension to distinguish between structures, reducing noise and enhancing arteries and veins in 4D CTP scans. The method was evaluated on 20 patient CTP scans. Two radiologists performed an observer study, assessing the arteries and veins in arteriograms and venograms derived from the filtered CTP data against those derived from the original data. In the majority of the 20 evaluated scans, the arteriograms and venograms derived from the filtered data contained more, and better visualized, small arteries and veins. In conclusion, using time-intensity profile similarity (the fourth dimension) to distinguish between structures in anisotropic diffusion filtering of 4D CT perfusion scans reduces noise while separately enhancing arteries and veins.
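The core idea of the abstract, using the similarity of time-intensity profiles rather than a single-frame gradient as the diffusion conductance, can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the function name `tips_diffusion`, the sum-of-squared-differences profile distance, the exponential conductance, and the parameters `sigma` and `step` are all assumptions, and boundary handling is naively periodic.

```python
import numpy as np

def tips_diffusion(ctp, n_iter=5, sigma=50.0, step=0.1):
    """Perona-Malik-style diffusion on a 4D CTP volume of shape (t, z, y, x).

    The conductance between neighbouring voxels is derived from the SSD of
    their full time-intensity profiles, so voxels with similar enhancement
    curves (same tissue or vessel type) average into each other while
    dissimilar ones (e.g. artery vs. vein) remain separated across all frames.
    """
    vol = ctp.astype(np.float64).copy()
    for _ in range(n_iter):
        out = vol.copy()
        for axis in (1, 2, 3):                         # spatial axes only
            fwd = np.roll(vol, -1, axis=axis) - vol    # forward difference
            bwd = np.roll(vol, 1, axis=axis) - vol     # backward difference
            # SSD of time profiles between neighbours -> one conductance
            # per spatial location, shared by every time frame
            ssd_f = (fwd ** 2).sum(axis=0, keepdims=True)
            ssd_b = (bwd ** 2).sum(axis=0, keepdims=True)
            c_f = np.exp(-ssd_f / sigma ** 2)
            c_b = np.exp(-ssd_b / sigma ** 2)
            out += step * (c_f * fwd + c_b * bwd)      # explicit update
        vol = out
    return vol
```

Because the conductance is computed from the whole profile and broadcast over the time axis, the same smoothing pattern is applied to every frame, which is what lets the fourth dimension separate structures that look identical in any single frame.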
- Published
- 2015
22. A New Framework for Analyzing Structural Volume Changes of Longitudinal Brain MRI Data: Spatio-temporal Image Analysis for Longitudinal and Time-Series Image Data
- Author
-
Aubert-Broche, Bérengère, Fonov, Vladimir S., García-Lorenzo, Daniel, Mouiha, Abderazzak, Guizard, Nicolas, Coupé, Pierrick, Eskildsen, Simon Fristed, Collins, D. Louis, Durrleman, Stanley, Fletcher, Tom, Gerig, Guido, and Niethammer, Marc
- Abstract
Cross-sectional analysis of longitudinal MRI data may be sub-optimal because each dataset is analyzed independently. In this study, we evaluate how much variability can be reduced by analyzing structural volume changes of longitudinal data with a longitudinal analysis. We propose a two-part pipeline consisting of longitudinal registration and longitudinal classification. The longitudinal registration step includes the creation of subject-specific linear and non-linear templates that are then registered to a population template. The longitudinal classification step is a 4D EM algorithm that uses a priori classes computed by averaging the tissue classes of all time points obtained cross-sectionally. To study the impact of these two steps, we apply the framework completely (LL method: longitudinal registration and longitudinal classification) and partially (LC method: longitudinal registration and cross-sectional classification), and compare both to a standard cross-sectional framework (CC method: cross-sectional registration and cross-sectional classification). The three methods are applied to (1) a scan-rescan database, to analyze reliability, and (2) the NIH pediatric population, to compare GM and WM growth trajectories evaluated with a linear mixed model. The LL method, and to a lesser extent the LC method, significantly reduces the variability of the measurements in the scan-rescan study and yields the best-fitting GM and WM growth models on the NIH pediatric database. The results confirm that both steps of the longitudinal framework reduce variability and improve accuracy compared to the cross-sectional framework, with longitudinal classification having the greatest impact.
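The 4D EM step described above can be illustrated with a toy sketch: per-voxel intensities over T time points, a spatial tissue prior shared across time (standing in for the averaged cross-sectional classes), and per-class, per-time-point Gaussian parameters updated by EM. This is an illustrative simplification, not the paper's pipeline; the function name `longitudinal_em`, the independence assumptions, and all parameter names are ours.

```python
import numpy as np

def longitudinal_em(images, priors, n_iter=20, eps=1e-8):
    """Toy 4D EM tissue classification.

    images: (T, N) array of intensities for N voxels over T time points.
    priors: (K, N) spatial tissue priors (columns sum to 1), shared across
            time, analogous to averaging per-time-point classifications.
    Returns (K, N) posterior responsibilities tied across all time points.
    """
    T, N = images.shape
    K = priors.shape[0]
    resp = priors / (priors.sum(0, keepdims=True) + eps)  # init from priors
    mu = np.zeros((K, T))
    var = np.ones((K, T))
    for _ in range(n_iter):
        # M-step: per-class mean and variance at each time point
        w = resp.sum(1) + eps                              # (K,)
        for k in range(K):
            mu[k] = images @ resp[k] / w[k]
            var[k] = ((images - mu[k][:, None]) ** 2 @ resp[k]) / w[k] + eps
        # E-step: the shared spatial prior is combined with the product of
        # Gaussian likelihoods over all time points (in the log domain)
        log_post = np.log(priors + eps)
        for k in range(K):
            log_post[k] += (-0.5 * np.log(2 * np.pi * var[k])[:, None]
                            - 0.5 * (images - mu[k][:, None]) ** 2
                            / var[k][:, None]).sum(0)
        log_post -= log_post.max(0, keepdims=True)         # stabilize exp
        resp = np.exp(log_post)
        resp /= resp.sum(0, keepdims=True)
    return resp
```

Tying the posterior across time points is what makes the classification "4D": each voxel receives a single tissue label supported by evidence from all scans, rather than one independent label per time point.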
- Published
- 2012