266 results for "Kreshuk A"
Search Results
252. Content-Aware Image Restoration Techniques without Ground Truth and Novel Ideas to Image Reconstruction
- Author
-
Buchholz, Tim-Oliver, Gumhold, Stefan, Kreshuk, Anna, Jug, Florian, Technische Universität Dresden, and IMPRS - CellDivoSys
- Subjects
Entrauschen (denoising), Bildrestauration (image restoration), Bildrekonstruktion (image reconstruction), Maschinelles Lernen (machine learning), denoising, image reconstruction, image restoration, machine learning, deep learning, ddc:006
- Abstract
In this thesis I use state-of-the-art (SOTA) image denoising methods to denoise electron microscopy (EM) data. I then present Noise2Void, a deep-learning-based self-supervised image denoising approach that is trained on single noisy observations. Finally, I approach the missing wedge problem in tomography and introduce a novel image encoding based on the Fourier transform, which I use to predict missing Fourier coefficients directly in Fourier space with the Fourier Image Transformer (FIT). In the next paragraphs I briefly summarize the individual contributions. Electron microscopy is the go-to method for high-resolution images in biological research. Modern scanning electron microscopy (SEM) setups are used to obtain neural connectivity maps, allowing us to identify individual synapses. However, slow scanning speeds are required to obtain SEM images of sufficient quality. In (Weigert et al. 2018) the authors show, for fluorescence microscopy, how pairs of low- and high-quality images can be obtained from biological samples and use them to train content-aware image restoration (CARE) networks. Once such a network is trained, it can be applied to noisy data to restore high-quality images. With SEM-CARE I show how this approach can be applied directly to SEM data, allowing us to scan the samples faster and resulting in 40- to 50-fold imaging speedups for SEM imaging. In structural biology, cryo transmission electron microscopy (cryo TEM) is used to resolve protein structures and describe molecular interactions. However, missing contrast agents as well as beam-induced sample damage (Knapek and Dubochet 1980) prevent the acquisition of high-quality projection images. Hence, reconstructed tomograms suffer from low signal-to-noise ratio (SNR) and low contrast, which makes post-processing of such data difficult; it often has to be done manually.
To facilitate downstream analysis and manual browsing of cryo tomograms, I present cryoCARE, a Noise2Noise-based (Lehtinen et al. 2018) denoising method that restores high-contrast, low-noise tomograms from sparse-view, low-dose tilt-series. An implementation of cryoCARE is publicly available as a Scipion (de la Rosa-Trevín et al. 2016) plugin. Next, I discuss the problem of self-supervised image denoising. With cryoCARE I exploited the fact that modern cryo TEM cameras acquire multiple low-dose images, so the Noise2Noise (Lehtinen et al. 2018) training paradigm can be applied. However, acquiring multiple noisy observations is not always possible, e.g. in live imaging, with older cryo TEM cameras, or simply for lack of access to the imaging system used. In such cases we have to fall back on self-supervised denoising methods, and with Noise2Void I present the first self-supervised neural-network-based image denoising approach. Noise2Void is also available as an open-source Python package and as a one-click solution in Fiji (Schindelin et al. 2012). In the last part of this thesis I present the Fourier Image Transformer (FIT), a novel approach to image reconstruction with Transformer networks. I develop a novel 1D image encoding based on the Fourier transform, in which each prefix encodes the whole image at reduced resolution; I call this the Fourier Domain Encoding (FDE). I use FIT with FDEs and present a proof of concept for super-resolution and tomographic reconstruction with missing wedge correction. The missing wedge artefacts in tomographic imaging originate in sparse-view imaging. Sparse-view imaging keeps the total exposure of the imaged sample to a minimum by acquiring only a limited number of projection images. However, tomographic reconstructions from sparse-view acquisitions are affected by missing wedge artefacts, characterized by missing wedges in Fourier space and visible as streaking artefacts in real space.
I show that FITs can be applied to tomographic reconstruction and that they fill in missing Fourier coefficients. Hence, FIT for tomographic reconstruction solves the missing wedge problem at its source.
Contents:
- Summary
- Acknowledgements
- 1 Introduction: 1.1 Scanning Electron Microscopy; 1.2 Cryo Transmission Electron Microscopy (1.2.1 Single Particle Analysis; 1.2.2 Cryo Tomography); 1.3 Tomographic Reconstruction; 1.4 Overview and Contributions
- 2 Denoising in Electron Microscopy: 2.1 Image Denoising; 2.2 Supervised Image Restoration (2.2.1 Training and Validation Loss; 2.2.2 Neural Network Architectures); 2.3 SEM-CARE (2.3.1 SEM-CARE Experiments; 2.3.2 SEM-CARE Results); 2.4 Noise2Noise; 2.5 cryoCARE (2.5.1 Restoration of cryo TEM Projections; 2.5.2 Restoration of cryo TEM Tomograms; 2.5.3 Automated Downstream Analysis); 2.6 Implementations and Availability; 2.7 Discussion (2.7.1 Tasks Facilitated through cryoCARE)
- 3 Noise2Void: Self-Supervised Denoising: 3.1 Probabilistic Image Formation; 3.2 Receptive Field; 3.3 Noise2Void Training (3.3.1 Implementation Details); 3.4 Experiments (3.4.1 Natural Images; 3.4.2 Light Microscopy Data; 3.4.3 Electron Microscopy Data; 3.4.4 Errors and Limitations); 3.5 Conclusion and Followup Work
- 4 Fourier Image Transformer: 4.1 Transformers (4.1.1 Attention Is All You Need; 4.1.2 Fast-Transformers; 4.1.3 Transformers in Computer Vision); 4.2 Methods (4.2.1 Fourier Domain Encodings (FDEs); 4.2.2 Fourier Coefficient Loss); 4.3 FIT for Super-Resolution (4.3.1 Super-Resolution Data; 4.3.2 Super-Resolution Experiments); 4.4 FIT for Tomography (4.4.1 Computed Tomography Data; 4.4.2 Computed Tomography Experiments); 4.5 Discussion
- 5 Conclusions and Outlook
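The core idea of the Fourier Domain Encoding described in the abstract above — every prefix of a frequency-ordered coefficient sequence encodes the whole signal at reduced resolution — can be illustrated with a small stdlib-only 1D sketch. This is a hypothetical toy, not the thesis implementation (which works on 2D images and feeds the sequence to a Transformer): order DFT coefficients from low to high frequency, keep only a prefix, and invert.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real-valued list."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * m * n / N) for n in range(N))
            for m in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    N = len(X)
    return [sum(X[m] * cmath.exp(2j * cmath.pi * m * n / N) for m in range(N)).real / N
            for n in range(N)]

def prefix_reconstruction(x, k):
    """Keep only the k lowest-frequency coefficients (a 'prefix' in
    frequency order) and invert -- a low-resolution view of the signal."""
    N = len(x)
    X = dft(x)
    order = sorted(range(N), key=lambda m: min(m, N - m))  # DC first, then +/-1 Hz, ...
    keep = set(order[:k])
    X_trunc = [X[m] if m in keep else 0.0 for m in range(N)]
    return idft(X_trunc)

signal = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
errors = []
for k in (1, 3, 5, 8):
    rec = prefix_reconstruction(signal, k)
    errors.append(sum((a - b) ** 2 for a, b in zip(signal, rec)))
# By Parseval's theorem the error equals the energy of the dropped
# coefficients, so it shrinks monotonically as the prefix grows.
```

Growing the prefix strictly reduces reconstruction error here, which is exactly the property that lets a causal Transformer predict ever-finer detail coefficient by coefficient.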
- Published
- 2021
253. From Images to Graphs: Machine Learning Methods for the Detection of Microtubules and Synapses in Large-Scale Electron Microscopy Data
- Author
-
Buhmann, Julia, Hahnloser, Richard, Cook, Matthew, Funke, Jan, and Kreshuk, Anna
- Subjects
Data processing, computer science, machine learning, computer vision, connectomics, Electron microscopy, image segmentation, Neuroscience, ddc:004
- Abstract
Brain wiring diagrams showing every individual neuron and all the synaptic connections are becoming an important resource for neuroscientists. However, only a few such high-resolution wiring diagrams have been reconstructed so far. Emerging high-throughput electron microscopy (EM) technologies have started to fill this critical gap. EM images of neural tissue with sufficiently high resolution at large scale allow the extraction of the wiring diagram. The sheer size of acquired datasets precludes manual analysis and makes the development of computer-based automatic methods necessary. In this work, we propose and test new methods to address the problem of automatic identification of microtubules and synaptic partners in large-scale EM image datasets, with the ultimate goal of aiding circuit reconstruction. In the first part of the thesis, we introduce a method for the automatic reconstruction of microtubules. Microtubules follow the backbone of a neuron and can be an important source of constraints for the reconstruction of neurons. We formulate an energy-based model on short candidate segments of microtubules found by a local classifier. We enumerate and score possible links between candidates in order to find a cost-minimal subset of candidates and links by solving an integer linear program. The model provides a way to incorporate biological priors, including both hard constraints (e.g. microtubules are topologically chains of links) and soft constraints (e.g. high curvature is unlikely). We test our method on a challenging EM dataset of Drosophila neural tissue and show that our model reliably tracks microtubules spanning many image sections. In the second part of the thesis, we propose a method for the prediction of synaptic partners, which is, along with the segmentation of neurons, required for circuit reconstruction.
For the prediction of synaptic partners, we propose a 3D U-Net architecture to directly identify pairs of voxels that are pre- and post-synaptic to each other. To that end, we formulate the problem of synaptic partner identification as a classification problem on long-range edges between voxels, encoding both the presence of a synaptic pair and its direction. This formulation allows us to learn directly from synaptic point annotations instead of the more labor-intensive voxel-based synaptic cleft or vesicle annotations. We evaluate our method on the MICCAI 2016 CREMI challenge and improve over the current state of the art, producing 3% fewer errors than the next best method. The third and last part of the thesis is also dedicated to the reconstruction of synaptic partners. Compared to the previously introduced method, we propose to directly predict post-synaptic locations and the direction to their pre-synaptic partner. We use our method to extract 244 million putative synaptic partners in the fifty-teravoxel full adult fly brain (FAFB) EM dataset and evaluate its accuracy on 146,643 synapses of 702 neurons with a total cable length of 312 mm in four different brain regions. We find that the predicted synaptic connections can be used together with a neuron segmentation to infer a connectivity graph with high accuracy. Our synaptic partner predictions for the FAFB dataset are publicly available, together with a query library allowing automatic retrieval of up- and downstream neurons. The three methods described in this work have produced state-of-the-art results on two very challenging, highly anisotropic EM datasets. The 244 million putative synaptic partners in the FAFB dataset will be a valuable resource for neuroscientists. While the fully automatic extraction of large wiring diagrams is still impeded by high accuracy requirements, this work is an important contribution to the acceleration of reconstruction efforts.
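The cost-minimal selection of candidates and links described above is solved in the thesis with an integer linear program; the combinatorial core can be sketched on a hypothetical toy instance using brute-force enumeration in place of an ILP solver (segment names, costs, and the single chain constraint are illustrative assumptions, not the paper's full model):

```python
from itertools import combinations

# Toy instance: candidate microtubule segments A, B, C and scored links
# between them (negative cost = likely continuation, positive = unlikely).
links = [("A", "B", -2.0), ("B", "C", -1.5), ("A", "C", 0.5)]

def is_chain(selected):
    """Hard constraint: microtubules are chains, so every candidate may
    have at most one incoming and at most one outgoing link."""
    outs = [u for u, v, c in selected]
    ins = [v for u, v, c in selected]
    return len(outs) == len(set(outs)) and len(ins) == len(set(ins))

def best_links(links):
    """Enumerate all link subsets and keep the feasible one of minimal
    total cost (an ILP solver does this without exhaustive enumeration)."""
    best, best_cost = [], 0.0
    for r in range(1, len(links) + 1):
        for subset in combinations(links, r):
            if is_chain(subset):
                cost = sum(c for u, v, c in subset)
                if cost < best_cost:
                    best, best_cost = list(subset), cost
    return best, best_cost

selected, cost = best_links(links)
# The optimum picks the chain A -> B -> C and rejects the weak A -> C shortcut.
```

On larger instances the same objective and constraints are handed to an ILP solver; the brute force here only serves to make the "cost-minimal subset under chain constraints" formulation concrete.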
- Published
- 2020
- Full Text
- View/download PDF
254. How to Build the Virtual Cell with Artificial Intelligence: Priorities and Opportunities.
- Author
-
Bunne C, Roohani Y, Rosen Y, Gupta A, Zhang X, Roed M, Alexandrov T, AlQuraishi M, Brennan P, Burkhardt DB, Califano A, Cool J, Dernburg AF, Ewing K, Fox EB, Haury M, Herr AE, Horvitz E, Hsu PD, Jain V, Johnson GR, Kalil T, Kelley DR, Kelley SO, Kreshuk A, Mitchison T, Otte S, Shendure J, Sofroniew NJ, Theis F, Theodoris CV, Upadhyayula S, Valer M, Wang B, Xing E, Yeung-Levy S, Zitnik M, Karaletsos T, Regev A, Lundberg E, Leskovec J, and Quake SR
- Abstract
The cell is arguably the most fundamental unit of life and is central to understanding biology. Accurate modeling of cells is important for this understanding as well as for determining the root causes of disease. Recent advances in artificial intelligence (AI), combined with the ability to generate large-scale experimental data, present novel opportunities to model cells. Here we propose a vision of leveraging advances in AI to construct virtual cells, high-fidelity simulations of cells and cellular systems under different conditions that are directly learned from biological data across measurements and scales. We discuss desired capabilities of such AI Virtual Cells, including generating universal representations of biological entities across scales, and facilitating interpretable in silico experiments to predict and understand their behavior using Virtual Instruments. We further address the challenges, opportunities and requirements to realize this vision, including data needs, evaluation strategies, and community standards and engagement to ensure biological accuracy and broad utility. We envision a future where AI Virtual Cells help identify new drug targets, predict cellular responses to perturbations, and scale hypothesis exploration. With open science collaborations across the biomedical ecosystem that includes academia, philanthropy, and the biopharma and AI industries, a comprehensive predictive understanding of cell mechanisms and interactions has come within reach., Competing Interests: C.B. and A.R. are employees of Genentech, a member of the Roche Group. A.R. has equity in Roche. A.R. was a co-founder and equity holder of Celsius Therapeutics, and is an equity holder in Immunitas. Until July 31, 2020 A.R. was an S.A.B. member of ThermoFisher Scientific, Syros Pharmaceuticals, Neogene Therapeutics and Asimov. A.R.
is a named inventor on multiple filed patents related to single cell and spatial genomics, including for scRNA-seq, spatial transcriptomics, Perturb-Seq, compressed experiments, and PerturbView. E.L. is an advisor for the Chan-Zuckerberg Initiative Foundation. N.J.S. is an employee of EvolutionaryScale, PBC.
- Published
- 2024
255. Understanding metric-related pitfalls in image analysis validation.
- Author
-
Reinke A, Tizabi MD, Baumgartner M, Eisenmann M, Heckmann-Nötzel D, Kavur AE, Rädsch T, Sudre CH, Acion L, Antonelli M, Arbel T, Bakas S, Benis A, Blaschko M, Buettner F, Cardoso MJ, Cheplygina V, Chen J, Christodoulou E, Cimini BA, Collins GS, Farahani K, Ferrer L, Galdran A, van Ginneken B, Glocker B, Godau P, Haase R, Hashimoto DA, Hoffman MM, Huisman M, Isensee F, Jannin P, Kahn CE, Kainmueller D, Kainz B, Karargyris A, Karthikesalingam A, Kenngott H, Kleesiek J, Kofler F, Kooi T, Kopp-Schneider A, Kozubek M, Kreshuk A, Kurc T, Landman BA, Litjens G, Madani A, Maier-Hein K, Martel AL, Mattson P, Meijering E, Menze B, Moons KGM, Müller H, Nichyporuk B, Nickel F, Petersen J, Rafelski SM, Rajpoot N, Reyes M, Riegler MA, Rieke N, Saez-Rodriguez J, Sánchez CI, Shetty S, van Smeden M, Summers RM, Taha AA, Tiulpin A, Tsaftaris SA, Calster BV, Varoquaux G, Wiesenfarth M, Yaniv ZR, Jäger PF, and Maier-Hein L
- Abstract
Validation metrics are key for the reliable tracking of scientific progress and for bridging the current chasm between artificial intelligence (AI) research and its translation into practice. However, increasing evidence shows that particularly in image analysis, metrics are often chosen inadequately in relation to the underlying research problem. This could be attributed to a lack of accessibility of metric-related knowledge: While taking into account the individual strengths, weaknesses, and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multi-stage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides the first reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Focusing on biomedical image analysis but with the potential of transfer to other fields, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. To facilitate comprehension, illustrations and specific examples accompany each pitfall. As a structured body of information accessible to researchers of all levels of expertise, this work enhances global comprehension of a key topic in image analysis validation.
- Published
- 2024
256. Deep learning enables fast and dense single-molecule localization with high accuracy.
- Author
-
Speiser A, Müller LR, Hoess P, Matti U, Obara CJ, Legant WR, Kreshuk A, Macke JH, Ries J, and Turaga SC
- Subjects
- Animals, COS Cells, Chlorocebus aethiops, Databases, Factual, Software, Deep Learning, Image Processing, Computer-Assisted methods, Single Molecule Imaging methods
- Abstract
Single-molecule localization microscopy (SMLM) has had remarkable success in imaging cellular structures with nanometer resolution, but standard analysis algorithms require sparse emitters, which limits imaging speed and labeling density. Here, we overcome this major limitation using deep learning. We developed DECODE (deep context dependent), a computational tool that can localize single emitters at high density in three dimensions with highest accuracy for a large range of imaging modalities and conditions. In a public software benchmark competition, it outperformed all other fitters on 12 out of 12 datasets when comparing both detection accuracy and localization error, often by a substantial margin. DECODE allowed us to acquire fast dynamic live-cell SMLM data with reduced light exposure and to image microtubules at ultra-high labeling density. Packaged for simple installation and use, DECODE will enable many laboratories to reduce imaging times and increase localization density in SMLM., (© 2021. The Author(s), under exclusive licence to Springer Nature America, Inc.)
- Published
- 2021
- Full Text
- View/download PDF
257. Universal autofocus for quantitative volumetric microscopy of whole mouse brains.
- Author
-
Silvestri L, Müllenbroich MC, Costantini I, Di Giovanna AP, Mazzamuto G, Franceschini A, Kutra D, Kreshuk A, Checcucci C, Toresano LO, Frasconi P, Sacconi L, and Pavone FS
- Subjects
- Animals, Male, Mice, Brain anatomy & histology, Image Processing, Computer-Assisted methods, Imaging, Three-Dimensional methods, Microglia cytology, Microscopy, Fluorescence methods
- Abstract
Unbiased quantitative analysis of macroscopic biological samples demands fast imaging systems capable of maintaining high resolution across large volumes. Here we introduce RAPID (rapid autofocusing via pupil-split image phase detection), a real-time autofocus method applicable in every widefield-based microscope. RAPID-enabled light-sheet microscopy reliably reconstructs intact, cleared mouse brains with subcellular resolution, and allowed us to characterize the three-dimensional (3D) spatial clustering of somatostatin-positive neurons in the whole encephalon, including densely labeled areas. Furthermore, it enabled 3D morphological analysis of microglia across the entire brain. Beyond light-sheet microscopy, we demonstrate that RAPID maintains high image quality in various settings, from in vivo fluorescence imaging to 3D tracking of fast-moving organisms. RAPID thus provides a flexible autofocus solution that is suitable for traditional automated microscopy tasks as well as for quantitative analysis of large biological specimens., (© 2021. The Author(s), under exclusive licence to Springer Nature America, Inc.)
- Published
- 2021
- Full Text
- View/download PDF
258. Deep learning-enhanced light-field imaging with continuous validation.
- Author
-
Wagner N, Beuttenmueller F, Norlin N, Gierten J, Boffi JC, Wittbrodt J, Weigert M, Hufnagel L, Prevedel R, and Kreshuk A
- Subjects
- Animals, Biomechanical Phenomena, Calcium chemistry, Larva physiology, Oryzias physiology, Reproducibility of Results, Zebrafish physiology, Deep Learning, Heart physiology, Image Processing, Computer-Assisted methods, Microscopy methods
- Abstract
Visualizing dynamic processes over large, three-dimensional fields of view at high speed is essential for many applications in the life sciences. Light-field microscopy (LFM) has emerged as a tool for fast volumetric image acquisition, but its effective throughput and widespread use in biology has been hampered by a computationally demanding and artifact-prone image reconstruction process. Here, we present a framework for artificial intelligence-enhanced microscopy, integrating a hybrid light-field light-sheet microscope and deep learning-based volume reconstruction. In our approach, concomitantly acquired, high-resolution two-dimensional light-sheet images continuously serve as training data and validation for the convolutional neural network reconstructing the raw LFM data during extended volumetric time-lapse imaging experiments. Our network delivers high-quality three-dimensional reconstructions at video-rate throughput, which can be further refined based on the high-resolution light-sheet images. We demonstrate the capabilities of our approach by imaging medaka heart dynamics and zebrafish neural activity with volumetric imaging rates up to 100 Hz.
- Published
- 2021
- Full Text
- View/download PDF
259. Developing open-source software for bioimage analysis: opportunities and challenges.
- Author
-
Levet F, Carpenter AE, Eliceiri KW, Kreshuk A, Bankhead P, and Haase R
- Subjects
- Image Processing, Computer-Assisted, Software
- Abstract
Fast-paced innovations in imaging have resulted in single systems producing exponential amounts of data to be analyzed. Computational methods developed in computer science labs have proven to be crucial for analyzing these data in an unbiased and efficient manner, reaching a prominent role in most microscopy studies. Still, their use usually requires expertise in bioimage analysis, and their accessibility for life scientists has therefore become a bottleneck. Open-source software for bioimage analysis has developed to disseminate these computational methods to a wider audience, and to life scientists in particular. In recent years, the influence of many open-source tools has grown tremendously, helping tens of thousands of life scientists in the process. As creators of successful open-source bioimage analysis software, we here discuss the motivations that can initiate development of a new tool, the common challenges faced, and the characteristics required for achieving success., Competing Interests: No competing interests were disclosed., (Copyright: © 2021 Levet F et al.)
- Published
- 2021
- Full Text
- View/download PDF
260. Microscopy-based assay for semi-quantitative detection of SARS-CoV-2 specific antibodies in human sera: A semi-quantitative, high throughput, microscopy-based assay expands existing approaches to measure SARS-CoV-2 specific antibody levels in human sera.
- Author
-
Pape C, Remme R, Wolny A, Olberg S, Wolf S, Cerrone L, Cortese M, Klaus S, Lucic B, Ullrich S, Anders-Össwein M, Wolf S, Cerikan B, Neufeldt CJ, Ganter M, Schnitzler P, Merle U, Lusic M, Boulant S, Stanifer M, Bartenschlager R, Hamprecht FA, Kreshuk A, Tischer C, Kräusslich HG, Müller B, and Laketa V
- Subjects
- COVID-19 immunology, COVID-19 virology, COVID-19 Testing methods, Fluorescent Antibody Technique, High-Throughput Screening Assays, Humans, Image Processing, Computer-Assisted statistics & numerical data, Immune Sera chemistry, Machine Learning, Sensitivity and Specificity, Antibodies, Viral blood, COVID-19 diagnosis, Immunoassay, Immunoglobulin A blood, Immunoglobulin G blood, Immunoglobulin M blood, Microscopy methods, SARS-CoV-2 immunology
- Abstract
Emergence of the novel pathogenic coronavirus SARS-CoV-2 and its rapid pandemic spread presents challenges that demand immediate attention. Here, we describe the development of a semi-quantitative high-content microscopy-based assay for detection of three major classes (IgG, IgA, and IgM) of SARS-CoV-2 specific antibodies in human samples. The ability to detect antibodies against the entire viral proteome, together with a robust semi-automated image analysis workflow, resulted in a specific, sensitive and unbiased assay that complements the portfolio of SARS-CoV-2 serological assays. Sensitive, specific and quantitative serological assays are urgently needed for a better understanding of the humoral immune response against the virus as a basis for developing public health strategies to control viral spread. The procedure described here has been used for clinical studies and provides a general framework for the application of quantitative high-throughput microscopy to rapidly develop serological assays for emerging virus infections., (© 2020 The Authors. BioEssays published by Wiley Periodicals LLC.)
- Published
- 2021
- Full Text
- View/download PDF
261. ilastik: interactive machine learning for (bio)image analysis.
- Author
-
Berg S, Kutra D, Kroeger T, Straehle CN, Kausler BX, Haubold C, Schiegg M, Ales J, Beier T, Rudy M, Eren K, Cervantes JI, Xu B, Beuttenmueller F, Wolny A, Zhang C, Koethe U, Hamprecht FA, and Kreshuk A
- Subjects
- Aryl Hydrocarbon Receptor Nuclear Translocator physiology, Cell Proliferation, Collagen metabolism, Endoplasmic Reticulum ultrastructure, Humans, Image Processing, Computer-Assisted methods, Machine Learning
- Abstract
We present ilastik, an easy-to-use interactive tool that brings machine-learning-based (bio)image analysis to end users without substantial computational expertise. It contains pre-defined workflows for image segmentation, object classification, counting and tracking. Users adapt the workflows to the problem at hand by interactively providing sparse training annotations for a nonlinear classifier. ilastik can process data in up to five dimensions (3D, time and number of channels). Its computational back end runs operations on-demand wherever possible, allowing for interactive prediction on data larger than RAM. Once the classifiers are trained, ilastik workflows can be applied to new data from the command line without further user interaction. We describe all ilastik workflows in detail, including three case studies and a discussion on the expected performance.
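The interactive paradigm summarized above — sparse user annotations, per-pixel features, a classifier that fills in the rest — can be mimicked in a stdlib-only sketch. Everything here is illustrative: ilastik computes a rich filter bank and trains a random forest, whereas this toy uses two hand-rolled features and a nearest-centroid rule.

```python
def features(img, r, c):
    """Per-pixel features: raw intensity plus a 3x3 local mean
    (a crude stand-in for ilastik's filter bank)."""
    H, W = len(img), len(img[0])
    neigh = [img[i][j]
             for i in range(max(0, r - 1), min(H, r + 2))
             for j in range(max(0, c - 1), min(W, c + 2))]
    return (img[r][c], sum(neigh) / len(neigh))

def train_and_predict(img, annotations):
    """annotations: {(row, col): label} -- the sparse 'brush strokes'.
    Nearest-centroid stands in for the nonlinear classifier."""
    groups = {}
    for (r, c), lab in annotations.items():
        groups.setdefault(lab, []).append(features(img, r, c))
    centroids = {lab: tuple(sum(f[i] for f in fs) / len(fs) for i in range(2))
                 for lab, fs in groups.items()}
    def classify(f):
        return min(centroids,
                   key=lambda lab: sum((a - b) ** 2 for a, b in zip(f, centroids[lab])))
    return [[classify(features(img, r, c)) for c in range(len(img[0]))]
            for r in range(len(img))]

# Toy 5x5 image: dim background (0.1) with a bright 3x3 blob (0.9),
# labeled with just two "clicks" -- one per class.
img = [[0.9 if 1 <= r <= 3 and 1 <= c <= 3 else 0.1 for c in range(5)] for r in range(5)]
labels = {(0, 0): "bg", (2, 2): "fg"}
pred = train_and_predict(img, labels)
```

Two annotated pixels suffice to segment the whole toy image, which is the essence of the interactive workflow: the user only corrects where the prediction is wrong, and the classifier is retrained on the fly.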
- Published
- 2019
- Full Text
- View/download PDF
262. Machine Learning: Advanced Image Segmentation Using ilastik.
- Author
-
Kreshuk A and Zhang C
- Subjects
- Animals, Datasets as Topic, Mice, Microscopy, Electron methods, Mitochondria, Somatosensory Cortex cytology, Somatosensory Cortex diagnostic imaging, Workflow, Image Processing, Computer-Assisted methods, Machine Learning, Software
- Abstract
Segmentation is one of the most ubiquitous problems in biological image analysis. Here we present a machine learning-based solution to it as implemented in the open source ilastik toolkit. We give a broad description of the underlying theory and demonstrate two workflows: Pixel Classification and Autocontext. We illustrate their use on a challenging problem in electron microscopy image segmentation. After following this walk-through, we expect the readers to be able to apply the necessary steps to their own data and segment their images by either workflow.
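The Autocontext workflow mentioned above feeds the predictions of a first classifier back in as extra features for a second one. A stdlib-only 1D toy (the image, annotations, and nearest-centroid classifier are illustrative assumptions, not ilastik's implementation) shows why this helps: an ambiguous pixel that stage 1 misclassifies is corrected once the second stage sees its neighborhood's predictions.

```python
def nearest(f, centroids):
    """Assign feature vector f to the label with the closest centroid."""
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2 for a, b in zip(f, centroids[lab])))

def fit_centroids(feats, annotations):
    """annotations: {pixel_index: label}; average the annotated features."""
    groups = {}
    for i, lab in annotations.items():
        groups.setdefault(lab, []).append(feats[i])
    return {lab: tuple(sum(f[k] for f in fs) / len(fs) for k in range(len(fs[0])))
            for lab, fs in groups.items()}

def smooth(vals, i):
    """Mean over the 3-pixel neighborhood (clipped at the borders)."""
    w = vals[max(0, i - 1):i + 2]
    return sum(w) / len(w)

# Toy 1D image: background 0.1, one noisy background pixel (0.55, index 2),
# a true foreground region (0.9) at indices 5..7; two sparse annotations.
img = [0.1, 0.1, 0.55, 0.1, 0.1, 0.9, 0.9, 0.9]
annotations = {0: 0, 6: 1}  # 0 = background, 1 = foreground

# Stage 1: intensity only. The noisy pixel lands closer to the
# foreground centroid and is misclassified.
f1 = [(v,) for v in img]
p1 = [nearest(f, fit_centroids(f1, annotations)) for f in f1]

# Stage 2 (Autocontext): append the local mean of stage-1 predictions
# as an extra feature, then classify again.
f2 = [(img[i], smooth(p1, i)) for i in range(len(img))]
c2 = fit_centroids(f2, annotations)
p2 = [nearest(f, c2) for f in f2]
```

The context feature pulls the isolated noisy pixel back toward the background class while leaving the coherent foreground region intact, which is the effect Autocontext exploits at scale.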
- Published
- 2019
- Full Text
- View/download PDF
263. Three-dimensional immersive virtual reality for studying cellular compartments in 3D models from EM preparations of neural tissues.
- Author
-
Calì C, Baghabra J, Boges DJ, Holst GR, Kreshuk A, Hamprecht FA, Srinivasan M, Lehväslaiho H, and Magistretti PJ
- Subjects
- Animals, Astrocytes metabolism, Astrocytes ultrastructure, CA1 Region, Hippocampal metabolism, CA1 Region, Hippocampal ultrastructure, Epoxy Resins, Glycogen metabolism, Neurons metabolism, Neurons ultrastructure, Pattern Recognition, Automated methods, Rats, Sprague-Dawley, Tissue Embedding, Imaging, Three-Dimensional methods, Microscopy, Electron, Scanning methods, Models, Neurological, User-Computer Interface
- Abstract
Advances in the application of electron microscopy (EM) to serial imaging are opening doors to new ways of analyzing cellular structure. New and improved algorithms and workflows for manual and semiautomated segmentation allow us to observe the spatial arrangement of the smallest cellular features with unprecedented detail in full three-dimensions. From larger samples, higher complexity models can be generated; however, they pose new challenges to data management and analysis. Here we review some currently available solutions and present our approach in detail. We use the fully immersive virtual reality (VR) environment CAVE (cave automatic virtual environment), a room in which we are able to project a cellular reconstruction and visualize in 3D, to step into a world created with Blender, a free, fully customizable 3D modeling software with NeuroMorph plug-ins for visualization and analysis of EM preparations of brain tissue. Our workflow allows for full and fast reconstructions of volumes of brain neuropil using ilastik, a software tool for semiautomated segmentation of EM stacks. With this visualization environment, we can walk into the model containing neuronal and astrocytic processes to study the spatial distribution of glycogen granules, a major energy source that is selectively stored in astrocytes. The use of CAVE was key to the observation of a nonrandom distribution of glycogen, and led us to develop tools to quantitatively analyze glycogen clustering and proximity to other subcellular features., (© 2015 Wiley Periodicals, Inc.)
- Published
- 2016
- Full Text
- View/download PDF
264. Segmenting and Tracking Multiple Dividing Targets Using ilastik.
- Author
-
Haubold C, Schiegg M, Kreshuk A, Berg S, Koethe U, and Hamprecht FA
- Subjects
- Animals, Cell Division physiology, Cell Tracking statistics & numerical data, False Positive Reactions, Image Processing, Computer-Assisted methods, Microscopy instrumentation, Microscopy methods, Pattern Recognition, Automated statistics & numerical data, Signal-To-Noise Ratio, Algorithms, Cell Tracking methods, Drosophila melanogaster ultrastructure, Embryo, Nonmammalian ultrastructure, Image Processing, Computer-Assisted statistics & numerical data, Software
- Abstract
Tracking crowded cells or other targets in biology is often a challenging task due to poor signal-to-noise ratio, mutual occlusion, large displacements, little discernibility, and the ability of cells to divide. We here present an open source implementation of conservation tracking (Schiegg et al., IEEE international conference on computer vision (ICCV). IEEE, New York, pp 2928-2935, 2013) in the ilastik software framework. This robust tracking-by-assignment algorithm explicitly makes allowance for false positive detections, undersegmentation, and cell division. We give an overview over the underlying algorithm and parameters, and explain the use for a light sheet microscopy sequence of a Drosophila embryo. Equipped with this knowledge, users will be able to track targets of interest in their own data.
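The tracking-by-assignment core — choosing, among all possible detection-to-detection pairings between consecutive frames, the one with minimal total cost — can be sketched on toy data. Conservation tracking additionally models divisions, false positives, and undersegmentation in a global model over all frames; this stdlib-only sketch (positions and the squared-displacement cost are assumptions) covers only the plain two-frame assignment step.

```python
from itertools import permutations

def link_frames(frame_a, frame_b):
    """Brute-force min-cost one-to-one assignment between two frames of
    detections (positions as (x, y) tuples); cost = squared displacement.
    Real trackers solve this with the Hungarian algorithm or an ILP."""
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(len(frame_b))):
        cost = sum((ax - frame_b[j][0]) ** 2 + (ay - frame_b[j][1]) ** 2
                   for (ax, ay), j in zip(frame_a, perm))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost

# Two detections whose listed order swaps between frames: a nearest-list
# pairing would fail, but the global minimum recovers the true links.
frame_t = [(0.0, 0.0), (5.0, 5.0)]
frame_t1 = [(5.2, 5.1), (0.3, -0.2)]
matching, cost = link_frames(frame_t, frame_t1)
# matching[i] is the index in frame_t1 assigned to detection i of frame_t.
```

Stacking such assignment problems over all frame pairs, and adding variables for division, appearance, and disappearance events, yields the global optimization that the ilastik tracking workflow solves.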
- Published
- 2016
- Full Text
- View/download PDF
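The core of tracking-by-assignment, as described in the abstract above, is linking detections across frames by solving an assignment problem. Conservation tracking does this globally, jointly reasoning about divisions, false positives, and undersegmentation; the per-frame sketch below is a deliberately minimal stand-in (the function name, distance cost, and `max_dist` gate are illustrative assumptions, not the ilastik implementation).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(prev, curr, max_dist=15.0):
    """Link detections in two consecutive frames by minimum-cost assignment.

    prev: (N, 2) array of detection centroids in frame t.
    curr: (M, 2) array of detection centroids in frame t+1.
    Returns a list of (i, j) matched index pairs. Pairs farther apart
    than `max_dist` are dropped, leaving unmatched detections as
    candidate appearances, disappearances, or false positives -- events
    that the full conservation-tracking model handles in one global
    optimization, together with cell divisions.
    """
    # Pairwise Euclidean distances as assignment costs.
    cost = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    return [(int(i), int(j)) for i, j in zip(rows, cols)
            if cost[i, j] <= max_dist]
```

Chaining such per-frame links greedily is where simple trackers go wrong on dividing cells; conservation tracking avoids this by optimizing all frames jointly.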
265. Automated tracing of myelinated axons and detection of the nodes of Ranvier in serial images of peripheral nerves.
- Author
-
Kreshuk A, Walecki R, Koethe U, Gierthmuehlen M, Plachta D, Genoud C, Haastert-Talini K, and Hamprecht FA
- Subjects
- Algorithms, Animals, Datasets as Topic, Peripheral Nerves cytology, Rats, Vagus Nerve ultrastructure, Axons ultrastructure, Imaging, Three-Dimensional methods, Microscopy, Electron methods, Peripheral Nerves ultrastructure, Ranvier's Nodes ultrastructure, Supervised Machine Learning
- Abstract
The development of realistic neuroanatomical models of peripheral nerves for simulation purposes requires the reconstruction of the morphology of the myelinated fibres in the nerve, including their nodes of Ranvier. Currently, this information has to be extracted by semimanual procedures, which severely limit the scalability of the experiments. In this contribution, we propose a supervised machine learning approach for the detailed reconstruction of the geometry of fibres inside a peripheral nerve based on its high-resolution serial section images. Learning from sparse expert annotations, the algorithm traces myelinated axons, even across the nodes of Ranvier. The latter are detected automatically. The approach is based on classifying the myelinated membranes in a supervised fashion, closing the membrane gaps by solving an assignment problem, and classifying the closed gaps for the nodes of Ranvier detection. The algorithm has been validated on two very different datasets: (i) rat vagus nerve subvolume, SBFSEM microscope, 200 × 200 × 200 nm resolution, (ii) rat sensory branch subvolume, confocal microscope, 384 × 384 × 800 nm resolution. For the first dataset, the algorithm correctly reconstructed 88% of the axons (241 out of 273) and achieved 92% accuracy on the task of Ranvier node detection. For the second dataset, the gap closing algorithm correctly closed 96.2% of the gaps, and 55% of axons were reconstructed correctly through the whole volume. On both datasets, training the algorithm on a small data subset and applying it to the full dataset takes a fraction of the time required by the currently used semiautomated protocols. Our software, raw data and ground truth annotations are available at http://hci.iwr.uni-heidelberg.de/Benchmarks/. The development version of the code can be found at https://github.com/RWalecki/ATMA. (© 2015 The Authors. Journal of Microscopy © 2015 Royal Microscopical Society.)
- Published
- 2015
- Full Text
- View/download PDF
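The first step of the pipeline in the abstract above, supervised classification of myelinated membranes from sparse expert annotations, follows the general ilastik recipe: a random forest over generic per-pixel filter features. The 2D sketch below is an assumption-laden illustration (the feature set, the `pixel_features` and `train_and_predict` names, and the label convention are mine, not the paper's code).

```python
import numpy as np
from scipy import ndimage as ndi
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img, sigmas=(1.0, 2.0)):
    """Stack simple per-pixel filter responses (ilastik-style features):
    raw intensity plus Gaussian smoothing and gradient magnitude at
    several scales. Returns an (H, W, n_features) array."""
    img = img.astype(float)
    feats = [img]
    for s in sigmas:
        feats.append(ndi.gaussian_filter(img, s))
        feats.append(ndi.gaussian_gradient_magnitude(img, s))
    return np.stack(feats, axis=-1)

def train_and_predict(img, sparse_labels):
    """Train a random forest on sparsely annotated pixels and predict a
    dense label map. Convention: label 0 = unlabeled, labels > 0 are
    classes (e.g. 1 = background, 2 = myelinated membrane)."""
    feats = pixel_features(img)
    X = feats.reshape(-1, feats.shape[-1])
    y = sparse_labels.ravel()
    labeled = y > 0  # train only on annotated pixels
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[labeled], y[labeled])
    return clf.predict(X).reshape(img.shape)
```

The later stages (closing membrane gaps via an assignment problem, then classifying closed gaps as nodes of Ranvier) operate on the resulting membrane probability maps and are not sketched here.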
266. Semiautomated correlative 3D electron microscopy of in vivo-imaged axons and dendrites.
- Author
-
Maco B, Cantoni M, Holtmaat A, Kreshuk A, Hamprecht FA, and Knott GW
- Subjects
- Animals, Lasers, Mice, Software, Axons ultrastructure, Brain cytology, Dendrites ultrastructure, Imaging, Three-Dimensional methods, Microscopy, Electron, Scanning methods
- Abstract
This protocol describes how in vivo-imaged dendrites and axons in adult mouse brains can subsequently be prepared and imaged with focused ion beam scanning electron microscopy (FIBSEM). The procedure starts after in vivo imaging with chemical fixation, followed by the identification of the fluorescent structures of interest. Their position is then highlighted in the fixed tissue by burning fiducial marks with the two-photon laser. Once the section has been stained and resin-embedded, a small block is trimmed close to these marks. Serially aligned EM images are acquired through this region, using FIBSEM, and the neurites of interest are then reconstructed semiautomatically by using the ilastik software (http://ilastik.org/). This reliable imaging and reconstruction technique avoids the use of specific labels to identify the structures of interest in the electron microscope, enabling optimal chemical fixation techniques to be applied and providing the best possible structural preservation for 3D analysis. The entire protocol takes ∼4 d.
- Published
- 2014
- Full Text
- View/download PDF