116 results for "Menze, Bjoern H."
Search Results
2. Denoising Diffusion Models for 3D Healthy Brain Tissue Inpainting
- Author
-
Durrer, Alicia, Wolleb, Julia, Bieder, Florentin, Friedrich, Paul, Melie-Garcia, Lester, Ocampo-Pineda, Mario, Bercea, Cosmin I., Hamamci, Ibrahim E., Wiestler, Benedikt, Piraud, Marie, Yaldizli, Özgür, Granziera, Cristina, Menze, Bjoern H., Cattin, Philippe C., and Kofler, Florian
- Abstract
Monitoring diseases that affect the brain's structural integrity requires automated analysis of magnetic resonance (MR) images, e.g., for the evaluation of volumetric changes. However, many evaluation tools are optimized for analyzing healthy tissue. To enable the evaluation of scans containing pathological tissue, healthy tissue must therefore be restored in the pathological areas. In this work, we explore and extend denoising diffusion models for consistent inpainting of healthy 3D brain tissue. We modify state-of-the-art 2D, pseudo-3D, and 3D methods working in the image space, as well as 3D latent and 3D wavelet diffusion models, and train them to synthesize healthy brain tissue. Our evaluation shows that the pseudo-3D model performs best with respect to the structural similarity index, peak signal-to-noise ratio, and mean squared error. To emphasize the clinical relevance, we fine-tune this model on data containing synthetic MS lesions and evaluate it on a downstream brain tissue segmentation task, where it outperforms the established FMRIB Software Library (FSL) lesion-filling method.
- Published
- 2024
3. Deep Learning for Medical Image Analysis
- Author
-
Menze, Bjoern H. (Prof. Dr.); Belagiannis, Vasileios (Prof. Dr.); and Hu, Xiaobin
- Abstract
This thesis aims to develop effective, novel deep-learning-based algorithms for lesion segmentation, disease prognosis, and medical image synthesis, including brain glioma multi-class segmentation, natural killer/T-cell lymphoma multi-stage segmentation, prognostic analysis of natural killer/T-cell lymphoma, and MR image enhancement.
- Published
- 2022
4. The Brain Tumor Segmentation (BraTS) Challenge 2023: Glioma Segmentation in Sub-Saharan Africa Patient Population (BraTS-Africa)
- Author
-
Adewole, Maruf, Rudie, Jeffrey D., Gbadamosi, Anu, Toyobo, Oluyemisi, Raymond, Confidence, Zhang, Dong, Omidiji, Olubukola, Akinola, Rachel, Suwaid, Mohammad Abba, Emegoakor, Adaobi, Ojo, Nancy, Aguh, Kenneth, Kalaiwo, Chinasa, Babatunde, Gabriel, Ogunleye, Afolabi, Gbadamosi, Yewande, Iorpagher, Kator, Calabrese, Evan, Aboian, Mariam, Linguraru, Marius, Albrecht, Jake, Wiestler, Benedikt, Kofler, Florian, Janas, Anastasia, LaBella, Dominic, Kzerooni, Anahita Fathi, Li, Hongwei Bran, Iglesias, Juan Eugenio, Farahani, Keyvan, Eddy, James, Bergquist, Timothy, Chung, Verena, Shinohara, Russell Takeshi, Wiggins, Walter, Reitman, Zachary, Wang, Chunhao, Liu, Xinyang, Jiang, Zhifan, Familiar, Ariana, Van Leemput, Koen, Bukas, Christina, Piraud, Maire, Conte, Gian-Marco, Johansson, Elaine, Meier, Zeke, Menze, Bjoern H, Baid, Ujjwal, Bakas, Spyridon, Dako, Farouk, Fatade, Abiodun, and Anazodo, Udunna C
- Abstract
Gliomas are the most common type of primary brain tumor. Although gliomas are relatively rare, they are among the deadliest types of cancer, with a survival rate of less than 2 years after diagnosis. Gliomas are challenging to diagnose, hard to treat, and inherently resistant to conventional therapy. Years of extensive research to improve diagnosis and treatment of gliomas have decreased mortality rates across the Global North, while chances of survival among individuals in low- and middle-income countries (LMICs) remain unchanged and are significantly worse in Sub-Saharan Africa (SSA) populations. Long-term survival with glioma is associated with the identification of appropriate pathological features on brain MRI and confirmation by histopathology. Since 2012, the Brain Tumor Segmentation (BraTS) Challenge has evaluated state-of-the-art machine learning methods to detect, characterize, and classify gliomas. However, it is unclear whether these state-of-the-art methods can be widely implemented in SSA, given the extensive use of lower-quality MRI technology, which produces poor image contrast and resolution, and, more importantly, the propensity for late presentation of disease at advanced stages as well as the unique characteristics of gliomas in SSA (i.e., suspected higher rates of gliomatosis cerebri). Thus, the BraTS-Africa Challenge provides a unique opportunity to include brain MRI glioma cases from SSA in global efforts through the BraTS Challenge to develop and evaluate computer-aided diagnostic (CAD) methods for the detection and characterization of glioma in resource-limited settings, where the potential for CAD tools to transform healthcare is greatest. Comment: arXiv admin note: text overlap with arXiv:2107.02314
- Published
- 2023
5. Primitive Simultaneous Optimization of Similarity Metrics for Image Registration
- Author
-
Waldmannstetter, Diana, Wiestler, Benedikt, Schwarting, Julian, Ezhov, Ivan, Metz, Marie, Bakas, Spyridon, Baheti, Bhakti, Chakrabarty, Satrajit, Rueckert, Daniel, Kirschke, Jan S., Heckemann, Rolf A., Piraud, Marie, Menze, Bjoern H., and Kofler, Florian
- Abstract
Even though simultaneous optimization of similarity metrics is a standard procedure in the field of semantic segmentation, surprisingly, it is much less established for image registration. To help close this gap in the literature, we investigate, in a complex multi-modal 3D setting, whether simultaneous optimization of registration metrics, here implemented by means of primitive summation, can benefit image registration. We evaluate on two challenging datasets containing collections of pre- to post-operative and pre- to intra-operative MR images of glioma. Employing the proposed optimization, we demonstrate improved registration accuracy in terms of target registration error (TRE) on expert neuroradiologists' landmark annotations.
- Published
- 2023
6. A deep learning approach to predict collateral flow in stroke patients using radiomic features from perfusion images
- Author
-
Tetteh, Giles, Navarro, Fernando; https://orcid.org/0000-0001-8906-9079, Meier, Raphael, Kaesmacher, Johannes; https://orcid.org/0000-0002-9177-2289, Paetzold, Johannes C, Kirschke, Jan S, Zimmer, Claus, Wiest, Roland; https://orcid.org/0000-0001-7030-2045, and Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690
- Abstract
Collateral circulation results from specialized anastomotic channels which are capable of providing oxygenated blood to regions with compromised blood flow caused by arterial obstruction. The quality of collateral circulation has been established as a key factor in determining the likelihood of a favorable clinical outcome and goes a long way to determining the choice of a stroke care model. Though many imaging and grading methods exist for quantifying collateral blood flow, the actual grading is mostly done through manual inspection. This approach is associated with a number of challenges. First, it is time-consuming. Second, there is a high tendency for bias and inconsistency in the final grade assigned to a patient depending on the experience level of the clinician. We present a multi-stage deep learning approach to predict collateral flow grading in stroke patients based on radiomic features extracted from MR perfusion data. First, we formulate a region of interest detection task as a reinforcement learning problem and train a deep learning network to automatically detect the occluded region within the 3D MR perfusion volumes. Second, we extract radiomic features from the obtained region of interest through local image descriptors and denoising auto-encoders. Finally, we apply a convolutional neural network and other machine learning classifiers to the extracted radiomic features to automatically predict the collateral flow grading of the given patient volume as one of three severity classes - no flow (0), moderate flow (1), and good flow (2). Results from our experiments show an overall accuracy of 72% in the three-class prediction task. With an inter-observer agreement of 16% and a maximum intra-observer agreement of 74% in a similar experiment, our automated deep learning approach demonstrates a performance comparable to expert grading, is faster than visual inspection, and eliminates the problem of grading bias.
- Published
- 2023
7. Identifying core MRI sequences for reliable automatic brain metastasis segmentation
- Author
-
Buchner, Josef A, Peeken, Jan C, Etzel, Lucas, Ezhov, Ivan, Mayinger, Michael, Christ, Sebastian M, Brunner, Thomas B, Wittig, Andrea, Menze, Bjoern H, Zimmer, Claus, Meyer, Bernhard, Guckenberger, Matthias, Andratschke, Nicolaus, El Shafie, Rami A, Debus, Jürgen, Rogers, Susanne, Riesterer, Oliver, Schulze, Katrin, Feldmann, Horst J, Blanck, Oliver, Zamboglou, Constantinos, Ferentinos, Konstantinos, Bilger, Angelika, Grosu, Anca L, Wolff, Robert, Kirschke, Jan S, Eitz, Kerstin A, Combs, Stephanie E, Bernhardt, Denise, Rueckert, Daniel, et al.
- Abstract
BACKGROUND Many automatic approaches to brain tumor segmentation employ multiple magnetic resonance imaging (MRI) sequences. The goal of this project was to compare different combinations of input sequences to determine which MRI sequences are needed for effective automated brain metastasis (BM) segmentation. METHODS We analyzed preoperative imaging (T1-weighted sequence with and without contrast enhancement (T1/T1-CE), T2-weighted sequence (T2), and T2 fluid-attenuated inversion recovery (T2-FLAIR) sequence) from 339 patients with BMs from seven centers. A baseline 3D U-Net with all four sequences and six U-Nets with plausible sequence combinations (T1-CE, T1, T2-FLAIR, T1-CE + T2-FLAIR, T1-CE + T1 + T2-FLAIR, T1-CE + T1) were trained on 239 patients from two centers and subsequently tested on an external cohort of 100 patients from five centers. RESULTS The model based on T1-CE alone achieved the best segmentation performance for BM segmentation, with a median Dice similarity coefficient (DSC) of 0.96. Models trained without T1-CE performed worse (T1-only: DSC = 0.70; T2-FLAIR-only: DSC = 0.73). For edema segmentation, models that included both T1-CE and T2-FLAIR performed best (DSC = 0.93), while the remaining four models, which did not include both of these sequences simultaneously, reached a median DSC of 0.81-0.89. CONCLUSIONS A T1-CE-only protocol suffices for the segmentation of BMs. The combination of T1-CE and T2-FLAIR is important for edema segmentation; missing either sequence decreases performance. These findings may improve imaging routines by omitting unnecessary sequences, allowing for faster procedures in daily clinical practice while enabling optimal neural-network-based target definitions.
- Published
- 2023
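Entry 7 ranks sequence combinations by the Dice similarity coefficient (DSC). As a minimal illustration of the metric itself (toy masks invented here, not the study's data or code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks agree perfectly.
    return 2.0 * intersection / denom if denom else 1.0

# Toy 2D masks standing in for 3D segmentations.
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 1, 0], [0, 0, 0]])
print(round(dice_coefficient(pred, truth), 2))  # 0.8
```

A DSC of 0.96, as reported for the T1-CE-only model, means near-total overlap between predicted and reference metastasis masks.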
8. Framing image registration as a landmark detection problem for better representation of clinical relevance
- Author
-
Waldmannstetter, Diana, Wiestler, Benedikt, Schwarting, Julian, Ezhov, Ivan, Metz, Marie, Bakas, Spyridon, Baheti, Bhakti, Chakrabarty, Satrajit, Kirschke, Jan S., Heckemann, Rolf A., Piraud, Marie, Kofler, Florian, and Menze, Bjoern H.
- Abstract
Nowadays, registration methods are typically evaluated based on sub-resolution tracking-error differences. To reinfuse this evaluation process with clinical relevance, we propose reframing image registration as a landmark detection problem. Ideally, landmark-specific detection thresholds would be derived from an inter-rater analysis. To approximate this costly process, we propose computing hit-rate curves based on the error distribution of a sub-sample inter-rater analysis, deriving thresholds via the formula: threshold = median + delta * median absolute deviation. The method promises to differentiate previously indistinguishable registration algorithms and further enables assessing clinical significance during algorithm development.
- Published
- 2023
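The evaluation in entry 8 centers on the stated threshold rule, threshold = median + delta * median absolute deviation, applied to inter-rater landmark errors. A minimal sketch of that rule and the resulting hit rate; the error values and delta below are invented for illustration:

```python
import numpy as np

def detection_threshold(inter_rater_errors, delta: float) -> float:
    """Threshold = median + delta * median absolute deviation (MAD)."""
    errors = np.asarray(inter_rater_errors, dtype=float)
    med = np.median(errors)
    mad = np.median(np.abs(errors - med))
    return med + delta * mad

def hit_rate(registration_errors, threshold: float) -> float:
    """Fraction of landmarks whose error falls within the threshold."""
    errors = np.asarray(registration_errors, dtype=float)
    return float((errors <= threshold).mean())

# Hypothetical landmark errors in millimetres.
inter_rater = [1.0, 1.2, 0.8, 1.1, 0.9]      # median 1.0, MAD 0.1
thr = detection_threshold(inter_rater, delta=2.0)  # 1.0 + 2 * 0.1 = 1.2
print(hit_rate([0.5, 1.1, 1.3, 2.0], thr))   # 0.5
```

Sweeping `delta` and plotting the resulting hit rates yields the hit-rate curves the abstract describes.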
9. Synthetic optical coherence tomography angiographs for detailed retinal vessel segmentation without human annotations
- Author
-
Kreitner, Linus, Paetzold, Johannes C., Rauch, Nikolaus, Chen, Chen, Hagag, Ahmed M., Fayed, Alaa E., Sivaprasad, Sobha, Rausch, Sebastian, Weichsel, Julian, Menze, Bjoern H., Harders, Matthias, Knier, Benjamin, Rueckert, Daniel, and Menten, Martin J.
- Abstract
Optical coherence tomography angiography (OCTA) is a non-invasive imaging modality that can acquire high-resolution volumes of the retinal vasculature and aid the diagnosis of ocular, neurological and cardiac diseases. Segmenting the visible blood vessels is a common first step when extracting quantitative biomarkers from these images. Classical segmentation algorithms based on thresholding are strongly affected by image artifacts and limited signal-to-noise ratio. The use of modern, deep learning-based segmentation methods has been inhibited by a lack of large datasets with detailed annotations of the blood vessels. To address this issue, recent work has employed transfer learning, where a segmentation network is trained on synthetic OCTA images and is then applied to real data. However, the previously proposed simulations fail to faithfully model the retinal vasculature and do not provide effective domain adaptation. Because of this, current methods are unable to fully segment the retinal vasculature, in particular the smallest capillaries. In this work, we present a lightweight simulation of the retinal vascular network based on space colonization for faster and more realistic OCTA synthesis. We then introduce three contrast adaptation pipelines to decrease the domain gap between real and artificial images. We demonstrate the superior segmentation performance of our approach in extensive quantitative and qualitative experiments on three public datasets that compare our method to traditional computer vision algorithms and supervised training using human annotations. Finally, we make our entire pipeline publicly available, including the source code, pretrained models, and a large dataset of synthetic OCTA images., Comment: Currently under review
- Published
- 2023
10. Deep Learning for Fast and Robust Multiparametric Magnetic Resonance Imaging
- Author
-
Menze, Bjoern H. (Prof. Dr.); Menzel, Marion I. (Priv.-Doz. Dr.); Kirschke, Jan S. (Prof. Dr.); and Pirkl, Carolin Martha Anna
- Abstract
Despite the great potential of quantitative Magnetic Resonance Imaging (MRI) for comprehensive tissue characterization, the generally long acquisition times hamper a broad clinical deployment. This dissertation aims at developing Deep Learning methods for fast and robust multiparametric MRI to meet the key clinical needs for image-based biomarkers. The proposed methodological advances for multiparametric mapping via transient-state imaging techniques are validated, and initial clinical experience is demonstrated.
- Published
- 2021
11. Quantifying hemodynamics in the aorta with four-dimensional flow magnetic resonance imaging
- Author
-
Menze, Bjoern H. (Prof. Dr.); Hennemuth, Anja B. (Prof. Dr.); Schnell, Susanne (Prof. Dr.); and Zimmermann, Judith
- Abstract
This dissertation advances the use of magnetic resonance imaging (MRI)-based quantitative flow markers for aortic disease management. Image analysis methods for quantifying hemodynamics are evaluated using in vivo and in vitro 4D flow MRI of healthy and diseased aortas, and CFD-FSI simulations are used for direct comparison with matched boundary conditions. This thesis emphasizes the versatility and promising potential of quantitative 4D flow MRI, but also underlines important limitations.
- Published
- 2021
12. Deep learning based medical image segmentation and classification for artificial intelligence healthcare
- Author
-
Menze, Bjoern H. (Prof. Dr.); Langs, Georg (Prof. Dr.); and Zhao, Yu
- Abstract
This thesis focuses on developing novel deep-learning-based methods to address medical image segmentation and classification issues such as small organ segmentation, prostate cancer lesion characterization, parkinsonian syndrome diagnosis, lymph node metastasis prediction, and microsatellite instability prediction.
- Published
- 2021
13. Deep Convolutional Neural Networks for Biomedical Image Analysis
- Author
-
Menze, Bjoern H. (Prof. Dr.); Razansky, Daniel (Prof. Dr.); and Schoppe, Oliver
- Abstract
Despite the breakthroughs of machine learning in biomedical image analysis, adoption in practice is slow due to several bottlenecks: scarcity of annotated training data, limited reliability of those annotations, and insufficient generalization of the models. This dissertation aims at addressing these bottlenecks by developing efficient training strategies for models that generalize well and appreciate the imperfection of labels along three use cases: cancer metastasis detection down to single cancer cells, transfer learning across biomedical domains, and whole-body organ segmentation.
- Published
- 2021
14. Deep Learning for Medical Image Analysis
- Author
-
Belagiannis, Vasileios (Prof. Dr.), Menze, Bjoern H. (Prof. Dr.), and Hu, Xiaobin
- Abstract
This thesis aims to develop effective, novel deep-learning-based algorithms for lesion segmentation, disease prognosis, and medical image synthesis, including brain glioma multi-class segmentation, natural killer/T-cell lymphoma multi-stage segmentation, prognostic analysis of natural killer/T-cell lymphoma, and MR image enhancement.
- Published
- 2022
15. Landmark-Free Statistical Shape Modeling Via Neural Flow Deformations
- Author
-
Wang, Linwei, Dou, Qi, Fletcher, Thomas P, Speidel, Stefanie, Liu, Shuo; https://orcid.org/0000-0001-8238-7015, Lüdke, David, Amiranashvili, Tamaz, Ambellan, Felix, Ezhov, Ivan, Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690, and Zachow, Stefan
- Abstract
Statistical shape modeling aims at capturing shape variations of an anatomical structure that occur within a given population. Shape models are employed in many tasks, such as shape reconstruction and image segmentation, but also shape generation and classification. Existing shape priors either require dense correspondence between training examples or lack robustness and topological guarantees. We present FlowSSM, a novel shape modeling approach that learns shape variability without requiring dense correspondence between training instances. It relies on a hierarchy of continuous deformation flows, which are parametrized by a neural network. Our model outperforms state-of-the-art methods in providing an expressive and robust shape prior for distal femur and liver. We show that the emerging latent representation is discriminative by separating healthy from pathological shapes. Ultimately, we demonstrate its effectiveness on two shape reconstruction tasks from partial data. Our source code is publicly available (https://github.com/davecasp/flowssm).
- Published
- 2022
16. FedCostWAvg: A New Averaging for Better Federated Learning
- Author
-
Crimi, Alessandro; https://orcid.org/0000-0001-5397-6363, Bakas, Spyridon; https://orcid.org/0000-0001-8734-6482, Mächler, Leon, Ezhov, Ivan, Kofler, Florian, Shit, Suprosanna; https://orcid.org/0000-0003-4435-7207, Paetzold, Johannes C, Loehr, Timo, Zimmer, Claus, Wiestler, Benedikt; https://orcid.org/0000-0002-2963-7772, and Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690
- Abstract
We propose a simple new aggregation strategy for federated learning that won the MICCAI Federated Tumor Segmentation Challenge 2021 (FETS), the first ever challenge on Federated Learning in the Machine Learning community. Our method addresses the problem of how to aggregate multiple models that were trained on different data sets. Conceptually, we propose a new way to choose the weights when averaging the different models, thereby extending the current state of the art (FedAvg). Empirical validation demonstrates that our approach reaches a notable improvement in segmentation performance compared to FedAvg.
- Published
- 2022
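Entry 16's FedCostWAvg extends FedAvg by changing how the averaging weights are chosen. The sketch below illustrates the general idea only: blending FedAvg's dataset-size weights with weights derived from each client's recent cost decrease. The blending factor `alpha`, the cost-ratio form, and all numbers are assumptions for illustration, not necessarily the paper's exact rule:

```python
import numpy as np

def fedavg_weights(sizes):
    """FedAvg: weights proportional to each client's dataset size."""
    sizes = np.asarray(sizes, dtype=float)
    return sizes / sizes.sum()

def cost_weighted_average(sizes, prev_costs, curr_costs, alpha=0.5):
    """Blend size weights with cost-decrease weights (illustrative)."""
    size_w = fedavg_weights(sizes)
    # Clients whose training cost dropped more get larger weight.
    decrease = np.asarray(prev_costs, dtype=float) / np.asarray(curr_costs, dtype=float)
    cost_w = decrease / decrease.sum()
    return alpha * size_w + (1 - alpha) * cost_w

def aggregate(models, weights):
    """Weighted average of per-client parameter vectors."""
    return np.average(np.asarray(models, dtype=float), axis=0, weights=weights)

# Three hypothetical clients with one-parameter 'models'.
w = cost_weighted_average(sizes=[100, 50, 50],
                          prev_costs=[1.0, 0.8, 0.9],
                          curr_costs=[0.5, 0.8, 0.6])
print(aggregate([[1.0], [2.0], [3.0]], w))
```

Because the weights still sum to one, the aggregate remains a convex combination of the client models, just as in FedAvg.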
17. Physiology-based simulation of the retinal vasculature enables annotation-free segmentation of OCT angiographs
- Author
-
Menten, Martin J., Paetzold, Johannes C., Dima, Alina, Menze, Bjoern H., Knier, Benjamin, and Rueckert, Daniel
- Abstract
Optical coherence tomography angiography (OCTA) can non-invasively image the eye's circulatory system. In order to reliably characterize the retinal vasculature, there is a need to automatically extract quantitative metrics from these images. The calculation of such biomarkers requires a precise semantic segmentation of the blood vessels. However, deep-learning-based methods for segmentation mostly rely on supervised training with voxel-level annotations, which are costly to obtain. In this work, we present a pipeline to synthesize large amounts of realistic OCTA images with intrinsically matching ground truth labels; thereby obviating the need for manual annotation of training data. Our proposed method is based on two novel components: 1) a physiology-based simulation that models the various retinal vascular plexuses and 2) a suite of physics-based image augmentations that emulate the OCTA image acquisition process including typical artifacts. In extensive benchmarking experiments, we demonstrate the utility of our synthetic data by successfully training retinal vessel segmentation algorithms. Encouraged by our method's competitive quantitative and superior qualitative performance, we believe that it constitutes a versatile tool to advance the quantitative analysis of OCTA images., Comment: Accepted at MICCAI 2022
- Published
- 2022
18. A unified 3D framework for Organs at Risk Localization and Segmentation for Radiation Therapy Planning
- Author
-
Navarro, Fernando, Sasahara, Guido, Shit, Suprosanna, Ezhov, Ivan, Peeken, Jan C., Combs, Stephanie E., and Menze, Bjoern H.
- Abstract
Automatic localization and segmentation of organs-at-risk (OAR) in CT are essential pre-processing steps in medical image analysis tasks, such as radiation therapy planning. For instance, the segmentation of OAR surrounding tumors enables the maximization of radiation to the tumor area without compromising the healthy tissues. However, the current medical workflow requires manual delineation of OAR, which is prone to errors and is annotator-dependent. In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation rather than novel localization or segmentation architectures. To the best of our knowledge, our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging. In the first step, a 3D multi-variate regression network predicts organs' centroids and bounding boxes. Secondly, 3D organ-specific segmentation networks are leveraged to generate a multi-organ segmentation map. Our method achieved an overall Dice score of $0.9260\pm 0.18 \%$ on the VISCERAL dataset containing CT scans with varying fields of view and multiple organs.
- Published
- 2022
19. Anatomy-Aware Inference of the 3D Standing Spine Posture from 2D Radiographs
- Author
-
Bayat, Amirhossein, Pace, Danielle F., Sekuboyina, Anjany, Payer, Christian, Stern, Darko, Urschler, Martin, Kirschke, Jan S., and Menze, Bjoern H.
- Abstract
An important factor for the development of spinal degeneration, pain and the outcome of spinal surgery is known to be the balance of the spine. It must be analyzed in an upright, standing position to ensure physiological loading conditions and visualize load-dependent deformations. Despite the complex 3D shape of the spine, this analysis is currently performed using 2D radiographs, as all frequently used 3D imaging techniques require the patient to be scanned in a prone position. To overcome this limitation, we propose a deep neural network to reconstruct the 3D spinal pose in an upright standing position, loaded naturally. Specifically, we propose a novel neural network architecture, which takes orthogonal 2D radiographs and infers the spine’s 3D posture using vertebral shape priors. In this work, we define vertebral shape priors using an atlas and a spine shape prior, incorporating both into our proposed network architecture. We validate our architecture on digitally reconstructed radiographs, achieving a 3D reconstruction Dice of 0.95, indicating an almost perfect 2D-to-3D domain translation. Validating the reconstruction accuracy of a 3D standing spine on real data is infeasible due to the lack of a valid ground truth. Hence, we design a novel experiment for this purpose, using an orientation invariant distance metric, to evaluate our model’s ability to synthesize full-3D, upright, and patient-specific spine models. We compare the synthesized spine shapes from clinical upright standing radiographs to the same patient’s 3D spinal posture in the prone position from CT.
- Published
- 2022
20. Anatomy-Aware Inference of the 3D Standing Spine Posture from 2D Radiographs
- Author
-
Bayat, Amirhossein; https://orcid.org/0000-0003-1129-9725, Pace, Danielle F, Sekuboyina, Anjany, Payer, Christian, Stern, Darko, Urschler, Martin; https://orcid.org/0000-0001-5792-3971, Kirschke, Jan S; https://orcid.org/0000-0002-7557-0003, Menze, Bjoern H, Bayat, Amirhossein; https://orcid.org/0000-0003-1129-9725, Pace, Danielle F, Sekuboyina, Anjany, Payer, Christian, Stern, Darko, Urschler, Martin; https://orcid.org/0000-0001-5792-3971, Kirschke, Jan S; https://orcid.org/0000-0002-7557-0003, and Menze, Bjoern H
- Abstract
An important factor for the development of spinal degeneration, pain and the outcome of spinal surgery is known to be the balance of the spine. It must be analyzed in an upright, standing position to ensure physiological loading conditions and visualize load-dependent deformations. Despite the complex 3D shape of the spine, this analysis is currently performed using 2D radiographs, as all frequently used 3D imaging techniques require the patient to be scanned in a prone position. To overcome this limitation, we propose a deep neural network to reconstruct the 3D spinal pose in an upright standing position, loaded naturally. Specifically, we propose a novel neural network architecture, which takes orthogonal 2D radiographs and infers the spine’s 3D posture using vertebral shape priors. In this work, we define vertebral shape priors using an atlas and a spine shape prior, incorporating both into our proposed network architecture. We validate our architecture on digitally reconstructed radiographs, achieving a 3D reconstruction Dice of 0.95, indicating an almost perfect 2D-to-3D domain translation. Validating the reconstruction accuracy of a 3D standing spine on real data is infeasible due to the lack of a valid ground truth. Hence, we design a novel experiment for this purpose, using an orientation invariant distance metric, to evaluate our model’s ability to synthesize full-3D, upright, and patient-specific spine models. We compare the synthesized spine shapes from clinical upright standing radiographs to the same patient’s 3D spinal posture in the prone position from CT.
- Published
- 2022
21. SRflow: Deep learning based super-resolution of 4D-flow MRI data
- Author
-
Shit, Suprosanna; https://orcid.org/0000-0003-4435-7207, Zimmermann, Judith, Ezhov, Ivan, Paetzold, Johannes C, Sanches, Augusto F, Pirkl, Carolin; https://orcid.org/0000-0002-5759-5290, Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690, Shit, Suprosanna; https://orcid.org/0000-0003-4435-7207, Zimmermann, Judith, Ezhov, Ivan, Paetzold, Johannes C, Sanches, Augusto F, Pirkl, Carolin; https://orcid.org/0000-0002-5759-5290, and Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690
- Abstract
Exploiting 4D-flow magnetic resonance imaging (MRI) data to quantify hemodynamics requires an adequate spatio-temporal vector field resolution at a low noise level. To address this challenge, we provide a learned solution to super-resolve in vivo 4D-flow MRI data at a post-processing level. We propose a deep convolutional neural network (CNN) that learns the inter-scale relationship of the velocity vector map and leverages an efficient residual learning scheme to make it computationally feasible. A novel, direction-sensitive, and robust loss function is crucial to learning vector-field data. We present a detailed comparative study between the proposed super-resolution and the conventional cubic B-spline based vector-field super-resolution. Our method improves the peak-velocity to noise ratio of the flow field by 10% and 30% for in vivo cardiovascular and cerebrovascular data, respectively, for 4× super-resolution over the state-of-the-art cubic B-spline. Notably, our method offers 10× faster inference than the cubic B-spline. The proposed approach for super-resolution of 4D-flow data would potentially improve the subsequent calculation of hemodynamic quantities.
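As an illustration of the direction-sensitive idea described above, a minimal vector-field loss might combine a magnitude term with a cosine-direction term. This is a generic sketch of such a loss, not the paper's actual formulation:

```python
import numpy as np

def direction_sensitive_loss(v_pred, v_true, alpha=0.5, eps=1e-8):
    """Toy loss for velocity vector fields: blends a squared-magnitude (L2)
    term with a (1 - cosine similarity) term that penalizes orientation
    mismatch. Inputs have shape (..., 3); alpha balances the two terms."""
    # per-voxel squared error of the velocity vectors
    mag_err = np.sum((v_pred - v_true) ** 2, axis=-1)
    # 1 - cos(angle) between predicted and true vectors
    dot = np.sum(v_pred * v_true, axis=-1)
    norms = np.linalg.norm(v_pred, axis=-1) * np.linalg.norm(v_true, axis=-1)
    dir_err = 1.0 - dot / (norms + eps)
    return float(np.mean(alpha * mag_err + (1.0 - alpha) * dir_err))
```

A perfectly reconstructed field yields a near-zero loss, while a field with reversed flow directions is penalized even when speeds match.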
- Published
- 2022
22. Learning residual motion correction for fast and robust 3D multiparametric MRI
- Author
-
Pirkl, Carolin M; https://orcid.org/0000-0002-5759-5290, Cencini, Matteo; https://orcid.org/0000-0001-7060-6305, Kurzawski, Jan W; https://orcid.org/0000-0003-2781-1236, Waldmannstetter, Diana, Li, Hongwei, Sekuboyina, Anjany; https://orcid.org/0000-0002-5601-284X, Endt, Sebastian, Peretti, Luca, Donatelli, Graziella; https://orcid.org/0000-0002-5325-0746, Pasquariello, Rosa, Costagli, Mauro; https://orcid.org/0000-0001-9073-1082, Buonincontri, Guido; https://orcid.org/0000-0002-8386-639X, Tosetti, Michela, Menzel, Marion I, Menze, Bjoern H, Pirkl, Carolin M; https://orcid.org/0000-0002-5759-5290, Cencini, Matteo; https://orcid.org/0000-0001-7060-6305, Kurzawski, Jan W; https://orcid.org/0000-0003-2781-1236, Waldmannstetter, Diana, Li, Hongwei, Sekuboyina, Anjany; https://orcid.org/0000-0002-5601-284X, Endt, Sebastian, Peretti, Luca, Donatelli, Graziella; https://orcid.org/0000-0002-5325-0746, Pasquariello, Rosa, Costagli, Mauro; https://orcid.org/0000-0001-9073-1082, Buonincontri, Guido; https://orcid.org/0000-0002-8386-639X, Tosetti, Michela, Menzel, Marion I, and Menze, Bjoern H
- Published
- 2022
23. Application of machine learning to pretherapeutically estimate dosimetry in men with advanced prostate cancer treated with 177Lu-PSMA I&T therapy
- Author
-
Xue, Song, Gafita, Andrei, Dong, Chao, Zhao, Yu, Tetteh, Giles, Menze, Bjoern H, Ziegler, Sibylle, Weber, Wolfgang, Afshar-Oromieh, Ali, Rominger, Axel, Eiber, Matthias, Shi, Kuangyu; https://orcid.org/0000-0002-8714-3084, Xue, Song, Gafita, Andrei, Dong, Chao, Zhao, Yu, Tetteh, Giles, Menze, Bjoern H, Ziegler, Sibylle, Weber, Wolfgang, Afshar-Oromieh, Ali, Rominger, Axel, Eiber, Matthias, and Shi, Kuangyu; https://orcid.org/0000-0002-8714-3084
- Abstract
Purpose: Although treatment planning and individualized dose application for emerging prostate-specific membrane antigen (PSMA)-targeted radioligand therapy (RLT) are generally recommended, it is still difficult to implement in practice at the moment. In this study, we aimed to prove the concept of pretherapeutic prediction of dosimetry based on imaging and laboratory measurements before the RLT treatment. Methods: Twenty-three patients with metastatic castration-resistant prostate cancer (mCRPC) treated with 177Lu-PSMA I&T RLT were included retrospectively. They had available pre-therapy 68Ga-PSMA-HEBD-CC PET/CT and at least 3 planar and 1 SPECT/CT imaging studies for dosimetry. Overall, 43 cycles of 177Lu-PSMA I&T RLT were applied. Organ-based standard uptake values (SUVs) were obtained from pre-therapy PET/CT scans. Patient dosimetry was calculated for the kidney, liver, spleen, and salivary glands using Hermes Hybrid Dosimetry 4.0 from the planar and SPECT/CT images. Machine learning methods were explored for dose prediction from organ SUVs and laboratory measurements. The uncertainty of these dose predictions was compared with the population-based dosimetry estimates. Mean absolute percentage error (MAPE) was used to assess the prediction uncertainty of estimated dosimetry. Results: An optimal machine learning method achieved a dosimetry prediction MAPE of 15.8 ± 13.2% for the kidney, 29.6 ± 13.7% for the liver, 23.8 ± 13.1% for the salivary glands, and 32.1 ± 31.4% for the spleen. In contrast, the prediction based on the literature population mean had a significantly larger MAPE (p < 0.01): 25.5 ± 17.3% for the kidney, 139.1 ± 111.5% for the liver, 67.0 ± 58.3% for the salivary glands, and 54.1 ± 215.3% for the spleen. Conclusion: The preliminary results confirmed the feasibility of pretherapeutic estimation of treatment dosimetry and its added value to empirical population-based estimation. The exploration of dose prediction may support the implementation of treatment
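The MAPE metric used to quantify the prediction uncertainty above is straightforward to reproduce; a minimal sketch (generic implementation, not the study's code):

```python
import numpy as np

def mape(predicted, measured):
    """Mean absolute percentage error (in %): the average of the absolute
    dose prediction errors relative to the measured organ doses."""
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return float(np.mean(np.abs(predicted - measured) / np.abs(measured)) * 100.0)
```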
- Published
- 2022
24. Landmark-Free Statistical Shape Modeling Via Neural Flow Deformations
- Author
-
Wang, Linwei, Dou, Qi, Fletcher, Thomas P, Speidel, Stefanie, Liu, Shuo; https://orcid.org/0000-0001-8238-7015, Wang, L ( Linwei ), Dou, Q ( Qi ), Fletcher, T P ( Thomas P ), Speidel, S ( Stefanie ), Liu, S ( Shuo ), Lüdke, David, Amiranashvili, Tamaz, Ambellan, Felix, Ezhov, Ivan, Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690, Zachow, Stefan, Wang, Linwei, Dou, Qi, Fletcher, Thomas P, Speidel, Stefanie, Liu, Shuo; https://orcid.org/0000-0001-8238-7015, Wang, L ( Linwei ), Dou, Q ( Qi ), Fletcher, T P ( Thomas P ), Speidel, S ( Stefanie ), Liu, S ( Shuo ), Lüdke, David, Amiranashvili, Tamaz, Ambellan, Felix, Ezhov, Ivan, Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690, and Zachow, Stefan
- Abstract
Statistical shape modeling aims at capturing shape variations of an anatomical structure that occur within a given population. Shape models are employed in many tasks, such as shape reconstruction and image segmentation, but also shape generation and classification. Existing shape priors either require dense correspondence between training examples or lack robustness and topological guarantees. We present FlowSSM, a novel shape modeling approach that learns shape variability without requiring dense correspondence between training instances. It relies on a hierarchy of continuous deformation flows, which are parametrized by a neural network. Our model outperforms state-of-the-art methods in providing an expressive and robust shape prior for distal femur and liver. We show that the emerging latent representation is discriminative by separating healthy from pathological shapes. Ultimately, we demonstrate its effectiveness on two shape reconstruction tasks from partial data. Our source code is publicly available (https://github.com/davecasp/flowssm).
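The hierarchy of continuous deformation flows can be illustrated by numerically integrating a velocity field that advects template surface points; here `velocity_fn` is a hypothetical stand-in for FlowSSM's learned, neural-network-parametrized field:

```python
import numpy as np

def integrate_flow(points, velocity_fn, n_steps=10, dt=0.1):
    """Forward-Euler integration of a deformation flow: each step moves the
    (N, 3) points along the velocity field. velocity_fn maps (N, 3) point
    coordinates to (N, 3) velocities (illustrative only)."""
    for _ in range(n_steps):
        points = points + dt * velocity_fn(points)  # explicit Euler step
    return points
```

Because the deformation is built from smooth flows rather than per-landmark displacements, correspondence between training shapes is not required.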
- Published
- 2022
25. Region of Interest focused MRI to Synthetic CT Translation using Regression and Classification Multi-task Network
- Author
-
Kaushik, Sandeep S, Bylun, Mikael, Cozzin, Cristina, Shanbhag, Dattesh, Petit, Steven F, Wyatt, Jonathan J, Menzel, Marion I; https://orcid.org/0000-0003-0087-9134, Pirkl, Carolin M; https://orcid.org/0000-0002-5759-5290, Mehta, Bhairav, Chauhan, Vikas, Chandrasekharan, Kesavadas, Jonsson, Jojakim, Nyholm, Tufve, Wiesinger, Florian, Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690, Kaushik, Sandeep S, Bylun, Mikael, Cozzin, Cristina, Shanbhag, Dattesh, Petit, Steven F, Wyatt, Jonathan J, Menzel, Marion I; https://orcid.org/0000-0003-0087-9134, Pirkl, Carolin M; https://orcid.org/0000-0002-5759-5290, Mehta, Bhairav, Chauhan, Vikas, Chandrasekharan, Kesavadas, Jonsson, Jojakim, Nyholm, Tufve, Wiesinger, Florian, and Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690
- Abstract
In this work, we present a method for synthetic CT (sCT) generation from zero-echo-time (ZTE) MRI aimed at structural and quantitative accuracies of the image, with a particular focus on the accurate bone density value prediction. We propose a loss function that favors a spatially sparse region in the image. We harness the ability of a multi-task network to produce correlated outputs as a framework to enable localisation of region of interest (RoI) via classification, emphasize regression of values within RoI and still retain the overall accuracy via global regression. The network is optimized by a composite loss function that combines a dedicated loss from each task. We demonstrate how the multi-task network with RoI focused loss offers an advantage over other configurations of the network to achieve higher accuracy of performance. This is relevant to sCT where failure to accurately estimate high Hounsfield Unit values of bone could lead to impaired accuracy in clinical applications. We compare the dose calculation maps from the proposed sCT and the real CT in a radiation therapy treatment planning setup.
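A composite multi-task loss of the kind described can be sketched as follows; the weights, the L1 regression terms, and the binary cross-entropy classification term are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def composite_sct_loss(pred_hu, true_hu, roi_prob, roi_mask,
                       w_global=1.0, w_roi=2.0, w_cls=1.0, eps=1e-8):
    """Sketch of a multi-task sCT loss: global regression over the whole
    volume, an extra regression term restricted to the (sparse) bone RoI to
    emphasize high-HU accuracy, and a classification term localising the RoI."""
    # global L1 regression over all voxels
    global_l1 = np.mean(np.abs(pred_hu - true_hu))
    # RoI-focused L1: only voxels inside the RoI mask contribute
    roi_l1 = np.sum(np.abs(pred_hu - true_hu) * roi_mask) / (roi_mask.sum() + eps)
    # binary cross-entropy for the RoI classification head
    bce = -np.mean(roi_mask * np.log(roi_prob + eps)
                   + (1.0 - roi_mask) * np.log(1.0 - roi_prob + eps))
    return float(w_global * global_l1 + w_roi * roi_l1 + w_cls * bce)
```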
- Published
- 2022
26. Learn-Morph-Infer: a new way of solving the inverse problem for brain tumor modeling
- Author
-
Ezhov, Ivan, Scibilia, Kevin, Franitza, Katharina, Steinbauer, Felix, Shit, Suprosanna; https://orcid.org/0000-0003-4435-7207, Zimmer, Lucas; https://orcid.org/0000-0002-5167-2929, Lipková, Jana; https://orcid.org/0000-0001-8101-4794, Kofler, Florian, Paetzold, Johannes C, Canalini, Luca, Waldmannstetter, Diana, Menten, Martin J, Metz, Marie, Wiestler, Benedikt; https://orcid.org/0000-0002-2963-7772, Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690, Ezhov, Ivan, Scibilia, Kevin, Franitza, Katharina, Steinbauer, Felix, Shit, Suprosanna; https://orcid.org/0000-0003-4435-7207, Zimmer, Lucas; https://orcid.org/0000-0002-5167-2929, Lipková, Jana; https://orcid.org/0000-0001-8101-4794, Kofler, Florian, Paetzold, Johannes C, Canalini, Luca, Waldmannstetter, Diana, Menten, Martin J, Metz, Marie, Wiestler, Benedikt; https://orcid.org/0000-0002-2963-7772, and Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690
- Abstract
Current treatment planning of patients diagnosed with a brain tumor, such as glioma, could significantly benefit from accessing the spatial distribution of tumor cell concentration. Existing diagnostic modalities, e.g. magnetic resonance imaging (MRI), provide sufficient contrast in areas of high cell density. In gliomas, however, they do not portray areas of low cell concentration, which can often serve as a source for the secondary appearance of the tumor after treatment. To estimate tumor cell densities beyond the visible boundaries of the lesion, numerical simulations of tumor growth could complement imaging information by providing estimates of the full spatial distribution of tumor cells. Over recent years, a corpus of literature on medical image-based tumor modeling was published. It includes different mathematical formalisms describing the forward tumor growth model. Alongside, various parametric inference schemes were developed to perform efficient tumor model personalization, i.e. to solve the inverse problem. However, the unifying drawback of all existing approaches is the time complexity of the model personalization, which prohibits a potential integration of the modeling into clinical settings. In this work, we introduce a deep learning based methodology for inferring the patient-specific spatial distribution of brain tumors from T1Gd and FLAIR MRI scans. Coined Learn-Morph-Infer, the method achieves real-time performance in the order of minutes on widely available hardware, and the compute time is stable across tumor models of different complexity, such as reaction-diffusion and reaction-advection-diffusion models. We believe the proposed inverse solution approach not only paves the way for clinical translation of brain tumor personalization but can also be adapted to other scientific and engineering domains.
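The reaction-diffusion forward model mentioned above is, in its simplest form, the Fisher-Kolmogorov equation dc/dt = D ∇²c + ρ c (1 − c). A toy 2D explicit-Euler step (illustrative parameters and periodic boundaries; the paper's forward models run in 3D patient anatomy) might look like:

```python
import numpy as np

def fisher_kolmogorov_step(c, D=0.1, rho=0.05, dt=0.1, dx=1.0):
    """One explicit Euler step of the reaction-diffusion tumor growth model:
    diffusion spreads cell density c, the logistic reaction term grows it.
    c is a 2D array of normalized cell concentrations in [0, 1]."""
    # 5-point Laplacian with periodic boundary conditions
    lap = (np.roll(c, 1, axis=0) + np.roll(c, -1, axis=0)
           + np.roll(c, 1, axis=1) + np.roll(c, -1, axis=1) - 4.0 * c) / dx**2
    return np.clip(c + dt * (D * lap + rho * c * (1.0 - c)), 0.0, 1.0)
```

Model personalization then amounts to finding the parameters (here D, ρ) whose simulated density best explains the observed MRI, i.e. the inverse problem the paper solves with a learned mapping.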
- Published
- 2022
27. Physiology-based simulation of the retinal vasculature enables annotation-free segmentation of OCT angiographs
- Author
-
Menten, Martin J, Paetzold, Johannes C, Dima, Alina; https://orcid.org/0000-0002-2598-0952, Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690, Knierim, B, Rueckert, Daniel, Menten, Martin J, Paetzold, Johannes C, Dima, Alina; https://orcid.org/0000-0002-2598-0952, Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690, Knierim, B, and Rueckert, Daniel
- Abstract
Optical coherence tomography angiography (OCTA) can non-invasively image the eye's circulatory system. In order to reliably characterize the retinal vasculature, there is a need to automatically extract quantitative metrics from these images. The calculation of such biomarkers requires a precise semantic segmentation of the blood vessels. However, deep-learning-based methods for segmentation mostly rely on supervised training with voxel-level annotations, which are costly to obtain. In this work, we present a pipeline to synthesize large amounts of realistic OCTA images with intrinsically matching ground truth labels; thereby obviating the need for manual annotation of training data. Our proposed method is based on two novel components: 1) a physiology-based simulation that models the various retinal vascular plexuses and 2) a suite of physics-based image augmentations that emulate the OCTA image acquisition process including typical artifacts. In extensive benchmarking experiments, we demonstrate the utility of our synthetic data by successfully training retinal vessel segmentation algorithms. Encouraged by our method's competitive quantitative and superior qualitative performance, we believe that it constitutes a versatile tool to advance the quantitative analysis of OCTA images.
- Published
- 2022
28. Detecting CTP Truncation Artifacts in Acute Stroke Imaging from the Arterial Input and the Vascular Output Functions
- Author
-
de la Rosa, Ezequiel, et al, Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690, de la Rosa, Ezequiel, et al, and Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690
- Abstract
Background Current guidelines for CT perfusion (CTP) in acute stroke suggest acquiring scans with a minimal duration of 60-70 s. But even then, CTP analysis can be affected by truncation artifacts. Conversely, shorter acquisitions are still widely used in clinical practice and are usually sufficient to reliably estimate lesion volumes. We aim to devise an automatic method that detects scans affected by truncation artifacts. Methods Shorter scan durations are simulated from the ISLES’18 dataset by consecutively removing the last CTP time-point until reaching a 10 s duration. For each truncated series, perfusion lesion volumes are quantified and used to label the series as unreliable if the lesion volumes considerably deviate from the original untruncated ones. Afterwards, nine features from the arterial input function (AIF) and the vascular output function (VOF) are derived and used to fit machine-learning models with the goal of detecting unreliably truncated scans. Methods are compared against a baseline classifier solely based on the scan duration, which is the current clinical standard. The ROC-AUC, precision-recall AUC and the F1-score are measured in a 5-fold cross-validation setting. Results Machine learning models obtained high performance, with a ROC-AUC of 0.964 and precision-recall AUC of 0.958 for the best performing classifier. The highest detection rate is obtained with support vector machines (F1-score = 0.913). The most important feature is the AIFcoverage, measured as the time difference between the scan duration and the AIF peak. In comparison, the baseline classifier yielded a lower performance of 0.940 ROC-AUC and 0.933 precision-recall AUC. At the 60-second cutoff, the baseline classifier obtained a low detection of unreliably truncated scans (F1-Score = 0.638). Conclusions Machine learning models fed with discriminant AIF and VOF features accurately detected unreliable stroke lesion measurements due to insufficient acquisition duration. Unlike t
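The most important feature, AIFcoverage, follows directly from its description above; a minimal sketch (the remaining eight AIF/VOF features are not shown):

```python
import numpy as np

def aif_coverage(aif_curve, dt):
    """AIF coverage: scan duration minus the time of the AIF peak, i.e. how
    long the acquisition continues after the arterial bolus peaks.
    aif_curve is a 1D time series sampled every dt seconds."""
    t_peak = float(np.argmax(aif_curve)) * dt
    duration = float(len(aif_curve) - 1) * dt
    return duration - t_peak
```

A scan truncated right after its AIF peak gets near-zero coverage, which flags a likely unreliable perfusion analysis regardless of nominal scan duration.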
- Published
- 2022
29. A unified 3D framework for Organs at Risk Localization and Segmentation for Radiation Therapy Planning
- Author
-
Navarro, Fernando; https://orcid.org/0000-0001-8906-9079, Sasahara, Guido, Shit, Suprosanna; https://orcid.org/0000-0003-4435-7207, Ezhov, Ivan, Peeken, Jan C; https://orcid.org/0000-0003-2679-9853, Combs, Stephanie E; https://orcid.org/0000-0002-5233-1536, Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690, Navarro, Fernando; https://orcid.org/0000-0001-8906-9079, Sasahara, Guido, Shit, Suprosanna; https://orcid.org/0000-0003-4435-7207, Ezhov, Ivan, Peeken, Jan C; https://orcid.org/0000-0003-2679-9853, Combs, Stephanie E; https://orcid.org/0000-0002-5233-1536, and Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690
- Abstract
Automatic localization and segmentation of organs-at-risk (OAR) in CT are essential pre-processing steps in medical image analysis tasks, such as radiation therapy planning. For instance, the segmentation of OAR surrounding tumors enables the maximization of radiation to the tumor area without compromising the healthy tissues. However, the current medical workflow requires manual delineation of OAR, which is prone to errors and is annotator-dependent. In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation rather than novel localization or segmentation architectures. To the best of our knowledge, our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging. In the first step, a 3D multi-variate regression network predicts organs' centroids and bounding boxes. Secondly, 3D organ-specific segmentation networks are leveraged to generate a multi-organ segmentation map. Our method achieved an overall Dice score of 0.9260±0.18% on the VISCERAL dataset containing CT scans with varying fields of view and multiple organs.
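The hand-off between the two stages can be sketched as a crop around the centroid and bounding box predicted by the localization network; `crop_to_bbox` is a hypothetical helper with simplified border clamping:

```python
import numpy as np

def crop_to_bbox(volume, centroid, box_size):
    """Crop a 3D patch of shape box_size around a predicted centroid so an
    organ-specific segmentation network can run on a focused sub-volume.
    Starts are clamped so the patch stays inside the volume."""
    starts = [min(max(0, int(c) - s // 2), dim - s)
              for c, s, dim in zip(centroid, box_size, volume.shape)]
    slices = tuple(slice(st, st + s) for st, s in zip(starts, box_size))
    return volume[slices]
```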
- Published
- 2022
30. QU-BraTS: MICCAI BraTS 2020 Challenge on Quantifying Uncertainty in Brain Tumor Segmentation - Analysis of Ranking Scores and Benchmarking Results
- Author
-
Mehta, Raghav, Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690, et al, Mehta, Raghav, Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690, and et al
- Abstract
Deep learning (DL) models have provided state-of-the-art performance in various medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder translating DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way toward clinical translation. Several uncertainty estimation methods have recently been introduced for DL medical image segmentation tasks. Developing scores to evaluate and compare the performance of uncertainty measures will assist the end-user in making more informed decisions. In this study, we explore and evaluate a score developed during the BraTS 2019 and BraTS 2020 task on uncertainty quantification (QU-BraTS) and designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This score (1) rewards uncertainty estimates that produce high confidence in correct assertions and those that assign low confidence levels at incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by 14 independent participating teams of QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, highlighting the need for uncertainty quantification in medical image analyses. Finally, in favor of transparency and reproducibility, our evaluation code is made publicly available at: this https URL.
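The filtering idea at the core of the score can be sketched as follows; this simplified version only computes Dice on confidence-retained voxels at increasing uncertainty thresholds and omits the score's penalty terms for filtered-out correct assertions:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    # Dice overlap between two binary masks
    return 2.0 * np.sum(pred * truth) / (pred.sum() + truth.sum() + eps)

def filtered_dice_curve(pred, truth, uncertainty, thresholds):
    """At each threshold, voxels whose uncertainty exceeds it are removed
    and Dice is recomputed on the retained (confident) voxels. A method
    that is uncertain exactly where it errs sees its filtered Dice rise."""
    curve = []
    for t in thresholds:
        keep = uncertainty <= t
        curve.append(dice(pred[keep], truth[keep]))
    return curve
```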
- Published
- 2022
31. Focused Decoding Enables 3D Anatomical Detection by Transformers
- Author
-
Wittmann, Bastian, Navarro, Fernando; https://orcid.org/0000-0001-8906-9079, Shit, Suprosanna; https://orcid.org/0000-0003-4435-7207, Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690, Wittmann, Bastian, Navarro, Fernando; https://orcid.org/0000-0001-8906-9079, Shit, Suprosanna; https://orcid.org/0000-0003-4435-7207, and Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690
- Abstract
Detection Transformers represent end-to-end object detection approaches based on a Transformer encoder-decoder architecture, exploiting the attention mechanism for global relation modeling. Although Detection Transformers deliver results on par with or even superior to their highly optimized CNN-based counterparts operating on 2D natural images, their success is closely coupled to access to a vast amount of training data. This, however, restricts the feasibility of employing Detection Transformers in the medical domain, as access to annotated data is typically limited. To tackle this issue and facilitate the advent of medical Detection Transformers, we propose a novel Detection Transformer for 3D anatomical structure detection, dubbed Focused Decoder. Focused Decoder leverages information from an anatomical region atlas to simultaneously deploy query anchors and restrict the cross-attention's field of view to regions of interest, which allows for a precise focus on relevant anatomical structures. We evaluate our proposed approach on two publicly available CT datasets and demonstrate that Focused Decoder not only provides strong detection results and thus alleviates the need for a vast amount of annotated data but also exhibits exceptional and highly intuitive explainability of results via attention weights. Code for Focused Decoder is available in our medical Vision Transformer library this http URL.
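The restriction of the cross-attention field of view can be sketched as masking non-RoI keys before the softmax; this is a toy NumPy version of the masking idea, not the paper's implementation:

```python
import numpy as np

def focused_cross_attention(queries, keys, values, roi_mask):
    """Cross-attention where keys outside an atlas-derived region of
    interest are hidden, so each query anchor only attends to relevant
    anatomy. Shapes: queries (Q, d), keys/values (K, d), roi_mask (K,) bool."""
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    scores = np.where(roi_mask[None, :], scores, -np.inf)  # hide non-RoI keys
    scores = scores - scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)                               # exp(-inf) -> 0
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ values
```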
- Published
- 2022
32. Proof-of-concept Study to Estimate Individual Post-Therapy Dosimetry in Men with Advanced Prostate Cancer Treated with 177Lu-PSMA I&T Therapy
- Author
-
Xue, Song, Gafita, Andrei, Dong, Chao, Zhao, Yu, Tetteh, Giles, Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690, Ziegler, Sibylle, Weber, Wolfgang, Afshar-Oromieh, Ali, Rominger, Axel, Eiber, Matthias, Shi, Kuangyu; https://orcid.org/0000-0002-8714-3084, Xue, Song, Gafita, Andrei, Dong, Chao, Zhao, Yu, Tetteh, Giles, Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690, Ziegler, Sibylle, Weber, Wolfgang, Afshar-Oromieh, Ali, Rominger, Axel, Eiber, Matthias, and Shi, Kuangyu; https://orcid.org/0000-0002-8714-3084
- Abstract
It is still debated whether individualized doses should be applied in the emerging PSMA-targeted radionuclide therapy (RLT). A critical consideration in this debate is the necessity and feasibility of individual estimation of post-therapy dosimetry before the treatment. In this study, we aimed to prove the concept of individual dosimetry prediction based on pre-therapy imaging and laboratory measurements. Methods: 23 patients with metastatic castration-resistant prostate cancer (mCRPC) treated with 177Lu-PSMA-I&T RLT were included retrospectively. Included patients had available pre-therapeutic 68Ga-PSMA-HEBD-CC PET/CT and at least 3 planar and 1 SPECT/CT dosimetry imaging studies. Overall, 43 cycles of 177Lu-PSMA I&T RLT were applied. Organ-based standard uptake value (SUV) uptake was obtained from pre-therapy PET/CT scans. Patient-individual dosimetry was calculated for kidney, liver, spleen, and salivary glands using Hermes Hybrid Dosimetry 4.0 from the post-treatment 177Lu-PSMA I&T imaging studies. Machine learning methods were explored for individual dose prediction from PET images. The accuracy of these dose predictions was compared with the accuracy of population-based dosimetry estimates. Mean absolute percentage error was used to assess the prediction error of estimated dosimetry. Results: An optimal machine learning method achieved a dosimetry prediction error of 15.8 ± 13.2% for kidney, 29.6 ± 13.7% for liver, 23.8 ± 13.1% for salivary glands, and 32.1 ± 31.4% for spleen. In contrast, the prediction based on the literature population mean had a significantly larger error (p < 0.01): 25.5 ± 17.3% for kidney, 139.1 ± 111.5% for liver, 67.0 ± 58.3% for salivary glands, and 54.1 ± 215.3% for spleen. Conclusion: The preliminary results confirmed the feasibility of individual estimation of post-therapy dosimetry before the RLT and its added value to empirical population-based estimation. The exploration of individual dose prediction may support the identification of the role of treat
- Published
- 2022
33. Where is VALDO? VAscular Lesions Detection and segmentatiOn challenge at MICCAI 2021
- Author
-
Sudre, Carole H, Ezhov, Ivan, Kofler, Florian, Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690, et al, Sudre, Carole H, Ezhov, Ivan, Kofler, Florian, Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690, and et al
- Abstract
Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and interrater variability. Automated rating may benefit biomedical research as well as clinical assessment, but the diagnostic reliability of existing algorithms is unknown. Here, we present the results of the VAscular Lesions DetectiOn and Segmentation (Where is VALDO?) challenge that was run as a satellite event at the international conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021. This challenge aimed to promote the development of methods for automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2), and lacunes of presumed vascular origin (Task 3), while leveraging weak and noisy labels. Overall, 12 teams participated in the challenge, proposing solutions for one or more tasks (4 for Task 1 - EPVS, 9 for Task 2 - Microbleeds, and 6 for Task 3 - Lacunes). Multi-cohort data was used in both training and evaluation. Results showed a large variability in performance both across teams and across tasks, with promising results notably for Task 1 - EPVS and Task 2 - Microbleeds, and not yet practically useful results for Task 3 - Lacunes. It also highlighted a performance inconsistency across cases that may deter use at an individual level, while still proving useful at a population level.
- Published
- 2022
34. Quantifying hemodynamics in the aorta with four-dimensional flow magnetic resonance imaging
- Author
-
Menze, Bjoern H. (Prof. Dr.), Schnell, Susanne (Prof. Dr.), Hennemuth, Anja B. (Prof. Dr.), Zimmermann, Judith, Menze, Bjoern H. (Prof. Dr.), Schnell, Susanne (Prof. Dr.), Hennemuth, Anja B. (Prof. Dr.), and Zimmermann, Judith
- Abstract
This dissertation advances the use of magnetic resonance imaging (MRI)-based quantitative flow markers for aortic disease management. Image analysis methods for quantifying hemodynamics are evaluated using in vivo and in vitro 4D flow MRI of healthy and diseased aortas, and CFD-FSI simulations are used for direct comparison with matched boundary conditions. This thesis emphasizes the versatility and promising potential of quantitative 4D flow MRI, but also underlines important limitations.
- Published
- 2021
35. Deep Learning for Fast and Robust Multiparametric Magnetic Resonance Imaging
- Author
-
Kirschke, Jan S. (Prof. Dr.), Menze, Bjoern H. (Prof. Dr.), Menzel, Marion I. (Priv.-Doz. Dr.), Pirkl, Carolin Martha Anna, Kirschke, Jan S. (Prof. Dr.), Menze, Bjoern H. (Prof. Dr.), Menzel, Marion I. (Priv.-Doz. Dr.), and Pirkl, Carolin Martha Anna
- Abstract
Despite the great potential of quantitative Magnetic Resonance Imaging (MRI) for comprehensive tissue characterization, the generally long acquisition times hamper a broad clinical deployment. This dissertation aims at developing Deep Learning methods for fast and robust multiparametric MRI to meet the key clinical needs for image-based biomarkers. The proposed methodological advances for multiparametric mapping via transient-state imaging techniques are validated, and initial clinical experience is demonstrated.
- Published
- 2021
36. Deep learning based medical image segmentation and classification for artificial intelligence healthcare
- Author
-
Menze, Bjoern H. (Prof. Dr.), Langs, Georg (Prof. Dr.), and Zhao, Yu
- Abstract
This thesis focuses on developing novel deep-learning-based methods for medical image segmentation and classification problems such as small-organ segmentation, prostate cancer lesion characterization, parkinsonian syndrome diagnosis, lymph node metastasis prediction, and microsatellite instability prediction.
- Published
- 2021
37. Deep Convolutional Neural Networks for Biomedical Image Analysis
- Author
-
Menze, Bjoern H. (Prof. Dr.), Razansky, Daniel (Prof. Dr.), and Schoppe, Oliver
- Abstract
Despite the breakthroughs of machine learning in biomedical image analysis, adoption in practice is slow due to several bottlenecks: scarcity of annotated training data, limited reliability of those annotations, and insufficient generalization of the models. This dissertation addresses these bottlenecks by developing efficient training strategies for models that generalize well and account for imperfect labels, along three use cases: cancer metastasis detection down to single cancer cells, transfer learning across biomedical domains, and whole-body organ segmentation.
- Published
- 2021
38. Velocity-To-Pressure (V2P) - Net: Inferring Relative Pressures from Time-Varying 3D Fluid Flow Velocities
- Author
-
Feragen, Aasa, Sommer, Stefan, Schnabel, J, Nielsen, Mads, Shit, Suprosanna; https://orcid.org/0000-0003-4435-7207, Das, Dhritiman, Ezhov, Ivan, Paetzold, Johannes C, Sanches, Augusto F, Thuerey, Nils, and Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690
- Abstract
Pressure inference from a series of velocity fields is a common problem arising in medical imaging when analyzing 4D data. Traditional approaches primarily rely on a numerical scheme to solve the pressure-Poisson equation to obtain a dense pressure estimate. This involves heavy expert intervention at each stage and requires significant computational resources. Concurrently, the application of current machine learning algorithms for solving partial differential equations is limited to domains with simple boundary conditions. We address these challenges in this paper and present V2P-Net: a novel neural-network-based alternative for inferring pressure from observed velocity fields. We design an end-to-end hybrid network architecture, motivated by the conventional Navier-Stokes solver, which encapsulates the complex boundary conditions. It achieves accurate pressure estimation, compared to a reference numerical solver, for simulated flow data in multiple complex geometries of human in-vivo vessels.
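As context for the neural alternative, the conventional baseline described above solves a pressure-Poisson problem numerically. The following is a minimal sketch of that kind of iterative solve (plain Jacobi on a 2D unit-spacing grid with Dirichlet boundaries); it illustrates the numerical-scheme baseline only, not the V2P-Net architecture, and all names and parameter choices are hypothetical.

```python
import numpy as np

def solve_pressure_poisson(rhs, p_bc, n_iters=5000):
    """Jacobi-style iteration for the pressure-Poisson equation
    laplacian(p) = rhs on a unit-spacing 2D grid, with Dirichlet
    boundary values taken from p_bc (interior initialized to zero)."""
    p = np.zeros_like(p_bc)
    p[0, :], p[-1, :] = p_bc[0, :], p_bc[-1, :]
    p[:, 0], p[:, -1] = p_bc[:, 0], p_bc[:, -1]
    for _ in range(n_iters):
        # 5-point stencil update: average of neighbors minus the source term
        p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1]
                                + p[1:-1, 2:] + p[1:-1, :-2]
                                - rhs[1:-1, 1:-1])
    return p

# toy check: with zero RHS and a linear pressure drop prescribed on the
# boundary, the interior must converge to the same linear field p = x
n = 32
p_bc = np.tile(np.linspace(0.0, 1.0, n), (n, 1))
p = solve_pressure_poisson(np.zeros((n, n)), p_bc)
```

The linear-drop case is a convenient sanity check because the exact harmonic solution is known in closed form.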
- Published
- 2021
39. A Deep Learning Approach to Predicting Collateral Flow in Stroke Patients Using Radiomic Features from Perfusion Images
- Author
-
Tetteh, Giles, Navarro, Fernando, Paetzold, Johannes, Kirschke, Jan, Zimmer, Claus, and Menze, Bjoern H.
- Abstract
Collateral circulation results from specialized anastomotic channels which are capable of providing oxygenated blood to regions with compromised blood flow caused by ischemic injuries. The quality of collateral circulation has been established as a key factor in determining the likelihood of a favorable clinical outcome and strongly influences the choice of stroke care model - that is, the decision to transport or treat eligible patients immediately. Though there exist several imaging methods and grading criteria for quantifying collateral blood flow, the actual grading is mostly done through manual inspection of the acquired images. This approach is associated with a number of challenges. First, it is time-consuming - the clinician needs to scan through several image slices to locate the region of interest before deciding which severity grade to assign. Second, the final grade assigned to a patient is prone to bias and inconsistency depending on the clinician's experience level. We present a deep learning approach to predicting collateral flow grading in stroke patients based on radiomic features extracted from MR perfusion data. First, we formulate region-of-interest detection as a reinforcement learning problem and train a deep learning network to automatically detect the occluded region within the 3D MR perfusion volumes. Second, we extract radiomic features from the obtained region of interest through local image descriptors and denoising auto-encoders. Finally, we apply a convolutional neural network and other machine learning classifiers to the extracted radiomic features to automatically predict the collateral flow grading of the given patient volume as one of three severity classes - no flow (0), moderate flow (1), and good flow (2)...
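The abstract does not enumerate the radiomic features used; purely as an illustration, the sketch below computes generic first-order intensity statistics inside a 3D region of interest, which is the simplest family of radiomic descriptors. Function and feature names are hypothetical, not the authors' exact pipeline.

```python
import numpy as np

def first_order_features(volume, roi_mask):
    """Generic first-order intensity statistics inside a 3D ROI."""
    vals = volume[roi_mask.astype(bool)]
    return {
        "mean": float(vals.mean()),
        "std": float(vals.std()),
        "p10": float(np.percentile(vals, 10)),
        "p90": float(np.percentile(vals, 90)),
        "energy": float(np.sum(vals ** 2)),
    }

# synthetic perfusion-like volume with a cubic ROI in its center
rng = np.random.default_rng(0)
vol = rng.normal(100.0, 15.0, size=(16, 16, 16))
mask = np.zeros_like(vol, dtype=bool)
mask[4:12, 4:12, 4:12] = True
feats = first_order_features(vol, mask)
```

Such feature dictionaries can then be vectorized and fed to any downstream classifier.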
- Published
- 2021
40. Semi-Implicit Neural Solver for Time-dependent Partial Differential Equations
- Author
-
Shit, Suprosanna, Ezhov, Ivan, Mächler, Leon, R., Abinav, Lipkova, Jana, Paetzold, Johannes C., Kofler, Florian, Piraud, Marie, and Menze, Bjoern H.
- Abstract
Fast and accurate solutions of time-dependent partial differential equations (PDEs) are of pivotal interest to many research fields, including physics, engineering, and biology. Generally, implicit/semi-implicit schemes are preferred over explicit ones to improve stability and correctness. However, existing semi-implicit methods are usually iterative and employ a general-purpose solver, which may be sub-optimal for a specific class of PDEs. In this paper, we propose a neural solver to learn an optimal iterative scheme in a data-driven fashion for any class of PDEs. Specifically, we modify a single iteration of a semi-implicit solver using a deep neural network. We provide theoretical guarantees for the correctness and convergence of neural solvers analogous to conventional iterative solvers. In addition to the commonly used Dirichlet boundary condition, we adopt a diffuse-domain approach to incorporate diverse types of boundary conditions, e.g., Neumann. We show that the proposed neural solver can go beyond linear PDEs and applies to a class of non-linear PDEs, where the non-linear component is non-stiff. We demonstrate the efficacy of our method on 2D and 3D scenarios. To this end, we show that our model generalizes to parameter settings different from those seen during training and achieves faster convergence than semi-implicit schemes.
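To make the semi-implicit setting concrete, here is a minimal sketch of a conventional backward-Euler step for the 1D heat equation, solved by fixed-point (Jacobi-style) iteration - the kind of general-purpose iterative scheme whose single iteration the paper proposes to replace with a learned update. This is an illustrative assumption-laden baseline, not the authors' neural solver.

```python
import numpy as np

def backward_euler_step(u, r, n_inner=200):
    """One semi-implicit (backward Euler) step of u_t = u_xx with Dirichlet
    boundaries, solved iteratively:
    (1 + 2r) u_i - r (u_{i-1} + u_{i+1}) = u_i^old,  with r = dt/dx^2."""
    u_new = u.copy()
    for _ in range(n_inner):
        u_new[1:-1] = (u[1:-1] + r * (u_new[:-2] + u_new[2:])) / (1.0 + 2.0 * r)
    return u_new

# hot spike in the middle of a cold rod; diffusion smooths and shrinks it
n, r = 65, 2.0                 # r > 0.5 would be unstable for explicit FTCS
u = np.zeros(n)
u[n // 2] = 1.0
for _ in range(10):
    u = backward_euler_step(u, r)
```

The inner loop converges because the implicit system is strictly diagonally dominant; the paper's idea is to learn a better update than this generic iteration.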
- Published
- 2021
41. Whole Brain Vessel Graphs: A Dataset and Benchmark for Graph Learning and Neuroscience (VesselGraph)
- Author
-
Paetzold, Johannes C., McGinnis, Julian, Shit, Suprosanna, Ezhov, Ivan, Büschl, Paul, Prabhakar, Chinmay, Todorov, Mihail I., Sekuboyina, Anjany, Kaissis, Georgios, Ertürk, Ali, Günnemann, Stephan, and Menze, Bjoern H.
- Abstract
Biological neural networks define the brain function and intelligence of humans and other mammals, and form ultra-large, spatial, structured graphs. Their neuronal organization is closely interconnected with the spatial organization of the brain's microvasculature, which supplies oxygen to the neurons and builds a complementary spatial graph. This vasculature (or the vessel structure) plays an important role in neuroscience; for example, the organization of (and changes to) vessel structure can represent early signs of various pathologies, e.g. Alzheimer's disease or stroke. Recently, advances in tissue clearing have enabled whole brain imaging and segmentation of the entirety of the mouse brain's vasculature. Building on these advances in imaging, we present an extendable dataset of whole-brain vessel graphs based on specific imaging protocols. Specifically, we extract vascular graphs using a refined graph extraction scheme leveraging the volume rendering engine Voreen and provide them in an accessible and adaptable form through the OGB and PyTorch Geometric dataloaders. Moreover, we benchmark numerous state-of-the-art graph learning algorithms on the biologically relevant tasks of vessel prediction and vessel classification using the introduced vessel graph dataset. Our work paves a path towards advancing graph learning research into the field of neuroscience. Complementarily, the presented dataset raises challenging graph learning research questions for the machine learning community, in terms of incorporating biological priors into learning algorithms, or in scaling these algorithms to handle sparse, spatial graphs with millions of nodes and edges. All datasets and code are available for download at https://github.com/jocpae/VesselGraph. Comment: Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track
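As a toy illustration of the vessel (link) prediction task benchmarked above, the sketch below scores the missing edges of a small undirected graph by common-neighbor counts - a classic non-learned link-prediction baseline, not one of the benchmarked graph learning algorithms or the VesselGraph loaders. The graph and node indices are invented for the example.

```python
import numpy as np

def common_neighbor_scores(adj):
    """Score every non-edge of an undirected graph by its number of common
    neighbors; higher scores suggest a more likely missing edge."""
    cn = adj @ adj              # (i, j) entry = number of shared neighbors
    n = adj.shape[0]
    scores = []
    for i in range(n):
        for j in range(i + 1, n):
            if adj[i, j] == 0:
                scores.append((int(cn[i, j]), i, j))
    return sorted(scores, reverse=True)

# toy "vessel" graph: nodes 0 and 3 share three neighbors (1, 2, 4) but are
# not connected - the missing segment 0-3 should rank first
adj = np.zeros((5, 5), dtype=int)
for a, b in [(0, 1), (0, 2), (0, 4), (3, 1), (3, 2), (3, 4)]:
    adj[a, b] = adj[b, a] = 1
best_score, i, j = common_neighbor_scores(adj)[0]
```

Learned methods replace the hand-crafted score with node embeddings, but the task framing is the same.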
- Published
- 2021
- Full Text
- View/download PDF
42. Accelerated 3D whole-brain T1, T2, and proton density mapping: feasibility for clinical glioma MR imaging
- Author
-
Pirkl, Carolin M., Nunez-Gonzalez, Laura, Kofler, Florian, Endt, Sebastian, Grundl, Lioba, Golbabaee, Mohammad, Gómez, Pedro A., Cencini, Matteo, Buonincontri, Guido, Schulte, Rolf F., Smits, Marion, Wiestler, Benedikt, Menze, Bjoern H., Menzel, Marion I., and Hernandez-Tamames, Juan A.
- Abstract
Purpose: Advanced MRI-based biomarkers offer comprehensive and quantitative information for the evaluation and characterization of brain tumors. In this study, we report initial clinical experience in routine glioma imaging with a novel, fully 3D multiparametric quantitative transient-state imaging (QTI) method for tissue characterization based on T1 and T2 values. Methods: To demonstrate the viability of the proposed 3D QTI technique, nine glioma patients (grade II–IV), with a variety of disease states and treatment histories, were included in this study. First, we investigated the feasibility of 3D QTI (6:25 min scan time) for its use in clinical routine imaging, focusing on image reconstruction, parameter estimation, and contrast-weighted image synthesis. Second, for an initial assessment of 3D QTI-based quantitative MR biomarkers, we performed a ROI-based analysis to characterize T1 and T2 components in tumor and peritumoral tissue. Results: The 3D acquisition combined with a compressed sensing reconstruction and neural network-based parameter inference produced parametric maps with high isotropic resolution (1.125 × 1.125 × 1.125 mm3 voxel size) and whole-brain coverage (22.5 × 22.5 × 22.5 cm3 FOV), enabling the synthesis of clinically relevant T1-weighted, T2-weighted, and FLAIR contrasts without any extra scan time. Our study revealed increased T1 and T2 values in tumor and peritumoral regions compared to contralateral white matter, good agreement with healthy volunteer data, and high inter-subject consistency. Conclusion: 3D QTI demonstrated comprehensive tissue assessment of tumor substructures captured in T1 and T2 parameters. Aiming for fast acquisition of quantitative MR biomarkers, 3D QTI has potential to improve disease characterization in brain tumor patients under tight clinical time-constraints.
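The contrast-synthesis step described above can be illustrated with the standard spin-echo signal equation S = PD · (1 − exp(−TR/T1)) · exp(−TE/T2), which generates weighted contrasts from quantitative maps without extra scan time. The two tissue parameter sets and sequence timings below are illustrative assumptions, not the paper's QTI pipeline.

```python
import numpy as np

def synthesize_spin_echo(pd, t1, t2, tr, te):
    """Synthesize a spin-echo contrast from quantitative PD, T1, T2 maps via
    the standard signal equation S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)."""
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# two illustrative tissues (times in ms): white-matter-like vs CSF-like
pd = np.array([0.7, 1.0])
t1 = np.array([800.0, 4000.0])
t2 = np.array([80.0, 2000.0])

t1w = synthesize_spin_echo(pd, t1, t2, tr=500.0, te=15.0)    # T1-weighted
t2w = synthesize_spin_echo(pd, t1, t2, tr=4000.0, te=100.0)  # T2-weighted
```

Short TR/TE makes the short-T1 tissue bright (T1 weighting); long TR/TE makes the long-T2 tissue bright (T2 weighting), matching the expected clinical contrasts.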
- Published
- 2021
43. Evaluating the Robustness of Self-Supervised Learning in Medical Imaging
- Author
-
Navarro, Fernando, Watanabe, Christopher, Shit, Suprosanna, Sekuboyina, Anjany, Peeken, Jan C., Combs, Stephanie E., and Menze, Bjoern H.
- Abstract
Self-supervision has been demonstrated to be an effective learning strategy when training target tasks on small annotated datasets. While current research focuses on creating novel pretext tasks to learn meaningful and reusable representations for the target task, these efforts obtain marginal performance gains compared to fully-supervised learning. Meanwhile, little attention has been given to studying the robustness of networks trained in a self-supervised manner. In this work, we demonstrate that networks trained via self-supervised learning have superior robustness and generalizability compared to fully-supervised learning in the context of medical imaging. Our experiments on pneumonia detection in X-rays and multi-organ segmentation in CT yield consistent results exposing the hidden benefits of self-supervision for learning robust feature representations.
- Published
- 2021
44. METGAN: Generative Tumour Inpainting and Modality Synthesis in Light Sheet Microscopy
- Author
-
Horvath, Izabela, Paetzold, Johannes C., Schoppe, Oliver, Al-Maskari, Rami, Ezhov, Ivan, Shit, Suprosanna, Li, Hongwei, Ertuerk, Ali, and Menze, Bjoern H.
- Abstract
Novel multimodal imaging methods are capable of generating extensive, super high resolution datasets for preclinical research. Yet, a massive lack of annotations prevents the broad use of deep learning to analyze such data. So far, existing generative models fail to mitigate this problem because of frequent labeling errors. In this paper, we introduce a novel generative method which leverages real anatomical information to generate realistic image-label pairs of tumours. We construct a dual-pathway generator, for the anatomical image and label, trained in a cycle-consistent setup, constrained by an independent, pretrained segmentor. The generated images yield significant quantitative improvement compared to existing methods. To validate the quality of synthesis, we train segmentation networks on a dataset augmented with the synthetic data, substantially improving the segmentation over baseline.
- Published
- 2021
45. Proteomics of spatially identified tissues in whole organs
- Author
-
Bhatia, Harsharan Singh; https://orcid.org/0000-0001-9912-8263, Brunner, Andreas-David; https://orcid.org/0000-0002-2733-7899, Rong, Zhouyi; https://orcid.org/0000-0003-4849-1278, Mai, Hongcheng, Thielert, Marvin, Al-Maskari, Rami, Paetzold, Johannes Christian, Kofler, Florian, Todorov, Mihail Ivilinov, Ali, Mayar, Molbay, Muge, Kolabas, Zeynep Ilgin, Kaltenecker, Doris, Müller, Stephan, Lichtenthaler, Stefan F, Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690, Theis, Fabian J; https://orcid.org/0000-0002-2419-1943, Mann, Matthias; https://orcid.org/0000-0003-1292-4799, and Ertürk, Ali; https://orcid.org/0000-0001-5163-5100
- Abstract
Spatial molecular profiling of complex tissues is essential to investigate cellular function in physiological and pathological states. However, methods for molecular analysis of biological specimens imaged in 3D as a whole are lacking. Here, we present DISCO-MS, a technology combining whole-organ imaging, deep learning-based image analysis, and ultra-high sensitivity mass spectrometry. DISCO-MS yielded qualitative and quantitative proteomics data indistinguishable from uncleared samples in both rodent and human tissues. Using DISCO-MS, we investigated microglia activation locally along axonal tracts after brain injury and revealed known and novel biomarkers. Furthermore, we identified initial individual amyloid-beta plaques in the brains of a young familial Alzheimer’s disease mouse model, characterized the core proteome of these aggregates, and highlighted their compositional heterogeneity. Thus, DISCO-MS enables quantitative, unbiased proteome analysis of target tissues following unbiased imaging of entire organs, providing new diagnostic and therapeutic opportunities for complex diseases, including neurodegeneration.
- Published
- 2021
46. Evaluating the Robustness of Self-Supervised Learning in Medical Imaging
- Author
-
Navarro, Fernando; https://orcid.org/0000-0001-8906-9079, Watanabe, Christopher, Shit, Suprosanna; https://orcid.org/0000-0003-4435-7207, Sekuboyina, Anjany, Peeken, Jan C; https://orcid.org/0000-0003-2679-9853, Combs, Stephanie E; https://orcid.org/0000-0002-5233-1536, and Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690
- Abstract
Self-supervision has been demonstrated to be an effective learning strategy when training target tasks on small annotated datasets. While current research focuses on creating novel pretext tasks to learn meaningful and reusable representations for the target task, these efforts obtain marginal performance gains compared to fully-supervised learning. Meanwhile, little attention has been given to studying the robustness of networks trained in a self-supervised manner. In this work, we demonstrate that networks trained via self-supervised learning have superior robustness and generalizability compared to fully-supervised learning in the context of medical imaging. Our experiments on pneumonia detection in X-rays and multi-organ segmentation in CT yield consistent results exposing the hidden benefits of self-supervision for learning robust feature representations.
- Published
- 2021
47. Unmixing tissue compartments via deep learning T1-T2-relaxation correlation imaging
- Author
-
Endt, Sebastian, Pirkl, Carolin M, Verdun, Claudio M, Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690, and Menzel, Marion I
- Abstract
Magnetic resonance imaging is a versatile diagnostic tool with numerous clinical applications. However, despite advances towards higher resolutions, it cannot resolve images on a cellular level. To nevertheless probe tissue microstructure, multidimensional correlation imaging emerges as a promising method. It takes advantage of the fact that each tissue compartment has a unique signal. Usually, these multi-compartmental characteristics are averaged over a macroscopic voxel. In contrast, correlation imaging aims to probe the true, heterogeneous nature of tissue. Based on image series acquired with varying inversion time TI and echo time TE, multiparametric spectra of T1 and T2 relaxation times can be reconstructed in every voxel, revealing sub-voxel tissue classes. However, even with impractically long acquisition times spent on dense sampling of the image (3D) and TI-TE (2D) spaces, the inverse problem of retrieving these components from measured signal curves remains highly ill-conditioned and requires expensive regularized approaches. We formulate multiparametric correlation imaging as a classification problem and propose a flexible, physics-informed deep learning framework comprising a multilayer perceptron. This way, we efficiently reconstruct voxel-wise T1-T2 spectra with increased robustness to noise and undersampling in the TI-TE space compared to state-of-the-art regression. Our results show the feasibility of further accelerating the acquisition by a factor of 4. After training on synthetic data that is not constrained by pre-defined tissue classes and is independent of annotated data, we test our method on in-vivo brain data, revealing sub-voxel compartments in white and gray matter. This allows us to quantify tissue microstructure and will potentially lead to novel biomarkers.
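A common non-learned baseline for this kind of inversion is dictionary matching: simulate inversion-recovery spin-echo signals over a (T1, T2) grid and match a measured voxel signal to its nearest atom. The sketch below illustrates that idea for a single-compartment voxel; the signal model S = |1 − 2·exp(−TI/T1)| · exp(−TE/T2), the grids, and the noise level are illustrative assumptions, not the authors' deep learning framework.

```python
import numpy as np

def ir_se_signal(ti, te, t1, t2):
    """Inversion-recovery spin-echo magnitude signal for one compartment."""
    return np.abs(1.0 - 2.0 * np.exp(-ti / t1)) * np.exp(-te / t2)

# sample the TI-TE plane and build a normalized dictionary over a T1-T2 grid
ti = np.linspace(50.0, 3000.0, 12)           # ms
te = np.linspace(10.0, 300.0, 8)             # ms
TI, TE = np.meshgrid(ti, te, indexing="ij")
t1_grid = np.linspace(200.0, 2000.0, 19)     # 100 ms steps
t2_grid = np.linspace(20.0, 200.0, 19)       # 10 ms steps

atoms, params = [], []
for t1 in t1_grid:
    for t2 in t2_grid:
        s = ir_se_signal(TI, TE, t1, t2).ravel()
        atoms.append(s / np.linalg.norm(s))
        params.append((t1, t2))
atoms = np.array(atoms)

# noisy measurement of a voxel with true T1 = 800 ms, T2 = 80 ms
rng = np.random.default_rng(1)
meas = ir_se_signal(TI, TE, 800.0, 80.0).ravel()
meas = meas + rng.normal(0.0, 0.01, meas.size)
t1_hat, t2_hat = params[int(np.argmax(atoms @ meas))]
```

Multi-compartment voxels turn this matching into the ill-conditioned spectral problem the paper reframes as classification.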
- Published
- 2021
48. The Brain Tumor Sequence Registration Challenge: Establishing Correspondence between Pre-Operative and Follow-up MRI scans of diffuse glioma patients
- Author
-
Baheti, Bhakti, Waldmannstetter, Diana, Chakrabarty, Satrajit, Akbari, Mohammad, Bilello, Michel, Wiestler, Benedikt; https://orcid.org/0000-0002-2963-7772, Schwarting, Julian, Calabrese, Evan, Rudie, Jeffrey, Abidi, Syed, Mousa, Mina, Villanueva-Meyer, Javier, Marcus, Daniel S, Davatzikos, Christos, Sotiras, Aristeidis, Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690, and Bakas, Spyridon; https://orcid.org/0000-0001-8734-6482
- Abstract
Registration of longitudinal brain Magnetic Resonance Imaging (MRI) scans containing pathologies is challenging due to tissue appearance changes and remains an unsolved problem. This paper describes the first Brain Tumor Sequence Registration (BraTS-Reg) challenge, focusing on estimating correspondences between pre-operative and follow-up scans of the same patient diagnosed with a diffuse brain glioma. The BraTS-Reg challenge intends to establish a public benchmark environment for deformable registration algorithms. The associated dataset comprises de-identified multi-institutional multi-parametric MRI (mpMRI) data, curated for each scan's size and resolution according to a common anatomical template. Clinical experts have generated extensive annotations of landmark points within the scans, descriptive of distinct anatomical locations across the temporal domain. The training data, along with these ground truth annotations, will be released to participants to design and develop their registration algorithms, whereas the annotations for the validation and testing data will be withheld by the organizers and used to evaluate the participants' containerized algorithms. Each submitted algorithm will be quantitatively evaluated using several metrics, such as the Median Absolute Error (MAE), Robustness, and the Jacobian determinant.
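Of the metrics listed, the Median Absolute Error over landmark pairs is straightforward to sketch. The snippet below assumes landmarks are given as (x, y, z) coordinates in millimeters; it is an illustration of the metric's definition, not the challenge's official evaluation code.

```python
import numpy as np

def landmark_mae(pred, gt):
    """Median Absolute Error over landmark pairs: the median of the
    Euclidean distances between predicted and ground-truth landmarks."""
    dists = np.linalg.norm(pred - gt, axis=1)
    return float(np.median(dists))

# three ground-truth landmarks and predictions offset by 1, 2, and 4 mm
gt = np.array([[10.0, 20.0, 30.0],
               [40.0, 50.0, 60.0],
               [70.0, 80.0, 90.0]])
pred = gt + np.array([[1.0, 0.0, 0.0],
                      [0.0, 2.0, 0.0],
                      [0.0, 0.0, 4.0]])
mae = landmark_mae(pred, gt)
```

Using the median rather than the mean makes the metric robust to a few badly matched landmarks.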
- Published
- 2021
49. A Deep Learning Approach to Predicting Collateral Flow in Stroke Patients Using Radiomic Features from Perfusion Images
- Author
-
Tetteh, Giles, Navarro, Fernando; https://orcid.org/0000-0001-8906-9079, Paetzold, Johannes C, Kirschke, Jan S; https://orcid.org/0000-0002-7557-0003, Zimmer, Claus, and Menze, Bjoern H; https://orcid.org/0000-0003-4136-5690
- Abstract
Collateral circulation results from specialized anastomotic channels which are capable of providing oxygenated blood to regions with compromised blood flow caused by ischemic injuries. The quality of collateral circulation has been established as a key factor in determining the likelihood of a favorable clinical outcome and strongly influences the choice of stroke care model - that is, the decision to transport or treat eligible patients immediately. Though there exist several imaging methods and grading criteria for quantifying collateral blood flow, the actual grading is mostly done through manual inspection of the acquired images. This approach is associated with a number of challenges. First, it is time-consuming - the clinician needs to scan through several image slices to locate the region of interest before deciding which severity grade to assign. Second, the final grade assigned to a patient is prone to bias and inconsistency depending on the clinician's experience level. We present a deep learning approach to predicting collateral flow grading in stroke patients based on radiomic features extracted from MR perfusion data. First, we formulate region-of-interest detection as a reinforcement learning problem and train a deep learning network to automatically detect the occluded region within the 3D MR perfusion volumes. Second, we extract radiomic features from the obtained region of interest through local image descriptors and denoising auto-encoders. Finally, we apply a convolutional neural network and other machine learning classifiers to the extracted radiomic features to automatically predict the collateral flow grading of the given patient volume as one of three severity classes - no flow (0), moderate flow (1), and good flow (2)...
- Published
- 2021
50. Accelerated 3D whole-brain T1, T2, and proton density mapping: feasibility for clinical glioma MR imaging
- Author
-
Pirkl, Carolin M; https://orcid.org/0000-0002-5759-5290, Nunez-Gonzalez, Laura, Kofler, Florian, Endt, Sebastian, Grundl, Lioba, Golbabaee, Mohammad, Gómez, Pedro A, Cencini, Matteo, Buonincontri, Guido, Schulte, Rolf F, Smits, Marion, Wiestler, Benedikt, Menze, Bjoern H, Menzel, Marion I, and Hernandez-Tamames, Juan A
- Abstract
Purpose: Advanced MRI-based biomarkers offer comprehensive and quantitative information for the evaluation and characterization of brain tumors. In this study, we report initial clinical experience in routine glioma imaging with a novel, fully 3D multiparametric quantitative transient-state imaging (QTI) method for tissue characterization based on T1 and T2 values. Methods: To demonstrate the viability of the proposed 3D QTI technique, nine glioma patients (grade II-IV), with a variety of disease states and treatment histories, were included in this study. First, we investigated the feasibility of 3D QTI (6:25 min scan time) for its use in clinical routine imaging, focusing on image reconstruction, parameter estimation, and contrast-weighted image synthesis. Second, for an initial assessment of 3D QTI-based quantitative MR biomarkers, we performed a ROI-based analysis to characterize T1 and T2 components in tumor and peritumoral tissue. Results: The 3D acquisition combined with a compressed sensing reconstruction and neural network-based parameter inference produced parametric maps with high isotropic resolution (1.125 × 1.125 × 1.125 mm3 voxel size) and whole-brain coverage (22.5 × 22.5 × 22.5 cm3 FOV), enabling the synthesis of clinically relevant T1-weighted, T2-weighted, and FLAIR contrasts without any extra scan time. Our study revealed increased T1 and T2 values in tumor and peritumoral regions compared to contralateral white matter, good agreement with healthy volunteer data, and high inter-subject consistency. Conclusion: 3D QTI demonstrated comprehensive tissue assessment of tumor substructures captured in T1 and T2 parameters. Aiming for fast acquisition of quantitative MR biomarkers, 3D QTI has potential to improve disease characterization in brain tumor patients under tight clinical time-constraints. Keywords: Glioma imaging; Image-based biomarkers; MRI; Multiparametric imaging; Neural networks.
- Published
- 2021