43 results for "Breininger K"
Search Results
2. POS0900 AUTOMATIC SCORING OF EROSION, SYNOVITIS AND BONE OEDEMA IN RHEUMATOID ARTHRITIS USING DEEP LEARNING ON HAND MAGNETIC RESONANCE IMAGING
- Author
-
Schlereth, M., primary, Kleyer, A., additional, Utz, J., additional, Folle, L., additional, Bayat, S., additional, Fagni, F., additional, Minopoulou, I., additional, Tascilar, K., additional, Taubmann, J., additional, Uder, M., additional, Heimann, T., additional, Qiu, J., additional, Schett, G., additional, Breininger, K., additional, and Simon, D., additional
- Published
- 2023
- Full Text
- View/download PDF
3. Bildbasierte Berechnung der Grundfrequenz für den Einsatz in der Videostroboskopie
- Author
-
Kist, AM, Wölfl, AM, Breininger, K, and Schützenberger, A
- Abstract
Background: Videostroboscopy is the gold standard for diagnosing vocal fold oscillations. However, deriving the flash interval from the audio signal is susceptible to interfering noise, such as verbal instructions by the examiner, which limits the determination of the fundamental frequency; this frequency is, however, essential for the optimal flash interval. We investigate a new, real-time, AI-based approach that relies exclusively on the endoscopic images. Materials and Methods: The AI-assisted method uses footage from high-speed cameras to determine the fundamental frequency of the vocal fold oscillation from endoscopic images. For each frame, a deep neural network computes the relative opening of the glottal area. From randomly acquired frames and the relative opening computed from them, we recover the fundamental frequency with mathematical techniques ("compressed sensing"). Our method was tested on healthy volunteers. Results: We show that our AI- and image-based approach computes the fundamental frequency exactly in over 95% of cases with a recording duration of under 600 ms. The data analysis of our AI method takes less than 75 ms and can therefore be provided in real time. We further observe that the endoscopic frames must not be acquired in a structured manner if the fundamental frequency is to be determined adequately. Discussion: Our method determines the fundamental frequency very accurately and thus represents a fast alternative to classical audio-based videostroboscopy. How the method behaves in pathophysiological cases will be investigated in future studies. Conclusion: Laryngeal videostroboscopy does not per se require access to error-free audio data. The unique AI-assisted analysis of individual frames enables the calculation of the fundamental frequency.
- Published
- 2023
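For illustration, a minimal sketch of the core idea in the record above: a relative glottal opening value is computed per frame at irregular (randomly timed) acquisition points, and the fundamental frequency is recovered from those samples. The paper uses a compressed-sensing formulation; here a Lomb-Scargle periodogram from SciPy stands in as a simpler estimator for irregularly sampled data, and all signal values and parameters are invented placeholders.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)

# Simulated ground truth: a 140 Hz vocal-fold oscillation observed for < 600 ms
f0_true = 140.0

# Randomly timed frames: compressed-sensing-style recovery requires
# unstructured sampling, as noted in the abstract
t = np.sort(rng.uniform(0.0, 0.6, size=200))                 # seconds
openness = 0.5 + 0.5 * np.sin(2 * np.pi * f0_true * t)       # relative glottal opening, 0..1

# Periodogram for irregular samples on a physiological frequency grid
freqs_hz = np.linspace(60, 400, 2000)
power = lombscargle(t, openness - openness.mean(), 2 * np.pi * freqs_hz)

f0_est = freqs_hz[np.argmax(power)]
print(f"estimated f0: {f0_est:.1f} Hz (true {f0_true} Hz)")
```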
4. Mitosis domain generalization in histopathology images - The MIDOG challenge
- Author
-
Aubreville, M., Stathonikos, N., Bertram, C.A., Klopfleisch, R., Hoeve, N. Ter, Ciompi, F., Wilm, F., Marzahl, C., Donovan, T.A., Maier, A., Breen, J., Ravikumar, N., Chung, Y., Park, J., Nateghi, R., Pourakpour, F., Fick, R.H.J., Hadj, S. Ben, Jahanifar, M., Shephard, A., Dexl, J., Wittenberg, T., Kondo, S., Lafarge, M.W., Koelzer, V.H., Liang, J., Wang, Yubo, Long, X., Liu, J., Razavi, S., Khademi, A., Yang, S., Wang, Xiyue, Erber, R., Klang, A., Lipnik, K., Bolfa, P., Dark, M.J., Wasinger, G., Veta, M., and Breininger, K.
- Abstract
The density of mitotic figures (MF) within tumor tissue is known to be highly correlated with tumor proliferation and thus is an important marker in tumor grading. Recognition of MF by pathologists is subject to a strong inter-rater bias, limiting its prognostic value. State-of-the-art deep learning methods can support experts but have been observed to strongly deteriorate when applied in a different clinical environment. The variability caused by using different whole slide scanners has been identified as one decisive component in the underlying domain shift. The goal of the MICCAI MIDOG 2021 challenge was the creation of scanner-agnostic MF detection algorithms. The challenge used a training set of 200 cases, split across four scanning systems. As test set, an additional 100 cases split across four scanning systems, including two previously unseen scanners, were provided. In this paper, we evaluate and compare the approaches that were submitted to the challenge and identify methodological factors contributing to better performance. The winning algorithm yielded an F1 score of 0.748 (CI95: 0.704-0.781), exceeding the performance of six experts on the same task.
- Published
- 2023
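The challenge overview above reports a detection F1 score with a 95% confidence interval. A minimal sketch of how such a figure can be obtained from per-case true-positive/false-positive/false-negative counts with a percentile bootstrap over cases; the counts below are random placeholders, not challenge data, and the bootstrap scheme is an assumption.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder per-case detection counts (TP, FP, FN) for 100 test cases
tp = rng.integers(5, 40, size=100)
fp = rng.integers(0, 15, size=100)
fn = rng.integers(0, 15, size=100)

def f1(tp_sum, fp_sum, fn_sum):
    precision = tp_sum / (tp_sum + fp_sum)
    recall = tp_sum / (tp_sum + fn_sum)
    return 2 * precision * recall / (precision + recall)

point_estimate = f1(tp.sum(), fp.sum(), fn.sum())

# Percentile bootstrap over cases for a 95% confidence interval
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(tp), size=len(tp))
    boot.append(f1(tp[idx].sum(), fp[idx].sum(), fn[idx].sum()))
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])

print(f"F1 = {point_estimate:.3f} (CI95: {ci_low:.3f}-{ci_high:.3f})")
```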
5. OP0292 CLASSIFICATION OF PSORIATIC ARTHRITIS, SERONEGATIVE RHEUMATOID ARTHRITIS, AND SEROPOSITIVE RHEUMATOID ARTHRITIS USING DEEP LEARNING ON MAGNETIC RESONANCE IMAGING
- Author
-
Folle, L., primary, Bayat, S., additional, Kleyer, A., additional, Fagni, F., additional, Kapsner, L., additional, Schlereth, M., additional, Meinderink, T., additional, Breininger, K., additional, Tascilar, K., additional, Krönke, G., additional, Uder, M., additional, Sticherling, M., additional, Bickelhaupt, S., additional, Schett, G., additional, Maier, A., additional, Roemer, F., additional, and Simon, D., additional
- Published
- 2022
- Full Text
- View/download PDF
6. Computer Assistance: A New Gold Standard for the Mitotic Count?
- Author
-
Bertram, C.A., primary, Aubreville, M., additional, Donovan, T.A., additional, Bartel, A., additional, Wilm, F., additional, Marzahl, C., additional, Assenmacher, C.A., additional, Becker, K., additional, Bennett, M., additional, Corner, S., additional, Cossic, B., additional, Denk, D., additional, Dettwiler, M., additional, Garcia Gonzalez, B., additional, Gurtner, C., additional, Haverkamp, A.K., additional, Heier, A., additional, Lehmbecker, A., additional, Merz, S., additional, Noland, E.L., additional, Plog, S., additional, Schmidt, A., additional, Sebastian, F., additional, Sledge, D.G., additional, Smedley, R.C., additional, Tecilla, M., additional, Thaiwong, T., additional, Fuchs-Baumgartinger, A., additional, Meuten, D.J., additional, Breininger, K., additional, Kiupel, M., additional, Maier, A., additional, and Klopfleisch, R., additional
- Published
- 2022
- Full Text
- View/download PDF
7. Expert Review of Algorithmic Mitotic Count Predictions Ensures High Reliability
- Author
-
Bertram, C.A., primary, Klopfleisch, R., additional, Bartel, A., additional, Donovan, T.A., additional, Fuchs-Baumgartinger, A., additional, Breininger, K., additional, Kiupel, M., additional, and Aubreville, M., additional
- Published
- 2022
- Full Text
- View/download PDF
8. AUTOMATIC SCORING OF EROSION, SYNOVITIS AND BONE OEDEMA IN RHEUMATOID ARTHRITIS USING DEEP LEARNING ON HAND MAGNETIC RESONANCE IMAGING.
- Author
-
Schlereth, M., Kleyer, A., Utz, J., Folle, L., Bayat, S., Fagni, F., Minopoulou, I., Tascilar, K., Taubmann, J., Uder, M., Heimann, T., Qiu, J., Schett, G., Breininger, K., and Simon, D.
- Published
- 2023
- Full Text
- View/download PDF
9. Towards graph-based reconstruction of the corticospinal tract
- Author
-
Breininger, K, Bauer, MHA, Kuhnt, D, Freisleben, B, and Nimsky, C
- Published
- 2013
10. Automatic assessment of AgNOR-scores in canine cutaneous mast cell tumors using a Deep Learning-based algorithm
- Author
-
Ganz, J., Bertram, C. A., Lipnik, K., Ammeling, J., Richter, B., Puget, C., Parlak, E., Diehl, L., Klopfleisch, R., Kiupel, M., Donovan, T. A., Breininger, K., and Aubreville, M.
- Published
- 2023
- Full Text
- View/download PDF
11. Information mismatch in PHH3-assisted mitosis annotation leads to interpretation shifts in H&E slide analysis.
- Author
-
Ganz J, Marzahl C, Ammeling J, Rosbach E, Richter B, Puget C, Denk D, Demeter EA, Tăbăran FA, Wasinger G, Lipnik K, Tecilla M, Valentine MJ, Dark MJ, Abele N, Bolfa P, Erber R, Klopfleisch R, Merz S, Donovan TA, Jabari S, Bertram CA, Breininger K, and Aubreville M
- Subjects
- Humans, Deep Learning, Algorithms, Reproducibility of Results, Staining and Labeling methods, Image Processing, Computer-Assisted methods, Mitosis, Histones metabolism
- Abstract
The count of mitotic figures (MFs) observed in hematoxylin and eosin (H&E)-stained slides is an important prognostic marker, as it is a measure for tumor cell proliferation. However, the identification of MFs has a known low inter-rater agreement. In a computer-aided setting, deep learning algorithms can help to mitigate this, but they require large amounts of annotated data for training and validation. Furthermore, label noise introduced during the annotation process may impede the algorithms' performance. Unlike H&E, where identification of MFs is based mainly on morphological features, the mitosis-specific antibody phospho-histone H3 (PHH3) specifically highlights MFs. Counting MFs on slides stained against PHH3 leads to higher agreement among raters and has therefore recently been used as a ground truth for the annotation of MFs in H&E. However, as PHH3 facilitates the recognition of cells indistinguishable from H&E staining alone, the use of this ground truth could potentially introduce an interpretation shift and even label noise into the H&E-related dataset, impacting model performance. This study analyzes the impact of PHH3-assisted MF annotation on inter-rater reliability and object level agreement through an extensive multi-rater experiment. Subsequently, MF detectors, including a novel dual-stain detector, were evaluated on the resulting datasets to investigate the influence of PHH3-assisted labeling on the models' performance. We found that the annotators' object-level agreement significantly increased when using PHH3-assisted labeling (F1: 0.53 to 0.74). However, this enhancement in label consistency did not translate to improved performance for H&E-based detectors, neither during the training phase nor the evaluation phase. Conversely, the dual-stain detector was able to benefit from the higher consistency. This reveals an information mismatch between the H&E and PHH3-stained images as the cause of this effect, which renders PHH3-assisted annotations not well-aligned for use with H&E-based detectors. Based on our findings, we propose an improved PHH3-assisted labeling procedure., (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
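A sketch of the kind of object-level agreement quoted in the record above (F1: 0.53 to 0.74): point annotations from two raters are greedily matched by centroid distance, and matched pairs count as true positives. The distance threshold and coordinates are illustrative assumptions; published mitosis benchmarks typically use a fixed pixel radius for matching.

```python
import numpy as np

def object_level_f1(coords_a, coords_b, max_dist=25.0):
    """Greedy one-to-one matching of point annotations from two raters.

    coords_*: (N, 2) arrays of annotation centroids in pixels.
    A pair counts as agreement (true positive) if closer than max_dist.
    """
    a, b = np.asarray(coords_a, float), np.asarray(coords_b, float)
    if len(a) == 0 or len(b) == 0:
        return 0.0
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    matched_a, matched_b, tp = set(), set(), 0
    # Match closest pairs first
    for idx in np.argsort(d, axis=None):
        i, j = np.unravel_index(idx, d.shape)
        if d[i, j] > max_dist:
            break
        if i in matched_a or j in matched_b:
            continue
        matched_a.add(i); matched_b.add(j); tp += 1
    fp, fn = len(a) - tp, len(b) - tp
    return 2 * tp / (2 * tp + fp + fn)

# Toy example: two raters annotating the same region
rater_1 = [(10, 10), (50, 52), (200, 140)]
rater_2 = [(12, 11), (48, 50), (300, 300)]
print(object_level_f1(rater_1, rater_2))  # 2 matched pairs out of 3 + 3 -> 0.67
```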
12. Artificial intelligence can be trained to predict c-KIT-11 mutational status of canine mast cell tumors from hematoxylin and eosin-stained histological slides.
- Author
-
Puget C, Ganz J, Ostermaier J, Conrad T, Parlak E, Bertram CA, Kiupel M, Breininger K, Aubreville M, and Klopfleisch R
- Abstract
Numerous prognostic factors are currently assessed histologically and immunohistochemically in canine mast cell tumors (MCTs) to evaluate clinical behavior. In addition, polymerase chain reaction (PCR) is often performed to detect internal tandem duplication (ITD) mutations in exon 11 of the c-KIT gene (c-KIT-11-ITD) to predict the therapeutic response to tyrosine kinase inhibitors. This project aimed at training deep learning models (DLMs) to identify MCTs with c-KIT-11-ITD solely based on morphology. Hematoxylin and eosin (HE) stained slides of 368 cutaneous, subcutaneous, and mucocutaneous MCTs (195 with ITD and 173 without) were stained consecutively in 2 different laboratories and scanned with 3 different slide scanners. This resulted in 6 data sets (stain-scanner variations representing diagnostic institutions) of whole-slide images. DLMs were trained with single and mixed data sets and their performances were assessed under stain-scanner variations (domain shifts). The DLM correctly classified HE slides according to their c-KIT-11-ITD status in up to 87% of cases with a 0.90 sensitivity and a 0.83 specificity. A relevant performance drop could be observed when the stain-scanner combination of training and test data set differed. Multi-institutional data sets improved the average accuracy but did not reach the maximum accuracy of algorithms trained and tested on the same stain-scanner variant (ie, intra-institutional). In summary, DLM-based morphological examination can predict c-KIT-11-ITD with high accuracy in canine MCTs in HE slides. However, staining protocol and scanner type influence accuracy. Larger data sets of scans from different laboratories and scanners may lead to more robust DLMs to identify c-KIT mutations in HE slides., Competing Interests: Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
- Published
- 2024
- Full Text
- View/download PDF
13. Graph neural networks in multi-stained pathological imaging: extended comparative analysis of Radiomic features.
- Author
-
Rivera Monroy LC, Rist L, Ostalecki C, Bauer A, Vera J, Breininger K, and Maier A
- Abstract
Purpose: This study investigates the application of Radiomic features within graph neural networks (GNNs) for the classification of multiple-epitope-ligand cartography (MELC) pathology samples. It aims to enhance the diagnosis of often misdiagnosed skin diseases such as eczema, lymphoma, and melanoma. The novel contribution lies in integrating Radiomic features with GNNs and comparing their efficacy against traditional multi-stain profiles., Methods: We utilized GNNs to process multiple pathological slides as cell-level graphs, comparing their performance with XGBoost and Random Forest classifiers. The analysis included two feature types: multi-stain profiles and Radiomic features. Dimensionality reduction techniques such as UMAP and t-SNE were applied to optimize the feature space, and graph connectivity was based on spatial and feature closeness., Results: Integrating Radiomic features into spatially connected graphs significantly improved classification accuracy over traditional models. The application of UMAP further enhanced the performance of GNNs, particularly in classifying diseases with similar pathological features. The GNN model outperformed baseline methods, demonstrating its robustness in handling complex histopathological data., Conclusion: Radiomic features processed through GNNs show significant promise for multi-disease classification, improving diagnostic accuracy. This study's findings suggest that integrating advanced imaging analysis with graph-based modeling can lead to better diagnostic tools. Future research should expand these methods to a wider range of diseases to validate their generalizability and effectiveness., (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
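A sketch of one way to build the cell-level graphs that the study above feeds into a GNN: each cell is a node carrying a feature vector (multi-stain intensities or Radiomic descriptors), and edges connect spatially nearby cells. The k-NN connectivity, feature dimensions, and coordinates are assumptions for illustration only.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Placeholder cells: (x, y) positions and one feature vector per cell
n_cells, n_features = 500, 16
positions = rng.uniform(0, 1000, size=(n_cells, 2))   # pixel coordinates
features = rng.normal(size=(n_cells, n_features))     # e.g., stain or Radiomic features

# Connect each cell to its k nearest spatial neighbours
k = 5
tree = cKDTree(positions)
_, neighbours = tree.query(positions, k=k + 1)         # first hit is the cell itself

edges = [(i, j) for i in range(n_cells) for j in neighbours[i, 1:]]
edge_index = np.array(edges).T                          # shape (2, n_cells * k)

# The node feature matrix plus this edge index would be passed to a GNN layer
print(features.shape, edge_index.shape)
```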
14. Re-identification from histopathology images.
- Author
-
Ganz J, Ammeling J, Jabari S, Breininger K, and Aubreville M
- Abstract
In numerous studies, deep learning algorithms have proven their potential for the analysis of histopathology images, for example, for revealing the subtypes of tumors or the primary origin of metastases. These models require large datasets for training, which must be anonymized to prevent possible patient identity leaks. This study demonstrates that even relatively simple deep learning algorithms can re-identify patients in large histopathology datasets with substantial accuracy. In addition, we compared a comprehensive set of state-of-the-art whole slide image classifiers and feature extractors for the given task. We evaluated our algorithms on two TCIA datasets including lung squamous cell carcinoma (LSCC) and lung adenocarcinoma (LUAD). We also demonstrate the algorithm's performance on an in-house dataset of meningioma tissue. We predicted the source patient of a slide with F1 scores of up to 80.1% and 77.19% on the LSCC and LUAD datasets, respectively, and with 77.09% on our meningioma dataset. Based on our findings, we formulated a risk assessment scheme to estimate the risk to the patient's privacy prior to publication., Competing Interests: Declaration of competing interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (Copyright © 2024 The Author(s). Published by Elsevier B.V. All rights reserved.)
- Published
- 2024
- Full Text
- View/download PDF
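A schematic of the re-identification setup described above: slides are represented by feature vectors (from some whole-slide feature extractor), and a query slide is linked to a patient by nearest-neighbour search in that feature space. The embeddings below are random stand-ins; real features would come from a pretrained extractor, and the noise model is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# One "gallery" slide embedding per patient and one held-out "query" slide,
# modelled here as the same embedding plus noise
n_patients, dim = 50, 128
gallery = rng.normal(size=(n_patients, dim))
query = gallery + 0.3 * rng.normal(size=(n_patients, dim))

# Cosine-similarity nearest neighbour links each query slide to a patient
g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
q = query / np.linalg.norm(query, axis=1, keepdims=True)
predicted_patient = np.argmax(q @ g.T, axis=1)

accuracy = float(np.mean(predicted_patient == np.arange(n_patients)))
print("re-identification accuracy:", accuracy)
```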
15. Comprehensive multimodal deep learning survival prediction enabled by a transformer architecture: A multicenter study in glioblastoma.
- Author
-
Gomaa A, Huang Y, Hagag A, Schmitter C, Höfler D, Weissmann T, Breininger K, Schmidt M, Stritzelberger J, Delev D, Coras R, Dörfler A, Schnell O, Frey B, Gaipl US, Semrau S, Bert C, Hau P, Fietkau R, and Putz F
- Abstract
Background: This research aims to improve glioblastoma survival prediction by integrating MR images, clinical, and molecular-pathologic data in a transformer-based deep learning model, addressing data heterogeneity and performance generalizability., Methods: We propose and evaluate a transformer-based nonlinear and nonproportional survival prediction model. The model employs self-supervised learning techniques to effectively encode the high-dimensional MRI input for integration with nonimaging data using cross-attention. To demonstrate model generalizability, the model is assessed with the time-dependent concordance index (Cdt) in 2 training setups using 3 independent public test sets: UPenn-GBM, UCSF-PDGM, and Rio Hortega University Hospital (RHUH)-GBM, each comprising 378, 366, and 36 cases, respectively., Results: The proposed transformer model achieved a promising performance for imaging as well as nonimaging data, effectively integrating both modalities for enhanced performance (UCSF-PDGM test-set, imaging Cdt 0.578, multimodal Cdt 0.672) while outperforming state-of-the-art late-fusion 3D-CNN-based models. Consistent performance was observed across the 3 independent multicenter test sets with Cdt values of 0.707 (UPenn-GBM, internal test set), 0.672 (UCSF-PDGM, first external test set), and 0.618 (RHUH-GBM, second external test set). The model achieved significant discrimination between patients with favorable and unfavorable survival for all 3 datasets (log-rank P values of 1.9 × 10⁻⁸, 9.7 × 10⁻³, and 1.2 × 10⁻²). Comparable results were obtained in the second setup using UCSF-PDGM for training/internal testing and UPenn-GBM and RHUH-GBM for external testing (Cdt 0.670, 0.638, and 0.621)., Conclusions: The proposed transformer-based survival prediction model integrates complementary information from diverse input modalities, contributing to improved glioblastoma survival prediction compared to state-of-the-art methods. Consistent performance was observed across institutions, supporting model generalizability., Competing Interests: The authors declare no conflict of interest in this work., (© The Author(s) 2024. Published by Oxford University Press, the Society for Neuro-Oncology and the European Association of Neuro-Oncology.)
- Published
- 2024
- Full Text
- View/download PDF
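A minimal PyTorch sketch of the cross-attention idea mentioned in the abstract above: non-imaging (clinical/molecular) features attend over a sequence of MRI-derived tokens, and the fused representation feeds a small risk head. Dimensions, token counts, and the single-layer design are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=256, n_heads=4, n_clinical=16):
        super().__init__()
        self.clinical_proj = nn.Linear(n_clinical, dim)    # non-imaging features -> query
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.risk_head = nn.Linear(dim, 1)                 # scalar risk score

    def forward(self, mri_tokens, clinical):
        # mri_tokens: (B, N, dim), e.g. self-supervised MRI patch embeddings
        # clinical:   (B, n_clinical)
        query = self.clinical_proj(clinical).unsqueeze(1)   # (B, 1, dim)
        fused, _ = self.attn(query, mri_tokens, mri_tokens)
        return self.risk_head(fused.squeeze(1))             # (B, 1)

model = CrossAttentionFusion()
risk = model(torch.randn(2, 64, 256), torch.randn(2, 16))
print(risk.shape)  # torch.Size([2, 1])
```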
16. Author Correction: A comprehensive multi-domain dataset for mitotic figure detection.
- Author
-
Aubreville M, Wilm F, Stathonikos N, Breininger K, Donovan TA, Jabari S, Veta M, Ganz J, Ammeling J, van Diest PJ, Klopfleisch R, and Bertram CA
- Published
- 2024
- Full Text
- View/download PDF
17. Deep learning-based classification of erosion, synovitis and osteitis in hand MRI of patients with inflammatory arthritis.
- Author
-
Schlereth M, Mutlu MY, Utz J, Bayat S, Heimann T, Qiu J, Ehring C, Liu C, Uder M, Kleyer A, Simon D, Roemer F, Schett G, Breininger K, and Fagni F
- Subjects
- Humans, Male, Female, Middle Aged, Arthritis, Rheumatoid diagnostic imaging, Arthritis, Rheumatoid complications, Hand diagnostic imaging, Hand pathology, Arthritis, Psoriatic diagnostic imaging, Arthritis, Psoriatic diagnosis, Adult, Aged, ROC Curve, Severity of Illness Index, Neural Networks, Computer, Deep Learning, Osteitis diagnostic imaging, Osteitis etiology, Osteitis diagnosis, Osteitis pathology, Synovitis diagnostic imaging, Synovitis etiology, Synovitis diagnosis, Magnetic Resonance Imaging methods
- Abstract
Objectives: To train, test and validate the performance of a convolutional neural network (CNN)-based approach for the automated assessment of bone erosions, osteitis and synovitis in hand MRI of patients with inflammatory arthritis., Methods: Hand MRIs (coronal T1-weighted, T2-weighted fat-suppressed, T1-weighted fat-suppressed contrast-enhanced) of rheumatoid arthritis (RA) and psoriatic arthritis (PsA) patients from the rheumatology department of the Erlangen University Hospital were assessed by two expert rheumatologists using the Outcome Measures in Rheumatology-validated RA MRI Scoring System and PsA MRI Scoring System scores and were used to train, validate and test CNNs to automatically score erosions, osteitis and synovitis. Scoring performance was compared with human annotations in terms of macro-area under the receiver operating characteristic curve (AUC) and balanced accuracy using fivefold cross-validation. Validation was performed on an independent dataset of MRIs from a second patient cohort., Results: In total, 211 MRIs from 112 patients (14 906 regions of interest (ROIs)) were included for training/internal validation using cross-validation and 220 MRIs from 75 patients (11 040 ROIs) for external validation of the networks. The networks achieved a high mean (SD) macro-AUC of 92%±1% for erosions, 91%±2% for osteitis and 85%±2% for synovitis. Compared with human annotation, CNNs achieved a high mean Spearman correlation for erosions (90±2%), osteitis (78±8%) and synovitis (69±7%), which remained consistent in the validation dataset., Conclusions: We developed a CNN-based automated scoring system that allowed a rapid grading of erosions, osteitis and synovitis with good diagnostic accuracy while using fewer MRI sequences than conventional scoring. This CNN-based approach may help develop standardised, cost-efficient and time-efficient assessments of hand MRIs for patients with arthritis., Competing Interests: Competing interests: FR is consultant to Grünenthal GmbH and is shareholder of Boston Imaging Core Lab., (© Author(s) (or their employer(s)) 2024. Re-use permitted under CC BY. Published by BMJ.)
- Published
- 2024
- Full Text
- View/download PDF
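A small sketch of the two headline metrics reported above, computed with scikit-learn and SciPy on placeholder data: a macro-averaged AUC over per-region binary labels and a Spearman correlation between network scores and human scores. The data, label layout, and score aggregation are invented for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from scipy.stats import spearmanr

rng = np.random.default_rng(3)

# Placeholder: 200 regions of interest with binary labels for three pathologies
y_true = rng.integers(0, 2, size=(200, 3))                        # erosion, osteitis, synovitis
y_score = np.clip(y_true + rng.normal(0, 0.6, size=(200, 3)), 0, 1)

macro_auc = roc_auc_score(y_true, y_score, average="macro")

# Spearman correlation between predicted and human ordinal scores (here: per-case sums)
human_scores = y_true.sum(axis=1)
model_scores = y_score.sum(axis=1)
rho, _ = spearmanr(model_scores, human_scores)

print(f"macro-AUC: {macro_auc:.2f}, Spearman rho: {rho:.2f}")
```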
18. Domain generalization across tumor types, laboratories, and species - Insights from the 2022 edition of the Mitosis Domain Generalization Challenge.
- Author
-
Aubreville M, Stathonikos N, Donovan TA, Klopfleisch R, Ammeling J, Ganz J, Wilm F, Veta M, Jabari S, Eckstein M, Annuscheit J, Krumnow C, Bozaba E, Çayır S, Gu H, Chen X', Jahanifar M, Shephard A, Kondo S, Kasai S, Kotte S, Saipradeep VG, Lafarge MW, Koelzer VH, Wang Z, Zhang Y, Yang S, Wang X, Breininger K, and Bertram CA
- Subjects
- Humans, Animals, Cats, Algorithms, Image Processing, Computer-Assisted methods, Reference Standards, Laboratories, Mitosis
- Abstract
Recognition of mitotic figures in histologic tumor specimens is highly relevant to patient outcome assessment. This task is challenging for algorithms and human experts alike, with deterioration of algorithmic performance under shifts in image representations. Considerable covariate shifts occur when assessment is performed on different tumor types, images are acquired using different digitization devices, or specimens are produced in different laboratories. This observation motivated the inception of the 2022 challenge on MItosis Domain Generalization (MIDOG 2022). The challenge provided annotated histologic tumor images from six different domains and evaluated the algorithmic approaches for mitotic figure detection provided by nine challenge participants on ten independent domains. Ground truth for mitotic figure detection was established in two ways: a three-expert majority vote and an independent, immunohistochemistry-assisted set of labels. This work represents an overview of the challenge tasks, the algorithmic strategies employed by the participants, and potential factors contributing to their success. With an F1 score of 0.764 for the top-performing team, we summarize that domain generalization across various tumor domains is possible with today's deep learning-based recognition pipelines. However, we also found that domain characteristics not present in the training set (feline as new species, spindle cell shape as new morphology and a new scanner) led to small but significant decreases in performance. When assessed against the immunohistochemistry-assisted reference standard, all methods resulted in reduced recall scores, with only minor changes in the order of participants in the ranking., Competing Interests: Declaration of competing interest: The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (Copyright © 2024 The Author(s). Published by Elsevier B.V. All rights reserved.)
- Published
- 2024
- Full Text
- View/download PDF
19. Opportunities for Improving Glaucoma Clinical Trials via Deep Learning-Based Identification of Patients with Low Visual Field Variability.
- Author
-
Wang R, Bradley C, Herbert P, Hou K, Hager GD, Breininger K, Unberath M, Ramulu P, and Yohannan J
- Subjects
- Humans, Retrospective Studies, Female, Male, Clinical Trials as Topic, Glaucoma physiopathology, Glaucoma diagnosis, Visual Acuity physiology, Aged, Visual Field Tests methods, Middle Aged, Tomography, Optical Coherence methods, Deep Learning, Visual Fields physiology, Intraocular Pressure physiology
- Abstract
Purpose: Develop and evaluate the performance of a deep learning model (DLM) that forecasts eyes with low future visual field (VF) variability, and study the impact of using this DLM on sample size requirements for neuroprotective trials., Design: Retrospective cohort and simulation study., Methods: We included 1 eye per patient with baseline reliable VFs, OCT, clinical measures (demographics, intraocular pressure, and visual acuity), and 5 subsequent reliable VFs to forecast VF variability using DLMs and perform sample size estimates. We estimated sample size for 3 groups of eyes: all eyes (AE), low variability eyes (LVE: the subset of AE with a standard deviation of mean deviation [MD] slope residuals in the bottom 25th percentile), and DLM-predicted low variability eyes (DLPE: the subset of AE predicted to be low variability by the DLM). Deep learning models using only baseline VF/OCT/clinical data as input (DLM1), or also using a second VF (DLM2), were constructed to predict low VF variability (DLPE1 and DLPE2, respectively). Data were split 60/10/30 into train/val/test. Clinical trial simulations were performed only on the test set. We estimated the sample size necessary to detect treatment effects of 20% to 50% in MD slope with 80% power. Power was defined as the percentage of simulated clinical trials where the MD slope was significantly worse than in the control arm. Clinical trials were simulated with visits every 3 months with a total of 10 visits., Results: A total of 2817 eyes were included in the analysis. Deep learning models 1 and 2 achieved an area under the receiver operating characteristic curve of 0.73 (95% confidence interval [CI]: 0.68, 0.76) and 0.82 (95% CI: 0.78, 0.85) in forecasting low VF variability. When compared with including AE, using DLPE1 and DLPE2 reduced sample size to achieve 80% power by 30% and 38% for 30% treatment effect, and 31% and 38% for 50% treatment effect., Conclusions: Deep learning models can forecast eyes with low VF variability using data from a single baseline clinical visit. This can reduce sample size requirements, and potentially reduce the burden of future glaucoma clinical trials., Financial Disclosure(s): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article., (Copyright © 2024 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.)
- Published
- 2024
- Full Text
- View/download PDF
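A compact sketch of the kind of clinical-trial simulation described above: MD slopes are drawn for a control and a treatment arm (treatment effect = fractional reduction in mean progression), and power is the fraction of simulated trials with a significant between-arm difference. The slope distribution, variability values, and test choice are illustrative assumptions, not the study's protocol.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)

def simulated_power(n_per_arm, treatment_effect=0.3, mean_slope=-0.5,
                    slope_sd=1.0, n_trials=1000, alpha=0.05):
    """Fraction of simulated trials with a significantly better MD slope in the treated arm."""
    hits = 0
    for _ in range(n_trials):
        control = rng.normal(mean_slope, slope_sd, n_per_arm)
        treated = rng.normal(mean_slope * (1 - treatment_effect), slope_sd, n_per_arm)
        # one-sided test: treated eyes progress less (slope less negative) than control
        _, p = ttest_ind(treated, control, alternative="greater")
        hits += p < alpha
    return hits / n_trials

# Lower residual slope variability (e.g., by enrolling low-variability eyes)
# yields the same power with fewer eyes per arm
for label, sd in (("all eyes", 1.0), ("low-variability eyes", 0.6)):
    powers = {n: round(simulated_power(n, slope_sd=sd), 2) for n in (100, 200, 300)}
    print(label, powers)
```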
20. Oral mucosa - an examination map for confocal laser endomicroscopy within the oral cavity: an experimental clinical study.
- Author
-
Oetter N, Pröll J, Sievert M, Goncalves M, Rohde M, Nobis CP, Knipfer C, Aubreville M, Pan Z, Breininger K, Maier A, Kesting M, and Stelzle F
- Subjects
- Humans, Male, Female, Middle Aged, Mouth Neoplasms pathology, Mouth Neoplasms diagnostic imaging, Microscopy, Confocal methods, Mouth Mucosa diagnostic imaging, Mouth Mucosa cytology
- Abstract
Objectives: Confocal laser endomicroscopy (CLE) is an optical method that enables microscopic visualization of oral mucosa. Previous studies have shown that it is possible to differentiate between physiological and malignant oral mucosa. However, differences in mucosal architecture were not taken into account. The objective was to map the different oral mucosal morphologies and to establish a "CLE map" of physiological mucosa as baseline for further application of this powerful technology., Materials and Methods: The CLE database consisted of 27 patients. The following spots were examined: (1) upper lip (intraoral) (2) alveolar ridge (3) lateral tongue (4) floor of the mouth (5) hard palate (6) intercalary line. All sequences were examined by two CLE experts for morphological differences and video quality., Results: Analysis revealed clear differences in image quality and possibility of depicting tissue morphologies between the various localizations of oral mucosa: imaging of the alveolar ridge and hard palate showed visually most discriminative tissue morphology. Labial mucosa was also visualized well using CLE. Here, typical morphological features such as uniform cells with regular intercellular gaps and vessels could be clearly depicted. Image generation and evaluation was particularly difficult in the area of the buccal mucosa, the lateral tongue and the floor of the mouth., Conclusion: A physiological "CLE map" for the entire oral cavity could be created for the first time., Clinical Relevance: This will make it possible to take into account the existing physiological morphological features when differentiating between normal mucosa and oral squamous cell carcinoma in future work., (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
21. Diagnosis of malignancy in oropharyngeal confocal laser endomicroscopy using GPT 4.0 with vision.
- Author
-
Sievert M, Aubreville M, Mueller SK, Eckstein M, Breininger K, Iro H, and Goncalves M
- Subjects
- Humans, Reproducibility of Results, Microscopy, Confocal methods, Squamous Cell Carcinoma of Head and Neck, Lasers, Head and Neck Neoplasms
- Abstract
Purpose: Confocal Laser Endomicroscopy (CLE) is an imaging tool that has demonstrated potential for intraoperative, real-time, non-invasive, microscopical assessment of surgical margins of oropharyngeal squamous cell carcinoma (OPSCC). However, interpreting CLE images remains challenging. This study investigates the application of OpenAI's Generative Pretrained Transformer (GPT) 4.0 with Vision capabilities for automated classification of CLE images in OPSCC., Methods: CLE images of histologically confirmed SCC or healthy mucosa from a database of 12 809 CLE images from 5 patients with OPSCC were retrieved and anonymized. Using a training data set of 16 images, a validation set of 139 images, comprising SCC (83 images, 59.7%) and healthy normal mucosa (56 images, 40.3%), was classified using the application programming interface (API) of GPT 4.0. The same set of images was also classified by CLE experts (two surgeons and one pathologist), who were blinded to the histology. Diagnostic metrics, the reliability of GPT and inter-rater reliability were assessed., Results: Overall accuracy of the GPT model was 71.2%; the intra-rater agreement was κ = 0.837, indicating an almost perfect agreement across the three runs of GPT-generated results. Human experts achieved an accuracy of 88.5% with a substantial level of agreement (κ = 0.773)., Conclusions: Though limited to a specific clinical framework, patient and image set, this study sheds light on some previously unexplored diagnostic capabilities of large language models using few-shot prompting. It suggests the model's ability to extrapolate information and classify CLE images with minimal example data. Whether future versions of the model can achieve clinically relevant diagnostic accuracy, especially in uncurated data sets, remains to be investigated., (© 2024. The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.)
- Published
- 2024
- Full Text
- View/download PDF
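A short sketch of the agreement statistics reported above (overall accuracy plus Cohen's kappa for intra- and inter-rater agreement), computed with scikit-learn on placeholder labels. The label vectors are invented for illustration only and do not reflect the study's data.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Placeholder classifications of 10 CLE images: 1 = carcinoma, 0 = healthy mucosa
histology   = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]   # reference (blinded histology)
gpt_run_1   = [1, 1, 0, 1, 1, 1, 0, 0, 1, 0]
gpt_run_2   = [1, 1, 0, 1, 1, 1, 0, 1, 1, 0]
human_rater = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]

print("GPT accuracy vs histology:", accuracy_score(histology, gpt_run_1))
print("GPT intra-rater kappa (run 1 vs run 2):",
      round(cohen_kappa_score(gpt_run_1, gpt_run_2), 2))
print("human accuracy vs histology:", accuracy_score(histology, human_rater))
```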
22. Deep learning-based identification of eyes at risk for glaucoma surgery.
- Author
-
Wang R, Bradley C, Herbert P, Hou K, Ramulu P, Breininger K, Unberath M, and Yohannan J
- Subjects
- Adult, Humans, Retrospective Studies, Retina, Ophthalmology, Deep Learning, Glaucoma surgery, Trabeculectomy
- Abstract
To develop and evaluate the performance of a deep learning model (DLM) that predicts eyes at high risk of surgical intervention for uncontrolled glaucoma based on multimodal data from an initial ophthalmology visit. Longitudinal, observational, retrospective study. 4898 unique eyes from 4038 adult glaucoma or glaucoma-suspect patients who underwent surgery for uncontrolled glaucoma (trabeculectomy, tube shunt, xen, or diode surgery) between 2013 and 2021, or did not undergo glaucoma surgery but had 3 or more ophthalmology visits. We constructed a DLM to predict the occurrence of glaucoma surgery within various time horizons from a baseline visit. Model inputs included spatially oriented visual field (VF) and optical coherence tomography (OCT) data as well as clinical and demographic features. Separate DLMs with the same architecture were trained to predict the occurrence of surgery within 3 months, within 3-6 months, within 6 months-1 year, within 1-2 years, within 2-3 years, within 3-4 years, and within 4-5 years from the baseline visit. Included eyes were randomly split into 60%, 20%, and 20% for training, validation, and testing. DLM performance was measured using area under the receiver operating characteristic curve (AUC) and precision-recall curve (PRC). Shapley additive explanations (SHAP) were utilized to assess the importance of different features. Model prediction of surgery for uncontrolled glaucoma within 3 months had the best AUC of 0.92 (95% CI 0.88, 0.96). DLMs achieved clinically useful AUC values (> 0.8) for all models that predicted the occurrence of surgery within 3 years. According to SHAP analysis, all 7 models placed intraocular pressure (IOP) within the five most important features in predicting the occurrence of glaucoma surgery. Mean deviation (MD) and average retinal nerve fiber layer (RNFL) thickness were listed among the top 5 most important features by 6 of the 7 models. DLMs can successfully identify eyes requiring surgery for uncontrolled glaucoma within specific time horizons. Predictive performance decreases as the time horizon for forecasting surgery increases. Implementing prediction models in a clinical setting may help identify patients that should be referred to a glaucoma specialist for surgical evaluation., (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
23. Automated diagnosis of 7 canine skin tumors using machine learning on H&E-stained whole slide images.
- Author
-
Fragoso-Garcia M, Wilm F, Bertram CA, Merz S, Schmidt A, Donovan T, Fuchs-Baumgartinger A, Bartel A, Marzahl C, Diehl L, Puget C, Maier A, Aubreville M, Breininger K, and Klopfleisch R
- Subjects
- Animals, Dogs, Artificial Intelligence, Eosine Yellowish-(YS), Hematoxylin, Reproducibility of Results, Machine Learning, Deep Learning, Skin Neoplasms diagnosis, Skin Neoplasms veterinary, Dog Diseases diagnosis
- Abstract
Microscopic evaluation of hematoxylin and eosin-stained slides is still the diagnostic gold standard for a variety of diseases, including neoplasms. Nevertheless, intra- and interrater variability are well documented among pathologists. So far, computer assistance via automated image analysis has shown potential to support pathologists in improving accuracy and reproducibility of quantitative tasks. In this proof of principle study, we describe a machine-learning-based algorithm for the automated diagnosis of 7 of the most common canine skin tumors: trichoblastoma, squamous cell carcinoma, peripheral nerve sheath tumor, melanoma, histiocytoma, mast cell tumor, and plasmacytoma. We selected, digitized, and annotated 350 hematoxylin and eosin-stained slides (50 per tumor type) to create a database divided into training (n = 245 whole-slide images (WSIs)), validation (n = 35 WSIs), and test (n = 70 WSIs) sets. Full annotations included the 7 tumor classes and 6 normal skin structures. The data set was used to train a convolutional neural network (CNN) for the automatic segmentation of tumor and nontumor classes. Subsequently, the detected tumor regions were classified patch-wise into 1 of the 7 tumor classes. A majority-of-patches approach led to a slide-level tumor classification accuracy of 95% (133/140 WSIs), with a patch-level precision of 85%. The same 140 WSIs were provided to 6 experienced pathologists for diagnosis, who achieved a similar slide-level accuracy of 98% (137/140 correct majority votes). Our results highlight the feasibility of artificial intelligence-based methods as a support tool in diagnostic oncologic pathology with future applications in other species and tumor types., Competing Interests: Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
- Published
- 2023
- Full Text
- View/download PDF
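A minimal sketch of the majority-of-patches aggregation mentioned in the record above: each tumor patch receives one of the seven class predictions, and the slide-level diagnosis is the most frequent patch class. The class names are taken from the abstract; the patch predictions are placeholders.

```python
import numpy as np

CLASSES = ["trichoblastoma", "squamous cell carcinoma", "peripheral nerve sheath tumor",
           "melanoma", "histiocytoma", "mast cell tumor", "plasmacytoma"]

def slide_diagnosis(patch_class_ids):
    """Majority vote over patch-level class predictions for one whole-slide image."""
    counts = np.bincount(patch_class_ids, minlength=len(CLASSES))
    return CLASSES[int(np.argmax(counts))]

# Placeholder: 85% of tumor patches predicted as mast cell tumor, rest scattered
rng = np.random.default_rng(0)
patches = rng.choice(len(CLASSES), size=200,
                     p=[0.02, 0.03, 0.02, 0.04, 0.02, 0.85, 0.02])
print(slide_diagnosis(patches))  # -> "mast cell tumor"
```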
24. A comprehensive multi-domain dataset for mitotic figure detection.
- Author
-
Aubreville M, Wilm F, Stathonikos N, Breininger K, Donovan TA, Jabari S, Veta M, Ganz J, Ammeling J, van Diest PJ, Klopfleisch R, and Bertram CA
- Subjects
- Humans, Algorithms, Prognosis, Mitosis, Neoplasms pathology
- Abstract
The prognostic value of mitotic figures in tumor tissue is well-established for many tumor types and automating this task is of high research interest. However, especially deep learning-based methods face performance deterioration in the presence of domain shifts, which may arise from different tumor types, slide preparation and digitization devices. We introduce the MIDOG++ dataset, an extension of the MIDOG 2021 and 2022 challenge datasets. We provide region of interest images from 503 histological specimens of seven different tumor types with variable morphology, with labels for a total of 11,937 mitotic figures: breast carcinoma, lung carcinoma, lymphosarcoma, neuroendocrine tumor, cutaneous mast cell tumor, cutaneous melanoma, and (sub)cutaneous soft tissue sarcoma. The specimens were processed in several laboratories utilizing diverse scanners. We evaluated the extent of the domain shift by using state-of-the-art approaches, observing notable differences in single-domain training. In a leave-one-domain-out setting, generalizability improved considerably. This mitotic figure dataset is the first that incorporates a wide domain shift based on different tumor types, laboratories, whole slide image scanners, and species., (© 2023. The Author(s).)
- Published
- 2023
- Full Text
- View/download PDF
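A schematic of the leave-one-domain-out evaluation mentioned above: for each tumor type (domain), a model is trained on all other domains and tested on the held-out one. A logistic-regression classifier on random placeholder features stands in for an actual mitotic-figure detector; domain names follow the abstract, everything else is invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
domains = ["breast carcinoma", "lung carcinoma", "lymphosarcoma", "neuroendocrine tumor",
           "mast cell tumor", "melanoma", "soft tissue sarcoma"]

# Placeholder patch features/labels per domain (mitotic figure vs. imposter)
data = {d: (rng.normal(size=(300, 32)) + i * 0.1,        # mild synthetic domain shift
            rng.integers(0, 2, size=300))
        for i, d in enumerate(domains)}

for held_out in domains:
    X_train = np.vstack([data[d][0] for d in domains if d != held_out])
    y_train = np.concatenate([data[d][1] for d in domains if d != held_out])
    X_test, y_test = data[held_out]

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"{held_out:>22}: F1 = {f1_score(y_test, clf.predict(X_test)):.2f}")
```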
25. Defining a baseline identification of artifacts in confocal laser endomicroscopy in head and neck cancer imaging.
- Author
-
Pan Z, Breininger K, Aubreville M, Stelzle F, Oetter N, Maier A, Mantsopoulos K, Iro H, Goncalves M, and Sievert M
- Subjects
- Humans, Endoscopy, Microscopy, Lasers, Microscopy, Confocal methods, Artifacts, Head and Neck Neoplasms diagnostic imaging
- Abstract
Competing Interests: Declaration of competing interest None of the authors has a personal conflict of interest to declare.
- Published
- 2023
- Full Text
- View/download PDF
26. Pan-tumor T-lymphocyte detection using deep neural networks: Recommendations for transfer learning in immunohistochemistry.
- Author
-
Wilm F, Ihling C, Méhes G, Terracciano L, Puget C, Klopfleisch R, Schüffler P, Aubreville M, Maier A, Mrowiec T, and Breininger K
- Abstract
The success of immuno-oncology treatments promises long-term cancer remission for an increasing number of patients. The response to checkpoint inhibitor drugs has shown a correlation with the presence of immune cells in the tumor and tumor microenvironment. An in-depth understanding of the spatial localization of immune cells is therefore critical for understanding the tumor's immune landscape and predicting drug response. Computer-aided systems are well suited for efficiently quantifying immune cells in their spatial context. Conventional image analysis approaches are often based on color features and therefore require a high level of manual interaction. More robust image analysis methods based on deep learning are expected to decrease this reliance on human interaction and improve the reproducibility of immune cell scoring. However, these methods require sufficient training data and previous work has reported low robustness of these algorithms when they are tested on out-of-distribution data from different pathology labs or samples from different organs. In this work, we used a new image analysis pipeline to explicitly evaluate the robustness of marker-labeled lymphocyte quantification algorithms depending on the number of training samples before and after being transferred to a new tumor indication. For these experiments, we adapted the RetinaNet architecture for the task of T-lymphocyte detection and employed transfer learning to bridge the domain gap between tumor indications and reduce the annotation costs for unseen domains. On our test set, we achieved human-level performance for almost all tumor indications with an average precision of 0.74 in-domain and 0.72-0.74 cross-domain. From our results, we derive recommendations for model development regarding annotation extent, training sample selection, and label extraction for the development of robust algorithms for immune cell scoring. By extending the task of marker-labeled lymphocyte quantification to a multi-class detection task, the pre-requisite for subsequent analyses, e.g., distinguishing lymphocytes in the tumor stroma from tumor-infiltrating lymphocytes, is met., (© 2023 The Authors.)
- Published
- 2023
- Full Text
- View/download PDF
27. How scan parameter choice affects deep learning-based coronary artery disease assessment from computed tomography.
- Author
-
Denzinger F, Wels M, Breininger K, Taubmann O, Mühlberg A, Allmendinger T, Gülsün MA, Schöbinger M, André F, Buss SJ, Görich J, Sühling M, and Maier A
- Subjects
- Humans, Coronary Angiography methods, Tomography, X-Ray Computed, Heart, Predictive Value of Tests, Coronary Artery Disease diagnostic imaging, Coronary Artery Disease therapy, Deep Learning
- Abstract
Recently, algorithms capable of assessing the severity of Coronary Artery Disease (CAD) in the form of the Coronary Artery Disease-Reporting and Data System (CAD-RADS) grade from Coronary Computed Tomography Angiography (CCTA) scans using Deep Learning (DL) were proposed. Before considering the application of these algorithms in clinical practice, their robustness regarding different commonly used Computed Tomography (CT)-specific image formation parameters, including denoising strength, slab combination, and reconstruction kernel, needs to be evaluated. For this study, we reconstructed a data set of 500 patient CCTA scans under seven image formation parameter configurations. We select one default configuration and evaluate how varying individual parameters impacts the performance and stability of a typical algorithm for automated CAD assessment from CCTA. This algorithm consists of multiple preprocessing and a DL prediction step. We evaluate the influence of the parameter changes on the entire pipeline and additionally on only the DL step by propagating the centerline extraction results of the default configuration to all others. We consider the standard deviation of the CAD severity prediction grade difference between the default and variation configurations to assess the stability w.r.t. parameter changes. For the full pipeline we observe slight instability (± 0.226 CAD-RADS) for all variations. Predictions are more stable with centerlines propagated from the default to the variation configurations (± 0.122 CAD-RADS), especially for differing denoising strengths (± 0.046 CAD-RADS). However, stacking slabs with sharp boundaries instead of mixing slabs in overlapping regions (called true stack; ± 0.313 CAD-RADS) and increasing the sharpness of the reconstruction kernel (± 0.150 CAD-RADS) lead to unstable predictions. Regarding the clinically relevant tasks of excluding CAD (called rule-out; AUC default 0.957, min 0.937) and excluding obstructive CAD (called hold-out; AUC default 0.971, min 0.964), the performance remains at a high level for all variations. Concluding, an influence of reconstruction parameters on the predictions is observed. In particular, scans reconstructed with the true stack parameter need to be treated with caution when using a DL-based method. Also, reconstruction kernels which are underrepresented in the training data increase the prediction uncertainty., (© 2023. The Author(s).)
- Published
- 2023
- Full Text
- View/download PDF
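A tiny sketch of the stability measure used above: the standard deviation of the difference between CAD-RADS grades predicted under the default reconstruction and under a parameter variation. The grade vectors are invented placeholders.

```python
import numpy as np

# Predicted CAD-RADS grades (0-5) for the same patients under two reconstructions
default_cfg = np.array([0, 1, 2, 3, 3, 4, 1, 2, 5, 0])
variant_cfg = np.array([0, 1, 2, 3, 4, 4, 1, 3, 5, 0])   # e.g., sharper kernel

diff = variant_cfg - default_cfg
print("mean grade difference:", float(diff.mean()))
print("stability (SD of grade differences):", round(float(diff.std(ddof=1)), 3))
```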
28. Mitosis domain generalization in histopathology images - The MIDOG challenge.
- Author
-
Aubreville M, Stathonikos N, Bertram CA, Klopfleisch R, Ter Hoeve N, Ciompi F, Wilm F, Marzahl C, Donovan TA, Maier A, Breen J, Ravikumar N, Chung Y, Park J, Nateghi R, Pourakpour F, Fick RHJ, Ben Hadj S, Jahanifar M, Shephard A, Dexl J, Wittenberg T, Kondo S, Lafarge MW, Koelzer VH, Liang J, Wang Y, Long X, Liu J, Razavi S, Khademi A, Yang S, Wang X, Erber R, Klang A, Lipnik K, Bolfa P, Dark MJ, Wasinger G, Veta M, and Breininger K
- Subjects
- Humans, Neoplasm Grading, Prognosis, Mitosis, Algorithms
- Abstract
The density of mitotic figures (MF) within tumor tissue is known to be highly correlated with tumor proliferation and thus is an important marker in tumor grading. Recognition of MF by pathologists is subject to a strong inter-rater bias, limiting its prognostic value. State-of-the-art deep learning methods can support experts but have been observed to strongly deteriorate when applied in a different clinical environment. The variability caused by using different whole slide scanners has been identified as one decisive component in the underlying domain shift. The goal of the MICCAI MIDOG 2021 challenge was the creation of scanner-agnostic MF detection algorithms. The challenge used a training set of 200 cases, split across four scanning systems. As test set, an additional 100 cases split across four scanning systems, including two previously unseen scanners, were provided. In this paper, we evaluate and compare the approaches that were submitted to the challenge and identify methodological factors contributing to better performance. The winning algorithm yielded an F
1 score of 0.748 (CI95: 0.704-0.781), exceeding the performance of six experts on the same task., Competing Interests: Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (Copyright © 2022 Elsevier B.V. All rights reserved.)- Published
- 2023
- Full Text
- View/download PDF
29. Cytologic scoring of equine exercise-induced pulmonary hemorrhage: Performance of human experts and a deep learning-based algorithm.
- Author
-
Bertram CA, Marzahl C, Bartel A, Stayt J, Bonsembiante F, Beeler-Marfisi J, Barton AK, Brocca G, Gelain ME, Gläsel A, Preez KD, Weiler K, Weissenbacher-Lang C, Breininger K, Aubreville M, Maier A, Klopfleisch R, and Hill J
- Subjects
- Animals, Bronchoalveolar Lavage Fluid, Hemorrhage diagnosis, Hemorrhage veterinary, Hemosiderin, Horses, Iron, Reproducibility of Results, Deep Learning, Horse Diseases diagnosis, Lung Diseases diagnosis, Lung Diseases veterinary
- Abstract
Exercise-induced pulmonary hemorrhage (EIPH) is a relevant respiratory disease in sport horses, which can be diagnosed by examination of bronchoalveolar lavage fluid (BALF) cells using the total hemosiderin score (THS). The aim of this study was to evaluate the diagnostic accuracy and reproducibility of annotators and to validate a deep learning-based algorithm for the THS. Digitized cytological specimens stained for iron were prepared from 52 equine BALF samples. Ten annotators produced a THS for each slide according to published methods. The reference methods for comparing annotators' and algorithmic performance included a ground truth dataset, the mean annotators' THSs, and chemical iron measurements. Results of the study showed that annotators had marked interobserver variability of the THS, which was mostly due to a systematic error between annotators in grading the intracytoplasmatic hemosiderin content of individual macrophages. Regarding overall measurement error between the annotators, 87.7% of the variance could be reduced by using standardized grades based on the ground truth. The algorithm was highly consistent with the ground truth in assigning hemosiderin grades. Compared with the ground truth THS, annotators had an accuracy of diagnosing EIPH (THS of < or ≥ 75) of 75.7%, whereas the algorithm had an accuracy of 92.3% with no relevant differences in correlation with chemical iron measurements. The results show that deep learning-based algorithms are useful for improving reproducibility and routine applicability of the THS. For THS by experts, a diagnostic uncertainty interval of 40 to 110 is proposed. THSs within this interval have insufficient reproducibility regarding the EIPH diagnosis.
- Published
- 2023
- Full Text
- View/download PDF
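For context, a sketch of a total hemosiderin score computed the way 5-tier BALF schemes are commonly described: roughly 100 alveolar macrophages are graded 0-4 for intracytoplasmic hemosiderin and the grades are summed (0-400). The exact published protocol may differ from this simplification, and the grades and probabilities below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder grades (0-4) for ~100 alveolar macrophages from one BALF slide,
# e.g. produced by a rater or by a detection/classification network
grades = rng.choice([0, 1, 2, 3, 4], size=100, p=[0.45, 0.25, 0.15, 0.10, 0.05])

ths = int(grades.sum())   # total hemosiderin score; 0-400 under this simplified scheme
label = "EIPH (THS >= 75)" if ths >= 75 else "no EIPH"
print("THS:", ths, "->", label)
if 40 <= ths <= 110:
    print("within the proposed diagnostic uncertainty interval (40-110) for human raters")
```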
30. Advanced neural networks for classification of MRI in psoriatic arthritis, seronegative, and seropositive rheumatoid arthritis.
- Author
-
Folle L, Bayat S, Kleyer A, Fagni F, Kapsner LA, Schlereth M, Meinderink T, Breininger K, Tascilar K, Krönke G, Uder M, Sticherling M, Bickelhaupt S, Schett G, Maier A, Roemer F, and Simon D
- Subjects
- Humans, Inflammation, Magnetic Resonance Imaging, Neural Networks, Computer, Arthritis, Psoriatic diagnostic imaging, Arthritis, Rheumatoid diagnostic imaging, Psoriasis diagnostic imaging
- Abstract
Objectives: To evaluate whether neural networks can distinguish between seropositive RA, seronegative RA, and PsA based on inflammatory patterns from hand MRIs and to test how psoriasis patients with subclinical inflammation fit into such patterns., Methods: ResNet neural networks were utilized to compare seropositive RA vs PsA, seronegative RA vs PsA, and seropositive vs seronegative RA with respect to hand MRI data. Results from T1 coronal, T2 coronal, T1 coronal and axial fat-suppressed contrast-enhanced (CE), and T2 fat-suppressed axial sequences were used. The performance of such trained networks was analysed by the area under the receiver operating characteristic curve (AUROC) with and without presentation of demographic and clinical parameters. Additionally, the trained networks were applied to psoriasis patients without clinical arthritis., Results: MRI scans from 649 patients (135 seronegative RA, 190 seropositive RA, 177 PsA, 147 psoriasis) were fed into ResNet neural networks. The AUROC was 75% for seropositive RA vs PsA, 74% for seronegative RA vs PsA, and 67% for seropositive vs seronegative RA. All MRI sequences were relevant for classification; however, when contrast agent-based sequences were omitted, the loss of performance was only marginal. The addition of demographic and clinical data to the networks did not provide significant improvements for classification. Psoriasis patients were mostly assigned to PsA by the neural networks, suggesting that a PsA-like MRI pattern may be present early in the course of psoriatic disease., Conclusion: Neural networks can be successfully trained to distinguish MRI inflammation related to seropositive RA, seronegative RA, and PsA., (© The Author(s) 2022. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For permissions, please email: journals.permissions@oup.com.)
- Published
- 2022
- Full Text
- View/download PDF
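A minimal sketch of feeding several MRI sequences into a ResNet-type classifier by stacking them as input channels, roughly in the spirit of the study above. The backbone choice, 2D channel stacking, channel count, and three-class head are assumptions rather than the authors' exact configuration; `weights=None` follows the current torchvision API.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

N_SEQUENCES = 4   # e.g. T1 cor, T2 cor, T1 fs CE, T2 fs ax, stacked as channels
N_CLASSES = 3     # seropositive RA, seronegative RA, PsA

model = resnet18(weights=None)
# Replace the RGB stem with one accepting the stacked MRI sequences
model.conv1 = nn.Conv2d(N_SEQUENCES, 64, kernel_size=7, stride=2, padding=3, bias=False)
# Replace the ImageNet head with a three-class head
model.fc = nn.Linear(model.fc.in_features, N_CLASSES)

logits = model(torch.randn(2, N_SEQUENCES, 224, 224))
print(logits.shape)  # torch.Size([2, 3])
```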
31. Pan-tumor CAnine cuTaneous Cancer Histology (CATCH) dataset.
- Author
-
Wilm F, Fragoso M, Marzahl C, Qiu J, Puget C, Diehl L, Bertram CA, Klopfleisch R, Maier A, Breininger K, and Aubreville M
- Subjects
- Algorithms, Animals, Dogs, Dog Diseases pathology, Neural Networks, Computer, Skin Neoplasms pathology, Skin Neoplasms veterinary
- Abstract
Due to morphological similarities, the differentiation of histologic sections of cutaneous tumors into individual subtypes can be challenging. Recently, deep learning-based approaches have proven their potential for supporting pathologists in this regard. However, many of these supervised algorithms require a large amount of annotated data for robust development. We present a publicly available dataset of 350 whole slide images of seven different canine cutaneous tumors complemented by 12,424 polygon annotations for 13 histologic classes, including seven cutaneous tumor subtypes. In inter-rater experiments, we show a high consistency of the provided labels, especially for tumor annotations. We further validate the dataset by training a deep neural network for the task of tissue segmentation and tumor subtype classification. We achieve a class-averaged Jaccard coefficient of 0.7047, and 0.9044 for tumor in particular. For classification, we achieve a slide-level accuracy of 0.9857. Since canine cutaneous tumors possess various histologic homologies to human tumors the added value of this dataset is not limited to veterinary pathology but extends to more general fields of application., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
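A small sketch of the class-averaged Jaccard coefficient reported for the segmentation baseline above, computed from integer label masks; the two tiny masks are placeholders.

```python
import numpy as np

def class_averaged_jaccard(pred, target, n_classes):
    """Mean intersection-over-union over classes present in the ground truth."""
    ious = []
    for c in range(n_classes):
        t = target == c
        if t.sum() == 0:          # skip classes absent from the ground truth
            continue
        p = pred == c
        intersection = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

target = np.array([[0, 0, 1, 1],
                   [0, 2, 2, 1],
                   [0, 2, 2, 1]])
pred   = np.array([[0, 0, 1, 1],
                   [0, 2, 1, 1],
                   [0, 2, 2, 2]])
print(round(class_averaged_jaccard(pred, target, n_classes=3), 4))
```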
32. A single latent channel is sufficient for biomedical glottis segmentation.
- Author
-
Kist AM, Breininger K, Dörrich M, Dürr S, Schützenberger A, and Semmler M
- Subjects
- Endoscopy, Image Processing, Computer-Assisted, Neural Networks, Computer, Video Recording, Glottis diagnostic imaging, Larynx
- Abstract
Glottis segmentation is a crucial step to quantify endoscopic footage in laryngeal high-speed videoendoscopy. Recent advances in deep neural networks for glottis segmentation allow for a fully automatic workflow. However, exact knowledge of integral parts of these deep segmentation networks remains unknown, and understanding the inner workings is crucial for acceptance in clinical practice. Here, we show that a single latent channel as a bottleneck layer is sufficient for glottal area segmentation using systematic ablations. We further demonstrate that the latent space is an abstraction of the glottal area segmentation relying on three spatially defined pixel subtypes allowing for a transparent interpretation. We further provide evidence that the latent space is highly correlated with the glottal area waveform, can be encoded with four bits, and decoded using lean decoders while maintaining a high reconstruction accuracy. Our findings suggest that glottis segmentation is a task that can be highly optimized to gain very efficient and explainable deep neural networks, important for application in the clinic. In the future, we believe that online deep learning-assisted monitoring is a game-changer in laryngeal examinations., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
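A toy sketch of the 4-bit encoding claim in the abstract above: a single latent channel is uniformly quantized to 16 levels and dequantized again, and the reconstruction error is inspected. The latent values are random placeholders rather than activations from a segmentation network, and uniform quantization is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder single-channel latent map (e.g., the bottleneck of a segmentation network)
latent = rng.normal(size=(32, 64)).astype(np.float32)

def quantize(x, bits=4):
    """Uniformly quantize an array to 2**bits levels and return codes plus reconstruction."""
    levels = 2 ** bits
    lo, hi = x.min(), x.max()
    codes = np.round((x - lo) / (hi - lo) * (levels - 1)).astype(np.uint8)
    dequantized = codes / (levels - 1) * (hi - lo) + lo
    return codes, dequantized

codes, approx = quantize(latent, bits=4)
print("unique codes:", np.unique(codes).size)                         # at most 16
print("max abs reconstruction error:", float(np.abs(latent - approx).max()))
```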
33. Improving Deep Learning-based Cardiac Abnormality Detection in 12-Lead ECG with Data Augmentation.
- Author
-
Qiu J, Oppelt MP, Nissen M, Anneken L, Breininger K, and Eskofier B
- Subjects
- Electrocardiography methods, Neural Networks, Computer, ROC Curve, Deep Learning
- Abstract
Automated electrocardiogram (ECG) classification using deep neural networks requires large datasets annotated by medical professionals, which is time-consuming and expensive. This work examines ECG augmentation as a method for enriching existing datasets at low cost. First, we introduce three novel augmentations: Limb Electrode Move and Chest Electrode Move both simulate a minor electrode mislocation during signal measurement, and Heart Vector Transform generates an ECG by modeling a rotated main heart axis. These techniques are then combined with nine time series signal augmentations from the literature. Evaluation was performed on the ICBEB, PTB-XL Diagnostic, PTB-XL Rhythm, and PTB-XL Form datasets. Compared to models trained without data augmentation, the area under the receiver operating characteristic curve (AUC) increased by 3.5%, 1.7%, 1.4%, and 3.5%, respectively. Our experiments demonstrate that data augmentation can improve deep learning performance in ECG classification. Analyses of the individual augmentation effects established the efficacy of the three proposed augmentations. (A simplified heart-vector rotation sketch follows this entry.)
- Published
- 2022
- Full Text
- View/download PDF
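The Heart Vector Transform in entry 33 generates an ECG by modeling a rotated main heart axis. The sketch below illustrates only the underlying idea on a hypothetical three-component heart-vector signal, using a plain rotation matrix; the mapping between the 12-lead ECG and the heart vector used in the paper is not reproduced.

```python
import numpy as np

def rotate_heart_vector(ecg_xyz, angle_deg, axis="z"):
    """Rotate a (3, T) heart-vector-like signal about one spatial axis.

    Simplified stand-in for the 'Heart Vector Transform' idea: the paper
    models a rotated main heart axis; its exact lead model is not shown here.
    """
    a = np.deg2rad(angle_deg)
    c, s = np.cos(a), np.sin(a)
    rot = {
        "x": np.array([[1, 0, 0], [0, c, -s], [0, s, c]]),
        "y": np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]]),
        "z": np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]]),
    }[axis]
    return rot @ ecg_xyz

# Hypothetical 3-component heart-vector signal, 10 s at 500 Hz.
t = np.linspace(0, 10, 5000)
heart_vector = np.stack([np.sin(2 * np.pi * 1.2 * t),
                         0.5 * np.cos(2 * np.pi * 1.2 * t),
                         0.1 * np.sin(2 * np.pi * 2.4 * t)])
augmented = rotate_heart_vector(heart_vector, angle_deg=7.5, axis="z")
print(augmented.shape)   # (3, 5000)
```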
34. Inter-species cell detection - datasets on pulmonary hemosiderophages in equine, human and feline specimens.
- Author
-
Marzahl C, Hill J, Stayt J, Bienzle D, Welker L, Wilm F, Voigt J, Aubreville M, Maier A, Klopfleisch R, Breininger K, and Bertram CA
- Subjects
- Animals, Cats, Hemosiderin, Horses, Humans, Species Specificity, Bronchoalveolar Lavage Fluid cytology, Macrophages, Alveolar
- Abstract
Pulmonary hemorrhage (P-Hem) occurs among multiple species and can have various causes. Cytology of bronchoalveolar lavage fluid (BALF), using a 5-tier scoring system of alveolar macrophages based on their hemosiderin content, is considered the most sensitive diagnostic method. We introduce a novel, fully annotated multi-species P-Hem dataset, which consists of 74 cytology whole slide images (WSIs) with equine, feline and human samples. To create this high-quality and high-quantity dataset, we developed an annotation pipeline combining human expertise with deep learning and data visualisation techniques. We applied a deep learning-based object detection approach, trained on 17 expertly annotated equine WSIs, to the remaining 39 equine, 12 human, and 7 feline WSIs. The resulting annotations were semi-automatically screened for errors on multiple types of specialised annotation maps and finally reviewed by a trained pathologist. Our dataset contains a total of 297,383 hemosiderophages classified into five grades. It is one of the largest publicly available WSI datasets with respect to the number of annotations, the scanned area, and the number of species covered., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
35. Computer-assisted mitotic count using a deep learning-based algorithm improves interobserver reproducibility and accuracy.
- Author
-
Bertram CA, Aubreville M, Donovan TA, Bartel A, Wilm F, Marzahl C, Assenmacher CA, Becker K, Bennett M, Corner S, Cossic B, Denk D, Dettwiler M, Gonzalez BG, Gurtner C, Haverkamp AK, Heier A, Lehmbecker A, Merz S, Noland EL, Plog S, Schmidt A, Sebastian F, Sledge DG, Smedley RC, Tecilla M, Thaiwong T, Fuchs-Baumgartinger A, Meuten DJ, Breininger K, Kiupel M, Maier A, and Klopfleisch R
- Subjects
- Algorithms, Animals, Artificial Intelligence, Dogs, Humans, Pathologists, Reproducibility of Results, Deep Learning
- Abstract
The mitotic count (MC) is an important histological parameter for prognostication of malignant neoplasms. However, it has inter- and intraobserver discrepancies due to difficulties in selecting the region of interest (MC-ROI) and in identifying or classifying mitotic figures (MFs). Recent progress in the field of artificial intelligence has allowed the development of high-performance algorithms that may improve standardization of the MC. As algorithmic predictions are not flawless, computer-assisted review by pathologists may ensure reliability. In the present study, we compared partial (MC-ROI preselection) and full (additional visualization of MF candidates and display of algorithmic confidence values) computer-assisted MC analysis to the routine (unaided) MC analysis by 23 pathologists for whole-slide images of 50 canine cutaneous mast cell tumors (ccMCTs). Algorithmic predictions aimed to assist pathologists in detecting mitotic hotspot locations, reducing omission of MFs, and improving classification against imposters. The interobserver consistency for the MC significantly increased with computer assistance (interobserver correlation coefficient, ICC = 0.92) compared to the unaided approach (ICC = 0.70). Classification into prognostic stratifications was more accurate with computer assistance. The algorithmically preselected hotspot MC-ROIs had consistently higher MCs than the manually selected MC-ROIs. Compared to a ground truth (developed with immunohistochemistry for phosphohistone H3), pathologist performance in detecting individual MFs was improved when using computer assistance (F1-score increased from 0.68 to 0.79), with a 38% reduction in false negatives. The results of this study demonstrate that computer assistance may lead to more reproducible and accurate MCs in ccMCTs. (A small sketch relating detection counts to the F1-score follows this entry.)
- Published
- 2022
- Full Text
- View/download PDF
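Entry 35 reports an F1-score rising from 0.68 to 0.79 together with a roughly 38% reduction in false negatives. The sketch below shows how such figures follow from detection counts; the counts themselves are hypothetical values chosen only so the arithmetic roughly matches the reported numbers, and are not the study's data.

```python
def detection_f1(tp, fp, fn):
    """Precision, recall, and F1 from detection counts (e.g., mitotic figures)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical counts for an unaided vs. a computer-assisted read
# (invented so that F1 is roughly 0.68 -> 0.79 and FN drops by about 38%).
print(detection_f1(tp=680, fp=330, fn=320))   # unaided
print(detection_f1(tp=802, fp=229, fn=198))   # computer-assisted
```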
36. "Keep it simple, scholar": an experimental analysis of few-parameter segmentation networks for retinal vessels in fundus imaging.
- Author
-
Fu W, Breininger K, Schaffert R, Pan Z, and Maier A
- Subjects
- Databases, Factual, Fundus Oculi, Humans, Deep Learning, Diagnostic Imaging methods, Image Processing, Computer-Assisted methods, Neural Networks, Computer, Retinal Diseases diagnosis, Retinal Vessels diagnostic imaging
- Abstract
Purpose: With the recent development of deep learning technologies, various neural networks have been proposed for fundus retinal vessel segmentation. Among them, the U-Net is regarded as one of the most successful architectures. In this work, we start with simplification of the U-Net and explore the performance of few-parameter networks on this task., Methods: We first modify the model with popular functional blocks and additional resolution levels, then we switch to exploring the limits for compression of the network architecture. Experiments are designed to simplify the network structure, decrease the number of trainable parameters, and reduce the amount of training data. Performance evaluation is carried out on four public databases, namely DRIVE, STARE, HRF and CHASE_DB1. In addition, the generalization ability of the few-parameter networks is compared against a state-of-the-art segmentation network., Results: We demonstrate that the additive variants do not significantly improve the segmentation performance. The performance of the models is not severely harmed unless they are drastically degenerated: reduced to one level, to one filter in the input convolutional layer, or trained with a single image. We also demonstrate that few-parameter networks have strong generalization ability., Conclusion: Counter-intuitively, the U-Net produces reasonably good segmentation predictions until these limits are reached. Our work has two main contributions. On the one hand, the importance of different elements of the U-Net is evaluated, and the minimal U-Net that is capable of the task is presented. On the other hand, our work demonstrates that retinal vessel segmentation can be tackled by surprisingly simple configurations of the U-Net, reaching almost state-of-the-art performance. We also show that the simple configurations have better generalization ability than state-of-the-art models with high model complexity. These observations seem to contradict the current trend of continually increasing model complexity and capacity for the task under consideration. (A toy few-parameter U-Net sketch follows this entry.)
- Published
- 2021
- Full Text
- View/download PDF
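A few-parameter configuration in the spirit of entry 36 can be illustrated with a one-level U-Net-style network using a handful of filters. The architecture below is a toy sketch under assumed input channels and filter counts; it is not the minimal U-Net identified in the paper.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """One-level U-Net-style network with very few filters (illustrative only)."""
    def __init__(self, base=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, base, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.bottom = nn.Sequential(nn.Conv2d(base, base * 2, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(base, 1, 1))

    def forward(self, x):
        e = self.enc(x)
        b = self.bottom(self.down(e))
        d = self.up(b)
        return self.dec(torch.cat([d, e], dim=1))   # single skip connection

net = TinyUNet()
print(sum(p.numel() for p in net.parameters()), "trainable parameters")
out = net(torch.rand(1, 3, 64, 64))    # hypothetical fundus patch
print(out.shape)                        # torch.Size([1, 1, 64, 64])
```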
37. EXACT: a collaboration toolset for algorithm-aided annotation of images with annotation version control.
- Author
-
Marzahl C, Aubreville M, Bertram CA, Maier J, Bergler C, Kröger C, Voigt J, Breininger K, Klopfleisch R, and Maier A
- Abstract
In many research areas, scientific progress is accelerated by multidisciplinary access to image data and their interdisciplinary annotation. However, keeping track of these annotations to ensure a high-quality multi-purpose data set is a challenging and labour-intensive task. We developed the open-source online platform EXACT (EXpert Algorithm Collaboration Tool) that enables the collaborative interdisciplinary analysis of images from different domains, online and offline. EXACT supports multi-gigapixel medical whole slide images as well as image series with thousands of images. The software utilises a flexible plugin system that can be adapted to diverse applications such as counting mitotic figures with a screening mode, finding false annotations on a novel validation view, or using the latest deep learning image analysis technologies. This is combined with a version control system that makes it possible to keep track of changes in the data sets and, for example, to link the results of deep learning experiments to specific data set versions. EXACT is freely available and has already been successfully applied to a broad range of annotation tasks, including highly diverse applications like deep learning-supported cytology scoring, interdisciplinary multi-centre whole slide image tumour annotation, and highly specialised whale sound spectroscopy clustering.
- Published
- 2021
- Full Text
- View/download PDF
38. Projection-to-Projection Translation for Hybrid X-ray and Magnetic Resonance Imaging.
- Author
-
Stimpel B, Syben C, Würfl T, Breininger K, Hoelter P, Dörfler A, and Maier A
- Abstract
Hybrid X-ray and magnetic resonance (MR) imaging holds great potential for interventional medical imaging applications due to the broad variety of contrasts offered by MRI combined with the fast imaging of X-ray-based modalities. To fully utilize the potential of the vast amount of existing image enhancement techniques, the corresponding information from both modalities must be present in the same domain. For image-guided interventional procedures, X-ray fluoroscopy has proven to be the modality of choice. Synthesizing one modality from another in this case is an ill-posed problem due to ambiguous signals and overlapping structures in projective geometry. To take on these challenges, we present a learning-based solution for MR-to-X-ray projection-to-projection translation. We propose an image generator network that focuses on high representation capacity in higher-resolution layers to allow for accurate synthesis of fine details in the projection images. Additionally, a weighting scheme in the loss computation that favors high-frequency structures is proposed to focus on the important details and contours in projection imaging. The proposed extensions prove valuable in generating X-ray projection images with natural appearance. Our approach achieves a deviation from the ground truth of only 6% and a structural similarity measure of 0.913 ± 0.005. In particular, the high-frequency weighting assists in generating projection images with a sharp appearance and reduces erroneously synthesized fine details. (A sketch of a high-frequency-weighted loss follows this entry.)
- Published
- 2019
- Full Text
- View/download PDF
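Entry 38 describes a loss weighting that favors high-frequency structures. The sketch below shows one plausible way to realize such a weighting, using a Laplacian response of the target to up-weight edge regions in an L1 loss; the paper's exact weighting scheme is not reproduced.

```python
import torch
import torch.nn.functional as F

def high_frequency_weighted_l1(pred, target, alpha=4.0):
    """L1 loss with extra weight on high-frequency regions of the target.

    The weights come from a Laplacian filter response (an assumption for
    illustration), normalised to [0, 1] and scaled by alpha.
    """
    lap = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]],
                       device=target.device).view(1, 1, 3, 3)
    edges = F.conv2d(target, lap, padding=1).abs()
    weights = 1.0 + alpha * edges / (edges.amax() + 1e-8)
    return (weights * (pred - target).abs()).mean()

pred = torch.rand(2, 1, 128, 128)      # synthesized X-ray-like projections
target = torch.rand(2, 1, 128, 128)    # reference projections
print(high_frequency_weighted_l1(pred, target))
```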
39. Simultaneous reconstruction of multiple stiff wires from a single X-ray projection for endovascular aortic repair.
- Author
-
Breininger K, Hanika M, Weule M, Kowarschik M, Pfister M, and Maier A
- Subjects
- Aortic Aneurysm, Abdominal diagnosis, Humans, Iliac Artery, Reproducibility of Results, Aortic Aneurysm, Abdominal surgery, Computed Tomography Angiography methods, Endovascular Procedures methods, Fluoroscopy methods, Imaging, Three-Dimensional methods, Surgery, Computer-Assisted methods
- Abstract
Purpose: Endovascular repair of aortic aneurysms (EVAR) can be supported by fusing pre- and intraoperative data to allow for improved navigation and to reduce the amount of contrast agent needed during the intervention. However, stiff wires and delivery devices can deform the vasculature severely, which reduces the accuracy of the fusion. Knowledge about the 3D position of the inserted instruments can help to transfer these deformations to the preoperative information., Method: We propose a method to simultaneously reconstruct the stiff wires in both iliac arteries based on only a single monoplane acquisition, thereby avoiding interference with the clinical workflow. In the available X-ray projection, the 2D course of the wire is extracted. Then, a virtual second view of each wire, orthogonal to the real projection, is estimated using the preoperative vessel anatomy from a computed tomography angiography as prior information. Based on the real and virtual 2D wire courses, the wires can then be reconstructed in 3D using epipolar geometry., Results: We achieve a mean modified Hausdorff distance of 4.2 mm between the estimated 3D position and the true wire course for the contralateral side and 4.5 mm for the ipsilateral side., Conclusion: The accuracy and speed of the proposed method allow for use in an intraoperative setting of deformation correction for EVAR. (A sketch of the modified Hausdorff distance follows this entry.)
- Published
- 2019
- Full Text
- View/download PDF
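The evaluation in entry 39 uses the modified Hausdorff distance between the estimated and true 3D wire courses. The sketch below implements the common Dubuisson & Jain (1994) definition on hypothetical point sets; the paper's exact evaluation protocol may differ.

```python
import numpy as np
from scipy.spatial.distance import cdist

def modified_hausdorff(a, b):
    """Modified Hausdorff distance between two point sets given as (N, 3) arrays,
    e.g., sampled 3D wire centrelines."""
    d = cdist(a, b)                       # pairwise Euclidean distances
    d_ab = d.min(axis=1).mean()           # mean nearest-neighbour distance a -> b
    d_ba = d.min(axis=0).mean()           # mean nearest-neighbour distance b -> a
    return max(d_ab, d_ba)

# Hypothetical reconstructed vs. ground-truth wire courses (coordinates in mm).
rng = np.random.default_rng(1)
estimate = rng.normal(size=(200, 3)) * 10
ground_truth = estimate + rng.normal(scale=2.0, size=(200, 3))
print(modified_hausdorff(estimate, ground_truth))
```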
40. Iliac artery deformation during EVAR.
- Author
-
Koutouzi G, Pfister M, Breininger K, Hellström M, Roos H, and Falkenberg M
- Subjects
- Aged, Aged, 80 and over, Aortic Aneurysm, Abdominal diagnostic imaging, Blood Vessel Prosthesis, Blood Vessel Prosthesis Implantation instrumentation, Computed Tomography Angiography, Endovascular Procedures instrumentation, Female, Humans, Iliac Artery diagnostic imaging, Male, Predictive Value of Tests, Prosthesis Design, Risk Assessment, Risk Factors, Stents, Treatment Outcome, Vascular Access Devices, Aortic Aneurysm, Abdominal surgery, Blood Vessel Prosthesis Implantation adverse effects, Endovascular Procedures adverse effects, Iliac Artery surgery
- Published
- 2019
- Full Text
- View/download PDF
41. Intraoperative stent segmentation in X-ray fluoroscopy for endovascular aortic repair.
- Author
-
Breininger K, Albarqouni S, Kurzendorfer T, Pfister M, Kowarschik M, and Maier A
- Subjects
- Animals, Fluoroscopy methods, Humans, Tomography, X-Ray Computed, Treatment Outcome, Aorta diagnostic imaging, Aorta surgery, Blood Vessel Prosthesis, Endovascular Procedures methods, Stents
- Abstract
Purpose: Fusion of preoperative data with intraoperative X-ray images has shown potential to reduce radiation exposure and contrast agent use, especially for complex endovascular aortic repair (EVAR). Due to patient movement and introduced devices that deform the vasculature, the fusion can become inaccurate. This is usually detected by comparing the preoperative information with the contrasted vessel. To avoid repeated use of iodine, comparison with an implanted stent can be used to adjust the fusion. However, detecting the stent automatically without the use of contrast is challenging, as only thin stent wires are visible., Method: We propose a fast, learning-based method to segment aortic stents in single uncontrasted X-ray images. To this end, we employ a fully convolutional network with residual units. Additionally, we investigate whether incorporation of prior knowledge improves the segmentation., Results: We use 36 X-ray images acquired during EVAR for training and evaluate the segmentation on 27 additional images. We achieve a Dice coefficient of 0.933 (AUC 0.996) when using X-ray alone, and 0.918 (AUC 0.993) and 0.888 (AUC 0.99) when additionally using the preoperative model and information about the expected wire width, respectively., Conclusion: The proposed method is fully automatic and fast, and segments aortic stent grafts in fluoroscopic images with high accuracy. The quality and performance of the segmentation will allow for an intraoperative comparison with the preoperative information to assess the accuracy of the fusion. (A minimal Dice coefficient sketch follows this entry.)
- Published
- 2018
- Full Text
- View/download PDF
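The segmentation quality in entry 41 is reported as a Dice coefficient. The sketch below computes this overlap measure for hypothetical binary stent masks; the thresholds and mask shapes are illustrative assumptions.

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask, eps=1e-8):
    """Dice overlap between a binary prediction and a binary ground-truth mask."""
    pred_mask = pred_mask.astype(bool)
    gt_mask = gt_mask.astype(bool)
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    return 2.0 * intersection / (pred_mask.sum() + gt_mask.sum() + eps)

# Hypothetical stent segmentation vs. reference mask.
rng = np.random.default_rng(2)
reference = rng.random((512, 512)) > 0.9
prediction = np.logical_and(reference, rng.random((512, 512)) > 0.05)
print(dice_coefficient(prediction, reference))
```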
42. Deep Learning Computed Tomography: Learning Projection-Domain Weights From Image Domain in Limited Angle Problems.
- Author
-
Würfl T, Hoffmann M, Christlein V, Breininger K, Huang Y, Unberath M, and Maier AK
- Subjects
- Algorithms, Humans, Deep Learning, Radiographic Image Interpretation, Computer-Assisted methods, Tomography, X-Ray Computed methods
- Abstract
In this paper, we present a new deep learning framework for 3-D tomographic reconstruction. To this end, we map filtered back-projection-type algorithms to neural networks. However, the back-projection cannot be implemented as a fully connected layer due to its memory requirements. To overcome this problem, we propose a new type of cone-beam back-projection layer that efficiently calculates the forward pass. We derive this layer's backward pass as a projection operation. Unlike most deep learning approaches for reconstruction, our new layer permits joint optimization of correction steps in the volume and projection domains. Evaluation is performed numerically on a public data set in a limited-angle setting, showing a consistent improvement over analytical algorithms while keeping the same computational test-time complexity by design. In the region of interest, the peak signal-to-noise ratio increased by 23%. In addition, we show that the learned algorithm can be interpreted using known concepts from cone-beam reconstruction: the network is able to automatically learn strategies such as compensation weights and apodization windows. (A toy learnable projection-domain weighting layer follows this entry.)
- Published
- 2018
- Full Text
- View/download PDF
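Entry 42 learns correction steps in the projection domain, for example compensation weights. The sketch below is a toy layer that applies learnable per-detector-pixel weights to a projection stack; the cone-beam back-projection layer and the full reconstruction pipeline from the paper are deliberately omitted, so this is only an illustration of the projection-domain weighting idea.

```python
import torch
import torch.nn as nn

class ProjectionDomainWeights(nn.Module):
    """Learnable per-detector-pixel compensation weights applied to projection
    data before an (omitted) back-projection step -- a toy illustration only."""
    def __init__(self, det_rows, det_cols):
        super().__init__()
        # Initialise with ones, i.e., start from the analytic algorithm.
        self.weights = nn.Parameter(torch.ones(det_rows, det_cols))

    def forward(self, projections):            # (n_angles, det_rows, det_cols)
        return projections * self.weights

sinogram = torch.rand(360, 64, 128)             # hypothetical projection stack
layer = ProjectionDomainWeights(64, 128)
weighted = layer(sinogram)
print(weighted.shape, layer.weights.requires_grad)
```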
43. Rewiring of neuronal networks during synaptic silencing.
- Author
-
Wrosch JK, Einem VV, Breininger K, Dahlmanns M, Maier A, Kornhuber J, and Groemer TW
- Subjects
- Animals, Calcium metabolism, Cells, Cultured, Diagnostic Imaging, Electric Stimulation methods, Hippocampus cytology, Hippocampus physiology, Rats, Single-Cell Analysis, Synapses physiology, Tetrodotoxin pharmacology, Nerve Net physiology, Synaptic Transmission physiology
- Abstract
Analyzing the connectivity of neuronal networks based on functional brain imaging data has yielded new insights into brain circuitry, bringing functional and effective networks into focus for understanding complex neurological and psychiatric disorders. However, the analysis of network changes based on the activity of individual neurons is hindered by the lack of suitable, meaningful, and reproducible methodologies. Here, we used calcium imaging, statistical spike-time analysis, and a powerful classification model to reconstruct effective networks of primary rat hippocampal neurons in vitro. This method enables the calculation of network parameters, such as propagation probability, path length, and clustering behavior, through the measurement of synaptic activity at the single-cell level, thus providing a fuller understanding of how changes at single synapses translate to an entire population of neurons. We demonstrate that our methodology can detect the known effects of drug-induced neuronal inactivity and can be used to investigate the extensive rewiring processes affecting population-wide connectivity patterns after periods of induced neuronal inactivity. (A small graph-metric sketch follows this entry.)
- Published
- 2017
- Full Text
- View/download PDF
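Entry 43 characterizes effective networks through parameters such as propagation probability, path length, and clustering. The sketch below computes path length and clustering for a small hypothetical effective-connectivity graph with NetworkX; the graph and its edge probabilities are invented for illustration and are not derived from the study's recordings.

```python
import networkx as nx

# Hypothetical effective-connectivity graph: nodes are neurons, directed
# edges carry a propagation probability estimated from spike-time analysis.
edges = [(0, 1, 0.8), (1, 2, 0.6), (2, 0, 0.4), (2, 3, 0.7), (3, 1, 0.5)]
g = nx.DiGraph()
g.add_weighted_edges_from(edges, weight="propagation_probability")

# Characteristic path length (hop count) and clustering coefficient,
# two of the parameters used to describe rewiring of the network.
print(nx.average_shortest_path_length(g))
print(nx.average_clustering(g.to_undirected()))
```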