14 results for "Pachade S"
Search Results
2. Foveal Avascular Zone Segmentation Using Deep Learning-Driven Image-Level Optimization and Fundus Photographs
- Author
- Coronado, I., primary, Pachade, S., additional, Dawoodally, H., additional, Marioni, S. Salazar, additional, Yan, J., additional, Abdelkhaleq, R., additional, Bahrainian, M., additional, Jagolino-Cole, A., additional, Channa, R., additional, Sheth, S. A., additional, and Giancardo, L., additional
- Published
- 2023
3. Generalizable self-supervised learning for brain CTA in acute stroke.
- Author
- Dong Y, Pachade S, Roberts K, Jiang X, Sheth SA, and Giancardo L
- Abstract
Acute stroke management involves rapid and accurate interpretation of CTA imaging data. However, generalizable models for multiple acute stroke tasks able to learn from unlabeled data do not exist. We propose a linear-probed self-supervised contrastive learning approach that uses 3D CTA images and the findings section of radiologists' reports for pretraining. Subsequently, the pretrained model was applied to four disparate tasks: large vessel occlusion (LVO) detection, acute ischemic stroke detection, acute ischemic stroke/intracerebral hemorrhage classification, and ischemic core volume prediction. The chosen tasks are particularly challenging because they cannot be extracted directly from the radiology report findings with keywords. The difficulty is compounded by the 3D feature representation required by tasks such as LVO detection. All imaging models were trained from scratch. In the pretraining phase, our dataset comprised 1,542 pairs of 3D brain CTAs and corresponding radiologists' reports from 3 sites, without any additional labels. To test generalizability, we performed the fine-tuning and testing phases with labeled data from another site, on brain CTAs from 592 subjects. In our experiments, we evaluated the influence of linear probing during the pretraining phase and found that, on average, it enhanced our model's generalizability, as shown by the improved classification performance with the appropriate text encoder. Our findings indicate that the best-performing models exhibit robust generalization to out-of-distribution data across multiple tasks. In all scenarios, linear probing during pretraining yielded superior predictive performance compared to a standard strategy. Furthermore, pretraining with report findings conferred significant performance advantages compared to training the imaging encoder solely on labeled data. Competing Interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. (Copyright © 2024 Elsevier Ltd. All rights reserved.)
- Published
- 2024
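The entry above describes CLIP-style contrastive pretraining of a 3D imaging encoder against the report findings, with a linear probe added during pretraining. The sketch below is a minimal, hypothetical PyTorch illustration of that setup, not the authors' code: the tiny 3D encoder, embedding size, random report embeddings, and the two-class probe target are placeholder assumptions.

```python
# Minimal sketch: CLIP-style image-text contrastive pretraining with an auxiliary linear probe.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tiny3DEncoder(nn.Module):
    """Toy 3D CNN standing in for the imaging encoder (hypothetical architecture)."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.proj = nn.Linear(32, dim)

    def forward(self, x):
        return self.proj(self.conv(x).flatten(1))

def clip_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over the image/report pairs in a batch."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature
    targets = torch.arange(len(img), device=img.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Toy batch: 4 CTA volumes and 4 precomputed report embeddings (random placeholders).
volumes = torch.randn(4, 1, 32, 64, 64)
report_emb = torch.randn(4, 128)

encoder = Tiny3DEncoder()
linear_probe = nn.Linear(128, 2)   # auxiliary probe, e.g. LVO vs. no LVO (assumed target)
img_emb = encoder(volumes)

loss = clip_loss(img_emb, report_emb)
# Only the probe's forward pass is shown here; during pretraining it would receive
# its own supervised loss on the detached image embeddings.
probe_logits = linear_probe(img_emb.detach())
print(loss.item(), probe_logits.shape)
```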
4. RFMiD: Retinal Image Analysis for multi-Disease Detection challenge.
- Author
- Pachade S, Porwal P, Kokare M, Deshmukh G, Sahasrabuddhe V, Luo Z, Han F, Sun Z, Qihan L, Kamata SI, Ho E, Wang E, Sivajohan A, Youn S, Lane K, Chun J, Wang X, Gu Y, Lu S, Oh YT, Park H, Lee CY, Yeh H, Cheng KW, Wang H, Ye J, He J, Gu L, Müller D, Soto-Rey I, Kramer F, Arai H, Ochi Y, Okada T, Giancardo L, Quellec G, and Mériaudeau F
- Abstract
In the last decades, many large fundus image datasets have been made publicly available for diabetic retinopathy, glaucoma, age-related macular degeneration, and a few other frequent pathologies. These datasets were used to develop computer-aided disease diagnosis systems by training deep learning models to detect these frequent pathologies. One challenge limiting the adoption of such systems by ophthalmologists is that they ignore sight-threatening rare pathologies, such as central retinal artery occlusion or anterior ischemic optic neuropathy, that ophthalmologists currently detect. Aiming to advance the state of the art in automatic ocular disease classification of frequent diseases along with rare pathologies, a grand challenge on "Retinal Image Analysis for multi-Disease Detection" was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI 2021). This paper reports the challenge organization, dataset, top-performing participants' solutions, evaluation measures, and results based on a new "Retinal Fundus Multi-disease Image Dataset" (RFMiD). There were two principal sub-challenges: disease screening (i.e., presence versus absence of pathology, a binary classification problem) and disease/pathology classification (a 28-class multi-label classification problem). The challenge received a positive response from the scientific community, with 74 submissions by individuals/teams that effectively entered it. The top-performing methodologies utilized a blend of data preprocessing, data augmentation, pre-trained models, and model ensembling. This multi-disease (frequent and rare pathologies) detection effort will enable the development of generalizable models for screening the retina, unlike previous efforts that focused on the detection of specific diseases. Competing Interests: The authors have no conflicts of interest to declare. (Copyright © 2024 Elsevier B.V. All rights reserved.)
- Published
- 2024
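The two RFMiD sub-challenges described above map naturally onto a binary screening head and a 28-class multi-label head over shared image features. The following is a minimal sketch under that assumption, not a challenge submission; the 512-dimensional backbone features, labels, and heads are placeholders.

```python
# Minimal sketch: binary disease screening plus 28-class multi-label classification.
import torch
import torch.nn as nn

features = torch.randn(8, 512)             # placeholder backbone features for 8 fundus images

screen_head = nn.Linear(512, 1)            # presence vs. absence of any pathology
multilabel_head = nn.Linear(512, 28)       # one logit per disease label

screen_labels = torch.randint(0, 2, (8, 1)).float()
disease_labels = torch.randint(0, 2, (8, 28)).float()

bce = nn.BCEWithLogitsLoss()
loss = bce(screen_head(features), screen_labels) + bce(multilabel_head(features), disease_labels)
print(loss.item())
```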
5. A self-supervised learning approach for registration agnostic imaging models with 3D brain CTA.
- Author
- Dong Y, Pachade S, Liang X, Sheth SA, and Giancardo L
- Abstract
Deep learning-based neuroimaging pipelines for acute stroke typically rely on image registration, which not only increases computation but also introduces a point of failure. In this paper, we propose a general-purpose contrastive self-supervised learning method that converts a convolutional deep neural network designed for registered images to work on a different input domain, i.e., with unregistered images. This is accomplished with a self-supervised strategy that does not rely on labels, where the original model acts as a teacher and a new network as a student. Large vessel occlusion (LVO) detection experiments using computed tomographic angiography (CTA) data from 402 patients show the student model achieving competitive LVO detection performance (area under the receiver operating characteristic curve [AUC] = 0.88 vs. AUC = 0.81) compared to the teacher model, even with unregistered images. Training the student model directly on unregistered images with standard supervised learning achieves an AUC of 0.63, highlighting the proposed method's efficacy in adapting models to different pipelines and domains. Competing Interests: The authors declare no competing interests. (© 2024 The Author(s).)
- Published
- 2024
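The entry above converts a model trained on registered images (the teacher) into one that works on unregistered images (the student) without using labels. The sketch below is a toy, hypothetical rendering of that idea: tiny 3D encoders stand in for the real networks, and a simple cosine-alignment loss stands in for the paper's contrastive objective.

```python
# Minimal sketch: frozen teacher on registered volumes, trainable student on unregistered ones.
import torch
import torch.nn as nn
import torch.nn.functional as F

def tiny_encoder(dim=64):
    return nn.Sequential(
        nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, dim),
    )

teacher = tiny_encoder().eval()            # pretrained on registered CTA (frozen)
student = tiny_encoder()                   # to be trained on unregistered CTA
for p in teacher.parameters():
    p.requires_grad = False

registered = torch.randn(4, 1, 32, 64, 64)     # same subjects, two input domains
unregistered = torch.randn(4, 1, 32, 64, 64)

with torch.no_grad():
    t_emb = F.normalize(teacher(registered), dim=-1)
s_emb = F.normalize(student(unregistered), dim=-1)

# Pull each student embedding toward its teacher counterpart (cosine alignment).
loss = (1 - (t_emb * s_emb).sum(dim=-1)).mean()
loss.backward()
print(loss.item())
```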
6. Author Correction: Synthetic OCT-A blood vessel maps using fundus images and generative adversarial networks.
- Author
- Coronado I, Pachade S, Trucco E, Abdelkhaleq R, Yan J, Salazar-Marioni S, Jagolino-Cole A, Bahrainian M, Channa R, Sheth SA, and Giancardo L
- Published
- 2023
7. Synthetic OCT-A blood vessel maps using fundus images and generative adversarial networks.
- Author
- Coronado I, Pachade S, Trucco E, Abdelkhaleq R, Yan J, Salazar-Marioni S, Jagolino-Cole A, Bahrainian M, Channa R, Sheth SA, and Giancardo L
- Subjects
- Angiography, Fundus Oculi, Retinal Vessels diagnostic imaging, Tomography, Optical Coherence, Optic Disk
- Abstract
Vessel segmentation in fundus images permits understanding retinal diseases and computing image-based biomarkers. However, manual vessel segmentation is a time-consuming process. Optical coherence tomography angiography (OCT-A) allows direct, non-invasive estimation of retinal vessels. Unfortunately, compared to fundus cameras, OCT-A devices are more expensive, less portable, and have a reduced field of view. We present an automated strategy relying on generative adversarial networks to create vascular maps from fundus images without training on manual vessel segmentation maps. Further post-processing, as used for standard en face OCT-A, allows obtaining a vessel segmentation map. We compare our approach to state-of-the-art vessel segmentation algorithms trained on manual vessel segmentation maps and on vessel segmentations derived from OCT-A. We evaluate them both from an automatic vascular segmentation perspective and as vessel density estimators, i.e., the most common OCT-A imaging biomarker used in studies. Using OCT-A rather than manual vessel delineations as a training target yields improved vascular maps in the optic disc area and is comparable to the best-performing vessel segmentation algorithm in the macular region. This technique could reduce the cost and effort incurred when training vessel segmentation algorithms. To incentivize research in this field, we will make the dataset publicly available to the scientific community. (© 2023. Springer Nature Limited.)
- Published
- 2023
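The entry above trains a generative adversarial network to map fundus images to OCT-A-like vessel maps, using en face OCT-A as the training target instead of manual vessel delineations. The sketch below illustrates one conditional-GAN training step under that assumption; the toy generator, patch discriminator, image sizes, and L1 weight are placeholders, not the paper's architecture.

```python
# Minimal sketch: one conditional adversarial training step mapping fundus -> vessel map.
import torch
import torch.nn as nn
import torch.nn.functional as F

generator = nn.Sequential(                 # fundus (3 channels) -> vessel map (1 channel)
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)
discriminator = nn.Sequential(             # judges (fundus, vessel map) pairs
    nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 1, 3, stride=2, padding=1),   # patch-level real/fake logits
)
bce = nn.BCEWithLogitsLoss()

fundus = torch.randn(2, 3, 64, 64)
octa_target = torch.rand(2, 1, 64, 64)     # en face OCT-A vessel map as training target

fake = generator(fundus)
# Discriminator step: real pairs -> 1, fake pairs -> 0.
d_real = discriminator(torch.cat([fundus, octa_target], dim=1))
d_fake = discriminator(torch.cat([fundus, fake.detach()], dim=1))
d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
# Generator step: fool the discriminator and stay close to the OCT-A target (L1 term).
g_adv = bce(discriminator(torch.cat([fundus, fake], dim=1)), torch.ones_like(d_real))
g_loss = g_adv + 100 * F.l1_loss(fake, octa_target)
print(d_loss.item(), g_loss.item())
```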
8. Foveal avascular zone segmentation using deep learning-driven image-level optimization and fundus photographs.
- Author
- Coronado I, Pachade S, Dawoodally H, Salazar Marioni S, Yan J, Abdelkhaleq R, Bahrainian M, Jagolino-Cole A, Channa R, Sheth SA, and Giancardo L
- Abstract
The foveal avascular zone (FAZ) is a retinal area devoid of capillaries and is associated with multiple retinal pathologies and visual acuity. Optical coherence tomography angiography (OCT-A) is a very effective means of visualizing retinal vascular and avascular areas, but its use remains limited to research settings because its complex optics limit availability. On the other hand, fundus photography is widely available and often adopted in population studies. In this work, we test the feasibility of estimating the FAZ from fundus photos using three different approaches. The first two approaches rely on pixel-level and image-level FAZ information to segment FAZ pixels and regress FAZ area, respectively. The third is a training-mask-free pipeline that combines saliency maps with an active contours approach to segment FAZ pixels while being trained only on image-level measures of the FAZ area. This enables training FAZ segmentation methods without manual alignment of fundus and OCT-A images, a time-consuming process that limits the data available for training. Segmentation methods trained on pixel-level labels and image-level labels had good agreement with masks from a human grader (Dice of 0.45 and 0.40, respectively). The results indicate the feasibility of using fundus images as a proxy to estimate the FAZ when angiography data are not available.
- Published
- 2023
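The third, mask-free approach in the entry above trains on image-level FAZ area only and derives a localization cue from saliency before active-contour refinement. The sketch below is a minimal illustration of that idea: a toy regressor receives an area label and an input-gradient saliency map is extracted. The model, image size, and area value are assumptions, and the active-contour step is omitted.

```python
# Minimal sketch: area regression from image-level labels plus input-gradient saliency.
import torch
import torch.nn as nn
import torch.nn.functional as F

regressor = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)

fundus = torch.randn(1, 3, 128, 128, requires_grad=True)
faz_area_mm2 = torch.tensor([[0.35]])      # image-level label (illustrative value)

pred = regressor(fundus)
loss = F.mse_loss(pred, faz_area_mm2)
loss.backward()

# Saliency: magnitude of the input gradient, averaged over color channels;
# this rough map could seed an active contour around the FAZ.
saliency = fundus.grad.abs().mean(dim=1, keepdim=True)
print(pred.item(), saliency.shape)
```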
9. Self-supervised learning with radiology reports, a comparative analysis of strategies for large vessel occlusion and brain CTA images.
- Author
- Pachade S, Datta S, Dong Y, Salazar-Marioni S, Abdelkhaleq R, Niktabe A, Roberts K, Sheth SA, and Giancardo L
- Abstract
Scarcity of labels for medical images is a significant barrier to training representation learning approaches based on deep neural networks. This limitation is also present when using imaging data collected during routine clinical care and stored in picture archiving and communication systems (PACS), as these data rarely have the attached high-quality labels required for medical image computing tasks. However, medical images extracted from PACS are commonly coupled with descriptive radiology reports that contain significant information and could be leveraged to pre-train imaging models, which could then serve as starting points for further task-specific fine-tuning. In this work, we perform a head-to-head comparison of three different self-supervised strategies to pre-train the same imaging model on 3D brain computed tomography angiogram (CTA) images, with large vessel occlusion (LVO) detection as the downstream task. These strategies evaluate two natural language processing (NLP) approaches, one that extracts 100 explicit radiology concepts (Rad-SpatialNet) and one that creates general-purpose radiology report embeddings (DistilBERT). In addition, we experiment with learning radiology concepts directly or by using a recent self-supervised learning approach (CLIP) that learns by ranking the distance between language and image vector embeddings. The LVO detection task was selected because it requires 3D imaging data, is clinically important, and requires the algorithm to learn outputs not explicitly stated in the radiology report. Pre-training was performed on an unlabeled dataset containing 1,542 3D CTA-report pairs. The downstream task was tested on a labeled dataset of 402 subjects for LVO detection. We find that pre-training with the CLIP-based strategies improves the performance of the imaging model in detecting LVO compared to a model trained only on the labeled data. The best performance was achieved by pre-training with the explicit radiology concepts and the CLIP strategy.
- Published
- 2023
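One of the strategies compared above uses general-purpose DistilBERT embeddings of the report findings as the language side of the pre-training objective. The sketch below shows mean-pooled DistilBERT embeddings via the Hugging Face transformers library; the example findings sentences are invented for illustration, and the paper's exact text preprocessing is not reproduced.

```python
# Minimal sketch: mean-pooled DistilBERT embeddings for report findings text.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")

findings = [                                           # illustrative example sentences only
    "Occlusion of the left middle cerebral artery M1 segment.",
    "No evidence of large vessel occlusion.",
]
batch = tokenizer(findings, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state          # (batch, tokens, 768)

mask = batch["attention_mask"].unsqueeze(-1)
report_emb = (hidden * mask).sum(1) / mask.sum(1)      # mean pooling over real tokens
print(report_emb.shape)                                # torch.Size([2, 768])
```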
10. Detection of Stroke with Retinal Microvascular Density and Self-Supervised Learning Using OCT-A and Fundus Imaging.
- Author
- Pachade S, Coronado I, Abdelkhaleq R, Yan J, Salazar-Marioni S, Jagolino A, Green C, Bahrainian M, Channa R, Sheth SA, and Giancardo L
- Abstract
Acute cerebral stroke is a leading cause of disability and death, which could be reduced with prompt diagnosis during patient transportation to the hospital. A portable retina imaging system could enable this by measuring vascular information and blood perfusion in the retina and, given the homology between retinal and cerebral vessels, inferring whether a cerebral stroke is underway. However, the feasibility of this strategy, and which imaging features and retina imaging modalities should be used, are not clear. In this work, we show initial evidence of the feasibility of this approach by training machine learning models on retina features extracted from OCT-A and fundus images, through both feature engineering and self-supervised learning, to classify controls and acute stroke patients. Models based on macular microvasculature density features achieved an area under the receiver operating characteristic curve (AUC) of 0.87-0.88. Self-supervised deep learning models generated features resulting in AUCs ranging from 0.66 to 0.81. While further work is needed to fully validate a diagnostic system, these results indicate that microvasculature density features from OCT-A images have the potential to be used to diagnose acute cerebral stroke from the retina.
- Published
- 2022
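The feature-engineering arm described above reduces OCT-A images to macular microvasculature density features and evaluates a classifier by AUC. The sketch below reproduces that pipeline shape on synthetic binary vessel maps; the density definition, central crop, data, and classifier are illustrative assumptions, not the study's protocol.

```python
# Minimal sketch: macular vessel-density feature + logistic regression, scored with AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def vessel_density(vessel_map):
    """Fraction of vessel pixels inside a central (macular) square crop."""
    h, w = vessel_map.shape
    crop = vessel_map[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    return crop.mean()

# Synthetic binary vessel maps: 'patients' slightly less dense than 'controls'.
controls = [(rng.random((64, 64)) < 0.40).astype(float) for _ in range(30)]
patients = [(rng.random((64, 64)) < 0.33).astype(float) for _ in range(30)]

X = np.array([[vessel_density(v)] for v in controls + patients])
y = np.array([0] * 30 + [1] * 30)

clf = LogisticRegression().fit(X, y)
print("AUC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))
```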
11. NENet: Nested EfficientNet and adversarial learning for joint optic disc and cup segmentation.
- Author
- Pachade S, Porwal P, Kokare M, Giancardo L, and Mériaudeau F
- Subjects
- Diagnostic Techniques, Ophthalmological, Fundus Oculi, Humans, Image Processing, Computer-Assisted, Mass Screening, Glaucoma diagnostic imaging, Optic Disk diagnostic imaging
- Abstract
Glaucoma is an ocular disease that can lead to irreversible vision loss. Primary screening for glaucoma involves computation of the optic cup (OC) to optic disc (OD) ratio, a widely accepted metric. Recent deep learning frameworks for OD and OC segmentation have shown promising results and ways to attain remarkable performance. In this paper, we present a novel segmentation network, Nested EfficientNet (NENet), that consists of EfficientNetB4 as an encoder along with a nested network of pre-activated residual blocks, an atrous spatial pyramid pooling (ASPP) block, and attention gates (AGs). A combination of cross-entropy and Dice coefficient (DC) loss is utilized to guide the network toward accurate segmentation. Further, a modified patch-based discriminator is designed for use with NENet to improve the local segmentation details. Three publicly available datasets, REFUGE, Drishti-GS, and RIM-ONE-r3, were utilized to evaluate the performance of the proposed network. In our experiments, NENet outperformed state-of-the-art methods for segmentation of the OD and OC. Additionally, we show that NENet generalizes well across camera types and image resolutions. The obtained results suggest that the proposed technique has the potential to be an important component of an automated glaucoma screening system. Competing Interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. (Copyright © 2021 Elsevier B.V. All rights reserved.)
- Published
- 2021
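The abstract above guides NENet with a combination of cross-entropy and Dice coefficient losses. The sketch below shows one common way to write such a combined objective for two binary masks (optic disc and cup); the exact weighting and formulation used by NENet may differ.

```python
# Minimal sketch: combined binary cross-entropy + Dice loss for disc/cup segmentation.
import torch
import torch.nn.functional as F

def dice_loss(logits, targets, eps=1e-6):
    probs = torch.sigmoid(logits)
    inter = (probs * targets).sum(dim=(2, 3))
    union = probs.sum(dim=(2, 3)) + targets.sum(dim=(2, 3))
    return 1 - ((2 * inter + eps) / (union + eps)).mean()

def segmentation_loss(logits, targets, w_bce=1.0, w_dice=1.0):
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    return w_bce * bce + w_dice * dice_loss(logits, targets)

logits = torch.randn(2, 2, 128, 128)                   # channels: optic disc, optic cup
targets = (torch.rand(2, 2, 128, 128) > 0.5).float()
print(segmentation_loss(logits, targets).item())
```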
12. Towards Stroke Biomarkers on Fundus Retinal Imaging: A Comparison Between Vasculature Embeddings and General Purpose Convolutional Neural Networks.
- Author
- Coronado I, Abdelkhaleq R, Yan J, Marioni SS, Jagolino-Cole A, Channa R, Pachade S, Sheth SA, and Giancardo L
- Subjects
- Biomarkers, Fundus Oculi, Humans, Retina diagnostic imaging, Neural Networks, Computer, Stroke diagnostic imaging
- Abstract
Fundus retinal imaging is an easy-to-acquire modality typically used for monitoring eye health. Current evidence indicates that the retina, and its vasculature in particular, is associated with other disease processes, making it an ideal candidate for biomarker discovery. The development of these biomarkers has typically relied on predefined measurements, which makes the development process slow. Recently, representation learning algorithms such as general-purpose convolutional neural networks or vasculature embeddings have been proposed as an approach to learn imaging biomarkers directly from the data, greatly speeding up their discovery. In this work, we compare and contrast different state-of-the-art retina biomarker discovery methods to identify signs of past stroke in the retinas of a curated patient cohort of 2,472 subjects from the UK Biobank dataset. We investigate two convolutional neural networks previously used in retina biomarker discovery and directly trained on the stroke outcome, and an extension of the vasculature embedding approach, which infers its feature representation from the vasculature and combines the information of retinal images from both eyes. In our experiments, we show that the pipeline based on vasculature embeddings has comparable or better performance than the other methods, with a much more compact feature representation and easier training. Clinical Relevance: This study compares and contrasts three retinal biomarker discovery strategies, using a curated dataset of subject evidence, for the analysis of the retina as a proxy in the assessment of clinical outcomes, such as stroke risk.
- Published
- 2021
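The vasculature-embedding pipeline above combines information from retinal images of both eyes into a compact subject-level representation before predicting the stroke outcome. The sketch below illustrates that combination step with random placeholder embeddings and a logistic-regression classifier; the dimensions and labels are synthetic assumptions.

```python
# Minimal sketch: concatenate per-eye vasculature embeddings and classify the outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_subjects, emb_dim = 100, 64

left_eye = rng.normal(size=(n_subjects, emb_dim))      # vasculature embedding, left eye
right_eye = rng.normal(size=(n_subjects, emb_dim))     # vasculature embedding, right eye
X = np.concatenate([left_eye, right_eye], axis=1)      # subject-level representation
y = rng.integers(0, 2, size=n_subjects)                # past-stroke label (synthetic)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(X.shape, clf.score(X, y))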
13. IDRiD: Diabetic Retinopathy - Segmentation and Grading Challenge.
- Author
- Porwal P, Pachade S, Kokare M, Deshmukh G, Son J, Bae W, Liu L, Wang J, Liu X, Gao L, Wu T, Xiao J, Wang F, Yin B, Wang Y, Danala G, He L, Choi YH, Lee YC, Jung SH, Li Z, Sui X, Wu J, Li X, Zhou T, Toth J, Baran A, Kori A, Chennamsetty SS, Safwan M, Alex V, Lyu X, Cheng L, Chu Q, Li P, Ji X, Zhang S, Shen Y, Dai L, Saha O, Sathish R, Melo T, Araújo T, Harangi B, Sheng B, Fang R, Sheet D, Hajdu A, Zheng Y, Mendonça AM, Zhang S, Campilho A, Zheng B, Shen D, Giancardo L, Quellec G, and Mériaudeau F
- Subjects
- Datasets as Topic, Humans, Pattern Recognition, Automated, Deep Learning, Diabetic Retinopathy diagnostic imaging, Diagnosis, Computer-Assisted methods, Image Interpretation, Computer-Assisted methods, Photography
- Abstract
Diabetic Retinopathy (DR) is the most common cause of avoidable vision loss, predominantly affecting the working-age population across the globe. Screening for DR, coupled with timely consultation and treatment, is a globally trusted policy to avoid vision loss. However, implementation of DR screening programs is challenging due to the scarcity of medical professionals able to screen a growing global diabetic population at risk for DR. Computer-aided disease diagnosis in retinal image analysis could provide a sustainable approach for such a large-scale screening effort. Recent scientific advances in computing capacity and machine learning approaches provide an avenue for biomedical scientists to reach this goal. Aiming to advance the state of the art in automatic DR diagnosis, a grand challenge on "Diabetic Retinopathy - Segmentation and Grading" was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI 2018). In this paper, we report the set-up and results of this challenge, which is primarily based on the Indian Diabetic Retinopathy Image Dataset (IDRiD). There were three principal sub-challenges: lesion segmentation, disease severity grading, and localization and segmentation of retinal landmarks. The multiple tasks in this challenge allow testing the generalizability of algorithms, which distinguishes it from existing challenges. It received a positive response from the scientific community, with 148 submissions from 495 registrations effectively entered. This paper outlines the challenge, its organization, the dataset used, the evaluation methods, and the results of the top-performing participating solutions. The top-performing approaches utilized a blend of clinical information, data augmentation, and an ensemble of models. These findings have the potential to enable new developments in retinal image analysis and image-based DR screening in particular. (Copyright © 2019 Elsevier B.V. All rights reserved.)
- Published
- 2020
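The disease-severity-grading sub-challenge above is a multi-class grading task. As an illustration only, the sketch below scores synthetic DR grades with quadratic-weighted kappa, a metric commonly used for DR grading; it is not necessarily the exact evaluation protocol used by the IDRiD challenge.

```python
# Minimal sketch: scoring synthetic DR grades (0-4) with quadratic-weighted kappa.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
true_grades = rng.integers(0, 5, size=200)                     # ground-truth DR grades 0-4
pred_grades = np.clip(true_grades + rng.integers(-1, 2, size=200), 0, 4)   # noisy predictions

print(cohen_kappa_score(true_grades, pred_grades, weights="quadratic"))
```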
14. Retinal image analysis for disease screening through local tetra patterns.
- Author
- Porwal P, Pachade S, Kokare M, Giancardo L, and Mériaudeau F
- Subjects
- Algorithms, Bayes Theorem, Diabetic Retinopathy diagnostic imaging, Fundus Oculi, Humans, Image Interpretation, Computer-Assisted methods, Macular Degeneration diagnostic imaging, Neural Networks, Computer, Optic Disk diagnostic imaging, Pattern Recognition, Automated, ROC Curve, Regression Analysis, Support Vector Machine, Vision, Ocular, Image Processing, Computer-Assisted methods, Retina diagnostic imaging
- Abstract
Age-related Macular Degeneration (AMD) and Diabetic Retinopathy (DR) are the most prevalent diseases responsible for visual impairment in the world. This work investigates the discrimination potential of the texture of color fundus images to distinguish between diseased and healthy cases while avoiding the prior lesion segmentation step. It presents a retinal background characterization approach and explores the potential of Local Tetra Patterns (LTrP) for texture classification of AMD, DR, and normal images. Five different experiments distinguishing between DR-normal, AMD-normal, DR-AMD, pathological-normal, and AMD-DR-normal cases were conducted and validated using the proposed approach, with promising results. For all five experiments, different classifiers, namely AdaBoost, C4.5, logistic regression, naive Bayes, neural network, random forest, and support vector machine, were tested. We experimented with three public datasets: ARIA, STARE, and E-Optha. Further, the performance of LTrP is compared with other texture descriptors, such as local phase quantization, local binary pattern, and local derivative pattern. In all cases, the proposed method obtained area under the receiver operating characteristic curve and F-score values higher than 0.78 and 0.746, respectively. Both performance measures exceeded 0.995 for DR and AMD detection using a random forest classifier. The obtained results suggest that the proposed technique can discriminate retinal disease using texture information and has the potential to be an important component of an automated screening solution for retinal images. (Copyright © 2018 Elsevier Ltd. All rights reserved.)
- Published
- 2018
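The pipeline above builds texture descriptors of the retinal background and feeds them to standard classifiers such as a random forest. The sketch below follows the same shape using a local binary pattern histogram from scikit-image as a simpler stand-in for Local Tetra Patterns, on synthetic patches; LTrP itself is not implemented here.

```python
# Minimal sketch: texture-histogram features + random forest, evaluated with AUC.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

def texture_histogram(gray, P=8, R=1.0):
    """Normalized histogram of uniform local binary patterns (stand-in descriptor)."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Synthetic grayscale patches standing in for fundus backgrounds of two classes.
normal = [(rng.random((64, 64)) * 255).astype(np.uint8) for _ in range(40)]
diseased = [((rng.random((64, 64)) ** 2) * 255).astype(np.uint8) for _ in range(40)]

X = np.array([texture_histogram(img) for img in normal + diseased])
y = np.array([0] * 40 + [1] * 40)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("AUC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))
```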