13 results on '"Imchen, Tsusennaro"'
Search Results
2. Field validation of deep learning based Point-of-Care device for early detection of oral malignant and potentially malignant disorders
- Author
-
Birur N., Praveen, Song, Bofan, Sunny, Sumsum P, G., Keerthi, Mendonca, Pramila, Mukhia, Nirza, Li, Shaobai, Patrick, Sanjana, G., Shubha, A.R., Subhashini, Imchen, Tsusennaro, Leivon, Shirley T, Kolur, Trupti, Shetty, Vivek, R., Vidya Bhushan, Vaibhavi, Daksha, Rajeev, Surya, Pednekar, Sneha, Banik, Ankita Dutta, Ramesh, Rohan Michael, Pillai, Vijay, O.S., Kathryn, Smith, Petra Wilder, Sigamani, Alben, Suresh, Amritha, Liang, Rongguang, and Kuriakose, Moni A
- Subjects
Data Management and Data Science, Information and Computing Sciences, Biomedical and Clinical Sciences, Prevention, Clinical Research, Cancer, Telehealth, Rare Diseases, Networking and Information Technology R&D (NITRD), Bioengineering, Machine Learning and Artificial Intelligence, Digestive Diseases, Health Disparities, Health Services, Clinical Trials and Supportive Activities, Dental/Oral and Craniofacial Disease, 4.2 Evaluation of markers and technologies, 4.1 Discovery and preclinical testing of markers and technologies, Good Health and Well Being, Cell Phone, Deep Learning, Early Detection of Cancer, Humans, Mouth Neoplasms, Point-of-Care Systems, Telemedicine
- Abstract
Early detection of oral cancer in low-resource settings requires a point-of-care screening tool that empowers Frontline Health Workers (FHWs). This study validated the accuracy of a convolutional neural network (CNN)-enabled mHealth device deployed with FHWs for delineating suspicious oral lesions (malignant and potentially malignant disorders). The effectiveness of the device was tested in tertiary-care hospitals and low-resource settings in India. Subjects were screened independently, either by FHWs alone or together with specialists, and all subjects were also remotely evaluated by oral cancer specialists. The program screened 5025 subjects (32,128 images), with 95% (n = 4728) receiving a telediagnosis. Among the 16% (n = 752) assessed by onsite specialists, 20% (n = 102) underwent biopsy. A simple CNN was integrated into the mobile phone and a complex CNN into the cloud. Onsite specialist diagnosis showed high sensitivity (94%) compared with histology, while telediagnosis showed high accuracy compared with onsite specialists (sensitivity: 95%; specificity: 84%). FHWs, however, identified suspicious lesions with lower sensitivity (60%) compared with telediagnosis. The phone-integrated CNN (MobileNet) accurately delineated lesions (n = 1416; sensitivity: 82%), and the cloud-based CNN (VGG19) had higher accuracy (sensitivity: 87%), with telediagnosis as the reference standard. These results suggest that an automated, mHealth-enabled, dual-image system is a useful triaging tool that empowers FHWs for oral cancer screening in low-resource settings.
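The evaluation above compares each reader or model against a reference standard via sensitivity and specificity. A minimal, self-contained sketch of that computation; the 0/1 labels below are hypothetical, not the study's data:

```python
# Illustrative sensitivity/specificity against a reference standard
# (e.g., telediagnosis). Label 1 = suspicious lesion, 0 = normal/benign.

def sensitivity_specificity(predictions, reference):
    """Both inputs are equal-length lists of 0/1 labels."""
    tp = sum(1 for p, r in zip(predictions, reference) if p == 1 and r == 1)
    tn = sum(1 for p, r in zip(predictions, reference) if p == 0 and r == 0)
    fp = sum(1 for p, r in zip(predictions, reference) if p == 1 and r == 0)
    fn = sum(1 for p, r in zip(predictions, reference) if p == 0 and r == 1)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # true-positive rate
    specificity = tn / (tn + fp) if tn + fp else 0.0  # true-negative rate
    return sensitivity, specificity

preds     = [1, 1, 1, 0, 0, 0, 1, 0]  # hypothetical screener calls
reference = [1, 1, 0, 0, 0, 1, 1, 0]  # hypothetical reference standard
sens, spec = sensitivity_specificity(preds, reference)
print(sens, spec)  # 0.75 0.75
```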
- Published
- 2022
3. Interpretable deep learning approach for oral cancer classification using guided attention inference network
- Author
-
Figueroa, Kevin Chew, Song, Bofan, Sunny, Sumsum, Li, Shaobai, Gurushanth, Keerthi, Mendonca, Pramila, Mukhia, Nirza, Patrick, Sanjana, Gurudath, Shubha, Raghavan, Subhashini, Imchen, Tsusennaro, Leivon, Shirley T, Kolur, Trupti, Shetty, Vivek, Bushan, Vidya, Ramesh, Rohan, Pillai, Vijay, Wilder-Smith, Petra, Sigamani, Alben, Suresh, Amritha, Kuriakose, Moni Abraham, Birur, Praveen, and Liang, Rongguang
- Subjects
Biomedical and Clinical Sciences, Engineering, Biomedical Engineering, Physical Sciences, Ophthalmology and Optometry, Atomic, Molecular and Optical Physics, Networking and Information Technology R&D (NITRD), Bioengineering, Machine Learning and Artificial Intelligence, Cancer, Attention, Deep Learning, Humans, Mouth Neoplasms, Neural Networks, Computer, Reproducibility of Results, oral cancer, interpretable deep learning, guided attention inference network, Optical Physics, Optics
- Abstract
Significance: Convolutional neural networks (CNNs) show potential for automated classification of cancer lesions. However, their lack of interpretability and explainability limits how well their predictions can be understood. Furthermore, a CNN may incorrectly concentrate on areas surrounding the salient object rather than focusing its attention directly on the object to be recognized, since the network has no incentive to focus solely on the correct subjects to be detected. This inhibits the reliability of CNNs, especially in biomedical applications. Aim: Develop a deep-learning training approach that makes predictions understandable and directly guides the network to concentrate its attention on, and accurately delineate, cancerous regions of the image. Approach: We utilized Selvaraju et al.'s gradient-weighted class activation mapping (Grad-CAM) to inject interpretability and explainability into CNNs. We adopted a two-stage training process with data augmentation techniques and Li et al.'s guided attention inference network (GAIN) to train on images captured using our customized mobile oral screening devices. The GAIN architecture consists of three training streams: a classification stream, an attention mining stream, and a bounding box stream. By adopting the GAIN training architecture, we jointly optimized the classification and segmentation accuracy of our CNN, treating the attention maps as reliable priors to develop maps with more complete and accurate segmentation. Results: The network's attention map helps us actively understand what the network is focusing on during its decision-making process. The results also show that the proposed method guides the trained network to highlight and focus its attention on the correct lesion areas in the images when making a decision, rather than on seemingly relevant but incorrect regions. Conclusions: We demonstrate the effectiveness of our approach for more interpretable and reliable classification of oral potentially malignant and malignant lesions.
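The approach builds on gradient-weighted class activation mapping (Grad-CAM). A minimal NumPy sketch of the core Grad-CAM computation, assuming the activations and gradients of a chosen convolutional layer have already been extracted; the synthetic arrays are illustrative only, not the authors' pipeline:

```python
import numpy as np

def grad_cam(activations, gradients):
    """activations, gradients: (C, H, W) arrays for a target class at a
    chosen conv layer. Returns an (H, W) class activation map in [0, 1]."""
    weights = gradients.mean(axis=(1, 2))             # global-average-pool the gradients -> (C,)
    cam = np.tensordot(weights, activations, axes=1)  # channel-weighted sum -> (H, W)
    cam = np.maximum(cam, 0.0)                        # ReLU: keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize for visualization
    return cam

# Synthetic check: channel 0 (positive gradient) has a hot spot at (1, 2),
# channel 1 (negative gradient) at (3, 3); the map should peak at (1, 2).
acts = np.zeros((2, 4, 4)); acts[0, 1, 2] = 5.0; acts[1, 3, 3] = 5.0
grads = np.zeros((2, 4, 4)); grads[0] = 1.0; grads[1] = -1.0
cam = grad_cam(acts, grads)
```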
- Published
- 2022
4. Mobile-based oral cancer classification for point-of-care screening
- Author
-
Song, Bofan, Sunny, Sumsum, Li, Shaobai, Gurushanth, Keerthi, Mendonca, Pramila, Mukhia, Nirza, Patrick, Sanjana, Gurudath, Shubha, Raghavan, Subhashini, Imchen, Tsusennaro, Leivon, Shirley T, Kolur, Trupti, Shetty, Vivek, Bushan, Vidya, Ramesh, Rohan, Lima, Natzem, Pillai, Vijay, Wilder-Smith, Petra, Sigamani, Alben, Suresh, Amritha, Kuriakose, Moni A, Birur, Praveen, and Liang, Rongguang
- Subjects
Biomedical and Clinical Sciences, Ophthalmology and Optometry, Health Services, Dental/Oral and Craniofacial Disease, Prevention, Clinical Research, Cancer, Bioengineering, Detection, screening and diagnosis, 4.1 Discovery and preclinical testing of markers and technologies, 4.2 Evaluation of markers and technologies, Early Detection of Cancer, Humans, Mouth Neoplasms, Point-of-Care Systems, Sensitivity and Specificity, Smartphone, oral cancer, mobile screening device, dual-modality, efficient deep learning, Optical Physics, Biomedical Engineering, Optics, Atomic, Molecular and Optical Physics
- Abstract
Significance: Oral cancer is among the most common cancers globally, especially in low- and middle-income countries. Early detection is the most effective way to reduce the mortality rate. Deep learning-based cancer image classification models usually need to be hosted on a computing server, but internet connections are unreliable for screening in low-resource settings. Aim: To develop a mobile-based dual-mode image classification method and customized Android application for point-of-care oral cancer detection. Approach: The dataset used in our study was captured from 5025 patients with our customized dual-modality mobile oral screening devices. We trained an efficient MobileNet network with focal loss and converted the model into TensorFlow Lite format. The finalized lite-format model is ∼16.3 MB and well suited to operation on a smartphone platform. We developed an easy-to-use Android application that implements the mobile-based dual-modality image classification approach to distinguish oral potentially malignant and malignant images from normal/benign images. Results: We investigated accuracy and running speed on a cost-effective smartphone computing platform. It takes ∼300 ms to process one image pair on a Moto G5 Android smartphone. We tested the proposed method on a standalone dataset and achieved 81% accuracy for distinguishing normal/benign lesions from clinically suspicious lesions, using a gold standard of clinical impression based on the review of images by oral specialists. Conclusions: Our study demonstrates the effectiveness of a mobile-based approach for oral cancer screening in low-resource settings.
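The abstract mentions training MobileNet with focal loss, which down-weights easy, well-classified examples so that the rare suspicious class dominates the gradient. A minimal sketch of the binary focal loss with common defaults (gamma = 2, alpha = 0.25); this is an illustration of the loss function, not the authors' exact implementation:

```python
import math

def focal_loss(p_correct, gamma=2.0, alpha=0.25):
    """Focal loss for one example, given the model's predicted probability
    of the true class. gamma > 0 shrinks the loss of easy examples."""
    return -alpha * (1.0 - p_correct) ** gamma * math.log(p_correct)

# An easy example (p = 0.9) contributes far less loss than a hard one
# (p = 0.1), which is the point of focal loss on imbalanced screening data.
easy, hard = focal_loss(0.9), focal_loss(0.1)
print(easy < hard)  # True
```

With gamma = 0 the modulating factor vanishes and the expression reduces to an alpha-weighted cross-entropy, which is a handy sanity check.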
- Published
- 2021
5. Inter-observer agreement among specialists in the diagnosis of Oral Potentially Malignant Disorders and Oral Cancer using Store-and-Forward technology
- Author
-
Gurushanth, Keerthi, Mukhia, Nirza, Sunny, Sumsum P, Song, Bofan, Raghavan, Shubhasini A, Gurudath, Shubha, Mendonca, Pramila, Li, Shaobai, Patrick, Sanjana, Imchen, Tsusennaro, Leivon, Shirley T., Shruti, Tulika, Kolur, Trupti, Shetty, Vivek, R, Vidya Bhushan, Ramesh, Rohan Michael, Pillai, Vijay, S, Kathryn O., Smith, Petra Wilder, Suresh, Amritha, Liang, Rongguang, N, Praveen Birur, and Kuriakose, Moni A.
- Published
- 2023
6. Interpretable and Reliable Oral Cancer Classifier with Attention Mechanism and Expert Knowledge Embedding via Attention Map
- Author
-
Bofan Song, Chicheng Zhang, Sumsum Sunny, Dharma Raj KC, Shaobai Li, Keerthi Gurushanth, Pramila Mendonca, Nirza Mukhia, Sanjana Patrick, Shubha Gurudath, Subhashini Raghavan, Imchen Tsusennaro, Shirley T. Leivon, Trupti Kolur, Vivek Shetty, Vidya Bushan, Rohan Ramesh, Vijay Pillai, Petra Wilder-Smith, Amritha Suresh, Moni Abraham Kuriakose, Praveen Birur, and Rongguang Liang
- Subjects
attention branch network, Cancer Research, Oncology, attention map, Behavioral and Social Science, Oncology and Carcinogenesis, human-in-the-loop deep learning, attention mechanism, Basic Behavioral and Social Science, expert knowledge embedding, Cancer, visual explanation
- Abstract
Convolutional neural networks (CNNs) have demonstrated excellent performance in oral cancer detection and classification. However, the end-to-end learning strategy makes CNNs hard to interpret, and it can be challenging to fully understand their decision-making procedure. Reliability is also a significant challenge for CNN-based approaches. In this study, we proposed a neural network, the attention branch network (ABN), which combines visual explanation and attention mechanisms to improve recognition performance and interpret decision-making simultaneously. We also embedded expert knowledge into the network by having human experts manually edit the attention maps used by the attention mechanism. Our experiments show that the ABN performs better than the original baseline network, and introducing Squeeze-and-Excitation (SE) blocks increased cross-validation accuracy further. Furthermore, we observed that some previously misclassified cases were correctly recognized after the attention maps were manually edited. Cross-validation accuracy increased from 0.846 to 0.875 with the ABN (ResNet18 as baseline), to 0.877 with SE-ABN, and to 0.903 after embedding expert knowledge. The proposed method provides an accurate, interpretable, and reliable oral cancer computer-aided diagnosis system through visual explanation, attention mechanisms, and expert knowledge embedding.
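The Squeeze-and-Excitation (SE) block mentioned above recalibrates channels by gating each one with a weight learned from a global summary of the feature map. A minimal NumPy sketch of the forward pass; the weight matrices here are untrained placeholders, purely to show the wiring:

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """Squeeze-and-Excitation on a (C, H, W) feature map.
    w1: (C_reduced, C) reduction weights; w2: (C, C_reduced) expansion weights."""
    z = feature_map.mean(axis=(1, 2))            # squeeze: global average pool -> (C,)
    s = np.maximum(w1 @ z, 0.0)                  # excitation: FC + ReLU
    gates = 1.0 / (1.0 + np.exp(-(w2 @ s)))      # FC + sigmoid -> per-channel gates in (0, 1)
    return feature_map * gates[:, None, None]    # recalibrate each channel

# With all-zero weights every gate is sigmoid(0) = 0.5, so each channel is
# simply halved; a quick sanity check that the shapes and wiring are right.
fm = np.arange(24, dtype=float).reshape(4, 3, 2)
out = squeeze_excite(fm, np.zeros((2, 4)), np.zeros((4, 2)))
```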
- Published
- 2023
7. Exploring uncertainty measures in convolutional neural network for semantic segmentation of oral cancer images
- Author
-
Bofan Song, Shaobai Li, Sumsum Sunny, Keerthi Gurushanth, Pramila Mendonca, Nirza Mukhia, Sanjana Patrick, Tyler Peterson, Shubha Gurudath, Subhashini Raghavan, Imchen Tsusennaro, Shirley T. Leivon, Trupti Kolur, Vivek Shetty, Vidya Bushan, Rohan Ramesh, Vijay Pillai, Petra Wilder-Smith, Amritha Suresh, Moni Abraham Kuriakose, Praveen Birur, and Rongguang Liang
- Subjects
uncertainty measures of deep learning, Biomedical Engineering, Optical Physics, Biomaterials, Bayesian deep learning, Ophthalmology and Optometry, Image Processing, Computer-Assisted, Humans, Dental/Oral and Craniofacial Disease, Cancer, Monte Carlo dropout, Uncertainty, Reproducibility of Results, Bayes Theorem, Optics, oral cancer, Atomic and Molecular Physics, and Optics, semantic segmentation, Electronic, Optical and Magnetic Materials, Semantics, Mouth Neoplasms, Neural Networks, Computer
- Abstract
Significance: Oral cancer is one of the most prevalent cancers, especially in middle- and low-income countries such as India. Automatic segmentation of oral cancer images can improve the diagnostic workflow and is a significant task in oral cancer image analysis. Despite the remarkable success of deep-learning networks in medical segmentation, they rarely provide uncertainty quantification for their output. Aim: To estimate uncertainty in a deep-learning approach to semantic segmentation of oral cancer images and to improve the accuracy and reliability of predictions. Approach: This work introduced a UNet-based Bayesian deep-learning (BDL) model to segment potentially malignant and malignant lesion areas in the oral cavity; the model can quantify uncertainty in its predictions. We also developed an efficient variant that is almost six times smaller and two times faster at inference than the original UNet. The dataset in this study was collected using our customized screening platform and was annotated by oral oncology specialists. Results: The proposed approach achieved good segmentation performance as well as good uncertainty estimation performance. In the experiments, we observed an improvement in pixel accuracy and mean intersection over union when uncertain pixels were removed, reflecting that the model was less accurate in uncertain areas that may need more attention and further inspection. The experiments also showed that, with some performance compromises, the efficient model reduced computation time and model size, expanding the potential for implementation on portable devices used in resource-limited settings. Conclusions: Our study demonstrates that the UNet-based BDL model can not only segment potentially malignant and malignant oral lesions but also provide informative pixel-level uncertainty estimation. With this extra uncertainty information, the accuracy and reliability of the model's predictions can be improved.
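The core idea of Monte Carlo dropout is to keep dropout active at inference and read per-pixel uncertainty from the disagreement across stochastic passes. The toy sketch below applies dropout directly to a logit map rather than inside a real UNet, purely for illustration of the mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(logit_map, n_samples=50, drop=0.5):
    """Toy Monte Carlo dropout for per-pixel binary segmentation.
    logit_map: (H, W) logits. Returns (mean probability, predictive entropy)."""
    passes = []
    for _ in range(n_samples):
        mask = rng.random(logit_map.shape) >= drop     # Bernoulli dropout mask
        dropped = logit_map * mask / (1.0 - drop)      # inverted-dropout scaling
        passes.append(1.0 / (1.0 + np.exp(-dropped)))  # sigmoid probability
    mean = np.stack(passes).mean(axis=0)               # averaged prediction
    eps = 1e-12                                        # guard against log(0)
    entropy = -(mean * np.log(mean + eps)
                + (1 - mean) * np.log(1 - mean + eps)) # predictive entropy per pixel
    return mean, entropy

# One confident lesion pixel (large logit) amid ambiguous background (logit 0):
logits = np.zeros((2, 2))
logits[0, 0] = 8.0
mean, entropy = mc_dropout_predict(logits)
```

Thresholding the entropy map is one way to "remove uncertain pixels", as the abstract describes, before computing pixel accuracy.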
- Published
- 2022
8. Classification of imbalanced oral cancer image data from high-risk population
- Author
-
Shubha Gurudath, Shirley T Leivon, Praveen Birur, Rohan Ramesh, Vivek Shetty, Nirza Mukhia, Vidya Bushan, Alben Sigamani, Pramila Mendonca, Imchen Tsusennaro, Subhashini Raghavan, Petra Wilder-Smith, Sumsum P. Sunny, Moni Abraham Kuriakose, Trupti Kolur, Sanjana Patrick, Keerthi Gurushanth, Rongguang Liang, Shaobai Li, Amritha Suresh, Bofan Song, Tyler Peterson, and Vijay Pillai
- Subjects
Neural Networks, Computer, Computer science, Population, Biomedical Engineering, mobile screening device, Bioengineering, Optical Physics, Machine learning, Biomaterials, Machine Learning, imbalanced multi-class datasets, Rare Diseases, Disease Screening, Clinical Research, Ophthalmology and Optometry, Humans, Dental/Oral and Craniofacial Disease, Early Detection of Cancer, Cancer, Artificial neural network, Contextual image classification, Deep learning, Prevention, deep learning, Optics, oral cancer, Health Services, Ensemble learning, Atomic and Molecular Physics, and Optics, Electronic, Optical and Magnetic Materials, ensemble learning, Mouth Neoplasms, Artificial intelligence, Algorithms
- Abstract
Significance: Early detection of oral cancer is vital for high-risk patients, and machine learning-based automatic classification is ideal for disease screening. However, current datasets collected from high-risk populations are imbalanced, which often has detrimental effects on classification performance. Aim: To reduce the class bias caused by data imbalance. Approach: We collected 3851 polarized white-light cheek mucosa images using our customized oral cancer screening device. We used weight balancing, data augmentation, undersampling, focal loss, and ensemble methods to improve neural-network performance on oral cancer image classification with the imbalanced multi-class datasets captured from high-risk populations during oral cancer screening in low-resource settings. Results: By applying both data-level and algorithm-level approaches to the deep learning training process, the performance of the minority classes, which were initially difficult to distinguish, improved. The accuracy of the "premalignancy" class also increased, which is ideal for screening applications. Conclusions: Experimental results show that the class bias induced by imbalanced oral cancer image datasets can be reduced using both data- and algorithm-level methods. Our study may provide an important basis for understanding the influence of imbalanced datasets on oral cancer deep learning classifiers and how to mitigate it.
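One of the algorithm-level remedies named above is weight balancing. A common recipe, sketched here with hypothetical class counts rather than the study's data, assigns each class a loss weight inversely proportional to its frequency, normalized so the sample-weighted average is 1:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class loss weights inversely proportional to class frequency,
    normalized so that the average weight over all samples equals 1."""
    counts = Counter(labels)
    n_classes = len(counts)
    total = len(labels)
    return {c: total / (n_classes * counts[c]) for c in counts}

# Hypothetical screening labels: 'normal' dominates, 'premalignant' is rare.
labels = ['normal'] * 90 + ['benign'] * 7 + ['premalignant'] * 3
w = inverse_frequency_weights(labels)
print(w['premalignant'] > w['normal'])  # True: the rare class gets the larger weight
```

Multiplying each sample's loss by its class weight makes the minority classes contribute as much total gradient as the majority class, which is the effect the abstract attributes to weight balancing.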
- Published
- 2021
9. Bayesian deep learning for reliable oral cancer image classification
- Author
-
Praveen Birur, Rohan Ramesh, Vidya Bushan, Keerthi Gurushanth, Vijay Pillai, Rongguang Liang, Vivek Shetty, Alben Sigamani, Imchen Tsusennaro, Moni Abraham Kuriakose, Petra Wilder-Smith, Amritha Suresh, Shubha Gurudath, Pramila Mendonca, Bofan Song, Tyler Peterson, Sumsum P. Sunny, Shirley T Leivon, Nirza Mukhia, Shaobai Li, Subhashini Raghavan, Trupti Kolur, and Sanjana Patrick
- Subjects
Computer science, Image quality, Bayesian probability, Population, Optical Physics, Machine learning, Basic Behavioral and Social Science, Rare Diseases, Behavioral and Social Science, Medical imaging, Dental/Oral and Craniofacial Disease, Reliability (statistics), Cancer, Artificial neural network, Contextual image classification, Deep learning, Materials Engineering, Atomic and Molecular Physics, and Optics, Artificial intelligence, Biotechnology
- Abstract
In medical imaging, deep learning-based solutions have achieved state-of-the-art performance. However, reliability restricts the integration of deep learning into practical medical workflows, since conventional deep learning frameworks cannot quantitatively assess model uncertainty. In this work, we address this shortcoming with a Bayesian deep network capable of estimating uncertainty, and use it to assess the reliability of oral cancer image classification. We evaluate the model on a large intraoral cheek mucosa image dataset captured with our customized device from a high-risk population and show that meaningful uncertainty information can be produced. In addition, our experiments show improved accuracy through uncertainty-informed referral: the accuracy on retained data reaches roughly 90% when referring either 10% of all cases or all cases whose uncertainty value is greater than 0.3. Performance can be improved further by referring more patients. The experiments show the model is capable of identifying difficult cases that need further inspection.
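Uncertainty-informed referral, as described above, retains the most-confident cases and sends the rest for further inspection. A minimal sketch with made-up cases (the 10% referral fraction mirrors the abstract; the correctness flags and uncertainty values are hypothetical):

```python
def referral_accuracy(correct, uncertainty, refer_fraction=0.10):
    """Refer the most-uncertain fraction of cases to a specialist and
    report accuracy on the retained (non-referred) cases.
    correct: list of bools; uncertainty: matching list of floats."""
    order = sorted(range(len(correct)), key=lambda i: uncertainty[i])
    keep = order[: int(len(correct) * (1.0 - refer_fraction))]  # most confident first
    return sum(correct[i] for i in keep) / len(keep)

# Ten hypothetical cases: the two misclassified ones carry high uncertainty,
# so referring the most-uncertain cases raises retained accuracy.
correct = [True] * 8 + [False] * 2
unc = [0.1] * 8 + [0.9, 0.95]
print(referral_accuracy(correct, unc, 0.20))  # 1.0
```

The sketch only works as intended when uncertainty correlates with error, which is exactly the property the abstract's experiments demonstrate.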
- Published
- 2021
10. mHealth-Based Point-Of-Care Diagnostic Tool for Early Detection of Oral Cancer and Pre-Cancer Lesions in a Low-Resource Setting
- Author
-
N, Praveen Birur, Song, Bofan, Sunny, Sumsum P., G, Keerthi, Mendonca, Pramila, Mukhia, Nirza, Li, Shaobai, Patrick, Sanjana, G, Shubha, AR, Shubhashini, Imchen, Tsusennaro, Leivon, Shirley T., Kolur, Trupti, Shetty, Vivek, R, Vidya Bushan, Vaibhavi, Daksha, Rajeev, Surya, Pednekar, Sneha, Banik, Ankita Dutta, Ramesh, Rohan Michael, Pillai, Vijay, Osann, Kathryn, Smith, Petra Wilder, Sigamani, Alben, Suresh, Amritha, Liang, Rongguang, and Kuriakose, Moni A.
- Published
- 2021
11. Study of biomedical waste management among healthcare personnel at a Tertiary hospital in Lucknow district
- Author
-
Imchen, Tsusennaro, Kumari, Reema, Singh, J. V., Srivastava, Kirti, and Singh, Anshita
- Published
- 2017
12. Housing and sanitary conditions in slums of Lucknow, capital of Uttar Pradesh
- Author
-
Shukla, Mukesh, Agarwal, Monika, Rehman, Hossain, Yadav, Kriti, and Imchen, Tsusennaro
- Published
- 2016
13. Inter-observer agreement among specialists in the diagnosis of Oral Potentially Malignant Disorders and Oral Cancer using Store-and-Forward technology.
- Author
-
Gurushanth K, Mukhia N, Sunny SP, Song B, Raghavan SA, Gurudath S, Mendonca P, Li S, Patrick S, Imchen T, Leivon ST, Shruti T, Kolur T, Shetty V, Bhushan R V, Ramesh RM, Pillai V, S KO, Smith PW, Suresh A, Liang R, Birur N P, and Kuriakose MA
- Abstract
Oral cancer is one of the most common causes of morbidity and mortality. A screening and mobile health (mHealth)-based approach facilitates remote early detection of oral cancer in resource-constrained settings. Emerging eHealth technology has extended specialist reach to rural areas, enabling remote monitoring and triaging to downstage oral cancer. Although the diagnostic accuracy of remote specialists has been evaluated, to the best of our knowledge there are no studies evaluating the consistency among remote specialists. The purpose of this study was to evaluate interobserver agreement between specialists diagnosing through telemedicine systems in real-world settings using store-and-forward technology. Two remote specialists independently diagnosed clinical images from image repositories, and their diagnostic accuracy was compared with the onsite specialist and with histopathological diagnosis when available. There was moderate agreement between the two remote specialists (k = 0.682) and between the onsite specialist and the two remote specialists (k = 0.629) in diagnosing oral lesions. Compared to histopathology, the sensitivity and specificity of remote specialist 1 were 92.7% and 83.3%, and those of remote specialist 2 were 95.8% and 60%, respectively. Store-and-forward technology and telecare can be effective tools for triaging and surveillance of patients.
Competing Interests: The authors declare no conflict of interest.
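The agreement values reported above (k = 0.682, k = 0.629) are Cohen's kappa, which corrects the observed agreement between two raters for the agreement expected by chance. A minimal sketch of the two-rater computation; the labels below are hypothetical, not the study's diagnoses:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same cases (any label set).
    Undefined (division by zero) if chance agreement is exactly 1."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_observed = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n
    p_expected = sum(                                  # chance agreement from
        (rater_a.count(l) / n) * (rater_b.count(l) / n)  # marginal label rates
        for l in labels
    )
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical diagnoses ('m' = malignant, 'b' = benign) from two raters:
kappa = cohens_kappa(['m', 'm', 'b', 'b'], ['m', 'b', 'b', 'b'])
print(kappa)  # 0.5
```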
- Published
- 2023
Discovery Service for Jio Institute Digital Library