5 results for "Shi, Siyuan"
Search Results
2. Diagnostic Performance of Deep Learning in Video-Based Ultrasonography for Breast Cancer: A Retrospective Multicentre Study.
- Author: Chen, Jing; Huang, Zhibin; Jiang, Yitao; Wu, Huaiyu; Tian, Hongtian; Cui, Chen; Shi, Siyuan; Tang, Shuzhen; Xu, Jinfeng; Xu, Dong; and Dong, Fajin
- Subjects: BREAST ultrasound; DEEP learning; BREAST cancer; RESOURCE-limited settings; SIGNAL convolution; CANCER diagnosis
- Abstract:
Although ultrasound is a common tool for breast cancer screening, its accuracy is often operator-dependent. In this study, we proposed a new automated deep-learning framework that analyses video-based ultrasound data for breast cancer screening. The framework incorporates DenseNet121, MobileNet, and Xception as backbones for both the video- and image-based models. We used data from 3907 female patients aged 22 to 86 years to train and evaluate the models, which were tested with video- and image-based methods as well as reader studies with human experts. The MobileNet video model achieved an AUROC of 0.961 on prospective test data, surpassing the DenseNet121 video model. On real-world test data it reached an accuracy of 92.59%, outperforming the DenseNet121 and Xception video models and exceeding the 76.00% to 85.60% accuracy range of the human experts. The MobileNet video model also exceeded the image models and the other video models on every evaluation metric, including accuracy, sensitivity, specificity, F1 score, and AUC, and the video models as a group reached a higher level of expertise than the image-based models. This strong performance, particularly in resource-limited clinical settings, demonstrates the framework's potential for clinical application: a video-based artificial intelligence framework of this kind may aid breast cancer diagnosis and help alleviate the shortage of experienced experts. [ABSTRACT FROM AUTHOR]
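To make the video-level modelling concrete, below is a minimal sketch (PyTorch, assuming a torchvision MobileNetV2 backbone; not the authors' released code) in which each frame of an ultrasound clip is run through the MobileNet feature extractor, the frame features are averaged over time, and a linear head yields a two-class prediction. The clip shape, the temporal average pooling, and the two-class head are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): a video-level breast-ultrasound
# classifier that reuses a MobileNetV2 backbone per frame and averages the
# frame features over time before a two-class (benign/malignant) head.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class VideoMobileNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = mobilenet_v2(weights=None)           # load pretrained weights in practice
        self.features = backbone.features               # frame-level feature extractor
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(1280, num_classes)  # 1280 = MobileNetV2 feature width

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, frames, 3, H, W)
        b, t, c, h, w = clips.shape
        x = self.features(clips.reshape(b * t, c, h, w))
        x = self.pool(x).flatten(1).reshape(b, t, -1)   # per-frame feature vectors
        return self.classifier(x.mean(dim=1))           # temporal average pooling

if __name__ == "__main__":
    model = VideoMobileNet()
    dummy = torch.randn(2, 8, 3, 224, 224)              # 2 clips of 8 frames each
    print(model(dummy).shape)                           # torch.Size([2, 2])
```

Average pooling is only the simplest way to aggregate frames; the published framework may sample and fuse frames differently, and DenseNet121 or Xception can be swapped in as the backbone in the same way.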
- Published: 2024
3. How much can AI see in early pregnancy: A multi‐center study of fetus head characterization in week 10–14 in ultrasound using deep learning.
- Author: Lin, Qi; Zhou, Yuli; Shi, Siyuan; Zhang, Yujuan; Yin, Shaoli; Liu, Xuye; Peng, Qihui; Huang, Shaoting; Jiang, Yitao; Cui, Chen; She, Ruilian; Xu, Jinfeng; and Dong, Fajin
- Subjects: FETAL ultrasonic imaging; DEEP learning; FETUS; NASAL bone; ULTRASONIC imaging; COMPUTER-assisted image analysis (Medicine); PREGNANCY tests
- Abstract:
• A novel Fetus Framework that automatically selects standard images from fetal ultrasound screening is proposed.
• The Fetus Framework outperforms human experts with 1, 3, and 5 years of ultrasound training on the standard/non-standard image classification task.
• A novel 'divide-and-conquer' principle is applied in the Fetus Framework to improve the detection of key structures of the fetal head.
• The superior generalization capacity of the Fetus Framework over classic CNN models was confirmed on an external test set.
The aims were to investigate whether artificial intelligence can identify fetal intracranial structures in pregnancy weeks 11–14 and to provide an automated method for standard and non-standard sagittal-view classification in obstetric ultrasound examinations. We proposed a newly designed deep learning (DL) scheme, the Fetus Framework, to identify nine fetal intracranial structures: thalami, midbrain, palate, 4th ventricle, cisterna magna, nuchal translucency (NT), nasal tip, nasal skin, and nasal bone. The Fetus Framework was trained and tested on a dataset of 1528 2D sagittal-view ultrasound images from 1519 females collected at Shenzhen People's Hospital. Results from the Fetus Framework were further used for standard/non-standard (S-NS) plane classification, a key step for NT measurement and Down syndrome assessment. S-NS classification was also tested with 156 images from the Longhua branch of Shenzhen People's Hospital. Sensitivity, specificity, and area under the curve (AUC) were evaluated to compare the Fetus Framework, three classic DL models, and human experts with 1, 3, and 5 years of ultrasound training. Furthermore, four physicians with more than 5 years of experience conducted a reader study of diagnosing fetal malformation on a dataset of 316 standard images confirmed by the Fetus Framework and another dataset of 316 standard images selected by physicians; the accuracy, sensitivity, specificity, precision, and F1 score of the physicians' diagnoses on both sets were compared. All nine intracranial structures identified by the Fetus Framework in validation were consistent with those identified by senior radiologists. For S-NS sagittal-view identification, the Fetus Framework achieved an AUC of 0.996 (95% CI: 0.987–1.000) in the internal test, on par with the classic DL models. In the external test, the Fetus Framework reached an AUC of 0.974 (95% CI: 0.952–0.995), versus 0.883 (95% CI: 0.828–0.939) for ResNet-50, 0.890 (95% CI: 0.834–0.946) for Xception, and 0.894 (95% CI: 0.839–0.949) for DenseNet-121. On the internal test set, the sensitivity and specificity of the proposed framework were (0.905, 1.000), while those of the first-, third-, and fifth-year clinicians were (0.619, 0.986), (0.690, 0.958), and (0.798, 0.986), respectively. On the external test set, the sensitivity and specificity of the Fetus Framework were (0.989, 0.797), and those of the first-, third-, and fifth-year clinicians were (0.533, 0.875), (0.609, 0.844), and (0.663, 0.781), respectively. On the fetal malformation classification task, all physicians achieved higher accuracy and F1 score on the Fetus-selected standard images, with statistical significance (p < 0.01). We proposed a new deep learning-based Fetus Framework for identifying key fetal intracranial structures. The framework was tested on data from two medical centers, and the results show consistency with, and improvement over, classic models and human experts in standard and non-standard sagittal-view classification during pregnancy weeks 11 to 13+6. With further refinement in a larger population, the proposed model could improve the efficiency and accuracy of early-pregnancy ultrasound examination. [ABSTRACT FROM AUTHOR]
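The study reports sensitivity, specificity, and AUC with 95% confidence intervals for the S-NS classification task. The sketch below (scikit-learn and NumPy; the bootstrap procedure and the toy labels and scores are illustrative assumptions, not the paper's statistical method) shows one common way such numbers can be computed for a binary standard/non-standard classifier.

```python
# Minimal sketch (illustrative only): sensitivity, specificity, and AUC with a
# bootstrap 95% CI for a binary classifier (1 = standard sagittal view, 0 = non-standard).
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def sens_spec(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return tp / (tp + fn), tn / (tn + fp)

def auc_with_ci(y_true, y_score, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    point = roc_auc_score(y_true, y_score)
    boots = []
    n = len(y_true)
    while len(boots) < n_boot:
        idx = rng.integers(0, n, n)                  # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:          # skip resamples with a single class
            continue
        boots.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return point, lo, hi

if __name__ == "__main__":
    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])                         # toy labels
    y_score = np.array([0.9, 0.2, 0.8, 0.6, 0.3, 0.4, 0.7, 0.1, 0.55, 0.35])  # toy scores
    se, sp = sens_spec(y_true, (y_score >= 0.5).astype(int))
    print(f"sensitivity={se:.3f} specificity={sp:.3f}")
    print("AUC (95%% CI): %.3f (%.3f-%.3f)" % auc_with_ci(y_true, y_score))
```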
- Published: 2022
4. Sonography-based multimodal information platform for identifying the surgical pathology of ductal carcinoma in situ.
- Author: Wu, Huaiyu; Jiang, Yitao; Tian, Hongtian; Ye, Xiuqin; Cui, Chen; Shi, Siyuan; Chen, Ming; Ding, Zhimin; Li, Shiyu; Huang, Zhibin; Luo, Yuwei; Peng, Quanzhou; Xu, Jinfeng; and Dong, Fajin
- Abstract:
• In this study of 754 lesions, 22.6% of low-grade ductal carcinomas in situ (DCIS) were upgraded to intermediate-to-high-grade DCIS, 18.6% of low-grade DCIS were upgraded to upstaged DCIS, and 42.4% of intermediate-to-high-grade DCIS were upgraded to upstaged DCIS.
• The ultrasound (US)-based DCIS-Net could diagnose low-grade, intermediate-to-high-grade, and upstaged DCIS (accuracy = 0.617), and could also distinguish DCIS from upstaged DCIS (accuracy = 0.688) and low-grade DCIS from upstaged low-grade DCIS (accuracy = 0.606).
• Integrating clinical, US, mammography, and core-needle-biopsy pathology information improved performance: the Multimodal-DCIS-Net reached accuracies of 0.766, 0.780, and 0.987 on the three-class task, DCIS vs. upstaged DCIS, and low-grade DCIS vs. upstaged low-grade DCIS, respectively.
The risk category of ductal carcinoma in situ (DCIS) identified at biopsy is often upgraded at surgery. Confirming the DCIS grade preoperatively is therefore necessary for clinical decision-making. The aim was to train a three-class deep learning (DL) model based on ultrasound (US), combined with clinical data, mammography (MG), US, and core needle biopsy (CNB) pathology, to predict low-grade DCIS, intermediate-to-high-grade DCIS, and upstaged DCIS. Data from 733 patients with 754 biopsy-confirmed DCIS lesions were retrospectively collected from May 2013 to June 2022 (N1); a further dataset (N2) comprised biopsy-confirmed low-grade DCIS. The lesions were randomly divided into training (n = 471), validation (n = 142), and test (n = 141) sets to establish the DCIS-Net. Information from the DCIS-Net, clinical data (age and signs), US (size, calcifications, type, Breast Imaging Reporting and Data System [BI-RADS] category), MG (microcalcifications, BI-RADS category), and CNB pathology (nuclear grade, architectural features, and immunohistochemistry) was collected. Logistic regression and random forest analyses were conducted to develop the Multimodal-DCIS-Net, and specificity, sensitivity, accuracy, receiver operating characteristic curves, and the area under the curve (AUC) were calculated. In the N1 test set, the accuracy and AUC of the Multimodal-DCIS-Net in the three-class task were 0.752–0.766 and 0.859–0.907, respectively. The accuracy and AUC for discriminating DCIS from upstaged DCIS were 0.751–0.780 and 0.829–0.861, respectively. In the N2 test set, the accuracy and AUC for discriminating low-grade DCIS from upstaged low-grade DCIS were 0.769–0.987 and 0.818–0.939, respectively. The DL outputs occupied ranks one to five in the feature-importance ranking of the Multimodal-DCIS-Net. By developing the DCIS-Net and integrating it with multimodal information, it is possible to diagnose low-grade, intermediate-to-high-grade, and upstaged DCIS, as well as to distinguish DCIS from upstaged DCIS and low-grade DCIS from upstaged low-grade DCIS, which could support the clinical workflow for DCIS. [ABSTRACT FROM AUTHOR]
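The multimodal step combines the image model's output with tabular clinical, US, MG, and CNB-pathology variables. Below is a minimal sketch (scikit-learn, on synthetic placeholder data; the feature layout and class balance are assumptions, and only the random-forest half of the reported logistic-regression and random-forest analysis is shown) of that kind of early fusion for the three-class task.

```python
# Minimal sketch (illustrative, synthetic data): fusing image-model class
# probabilities (a stand-in for the DCIS-Net output) with tabular clinical /
# US / MG / CNB-pathology features in a random forest for the three-class task
# (0 = low-grade, 1 = intermediate-to-high-grade, 2 = upstaged DCIS).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
n = 754                                      # number of lesions in the study
dl_probs = rng.random((n, 3))                # placeholder per-class DL probabilities
tabular = rng.random((n, 10))                # placeholder: age, BI-RADS, nuclear grade, ...
X = np.hstack([dl_probs, tabular])           # early fusion of DL output and tabular data
y = rng.integers(0, 3, n)                    # placeholder grades

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("macro AUC:", roc_auc_score(y_te, clf.predict_proba(X_te), multi_class="ovr"))
print("importance of the three DL columns:", clf.feature_importances_[:3])
```

The random forest's feature_importances_ is one way to see where the DL columns rank among the fused features, mirroring the importance ranking reported in the abstract.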
- Published: 2024
5. A new xAI framework with feature explainability for tumors decision-making in Ultrasound data: comparing with Grad-CAM.
- Author: Song, Di; Yao, Jincao; Jiang, Yitao; Shi, Siyuan; Cui, Chen; Wang, Liping; Wang, Lijing; Wu, Huaiyu; Tian, Hongtian; Ye, Xiuqin; Ou, Di; Li, Wei; Feng, Na; Pan, Weiyun; Song, Mei; Xu, Jinfeng; Xu, Dong; Wu, Linghu; and Dong, Fajin
- Subjects: COMPUTER-assisted image analysis (Medicine); ULTRASONIC imaging; DIAGNOSIS; THYROID cancer; DECISION making; IODINE isotopes; MICROBUBBLES
- Abstract:
• We propose an explainable framework called Explainer for thyroid nodule classification.
• The Explainer gives physicians a tool to evaluate the reliability of AI.
• The Explainer uses an intrinsic method to explain its decisions.
• Reader studies show that physicians achieve better performance when assisted by the Explainer than when diagnosing alone.
• The Explainer locates more reasonable, feature-related regions than the classic post-hoc technique.
The value of applying artificial intelligence (AI) to ultrasound screening for thyroid cancer has been acknowledged, with numerous early studies suggesting that AI may help physicians reach more accurate diagnoses. However, the black-box nature of AI's decision-making makes it difficult for users to grasp the basis of its predictions. Furthermore, explainability relates not only to AI performance but also to responsibility and risk in medical diagnosis. In this paper, we present the Explainer, an intrinsically explainable framework that categorizes images and creates heatmaps highlighting the regions on which its predictions are based. A dataset of 19,341 thyroid ultrasound images with pathological results and physician-annotated TI-RADS features is used to train the framework and test its robustness. We then conducted a benign-malignant classification study to determine whether physicians perform better with the assistance of the Explainer than they do alone or with Gradient-weighted Class Activation Mapping (Grad-CAM). The reader studies show that the Explainer achieves a more accurate diagnosis while producing explanatory heatmaps, and that physicians' performance improves when assisted by the Explainer. A case study confirms that the Explainer locates more reasonable, feature-related regions than Grad-CAM. The Explainer offers physicians a tool to understand the basis of AI predictions and evaluate their reliability, and has the potential to unbox the "black box" of medical imaging AI. [ABSTRACT FROM AUTHOR]
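For comparison with the post-hoc baseline discussed above, here is a minimal generic Grad-CAM sketch (PyTorch, using a torchvision ResNet-18 and hooks on its last convolutional stage; the backbone and hook placement are illustrative assumptions, and this is not the paper's Explainer): gradients of the target class are averaged per channel and used to weight the activations into a heatmap.

```python
# Minimal sketch (not the paper's Explainer): generic Grad-CAM on a torchvision
# ResNet-18, hooking the last convolutional stage to capture activations and gradients.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()        # load trained weights in practice
feats, grads = {}, {}
layer = model.layer4                         # last convolutional stage

layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

def grad_cam(image: torch.Tensor, class_idx: int) -> torch.Tensor:
    """image: (1, 3, H, W); returns a heatmap resized to the input resolution."""
    logits = model(image)
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)    # channel-wise gradient averages
    cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8)).detach()

if __name__ == "__main__":
    heatmap = grad_cam(torch.randn(1, 3, 224, 224), class_idx=1)
    print(heatmap.shape)                     # torch.Size([1, 1, 224, 224])
```

An intrinsically explainable model such as the Explainer builds the explanation into the forward pass rather than deriving it from gradients after the fact, which is the design difference the study evaluates.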
- Published: 2023