111 results for "deep learning"
Search Results
2. Automated curation of large‐scale cancer histopathology image datasets using deep learning.
- Author
- Hilgers, Lars, Ghaffari Laleh, Narmin, West, Nicholas P, Westwood, Alice, Hewitt, Katherine J, Quirke, Philip, Grabsch, Heike I, Carrero, Zunamys I, Matthaei, Emylou, Loeffler, Chiara M L, Brinker, Titus J, Yuan, Tanwei, Brenner, Hermann, Brobeil, Alexander, Hoffmeister, Michael, and Kather, Jakob Nikolas
- Subjects
- DEEP learning, ARTIFICIAL neural networks, CONVOLUTIONAL neural networks, HISTOPATHOLOGY, ARTIFICIAL intelligence, COLORECTAL cancer
- Abstract
Background: Artificial intelligence (AI) has numerous applications in pathology, supporting diagnosis and prognostication in cancer. However, most AI models are trained on highly selected data, typically one tissue slide per patient. In reality, especially for large surgical resection specimens, dozens of slides can be available for each patient. Manually sorting and labelling whole‐slide images (WSIs) is a very time‐consuming process, hindering the direct application of AI to the collected tissue samples from large cohorts. In this study we addressed this issue by developing a deep‐learning (DL)‐based method for automatic curation of large pathology datasets with several slides per patient. Methods: We collected multiple large multicentric datasets of colorectal cancer histopathological slides from the United Kingdom (FOXTROT, N = 21,384 slides; CR07, N = 7985 slides) and Germany (DACHS, N = 3606 slides). These datasets contained multiple types of tissue slides, including bowel resection specimens, endoscopic biopsies, lymph node resections, immunohistochemistry‐stained slides, and tissue microarrays. We developed, trained, and tested a deep convolutional neural network model to predict the type of slide from the slide overview (thumbnail) image. The primary statistical endpoint was the macro‐averaged area under the receiver operating characteristic curve (AUROC) for detection of the type of slide. Results: In the primary dataset (FOXTROT), the algorithm achieved a high classification performance, with an AUROC of 0.995 (95% confidence interval [CI]: 0.994–0.996), and was able to accurately predict the type of slide from the thumbnail image alone. In the two external test cohorts (CR07, DACHS), AUROCs of 0.982 (95% CI: 0.979–0.985) and 0.875 (95% CI: 0.864–0.887) were observed, indicating the generalizability of the trained model to unseen datasets. 
With a confidence threshold of 0.95, the model reached an accuracy of 94.6% (7331 classified cases) in CR07 and 85.1% (2752 classified cases) in the DACHS cohort. Conclusion: Our findings show that the low‐resolution thumbnail image is sufficient to accurately classify the type of slide in digital pathology. This can help researchers make the vast resource of existing pathology archives accessible to modern AI models with only minimal manual annotation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
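The confidence-threshold behaviour reported above (a prediction is accepted only when the model's top softmax probability reaches 0.95, and accuracy is computed over the accepted cases) can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the function name and toy data are assumptions.

```python
import numpy as np

def classify_with_threshold(probs, labels, threshold=0.95):
    """Accept a prediction only when the top class probability reaches
    `threshold`; report accuracy on accepted cases plus coverage.

    probs:  (n_slides, n_classes) softmax outputs
    labels: (n_slides,) integer ground-truth slide types
    """
    top_prob = probs.max(axis=1)
    top_class = probs.argmax(axis=1)
    accepted = top_prob >= threshold
    coverage = accepted.mean()  # fraction of slides classified at all
    if accepted.sum() == 0:
        return 0.0, 0.0
    accuracy = (top_class[accepted] == labels[accepted]).mean()
    return accuracy, coverage

# Toy example: 4 slides, 3 slide types
probs = np.array([[0.98, 0.01, 0.01],   # confident, correct
                  [0.96, 0.02, 0.02],   # confident, wrong
                  [0.60, 0.30, 0.10],   # below threshold -> abstain
                  [0.01, 0.97, 0.02]])  # confident, correct
labels = np.array([0, 1, 0, 1])
acc, cov = classify_with_threshold(probs, labels)
```

Raising the threshold trades coverage for accuracy, which is how the abstract's 94.6% (7331 cases) and 85.1% (2752 cases) figures arise from larger cohorts.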
3. Stacked neural network for predicting polygenic risk score.
- Author
- Kim, Sun bin, Kang, Joon Ho, Cheon, MyeongJae, Kim, Dong Jun, and Lee, Byung-Chul
- Subjects
- GENETIC risk score, ARTIFICIAL neural networks, GENOME-wide association studies, GENETIC variation, DISEASE susceptibility, BREAST
- Abstract
In recent years, the utility of polygenic risk scores (PRS) in forecasting disease susceptibility from genome-wide association studies (GWAS) results has been widely recognised. Yet, these models face limitations due to overfitting and the potential overestimation of effect sizes in correlated variants. To surmount these obstacles, we devised the Stacked Neural Network Polygenic Risk Score (SNPRS). This novel approach synthesises outputs from multiple neural network models, each calibrated using genetic variants chosen based on diverse p-value thresholds. By doing so, SNPRS captures a broader array of genetic variants, enabling a more nuanced interpretation of the combined effects of these variants. We assessed the efficacy of SNPRS using the UK Biobank data, focusing on the genetic risks associated with breast and prostate cancers, as well as quantitative traits like height and BMI. We also extended our analysis to the Korea Genome and Epidemiology Study (KoGES) dataset. Impressively, our results indicate that SNPRS surpasses traditional PRS models and an isolated deep neural network in terms of accuracy, highlighting its promise in refining the efficacy and relevance of PRS in genetic studies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
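The stacking idea described above can be sketched numerically: one base score per p-value threshold, each built from a different variant subset, combined by a meta-learner. This sketch is a loose simplification with simulated data; SNPRS uses neural networks for both levels, whereas here the base scores are linear PRS and the meta-learner is plain least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 200 individuals x 50 variants (0/1/2 allele counts)
n, m = 200, 50
G = rng.integers(0, 3, size=(n, m)).astype(float)
beta = rng.normal(0, 0.2, m)                 # per-variant effect sizes
y = G @ beta + rng.normal(0, 1.0, n)         # simulated trait
pvals = rng.uniform(0, 1, m)                 # stand-in GWAS p-values

# Base "models": one PRS per p-value threshold, each over a variant subset
thresholds = [0.05, 0.2, 0.5, 1.0]
base_scores = np.column_stack([
    G[:, pvals <= t] @ beta[pvals <= t] for t in thresholds
])

# Meta-learner: least-squares weights over the stacked base scores
# (SNPRS would use a neural network here instead)
X = np.column_stack([np.ones(n), base_scores])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
stacked_prs = X @ w
```

The stacked score can weight broadly inclusive and strictly filtered variant sets differently, which is the mechanism the abstract credits for capturing a broader array of genetic variants.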
4. Haplotype function score improves biological interpretation and cross-ancestry polygenic prediction of human complex traits.
- Author
- Weichen Song, Yongyong Shi, and Guan Ning Lin
- Subjects
- HAPLOTYPES, DEEP learning, GENOTYPES, SINGLE nucleotide polymorphisms
- Abstract
We propose a new framework for human genetic association studies: at each locus, a deep learning model (in this study, Sei) is used to calculate the functional genomic activity score for two haplotypes per individual. This score, defined as the Haplotype Function Score (HFS), replaces the original genotype in association studies. Applying the HFS framework to 14 complex traits in the UK Biobank, we identified 3619 independent HFS-trait associations with a significance of p < 5 × 10⁻⁸. Fine-mapping revealed 2699 causal associations, corresponding to a median increase of 63 causal findings per trait compared with single-nucleotide polymorphism (SNP)-based analysis. HFS-based enrichment analysis uncovered 727 pathway-trait associations and 153 tissue-trait associations with strong biological interpretability, including 'circadian pathway-chronotype' and 'arachidonic acid-intelligence'. Lastly, we applied least absolute shrinkage and selection operator (LASSO) regression to integrate HFS prediction score with SNP-based polygenic risk scores, which showed an improvement of 16.1–39.8% in cross-ancestry polygenic prediction. We concluded that HFS is a promising strategy for understanding the genetic basis of human complex traits. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Unsupervised deep representation learning enables phenotype discovery for genetic association studies of brain imaging.
- Author
- Patel, Khush, Xie, Ziqian, Yuan, Hao, Islam, Sheikh Muhammad Saiful, Xie, Yaochen, He, Wei, Zhang, Wanheng, Gottlieb, Assaf, Chen, Han, Giancardo, Luca, Knaack, Alexander, Fletcher, Evan, Fornage, Myriam, Ji, Shuiwang, and Zhi, Degui
- Subjects
- DEEP learning, BRAIN imaging, GENOME-wide association studies, DIAGNOSTIC imaging, BRAIN anatomy, GENETICS
- Abstract
Understanding the genetic architecture of brain structure is challenging, partly due to difficulties in designing robust, non-biased descriptors of brain morphology. Until recently, brain measures for genome-wide association studies (GWAS) consisted of traditionally expert-defined or software-derived image-derived phenotypes (IDPs) that are often based on theoretical preconceptions or computed from limited amounts of data. Here, we present an approach to derive brain imaging phenotypes using unsupervised deep representation learning. We train a 3-D convolutional autoencoder model with reconstruction loss on 6130 UK Biobank (UKBB) participants' T1 or T2-FLAIR (T2) brain MRIs to create a 128-dimensional representation known as Unsupervised Deep learning derived Imaging Phenotypes (UDIPs). GWAS of these UDIPs in held-out UKBB subjects (n = 22,880 discovery and n = 12,359/11,265 replication cohorts for T1/T2) identified 9457 significant SNPs organized into 97 independent genetic loci, of which 60 loci were replicated. Twenty-six loci were not reported in earlier T1 and T2 IDP-based UK Biobank GWAS. We developed a perturbation-based decoder interpretation approach to show that these loci are associated with UDIPs mapped to multiple relevant brain regions. Our results establish that unsupervised deep learning can derive robust, unbiased, heritable, and interpretable brain imaging phenotypes. A study utilizing unsupervised deep learning to generate interpretable brain imaging phenotypes from brain T1 and T2-FLAIR MRI identified 97 genetic loci enhancing understanding of brain structure genetics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. External validation, radiological evaluation, and development of deep learning automatic lung segmentation in contrast-enhanced chest CT.
- Author
- Dwivedi, Krit, Sharkey, Michael, Alabed, Samer, Langlotz, Curtis P., Swift, Andy J., and Bluethgen, Christian
- Subjects
- COMPUTED tomography, DEEP learning, ATELECTASIS, LUNGS, PULMONARY hypertension, INTERSTITIAL lung diseases, LUNG volume
- Abstract
Objectives: There is a need for CT pulmonary angiography (CTPA) lung segmentation models. Clinical translation requires radiological evaluation of model outputs, understanding of limitations, and identification of failure points. This multicentre study aims to develop an accurate CTPA lung segmentation model, with evaluation of outputs in two diverse patient cohorts with pulmonary hypertension (PH) and interstitial lung disease (ILD). Methods: This retrospective study develops an nnU-Net-based segmentation model using data from two specialist centres (UK and USA). The model was trained (n = 37), tested (n = 12), and clinically evaluated (n = 176) on a diverse 'real-world' cohort of 225 PH patients with volumetric CTPAs. Dice score coefficient (DSC) and normalised surface distance (NSD) were used for testing. Clinical evaluation of outputs was performed by two radiologists who assessed the clinical significance of errors. External validation was performed on heterogeneous contrast and non-contrast scans from 28 ILD patients. Results: A total of 225 PH and 28 ILD patients with diverse demographic and clinical characteristics were evaluated. Mean accuracy, DSC, and NSD scores were 0.998 (95% CI 0.9976, 0.9989), 0.990 (0.9840, 0.9962), and 0.983 (0.9686, 0.9972), respectively. There were no segmentation failures. On radiological review, 82% and 71% of internal and external cases, respectively, had no errors; a further 18% and 25%, respectively, had clinically insignificant errors. Peripheral atelectasis and consolidation were common causes of suboptimal segmentation. One external case (0.5%) with patulous oesophagus had a clinically significant error. Conclusion: This state-of-the-art CTPA lung segmentation model provides accurate outputs with minimal clinical errors on evaluation across two diverse cohorts with PH and ILD. Clinical relevance: Clinical translation of artificial intelligence models requires radiological review and understanding of model limitations. 
This study develops an externally validated state-of-the-art model with robust radiological review. Intended clinical use is in techniques such as lung volume or parenchymal disease quantification. Key Points: • Accurate, externally validated CT pulmonary angiography (CTPA) lung segmentation model tested in two large heterogeneous clinical cohorts (pulmonary hypertension and interstitial lung disease). • No segmentation failures and robust review of model outputs by radiologists found 1 (0.5%) clinically significant segmentation error. • Intended clinical use of this model is a necessary step in techniques such as lung volume, parenchymal disease quantification, or pulmonary vessel analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
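The Dice score coefficient (DSC) used as a test metric above has a standard definition, DSC = 2|A ∩ B| / (|A| + |B|) over binary masks. A minimal illustrative implementation (not the study's code):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A ∩ B| / (|A| + |B|).
    By convention, two empty masks score 1.0."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 4x4 "lung" masks differing by one pixel
pred  = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0], [0, 1, 1, 1], [0, 1, 1, 0], [0, 0, 0, 0]])
score = dice_score(pred, truth)  # 2*6 / (6+7)
```

A DSC of 0.990, as reported, means the predicted and reference lung masks overlap almost completely; the NSD complements it by measuring boundary agreement rather than volume overlap.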
7. DeLIVR: a deep learning approach to IV regression for testing nonlinear causal effects in transcriptome-wide association studies.
- Author
- He, Ruoyu, Liu, Mingyang, Lin, Zhaotong, Zhuang, Zhong, Shen, Xiaotong, and Pan, Wei
- Subjects
- DEEP learning, NONLINEAR regression, LDL cholesterol, LEAST squares, INFERENTIAL statistics, HIGH density lipoproteins
- Abstract
Transcriptome-wide association studies (TWAS) have been increasingly applied to identify (putative) causal genes for complex traits and diseases. TWAS can be regarded as a two-sample two-stage least squares method for instrumental variable (IV) regression for causal inference. The standard TWAS (called TWAS-L) only considers a linear relationship between a gene's expression and a trait in stage 2, which may lose statistical power when not true. Recently, an extension of TWAS (called TWAS-LQ) considers both the linear and quadratic effects of a gene on a trait, which however is not flexible enough due to its parametric nature and may be low powered for nonquadratic nonlinear effects. On the other hand, a deep learning (DL) approach, called DeepIV, has been proposed to nonparametrically model a nonlinear effect in IV regression. However, it is both slow and unstable due to the ill-posed inverse problem of solving an integral equation with Monte Carlo approximations. Furthermore, in the original DeepIV approach, statistical inference, that is, hypothesis testing, was not studied. Here, we propose a novel DL approach, called DeLIVR, to overcome the major drawbacks of DeepIV, by estimating a related but different target function and including a hypothesis testing framework. We show through simulations that DeLIVR was both faster and more stable than DeepIV. We applied both parametric and DL approaches to the GTEx and UK Biobank data, showcasing that DeLIVR detected an additional 8 and 7 genes nonlinearly associated with high-density lipoprotein (HDL) cholesterol and low-density lipoprotein (LDL) cholesterol, respectively, all of which would be missed by TWAS-L, TWAS-LQ, and DeepIV; these genes include BUD13 associated with HDL, SLC44A2 and GMIP with LDL, all supported by previous studies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
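The abstract frames standard TWAS as two-stage least squares (2SLS) for IV regression. A minimal simulated sketch of that linear baseline (TWAS-L): stage 1 predicts expression from genotype instruments, stage 2 regresses the trait on the predicted expression. The data and coefficients here are invented for illustration; DeLIVR's contribution is replacing stage 2 with a deep network plus a testing framework, which this sketch does not implement.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 500, 10
Z = rng.normal(size=(n, k))     # instruments: eQTL genotypes
u = rng.normal(size=n)          # unobserved confounder
# Expression depends on genotypes and the confounder
expr = Z @ rng.normal(size=k) + u + rng.normal(size=n)
# Trait depends on expression (true causal effect 0.8) and the confounder
trait = 0.8 * expr + u + rng.normal(size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: predict expression from instruments only
Z1 = np.column_stack([np.ones(n), Z])
expr_hat = Z1 @ ols(Z1, expr)

# Stage 2 (TWAS-L): linear regression of trait on predicted expression.
# A nonlinear stage-2 model is where DeLIVR/DeepIV depart from this baseline.
X2 = np.column_stack([np.ones(n), expr_hat])
alpha, beta_hat = ols(X2, trait)  # beta_hat should be near the true 0.8
```

Because `expr_hat` depends only on the instruments, the confounding through `u` is removed and `beta_hat` is a consistent estimate of the causal effect, unlike a naive regression of `trait` on `expr`.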
8. A deep learning quantification of patient specificity as a predictor of session attendance and treatment response to internet-enabled cognitive behavioural therapy for common mental health disorders.
- Author
- Hitchcock, Caitlin, Funk, Julia, Cummins, Ronan, Patel, Shivam D., Catarino, Ana, Takano, Keisuke, Dalgleish, Tim, and Ewbank, Michael
- Subjects
- MENTAL illness, BEHAVIOR therapy, COGNITIVE therapy, DEEP learning, MENTAL depression
- Abstract
Increasing an individual's ability to focus on concrete, specific detail, thus reducing the tendency toward overly broad, decontextualised generalisations about the self and world, is a target within cognitive behavioural therapy (CBT). However, empirical investigation of the impact of within-treatment specificity on treatment outcomes is scarce. We evaluated whether the specificity of patient dialogue predicted a) end-of-treatment symptoms and b) session completion for CBT for common mental health issues. This preregistered (https://osf.io/agr4t) study trained a deep learning model to score the specificity of patient dialogue in transcripts from 353,614 internet-enabled CBT sessions for common mental health disorders, delivered on behalf of UK NHS services. Data were obtained from 65,030 participants (n = 47,308 female, n = 241 unstated) aged 18–94 years (M = 34.69, SD = 12.35). Depressive disorders were the most common (39.1 %) primary diagnosis. The primary outcome was end-of-treatment score on the Patient Health Questionnaire-9 (PHQ-9); the secondary outcome was the number of sessions attended. Linear mixed-effects models demonstrated that increased patient specificity significantly predicted lower post-treatment symptoms on the PHQ-9, although the size and direction of the effect varied depending on the type of therapeutic activity being completed. Effect sizes were consistently small. Higher patient specificity was associated with completing a greater number of sessions. We are unable to infer causation from our data. Although effect sizes were small, an effect of specificity was observed across common mental health disorders. Further studies are needed to explore whether encouraging patient specificity during CBT may enhance treatment attendance and treatment effects. 
• We explored whether the specificity of patient dialogue influences CBT outcomes • Specificity of therapy transcripts was scored using deep learning • Increased patient specificity predicted lower post-treatment symptoms • Higher specificity was associated with completing a greater number of sessions [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Prediction of cardiovascular risk factors from retinal fundus photographs: Validation of a deep learning algorithm in a prospective non‐interventional study in Kenya.
- Author
- White, Tom, Selvarajah, Viknesh, Wolfhagen‐Sand, Fredrik, Svangård, Nils, Mohankumar, Gayathri, Fenici, Peter, Rough, Kathryn, Onyango, Nelson, Lyons, Kendall, Mack, Christina, Nduba, Videlis, Noorali Saleh, Mansoor, Abayo, Innocent, Siddiqui, Afrah, Majdanska‐Strzalka, Malgorzata, Kaszubska, Katarzyna, Hegelund‐Myrback, Tove, Esterline, Russell, Manzur, Antonio, and Parker, Victoria E. R.
- Subjects
- DEEP learning, CARDIOVASCULAR diseases risk factors, RESOURCE-limited settings, BLOOD pressure testing machines, GLOMERULAR filtration rate, LOW-income countries, MACHINE learning, PHOTOPLETHYSMOGRAPHY, BLOOD pressure
- Abstract
Aim: Hypertension and diabetes mellitus (DM) are major causes of morbidity and mortality, with growing burdens in low‐income countries where they are underdiagnosed and undertreated. Advances in machine learning may provide opportunities to enhance diagnostics in settings with limited medical infrastructure. Materials and Methods: A non‐interventional study was conducted to develop and validate a machine learning algorithm to estimate cardiovascular clinical and laboratory parameters. At two sites in Kenya, digital retinal fundus photographs were collected alongside blood pressure (BP), laboratory measures and medical history. The performance of machine learning models, originally trained using data from the UK Biobank, was evaluated for their ability to estimate BP, glycated haemoglobin, estimated glomerular filtration rate and diagnoses from fundus images. Results: In total, 301 participants were enrolled. Compared with the UK Biobank population used for algorithm development, participants from Kenya were younger and more likely to report Black/African ethnicity, with a higher body mass index and prevalence of DM and hypertension. The mean absolute error was comparable or slightly greater for systolic BP, diastolic BP, glycated haemoglobin and estimated glomerular filtration rate. The model trained to identify DM had an area under the receiver operating characteristic curve (AUROC) of 0.762 (0.818 in the UK Biobank) and the hypertension model had an AUROC of 0.765 (0.738 in the UK Biobank). Conclusions: In a Kenyan population, machine learning models estimated cardiovascular parameters with comparable or slightly lower accuracy than in the population where they were trained, suggesting model recalibration may be appropriate. This study represents an incremental step toward leveraging machine learning to make early cardiovascular screening more accessible, particularly in resource‐limited settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Multi-aspect multilingual and cross-lingual parliamentary speech analysis.
- Author
- Miok, Kristian, Hidalgo Tenorio, Encarnación, Osenova, Petya, Benítez-Castro, Miguel-Ángel, and Robnik-Šikonja, Marko
- Subjects
- COMPUTATIONAL linguistics, SPEECH, NATURAL language processing, POLITICAL affiliation
- Abstract
Parliamentary and legislative debate transcripts provide an informative insight into elected politicians' opinions, positions, and policy preferences. They are interesting for political and social sciences as well as linguistics and natural language processing (NLP) research. While existing research has studied individual parliaments, we apply advanced NLP methods to a joint and comparative analysis of six national parliaments (Bulgarian, Czech, French, Slovene, Spanish, and United Kingdom) between 2017 and 2020. We analyze emotions and sentiment in the transcripts from the ParlaMint dataset collection, and assess whether the age, gender, and political orientation of speakers can be detected from their speeches. The results show some commonalities and many surprising differences among the analyzed countries. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Examination of alternative eGFR definitions on the performance of deep learning models for detection of chronic kidney disease from fundus photographs.
- Author
- An, Songyang, Vaghefi, Ehsan, Yang, Song, Xie, Li, and Squirrell, David
- Subjects
- CHRONIC kidney failure, EPIDERMAL growth factor receptors, DEEP learning, CYSTATIN C
- Abstract
Deep learning (DL) models have shown promise in detecting chronic kidney disease (CKD) from fundus photographs. However, previous studies have utilized a serum creatinine-only estimated glomerular filtration rate (eGFR) equation to measure kidney function despite the development of more up-to-date methods. In this study, we developed two sets of DL models using fundus images from the UK Biobank to ascertain the effects of using a creatinine and cystatin-C eGFR equation over the baseline creatinine-only eGFR equation on fundus image-based DL CKD predictors. Our results show that a creatinine and cystatin-C eGFR significantly improved classification performance over the baseline creatinine-only eGFR when the models were evaluated conventionally. However, these differences were no longer significant when the models were assessed on clinical labels based on ICD10. Furthermore, we also observed variations in model performance and systemic condition incidence between our study and previously conducted ones. We hypothesize that limitations in existing eGFR equations and the paucity of retinal features uniquely indicative of CKD may contribute to these inconsistencies. These findings emphasize the need for developing more transparent models to facilitate a better understanding of the mechanisms underpinning the ability of DL models to detect CKD from fundus images. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
12. Pricing and cost-saving potential for deep-learning computer-aided lung nodule detection software in CT lung cancer screening.
- Author
- Du, Yihui, Greuter, Marcel J. W., Prokop, Mathias W., and de Bock, Geertruida H.
- Subjects
- EARLY detection of cancer, PULMONARY nodules, LUNG cancer, PRICES, COMPUTER-aided diagnosis
- Abstract
Objective: An increasing number of commercial deep learning computer-aided detection (DL-CAD) systems are available, but their cost-saving potential is largely unknown. This study aimed to gain insight into appropriate pricing for DL-CAD in different reading modes to be cost-saving and to determine the potentially most cost-effective reading mode for lung cancer screening. Methods: In three representative settings, DL-CAD was evaluated as a concurrent, pre-screening, and second reader. A scoping review was performed to estimate radiologist reading time with and without DL-CAD. Hourly cost of radiologist time was collected for the USA (€196), UK (€127), and Poland (€45), and the monetary equivalent of saved time was calculated. The minimum number of screening CTs to reach break-even was calculated for a one-time investment of €51,616 for DL-CAD. Results: Mean reading time was 162 (95% CI: 111–212) seconds per case without DL-CAD, which decreased by 77 (95% CI: 47–107) and 104 (95% CI: 71–136) seconds for DL-CAD as concurrent and pre-screening reader, respectively, and increased by 33–41 seconds for DL-CAD as second reader. This translates into a €1.0–4.3 per-case cost for concurrent reading and €0.8–5.7 for pre-screening reading in the USA, UK, and Poland. To achieve break-even with a one-time investment, the minimum number of CT scans was 12,300–53,600 for a concurrent reader, and 9400–65,000 for a pre-screening reader in the three countries. Conclusions: Given current pricing, DL-CAD must be priced substantially below €6 in a pay-per-case setting or used in a high-workload environment to reach break-even in lung cancer screening. DL-CAD as pre-screening reader shows the largest potential to be cost-saving. Critical relevance statement: Deep-learning computer-aided lung nodule detection (DL-CAD) software must be priced substantially below 6 euro in a pay-per-case setting or must be used in high-workload environments with one-time investment in order to achieve break-even. 
DL-CAD as a pre-screening reader has the greatest cost savings potential. Key points: • DL-CAD must be substantially below €6 in a pay-per-case setting to reach break-even. • DL-CAD must be used in a high-workload screening environment to achieve break-even. • DL-CAD as a pre-screening reader shows the largest potential to be cost-saving. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
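The break-even figures above follow from simple arithmetic on the reported numbers: the per-case saving is the seconds of reading time saved times the hourly radiologist cost, and break-even is the one-time investment divided by that saving. A sketch reproducing the concurrent-reader range (the function name is ours, the input numbers are the abstract's):

```python
hourly_cost = {"USA": 196, "UK": 127, "Poland": 45}  # € per radiologist hour
seconds_saved_concurrent = 77   # mean reading-time reduction, concurrent mode
investment = 51_616             # one-time DL-CAD cost in €

def break_even(hourly_rate, seconds_saved, investment):
    """Minimum number of screening CTs for the saved reading time
    to repay the one-time investment."""
    saving_per_case = hourly_rate * seconds_saved / 3600  # € per scan
    return investment / saving_per_case

break_evens = {c: break_even(rate, seconds_saved_concurrent, investment)
               for c, rate in hourly_cost.items()}
```

Rounded to the nearest hundred, this yields about 12,300 scans for the USA and 53,600 for Poland, matching the abstract's 12,300–53,600 range for concurrent reading: cheaper radiologist time means each saved second is worth less, so more scans are needed to recoup the same investment.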
13. "I think that comes with experience": A thematic analysis exploring how dental students at a transitional stage of training understand and engage with reflection.
- Author
- Penlington, Chris, Lyttle, Ross, Dillon, Megan, Ali, Adam, and Waterhouse, Paula
- Subjects
- DENTAL education, DENTAL schools, DENTAL students, THEMATIC analysis, MATURATION (Psychology), DEEP learning, THEATER students
- Abstract
Introduction: Reflection is an important skill for dentists, but there is little consensus about how reflection can most usefully be integrated into dental education. The aim of this study was to conduct focus groups to explore how students at a transitional point of dental education in one UK dental school had experienced and conceptualised reflection. Methods: Students at the beginning of their clinical studies were recruited by email and invited to attend a single focus group. Focus groups were co‐facilitated by a team of staff and student researchers and analysed using thematic analysis. Students acted as research partners in planning a topic guide, recruiting students, conducting focus groups and considering the implications of research findings for the curriculum, and contributed their perspectives to other aspects of the research. Results: Students primarily associated reflection with their clinical learning and valued the skill highly in this context. They were less familiar with the potential for reflection to support personal development and deeper learning. Themes of learning, uncertainty, emotions and wellbeing, community, and challenges were identified and are discussed in detail. Conclusion: Reflection is highly valued within our dental education setting, but many students may be missing out on using it to its full potential. Changes to the undergraduate curriculum, including offering reflection from an early stage of education, may be warranted. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. Multi-Deep CNN based Experimentations for Early Diagnosis of Breast Cancer.
- Author
- Sannasi Chakravarthy, S. R., Bharanidharan, N., and Rajaguru, Harikumar
- Subjects
- DEEP learning, ARTIFICIAL neural networks, COMPUTER-aided diagnosis, CONVOLUTIONAL neural networks, SUPPORT vector machines, CANCER diagnosis
- Abstract
Breast cancer is one of the deadly cancer types that causes high mortality among women globally. Meanwhile, Deep Learning (DL) emerges as the most frequently utilized and rapidly developing branch of classical machine learning. The study examines a modern Computer-Aided Diagnosis (CAD) framework that uses DL to extract features and classify them for aiding radiologists in breast cancer diagnosis. This is accomplished through four distinct experiments aimed at identifying the most effective classification method. The first uses pre-trained deep CNNs such as AlexNet, GoogleNet, ResNet50, and DenseNet121. The second uses deep CNNs to extract features, which are applied to a Support Vector Machine classifier with three different kernels. The third fuses different deep features to demonstrate the classification improvement this fusion provides. The final experiment applies Principal Component Analysis (PCA) to shrink the large feature vectors created during fusion and reduce the computational cost. These experiments were carried out on two different mammogram datasets, MIAS and INbreast. The classification accuracy attained for both datasets through the fusion of deep features (97.93% for MIAS and 96.646% for INbreast) is the highest among state-of-the-art frameworks. In contrast, applying PCA to the combined deep features did not improve classification performance, but the decrease in execution time provides a reduced computational cost. 
Abbreviations: CAD: Computer Aided Diagnosis; CNN: Convolution Neural Network; CSI: Classification Success Index; DCNN: Deep Convolution Neural Network; DICOM: Digital Imaging and Communications in Medicine; DL: Deep Learning; FC layer: Fully Connected layer; FFDM: Full-Field Digital Mammograms; FN: False Negative; FP: False Positive; ICSI: Individual Classification Success Index; MIAS: Mammographic Image Analysis Society; ML: Machine Learning; MLO: Medio-Lateral Oblique; PCA: Principal Component Analysis; PGM: Portable Gray Map; PPV: Positive Predictive Value; RBF: Radial Basis Function; SGDM: Stochastic Gradient Descent with Momentum; SVM: Support Vector Machine; TN: True Negative; TP: True Positive; TPR: True Positive Rate; UK: United Kingdom [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
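The fusion-then-PCA pipeline described above can be sketched with numpy; random vectors stand in for the deep features the paper extracts from its pre-trained CNNs, and the dimensions are illustrative (4096 is the AlexNet FC-layer size, 2048 the ResNet50 pooled size).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100  # mammogram images

# Stand-ins for deep features from two pre-trained CNNs
feat_alexnet = rng.normal(size=(n, 4096))
feat_resnet = rng.normal(size=(n, 2048))

# Fusion: concatenate per-image feature vectors
fused = np.concatenate([feat_alexnet, feat_resnet], axis=1)

def pca_reduce(X, n_components):
    """Project centred data onto its top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

reduced = pca_reduce(fused, 64)  # compact vectors for the downstream classifier
```

The fused vector is wider than either source, which is why the paper adds PCA: the classifier sees far fewer dimensions at the cost of discarding low-variance directions, matching the reported trade-off of lower compute without an accuracy gain.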
15. The use of artificial intelligence and deep learning reconstruction in urological computed tomography: Dose reduction at ghost level.
- Author
- Rauf, Abdul, Javed, Saqib, Chandrasekar, Bhargavi, Miah, Saiful, Lyttle, Margaret, Siraj, Mamoon, Mukherjee, Rono, McLeavy, Christopher, Alaaraj, Hazem, and Hawkins, Richard
- Subjects
- ARTIFICIAL intelligence, DEEP learning, COMPUTED tomography, URINARY calculi, URINARY organs, DICOM (Computer network protocol)
- Abstract
Objective: The objective of the study is to demonstrate that with the use of artificial intelligence (AI) in computed tomography (CT), radiation doses of CT kidney-ureter-bladder (KUB) and CT urogram (CTU) can be reduced to less than that of X-ray KUB and CT KUB, respectively, while maintaining good image quality. Materials and Methods: We reviewed all CT KUBs (n = 121) performed in September 2019 and all CTUs (n = 74) performed in December 2019 at our institution. The dose length product (DLP) of all CT KUBs and each individual phase of CTU were recorded. The DLP of each scan done with the new scanner (Canon Aquilion One Genesis with AiCE [CAOG]), which uses AI and deep learning reconstruction (DLR), was compared against a traditional non-AI scanner (GE OPTIMA 660 [GEO-660]). We also compared DLPs of both scanners against the United Kingdom National Diagnostic Reference Levels (NDRL) for CT. Results: 121 patients' CT KUBs and 74 patients' CTUs were reviewed. For the CT KUB group, the mean DLP of the 81/121 scans done using the AI/DLR scanner (CAOG) was 77.8 mGy cm (1.16 mSv), while the mean DLP of the 40/121 CT KUBs done with the GEO-660 was 317.1 mGy cm (4.75 mSv). For the CTU group, the mean DLP for the 46/74 scans done using the AI/DLR scanner (CAOG) was 401.9 mGy cm (6 mSv), compared to a mean DLP of 1352.6 mGy cm (20.2 mSv) from the GEO-660. Conclusion: We propose that CT scanners using the AI/DLR method have the potential of reducing radiation doses of CT KUB and CTU to such an extent that it heralds the extinction of plain film XR KUB for follow-up of urinary tract stones. To the best of our knowledge, this is the first study comparing CT KUB and CTU doses from new scanners utilizing AI/DLR technology with traditional scanners using hybrid iterative reconstruction technology. Moreover, we have shown that this technology can markedly reduce the cumulative radiation burden in all urological patients undergoing CT examinations, whether this is CT KUB or CTU. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
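The effective doses quoted in the abstract follow from the DLP via a region-specific conversion coefficient. The coefficient value below (k ≈ 0.015 mSv per mGy·cm, the commonly used abdomen/pelvis factor) is an assumption not stated in the abstract, though the paper's mSv figures are consistent with it. A minimal sketch:

```python
def effective_dose_msv(dlp_mgy_cm: float, k: float = 0.015) -> float:
    """Convert a CT dose-length product (mGy*cm) to effective dose (mSv).

    k is a region-specific conversion coefficient; 0.015 mSv/(mGy*cm)
    is the value commonly used for abdomen/pelvis CT (assumed here).
    """
    return dlp_mgy_cm * k

# The abstract's figures are consistent with k = 0.015:
# 77.8 mGy*cm  -> ~1.17 mSv (reported as 1.16 mSv)
# 1352.6 mGy*cm -> ~20.3 mSv (reported as 20.2 mSv)
print(effective_dose_msv(77.8))
print(effective_dose_msv(1352.6))
```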
16. Determination of drilling mud weight using deep learning techniques.
- Author
-
Khazaei, Aref, Radfar, Reza, and Toloie Eshlaghy, Abbas
- Subjects
- *
DRILLING muds , *ARTIFICIAL neural networks , *DEEP learning - Abstract
Wellbore stability is an important issue in the drilling operation. Wellbore instability can interrupt drilling and waste a great deal of time and money. Drilling mud is used to maintain the stability of the wellbore; therefore, selecting the proper mud weight is an important issue in the drilling industry. The goal of this research is to present an efficient mud weight estimator using deep learning techniques. To this end, a relatively large dataset (containing more than half a million samples) was compiled from 116 wells of two fields in the United Kingdom and Norway. Our main contributions are assembling this large dataset and applying deep learning techniques to obtain efficient mud weight estimators. Our estimator is an artificial neural network with five hidden layers and 256 nodes in each layer that is able to estimate the mud weight for new wells and depths with a mean absolute error (MAE) of less than ±0.04 pound per gallon (ppg). In various experiments, the presented model has been challenged and real-world conditions have been simulated. The results show that our model can be reliable and efficient in the real world. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
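The headline metric here is the mean absolute error in pounds per gallon. As a minimal sketch of how that figure is computed (the mud weights below are hypothetical, not from the paper's dataset):

```python
def mean_absolute_error(y_true, y_pred):
    """MAE: the average magnitude of the estimation errors, here in ppg."""
    assert len(y_true) == len(y_pred)
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# hypothetical measured mud weights (ppg) vs. network estimates
actual    = [9.2, 10.5, 12.1, 11.0]
estimated = [9.23, 10.46, 12.08, 11.05]
print(mean_absolute_error(actual, estimated))  # well under the 0.04 ppg bound
```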
17. Automatic detection of skin cancer melanoma using transfer learning in deep network.
- Author
-
Wang, Xuyiling, Yang, Ying, and Mandal, Bappaditya
- Subjects
- *
SKIN cancer , *MELANOMA , *SKIN imaging , *EARLY detection of cancer , *DATA augmentation , *DEEP learning - Abstract
As the deadliest type of skin cancer, melanoma has a high mortality rate and claims thousands of lives in the UK every year. However, if detected at an earlier stage, the survival rate increases greatly. With the development of machine learning, many well-known pre-trained models have been used to detect melanoma accurately through imaging analysis, with overall performance far beyond that of skilled human experts. This paper examines the performance of a pre-trained model, the Visual Geometry Group network (VGG), on the International Skin Imaging Collaboration (ISIC) 2019 challenge dataset in automatically classifying melanoma and non-melanoma diseases. The highest accuracy achieved was 0.9067, with an AUROC over 0.93. Ablation studies illustrate potential factors that could affect model performance, including training data size, frozen layers, classifier nodes and data augmentation methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
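The AUROC quoted above can be computed without any ML library via the Mann-Whitney U statistic: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal sketch with hypothetical classifier scores (not the paper's data):

```python
def auroc(labels, scores):
    """AUROC as the Mann-Whitney U statistic: the fraction of
    positive/negative pairs ranked correctly (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy melanoma-vs-benign scores (hypothetical)
print(auroc([1, 1, 0, 0], [0.92, 0.60, 0.65, 0.10]))  # -> 0.75
```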
18. A divide and conquer approach to maximise deep learning mammography classification accuracies.
- Author
-
Jaamour, Adam, Myles, Craig, Patel, Ashay, Chen, Shuen-Jen, McMillan, Lewis, and Harris-Birtill, David
- Subjects
- *
DEEP learning , *CONVOLUTIONAL neural networks , *MAMMOGRAMS , *IMAGE analysis , *BREAST imaging - Abstract
Breast cancer claims 11,400 lives on average every year in the UK, making it one of the deadliest diseases. Mammography is the gold standard for detecting early signs of breast cancer, which can help cure the disease during its early stages. However, incorrect mammography diagnoses are common and may harm patients through unnecessary treatments and operations (or a lack of treatment). Therefore, systems that can learn to detect breast cancer on their own could help reduce the number of incorrect interpretations and missed cases. Various deep learning techniques, which can be used to implement a system that learns how to detect instances of breast cancer in mammograms, are explored throughout this paper. Convolutional Neural Networks (CNNs) are used as part of a pipeline based on deep learning techniques. A divide and conquer approach is followed to analyse the effects on performance and efficiency when utilising diverse deep learning techniques such as varying network architectures (VGG19, ResNet50, InceptionV3, DenseNet121, MobileNetV2), class weights, input sizes, image ratios, pre-processing techniques, transfer learning, dropout rates, and types of mammogram projections. This approach serves as a starting point for model development of mammography classification tasks. Practitioners can benefit from this work by using the divide and conquer results to select the most suitable deep learning techniques for their case out-of-the-box, thus reducing the need for extensive exploratory experimentation. Multiple techniques are found to provide accuracy gains relative to a general baseline (VGG19 model using uncropped 512 × 512 pixels input images with a dropout rate of 0.2 and a learning rate of 1 × 10−3) on the Curated Breast Imaging Subset of DDSM (CBIS-DDSM) dataset. 
These techniques involve transferring pre-trained ImageNet weights to a MobileNetV2 architecture, with pre-trained weights from a binarised version of the mini Mammography Image Analysis Society (mini-MIAS) dataset applied to the fully connected layers of the model, coupled with the use of class weights to alleviate class imbalance and the splitting of CBIS-DDSM samples between images of masses and calcifications. Using these techniques, a 5.6% gain in accuracy over the baseline model was accomplished. Other deep learning techniques from the divide and conquer approach, such as larger image sizes, do not yield increased accuracies without the use of image pre-processing techniques such as Gaussian filtering, histogram equalisation and input cropping. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
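One of the techniques named above is using class weights to counter imbalance. A common recipe (the inverse-frequency "balanced" weighting popularised by scikit-learn; the paper does not specify its exact scheme, and the counts below are hypothetical) looks like this:

```python
from collections import Counter

def balanced_class_weights(labels):
    """Inverse-frequency class weights: w_c = n_samples / (n_classes * count_c).

    Rare classes receive proportionally larger weights in the loss.
    """
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# e.g. 90 benign vs 10 malignant mammograms (hypothetical counts)
print(balanced_class_weights(["benign"] * 90 + ["malignant"] * 10))
# malignant samples get ~9x the weight of benign ones
```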
19. Short-term forecasts of streamflow in the UK based on a novel hybrid artificial intelligence algorithm.
- Author
-
Di Nunno, Fabio, de Marinis, Giovanni, and Granata, Francesco
- Subjects
- *
ARTIFICIAL intelligence , *MACHINE learning , *STREAMFLOW , *WATER management , *DEEP learning , *WATERSHEDS - Abstract
In recent years, the growing impact of climate change on surface water bodies has made the analysis and forecasting of streamflow rates essential for proper planning and management of water resources. This study proposes a novel ensemble (or hybrid) model for short-term streamflow forecasting, based on the combination of a Deep Learning algorithm, the Nonlinear AutoRegressive network with eXogenous inputs, and two Machine Learning algorithms, Multilayer Perceptron and Random Forest, considering precipitation as the only exogenous input and a forecast horizon of up to 7 days. A large regional study was performed, considering 18 watercourses throughout the United Kingdom characterized by different catchment areas and flow regimes. In particular, the predictions obtained with the ensemble Machine Learning-Deep Learning model were compared with those achieved with simpler models based on an ensemble of the two Machine Learning algorithms and on the Deep Learning algorithm alone. The hybrid Machine Learning-Deep Learning model outperformed the simpler models, with values of R2 above 0.9 for several watercourses, with the greatest discrepancies for small basins, where high and non-uniform rainfall throughout the year makes streamflow rate forecasting a challenging task. Furthermore, the hybrid Machine Learning-Deep Learning model has been shown to be less affected by reductions in performance as the forecasting horizon increases compared to the simpler models, leading to reliable predictions even for 7-day forecasts. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
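The abstract does not spell out how the three base learners are combined, so the sketch below shows the simplest plausible scheme: (optionally weighted) averaging of per-model forecast sequences. The model names and flow values are hypothetical placeholders:

```python
def ensemble_forecast(model_forecasts, weights=None):
    """Combine per-model forecast sequences by (weighted) averaging.

    model_forecasts: list of equal-length forecast lists, one per model.
    weights: optional per-model weights; defaults to a simple mean.
    """
    n_models = len(model_forecasts)
    if weights is None:
        weights = [1.0 / n_models] * n_models
    horizon = len(model_forecasts[0])
    return [sum(w * f[t] for w, f in zip(weights, model_forecasts))
            for t in range(horizon)]

# 3-day-ahead streamflow forecasts (m^3/s) from three hypothetical models
narx = [12.0, 11.5, 11.0]   # deep learning component
mlp  = [11.0, 11.0, 10.5]   # multilayer perceptron
rf   = [13.0, 12.0, 11.5]   # random forest
print(ensemble_forecast([narx, mlp, rf]))
```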
20. Longitudinal fundus imaging and its genome- wide association analysis provide evidence for a human retinal aging clock.
- Author
-
Ahadi, Sara, Wilson, Kenneth A., Babenko, Boris, McLean, Cory Y., Bryant, Drew, Pritchard, Orion, Kumar, Ajay, Carrera, Enrique M., Lamy, Ricardo, Stewart, Jay M., Varadarajan, Avinash, Bernd, Marc, Kapahi, Pankaj, and Bashir, Ali
- Subjects
- *
CLOCKS & watches , *AGE , *CLOCK genes , *AGING , *DEEP learning - Abstract
Biological age, distinct from an individual's chronological age, has been studied extensively through predictive aging clocks. However, these clocks have limited accuracy on short time-scales. Here we trained deep learning models on fundus images from the EyePACS dataset to predict individuals' chronological age. Our retinal aging clock, 'eyeAge', predicted chronological age more accurately than other aging clocks (mean absolute error of 2.86 and 3.30 years on quality-filtered data from EyePACS and UK Biobank, respectively). Additionally, eyeAge was independent of blood marker-based measures of biological age, maintaining an all-cause mortality hazard ratio of 1.026 even when adjusted for phenotypic age. The individual-specific nature of eyeAge was reinforced via multiple GWAS hits in the UK Biobank cohort. The top GWAS locus was further validated via knockdown of the fly homolog, Alk, which slowed age-related decline in vision in flies. This study demonstrates the potential utility of a retinal aging clock for studying aging and age-related diseases and quantitatively measuring aging on very short time-scales, opening avenues for quick and actionable evaluation of gero-protective therapeutics. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
21. The Genetic Determinants of Aortic Distention.
- Author
-
Pirruccello, James P., Rämö, Joel T., Choi, Seung Hoan, Chaffin, Mark D., Kany, Shinwan, Nekoui, Mahan, Chou, Elizabeth L., Jurgens, Sean J., Friedman, Samuel F., Juric, Dejan, Stone, James R., Batra, Puneet, Ng, Kenney, Philippakis, Anthony A., Lindsay, Mark E., and Ellinor, Patrick T.
- Subjects
- *
CARDIAC magnetic resonance imaging , *AORTA , *HEART beat , *GENETIC correlations , *DELAYED onset of disease , *CORONARY artery disease - Abstract
As the largest conduit vessel, the aorta is responsible for the conversion of phasic systolic inflow from ventricular ejection into more continuous peripheral blood delivery. Systolic distention and diastolic recoil conserve energy and are enabled by the specialized composition of the aortic extracellular matrix. Aortic distensibility decreases with age and vascular disease. In this study, we sought to discover epidemiologic correlates and genetic determinants of aortic distensibility and strain. We trained a deep learning model to quantify thoracic aortic area throughout the cardiac cycle from cardiac magnetic resonance images and calculated aortic distensibility and strain in 42,342 UK Biobank participants. Descending aortic distensibility was inversely associated with future incidence of cardiovascular diseases, such as stroke (HR: 0.59 per SD; P = 0.00031). The heritabilities of aortic distensibility and strain were 22% to 25% and 30% to 33%, respectively. Common variant analyses identified 12 and 26 loci for ascending and 11 and 21 loci for descending aortic distensibility and strain, respectively. Of the newly identified loci, 22 were not significantly associated with thoracic aortic diameter. Nearby genes were involved in elastogenesis and atherosclerosis. Aortic strain and distensibility polygenic scores had modest effect sizes for predicting cardiovascular outcomes (delaying or accelerating disease onset by 2%-18% per SD change in scores) and remained statistically significant predictors after accounting for aortic diameter polygenic scores. Genetic determinants of aortic function influence risk for stroke and coronary artery disease and may lead to novel targets for medical intervention. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
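The abstract does not give formulas, but aortic strain and distensibility have standard definitions in the cardiac MRI literature: strain is the fractional change in lumen area over the cardiac cycle, and distensibility is strain normalised by central pulse pressure. A sketch under those assumed definitions, with hypothetical example values:

```python
def aortic_strain(area_max, area_min):
    """Fractional change in aortic lumen area over the cardiac cycle."""
    return (area_max - area_min) / area_min

def aortic_distensibility(area_max, area_min, pulse_pressure_mmhg):
    """Strain normalised by central pulse pressure (units: 1/mmHg)."""
    return aortic_strain(area_max, area_min) / pulse_pressure_mmhg

# hypothetical descending aorta: 6.0 vs 5.0 cm^2, pulse pressure 40 mmHg
print(aortic_strain(6.0, 5.0))                # -> 0.2
print(aortic_distensibility(6.0, 5.0, 40.0))  # about 0.005 per mmHg
```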
22. Predicting sex, age, general cognition and mental health with machine learning on brain structural connectomes.
- Author
-
Yeung, Hon Wah, Stolicyn, Aleks, Buchanan, Colin R., Tucker‐Drob, Elliot M., Bastin, Mark E., Luz, Saturnino, McIntosh, Andrew M., Whalley, Heather C., Cox, Simon R., and Smith, Keith
- Subjects
- *
MACHINE learning , *MENTAL health , *DIFFUSION magnetic resonance imaging , *DEEP learning , *COGNITION - Abstract
There is an increasing expectation that advanced, computationally expensive machine learning (ML) techniques, when applied to large population-wide neuroimaging datasets, will help to uncover key differences in the human brain in health and disease. We take a comprehensive approach to explore how multiple aspects of brain structural connectivity can predict sex, age, general cognitive function and general psychopathology, testing different ML algorithms from a deep learning (DL) model (BrainNetCNN) to classical ML methods. We modelled N = 8183 structural connectomes from UK Biobank using six different structural network weightings obtained from diffusion MRI. Streamline count generally provided the highest prediction accuracies in all prediction tasks. DL did not improve on the prediction accuracies of simpler linear models. Further, high correlations between gradient attribution coefficients from DL and model coefficients from linear models suggested that the models ranked the importance of features in similar ways and, to some extent, adopted similar strategies for making predictive decisions. This highlights that model complexity is unlikely to improve detection of associations between structural connectomes and complex phenotypes at the current sample size. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
23. Temporal Pattern Attention for Multivariate Time Series of Tennis Strokes Classification.
- Author
-
Skublewska-Paszkowska, Maria and Powroznik, Pawel
- Subjects
- *
TENNIS rackets , *COMPUTER vision , *MOTION capture (Human mechanics) , *TENNIS , *MOTION capture (Cinematography) , *DEEP learning , *TIME series analysis - Abstract
Human Action Recognition is a challenging task used in many applications. It interacts with many aspects of Computer Vision, Machine Learning, Deep Learning and Image Processing in order to understand human behaviours as well as identify them. It makes a significant contribution to sport analysis by indicating players' performance level and training evaluation. The main purpose of this study is to investigate how the content of three-dimensional data influences the classification accuracy of four basic tennis strokes: forehand, backhand, volley forehand, and volley backhand. An entire player's silhouette and its combination with a tennis racket were taken into consideration as input to the classifier. Three-dimensional data were recorded using a motion capture system (Vicon, Oxford, UK). The Plug-in Gait model, consisting of 39 retro-reflective markers, was used for the player's body acquisition. A seven-marker model was created for tennis racket capture. The racket is represented in the form of a rigid body; therefore, all points associated with it change their coordinates simultaneously. The Attention Temporal Graph Convolutional Network was applied to these sophisticated data. The highest accuracy, up to 93%, was achieved for the data of the whole player's silhouette together with a tennis racket. The obtained results indicate that, for dynamic movements such as tennis strokes, it is necessary to analyze the position of the whole body of the player as well as the racket position. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
24. Validation of a deep-learning-based retinal biomarker (Reti-CVD) in the prediction of cardiovascular disease: data from UK Biobank.
- Author
-
Tseng, Rachel Marjorie Wei Wen, Rim, Tyler Hyungtaek, Shantsila, Eduard, Yi, Joseph K., Park, Sungha, Kim, Sung Soo, Lee, Chan Joo, Thakur, Sahil, Nusinovici, Simon, Peng, Qingsheng, Kim, Hyeonmin, Lee, Geunyoung, Yu, Marco, Tham, Yih-Chung, Bakhai, Ameet, Leeson, Paul, Lip, Gregory Y.H., Wong, Tien Yin, and Cheng, Ching-Yu
- Subjects
- *
CARDIOVASCULAR diseases , *RISK of violence , *BIOMARKERS , *RISK assessment , *PROGNOSIS , *CONFIDENCE intervals - Abstract
Background: Currently in the United Kingdom, cardiovascular disease (CVD) risk assessment is based on the QRISK3 score, in which 10% 10-year CVD risk indicates clinical intervention. However, this benchmark has limited efficacy in clinical practice, and a simpler, non-invasive risk stratification tool is needed. Retinal photography is becoming increasingly acceptable as a non-invasive imaging tool for CVD. Previously, we developed a novel CVD risk stratification system based on retinal photographs predicting future CVD risk. This study aims to further validate our biomarker, Reti-CVD, (1) to detect a risk group of ≥ 10% in 10-year CVD risk and (2) to enhance risk assessment in individuals with QRISK3 of 7.5–10% (termed the borderline-QRISK3 group) using the UK Biobank. Methods: Reti-CVD scores were calculated and stratified into three risk groups based on optimized cut-off values from the UK Biobank. We used Cox proportional-hazards models to evaluate the ability of Reti-CVD to predict CVD events in the general population. The C-statistic was used to assess the prognostic value of adding Reti-CVD to QRISK3 in the borderline-QRISK3 group and three vulnerable subgroups. Results: Among 48,260 participants with no history of CVD, 6.3% had CVD events during the 11-year follow-up. Reti-CVD was associated with an increased risk of CVD (adjusted hazard ratio [HR] 1.41; 95% confidence interval [CI], 1.30–1.52) with a 13.1% (95% CI, 11.7–14.6%) 10-year CVD risk in the Reti-CVD high-risk group. The 10-year CVD risk of the borderline-QRISK3 group was greater than 10% in the Reti-CVD high-risk group (11.5% in the non-statin cohort [n = 45,473], 11.5% in the stage 1 hypertension cohort [n = 11,966], and 14.2% in the middle-aged cohort [n = 38,941]). The C-statistic increased by 0.014 (0.010–0.017) in the non-statin cohort, 0.013 (0.007–0.019) in the stage 1 hypertension cohort, and 0.023 (0.018–0.029) in the middle-aged cohort for CVD event prediction after adding Reti-CVD to QRISK3. 
Conclusions: Reti-CVD has the potential to identify individuals with ≥ 10% 10-year CVD risk who are likely to benefit from earlier preventative CVD interventions. For borderline-QRISK3 individuals with 10-year CVD risk between 7.5 and 10%, Reti-CVD could be used as a risk enhancer tool to help improve discernment accuracy, especially in adult groups that may be pre-disposed to CVD. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
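The C-statistic reported above is Harrell's concordance index for right-censored survival data. A simplified sketch (ties in event times are not handled specially, and the follow-up times and risk scores below are hypothetical, not UK Biobank data):

```python
def concordance_index(times, events, scores):
    """Simplified Harrell's C-statistic.

    A pair (i, j) is comparable when subject i had an event before
    subject j's observed time; it is concordant when the earlier-event
    subject carries the higher risk score (score ties count 0.5).
    """
    concordant = comparable = 0.0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue  # censored subjects cannot anchor a comparable pair
        for j in range(n):
            if times[i] < times[j]:
                comparable += 1
                if scores[i] > scores[j]:
                    concordant += 1
                elif scores[i] == scores[j]:
                    concordant += 0.5
    return concordant / comparable

# follow-up years, CVD event indicator, hypothetical risk scores
print(concordance_index([2, 5, 8], [1, 1, 0], [0.9, 0.6, 0.2]))  # -> 1.0
```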
25. Machine learning political orders.
- Author
-
Amoore, Louise
- Subjects
- *
MACHINE learning , *EMIGRATION & immigration , *DEEP learning , *BREXIT Referendum, 2016 , *COMMUNITIES , *LEARNING problems , *INTERNATIONAL organization - Abstract
A significant set of epistemic and political transformations are taking place as states and societies begin to understand themselves and their problems through the paradigm of deep neural network algorithms. A machine learning political order does not merely change the political technologies of governance, but is itself a reordering of politics, of what the political can be. When algorithmic systems reduce the pluridimensionality of politics to the output of a model, they simultaneously foreclose the potential for other political claims to be made and alternative political projects to be built. More than this foreclosure, a machine learning political order actively profits and learns from the fracturing of communities and the destabilising of democratic rights. The transformation from rules-based algorithms to deep learning models has paralleled the undoing of rules-based social and international orders – from the use of machine learning in the campaigns of the UK EU referendum, to the trialling of algorithmic immigration and welfare systems, and the use of deep learning in the COVID-19 pandemic – with political problems becoming reconfigured as machine learning problems. Machine learning political orders decouple their attributes, features and clusters from underlying social values, no longer tethered to notions of good governance or a good society, but searching instead for the optimal function of abstract representations of data. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. A Topological Loss Function for Deep-Learning Based Image Segmentation Using Persistent Homology.
- Author
-
Clough, James R., Byrne, Nicholas, Oksuz, Ilkay, Zimmer, Veronika A., Schnabel, Julia A., and King, Andrew P.
- Subjects
- *
DEEP learning , *IMAGE segmentation , *MAGNETIC resonance imaging , *IMAGE denoising , *BETTI numbers - Abstract
We introduce a method for training neural networks to perform image or volume segmentation in which prior knowledge about the topology of the segmented object can be explicitly provided and then incorporated into the training process. By using the differentiable properties of persistent homology, a concept used in topological data analysis, we can specify the desired topology of segmented objects in terms of their Betti numbers and then drive the proposed segmentations to contain the specified topological features. Importantly this process does not require any ground-truth labels, just prior knowledge of the topology of the structure being segmented. We demonstrate our approach in four experiments: one on MNIST image denoising and digit recognition, one on left ventricular myocardium segmentation from magnetic resonance imaging data from the UK Biobank, one on the ACDC public challenge dataset and one on placenta segmentation from 3-D ultrasound. We find that embedding explicit prior knowledge in neural network segmentation tasks is most beneficial when the segmentation task is especially challenging and that it can be used in either a semi-supervised or post-processing context to extract a useful training gradient from images without pixelwise labels. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
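The topological prior above is phrased in terms of Betti numbers: a myocardium ring, for example, should form a single connected component with one hole. A fully differentiable persistent-homology loss needs a dedicated library, so the sketch below only illustrates the simplest ingredient, counting connected components (the zeroth Betti number, β0) of a thresholded segmentation mask; the mask values are hypothetical:

```python
from collections import deque

def betti_0(mask):
    """Count foreground connected components (Betti number beta_0)
    in a binary 2-D mask, using 4-connectivity and BFS."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    components = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                components += 1
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return components

# two separate blobs: a prior of beta_0 = 1 would flag this segmentation
print(betti_0([[1, 1, 0],
               [0, 0, 0],
               [0, 1, 1]]))  # -> 2
```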
27. Clinical workflow of sonographers performing fetal anomaly ultrasound scans: deep-learning-based analysis.
- Author
-
Drukker, L., Sharma, H., Karim, J. N., Droste, R., Noble, J. A., and Papageorghiou, A. T.
- Subjects
- *
FETAL ultrasonic imaging , *FETAL abnormalities , *ANATOMICAL planes , *WORKFLOW , *AMNIOTIC liquid , *BIOINFORMATICS software , *FETAL anatomy , *RESEARCH funding , *SECOND trimester of pregnancy , *SYSTEM analysis - Abstract
Objective: Despite decades of obstetric scanning, the field of sonographer workflow remains largely unexplored. In the second trimester, sonographers use scan guidelines to guide their acquisition of standard planes and structures; however, the scan-acquisition order is not prescribed. Using deep-learning-based video analysis, the aim of this study was to develop a deeper understanding of the clinical workflow undertaken by sonographers during second-trimester anomaly scans. Methods: We prospectively collected full-length video recordings of routine second-trimester anomaly scans. Important scan events in the videos were identified by automatically detecting image freeze and image/clip save. The video immediately preceding and following each important event was extracted and labeled as one of 11 commonly acquired anatomical structures. We developed and used a purpose-trained and tested deep-learning annotation model to automatically label the large number of scan events. Thus, anomaly scans were partitioned as a sequence of anatomical planes or fetal structures obtained over time. Results: A total of 496 anomaly scans performed by 14 sonographers were available for analysis. UK guidelines specify that an image or videoclip of five different anatomical regions must be stored, and these were detected in the majority of scans: head/brain was detected in 97.2% of scans, coronal face view (nose/lips) in 86.1%, abdomen in 93.1%, spine in 95.0% and femur in 92.3%. Analyzing the clinical workflow, we observed that sonographers were most likely to begin their scan by capturing the head/brain (in 24.4% of scans), spine (in 23.2%) or thorax/heart (in 22.8%). The most commonly identified two-structure transitions were: placenta/amniotic fluid to maternal anatomy, occurring in 44.5% of scans; head/brain to coronal face (nose/lips) in 42.7%; abdomen to thorax/heart in 26.1%; and three-dimensional/four-dimensional face to sagittal face (profile) in 23.7%. 
Transitions between three or more consecutive structures in sequence were uncommon (up to 13% of scans). None of the captured anomaly scans shared an entirely identical sequence. Conclusions: We present a novel evaluation of the anomaly-scan acquisition process using a deep-learning-based analysis of ultrasound video. We note wide variation in the number and sequence of structures obtained during routine second-trimester anomaly scans. Overall, each anomaly scan was found to be unique in its scanning sequence, suggesting that sonographers take advantage of the fetal position and acquire the standard planes according to their visibility rather than following a strict acquisition order. © 2022 The Authors. Ultrasound in Obstetrics & Gynecology published by John Wiley & Sons Ltd on behalf of International Society of Ultrasound in Obstetrics and Gynecology. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
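Once each scan is reduced to a labelled sequence of structures, the two-structure transition statistics reported above amount to counting adjacent label pairs. A minimal sketch (structure names and sequences below are illustrative, not the study's data):

```python
from collections import Counter

def transition_counts(scan_sequences):
    """Count ordered two-structure transitions across labelled scans."""
    counts = Counter()
    for seq in scan_sequences:
        counts.update(zip(seq, seq[1:]))  # adjacent label pairs
    return counts

# hypothetical per-scan structure sequences
scans = [["head/brain", "face", "abdomen", "thorax/heart"],
         ["spine", "head/brain", "face"]]
print(transition_counts(scans).most_common(1))
# -> [(('head/brain', 'face'), 2)]
```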
28. Automated imaging-based abdominal organ segmentation and quality control in 20,000 participants of the UK Biobank and German National Cohort Studies.
- Author
-
Kart, Turkay, Fischer, Marc, Winzeck, Stefan, Glocker, Ben, Bai, Wenjia, Bülow, Robin, Emmel, Carina, Friedrich, Lena, Kauczor, Hans-Ulrich, Keil, Thomas, Kröncke, Thomas, Mayer, Philipp, Niendorf, Thoralf, Peters, Annette, Pischon, Tobias, Schaarschmidt, Benedikt M., Schmidt, Börge, Schulze, Matthias B., Umutle, Lale, and Völzke, Henry
- Subjects
- *
MAGNETIC resonance imaging , *IMAGE analysis , *DEEP learning , *COHORT analysis , *DIAGNOSTIC imaging , *QUALITY control - Abstract
Large epidemiological studies such as the UK Biobank (UKBB) or German National Cohort (NAKO) provide unprecedented health-related data of the general population, aiming to better understand determinants of health and disease. As part of these studies, Magnetic Resonance Imaging (MRI) is performed in a subset of participants, allowing for phenotypical and functional characterization of different organ systems. Due to the large amount of imaging data, automated image analysis is required, which can be performed using deep learning methods, e.g. for automated organ segmentation. In this paper we describe a computational pipeline for automated segmentation of abdominal organs on MRI data from 20,000 participants of UKBB and NAKO and provide results of the quality control process. We found that approx. 90% of data sets showed no relevant segmentation errors, while relevant errors occurred in a varying proportion of data sets depending on the organ of interest. Image-derived features based on automated organ segmentations showed relevant deviations of varying degree in the presence of segmentation errors. These results show that large-scale, deep-learning-based abdominal organ segmentation on MRI data is feasible with overall high accuracy, but visual quality control remains an important step ensuring the validity of down-stream analyses in large epidemiological imaging studies. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
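Quality control of automated segmentations is typically quantified by overlap with a reference annotation; the Dice coefficient is the usual metric (the abstract does not name its QC metric, so this is an illustrative assumption, and the toy masks below are hypothetical):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary masks given as flat 0/1 lists:
    2*|A∩B| / (|A| + |B|); two empty masks count as perfect agreement."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / total if total else 1.0

# automated liver mask vs. a reference annotation (toy 0/1 vectors)
auto = [1, 1, 1, 0, 0, 1]
ref  = [1, 1, 0, 0, 1, 1]
print(dice_coefficient(auto, ref))  # -> 0.75
```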
29. Analysis of developments and hotspots of international research on sports AI.
- Author
-
Li, Jian, Li, Meiyue, and Lin, Hao
- Subjects
- *
DEEP learning , *ARTIFICIAL neural networks , *ARTIFICIAL intelligence , *SPORTS sciences , *WEARABLE technology , *MACHINE learning - Abstract
In this paper, 1,538 papers retrieved with the keywords "sports artificial intelligence (AI)" from the Web of Science database since 2007 were taken as the data source, and the CiteSpace V software was used to visualize and analyze them. A visual knowledge graph was used to map the countries, institutions and authors conducting sports AI research, the discipline distribution, and the research hotspots and development trends of the past 15 years. Subsequently, its development direction and research progress were discussed. Sports AI research was widely distributed, with the US, China and the UK leading the way. The most prolific authors and teams in research on sports AI were concentrated in American universities. Their main research direction is to develop and improve smart wearable devices based on machine learning and deep learning technologies for different groups of people. Research on sports AI involves multiple disciplines, mainly applying and referring to research methodologies and theories from engineering, computer science and sports science. It could be seen from the frequency and centrality of keywords that in the current field of sports AI, machine learning is the main direction, artificial neural networks are the main algorithm, and practical and empirical research based on data mining is the focus. The research hotspots were divided into three major clusters: physical health promotion, sports injury prevention and control, and athletic performance enhancement. The perfect integration of intelligent technology into sports still has an arduous and long way to go; future development requires the joint efforts and participation of scientific researchers, professionals and the general public. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
30. Towards an Automated Approach for Monitoring Tree Phenology Using Vehicle Dashcams in Urban Environments.
- Author
-
Boyd, Doreen S., Crudge, Sally, and Foody, Giles
- Subjects
- *
PLANT phenology , *URBAN trees , *PHENOLOGY , *CLIMATE change , *REMOTE sensing , *DEEP learning , *LANDSAT satellites , *THEMATIC mapper satellite - Abstract
Trees in urban environments hold significant value in providing ecosystem services, which will become increasingly important as urban populations grow. Tree phenology is highly sensitive to climatic variation, and resultant phenological shifts have significant impact on ecosystem function. Data on urban tree phenology is important to collect. Typical remote methods to monitor tree phenological transitions, such as satellite remote sensing and fixed digital camera networks, are limited by financial costs and coarse resolutions, both spatially and temporally and thus there exists a data gap in urban settings. Here, we report on a pilot study to evaluate the potential to estimate phenological metrics from imagery acquired with a conventional dashcam fitted to a car. Dashcam images were acquired daily in spring 2020, March to May, for a 2000 m stretch of road in Melksham, UK. This pilot study indicates that time series imagery of urban trees, from which meaningful phenological data can be extracted, is obtainable from a car-mounted dashcam. The method based on the YOLOv3 deep learning algorithm demonstrated suitability for automating stages of processing towards deriving a greenness metric from which the date of tree green-up was calculated. These dates of green-up are similar to those obtained by visual analyses, with a maximum of a 4-day difference; and differences in green-up between trees (species-dependent) were evident. Further work is required to fully automate such an approach for other remote sensing capture methods, and to scale-up through authoritative and citizen science agencies. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
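The "greenness metric" extracted from the dashcam crops is not defined in the abstract; a standard choice in phenology work is the green chromatic coordinate, G/(R+G+B), averaged over the tree-crown pixels, with green-up dated from its seasonal rise. A sketch under that assumption (pixel values are hypothetical):

```python
def green_chromatic_coordinate(pixels):
    """Mean G / (R + G + B) over RGB pixel tuples: a common greenness
    index used to track green-up in phenology time series."""
    gcc = [g / (r + g + b) for r, g, b in pixels if (r + g + b) > 0]
    return sum(gcc) / len(gcc)

# hypothetical tree-crown pixels before vs. after leaf-out
march = [(90, 80, 70), (100, 90, 80)]
may   = [(60, 140, 50), (70, 160, 60)]
print(green_chromatic_coordinate(march) < green_chromatic_coordinate(may))  # -> True
```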
31. Detecting and responding to hostile disinformation activities on social media using machine learning and deep neural networks.
- Author
-
Cartwright, Barry, Frank, Richard, Weir, George, and Padda, Karmvir
- Subjects
- *
ARTIFICIAL neural networks , *MACHINE learning , *SOCIAL media , *DISINFORMATION , *DEEP learning , *BREXIT Referendum, 2016 , *ARTIFICIAL intelligence - Abstract
Disinformation attacks that make use of social media platforms, e.g., the attacks orchestrated by the Russian "Internet Research Agency" during the 2016 U.S. Presidential election campaign and the 2016 Brexit referendum in the UK, have led to increasing demands from governmental agencies for AI tools that are capable of identifying such attacks in their earliest stages, rather than responding to them in retrospect. This research was undertaken on behalf of the Canadian Armed Forces and Department of National Defence. Our ultimate objective is the development of an integrated set of machine-learning algorithms which will mobilize artificial intelligence to identify hostile disinformation activities in "near-real-time." Employing The Dark Crawler, the Posit Toolkit, TensorFlow (Deep Neural Networks), the Random Forest classifier, and the short-text classification programs LibShortText and LibLinear, we analysed a wide sample of social media posts that exemplify the "fake news" disseminated by Russia's Internet Research Agency, comparing them to "real news" posts in order to develop an automated means of classification. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
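Classifiers like the Random Forest and LibShortText tools named above operate on vectorized text. A minimal bag-of-words featurization sketch (the tokenizer and fixed vocabulary here are illustrative assumptions, not the study's Posit-based feature set):

```python
# Illustrative bag-of-words featurization: turn a post into a
# term-frequency vector over a fixed vocabulary, ready for any
# downstream classifier. Not the study's actual feature pipeline.
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def vectorize(text, vocabulary):
    """Term-frequency vector over a fixed vocabulary."""
    counts = Counter(tokenize(text))
    return [counts.get(term, 0) for term in vocabulary]
```

Real pipelines add TF-IDF weighting, n-grams, and the syntactic features Posit extracts, but the vector-per-post shape is the same.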
32. From YouTube to the brain: Transfer learning can improve brain-imaging predictions with deep learning.
- Author
-
Malik, Nahiyan and Bzdok, Danilo
- Subjects
- *
DEEP learning , *CONVOLUTIONAL neural networks , *FLUID intelligence , *X-ray imaging - Abstract
Deep learning has recently achieved best-in-class performance in several fields, including biomedical domains such as X-ray imaging. Yet, data scarcity poses a strict limit on training successful deep learning systems in many, if not most, biomedical applications, including those involving brain images. In this study, we translate state-of-the-art transfer learning techniques to single-subject prediction of simpler (sex and age) and more complex phenotypes (number of people in household, household income, fluid intelligence and smoking behavior). We fine-tuned 2D and 3D ResNet-18 convolutional neural networks for target phenotype predictions from brain images of ∼40,000 UK Biobank participants, after pretraining on YouTube videos from the Kinetics dataset and natural images from the ImageNet dataset. Transfer learning was effective on several phenotypes, especially sex and age classification. Transfer learning also outperformed deep learning models trained from scratch, especially on smaller sample sizes. The out-of-sample performance using transfer learning from previously learned knowledge based on real-world images and videos could unlock the potential in many areas of imaging neuroscience where deep learning solutions are currently infeasible. • Neuroimaging faces challenges in effectively training DNNs due to data scarcity. • Transfer learning (TL) could aid in training DNNs in the data-scarce neuroimaging setting. • Using TL, we improved predictions on several phenotypes and smaller sample sizes. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
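Transfer learning in its simplest form keeps pretrained network features fixed and trains only a small head on the target task. A sketch of that recipe, with the "pretrained features" as plain lists and a logistic-regression head trained by gradient descent (an illustration of the principle, not the paper's ResNet-18 fine-tuning):

```python
# Sketch: train only a linear head on top of frozen pretrained features.
# The feature vectors and hyperparameters here are illustrative.
import math

def train_linear_head(features, labels, lr=0.5, epochs=200):
    """Logistic regression on fixed features via stochastic gradient descent."""
    dim = len(features[0])
    weights = [0.0] * dim
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(w * v for w, v in zip(weights, x)) + bias
            p = 1.0 / (1.0 + math.exp(-z))      # sigmoid
            err = p - y                          # gradient of log-loss
            weights = [w - lr * err * v for w, v in zip(weights, x)]
            bias -= lr * err
    return weights, bias

def predict(weights, bias, x):
    z = sum(w * v for w, v in zip(weights, x)) + bias
    return 1 if z >= 0 else 0
```

Full fine-tuning (updating the pretrained weights too, as in the paper) follows the same loop, just with gradients flowing into the backbone as well.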
33. A Deep Learning Application to Map Weed Spatial Extent from Unmanned Aerial Vehicles Imagery.
- Author
-
Fraccaro, Paolo, Butt, Junaid, Edwards, Blair, Freckleton, Robert P., Childs, Dylan Z., Reusch, Katharina, and Comont, David
- Subjects
- *
DEEP learning , *LANDSAT satellites , *WEEDS , *DRONE aircraft , *WINTER wheat , *AGRICULTURAL productivity , *MACHINE learning - Abstract
Weed infestation is a global threat to agricultural productivity, leading to low yields and financial losses. Weed detection, based on applying machine learning to imagery collected by Unmanned Aerial Vehicles (UAV), has shown potential in the past; however, validation on large datasets (e.g., across a wide number of different fields) remains lacking, with few solutions actually made operational. Here, we demonstrate the feasibility of automatically detecting weeds in winter wheat fields based on deep learning methods applied to UAV data at scale. Focusing on black-grass (the most pernicious weed across northwest Europe), we show high performance (i.e., accuracy above 0.9) and highly statistically significant correlation (i.e., ρ > 0.75 and p < 0.00001) between imagery-derived local and global weed maps and out-of-bag field survey data, collected by experts over 31 fields (205 hectares) in the UK. We demonstrate how the developed deep learning model can be made available via an easy-to-use docker container, with results accessible through an interactive dashboard. Using this approach, clickable weed maps can be created and deployed rapidly, allowing the user to explore actual model predictions for each field. This shows the potential for this approach to be used operationally and influence agronomic decision-making in the real world. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
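The headline ρ > 0.75 above is a Spearman rank correlation between imagery-derived weed maps and field-survey scores. A self-contained sketch of that statistic (rank both series, then take the Pearson correlation of the ranks; variable names are illustrative):

```python
# Spearman rank correlation, the statistic used to compare predicted
# weed density maps against expert field-survey data.

def _ranks(values):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of tied positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Pearson correlation of the two rank series."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```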
34. EMM-LC Fusion: Enhanced Multimodal Fusion for Lung Cancer Classification.
- Author
-
Barrett, James and Viana, Thiago
- Subjects
- *
LUNG cancer , *TUMOR classification , *MACHINE learning , *MULTIMODAL user interfaces , *DELAYED diagnosis , *TUMOR markers - Abstract
Lung cancer (LC) is the most common cause of cancer-related deaths in the UK due to delayed diagnosis. The existing literature establishes a variety of factors which contribute to this, including the misjudgement of anatomical structure by doctors and radiologists. This study set out to develop a solution which utilises multiple modalities in order to detect the presence of LC. A review of the existing literature established that existing methods fail to exploit rich intermediate feature representations that can capture complex multimodal associations between heterogeneous data sources. The methodological approach involved the development of a novel machine learning (ML) model to facilitate quantitative analysis. The proposed solution, named EMM-LC Fusion, extracts intermediate features from a pre-trained modified AlignedXception model and concatenates these with linearly inflated features of Clinical Data Elements (CDE). The implementation was evaluated and compared against existing literature using F1 score, average precision (AP), and area under curve (AUC) as metrics. The findings presented in this study show a statistically significant improvement (p < 0.05) upon the previous fusion method, with an increase in F1 score from 0.402 to 0.508. This establishes that the extraction of intermediate features produces a fertile environment for the detection of intermodal relationships in LC classification. This research also provides an architecture to facilitate the future implementation of alternative biomarkers for lung cancer, one of the acknowledged limitations of this study. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
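The fusion step described above, where low-dimensional clinical data elements are linearly "inflated" and concatenated with intermediate image features, can be sketched as follows (the weight matrix and dimensions are illustrative assumptions, not the paper's learned parameters):

```python
# Sketch of EMM-LC-style fusion: project a small clinical vector (CDE)
# into a larger feature space with a linear map, then concatenate it
# with intermediate image features. Dimensions here are illustrative.

def linear_inflate(cde, weight_matrix):
    """Linearly project a small clinical vector into a larger space."""
    return [sum(w * v for w, v in zip(row, cde)) for row in weight_matrix]

def fuse(image_features, cde, weight_matrix):
    """Concatenate image features with inflated clinical features."""
    return image_features + linear_inflate(cde, weight_matrix)
```

In the actual model the weight matrix is learned end-to-end and the fused vector feeds a classification head; the concatenation itself is this simple.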
35. Online Hybrid Learning Methods for Real-Time Structural Health Monitoring Using Remote Sensing and Small Displacement Data.
- Author
-
Entezami, Alireza, Arslan, Ali Nadir, De Michele, Carlo, and Behkamal, Bahareh
- Subjects
- *
STRUCTURAL health monitoring , *BLENDED learning , *REMOTE sensing , *ONLINE education , *DEEP learning , *ARTIFICIAL intelligence , *INTRUSION detection systems (Computer security) - Abstract
Structural health monitoring (SHM) by using remote sensing and synthetic aperture radar (SAR) images is a promising approach to assessing the safety and integrity of civil structures. Beyond this, artificial intelligence and machine learning have brought great opportunities to SHM by learning an automated computational model for damage detection. Accordingly, this article proposes online hybrid learning methods to firstly deal with some major challenges in data-driven SHM and secondly detect damage via small displacement data from SAR images in a real-time manner. The proposed methods contain three main parts: (i) data augmentation by Hamiltonian Monte Carlo and slice sampling for addressing the problem of small displacement data, (ii) data normalization by an online deep transfer learning algorithm for removing the effects of environmental and/or operational variability from augmented data, and (iii) feature classification via a scalar novelty score. The major contributions of this research include proposing two online hybrid unsupervised learning methods and providing effective frameworks for online damage detection. A small set of displacement samples extracted from TerraSAR-X SAR images, from a long-term monitoring scheme of the Tadcaster Bridge in the United Kingdom, is applied to validate the proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
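Step (iii) above classifies features via a scalar novelty score: how far a new displacement sample lies from the training distribution. A sketch using a variance-normalized (diagonal-covariance) distance as a stand-in for the paper's score (the formula and threshold are illustrative assumptions):

```python
# Sketch of novelty-score damage detection: score a new displacement
# sample by its distance from the training distribution, flag damage
# when the score exceeds a threshold. The diagonal-covariance distance
# and the threshold of 3.0 are illustrative assumptions.

def novelty_score(sample, training_data):
    n = len(training_data)
    dim = len(sample)
    means = [sum(row[d] for row in training_data) / n for d in range(dim)]
    variances = [
        sum((row[d] - means[d]) ** 2 for row in training_data) / n
        for d in range(dim)
    ]
    return sum(
        (sample[d] - means[d]) ** 2 / variances[d] for d in range(dim)
    ) ** 0.5

def is_damaged(sample, training_data, threshold=3.0):
    return novelty_score(sample, training_data) > threshold
```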
36. transferGWAS: GWAS of images using deep transfer learning.
- Author
-
Kirchler, Matthias, Konigorski, Stefan, Norden, Matthias, Meltendorf, Christian, Kloft, Marius, Schurmann, Claudia, and Lippert, Christoph
- Subjects
- *
FUNDUS oculi , *DEEP learning , *ARTIFICIAL neural networks , *FALSE positive error , *GENOME-wide association studies , *GENETIC variation , *EYE color - Abstract
Motivation: Medical images can provide rich information about diseases and their biology. However, investigating their association with genetic variation requires non-standard methods. We propose transferGWAS, a novel approach to perform genome-wide association studies directly on full medical images. First, we learn semantically meaningful representations of the images based on a transfer learning task, during which a deep neural network is trained on independent but similar data. Then, we perform genetic association tests with these representations. Results: We validate the type I error rates and power of transferGWAS in simulation studies of synthetic images. Then we apply transferGWAS in a genome-wide association study of retinal fundus images from the UK Biobank. This first-of-a-kind GWAS of full imaging data yielded 60 genomic regions associated with retinal fundus images, of which 7 are novel candidate loci for eye-related traits and diseases. Availability and implementation: Our method is implemented in Python and available at https://github.com/mkirchler/transferGWAS/. Supplementary information: Supplementary data are available at Bioinformatics online. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
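The association step above tests each genetic variant against the learned image representations. At its core this is a per-variant regression; a bare-bones sketch of the slope estimate (real GWAS machinery adds covariates, standard errors, and multiple-testing control, none of which appear here):

```python
# Sketch of a per-variant association test: ordinary least-squares slope
# of one learned representation dimension on genotype dosage (0/1/2).
# A deliberately minimal stand-in for standard GWAS tooling.

def ols_slope(genotypes, phenotype):
    """Least-squares slope of phenotype on genotype dosage."""
    n = len(genotypes)
    mg = sum(genotypes) / n
    mp = sum(phenotype) / n
    cov = sum((g - mg) * (p - mp) for g, p in zip(genotypes, phenotype))
    var = sum((g - mg) ** 2 for g in genotypes)
    return cov / var
```

In transferGWAS this test is repeated for every variant and every representation dimension, which is why efficient linear-model implementations matter at biobank scale.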
37. Automated image curation in diabetic retinopathy screening using deep learning.
- Author
-
Nderitu, Paul, Nunez do Rio, Joan M., Webster, Ms Laura, Mann, Samantha S., Hopkins, David, Cardoso, M. Jorge, Modat, Marc, Bergeles, Christos, and Jackson, Timothy L.
- Subjects
- *
DIABETIC retinopathy , *DEEP learning , *MEDICAL screening , *LATERAL dominance , *GENERALIZATION - Abstract
Diabetic retinopathy (DR) screening images are heterogeneous and contain undesirable non-retinal, incorrect field and ungradable samples which require curation, a laborious task to perform manually. We developed and validated single and multi-output laterality, retinal presence, retinal field and gradability classification deep learning (DL) models for automated curation. The internal dataset comprised 7743 images from DR screening (UK), with 1479 external test images (Portugal and Paraguay). Internal vs external multi-output laterality AUROC were right (0.994 vs 0.905), left (0.994 vs 0.911) and unidentifiable (0.996 vs 0.680). Retinal presence AUROC were (1.000 vs 1.000). Retinal field AUROC were macula (0.994 vs 0.955), nasal (0.995 vs 0.962) and other retinal field (0.997 vs 0.944). Gradability AUROC were (0.985 vs 0.918). DL effectively detects laterality, retinal presence, retinal field and gradability of DR screening images with generalisation between centres and populations. DL models could be used for automated image curation within DR screening. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
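Every figure in the abstract above is an AUROC. For reference, the statistic can be computed directly from scores and labels via the Mann-Whitney formulation: the probability that a randomly chosen positive scores above a randomly chosen negative (an O(n·m) illustration; production code uses the rank-based O(n log n) form):

```python
# AUROC via the Mann-Whitney statistic: fraction of (positive, negative)
# pairs where the positive receives the higher score, ties counting 0.5.

def auroc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for q in neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

A macro-averaged AUROC, as reported for the multi-output models, is simply the mean of this quantity over the per-class one-vs-rest problems.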
38. Policy-Driven, Multimodal Deep Learning for Predicting Visual Fields from the Optic Disc and OCT Imaging.
- Author
-
Kihara, Yuka, Montesano, Giovanni, Chen, Andrew, Amerasinghe, Nishani, Dimitriou, Chrysostomos, Jacob, Aby, Chabi, Almira, Crabb, David P., and Lee, Aaron Y.
- Subjects
- *
VISUAL fields , *OPTICAL coherence tomography , *OPTIC disc , *DEEP learning , *VISUAL learning - Abstract
To develop and validate a deep learning (DL) system for predicting each point on visual fields (VFs) from disc and OCT imaging and derive a structure–function mapping. Retrospective, cross-sectional database study. A total of 6437 patients undergoing routine care for glaucoma in 3 clinical sites in the United Kingdom. OCT and infrared reflectance (IR) optic disc imaging were paired with the closest VF within 7 days. EfficientNet B2 was used to train 2 single-modality DL models to predict each of the 52 sensitivity points on the 24-2 VF pattern. A policy DL model was designed and trained to fuse the 2 model predictions. Pointwise mean absolute error (PMAE). A total of 5078 imaging scans to VF pairs were used as a held-out test set to measure the final performance. The improvement in PMAE with the policy model was 0.485 (0.438, 0.533) decibels (dB) compared with the IR image of the disc alone and 0.060 (0.047, 0.073) dB compared with the OCT alone. The improvement with the policy fusion model was statistically significant (P < 0.0001). Occlusion masking shows that the DL models learned the correct structure–function mapping in a data-driven, feature-agnostic fashion. The multimodal, policy DL model performed the best; it provided explainable maps of its confidence in fusing data from single modalities and provides a pathway for probing the structure–function relationship in glaucoma. We used a large, real-world dataset to train a multimodal interpretable deep learning method to predict visual field sensitivity values in patients with glaucoma. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
39. Multi-agent deep reinforcement learning for adaptive coordinated metro service operations with flexible train composition.
- Author
-
Ying, Cheng-shuo, Chow, Andy H.F., Nguyen, Hoa T.M., and Chin, Kwai-Sang
- Subjects
- *
ADAPTIVE control systems , *ARTIFICIAL neural networks , *DEEP learning , *SUBWAYS , *REINFORCEMENT learning , *ACTING education , *MARKOV processes - Abstract
This paper presents an adaptive control system for coordinated metro operations with flexible train composition by using a multi-agent deep reinforcement learning (MADRL) approach. The control problem is formulated as a Markov decision process (MDP) with multiple agents regulating different service lines in a metro network with passenger transfer. To ensure the overall computational effectiveness and stability of the control system, we adopt an actor–critic reinforcement learning framework in which each control agent is associated with a critic function for estimating future system states and an actor function deriving local operational decisions. The critics and actors in the MADRL are represented by multi-layer artificial neural networks (ANNs). A multi-agent deep deterministic policy gradient (MADDPG) algorithm is developed for training the actor and critic ANNs through successive simulated transitions over the entire metro network. The developed framework is tested with a real-world scenario on the Bakerloo and Victoria lines of the London Underground, UK. Experiment results demonstrate that the proposed method can outperform previous centralized optimization and distributed control approaches in terms of solution quality and performance achieved. Further analysis shows the merits of MADRL for coordinated service regulation with flexible train composition. This study contributes to real-time coordinated metro network services with flexible train composition and advanced optimization techniques. • An adaptive rail transit control system with passengers' transfers and flexible train composition. • A novel modeling and optimization framework based on multi-agent deep reinforcement learning. • A computational framework with 'decentralized execution and centralized training' for effectiveness and stability. • Case study demonstrating the system efficiency and computational effectiveness of proposed algorithm over previous methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
40. Computational limits to the legibility of the imaged human brain.
- Author
-
Ruffle, James K., Gray, Robert J, Mohinta, Samia, Pombo, Guilherme, Kaul, Chaitanya, Hyare, Harpreet, Rees, Geraint, and Nachev, Parashkev
- Subjects
- *
CONVOLUTIONAL neural networks , *ARTIFICIAL neural networks , *MAGNETIC resonance imaging , *BRAIN imaging , *ADLERIAN psychology , *INDIVIDUATION (Psychology) - Abstract
• Individuals remain poorly predictable from population-level analyses of the brain. • The fidelity limits that data scale, compute, and model flexibility impose are unknown. • We quantify these limits in multimodal, multitarget analyses across 23,810 people. • We show a radical change in modelling regimes is needed for individual prediction. Our knowledge of the organisation of the human brain at the population-level is yet to translate into power to predict functional differences at the individual-level, limiting clinical applications and casting doubt on the generalisability of inferred mechanisms. It remains unknown whether the difficulty arises from the absence of individuating biological patterns within the brain, or from limited power to access them with the models and compute at our disposal. Here we comprehensively investigate the resolvability of such patterns with data and compute at unprecedented scale. Across 23,810 unique participants from UK Biobank, we systematically evaluate the predictability of 25 individual biological characteristics, from all available combinations of structural and functional neuroimaging data. Over 4526 GPU-hours of computation, we train, optimize, and evaluate out-of-sample 700 individual predictive models, including fully-connected feed-forward neural networks of demographic, psychological, serological, chronic disease, and functional connectivity characteristics, and both uni- and multi-modal 3D convolutional neural network models of macro- and micro-structural brain imaging. We find a marked discrepancy between the high predictability of sex (balanced accuracy 99.7%), age (mean absolute error 2.048 years, R2 0.859), and weight (mean absolute error 2.609 kg, R2 0.625), for which we set new state-of-the-art performance, and the surprisingly low predictability of other characteristics. Neither structural nor functional imaging predicted an individual's psychology better than the coincidence of common chronic disease (p < 0.05).
Serology predicted chronic disease (p < 0.05) and was best predicted by it (p < 0.001), followed by structural neuroimaging (p < 0.05). Our findings suggest either more informative imaging or more powerful models will be needed to decipher individual level characteristics from the human brain. We make our models and code openly available. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
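The two headline metrics in this study are balanced accuracy for classification targets (e.g., sex) and mean absolute error for regression targets (e.g., age, weight). Both are simple to state exactly:

```python
# Balanced accuracy: the mean of per-class recalls, robust to class
# imbalance. Mean absolute error: the average |prediction - target|.
# Binary labels 0/1 assumed for the classification case.

def balanced_accuracy(predictions, labels):
    """Mean of per-class recalls for binary labels 0/1."""
    recalls = []
    for cls in (0, 1):
        members = [(p, y) for p, y in zip(predictions, labels) if y == cls]
        correct = sum(1 for p, y in members if p == y)
        recalls.append(correct / len(members))
    return sum(recalls) / 2

def mean_absolute_error(predictions, targets):
    return sum(abs(p - t) for p, t in zip(predictions, targets)) / len(targets)
```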
41. SUGAR: Spherical ultrafast graph attention framework for cortical surface registration.
- Author
-
Ren, Jianxun, An, Ning, Zhang, Youjia, Wang, Danyang, Sun, Zhenyu, Lin, Cong, Cui, Weigang, Wang, Weiwei, Zhou, Ying, Zhang, Wei, Hu, Qingyu, Zhang, Ping, Hu, Dan, Wang, Danhong, and Liu, Hesheng
- Subjects
- *
GRAPH neural networks , *DATA augmentation , *RECORDING & registration , *SUGAR , *DEEP learning - Abstract
• We propose a deep-learning framework for cortical surface registration. • A novel spherical graph neural network was developed for cortical spherical meshes. • Fold and distortion losses were used to preserve topology and control distortions. • Extensive evaluation was performed in over 10,000 scans from 7 diverse datasets. • Our method showed sub-second run time, superior accuracy, and minimal distortions. Cortical surface registration plays a crucial role in aligning cortical functional and anatomical features across individuals. However, conventional registration algorithms are computationally inefficient. Recently, learning-based registration algorithms have emerged as a promising solution, significantly improving processing efficiency. Nonetheless, there remains a gap in the development of a learning-based method that exceeds the state-of-the-art conventional methods simultaneously in computational efficiency, registration accuracy, and distortion control, despite the theoretically greater representational capabilities of deep learning approaches. To address the challenge, we present SUGAR, a unified unsupervised deep-learning framework for both rigid and non-rigid registration. SUGAR incorporates a U-Net-based spherical graph attention network and leverages the Euler angle representation for deformation. In addition to the similarity loss, we introduce fold and multiple distortion losses to preserve topology and minimize various types of distortions. Furthermore, we propose a data augmentation strategy specifically tailored for spherical surface registration to enhance the registration performance. Through extensive evaluation involving over 10,000 scans from 7 diverse datasets, we showed that our framework exhibits comparable or superior registration performance in accuracy, distortion, and test-retest reliability compared to conventional and learning-based methods. 
Additionally, SUGAR achieves remarkable sub-second processing times, offering a notable speed-up of approximately 12,000 times in registering 9,000 subjects from the UK Biobank dataset in just 32 min. This combination of high registration performance and accelerated processing time may greatly benefit large-scale neuroimaging studies. [Display omitted] [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Performance of artificial intelligence for detection of subtle and advanced colorectal neoplasia.
- Author
-
Ahmad, Omer F., González‐Bueno Puyal, Juana, Brandao, Patrick, Kader, Rawen, Abbasi, Faisal, Hussein, Mohamed, Haidry, Rehan J., Toth, Daniel, Mountney, Peter, Seward, Ed, Vega, Roser, Stoyanov, Danail, and Lovat, Laurence B.
- Subjects
- *
ARTIFICIAL intelligence , *TUMORS , *COLORECTAL cancer , *POLYPS - Abstract
Objectives: There is uncertainty regarding the efficacy of artificial intelligence (AI) software to detect advanced subtle neoplasia, particularly flat lesions and sessile serrated lesions (SSLs), due to low prevalence in testing datasets and prospective trials. This has been highlighted as a top research priority for the field. Methods: An AI algorithm was evaluated on four video test datasets containing 173 polyps (35,114 polyp‐positive frames and 634,988 polyp‐negative frames) specifically enriched with flat lesions and SSLs, including a challenging dataset containing subtle advanced neoplasia. The challenging dataset was also evaluated by eight endoscopists (four independent, four trainees, according to the Joint Advisory Group on gastrointestinal endoscopy [JAG] standards in the UK). Results: In the first two video datasets, the algorithm achieved per‐polyp sensitivities of 100% and 98.9%. Per‐frame sensitivities were 84.1% and 85.2%. In the subtle dataset, the algorithm detected a significantly higher number of polyps (P < 0.0001) compared to JAG‐independent and trainee endoscopists, achieving per‐polyp sensitivities of 79.5%, 37.2% and 11.5%, respectively. Furthermore, when considering subtle polyps detected by both the algorithm and at least one endoscopist, the AI detected polyps significantly faster on average. Conclusions: The AI‐based algorithm achieved high per‐polyp sensitivities for advanced colorectal neoplasia, including flat lesions and SSLs, outperforming both JAG‐independent and trainee endoscopists on a very challenging dataset containing subtle lesions that could easily have been overlooked and contribute to interval colorectal cancer. Further prospective trials should evaluate AI detection of subtle advanced neoplasia in higher‐risk populations for colorectal cancer. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
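The abstract reports two distinct sensitivities: per-frame (fraction of polyp-positive frames flagged) and per-polyp (a polyp counts as detected if flagged in at least one of its frames). A sketch of both, where the input format, a mapping from polyp ID to per-frame detection flags, is an illustrative assumption:

```python
# Per-frame vs per-polyp sensitivity for video polyp detection.
# `detections` maps each polyp ID to a list of booleans, one per frame
# in which that polyp appears (an assumed, illustrative data layout).

def per_frame_sensitivity(detections):
    flagged = sum(sum(frames) for frames in detections.values())
    total = sum(len(frames) for frames in detections.values())
    return flagged / total

def per_polyp_sensitivity(detections):
    detected = sum(1 for frames in detections.values() if any(frames))
    return detected / len(detections)
```

The gap between the two (e.g., 100% per-polyp vs 84.1% per-frame above) reflects polyps that are caught in some frames but missed in others.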
43. Retinal photograph-based deep learning predicts biological age, and stratifies morbidity and mortality risk.
- Author
-
Nusinovici, Simon, Rim, Tyler Hyungtaek, Yu, Marco, Lee, Geunyoung, Tham, Yih-Chung, Cheung, Ning, Chong, Crystal Chun Yuen, Soh, Zhi Da, Thakur, Sahil, Lee, Chan Joo, Sabanayagam, Charumathi, Lee, Byoung Kwon, Park, Sungha, Kim, Sung Soo, Kim, Hyeon Chang, Wong, Tien-Yin, and Cheng, Ching-Yu
- Subjects
- *
DISEASE risk factors , *DEEP learning , *BIOMARKERS , *RETINA , *TISSUE banks , *MEDICAL screening , *CARDIOVASCULAR diseases , *RISK assessment , *PHOTOGRAPHY , *AGING , *DESCRIPTIVE statistics , *TUMORS , *SENSITIVITY & specificity (Statistics) , *ALGORITHMS , *PROPORTIONAL hazards models , *PHENOTYPES ,MORTALITY risk factors - Abstract
Background: Ageing is an important risk factor for a variety of human pathologies. Biological age (BA) may better capture ageing-related physiological changes compared with chronological age (CA). Objective: We developed a deep learning (DL) algorithm to predict BA based on retinal photographs and evaluated the performance of our new ageing marker in the risk stratification of mortality and major morbidity in general populations. Methods: We first trained a DL algorithm using 129,236 retinal photographs from 40,480 participants in the Korean Health Screening study to predict the probability of age being ≥65 years ('RetiAGE') and then evaluated the ability of RetiAGE to stratify the risk of mortality and major morbidity among 56,301 participants in the UK Biobank. Cox proportional hazards models were used to estimate the hazard ratios (HRs). Results: In the UK Biobank, over a 10-year follow-up, 2,236 (4.0%) died; of them, 636 (28.4%) were due to cardiovascular diseases (CVDs) and 1,276 (57.1%) due to cancers. Compared with the participants in the RetiAGE first quartile, those in the RetiAGE fourth quartile had a 67% higher risk of 10-year all-cause mortality (HR = 1.67 [1.42–1.95]), a 142% higher risk of CVD mortality (HR = 2.42 [1.69–3.48]) and a 60% higher risk of cancer mortality (HR = 1.60 [1.31–1.96]), independent of CA and established ageing phenotypic biomarkers. Likewise, compared with the first quartile group, the risk of CVD and cancer events in the fourth quartile group increased by 39% (HR = 1.39 [1.14–1.69]) and 18% (HR = 1.18 [1.10–1.26]), respectively. The best discrimination ability for RetiAGE alone was found for CVD mortality (c-index = 0.70, sensitivity = 0.76, specificity = 0.55). Furthermore, adding RetiAGE increased the discrimination ability of the model beyond CA and phenotypic biomarkers (increment in c-index between 1 and 2%). Conclusions: The DL-derived RetiAGE provides a novel, alternative approach to measure ageing. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
44. Digital twins based on bidirectional LSTM and GAN for modelling the COVID-19 pandemic.
- Author
-
Quilodrán-Casas, César, Silva, Vinicius L.S., Arcucci, Rossella, Heaney, Claire E., Guo, YiKe, and Pain, Christopher C.
- Subjects
- *
COVID-19 , *COVID-19 pandemic , *GENERATIVE adversarial networks , *INFECTIOUS disease transmission , *VIRAL transmission , *EPIDEMIOLOGICAL models , *REDUCED-order models - Abstract
• The application of reduced-order models to an epidemiological SEIRS model. • We introduce two digital twins of a SEIRS model applied to an idealised town. • The application of highly novel BDLSTM- and GAN-based reduced-order model approaches. • These two frameworks are accurate when compared to the original SEIRS model data. • These frameworks are data-agnostic and could be generalised to other domains. The outbreak of the coronavirus disease 2019 (COVID-19) has now spread throughout the globe infecting over 150 million people and causing the death of over 3.2 million people. Thus, there is an urgent need to study the dynamics of epidemiological models to gain a better understanding of how such diseases spread. While epidemiological models can be computationally expensive, recent advances in machine learning techniques have given rise to neural networks with the ability to learn and predict complex dynamics at reduced computational costs. Here we introduce two digital twins of a SEIRS model applied to an idealised town. The SEIRS model has been modified to take account of spatial variation and, where possible, the model parameters are based on official virus spreading data from the UK. We compare predictions from one digital twin based on a data-corrected Bidirectional Long Short-Term Memory network with predictions from another digital twin based on a predictive Generative Adversarial Network. The predictions given by these two frameworks are accurate when compared to the original SEIRS model data. Additionally, these frameworks are data-agnostic and could be applied to towns, idealised or real, in the UK or in other countries. Also, more compartments could be included in the SEIRS model, in order to study more realistic epidemiological behaviour. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
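The digital twins above emulate a SEIRS compartmental model. A minimal discrete-time SEIRS simulation of the kind being surrogated, where S→E→I→R with immunity waning back to S (parameter values are illustrative, not the paper's UK-informed values):

```python
# Minimal discrete-time SEIRS model: Susceptible -> Exposed ->
# Infectious -> Recovered, with waning immunity R -> S. Parameters
# (beta, sigma, gamma, xi) are illustrative placeholders.

def seirs_step(s, e, i, r, beta, sigma, gamma, xi, dt=1.0):
    n = s + e + i + r
    new_exposed = beta * s * i / n * dt      # infection
    new_infectious = sigma * e * dt          # incubation ends
    new_recovered = gamma * i * dt           # recovery
    new_susceptible = xi * r * dt            # waning immunity
    return (
        s - new_exposed + new_susceptible,
        e + new_exposed - new_infectious,
        i + new_infectious - new_recovered,
        r + new_recovered - new_susceptible,
    )

def simulate(days, s, e, i, r, beta=0.4, sigma=0.2, gamma=0.1, xi=0.01):
    history = [(s, e, i, r)]
    for _ in range(days):
        s, e, i, r = seirs_step(s, e, i, r, beta, sigma, gamma, xi)
        history.append((s, e, i, r))
    return history
```

The paper's twins learn to reproduce trajectories like these (extended with spatial variation) at a fraction of the simulation cost; note that each step conserves the total population.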
45. ECG-Based Deep Learning and Clinical Risk Factors to Predict Atrial Fibrillation.
- Author
-
Khurshid, Shaan, Friedman, Samuel, Reeder, Christopher, Di Achille, Paolo, Diamant, Nathaniel, Singh, Pulkit, Harrington, Lia X., Wang, Xin, Al-Alusi, Mostafa A., Sarma, Gopal, Foulkes, Andrea S., Ellinor, Patrick T., Anderson, Christopher D., Ho, Jennifer E., Philippakis, Anthony A., Batra, Puneet, and Lubitz, Steven A.
- Subjects
- *
ATRIAL fibrillation , *DEEP learning , *RECEIVER operating characteristic curves , *ATRIAL flutter , *HEART failure , *PROPORTIONAL hazards models , *SIGNAL convolution , *WOMEN'S hospitals - Abstract
Background: Artificial intelligence (AI)-enabled analysis of 12-lead ECGs may facilitate efficient estimation of incident atrial fibrillation (AF) risk. However, it remains unclear whether AI provides meaningful and generalizable improvement in predictive accuracy beyond clinical risk factors for AF. Methods: We trained a convolutional neural network (ECG-AI) to infer 5-year incident AF risk using 12-lead ECGs in patients receiving longitudinal primary care at Massachusetts General Hospital (MGH). We then fit 3 Cox proportional hazards models, composed of ECG-AI 5-year AF probability, CHARGE-AF clinical risk score (Cohorts for Heart and Aging Research in Genomic Epidemiology–Atrial Fibrillation), and terms for both ECG-AI and CHARGE-AF (CH-AI), respectively. We assessed model performance by calculating discrimination (area under the receiver operating characteristic curve) and calibration in an internal test set and 2 external test sets (Brigham and Women's Hospital [BWH] and UK Biobank). Models were recalibrated to estimate 2-year AF risk in the UK Biobank given limited available follow-up. We used saliency mapping to identify ECG features most influential on ECG-AI risk predictions and assessed correlation between ECG-AI and CHARGE-AF linear predictors. Results: The training set comprised 45 770 individuals (age 55±17 years, 53% women, 2171 AF events) and the test sets comprised 83 162 individuals (age 59±13 years, 56% women, 2424 AF events). Area under the receiver operating characteristic curve was comparable using CHARGE-AF (MGH, 0.802 [95% CI, 0.767-0.836]; BWH, 0.752 [95% CI, 0.741-0.763]; UK Biobank, 0.732 [95% CI, 0.704-0.759]) and ECG-AI (MGH, 0.823 [95% CI, 0.790-0.856]; BWH, 0.747 [95% CI, 0.736-0.759]; UK Biobank, 0.705 [95% CI, 0.673-0.737]). Area under the receiver operating characteristic curve was highest using CH-AI (MGH, 0.838 [95% CI, 0.807-0.869]; BWH, 0.777 [95% CI, 0.766-0.788]; UK Biobank, 0.746 [95% CI, 0.716-0.776]).
Calibration error was low using ECG-AI (MGH, 0.0212; BWH, 0.0129; UK Biobank, 0.0035) and CH-AI (MGH, 0.012; BWH, 0.0108; UK Biobank, 0.0001). In saliency analyses, the ECG P-wave had the greatest influence on AI model predictions. ECG-AI and CHARGE-AF linear predictors were correlated (Pearson r: MGH, 0.61; BWH, 0.66; UK Biobank, 0.41). Conclusions: AI-based analysis of 12-lead ECGs has similar predictive usefulness to a clinical risk factor model for incident AF and the approaches are complementary. ECG-AI may enable efficient quantification of future AF risk. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
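The CH-AI idea summarized above, a model with terms for both the ECG-AI probability and the clinical risk score, can be sketched on synthetic data. Everything below is illustrative: the variable names, the simulated risk structure, and the logistic combination standing in for the study's Cox models are assumptions, not the paper's actual pipeline.

```python
# Sketch of combining an image-model risk score with a clinical risk score
# into one discriminator, as in the CH-AI model above. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
latent = rng.normal(size=n)                          # unobserved true risk
y = (latent + rng.normal(size=n) > 1.5).astype(int)  # incident AF events

# Two noisy, partially correlated views of the same latent risk.
ecg_ai = latent + rng.normal(scale=1.0, size=n)      # "ECG-AI" linear predictor
charge_af = latent + rng.normal(scale=1.0, size=n)   # "clinical score"

auc_ecg = roc_auc_score(y, ecg_ai)
auc_charge = roc_auc_score(y, charge_af)

# Combined model (the "CH-AI" analogue): terms for both predictors.
X = np.column_stack([ecg_ai, charge_af])
combined = LogisticRegression().fit(X, y).decision_function(X)
auc_combined = roc_auc_score(y, combined)

print(f"ECG-AI alone: {auc_ecg:.3f}")
print(f"clinical alone: {auc_charge:.3f}")
print(f"combined: {auc_combined:.3f}")
```

On this toy data the combined discriminator matches or exceeds either predictor alone, mirroring the pattern of the reported AUROCs for complementary risk signals.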
46. Deep Learning of the Retina Enables Phenome- and Genome-Wide Analyses of the Microvasculature.
- Author
-
Zekavat, Seyedeh Maryam, Raghu, Vineet K., Trinder, Mark, Ye, Yixuan, Koyama, Satoshi, Honigberg, Michael C. MPP, Yu, Zhi, Pampana, Akhil MS, Urbut, Sarah, Haidermota, Sara, O'Regan, Declan P., Zhao, Hongyu, Ellinor, Patrick T., Segre, Ayellet V., Elze, Tobias, Wiggs, Janey L., Martone, James, Adelman, Ron A., Zebardast, Nazlee, and Del Priore, Lucian
- Subjects
- *
DIABETIC retinopathy , *PLATELET-derived growth factor receptors , *DEEP learning , *VASCULAR endothelial growth factors , *SIGNAL convolution , *RETINA , *CONVOLUTIONAL neural networks - Abstract
Background: The microvasculature, the smallest blood vessels in the body, has key roles in maintenance of organ health and tumorigenesis. The retinal fundus is a window for human in vivo noninvasive assessment of the microvasculature. Large-scale complementary machine learning-based assessment of the retinal vasculature with phenome-wide and genome-wide analyses may yield new insights into human health and disease. Methods: We used 97 895 retinal fundus images from 54 813 UK Biobank participants. Using convolutional neural networks to segment the retinal microvasculature, we calculated vascular density and fractal dimension as a measure of vascular branching complexity. We associated these indices with 1866 incident International Classification of Diseases-based conditions (median 10-year follow-up) and 88 quantitative traits, adjusting for age, sex, smoking status, and ethnicity. Results: Low retinal vascular fractal dimension and density were significantly associated with higher risks for incident mortality, hypertension, congestive heart failure, renal failure, type 2 diabetes, sleep apnea, anemia, and multiple ocular conditions, as well as corresponding quantitative traits. Genome-wide association of vascular fractal dimension and density identified 7 and 13 novel loci, respectively, that were enriched for pathways linked to angiogenesis (eg, vascular endothelial growth factor, platelet-derived growth factor receptor, angiopoietin, and WNT signaling pathways) and inflammation (eg, interleukin, cytokine signaling). Conclusions: Our results indicate that the retinal vasculature may serve as a biomarker for future cardiometabolic and ocular disease and provide insights into genes and biological pathways influencing microvascular indices. Moreover, such a framework highlights how deep learning of images can quantify an interpretable phenotype for integration with electronic health record, biomarker, and genetic data to inform risk prediction and risk modification. 
[ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
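The two microvascular indices named in the abstract, vascular density and fractal dimension, are both computable from a binary segmentation mask. Below is a minimal box-counting sketch on a synthetic mask (a straight line, whose fractal dimension is 1); the function name, box sizes, and mask are illustrative assumptions, not the study's implementation.

```python
# Box-counting estimate of fractal dimension for a binary vessel mask,
# plus a density analogue (foreground fraction). Synthetic mask only.
import numpy as np

def box_count_dimension(mask: np.ndarray) -> float:
    """Estimate the fractal dimension of a binary mask by box counting."""
    sizes = [2, 4, 8, 16, 32]
    counts = []
    h, w = mask.shape
    for s in sizes:
        # Count boxes of side s containing at least one foreground pixel.
        boxes = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if mask[i:i + s, j:j + s].any():
                    boxes += 1
        counts.append(boxes)
    # Slope of log(count) vs log(1/size) is the dimension estimate.
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

mask = np.eye(128, dtype=bool)   # a straight line: dimension should be ~1.0
density = mask.mean()            # vascular density analogue
dim = box_count_dimension(mask)
print(f"density={density:.4f}, fractal dimension~{dim:.2f}")
```

A real vessel tree would yield a dimension between 1 and 2, with lower values indicating sparser branching, the direction the abstract links to higher incident disease risk.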
47. Development of parametric bridge BIM and PCD generation algorithms and PCD-based member segmentation.
- Author
-
Lee, Min-Jin, Yang, Da-Hyeon, and Lee, Jong-Han
- Subjects
- *
DEEP learning , *ALGORITHMS , *PARAMETRIC modeling , *BRIDGES , *AUTOMATIC train control - Abstract
• This paper proposed a comprehensive framework for parametric semi-automatic BIM and bridge member segmentation. • The parametric algorithm accounted for diverse types of bridge members and produced BIM bridge models using shape information automatically extracted per parameter. • The parametrically generated bridge BIM model was converted into PCD, serving as deep learning training data for automatic segmentation of bridge members. • The trained deep learning model showed high accuracy for segmenting superstructures, indicating promise for automated segmentation of bridge members. • The segmentation accuracy increased when accounting for the point density of members corresponding to the different sizes of bridge members. This paper proposes a comprehensive framework for parametric semi-automatic BIM and bridge member segmentation. For this, a parametric algorithm was developed to accommodate diverse superstructures, piers, abutments, and bearings. Each structural member in a bridge was classified to designate its shape information as parameters. The range of each parameter was defined to account for various dimensions of the shape of the bridge member. Therefore, libraries for each member were established to produce BIM bridge models using shape information automatically extracted per parameter. The framework then includes an algorithm developed to convert bridge BIM models generated by the parametric algorithm into PCD. The virtually generated PCD is used as deep learning training data for automatic member segmentation based on the PointNet algorithm. The trained algorithm using the virtual PCD was applied to a real bridge PCD. The segmentation of the bridge members showed high accuracy for the superstructure but low accuracy for the bearing. The segmentation accuracy of the algorithm including the bearing member could increase by modifying the density of PCD to account for the different sizes of bridge members. 
Furthermore, the proposed framework was applied to a UK bridge PCD and showed its applicability for use in bridges in different countries. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
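The closing observation above, that accuracy improved when point-cloud density was adjusted for member size, can be sketched as a per-class resampling step. The data, point budget, and function below are hypothetical; the paper's actual density-modification procedure may differ.

```python
# Resample a labelled point cloud so small members (e.g. bearings) are not
# swamped by large superstructure members. Synthetic data; illustrative names.
import numpy as np

rng = np.random.default_rng(1)

def balance_point_density(points, labels, per_class=1024):
    """Resample each member class to a fixed point budget."""
    out_pts, out_lbl = [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        # Sample with replacement when the member has too few points.
        chosen = rng.choice(idx, size=per_class, replace=len(idx) < per_class)
        out_pts.append(points[chosen])
        out_lbl.append(np.full(per_class, c))
    return np.vstack(out_pts), np.concatenate(out_lbl)

# A large "girder" (10 000 points) and a small "bearing" (200 points).
pts = np.vstack([rng.normal(size=(10_000, 3)), rng.normal(size=(200, 3)) + 5])
lbl = np.array([0] * 10_000 + [1] * 200)
bal_pts, bal_lbl = balance_point_density(pts, lbl)
print(bal_pts.shape, np.bincount(bal_lbl))
```

After resampling, both classes contribute equally many points, which is one simple way to keep a segmentation network from ignoring small members during training.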
48. Parental status and markers of brain and cellular age: A 3D convolutional network and classification study.
- Author
-
de Lange, Ann-Marie G., Leonardsen, Esten H., Barth, Claudia, Schindler, Louise S., Crestol, Arielle, Holm, Madelene C., Subramaniapillai, Sivaniya, Hill, Dónal, Alnæs, Dag, and Westlye, Lars T.
- Subjects
- *
CELLULAR aging , *CONVOLUTIONAL neural networks , *MAGNETIC resonance imaging , *CHILD death , *HIERARCHICAL clustering (Cluster analysis) , *BIOMARKERS , *MALE reproductive health - Abstract
Recent research shows prominent effects of pregnancy and the parenthood transition on structural brain characteristics in humans. Here, we present a comprehensive study of how parental status and number of children born/fathered link to markers of brain and cellular ageing in 36,323 UK Biobank participants (age range 44.57–82.06 years; 52% female). To assess global effects of parenting on the brain, we trained a 3D convolutional neural network on T1-weighted magnetic resonance images, and estimated brain age in a held-out test set. To investigate regional specificity, we extracted cortical and subcortical volumes using FreeSurfer, and ran hierarchical clustering to group regional volumes based on covariance. Leukocyte telomere length (LTL) derived from DNA was used as a marker of cellular ageing. We employed linear regression models to assess relationships between number of children, brain age, regional brain volumes, and LTL, and included interaction terms to probe sex differences in associations. Lastly, we used the brain measures and LTL as features in binary classification models, to determine if markers of brain and cellular ageing could predict parental status. The results showed associations between a greater number of children born/fathered and younger brain age in both females and males, with stronger effects observed in females. Volume-based analyses showed maternal effects in striatal and limbic regions, which were not evident in fathers. We found no evidence for associations between number of children and LTL. Classification of parental status showed an Area under the ROC Curve (AUC) of 0.57 for the brain age model, while the models using regional brain volumes and LTL as predictors showed AUCs of 0.52. Our findings align with previous population-based studies of middle- and older-aged parents, revealing subtle but significant associations between parental experience and neuroimaging-based surrogate markers of brain health. 
The findings further corroborate results from longitudinal cohort studies following parents across pregnancy and postpartum, potentially indicating that the parenthood transition is associated with long-term influences on brain health. • Reproductive history is linked to brain age in UK Biobank males and females. • Effects in striatal and limbic regions were evident in mothers only. • No significant link between number of children and leukocyte telomere length. • Findings might indicate long-term brain health impacts of the parenthood transition. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
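One concrete step in the methods above, grouping regional volumes by covariance via hierarchical clustering, can be sketched on synthetic data. The region names, the shared-factor simulation, and the correlation-distance choice are illustrative assumptions, not the study's FreeSurfer-based pipeline.

```python
# Hierarchical clustering of regional brain volumes by their correlation
# structure. Two simulated region groups share a latent factor each.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(3)
n_subjects = 400
base_a = rng.normal(size=n_subjects)   # shared factor, "striatal" regions
base_b = rng.normal(size=n_subjects)   # shared factor, "limbic" regions

volumes = np.column_stack([
    base_a + 0.3 * rng.normal(size=n_subjects),   # caudate
    base_a + 0.3 * rng.normal(size=n_subjects),   # putamen
    base_b + 0.3 * rng.normal(size=n_subjects),   # hippocampus
    base_b + 0.3 * rng.normal(size=n_subjects),   # amygdala
])
regions = ["caudate", "putamen", "hippocampus", "amygdala"]

# Distance = 1 - correlation, so strongly covarying regions cluster together.
corr = np.corrcoef(volumes, rowvar=False)
dist = 1.0 - corr[np.triu_indices(4, k=1)]        # condensed distance vector
labels = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
print(dict(zip(regions, labels)))
```

The condensed upper-triangle vector feeds directly into `scipy`'s `linkage`, and cutting the dendrogram at two clusters recovers the two simulated region groups.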
49. Modelling and forecasting high-frequency data with jumps based on a hybrid nonparametric regression and LSTM model.
- Author
-
Song, Yuping, Cai, Chunchun, Ma, Dexiang, and Li, Chen
- Subjects
- *
DEEP learning , *REGRESSION analysis , *FORECASTING , *SUPPORT vector machines , *BOX-Jenkins forecasting , *BEAR markets - Abstract
High-frequency financial data is more difficult to predict than low-frequency data because it possesses nonlinearity, nonstationarity, higher volatility, and long memory, and is frequently accompanied by jump phenomena. In this paper, the nonparametric regression (NR) model based on kernel function is used to fit the nonlinear relationship between the nonstationary series Y_t and its lagged series to model the trend of high-frequency financial time series. Furthermore, the deep learning LSTM (long short-term memory) model is applied to capture the high volatility and frequent jumps of high-frequency financial data and to improve the forecasting accuracy of the residual series. The results demonstrate that the hybrid NR and LSTM model has greatly improved the forecasting accuracy in several evaluation criteria. In comparison to NR, support vector machine (SVM), LSTM, ARIMA and NR-SVM models, the mean absolute error (MAE) of NR-LSTM is reduced by 89.78%, 97.85%, 86.48%, 32.47% and 89%, respectively. In addition, we have constructed the trading strategy for the Shanghai-Shenzhen 300 index by using the NR-LSTM model. The NR-LSTM model can continue to provide good returns even during a bear market, which can serve as a guide for investors. Furthermore, the NR-LSTM model also exhibits the best forecasting effect when we model the high-frequency data of Ping An bank in China, the FTSE 100 index in the UK, and the S&P 500 index in the US. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
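The hybrid described above can be sketched in two steps: a Nadaraya-Watson kernel regression fits the nonlinear trend of Y_t on its lag, and the residual series is what the paper hands to an LSTM (omitted here). The synthetic series, bandwidth, and Gaussian kernel below are assumptions standing in for the paper's specification.

```python
# Stage 1 of an NR-LSTM style hybrid: kernel regression for the trend,
# leaving a residual series for a sequence model. Synthetic data only.
import numpy as np

def nw_predict(x_train, y_train, x_query, bandwidth=0.3):
    """Nadaraya-Watson estimator with a Gaussian kernel."""
    d = (x_query[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * d ** 2)
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(2)
t = np.arange(500)
y = np.sin(t / 25.0) + 0.1 * rng.normal(size=500)   # toy price-like series

y_lag, y_now = y[:-1], y[1:]              # regress Y_t on its lag Y_{t-1}
trend = nw_predict(y_lag, y_now, y_lag)   # nonparametric trend fit
resid = y_now - trend                     # this series goes to the LSTM

print(f"residual std {resid.std():.3f} vs series std {y_now.std():.3f}")
```

The residuals retain much less variance than the raw series, which is the point of the decomposition: the sequence model only has to explain what the smooth trend fit cannot.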
50. Assessing the healthiness of menus of all out-of-home food outlets and its socioeconomic patterns in Great Britain.
- Author
-
Huang, Yuru, Burgoine, Thomas, Bishop, Tom R.P., and Adams, Jean
- Subjects
- *
FAST food restaurants , *DEEP learning , *MENUS - Abstract
Food environment research predominantly focuses on the spatial distribution of out-of-home food outlets. However, the healthiness of food choices available within these outlets has been understudied, largely due to resource constraints. In this study, we propose an innovative, low-resource approach to characterise the healthiness of out-of-home food outlets at scale. Menu healthiness scores were calculated for food outlets on JustEat, and a deep learning model was trained to predict these scores for all physical out-of-home outlets in Great Britain, based on outlet names. Our findings highlight the "double burden" of the unhealthy food environment in deprived areas, where there tend to be more out-of-home food outlets, and these outlets tend to be less healthy. This methodological advancement provides a nuanced understanding of out-of-home food environments, with potential for automation and broad geographic application. • Assessed the menu healthiness of 170,000+ out-of-home food outlets in Great Britain. • A deep learning model to predict healthiness of menus of out-of-home food outlets. • More, and less healthy, out-of-home food outlets in more deprived areas. • Socioeconomic inequalities in menu healthiness most evident for fast-food outlets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
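The name-based prediction step above can be sketched with a lightweight stand-in: a character n-gram ridge regression in place of the paper's deep learning model, trained on a handful of invented outlet names paired with hypothetical menu healthiness scores.

```python
# Predicting a menu healthiness score from an outlet's name alone.
# Tiny synthetic training set; names and scores are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

names = ["Fried Chicken Hut", "Crispy Fried Cod", "Green Salad Bar",
         "Fresh Salad Kitchen", "Burger Fried Shack", "Vegan Salad Deli"]
scores = [2.0, 2.5, 8.0, 8.5, 1.5, 9.0]   # hypothetical healthiness scores

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # char n-grams
    Ridge(alpha=1.0),
)
model.fit(names, scores)

# Score two unseen outlet names.
preds = model.predict(["Fried Fish Bar", "Salad Garden"])
print(preds)
```

Character n-grams let the model generalise to unseen names that share fragments with the training set, which is the property that makes name-only scoring of all physical outlets feasible.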