144,550 results
Search Results
152. Deep learning-based denoising in projection-domain and reconstruction-domain for low-dose myocardial perfusion SPECT.
- Author
- Sun J, Jiang H, Du Y, Li CY, Wu TH, Liu YH, Yang BH, and Mok GSP
- Subjects
- Humans, Retrospective Studies, Tomography, Emission-Computed, Single-Photon methods, Single Photon Emission Computed Tomography Computed Tomography, Technetium Tc 99m Sestamibi, Perfusion, Image Processing, Computer-Assisted methods, Phantoms, Imaging, Deep Learning
- Abstract
Background: Low-dose (LD) myocardial perfusion (MP) SPECT suffers from a high noise level, leading to compromised diagnostic accuracy. Here we investigated the denoising performance for MP-SPECT using a conditional generative adversarial network (cGAN) in the projection-domain (cGAN-prj) and reconstruction-domain (cGAN-recon)., Methods: Sixty-four noisy SPECT projections were simulated for a population of 100 XCAT phantoms with different anatomical variations and 99mTc-sestamibi distributions. Series of LD projections were obtained by scaling the full dose (FD) count rate to be 1/20 to 1/2 of the original. Twenty patients with 99mTc-sestamibi stress SPECT/CT scans were retrospectively analyzed. For each patient, LD SPECT images (7/10 to 1/10 of FD) were generated from the FD list mode data. All projections were reconstructed by the quantitative OS-EM method. A 3D cGAN was implemented to predict FD images from their corresponding LD images in the projection- and reconstruction-domain. The denoised projections were reconstructed for analysis with various quantitative indices along with cGAN-recon, Gaussian, and Butterworth-filtered images., Results: cGAN denoising improves image quality as compared to LD and conventional post-reconstruction filtering. cGAN-prj can further reduce the dose level as compared to cGAN-recon without compromising the image quality., Conclusions: Denoising based on cGAN-prj is superior to cGAN-recon for MP-SPECT., (© 2022. The Author(s) under exclusive licence to American Society of Nuclear Cardiology.)
- Published
- 2023
- Full Text
- View/download PDF
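The abstract above reconstructs all projections with the quantitative OS-EM method. As a toy illustration of the underlying update rule (ML-EM, which OS-EM accelerates by cycling over ordered subsets of projections), the following sketch uses a hypothetical 4-bin, 3-voxel system; it is not the study's implementation:

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Minimal ML-EM reconstruction: x <- x * [A^T (y / Ax)] / (A^T 1).
    OS-EM applies the same multiplicative update over ordered subsets
    of the projections to speed up convergence."""
    x = np.ones(A.shape[1])                # non-negative initial estimate
    sens = A.T @ np.ones(A.shape[0])       # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                       # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Hypothetical system: 4 detector bins viewing 3 voxels
A = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 1.]])
x_true = np.array([2.0, 0.5, 1.0])
y = A @ x_true                             # noiseless projections
x_hat = mlem(A, y)
```

For noiseless, consistent data the multiplicative update drives the forward projection of the estimate toward the measured projections while keeping the estimate non-negative, which is why it suits Poisson-distributed SPECT counts.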
153. Computer-aided detection and diagnosis/radiomics/machine learning/deep learning in medical imaging.
- Subjects
- Diagnostic Imaging, Machine Learning, Radiography, Computers, Deep Learning
- Published
- 2023
- Full Text
- View/download PDF
154. Deep learning-based prediction of treatment prognosis from nasal polyp histology slides.
- Author
- Wang K, Ren Y, Ma L, Fan Y, Yang Z, Yang Q, Shi J, and Sun Y
- Subjects
- Humans, Hyperplasia pathology, Eosinophils pathology, Prognosis, Nasal Polyps surgery, Nasal Polyps pathology, Deep Learning
- Abstract
Background: Histopathology of nasal polyps contains rich prognostic information, which is difficult to extract objectively. In the present study, we aimed to develop a prognostic indicator of patient outcomes by analyzing scanned conventional hematoxylin and eosin (H&E)-stained slides alone using deep learning., Methods: An interpretable supervised deep learning model was developed using 185 H&E-stained whole-slide images (WSIs) of nasal polyps, each from a patient randomly selected from the pool of 232 patients who underwent endoscopic sinus surgery at the First Affiliated Hospital of Sun Yat-Sen University (internal cohort). We internally validated the model on a holdout dataset from the internal cohort (47 H&E-stained WSIs) and externally validated the model on 122 H&E-stained WSIs from the Seventh Affiliated Hospital of Sun Yat-Sen University and the University of Hong Kong-Shenzhen Hospital (external cohort). A poor prognosis score (PPS) was established to evaluate patient outcomes, and then risk activation mapping was applied to visualize the histopathological features underlying PPS., Results: The model yielded a patient-level sensitivity of 79.5% and a specificity of 92.3%, with an area under the receiver operating characteristic curve of 0.943, on the multicenter external cohort. The predictive ability of PPS was superior to that of conventional tissue eosinophil number. Notably, eosinophil infiltration, goblet cell hyperplasia, glandular hyperplasia, squamous metaplasia, and fibrin deposition were identified as the main underlying features of PPS., Conclusions: Our deep learning model is an effective method for decoding pathological images of nasal polyps, providing a valuable solution for disease prognosis prediction and precise patient treatment., (© 2022 ARS-AAOA, LLC.)
- Published
- 2023
- Full Text
- View/download PDF
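The patient-level sensitivity, specificity, and ROC AUC reported above are standard classification metrics. As a small self-contained sketch (with made-up scores standing in for a PPS-like indicator, not the study's data), AUC can be computed via the Mann–Whitney statistic:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney statistic: the probability that a random
    positive case scores higher than a random negative case (ties count half)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def sens_spec(pred, labels):
    """Sensitivity and specificity from binary predictions."""
    tp = np.sum((pred == 1) & (labels == 1))
    fn = np.sum((pred == 0) & (labels == 1))
    tn = np.sum((pred == 0) & (labels == 0))
    fp = np.sum((pred == 1) & (labels == 0))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical per-patient scores and outcomes (1 = poor prognosis)
labels = np.array([1, 1, 1, 0, 0, 0, 0])
scores = np.array([0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1])
auc = roc_auc(scores, labels)
sens, spec = sens_spec((scores >= 0.5).astype(int), labels)
```

The AUC is threshold-free, while sensitivity and specificity depend on the chosen cutoff (0.5 here), which is why papers usually report all three.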
155. Identification of Active Pulmonary Tuberculosis Among Patients With Positive Interferon-Gamma Release Assay Results: Value of a Deep Learning-based Computer-aided Detection System in Different Scenarios of Implementation.
- Author
- Park J, Hwang EJ, Lee JH, Hong W, Nam JG, Lim WH, Kim JH, Goo JM, and Park CM
- Subjects
- Male, Humans, Middle Aged, Interferon-gamma Release Tests, Radiographic Image Interpretation, Computer-Assisted methods, Sensitivity and Specificity, Computers, Retrospective Studies, Deep Learning, Tuberculosis, Pulmonary diagnostic imaging, Tuberculosis
- Abstract
Purpose: To evaluate the accuracy of a deep learning-based computer-aided detection (CAD) system in identifying active pulmonary tuberculosis on chest radiographs (CRs) of patients with positive interferon-gamma release assay (IGRA) results in different scenarios of clinical implementation., Materials and Methods: We collected the CRs of consecutive patients with positive IGRA results. Findings of active pulmonary tuberculosis on CRs were independently evaluated by the CAD and a thoracic radiologist, followed by interpretation using the CAD. Sensitivity and specificity were evaluated in different scenarios: (a) radiologists' interpretation, (b) radiologists' CAD-assisted interpretation, and (c) CAD-based prescreening (radiologists' interpretation for positive CAD results only). We conducted a reader test to compare the accuracy of the CAD with those of 5 radiologists., Results: Among 1780 patients (men, 53.8%; median age, 56 y), 44 (2.5%) were diagnosed with active pulmonary tuberculosis. The CAD-assisted interpretation exhibited a higher sensitivity (81.8% vs. 72.7%; P =0.046) but lower specificity than the radiologists' interpretation (84.1% vs. 85.7%; P <0.001). The CAD-based prescreening exhibited a higher specificity than the radiologists' interpretation (88.8% vs. 85.7%; P <0.001) at the same sensitivity, with a workload reduction of 85.2% (1780 to 263). In the reader test, the CAD exhibited a higher sensitivity than radiologists (72.7% vs. 59.5%; P =0.005) at the same specificity (88.0%), and CAD-assisted interpretation significantly improved the sensitivity of radiologists' interpretation (72.3%; P <0.001)., Conclusions: For identifying active pulmonary tuberculosis among patients with positive IGRA results, deep learning-based CAD can enhance the sensitivity of interpretation. CAD-based prescreening may reduce the radiologists' workload at an improved specificity., Competing Interests: E.J.H. received a research grant from Lunit Inc. 
outside the present study. J.G.N. received a research grant from Vuno, outside the present study. C.M.P. received a research grant from Lunit Inc. outside the present study, and holds stock of Promedius and stock options of Lunit Inc. and Coreline Soft. The remaining authors declare no conflicts of interest., (Copyright © 2023 Wolters Kluwer Health, Inc. All rights reserved.)
- Published
- 2023
- Full Text
- View/download PDF
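The CAD-based prescreening scenario above (radiologists read only CAD-positive studies) can be illustrated with a toy simulation. The cohort size and prevalence are taken from the abstract; the reader operating points are hypothetical, not the study's models:

```python
import numpy as np

rng = np.random.default_rng(0)
n, prevalence = 1780, 0.025              # cohort size and TB prevalence from the abstract
labels = (rng.random(n) < prevalence).astype(int)

def simulate_reader(labels, sens, spec, rng):
    """Draw one binary call per case from a reader with fixed sensitivity/specificity."""
    p_positive = np.where(labels == 1, sens, 1 - spec)
    return (rng.random(labels.size) < p_positive).astype(int)

cad = simulate_reader(labels, 0.90, 0.85, rng)          # hypothetical CAD operating point
radiologist = simulate_reader(labels, 0.75, 0.86, rng)  # hypothetical human operating point

# Prescreening: only CAD-positive studies reach the radiologist
reviewed = cad == 1
workload_reduction = 1 - reviewed.mean()
final_positive = reviewed & (radiologist == 1)          # positive only if both flag the case
overall_sensitivity = final_positive[labels == 1].mean()
```

Prescreening trades a multiplicative hit to sensitivity (both readers must flag a true case) for a large cut in the number of studies the radiologist must read, which is the trade-off the abstract quantifies.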
156. Deep learning model improves COPD risk prediction and gene discovery.
- Subjects
- Humans, Machine Learning, Genetic Association Studies, Deep Learning, Pulmonary Disease, Chronic Obstructive genetics
- Published
- 2023
- Full Text
- View/download PDF
157. A Characterization of Deep Learning Reconstruction Applied to Dual-Energy Computed Tomography Monochromatic and Material Basis Images.
- Author
- Nikolau EP, Toia GV, Nett B, Tang J, and Szczykutowicz TP
- Subjects
- Humans, Radiographic Image Interpretation, Computer-Assisted methods, Tomography, X-Ray Computed methods, Phantoms, Imaging, Quality Improvement, Algorithms, Radiation Dosage, Image Processing, Computer-Assisted, Deep Learning
- Abstract
Objective: Advancements in computed tomography (CT) reconstruction have enabled image quality improvements and dose reductions. Previous advancements have included iterative and model-based reconstruction. The latest image reconstruction advancement uses deep learning, which has been evaluated for polychromatic imaging only. This article characterizes a commercially available deep learning image reconstruction algorithm applied to dual-energy CT., Methods: Monochromatic, iodine basis, and water basis images were reconstructed with filtered back projection (FBP), iterative (ASiR-V), and deep learning (DLIR) methods in a phantom experiment. Slice thickness, contrast-to-noise ratio, modulation transfer function, and noise power spectrum metrics were used to characterize ASiR-V and DLIR relative to FBP over a range of dose levels, phantom sizes, and iodine concentrations., Results: Slice thicknesses for ASiR-V and DLIR demonstrated no statistically significant difference relative to FBP for all measurement conditions. Contrast-to-noise ratio performance for DLIR-high and ASiR-V 40% at 2 mg I/mL on 40-keV images was 162% and 30% higher than FBP, respectively. Task-based modulation transfer function measurements demonstrated no clinically significant change between FBP and ASiR-V and DLIR on monochromatic or iodine basis images., Conclusions: Deep learning image reconstruction enabled better image quality at lower monochromatic energies and on iodine basis images, where image contrast is maximized relative to polychromatic or high-energy monochromatic images. Deep learning image reconstruction did not demonstrate thicker slices, decreased spatial resolution, or poor noise texture (ie, "plastic") relative to FBP., (Copyright © 2023 Wolters Kluwer Health, Inc. All rights reserved.)
- Published
- 2023
- Full Text
- View/download PDF
158. Comparative research on structure function recognition based on deep learning
- Author
- Liu, Zhongbao and Zhao, Wenjuan
- Published
- 2024
- Full Text
- View/download PDF
159. A shallow 2D-CNN network for crack detection in concrete structures
- Author
- Honarjoo, Ahmad and Darvishan, Ehsan
- Published
- 2024
- Full Text
- View/download PDF
160. SiLK-SLAM: accurate, robust and versatile visual SLAM with simple learned keypoints
- Author
- Yao, Jianjun and Li, Yingzhao
- Published
- 2024
- Full Text
- View/download PDF
161. Content‐based and knowledge graph‐based paper recommendation: Exploring user preferences with the knowledge graphs for scientific paper recommendation.
- Author
- Tang, Hao, Liu, Baisong, and Qian, Jiangbo
- Subjects
- KNOWLEDGE graphs, SCIENTIFIC knowledge, DEEP learning, CONVOLUTIONAL neural networks, MACHINE learning, RECOMMENDER systems, USER-generated content
- Abstract
Researchers usually face difficulties in finding scientific papers relevant to their research interests due to the rapidly growing volume of publications. Recommender systems emerge as a leading solution to filter valuable items intelligently. Recently, deep learning algorithms, such as convolutional neural networks, have improved traditional recommendation technologies, for example, graph‐based or content‐based methods. However, existing graph‐based methods ignore high‐order associations between users and items on graphs, and content‐based methods ignore global features of texts for explicit user preferences. Therefore, this paper proposes a Content‐based and knowledge Graph‐based Paper Recommendation method (CGPRec), which uses a two‐layer self‐attention block to obtain global features of texts for more complete explicit user preferences, and an improved graph convolutional network for modeling high‐order associations on the knowledge graph to mine implicit user preferences. The knowledge graph in this paper is constructed with concept nodes, user nodes, paper nodes, and other meta‐data nodes. Experimental results on a public dataset, CiteULike‐a, and a real application log dataset, AHData, show that our model outperforms baseline methods. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
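The two-layer self-attention block described above builds on scaled dot-product self-attention, which lets every token attend to all others and thus captures global text features. A minimal single-head sketch with random weights (purely illustrative, not the CGPRec architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over token embeddings X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])   # (tokens, tokens) affinities
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V                       # context-mixed token representations

rng = np.random.default_rng(42)
X = rng.normal(size=(6, 8))                  # 6 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
```

Stacking two such blocks (with the residual connections and feed-forward layers omitted here) yields the kind of two-layer attention encoder the abstract mentions for extracting global text features.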
162. fNIRS Signal Classification Based on Deep Learning in Rock-Paper-Scissors Imagery Task
- Author
- Yuting Xia, Tengfei Ma, Xin Li, Chen Wentian, Sailing He, and Xinhua Zhu
- Subjects
- Time series classification, Speech recognition, fNIRS, Signal classification, Motor imagery, Brain–computer interface (BCI), Deep learning, rock–paper–scissors, TSC, CNN
- Abstract
To explore whether the brain exhibits pattern differences in the rock–paper–scissors (RPS) imagery task, this paper attempts to classify this task using fNIRS and deep learning. In this study, we designed an RPS task with a total duration of 25 min and 40 s, and recruited 22 volunteers for the experiment. We used the fNIRS acquisition device (FOIRE-3000) to record the cerebral neural activities of these participants during the RPS task. Time series classification (TSC) algorithms were introduced into time-domain fNIRS signal classification. Experiments show that CNN-based TSC methods can achieve 97% accuracy in RPS classification. The CNN-based TSC method is thus suitable for classifying fNIRS signals in RPS motor imagery tasks, and may open new application directions for the development of brain–computer interfaces (BCI).
- Published
- 2021
- Full Text
- View/download PDF
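A CNN-based time-series classifier of the kind described above can be sketched as a single forward pass: convolution over the time axis, ReLU, global average pooling, and a linear layer producing one score per class. Shapes and weights below are illustrative, not the paper's architecture:

```python
import numpy as np

def conv1d(x, kernels, bias):
    """Valid 1-D convolution of a (channels, time) signal with
    (n_filters, channels, width) kernels."""
    n_f, _, w = kernels.shape
    T = x.shape[1] - w + 1
    out = np.empty((n_f, T))
    for f in range(n_f):
        for t in range(T):
            out[f, t] = np.sum(kernels[f] * x[:, t:t + w]) + bias[f]
    return out

def tsc_forward(x, kernels, bias, W, b):
    """Tiny CNN classifier for time series:
    conv -> ReLU -> global average pooling -> linear -> class scores."""
    h = np.maximum(conv1d(x, kernels, bias), 0.0)  # ReLU nonlinearity
    pooled = h.mean(axis=1)                        # global average pool over time
    return pooled @ W + b                          # one logit per class

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 100))                # 4 fNIRS channels, 100 time samples
kernels = rng.normal(size=(8, 4, 7)) * 0.1   # 8 filters of width 7
bias = np.zeros(8)
W, b = rng.normal(size=(8, 3)) * 0.1, np.zeros(3)  # 3 classes: rock/paper/scissors
logits = tsc_forward(x, kernels, bias, W, b)
```

Global pooling makes the classifier tolerant of where in the trial the imagery-related hemodynamic pattern occurs, a common design choice in TSC networks.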
163. CiteOpinion: Evidence-based Evaluation Tool for Academic Contributions of Research Papers Based on Citing Sentences.
- Author
- Le, Xiaoqiu, Chu, Jingdan, Deng, Siyi, Jiao, Qihang, Pei, Jingjing, Zhu, Liya, and Yao, Junliang
- Subjects
- UNIVERSITY research, SENTIMENT analysis, COLLEGE majors, DEEP learning
- Abstract
To uncover the evaluation information on the academic contribution of research papers cited by peers based on the content cited by citing papers, and to provide an evidence-based tool for evaluating the academic value of cited papers. CiteOpinion uses a deep learning model to automatically extract citing sentences from representative citing papers; starting from an analysis of the citing sentences, it identifies the major academic contribution points of the cited paper, positive/negative evaluations from citing authors, and changes in the subjects of subsequent citing authors by means of Recognizing Categories of Moves (problems, methods, conclusions, etc.), sentiment analysis, and topic clustering. Citing sentences in a citing paper contain substantial evidence useful for academic evaluation. They can also be used to objectively and authentically reveal the nature and degree of the contribution of the cited paper as reflected by citations, beyond simple citation statistics. The evidence-based evaluation tool CiteOpinion can provide an objective and in-depth basis for evaluating the academic value of the representative papers of researchers, research teams, and institutions. No similar practical tool was found in the papers retrieved. There are difficulties in acquiring the full text of citing papers. There is a need to refine the calculation based on the sentiment scores of citing sentences. Currently, the tool is used only for academic contribution evaluation, while its value in policy studies, technical application, and promotion of science is not yet tested. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
164. Automatic offline-capable smartphone paper-based microfluidic device for efficient biomarker detection of Alzheimer's disease.
- Author
- Duan, Sixuan, Cai, Tianyu, Liu, Fuyuan, Li, Yifan, Yuan, Hang, Yuan, Wenwen, Huang, Kaizhu, Hoettges, Kai, Chen, Min, Lim, Eng Gee, Zhao, Chun, and Song, Pengfei
- Subjects
- MICROFLUIDIC devices, ALZHEIMER'S disease, RESOURCE-limited settings, COMMUNICATION infrastructure, SMARTPHONES, MICROFLUIDIC analytical techniques
- Abstract
Alzheimer's disease (AD) is a prevalent neurodegenerative disease with no effective treatment. Efficient and rapid detection plays a crucial role in mitigating and managing AD progression. Deep learning-assisted smartphone-based microfluidic paper analysis devices (μPADs) offer the advantages of low cost, good sensitivity, and rapid detection, providing a strategic pathway to address large-scale disease screening in resource-limited areas. However, existing smartphone-based detection platforms usually rely on large devices or cloud servers for data transfer and processing. Additionally, the implementation of automated colorimetric enzyme-linked immunoassay (c-ELISA) on μPADs can further facilitate the realization of smartphone μPADs platforms for efficient disease detection. This paper introduces a new deep learning-assisted offline smartphone platform for early AD screening, offering rapid disease detection in low-resource areas. The proposed platform features a simple mechanical rotating structure controlled by a smartphone, enabling fully automated c-ELISA on μPADs. Our platform successfully applied sandwich c-ELISA for detecting the β-amyloid peptide 1–42 (Aβ 1–42, a crucial AD biomarker) and demonstrated its efficacy in 38 artificial plasma samples (healthy: 19, unhealthy: 19, N = 6). Moreover, we employed the YOLOv5 deep learning model and achieved an impressive 97 % accuracy on a dataset of 1824 images, which is 10.16 % higher than the traditional method of curve-fitting results. The trained YOLOv5 model was seamlessly integrated into the smartphone using the NCNN (Tencent's Neural Network Inference Framework), enabling deep learning-assisted offline detection. A user-friendly smartphone application was developed to control the entire process, realizing a streamlined "samples in, answers out" approach. 
This deep learning-assisted, low-cost, user-friendly, highly stable, and rapid-response automated offline smartphone-based detection platform represents a notable advance in point-of-care testing (POCT). Moreover, our platform provides a feasible approach for efficient AD detection by examining the level of Aβ 1–42, particularly in areas with low resources and limited communication infrastructure. Schematic overview: an offline deep learning-assisted smartphone-based paper-based microfluidic platform for screening of Alzheimer's disease. • Deep learning-assisted smartphone-based platform enables offline detection. • Fully automated microfluidic paper analysis devices (μPADs). • Smartphone-based platform for detection of a biomarker of Alzheimer's disease. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
165. A compact review of progress and prospects of deep learning in drug discovery.
- Author
- Li H, Zou L, Kowah JAH, He D, Liu Z, Ding X, Wen H, Wang L, Yuan M, and Liu X
- Subjects
- Molecular Docking Simulation, Neural Networks, Computer, Drug Discovery methods, Machine Learning, Drug Design, Deep Learning
- Abstract
Background: Drug discovery processes, such as new drug development, drug synergy, and drug repurposing, consume substantial resources every year. Computer-aided drug discovery can effectively improve the efficiency of drug discovery. Traditional computational methods such as virtual screening and molecular docking have achieved many gratifying results in drug development. However, with the rapid growth of computer science, data have changed considerably: datasets have become larger and higher-dimensional, and traditional computational methods can no longer handle them well. Deep learning methods are based on deep neural network architectures that handle high-dimensional data very well, so they are now used in drug development., Results: This review summarized the applications of deep learning methods in drug discovery, such as drug target discovery, de novo drug design, drug recommendation, drug synergy, and drug response prediction. While applying deep learning methods to drug discovery suffers from a lack of data, transfer learning is an excellent solution to this problem. Furthermore, deep learning methods can extract deeper features and have higher predictive power than other machine learning methods. Deep learning methods have great potential in drug discovery and are expected to facilitate its development., (© 2023. The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.)
- Published
- 2023
- Full Text
- View/download PDF
166. Assessment of the effects of the biotic and abiotic harmful factors on the amount of industrial wood production with deep learning.
- Author
- Sevinç V
- Subjects
- Forestry, Forests, Industry, Conservation of Natural Resources, Trees, Wood, Deep Learning
- Abstract
Protecting and sustaining forest assets requires planned production of forest products that minimizes losses. One of the products obtained from forests is industrial wood, the most important raw material for many sectors. Thus, changes in industrial wood production amounts directly affect these sectors. For this reason, it is important to identify and examine the factors affecting industrial wood production amounts for optimum production and use of this raw material. This study aims to investigate and assess the effects of two biotic and two abiotic harmful factors on the amount of industrial wood production by building a deep learning estimation model. These factors are forest fires, insect outbreaks, diseases, and severe weather events. The study shows that the most harmful factor decreasing industrial wood production is diseases. The second most influential factor appears to be severe weather events. The third and fourth factors were determined to be insect outbreaks and burned forest areas, respectively., (© 2023. The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.)
- Published
- 2023
- Full Text
- View/download PDF
167. Artificial intelligence using deep learning to predict the anatomical outcome of rhegmatogenous retinal detachment surgery: a pilot study.
- Author
- Fung THM, John NCRA, Guillemaut JY, Yorston D, Frohlich D, Steel DHW, and Williamson TH
- Subjects
- Humans, Pilot Projects, Artificial Intelligence, Visual Acuity, Retrospective Studies, Vitrectomy methods, Treatment Outcome, Retinal Detachment diagnosis, Retinal Detachment surgery, Deep Learning
- Abstract
Purpose: To develop and evaluate an automated deep learning model to predict the anatomical outcome of rhegmatogenous retinal detachment (RRD) surgery., Methods: Six thousand six hundred and sixty-one digital images of RRD treated by vitrectomy and internal tamponade were collected from the British and Eire Association of Vitreoretinal Surgeons database. Each image was classified as a primary surgical success or a primary surgical failure. The synthetic minority over-sampling technique was used to address class imbalance. We adopted the state-of-the-art deep convolutional neural network architecture Inception v3 to train, validate, and test deep learning models to predict the anatomical outcome of RRD surgery. The area under the curve (AUC), sensitivity, and specificity for predicting the outcome of RRD surgery was calculated for the best predictive deep learning model., Results: The deep learning model was able to predict the anatomical outcome of RRD surgery with an AUC of 0.94, with a corresponding sensitivity of 73.3% and a specificity of 96%., Conclusion: A deep learning model is capable of accurately predicting the anatomical outcome of RRD surgery. This fully automated model has potential application in surgical care of patients with RRD., (© 2022. Crown.)
- Published
- 2023
- Full Text
- View/download PDF
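The synthetic minority over-sampling technique (SMOTE) used above to address class imbalance generates new minority-class samples by interpolating between a minority point and one of its nearest minority neighbours. A minimal sketch on toy 2-D data (hypothetical parameters, not the study's pipeline):

```python
import numpy as np

def smote(X_min, n_new, k=2, seed=0):
    """Minimal SMOTE: each synthetic sample lies on the segment between a
    random minority point and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-distance
    nn = np.argsort(d, axis=1)[:, :k]           # k nearest neighbours per point
    out = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        a = rng.integers(len(X_min))            # pick a random minority sample
        b = nn[a, rng.integers(k)]              # and one of its neighbours
        lam = rng.random()                      # interpolation factor in [0, 1)
        out[i] = X_min[a] + lam * (X_min[b] - X_min[a])
    return out

X_min = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
synthetic = smote(X_min, n_new=10)
```

Because each synthetic point is a convex combination of two existing minority samples, the new data stay inside the minority class's local neighbourhood rather than being drawn at random.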
168. Deep Learning-Based Automatic Detection and Grading of Motion-Related Artifacts on Gadoxetic Acid-Enhanced Liver MRI.
- Author
- Park T, Kim DW, Choi SH, Khang S, Huh J, Hong SB, Lee TY, Ko Y, Kim KW, and Lee SS
- Subjects
- Male, Humans, Aged, Middle Aged, Artifacts, Gadolinium DTPA, Liver diagnostic imaging, Liver pathology, Magnetic Resonance Imaging methods, Retrospective Studies, Contrast Media, Deep Learning
- Abstract
Objectives: The aim of this study was to develop and validate a deep learning-based algorithm (DLA) for automatic detection and grading of motion-related artifacts on arterial phase liver magnetic resonance imaging (MRI)., Materials and Methods: Multistep DLA for detection and grading of motion-related artifacts, based on the modified ResNet-101 and U-net, were trained using 336 arterial phase images of gadoxetic acid-enhanced liver MRI examinations obtained in 2017 (training dataset; mean age, 68.6 years [range, 18-95]; 254 men). Motion-related artifacts were evaluated in 4 different MRI slices using a 3-tier grading system. In the validation dataset, 313 images from the same institution obtained in 2018 (internal validation dataset; mean age, 67.2 years [range, 21-87]; 228 men) and 329 from 3 different institutions (external validation dataset; mean age, 64.0 years [range, 23-90]; 214 men) were included, and the per-slice and per-examination performances for the detection of motion-related artifacts were evaluated., Results: The per-slice sensitivity and specificity of the DLA for detecting grade 3 motion-related artifacts were 91.5% (97/106) and 96.8% (1134/1172) in the internal validation dataset and 93.3% (265/284) and 91.6% (948/1035) in the external validation dataset. The per-examination sensitivity and specificity were 92.0% (23/25) and 99.7% (287/288) in the internal validation dataset and 90.0% (72/80) and 96.0% (239/249) in the external validation dataset, respectively. The processing time of the DLA for automatic grading of motion-related artifacts was from 4.11 to 4.22 seconds per MRI examination., Conclusions: The DLA enabled automatic and instant detection and grading of motion-related artifacts on arterial phase gadoxetic acid-enhanced liver MRI., Competing Interests: Conflicts of interest and sources of funding: All authors have no conflicts of interest to declare. 
This work was supported by a National Research Foundation of Korea grant funded by the Korean government (MSIT) (grant number NRF-2019R1G1A1099743) and a grant from the Korea Health Technology R&D Project through the Korea Health Industry Development Institute, funded by the Ministry of Health and Welfare, Republic of Korea (grant number HI18C2383)., (Copyright © 2022 Wolters Kluwer Health, Inc. All rights reserved.)
- Published
- 2023
- Full Text
- View/download PDF
169. Obtaining genetics insights from deep learning via explainable artificial intelligence.
- Author
- Novakovsky G, Dexter N, Libbrecht MW, Wasserman WW, and Mostafavi S
- Subjects
- Genomics, Artificial Intelligence, Deep Learning
- Abstract
Artificial intelligence (AI) models based on deep learning now represent the state of the art for making functional predictions in genomics research. However, the underlying basis on which predictive models make such predictions is often unknown. For genomics researchers, this missing explanatory information would frequently be of greater value than the predictions themselves, as it can enable new insights into genetic processes. We review progress in the emerging area of explainable AI (xAI), a field with the potential to empower life science researchers to gain mechanistic insights into complex deep learning models. We discuss and categorize approaches for model interpretation, including an intuitive understanding of how each approach works and their underlying assumptions and limitations in the context of typical high-throughput biological datasets., (© 2022. Springer Nature Limited.)
- Published
- 2023
- Full Text
- View/download PDF
170. Deep learning image reconstruction to improve accuracy of iodine quantification and image quality in dual-energy CT of the abdomen: a phantom and clinical study.
- Author
- Fukutomi A, Sofue K, Ueshima E, Negi N, Ueno Y, Tsujita Y, Yabe S, Yamaguchi T, Shimada R, Kusaka A, Hori M, and Murakami T
- Subjects
- Male, Humans, Middle Aged, Aged, Retrospective Studies, Tomography, X-Ray Computed methods, Abdomen diagnostic imaging, Algorithms, Image Processing, Computer-Assisted methods, Radiographic Image Interpretation, Computer-Assisted methods, Radiation Dosage, Iodine, Deep Learning
- Abstract
Objectives: To investigate the effect of deep learning image reconstruction (DLIR) on the accuracy of iodine quantification and image quality of dual-energy CT (DECT) compared to that of other reconstruction algorithms in a phantom experiment and an abdominal clinical study., Methods: An elliptical phantom with five different iodine concentrations (1-12 mgI/mL) was imaged five times with fast-kilovoltage-switching DECT for three target volume CT dose indexes. All images were reconstructed using filtered back-projection, iterative reconstruction (two levels), and DLIR algorithms. Measured and nominal iodine concentrations were compared among the algorithms. Contrast-enhanced CT of the abdomen with the same scanner was acquired in clinical patients. In arterial and portal venous phase images, iodine concentration, image noise, and coefficients of variation for four locations were retrospectively compared among the algorithms. One-way repeated-measures analyses of variance were used to evaluate differences in the iodine concentrations, standard deviations, coefficients of variation, and percentages of error among the algorithms., Results: In the phantom study, the measured iodine concentrations were equivalent among the algorithms: within ± 8% of the nominal values, with root-mean-square deviations of 0.08-0.36 mgI/mL, regardless of radiation dose. In the clinical study (50 patients; 35 men; mean age, 68 ± 11 years), iodine concentrations were equivalent among the algorithms for each location (all p > .99). 
Image noise and coefficients of variation were lower with DLIR than with the other algorithms (all p < .01)., Conclusions: The DLIR algorithm reduced image noise and variability of iodine concentration values compared with other reconstruction algorithms in the fast-kilovoltage-switching dual-energy CT., Key Points: • In the phantom study, standard deviations and coefficients of variation in iodine quantification were lower on images with the deep learning image reconstruction algorithm than on those with other algorithms. • In the clinical study, iodine concentrations of measurement location in the upper abdomen were consistent across four reconstruction algorithms, while image noise and variability of iodine concentrations were lower on images with the deep learning image reconstruction algorithm., (© 2022. The Author(s), under exclusive licence to European Society of Radiology.)
- Published
- 2023
- Full Text
- View/download PDF
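The phantom analysis above compares measured against nominal iodine concentrations via percentage error and root-mean-square deviation. A short sketch with made-up measurements (illustrative values only, not the study's data):

```python
import numpy as np

def rmsd(measured, nominal):
    """Root-mean-square deviation between measured and nominal values."""
    return float(np.sqrt(np.mean((measured - nominal) ** 2)))

def percent_error(measured, nominal):
    """Signed percentage error of each measurement relative to its nominal value."""
    return (measured - nominal) / nominal * 100

# Hypothetical phantom measurements (mgI/mL)
nominal  = np.array([1.0, 2.0, 5.0, 8.0, 12.0])
measured = np.array([1.05, 1.9, 5.2, 7.8, 12.3])
err = percent_error(measured, nominal)
dev = rmsd(measured, nominal)
```

A claim like "within ± 8% of the nominal values, with root-mean-square deviations of 0.08–0.36 mgI/mL" is exactly a bound on `err` plus a report of `dev` per condition.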
171. Estimation and uncertainty analysis of groundwater quality parameters in a coastal aquifer under seawater intrusion: a comparative study of deep learning and classic machine learning methods.
- Author
- Taşan M, Taşan S, and Demir Y
- Subjects
- Environmental Monitoring methods, Uncertainty, Salinity, Seawater analysis, Deep Learning, Groundwater analysis, Water Pollutants, Chemical analysis
- Abstract
Excessive withdrawal of groundwater for agricultural irrigation can cause seawater intrusion into coastal aquifers, which in turn results in deterioration of irrigation water quality. Determination of irrigation water quality with traditional methods is a time-consuming and costly process. However, machine learning algorithms can be useful tools for modeling and estimating groundwater quality for irrigation purposes. In this study, TDS, PS, SAR, and Cl parameters of groundwater were estimated with models based on EC and pH variables. For this purpose, the prediction performances of two deep learning methods (convolutional neural network (CNN) and deep neural network (DNN)) and two classical machine learning methods (random forest (RF) and extreme gradient boosting (XGBoost)) were compared. In addition, the predictive uncertainty of the models was determined by quantile regression (QR) analysis. Performance criteria and the results of the uncertainty analysis revealed that the CNN (in the testing phase, NSE = 0.95 for TDS, NSE = 0.96 for PS, NSE = 0.67 for SAR, and NSE = 0.93 for Cl) and DNN (in the testing phase, NSE = 0.91 for TDS, NSE = 0.91 for PS, NSE = 0.57 for SAR, and NSE = 0.94 for Cl) models performed comparably in estimating the TDS, PS, SAR, and Cl parameters, and both outperformed the two classical machine learning methods. The CNN model can be considered the best-performing model for all quality parameters, with the highest NSE and lowest RMSE values. In addition, the Taylor diagram showed that the values estimated using the CNN model had the highest correlation with the measured data. Based on the PICP statistics, the model with the lowest uncertainty was the DNN, followed by the CNN; however, the CNN model predicted outliers more accurately. These findings demonstrate that deep learning models can offer efficient tools for predicting irrigation water quality parameters., (© 2022. 
The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.)
- Published
- 2023
- Full Text
- View/download PDF
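The model comparison in the record above relies on the Nash-Sutcliffe efficiency (NSE) for accuracy and the prediction-interval coverage probability (PICP) for uncertainty. A minimal sketch of both metrics (our own helper functions, not the authors' code):

```python
import numpy as np

def nse(observed, predicted):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    does no better than always predicting the observed mean."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 1.0 - np.sum((observed - predicted) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

def picp(observed, lower, upper):
    """Prediction-interval coverage probability: fraction of observations
    that fall inside the [lower, upper] predictive interval."""
    observed = np.asarray(observed, dtype=float)
    return float(np.mean((observed >= lower) & (observed <= upper)))
```

In the study's setup, the quantile-regression bounds would supply `lower` and `upper` per sample; a well-calibrated 90% interval should give a PICP near 0.9.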
172. Deep-learning language models help to improve protein sequence alignment.
- Subjects
- Sequence Alignment, Proteins genetics, Amino Acid Sequence, Deep Learning
- Published
- 2023
- Full Text
- View/download PDF
173. Improved image quality and dose reduction in abdominal CT with deep-learning reconstruction algorithm: a phantom study.
- Author
-
Greffier J, Durand Q, Frandon J, Si-Mohamed S, Loisy M, de Oliveira F, Beregi JP, and Dabli D
- Subjects
- Humans, Radiation Dosage, Drug Tapering, Artificial Intelligence, Phantoms, Imaging, Algorithms, Tomography, X-Ray Computed methods, Radiographic Image Interpretation, Computer-Assisted methods, Deep Learning
- Abstract
Objectives: To assess the impact of a new artificial intelligence deep-learning reconstruction (Precise Image; AI-DLR) algorithm on image quality against a hybrid iterative reconstruction (IR) algorithm in abdominal CT for different clinical indications., Methods: Acquisitions on phantoms were performed at 5 dose levels (CTDIvol: 13/11/9/6/1.8 mGy). Raw data were reconstructed using level 4 of iDose4 (i4) and 3 levels of AI-DLR (Smoother/Smooth/Standard). Noise power spectrum (NPS), task-based transfer function (TTF) and detectability index (d') were computed: d' modelled detection of a liver metastasis (LM) and hepatocellular carcinoma at portal (HCCp) and arterial (HCCa) phases. Image quality was subjectively assessed on an anthropomorphic phantom by 2 radiologists., Results: From Standard to Smoother levels, noise magnitude and average NPS spatial frequency decreased and the detectability (d') of all simulated lesions increased. For both inserts, TTF values were similar for all three AI-DLR levels from 13 to 6 mGy but decreased from Standard to Smoother levels at 1.8 mGy. Compared to the i4 used in clinical practice, d' values were higher using the Smoother and Smooth levels and close for the Standard level. For all dose levels, except at 1.8 mGy, radiologists considered images satisfactory for clinical use for the 3 levels of AI-DLR, but rated images too smooth using the Smoother level., Conclusion: Use of the Smooth and Smoother levels of AI-DLR reduces the image noise and improves the detectability of lesions and spatial resolution for standard and low-dose levels. Using the Smooth level is apparently the best compromise between the lowest dose level and adequate image quality., Key Points: • Evaluation of the impact of a new artificial intelligence deep-learning reconstruction (AI-DLR) on image quality and dose compared to a hybrid iterative reconstruction (IR) algorithm. • The Smooth and Smoother levels of AI-DLR reduced the image noise and improved the detectability of lesions and spatial resolution for standard and low-dose levels. • The Smooth level seems the best compromise between the lowest dose level and adequate image quality., (© 2022. The Author(s), under exclusive licence to European Society of Radiology.)
- Published
- 2023
- Full Text
- View/download PDF
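The record above tracks noise magnitude and the average NPS spatial frequency across reconstruction levels. A minimal sketch of a 2D noise power spectrum and its weighted mean frequency, on synthetic white noise standing in for subtracted phantom scans (function names and normalization are ours, one common convention among several):

```python
import numpy as np

def noise_power_spectrum(noise_rois, pixel_mm):
    """2D noise power spectrum averaged over noise-only ROIs:
    NPS(u, v) = <|DFT{ROI - mean}|^2> * (dx * dy) / (Nx * Ny)."""
    acc = None
    for roi in noise_rois:
        roi = roi - roi.mean()                       # detrend to zero mean
        dft = np.fft.fftshift(np.fft.fft2(roi))
        p = (np.abs(dft) ** 2) * (pixel_mm ** 2) / roi.size
        acc = p if acc is None else acc + p
    return acc / len(noise_rois)

def average_nps_frequency(nps, pixel_mm):
    """NPS-weighted mean spatial frequency; smoother noise shifts it lower."""
    f = np.fft.fftshift(np.fft.fftfreq(nps.shape[0], d=pixel_mm))
    fx, fy = np.meshgrid(f, f)
    radial = np.hypot(fx, fy)
    return float((radial * nps).sum() / nps.sum())

rng = np.random.default_rng(0)
rois = [rng.normal(0.0, 10.0, (64, 64)) for _ in range(16)]
nps = noise_power_spectrum(rois, pixel_mm=0.5)
f_avg = average_nps_frequency(nps, pixel_mm=0.5)
```

For white noise the NPS is flat; a smoothing reconstruction suppresses high frequencies, lowering both the noise magnitude and `f_avg`, which is exactly the Standard-to-Smoother trend the abstract reports.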
174. Deep learning-based diagnosis of stifle joint diseases in dogs.
- Author
-
Shim H, Lee J, Choi S, Kim J, Jeong J, Cho C, Kim H, Kim JI, Kim J, and Eom K
- Subjects
- Dogs, Animals, Stifle diagnostic imaging, Retrospective Studies, Neural Networks, Computer, Deep Learning, Joint Diseases diagnostic imaging, Joint Diseases veterinary, Dog Diseases diagnostic imaging
- Abstract
In this retrospective, analytical study, we developed a deep learning-based diagnostic model that can be applied to canine stifle joint diseases and compared its accuracy with that achieved by veterinarians to verify its potential as a reliable diagnostic method. A total of 2382 radiographs of the canine stifle joint from cooperating animal hospitals were included in the dataset. Stifle joint regions were extracted from the original images using the faster region-based convolutional neural network (R-CNN) model, and the object detection accuracy was evaluated. Four radiographic findings (patellar deviation, drawer sign, osteophyte formation, and joint effusion) were assessed in the stifle joint and used to train a residual network (ResNet) classification model. Implant and growth plate groups were analyzed to compare the classification accuracy against the total dataset. All deep learning-based classification models achieved target accuracies exceeding 80%, comparable to or slightly lower than the accuracies achieved by veterinarians. However, in the case of the drawer sign, further research is necessary to improve the model's low sensitivity. When the implant group was excluded, the classification accuracy improved significantly, indicating that the implant acted as a distraction. These results indicate that deep learning-based models can be expected to become useful diagnostic tools in veterinary medicine., (© 2022 American College of Veterinary Radiology.)
- Published
- 2023
- Full Text
- View/download PDF
175. Generalized predictive analysis of reactions in paper devices via graph neural networks.
- Author
-
Sun, Hao, Pan, Yihan, Dong, Hui, Liu, Canfeng, Yang, Jintian, Tao, Yihui, and Jia, Yuan
- Subjects
- *GRAPH neural networks, *MICROFLUIDICS, *NUCLEIC acid amplification techniques, *DATA structures, *PATTERN recognition systems, *DEEP learning
- Abstract
Microfluidic technology facilitates high-throughput generation of time series data for biological and medical studies. Deep learning enables accurate, predictive analysis and proactive decision-making based on autonomous recognition of the intricate patterns hidden in those series. In this work, we first devised a paper-based microfluidic system for portable nucleic acid amplification testing with economical energy consumption. Then, we employed a Graph Neural Network (GNN), distinguished by its non-Euclidean data structure tailored for deep learning, with a spatio-temporal attention mechanism to perform near-sensor predictive analysis of the on-chip reaction. Our findings demonstrated that the novel GNN model can provide accurate predictions of positive outcomes at the early stages of the reaction using less than one-third of the total reaction time. The deep learning model trained on on-chip data was subsequently applied to more than 900 clinical plots. Generalization of the GNN model was successfully validated across different detection methods, diverse types of datasets, and time series of variable length. The accuracy, sensitivity, and specificity of the predictive approach were 96.5 %, 94.3 % and 99.0 % when utilizing the early half of the reaction information. Finally, we compared the GNN model with various deep learning models. Although the differences among models in predicting negative samples were minute, the GNN offered clearly superior overall performance. This work ignites a cutting-edge application of deep learning in point-of-care and near-sensor tests. By harnessing the power of body area networks and edge/fog computing, our approach unlocks promising possibilities in diverse fields like healthcare and instrument science. [Display omitted] • Paper devices for economic on-chip amplification. • GNN analyzes data generated in paper devices and predicts reaction results quickly. • The deep learning model can work across platforms, methods & datasets. 
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
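The accuracy, sensitivity, and specificity figures in the record above are standard confusion-matrix statistics. A minimal sketch, with illustrative counts only (the abstract reports rates, not raw counts):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# Hypothetical counts chosen for illustration
acc, sens, spec = classification_metrics(tp=94, fp=1, tn=99, fn=6)
```

With these made-up counts the function returns 0.965, 0.94, and 0.99, illustrating how rates like those reported would arise from raw detection counts.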
176. Research Trends in Artificial Intelligence and Security—Bibliometric Analysis.
- Author
-
Ilić, Luka, Šijan, Aleksandar, Predić, Bratislav, Viduka, Dejan, and Karabašević, Darjan
- Subjects
DEEP learning ,BIBLIOMETRICS ,ARTIFICIAL intelligence ,WEB analytics ,MACHINE learning ,PUBLIC health infrastructure - Abstract
This paper provides a bibliometric analysis of current research trends in the field of artificial intelligence (AI), focusing on key topics such as deep learning, machine learning, and security in AI. Through the lens of bibliometric analysis, we explore publications published from 2020 to 2024, using primary data from the Clarivate Analytics Web of Science Core Collection. The analysis includes the distribution of studies by year, the number of studies and citation rankings in journals, and the identification of leading countries, institutions, and authors in the field of AI research. Additionally, we investigate the distribution of studies by Web of Science categories, authors, affiliations, publication years, countries/regions, publishers, research areas, and citations per year. Key findings indicate a continued growth of interest in topics such as deep learning, machine learning, and security in AI over the past few years. We also identify leading countries and institutions active in researching this area. Awareness of data security is essential for the responsible application of AI technologies. Robust security frameworks are important to mitigate risks associated with AI integration into critical infrastructure such as healthcare and finance. Ensuring the integrity and confidentiality of data managed by AI systems is not only a technical challenge but also a societal necessity, demanding interdisciplinary collaboration and policy development. This analysis provides a deeper understanding of the current state of research in the field of AI and identifies key areas for further research and innovation. Furthermore, these findings may be valuable to practitioners and decision-makers seeking to understand current trends and innovations in AI to enhance their business processes and practices. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
177. Position paper of the EACVI and EANM on artificial intelligence applications in multimodality cardiovascular imaging using SPECT/CT, PET/CT, and cardiac CT
- Author
-
Oliver Gaemperli, Paola Anna Erba, Antti Saraste, Michelle C. Williams, Alessia Gimelli, Piotr J. Slomka, Christoph Rischpler, Roland Hustinx, Marc R. Dweck, Hein J. Verberne, Andor W. J. M. Glaudemans, Bernard Cosyns, Márton Kolossváry, Panagiotis Georgoulias, Luis Eduardo Juarez-Orozco, Ivana Išgum, Gilbert Habib, Mark Lubberink, Riemer H. J. A. Slart, Olivier Gheysens, Dimitris Visvikis, Fabien Hyafil, Basic and Translational Research and Imaging Methodology Development in Groningen (BRIDGE), Translational Immunology Groningen (TRIGR), Cardiovascular Centre (CVC), IvI Research (FNWI), UCL - SSS/IREC/SLUC - Pôle St.-Luc, UCL - (SLuc) Centre du cancer, UCL - (SLuc) Service de médecine nucléaire, Clinical sciences, Cardio-vascular diseases, Cardiology, Slart, R, Williams, M, Juarez-Orozco, L, Rischpler, C, Dweck, M, Glaudemans, A, Gimelli, A, Georgoulias, P, Gheysens, O, Gaemperli, O, Habib, G, Hustinx, R, Cosyns, B, Verberne, H, Hyafil, F, Erba, P, Lubberink, M, Slomka, P, Isgum, I, Visvikis, D, Kolossvary, M, Saraste, A, University Medical Center Groningen [Groningen] (UMCG), University of Twente, University of Edinburgh, Utrecht University [Utrecht], University of Groningen [Groningen], Universität Duisburg-Essen = University of Duisburg-Essen [Essen], Fondazione Toscana Gabriele Monasterio, University Hospital of Larissa, Cliniques Universitaires Saint-Luc [Bruxelles], Université Catholique de Louvain = Catholic University of Louvain (UCL), Hirslanden Medical Center, Microbes évolution phylogénie et infections (MEPHI), Institut de Recherche pour le Développement (IRD)-Aix Marseille Université (AMU)-Centre National de la Recherche Scientifique (CNRS), Hôpital de la Timone [CHU - APHM] (TIMONE), Institut Hospitalier Universitaire Méditerranée Infection (IHU Marseille), GIGA [Université Liège], Université de Liège, Universitair Ziekenhuis [Brussels, Belgium], University of Amsterdam [Amsterdam] (UvA), Hôpital Européen Georges Pompidou [APHP] (HEGP), Assistance 
publique - Hôpitaux de Paris (AP-HP) (AP-HP)-Hôpitaux Universitaires Paris Ouest - Hôpitaux Universitaires Île de France Ouest (HUPO), Paris-Centre de Recherche Cardiovasculaire (PARCC (UMR_S 970/ U970)), Assistance publique - Hôpitaux de Paris (AP-HP) (AP-HP)-Hôpitaux Universitaires Paris Ouest - Hôpitaux Universitaires Île de France Ouest (HUPO)-Assistance publique - Hôpitaux de Paris (AP-HP) (AP-HP)-Hôpitaux Universitaires Paris Ouest - Hôpitaux Universitaires Île de France Ouest (HUPO)-Institut National de la Santé et de la Recherche Médicale (INSERM)-Université Paris Cité (UPCité), University of Pisa - Università di Pisa, Uppsala University, Uppsala University Hospital, Cedars-Sinai Medical Center, Laboratoire de Traitement de l'Information Medicale (LaTIM), Université de Brest (UBO)-Institut National de la Santé et de la Recherche Médicale (INSERM)-Centre Hospitalier Régional Universitaire de Brest (CHRU Brest)-IMT Atlantique (IMT Atlantique), Institut Mines-Télécom [Paris] (IMT)-Institut Mines-Télécom [Paris] (IMT)-Institut Brestois Santé Agro Matière (IBSAM), Université de Brest (UBO), Semmelweis University [Budapest], University of Turku, Turku University Hospital (TYKS), Radiology and Nuclear Medicine, ACS - Amsterdam Cardiovascular Sciences, Biomedical Engineering and Physics, ACS - Atherosclerosis & ischemic syndromes, ANS - Brain Imaging, and ACS - Heart failure & arrhythmias
- Subjects
medicine.medical_specialty ,Medizin ,030204 cardiovascular system & hematology ,Guidelines ,Cardiovascular ,Multimodality imaging ,030218 nuclear medicine & medical imaging ,Multimodality ,03 medical and health sciences ,0302 clinical medicine ,[SDV.MHEP.CSC]Life Sciences [q-bio]/Human health and pathology/Cardiology and cardiovascular system ,[SDV.MHEP.MI]Life Sciences [q-bio]/Human health and pathology/Infectious diseases ,Artificial Intelligence ,Positron Emission Tomography Computed Tomography ,Machine learning ,medicine ,Humans ,[SDV.MP.PAR]Life Sciences [q-bio]/Microbiology and Parasitology/Parasitology ,Radiology, Nuclear Medicine and imaging ,Medical physics ,Position paper ,Deep learning ,Positron-Emission Tomography ,Tomography, Emission-Computed, Single-Photon ,Tomography, X-Ray Computed ,Nuclear Medicine ,Tomography ,[SDV.MHEP.ME]Life Sciences [q-bio]/Human health and pathology/Emerging diseases ,PET-CT ,medicine.diagnostic_test ,business.industry ,Coronary computed tomography angiography ,General Medicine ,[SDV.MP.BAC]Life Sciences [q-bio]/Microbiology and Parasitology/Bacteriology ,X-Ray Computed ,Functional imaging ,Positron emission tomography ,[SDV.MP.VIR]Life Sciences [q-bio]/Microbiology and Parasitology/Virology ,Radiologi och bildbehandling ,Applications of artificial intelligence ,Emission-Computed ,Cardiology and Cardiovascular Medicine ,business ,Emission computed tomography ,Radiology, Nuclear Medicine and Medical Imaging ,Single-Photon - Abstract
In daily clinical practice, clinicians integrate available data to ascertain the diagnostic and prognostic probability of a disease or clinical outcome for their patients. For patients with suspected or known cardiovascular disease, several anatomical and functional imaging techniques are commonly performed to aid this endeavor, including coronary computed tomography angiography (CCTA) and nuclear cardiology imaging. Continuous improvement in positron emission tomography (PET), single-photon emission computed tomography (SPECT), and CT hardware and software has resulted in improved diagnostic performance and wide implementation of these imaging techniques in daily clinical practice. However, the human ability to interpret, quantify, and integrate these data sets is limited. The identification of novel markers and the application of machine learning (ML) algorithms, including deep learning (DL), to cardiovascular imaging techniques will further improve diagnosis and prognostication for patients with cardiovascular diseases. The goal of this position paper of the European Association of Nuclear Medicine (EANM) and the European Association of Cardiovascular Imaging (EACVI) is to provide an overview of the general concepts behind modern machine learning-based artificial intelligence, highlight currently preferred methods, practices, and computational models, and propose new strategies to support the clinical application of ML in the field of cardiovascular imaging using nuclear cardiology (hybrid) and CT techniques.
- Published
- 2021
178. Blockchain-based deep learning in IoT, healthcare and cryptocurrency price prediction: a comprehensive review
- Author
-
Arora, Shefali, Mittal, Ruchi, Shrivastava, Avinash K., and Bali, Shivani
- Published
- 2024
- Full Text
- View/download PDF
179. Addressing performance improvement of a neural network model for Reynolds-averaged Navier–Stokes solutions with high wake formation
- Author
-
Ajaya Kumar, Ananthajit and Assam, Ashwani
- Published
- 2024
- Full Text
- View/download PDF
180. Research and design of an expert diagnosis system for rail vehicle driven by data mechanism models
- Author
-
Li, Lin, Wang, Jiushan, and Xiao, Shilu
- Published
- 2024
- Full Text
- View/download PDF
181. Recognizing emotions in restaurant online reviews: a hybrid model integrating deep learning and a sentiment lexicon
- Author
-
Liu, Jun, Hu, Sike, Mehraliyev, Fuad, Zhou, Haiyue, Yu, Yunyun, and Yang, Luyu
- Published
- 2024
- Full Text
- View/download PDF
182. A novel deep learning method to use feature complementarity for review helpfulness prediction
- Author
-
Li, Xinzhe, Li, Qinglong, Jeong, Dasom, and Kim, Jaekyeong
- Published
- 2024
- Full Text
- View/download PDF
183. Advancing predictive maintenance: a deep learning approach to sensor and event-log data fusion
- Author
-
Liu, Zengkun and Hui, Justine
- Published
- 2024
- Full Text
- View/download PDF
184. A hybrid method for forecasting coal price based on ensemble learning and deep learning with data decomposition and data enhancement
- Author
-
Tang, Jing, Guo, Yida, and Han, Yilin
- Published
- 2024
- Full Text
- View/download PDF
185. A physics-driven and machine learning-based digital twinning approach to transient thermal systems
- Author
-
Di Meglio, Armando, Massarotti, Nicola, and Nithiarasu, Perumal
- Published
- 2024
- Full Text
- View/download PDF
186. An interactive assessment framework for residential space layouts using pix2pix predictive model at the early-stage building design
- Author
-
Mostafavi, Fatemeh, Tahsildoost, Mohammad, Zomorodian, Zahra Sadat, and Shahrestani, Seyed Shayan
- Published
- 2024
- Full Text
- View/download PDF
187. The Arquive of Tatuoca Magnetic Observatory Brazil: from paper to intelligent bytes
- Author
-
Cristian Berrio-Zapata, Ester Ferreira da Silva, Mayara Costa Pinheiro, Vinicius Augusto Carvalho de Abreu, Cristiano Mendel Martins, Mario Augusto Gongora, and Kelso Dunman
- Subjects
Big Data ,records management ,Observatories ,Deep learning ,Geomagnetism ,geophysics computing ,information retrieval systems ,Collaboration ,Data mining - Abstract
The Magnetic Observatory of Tatuoca (TTB) was installed by Observatório Nacional (ON) in 1957, near Belém city in the state of Pará, Brazilian Amazon. Its history goes back to 1933, when a Danish mission used this location to collect data, owing to its privileged position near the terrestrial equator. Between 1957 and 2007, TTB produced 18,000 magnetograms on paper using photographic variometers, along with other associated documents such as absolute value forms and yearbooks. Data were obtained manually from these graphs with rulers and grids, taking 24 average readings per day, that is, one per hour. In 2017, the Federal University of Pará (UFPA in the Portuguese acronym) and ON collaborated to rescue this physical archive. In 2022, UFPA took a step forward and proposed not only digitizing the documents but also developing an intelligent agent capable of reading and extracting the information from the curves at a resolution finer than one hour, this being the central goal of the project. If the project succeeds, it will rescue 50 years of data imprisoned on paper, increasing measurement sensitivity far beyond what these sources originally provided. This will also open the possibility of applying the same AI to similar documents in other observatories or in disciplines like seismography. This article recaps the project and the complex challenges faced in articulating Archival Science principles with AI and Geoscience.
- Published
- 2022
- Full Text
- View/download PDF
188. MLCAD: A Survey of Research in Machine Learning for CAD Keynote Paper.
- Author
-
Rapp, Martin, Amrouch, Hussam, Lin, Yibo, Yu, Bei, Pan, David Z., Wolf, Marilyn, and Henkel, Jorg
- Subjects
- *MACHINE learning, *CIRCUIT complexity, *COMPUTER-aided design, *ARTIFICIAL neural networks, *INTEGRATED circuits, *CONFIGURATION space, *MULTICASTING (Computer networks)
- Abstract
Due to the increasing size of integrated circuits (ICs), their design and optimization phases (i.e., computer-aided design, CAD) grow increasingly complex. At design time, a large design space needs to be explored to find an implementation that fulfills all specifications and then optimizes metrics like energy, area, delay, reliability, etc. At run time, a large configuration space needs to be searched to find the best set of parameters (e.g., voltage/frequency) to further optimize the system. Both spaces are infeasible for exhaustive search typically leading to heuristic optimization algorithms that find some tradeoff between design quality and computational overhead. Machine learning (ML) can build powerful models that have successfully been employed in related domains. In this survey, we categorize how ML may be used and is used for design-time and run-time optimization and exploration strategies of ICs. A metastudy of published techniques unveils areas in CAD that are well explored and underexplored with ML, as well as trends in the employed ML algorithms. We present a comprehensive categorization and summary of the state of the art on ML for CAD. Finally, we summarize the remaining challenges and promising open research directions. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
189. Advancing mortality rate prediction in European population clusters: integrating deep learning and multiscale analysis.
- Author
-
Shen Y, Yang X, Liu H, and Li Z
- Subjects
- Humans, Cluster Analysis, Databases, Factual, Electric Power Supplies, Europe epidemiology, Deep Learning
- Abstract
Accurately predicting population mortality rates is crucial for effective retirement insurance and economic policy formulation. Recent advancements in deep learning time series forecasting (DLTSF) have led to improved mortality rate predictions compared to traditional models like Lee-Carter (LC). This study focuses on mortality rate prediction in large clusters across Europe. By utilizing PCA dimensionality reduction and statistical clustering techniques, we integrate age features from high-dimensional mortality data of multiple countries, analyzing their similarities and differences. To capture the heterogeneous characteristics, an adaptive adjustment matrix is generated, incorporating sequential variation and spatial geographical information. In addition, a combination of graph neural networks and a transformer network with the adaptive adjustment matrix is employed to capture the spatiotemporal features between different clusters. Extensive numerical experiments using data from the Human Mortality Database validate the superiority of the proposed GT-A model over traditional LC models and other classic neural networks in terms of prediction accuracy. Consequently, the GT-A model serves as a powerful forecasting tool for global population studies and the international life insurance field., (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
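The record above feeds high-dimensional mortality curves through PCA dimensionality reduction before clustering countries. A minimal sketch of that preprocessing step using a plain SVD-based PCA on synthetic data (the regimes, dimensions, and one-dimensional split are illustrative assumptions, not the GT-A pipeline):

```python
import numpy as np

# Synthetic stand-in for the study's data: 20 "countries" with 100
# age-specific log mortality rates each, drawn from two regimes.
rng = np.random.default_rng(1)
base = np.linspace(-9.0, -1.0, 100)                 # log mortality rising with age
curves = np.vstack([
    base + rng.normal(0.0, 0.05, (10, 100)),        # regime A
    base + 0.8 + rng.normal(0.0, 0.05, (10, 100)),  # regime B: higher level
])

# PCA via SVD on the mean-centred data (reduces the age dimension)
x = curves - curves.mean(axis=0)
u, s, vt = np.linalg.svd(x, full_matrices=False)
scores = x @ vt[:3].T                               # first 3 principal components

# A one-dimensional "clustering": split on the sign of the first PC score,
# which here captures the overall mortality-level offset between regimes
labels = (scores[:, 0] > 0).astype(int)
```

A real pipeline would run a proper clustering algorithm (e.g. k-means) on the PC scores; splitting on the first component suffices here because the level offset dominates the variance.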
190. Deep Learning Image Reconstruction Algorithm for CCTA: Image Quality Assessment and Clinical Application.
- Author
-
Catapano F, Lisi C, Savini G, Olivieri M, Figliozzi S, Caracciolo A, Monti L, and Francone M
- Subjects
- Humans, Artificial Intelligence, Prospective Studies, Reproducibility of Results, Radiographic Image Interpretation, Computer-Assisted methods, Radiation Dosage, Algorithms, Image Processing, Computer-Assisted, Computed Tomography Angiography, Deep Learning
- Abstract
Objective: The increasing number of coronary computed tomography angiography (CCTA) requests raised concerns about dose exposure. New dose reduction strategies based on artificial intelligence have been proposed to overcome the limitations of iterative reconstruction (IR) algorithms. Our prospective study sought to explore the added value of deep-learning image reconstruction (DLIR) in comparison with a hybrid IR algorithm (adaptive statistical iterative reconstruction-veo [ASiR-V]) in CCTA, even in clinically challenging scenarios such as obesity, heavily calcified vessels, and coronary stents., Methods: We prospectively included 103 consecutive patients who underwent CCTA. Data sets were reconstructed with ASiR-V and DLIR. For each reconstruction, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated, and qualitative assessment was made with a four-point Likert scale by two independent and blinded radiologists with different expertise., Results: Both SNR and CNR were significantly higher in DLIR (SNR-DLIR median value [interquartile range] of 13.89 [11.06-16.35] and SNR-ASiR-V 25.42 [22.46-32.22], P < 0.001; CNR-DLIR 16.84 [9.83-27.08] vs CNR-ASiR-V 10.09 [5.69-13.5], P < 0.001). Median qualitative score was 4 for DLIR images versus 3 for ASiR-V (P < 0.001), with good interreader reliability [intraclass correlation coefficients ICC(2,1) and ICC(3,1): 0.60 for DLIR; 0.62 and 0.73 for ASiR-V]. In the obese and in the "calcifications and stents" groups, DLIR showed significantly higher values of SNR (24.23 vs 11.11, P < 0.001 and 24.55 vs 14.09, P < 0.001, respectively) and CNR (16.08 vs 8.04, P = 0.008 and 17.31 vs 10.14, P = 0.003) and image quality., Conclusions: Deep-learning image reconstruction in CCTA allows better SNR, CNR, and qualitative assessment than ASiR-V, with an added value in the most challenging clinical scenarios., Competing Interests: The authors declare that they have no conflict of interests., (Copyright © 
2023 Wolters Kluwer Health, Inc. All rights reserved.)
- Published
- 2024
- Full Text
- View/download PDF
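The SNR and CNR compared throughout the record above are simple region-of-interest statistics. A minimal sketch with synthetic Hounsfield-unit ROIs standing in for real reconstructions (function names and ROI values are ours):

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio: mean attenuation over noise (SD) within a ROI."""
    roi = np.asarray(roi, dtype=float)
    return float(roi.mean() / roi.std())

def cnr(roi, background):
    """Contrast-to-noise ratio: absolute ROI-to-background contrast
    over background noise."""
    roi = np.asarray(roi, dtype=float)
    background = np.asarray(background, dtype=float)
    return float(abs(roi.mean() - background.mean()) / background.std())

rng = np.random.default_rng(42)
vessel = rng.normal(300.0, 20.0, (32, 32))   # synthetic enhanced-vessel ROI (HU)
fat = rng.normal(-100.0, 20.0, (32, 32))     # synthetic background ROI (HU)
```

A denoising reconstruction raises both ratios by shrinking the standard deviations while leaving the mean attenuation values essentially unchanged.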
191. Deep learning-based whole-body characterization of prostate cancer lesions on [68Ga]Ga-PSMA-11 PET/CT in patients with post-prostatectomy recurrence.
- Author
-
Huang B, Yang Q, Li X, Wu Y, Liu Z, Pan Z, Zhong S, Song S, and Zuo C
- Subjects
- Male, Humans, Gallium Radioisotopes, Positron Emission Tomography Computed Tomography methods, Gallium Isotopes, Retrospective Studies, Neoplasm Recurrence, Local diagnostic imaging, Prostatectomy, Edetic Acid, Deep Learning, Prostatic Neoplasms diagnostic imaging, Prostatic Neoplasms surgery, Prostatic Neoplasms pathology
- Abstract
Purpose: The automatic segmentation and detection of prostate cancer (PC) lesions throughout the body are extremely challenging due to the lesions' complexity and variability in appearance, shape, and location. In this study, we investigated the performance of a three-dimensional (3D) convolutional neural network (CNN) to automatically characterize metastatic lesions throughout the body in a dataset of PC patients with recurrence after radical prostatectomy., Methods: We retrospectively collected [68Ga]Ga-PSMA-11 PET/CT images from 116 patients with metastatic PC at two centers: center 1 provided the data for fivefold cross validation (n = 78) and internal testing (n = 19), and center 2 provided the data for external testing (n = 19). PET and CT data were jointly input into a 3D U-Net to achieve whole-body segmentation and detection of PC lesions. The performance in both the segmentation and the detection of lesions throughout the body was evaluated using established metrics, including the Dice similarity coefficient (DSC) for segmentation and the recall, precision, and F1-score for detection. The correlation and consistency between tumor burdens (PSMA-TV and TL-PSMA) calculated from automatic segmentation and artificial ground truth were assessed by linear regression and Bland‒Altman plots., Results: On the internal test set, the DSC, precision, recall, and F1-score values were 0.631, 0.961, 0.721, and 0.824, respectively. On the external test set, the corresponding values were 0.596, 0.888, 0.792, and 0.837, respectively. Our approach outperformed previous studies in segmenting and detecting metastatic lesions throughout the body. Tumor burden indicators derived from deep learning and ground truth showed strong correlation (R2 ≥ 0.991, all P < 0.05) and consistency., Conclusion: Our 3D CNN accurately characterizes whole-body tumors in relapsed PC patients; its results are highly consistent with those of manual contouring. This automatic method is expected to improve work efficiency and to aid in the assessment of tumor burden., (© 2023. The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.)
- Published
- 2024
- Full Text
- View/download PDF
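The record above reports the Dice similarity coefficient for segmentation and precision/recall/F1 for lesion detection. A minimal sketch of those metrics (our own helpers, not the study's evaluation code):

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return float(2.0 * inter / (pred.sum() + truth.sum() + eps))

def detection_scores(tp, fp, fn):
    """Lesion-level precision, recall and F1 from matched detections."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

Segmentation Dice is voxel-wise, while the detection scores count whole lesions matched between prediction and ground truth, which is why the two sets of numbers in the abstract can diverge.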
192. ENVINet5 deep learning change detection framework for the estimation of agriculture variations during 2012-2023 with Landsat series data.
- Author
-
Singh G, Dahiya N, Sood V, Singh S, and Sharma A
- Subjects
- Environmental Monitoring methods, Satellite Imagery, Agriculture, Seasons, Deep Learning
- Abstract
Remote sensing is one of the most important methods for analysing multitemporal changes over a given period. As a cost-effective approach, remote sensing allows long-term analysis of agricultural land by collecting satellite imagery from different satellite missions. Landsat is one of the longest-running earth observation missions, offering a moderate-resolution dataset. Land surface mapping and monitoring are generally performed by combining classification and change detection models. In this work, a deep learning-based change detection (DCD) algorithm is proposed to detect long-term agricultural changes using the Landsat series datasets (i.e., Landsat-7, Landsat-8, and Landsat-9) for the period 2012 to 2023. The proposed algorithm extracts features from satellite data according to their spectral and geographic characteristics and identifies seasonal variability. The DCD integrates the deep learning-based ENVINet5 (Environment for Visualizing Images) classification model with a posterior probability-based post-classification comparison change detection model (PCD). The DCD accurately captures seasonal variations across the distinct Landsat series datasets and promises accurate results with higher-resolution datasets. The experimental results conclude that vegetation decreased from 2012 to 2023, while built-up land increased by up to 88.22% (2012-2023) for the Landsat-7 and Landsat-8 datasets. Decreasing classes include water (from 3.20% to 0.05%) and fallow land (from 1% to 0.59%). This study supports the identification of crop growth, crop yield prediction, precision farming, and crop mapping., (© 2024. The Author(s), under exclusive licence to Springer Nature Switzerland AG.)
- Published
- 2024
- Full Text
- View/download PDF
193. A preliminary exploration into top-down and bottom-up deep-learning approaches to localising neuro-interventional point targets in volumetric MRI.
- Author
-
Giffard E, Jannin P, and Baxter JSH
- Subjects
- Humans, Magnetic Resonance Imaging, Neural Networks, Computer, Machine Learning, Deep Learning
- Abstract
Purpose: Point localisation is a critical aspect of many interventional planning procedures, specifically representing anatomical regions of interest or landmarks as individual points. This could be seen as analogous to the problem of visual search in cognitive psychology, in which this search is performed either: bottom-up, constructing increasingly abstract and coarse-resolution features over the entire image; or top-down, using contextual cues from the entire image to refine the scope of the region being investigated. Traditional convolutional neural networks use the former, but it is not clear if this is optimal. This article is a preliminary investigation as to how this motivation affects 3D point localisation in neuro-interventional planning., Methods: Two neuro-imaging datasets were collected: one for cortical point localisation for repetitive transcranial magnetic stimulation and the other for sub-cortical anatomy localisation for deep brain stimulation. Four different frameworks were developed using top-down versus bottom-up paradigms as well as representing points as co-ordinates or heatmaps. These networks were applied to point localisation for transcranial magnetic stimulation and subcortical anatomy localisation. These networks were evaluated using cross-validation and a varying number of training datasets to analyse their sensitivity to quantity of training data., Results: Each network shows increasing performance as the amount of available training data increases, with the co-ordinate-based top-down network consistently outperforming the others. Specifically, the top-down architectures tend to outperform the bottom-up ones. 
An analysis of their memory consumption also favours the top-down co-ordinate-based architecture, as it requires significantly less memory than either the bottom-up architectures or those representing their predictions via heatmaps., Conclusion: This paper is a preliminary foray into a fundamental aspect of machine learning architectural design: that of the top-down/bottom-up divide from cognitive psychology. Although there are additional considerations within the particular architectures investigated that could affect these results and the number of architectures investigated is limited, our results do indicate that the less commonly used top-down paradigm could lead to more efficient and effective architectures in the future., (© 2023. CARS.)
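The heatmap-versus-co-ordinate distinction can be illustrated with a soft-argmax, one common way (not necessarily these networks' exact head) to collapse a dense heatmap prediction into a single point:

```python
import math

def softargmax_2d(heatmap):
    """Reduce a heatmap (list of rows of raw scores) to one (row, col) point
    as the softmax-weighted mean of pixel positions. This is a sketch of the
    general technique, not the paper's specific architecture."""
    flat = [(r, c, v) for r, row in enumerate(heatmap)
                      for c, v in enumerate(row)]
    m = max(v for _, _, v in flat)                      # for numerical stability
    weights = [(r, c, math.exp(v - m)) for r, c, v in flat]
    z = sum(w for _, _, w in weights)
    row = sum(r * w for r, _, w in weights) / z
    col = sum(c * w for _, c, w in weights) / z
    return row, col

peak = [[0.0, 0.0, 0.0],
        [0.0, 8.0, 0.0],
        [0.0, 0.0, 0.0]]
point = softargmax_2d(peak)  # lands on the central pixel (1, 1)
```

A co-ordinate-based head regresses `point` directly and so never materialises the full-resolution heatmap, which is one source of the memory saving noted above.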
- Published
- 2024
- Full Text
- View/download PDF
194. Deep learning enables the differentiation between early and late stages of hip avascular necrosis.
- Author
-
Klontzas ME, Vassalou EE, Spanakis K, Meurer F, Woertler K, Zibis A, Marias K, and Karantanas AH
- Subjects
- Humans, Retrospective Studies, Neural Networks, Computer, Magnetic Resonance Imaging methods, Deep Learning, Osteonecrosis
- Abstract
Objectives: To develop a deep learning methodology that distinguishes early from late stages of avascular necrosis of the hip (AVN) to determine treatment decisions., Methods: Three convolutional neural networks (CNNs), VGG-16, Inception-ResnetV2, and InceptionV3, were trained with transfer learning (ImageNet) and fine-tuned on a retrospectively collected cohort (n = 104) of MRI examinations of AVN patients to differentiate between early (ARCO 1-2) and late (ARCO 3-4) stages. A consensus CNN ensemble decision was recorded as the agreement of at least two CNNs. CNN and ensemble performance was benchmarked on an independent cohort of 49 patients from another country and compared to the performance of two MSK radiologists. CNN performance was expressed with areas under the curve (AUC) and the respective 95% confidence intervals (CIs), as well as precision, recall, and f1-scores. AUCs were compared with DeLong's test., Results: On internal testing, Inception-ResnetV2 achieved the highest individual performance with an AUC of 99.7% (95%CI 99-100%), followed by InceptionV3 and VGG-16 with AUCs of 99.3% (95%CI 98.4-100%) and 97.3% (95%CI 95.5-99.2%), respectively. The CNN ensemble achieved the same AUC as Inception-ResnetV2. On external validation, model performance dropped, with VGG-16 achieving the highest individual AUC of 78.9% (95%CI 51.6-79.6%). The best external performance was achieved by the model ensemble with an AUC of 85.5% (95%CI 72.2-93.9%).
No significant difference was found between the CNN ensemble and expert MSK radiologists (p = 0.22 and 0.092 respectively)., Conclusion: An externally validated CNN ensemble accurately distinguishes between the early and late stages of AVN and has comparable performance to expert MSK radiologists., Clinical Relevance Statement: This paper introduces the use of deep learning for the differentiation between early and late avascular necrosis of the hip, assisting in a complex clinical decision that can determine the choice between conservative and surgical treatment., Key Points: • A convolutional neural network ensemble achieved excellent performance in distinguishing between early and late avascular necrosis. • The performance of the deep learning method was similar to the performance of expert readers., (© 2023. The Author(s).)
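The consensus rule ("agreement of at least two CNNs") reduces to a simple majority vote over the three models' binary decisions. A minimal sketch, with hypothetical predicted probabilities and a 0.5 decision threshold:

```python
def ensemble_consensus(p1, p2, p3, threshold=0.5):
    """Return 1 ('late' stage) when at least two of the three CNNs vote
    positive, else 0 ('early'). Inputs are each model's predicted
    probability of late-stage AVN (hypothetical values)."""
    votes = sum(p >= threshold for p in (p1, p2, p3))
    return 1 if votes >= 2 else 0

label = ensemble_consensus(0.92, 0.41, 0.77)  # two of three vote 'late' -> 1
```

Majority voting of this kind tends to suppress the idiosyncratic errors of any single network, consistent with the ensemble's stronger external performance reported above.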
- Published
- 2024
- Full Text
- View/download PDF
195. Deep learning-based 3D brain multimodal medical image registration.
- Author
-
Deng L, Lan Q, Zhi Q, Huang S, Wang J, and Yang X
- Subjects
- Brain diagnostic imaging, Learning, Benchmarking, Databases, Factual, Image Processing, Computer-Assisted, Deep Learning
- Abstract
Medical image registration is a critical preprocessing step in medical image analysis. While traditional medical image registration techniques have matured, their registration speed and accuracy still fall short of clinical requirements. In this paper, we propose RCV-Net, an improved VoxelMorph network incorporating ResNet modules and the convolutional block attention module (CBAM), for 3D multimodal unsupervised registration. Unlike popular convolution-based U-shaped registration networks such as VoxelMorph, RCV-Net applies CBAM during the convolution process. This inclusion enhances feature map information extraction during training and effectively prevents information loss. Additionally, we introduce a lightweight residual network module at the network's base, which enhances learning ability without significantly increasing the number of training parameters. To evaluate the superiority of our registration model, we utilize evaluation metrics such as structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and mean square error (MSE). Experimental results demonstrate that our proposed network structure outperforms current state-of-the-art methods, yielding better performance in multimodal registration tasks. Furthermore, generalization testing on databases outside of the training set has confirmed the optimal registration effectiveness of our model., (© 2023. International Federation for Medical and Biological Engineering.)
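Two of the evaluation metrics named above, MSE and PSNR, follow directly from their standard definitions; a minimal sketch on flat intensity lists (the image values are toy data, not the paper's):

```python
import math

def mse(a, b):
    """Mean squared error between two equal-size images given as flat
    intensity lists; lower means a closer match after registration."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB, derived from MSE; higher means the
    warped moving image matches the fixed image more closely."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10.0 * math.log10(peak ** 2 / e)

fixed = [0.0, 128.0, 255.0, 64.0]   # hypothetical fixed-image intensities
moved = [2.0, 126.0, 255.0, 66.0]   # hypothetical warped moving image
error = mse(fixed, moved)
quality_db = psnr(fixed, moved)
```

SSIM is more involved (local means, variances, and covariances over sliding windows), which is why it is usually taken from a library rather than reimplemented.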
- Published
- 2024
- Full Text
- View/download PDF
196. Individual honey bee tracking in a beehive environment using deep learning and Kalman filter.
- Author
-
Kongsilp P, Taetragool U, and Duangphakdee O
- Subjects
- Bees, Animals, Ecosystem, Deep Learning
- Abstract
The honey bee is the most essential pollinator and a key contributor to the natural ecosystem. There are numerous ways for thousands of bees in a hive to communicate with one another. Individual trajectories and social interactions are thus complex behavioral features that can provide valuable information for an ecological study. In studying honey bee behavior, the key challenges that have undermined reliable studies include complexity (a high density of similar objects, small objects, and occlusion), the variety of background scenes, the dynamism of individual bee movements, and the similarity between the bee body and the background in the beehive. This study investigated the tracking of individual bees in a beehive environment using a deep learning approach and a Kalman filter. Detection of multiple bees and individual object segmentation were performed using Mask R-CNN with a ResNet-101 backbone network. Subsequently, the Kalman filter was employed for tracking multiple bees by tracking the body of each bee across a sequence of image frames. Three metrics were used to assess the proposed framework: mean average precision (mAP) for multiple-object detection and segmentation tasks, CLEAR MOT for multiple-object tracking tasks, and MOTS for multiple-object tracking and segmentation tasks. For the CLEAR MOT and MOTS metrics, accuracy (MOTA and MOTSA) and precision (MOTP and MOTSP) are considered. By employing videos from a custom-designed observation beehive, recorded at a frame rate of 30 frames per second (fps) and utilizing a continuous frame rate of 10 fps as input data, our system displayed impressive performance. It yielded satisfactory outcomes for tasks involving segmentation and tracking of multiple instances of bee behavior. For the multiple-object segmentation task based on Mask R-CNN, we achieved a 0.85 mAP. For the multiple-object tracking task with the Kalman filter, we achieved 77.48% MOTA, 79.79% MOTP, and 79.56% recall.
For the overall system for multiple-object tracking and segmentation tasks, we achieved 77.00% MOTSA, 75.60% MOTSP, and 80.30% recall., (© 2024. The Author(s).)
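The tracking step pairs each detected bee body with a Kalman filter across frames. A minimal one-dimensional constant-velocity sketch of that filtering idea (not the paper's exact filter; the noise parameters `q`, `r` and the detections are hypothetical):

```python
def kalman_1d(z_seq, q=1e-3, r=0.25):
    """Smooth one coordinate of a bee centroid across frames with a
    constant-velocity Kalman filter. State: [position, velocity];
    z_seq: noisy per-frame detections (e.g. from Mask R-CNN masks)."""
    x, v = z_seq[0], 0.0           # state estimate
    p = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
    out = []
    for z in z_seq[1:]:
        # Predict: position advances by velocity (dt = 1 frame);
        # covariance propagates as F P F^T + Q with F = [[1,1],[0,1]].
        x, v = x + v, v
        p = [[p[0][0] + p[0][1] + p[1][0] + p[1][1] + q, p[0][1] + p[1][1]],
             [p[1][0] + p[1][1], p[1][1] + q]]
        # Update with the measured position z (measurement H = [1, 0]).
        s = p[0][0] + r
        k0, k1 = p[0][0] / s, p[1][0] / s
        y = z - x
        x, v = x + k0 * y, v + k1 * y
        p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
             [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
        out.append(x)
    return out

track = kalman_1d([10.0, 11.2, 11.9, 13.1, 14.0])  # smoothed positions
```

In a multi-bee setting, one such filter per track supplies a predicted position that is matched against the new frame's detections, which is how identities survive brief occlusions.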
- Published
- 2024
- Full Text
- View/download PDF
197. Using deep learning to accurately detect sow vulva size in a group pen with a single camera.
- Author
-
Chang SC, Wu XR, Kuan HY, Peng SY, and Chang CY
- Subjects
- Swine, Animals, Female, Housing, Animal, Weaning, Estrus physiology, Vulva physiology, Deep Learning
- Abstract
This paper presents a non-contact method for the detection of changes in sow vulva size in a group pen. The traditional approach to estrus detection is manually pressing down on the back of the sow to elicit standing responses; however, this method causes undue distress for sows not in estrus. When a sow is in estrus, the vulva is red and swollen due to endocrine activity. Monitoring changes in vulva size to detect estrus with as little impact on the sow as possible is the focus of this study. This is achieved using a single camera combined with a deep learning framework. Our approach comprises two steps: vulva detection and vulva size conversion. Images of sows of the Yorkshire, Landrace, and Duroc breeds were collected in group housing, and the vulva was detected through artificial markers and the YOLO v4 network architecture. Based on the internal and external parameters of the camera, the detected size was converted into millimeters, and the results of manual measurement (MM) and automatic calculation were combined to calculate the size of the vulva. Analysis of the calculated size compared with MM indicates that the object recognition rate of the system exceeds 97.06%, with a size error of only +1.70 to -4.47 mm and high calculation efficiency (>2.8 frames/s). Directions for future research include the automatic detection of pig width., (© The Author(s) 2023. Published by Oxford University Press on behalf of the American Society of Animal Science. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.)
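The size-conversion step uses camera parameters to turn a pixel measurement into millimeters. A minimal pinhole-camera sketch, with hypothetical depth and focal-length values (the paper's actual conversion also involves the external parameters):

```python
def pixels_to_mm(length_px, depth_mm, focal_px):
    """Pinhole-camera conversion: an object of physical size S at distance Z
    from a camera with focal length f (in pixels) spans S * f / Z pixels,
    so S = length_px * Z / f. All values here are hypothetical."""
    return length_px * depth_mm / focal_px

# A 60 px measurement at 1.5 m camera-to-sow distance, focal length 1800 px:
size_mm = pixels_to_mm(60, 1500.0, 1800.0)  # = 50.0 mm
```

The depth term is why a calibrated single-camera setup still needs a known or estimated camera-to-animal distance to report absolute millimeter sizes.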
- Published
- 2024
- Full Text
- View/download PDF
198. Deep learning-based photodamage reduction on harmonic generation microscope at low-level optical power.
- Author
-
Shen YJ, Liao EY, Tai TM, Liao YH, Sun CK, Lee CK, See S, and Chen HW
- Subjects
- Microscopy, Deep Learning
- Abstract
The trade-off between high-quality images and cellular health in optical bioimaging is a crucial problem. We demonstrated a deep-learning-based power-enhancement (PE) model in a harmonic generation microscope (HGM), including second harmonic generation (SHG) and third harmonic generation (THG). Our model can predict high-power HGM images from low-power images, greatly reducing the risk of phototoxicity and photodamage. Furthermore, the PE model trained only on normal skin data can also be used to predict abnormal skin data, enabling the dermatopathologist to successfully identify and label cancer cells. The PE model shows potential for in-vivo and ex-vivo HGM imaging., (© 2023 Wiley-VCH GmbH.)
- Published
- 2024
- Full Text
- View/download PDF
199. Differentiation and risk stratification of basal cell carcinoma with deep learning on histopathologic images and measuring nuclei and tumor microenvironment features.
- Author
-
Lan X, Guo G, Wang X, Yan Q, Xue R, Li Y, Zhu J, Dong Z, Wang F, Li G, Wang X, Xu J, and Jiang Y
- Subjects
- Humans, Tumor Microenvironment, Risk Assessment, Deep Learning, Carcinoma, Basal Cell diagnostic imaging, Skin Neoplasms diagnostic imaging
- Abstract
Background: Nuclear pleomorphism and the tumor microenvironment (TME) play a critical role in cancer development and progression. Identifying the most predictive nuclei and TME features of basal cell carcinoma (BCC) may provide insights into which characteristics pathologists can use to distinguish and stratify this entity., Objectives: To develop an automated workflow based on nuclei and TME features from basaloid cell tumor regions to differentiate BCC from trichoepithelioma (TE) and stratify BCC into high-risk (HR) and low-risk (LR) subtypes, and to identify the nuclear and TME characteristics profile of different basaloid cell tumors., Methods: The deep learning systems were trained on 161 H&E-stained sections, comprising 51 sections of HR-BCC, 50 sections of LR-BCC, and 60 sections of TE from one institution (D1), and externally and independently validated on D2 (46 sections) and D3 (76 sections), from 2015 to 2022. 60%, 20%, and 20% of the D1 data were randomly split for training, validation, and testing, respectively. The framework comprised four stages: tumor region identification by a multi-head self-attention (MSA) U-Net, nuclei segmentation by HoVer-Net, handcrafted quantitative feature extraction, and construction of differentiation and risk stratification classifiers. Pixel accuracy, precision, recall, dice score, intersection over union (IoU), and area under the curve (AUC) were used to evaluate the performance of the tumor segmentation model and classifiers., Results: The MSA-U-Net model detected tumor regions with 0.910 precision, 0.869 recall, 0.889 dice score, and 0.800 IoU. The differentiation classifier achieved 0.977 ± 0.0159, 0.955 ± 0.0181, and 0.885 ± 0.0237 AUC in D1, D2, and D3, respectively. The most discriminative features between BCC and TE included Homogeneity, Elongation, T-T_meanEdgeLength, T-T_Nsubgraph, S-T_HarmonicCentrality, and S-S_Degrees.
The risk stratification model predicted HR-BCC and LR-BCC well, with 0.920 ± 0.0579, 0.839 ± 0.0176, and 0.825 ± 0.0153 AUC in D1, D2, and D3, respectively. The most discriminative features between HR-BCC and LR-BCC comprised IntensityMin, Solidity, T-T_minEdgeLength, T-T_Coreness, T-T_Degrees, T-T_Betweenness, and S-T_Degrees., Conclusions: This framework holds potential for future use as a second opinion to help inform the diagnosis of BCC and to identify nuclei and TME features related to malignancy and tumor risk stratification., (© 2024 The Authors. Skin Research and Technology published by John Wiley & Sons Ltd.)
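AUC, the headline metric above, can be computed with the rank (Mann-Whitney) formulation: the probability that a randomly chosen positive case outscores a randomly chosen negative one. A minimal sketch with hypothetical labels and classifier scores:

```python
def auc(labels, scores):
    """Area under the ROC curve via pairwise comparisons: count how often a
    positive example's score beats a negative example's score (ties count
    as half a win), normalised by the number of positive/negative pairs."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# HR-BCC (1) vs LR-BCC (0) with hypothetical classifier scores:
a = auc([1, 1, 0, 0], [0.9, 0.6, 0.7, 0.2])  # -> 0.75
```

Because AUC is threshold-free, it summarises the classifier's ranking quality across all possible operating points, which is why it is the preferred headline figure in the studies listed here.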
- Published
- 2024
- Full Text
- View/download PDF
200. Application of Deep Learning for Studying NMDA Receptors.
- Author
-
Deng Z, Gu R, and Wen H
- Subjects
- Humans, Drug Development methods, Receptors, N-Methyl-D-Aspartate metabolism, Deep Learning, Quantitative Structure-Activity Relationship, Blood-Brain Barrier metabolism
- Abstract
Artificial intelligence underwent remarkable advancement in the past decade, revolutionizing our way of thinking and unlocking unprecedented opportunities across various fields, including drug development. The emergence of large pretrained models, such as ChatGPT, has even begun to demonstrate human-level performance in certain tasks. However, the difficulty of deploying and utilizing AI and pretrained models has limited their practical use for nonexperts. To overcome this challenge, here we present three highly accessible online tools based on a large pretrained model for chemistry, Uni-Mol, for drug development against CNS diseases, including those targeting the NMDA receptor: a blood-brain barrier (BBB) permeability prediction tool, a quantitative structure-activity relationship (QSAR) analysis system, and a versatile interface to the AI-based molecule generation model VD-gen. We believe that these resources will effectively bridge the gap between cutting-edge AI technology and NMDAR experts, facilitating rapid and rational drug development., (© 2024. The Author(s), under exclusive license to Springer Science+Business Media, LLC, part of Springer Nature.)
- Published
- 2024
- Full Text
- View/download PDF