25,323 results
Search Results
2. Predicting Breast Cancer by Paper Spray Ion Mobility Spectrometry Mass Spectrometry and Machine Learning
- Author
-
Ewelina P. Dutkiewicz, Chih-Lin Chen, Hua-Yi Hsieh, Cheng-Chih Hsu, Ying-Chen Huang, Ming-Yang Wang, Hsin-Hsiang Chung, and Bo-Rong Chen
- Subjects
Paper ,Core needle ,Spectrometry, Mass, Electrospray Ionization ,Ion-mobility spectrometry ,Electrospray ionization ,Breast Neoplasms ,Machine learning ,Mass spectrometry ,Analytical Chemistry ,Machine Learning ,Breast cancer ,Ion Mobility Spectrometry ,Humans ,Chemistry ,Ion-mobility spectrometry–mass spectrometry ,Female ,Artificial intelligence ,Algorithms - Abstract
Paper spray ionization has been used as a fast sampling/ionization method for the direct mass spectrometric analysis of biological samples at ambient conditions. Here, we demonstrated that by utilizing paper spray ionization-mass spectrometry (PSI-MS) coupled with field asymmetric waveform ion mobility spectrometry (FAIMS), predictive metabolic and lipidomic profiles of routine breast core needle biopsies could be obtained effectively. By the combination of machine learning algorithms and pathological examination reports, we developed a classification model, which has an overall accuracy of 87.5% for an instantaneous differentiation between cancerous and noncancerous breast tissues utilizing metabolic and lipidomic profiles. Our results suggested that paper spray ionization-ion mobility spectrometry-mass spectrometry (PSI-IMS-MS) is a powerful approach for rapid breast cancer diagnosis based on altered metabolic and lipidomic profiles.
- Published
- 2019
3. Artificial Intelligence in the Detection of Bid-Rigging Practices: The Leading Role of the ACCO [LA INTEL·LIGÈNCIA ARTIFICIAL EN LA DETECCIÓ DE LES PRÀCTIQUES DE BID RIGGING: EL PAPER CAPDAVANTER DE L'ACCO].
- Author
-
Jiménez Cardona, Noemí
- Subjects
GOVERNMENT purchasing ,ARTIFICIAL intelligence ,ANTITRUST law ,SOFTWARE development tools ,CARTELS - Abstract
Copyright of Revista Catalana de Dret Públic is the property of Revista Catalana de Dret Public and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2022
- Full Text
- View/download PDF
4. Canadian Association of Radiologists White Paper on De-identification of Medical Imaging: Part 2, Practical Considerations
- Author
-
Flavie Lavoie-Cardinal, Khaled El-Emam, Bruce Gray, Casey Hurrell, Caroline Reinhold, Mark Cicero, An Tang, Marleine Azar, William Parker, Jacob L. Jaremko, Lori Sheremeta, Emil Lee, Andrea Lum, Benoit Desjardins, and Rebecca Bromwich
- Subjects
Diagnostic Imaging ,Canada ,Knowledge management ,Best practice ,Data management ,Lifelong learning ,Big data ,Machine Learning ,White paper ,Artificial Intelligence ,Data Anonymization ,Health care ,Radiologists ,Medicine ,Humans ,Radiology, Nuclear Medicine and imaging ,Societies, Medical ,De-identification ,General Medicine ,Data sharing ,Algorithms - Abstract
The application of big data, radiomics, machine learning, and artificial intelligence (AI) algorithms in radiology requires access to large data sets containing personal health information. Because machine learning projects often require collaboration between different sites or data transfer to a third party, precautions are required to safeguard patient privacy. Safety measures are required to prevent inadvertent access to and transfer of identifiable information. The Canadian Association of Radiologists (CAR) is the national voice of radiology committed to promoting the highest standards in patient-centered imaging, lifelong learning, and research. The CAR has created an AI Ethical and Legal standing committee with the mandate to guide the medical imaging community in terms of best practices in data management, access to health care data, de-identification, and accountability practices. Part 2 of this article will inform CAR members on the practical aspects of medical imaging de-identification, strengths and limitations of de-identification approaches, list of de-identification software and tools available, and perspectives on future directions.
- Published
- 2020
5. Combining Optical Character Recognition With Paper ECG Digitization
- Author
-
Amit J. Shah, Srini Tridandapani, Pamela Bhatti, Shambavi Ganesh, Mhmtjamil Alkhalaf, and Shishir Gupta
- Subjects
optical character recognition ,Computer science ,Computer applications to medicine. Medical informatics ,Biomedical Engineering ,Article ,connected component analysis ,Electrocardiography ,Cohen's kappa ,Medical technology ,Electronic Health Records ,Humans ,Medical diagnosis ,electronic medical record ,Digitization ,Graphical user interface ,Pattern recognition ,Signal Processing, Computer-Assisted ,General Medicine ,Image segmentation ,Interfacing ,Artificial intelligence ,Connected-component labeling ,Algorithms - Abstract
Objective: We propose a vendor-agnostic, MATLAB-based tool to convert electrocardiography (ECG) waveforms from paper-based ECG records into digitized ECG signals. The tool is packaged as an open-source, standalone graphical user interface (GUI) application. Methods and procedures: To reach this objective we: (1) preprocess the ECG records, including skew correction, background grid removal and linear filtering; (2) segment ECG signals using Connected Components Analysis (CCA); (3) implement Optical Character Recognition (OCR) to remove overlapping ECG lead characters and to interface patients' demographic information with their research records or their electronic medical record (EMR). The digitization results are validated through a reader study in which clinically salient features, such as the intervals of the QRST complex, are compared between the paper ECG records and the digitized ECG records. Results: Comparison of clinically important features between the paper-based ECG records and the digitized ECG signals reveals intra- and inter-observer correlations of 0.86–0.99 and 0.79–0.94, respectively. The kappa statistic averaged 0.86 and 0.72 for intra- and inter-observer agreement, respectively. Conclusion: The clinically salient features of the ECG waveforms, such as the intervals of the QRST complex, are preserved during the digitization procedure. Clinical and Healthcare Impact: This open-source digitization tool can be used as a research resource to digitize paper ECG records, enabling the development of new prediction algorithms to risk-stratify individuals with cardiovascular disease and/or the development of ECG-based cardiovascular diagnoses relying upon automated digital algorithms.
- Published
- 2021
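The reader study in the record above summarizes observer agreement with the kappa statistic (averaging 0.86 intra-observer and 0.72 inter-observer). As an illustrative sketch only, not the authors' code, Cohen's kappa for two raters labelling the same items can be computed as:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters labelled independently
    # with the same marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Kappa is 1.0 for perfect agreement and 0.0 when agreement is no better than chance, which is why it is reported alongside raw correlations in reader studies.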
6. Artificial intelligence and breast screening: French Radiology Community position paper
- Author
-
L Verzaux, B Séradour, A Maire, G Lenczner, Corinne Balleyguier, P Heid, Isabelle Thomassin-Naggara, Patrice Taourel, and Luc Ceugnart
- Subjects
Digital mammography ,Breast imaging ,Breast Neoplasms ,Radiation Dosage ,Digital breast tomosynthesis ,Screening program ,Breast cancer ,Artificial Intelligence ,Image Processing, Computer-Assisted ,Breast screening ,Humans ,Radiology, Nuclear Medicine and imaging ,Precision Medicine ,Early Detection of Cancer ,Breast Density ,Radiological and Ultrasound Technology ,General Medicine ,Clinical value ,Position paper ,Female ,Breast disease ,Artificial intelligence ,France ,Algorithms ,Needs Assessment ,Mammography - Abstract
The objective of this article was to evaluate the evidence currently available about the clinical value of artificial intelligence (AI) in breast imaging. Nine experts from the disciplines involved in breast disease management – including physicists and radiologists – convened a meeting on June 3, 2019 to discuss the evidence for the use of this technology in plenary and focused sessions. Prior to the meeting, the group performed a literature review on predefined topics. This paper presents the consensus reached by this working group on recommendations for the future use of AI in breast screening and related research topics.
- Published
- 2019
7. Position paper on COVID-19 imaging and AI: From the clinical needs and technological challenges to initial AI solutions at the lab and national level towards a new era for AI in healthcare
- Author
-
Hayit Greenspan, Wiro J. Niessen, Mads Nielsen, Raúl San José Estépar, and Eliot L. Siegel
- Subjects
Diagnostic Imaging ,Coronavirus disease 2019 (COVID-19) ,Computer science ,Pneumonia, Viral ,Population ,Health Informatics ,Article ,Task (project management) ,Betacoronavirus ,Artificial Intelligence ,Health care ,Pandemic ,Humans ,Radiology, Nuclear Medicine and imaging ,National level ,Pandemics ,Radiological and Ultrasound Technology ,SARS-CoV-2 ,imaging ,COVID-19 ,Computer Graphics and Computer-Aided Design ,Engineering management ,Key factors ,AI ,Position paper ,Computer Vision and Pattern Recognition ,Coronavirus Infections ,Algorithms - Abstract
In this position paper, we provide a collection of views on the role of AI in the COVID-19 pandemic, from clinical requirements to the design of AI-based systems, to the translation of the developed tools to the clinic. We highlight key factors in designing system solutions per specific task, as well as design issues in managing the disease at the national level. We focus on three specific use-cases for which AI systems can be built: early disease detection, management in a hospital setting, and building patient-specific predictive models that require the combination of imaging with additional clinical data. Infrastructure considerations and population modeling in two European countries will be described. This pandemic has made the practical and scientific challenges of making AI solutions very explicit. A discussion concludes this paper, with a list of challenges facing the community in the AI road ahead.
- Published
- 2020
8. QRS Detection and Measurement Method of ECG Paper Based on Convolutional Neural Networks
- Author
-
Bingli Jiao, Runze Yu, Tiangang Zhu, Zhilong Wang, Xiaohui Duan, and Yingguo Gao
- Subjects
Databases, Factual ,Computer science ,Feature extraction ,Convolutional neural network ,QRS complex ,Electrocardiography ,Humans ,Measurement method ,Measure (data warehouse) ,Artificial neural network ,Pattern recognition ,Arrhythmias, Cardiac ,Paper based ,Artificial intelligence ,Neural Networks, Computer ,Algorithms - Abstract
In this paper, we propose an end-to-end approach to QRS complex detection and measurement on electrocardiogram (ECG) paper using convolutional neural networks (CNNs). Unlike conventional solutions that first convert images to digital data, our method directly detects QRS complexes in images using Faster R-CNN; the R-peak is then located and measured by a CNN. Validated on clinical ECG data from the St. Petersburg Institute of Cardiological Technics 12-lead Arrhythmia Database and on real ECG paper from Peking University People's Hospital, the proposed method achieves a recall of 98.32% and a precision of 99.01% in detection, and a mean absolute error of 0.012 mV in measurement. Experimental results demonstrate the superior performance of our method over conventional solutions, paving the way to detecting and measuring ECG paper with CNNs.
- Published
- 2018
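The record above reports detector quality as recall and precision over matched detections. A minimal sketch of how such numbers are typically obtained for interval detections (the IoU-based greedy matching here is a common convention, not necessarily the paper's exact protocol):

```python
def interval_iou(a, b):
    """Intersection-over-union of two 1-D intervals (start, end)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def precision_recall(detections, ground_truth, iou_threshold=0.5):
    """Greedily match each detected interval to at most one reference
    interval; matches above the IoU threshold count as true positives."""
    unmatched = list(ground_truth)
    tp = 0
    for det in detections:
        best = max(unmatched, key=lambda g: interval_iou(det, g), default=None)
        if best is not None and interval_iou(det, best) >= iou_threshold:
            tp += 1
            unmatched.remove(best)
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall
```

Precision penalizes spurious detections; recall penalizes missed QRS complexes.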
9. Explainable Rules and Heuristics in AI Algorithm Recommendation Approaches--A Systematic Literature Review and Mapping Study.
- Author
-
García-Peñalvo, Francisco José, Vázquez-Ingelmo, Andrea, and García-Holgado, Alicia
- Subjects
ARTIFICIAL intelligence ,LITERATURE reviews ,SOFTWARE engineering ,ALGORITHMS ,HEURISTIC ,SOFTWARE engineers - Abstract
The exponential use of artificial intelligence (AI) to solve and automate complex tasks has catapulted its popularity, generating new challenges that need to be addressed. While AI is a powerful means to discover interesting patterns and obtain predictive models, the use of these algorithms comes with great responsibility: an incomplete or unbalanced set of training data, or an improper interpretation of the models' outcomes, could lead to misleading and ultimately dangerous conclusions. For these reasons, it is important to rely on expert knowledge when applying these methods. However, not every user can count on this specific expertise; non-AI-expert users could also benefit from applying these powerful algorithms to their domain problems, but they need basic guidelines to get the most out of AI models. The goal of this work is to present a systematic review of the literature analyzing studies whose outcomes are explainable rules and heuristics for selecting suitable AI algorithms given a set of input features. The systematic review follows the methodology proposed by Kitchenham and other authors in the field of software engineering. As a result, 9 papers that tackle AI algorithm recommendation through tangible and traceable rules and heuristics were collected. The small number of retrieved papers suggests a lack of explicitly reported rules and heuristics for testing the suitability and performance of AI algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
10. WASP (Write a Scientific Paper): Using correspondence analysis to investigate qualitative data elicited through photographs
- Author
-
Josef Lauri and Mary Anne Lauri
- Subjects
Obstetrics and Gynecology ,Qualitative property ,Medical Writing ,Focus group ,Correspondence analysis ,Pediatrics, Perinatology and Child Health ,Photography ,Artificial intelligence ,Psychology ,Algorithms ,Qualitative Research ,Natural language processing - Abstract
Images can be helpful for eliciting data in the form of responses from participants. Sometimes photographs can help participants speak about issues, events, thoughts and emotions which they find difficult to talk about. This paper discusses how photos can be used and how the data collected through their use in a discussion can be analysed using Correspondence Analysis (CA). Although CA is considered a quantitative tool for analysing data, this paper describes how it can be used with qualitative data.
- Published
- 2019
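The record above applies Correspondence Analysis (CA) to counts of coded responses. For readers unfamiliar with the method, a minimal NumPy sketch of classical CA (an illustration of the standard algorithm, not the authors' code) decomposes the standardized residuals of a contingency table:

```python
import numpy as np

def correspondence_analysis(table):
    """Classical CA of a contingency table: returns row principal
    coordinates and the total inertia (chi-square statistic / n)."""
    N = np.asarray(table, dtype=float)
    n = N.sum()
    P = N / n                                  # correspondence matrix
    r = P.sum(axis=1)                          # row masses
    c = P.sum(axis=0)                          # column masses
    expected = np.outer(r, c)
    S = (P - expected) / np.sqrt(expected)     # standardized residuals
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    row_coords = (U * s) / np.sqrt(r)[:, None]  # row principal coordinates
    total_inertia = float((s ** 2).sum())
    return row_coords, total_inertia
```

Plotting the first two columns of the row (and analogously column) coordinates gives the familiar CA map used to interpret associations between categories.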
11. Physics driven behavioural clustering of free-falling paper shapes
- Author
-
Fumiya Iida, Toby Howison, Josie Hughes, and Fabio Giardina
- Subjects
Inertia ,Physiology ,Physical system ,Social Sciences ,Systems Science ,Physical Phenomena ,Medicine and Health Sciences ,Psychology ,Cluster Analysis ,Moment of Inertia ,Multidisciplinary ,Applied Mathematics ,Simulation and Modeling ,Classical Mechanics ,Dynamical Systems ,Free falling ,Machine learning ,Physical Sciences ,Medicine ,Algorithms ,Research Article ,Paper ,Computer and Information Sciences ,Reynolds Number ,Science ,Fluid Mechanics ,Research and Analysis Methods ,Continuum Mechanics ,Motion ,Machine Learning Algorithms ,Artificial Intelligence ,Cluster analysis ,Behavior ,Biological Locomotion ,Biology and Life Sciences ,Fluid Dynamics ,Models, Theoretical ,Nonlinear Dynamics ,Artificial intelligence ,Mathematics - Abstract
Many complex physical systems exhibit a rich variety of discrete behavioural modes. Often, the system complexity limits the applicability of standard modelling tools. Hence, understanding the underlying physics of different behaviours and distinguishing between them is challenging. Although traditional machine learning techniques could predict and classify behaviour well, typically they do not provide any meaningful insight into the underlying physics of the system. In this paper we present a novel method for extracting physically meaningful clusters of discrete behaviour from limited experimental observations. This method obtains a set of physically plausible functions that both facilitate behavioural clustering and aid in system understanding. We demonstrate the approach on the V-shaped falling paper system, a new falling paper type system that exhibits four distinct behavioural modes depending on a few morphological parameters. Using just 49 experimental observations, the method discovered a set of candidate functions that distinguish behaviours with an error of 2.04%, while also aiding insight into the physical phenomena driving each behaviour.
- Published
- 2019
12. Non-contact and non-destructive detection and identification of Bacillus anthracis inside paper envelopes
- Author
-
Shay Cohen, Barak Fishbain, Raviv Raich, Ran Aharoni, Haim Levy, Izhar Ron, Shai Kendler, Shay Weiss, and Ziv Mano
- Subjects
Spores, Bacterial ,Infrared Rays ,Computer science ,Spectrum Analysis ,Forensic Sciences ,Hyperspectral imaging ,Pattern recognition ,Bioterrorism ,Pathology and Forensic Medicine ,Bacillus anthracis ,Identification (information) ,Non-destructive testing ,Humans ,Screening tool ,Postal Service ,Artificial intelligence ,Law ,Algorithms - Abstract
Efficient and safe detection of Bacillus anthracis spores (BAS) is a challenging task, especially in bio-terror scenarios where the agent is concealed. We provide a proof-of-concept for the identification of concealed BAS inside mail envelopes using short-wave infrared hyperspectral imaging (SWIR-HSI). The spores and two other benign materials are identified according to their typical absorption spectra. The identification process is based on the removal of the envelope signal using a new automatic algorithm. This method may serve as a fast screening tool prior to using classical bioanalytical techniques.
- Published
- 2019
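The record above identifies materials "according to their typical absorption spectra" once the envelope signal is removed. As a hedged illustration of the general idea, not the paper's algorithm, one common way to assign a residual spectrum to a reference library is by cosine similarity:

```python
import math

def cosine_similarity(x, y):
    """Cosine of the angle between two spectra (lists of band values)."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

def identify(spectrum, library):
    """Return the name of the library spectrum most similar to the input.
    `library` maps material names to reference absorption spectra."""
    return max(library, key=lambda name: cosine_similarity(spectrum, library[name]))
```

The material names and band values below are hypothetical placeholders for illustration only.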
13. Smart Random Walk Distributed Secured Edge Algorithm Using Multi-Regression for Green Network.
- Author
-
Saba, Tanzila, Haseeb, Khalid, Rehman, Amjad, Damaševičius, Robertas, and Bahaj, Saeed Ali
- Subjects
RANDOM walks ,ALGORITHMS ,ARTIFICIAL intelligence ,INTERNET of things ,ELECTRONIC paper ,INTERNET traffic - Abstract
Smart communication has significantly advanced with the integration of the Internet of Things (IoT). Many devices and online services are utilized in the network system to cope with data gathering and forwarding. Recently, many traffic-aware solutions have explored autonomous systems to attain intelligent routing and flow of internet traffic with the support of artificial intelligence. However, inefficient usage of nodes' batteries and long-range communication degrade the connectivity time between the deployed sensors and the end devices. Moreover, trustworthy route identification is another significant research challenge in formulating a smart system. Therefore, this paper presents a smart Random walk Distributed Secured Edge algorithm (RDSE) using a multi-regression model for IoT networks, which aims to enhance the stability of the chosen IoT network with the support of an optimal system. In addition, by using secured computing, the proposed architecture increases the trustworthiness of smart devices with the least node complexity. The proposed algorithm differs from other works in the following respects. Firstly, it uses a random walk to form the initial routes with certain probabilities and later, by exploring a multi-variant function, attains long-lasting communication with a high degree of network stability. This helps to improve the optimization criteria for the nodes' communication and efficiently utilizes energy in combination with mobile edges. Secondly, the trusted factors successfully identify normal nodes even when the system is compromised, so the proposed algorithm reduces data risks and offers a more reliable and private system. In addition, simulation-based testing shows that the proposed algorithm performs significantly better than existing work. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
14. Deep learning for digitizing highly noisy paper-based ECG records
- Author
-
Yao Li, Liheng Yu, Linghao Shen, Meng Wang, Jun Wang, Kunlun He, and Qixun Qu
- Subjects
Computer science ,Health Informatics ,Signal ,Electrocardiography ,Deep Learning ,Sørensen–Dice coefficient ,Waveform ,Segmentation ,Digitization ,Signal processing ,Deep learning ,Pattern recognition ,Signal Processing, Computer-Assisted ,Image segmentation ,Computer Science Applications ,Artificial intelligence ,Algorithms - Abstract
Electrocardiography (ECG) is essential in diagnosing many heart diseases. However, some ECGs are recorded on paper, which can be highly noisy. Digitizing paper-based ECG records into a high-quality signal is critical for further analysis. We formulated the digitization problem as a segmentation problem and proposed a deep learning method to digitize highly noisy ECG scans. Our method extracts the ECG signal in an end-to-end manner and can handle different paper record layouts. In the experiment, our model clearly extracted the ECG waveform with a Dice coefficient of 0.85 and accurately measured the common ECG parameters with more than 0.90 Pearson's correlation. We showed that an end-to-end deep learning approach can be powerful in ECG digitization. To the best of our knowledge, we provide the first approach to digitizing the least informative noisy binary ECG scans, one that can potentially be generalized to digitize various ECG records.
- Published
- 2020
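The segmentation quality in the record above is reported as a Dice coefficient of 0.85. As a brief illustration (the standard definition, not the authors' evaluation code), the Dice similarity of two binary masks is:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity of two binary masks given as flat 0/1 sequences:
    2 * |A ∩ B| / (|A| + |B|)."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Two empty masks are conventionally treated as a perfect match.
    return 2.0 * intersection / total if total else 1.0
```

Dice ranges from 0 (no overlap) to 1 (identical masks) and, unlike pixel accuracy, is not dominated by the large background region of an ECG scan.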
15. How to read and review papers on machine learning and artificial intelligence in radiology: a survival guide to key methodological concepts
- Author
-
Burak Kocak, Ece Ates Kus, and Ozgur Kilickesmez
- Subjects
Group method of data handling ,Feature selection ,Feature scaling ,Machine learning ,Machine Learning ,Robustness (computer science) ,Artificial Intelligence ,Humans ,Radiology, Nuclear Medicine and imaging ,Hyperparameter ,Deep learning ,Reproducibility of Results ,General Medicine ,Data sharing ,Reading ,Information leakage ,Radiology ,Artificial intelligence ,Algorithms - Abstract
In recent years, there has been a dramatic increase in research papers about machine learning (ML) and artificial intelligence in radiology. With so many papers around, it is of paramount importance to make a proper scientific quality assessment as to their validity, reliability, effectiveness, and clinical applicability. Due to methodological complexity, papers on ML in radiology are often hard to evaluate, requiring a good understanding of key methodological issues. In this review, we aimed to guide the radiology community on key methodological aspects of ML to improve their academic reading and peer-review experience. Key aspects of the ML pipeline are presented within four broad categories: study design, data handling, modelling, and reporting. Sixteen key methodological items and related common pitfalls are reviewed with a fresh perspective: database size, robustness of reference standard, information leakage, feature scaling, reliability of features, high dimensionality, perturbations in feature selection, class balance, bias-variance trade-off, hyperparameter tuning, performance metrics, generalisability, clinical utility, comparison with traditional tools, data sharing, and transparent reporting. Key Points: • Machine learning is new and rather complex for the radiology community. • Validity, reliability, effectiveness, and clinical applicability of studies on machine learning can be evaluated with a proper understanding of key methodological concepts about study design, data handling, modelling, and reporting. • Understanding key methodological concepts will provide a better academic reading and peer-review experience for the radiology community.
- Published
- 2020
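Among the pitfalls the review above lists, information leakage via feature scaling has a simple remedy: fit the scaler on the training split only, then apply the same transform to the test split. A minimal sketch (the helper below is hypothetical, not from the paper):

```python
def fit_standardizer(train_column):
    """Learn mean and std from the *training* data only, so that no
    test-set statistics leak into preprocessing (information leakage)."""
    n = len(train_column)
    mean = sum(train_column) / n
    var = sum((v - mean) ** 2 for v in train_column) / n
    std = var ** 0.5 or 1.0  # guard against a constant column
    return lambda column: [(v - mean) / std for v in column]

# Usage: the transform fitted on the training split is reused verbatim
# on the test split; the test data never influences mean or std.
```

Fitting the scaler on the pooled data instead would optimistically bias the reported test performance.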
16. An effective digitization method for CTG paper report with binary background grids taken by smartphone
- Author
-
Liu Zhikang, Yu Zhang, Zhao Zhidong, Haihui Ye, and Si Yingsong
- Subjects
Cardiotocography ,Computer science ,Health Informatics ,Pattern recognition ,Heart Rate, Fetal ,Signal ,Computer Science Applications ,Uterine Contraction ,Transmission (telecommunications) ,Feature (computer vision) ,Pregnancy ,Distortion ,Humans ,Female ,Artificial intelligence ,Smartphone ,Software ,Digitization ,Algorithms - Abstract
Background and Objective: Cardiotocography (CTG) is the most popular prenatal diagnostic examination and includes continuous monitoring of foetal heart rate (FHR, bpm) and uterine contraction (UC, mmHg) signals. Compared with CTG paper reports, digitized reports have better storage, transmission and retrieval capabilities, in addition to supporting assessment of foetal health. However, most existing digitization methods extract signals from paper reports with colour background grids and cannot completely extract signals from paper reports with binary background grids, which are widely used in clinical CTG monitoring. Moreover, existing digitization algorithms often neglect the image distortion caused by the imaging equipment. Methods: To overcome these drawbacks, a digitization method for CTG paper reports with binary background grids taken by smartphones is proposed in this paper. In the stage of removing the grid background, a region merger based on super-pixels and an improved binary line mask removal are designed. Signal extraction is then performed separately according to the different states of each image column. A projection map used to synchronize the signal removes the distortion introduced by the mobile phone. Results: The experimental results show that the average correlation coefficient (ρ) between the signal recovered by the proposed method and the reference signal is 0.9855 ± 0.0108 for FHR and 0.9866 ± 0.1020 for UC; the root mean square errors (RMSE) are 1.0366 ± 0.4953 for FHR and 2.0355 ± 1.0246 for UC; and the mean absolute errors (MAE) are 0.8735 ± 0.0684 for FHR and 1.4991 ± 0.2837 for UC, results that outperform the existing digitization methods. Compared with clinical signals, no significant difference is found in the features of the digitized CTG.
Conclusion: The proposed digitization method is a promising tool for digitizing CTG signals.
- Published
- 2019
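The record above validates its digitization against reference signals with ρ, RMSE and MAE. For reference, these three metrics have standard definitions (illustrative sketch, not the authors' evaluation code):

```python
import math

def rmse(x, y):
    """Root mean square error between two equal-length signals."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))

def mae(x, y):
    """Mean absolute error between two equal-length signals."""
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

def pearson(x, y):
    """Pearson correlation coefficient (the ρ reported in the record)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A good digitization shows ρ close to 1 and RMSE/MAE close to 0 against the reference trace, which is how the figures in the abstract should be read.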
17. Canadian Association of Radiologists White Paper on De-Identification of Medical Imaging: Part 1, General Principles
- Author
-
Khaled El-Emam, Rebecca Bromwich, Emil Lee, Casey Hurrell, Andrea Lum, Mark Cicero, Bruce Gray, Jacob L. Jaremko, Lori Sheremeta, An Tang, Benoit Desjardins, William Parker, Flavie Lavoie-Cardinal, Marleine Azar, and Caroline Reinhold
- Subjects
Diagnostic Imaging ,Canada ,Data management ,Best practice ,Lifelong learning ,Internet privacy ,Big data ,Machine Learning ,Artificial Intelligence ,Data Anonymization ,Health care ,Radiologists ,Medicine ,Humans ,Radiology, Nuclear Medicine and imaging ,Pseudonymization ,Societies, Medical ,De-identification ,General Medicine ,Data sharing ,Algorithms - Abstract
The application of big data, radiomics, machine learning, and artificial intelligence (AI) algorithms in radiology requires access to large data sets containing personal health information. Because machine learning projects often require collaboration between different sites or data transfer to a third party, precautions are required to safeguard patient privacy. Safety measures are required to prevent inadvertent access to and transfer of identifiable information. The Canadian Association of Radiologists (CAR) is the national voice of radiology committed to promoting the highest standards in patient-centered imaging, lifelong learning, and research. The CAR has created an AI Ethical and Legal standing committee with the mandate to guide the medical imaging community in terms of best practices in data management, access to health care data, de-identification, and accountability practices. Part 1 of this article will inform CAR members on principles of de-identification, pseudonymization, encryption, direct and indirect identifiers, k-anonymization, risks of reidentification, implementations, data set release models, and validation of AI algorithms, with a view to developing appropriate standards to safeguard patient information effectively.
- Published
- 2020
18. AI GODS, JEANS GODS, AND THRIFT GODS: RESPONDING TO RESPONSES TO THE BLESSED BY THE ALGORITHM PAPER (SINGLER 2020).
- Author
-
Singler, Beth
- Subjects
GODS ,ARTIFICIAL intelligence ,ALGORITHMS ,THRIFT institutions - Published
- 2023
- Full Text
- View/download PDF
19. Short Keynote Paper: Mainstreaming Personalized Healthcare-Transforming Healthcare Through New Era of Artificial Intelligence
- Author
-
P W B Nanayakkara, Ketan Paranjape, Michiel Schinkel, Internal medicine, ACS - Diabetes & metabolism, APH - Quality of Care, APH - Digital Health, Center of Experimental and Molecular Medicine, and Graduate School
- Subjects
Lung Neoplasms ,Digital era ,Wearable computer ,Mainstreaming ,03 medical and health sciences ,0302 clinical medicine ,Deep Learning ,Health Information Management ,Artificial Intelligence ,Sepsis ,Health care ,Mainstream ,Humans ,Electrical and Electronic Engineering ,Precision Medicine ,030304 developmental biology ,0303 health sciences ,Modalities ,business.industry ,Liability ,Artificial Intelligence (AI) ,Genomics ,Computer Science Applications ,machine learning ,ComputingMethodologies_PATTERNRECOGNITION ,030220 oncology & carcinogenesis ,personalized healthcare ,Artificial intelligence ,Personalized medicine ,business ,Algorithms ,Medical Informatics ,Biotechnology - Abstract
Medicine has entered the digital era, driven by data from new modalities, especially genomics and imaging, as well as new sources such as wearables and the Internet of Things. As we gain a deeper understanding of disease biology and how diseases affect an individual, we are developing targeted therapies to personalize treatments. Technologies like Artificial Intelligence (AI) are needed to support predictions for personalized treatments. In order to mainstream AI in healthcare, we will need to address issues such as explainability, liability, and privacy. Developing explainable algorithms and including AI training in medical education are among the solutions that can help alleviate these concerns.
- Published
- 2020
20. A Review on Federated Learning and Machine Learning Approaches: Categorization, Application Areas, and Blockchain Technology.
- Author
-
Ogundokun, Roseline Oluwaseun, Misra, Sanjay, Maskeliunas, Rytis, and Damasevicius, Robertas
- Subjects
BLOCKCHAINS ,ARTIFICIAL intelligence ,MACHINE learning ,CONFERENCE papers ,ALGORITHMS ,SCIENCE publishing - Abstract
Federated learning (FL) is a scheme in which several consumers work collectively to solve machine learning (ML) problems, with a central collector synchronizing the procedure. This design also allows the training data to remain distributed, guaranteeing that each individual device's data stay private. The paper systematically reviewed the available literature using the Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) guidelines. The study presents a systematic review of applicable ML approaches for FL, reviews the categorization of FL, discusses the FL application areas, presents the relationship between FL and Blockchain Technology (BT), and discusses some existing literature that has used FL and ML approaches. The inclusion criteria were studies (i) published between 2017 and 2021, (ii) written in English, (iii) published in a peer-reviewed scientific journal, or (iv) released as preprints. Excluded from the review were unpublished studies, theses and dissertations, conference papers, papers not written in English, and papers that did not use artificial intelligence models and blockchain technology. In total, 84 eligible papers were examined in this study. In recent years, the amount of research on ML using FL has increased. Accuracy equivalent to standard feature-based techniques has been attained, and ensembles of many algorithms may yield even better results. We discovered that the best results were obtained from the hybrid design of an ML ensemble employing expert features. However, some additional difficulties and issues remain to be overcome, such as efficiency, complexity, and smaller datasets. In addition, novel FL applications should be investigated from the standpoint of the datasets and methodologies. [ABSTRACT FROM AUTHOR]
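The core FL mechanism surveyed here (clients train locally, and a central collector merges only model parameters, never raw data) can be sketched as a FedAvg-style weighted average. The client weights and sizes below are illustrative values, not data from any reviewed study.

```python
def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: average each parameter across clients,
    weighted by how many training examples each client holds."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two clients with unequal data volumes: the larger client dominates.
w_a, w_b = [1.0, 0.0], [0.0, 1.0]
print(federated_average([w_a, w_b], [30, 10]))  # [0.75, 0.25]
```

In a full FL round this average would be broadcast back to the clients as the new shared model before the next local training pass.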
- Published
- 2022
- Full Text
- View/download PDF
21. Construction of Personalized Learning Platform Based on Collaborative Filtering Algorithm.
- Author
-
Zhang, Qian
- Subjects
ARTIFICIAL intelligence ,DATABASE design ,ALGORITHMS ,RECOMMENDER systems ,ELECTRONIC paper - Abstract
On the network service platform for vocational education, there are currently over 10,000 online courses, and learners face a challenge in selecting interesting courses from these vast resources. As educational informatization progresses, learners' urgent need for personalized learning is becoming more apparent. Personalized recommendation (PR) technology can support personalized learning and significantly increase learners' learning efficiency. In light of the current state and direction of smart classroom research at home and abroad, this paper studies the connotation and characteristics of the smart classroom and constructs a smart classroom model based on AI (artificial intelligence). The merits of a recommendation system are determined by the recommendation algorithm used by the PR system. This paper primarily focuses on developing a personalized learning platform based on the CF (collaborative filtering) algorithm, and on this foundation conducts system requirements analysis, database design, functional module design, implementation, and testing. Experiments are carried out to examine whether the optimized PR algorithm in the network learning platform is effective. [ABSTRACT FROM AUTHOR]
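As one hedged sketch of the CF idea behind such a platform (not the paper's actual implementation), a user-based recommender finds the most similar learner by cosine similarity over course ratings and suggests courses that learner rated highly; the learners and ratings below are invented.

```python
import math

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def recommend(ratings, user, k=1):
    """Recommend up to k courses the target user has not rated,
    taken from the highest ratings of the most similar other user."""
    others = {u: r for u, r in ratings.items() if u != user}
    best = max(others, key=lambda u: cosine(ratings[user], others[u]))
    candidates = [
        (score, i) for i, score in enumerate(ratings[best])
        if ratings[user][i] == 0 and score > 0
    ]
    return [i for score, i in sorted(candidates, reverse=True)[:k]]

# Rows: learners; columns: course ratings on a 1-5 scale (0 = unrated).
ratings = {
    "alice": [5, 4, 0, 0],
    "bob":   [5, 5, 4, 0],
    "carol": [1, 0, 0, 5],
}
print(recommend(ratings, "alice"))  # [2]: bob is most similar and rated course 2 highly
```

Production systems replace this brute-force neighbour search with matrix factorization or approximate nearest neighbours, but the similarity-then-transfer logic is the same.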
- Published
- 2022
- Full Text
- View/download PDF
22. Accuracy of three-dimensional, paper-based models generated using a low-cost, three-dimensional printer
- Author
-
Piotr Szymor, Marcin Kozakiewicz, and Raphael Olszewski
- Subjects
Rapid prototyping ,Models, Anatomic ,Paper ,medicine.medical_specialty ,Cone beam computed tomography ,Cephalometry ,Mandible ,Matrix (mathematics) ,DICOM ,Software ,Imaging, Three-Dimensional ,medicine ,Image Processing, Computer-Assisted ,Humans ,Point (geometry) ,Computer vision ,business.industry ,Paper based ,Cone-Beam Computed Tomography ,Surgery ,Otorhinolaryngology ,Printing, Three-Dimensional ,Automatic segmentation ,Computer-Aided Design ,Artificial intelligence ,Oral Surgery ,Anatomic Landmarks ,business ,Tooth ,Algorithms - Abstract
Our study aimed to determine the accuracy of a low-cost, paper-based 3D printer by comparing a dry human mandible to its corresponding three-dimensional (3D) model using a 3D measuring arm. One dry human mandible and its corresponding printed model were evaluated. The model was produced using DICOM data from cone beam computed tomography. The data were imported into Maxilim software, wherein automatic segmentation was performed, and the STL file was saved. These data were subsequently analysed, repaired, cut and prepared for printing with netfabb software. These prepared data were used to create a paper-based model of a mandible with an MCor Matrix 300 printer. Seventy-six anatomical landmarks were chosen and measured 20 times on the mandible and the model using a MicroScribe G2X 3D measuring arm. The distances between all the selected landmarks were measured and compared. Only landmarks with a point inaccuracy less than 30% were used in further analyses. The mean absolute difference for the selected 2016 measurements was 0.36 ± 0.29 mm. The mean relative difference was 1.87 ± 3.14%; however, the measurement length significantly influenced the relative difference. The accuracy of the 3D model printed using the paper-based, low-cost 3D Matrix 300 printer was acceptable. The average error was no greater than that measured with other types of 3D printers. The mean relative difference should not be considered the best way to compare studies. The point inaccuracy methodology proposed in this study may be helpful in future studies concerned with evaluating the accuracy of 3D rapid prototyping models.
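The two error metrics reported above (mean absolute difference in millimetres and mean relative difference in percent) follow the standard definitions and can be reproduced on toy numbers; the distances below are invented, not the study's measurements.

```python
def mean_absolute_difference(measured, reference):
    """Mean absolute difference between paired distance measurements (mm)."""
    return sum(abs(m - r) for m, r in zip(measured, reference)) / len(measured)

def mean_relative_difference(measured, reference):
    """Mean relative difference in percent: the same absolute error is
    diluted on longer distances, so the two metrics can disagree."""
    return 100 * sum(abs(m - r) / r for m, r in zip(measured, reference)) / len(measured)

model = [10.2, 50.4, 99.7]      # distances measured on the printed model
mandible = [10.0, 50.0, 100.0]  # same distances on the dry mandible
print(mean_absolute_difference(model, mandible))
print(mean_relative_difference(model, mandible))
```

Because the relative difference divides by the measurement length, a 0.3 mm error weighs ten times more on a 10 mm span than on a 100 mm span, which is the study's point about not comparing papers on mean relative difference alone.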
- Published
- 2014
23. Physics driven behavioural clustering of free-falling paper shapes.
- Author
-
Howison, Toby, Hughes, Josie, Giardina, Fabio, and Iida, Fumiya
- Subjects
PHYSICS ,SET functions ,MACHINE learning ,PHENOMENOLOGICAL theory (Physics) ,CONTINUUM mechanics - Abstract
Many complex physical systems exhibit a rich variety of discrete behavioural modes. Often, the system complexity limits the applicability of standard modelling tools. Hence, understanding the underlying physics of different behaviours and distinguishing between them is challenging. Although traditional machine learning techniques could predict and classify behaviour well, typically they do not provide any meaningful insight into the underlying physics of the system. In this paper we present a novel method for extracting physically meaningful clusters of discrete behaviour from limited experimental observations. This method obtains a set of physically plausible functions that both facilitate behavioural clustering and aid in system understanding. We demonstrate the approach on the V-shaped falling paper system, a new falling paper type system that exhibits four distinct behavioural modes depending on a few morphological parameters. Using just 49 experimental observations, the method discovered a set of candidate functions that distinguish behaviours with an error of 2.04%, while also aiding insight into the physical phenomena driving each behaviour. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
24. Ink-jet printout of radiographs on transparent film and glossy paper versus monitor display: an ROC analysis
- Author
-
Thomas Lambrecht, Dorothea Dagassan-Berndt, Bernd d'Hoedt, Frank Krummenauer, Ralf Schulze, and Sebastian Kühl
- Subjects
Paper ,Computer science ,Cephalometry ,Radiography ,Crt monitor ,Diagnostic system ,Grayscale ,Statistics, Nonparametric ,Computer vision ,General Dentistry ,Observer Variation ,Receiver operating characteristic ,Cathode Ray Tube ,business.industry ,Phantoms, Imaging ,Print media ,X-Ray Film ,Radiography, Dental, Digital ,Pair wise ,Observer Bias ,ROC Curve ,Data Display ,Printing ,Artificial intelligence ,business ,Algorithms - Abstract
The aim of this study was to compare the depiction of small grayscale contrasts in ink-jet printouts of digital radiographs on different print media with display on a CRT monitor. A CCD-based digital cephalometric image of a stepless aluminum wedge containing 50 bur holes of different depths was cut into 100 isometric images. Each image was printed on glossy paper and on transparent film by means of a high-resolution desktop inkjet printer at specific settings. The printed images were viewed under standardized conditions, and the perceptibility of the bur holes was evaluated and compared with the perceptibility on a 17-in CRT monitor. Thirty observers stated their blinded decision on a five-point confidence scale. Areas (Az) under receiver operating characteristic curves were calculated and compared using pairwise sign tests. Overall agreement was estimated using Cohen's kappa statistic and observer bias using McNemar's test. Glossy paper prints and monitor display revealed significantly higher (P
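The Az statistic used in this study equals the probability that a randomly chosen signal image receives a higher confidence rating than a randomly chosen noise image (the Mann-Whitney interpretation of ROC area). The five-point ratings below are invented, not the study's data.

```python
def roc_auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    count signal/noise pairs where the signal case is rated higher,
    with ties counted as half a win."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Five-point confidence ratings, as used by the 30 observers.
present = [5, 4, 4, 3, 5]  # images with a bur hole
absent  = [2, 3, 1, 2, 4]  # images without one
print(roc_auc(present, absent))  # 0.9 for these ratings
```

An Az of 0.5 means the ratings carry no diagnostic information; 1.0 means perfect separation of signal from noise.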
- Published
- 2009
25. Ten simple rules for structuring papers
- Author
-
Konrad P. Kording and Brett D. Mensh
- Subjects
0301 basic medicine ,Science and Technology Workforce ,Economics ,Computer science ,Writing ,Social Sciences ,Careers in Research ,Structuring ,Subject matter ,0302 clinical medicine ,Documentation ,Sociology ,Medicine and Health Sciences ,Biology (General) ,Simple (philosophy) ,Grammar ,Ecology ,Careers ,Experimental Design ,Cell Differentiation ,Professions ,Editorial ,Computational Theory and Mathematics ,Research Design ,Modeling and Simulation ,Physical Sciences ,Periodicals as Topic ,Algorithms ,Career development ,Employment ,QH301-705.5 ,Science Policy ,Materials by Structure ,Process (engineering) ,Science ,Materials Science ,Context (language use) ,Patient Advocacy ,Research and Analysis Methods ,Crystals ,03 medical and health sciences ,Cellular and Molecular Neuroscience ,Scientific writing ,Genetics ,Syntax ,Set (psychology) ,Molecular Biology ,Ecology, Evolution, Behavior and Systematics ,Syntax (programming languages) ,business.industry ,Research ,Biology and Life Sciences ,Linguistics ,Data science ,Communications ,Health Care ,030104 developmental biology ,Reading ,Labor Economics ,People and Places ,Scientists ,Population Groupings ,Artificial intelligence ,business ,030217 neurology & neurosurgery ,Developmental Biology - Abstract
Good scientific writing is essential to career development and to the progress of science. A well-structured manuscript allows readers and reviewers to get excited about the subject matter, to understand and verify the paper’s contributions, and to integrate these contributions into a broader context. However, many scientists struggle with producing high-quality manuscripts and typically get little training in paper writing. Focusing on how readers consume information, we present a set of 10 simple rules to help you get across the main idea of your paper. These rules are designed to make your paper more influential and the process of writing more efficient and pleasurable.
- Published
- 2016
26. An enhanced memetic differential evolution in filter design for defect detection in paper production
- Author
-
Ville Tirronen, Kirsi Majava, Ferrante Neri, Tommi Kärkkäinen, and Tuomo Rossi
- Subjects
Paper ,Quality Control ,Mathematical optimization ,Population ,Evolutionary algorithm ,multimeme algorithms ,digital filter design ,Artificial Intelligence ,Image Interpretation, Computer-Assisted ,FIR filter ,Humans ,Industry ,Local search (optimization) ,Computer Simulation ,memetic algorithms ,education ,Metaheuristic ,Mathematics ,Probability ,edge detection ,education.field_of_study ,Electronic Data Processing ,Stochastic Processes ,Models, Statistical ,business.industry ,differential evolution ,paper production ,Models, Theoretical ,Computational Mathematics ,Filter design ,Differential evolution ,Simulated annealing ,Memetic algorithm ,business ,Algorithms ,Software - Abstract
This article proposes an Enhanced Memetic Differential Evolution (EMDE) for designing digital filters that aim to detect defects in the paper produced during an industrial process. Defect detection is handled by means of two Gabor filters, and their design is performed by the EMDE. The EMDE is a novel adaptive evolutionary algorithm which combines the powerful explorative features of Differential Evolution with the exploitative features of three local search algorithms employing different pivot rules and neighborhood generating functions: the Hooke-Jeeves algorithm, a Stochastic Local Search, and Simulated Annealing. The local search algorithms are adaptively coordinated by means of a control parameter that measures fitness distribution among individuals of the population and a novel probabilistic scheme. Numerical results confirm that Differential Evolution is an efficient evolutionary framework for the image processing problem under investigation and show that the EMDE performs well: its application leads to the design of an efficiently tailored filter. A comparison with various popular metaheuristics proves the effectiveness of the EMDE in terms of convergence speed, stagnation prevention, and capability in detecting solutions having high performance.
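The Differential Evolution core that the EMDE extends can be sketched as a plain DE/rand/1/bin loop. The version below minimizes a toy sphere function rather than the article's Gabor-filter design objective, the parameter values (F = 0.8, CR = 0.9) are conventional defaults rather than the EMDE's settings, and the memetic local-search stages are omitted.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=1):
    """Minimal DE/rand/1/bin: mutate with a scaled difference vector,
    apply binomial crossover, and keep the better of parent and trial."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # Three distinct vectors, none of them the current parent.
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = [
                a[d] + F * (b[d] - c[d]) if (rng.random() < CR or d == j_rand)
                else pop[i][d]
                for d in range(dim)
            ]
            trial = [min(max(x, lo), hi) for x, (lo, hi) in zip(trial, bounds)]
            if f(trial) <= f(pop[i]):  # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)
best = differential_evolution(sphere, [(-5, 5)] * 2)
print(best, sphere(best))
```

The EMDE augments exactly this loop: after selection, individuals are occasionally refined by one of the three local searchers, picked according to the population's fitness distribution.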
- Published
- 2008
27. FDA Releases Two Discussion Papers to Spur Conversation about Artificial Intelligence and Machine Learning in Drug Development and Manufacturing.
- Subjects
ARTIFICIAL intelligence ,MACHINE learning ,DRUG factories ,DRUG development ,RECOMBINANT proteins - Abstract
The regulatory uses are real: In 2021, more than 100 drug and biologic applications submitted to the FDA included AI/ML components. Keywords: Algorithms; Artificial Intelligence; Bioengineering; Biologics; Biotechnology; Cybersecurity; Cyborgs; Drug Development; Drug Manufacturing; Drugs and Therapies; Emerging Technologies; FDA; Genetic Engineering; Genetically-Engineered Proteins; Government Agencies Offices and Entities; Health and Medicine; Machine Learning; Office of the FDA Commissioner; Public Health; Technology; U.S. Food and Drug Administration. 2023 MAY 22 (NewsRx) -- By a News Reporter-Staff News Editor at Clinical Trials Week -- By: Patrizia Cavazzoni, M.D., Director of the Center for Drug Evaluation and Research. Artificial intelligence (AI) and machine learning (ML) are no longer futuristic concepts; they are now part of how we live and work. [Extracted from the article]
- Published
- 2023
28. Design Characteristics of Studies Reporting the Performance of Artificial Intelligence Algorithms for Diagnostic Analysis of Medical Images: Results from Recently Published Papers
- Author
-
Kyung Won Kim, Hye Young Jang, Seong Ho Park, Dongwook Kim, and Youngbin Shin
- Subjects
Databases, Factual ,MEDLINE ,Design characteristics ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Artificial Intelligence ,Diagnostic analysis ,Machine learning ,Image Processing, Computer-Assisted ,Humans ,Medicine ,Appropriateness ,Radiology, Nuclear Medicine and imaging ,Internal validation ,Accuracy ,business.industry ,Significant difference ,Deep learning ,Clinical validation ,Study design ,Quality ,Clinical trial ,Meta-analysis ,Case-Control Studies ,030220 oncology & carcinogenesis ,Systematic review ,Original Article ,Artificial intelligence ,business ,Algorithm ,Algorithms ,Cohort study - Abstract
Objective: To evaluate the design characteristics of studies that evaluated the performance of artificial intelligence (AI) algorithms for the diagnostic analysis of medical images. Materials and Methods: PubMed MEDLINE and Embase databases were searched to identify original research articles published between January 1, 2018 and August 17, 2018 that investigated the performance of AI algorithms that analyze medical images to provide diagnostic decisions. Eligible articles were evaluated to determine 1) whether the study used external validation rather than internal validation, and in case of external validation, whether the data for validation were collected 2) with a diagnostic cohort design instead of a diagnostic case-control design, 3) from multiple institutions, and 4) in a prospective manner. These are fundamental methodologic features recommended for clinical validation of AI performance in real-world practice. The studies that fulfilled the above criteria were identified, and we classified the publishing journals into medical vs. non-medical journal groups and compared the results between them. Results: Of 516 eligible published studies, only 6% (31 studies) performed external validation. None of the 31 studies adopted all three design features: diagnostic cohort design, the inclusion of multiple institutions, and prospective data collection for external validation. No significant difference was found between medical and non-medical journals. Conclusion: Nearly all of the studies published in the study period that evaluated the performance of AI algorithms for diagnostic analysis of medical images were designed as proof-of-concept technical feasibility studies and did not have the design features that are recommended for robust validation of the real-world clinical performance of AI algorithms.
- Published
- 2019
29. Path planning and collision avoidance for autonomous surface vehicles II: a comparative study of algorithms.
- Author
-
Vagale, Anete, Bye, Robin T., Oucheikh, Rachid, Osen, Ottar L., and Fossen, Thor I.
- Subjects
PROBLEM solving ,ALGORITHMS ,COLLISIONS at sea ,AUTONOMOUS vehicles ,COMPARATIVE studies ,ARTIFICIAL intelligence ,EVOLUTIONARY algorithms - Abstract
Artificial intelligence is an enabling technology for autonomous surface vehicles, with methods such as evolutionary algorithms, artificial potential fields, fast marching methods, and many others becoming increasingly popular for solving problems such as path planning and collision avoidance. However, there currently is no unified way to evaluate the performance of different algorithms, for example with regard to safety or risk. This paper is a step in that direction and offers a comparative study of current state-of-the art path planning and collision avoidance algorithms for autonomous surface vehicles. Across 45 selected papers, we compare important performance properties of the proposed algorithms related to the vessel and the environment it is operating in. We also analyse how safety is incorporated, and what components constitute the objective function in these algorithms. Finally, we focus on comparing advantages and limitations of the 45 analysed papers. A key finding is the need for a unified platform for evaluating and comparing the performance of algorithms under a large set of possible real-world scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
30. Clinical Pearl: The Clinical Relevance of Neonatal Informatics.
- Author
-
Falciglia, Gustave H., Hageman, Joseph R., Hussain, Walid, Alkureishi, Lolita Alcocer, Shah, Kshama, and Goldstein, Mitchell
- Subjects
MEDICAL logic ,CRITICALLY ill ,PATIENTS ,ARTIFICIAL intelligence ,NEONATAL intensive care units ,ACUTE kidney failure in children ,COMPUTER science ,NEONATAL intensive care ,HOSPITAL nurseries ,INFORMATION science ,ELECTRONIC health records ,WATER-electrolyte balance (Physiology) ,QUALITY assurance ,ALGORITHMS ,CHILDREN - Abstract
The article focuses on the importance of clinical informatics in neonatal care, highlighting its potential to provide critical resources for clinicians. Topics include the specialized data needed for neonatal care, the challenges in transitioning from paper to electronic health records, and the impact of informatics on real-time patient management and research.
- Published
- 2024
31. USING EVOLUTIONARY ALGORITHMS TO OPTIMIZE ANTHROPOGENIC MATERIAL STREAMS.
- Author
-
Pollmann, Olaf
- Subjects
ALGORITHMS ,ALGEBRA ,ARTIFICIAL intelligence ,INTELLIGENT agents ,MACHINE theory - Abstract
To optimize anthropogenic material streams, the production process, as well as the quality of the products, must be known. With knowledge of these requirements, it is possible to apply additional algorithms, in this case evolutionary algorithms as a branch of artificial intelligence, to the optimization of these secondary material streams. The benefit of this application is the fast and precise calculation of the local and global optima of the optimization problem. This calculation method exploits the mechanisms of biological reproduction, mutation, selection, and recombination, to find one of the best results among a huge number of possible and potential results. For the use of secondary materials in paper production, it could be proven that despite high quotas of secondary materials in different paper classes, there are some paper classes in which the amount of secondary material could be raised without losing any quality. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
32. Algorithms for Liver Segmentation in Computed Tomography Scans: A Historical Perspective.
- Author
-
Niño, Stephanie Batista, Bernardino, Jorge, and Domingues, Inês
- Subjects
COMPUTED tomography ,IMAGE processing ,COMPUTER-assisted image analysis (Medicine) ,ARTIFICIAL intelligence ,ALGORITHMS ,IMAGE reconstruction algorithms - Abstract
Oncology has emerged as a crucial field of study in the domain of medicine. Computed tomography has gained widespread adoption as a radiological modality for the identification and characterisation of pathologies, particularly in oncology, enabling precise identification of affected organs and tissues. However, achieving accurate liver segmentation in computed tomography scans remains a challenge due to the presence of artefacts and the varying densities of soft tissues and adjacent organs. This paper compares artificial intelligence algorithms and traditional medical image processing techniques to assist radiologists in liver segmentation in computed tomography scans and evaluates their accuracy and efficiency. Despite notable progress in the field, the limited availability of public datasets remains a significant barrier to broad participation in research studies and replication of methodologies. Future directions should focus on increasing the accessibility of public datasets, establishing standardised evaluation metrics, and advancing the development of three-dimensional segmentation techniques. In addition, maintaining a collaborative relationship between technological advances and medical expertise is essential to ensure that these innovations not only achieve technical accuracy, but also remain aligned with clinical needs and realities. This synergy ensures their applicability and effectiveness in real-world healthcare environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Artificial Intelligence Algorithms for Healthcare.
- Author
-
Chumachenko, Dmytro and Yakovlev, Sergiy
- Subjects
ARTIFICIAL intelligence ,DEEP learning ,ALGORITHMS ,MACHINE learning ,INFORMATION technology ,MEDICAL care ,MOTION capture (Human mechanics) ,MEDICAL technology - Abstract
Artificial intelligence (AI) algorithms are playing a crucial role in transforming healthcare by enhancing the quality, accessibility, and efficiency of medical care, research, and operations. These algorithms enable healthcare providers to offer more accurate diagnoses, predict outcomes, and customize treatments to individual patient needs. AI also improves operational efficiency by automating routine tasks and optimizing resource management. However, there are challenges to adopting AI in healthcare, such as data privacy concerns and potential biases in algorithms. Collaboration among stakeholders is necessary to ensure ethical use of AI and its positive impact on the field. AI also has applications in medical research, preventive medicine, and public health. It is important to recognize that AI should augment, not replace, the expertise and compassionate care provided by healthcare professionals. The ethical implications and societal impact of AI in healthcare must be carefully considered, guided by fairness, transparency, and accountability principles. Several research papers in this special issue explore the application of AI algorithms in various aspects of healthcare, such as gait analysis for Parkinson's disease diagnosis, human activity recognition, heart disease prediction, compliance assessment with clinical protocols, epidemic management, neurological complications identification, fall prevention, leukemia diagnosis, and genetic clinical pathways. These studies demonstrate the potential of AI in improving medical diagnostics, patient monitoring, and personalized care. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
34. BioRAT: extracting biological information from full-length papers
- Author
-
Bernard F. Buxton, David Corney, David T. Jones, and William B. Langdon
- Subjects
Statistics and Probability ,Abstracting and Indexing ,Computer science ,media_common.quotation_subject ,Information Storage and Retrieval ,Documentation ,computer.software_genre ,Biochemistry ,User-Computer Interface ,Text mining ,Artificial Intelligence ,Quality (business) ,Biology ,Molecular Biology ,Natural Language Processing ,media_common ,Information retrieval ,business.industry ,Databases, Bibliographic ,Computer Science Applications ,Computational Mathematics ,Information extraction ,Vocabulary, Controlled ,Computational Theory and Mathematics ,Bibliometrics ,Database Management Systems ,Periodicals as Topic ,business ,computer ,Algorithms ,Software - Abstract
Motivation: Converting the vast quantity of free-format text found in journals into a concise, structured format makes the researcher's quest for information easier. Recently, several information extraction systems have been developed that attempt to simplify the retrieval and analysis of biological and medical data. Most of this work has used the abstract alone, owing to the convenience of access and the quality of data. Abstracts are generally available through central collections with easy direct access (e.g. PubMed). The full-text papers contain more information, but are distributed across many locations (e.g. publishers' web sites, journal web sites and local repositories), making access more difficult. In this paper, we present BioRAT, a new information extraction (IE) tool, specifically designed to perform biomedical IE, and which is able to locate and analyse both abstracts and full-length papers. BioRAT is a Biological Research Assistant for Text mining, and incorporates a document search ability with domain-specific IE. Results: We show first, that BioRAT performs as well as existing systems, when applied to abstracts; and second, that significantly more information is available to BioRAT through the full-length papers than via the abstracts alone. Typically, less than half of the available information is extracted from the abstract, with the majority coming from the body of each paper. Overall, BioRAT recalled 20.31% of the target facts from the abstracts with 55.07% precision, and achieved 43.6% recall with 51.25% precision on full-length papers. Availability: The software and documentation can be found at http://bioinf.cs.ucl.ac.uk/biorat
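The recall and precision figures quoted for BioRAT follow the standard definitions over sets of extracted versus target facts; a minimal sketch (the facts below are invented examples, not BioRAT output):

```python
def precision_recall(extracted, target):
    """Precision: fraction of extracted facts that are correct.
    Recall: fraction of target facts that were actually extracted."""
    extracted, target = set(extracted), set(target)
    tp = len(extracted & target)  # true positives: correct extractions
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(target) if target else 0.0
    return precision, recall

target = {"A binds B", "B inhibits C", "C activates D", "D binds E"}
extracted = {"A binds B", "B inhibits C", "X binds Y"}
p, r = precision_recall(extracted, target)
print(p, r)  # 2/3 precision, 2/4 recall
```

The abstract's point maps directly onto these definitions: running only on abstracts caps recall, because most target facts live in the body of the paper and can never appear in the extracted set.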
- Published
- 2004
35. Predicting translational progress in biomedical research.
- Author
-
Hutchins, B. Ian, Davis, Matthew T., Meseroll, Rebecca A., and Santangelo, George M.
- Subjects
MEDICAL research ,SCIENTIFIC community ,SCIENTIFIC discoveries ,MACHINE learning ,CLINICAL trials ,FALSE discovery rate ,THERAPEUTICS - Abstract
Fundamental scientific advances can take decades to translate into improvements in human health. Shortening this interval would increase the rate at which scientific discoveries lead to successful treatment of human disease. One way to accomplish this would be to identify which advances in knowledge are most likely to translate into clinical research. Toward that end, we built a machine learning system that detects whether a paper is likely to be cited by a future clinical trial or guideline. Despite the noisiness of citation dynamics, as little as 2 years of postpublication data yield accurate predictions about a paper's eventual citation by a clinical article (accuracy = 84%, F1 score = 0.56; compared to 19% accuracy by chance). We found that distinct knowledge flow trajectories are linked to papers that either succeed or fail to influence clinical research. Translational progress in biomedicine can therefore be assessed and predicted in real time based on information conveyed by the scientific community's early reaction to a paper. [ABSTRACT FROM AUTHOR]
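The gap between the reported accuracy (84%) and F1 score (0.56) is typical of imbalanced prediction tasks such as this one, where few papers are ever cited by clinical articles. A sketch with hypothetical confusion-matrix counts (not the study's data) shows how accuracy can sit well above F1:

```python
def accuracy_f1(tp, fp, fn, tn):
    """Accuracy and F1 from a binary confusion matrix. With rare positives,
    accuracy is inflated by the many true negatives, which F1 ignores."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, f1

# Hypothetical counts for an imbalanced corpus of 1000 papers.
acc, f1 = accuracy_f1(tp=70, fp=60, fn=50, tn=820)
print(acc, f1)  # accuracy 0.89, F1 roughly 0.56
```

A classifier that simply predicted "never cited" for every paper would score 88% accuracy here but an F1 of zero, which is why both metrics are reported.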
- Published
- 2019
- Full Text
- View/download PDF
36. Learning and decision making in monkeys during a rock–paper–scissors game
- Author
-
Daeyeol Lee, Benjamin P. McGreevy, and Dominic J. Barraclough
- Subjects
Male ,Process (engineering) ,Entropy ,Cognitive Neuroscience ,Decision Making ,Experimental and Cognitive Psychology ,Models, Psychological ,Outcome (game theory) ,Behavioral Neuroscience ,Strategy ,Animals ,Learning ,Reinforcement learning ,Set (psychology) ,Probability ,Behavior, Animal ,business.industry ,Macaca mulatta ,Games, Experimental ,Zero-sum game ,Artificial intelligence ,business ,Psychology ,Game theory ,Algorithms ,Choice sequence ,Cognitive psychology - Abstract
Game theory provides a solution to the problem of finding a set of optimal decision-making strategies in a group. However, people seldom play such optimal strategies, and instead adjust their strategies based on their experience. Accordingly, many theories postulate a set of variables related to the probabilities of choosing various strategies and describe how such variables are dynamically updated. In reinforcement learning, these value functions are updated based on the outcome of the player's choice, whereas belief learning allows the value functions of all available choices to be updated according to the choices of other players. We investigated the nature of the learning process in monkeys playing a competitive game with ternary choices, using a rock-paper-scissors game. During the baseline condition, in which the computer selected its targets randomly, each animal displayed biases towards some targets. When the computer exploited the pattern of the animal's choice sequence but not its reward history, the animal's choice was still systematically biased by the previous choice of the computer. This bias was reduced when the computer exploited both the choice and reward histories of the animal. Compared to simple models of reinforcement learning or belief learning, these adaptive processes were better described by a model that incorporated the features of both. These results suggest that stochastic decision-making strategies in primates during social interactions might be adjusted according to both actual and hypothetical payoffs.
- Published
- 2005
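The hybrid model described in the abstract above updates the chosen action from its actual payoff (reinforcement learning) and the unchosen actions from their hypothetical payoffs (belief learning). A minimal sketch of such an update, where the learning rate `alpha`, the discount `delta`, and the two-parameter form are illustrative and not the paper's fitted model:

```python
ACTIONS = ["rock", "paper", "scissors"]

def payoff(mine, theirs):
    """Standard zero-sum rock-paper-scissors payoff for the first player."""
    if mine == theirs:
        return 0.0
    wins = {("rock", "scissors"), ("paper", "rock"), ("scissors", "paper")}
    return 1.0 if (mine, theirs) in wins else -1.0

def hybrid_update(values, my_choice, opp_choice, alpha=0.1, delta=0.5):
    """The chosen action is updated from its actual payoff (reinforcement
    learning); unchosen actions are updated from the hypothetical payoff they
    would have earned against the opponent's choice (belief learning),
    discounted by delta."""
    for a in ACTIONS:
        r = payoff(a, opp_choice)  # actual or hypothetical payoff
        weight = alpha if a == my_choice else alpha * delta
        values[a] += weight * (r - values[a])
    return values

values = {a: 0.0 for a in ACTIONS}
hybrid_update(values, my_choice="rock", opp_choice="scissors")
print(values)
```

With `delta=0` this reduces to pure reinforcement learning; with `delta=1` all actions learn equally from the opponent's choice, as in belief learning.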
37. Normalised fuzzy index for research ranking.
- Author
-
Hedar, Abdel-Rahman, Abdel-Hakima, Alaa, and Alotaibi, Youseef
- Subjects
ALGORITHMS ,ARTIFICIAL intelligence ,BIBLIOMETRICS ,IMMUNOLOGY ,RESEARCH methodology ,MOLECULAR biology ,SERIAL publications ,BIBLIOGRAPHIC databases ,STRUCTURAL equation modeling ,ACQUISITION of data ,DESCRIPTIVE statistics ,MANN Whitney U Test - Abstract
There is great interest in designing research metrics and indices to measure research impact in research institutes. Unfortunately, most of those indices ignore critical design issues, e.g. the disparity between domains, the impact of the journals or conferences in which papers are published, normalising the range of the index values to certain intervals, and the scalability of using the index to rank different research entities. In this paper, a new normalised fuzzy index (NFindex) is proposed as a fuzzy-based research impact metric. The proposed index is a scalable index whose values are normalised to percentage levels. NFindex achieves both inter-discipline normalisation and intra-discipline consistency. The capability of NFindex to achieve inter-discipline normalisation enables fair comparison between different research domains regardless of their nature in terms of influence and contribution to other research areas, e.g. natural science. Therefore, NFindex gives a universal normalised single-number metric that can be used by research institutes to solve the problem of inter-discipline scholar ranking. Moreover, it can help universal ranking of universities and research institutes according to their research capabilities and impacts. The obtained results, on diverse research areas, prove the potential of NFindex in terms of both intra-discipline consistency and inter-discipline normalisation. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
38. Data Mining Algorithm Based on Fusion Computer Artificial Intelligence Technology.
- Author
-
Yingqian Bai, Kepeng Bao, and Tao Xu
- Subjects
ARTIFICIAL intelligence ,DATA mining ,ALGORITHMS ,DISTRIBUTED databases ,ENTROPY (Information theory) - Abstract
INTRODUCTION: The paper constructs a massive data mining model of distributed spatiotemporal databases for the Internet of Things. A homologous data fusion method based on information entropy is then proposed. The storage space required by the tree structure is reduced by constructing the data schema tree of the merged data set. Secondly, the optimal dynamic support degree is obtained by using a neural network and a genetic algorithm. Frequent items in the Internet of Things data are mined to achieve the normalization of the clustered feature data based on the threshold value. Experiments show that the F-measure of the data mining algorithm improves efficiency by 15.64% and 18.25%, respectively, compared with other methods in the literature, and the RI increased by 21.17% and 26.07%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
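The entropy-based fusion described in the abstract above can be illustrated with Shannon entropy over homologous data streams; the weighting rule below (more weight to the lower-entropy, more consistent source) is a common convention, not necessarily the paper's exact method, and the sensor readings are hypothetical:

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (bits) of the empirical distribution of a sequence."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Two hypothetical homologous sensor streams reporting the same quantity.
stream_a = [20, 20, 21, 20, 20, 20]   # stable readings -> low entropy
stream_b = [18, 25, 20, 31, 16, 23]   # noisy readings  -> high entropy

h_a, h_b = shannon_entropy(stream_a), shannon_entropy(stream_b)

# Entropy-based weighting: the lower-entropy stream contributes more
# to the fused estimate of the underlying quantity.
w_a = (1 / (1 + h_a)) / ((1 / (1 + h_a)) + (1 / (1 + h_b)))
fused = w_a * (sum(stream_a) / len(stream_a)) + (1 - w_a) * (sum(stream_b) / len(stream_b))
print(round(h_a, 3), round(h_b, 3), round(fused, 2))
```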
39. Predicting Money Laundering Using Machine Learning and Artificial Neural Networks Algorithms in Banks.
- Author
-
Lokanan, Mark E.
- Subjects
ARTIFICIAL neural networks ,MONEY laundering ,MACHINE learning ,ALGORITHMS ,RANDOM forest algorithms - Abstract
This paper aims to build a machine learning model and a neural network model to detect the probability of money laundering in banks. The paper's data came from a simulation of actual transactions flagged for money laundering in Middle Eastern banks. The main findings highlight that criminal networks mainly use the integration stage to integrate money into the financial system. Fraudsters prefer to launder funds in the early morning hours, followed by the business day's afternoon time intervals. Additionally, the Naïve Bayes and Random Forest classifiers were identified as the two best-performing models for predicting bank money laundering transactions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Development of data reconstruction system of paper-recorded EEG: method and its evaluation
- Author
-
Gang Wang and Morikuni Takigawa
- Subjects
Electronic Data Processing ,Scanner ,medicine.diagnostic_test ,Computer science ,business.industry ,General Neuroscience ,Data reconstruction ,Digital data ,Electroencephalography ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,Computer analysis ,Microcomputer ,medicine ,Humans ,Computer vision ,Diagnosis, Computer-Assisted ,Neurology (clinical) ,Artificial intelligence ,business ,Algorithms - Abstract
A new system was developed by which paper-recorded EEG can be converted to computer-treatable digital data. It consists simply of an image scanner and a microcomputer. The system was evaluated and found to perform well in data reconstruction. It makes it possible to apply various computer analyses to EEGs recorded on paper.
- Published
- 1992
41. A selection of papers from MICCAI 2004: the marriage of data and prior information
- Author
-
Haynor, David (University of Washington [Seattle]); Barillot, Christian; and Hellier, Pierre (Vision, Action et Gestion d'informations en Santé (VisAGeS), INSERM – Inria Rennes – Bretagne Atlantique – IRISA, Université de Rennes, CNRS)
- Subjects
Diagnostic Imaging ,MESH: Subtraction Technique ,MESH: Diagnostic Imaging ,MESH: Congresses ,Publications ,MESH: Models, Biological ,MESH: Algorithms ,MESH: Publications ,Congresses as Topic ,Image Enhancement ,Models, Biological ,United States ,Surgery, Computer-Assisted ,Artificial Intelligence ,Subtraction Technique ,Image Interpretation, Computer-Assisted ,MESH: United States ,MESH: Artificial Intelligence ,MESH: Surgery, Computer-Assisted ,[SDV.IB]Life Sciences [q-bio]/Bioengineering ,MESH: Image Enhancement ,MESH: Image Interpretation, Computer-Assisted ,Algorithms
- Published
- 2005
42. Ten Simple Rules for Writing Research Papers
- Author
-
Weixiong Zhang
- Subjects
Research design ,Computer science ,Process (engineering) ,Writing ,Statistics as Topic ,Cellular and Molecular Neuroscience ,Genetics ,Mathematics education ,Humans ,lcsh:QH301-705.5 ,Molecular Biology ,Ecology, Evolution, Behavior and Systematics ,Simple (philosophy) ,Ecology ,business.industry ,Research ,Computational Biology ,Advice (programming) ,Professional writing ,Editorial ,lcsh:Biology (General) ,Computational Theory and Mathematics ,Research Design ,Publishing ,Modeling and Simulation ,Computer Science ,Artificial intelligence ,Periodicals as Topic ,Complement (linguistics) ,business ,Algorithms ,Natural language - Abstract
The importance of writing well can never be overstated for a successful professional career, and the ability to write solid papers is an essential trait of a productive researcher. Writing and publishing a paper has its own life cycle; properly following a course of action and avoiding missteps can be vital to the overall success not only of a paper but of the underlying research as well. Here, we offer ten simple rules for writing and publishing research papers. As a caveat, this essay is not about the mechanics of composing a paper, much of which has been covered elsewhere, e.g., [1], [2]. Rather, it is about the principles and attitude that can help guide the process of writing in particular and research in general. In this regard, some of the discussion will complement, extend, and refine some advice given in early articles of this Ten Simple Rules series of PLOS Computational Biology [3]–[8].
- Published
- 2014
43. Tissue deformation and shape models in image-guided interventions: a discussion paper
- Author
-
Jamie R. McClelland, JM Blackall, Graeme P. Penney, Carolyn S. K. Chan, Derek L. G. Hill, Dean C. Barratt, Philip J. Edwards, David J. Hawkes, and Kawal Rhode
- Subjects
Computer science ,Movement ,Anatomical structures ,Soft tissue deformation ,Image registration ,Health Informatics ,Models, Biological ,Image Interpretation, Computer-Assisted ,Radiology, Nuclear Medicine and imaging ,Computer vision ,Computer Simulation ,Intraoperative imaging ,Tissue deformation ,Radiological and Ultrasound Technology ,business.industry ,Rigid body ,Image Enhancement ,Computer Graphics and Computer-Aided Design ,Elasticity ,Visualization ,Surgery, Computer-Assisted ,Connective Tissue ,Subtraction Technique ,Image guided interventions ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Algorithms - Abstract
This paper promotes the concept of active models in image-guided interventions. We outline the limitations of the rigid body assumption in image-guided interventions and describe how intraoperative imaging provides a rich source of information on spatial location of anatomical structures and therapy devices, allowing a preoperative plan to be updated during an intervention. Soft tissue deformation and variation from an atlas to a particular individual can both be determined using non-rigid registration. Established methods using free-form deformations have a very large number of degrees of freedom. Three examples of deformable models--motion models, biomechanical models and statistical shape models--are used to illustrate how prior information can be used to restrict the number of degrees of freedom of the registration algorithm and thus provide active models for image-guided interventions. We provide preliminary results from applications for each type of model.
- Published
- 2005
44. An Innovative K-Anonymity Privacy-Preserving Algorithm to Improve Data Availability in the Context of Big Data.
- Author
-
Linlin Yuan, Tiantian Zhang, Yuling Chen, Yuxiang Yang, and Huang Li
- Subjects
BIG data ,GREEDY algorithms ,INFORMATION theory ,ALGORITHMS ,ARTIFICIAL intelligence ,STATISTICS ,BLOCKCHAINS - Abstract
The development of technologies such as big data and blockchain has brought convenience to life, but at the same time, privacy and security issues are becoming more and more prominent. The K-anonymity algorithm is an effective privacy-preserving algorithm of low computational complexity that can safeguard users' privacy by anonymizing big data. However, the algorithm currently focuses only on improving user privacy while ignoring data availability. In addition, ignoring the impact of quasi-identifier attributes on sensitive attributes reduces the usability of the processed data for statistical analysis. Based on this, we propose a new K-anonymity algorithm to solve the privacy security problem in the context of big data while guaranteeing improved data usability. Specifically, we construct a new information loss function based on information quantity theory. Considering that different quasi-identifier attributes have different impacts on sensitive attributes, we set weights for each quasi-identifier attribute when designing the information loss function. In addition, to reduce information loss, we improve K-anonymity in two ways. First, we make the loss of information smaller than in the original table, while guaranteeing privacy, based on common artificial intelligence algorithms, i.e., the greedy algorithm and the 2-means clustering algorithm. Second, we improve the 2-means clustering algorithm by designing a mean-center method to select the initial center of mass. Meanwhile, we design the K-anonymity algorithm of this scheme based on the constructed information loss function, the improved 2-means clustering algorithm, and the greedy algorithm, which reduces information loss. Finally, we experimentally demonstrate the effectiveness of the algorithm in improving the effect of 2-means clustering and reducing information loss. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
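The k-anonymity property that the abstract above builds on is easy to state operationally: every combination of quasi-identifier values must be shared by at least k records. A minimal sketch of a checker, with toy records and generalised attribute values chosen for illustration:

```python
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    """A table is k-anonymous if every combination of quasi-identifier
    values is shared by at least k rows."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(count >= k for count in groups.values())

# Toy records with generalised quasi-identifiers (age bracket, ZIP prefix).
records = [
    {"age": "30-39", "zip": "476**", "diagnosis": "flu"},
    {"age": "30-39", "zip": "476**", "diagnosis": "cold"},
    {"age": "40-49", "zip": "479**", "diagnosis": "flu"},
    {"age": "40-49", "zip": "479**", "diagnosis": "asthma"},
]

print(is_k_anonymous(records, ["age", "zip"], k=2))  # True
print(is_k_anonymous(records, ["age", "zip"], k=3))  # False
```

The paper's contribution lies in *how* rows are generalised into such groups while minimising a weighted information loss; the check above only verifies the resulting anonymity guarantee.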
45. New paper explores Insilico Medicine's generative AI drug design platform Chemistry42.
- Subjects
ARTIFICIAL intelligence ,DRUG design ,GENERATIVE adversarial networks - Published
- 2023
46. Automatic construction of knowledge base from biological papers
- Author
-
Ohta, Y., Yamamoto, Y., Okazaki, T., Uchiyama, I., and Takagi, T.
- Subjects
Computer Communication Networks ,Vocabulary, Controlled ,Artificial Intelligence ,Publications ,Cluster Analysis ,Dictionaries as Topic ,Humans ,Information Storage and Retrieval ,Bayes Theorem ,Models, Theoretical ,Biology ,Algorithms - Abstract
We designed a system that acquires domain-specific knowledge from human-written biological papers, which we call IFBP (Information Finding from Biological Papers). IFBP is divided into three phases: Information Retrieval (IR), Information Extraction (IE) and Dictionary Construction (DC). We propose a query modification method using an automatically constructed thesaurus for IR and a statistical keyword prediction method for IE. A dictionary of domain-specific terms, which is one of the central knowledge sources for the task of knowledge acquisition, is also constructed automatically in the DC phase. IFBP is currently used for constructing the Transcription Factor DataBase (TFDB) and shows good performance. Since the model of knowledge base construction adopted in IFBP is carried out entirely automatically, the system can be easily ported across domains.
- Published
- 1997
47. Artificial Intelligence and Machine Learning.
- Author
-
Muthuraj and Singla, Shrutika
- Subjects
BIOLOGICAL evolution ,REINFORCEMENT (Psychology) ,DATA security ,ARTIFICIAL intelligence ,NATURAL language processing ,DEEP learning ,ARTIFICIAL neural networks ,MACHINE learning ,ALGORITHMS ,USER interfaces - Abstract
Artificial Intelligence (AI) and Machine Learning (ML) have rapidly gained prominence as transformative technologies with immense potential to revolutionize various industries and domains. This research paper presents a comprehensive review of AI and ML, encompassing their fundamental concepts, techniques, and applications. Additionally, it explores recent advancements in the field and offers valuable insights into the future prospects of AI and ML. The paper discusses the historical evolution of AI, the different approaches to AI development, and the components that constitute AI systems. Furthermore, it delves into the core concepts and algorithms of ML, including supervised, unsupervised, and reinforcement learning, as well as the advent of deep learning and neural networks. The applications of AI and ML across diverse domains such as natural language processing, computer vision, healthcare, and finance are also discussed. Recent advancements, such as transfer learning, generative adversarial networks, explainable AI, and federated learning, are highlighted, along with the challenges and limitations faced by these technologies, such as ethical concerns, data quality issues, and interpretability challenges. The paper concludes by presenting future perspectives, including the integration of AI with other technologies, advancements in human-computer interaction, and the impact of quantum computing on ML. This research emphasizes the importance of ongoing research and development in AI and ML and the need to address ethical, security, and interpretability considerations for responsible and beneficial implementation in society. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
48. A Real-Time Olive Fruit Detection for Harvesting Robot Based on YOLO Algorithms.
- Author
-
Aljaafreh, Ahmad, Elzagzoug, Ezzaldeen Y., Abukhait, Jafar, Soliman, Abdel-Hamid, Alja'Afreh, Saqer S., Sivanathan, Aparajithan, and Hughes, James
- Subjects
ARTIFICIAL neural networks ,OLIVE ,FRUIT harvesting ,OBJECT recognition (Computer vision) ,ARTIFICIAL intelligence ,ALGORITHMS - Abstract
Deep neural network models have become powerful tools of machine learning and artificial intelligence. They can approximate functions and dynamics by learning from examples. This paper reviews the state of the art of deep learning-based object detection frameworks that are used for fruit detection in general and for olive fruit in particular. A dataset of olive fruit on the tree is built to train and evaluate deep models. The ultimate goal of this work is the capability of on-edge real-time olive fruit detection on the tree from digital videos. Recent work in deep neural networks has led to the development of a state-of-the-art object detector termed You Only Look Once version five (YOLOv5). This paper builds a dataset of 1.2 K source images of olive fruit on the tree and evaluates the latest object detection algorithms, focusing on variants of YOLOv5 and YOLOR. The results show that the new YOLOv5 network models are able to extract rich olive features from images and detect olive fruit with high precision, above 0.75 mAP_0.5. YOLOv5s performs better for real-time olive fruit detection on the tree than the other YOLOv5 variants and YOLOR. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
49. A Deep Learning-Based Programming and Creation Algorithm of NFT Artwork.
- Author
-
Wang, T.
- Subjects
DEEP learning ,GENERATIVE adversarial networks ,COMPUTER vision ,ALGORITHMS ,ARTIFICIAL intelligence ,IMAGE analysis - Abstract
In the field of computer vision, it is a very challenging task to use artificial intelligence deep learning methods to realize the programming and creation of NFT artwork. With the continuous development and improvement of deep learning technology, this task has become a reality. The generative adversarial network model used in deep learning can generate new images based on the extraction and analysis of image data features and has become an important tool for NFT artwork image generation. In order to better realize NFT artwork programming, this paper analyzes the working principle of the traditional adversarial generation method and then uses the StyleGAN model to edit higher-level attributes of the image, which can effectively control the style of the generated NFT artwork image. Then, in order to improve the quality of the generated images, this paper introduces a channel attention mechanism and a spatial attention mechanism to ensure that the generated images are more reasonable and realistic. Finally, a large number of experiments prove that the NFT artwork programming and creation algorithm based on artificial intelligence deep learning proposed in this paper can control the overall style of image generation according to the creator's needs, and the generated images have good detail and high visual quality. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
50. Taming Algorithmic Priority Inversion in Mission-Critical Perception Pipelines.
- Author
-
Liu, Shengzhong, Yao, Shuochao, Fu, Xinzhe, Tabish, Rohan, Yu, Simon, Bansal, Ayoosh, Yun, Heechul, Sha, Lui, and Abdelzaher, Tarek
- Subjects
ALGORITHMS ,SYSTEMS design ,CYBER physical systems ,COMPUTER scheduling ,ARTIFICIAL intelligence ,ARTIFICIAL neural networks ,FIRST in, first out (Queuing theory) - Abstract
The paper discusses algorithmic priority inversion in mission-critical machine inference pipelines used in modern neural-network-based perception subsystems and describes a solution to mitigate its effect. In general, priority inversion occurs in computing systems when computations that are "less important" are performed together with or ahead of those that are "more important." Significant priority inversion occurs in existing machine inference pipelines when they do not differentiate between critical and less critical data. We describe a framework to resolve this problem and demonstrate that it improves a perception system's ability to react to critical inputs, while at the same time reducing platform cost. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
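The priority inversion described in the abstract above arises when a perception pipeline processes inputs strictly FIFO, so critical data waits behind less critical data. A minimal sketch of criticality-aware dequeuing using a two-level heap; the criticality levels and item names are illustrative, not the paper's actual framework:

```python
import heapq
from itertools import count

class CriticalityQueue:
    """Dequeues more critical inputs first instead of processing FIFO,
    avoiding algorithmic priority inversion in the scheduling order."""
    def __init__(self):
        self._heap = []
        self._seq = count()  # tie-breaker preserves FIFO within a level

    def put(self, item, criticality):
        # Lower number = more critical, so it pops first.
        heapq.heappush(self._heap, (criticality, next(self._seq), item))

    def get(self):
        return heapq.heappop(self._heap)[2]

q = CriticalityQueue()
q.put("background-tile", criticality=2)
q.put("pedestrian-ahead", criticality=0)
q.put("lane-marking", criticality=1)
order = [q.get() for _ in range(3)]
print(order)
```

A plain FIFO queue would have emitted the items in arrival order, serving the background tile before the pedestrian detection, which is exactly the inversion the paper's framework is designed to eliminate.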
Discovery Service for Jio Institute Digital Library