5,561 results
Search Results
2. A Machine Learning Model to Predict Citation Counts of Scientific Papers in Otology Field.
- Author
- Alohali, Yousef A., Fayed, Mahmoud S., Mesallam, Tamer, Abdelsamad, Yassin, Almuhawas, Fida, and Hagr, Abdulrahman
- Subjects
- DECISION trees, SERIAL publications, NATURAL language processing, BIBLIOMETRICS, MACHINE learning, REGRESSION analysis, RANDOM forest algorithms, CITATION analysis, DESCRIPTIVE statistics, PREDICTION models, ARTIFICIAL neural networks, MEDICAL research, MEDICAL specialties & specialists, ALGORITHMS
- Abstract
One of the most widely used measures of scientific impact is the number of citations. However, because citation counts follow a heavy-tailed distribution, they are fundamentally difficult to predict. This study investigated the factors that influence the citation count of a scientific paper in the otology field. It proposes a new solution that uses machine learning and natural language processing to process English text and predict a paper's citation count. Several algorithms are implemented in this solution, including linear regression, boosted decision trees, decision forests, and neural networks. Neural network regression revealed that a paper's abstract has the greatest influence on the citation count of otological articles. The solution was developed in visual programming, with Microsoft Azure Machine Learning at the back end and Programming Without Coding Technology at the front end. We recommend using machine learning models to improve the abstracts of research articles in order to attract more citations. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
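The regression approach this abstract describes can be illustrated with a minimal sketch: ordinary least squares fitted to a single abstract-derived feature. The feature (abstract word count) and all data below are invented for illustration; the paper's actual Azure ML pipeline and feature set are not reproduced here.

```python
# Minimal citation-count regression sketch: closed-form ordinary least
# squares on one hypothetical feature (abstract word count).

def fit_ols(xs, ys):
    """Fit y = a + b*x by ordinary least squares (closed form)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    a = my - b * mx
    return a, b

def predict(a, b, x):
    return a + b * x

# Hypothetical training pairs: (abstract word count, citation count).
word_counts = [120, 150, 180, 210, 250, 300]
citations = [4, 6, 9, 11, 14, 18]
a, b = fit_ols(word_counts, citations)
estimate = predict(a, b, 200)  # estimated citations for a 200-word abstract
```

A real pipeline would of course use many textual features and a nonlinear model, but the fit/predict split is the same shape.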
3. Cigarette paper as evidence: Forensic profiling using ATR-FTIR spectroscopy and machine learning algorithms.
- Author
- Kapoor, Muskaan, Sharma, Akanksha, and Sharma, Vishal
- Subjects
- *CIGARETTES, *FORENSIC sciences, *FOURIER transform infrared spectroscopy, *MACHINE learning, *ALGORITHMS
- Abstract
This research highlights the underestimated significance of cigarette paper as evidence at crime scenes. The first objective is to distinguish cigarette paper from similar-looking alternatives; the second is to identify cigarette paper brands using attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectroscopy and machine learning (ML) algorithms. Accurate differentiation of cigarette paper from normal paper is emphasized. ATR-FTIR spectroscopy, coupled with principal component analysis (PCA) for dimensionality reduction, is employed for brand identification. Among the fifteen ML algorithms compared, the CatBoost classifier performs best on both objectives. This research presents a non-destructive, effective method for studying cigarette paper, contributing valuable insights to crime scene investigations.
• Forensic evaluation of cigarette paper utilizing ATR-FTIR spectroscopy and machine learning algorithms.
• Peak characterization and differentiation, distinguishing cigarette paper from other paper types.
• Machine learning algorithm comparison assessing discrimination across nine cigarette brands.
• External validation of the best-performing algorithm using unknown samples. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
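The brand-identification pipeline above pairs ATR-FTIR spectra with PCA for dimensionality reduction. A minimal sketch of the PCA step, assuming toy two-feature data in place of real spectra (real ATR-FTIR spectra have hundreds of wavenumber channels, and the CatBoost classification step is omitted):

```python
# Leading principal component of mean-centred data via power iteration
# on the covariance matrix. Toy 2-feature "spectra" for illustration.
import math

def first_pc(rows, iters=200):
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    x = [[r[j] - means[j] for j in range(d)] for r in rows]
    # Sample covariance matrix C = X^T X / (n - 1).
    c = [[sum(x[i][a] * x[i][b] for i in range(n)) / (n - 1)
          for b in range(d)] for a in range(d)]
    # Power iteration converges to the dominant eigenvector of C.
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(c[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(t * t for t in w))
        v = [t / norm for t in w]
    return v

data = [[2.0, 1.0], [4.0, 2.1], [6.0, 2.9], [8.0, 4.2]]
pc = first_pc(data)  # unit vector along the direction of most variance
```

Projecting each spectrum onto the first few such components gives the low-dimensional features a classifier would then consume.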
4. A BPNN Model-Based AdaBoost Algorithm for Estimating Inside Moisture of Oil–Paper Insulation of Power Transformer.
- Author
- Liu, Jiefeng, Ding, Zheshi, Fan, Xianhao, Geng, Chuhan, Song, Boshu, Wang, Qingyin, and Zhang, Yiyi
- Subjects
- *POWER transformers, *TRANSFORMER insulation, *MOISTURE, *ALGORITHMS, *MACHINE learning, *CLASSIFICATION algorithms
- Abstract
The traditional method for transformer moisture diagnosis is to establish empirical equations between feature parameters extracted from frequency domain spectroscopy (FDS) and the transformer’s moisture content. However, the established empirical equation may not be applicable to a novel testing environment, resulting in an unreliable evaluation result. In this regard, it is acknowledged that FDS combined with machine learning is more suitable for estimating moisture content in a variety of test environments. Nonetheless, the accuracy of the estimation results obtained using the existing method is limited by the algorithm’s inability to generalize. To address this issue, we propose an AdaBoost algorithm-enhanced back-propagation neural network (BP_AdaBoost). This study creates a database by extracting feature parameters from the FDS that characterize the insulation states of the prepared samples. Then, using the BP_AdaBoost algorithm and the newly constructed database, the moisture estimation models are trained. Finally, the results of the estimation are discussed in terms of laboratory and field transformers. By comparing the proposed BP_AdaBoost algorithm to other intelligence algorithms, it is demonstrated that it not only performs better in generalization, but also maintains a high level of accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
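The BP_AdaBoost idea above, a weak regressor wrapped in AdaBoost, can be sketched with AdaBoost.R2-style boosting. To keep the example short, weighted decision stumps stand in for the back-propagation neural network, a square loss is assumed, and AdaBoost.R2's weighted-median combination is simplified to a weighted mean; all data are illustrative, not FDS feature parameters.

```python
# AdaBoost.R2-style boosting sketch with decision stumps as the weak
# regressor (stand-in for the paper's BP neural network).
import math

def fit_stump(xs, ys, w):
    """Weighted least-squares stump: threshold plus two side means."""
    best = None
    order = sorted(set(xs))
    for t in [(a + b) / 2 for a, b in zip(order, order[1:])]:
        li = [i for i, x in enumerate(xs) if x <= t]
        ri = [i for i, x in enumerate(xs) if x > t]
        lm = sum(w[i] * ys[i] for i in li) / sum(w[i] for i in li)
        rm = sum(w[i] * ys[i] for i in ri) / sum(w[i] for i in ri)
        sse = (sum(w[i] * (ys[i] - lm) ** 2 for i in li)
               + sum(w[i] * (ys[i] - rm) ** 2 for i in ri))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return t, lm, rm

def stump_predict(stump, x):
    t, lm, rm = stump
    return lm if x <= t else rm

def boost(xs, ys, rounds=3):
    n = len(xs)
    w = [1.0 / n] * n
    models = []
    for _ in range(rounds):
        stump = fit_stump(xs, ys, w)
        errs = [abs(stump_predict(stump, x) - y) for x, y in zip(xs, ys)]
        emax = max(errs)
        if emax == 0:                           # perfect fit: stop early
            models.append((stump, 1.0))
            break
        e = [(er / emax) ** 2 for er in errs]   # normalised square loss
        loss = sum(wi * ei for wi, ei in zip(w, e))
        if loss >= 0.5:                         # weak-learner condition fails
            break
        beta = loss / (1 - loss)
        models.append((stump, math.log(1 / beta)))
        # Upweight the points the current learner fits worst.
        w = [wi * beta ** (1 - ei) for wi, ei in zip(w, e)]
        s = sum(w)
        w = [wi / s for wi in w]
    return models

def predict(models, x):
    total = sum(alpha for _, alpha in models)
    return sum(alpha * stump_predict(m, x) for m, alpha in models) / total
```

Swapping the stump for a small neural network trained on a weight-driven bootstrap sample recovers the BP_AdaBoost shape described in the abstract.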
5. SDP-Based Bounds for the Quadratic Cycle Cover Problem via Cutting-Plane Augmented Lagrangian Methods and Reinforcement Learning: INFORMS Journal on Computing Meritorious Paper Awardee.
- Author
- de Meijer, Frank and Sotirov, Renata
- Subjects
- *REINFORCEMENT learning, *COMBINATORIAL optimization, *TRAVELING salesman problem, *ALGORITHMS, *SEMIDEFINITE programming, *MACHINE learning, *DIRECTED graphs
- Abstract
We study the quadratic cycle cover problem (QCCP), which aims to find a node-disjoint cycle cover in a directed graph with minimum interaction cost between successive arcs. We derive several semidefinite programming (SDP) relaxations and use facial reduction to make these strictly feasible. We investigate a nontrivial relationship between the transformation matrix used in the reduction and the structure of the graph, which is exploited in an efficient algorithm that constructs this matrix for any instance of the problem. To solve our relaxations, we propose an algorithm that incorporates an augmented Lagrangian method into a cutting-plane framework by utilizing Dykstra's projection algorithm. Our algorithm is suitable for solving SDP relaxations with a large number of cutting-planes. Computational results show that our SDP bounds and efficient cutting-plane algorithm outperform other QCCP bounding approaches from the literature. Finally, we provide several SDP-based upper bounding techniques, among which is a sequential Q-learning method that exploits a solution of our SDP relaxation within a reinforcement learning environment. Summary of Contribution: The quadratic cycle cover problem (QCCP) is the problem of finding a set of node-disjoint cycles covering all the nodes in a graph such that the total interaction cost between successive arcs is minimized. The QCCP has applications in many fields, among which are robotics, transportation, energy distribution networks, and automatic inspection. Besides this, the problem has a high theoretical relevance because of its close connection to the quadratic traveling salesman problem (QTSP). The QTSP has several applications, for example, in bioinformatics, and is considered to be among the most difficult combinatorial optimization problems nowadays. After removing the subtour elimination constraints, the QTSP boils down to the QCCP. 
Hence, an in-depth study of the QCCP also contributes to the construction of strong bounds for the QTSP. In this paper, we study the application of semidefinite programming (SDP) to obtain strong bounds for the QCCP. Our strongest SDP relaxation is very hard to solve by any SDP solver because of the large number of involved cutting-planes. Because of that, we propose a new approach in which an augmented Lagrangian method is incorporated into a cutting-plane framework by utilizing Dykstra's projection algorithm. We emphasize an efficient implementation of the method and perform an extensive computational study. This study shows that our method is able to handle a large number of cuts and that the resulting bounds are currently the best QCCP bounds in the literature. We also introduce several upper bounding techniques, among which is a distributed reinforcement learning algorithm that exploits our SDP relaxations. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
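The cutting-plane solver above relies on Dykstra's projection algorithm to project onto an intersection of convex sets. A toy sketch in the plane, assuming a unit disc and a half-plane in place of the paper's SDP feasible sets; the correction terms `p` and `q` are what distinguish Dykstra's method (which finds the true projection) from plain alternating projections (which only find some point of the intersection).

```python
# Dykstra's algorithm: project a point onto the intersection of two
# convex sets (toy case: unit disc and the half-plane x >= 0.5).
import math

def proj_disc(v):
    """Project onto the closed unit disc centred at the origin."""
    n = math.hypot(v[0], v[1])
    return list(v) if n <= 1 else [v[0] / n, v[1] / n]

def proj_halfplane(v):
    """Project onto the half-plane x >= 0.5."""
    return [max(v[0], 0.5), v[1]]

def dykstra(point, iters=5000):
    x = list(point)
    p = [0.0, 0.0]   # correction term for the disc
    q = [0.0, 0.0]   # correction term for the half-plane
    for _ in range(iters):
        y = proj_disc([x[0] + p[0], x[1] + p[1]])
        p = [x[0] + p[0] - y[0], x[1] + p[1] - y[1]]
        x = proj_halfplane([y[0] + q[0], y[1] + q[1]])
        q = [y[0] + q[0] - x[0], y[1] + q[1] - x[1]]
    return x

# Projection of (-1, 2) onto disc ∩ half-plane: lands at (0.5, sqrt(3)/2).
sol = dykstra([-1.0, 2.0])
```

In the paper's setting, the sets would be the SDP cone and the polyhedra defined by active cutting-planes, and each projection would be far more expensive than these closed forms.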
6. Scientific papers and artificial intelligence. Brave new world?
- Author
- Nexøe, Jørgen
- Subjects
- COMPUTERS, MANUSCRIPTS, ARTIFICIAL intelligence, MACHINE learning, DATA analysis, MEDICAL literature, MEDICAL research, ALGORITHMS
- Published
- 2023
- Full Text
- View/download PDF
7. Critical Appraisal of a Machine Learning Paper: A Guide for the Neurologist.
- Author
- Vinny, Pulikottil W., Garg, Rahul, Srivastava, M. V. Padma, Lal, Vivek, and Vishnu, Venugoapalan Y.
- Subjects
- *DEEP learning, *NEUROLOGISTS, *EVIDENCE-based medicine, *MACHINE learning, *BENCHMARKING (Management), *TERMS & phrases, *ARTIFICIAL neural networks, *PREDICTION models, *ALGORITHMS
- Abstract
Machine learning (ML), a form of artificial intelligence (AI), is being increasingly employed in neurology. Reported performance metrics often match or exceed those of average clinicians. The neurologist is easily baffled by the concepts and terminology underlying ML studies, and the superlative performance metrics of ML algorithms often hide the opaque nature of their inner workings. Questions regarding an ML model's interpretability and the reproducibility of its results in real-world scenarios need emphasis. Given an abundance of time and information, an expert clinician should be able to deliver predictions comparable to an ML model's, a useful benchmark when evaluating its performance. The predictive performance of an ML model should not be confused with causal inference between its input and output. ML and clinical gestalt should compete in a randomized controlled trial before they can complement each other in screening, triaging, providing second opinions, and modifying treatment. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
8. Canadian Association of Radiologists White Paper on De-identification of Medical Imaging: Part 2, Practical Considerations.
- Author
- Parker, William, Jaremko, Jacob L., Cicero, Mark, Azar, Marleine, El-Emam, Khaled, Gray, Bruce G., Hurrell, Casey, Lavoie-Cardinal, Flavie, Desjardins, Benoit, Lum, Andrea, Sheremeta, Lori, Lee, Emil, Reinhold, Caroline, Tang, An, and Bromwich, Rebecca
- Subjects
- *ALGORITHMS, *ARTIFICIAL intelligence, *DATA encryption, *DATABASE management, *DIAGNOSTIC imaging, *HEALTH services accessibility, *MACHINE learning, *MEDICAL protocols, *DICOM (Computer network protocol), *COVID-19 pandemic
- Abstract
The application of big data, radiomics, machine learning, and artificial intelligence (AI) algorithms in radiology requires access to large data sets containing personal health information. Because machine learning projects often require collaboration between different sites or data transfer to a third party, precautions are required to safeguard patient privacy. Safety measures are required to prevent inadvertent access to and transfer of identifiable information. The Canadian Association of Radiologists (CAR) is the national voice of radiology committed to promoting the highest standards in patient-centered imaging, lifelong learning, and research. The CAR has created an AI Ethical and Legal standing committee with the mandate to guide the medical imaging community in terms of best practices in data management, access to health care data, de-identification, and accountability practices. Part 2 of this article will inform CAR members on the practical aspects of medical imaging de-identification, strengths and limitations of de-identification approaches, list of de-identification software and tools available, and perspectives on future directions. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
9. Physics driven behavioural clustering of free-falling paper shapes.
- Author
- Howison, Toby, Hughes, Josie, Giardina, Fabio, and Iida, Fumiya
- Subjects
- *PHYSICS, *SET functions, *MACHINE learning, *PHENOMENOLOGICAL theory (Physics), *CONTINUUM mechanics
- Abstract
Many complex physical systems exhibit a rich variety of discrete behavioural modes. Often, the system complexity limits the applicability of standard modelling tools. Hence, understanding the underlying physics of different behaviours and distinguishing between them is challenging. Although traditional machine learning techniques could predict and classify behaviour well, typically they do not provide any meaningful insight into the underlying physics of the system. In this paper we present a novel method for extracting physically meaningful clusters of discrete behaviour from limited experimental observations. This method obtains a set of physically plausible functions that both facilitate behavioural clustering and aid in system understanding. We demonstrate the approach on the V-shaped falling paper system, a new falling paper type system that exhibits four distinct behavioural modes depending on a few morphological parameters. Using just 49 experimental observations, the method discovered a set of candidate functions that distinguish behaviours with an error of 2.04%, while also aiding insight into the physical phenomena driving each behaviour. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
10. Canadian Association of Radiologists White Paper on Ethical and Legal Issues Related to Artificial Intelligence in Radiology.
- Author
- Jaremko, Jacob L., Azar, Marleine, Bromwich, Rebecca, Lum, Andrea, Alicia Cheong, Li Hsia, Gibert, Martin, Laviolette, François, Gray, Bruce, Reinhold, Caroline, Cicero, Mark, Chong, Jaron, Shaw, James, Rybicki, Frank J., Hurrell, Casey, Lee, Emil, and Tang, An
- Subjects
- *ARTIFICIAL intelligence laws, *ACQUISITION of property, *ALGORITHMS, *ARTIFICIAL intelligence, *AUTONOMY (Psychology), *CONCEPTUAL structures, *MEDICAL ethics, *MEDICAL practice, *MEDICAL specialties & specialists, *PRIVACY, *RADIOLOGISTS, *DATA security
- Abstract
Artificial intelligence (AI) software that analyzes medical images is becoming increasingly prevalent. Unlike earlier generations of AI software, which relied on expert knowledge to identify imaging features, machine learning approaches automatically learn to recognize these features. However, the promise of accurate personalized medicine can only be fulfilled with access to large quantities of medical data from patients. This data could be used for purposes such as predicting disease, diagnosis, treatment optimization, and prognostication. Radiology is positioned to lead development and implementation of AI algorithms and to manage the associated ethical and legal challenges. This white paper from the Canadian Association of Radiologists provides a framework for study of the legal and ethical issues related to AI in medical imaging, related to patient data (privacy, confidentiality, ownership, and sharing); algorithms (levels of autonomy, liability, and jurisprudence); practice (best practices and current legal framework); and finally, opportunities in AI from the perspective of a universal health care system. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
11. A Review on Federated Learning and Machine Learning Approaches: Categorization, Application Areas, and Blockchain Technology.
- Author
- Ogundokun, Roseline Oluwaseun, Misra, Sanjay, Maskeliunas, Rytis, and Damasevicius, Robertas
- Subjects
- BLOCKCHAINS, ARTIFICIAL intelligence, MACHINE learning, CONFERENCE papers, ALGORITHMS, SCIENCE publishing
- Abstract
Federated learning (FL) is a scheme in which several consumers work collectively to solve machine learning (ML) problems, with a central collector synchronizing the procedure. This design also allows the training data to remain distributed, guaranteeing that each individual device's data stay private. The paper systematically reviewed the available literature using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The study presents a systematic review of applicable ML approaches for FL, reviews the categorization of FL, discusses FL application areas, presents the relationship between FL and Blockchain Technology (BT), and discusses existing literature that has used FL and ML approaches. The study also examined applicable machine learning models for federated learning. The inclusion criteria were papers that were (i) published between 2017 and 2021, (ii) written in English, and (iii) published in a peer-reviewed scientific journal or as preprints. Excluded from the review were (i) unpublished studies, theses, and dissertations, (ii) conference papers, (iii) papers not in English, and (iv) papers that did not use artificial intelligence models and blockchain technology. In total, 84 eligible papers were examined in this study. In recent years, the amount of research on ML using FL has increased. Accuracy equivalent to standard feature-based techniques has been attained, and ensembles of many algorithms may yield even better results. We found that the best results were obtained from a hybrid design of an ML ensemble employing expert features. However, additional difficulties and issues remain to be overcome, such as efficiency, complexity, and smaller datasets. In addition, novel FL applications should be investigated from the standpoint of datasets and methodologies. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
12. SOFTWARE DEFECT PREDICTION APPROACHES REVISITED.
- Author
- Shebl, Khaled S., Afify, Yasmine M., and Badr, Nagwa
- Subjects
- SEMANTICS, DATABASES, ALGORITHMS, COMPUTER software testing, MACHINE learning
- Abstract
A crucial field in software development and testing is Software Defect Prediction (SDP), because forecasting software defects at an earlier stage improves the quality, dependability, efficiency, and cost of the software. Many existing models predict defects to facilitate the software testing process for testers. A comprehensive review of these models from different perspectives is crucial to help new researchers enter this field and learn about its latest developments. Algorithms, method types, datasets, and tools were the only perspectives discussed in the current literature; a comprehensive study that takes into account a wide spectrum of viewpoints has not yet been published. Examining the development and advancement of SDP-related studies is the goal of this literature review. It provides a comprehensive and updated state of the art that satisfies all stated criteria. Out of 591 papers retrieved from 6 reputable databases, 73 papers were eligible for analysis. This review addresses relevant research questions regarding techniques and method types, data details, tools, code syntax, semantics, and structural and domain information. The motivation for conducting this comprehensive review is to equip readers with the necessary information and keep them informed about the software defect prediction domain. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
13. Community Discovery Algorithm Based on Multi-Relationship Embedding.
- Author
- Dongming Chen, Mingshuo Nie, Jie Wang, and Dongqi Wang
- Subjects
- EMBEDDED computer systems, ALGORITHMS, MATRICES (Mathematics), CONVOLUTIONAL neural networks, MACHINE learning
- Abstract
Complex systems in the real world can often be modeled as network structures, and community discovery algorithms for complex networks enable researchers to understand the internal structure and implicit information of networks. Existing community discovery algorithms are usually designed for single-layer networks or single interaction relationships and do not consider the attribute information of nodes. However, many real-world networks consist of multiple types of nodes and edges, and there may be rich semantic information on both. Methods for single-layer networks cannot effectively handle multi-layer information, multi-relationship information, and attribute information. This paper proposes a community discovery algorithm based on multi-relationship embedding. The proposed algorithm first models the nodes in the network, generating a node embedding matrix for each specific relationship type via a node encoder. These per-relation embedding matrices are then aggregated using a Graph Convolutional Network (GCN) to obtain the final node embedding matrix. This strategy captures rich structural and attribute information in multi-relational networks. Experiments were conducted on different datasets against baselines, and the results show that the proposed algorithm obtains significant performance improvements in community discovery, node clustering, and similarity search tasks; compared to the best-performing baseline, it achieves an average improvement of 3.1% on Macro-F1 and 4.7% on Micro-F1, which demonstrates its effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
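The per-relation-then-aggregate pattern described above can be sketched without a GCN: one round of neighbour-mean propagation stands in for the paper's node encoder, and a plain average across relation types stands in for the GCN aggregation. The nodes, relation names, and features below are invented for illustration.

```python
# Sketch of per-relation node embedding followed by cross-relation
# aggregation, on a toy 3-node network with 2 relation types.

def propagate(features, adjacency):
    """One round of mean aggregation over one relation's edges."""
    out = []
    for node, feat in enumerate(features):
        neigh = adjacency.get(node, [])
        if not neigh:
            out.append(list(feat))      # isolated node keeps its features
            continue
        agg = [sum(features[m][j] for m in neigh) / len(neigh)
               for j in range(len(feat))]
        # Blend the node's own features with its neighbourhood mean.
        out.append([(f + a) / 2 for f, a in zip(feat, agg)])
    return out

def multi_relation_embed(features, relations):
    """Embed under each relation separately, then average across them."""
    per_rel = [propagate(features, adj) for adj in relations]
    n, d = len(features), len(features[0])
    return [[sum(emb[i][j] for emb in per_rel) / len(per_rel)
             for j in range(d)] for i in range(n)]

# Toy multi-relational network (relation names are hypothetical).
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
follows = {0: [1], 1: [0], 2: [0, 1]}
cites = {0: [2], 2: [0]}
emb = multi_relation_embed(feats, [follows, cites])
```

The paper's design replaces both steps with learned components; the point here is only the structure: one embedding per relation type, then a single aggregated matrix per node.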
14. Avoiding the Digital Age is Hurting Research Efforts: A greater shift from paper records and physical assets is achievable.
- Author
- HOLLAN, MIKE
- Subjects
- DIGITAL technology, ARTIFICIAL intelligence, LIFE sciences, AUTOMATIC data collection systems, ELECTRONIC data interchange, ELECTRONIC health records, MACHINE learning, DRUG development, ALGORITHMS
- Abstract
The article offers information on the importance of data in drug development and the life sciences industry. Topics include the use of new technologies like AI and machine learning for data collection and analysis, the persistence of paper-based processes in the industry, and challenges such as the "first-mile problem" in data collection and management.
- Published
- 2024
15. Exploring the opportunity of using machine learning to support the system dynamics method: Comment on the paper by Edali and Yücel.
- Author
- Duggan, Jim
- Subjects
- ALGORITHMS, COMPUTER simulation, DECISION making, MACHINE learning, HUMAN services programs
- Abstract
The author presents comments on a paper on the use of machine learning to support the system dynamics method. Topics discussed include its interpretation of simulation models and explanation of policy analysis, and the emerging view whereby dynamic problems from endogenous feedback structures can be tackled via wider tools and methodological approaches. Also noted is the resulting potential for greater insights into the modelling process.
- Published
- 2020
- Full Text
- View/download PDF
16. Deep Learning Algorithms for Traffic Forecasting: A Comprehensive Review and Comparison with Classical Ones.
- Author
- Afandizadeh, Shahriar, Abdolahi, Saeid, Mirzahossein, Hamid, and Li, Ruimin
- Subjects
- MACHINE learning, TRAFFIC estimation, TRANSPORTATION management system, DEEP learning, INTELLIGENT transportation systems, ALGORITHMS, FORECASTING, TRAFFIC safety
- Abstract
Accurate and timely forecasting of critical components is pivotal in intelligent transportation systems and traffic management, crucially mitigating congestion and enhancing safety. This paper comprehensively reviews deep learning algorithms and classical models employed in traffic forecasting. Spanning diverse traffic datasets, the study encompasses various scenarios, offering a nuanced understanding of traffic forecasting methods. Reviewing 111 seminal research works since the 1980s, encompassing both deep learning and classical models, the paper begins by detailing the data sources utilized in transportation systems. Subsequently, it delves into the theoretical underpinnings of the deep learning algorithms and classical models prevalent in traffic forecasting. Furthermore, it investigates the application of these algorithms and models in forecasting key traffic characteristics, informed by their utility in transport and traffic analyses. Finally, the study elucidates the merits and drawbacks of the proposed models through applied research in traffic forecasting. Findings indicate that while deep learning algorithms and classical models are valuable tools, their suitability varies across contexts, necessitating careful consideration in future studies. The study underscores research opportunities in road traffic forecasting, providing a comprehensive guide for future endeavors in this domain. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Reinforcement Machine Learning for Sparse Array Antenna Optimization with PPO.
- Author
- Mohammad-Ali-Nezhad, Sajad and Kassem, Mohammad H.
- Subjects
- ANTENNA arrays, ANTENNAS (Electronics), TELECOMMUNICATION systems, MACHINE learning, ALGORITHMS
- Abstract
This paper focuses on optimizing the radiation pattern of sparse array antennas using reinforcement learning. It leverages Proximal Policy Optimization's (PPO's) strengths in optimization, and its effectiveness in handling stochastic transitions and rewards, to reduce the number of array elements while maintaining the desired signal performance and minimizing unnecessary side lobe signals. By removing a subset of the antennas through PPO-based reinforcement learning, results equivalent to those of the complete array were obtained. The anticipated outcomes of this research hold the promise of significantly enhancing the effectiveness and utility of sparse array antennas in communication systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Applying Machine Learning in Marketing: An Analysis Using the NMF and k-Means Algorithms.
- Author
- Gallego, Victor, Lingan, Jessica, Freixes, Alfons, Juan, Angel A., and Osorio, Celia
- Subjects
- K-means clustering, MACHINE learning, ARTIFICIAL intelligence, ADVERTISING effectiveness, DATABASES
- Abstract
The integration of machine learning (ML) techniques into marketing strategies has become increasingly relevant in modern business. Utilizing scientific manuscripts indexed in the Scopus database, this article explores how this integration is being carried out. Initially, a focused search is undertaken for academic articles containing both the terms "machine learning" and "marketing" in their titles, which yields a pool of papers. These papers have been processed using the Supabase platform. The process has included steps like text refinement and feature extraction. In addition, our study uses two key ML methodologies: topic modeling through NMF and a comparative analysis utilizing the k-means clustering algorithm. Through this analysis, three distinct clusters emerged, thus clarifying how ML techniques are influencing marketing strategies, from enhancing customer segmentation practices to optimizing the effectiveness of advertising campaigns. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
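The clustering step above can be sketched with a minimal k-means (Lloyd's algorithm). The points below stand in for per-paper topic-weight vectors; the NMF step that would produce them is omitted, and fixed initial centroids keep the run deterministic.

```python
# Minimal k-means (Lloyd's algorithm) on toy 2-D "topic weight" vectors.

def kmeans(points, centroids, iters=20):
    clusters = [[] for _ in centroids]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        centroids = [
            [sum(xs) / len(c) for xs in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

pts = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
cents, groups = kmeans(pts, [[0.0, 0.0], [1.0, 1.0]])
```

In practice one would use k-means++ initialization and multiple restarts rather than hand-picked starting centroids; the two-step loop is the same.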
19. Research on User Default Prediction Algorithm Based on Adjusted Homogenous and Heterogeneous Ensemble Learning.
- Author
- Lu, Yao, Wang, Kui, Sun, Hui, Qu, Hanwen, Chen, Jiajia, Liu, Wei, and Chang, Chenjie
- Subjects
- DEFAULT (Finance), FORECASTING, FEATURE selection, ALGORITHMS, CREDIT risk, ECONOMETRIC models, MACHINE learning, GREEN technology
- Abstract
In the field of risk assessment, traditional econometric models are generally used to assess credit risk, and with the introduction of the "dual-carbon" goals to promote the development of a low-carbon economy, the scale of green credit in China has rapidly expanded. In the big data era, however, a traditional single machine learning model offers poor interpretability, struggles to capture nonlinear relationships, and falls short in prediction accuracy and robustness. This paper adopts an adjusted ensemble learning model based on homogeneous and heterogeneous factors for user default prediction, which can efficiently process large quantities of high-dimensional data. Each model is adjusted to the task, and various models are innovatively compared. The missing-value filling method, feature selection, and ensemble model are studied and discussed, and the optimal ensemble model is obtained. When comparing the predictions of single models and ensemble models, the accuracy, sensitivity, specificity, F1-Score, Kappa, and MCC of Categorical Features Gradient Boosting (CatBoost) and Random Undersampling Boosting (RUSBoost) all reach 100%. The experimental results show that the algorithm based on adjusted homogeneous and heterogeneous ensemble learning can predict user default efficiently and accurately. This paper also provides references for establishing a risk assessment index system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
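The heterogeneous ensembling compared above can be illustrated with a toy majority-vote ensemble: several different classifiers vote and the majority label wins. The three threshold rules below are trivial stand-ins for the paper's learners (CatBoost, RUSBoost, etc.); the feature names and cutoffs are invented.

```python
# Toy heterogeneous ensemble for default prediction: three different
# rule-based "models" vote, majority label wins (1 = default, 0 = repay).
from collections import Counter

def model_income(row):       # hypothetical rule: default if income is low
    return 1 if row["income"] < 30_000 else 0

def model_debt_ratio(row):   # hypothetical rule: default if debt ratio is high
    return 1 if row["debt"] / row["income"] > 0.6 else 0

def model_history(row):      # hypothetical rule: default if payments missed
    return 1 if row["missed_payments"] >= 2 else 0

def ensemble_predict(row, models):
    votes = Counter(m(row) for m in models)
    return votes.most_common(1)[0][0]

models = [model_income, model_debt_ratio, model_history]
applicant = {"income": 25_000, "debt": 20_000, "missed_payments": 0}
label = ensemble_predict(applicant, models)  # 2 of 3 rules vote "default"
```

A homogeneous ensemble would use many instances of one learner type (as boosting does); the heterogeneous variant mixes learner types, as in the voting step sketched here.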
20. FDA Releases Two Discussion Papers to Spur Conversation about Artificial Intelligence and Machine Learning in Drug Development and Manufacturing.
- Subjects
- ARTIFICIAL intelligence, MACHINE learning, DRUG factories, DRUG development, RECOMBINANT proteins
- Abstract
The regulatory uses are real: in 2021, more than 100 drug and biologic applications submitted to the FDA included AI/ML components. Keywords: Algorithms; Artificial Intelligence; Bioengineering; Biologics; Biotechnology; Cybersecurity; Cyborgs; Drug Development; Drug Manufacturing; Drugs and Therapies; Emerging Technologies; FDA; Genetic Engineering; Genetically-Engineered Proteins; Government Agencies Offices and Entities; Health and Medicine; Machine Learning; Office of the FDA Commissioner; Public Health; Technology; U.S. Food and Drug Administration. 2023 MAY 22 (NewsRx) -- By a News Reporter-Staff News Editor at Clinical Trials Week -- By: Patrizia Cavazzoni, M.D., Director of the Center for Drug Evaluation and Research: Artificial intelligence (AI) and machine learning (ML) are no longer futuristic concepts; they are now part of how we live and work. [Extracted from the article]
- Published
- 2023
21. Predicting translational progress in biomedical research.
- Author
- Hutchins, B. Ian, Davis, Matthew T., Meseroll, Rebecca A., and Santangelo, George M.
- Subjects
- MEDICAL research, SCIENTIFIC community, SCIENTIFIC discoveries, MACHINE learning, CLINICAL trials, FALSE discovery rate, THERAPEUTICS
- Abstract
Fundamental scientific advances can take decades to translate into improvements in human health. Shortening this interval would increase the rate at which scientific discoveries lead to successful treatment of human disease. One way to accomplish this would be to identify which advances in knowledge are most likely to translate into clinical research. Toward that end, we built a machine learning system that detects whether a paper is likely to be cited by a future clinical trial or guideline. Despite the noisiness of citation dynamics, as little as 2 years of postpublication data yield accurate predictions about a paper's eventual citation by a clinical article (accuracy = 84%, F1 score = 0.56; compared to 19% accuracy by chance). We found that distinct knowledge flow trajectories are linked to papers that either succeed or fail to influence clinical research. Translational progress in biomedicine can therefore be assessed and predicted in real time based on information conveyed by the scientific community's early reaction to a paper. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
22. Intelligent Stroke Disease Prediction Model Using Deep Learning Approaches.
- Author
-
Gao, Chunhua, Wang, Hui, and Mezzapesa, Domenico Maria
- Subjects
STROKE diagnosis ,RISK assessment ,RANDOM forest algorithms ,PREDICTION models ,DATABASE management ,RESEARCH funding ,SYMPTOMS ,SUPPORT vector machines ,DEEP learning ,ARTIFICIAL neural networks ,STROKE ,COMPARATIVE studies ,MACHINE learning ,DECISION trees ,REGRESSION analysis ,ALGORITHMS ,DISEASE risk factors - Abstract
Stroke is a high morbidity and mortality disease that poses a serious threat to people's health. Early recognition of the various warning signs of stroke is necessary so that timely clinical intervention can help reduce the severity of stroke. Deep neural networks have powerful feature representation capabilities and can automatically learn discriminant features from large amounts of data. This paper uses a range of physiological characteristic parameters and collaborates with deep neural networks, such as the Wasserstein generative adversarial networks with gradient penalty and regression network, to construct a stroke prediction model. Firstly, to address the problem of imbalance between positive and negative samples in the stroke public data set, we performed positive sample data augmentation and utilized WGAN-GP to generate stroke data with high fidelity and used it for the training of the prediction network model. Then, the relationship between observable physiological characteristic parameters and the predicted risk of suffering a stroke was modeled as a nonlinear mapping transformation, and a stroke prediction model based on a deep regression network was designed. Finally, the proposed method is compared with commonly used machine learning-based classification algorithms such as decision tree, random forest, support vector machine, and artificial neural networks. The prediction results of the proposed method are optimal in terms of the comprehensive F-measure. Further ablation experiments also show that the designed prediction model has certain robustness and can effectively predict stroke diseases. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Bio-Inspired Intelligent Swarm Confrontation Algorithm for a Complex Urban Scenario.
- Author
-
Cai, He, Luo, Yaoguo, Gao, Huanli, and Wang, Guangbin
- Subjects
BIOLOGICALLY inspired computing ,MACHINE learning ,WILDLIFE films ,REINFORCEMENT learning ,ALGORITHMS - Abstract
This paper considers the confrontation problem for two tank swarms of equal size and capability in a complex urban scenario. Based on the Unity platform (2022.3.20f1c1), the confrontation scenario is constructed featuring multiple crossing roads. Through the analysis of a substantial amount of biological data and wildlife videos regarding animal behavioral strategies during confrontations for hunting or food competition, two strategies have been utilized to design a novel bio-inspired intelligent swarm confrontation algorithm. The first is the "fire concentration" strategy, which assigns a target to each tank such that an isolated opponent is preferentially attacked with concentrated firepower. The second is the "back and forth maneuver" strategy, which makes a tank tactically retreat after firing in order to avoid being hit while its shell is reloading. Two state-of-the-art swarm confrontation algorithms, namely the reinforcement learning algorithm and the assign-nearest algorithm, are chosen as opponents for the bio-inspired swarm confrontation algorithm proposed in this paper. Data from comprehensive confrontation tests show that the bio-inspired swarm confrontation algorithm has significant advantages over its opponents in terms of both win rate and efficiency. Moreover, we discuss how vital algorithm parameters influence the performance indices. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Design and Optimization of Power Shift Tractor Starting Control Strategy Based on PSO-ELM Algorithm.
- Author
-
Qian, Yu, Wang, Lin, and Lu, Zhixiong
- Subjects
CLUTCHES (Machinery) ,FARM tractors ,PARTICLE swarm optimization ,MACHINE learning ,FUZZY algorithms ,ALGORITHMS ,TRACTORS - Abstract
Power shift tractors have been widely used in agriculture in recent years because of their advantages of uninterrupted power during shifting, high transmission efficiency and high stability. As one of the indispensable driving states of the power shift tractor, the starting process requires a small impact and a starting speed that meets the driver's requirements. In this paper, aiming at such contradictory requirements, the starting control strategy of a power shift tractor is formulated with the goals of starting quality and the driver's intention. Firstly, the identification characteristics of the driver under three starting intentions are obtained by a real vehicle test. An extreme learning machine with fast identification speed and short training time is used to establish the basic driver's intention identification model. To address the instability of the identification results of the Extreme Learning Machine (ELM), the particle swarm optimization algorithm (PSO) is used to optimize the ELM. The optimized extreme learning machine model has an accuracy of 96.891% for driver's intention identification. The wet clutch is an important part of the power shift gearbox. In this paper, the starting control strategy knowledge base of the starting clutch is established by a combination of bench tests and simulation tests. Through the fuzzy algorithm, the driver's intention is combined with the starting control strategy. Different drivers' intentions will affect the comprehensive evaluation model of the clutch (the individual evaluation indices of the clutch are the maximum sliding friction power, the sliding friction power, the speed stabilization time, and the impact degree), thus affecting the final choice of the starting clutch control strategy considering the driver's intention. On this basis, this paper studies and establishes the MPC starting controller for the power shift gearbox.
Compared with the linear control strategy, the PSO-ELM-fuzzy weight starting strategy proposed in this paper can reduce the maximum sliding friction power by 45%, the sliding friction power by 69.45%, and the speed stabilization time by 0.11 s. The effectiveness of the starting control strategy considering the driver's intention proposed in this paper to improve the starting quality of the power shift tractor is verified. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
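The extreme learning machine at the core of this record's PSO-ELM identifier trains in closed form: input weights are random, and only the output weights are solved by least squares. A minimal sketch of the idea (sizes, activation, and the toy data are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=20):
    # Random input weights and biases stay fixed; only beta is learned.
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)            # hidden-layer activations
    beta = np.linalg.pinv(H) @ y      # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a toy 1-D regression problem.
X = np.linspace(-1, 1, 50).reshape(-1, 1)
y = np.sin(3 * X[:, 0])
W, b, beta = elm_fit(X, y)
pred = elm_predict(X, W, b, beta)
```

The closed-form solve is what makes ELM training fast; the PSO step described in the abstract would then tune the randomly initialized hidden layer, which this sketch omits.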
25. VIS-SLAM: A Real-Time Dynamic SLAM Algorithm Based on the Fusion of Visual, Inertial, and Semantic Information.
- Author
-
Wang, Yinglong, Liu, Xiaoxiong, Zhao, Minkun, and Xu, Xinlong
- Subjects
MOBILE robots ,MACHINE learning ,MOBILE learning ,DEEP learning ,ALGORITHMS ,INFORMATION measurement ,PROBABILITY theory ,GEOMETRY - Abstract
Addressing the limitations of real-time performance in deep learning algorithms and the poor robustness of pure visual geometry algorithms, this paper proposes a deep learning-based Visual Inertial SLAM technique to ensure accurate autonomous localization of mobile robots in environments with dynamic objects. Firstly, a non-blocking model is designed to extract semantic information from images. Then, a motion probability hierarchy model is proposed to obtain prior motion probabilities of feature points. For image frames without semantic information, a motion probability propagation model is designed to determine the prior motion probabilities of feature points. Furthermore, considering that the output of inertial measurements is unaffected by dynamic objects, this paper integrates inertial measurement information to improve the estimation accuracy of feature point motion probabilities. An adaptive threshold-based motion probability estimation method is proposed, and finally, the positioning accuracy is enhanced by eliminating feature points with excessively high motion probabilities. Experimental results demonstrate that the proposed algorithm achieves accurate localization in dynamic environments while maintaining real-time performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. An Algorithm for Distracted Driving Recognition Based on Pose Features and an Improved KNN.
- Author
-
Gong, Yingjie and Shen, Xizhong
- Subjects
DISTRACTED driving ,MACHINE learning ,K-nearest neighbor classification ,ALGORITHMS ,DEEP learning ,TRAFFIC safety ,MOTOR vehicle driving - Abstract
To reduce safety accidents caused by distracted driving and address issues such as low recognition accuracy and deployment difficulties in current algorithms for distracted behavior detection, this paper proposes an algorithm that utilizes an improved KNN to classify driver posture features and predict distracted driving behavior. Firstly, the number of channels in the Lightweight OpenPose network is pruned to predict and output the coordinates of key points in the upper body of the driver. Secondly, based on the principles of ergonomics, driving behavior features are modeled, and a set of five-dimensional feature values is obtained through geometric calculations. Finally, considering the relationship between the distance between samples and the number of samples, this paper proposes an adjustable distance-weighted KNN algorithm (ADW-KNN), which is used for classification and prediction. The experimental results show that the proposed algorithm achieved a recognition rate of 94.04% for distracted driving behavior on the public dataset SFD3, with a speed of up to 50 FPS, superior to mainstream deep learning algorithms in terms of accuracy and speed. The superiority of ADW-KNN was further verified through experiments on other public datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
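The distance-weighted voting behind ADW-KNN can be illustrated with plain inverse-distance weights; the paper's adjustable weighting scheme is not reproduced here, and the toy data and labels are invented for illustration:

```python
import math
from collections import defaultdict

def weighted_knn(train, query, k=3, eps=1e-9):
    # train: list of (feature_vector, label) pairs; query: feature vector.
    # Keep the k nearest neighbours by Euclidean distance.
    nearest = sorted((math.dist(x, query), label) for x, label in train)[:k]
    votes = defaultdict(float)
    for d, label in nearest:
        votes[label] += 1.0 / (d + eps)   # closer neighbours vote harder
    return max(votes, key=votes.get)

train = [((0.0, 0.0), "safe"), ((0.1, 0.2), "safe"), ((1.0, 1.0), "distracted")]
print(weighted_knn(train, (0.05, 0.1)))   # → safe
```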
27. Ensemble Learning Improves the Efficiency of Microseismic Signal Classification in Landslide Seismic Monitoring.
- Author
-
Xin, Bingyu, Huang, Zhiyong, Huang, Shijie, and Feng, Liang
- Subjects
SIGNAL classification ,DATABASES ,RANDOM forest algorithms ,DECISION trees ,ALGORITHMS ,LANDSLIDES - Abstract
A deep-seated landslide could release numerous microseismic signals from creep-slip movement, which includes rock-soil slip on the slope surface and rock-soil shear rupture in the subsurface. Machine learning can effectively enhance the classification of microseismic signals in landslide seismic monitoring and help interpret the mechanical processes of landslide motion. In this paper, eight sets of triaxial seismic sensors were deployed inside the deep-seated Jiuxianping landslide, China, and a large number of microseismic signals related to the slope movement were obtained through 1-year-long continuous monitoring. All the data were passed through a seismic event identification step based on the ratio of the short-time average to the long-time average (STA/LTA). We selected 11 days of data, manually classified 4131 records into eight categories, and created a microseismic event database. Classical machine learning algorithms and ensemble learning algorithms were tested in this paper. To evaluate the seismic event classification performance of each algorithmic model, we assessed the proposed algorithms through the accuracy, precision, and recall of each model. The validation results demonstrated that the best-performing classical machine learning algorithm, the decision tree, had an accuracy of 88.75%, while the ensemble algorithms, including random forest, Gradient Boosting Trees, Extreme Gradient Boosting, and Light Gradient Boosting Machine, had accuracies ranging from 93.5% to 94.2% and also achieved better results in the combined evaluation of precision, recall, and F1 score. The specific classification tests for each microseismic event category showed the same pattern. The results suggest that the ensemble learning algorithms outperform the classical machine learning algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
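The STA/LTA identification step this abstract relies on flags samples where short-term signal energy jumps relative to the long-term background. A rough sketch, with illustrative window lengths, threshold, and synthetic data:

```python
def sta_lta(signal, n_sta=5, n_lta=50, threshold=3.0):
    # Trigger where short-time average energy exceeds the long-time
    # average energy by a threshold factor.
    energy = [s * s for s in signal]
    triggers = []
    for i in range(n_lta, len(energy)):
        sta = sum(energy[i - n_sta:i]) / n_sta
        lta = sum(energy[i - n_lta:i]) / n_lta
        if lta > 0 and sta / lta > threshold:
            triggers.append(i)
    return triggers

# Quiet background with a short high-amplitude burst at samples 100-109.
event = [0.01] * 100 + [1.0] * 10 + [0.01] * 40
print(sta_lta(event))   # indices around the burst are flagged
```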
28. A Method for Reducing Training Time of ML-Based Cascade Scheme for Large-Volume Data Analysis.
- Author
-
Izonin, Ivan, Muzyka, Roman, Tkachenko, Roman, Dronyuk, Ivanna, Yemets, Kyrylo, and Mitoulis, Stergios-Aristoteles
- Subjects
PRINCIPAL components analysis ,FEATURE extraction ,DATA analysis ,TRAINING needs ,ALGORITHMS - Abstract
We live in the era of large data analysis, where processing vast datasets has become essential for uncovering valuable insights across various domains of our lives. Machine learning (ML) algorithms offer powerful tools for processing and analyzing this abundance of information. However, the considerable time and computational resources needed for training ML models pose significant challenges, especially within cascade schemes, due to the iterative nature of training algorithms, the complexity of feature extraction and transformation processes, and the large sizes of the datasets involved. This paper proposes a modification to the existing ML-based cascade scheme for analyzing large biomedical datasets by incorporating principal component analysis (PCA) at each level of the cascade. We selected the number of principal components to replace the initial inputs so that it ensured 95% variance retention. Furthermore, we enhanced the training and application algorithms and demonstrated the effectiveness of the modified cascade scheme through comparative analysis, which showcased a significant reduction in training time while improving the generalization properties of the method and the accuracy of the large data analysis. The enhanced generalization properties of the scheme stemmed from the reduction in nonsignificant independent attributes in the dataset, which further improved its performance in intelligent large data analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
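Selecting just enough principal components to retain 95% variance, as the modified cascade does at each level, can be sketched directly with an SVD (the data sizes and toy generation here are illustrative assumptions):

```python
import numpy as np

def pca_95(X, var_keep=0.95):
    # Center, decompose, and keep the fewest components whose cumulative
    # explained-variance ratio reaches var_keep.
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(np.cumsum(explained), var_keep)) + 1
    return Xc @ Vt[:k].T, k           # projected data, component count

rng = np.random.default_rng(1)
base = rng.normal(size=(200, 3))      # 3 informative latent directions
X = base @ rng.normal(size=(3, 20)) + 0.01 * rng.normal(size=(200, 20))
Z, k = pca_95(X)
print(k)   # only a few components are needed for 95% variance
```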
29. A High-Performance Anti-Noise Algorithm for Arrhythmia Recognition.
- Author
-
Feng, Jianchao, Si, Yujuan, Zhang, Yu, Sun, Meiqi, and Yang, Wenke
- Subjects
BLIND source separation ,INDEPENDENT component analysis ,ARRHYTHMIA ,SIGNAL separation ,PRINCIPAL components analysis ,ALGORITHMS - Abstract
In recent years, the incidence of cardiac arrhythmias has been on the rise because of changes in lifestyle and the aging population. Electrocardiograms (ECGs) are widely used for the automated diagnosis of cardiac arrhythmias. However, existing models possess poor noise robustness and complex structures, limiting their effectiveness. To solve these problems, this paper proposes an arrhythmia recognition system with excellent anti-noise performance: a convolutionally optimized broad learning system (COBLS). In the proposed COBLS method, the signal is convolved and separated into sources using a signal analysis method based on higher-order-statistics independent component analysis (ICA). The constructed feature matrix is further feature-extracted and dimensionally reduced using principal component analysis (PCA), which reveals the essence of the signal. The linear feature correlation between the data can be effectively reduced, and redundant attributes can be eliminated to obtain a low-dimensional feature matrix that retains the essential features for the classification model. Then, arrhythmia recognition is realized by combining this matrix with the broad learning system (BLS). The model was evaluated using the MIT-BIH arrhythmia database and the MIT-BIH noise stress test database, reaching 99.11% overall accuracy, 96.95% overall precision, 89.71% overall sensitivity, and a 93.01% overall F1-score across all four classification experiments. The proposed model maintains excellent performance at signal-to-noise ratios of 24 dB, 18 dB, and 12 dB. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. An effective video inpainting technique using morphological Haar wavelet transform with krill herd based criminisi algorithm.
- Author
-
Srinivasan, M. Nuthal, Chinnadurai, M., Senthilkumar, S., and Dinesh, E.
- Subjects
WAVELET transforms ,MACHINE learning ,INPAINTING ,ANIMAL herds ,ALGORITHMS ,SIGNAL-to-noise ratio - Abstract
Video inpainting techniques aim to fill missing areas or gaps in a video by utilizing known pixels. Variations in brightness or differences between patches cause state-of-the-art video inpainting techniques to exhibit high computational complexity and create seams in the target areas. To resolve these issues, this paper introduces a novel video inpainting technique that employs the Morphological Haar Wavelet Transform combined with the Krill Herd based Criminisi algorithm (MHWT-KHCA) to address the challenges of high computational demand and visible seam artifacts in current inpainting practices. The proposed MHWT-KHCA algorithm strategically reduces computation times and enhances the seamlessness of the inpainting process in videos. Through a series of experiments, the technique is validated against standard metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), where it demonstrates superior performance compared to existing methods. Additionally, the paper outlines potential real-world applications ranging from video restoration to real-time surveillance enhancement, highlighting the technique's versatility and effectiveness. Future research directions include optimizing the algorithm for diverse video formats and integrating machine learning models to advance its capabilities further. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Probabilistic Confusion Matrix: A Novel Method for Machine Learning Algorithm Generalized Performance Analysis.
- Author
-
Markoulidakis, Ioannis and Markoulidakis, Georgios
- Subjects
MACHINE learning ,MATRICES (Mathematics) ,MACHINE performance ,ALGORITHMS ,CLASSIFICATION - Abstract
The paper addresses the issue of classification machine learning algorithm performance based on a novel probabilistic confusion matrix concept. The paper develops a theoretical framework which associates the proposed confusion matrix and the resulting performance metrics with the regular confusion matrix. The theoretical results are verified based on a wide variety of real-world classification problems and state-of-the-art machine learning algorithms. Based on the properties of the probabilistic confusion matrix, the paper then highlights the benefits of using the proposed concept both during the training phase and the application phase of a classification machine learning algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Unmanned Ground Vehicle Path Planning Based on Improved DRL Algorithm.
- Author
-
Liu, Lisang, Chen, Jionghui, Zhang, Youyuan, Chen, Jiayu, Liang, Jingrun, and He, Dongwei
- Subjects
DEEP reinforcement learning ,MACHINE learning ,AUTONOMOUS vehicles ,REMOTELY piloted vehicles ,ALGORITHMS ,SUCCESSIVE approximation analog-to-digital converters ,REINFORCEMENT learning - Abstract
Path planning and obstacle avoidance are fundamental problems in unmanned ground vehicle path planning. Aiming at the limitations of Deep Reinforcement Learning (DRL) algorithms in unmanned ground vehicle path planning, such as low sampling rate, insufficient exploration, and unstable training, this paper proposes an improved algorithm called Dual Priority Experience and Ornstein–Uhlenbeck Soft Actor-Critic (DPEOU-SAC) based on Ornstein–Uhlenbeck (OU noise) and double-factor prioritized sampling experience replay (DPE) with the introduction of expert experience, which is used to help the agent achieve faster and better path planning and obstacle avoidance. Firstly, OU noise enhances the agent's action selection quality through temporal correlation, thereby improving the agent's detection performance in complex unknown environments. Meanwhile, the experience replay is based on double-factor preferential sampling, which has better sample continuity and sample utilization. Then, the introduced expert experience can help the agent to find the optimal path with faster training speed and avoid falling into a local optimum, thus achieving stable training. Finally, the proposed DPEOU-SAC algorithm is tested against other deep reinforcement learning algorithms in four different simulation environments. The experimental results show that the convergence speed of DPEOU-SAC is 88.99% higher than the traditional SAC algorithm, and the shortest path length of DPEOU-SAC is 27.24, which is shorter than that of SAC. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
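The OU noise used in DPEOU-SAC produces temporally correlated exploration, unlike independent Gaussian noise. A minimal discretized sketch of the process (the theta/sigma/dt values are illustrative, not the paper's):

```python
import math
import random

class OUNoise:
    """Ornstein-Uhlenbeck process: mean-reverting drift plus Gaussian steps."""

    def __init__(self, mu=0.0, theta=0.15, sigma=0.2, dt=1e-2):
        self.mu, self.theta, self.sigma, self.dt = mu, theta, sigma, dt
        self.x = mu

    def sample(self):
        # Pull toward mu, then add a scaled Gaussian increment; successive
        # samples are correlated because each starts from the last state.
        self.x += (self.theta * (self.mu - self.x) * self.dt
                   + self.sigma * math.sqrt(self.dt) * random.gauss(0.0, 1.0))
        return self.x

random.seed(0)
noise = OUNoise()
trace = [noise.sample() for _ in range(1000)]
```

In an actor-critic loop this noise would be added to each action the policy emits, so consecutive exploratory actions drift smoothly instead of jittering independently.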
33. Selected and Extended Papers from TACAS 2018: Preface.
- Author
-
Beyer, Dirk and Huisman, Marieke
- Subjects
SOFTWARE development tools ,DATA structures ,ALGORITHMS ,MACHINE learning ,DYNAMIC programming - Published
- 2020
- Full Text
- View/download PDF
34. Performance analysis of deep learning-based object detection algorithms on COCO benchmark: a comparative study.
- Author
-
Tian, Jiya, Jin, Qiangshan, Wang, Yizong, Yang, Jie, Zhang, Shuping, and Sun, Dengxun
- Subjects
OBJECT recognition (Computer vision) ,DEEP learning ,MACHINE learning ,ALGORITHMS ,SMART cities ,URBAN renewal - Abstract
This paper thoroughly explores the role of object detection in smart cities, specifically focusing on advancements in deep learning-based methods. Deep learning models gain popularity for their autonomous feature learning, surpassing traditional approaches. Despite progress, challenges remain, such as achieving high accuracy in urban scenes and meeting real-time requirements. The study aims to contribute by analyzing state-of-the-art deep learning algorithms, identifying accurate models for smart cities, and evaluating real-time performance using the Average Precision at Medium Intersection over Union (IoU) metric. The reported results showcase various algorithms' performance, with Dynamic Head (DyHead) emerging as the top scorer, excelling in accurately localizing and classifying objects. Its high precision and recall at medium IoU thresholds signify robustness. The paper suggests considering the mean Average Precision (mAP) metric for a comprehensive evaluation across IoU thresholds, if available. Despite this, DyHead stands out as the superior algorithm, particularly at medium IoU thresholds, making it suitable for precise object detection in smart city applications. The performance analysis using Average Precision at Medium IoU is reinforced by the Average Precision at Low IoU (APL), consistently depicting DyHead's superiority. These findings provide valuable insights for researchers and practitioners, guiding them toward employing DyHead for tasks prioritizing accurate object localization and classification in smart cities. Overall, the paper navigates through the complexities of object detection in urban environments, presenting DyHead as a leading solution with robust performance metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
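The IoU underlying the Average Precision metrics discussed in this abstract is simply intersection area over union area. A minimal sketch for axis-aligned boxes given as (x1, y1, x2, y2):

```python
def iou(a, b):
    # Intersection rectangle (clamped to zero if the boxes don't overlap).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))   # → 0.14285714285714285
```

"Medium IoU" in the abstract refers to scoring a detection as correct only when this ratio clears a mid-range threshold such as 0.5.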
35. Artificial Intelligence Algorithms for Healthcare.
- Author
-
Chumachenko, Dmytro and Yakovlev, Sergiy
- Subjects
ARTIFICIAL intelligence ,DEEP learning ,ALGORITHMS ,MACHINE learning ,INFORMATION technology ,MEDICAL care ,MOTION capture (Human mechanics) ,MEDICAL technology - Abstract
Artificial intelligence (AI) algorithms are playing a crucial role in transforming healthcare by enhancing the quality, accessibility, and efficiency of medical care, research, and operations. These algorithms enable healthcare providers to offer more accurate diagnoses, predict outcomes, and customize treatments to individual patient needs. AI also improves operational efficiency by automating routine tasks and optimizing resource management. However, there are challenges to adopting AI in healthcare, such as data privacy concerns and potential biases in algorithms. Collaboration among stakeholders is necessary to ensure ethical use of AI and its positive impact on the field. AI also has applications in medical research, preventive medicine, and public health. It is important to recognize that AI should augment, not replace, the expertise and compassionate care provided by healthcare professionals. The ethical implications and societal impact of AI in healthcare must be carefully considered, guided by fairness, transparency, and accountability principles. Several research papers in this special issue explore the application of AI algorithms in various aspects of healthcare, such as gait analysis for Parkinson's disease diagnosis, human activity recognition, heart disease prediction, compliance assessment with clinical protocols, epidemic management, neurological complications identification, fall prevention, leukemia diagnosis, and genetic clinical pathways. These studies demonstrate the potential of AI in improving medical diagnostics, patient monitoring, and personalized care. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
36. Survey on Machine Learning Biases and Mitigation Techniques.
- Author
-
Siddique, Sunzida, Haque, Mohd Ariful, George, Roy, Gupta, Kishor Datta, Gupta, Debashis, and Faruk, Md Jobair Hossain
- Subjects
MACHINE learning ,ALGORITHMS ,POLICY sciences ,BIAS (Law) ,MACHINE theory - Abstract
Machine learning (ML) has become increasingly prevalent in various domains. However, ML algorithms sometimes give unfair outcomes and discriminate against certain groups. Bias occurs when a model produces decisions that are systematically incorrect. These biases appear at various phases of the ML pipeline, such as data collection, pre-processing, model selection, and evaluation. Bias reduction methods for ML have been suggested using a variety of techniques. By changing the data or the model itself, adding more fairness constraints, or both, these methods try to lessen bias. The best technique depends on the particular context and application because each technique has advantages and disadvantages. Therefore, in this paper, we present a comprehensive survey of bias mitigation techniques in machine learning (ML) with a focus on in-depth exploration of methods, including adversarial training. We examine the diverse types of bias that can afflict ML systems, elucidate current research trends, and address future challenges. Our discussion encompasses a detailed analysis of pre-processing, in-processing, and post-processing methods, including their respective pros and cons. Moreover, we go beyond qualitative assessments by quantifying the strategies for bias reduction and providing empirical evidence and performance metrics. This paper serves as an invaluable resource for researchers, practitioners, and policymakers seeking to navigate the intricate landscape of bias in ML, offering both a profound understanding of the issue and actionable insights for responsible and effective bias mitigation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Weather Radar High-Resolution Spectral Moment Estimation Using Bidirectional Extreme Learning Machine.
- Author
-
Zhongyuan Wang, Ling Qiao, Yu Jiang, Mingwei Shen, and Guodong Han
- Subjects
MACHINE learning ,POWER spectra ,RADAR meteorology ,PROBLEM solving ,ALGORITHMS - Abstract
Since the performance of the spectral moment estimation algorithms commonly used in engineering degrades under low-SNR conditions, this paper introduces the Extreme Learning Machine (ELM) to the spectral moment estimation of weather signals based on the correlation of the signals of adjacent range cells. To address the difficulty of determining the number of hidden layer nodes in the ELM algorithm, the Bidirectional Extreme Learning Machine (B-ELM) algorithm is applied to achieve high-resolution spectral moments. Firstly, to improve the SNR of the training samples, time-domain pulse signals are converted into weather power spectra by the Welch method. Then, the parameters of the B-ELM hidden layer nodes are directly calculated by backpropagation of network residuals. The model parameters are optimized according to the least-squares solution, where the optimal number of hidden layer nodes is determined adaptively. Finally, the optimized B-ELM model is employed for the spectral moment estimation of weather signals. The algorithm is validated to be fast and accurate for spectral moment estimation using measured IDRA weather radar data and is easy to implement in engineering. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
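The Welch step that converts pulse signals into a higher-SNR power spectrum averages windowed periodograms over overlapping segments. A bare-bones sketch (segment length, overlap, and the synthetic test tone are illustrative choices):

```python
import numpy as np

def welch_psd(x, nperseg=64):
    # Hann-windowed periodograms over 50%-overlapping segments, averaged.
    step = nperseg // 2
    win = np.hanning(nperseg)
    scale = np.sum(win ** 2)
    segs = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    psds = [np.abs(np.fft.rfft(win * s)) ** 2 / scale for s in segs]
    return np.mean(psds, axis=0)

# Tone at 0.125 cycles/sample buried in mild noise.
t = np.arange(1024)
x = np.sin(2 * np.pi * 0.125 * t) + 0.1 * np.random.default_rng(2).normal(size=1024)
psd = welch_psd(x)
print(int(np.argmax(psd)))   # → 8 (0.125 cycles/sample × 64 bins)
```

Averaging the segment periodograms is what suppresses the noise floor relative to a single full-length FFT, which is why it raises the effective SNR of the training samples.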
38. A Study of Entity Relationship Extraction Algorithms Based on Symmetric Interaction between Data, Models, and Inference Algorithms.
- Author
-
Feng, Ping, Su, Nannan, Xing, Jiamian, Bian, Jing, and Ouyang, Dantong
- Subjects
MACHINE learning ,ALGORITHMS ,CHINESE language ,WORD recognition ,SEMANTICS - Abstract
The purpose of this paper is to address the extraction of entities and relationships from unstructured Chinese text, with a particular emphasis on the challenges of Named Entity Recognition (NER) and Relation Extraction (RE). This will be achieved by integrating external lexical information and utilizing the abundant semantic information available in Chinese. We utilize a pipeline model that is applied separately to NER and RE by introducing an innovative NER model that integrates Chinese pinyin, characters, and words to enhance recognition capabilities. Simultaneously, we incorporate information such as entity distance, sentence length, and part-of-speech to improve the performance of relation extraction. We also delve into the interactions among data, models, and inference algorithms to improve learning efficiency in addressing this challenge. In comparison to existing methods, our model has achieved significant results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. CNN-VAE: An intelligent text representation algorithm.
- Author
-
Xu, Saijuan, Guo, Canyang, Zhu, Yuhan, Liu, Genggeng, and Xiong, Neal
- Subjects
CONVOLUTIONAL neural networks ,BIG data ,MACHINE learning ,POLYSEMY ,SUPPORT vector machines ,K-nearest neighbor classification ,ALGORITHMS - Abstract
Collecting and analyzing data from all devices to improve the efficiency of business processes is an important task of the Industrial Internet of Things (IIoT). In the age of data explosion, the extensive text data generated by the IIoT have given birth to a variety of text representation methods. The task of text representation is to convert natural language into a form that a computer can understand while retaining the original semantics. However, existing methods struggle to effectively extract the semantic features among words and to distinguish polysemy in natural language. Combining the advantages of the convolutional neural network (CNN) and the variational autoencoder (VAE), this paper proposes an intelligent CNN-VAE text representation algorithm as an advanced learning method for social big data within the next-generation IIoT, which helps users identify the information collected by sensors and perform further processing. This method employs the convolution layer to capture the local features of the context and uses the variational technique to reconstruct the feature space so that it conforms to the normal distribution. In addition, an improved word2vec model based on topical word embedding (TWE) is utilized to add topical information to word vectors to distinguish polysemy. This paper takes social big data as an example to illustrate how the proposed algorithm can be applied in the next-generation IIoT and utilizes the Cnews dataset to verify the performance of the proposed method with four evaluation metrics (i.e., recall, accuracy, precision, and F1-score). Experimental results indicate that the proposed method outperforms word2vec-avg and CNN-AE with K-nearest neighbor (KNN), random forest (RF), and support vector machine (SVM) classifiers and distinguishes polysemy effectively. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
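The "variational technique" this abstract credits with reconstructing the feature space to follow a normal distribution is the standard VAE machinery: the reparameterization trick for sampling and a Gaussian KL penalty. The plain-Python sketch below is a generic illustration, not the authors' code; `mu` and `log_var` stand in for hypothetical encoder outputs.

```python
import math
import random

def reparameterize(mu, log_var, rng=random):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1), one draw per dimension."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def kl_divergence(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, 1) ) for a diagonal Gaussian, summed over dims."""
    return -0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mu, log_var))

# A latent code that already matches the standard-normal prior incurs zero penalty.
z = reparameterize([0.0, 0.0], [0.0, 0.0])
penalty = kl_divergence([0.0, 0.0], [0.0, 0.0])
```

During training, the KL term is added to the reconstruction loss, which is what pulls the learned feature space toward the normal distribution the abstract describes.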
40. Algorithm Composition and Emotion Recognition Based on Machine Learning.
- Author
-
He, Jiao
- Subjects
EMOTION recognition ,COSINE function ,FEATURE extraction ,MACHINE learning ,ALGORITHMS ,ENTROPY (Information theory) ,INFORMATION modeling - Abstract
This paper proposes a new algorithmic composition network from the perspective of machine learning, based on an in-depth study of the related literature. At the same time, this paper examines the characteristics of music and develops a model for recognizing musical emotions. The main melody track is extracted using the model's information entropy of pitch and intensity, and note features are extracted from bar features. Finally, the cosine of the angle between vectors is used to judge the similarity between the feature vectors of adjacent sections, allowing the music to be divided into several independent segments. The emotional model of music is used to analyze each segment's emotion. By quantifying music features, this paper classifies and quantifies music emotion based on the mapping relationship between music features and emotion. Music emotion can be accurately identified by the model. The model's emotion recognition accuracy is up to 93.78 percent, and the algorithm's recall rate is up to 96.3 percent, according to simulation results. The recognition method used in this paper has a higher recognition ability than other methods, and the emotion recognition result is more reliable. This paper not only meets the composer's auxiliary creative needs but can also support intelligent music services. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
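The segmentation step this abstract describes, cutting the piece wherever adjacent bars stop resembling each other, comes down to the cosine of the angle between feature vectors. A minimal plain-Python sketch (the bar features and threshold here are hypothetical, not the paper's):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def segment_boundaries(bar_features, threshold=0.8):
    """Start a new segment wherever adjacent bars are dissimilar."""
    return [i + 1 for i in range(len(bar_features) - 1)
            if cosine_similarity(bar_features[i], bar_features[i + 1]) < threshold]

# Two similar bars, then an abrupt change: one boundary, before bar index 2.
bars = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
cuts = segment_boundaries(bars)
```

Each resulting segment can then be fed to the emotion model independently, as the abstract outlines.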
41. Regional 3D geological modeling along metro lines based on stacking ensemble model.
- Author
-
Xia Bian, Zhuyi Fan, Jiaxing Liu, Xiaozhao Li, and Peng Zhao
- Subjects
GEOLOGICAL modeling ,BOREHOLES ,MACHINE learning ,STACKING machines ,ALGORITHMS - Abstract
This paper presents a regional 3D geological modeling method based on the stacking ensemble technique to overcome the challenges of sparse borehole data in large-scale linear underground projects. The proposed method transforms the 3D geological modeling problem into a stratigraphic property classification problem within a subsurface space grid cell framework. Borehole data is pre-processed and trained using the stacking method with five different machine learning algorithms. The resulting modelled regional cells are then classified, forming a regional 3D grid geological model. A case study for an area of 324 km² along Xuzhou metro lines is presented to demonstrate the effectiveness of the proposed model. The study shows an overall prediction accuracy of 85.4%. However, the accuracy for key stratigraphy layers influencing the construction risk, such as karst cave strata, is only 4.3% due to the limited borehole data. To address this issue, an oversampling technique based on the synthetic minority oversampling technique (SMOTE) algorithm is proposed. This technique effectively increases the number of sparse stratigraphic samples and significantly improves the prediction accuracy for karst caves to 65.4%. Additionally, this study analyzes the impact of sampling distance on model accuracy. It is found that a lower sampling interval results in higher prediction accuracy, but also increases computational resources and time costs. Therefore, in this study, an optimal sampling distance of 1 m is chosen to balance prediction accuracy and computation cost. Furthermore, the number of geological strata is found to have a negative effect on prediction accuracy. To mitigate this, it is recommended to merge less significant stratigraphy layers, reducing computation time. For key strata layers, such as karst caves, which have a significant impact on construction risk, further onsite sampling or oversampling using the SMOTE technique is recommended. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
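SMOTE, which the abstract credits with lifting karst-cave accuracy from 4.3% to 65.4%, synthesizes minority-class samples by interpolating between a minority sample and one of its nearest minority-class neighbors. A minimal plain-Python sketch of that interpolation (illustrative only, not the authors' pipeline; the sample points are made up):

```python
import math
import random

def smote_sample(minority, k=1, rng=random):
    """Generate one synthetic point between a random minority sample
    and one of its k nearest minority-class neighbors."""
    x = rng.choice(minority)
    neighbors = sorted((p for p in minority if p is not x),
                       key=lambda p: math.dist(x, p))[:k]
    nn = rng.choice(neighbors)
    gap = rng.random()  # interpolation factor in [0, 1)
    return [a + gap * (b - a) for a, b in zip(x, nn)]

# Hypothetical sparse "karst cave" feature vectors; the synthetic point
# always lies on the segment between a sample and its neighbor.
minority = [[0.0, 0.0], [1.0, 1.0], [10.0, 10.0]]
synthetic = smote_sample(minority)
```

Because the synthetic point is a convex combination of two real samples, it stays inside the minority class's feature region rather than drifting into other strata.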
42. Discriminative shapelet learning via temporal clustering and matrix factorization.
- Author
-
Chen, Bo, Fang, Min, and Wang, GuiZhi
- Subjects
MACHINE learning ,MATRIX decomposition ,TIME series analysis ,CLASSIFICATION ,ALGORITHMS - Abstract
Identifying discriminative patterns, known as shapelets, within time series is a critical step in many time series classification tasks. A major limitation of existing shapelet learning is its unsupervised nature: shapelet discovery is treated as an unsupervised subsequence clustering process based on a pre-defined metric, performed sequentially and separately from classification. This sequential procedure presents challenges, as it fails to establish a direct connection between shapelets and samples and lacks the capacity to explicitly incorporate label information. In this paper, we propose a novel shapelet learning algorithm called Discriminative Shapelet Learning via Temporal Clustering and Matrix Factorization (DSLMF). DSLMF introduces a joint framework that combines matrix factorization and coherent temporal clustering to discover salient and coherent feature subsets. To further enhance discriminability and prevent arbitrary shapelet shapes, DSLMF integrates label-specific shapelet regularization as a guiding mechanism, enabling the learning of shapelets optimized for higher classification performance. The proposed algorithm is shown to be effective in capturing the temporal cluster structure while retaining the interpretability of shapelet-based methods. The results of experiments showcased in this paper highlight DSLMF's effectiveness in capturing temporal cluster structures and learning meaningful shapelets, ultimately leading to promising performance on benchmark datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
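Shapelet-based classifiers score a time series by how well a short pattern matches any subsequence of the series. The standard matching distance, sketched below in plain Python, is the minimum Euclidean distance over all sliding windows (a textbook illustration, not the DSLMF implementation):

```python
import math

def shapelet_distance(series, shapelet):
    """Minimum Euclidean distance between `shapelet` and any
    equal-length subsequence of `series`."""
    m = len(shapelet)
    return min(
        math.sqrt(sum((series[i + j] - shapelet[j]) ** 2 for j in range(m)))
        for i in range(len(series) - m + 1)
    )

# The bump [1, 2, 1] appears verbatim in the series, so the distance is zero.
series = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]
dist = shapelet_distance(series, [1.0, 2.0, 1.0])
```

These per-shapelet distances become the feature vector for each series; DSLMF's contribution is learning the shapelets jointly with labels rather than mining them by unsupervised clustering first.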
43. Multimodal machine learning in precision health: A scoping review.
- Author
-
Kline, Adrienne, Wang, Hanyin, Li, Yikuan, Dennis, Saya, Hutch, Meghan, Xu, Zhenxing, Wang, Fei, Cheng, Feixiong, and Luo, Yuan
- Subjects
ONLINE information services ,NEUROLOGY ,SYSTEMATIC reviews ,MACHINE learning ,INDIVIDUALIZED medicine ,LITERATURE reviews ,MEDLINE ,HEALTH equity ,ALGORITHMS ,ONCOLOGY - Abstract
Machine learning is frequently leveraged to tackle problems in the health sector, including clinical decision support. Its use has historically focused on single-modal data. Attempts to improve prediction and mimic the multimodal nature of clinical expert decision-making have been met in the biomedical field of machine learning by fusing disparate data. This review was conducted to summarize the current studies in this field and identify topics ripe for future research. We conducted this review in accordance with the PRISMA extension for Scoping Reviews to characterize multimodal data fusion in health. Search strings were established and used in the PubMed, Google Scholar, and IEEE Xplore databases from 2011 to 2021. A final set of 128 articles were included in the analysis. The most common health areas utilizing multimodal methods were neurology and oncology. Early fusion was the most common data merging strategy. Notably, there was an improvement in predictive performance when using data fusion. Lacking from the papers were clear clinical deployment strategies, FDA approval, and analysis of how multimodal approaches applied to diverse sub-populations may mitigate biases and healthcare disparities. These findings provide a summary of multimodal data fusion as applied to health diagnosis/prognosis problems. Few papers compared the outputs of a multimodal approach with a unimodal prediction. However, those that did achieved an average increase of 6.4% in predictive accuracy. Multimodal machine learning, while more robust in its estimations than unimodal methods, has drawbacks in its scalability and the time-consuming nature of information concatenation. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
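Early fusion, the merging strategy this review found most common, simply concatenates per-modality feature vectors into one input before a single model is trained. A schematic sketch (the modalities and values are hypothetical):

```python
def early_fusion(*modality_features):
    """Concatenate feature vectors from each modality into one input vector."""
    fused = []
    for features in modality_features:
        fused.extend(features)
    return fused

# Hypothetical patient record: an imaging embedding, raw lab values,
# and a clinical-note embedding, fused into a single feature vector.
fused = early_fusion([0.2, 0.7], [98.6, 120.0, 80.0], [0.1, 0.4, 0.5])
```

The simplicity is the appeal, and also the drawback the review notes: the concatenated vector grows with every modality, which hurts scalability.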
44. Research on Small Acceptance Domain Text Detection Algorithm Based on Attention Mechanism and Hybrid Feature Pyramid.
- Author
-
Liu, Mingzhu, Li, Ben, and Zhang, Wei
- Subjects
TEXT recognition ,PYRAMIDS ,FEATURE extraction ,ALGORITHMS ,MACHINE learning ,VIDEO compression - Abstract
In the traditional text detection process, text regions with small receptive fields in video images are easily ignored, few features can be extracted, and the computation is large. These problems are not conducive to the recognition of text information. In this paper, a lightweight network structure based on the EAST algorithm is proposed that incorporates the Convolutional Block Attention Module (CBAM), a hybrid spatial and channel attention module suited to text feature extraction in natural-scene video images. The improved structure proposed in this paper can obtain deep network features of text and reduce the computation of text feature extraction. Additionally, a hybrid feature pyramid + BLSTM network is designed to improve the attention to small acceptance domain text regions and the text sequence features of those regions. The test results on ICDAR2015 demonstrate that the improved structure can effectively boost the attention to small acceptance domain text regions and improve the sequence feature detection accuracy for long text regions in small acceptance domains without significantly increasing computation. At the same time, the proposed network structures are superior to the traditional EAST algorithm and other improved algorithms in precision rate P, recall rate R, and F-value. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
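CBAM's channel-attention branch pools each channel with both average and max pooling, passes the pooled descriptors through a shared transform, and squashes the sum with a sigmoid to get a per-channel scale. The plain-Python sketch below replaces the paper's shared two-layer MLP with a single scalar weight purely for illustration:

```python
import math

def channel_attention(feature_map, w=1.0):
    """feature_map: list of channels, each a 2-D list of activations.
    Returns one attention weight in (0, 1) per channel."""
    weights = []
    for channel in feature_map:
        flat = [v for row in channel for v in row]
        avg_pool = sum(flat) / len(flat)
        max_pool = max(flat)
        # Shared transform applied to both pooled descriptors, then sigmoid.
        weights.append(1.0 / (1.0 + math.exp(-w * (avg_pool + max_pool))))
    return weights

fmap = [[[0.0, 0.0], [0.0, 0.0]],   # quiet channel
        [[2.0, 4.0], [6.0, 8.0]]]   # active channel
attn = channel_attention(fmap)
```

Each channel of the feature map is then multiplied by its weight, so channels carrying text evidence are amplified before the spatial-attention branch runs.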
45. A Semi-Automatic Magnetic Resonance Imaging Annotation Algorithm Based on Semi-Weakly Supervised Learning.
- Author
-
Chen, Shaolong and Zhang, Zhiyong
- Subjects
MAGNETIC resonance imaging ,SUPERVISED learning ,MACHINE learning ,ITERATIVE learning control ,ALGORITHMS ,ANNOTATIONS ,DEEP learning - Abstract
The annotation of magnetic resonance imaging (MRI) images plays an important role in deep learning-based MRI segmentation tasks. Semi-automatic annotation algorithms are helpful for improving the efficiency and reducing the difficulty of MRI image annotation. However, the existing semi-automatic annotation algorithms based on deep learning have poor pre-annotation performance in the case of insufficient segmentation labels. In this paper, we propose a semi-automatic MRI annotation algorithm based on semi-weakly supervised learning. In order to achieve a better pre-annotation performance in the case of insufficient segmentation labels, semi-supervised and weakly supervised learning were introduced, and a semi-weakly supervised learning segmentation algorithm based on sparse labels was proposed. In addition, in order to improve the contribution rate of a single segmentation label to the performance of the pre-annotation model, an iterative annotation strategy based on active learning was designed. The experimental results on public MRI datasets show that the proposed algorithm achieved an equivalent pre-annotation performance when the number of segmentation labels was much less than that of the fully supervised learning algorithm, which proves the effectiveness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
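The iterative annotation strategy based on active learning that this abstract describes typically sends the model's most uncertain predictions to the annotator first. One common uncertainty measure, sketched generically here, is the entropy of the predicted class distribution (an illustration; the paper's exact selection criterion may differ):

```python
import math

def predictive_entropy(probs):
    """Shannon entropy of one predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def select_for_annotation(predictions, k=1):
    """Return indices of the k samples the model is least certain about."""
    ranked = sorted(range(len(predictions)),
                    key=lambda i: predictive_entropy(predictions[i]),
                    reverse=True)
    return ranked[:k]

preds = [[0.98, 0.02],   # confident prediction
         [0.55, 0.45],   # near coin-flip: annotate this one first
         [0.90, 0.10]]
queried = select_for_annotation(preds, k=1)
```

Labeling the most uncertain slices first maximizes the information each expensive MRI annotation adds to the pre-annotation model.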
46. Improve robustness of machine learning via efficient optimization and conformal prediction.
- Author
-
Yan, Yan
- Subjects
OPTIMIZATION algorithms ,MACHINE learning ,FORECASTING ,ALGORITHMS ,DIAGNOSIS - Abstract
The advance of machine learning (ML) systems in real-world scenarios usually requires safe deployment in high-stakes applications (e.g., medical diagnosis) for critical decision-making processes. To this end, provable robustness of ML is usually required to measure and understand how reliable the deployed ML system is and how trustworthy its predictions can be. Many studies in recent years have enhanced robustness from different angles, such as variance-regularized robust objective functions and conformal prediction (CP) for uncertainty quantification on testing data. Although these tools provably improve the robustness of ML models, there is still an inevitable gap in integrating them into an end-to-end deployment. For example, robust objectives usually require carefully designed optimization algorithms, while CP treats ML models as black boxes. This paper is a brief introduction to our recent research focusing on filling this gap. Specifically, for learning robust objectives, we designed sample-efficient stochastic optimization algorithms that achieve the optimal (or faster than existing algorithms') convergence rates. Moreover, for CP-based uncertainty quantification, we established a framework to analyze the expected prediction set size (a smaller size means more efficiency) of CP methods in both standard and adversarial settings. This paper elaborates on the key challenges and our exploration towards efficient algorithms, with details of background methods, notions of robustness measures, concepts of algorithmic efficiency, our proposed algorithms, and results. All of them further motivate our future research on risk-aware ML, which can be critical for AI-human collaborative systems. The future work mainly targets designing conformal robust objectives and their efficient optimization algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
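Split conformal prediction, the CP flavor most naturally used as a black-box wrapper, calibrates a score threshold on held-out data and returns every label whose nonconformity score falls below it. A minimal sketch under the common "score = 1 − predicted probability of the true class" convention (illustrative, not the authors' framework; the scores below are made up):

```python
import math

def conformal_threshold(calibration_scores, alpha=0.1):
    """Finite-sample-corrected (1 - alpha) quantile of the calibration scores."""
    n = len(calibration_scores)
    rank = math.ceil((n + 1) * (1.0 - alpha))  # 1-indexed order statistic
    return sorted(calibration_scores)[min(rank, n) - 1]

def prediction_set(label_scores, qhat):
    """All labels whose nonconformity score is within the threshold."""
    return [label for label, s in label_scores.items() if s <= qhat]

cal = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45]
qhat = conformal_threshold(cal, alpha=0.1)
pset = prediction_set({"cat": 0.12, "dog": 0.40, "fox": 0.80}, qhat)
```

The size of `pset` is exactly the "expected prediction set size" quantity the paper analyzes: a smaller set at the same coverage level means a more efficient CP method.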
47. Channel Prediction for Underwater Acoustic Communication: A Review and Performance Evaluation of Algorithms.
- Author
-
Liu, Haotian, Ma, Lu, Wang, Zhaohui, and Qiao, Gang
- Subjects
DEEP learning ,UNDERWATER acoustic communication ,MACHINE learning ,ALGORITHMS ,TELECOMMUNICATION systems ,FORECASTING - Abstract
Underwater acoustic (UWA) channel prediction technology, an important topic in UWA communication, has played an important role in UWA adaptive communication networks and underwater target perception. Although many significant advancements have been achieved in underwater acoustic channel prediction over the years, a comprehensive summary and introduction are still lacking. As the first comprehensive overview of UWA channel prediction, this paper introduces past work and implementation methods for channel prediction from the perspective of linear, kernel-based, and deep learning approaches. Importantly, based on available at-sea experiment datasets, this paper compares the performance of current primary UWA channel prediction algorithms under a unified system framework, providing researchers with a comprehensive and objective understanding of UWA channel prediction. Finally, it discusses the directions and challenges for future research. The survey finds that linear prediction algorithms are the most widely applied, and deep learning, as the most advanced type of algorithm, has moved this field into a new stage. The experimental results show that the linear algorithms have the lowest computational complexity, and when the training samples are sufficient, deep learning algorithms have the best prediction performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
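The linear predictors the survey finds most widely used model the next channel tap as a weighted sum of past taps, with the weights fit by least squares. The sketch below fits a first-order autoregressive predictor in closed form, a deliberately minimal stand-in for the higher-order filters in the literature (the tap values are hypothetical):

```python
def fit_ar1(samples):
    """Least-squares coefficient a for the model x[t] ~ a * x[t-1]."""
    num = sum(samples[t - 1] * samples[t] for t in range(1, len(samples)))
    den = sum(samples[t - 1] ** 2 for t in range(1, len(samples)))
    return num / den

def predict_next(samples, a):
    """One-step-ahead prediction of the next channel tap."""
    return a * samples[-1]

# A geometrically decaying tap is captured exactly by an AR(1) model.
taps = [1.0, 0.5, 0.25, 0.125]
a = fit_ar1(taps)
nxt = predict_next(taps, a)
```

Higher-order versions stack several lagged taps and solve a small normal-equations system, which is why the survey reports linear predictors as having the lowest computational complexity.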
48. Anomaly Detection in Blockchain Networks Using Unsupervised Learning: A Survey.
- Author
-
Cholevas, Christos, Angeli, Eftychia, Sereti, Zacharoula, Mavrikos, Emmanouil, and Tsekouras, George E.
- Subjects
DATA structures ,MACHINE learning ,PRIVATE networks ,BLOCKCHAINS ,ALGORITHMS - Abstract
In decentralized systems, the quest for heightened security and integrity within blockchain networks becomes an issue. This survey investigates anomaly detection techniques in blockchain ecosystems through the lens of unsupervised learning, delving into the intricacies and going through the complex tapestry of abnormal behaviors by examining avant-garde algorithms to discern deviations from normal patterns. By seamlessly blending technological acumen with a discerning gaze, this survey offers a perspective on the symbiotic relationship between unsupervised learning and anomaly detection by reviewing this problem with a categorization of algorithms that are applied to a variety of problems in this field. We propose that the use of unsupervised algorithms in blockchain anomaly detection should be viewed not only as an implementation procedure but also as an integration procedure, where the merits of these algorithms can effectively be combined in ways determined by the problem at hand. In that sense, the main contribution of this paper is a thorough study of the interplay between various unsupervised learning algorithms and how this can be used in facing malicious activities and behaviors within public and private blockchain networks. The result is the definition of three categories, the characteristics of which are recognized in terms of the way the respective integration takes place. When implementing unsupervised learning, the structure of the data plays a pivotal role. Therefore, this paper also provides an in-depth presentation of the data structures commonly used in unsupervised learning-based blockchain anomaly detection. The above analysis is encircled by a presentation of the typical anomalies that have occurred so far along with a description of the general machine learning frameworks developed to deal with them. Finally, the paper spotlights challenges and directions that can serve as a comprehensive compendium for future research efforts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
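A representative unsupervised detector of the kind this survey categorizes is distance-based: a transaction whose k-th nearest neighbor sits far away is flagged as anomalous, with no labels required. A plain-Python sketch on made-up feature vectors (illustrative only; real blockchain features would encode amounts, timing, graph structure, etc.):

```python
import math

def knn_anomaly_scores(points, k=1):
    """Score each point by the distance to its k-th nearest neighbor:
    the larger the score, the more isolated (anomalous) the point."""
    scores = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(dists[k - 1])
    return scores

# Three clustered transactions and one far-off outlier.
points = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]]
scores = knn_anomaly_scores(points)
outlier = scores.index(max(scores))
```

In the integration view the survey advocates, such a score would be one signal among several unsupervised detectors combined according to the problem at hand.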
49. A MULTI-SENTENCE MUSIC HUMMING RETRIEVAL ALGORITHM BASED ON RELATIVE FEATURES AND DEEP LEARNING.
- Author
-
YELIN ZHANG
- Subjects
DEEP learning ,MACHINE learning ,SPEECH perception ,DATABASES ,ALGORITHMS - Abstract
This project studies a fast retrieval method for hummed-music recognition based on sentence features and deep learning. The method proposed in this paper can realize fast retrieval of songs. According to the natural pause patterns of a song, the song database and the song fragments provided by the user are divided into sentences. A BDTW deep learning algorithm is used to calculate the similarity of the songs' pitch, and users can set matching conditions according to their preferences. It can identify the most significant differences between music fragments and rank query results from the database. Then, a DIS-based retrieval method for the music database is proposed, which shortens the acquisition time. Experiments show that the algorithm can recognize hummed songs quickly and efficiently. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
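The BDTW similarity in this abstract is a variant of dynamic time warping (DTW), which aligns two pitch sequences even when they are hummed at different tempos. The classic DTW recurrence, in plain Python (a generic textbook version, not the paper's BDTW):

```python
def dtw_distance(s, t):
    """Dynamic time warping distance with absolute-difference local cost."""
    inf = float("inf")
    n, m = len(s), len(t)
    # D[i][j]: best alignment cost of s[:i] with t[:j].
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch s
                                 D[i][j - 1],      # stretch t
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# A held (repeated) note in a slower rendition still matches perfectly.
hummed = [1, 2, 2, 3]
reference = [1, 2, 3]
```

Because a repeated pitch adds zero cost, a slower or faster humming of the same melody scores as a perfect match against the database entry.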
50. Recommendation Algorithm of Industry Stock Trading Model with TODIM.
- Author
-
Lv, Dongdong, Gong, Yingli, Chen, Jianting, and Xiang, Yang
- Subjects
STOCKS (Finance) ,MACHINE learning ,FINANCIAL markets ,ALGORITHMS ,PROBLEM solving ,CRYPTOCURRENCIES - Abstract
In stock trading, a common phenomenon is that the trends of stocks in the same industry are very similar. In contrast, the movements of stocks in different industries are often different. Therefore, applying the same model to all stock trading is inappropriate without distinguishing the industries to which the stocks belong. However, recommending an optimal industry stock trading model based on performance evaluation indicators is very challenging. First, the indicators of the trading model are diverse. Second, the rankings under multiple indicators are often inconsistent. In this paper, we model the problem to be solved as a multi-criteria decision-making process. Therefore, we first divide the stock dataset into nine industries according to their main business. Then, we apply several machine learning algorithms as candidate models to generate trading signals. Second, we conduct daily trading backtesting based on the trading signals to obtain multiple performance evaluation indicators. Third, we propose an optimal recommendation algorithm for the industry stock trading model with TODIM. The experimental results in the US stock market and China's A-share market show that the proposed algorithm can select a better trading model for out-of-sample industry stocks. Moreover, we effectively evaluate the generalization ability of the algorithm based on the proposed metrics. Finally, the proposed long–short portfolios based on the algorithm have achieved returns exceeding the benchmark on most out-of-sample datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
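TODIM resolves exactly the difficulty this abstract raises, inconsistent rankings across multiple indicators, by scoring each candidate through pairwise dominance with prospect-theory-style loss attenuation. The sketch below follows the textbook TODIM formulation for benefit criteria only; the decision matrix, criteria, and weights are hypothetical, not the authors' setup:

```python
import math

def todim_rank(matrix, weights, theta=1.0):
    """matrix[i][c]: performance of alternative i on benefit criterion c.
    Returns normalized global values in [0, 1] (1 = best alternative)."""
    w_ref = max(weights)
    rel = [w / w_ref for w in weights]  # weights relative to the reference criterion
    sum_rel = sum(rel)

    def phi(i, j, c):
        diff = matrix[i][c] - matrix[j][c]
        if diff > 0:                                   # gain
            return math.sqrt(rel[c] * diff / sum_rel)
        if diff < 0:                                   # loss, attenuated by theta
            return -math.sqrt(sum_rel * (-diff) / rel[c]) / theta
        return 0.0

    n = len(matrix)
    delta = [sum(phi(i, j, c) for j in range(n) for c in range(len(weights)))
             for i in range(n)]
    lo, hi = min(delta), max(delta)
    return [(d - lo) / (hi - lo) if hi > lo else 1.0 for d in delta]

# Hypothetical candidate trading models scored on (return, Sharpe ratio, win rate).
scores = todim_rank([[0.9, 0.8, 0.7],    # model A: dominates on every indicator
                     [0.5, 0.4, 0.6],    # model B
                     [0.2, 0.3, 0.1]],   # model C: dominated on every indicator
                    weights=[0.5, 0.3, 0.2])
```

The model with the highest global value would be the one recommended for that industry's out-of-sample trading.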