5,830 results
Search Results
2. RF-KELM indoor positioning algorithm based on WiFi RSS fingerprint.
- Author
-
Hou, Bingnan and Wang, Yanchun
- Subjects
HUMAN fingerprints ,MACHINE learning ,ALGORITHMS ,FINGERPRINT databases ,SIGNAL processing ,ELECTRONIC data processing - Abstract
WiFi-based fingerprint indoor positioning technology has attracted wide attention, but it still faces the challenge of poor robustness to signal changes, and positioning services require fast and accurate position estimation. Therefore, a random forest-kernel extreme learning machine (RF-KELM) positioning algorithm with good overall performance is proposed in this paper. The algorithm comprises an offline phase and an online phase. In the offline phase, the original WiFi fingerprint data are first transformed into a form more suitable for positioning. Then, access point (AP) selection is performed on the fingerprint database, which contains many useless APs, using a random forest (RF) that can evaluate feature importance. Finally, the KELM is trained on the sub-database that has undergone data transformation and AP selection. In the online phase, the received signal is first processed, and the trained KELM then predicts the position from the processed signal. The performance of the proposed RF-KELM positioning algorithm is thoroughly tested on a publicly available dataset, and the experimental results demonstrate that the algorithm not only achieves high positioning accuracy and robustness but also takes only 0.08 s to position online. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
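The two-phase RF-KELM pipeline described in the abstract above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the synthetic data, number of retained APs, kernel width, and regularization constant are all assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-100, -30, size=(200, 50))   # RSS fingerprints from 50 APs (dBm)
y = rng.uniform(0, 20, size=(200, 2))        # 2-D reference positions (m)

# Offline: rank APs by RF feature importance and keep the top k
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
top_aps = np.argsort(rf.feature_importances_)[::-1][:20]
Xs = X[:, top_aps]

# Train a KELM: output weights beta = (K + I/C)^-1 T with an RBF kernel
def rbf(A, B, gamma=0.01):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

C = 100.0
K = rbf(Xs, Xs)
beta = np.linalg.solve(K + np.eye(len(Xs)) / C, y)

# Online: predict the position of a newly observed fingerprint
x_new = rng.uniform(-100, -30, size=(1, 50))[:, top_aps]
pos = rbf(x_new, Xs) @ beta
```

The closed-form KELM solve is what makes the online phase fast: prediction reduces to one kernel evaluation and one matrix product.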
3. SOFTWARE DEFECT PREDICTION APPROACHES REVISITED.
- Author
-
Shebl, Khaled S., Afify, Yasmine M., and Badr, Nagwa
- Subjects
SEMANTICS ,DATABASES ,ALGORITHMS ,COMPUTER software testing ,MACHINE learning - Abstract
A crucial field in software development and testing is Software Defect Prediction (SDP), because forecasting software defects at an early stage improves the quality, dependability, efficiency, and cost of the software. Many existing models predict defects to facilitate the software testing process for testers. A comprehensive review of these models from different perspectives is crucial to help new researchers enter this field and learn about its latest developments. Algorithms, method types, datasets, and tools were the only perspectives discussed in the current literature; a comprehensive study that takes a wide spectrum of viewpoints into account has not yet been published. Examining the development and advancement of SDP-related studies is the goal of this literature review. It provides a comprehensive and updated state of the art that satisfies all stated criteria. Out of 591 papers retrieved from 6 reputable databases, 73 papers were eligible for analysis. This review addresses relevant research questions regarding techniques and method types, data details, tools, code syntax, semantics, and structural and domain information. The motivation for conducting this comprehensive review is to equip readers with the necessary information and keep them informed about the software defect prediction domain. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
4. Performance analysis of deep learning-based object detection algorithms on COCO benchmark: a comparative study.
- Author
-
Tian, Jiya, Jin, Qiangshan, Wang, Yizong, Yang, Jie, Zhang, Shuping, and Sun, Dengxun
- Subjects
OBJECT recognition (Computer vision) ,DEEP learning ,MACHINE learning ,ALGORITHMS ,SMART cities ,URBAN renewal - Abstract
This paper thoroughly explores the role of object detection in smart cities, specifically focusing on advancements in deep learning-based methods. Deep learning models have gained popularity for their autonomous feature learning, surpassing traditional approaches. Despite this progress, challenges remain, such as achieving high accuracy in urban scenes and meeting real-time requirements. The study contributes by analyzing state-of-the-art deep learning algorithms, identifying accurate models for smart cities, and evaluating real-time performance using the Average Precision at Medium Intersection over Union (IoU) metric. The reported results showcase the performance of various algorithms, with Dynamic Head (DyHead) emerging as the top scorer, excelling in accurately localizing and classifying objects. Its high precision and recall at medium IoU thresholds signify robustness. The paper also suggests considering the mean Average Precision (mAP) metric, where available, for a comprehensive evaluation across IoU thresholds. Even so, DyHead stands out as the superior algorithm, particularly at medium IoU thresholds, making it suitable for precise object detection in smart city applications. The performance analysis using Average Precision at Medium IoU is reinforced by the Average Precision at Low IoU (APL), which consistently depicts DyHead's superiority. These findings provide valuable insights for researchers and practitioners, guiding them toward employing DyHead for tasks prioritizing accurate object localization and classification in smart cities. Overall, the paper navigates the complexities of object detection in urban environments, presenting DyHead as a leading solution with robust performance metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
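For context on the IoU-based metrics referenced in the abstract above, here is a minimal IoU computation; the thresholding convention in the comment is COCO's standard, not code from the paper:

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A detection counts as a true positive at threshold t when IoU >= t;
# COCO's mAP averages AP over t = 0.50 to 0.95 in steps of 0.05.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```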
5. Artificial Intelligence Algorithms for Healthcare.
- Author
-
Chumachenko, Dmytro and Yakovlev, Sergiy
- Subjects
ARTIFICIAL intelligence ,DEEP learning ,ALGORITHMS ,MACHINE learning ,INFORMATION technology ,MEDICAL care ,MOTION capture (Human mechanics) ,MEDICAL technology - Abstract
Artificial intelligence (AI) algorithms are playing a crucial role in transforming healthcare by enhancing the quality, accessibility, and efficiency of medical care, research, and operations. These algorithms enable healthcare providers to offer more accurate diagnoses, predict outcomes, and customize treatments to individual patient needs. AI also improves operational efficiency by automating routine tasks and optimizing resource management. However, there are challenges to adopting AI in healthcare, such as data privacy concerns and potential biases in algorithms. Collaboration among stakeholders is necessary to ensure ethical use of AI and its positive impact on the field. AI also has applications in medical research, preventive medicine, and public health. It is important to recognize that AI should augment, not replace, the expertise and compassionate care provided by healthcare professionals. The ethical implications and societal impact of AI in healthcare must be carefully considered, guided by fairness, transparency, and accountability principles. Several research papers in this special issue explore the application of AI algorithms in various aspects of healthcare, such as gait analysis for Parkinson's disease diagnosis, human activity recognition, heart disease prediction, compliance assessment with clinical protocols, epidemic management, neurological complications identification, fall prevention, leukemia diagnosis, and genetic clinical pathways. These studies demonstrate the potential of AI in improving medical diagnostics, patient monitoring, and personalized care. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
6. Weather Radar High-Resolution Spectral Moment Estimation Using Bidirectional Extreme Learning Machine.
- Author
-
Zhongyuan Wang, Ling Qiao, Yu Jiang, Mingwei Shen, and Guodong Han
- Subjects
MACHINE learning ,POWER spectra ,RADAR meteorology ,PROBLEM solving ,ALGORITHMS - Abstract
Because the performance of the spectral moment estimation algorithms commonly used in engineering degrades under low-SNR conditions, this paper introduces the Extreme Learning Machine (ELM) to the spectral moment estimation of weather signals, exploiting the correlation between signals of adjacent range cells. To solve the problem that the number of hidden layer nodes in the ELM algorithm is difficult to determine, the Bidirectional Extreme Learning Machine (B-ELM) algorithm is applied to achieve high-resolution spectral moment estimation. First, to improve the SNR of the training samples, time-domain pulse signals are converted into weather power spectra using the Welch method. Then, the parameters of the B-ELM hidden layer nodes are calculated directly by backpropagating the network residuals. The model parameters are optimized according to the least-squares solution, and the optimal number of hidden layer nodes is determined adaptively. Finally, the optimized B-ELM model is employed for the spectral moment estimation of weather signals. Validation on measured IDRA weather radar data shows that the algorithm is fast and accurate for spectral moment estimation and easy to implement in engineering. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
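The two building blocks named above, a Welch power spectrum and an ELM fit by least squares, can be sketched together. Note this shows plain ELM, not B-ELM (whose incremental, residual-driven node addition is omitted), and the signal, targets, and hidden layer size are invented for illustration:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
# Toy pulse time series for one range cell; Welch averaging raises the SNR
x = rng.standard_normal(1024) + np.sin(2 * np.pi * 0.1 * np.arange(1024))
f, Pxx = welch(x, nperseg=128)

# Basic ELM fit: random hidden layer, output weights by least squares
X = Pxx[None, :]                     # one training power spectrum
T = np.array([[0.1, 2.0, 0.5]])      # toy targets: power, velocity, width
W = rng.standard_normal((X.shape[1], 32))
b = rng.standard_normal(32)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden activations
beta = np.linalg.pinv(H) @ T             # least-squares output weights
pred = H @ beta
```

B-ELM differs in that pairs of hidden nodes are computed from the current network residual rather than drawn at random, which is how the node count is determined adaptively.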
7. Survey on Machine Learning Biases and Mitigation Techniques.
- Author
-
Siddique, Sunzida, Haque, Mohd Ariful, George, Roy, Gupta, Kishor Datta, Gupta, Debashis, and Faruk, Md Jobair Hossain
- Subjects
MACHINE learning ,ALGORITHMS ,POLICY sciences ,BIAS (Law) ,MACHINE theory - Abstract
Machine learning (ML) has become increasingly prevalent in various domains. However, ML algorithms sometimes produce unfair outcomes that discriminate against certain groups. Bias occurs when a model's results are systematically incorrect, and these biases appear at various phases of the ML pipeline, such as data collection, pre-processing, model selection, and evaluation. A variety of bias reduction techniques have been proposed for ML; by changing the data, the model itself, or both, or by adding fairness constraints, these methods try to lessen bias. The best technique depends on the particular context and application, because each technique has advantages and disadvantages. Therefore, in this paper, we present a comprehensive survey of bias mitigation techniques in ML, with an in-depth exploration of methods including adversarial training. We examine the diverse types of bias that can afflict ML systems, elucidate current research trends, and address future challenges. Our discussion encompasses a detailed analysis of pre-processing, in-processing, and post-processing methods, including their respective pros and cons. Moreover, we go beyond qualitative assessments by quantifying the strategies for bias reduction and providing empirical evidence and performance metrics. This paper serves as a resource for researchers, practitioners, and policymakers seeking to navigate the intricate landscape of bias in ML, offering both a profound understanding of the issue and actionable insights for responsible and effective bias mitigation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
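One of the quantitative fairness checks a survey like this covers can be shown concretely. Demographic parity difference is a simple audit metric, used here as a hedged illustration (the data and group labels are invented, and real mitigation pipelines use many more metrics):

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Difference in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# A post-processing-style audit: a gap near 0 suggests demographic parity
preds = [1, 0, 1, 1, 0, 0, 1, 0]
grp   = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_diff(preds, grp))  # |0.75 - 0.25| = 0.5
```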
8. Privacy-Preserving Federated Deep Learning Diagnostic Method for Multi-Stage Diseases.
- Author
-
Jinbo Yang, Hai Huang, Lailai Yin, Jiaxing Qu, and Wanjuan Xie
- Subjects
ARTIFICIAL neural networks ,MACHINE learning ,INTEGRATED circuits ,DATA privacy ,ALGORITHMS ,NATURAL languages ,DEEP learning - Abstract
Diagnosing multi-stage diseases typically requires doctors to consider multiple data sources, including clinical symptoms, physical signs, biochemical test results, imaging findings, pathological examination data, and even genetic data. When applying machine learning modeling to predict and diagnose multi-stage diseases, several challenges need to be addressed. First, the model needs to handle multimodal data, as the data doctors use for diagnosis include image data, natural language data, and structured data. Second, patients' data privacy must be protected, as these data contain the most sensitive and private information. Lastly, for the model to be practical, its computational requirements should not be too high. To address these challenges, this paper proposes a privacy-preserving federated deep learning diagnostic method for multi-stage diseases. The method improves the forward and backward propagation processes of deep neural network modeling algorithms and introduces a homomorphic encryption step to design a federated modeling algorithm that needs no arbiter. It also utilizes dedicated integrated circuits to implement the Paillier algorithm in hardware, providing accelerated support for homomorphic encryption during modeling. Finally, this paper designs and conducts experiments to evaluate the proposed solution. The experimental results show that in privacy-preserving federated deep learning diagnostic modeling, the proposed method achieves the same modeling performance as ordinary modeling without privacy protection and higher modeling speed than similar algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
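The property that makes Paillier useful for federated aggregation, adding values while they stay encrypted, can be demonstrated with a toy keypair. The tiny primes below are purely illustrative and insecure; this is textbook Paillier, not the paper's hardware design:

```python
from math import gcd
import random

# Toy Paillier keypair (tiny primes, illustration only, never for real use)
p, q = 47, 59
n = p * q
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
n2, g = n * n, n + 1                           # g = n + 1 is a standard choice

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: Enc(a) * Enc(b) decrypts to a + b (mod n),
# so a server can sum clients' masked updates without seeing them.
a, b = 123, 456
assert decrypt((encrypt(a) * encrypt(b)) % n2) == (a + b) % n
```

The modular exponentiations in `encrypt`/`decrypt` are exactly the operations the paper offloads to dedicated integrated circuits.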
9. Community Discovery Algorithm Based on Multi-Relationship Embedding.
- Author
-
Dongming Chen, Mingshuo Nie, Jie Wang, and Dongqi Wang
- Subjects
EMBEDDED computer systems ,ALGORITHMS ,MATRICES (Mathematics) ,CONVOLUTIONAL neural networks ,MACHINE learning - Abstract
Real-world complex systems can often be modeled as network structures, and community discovery algorithms for complex networks enable researchers to understand the internal structure and implicit information of networks. Existing community discovery algorithms are usually designed for single-layer networks or single-interaction relationships and do not consider the attribute information of nodes. However, many real-world networks consist of multiple types of nodes and edges, and both may carry rich semantic information. Methods for single-layer networks cannot effectively handle multi-layer information, multi-relationship information, and attribute information. This paper proposes a community discovery algorithm based on multi-relationship embedding. The proposed algorithm first models the nodes in the network and generates, via a node encoder, an embedding matrix for each specific relationship type. These relationship-specific node embedding matrices are then aggregated by a Graph Convolutional Network (GCN) to obtain the final node embedding matrix. This strategy captures the rich structural and attribute information in multi-relational networks. Experiments were conducted on different datasets against baselines, and the results show that the proposed algorithm obtains significant performance improvements in community discovery, node clustering, and similarity search tasks; compared to the best-performing baseline, it achieves an average improvement of 3.1% on Macro-F1 and 4.7% on Micro-F1, which demonstrates its effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
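The aggregation step described above, combining per-relation node embeddings through a GCN layer, can be sketched numerically. The per-relation matrices here are random stand-ins for the paper's encoder outputs, and the concatenate-then-propagate scheme is one plausible reading, not the authors' exact architecture:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5  # number of nodes

# One embedding matrix per relationship type (stand-ins for encoder outputs)
H_rel = [rng.standard_normal((n, 8)) for _ in range(3)]
H = np.concatenate(H_rel, axis=1)          # stack relation-specific embeddings

# Symmetric-normalized GCN propagation: ReLU(D^-1/2 (A + I) D^-1/2 H W)
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)                     # make the toy graph undirected
A_hat = A + np.eye(n)                      # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(1)))
W = rng.standard_normal((H.shape[1], 16))
Z = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0)  # final embeddings
```

Rows of `Z` would then feed the downstream community discovery, clustering, or similarity search tasks.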
10. Explainable Rules and Heuristics in AI Algorithm Recommendation Approaches--A Systematic Literature Review and Mapping Study.
- Author
-
García-Peñalvo, Francisco José, Vázquez-Ingelmo, Andrea, and García-Holgado, Alicia
- Subjects
ARTIFICIAL intelligence ,LITERATURE reviews ,SOFTWARE engineering ,ALGORITHMS ,HEURISTIC ,SOFTWARE engineers - Abstract
The exponential use of artificial intelligence (AI) to solve and automate complex tasks has catapulted its popularity, generating challenges that need to be addressed. While AI is a powerful means to discover interesting patterns and obtain predictive models, the use of these algorithms comes with great responsibility: an incomplete or unbalanced set of training data, or an improper interpretation of the models' outcomes, could lead to misleading and ultimately dangerous conclusions. For these reasons, it is important to rely on expert knowledge when applying these methods. However, not every user can count on this specific expertise; non-AI-expert users could also benefit from applying these powerful algorithms to their domain problems, but they need basic guidelines to get the most out of AI models. The goal of this work is to present a systematic review of the literature analyzing studies whose outcomes are explainable rules and heuristics for selecting suitable AI algorithms given a set of input features. The systematic review follows the methodology proposed by Kitchenham and other authors in the field of software engineering. As a result, 9 papers that tackle AI algorithm recommendation through tangible and traceable rules and heuristics were collected. The small number of retrieved papers suggests a lack of explicitly reported rules and heuristics when testing the suitability and performance of AI algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
11. A Machine Learning Model to Predict Citation Counts of Scientific Papers in Otology Field.
- Author
-
Alohali, Yousef A., Fayed, Mahmoud S., Mesallam, Tamer, Abdelsamad, Yassin, Almuhawas, Fida, and Hagr, Abdulrahman
- Subjects
DECISION trees ,SERIAL publications ,NATURAL language processing ,BIBLIOMETRICS ,MACHINE learning ,REGRESSION analysis ,RANDOM forest algorithms ,CITATION analysis ,DESCRIPTIVE statistics ,PREDICTION models ,ARTIFICIAL neural networks ,MEDICAL research ,MEDICAL specialties & specialists ,ALGORITHMS - Abstract
One of the most widely used measures of scientific impact is the number of citations. However, due to their heavy-tailed distribution, citation counts are fundamentally difficult to predict, though prediction can be improved. This study investigated the factors and paper sections that influence the citation count of a scientific paper in the otology field. To that end, this work proposes a new solution that uses machine learning and natural language processing to process English text and predict a paper's citation count. Different algorithms are implemented in this solution, such as linear regression, boosted decision tree, decision forest, and neural networks. The application of neural network regression revealed that papers' abstracts have the most influence on the citation counts of otological articles. The solution was developed in visual programming using Microsoft Azure Machine Learning at the back end and Programming Without Coding Technology at the front end. We recommend using machine learning models to improve the abstracts of research articles to attract more citations. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
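The simplest of the regressors compared above, linear regression on per-paper features, can be sketched with ordinary least squares. The features and synthetic targets below are invented stand-ins (the paper derives its features from the article text via NLP and uses Azure ML):

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy per-paper features (e.g. abstract word count, readability, paper age)
X = rng.uniform(0, 1, size=(100, 3))
true_w = np.array([12.0, 3.0, 5.0])
y = X @ true_w + rng.normal(0, 0.5, 100)   # synthetic citation counts

# Ordinary least squares with an intercept column
Xb = np.c_[np.ones(len(X)), X]
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
pred = Xb @ w
```

Comparing coefficient magnitudes (here `w[1:]`) is one crude way to see which feature carries the most predictive weight, analogous to the paper's finding that abstracts matter most.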
12. CyMac: Diving Deep into the Application of Machine Learning Algorithms in Cyber Security.
- Author
-
Das, Bishwajit, Yadav, Nikita, Chauhan, Deepa, and Gupta, Sanju
- Subjects
INTERNET security ,ALGORITHMS ,MACHINE learning ,PHISHING prevention ,JURISDICTION - Abstract
Machine learning has emerged as a pivotal technology in contemporary and prospective cyber threat intelligence systems, with numerous organizations seamlessly integrating it into their operations. However, the current state of machine learning in cyber defence is still in its early stages, leaving noticeable unexplored territory in both research and practical implementation. This paper marks an initial endeavour to offer a comprehensive understanding of machine learning across the entire spectrum of cybersecurity domains, catering to potential end users with enthusiasm for this field of study. This paper aims to serve as a source of inspiration for significant advancements in ML within the cyber defence zone, laying the groundwork for the broader adoption of ML mitigations to safeguard present and future systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
13. A MULTI-SENTENCE MUSIC HUMMING RETRIEVAL ALGORITHM BASED ON RELATIVE FEATURES AND DEEP LEARNING.
- Author
-
YELIN ZHANG
- Subjects
DEEP learning ,MACHINE learning ,SPEECH perception ,DATABASES ,ALGORITHMS - Abstract
This project studies a fast retrieval method for hummed music queries based on sentence-level features and deep learning. The proposed method enables fast song retrieval. Based on the natural pause patterns of a song, both the songs in the database and the fragments hummed by the user are divided into sentences. A deep learning algorithm based on BDTW is used to calculate pitch similarity between fragments, and users can set matching conditions according to their preferences. The method can identify the most significant differences between music fragments and rank the query results in the database accordingly. A DIS-based retrieval method for the music database is then proposed, which shortens the retrieval time. Experiments show that the algorithm can recognize hummed songs quickly and efficiently. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
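BDTW is a variant of dynamic time warping, whose exact formulation the abstract does not give; plain DTW over toy pitch sequences illustrates the sentence-by-sentence matching idea (the MIDI-style pitch values and segment names are invented):

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two pitch sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Match a hummed phrase against per-sentence segments of each song
query = [60, 62, 64, 62, 60]
segments = {"songA-s1": [60, 62, 64, 62, 60], "songB-s1": [55, 57, 55, 53, 50]}
best = min(segments, key=lambda k: dtw(query, segments[k]))
print(best)  # songA-s1 (distance 0)
```

DTW tolerates tempo differences between the hummed query and the stored melody, which is why DTW-family measures are standard in query-by-humming systems.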
14. DDPG-Based Convex Programming Algorithm for the Midcourse Guidance Trajectory of Interceptor.
- Author
-
Li, Wan-Li, Li, Jiong, Ye, Ji-Kun, Shao, Lei, and Zhou, Chi-Jun
- Subjects
REINFORCEMENT learning ,DEEP reinforcement learning ,MACHINE learning ,NONCONVEX programming ,CONVEX programming ,ALGORITHMS ,APPROXIMATION error - Abstract
To address the problem of low accuracy and efficiency in trajectory planning algorithms for interceptors facing multiple constraints during the midcourse guidance phase, an improved trajectory convex programming method based on the lateral distance domain is proposed. This algorithm achieves fast trajectory planning, reduces the approximation error of the planned trajectory, and improves the accuracy of trajectory guidance. First, the concept of the lateral distance domain is proposed, and the motion model of the midcourse guidance segment of the interceptor is converted from the time domain to the lateral distance domain. Second, the motion model and the multiple constraints are convexified and discretized, and a discrete convex trajectory model is established in the lateral distance domain. Third, a deep reinforcement learning algorithm is trained to provide the initial solution of the trajectory convex program, yielding a high-quality initial trajectory. Finally, a dynamic adjustment method based on the distribution of approximation errors is designed to achieve efficient dynamic adjustment of grid points during iterative solving. The simulation experiments show that the improved trajectory convex programming algorithm proposed in this paper not only improves accuracy and efficiency but also has good optimization performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Formulation of Feature and Label Space Using Modified Delphi in Support of Developing a Machine-Learning Algorithm to Automate Clash Resolution.
- Author
-
Harode, Ashit, Thabet, Walid, and Leite, Fernanda
- Subjects
MACHINE learning ,LITERATURE reviews ,ALGORITHMS ,EVIDENCE gaps ,CONSTRUCTION projects - Abstract
To improve the current manual and iterative nature of clash resolution on construction projects, current research efforts continue to explore and test the utilization of machine-learning algorithms to automate the process. Though current research shows significant accuracy in automating clash resolution, many have failed to provide clear explanation and justification for the selection of their feature and label space. Since this is critical in developing an effective and explainable solution in machine learning, it is crucial to address this research gap. In this paper, the authors utilize an in-depth literature review and industry interviews to capture domain knowledge on how design clashes are resolved by industry experts. From analysis of the knowledge captured, we identified 23 factors considered by experts when resolving clashes and five alternative solutions/options to resolve a clash. Using a pool of industry experts, a modified Delphi approach was conducted to validate the factors and options and to determine a priority ranking. The authors identified 94 industry experts based on a predetermined qualification matrix to take part in the modified Delphi. Twelve participants responded and took part in the first round, and 11 completed the second round. A consensus was reached on all clash factors and resolution options. Factors including "clashing elements type," "constrained slope," "critical element in the clash," "location of the clash," "code compliance," and "project stage clashing element is in" were ranked as the most important factors, while "clashing element material" and "insulation type" were considered the least important. Participants also showed more preference to the "moving the clashing element with low priority in/along x-y-z directions" option to resolve clashes. 
These identified factors and options will be utilized to collect specific clash data to train and test effective and explainable machine-learning algorithms toward automating clash resolution. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. An Efficient Optimization Approach for Designing Machine Models Based on Combined Algorithm.
- Author
-
Larijani, Ata and Dehghani, Farbod
- Subjects
INTRUSION detection systems (Computer security) ,SUPERVISED learning ,MACHINE design ,SUPPORT vector machines ,ALGORITHMS ,SUBSET selection - Abstract
Many intrusion detection algorithms that use optimization have been developed and are commonly used to detect intrusions. Feature selection and the classifier's parameter settings are essential to how well an intrusion detection system works. This paper provides a detailed explanation and discussion of an improved intrusion detection method for multiclass classification. The proposed solution combines the modified teaching-learning-based optimization (MTLBO) algorithm, the modified JAYA (MJAYA) algorithm, and a support vector machine (SVM). MTLBO is used with supervised machine learning (ML) to select feature subsets. Feature subset selection (FSS), which aims to select the fewest features possible without impairing accuracy, is a multiobjective optimization problem. This paper presents MTLBO as a mechanism for FSS and investigates its algorithm-specific, parameter-free design. The MJAYA algorithm is then used to optimize the C and gamma parameters of the SVM classifier. When the proposed MTLBO-MJAYA-SVM algorithm was compared with the original TLBO and JAYA algorithms on a well-known intrusion detection dataset, it significantly outperformed them. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
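JAYA's appeal is that it has no algorithm-specific parameters beyond population size and iteration count. Below is a sketch of the standard (unmodified) JAYA update rule on a toy 2-D minimization; the paper's MJAYA variant and its application to tuning the SVM's C and gamma are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(4)

def sphere(x):
    """Toy objective: sum of squares per candidate row."""
    return (x ** 2).sum(axis=1)

# JAYA update: move toward the best solution and away from the worst
pop = rng.uniform(-10, 10, size=(20, 2))
for _ in range(200):
    f = sphere(pop)
    best, worst = pop[f.argmin()], pop[f.argmax()]
    r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
    cand = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
    cand = np.clip(cand, -10, 10)
    improved = sphere(cand) < f          # greedy acceptance
    pop[improved] = cand[improved]

print(sphere(pop).min())  # best objective value after optimization
```

To tune an SVM the same way, each candidate row would encode a (C, gamma) pair and the objective would be cross-validated classification error.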
18. A Study of Entity Relationship Extraction Algorithms Based on Symmetric Interaction between Data, Models, and Inference Algorithms.
- Author
-
Feng, Ping, Su, Nannan, Xing, Jiamian, Bian, Jing, and Ouyang, Dantong
- Subjects
MACHINE learning ,ALGORITHMS ,CHINESE language ,WORD recognition ,SEMANTICS - Abstract
The purpose of this paper is to address the extraction of entities and relationships from unstructured Chinese text, with particular emphasis on the challenges of Named Entity Recognition (NER) and Relation Extraction (RE). This is achieved by integrating external lexical information and exploiting the abundant semantic information available in Chinese. We adopt a pipeline model that handles NER and RE separately, introducing an innovative NER model that integrates Chinese pinyin, characters, and words to enhance recognition capability. At the same time, we incorporate information such as entity distance, sentence length, and part-of-speech tags to improve relation extraction performance. We also delve into the interactions among data, models, and inference algorithms to improve learning efficiency on this task. In comparison to existing methods, our model achieves significant results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Predicting Money Laundering Using Machine Learning and Artificial Neural Networks Algorithms in Banks.
- Author
-
Lokanan, Mark E.
- Subjects
ARTIFICIAL neural networks ,MONEY laundering ,MACHINE learning ,ALGORITHMS ,RANDOM forest algorithms - Abstract
This paper aims to build machine learning and neural network models to detect the probability of money laundering in banks. The data came from a simulation of actual transactions flagged for money laundering in Middle Eastern banks. The main findings highlight that criminal networks mainly use the integration stage to integrate money into the financial system. Fraudsters prefer to launder funds in the early hours and morning, followed by the afternoon of the business day. Additionally, the Naïve Bayes and Random Forest classifiers were identified as the two best-performing models for predicting money laundering transactions in banks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
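A head-to-head comparison of the two best-performing classifiers named above can be sketched on synthetic data; the feature set here is an invented stand-in for flagged-transaction attributes (amount, hour of day, and so on), not the paper's simulated Middle Eastern banking data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Imbalanced synthetic stand-in: most transactions are legitimate
X, y = make_classification(n_samples=600, n_features=8, weights=[0.9],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = {}
for model in (GaussianNB(), RandomForestClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    scores[type(model).__name__] = model.score(X_te, y_te)
print(scores)
```

With strong class imbalance, accuracy alone can flatter a model that predicts the majority class; recall or AUC on the laundering class would be the more informative comparison in practice.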
20. Recent Advances and Applications of Textile Technology in Patient Monitoring.
- Author
-
Stern, Lindsay and Roshan Fekr, Atena
- Subjects
SLEEP quality ,SUPPORT vector machines ,TEXTILES ,VITAL signs ,PRESSURE ulcers ,WEARABLE technology ,MACHINE learning ,PATIENT monitoring ,SLEEP ,BODY movement ,HEART beat ,TECHNOLOGY ,ARTIFICIAL neural networks ,ALGORITHMS - Abstract
Sleep monitoring has become a prevalent area of research where body position and physiological data, such as heart rate and respiratory rate, are monitored. Numerous critical health problems are associated with poor sleep, such as pressure sore development, sleep disorders, and low sleep quality, which can lead to an increased risk of falls, cardiovascular diseases, and obesity. Current monitoring systems can be costly, laborious, and taxing on hospital resources. This paper reviews the most recent solutions for contactless textile technology in the form of bed sheets or mats to monitor body positions, vital signs, and sleep, both commercially and in the literature. This paper is organized into four categories: body position and movement monitoring, physiological monitoring, sleep monitoring, and commercial products. A detailed performance evaluation was carried out, considering the detection accuracy as well as the sensor types and algorithms used. The areas that need further research and the challenges for each category are discussed in detail. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
21. Automatic Paper Recommendation Algorithm Based on Multi-View Fusion TextRCNN.
- Author
-
杨秀璋, 武帅, 杨琪, 项美玉, 李娜, 周既松, and 赵小明
- Subjects
CONVOLUTIONAL neural networks ,DEEP learning ,MACHINE learning ,AUTOMATIC classification ,ACCURACY of information ,ALGORITHMS - Abstract
Copyright of Journal of Computer Engineering & Applications is the property of Beijing Journal of Computer Engineering & Applications Journal Co Ltd. and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
22. A Multi-Agent RL Algorithm for Dynamic Task Offloading in D2D-MEC Network with Energy Harvesting †.
- Author
-
Mi, Xin, He, Huaiwen, and Shen, Hong
- Subjects
ENERGY harvesting ,MACHINE learning ,ALGORITHMS ,INTEGER programming ,DYNAMIC loads ,MOBILE computing ,NONLINEAR programming - Abstract
Delay-sensitive task offloading in a device-to-device assisted mobile edge computing (D2D-MEC) system with energy harvesting devices is a critical challenge due to the dynamic load level at edge nodes and the variability in harvested energy. In this paper, we propose a joint dynamic task offloading and CPU frequency control scheme for delay-sensitive tasks in a D2D-MEC system, taking into account the intricacies of multi-slot tasks, characterized by diverse processing speeds and data transmission rates. Our methodology involves meticulous modeling of task arrival and service processes using queuing systems, coupled with the strategic utilization of D2D communication to alleviate edge server load and prevent network congestion effectively. Central to our solution is the formulation of average task delay optimization as a challenging nonlinear integer programming problem, requiring intelligent decision making regarding task offloading for each generated task at active mobile devices and CPU frequency adjustments at discrete time slots. To navigate the intricate landscape of the extensive discrete action space, we design an efficient multi-agent DRL learning algorithm named MAOC, which is based on MAPPO, to minimize the average task delay by dynamically determining task-offloading decisions and CPU frequencies. MAOC operates within a centralized training with decentralized execution (CTDE) framework, empowering individual mobile devices to make decisions autonomously based on their unique system states. Experimental results demonstrate its swift convergence and operational efficiency, and it outperforms other baseline algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Cognitive decline assessment using semantic linguistic content and transformer deep learning architecture.
- Author
-
PL, Rini and KS, Gayathri
- Subjects
DIAGNOSIS of dementia ,COGNITION disorders diagnosis ,SPEECH evaluation ,CROSS-sectional method ,PREDICTION models ,TASK performance ,DESCRIPTIVE statistics ,NATURAL language processing ,LINGUISTICS ,EXPERIMENTAL design ,DEEP learning ,COMPUTER-aided diagnosis ,LATENT semantic analysis ,NEUROPSYCHOLOGICAL tests ,RESEARCH ,SEMANTIC memory ,EARLY diagnosis ,COMPARATIVE studies ,MACHINE learning ,FACTOR analysis ,ALGORITHMS ,DEMENTIA patients - Abstract
Background: Dementia is a cognitive decline that leads to the progressive deterioration of an individual's ability to perform daily activities independently. As a result, a considerable amount of time and resources are spent on caretaking. Early detection of dementia can significantly reduce the effort and resources needed for caretaking. Aims: This research proposes an approach for assessing cognitive decline by analysing speech data, specifically focusing on speech relevance as a crucial indicator for memory recall. Methods & Procedures: This is a cross‐sectional, online, self‐administered study. The proposed method used a deep learning architecture based on transformers, with BERT (Bidirectional Encoder Representations from Transformers) and Sentence‐Transformer to derive encoded representations of speech transcripts. These representations provide contextually descriptive information that is used to analyse the relevance of sentences in their respective contexts. The encoded information is then compared using cosine similarity metrics to measure the relevance of uttered sequences of sentences. The study uses the Pitt Corpus Dementia dataset for experimentation, which consists of speech data from individuals with and without dementia. The accuracy of the proposed multi‐QA‐MPNet (Multi‐Query Maximum Inner Product Search Pretraining) model is compared with other pretrained transformer models of Sentence‐Transformer. Outcomes & Results: The results show that the proposed approach outperforms the other models in capturing context-level information, particularly semantic memory. Additionally, the study explores the suitability of different similarity measures to evaluate the relevance of uttered sequences of sentences. The experimentation reveals that cosine similarity is the most appropriate measure for this task.
Conclusions & Implications: This finding has significant implications for the early warning signs of dementia, as it suggests that cosine similarity metrics can effectively capture the semantic relevance of spoken language. Persistent cognitive decline over time is one of the indicators of dementia. Additionally, early dementia could be recognised by analysing other modalities, such as speech and brain images. WHAT THIS PAPER ADDS: What is already known on this subject: It is already known that speech‐ and language‐based detection methods can be useful for dementia diagnosis, as language difficulties are often early signs of the disease. Additionally, deep learning algorithms have shown promise in detecting and diagnosing dementia through analysing large datasets, particularly in speech‐ and language‐based detection methods. However, further research is needed to validate the performance of these algorithms on larger and more diverse datasets and to address potential biases and limitations. What this paper adds to existing knowledge: This study presents a unique and effective approach for cognitive decline assessment through analysing speech data. The study provides valuable insights into the importance of context and semantic memory in accurately detecting potential dementia and demonstrates the applicability of deep learning models for this purpose. The findings of this study have important clinical implications and can inform future research and development in the field of dementia detection and care. What are the potential or actual clinical implications of this work?: The proposed approach for cognitive decline assessment using speech data and deep learning models has significant clinical implications. It has the potential to improve the accuracy and efficiency of dementia diagnosis, leading to earlier detection and more effective treatments, which can improve patient outcomes and quality of life. [ABSTRACT FROM AUTHOR]
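The cosine-similarity comparison of encoded sentence representations described in the abstract can be sketched as follows. The short three-dimensional vectors stand in for the BERT/Sentence-Transformer embeddings the paper actually uses; their values are invented for illustration:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Stand-in embeddings; in the paper these come from transformer encoders.
context   = np.array([0.8, 0.1, 0.3])   # encoding of the preceding utterances
on_topic  = np.array([0.7, 0.2, 0.2])   # a contextually relevant next sentence
off_topic = np.array([0.1, 0.9, 0.0])   # a contextually irrelevant next sentence

# A relevant utterance should score higher against its context.
print(cosine_similarity(context, on_topic) > cosine_similarity(context, off_topic))  # True
```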
- Published
- 2024
- Full Text
- View/download PDF
24. Detection of Deepfake Media Using a Hybrid CNN–RNN Model and Particle Swarm Optimization (PSO) Algorithm.
- Author
-
Al-Adwan, Aryaf, Alazzam, Hadeel, Al-Anbaki, Noor, and Alduweib, Eman
- Subjects
PARTICLE swarm optimization ,CONVOLUTIONAL neural networks ,MACHINE learning ,DEEP learning ,DEEPFAKES ,ALGORITHMS ,VIDEOS - Abstract
Deepfakes are digital audio, video, or images manipulated using machine learning algorithms. These manipulated media files can convincingly depict individuals doing or saying things they never actually did. Deepfakes pose significant risks to our lives, including national security, financial markets, and personal privacy. The ability to create convincing deepfakes can also harm individuals' reputations and can be used to spread disinformation and fake news. As such, there is a growing need for reliable and accurate methods to detect deepfakes and prevent their harmful effects. In this paper, a hybrid convolutional neural network (CNN) and recurrent neural network (RNN) with a particle swarm optimization (PSO) algorithm is utilized to demonstrate a deep learning strategy for detecting deepfake videos. High accuracy, sensitivity, specificity, and F1 score were attained by the proposed approach when tested on two publicly available datasets: Celeb-DF and the Deepfake Detection Challenge Dataset (DFDC). Specifically, the proposed method achieved an average accuracy of 97.26% on Celeb-DF and an average accuracy of 94.2% on DFDC. The results were compared to other state-of-the-art methods and showed that the proposed method outperformed many of them. The proposed method can effectively detect deepfake videos, which is essential for identifying and preventing the spread of manipulated content online. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Research on health monitoring and damage recognition algorithm of building structures based on image processing.
- Author
-
Tang, Sicong and Wang, Hailong
- Subjects
IMAGE processing ,MACHINE learning ,PARAMETER identification ,NOISE control ,ALGORITHMS ,IMAGE encryption ,DIGITAL images - Abstract
With the continuous deepening of urbanization and the progress of science and technology, people transform and develop nature on an ever larger scale, and the most iconic of these transformations are the various building structures people have built. With the passage of time, exposed to wind and sun year after year, a building structure will show signs of "illness" that, if not treated in a timely manner, will have a huge impact on its stability and safety. Based on this, and according to the characteristics of crack identification on concrete surfaces, this paper selects a background subtraction algorithm for image noise reduction. Through three steps, digital image noise reduction, crack extraction, and crack parameter identification, the quantitative recognition of cracks is completed and a complete system of crack parameter identification is formed. The experimental results show that the machine learning model for building structure health monitoring and damage recognition proposed in this paper has excellent statistical performance, and the relative error of recognition can be controlled within 10%. [ABSTRACT FROM AUTHOR]
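The background-subtraction step used for crack extraction can be sketched with a minimal frame-differencing mask. The 5x5 "concrete" image and the threshold of 30 are invented for illustration; the paper's actual pipeline adds noise reduction and parameter identification around this step:

```python
import numpy as np

def subtract_background(image, background, threshold=30):
    """Foreground mask: pixels differing from the background model by more than threshold."""
    diff = np.abs(image.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Toy 8-bit grayscale frame: uniform concrete surface with one dark "crack" column.
background = np.full((5, 5), 200, dtype=np.uint8)
image = background.copy()
image[:, 2] = 60                       # crack pixels are much darker than the surface
mask = subtract_background(image, background)
print(int(mask.sum()))                 # 5 crack pixels detected
```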
- Published
- 2024
- Full Text
- View/download PDF
26. Classification of high-dimensional imbalanced biomedical data based on spectral clustering SMOTE and marine predators algorithm.
- Author
-
Qin, Xiwen, Zhang, Siqi, Dong, Xiaogang, Shi, Hongyu, and Yuan, Liping
- Subjects
LINEAR operators ,CLASSIFICATION ,ALGORITHMS ,LEARNING strategies ,FEATURE selection ,LOTKA-Volterra equations ,MACHINE learning ,RANDOM forest algorithms - Abstract
The research of biomedical data is crucial for disease diagnosis, health management, and medicine development. However, biomedical data are usually characterized by high dimensionality and class imbalance, which increase computational cost and affect the classification performance of the minority class, making accurate classification difficult. In this paper, we propose a biomedical data classification method based on feature selection and data resampling. First, the minimal-redundancy maximal-relevance (mRMR) method is used to select biomedical data features, reducing the feature dimension and computational cost and improving generalization ability; then, a new SMOTE oversampling method (Spectral-SMOTE) is proposed, which addresses the noise sensitivity of SMOTE via an improved spectral clustering method; finally, the marine predators algorithm is improved using piecewise linear chaotic maps and a random opposition-based learning strategy to improve its optimization-seeking ability and convergence speed, and the key parameters of Spectral-SMOTE are optimized using the improved marine predators algorithm, which effectively improves the performance of the oversampling approach. In this paper, five real biomedical datasets are selected to test and evaluate the proposed method using four classifiers, and three evaluation metrics are used to compare with seven data resampling methods. The experimental results show that the method effectively improves the classification performance of biomedical data. Statistical test results also show that the proposed PRMPA-Spectral-SMOTE method outperforms other data resampling methods. [ABSTRACT FROM AUTHOR]
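The interpolation idea behind SMOTE-style oversampling (plain SMOTE, not the paper's Spectral-SMOTE variant, which adds spectral clustering and parameter optimization) can be sketched as follows; the toy minority samples are invented:

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, rng=None):
    """Minimal SMOTE: interpolate between minority samples and their k nearest neighbors."""
    rng = rng or np.random.default_rng(0)
    new_points = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]          # skip the point itself
        j = rng.choice(neighbors)
        lam = rng.random()                          # random position on the segment
        new_points.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(new_points)

# Four minority-class samples in 2-D; generate six synthetic ones.
X_min = np.array([[1.0, 1.0], [1.2, 0.9], [0.8, 1.1], [1.1, 1.2]])
synthetic = smote_oversample(X_min, n_new=6)
print(synthetic.shape)  # (6, 2)
```

Because every synthetic point lies on a segment between two real minority samples, the new points stay inside the minority region rather than being scattered at random.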
- Published
- 2024
- Full Text
- View/download PDF
27. NUMERICAL COMPUTATION OF ENTROPY-REGULARIZED QUADRATIC OPTIMIZATION PROBLEMS.
- Author
-
PIQIN SHI, CHENGJING WANG, CAN XIANG, and PEIPEI TANG
- Subjects
MATHEMATICAL optimization ,ALGORITHMS ,PROBLEM solving ,MACHINE learning ,NUMERICAL analysis - Abstract
Entropy-regularized quadratic optimization problems are a special class of optimization problems with wide applications in various fields, such as transportation and machine learning. In this paper, we apply the augmented Lagrangian method to this problem with its subproblem solved by the block coordinate descent method. Under certain mild conditions, we analyze the global convergence of this algorithm. Numerical experiments demonstrate the effectiveness of this algorithm. [ABSTRACT FROM AUTHOR]
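A common instance of this problem class (the notation here is assumed for illustration, not taken from the paper) is

```latex
\min_{x \ge 0} \; f(x) := \tfrac{1}{2}\, x^{\top} Q x + c^{\top} x
  + \mu \sum_{i=1}^{n} x_i \log x_i
\quad \text{s.t.} \quad A x = b ,
```

with $\mu > 0$ the entropy-regularization weight. The augmented Lagrangian method applied to the equality constraint then works with

```latex
\mathcal{L}_{\sigma}(x, \lambda)
  = f(x) + \lambda^{\top} (A x - b) + \tfrac{\sigma}{2}\, \lVert A x - b \rVert^2 ,
```

approximately minimizing $\mathcal{L}_{\sigma}$ over $x$ at each outer iteration (in the paper, via block coordinate descent on the subproblem) and then updating the multiplier $\lambda \leftarrow \lambda + \sigma (A x - b)$.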
- Published
- 2024
- Full Text
- View/download PDF
28. Personalized Treatment Policies with the Novel Buckley-James Q-Learning Algorithm.
- Author
-
Lee, Jeongjin and Kim, Jong-Min
- Subjects
MACHINE learning ,ALGORITHMS ,SURVIVAL analysis (Biometry) ,TIME management ,PATIENT care ,REINFORCEMENT learning - Abstract
This research paper presents the Buckley-James Q-learning (BJ-Q) algorithm, a cutting-edge method designed to optimize personalized treatment strategies, especially in the presence of right censoring. We critically assess the algorithm's effectiveness in improving patient outcomes and its resilience across various scenarios. Central to our approach is the innovative use of the survival time to impute the reward in Q-learning, employing the Buckley-James method for enhanced accuracy and reliability. Our findings highlight the significant potential of personalized treatment regimens and introduce the BJ-Q learning algorithm as a viable and promising approach. This work marks a substantial advancement in our comprehension of treatment dynamics and offers valuable insights for augmenting patient care in the ever-evolving clinical landscape. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. An Algorithm of Complete Coverage Path Planning for Deep‐Sea Mining Vehicle Clusters Based on Reinforcement Learning.
- Author
-
Xing, Bowen, Wang, Xiao, and Liu, Zhenchong
- Subjects
DEEP reinforcement learning ,MACHINE learning ,OCEAN mining ,ALGORITHMS - Abstract
This paper proposes a deep reinforcement learning algorithm to achieve complete coverage path planning for deep‐sea mining vehicle clusters. First, the mining vehicles and the deep‐sea mining environment are modeled. Then, this paper implements a series of algorithm designs and optimizations based on Deep Q Networks (DQN). The map fusion mechanism can integrate the grid matrix data from multiple mining vehicles to get the state matrix of the complete environment. In this paper, a preprocessing method for the state matrix is also designed to provide suitable training data for the neural network. The reward function and action selection mechanism of the algorithm are also optimized according to the requirements of cluster cooperative operation. Furthermore, the algorithm uses distance constraints to prevent the entanglement of underwater hoses. To improve the training efficiency of the neural network, the algorithm filters and extracts training samples for training through the sample quality score. Considering the requirement of cluster complete coverage mission, this paper introduces Long Short‐Term Memory (LSTM) based on the neural network to achieve a better training effect. After completing the above optimization and design, the algorithm proposed in this paper is verified through simulation experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. A new two-step inertial algorithm for solving convex bilevel optimization problems with application in data classification problems.
- Author
-
Puntita Sae-jia and Suthep Suantai
- Subjects
BILEVEL programming ,MACHINE learning ,ALGORITHMS ,NON-communicable diseases ,CLASSIFICATION - Abstract
In this paper, we propose a new accelerated algorithm for solving convex bilevel optimization problems using some fixed point and two-step inertial techniques. Our focus is on analyzing the convergence behavior of the proposed algorithm. We establish a strong convergence theorem for our algorithm under some control conditions. To demonstrate the effectiveness of our algorithm, we utilize it as a machine learning algorithm to solve data classification problems of some noncommunicable diseases, and compare its efficacy with BiG-SAM and iBiG-SAM. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Performance Comparison of Different HTM-Spatial Pooler Algorithms Based on Information-Theoretic Measures.
- Author
-
Sanati, Shiva, Rouhani, Modjtaba, and Hodtani, Ghosheh Abed
- Subjects
MACHINE learning ,SHORT-term memory ,LONG-term memory ,SEQUENTIAL learning ,ALGORITHMS ,DISTRIBUTED algorithms ,BOOSTING algorithms - Abstract
Hierarchical temporal memory (HTM) is a promising unsupervised machine-learning algorithm that models key principles of neocortical computation. One of the main components of HTM is the spatial pooler (SP), which encodes binary input streams into sparse distributed representations (SDRs). In this paper, we propose an information-theoretic framework for the performance comparison of HTM-spatial pooler (SP) algorithms, specifically, for quantifying the similarities and differences between sparse distributed representations in SP algorithms. We evaluate SP's standalone performance, as well as HTM's overall performance. Our comparison of various SP algorithms using Renyi mutual information, Renyi divergence, and Henze–Penrose divergence measures reveals that the SP algorithm with learning and a logarithmic boosting function yields the most effective and useful data representation. Moreover, the most effective SP algorithm leads to superior HTM results. In addition, we utilize our proposed framework to compare HTM with other state-of-the-art sequential learning algorithms. We illustrate that HTM exhibits superior adaptability to pattern changes over time than long short-term memory (LSTM), gated recurrent unit (GRU) and online sequential extreme learning machine (OS-ELM) algorithms. This superiority is evident from the lower Renyi divergence of HTM (0.23) compared to LSTM6000 (0.33), LSTM3000 (0.38), GRU (0.41), and OS-ELM (0.49). HTM also achieved the highest Renyi mutual information value of 0.79, outperforming LSTM6000 (0.73), LSTM3000 (0.71), GRU (0.68), and OS-ELM (0.62). These findings not only confirm the numerous advantages of HTM over other sequential learning algorithms, but also demonstrate the effectiveness of our proposed information-theoretic approach as a powerful framework for comparing and evaluating various learning algorithms. [ABSTRACT FROM AUTHOR]
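The Renyi divergence used as a comparison measure above has a direct numerical form for discrete distributions. The two distributions below are invented examples, not the paper's SDR statistics:

```python
import numpy as np

def renyi_divergence(p, q, alpha=2.0):
    """Renyi divergence D_alpha(p||q) between two discrete distributions (alpha != 1)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.log(np.sum(p ** alpha * q ** (1 - alpha))) / (alpha - 1))

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])
print(round(renyi_divergence(p, q), 3))   # small positive value; 0 iff p == q
```

As alpha tends to 1 the measure recovers the Kullback-Leibler divergence, which is why it is a natural family for comparing representation distributions.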
- Published
- 2024
- Full Text
- View/download PDF
32. Fedisp: an incremental subgradient-proximal-based ring-type architecture for decentralized federated learning.
- Author
-
Huang, Jianjun, Rui, Zihao, and Kang, Li
- Subjects
FEDERATED learning ,DATA privacy ,MACHINE learning ,DISTRIBUTED algorithms ,ITERATIVE learning control ,ALGORITHMS - Abstract
Federated learning (FL) represents a promising distributed machine learning paradigm for resolving data isolation due to data privacy concerns. Nevertheless, most vanilla FL algorithms, which depend on a server, encounter the problem of reliability and a high communication burden in real cases. Decentralized federated learning (DFL) that does not follow the star topology faces the challenges of weight divergence and inferior communication efficiency. In this paper, a novel DFL framework called federated incremental subgradient-proximal (FedISP) is proposed that utilizes the incremental method to perform model updates to alleviate weight divergence. In our setup, multiple clients are distributed in a ring topology and communicate in a cyclic manner, which significantly mitigates the communication load. A convergence guarantee is given under the convex condition to demonstrate the impact of the learning rate on our algorithms, which further improves the performance of FedISP. Extensive experiments on benchmark datasets validate the effectiveness of the proposed approach in both independent and identically distributed (IID) and non-IID settings while illustrating the advantages of the FedISP algorithm in achieving model consensus and saving communication costs. [ABSTRACT FROM AUTHOR]
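The incremental, ring-ordered model update at the heart of such a scheme can be sketched on a toy least-squares problem split across four "clients". The problem sizes, step size, and noise-free data are illustrative assumptions, not FedISP's actual subgradient-proximal update:

```python
import numpy as np

# Each "client" holds a private (A_i, b_i) slice of a shared least-squares problem.
rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])
clients = [(A := rng.normal(size=(10, 2)), A @ w_true) for _ in range(4)]

def local_gradient(w, A, b):
    """Gradient of the client's local loss ||Aw - b||^2 / 2."""
    return A.T @ (A @ w - b)

w = np.zeros(2)
for epoch in range(200):          # one epoch = one full pass around the ring
    for A, b in clients:          # the model travels client-to-client in ring order
        w = w - 0.01 * local_gradient(w, A, b)
print(np.round(w, 2))             # converges close to w_true
```

Only the current model travels around the ring, so each client communicates with exactly one neighbor per cycle, which is the source of the communication savings the abstract describes.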
- Published
- 2024
- Full Text
- View/download PDF
33. Computational thematics: comparing algorithms for clustering the genres of literary fiction.
- Author
-
Sobchuk, Oleg and Šeļa, Artjoms
- Subjects
LITERARY form ,MACHINE learning ,ALGORITHMS ,THEMATIC analysis ,FEATURE extraction - Abstract
What are the best methods of capturing thematic similarity between literary texts? Knowing the answer to this question would be useful for automatic clustering of book genres, or any other thematic grouping. This paper compares a variety of algorithms for unsupervised learning of thematic similarities between texts, which we call "computational thematics". These algorithms belong to three steps of analysis: text pre-processing, extraction of text features, and measuring distances between the lists of features. Each of these steps includes a variety of options. We test all the possible combinations of these options. Every combination of algorithms is given a task to cluster a corpus of books belonging to four pre-tagged genres of fiction. This clustering is then validated against the "ground truth" genre labels. Such comparison of algorithms allows us to learn the best and the worst combinations for computational thematic analysis. To illustrate the difference between the best and the worst methods, we then cluster 5000 random novels from the HathiTrust corpus of fiction. [ABSTRACT FROM AUTHOR]
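The three analysis steps named above (pre-processing, feature extraction, distance measurement) can be sketched with one concrete combination: lowercase tokenization, most-frequent-word frequencies, and cosine distance. The toy "novels" are invented; the paper tests many such combinations:

```python
import re
from collections import Counter
from math import sqrt

def preprocess(text):
    """Step 1: lowercase and tokenize (lemmatization, stop-word culling, etc. would go here)."""
    return re.findall(r"[a-z]+", text.lower())

def features(tokens, vocab):
    """Step 2: relative frequencies of a fixed most-frequent-words vocabulary."""
    counts = Counter(tokens)
    return [counts[w] / len(tokens) for w in vocab]

def cosine_distance(u, v):
    """Step 3: one possible distance between feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return 1 - dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

texts = ["the detective found the murder weapon",
         "the detective solved the murder case",
         "the spaceship left the red planet"]
tokens = [preprocess(t) for t in texts]
vocab = [w for w, _ in Counter(t for ts in tokens for t in ts).most_common(10)]
vecs = [features(ts, vocab) for ts in tokens]

# The two crime "novels" should be thematically closer than crime vs. science fiction.
print(cosine_distance(vecs[0], vecs[1]) < cosine_distance(vecs[0], vecs[2]))  # True
```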
- Published
- 2024
- Full Text
- View/download PDF
34. Automating the Analysis of Negative Test Verdicts: A Future-Forward Approach Supported by Augmented Intelligence Algorithms.
- Author
-
Gnacy-Gajdzik, Anna and Przystałka, Piotr
- Subjects
MACHINE learning ,ARTIFICIAL neural networks ,COMPUTER software testing ,ALGORITHMS ,ARTIFICIAL intelligence ,OPEN source intelligence - Abstract
In an epoch characterized by the anticipation of autonomous vehicles, the quality, reliability, safety, and security of embedded system software are significant. The testing of embedded software is an increasingly significant element of the development process. The application of artificial intelligence (AI) algorithms in the process of testing embedded software in vehicles constitutes a significant area of both research and practical consideration, arising from the escalating complexity of these systems. This paper presents the preliminary development of the AVESYS framework, which facilitates the application of open-source artificial intelligence algorithms in the embedded system testing process. The aim of this work is to evaluate its effectiveness in identifying anomalies in the test environment that could potentially affect testing results. The raw data from the test environment, mainly communication signals and readings from temperature, current, and voltage sensors, are pre-processed and used to train machine learning models. A verification study is carried out, demonstrating the high practical potential of applying AI algorithms in embedded software testing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Reweighted Extreme Learning Machine-Based Clutter Suppression and Range Compensation Algorithm for Non-Side-Looking Airborne Radar.
- Author
-
Liu, Jing, Liao, Guisheng, Zeng, Cao, Tao, Haihong, Xu, Jingwei, Zhu, Shengqi, and Juwono, Filbert H.
- Subjects
RADAR in aeronautics ,MACHINE learning ,ALGORITHMS ,MATHEMATICAL complexes - Abstract
Non-side-looking airborne radar provides important applications on account of its all-round multi-angle airspace coverage. However, it suffers clutter range dependence that makes the samples fail to satisfy the condition of being independent and identically distributed (IID), and it severely degrades traditional approaches to clutter suppression and target detection. In this paper, a novel reweighted extreme learning machine (ELM)-based clutter suppression and range compensation algorithm is proposed for non-side-looking airborne radar. The proposed method involves first designing the pre-processing stage, the special reweighted complex-valued activation function containing an unknown range compensation matrix, and two new objective outputs for constructing an initial reweighted ELM-based network with its training. Then, two other objective outputs, a new loss function, and a reverse feedback framework driven by the specifically designed objectives are proposed for the unknown range compensation matrix. Finally, aiming to estimate and reconstruct the unknown compensation matrix, special processes of the complex-valued structures and the theoretical derivations are designed and analyzed in detail. Consequently, with the updated and compensated samples, further processing including space–time adaptive processing (STAP) can be performed for clutter suppression and target detection. Compared with the classic relevant methods, the proposed algorithm achieves significantly superior performance with reasonable computation time. It provides an obviously higher detection probability and better improvement factor (IF). The simulation results verify that the proposed algorithm is effective and has many advantages. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. A hybrid feature selection algorithm combining information gain and grouping particle swarm optimization for cancer diagnosis.
- Author
-
Yang, Fangyuan, Xu, Zhaozhao, Wang, Hong, Sun, Lisha, Zhai, Mengjiao, and Zhang, Juan
- Subjects
FEATURE selection ,PARTICLE swarm optimization ,MACHINE learning ,CANCER diagnosis ,ALGORITHMS ,SUPPORT vector machines - Abstract
Background: Cancer diagnosis based on machine learning has become a popular application direction. Support vector machine (SVM), as a classical machine learning algorithm, has been widely used in cancer diagnosis because of its advantages in high-dimensional and small-sample data. However, due to the high-dimensional feature space and high feature redundancy of gene expression data, SVM faces the problem of poor classification performance when dealing with such data. Methods: Based on this, this paper proposes a hybrid feature selection algorithm combining information gain and grouping particle swarm optimization (IG-GPSO). The algorithm first calculates the information gain values of the features and ranks them in descending order by value. Then, the ranked features are grouped according to the information index, so that the features within a group are close and the features across groups are sparse. Finally, the grouped features are searched using grouping PSO and evaluated according to in-group and out-group criteria. Results: Experimental results show that the average accuracy (ACC) of the SVM on the feature subset selected by the IG-GPSO is 98.50%, which is significantly better than that of traditional feature selection algorithms. Compared with KNN, the classification performance on the feature subset selected by the IG-GPSO is still optimal. In addition, the results of multiple comparison tests show that the feature selection effect of the IG-GPSO is significantly better than that of traditional feature selection algorithms. Conclusion: The feature subset selected by IG-GPSO not only has the best classification performance, but also the smallest feature scale (FS). More importantly, the IG-GPSO significantly improves the ACC of SVM in cancer diagnosis. [ABSTRACT FROM AUTHOR]
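The information-gain ranking in the first stage of IG-GPSO can be sketched for discretized features. The toy "gene expression" data below are invented, and the grouping-PSO search stage is omitted:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(y) of a label list, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """H(y) minus the weighted entropy of y after splitting on the feature's values."""
    n = len(labels)
    remainder = 0.0
    for v in set(feature):
        subset = [y for f, y in zip(feature, labels) if f == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

# Toy expression levels discretized to low/high; y = diagnosis (1 = cancer).
y      = [1, 1, 1, 0, 0, 0]
gene_a = ["hi", "hi", "hi", "lo", "lo", "lo"]   # perfectly predictive
gene_b = ["hi", "lo", "hi", "lo", "hi", "lo"]   # uninformative

ranked = sorted([("gene_a", information_gain(gene_a, y)),
                 ("gene_b", information_gain(gene_b, y))],
                key=lambda t: -t[1])
print(ranked[0][0])  # gene_a
```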
- Published
- 2024
- Full Text
- View/download PDF
37. A Review: Current Trend of Immersive Technologies for Indoor Navigation and the Algorithms.
- Author
-
Sariman, Muhammad Shazmin, Othman, Maisara, Akir, Rohaida Mat, Mahamad, Abd Kadir, and Rahman, Munirah Ab
- Subjects
MACHINE learning ,DEEP learning ,ALGORITHMS ,SHOPPING malls ,HISTORY of technology ,AERONAUTICAL navigation ,NAVIGATION - Abstract
The term "indoor navigation system" pertains to a technological or practical approach that facilitates the navigation and orientation of individuals within indoor settings, such as museums, airports, shopping malls, or buildings. Over several years, significant advancements have been made in indoor navigation. Numerous studies have been conducted on the issue. However, a fair evaluation and comparison of indoor navigation algorithms have not been discussed further. This paper presents a comprehensive review of collective algorithms developed for indoor navigation. The in-depth analysis of these articles concentrates on both advantages and disadvantages, as well as the different types of algorithms used in each article. A systematic literature review (SLR) methodology guided our article-finding, vetting, and grading processes. Finally, we narrowed the pool down to 75 articles using SLR. We organized them into several groups according to their topics. In these quick analyses, we pull out the most important concepts, article types, rating criteria, and the positives and negatives of each piece. Based on the findings of this review, we can conclude that an efficient solution for indoor navigation that uses the capabilities of ARTICLE INFO embedded data and technological advances Article history: in immersive technologies can be achieved by training the shortest path algorithm with a deep learning algorithm to enhance the indoor navigation system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Optimal selection of benchmarking datasets for unbiased machine learning algorithm evaluation.
- Author
-
Pereira, João Luiz Junho, Smith-Miles, Kate, Muñoz, Mario Andrés, and Lorena, Ana Carolina
- Subjects
MACHINE learning ,SUPERVISED learning ,METAHEURISTIC algorithms ,CLASSIFICATION algorithms ,ALGORITHMS - Abstract
Whenever a new supervised machine learning (ML) algorithm or solution is developed, it is imperative to evaluate the predictive performance it attains for diverse datasets. This is done in order to stress test the strengths and weaknesses of the novel algorithms and provide evidence for situations in which they are most useful. A common practice is to gather some datasets from public benchmark repositories for such an evaluation. But little or no specific criteria are used in the selection of these datasets, which is often ad hoc. In this paper, the importance of gathering a diverse benchmark of datasets in order to properly evaluate ML models and really understand their capabilities is investigated. Leveraging meta-learning studies that evaluate the diversity of public repositories of datasets, this paper introduces an optimization method to choose varied classification and regression datasets from a pool of candidate datasets. The method is based on maximum coverage, circular packing, and the meta-heuristic Lichtenberg Algorithm, ensuring that diverse datasets able to challenge the ML algorithms more broadly are chosen. The selections were compared experimentally with a random selection of datasets and with clustering by k-medoids and proved to be more effective regarding the diversity of the chosen benchmarks and the ability to challenge the ML algorithms at different levels. [ABSTRACT FROM AUTHOR]
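A greedy sketch of the maximum-coverage idea behind such dataset selection (the paper's method also uses circular packing and the Lichtenberg Algorithm, omitted here) might look like this; the dataset names and "meta-feature regions" are hypothetical:

```python
def greedy_max_coverage(candidates, k):
    """Greedily pick k datasets, each time choosing the one covering most uncovered traits."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(candidates, key=lambda name: len(candidates[name] - covered))
        chosen.append(best)
        covered |= candidates[best]
    return chosen

# Hypothetical meta-feature regions each benchmark dataset covers.
candidates = {
    "iris":   {"low-dim", "balanced", "small"},
    "credit": {"imbalanced", "categorical", "medium"},
    "images": {"high-dim", "large", "balanced"},
    "tiny":   {"small"},
}
print(greedy_max_coverage(candidates, 2))  # ['iris', 'credit']
```

The greedy choice avoids redundant picks: after "iris" is taken, "credit" covers three still-uncovered traits while "images" covers only two, so the pair spans more of the meta-feature space than any two similar datasets would.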
- Published
- 2024
- Full Text
- View/download PDF
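The maximum-coverage idea behind this dataset selection can be sketched in a few lines. This is a minimal greedy illustration only: the paper's actual method also uses circular packing and the Lichtenberg Algorithm, and the function names, toy meta-feature coordinates, and coverage radius below are assumptions, not the authors' code.

```python
# Greedy maximum-coverage selection of benchmark datasets, where each
# candidate dataset is a point in a 2-D meta-feature space and a selected
# dataset "covers" all candidates within a fixed radius.
import math

def covered(points, selected, radius):
    """Set of point indices within `radius` of any selected point."""
    cov = set()
    for i, p in enumerate(points):
        for s in selected:
            if math.dist(p, points[s]) <= radius:
                cov.add(i)
                break
    return cov

def greedy_max_coverage(points, k, radius):
    """Pick k datasets whose coverage circles blanket the meta-feature space."""
    selected = []
    for _ in range(k):
        best, best_gain = None, -1
        base = covered(points, selected, radius)
        for cand in range(len(points)):
            if cand in selected:
                continue
            gain = len(covered(points, selected + [cand], radius)) - len(base)
            if gain > best_gain:
                best, best_gain = cand, gain
        selected.append(best)
    return selected

# Toy meta-feature coordinates for 6 candidate datasets: two tight clusters
# plus two isolated points; the greedy pass should spread its picks out.
points = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.9, 1.0), (0.0, 1.0), (1.0, 0.0)]
chosen = greedy_max_coverage(points, k=3, radius=0.3)
```

On this toy input the greedy pass picks one dataset per cluster rather than two near-duplicates, which is the diversity behavior the paper's optimization formalizes.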
39. High-Dimensional Ensemble Learning Classification: An Ensemble Learning Classification Algorithm Based on High-Dimensional Feature Space Reconstruction.
- Author
-
Zhao, Miao and Ye, Ning
- Subjects
MACHINE learning ,CLASSIFICATION algorithms ,FEATURE selection ,NAIVE Bayes classification ,HIGH-dimensional model representation ,CLASSIFICATION ,ALGORITHMS ,PROBLEM solving - Abstract
When performing classification tasks on high-dimensional data, traditional machine learning algorithms often fail to adequately filter out the valid information in the features, leading to low classification accuracy. Therefore, this paper explores high-dimensional data from both the data feature dimension and the model ensemble dimension. We propose a high-dimensional ensemble learning classification algorithm focusing on feature space reconstruction and classifier ensembling, called the HDELC algorithm. First, the algorithm considers feature space reconstruction and generates a feature space reconstruction matrix, effectively achieving feature selection and reconstruction for high-dimensional data. An optimal feature space is generated for the subsequent classifier ensemble, which enhances the representativeness of the feature space. Second, we recursively determine the number of classifiers and the number of feature subspaces in the ensemble model. Different classifiers in the ensemble system are assigned mutually exclusive, non-intersecting feature subspaces for model training. The experimental results show that the HDELC algorithm has advantages over comparison methods on most high-dimensional datasets, owing to its more efficient feature space ensemble capability and relatively reliable ensemble performance. The HDELC algorithm makes it possible to solve the classification problem for high-dimensional data effectively and has vital research and application value. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
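The "mutually exclusive feature subspaces" idea in this abstract can be illustrated with a tiny sketch: partition the features into disjoint blocks, train one classifier per block, and combine by majority vote. The round-robin partition and the nearest-centroid base learner below are stand-ins chosen for brevity, not the paper's actual reconstruction matrix or classifiers.

```python
# Disjoint-feature-subspace ensemble: each base classifier sees only its
# own block of features, and predictions are combined by majority vote.
from collections import Counter

def partition_features(n_features, n_blocks):
    """Round-robin split of feature indices into disjoint subspaces."""
    return [list(range(b, n_features, n_blocks)) for b in range(n_blocks)]

def centroid_fit(X, y, feats):
    """Per-class mean vector restricted to one feature subspace."""
    sums, counts = {}, Counter(y)
    for row, label in zip(X, y):
        acc = sums.setdefault(label, [0.0] * len(feats))
        for j, f in enumerate(feats):
            acc[j] += row[f]
    return {c: [v / counts[c] for v in acc] for c, acc in sums.items()}

def centroid_predict(model, row, feats):
    """Assign the class whose centroid is nearest in this subspace."""
    sub = [row[f] for f in feats]
    return min(model, key=lambda c: sum((a - b) ** 2 for a, b in zip(sub, model[c])))

def ensemble_predict(X, y, x_new, n_blocks):
    blocks = partition_features(len(X[0]), n_blocks)
    votes = [centroid_predict(centroid_fit(X, y, f), x_new, f) for f in blocks]
    return Counter(votes).most_common(1)[0][0]

X = [[0, 0, 0, 0], [0, 1, 0, 1], [5, 5, 5, 5], [5, 6, 5, 6]]
y = [0, 0, 1, 1]
pred = ensemble_predict(X, y, [5, 5, 5, 5], n_blocks=2)
```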
40. Dendritic Growth Optimization: A Novel Nature-Inspired Algorithm for Real-World Optimization Problems.
- Author
-
Priyadarshini, Ishaani
- Subjects
OPTIMIZATION algorithms ,BIOLOGICALLY inspired computing ,DEEP learning ,MACHINE learning ,METAHEURISTIC algorithms ,PROBLEM solving ,ALGORITHMS - Abstract
In numerous scientific disciplines and practical applications, addressing optimization challenges is a common imperative. Nature-inspired optimization algorithms represent a highly valuable and pragmatic approach to tackling these complexities. This paper introduces Dendritic Growth Optimization (DGO), a novel algorithm inspired by natural branching patterns. DGO offers a novel solution for intricate optimization problems and demonstrates its efficiency in exploring diverse solution spaces. The algorithm has been extensively tested with a suite of machine learning algorithms, deep learning algorithms, and metaheuristic algorithms, and the results, both before and after optimization, unequivocally support the proposed algorithm's feasibility, effectiveness, and generalizability. Through empirical validation using established datasets like diabetes and breast cancer, the algorithm consistently enhances model performance across various domains. Beyond its working and experimental analysis, DGO's wide-ranging applications in machine learning, logistics, and engineering for solving real-world problems have been highlighted. The study also considers the challenges and practical implications of implementing DGO in multiple scenarios. As optimization remains crucial in research and industry, DGO emerges as a promising avenue for innovation and problem solving. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. YOLOv7oSAR: A Lightweight High-Precision Ship Detection Model for SAR Images Based on the YOLOv7 Algorithm.
- Author
-
Liu, Yilin, Ma, Yong, Chen, Fu, Shang, Erping, Yao, Wutao, Zhang, Shuyan, and Yang, Jin
- Subjects
SHIP models ,SYNTHETIC aperture radar ,MACHINE learning ,SOLID state drives ,ALGORITHMS ,DEEP learning - Abstract
Researchers have explored various methods to fully exploit the all-weather characteristics of Synthetic aperture radar (SAR) images to achieve high-precision, real-time, computationally efficient, and easily deployable ship target detection models. These methods include Constant False Alarm Rate (CFAR) algorithms and deep learning approaches such as RCNN, YOLO, and SSD, among others. While these methods outperform traditional algorithms in SAR ship detection, challenges still exist in handling the arbitrary ship distributions and small target features in SAR remote sensing images. Existing models are complex, with a large number of parameters, hindering effective deployment. This paper introduces a YOLOv7 oriented bounding box SAR ship detection model (YOLOv7oSAR). The model employs a rotation box detection mechanism, uses the KLD loss function to enhance accuracy, and introduces a Bi-former attention mechanism to improve small target detection. By redesigning the network's width and depth and incorporating a lightweight P-ELAN structure, the model effectively reduces its size and computational requirements. The proposed model achieves high-precision detection results on the public RSDD dataset (94.8% offshore, 66.6% nearshore), and its generalization ability is validated on a custom dataset (94.2% overall detection accuracy). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Sea Ice Extraction via Remote Sensing Imagery: Algorithms, Datasets, Applications and Challenges.
- Author
-
Huang, Wenjun, Yu, Anzhu, Xu, Qing, Sun, Qun, Guo, Wenyue, Ji, Song, Wen, Bowei, and Qiu, Chunping
- Subjects
SEA ice ,DEEP learning ,REMOTE sensing ,IMAGE recognition (Computer vision) ,GEOGRAPHIC information systems ,ALGORITHMS - Abstract
Deep learning, a dominant technique in artificial intelligence, has completely changed image understanding over the past decade. As a consequence, the sea ice extraction (SIE) problem has entered a new era. We present a comprehensive review of four important aspects of SIE: algorithms, datasets, applications, and future trends. Our review focuses on research published from 2016 to the present, with a specific focus on deep-learning-based approaches in the last five years. We divided the related algorithms into three categories: conventional image classification approaches, machine-learning-based approaches, and deep-learning-based methods. We reviewed the accessible sea ice datasets, including SAR-based, optical-based, and other datasets. The applications are presented in four aspects: climate research, navigation, geographic information systems (GIS) production, and others. This paper also provides insightful observations and inspiring future research directions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. Artificial Intelligence in Pediatrics: Learning to Walk Together.
- Author
-
Demirbaş, Kaan Can, Yıldız, Mehmet, Saygılı, Seha, Canpolat, Nur, and Kasapçopur, Özgür
- Subjects
GENOME editing ,COMPUTER assisted instruction ,ARTIFICIAL intelligence ,PEDIATRICS ,MACHINE learning ,LEARNING strategies ,ROBOTICS ,RISK assessment ,CHILD health services ,EDUCATIONAL technology ,DECISION making in clinical medicine ,PREDICTION models ,ALGORITHMS ,EVALUATION - Abstract
In this era of rapidly advancing technology, artificial intelligence (AI) has emerged as a transformative force, even being called part of the Fourth Industrial Revolution, along with gene editing and robotics. While it has undoubtedly become an increasingly important part of our daily lives, it must be recognized that it is not merely an additional tool, but rather a complex concept that poses a variety of challenges. AI, with considerable potential, has found its place in both medical care and clinical research. Within the vast field of pediatrics, it stands out as a particularly promising advancement. As pediatricians, we are indeed witnessing the impactful integration of AI-based applications into our daily clinical practice and research efforts. These tools are being used for tasks ranging from the simple to the complex, such as diagnosing clinically challenging conditions, predicting disease outcomes, creating treatment plans, educating both patients and healthcare professionals, and generating accurate medical records or scientific papers. In conclusion, the multifaceted applications of AI in pediatrics will increase efficiency and improve the quality of healthcare and research. However, there are certain risks and threats accompanying this advancement, including biases that may contribute to health disparities, as well as inaccuracies. Therefore, it is crucial to recognize and address the technical, ethical, and legal challenges, as well as explore the benefits, in both clinical and research fields. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Scientific papers and artificial intelligence. Brave new world?
- Author
-
Nexøe, Jørgen
- Subjects
COMPUTERS ,MANUSCRIPTS ,ARTIFICIAL intelligence ,MACHINE learning ,DATA analysis ,MEDICAL literature ,MEDICAL research ,ALGORITHMS - Published
- 2023
- Full Text
- View/download PDF
45. Forecasting model for short-term wind speed using robust local mean decomposition, deep neural networks, intelligent algorithm, and error correction.
- Author
-
Li, Jiawen, Liu, Minghao, Wen, Lei, Tian, Zhongda, and Ramirez, Carlos Andrés Perez
- Subjects
ARTIFICIAL neural networks ,WIND speed ,DEEP learning ,MACHINE learning ,ALGORITHMS ,WIND power ,BIOCHEMICAL oxygen demand - Abstract
Wind power generation has attracted widespread attention worldwide, and accurate prediction of wind speed is very important for the safe and economical operation of the power grid. This paper presents a short-term wind speed prediction model comprising data decomposition, deep learning, intelligent algorithm optimization, and error correction modules. First, robust local mean decomposition (RLMD) is applied to the original wind speed data to reduce its non-stationarity. Then, the salp swarm algorithm (SSA) is used to determine the optimal parameter combination of the bidirectional gated recurrent unit (BiGRU) to ensure prediction quality. To further eliminate the predictable components of the error, a correction module based on the improved salp swarm algorithm (ISSA) and a deep extreme learning machine (DELM) is constructed. The exploration and exploitation capability of the original SSA is enhanced by introducing a crazy operator and a dynamic learning strategy, and the input weights and thresholds of the DELM are optimized by the ISSA to improve the generalization ability of the model. Actual wind farm data are used to verify the advancement of the proposed model. Compared with other models, the results show that the proposed model has the best prediction performance. As a powerful tool, the developed forecasting system is expected to see further use in energy systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
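The decompose-predict-correct pipeline described in this abstract can be sketched as a data-flow skeleton. RLMD, the SSA-tuned BiGRU, and the ISSA-DELM corrector are replaced here by deliberately trivial stand-ins (a moving-average split and persistence forecasts), so only the module structure reflects the paper, not the models themselves.

```python
# Skeleton of a decompose -> forecast-per-component -> recombine ->
# error-correct wind speed pipeline, with trivial stand-in components.
def decompose(series, window=3):
    """Split a series into a smooth component and a residual (RLMD stand-in)."""
    smooth = [sum(series[max(0, i - window + 1):i + 1]) /
              len(series[max(0, i - window + 1):i + 1])
              for i in range(len(series))]
    residual = [a - b for a, b in zip(series, smooth)]
    return smooth, residual

def persistence_forecast(component):
    """Naive one-step forecast: repeat the last observed value."""
    return component[-1]

def forecast_wind_speed(series):
    smooth, residual = decompose(series)
    raw = persistence_forecast(smooth) + persistence_forecast(residual)
    # Error-correction stand-in: add the mean of recent one-step errors.
    errors = [series[i] - (smooth[i - 1] + residual[i - 1])
              for i in range(1, len(series))]
    return raw + sum(errors) / len(errors)

series = [5.0, 5.2, 5.1, 5.4, 5.3, 5.5]
pred = forecast_wind_speed(series)
```

Each stand-in occupies the slot of one module in the paper's system, so swapping in RLMD and a trained BiGRU would not change the surrounding data flow.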
46. A penalized variable selection ensemble algorithm for high-dimensional group-structured data.
- Author
-
Li, Dongsheng, Pan, Chunyan, Zhao, Jing, and Luo, Anfei
- Subjects
LOW birth weight ,STANDARD deviations ,HIGH-dimensional model representation ,MATHEMATICAL variables ,MACHINE learning ,ALGORITHMS - Abstract
This paper presents a multi-algorithm fusion model (StackingGroup) based on the Stacking ensemble learning framework to address the variable selection problem in high-dimensional group-structured data. The proposed algorithm accounts for the differences in data observation and training principles of different algorithms. It leverages the strengths of each model and combines Stacking ensemble learning with multiple group-structure regularization methods. The main approach involves dividing the data set into K equal parts, using more than 10 algorithms as candidate base models, and selecting base learners on the basis of low correlation, strong predictive ability, and small model error. Finally, we selected the grSubset + grLasso, grLasso, and grSCAD algorithms as the base learners for the Stacking algorithm, with the Lasso algorithm as the meta-learner, to create a comprehensive algorithm called StackingGroup. This algorithm is designed to handle high-dimensional group-structured data. Simulation experiments showed that the proposed method outperformed competing methods in terms of R², RMSE, and MAE. Lastly, we applied the proposed algorithm to investigate the risk factors for low birth weight in infants and young children. The final results demonstrate that the proposed method achieves a mean absolute error (MAE) of 0.508 and a root mean square error (RMSE) of 0.668. These values are smaller than those obtained from a single model, indicating that the proposed method surpasses the other algorithms in prediction accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
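The core stacking mechanics (K-fold out-of-fold predictions from base learners becoming the meta-learner's features) can be sketched compactly. The paper's grLasso/grSCAD base learners and Lasso meta-learner are replaced below with trivial stand-ins (a global-mean model and a one-feature least-squares line), so only the ensemble plumbing is illustrated.

```python
# Stacking, step 1: build the meta-learner's training matrix from
# out-of-fold predictions of each base learner.
def fit_mean(X, y):
    """Stand-in base learner: always predict the training-set mean."""
    m = sum(y) / len(y)
    return lambda x: m

def fit_line(X, y):
    """Stand-in base learner: least-squares fit y ~ a*x[0] + b."""
    xs = [row[0] for row in X]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(y) / n
    a = (sum((x - xbar) * (t - ybar) for x, t in zip(xs, y)) /
         sum((x - xbar) ** 2 for x in xs))
    b = ybar - a * xbar
    return lambda x: a * x[0] + b

def stacking_oof(X, y, fitters, k=2):
    """Out-of-fold predictions: each row i is predicted by base models
    trained on the folds that do NOT contain i, avoiding leakage."""
    folds = [list(range(i, len(X), k)) for i in range(k)]
    meta_X = [[0.0] * len(fitters) for _ in X]
    for fold in folds:
        train = [i for i in range(len(X)) if i not in fold]
        for j, fitter in enumerate(fitters):
            model = fitter([X[i] for i in train], [y[i] for i in train])
            for i in fold:
                meta_X[i][j] = model(X[i])
    return meta_X

X = [[1.0], [2.0], [3.0], [4.0]]
y = [2.0, 4.0, 6.0, 8.0]          # exactly y = 2x, so the line learner is exact
meta_X = stacking_oof(X, y, [fit_mean, fit_line])
```

A meta-learner (Lasso in the paper) would then be fitted on `meta_X` against `y`; the out-of-fold construction is what lets it judge base learners on data they did not see.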
47. Research on Obstacle Avoidance Planning for UUV Based on A3C Algorithm.
- Author
-
Wang, Hongjian, Gao, Wei, Wang, Zhao, Zhang, Kai, Ren, Jingfei, Deng, Lihui, and He, Shanshan
- Subjects
DEEP learning ,REINFORCEMENT learning ,DEEP reinforcement learning ,MACHINE learning ,ALGORITHMS ,ARTIFICIAL intelligence - Abstract
Deep reinforcement learning is an artificial intelligence technique that combines deep learning and reinforcement learning and has been widely applied in multiple fields. As a deep reinforcement learning algorithm, the A3C (Asynchronous Advantage Actor-Critic) algorithm can effectively utilize computing resources and improve training efficiency by training Actor-Critic networks concurrently in multiple threads. Inspired by the excellent performance of the A3C algorithm, this paper uses it to solve the UUV (Unmanned Underwater Vehicle) collision avoidance planning problem in unknown environments. The resulting collision avoidance planning algorithm can plan in real time while ensuring a shorter path length, and its output action space meets the kinematic constraints of UUVs. For the UUV collision avoidance planning problem, this paper designs the state space, action space, and reward function. The simulation results show that the A3C collision avoidance planning algorithm can guide a UUV around obstacles to the preset target point. The path planned by this algorithm meets the heading constraints of the UUV, and the planning time is short enough to meet the requirements of real-time planning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
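The reward-function design mentioned in this abstract typically mixes a terminal goal bonus, a collision penalty, progress shaping, and an obstacle-proximity penalty. The sketch below is illustrative only: the terms and weights are assumptions for demonstration, not the authors' actual reward function.

```python
# Illustrative reward shaping for a collision avoidance RL task:
# terminal rewards for collision/goal, dense shaping otherwise.
import math

def reward(pos, goal, obstacle_dist, prev_goal_dist,
           goal_radius=1.0, safe_dist=2.0):
    d_goal = math.dist(pos, goal)
    if obstacle_dist <= 0.0:                  # collision: large penalty
        return -100.0
    if d_goal <= goal_radius:                 # reached the target: large bonus
        return 100.0
    r = 10.0 * (prev_goal_dist - d_goal)      # dense reward for progress
    if obstacle_dist < safe_dist:             # graded penalty near obstacles
        r -= (safe_dist - obstacle_dist)
    return r

# One shaping step: the vehicle closed 0.5 units of distance to the goal
# and is well clear of obstacles, so only the progress term fires.
r = reward(pos=(0.0, 0.0), goal=(10.0, 0.0), obstacle_dist=5.0, prev_goal_dist=10.5)
```

Dense progress shaping like this is a common way to keep the actor-critic gradient informative between the sparse terminal events.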
48. A Modified Inexact SARAH Algorithm With Stabilized Barzilai-Borwein Step-Size in Machine Learning.
- Author
-
Fusheng Wang, Yiming Yang, Xiaotong Li, and Petrosian, Ovanes
- Subjects
MACHINE learning ,ALGORITHMS - Abstract
The Inexact SARAH (iSARAH) algorithm, a variant of the SARAH algorithm that does not require computation of the exact gradient, can be applied to general expectation minimization problems rather than only finite-sum problems. The performance of the iSARAH algorithm is strongly affected by the step size, and how to choose an appropriate step size remains a worthwhile problem for study. In this paper, we propose using the stabilized Barzilai-Borwein (SBB) method to automatically compute the step size for the iSARAH algorithm, which leads to a new algorithm called iSARAH-SBB. By introducing this adaptive step size into the design of the new algorithm, iSARAH-SBB can take better advantage of both the iSARAH and SBB methods. We analyse the convergence rate and complexity of the modified algorithm under the usual assumptions. Numerical experimental results on standard data sets demonstrate the feasibility and effectiveness of our proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
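The Barzilai-Borwein step size that the abstract builds on is computed from successive iterates and gradients; a stabilized variant additionally caps the step so it cannot blow up. The sketch below uses the classic BB1 formula with a norm-based cap; the exact stabilization rule in the paper may differ, and `delta` is an illustrative parameter.

```python
# Stabilized Barzilai-Borwein (SBB) step size sketch:
#   BB1 step  alpha = (s^T s) / (s^T y),  s = x_k - x_{k-1}, y = g_k - g_{k-1},
# capped by delta / ||g_k|| so that the update ||alpha * g_k|| stays bounded.
def sbb_step(x_prev, x_curr, g_prev, g_curr, delta=1.0, eps=1e-12):
    s = [a - b for a, b in zip(x_curr, x_prev)]
    yv = [a - b for a, b in zip(g_curr, g_prev)]
    sty = sum(a * b for a, b in zip(s, yv))
    # Fall back to the cap when the curvature estimate is degenerate.
    bb = sum(a * a for a in s) / sty if abs(sty) > eps else float("inf")
    g_norm = sum(a * a for a in g_curr) ** 0.5
    return min(abs(bb), delta / max(g_norm, eps))

# On a 1-D quadratic f(x) = 0.5 * x^2 (gradient g = x), BB recovers the
# inverse curvature 1.0 exactly from any pair of iterates.
alpha = sbb_step(x_prev=[2.0], x_curr=[1.0], g_prev=[2.0], g_curr=[1.0])
```

Because the BB quotient estimates inverse curvature from finite differences, no Hessian or line search is needed, which is what makes it attractive inside a stochastic method like iSARAH.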
49. LGNN: a novel linear graph neural network algorithm.
- Author
-
Shujuan Cao, Xiaoming Wang, Zhonglin Ye, Mingyuan Li, and Haixing Zhao
- Subjects
DEEP learning ,ALGORITHMS ,GRAPH algorithms ,IMAGE recognition (Computer vision) ,MACHINE learning ,MACHINE performance ,LINEAR operators ,SPARSE graphs - Abstract
The emergence of deep learning has not only brought great changes to the field of image recognition but has also achieved excellent node classification performance in graph neural networks. However, existing graph neural network frameworks often use spatial-domain or spectral-domain methods to capture network structure features. This process captures only the local structural characteristics of graph data, and the convolution involved is computationally expensive, so multi-channel or deep neural network structures are needed to model the high-order structural characteristics of the network. Therefore, this paper proposes a linear graph neural network framework, Linear Graph Neural Network (LGNN), with superior performance. The model first preprocesses the input graph, using symmetric normalization and feature normalization to remove deviations in the structure and features. Then, by designing a high-order adjacency matrix propagation mechanism, LGNN enables nodes to iteratively aggregate and learn the feature information of high-order neighbors. After obtaining the node representations of the network structure, LGNN uses a simple linear mapping to maintain computational efficiency and obtain the final node representations. The experimental results show that while the LGNN algorithm performs slightly worse than existing mainstream graph neural network algorithms on some tasks, it matches or exceeds their performance on most graph neural network evaluation tasks, especially on sparse networks. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
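The propagation scheme described above (symmetric normalization, repeated aggregation of high-order neighbors, then a single linear map) can be sketched in a few lines of NumPy. The weight matrix here is arbitrary, and this makes no claim about the paper's training procedure; it only shows the linear forward pass.

```python
# LGNN-style forward pass: normalize the adjacency matrix symmetrically,
# propagate features k hops, then apply one linear mapping.
import numpy as np

def lgnn_embed(A, X, W, k=2):
    """Compute (D^-1/2 (A+I) D^-1/2)^k @ X @ W for node feature matrix X."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt       # symmetric normalization
    H = X.copy()
    for _ in range(k):
        H = A_norm @ H                             # aggregate one more hop
    return H @ W                                   # final simple linear map

# Tiny 3-node path graph; identity features/weights expose the pure
# propagation operator A_norm^k.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
Z = lgnn_embed(A, X=np.eye(3), W=np.eye(3), k=2)
```

Because all the expensive work is a fixed linear operator applied k times, the propagation can even be precomputed once, which is the efficiency argument the abstract makes against deep multi-layer convolutions.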
50. LC-NPLA: Label and Community Information-Based Network Presentation Learning Algorithm.
- Author
-
Shihu Liu, Chunsheng Yang, and Yingjie Liu
- Subjects
MACHINE learning ,RANDOM walks ,STOCHASTIC processes ,STATISTICAL sampling ,ALGORITHMS - Abstract
Many network presentation learning algorithms (NPLA) have originated in recent years from random walks between nodes. Although these kinds of algorithms can obtain good embedding results, they also have limitations: for instance, only the structural information of nodes is considered when they are constructed. To address this issue, a label and community information-based network presentation learning algorithm (LC-NPLA) is proposed in this paper. First, the first-order neighbors of nodes are reconstructed using the community information and label information of nodes. Next, the random walk strategy is improved by integrating the degree information and label information of nodes. Then, the node sequences obtained from random walk sampling are transformed into node representation vectors by the Skip-Gram model. Finally, experimental results on ten real-world networks demonstrate that the proposed algorithm has great advantages in label classification, network reconstruction, and link prediction tasks compared with three benchmark algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
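The degree- and label-biased random walk that feeds the Skip-Gram step can be sketched as follows. The specific bias rule (weight neighbors by degree, boosted when labels agree) and its `same_label_bonus` weight are illustrative assumptions drawn from the abstract, not the paper's exact transition probabilities.

```python
# Biased random walk whose transition weights mix neighbor degree and
# label agreement; the resulting node sequences would be fed to Skip-Gram.
import random

def biased_walk(adj, labels, start, length, same_label_bonus=2.0, rng=None):
    rng = rng or random.Random(0)        # seeded for a reproducible demo
    walk = [start]
    for _ in range(length - 1):
        node = walk[-1]
        nbrs = adj[node]
        if not nbrs:
            break
        # Higher-degree neighbors are favored; same-label neighbors doubly so.
        weights = [len(adj[n]) * (same_label_bonus if labels[n] == labels[node] else 1.0)
                   for n in nbrs]
        walk.append(rng.choices(nbrs, weights=weights)[0])
    return walk

# Toy 4-node graph with two label groups.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
labels = {0: "a", 1: "a", 2: "b", 3: "b"}
walk = biased_walk(adj, labels, start=0, length=5)
```

Every consecutive pair in the returned sequence is an edge of the graph, so the sampled corpus stays faithful to the topology while the bias tilts co-occurrence statistics toward same-label, high-degree neighborhoods.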