3,222 results
Search Results
2. Academic Paper Recommendation Algorithm Based on Convolutional Fusion of Text and Heterogeneous Information Networks.
- Author
-
吴俊超, 刘柏嵩, 沈小烽, and 张雪垣
- Subjects
- *
INFORMATION networks , *CONVOLUTIONAL neural networks , *MACHINE learning , *PRODUCT design , *ALGORITHMS - Abstract
In view of the problems of data sparsity and diversity in academic paper recommender systems, this paper proposed, based on ConvNCF, an algorithm of convolution with word and heterogeneous information network for academic paper recommendation (WN-APR). Firstly, the WN-APR algorithm learned users' and papers' diverse features from different semantics to alleviate the sparsity problem. Then it designed an outer-product fusion method to seamlessly combine user features with paper features. Instead of a 2D CNN, the algorithm applied 3D convolution to mine the influence of different features on performance. Finally, it modified the BPR loss function to enhance diversity in recommendations. Experimental results on the CiteULike-a and CiteULike-t datasets show that WN-APR improves accuracy and diversity over the baseline models. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
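The outer-product fusion step described in the abstract above can be sketched in a few lines of NumPy. The embedding size and number of semantic views below are illustrative assumptions, not the paper's actual dimensions:

```python
import numpy as np

def outer_product_fusion(user_vecs, paper_vecs):
    """Fuse each (user, paper) embedding pair from one semantic view
    into a 2D interaction map via the outer product, then stack the
    maps from all views into a 3D volume ready for 3D convolution."""
    maps = [np.outer(u, p) for u, p in zip(user_vecs, paper_vecs)]
    return np.stack(maps, axis=0)          # shape: (n_views, d, d)

rng = np.random.default_rng(0)
n_views, d = 3, 8                          # illustrative sizes
volume = outer_product_fusion(rng.normal(size=(n_views, d)),
                              rng.normal(size=(n_views, d)))
print(volume.shape)                        # (3, 8, 8)
```

Stacking one interaction map per semantic view is what makes a 3D (rather than 2D) convolution applicable downstream.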
3. A Machine Learning Model to Predict Citation Counts of Scientific Papers in Otology Field.
- Author
-
Alohali, Yousef A., Fayed, Mahmoud S., Mesallam, Tamer, Abdelsamad, Yassin, Almuhawas, Fida, and Hagr, Abdulrahman
- Subjects
- *
DECISION trees , *SERIAL publications , *NATURAL language processing , *BIBLIOMETRICS , *MACHINE learning , *REGRESSION analysis , *RANDOM forest algorithms , *CITATION analysis , *DESCRIPTIVE statistics , *PREDICTION models , *ARTIFICIAL neural networks , *MEDICAL research , *MEDICAL specialties & specialists , *ALGORITHMS - Abstract
One of the most widely used measures of scientific impact is the number of citations. However, because citation counts follow a heavy-tailed distribution, they are fundamentally difficult to predict, although prediction accuracy can be improved. This study investigated the factors and article components influencing the citation count of a scientific paper in the otology field. To that end, this work proposes a new solution that uses machine learning and natural language processing to process English text and output a predicted citation count. Several algorithms are implemented in this solution, such as linear regression, boosted decision trees, decision forests, and neural networks. The application of neural network regression revealed that a paper's abstract has the greatest influence on the citation count of otological articles. This solution was developed in visual programming using Microsoft Azure Machine Learning at the back end and Programming Without Coding Technology at the front end. We recommend using machine learning models to improve the abstracts of research articles to attract more citations. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
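The regression setup in the abstract above (article features in, citation count out) can be illustrated with a boosted-tree regressor. The features and heavy-tailed target below are synthetic stand-ins, since the study's otology dataset and Azure pipeline are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for abstract-derived features (e.g. length,
# readability, keyword counts) driving a heavy-tailed citation count.
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 5))
y = np.expm1(0.8 * X[:, 0] + 0.3 * X[:, 1]
             + rng.normal(0.0, 0.1, size=300))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("held-out R^2: %.2f" % model.score(X_te, y_te))
```

A log-style transform (here `expm1` on a latent score) is a common way to model the skewed distribution that makes raw citation counts hard to predict.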
4. A BPNN Model-Based AdaBoost Algorithm for Estimating Inside Moisture of Oil–Paper Insulation of Power Transformer.
- Author
-
Liu, Jiefeng, Ding, Zheshi, Fan, Xianhao, Geng, Chuhan, Song, Boshu, Wang, Qingyin, and Zhang, Yiyi
- Subjects
- *
POWER transformers , *TRANSFORMER insulation , *MOISTURE , *ALGORITHMS , *MACHINE learning , *CLASSIFICATION algorithms - Abstract
The traditional method for transformer moisture diagnosis is to establish empirical equations between feature parameters extracted from frequency domain spectroscopy (FDS) and the transformer’s moisture content. However, the established empirical equation may not be applicable to a novel testing environment, resulting in an unreliable evaluation result. In this regard, it is acknowledged that FDS combined with machine learning is more suitable for estimating moisture content in a variety of test environments. Nonetheless, the accuracy of the estimation results obtained using the existing method is limited by the algorithm’s inability to generalize. To address this issue, we propose an AdaBoost algorithm-enhanced back-propagation neural network (BP_AdaBoost). This study creates a database by extracting feature parameters from the FDS that characterize the insulation states of the prepared samples. Then, using the BP_AdaBoost algorithm and the newly constructed database, the moisture estimation models are trained. Finally, the results of the estimation are discussed in terms of laboratory and field transformers. By comparing the proposed BP_AdaBoost algorithm to other intelligence algorithms, it is demonstrated that it not only performs better in generalization, but also maintains a high level of accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
5. SDP-Based Bounds for the Quadratic Cycle Cover Problem via Cutting-Plane Augmented Lagrangian Methods and Reinforcement Learning: INFORMS Journal on Computing Meritorious Paper Awardee.
- Author
-
de Meijer, Frank and Sotirov, Renata
- Subjects
- *
REINFORCEMENT learning , *COMBINATORIAL optimization , *TRAVELING salesman problem , *ALGORITHMS , *SEMIDEFINITE programming , *MACHINE learning , *DIRECTED graphs - Abstract
We study the quadratic cycle cover problem (QCCP), which aims to find a node-disjoint cycle cover in a directed graph with minimum interaction cost between successive arcs. We derive several semidefinite programming (SDP) relaxations and use facial reduction to make these strictly feasible. We investigate a nontrivial relationship between the transformation matrix used in the reduction and the structure of the graph, which is exploited in an efficient algorithm that constructs this matrix for any instance of the problem. To solve our relaxations, we propose an algorithm that incorporates an augmented Lagrangian method into a cutting-plane framework by utilizing Dykstra's projection algorithm. Our algorithm is suitable for solving SDP relaxations with a large number of cutting-planes. Computational results show that our SDP bounds and efficient cutting-plane algorithm outperform other QCCP bounding approaches from the literature. Finally, we provide several SDP-based upper bounding techniques, among which is a sequential Q-learning method that exploits a solution of our SDP relaxation within a reinforcement learning environment. Summary of Contribution: The quadratic cycle cover problem (QCCP) is the problem of finding a set of node-disjoint cycles covering all the nodes in a graph such that the total interaction cost between successive arcs is minimized. The QCCP has applications in many fields, among which are robotics, transportation, energy distribution networks, and automatic inspection. Besides this, the problem has a high theoretical relevance because of its close connection to the quadratic traveling salesman problem (QTSP). The QTSP has several applications, for example, in bioinformatics, and is considered to be among the most difficult combinatorial optimization problems nowadays. After removing the subtour elimination constraints, the QTSP boils down to the QCCP. 
Hence, an in-depth study of the QCCP also contributes to the construction of strong bounds for the QTSP. In this paper, we study the application of semidefinite programming (SDP) to obtain strong bounds for the QCCP. Our strongest SDP relaxation is very hard to solve by any SDP solver because of the large number of involved cutting-planes. Because of that, we propose a new approach in which an augmented Lagrangian method is incorporated into a cutting-plane framework by utilizing Dykstra's projection algorithm. We emphasize an efficient implementation of the method and perform an extensive computational study. This study shows that our method is able to handle a large number of cuts and that the resulting bounds are currently the best QCCP bounds in the literature. We also introduce several upper bounding techniques, among which is a distributed reinforcement learning algorithm that exploits our SDP relaxations. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
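The cutting-plane augmented Lagrangian method described above relies on Dykstra's projection algorithm. A minimal sketch of plain Dykstra is below; the two toy convex sets in R^2 stand in for the much larger SDP cone and cut polyhedra of the paper:

```python
import numpy as np

def dykstra(x0, projections, n_iter=1000):
    """Dykstra's alternating-projection algorithm: converges to the
    *projection* of x0 onto the intersection of convex sets (plain
    alternating projections only reaches *some* feasible point)."""
    x = np.asarray(x0, dtype=float).copy()
    increments = [np.zeros_like(x) for _ in projections]
    for _ in range(n_iter):
        for k, proj in enumerate(projections):
            y = proj(x + increments[k])
            increments[k] = x + increments[k] - y
            x = y
    return x

# Toy sets: the unit disk and the halfspace {x1 >= 0.5}.
proj_disk = lambda p: p / max(1.0, np.linalg.norm(p))
proj_half = lambda p: np.array([max(p[0], 0.5), p[1]])

x = dykstra(np.array([-1.0, 1.0]), [proj_disk, proj_half])
print(x)   # ~ [0.5, 0.866], the closest feasible point to (-1, 1)
```

The per-set correction terms (`increments`) are what distinguish Dykstra from naive alternating projection and guarantee the nearest point of the intersection is found.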
6. Critical Appraisal of a Machine Learning Paper: A Guide for the Neurologist.
- Author
-
Vinny, Pulikottil W., Garg, Rahul, Srivastava, M. V. Padma, Lal, Vivek, and Vishnu, Venugoapalan Y.
- Subjects
- *
DEEP learning , *NEUROLOGISTS , *EVIDENCE-based medicine , *MACHINE learning , *BENCHMARKING (Management) , *TERMS & phrases , *ARTIFICIAL neural networks , *PREDICTION models , *ALGORITHMS - Abstract
Machine learning (ML), a form of artificial intelligence (AI), is being increasingly employed in neurology. Reported performance metrics often match or exceed the efficiency of average clinicians. The neurologist is easily baffled by the underlying concepts and terminologies associated with ML studies. The superlative performance metrics of ML algorithms often hide the opaque nature of its inner workings. Questions regarding ML model's interpretability and reproducibility of its results in real-world scenarios, need emphasis. Given an abundance of time and information, the expert clinician should be able to deliver comparable predictions to ML models, a useful benchmark while evaluating its performance. Predictive performance metrics of ML models should not be confused with causal inference between its input and output. ML and clinical gestalt should compete in a randomized controlled trial before they can complement each other for screening, triaging, providing second opinions and modifying treatment. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
7. Canadian Association of Radiologists White Paper on De-identification of Medical Imaging: Part 2, Practical Considerations.
- Author
-
Parker, William, Jaremko, Jacob L., Cicero, Mark, Azar, Marleine, El-Emam, Khaled, Gray, Bruce G., Hurrell, Casey, Lavoie-Cardinal, Flavie, Desjardins, Benoit, Lum, Andrea, Sheremeta, Lori, Lee, Emil, Reinhold, Caroline, Tang, An, and Bromwich, Rebecca
- Subjects
- *
ALGORITHMS , *ARTIFICIAL intelligence , *DATA encryption , *DATABASE management , *DIAGNOSTIC imaging , *HEALTH services accessibility , *MACHINE learning , *MEDICAL protocols , *DICOM (Computer network protocol) , *COVID-19 pandemic - Abstract
The application of big data, radiomics, machine learning, and artificial intelligence (AI) algorithms in radiology requires access to large data sets containing personal health information. Because machine learning projects often require collaboration between different sites or data transfer to a third party, precautions are required to safeguard patient privacy. Safety measures are required to prevent inadvertent access to and transfer of identifiable information. The Canadian Association of Radiologists (CAR) is the national voice of radiology committed to promoting the highest standards in patient-centered imaging, lifelong learning, and research. The CAR has created an AI Ethical and Legal standing committee with the mandate to guide the medical imaging community in terms of best practices in data management, access to health care data, de-identification, and accountability practices. Part 2 of this article will inform CAR members on the practical aspects of medical imaging de-identification, strengths and limitations of de-identification approaches, list of de-identification software and tools available, and perspectives on future directions. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
8. Physics driven behavioural clustering of free-falling paper shapes.
- Author
-
Howison, Toby, Hughes, Josie, Giardina, Fabio, and Iida, Fumiya
- Subjects
- *
PHYSICS , *SET functions , *MACHINE learning , *PHENOMENOLOGICAL theory (Physics) , *CONTINUUM mechanics - Abstract
Many complex physical systems exhibit a rich variety of discrete behavioural modes. Often, the system complexity limits the applicability of standard modelling tools. Hence, understanding the underlying physics of different behaviours and distinguishing between them is challenging. Although traditional machine learning techniques could predict and classify behaviour well, typically they do not provide any meaningful insight into the underlying physics of the system. In this paper we present a novel method for extracting physically meaningful clusters of discrete behaviour from limited experimental observations. This method obtains a set of physically plausible functions that both facilitate behavioural clustering and aid in system understanding. We demonstrate the approach on the V-shaped falling paper system, a new falling paper type system that exhibits four distinct behavioural modes depending on a few morphological parameters. Using just 49 experimental observations, the method discovered a set of candidate functions that distinguish behaviours with an error of 2.04%, while also aiding insight into the physical phenomena driving each behaviour. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
9. Deep Learning Algorithms for Traffic Forecasting: A Comprehensive Review and Comparison with Classical Ones.
- Author
-
Afandizadeh, Shahriar, Abdolahi, Saeid, Mirzahossein, Hamid, and Li, Ruimin
- Subjects
- *
MACHINE learning , *TRAFFIC estimation , *TRANSPORTATION management system , *DEEP learning , *INTELLIGENT transportation systems , *ALGORITHMS , *FORECASTING , *TRAFFIC safety - Abstract
Accurate and timely forecasting of critical components is pivotal in intelligent transportation systems and traffic management, crucially mitigating congestion and enhancing safety. This paper aims to comprehensively review deep learning algorithms and classical models employed in traffic forecasting. Spanning diverse traffic datasets, the study encompasses various scenarios, offering a nuanced understanding of traffic forecasting methods. Reviewing 111 seminal research works since the 1980s, encompassing both deep learning and classical models, the paper begins by detailing the data sources utilized in transportation systems. Subsequently, it delves into the theoretical underpinnings of prevalent deep learning algorithms and classical models prevalent in traffic forecasting. Furthermore, it investigates the application of these algorithms and models in forecasting key traffic characteristics, informed by their utility in transport and traffic analyses. Finally, the study elucidates the merits and drawbacks of proposed models through applied research in traffic forecasting. Findings indicate that while deep learning algorithms and classic models serve as valuable tools, their suitability varies across contexts, necessitating careful consideration in future studies. The study underscores research opportunities in road traffic forecasting, providing a comprehensive guide for future endeavors in this domain. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Machine Learning Models to Predict Readmission Risk of Patients with Schizophrenia in a Spanish Region.
- Author
-
Góngora Alonso, Susel, Herrera Montano, Isabel, Ayala, Juan Luis Martín, Rodrigues, Joel J. P. C., Franco-Martín, Manuel, and de la Torre Díez, Isabel
- Subjects
- *
MACHINE learning , *MENTAL health services , *PATIENT readmissions , *PEOPLE with schizophrenia , *PUBLIC hospitals - Abstract
Currently, high hospital readmission rates have become a problem for mental health services, because it is directly associated with the quality of patient care. The development of predictive models with machine learning algorithms allows the assessment of readmission risk in hospitals. The main objective of this paper is to predict the readmission risk of patients with schizophrenia in a region of Spain, using machine learning algorithms. In this study, we used a dataset with 6089 electronic admission records corresponding to 3065 patients with schizophrenia disorders. Data were collected in the period 2005–2015 from acute units of 11 public hospitals in a Spain region. The Random Forest classifier obtained the best results in predicting the readmission risk, in the metrics accuracy = 0.817, recall = 0.887, F1-score = 0.877, and AUC = 0.879. This paper shows the algorithm with highest accuracy value and determines the factors associated with readmission risk of patients with schizophrenia in this population. It also shows that the development of predictive models with a machine learning approach can help improve patient care quality and develop preventive treatments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
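The Random Forest evaluation reported above (accuracy, recall, F1, AUC) can be reproduced in miniature. The data below is synthetic, since the 6089 admission records from the 11 Spanish public hospitals are not public:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, recall_score, f1_score,
                             roc_auc_score)
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced stand-in for readmitted vs. not-readmitted.
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y,
                                          random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]
acc = accuracy_score(y_te, pred)
print("accuracy", acc)
print("recall  ", recall_score(y_te, pred))
print("F1      ", f1_score(y_te, pred))
print("AUC     ", roc_auc_score(y_te, proba))
```

Note that AUC is computed from the predicted class-1 probabilities, not the hard labels, matching the standard definition used in the study's metrics.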
11. Ensemble Learning Improves the Efficiency of Microseismic Signal Classification in Landslide Seismic Monitoring.
- Author
-
Xin, Bingyu, Huang, Zhiyong, Huang, Shijie, and Feng, Liang
- Subjects
- *
SIGNAL classification , *DATABASES , *RANDOM forest algorithms , *DECISION trees , *ALGORITHMS , *LANDSLIDES - Abstract
A deep-seated landslide could release numerous microseismic signals from creep-slip movement, which includes a rock-soil slip from the slope surface and a rock-soil shear rupture in the subsurface. Machine learning can effectively enhance the classification of microseismic signals in landslide seismic monitoring and interpret the mechanical processes of landslide motion. In this paper, eight sets of triaxial seismic sensors were deployed inside the deep-seated landslide, Jiuxianping, China, and a large number of microseismic signals related to the slope movement were obtained through 1-year-long continuous monitoring. All the data were passed through the seismic event identification mode, the ratio of the long-time average and short-time average. We selected 11 days of data, manually classified 4131 data into eight categories, and created a microseismic event database. Classical machine learning algorithms and ensemble learning algorithms were tested in this paper. In order to evaluate the seismic event classification performance of each algorithmic model, we evaluated the proposed algorithms through the dimensions of the accuracy, precision, and recall of each model. The validation results demonstrated that the best performing decision tree algorithm among the classical machine learning algorithms had an accuracy of 88.75%, while the ensemble algorithms, including random forest, Gradient Boosting Trees, Extreme Gradient Boosting, and Light Gradient Boosting Machine, had an accuracy range from 93.5% to 94.2% and also achieved better results in the combined evaluation of the precision, recall, and F1 score. The specific classification tests for each microseismic event category showed the same results. The results suggested that the ensemble learning algorithms show better results compared to the classical machine learning algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
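The seismic event identification mode mentioned above, the ratio of short-time average to long-time average (STA/LTA), is a classic trigger and is easy to sketch. Window lengths and the synthetic transient below are illustrative:

```python
import numpy as np

def sta_lta(signal, sta_len, lta_len):
    """Short-time-average / long-time-average energy ratio used to
    flag candidate microseismic events in a continuous record."""
    energy = signal ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    ratio = np.zeros(len(signal))
    for i in range(lta_len, len(signal)):
        sta = (csum[i + 1] - csum[i + 1 - sta_len]) / sta_len
        lta = (csum[i + 1] - csum[i + 1 - lta_len]) / lta_len
        ratio[i] = sta / lta if lta > 0 else 0.0
    return ratio

rng = np.random.default_rng(0)
sig = 0.1 * rng.normal(size=2000)
sig[500:520] += 5.0                   # a short transient "event"
ratio = sta_lta(sig, sta_len=10, lta_len=100)
print("peak STA/LTA:", ratio.max())
```

Samples whose ratio exceeds a chosen threshold become candidate events, which are then classified (in the paper, into eight categories by the ensemble models).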
12. A Disruptor-Identification Communication Mechanism for Multi-Agent Path Finding Based on Reinforcement and Imitation Learning.
- Author
-
李梦甜, 向颖岑, 谢志峰, and 马利庄
- Subjects
- *
MACHINE learning , *REINFORCEMENT learning , *PROBLEM solving , *ALGORITHMS , *SCALABILITY - Abstract
Most existing MAPF methods based on communication learning either scale poorly or aggregate too much redundant information, resulting in inefficient communication. To solve these problems, this paper proposed disruptor identifiable communication (DIC), which learns concise communication that excludes non-disruptors by judging whether the agent at the center of the field of view would change its decision because of the presence of a neighbor, successfully filtering out redundant information. This paper further instantiated DIC in a new, highly scalable distributed MAPF solver: the disruptor identifiable communication based on reinforcement and imitation learning algorithm (DICRIA). Firstly, the disruptor discriminator and the policy output layer of DICRIA identify disruptors. Secondly, the algorithm updates the information of the disruptors and the communication-wish senders in two rounds of communication, respectively. Finally, DICRIA outputs the final policy according to the encoding results of each module. Experimental results show that DICRIA's performance is better than that of other similar solvers in almost all environment settings, and the algorithm increases the success rate by 5.2% on average compared to the baseline solver. In dense problem instances with large maps, DICRIA's success rate exceeds the baseline solver's by as much as 44.5%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. A High-Performance Anti-Noise Algorithm for Arrhythmia Recognition.
- Author
-
Feng, Jianchao, Si, Yujuan, Zhang, Yu, Sun, Meiqi, and Yang, Wenke
- Subjects
- *
BLIND source separation , *INDEPENDENT component analysis , *ARRHYTHMIA , *SIGNAL separation , *PRINCIPAL components analysis , *ALGORITHMS - Abstract
In recent years, the incidence of cardiac arrhythmias has been on the rise because of changes in lifestyle and the aging population. Electrocardiograms (ECGs) are widely used for the automated diagnosis of cardiac arrhythmias. However, existing models possess poor noise robustness and complex structures, limiting their effectiveness. To solve these problems, this paper proposes an arrhythmia recognition system with excellent anti-noise performance: a convolutionally optimized broad learning system (COBLS). In the proposed COBLS method, the signal is convolved with blind source separation using a signal analysis method based on high-order-statistic independent component analysis (ICA). The constructed feature matrix is further feature-extracted and dimensionally reduced using principal component analysis (PCA), which reveals the essence of the signal. The linear feature correlation between the data can be effectively reduced, and redundant attributes can be eliminated to obtain a low-dimensional feature matrix that retains the essential features of the classification model. Then, arrhythmia recognition is realized by combining this matrix with the broad learning system (BLS). Subsequently, the model was evaluated using the MIT-BIH arrhythmia database and the MIT-BIH noise stress test database. The outcomes of the experiments demonstrate exceptional performance across all four classification experiments: 99.11% overall accuracy, 96.95% overall precision, 89.71% overall sensitivity, and 93.01% overall F1-score. The proposed model also maintains excellent performance at signal-to-noise ratios of 24 dB, 18 dB, and 12 dB. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
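The ICA-then-PCA front end described above can be sketched with scikit-learn. The three-channel toy signals below stand in for the ECG recordings; the real system feeds the reduced matrix into a broad learning system classifier, which is omitted here:

```python
import numpy as np
from sklearn.decomposition import FastICA, PCA

# Toy multi-channel recording: a mixture of three latent sources.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 8.0, 2000)
sources = np.column_stack([np.sin(2 * t),
                           np.sign(np.sin(3 * t)),
                           rng.normal(size=t.size)])
mixing = rng.normal(size=(3, 3))
observed = sources @ mixing.T

# Step 1: blind source separation with ICA (higher-order statistics).
unmixed = FastICA(n_components=3, random_state=0).fit_transform(observed)
# Step 2: decorrelate and reduce dimensionality with PCA.
reduced = PCA(n_components=2).fit_transform(unmixed)
print(unmixed.shape, reduced.shape)
```

ICA recovers statistically independent components from the mixture, and PCA then drops the directions carrying the least variance, matching the two preprocessing stages named in the abstract.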
14. An effective video inpainting technique using morphological Haar wavelet transform with krill herd based criminisi algorithm.
- Author
-
Srinivasan, M. Nuthal, Chinnadurai, M., Senthilkumar, S., and Dinesh, E.
- Subjects
- *
WAVELET transforms , *MACHINE learning , *INPAINTING , *ANIMAL herds , *ALGORITHMS , *SIGNAL-to-noise ratio - Abstract
In recent times, video inpainting techniques have aimed to fill missing areas or gaps in a video using known pixels. Variation in brightness and differences between patches cause state-of-the-art video inpainting techniques to exhibit high computational complexity and to create seams in the target areas. To resolve these issues, this paper introduces a novel video inpainting technique that employs the Morphological Haar Wavelet Transform combined with the Krill Herd based Criminisi algorithm (MHWT-KHCA) to address the challenges of high computational demand and visible seam artifacts in current inpainting practice. The proposed MHWT-KHCA algorithm strategically reduces computation times and enhances the seamlessness of the inpainting process in videos. Through a series of experiments, the technique is validated against standard metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), where it demonstrates superior performance compared to existing methods. Additionally, the paper outlines potential real-world applications ranging from video restoration to real-time surveillance enhancement, highlighting the technique's versatility and effectiveness. Future research directions include optimizing the algorithm for diverse video formats and integrating machine learning models to advance its capabilities further. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
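The wavelet decomposition underlying the method above can be illustrated with one level of the standard 2D Haar transform. Note this is the linear Haar transform; the morphological variant used in the paper replaces the averaging filters with min/max operators, which is not reproduced here:

```python
import numpy as np

S2 = np.sqrt(2.0)

def haar_1level(img):
    """One level of the 2D Haar transform: average/detail filtering
    along rows then columns gives the LL, LH, HL, HH sub-bands."""
    lo = (img[:, 0::2] + img[:, 1::2]) / S2   # row averages
    hi = (img[:, 0::2] - img[:, 1::2]) / S2   # row details
    ll = (lo[0::2, :] + lo[1::2, :]) / S2
    hl = (lo[0::2, :] - lo[1::2, :]) / S2
    lh = (hi[0::2, :] + hi[1::2, :]) / S2
    hh = (hi[0::2, :] - hi[1::2, :]) / S2
    return ll, lh, hl, hh

def inverse_haar_1level(ll, lh, hl, hh):
    """Exact inverse of haar_1level (perfect reconstruction)."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    lo[0::2], lo[1::2] = (ll + hl) / S2, (ll - hl) / S2
    hi = np.empty_like(lo)
    hi[0::2], hi[1::2] = (lh + hh) / S2, (lh - hh) / S2
    img = np.empty((lo.shape[0], lo.shape[1] * 2))
    img[:, 0::2], img[:, 1::2] = (lo + hi) / S2, (lo - hi) / S2
    return img

frame = np.random.default_rng(0).normal(size=(8, 8))
bands = haar_1level(frame)
restored = inverse_haar_1level(*bands)
print(np.allclose(restored, frame))   # True: perfect reconstruction
```

Inpainting pipelines typically fill the missing region sub-band by sub-band and then invert the transform, which is why perfect reconstruction matters.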
15. Citation recommendation using modified HITS algorithm.
- Author
-
Kammari, Monachary and Bhavani, S. Durga
- Subjects
- *
DEEP learning , *ALGORITHMS , *COMPUTER performance , *WEBSITES , *MACHINE learning - Abstract
Over the years the number of research publications per year is growing exponentially. Finding research papers of quality from the massive literature of relevant articles is a challenging and time-consuming task. The approaches in the latest literature address citation recommendation by utilizing large bibliographic information and use machine learning and deep learning methods for the task. These techniques clearly require a large amount of training data as well as machines with high processing power. To overcome these issues, we propose a novel method by modifying the popular hyperlink induced topic search (HITS), a web page ranking algorithm, as citation recommendation using hyperlink induced topic search (CR-HITS) that works on a directed and weighted heterogeneous bibliographic network containing diverse types of nodes and edges. We define effective scoring schemes for nodes and edges based on basic bibliographic information like citations of papers, number of publications of an author, etc. Given a few seed papers, the citation recommendation algorithm CR-HITS is run on small neighborhoods of the seed papers and hence the time taken by the execution is very small to yield the final recommendations. To the best of our knowledge, HITS has been used for the first time for the citation recommendation problem. We perform extensive experimentation on DBLP (version-11) and ACM (version-9) datasets and compare the results with many baseline methods in terms of MAP, MRR, and recall@N measures. The performance of the proposed algorithms is superior with respect to the MAP metric and matches the second best for the other two metrics. Since the top two algorithms use deep learning methods and use much larger bibliographic information including abstracts of the papers, we claim that our approach utilizes very low resources, yet yields recommendations that are very close to the top recommendations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
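The core of the abstract above is the classic HITS power iteration, run on a weighted citation neighborhood. A minimal sketch of plain (unmodified) HITS on a tiny citation graph, with an adjacency convention of `adj[i, j] = 1` meaning paper i cites paper j, is:

```python
import numpy as np

def hits(adj, n_iter=50):
    """Plain HITS power iteration on a (possibly weighted) adjacency
    matrix: authorities accumulate from hubs that point to them, hubs
    from the authorities they point to, normalised each step."""
    n = adj.shape[0]
    hubs = np.ones(n)
    auths = np.ones(n)
    for _ in range(n_iter):
        auths = adj.T @ hubs
        auths /= np.linalg.norm(auths)
        hubs = adj @ auths
        hubs /= np.linalg.norm(hubs)
    return hubs, auths

# Papers 0, 1, 2 all cite paper 3; paper 0 also cites paper 1.
adj = np.zeros((4, 4))
adj[0, 3] = adj[1, 3] = adj[2, 3] = adj[0, 1] = 1.0
hubs, auths = hits(adj)
print("top authority:", int(np.argmax(auths)))   # 3
```

CR-HITS extends this idea by weighting nodes and edges with bibliographic scores (citations, author publication counts) and running the iteration only on small seed-paper neighborhoods.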
16. Applying Machine Learning in Marketing: An Analysis Using the NMF and k-Means Algorithms.
- Author
-
Gallego, Victor, Lingan, Jessica, Freixes, Alfons, Juan, Angel A., and Osorio, Celia
- Subjects
- *
K-means clustering , *MACHINE learning , *ARTIFICIAL intelligence , *ADVERTISING effectiveness , *DATABASES - Abstract
The integration of machine learning (ML) techniques into marketing strategies has become increasingly relevant in modern business. Utilizing scientific manuscripts indexed in the Scopus database, this article explores how this integration is being carried out. Initially, a focused search is undertaken for academic articles containing both the terms "machine learning" and "marketing" in their titles, which yields a pool of papers. These papers have been processed using the Supabase platform. The process has included steps like text refinement and feature extraction. In addition, our study uses two key ML methodologies: topic modeling through NMF and a comparative analysis utilizing the k-means clustering algorithm. Through this analysis, three distinct clusters emerged, thus clarifying how ML techniques are influencing marketing strategies, from enhancing customer segmentation practices to optimizing the effectiveness of advertising campaigns. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
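The NMF-plus-k-means pipeline described above maps directly onto scikit-learn. The four-document corpus below is a toy stand-in for the Scopus titles and abstracts:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

docs = [
    "customer segmentation with clustering models",
    "clustering customers for targeted marketing",
    "advertising campaign optimization with machine learning",
    "machine learning improves advertising effectiveness",
]
tfidf = TfidfVectorizer().fit_transform(docs)

# NMF factorises the TF-IDF matrix into a document-topic matrix W
# and a topic-term matrix H; k-means then groups documents by W.
W = NMF(n_components=2, random_state=0).fit_transform(tfidf)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(W)
print(labels)
```

Clustering in the low-dimensional topic space (rather than raw TF-IDF) is what lets thematically similar papers, e.g. the segmentation pair versus the advertising pair, fall into the same cluster.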
17. Canadian Association of Radiologists White Paper on Ethical and Legal Issues Related to Artificial Intelligence in Radiology.
- Author
-
Jaremko, Jacob L., Azar, Marleine, Bromwich, Rebecca, Lum, Andrea, Alicia Cheong, Li Hsia, Gibert, Martin, Laviolette, François, Gray, Bruce, Reinhold, Caroline, Cicero, Mark, Chong, Jaron, Shaw, James, Rybicki, Frank J., Hurrell, Casey, Lee, Emil, and Tang, An
- Subjects
- *
ARTIFICIAL intelligence laws , *ACQUISITION of property , *ALGORITHMS , *ARTIFICIAL intelligence , *AUTONOMY (Psychology) , *CONCEPTUAL structures , *MEDICAL ethics , *MEDICAL practice , *MEDICAL specialties & specialists , *PRIVACY , *RADIOLOGISTS , *DATA security - Abstract
Artificial intelligence (AI) software that analyzes medical images is becoming increasingly prevalent. Unlike earlier generations of AI software, which relied on expert knowledge to identify imaging features, machine learning approaches automatically learn to recognize these features. However, the promise of accurate personalized medicine can only be fulfilled with access to large quantities of medical data from patients. This data could be used for purposes such as predicting disease, diagnosis, treatment optimization, and prognostication. Radiology is positioned to lead development and implementation of AI algorithms and to manage the associated ethical and legal challenges. This white paper from the Canadian Association of Radiologists provides a framework for study of the legal and ethical issues related to AI in medical imaging, related to patient data (privacy, confidentiality, ownership, and sharing); algorithms (levels of autonomy, liability, and jurisprudence); practice (best practices and current legal framework); and finally, opportunities in AI from the perspective of a universal health care system. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
18. Prediction of volume loss of reinforced polytetrafluoroethylene matrix composites using machine learning algorithms.
- Author
-
Ibrahim, M. A., Gidado, A. Y., Auwal, S. T., Kunya, B. I., Nura, M., and Jacqueline, L.
- Subjects
- *
MACHINE learning , *STANDARD deviations , *PARTICLE swarm optimization , *POLYTEF , *ALGORITHMS - Abstract
Machine learning (ML) algorithms are getting unsurpassed exposure as a potential technique for solving and modelling the wear behaviour of polymer matrix composites (PMCs). This paper presents the application of ML algorithms in predicting volume loss of reinforced polytetrafluoroethylene (PTFE) matrix composites. Firstly, the Taguchi L27 was harnessed to generate data set in a regulated way. Then multi linear regression (MLR), support vector regression (SVR), particle swarm optimization (PSO) and Harris Hawk's optimization (HHO) coupled with SVR ML algorithms were developed to accurately predict the volume loss of reinforced PTFE matrix composites. Based on the results achieved, it was found that SVR-HHO ML algorithm predicted the volume loss of reinforced PTFE matrix composites better than the other algorithms with determination coefficient (96 %) and root mean square error of 11 %. The ML algorithms could be used for prediction of volume loss of reinforced PTFE matrix composites and development of new PMCs with specific volume loss resistance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
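The SVR evaluation above, with a determination coefficient and root mean square error, can be sketched as follows. The wear data is synthetic and the hyperparameters are scikit-learn defaults; the paper's best model additionally tunes SVR with Harris Hawks optimization, which is not reproduced here:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for Taguchi-designed wear experiments:
# one process parameter -> volume loss, with measurement noise.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 5.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = SVR(kernel="rbf").fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)
rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
print("R^2 %.3f  RMSE %.3f" % (r2, rmse))
```

Computing RMSE via `np.sqrt(mean_squared_error(...))` keeps the example compatible across scikit-learn versions.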
19. Channel Prediction for Underwater Acoustic Communication: A Review and Performance Evaluation of Algorithms.
- Author
-
Liu, Haotian, Ma, Lu, Wang, Zhaohui, and Qiao, Gang
- Subjects
- *
DEEP learning , *UNDERWATER acoustic communication , *MACHINE learning , *ALGORITHMS , *TELECOMMUNICATION systems , *FORECASTING - Abstract
Underwater acoustic (UWA) channel prediction technology, as an important topic in UWA communication, has played an important role in UWA adaptive communication network and underwater target perception. Although many significant advancements have been achieved in underwater acoustic channel prediction over the years, a comprehensive summary and introduction is still lacking. As the first comprehensive overview of UWA channel prediction, this paper introduces past works and algorithm implementation methods of channel prediction from the perspective of linear, kernel-based, and deep learning approaches. Importantly, based on available at-sea experiment datasets, this paper compares the performance of current primary UWA channel prediction algorithms under a unified system framework, providing researchers with a comprehensive and objective understanding of UWA channel prediction. Finally, it discusses the directions and challenges for future research. The survey finds that linear prediction algorithms are the most widely applied, and deep learning, as the most advanced type of algorithm, has moved this field into a new stage. The experimental results show that the linear algorithms have the lowest computational complexity, and when the training samples are sufficient, deep learning algorithms have the best prediction performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. VIS-SLAM: A Real-Time Dynamic SLAM Algorithm Based on the Fusion of Visual, Inertial, and Semantic Information.
- Author
-
Wang, Yinglong, Liu, Xiaoxiong, Zhao, Minkun, and Xu, Xinlong
- Subjects
- *
MOBILE robots , *MACHINE learning , *MOBILE learning , *DEEP learning , *ALGORITHMS , *INFORMATION measurement , *PROBABILITY theory , *GEOMETRY - Abstract
To ensure accurate autonomous localization of mobile robots in environments with dynamic objects, this paper presents a deep learning-based visual-inertial SLAM technique that addresses the limited real-time performance of deep learning algorithms and the poor robustness of purely visual geometric algorithms. First, a non-blocking model is designed to extract semantic information from images. Then, a motion probability hierarchy model is proposed to obtain prior motion probabilities of feature points. For image frames without semantic information, a motion probability propagation model is designed to determine the prior motion probabilities of feature points. Furthermore, because the output of inertial measurements is unaffected by dynamic objects, this paper integrates inertial measurement information to improve the estimation accuracy of feature-point motion probabilities. An adaptive threshold-based motion probability estimation method is proposed, and finally, positioning accuracy is enhanced by eliminating feature points with excessively high motion probabilities. Experimental results demonstrate that the proposed algorithm achieves accurate localization in dynamic environments while maintaining real-time performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Anomaly Detection in Blockchain Networks Using Unsupervised Learning: A Survey.
- Author
-
Cholevas, Christos, Angeli, Eftychia, Sereti, Zacharoula, Mavrikos, Emmanouil, and Tsekouras, George E.
- Subjects
- *
DATA structures , *MACHINE learning , *PRIVATE networks , *BLOCKCHAINS , *ALGORITHMS - Abstract
In decentralized systems, ensuring the security and integrity of blockchain networks is a pressing issue. This survey investigates anomaly detection techniques in blockchain ecosystems through the lens of unsupervised learning, reviewing state-of-the-art algorithms that discern deviations from normal patterns and categorizing how they are applied to the variety of problems in this field. We propose that the use of unsupervised algorithms in blockchain anomaly detection should be viewed not only as an implementation procedure but also as an integration procedure, where the merits of these algorithms can be combined in ways determined by the problem at hand. In that sense, the main contribution of this paper is a thorough study of the interplay between various unsupervised learning algorithms and how it can be used to confront malicious activities and behaviors within public and private blockchain networks. The result is the definition of three categories, whose characteristics are recognized in terms of how the respective integration takes place. When implementing unsupervised learning, the structure of the data plays a pivotal role; this paper therefore also provides an in-depth presentation of the data structures commonly used in unsupervised learning-based blockchain anomaly detection. This analysis is accompanied by a presentation of the typical anomalies observed so far, along with a description of the general machine learning frameworks developed to deal with them. Finally, the paper highlights challenges and directions that can serve as a comprehensive compendium for future research efforts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
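As a concrete, hedged illustration of unsupervised anomaly detection on transaction-like data: the survey above covers many algorithm families, of which an isolation forest is only one, and the two features below (a "value" and a "fee") are invented toy data, not blockchain measurements.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic "normal" transactions clustered around (value≈1, fee≈10),
# plus two extreme outliers standing in for anomalous activity.
rng = np.random.default_rng(1)
normal = rng.normal(loc=[1.0, 10.0], scale=[0.2, 2.0], size=(200, 2))
outliers = np.array([[50.0, 0.1], [80.0, 0.05]])
X = np.vstack([normal, outliers])

# Unsupervised fit: no labels are used; contamination sets the expected
# fraction of anomalies.
clf = IsolationForest(contamination=0.02, random_state=0).fit(X)
labels = clf.predict(X)          # -1 = anomaly, 1 = normal
n_anom = int((labels == -1).sum())
```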
22. Cognitive decline assessment using semantic linguistic content and transformer deep learning architecture.
- Author
-
PL, Rini and KS, Gayathri
- Subjects
- *
DIAGNOSIS of dementia , *COGNITION disorders diagnosis , *SPEECH evaluation , *CROSS-sectional method , *PREDICTION models , *TASK performance , *DESCRIPTIVE statistics , *NATURAL language processing , *LINGUISTICS , *EXPERIMENTAL design , *DEEP learning , *COMPUTER-aided diagnosis , *LATENT semantic analysis , *NEUROPSYCHOLOGICAL tests , *RESEARCH , *SEMANTIC memory , *EARLY diagnosis , *COMPARATIVE studies , *MACHINE learning , *FACTOR analysis , *ALGORITHMS , *DEMENTIA patients - Abstract
Background: Dementia is a cognitive decline that leads to the progressive deterioration of an individual's ability to perform daily activities independently. As a result, considerable time and resources are spent on caretaking. Early detection of dementia can significantly reduce the effort and resources needed for caretaking. Aims: This research proposes an approach for assessing cognitive decline by analysing speech data, focusing on speech relevance as a crucial indicator of memory recall. Methods & Procedures: This is a cross-sectional, online, self-administered study. The proposed method uses a transformer-based deep learning architecture, with BERT (Bidirectional Encoder Representations from Transformers) and Sentence-Transformer models deriving encoded representations of speech transcripts. These representations provide contextually descriptive information that is used to analyse the relevance of sentences in their respective contexts. The encoded representations are then compared using cosine similarity to measure the relevance of uttered sequences of sentences. The study uses the Pitt Corpus dementia dataset for experimentation, which consists of speech data from individuals with and without dementia. The accuracy of the proposed multi-QA-MPNet (multi-query maximum inner product search pretraining) model is compared with other pretrained Sentence-Transformer models. Outcomes & Results: The results show that the proposed approach outperforms the other models in capturing context-level information, particularly semantic memory. Additionally, the study explores the suitability of different similarity measures for evaluating the relevance of uttered sequences of sentences; the experiments reveal that cosine similarity is the most appropriate measure for this task.
Conclusions & Implications: This finding has significant implications for detecting early warning signs of dementia, as it suggests that cosine similarity metrics can effectively capture the semantic relevance of spoken language. Persistent cognitive decline over time is one indicator of dementia, and early dementia could also be recognised by analysing other modalities such as speech and brain images. WHAT THIS PAPER ADDS: What is already known on this subject: Speech- and language-based detection methods can be useful for dementia diagnosis, as language difficulties are often early signs of the disease. Additionally, deep learning algorithms have shown promise in detecting and diagnosing dementia by analysing large datasets, particularly in speech- and language-based detection. However, further research is needed to validate the performance of these algorithms on larger and more diverse datasets and to address potential biases and limitations. What this paper adds to existing knowledge: This study presents a unique and effective approach to cognitive decline assessment through the analysis of speech data. It provides valuable insights into the importance of context and semantic memory in accurately detecting potential dementia and demonstrates the applicability of deep learning models for this purpose. The findings have important clinical implications and can inform future research and development in dementia detection and care. What are the potential or actual clinical implications of this work?: The proposed approach to cognitive decline assessment using speech data and deep learning models has significant clinical implications. It has the potential to improve the accuracy and efficiency of dementia diagnosis, leading to earlier detection and more effective treatment, which can improve patient outcomes and quality of life. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
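The cosine-similarity comparison of encoded sentence representations described in the abstract above reduces to a one-line formula. A minimal sketch, with small toy vectors standing in for the high-dimensional Sentence-Transformer embeddings:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors: 1 = same
    direction (semantically aligned), 0 = orthogonal, -1 = opposite."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy "embeddings": b points the same way as a, c is mostly unrelated.
a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])
c = np.array([-1.0, 0.0, 0.5])
```

In the paper's pipeline, `a` and `b` would be transcript sentences encoded by the Sentence-Transformer model, and a low similarity between consecutive utterances would flag a loss of contextual relevance.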
23. Research on health monitoring and damage recognition algorithm of building structures based on image processing.
- Author
-
Tang, Sicong and Wang, Hailong
- Subjects
- *
IMAGE processing , *MACHINE learning , *PARAMETER identification , *NOISE control , *ALGORITHMS , *IMAGE encryption , *DIGITAL images - Abstract
As urbanization deepens and science and technology progress, people transform and develop nature on an ever larger scale, the most iconic transformation being the wide variety of building structures they construct. With the passage of time, building structures exposed to years of wind and sun show signs of "illness" that, if not treated promptly, have a huge impact on their stability and safety. On this basis, and according to the characteristics of crack identification on concrete structure surfaces, this paper selects a background subtraction algorithm for image noise reduction. Through three steps of digital image noise reduction, crack extraction, and crack parameter identification, the quantitative recognition of cracks is completed and a complete system of crack parameter identification is formed. The experimental results show that the machine learning model of the proposed building-structure health monitoring and damage recognition algorithm has excellent statistical performance, and the relative error of recognition can be controlled within 10%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
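The background-subtraction step described above can be sketched with plain array operations. The threshold value and the tiny synthetic image below are illustrative, not the paper's parameters:

```python
import numpy as np

def extract_crack_mask(image, background, threshold=30):
    """Background subtraction: pixels that differ strongly from the
    estimated background are marked as candidate crack pixels."""
    diff = np.abs(image.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

background = np.full((4, 4), 200, dtype=np.uint8)   # bright concrete surface
image = background.copy()
image[1, 1:3] = 60                                  # two dark "crack" pixels
mask = extract_crack_mask(image, background)
crack_area = int(mask.sum())  # one recoverable crack parameter: area in pixels
```

Crack parameters such as length and width would then be measured from the connected components of `mask`.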
24. Classification of high-dimensional imbalanced biomedical data based on spectral clustering SMOTE and marine predators algorithm.
- Author
-
Qin, Xiwen, Zhang, Siqi, Dong, Xiaogang, Shi, Hongyu, and Yuan, Liping
- Subjects
- *
LINEAR operators , *CLASSIFICATION , *ALGORITHMS , *LEARNING strategies , *FEATURE selection , *LOTKA-Volterra equations , *MACHINE learning , *RANDOM forest algorithms - Abstract
The study of biomedical data is crucial for disease diagnosis, health management, and medicine development. However, biomedical data are usually characterized by high dimensionality and class imbalance, which increase computational cost and degrade the classification performance on the minority class, making accurate classification difficult. In this paper, we propose a biomedical data classification method based on feature selection and data resampling. First, the minimal-redundancy maximal-relevance (mRMR) method is used to select biomedical data features, reducing the feature dimension and computational cost and improving generalization ability. Then, a new SMOTE oversampling method (Spectral-SMOTE) is proposed, which solves the noise sensitivity problem of SMOTE through an improved spectral clustering method. Finally, the marine predators algorithm is improved using piecewise linear chaotic maps and a random opposition-based learning strategy to improve its optimization ability and convergence speed, and the key parameters of Spectral-SMOTE are optimized using the improved marine predators algorithm, which effectively improves the performance of the oversampling approach. Five real biomedical datasets are selected to test and evaluate the proposed method using four classifiers, and three evaluation metrics are used to compare it with seven data resampling methods. The experimental results show that the method effectively improves the classification performance on biomedical data. Statistical tests also show that the proposed PRMPA-Spectral-SMOTE method outperforms the other data resampling methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
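The core SMOTE interpolation that Spectral-SMOTE builds on generates each synthetic point on the segment between a minority-class sample and one of its minority-class neighbours. A minimal sketch of that base step (not the paper's improved variant, which changes how neighbours and clusters are chosen):

```python
import numpy as np

def smote_sample(x_i, x_nn, rng):
    """Classic SMOTE synthesis: a new point drawn uniformly on the
    line segment between a minority sample and a minority neighbour."""
    gap = rng.random()                 # gap in [0, 1)
    return x_i + gap * (x_nn - x_i)

rng = np.random.default_rng(42)
x_i = np.array([0.0, 0.0])             # a minority-class sample
x_nn = np.array([1.0, 2.0])            # one of its minority-class neighbours
new = smote_sample(x_i, x_nn, rng)     # lies on the segment between them
```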
25. An Algorithm of Complete Coverage Path Planning for Deep‐Sea Mining Vehicle Clusters Based on Reinforcement Learning.
- Author
-
Xing, Bowen, Wang, Xiao, and Liu, Zhenchong
- Subjects
- *
DEEP reinforcement learning , *MACHINE learning , *OCEAN mining , *ALGORITHMS - Abstract
This paper proposes a deep reinforcement learning algorithm to achieve complete-coverage path planning for deep-sea mining vehicle clusters. First, the mining vehicles and the deep-sea mining environment are modeled. Then, a series of algorithm designs and optimizations are implemented based on Deep Q-Networks (DQN). A map fusion mechanism integrates the grid-matrix data from multiple mining vehicles to obtain the state matrix of the complete environment, and a preprocessing method for the state matrix is designed to provide suitable training data for the neural network. The reward function and action selection mechanism of the algorithm are also optimized for the requirements of cooperative cluster operation. Furthermore, the algorithm uses distance constraints to prevent the entanglement of underwater hoses. To improve training efficiency, the algorithm filters and extracts training samples through a sample quality score. Considering the requirements of the cluster complete-coverage mission, this paper introduces Long Short-Term Memory (LSTM) into the neural network to achieve a better training effect. After completing the above optimization and design, the proposed algorithm is verified through simulation experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Data understanding and preparation in business domain: Importance of meta-features characterization.
- Author
-
Oreški, Dijana and Pihir, Igor
- Subjects
- *
MACHINE learning , *ALGORITHMS , *DEEP learning , *EXPERTISE - Abstract
Various machine learning algorithms have been developed with the aim of creating precise and trustworthy models and extracting knowledge from data sources. Deep expertise in machine learning is required for the challenging task of choosing the right algorithm for a specific dataset: no single algorithm outperforms all others across all applications and datasets. The difficulty of choosing an appropriate algorithm for a specific task in a specific domain is related to the properties of the dataset, which are measured through meta-features. Meta-features describe the task and can help explain how one machine learning approach outperforms others on a given dataset. Meta-learning, or learning about the effectiveness of learning algorithms, was developed to deal with this issue. Focused work is required because previous research has not successfully identified meta-features in particular domains. In this research, we evaluate various meta-feature characterization methodologies, concentrating on basic meta-features, with business-domain data as the focus of this paper. We compute basic (general) meta-features and illustrate several use cases for their application. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. A modified fuzzy K-nearest neighbor using sine cosine algorithm for two-classes and multi-classes datasets.
- Author
-
Zheng, Chengfeng, Kasihmuddin, Mohd Shareduwan Mohd, Mansor, Mohd. Asyraf, Jamaludin, Siti Zulaikha Mohd, and Zamri, Nur Ezlin
- Subjects
- *
K-nearest neighbor classification , *MACHINE learning , *ALGORITHMS , *COSINE function - Abstract
The sine cosine algorithm has become a widely researched swarm optimization method in recent years due to its simplicity and effectiveness. Building on these advantages, this paper delves deeper into the key parameters that influence the algorithm's performance and introduces modifications, integrating a reverse learning algorithm and adding an elite opposition solution, to create the modified sine cosine algorithm (modified SCA). Furthermore, by combining the fuzzy k-nearest neighbor method with the modified SCA, the study runs experiments on numeric datasets with two or more classes and analyzes the results. The accuracy rate (ACC) achieved by the modified SCA FKNN is compared with other models, with comparison results and tables presented for each. The modified SCA FKNN proposed in this paper has a clear advantage in accuracy rate (ACC). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
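A minimal sketch of the fuzzy k-nearest neighbor step underlying the model above, using Keller-style inverse-distance class memberships. In the paper, the modified SCA would tune parameters such as `k` and the fuzzifier `m`, which are fixed by hand here, and the data below are toy values:

```python
import numpy as np

def fuzzy_knn_memberships(x, X_train, y_train, k=3, m=2.0):
    """Keller-style fuzzy k-NN: class memberships weighted by inverse
    distance to the k nearest training samples; memberships sum to 1."""
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[nn], 1e-12) ** (2.0 / (m - 1.0))
    classes = np.unique(y_train)
    memberships = np.array([w[y_train[nn] == c].sum() for c in classes])
    return memberships / memberships.sum()

# Two well-separated 1-D classes; the query sits inside class 0.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1]])
y = np.array([0, 0, 0, 1, 1])
u = fuzzy_knn_memberships(np.array([0.05]), X, y)  # u[c] = membership in class c
```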
28. Comparative analysis of algorithms used for Twitter spam drift detection.
- Author
-
Thomas, Libina, Nirvinda, Mona, Mounika, Lalitha, and Hulipalled, Vishwanath
- Subjects
- *
SPAM email , *ALGORITHMS , *COMPARATIVE studies , *SOCIAL networks , *SOCIAL interaction , *MACHINE learning - Abstract
Twitter is known to be one of the most familiar social networking platforms these days, among many others, with a great deal of user engagement. This microblogging site encourages social interaction, allowing users to stay up to date on the latest news and events and share them with others in real time. Tweets are limited to 280 characters and may include links to related websites and tools. A platform with such wide reach is prone to negative targeting, and spam is one way to do it. Spammers use the platform to display malicious content that is inappropriate and harmful to users worldwide. Machine learning offers various approaches that can be used to detect and counter spam. However, it has been observed that the properties of tweets vary over time, making spam difficult to detect and leading to the "Twitter spam drift" problem. This paper reviews the papers published since 2018 that have focused on the spam drift problem and gives a comparative analysis of the different algorithms applied to various datasets to tackle the problem. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Advancements in UAV Path Planning: A Deep Reinforcement Learning Approach with Soft Actor-Critic for Enhanced Navigation.
- Author
-
Guo, Jingrui, Zhou, Guanzhong, Huang, Hailong, and Huang, Chao
- Subjects
- *
DEEP reinforcement learning , *REINFORCEMENT learning , *DRONE aircraft , *MACHINE learning , *ALGORITHMS - Abstract
This paper tackles the intricate challenge of autonomous navigation for Unmanned Aerial Vehicles (UAVs) through dynamically changing environments. We focus on a sophisticated Deep Reinforcement Learning (DRL) approach using the Soft Actor-Critic (SAC) algorithm, optimized for UAV path planning within a continuous action space. This methodology leverages environmental image data to enhance the precision of flight maneuvers and effective obstacle avoidance. Our approach, validated through extensive simulations in Gazebo and field tests, demonstrates the algorithm’s efficacy in enabling UAVs to adeptly navigate obstacles using depth maps. The study further explores the robustness of the SAC algorithm by comparing it with traditional DRL methods, emphasizing its superior performance in real-world applications. This research contributes significantly to advancing UAV technology, particularly in autonomous motion planning, by integrating cutting-edge machine learning techniques. The video link is: https://www.youtube.com/watch?v=Nd_aMzejNXY. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. An Adaptive Transfer Learning Framework for Functional Classification.
- Author
-
Qin, Caihong, Xie, Jinhan, Li, Ting, and Bai, Yang
- Subjects
- *
MACHINE learning , *INFORMATION resources , *PRIOR learning , *CLASSIFICATION , *ALGORITHMS - Abstract
In this paper, we study the transfer learning problem in functional classification, aiming to improve the classification accuracy on the target data by leveraging information from related source datasets. To facilitate transfer learning, we propose a novel transferability function tailored for classification problems, enabling a more accurate evaluation of the similarity between source and target dataset distributions. Interestingly, we find that a source dataset can offer more substantial benefits under certain conditions than another dataset with a distribution identical to the target dataset. This observation renders the debiasing step commonly used in parameter-based transfer learning algorithms unnecessary under some circumstances in the classification problem. In particular, we propose two adaptive transfer learning algorithms based on the functional distance-weighted discrimination (DWD) classifier for scenarios with and without prior knowledge of informative sources. Furthermore, we establish an upper bound on the excess risk of the proposed classifiers, making the statistical gain from transfer learning mathematically provable. Simulation studies thoroughly examine the finite-sample performance of the proposed algorithms. Finally, we apply the proposed method to Beijing air-quality data and significantly improve the prediction of the PM2.5 level at a target station by effectively incorporating information from source datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Research on ELoran Demodulation Algorithm Based on Multiclass Support Vector Machine.
- Author
-
Liu, Shiyao, Yan, Baorong, Guo, Wei, Hua, Yu, Zhang, Shougang, Lu, Jun, Xu, Lu, and Yang, Dong
- Subjects
- *
SUPPORT vector machines , *PULSE modulation , *DEMODULATION , *ALGORITHMS , *EXHIBITIONS - Abstract
Demodulation and decoding are pivotal for the eLoran system's timing and information transmission capabilities. This paper proposes a novel demodulation algorithm leveraging a multiclass support vector machine (MSVM) for pulse position modulation (PPM) of eLoran signals. Firstly, the existing demodulation method based on envelope phase detection (EPD) technology is reviewed, highlighting its limitations. Secondly, a detailed exposition of the MSVM algorithm is presented, demonstrating its theoretical foundations and comparative advantages over the traditional method and several other methods proposed in this study. Subsequently, through comprehensive experiments, the algorithm parameters are optimized, and the parallel comparison of different demodulation methods is carried out in various complex environments. The test results show that the MSVM algorithm is significantly superior to traditional methods and other kinds of machine learning algorithms in demodulation accuracy and stability, particularly in high-noise and -interference scenarios. This innovative algorithm not only broadens the design approach for eLoran receivers but also fully meets the high-precision timing service requirements of the eLoran system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
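A toy illustration of multiclass-SVM demodulation of pulse-position symbols, in the spirit of the MSVM approach above. The signal model, noise level, and hyperparameters are invented stand-ins for real eLoran pulses:

```python
import numpy as np
from sklearn.svm import SVC

# Each PPM symbol is one of 4 classes, encoded by which slot of a sampled
# envelope carries the pulse; Gaussian noise stands in for the channel.
rng = np.random.default_rng(3)
n_per, n_samples = 50, 16
X, y = [], []
for cls in range(4):
    for _ in range(n_per):
        sig = np.zeros(n_samples)
        sig[cls * 4 : cls * 4 + 2] = 1.0      # pulse position encodes the symbol
        X.append(sig + rng.normal(0, 0.1, n_samples))
        y.append(cls)
X, y = np.array(X), np.array(y)

# One-vs-rest multiclass SVM: train on half the symbols, test on the rest.
clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X[::2], y[::2])
acc = clf.score(X[1::2], y[1::2])
```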
32. Reducing the Overfitting in Convolutional Neural Network using Nature-Inspired Algorithm: A Novel Hybrid Approach.
- Author
-
Alamri, Nawaf Mohammad
- Subjects
- *
CONVOLUTIONAL neural networks , *BEES algorithm , *MACHINE learning , *ALGORITHMS , *DEEP learning , *STIMULUS generalization - Abstract
A convolutional neural network (CNN) is one of the well-known deep learning algorithms; it uses convolutional filters to extract features from images. The most important issue when training a CNN is overfitting, which prevents the model from generalizing to unseen data. This paper addresses this issue by proposing a novel hybrid approach that uses the bees algorithm (BA) to optimize the regularization parameter and weight regularization factor, adjusting the regularization value in each convolutional and fully connected layer; the result is a hybrid algorithm called bees algorithm regularized convolutional neural network (BA-RCNN). It was applied to three different datasets for classification or prediction and improved validation and testing accuracy, narrowing the gap with training accuracy, which means overfitting is reduced compared with the original CNN. Applying BA-RCNN to 'Cifar10DataDir' improved validation accuracy from 80.34% for the original CNN to 82.80%; on the electrocardiogram dataset the improvement was from 87.80% to 90.47%; both datasets were used for classification. Furthermore, the hybrid BA-RCNN algorithm was applied to predict porosity percentage from artificial porosity images, and validation accuracy improved from 81.67% for the original CNN to 87.33% for the hybrid BA-RCNN algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. High-Precision Direction of Arrival Estimation Based on LightGBM.
- Author
-
Wang, Fuwei, Zhang, Xiaoyu, Liu, Lu, Chen, Chen, He, Xingrui, and Zhou, Yan
- Subjects
- *
DIRECTION of arrival estimation , *SIGNAL-to-noise ratio , *MACHINE learning , *HISTOGRAMS , *ALGORITHMS , *GENERALIZATION - Abstract
Machine learning-based direction-of-arrival (DOA) estimation methods can have good predictive ability even in complex scenarios. However, their estimation performance on unseen data is poor because they rely on the generalization ability of the algorithm itself. Therefore, this paper proposes a DOA estimation method based on the Light Gradient Boosting Machine (LightGBM) algorithm. Using its histogram algorithm, gradient-based one-side sampling, and exclusive feature bundling, LightGBM reduces the time needed to find the best split point and reduces the amount of data and the number of features in the dataset, thereby shortening model training time while achieving high prediction accuracy. Applying LightGBM to the DOA estimation problem and training on a large dataset improves estimation accuracy while reducing training cost. Simulation and real experimental results verify the effectiveness of the method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. DeepEPhishNet: a deep learning framework for email phishing detection using word embedding algorithms.
- Author
-
Somesha, M and Pais, Alwyn Roshan
- Subjects
- *
DEEP learning , *PHISHING , *SOCIAL engineering (Fraud) , *ARTIFICIAL neural networks , *EMAIL , *ALGORITHMS , *MACHINE learning - Abstract
Email phishing is a social engineering scheme that uses spoofed emails to trick users into disclosing legitimate business and personal credentials. Many phishing email detection techniques exist based on machine learning, deep learning, and word embeddings. In this paper, we propose a new technique for detecting phishing emails using word embeddings (Word2Vec, FastText, and TF-IDF) and deep learning techniques (a DNN and a BiLSTM network). Our proposed technique uses only four header-based features of the emails (From, Return-Path, Subject, Message-ID) for classification. We applied several word embeddings in evaluating our models. From the experimental evaluation, we observed that the DNN model with FastText-SkipGram achieved an accuracy of 99.52% and the BiLSTM model with FastText-SkipGram achieved an accuracy of 99.42%; with the same word embedding (FastText-SkipGram), the DNN thus outperformed the BiLSTM. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
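A minimal sketch of the TF-IDF variant on concatenated header fields. The paper's best results use FastText-SkipGram embeddings with a DNN; the classifier and the header strings below are fabricated toy stand-ins, not the paper's data or model:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: each "document" is the concatenated From + Subject fields.
headers = [
    "from support@paypa1-secure.com subject verify your account now",
    "from billing@bank-alerts.net subject urgent action required",
    "from alice@example.org subject lunch on friday",
    "from newsletter@acm.org subject this month in computing",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF features feeding a linear classifier (the paper feeds a DNN instead).
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(headers, labels)
pred = clf.predict(["from security@paypa1-secure.com subject verify account"])[0]
```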
35. An end-to-end framework for private DGA detection as a service.
- Author
-
Maia, Ricardo J. M., Ray, Dustin, Pentyala, Sikha, Dowsley, Rafael, De Cock, Martine, Nascimento, Anderson C. A., and Jacobi, Ricardo
- Subjects
- *
MACHINE learning , *DATA entry , *ALGORITHMS , *PRIVACY , *MALWARE - Abstract
Domain generation algorithms (DGAs) are used by malware to generate pseudorandom domain names to establish communication between infected bots and command-and-control servers. While DGAs can be detected by machine learning (ML) models with great accuracy, offering DGA detection as a service raises privacy concerns when it requires network administrators to disclose their DNS traffic to the service provider. The main scientific contribution of this paper is the first end-to-end framework for privacy-preserving classification-as-a-service of domain names into DGA (malicious) or non-DGA (benign) domains. Our framework achieves these goals through carefully designed protocols that combine two privacy-enhancing technologies (PETs), namely secure multi-party computation (MPC) and differential privacy (DP). Through MPC, our framework enables an enterprise network administrator to outsource the problem of classifying a DNS (Domain Name System) domain as DGA or non-DGA to an external organization without revealing any information about the domain name. Moreover, the service provider's ML model used for DGA detection is never revealed to the network administrator. Furthermore, by using DP, we ensure that the classification result cannot be used to learn information about individual entries of the training data. Finally, we leverage post-training float16 quantization of deep learning models in MPC to achieve efficient, secure DGA detection. We demonstrate that quantization achieves a significant speed-up, resulting in a 23% to 42% reduction in inference runtime without reducing accuracy, using a three-party secure computation protocol tolerating one corruption. Previous solutions are not end-to-end private, do not provide differential privacy guarantees for the model's outputs, and assume that model embeddings are publicly known. Our best protocol in terms of accuracy runs in about 0.22 s. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Single-Machine Scheduling with Simultaneous Learning Effects and Delivery Times.
- Author
-
Liu, Zheng and Wang, Ji-Bo
- Subjects
- *
MACHINE learning , *HEURISTIC algorithms , *TARDINESS , *CONSUMERS , *ALGORITHMS , *COMPUTER scheduling - Abstract
This paper studies the single-machine scheduling problem with a truncated learning effect, time-dependent processing times, and past-sequence-dependent delivery times. The delivery time is the time needed to deliver the job to the customer after processing is complete. The goal is to determine an optimal job schedule that minimizes the total weighted completion time and maximum tardiness. To solve the general case of the problem, we propose a branch-and-bound algorithm and several heuristic algorithms. Computational experiments confirm the effectiveness of the given algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
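One common truncated learning-effect model, assumed here purely for illustration (the abstract does not give the paper's exact formula), scales a job's base processing time by `max(r**a, beta)` for schedule position `r`, so learning cannot shrink processing times below a truncation floor `beta`:

```python
def actual_processing_time(p, r, a=-0.3, beta=0.6):
    """Truncated learning effect (assumed form): base time p in position r
    is scaled by max(r**a, beta), with learning index a < 0."""
    return p * max(r ** a, beta)

def schedule_completion_times(base_times):
    """Accumulate completion times for a given job sequence under the
    assumed truncated learning effect."""
    t, completions = 0.0, []
    for r, p in enumerate(base_times, start=1):
        t += actual_processing_time(p, r)
        completions.append(t)
    return completions

# Jobs later in the sequence benefit from learning, so the makespan is
# shorter than the plain sum of base times (4 + 2 + 3 = 9).
C = schedule_completion_times([4.0, 2.0, 3.0])
```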
37. Prioritized Experience Replay–Based Path Planning Algorithm for Multiple UAVs.
- Author
-
Ren, Chongde, Chen, Jinchao, Du, Chenglie, and Chen, Pengyun
- Subjects
- *
MACHINE learning , *REINFORCEMENT learning , *DRONE aircraft , *COMBINATORIAL optimization , *ALGORITHMS - Abstract
Unmanned aerial vehicles (UAVs) have been extensively researched and deployed in both military and civilian applications due to their small size, low cost, and ease of use. Although UAVs working together on complicated jobs can significantly increase productivity and reduce cost, such cooperation raises major path planning issues. In complex environments, the path planning problem, a hard-to-solve multiconstraint combinatorial optimization problem, requires considering numerous constraints and limitations while generating the best path for each UAV to accomplish group tasks. In this paper, we study the path planning problem for multiple UAVs and propose a reinforcement learning algorithm, PERDE-MADDPG, based on prioritized experience replay (PER) and delayed updates. First, we adopt a PER mechanism based on temporal-difference (TD) error to enhance the efficiency of experience utilization and accelerate the convergence of the algorithm. Second, we use delayed updates when updating network parameters to ensure stability in training multiple agents. Finally, we propose the PERDE-MADDPG algorithm based on PER and delayed updates and evaluate it against the MATD3, MADDPG, and SAC methods in simulation scenarios to confirm its efficacy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
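The TD-error-based prioritized experience replay mechanism described in entry 37 can be sketched as follows; the buffer class, its capacity, and the `alpha` exponent are illustrative assumptions, not the paper's exact implementation:

```python
import random

class PrioritizedReplayBuffer:
    """Minimal sketch of TD-error-based prioritized experience replay:
    transitions with larger TD error receive higher priority and are
    therefore sampled more often for training."""

    def __init__(self, capacity=10000, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha      # how strongly priorities skew sampling
        self.eps = eps          # keeps every priority strictly positive
        self.buffer, self.priorities = [], []

    def add(self, transition, td_error):
        priority = (abs(td_error) + self.eps) ** self.alpha
        if len(self.buffer) >= self.capacity:
            # Evict the oldest transition when the buffer is full.
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size):
        # Sample proportionally to priority (with replacement).
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        return random.choices(self.buffer, weights=probs, k=batch_size)
```

In a full agent, each sampled transition's priority would be refreshed with its new TD error after the learning step; that bookkeeping is omitted here.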
38. DDSC-SMOTE: an imbalanced data oversampling algorithm based on data distribution and spectral clustering.
- Author
-
Li, Xinqi and Liu, Qicheng
- Subjects
- *
DATA distribution , *CLASSIFICATION algorithms , *K-nearest neighbor classification , *ALGORITHMS , *MACHINE learning , *OUTLIER detection , *CLUSTER sampling - Abstract
Imbalanced data poses a significant challenge in machine learning, as conventional classification algorithms often prioritize majority class samples, while accurately classifying minority class samples is more crucial. The synthetic minority oversampling technique (SMOTE) represents one of the most renowned methods for handling imbalanced data. However, both SMOTE and its variants have limitations due to their insufficient consideration of data distribution, leading to the generation of incorrect and unnecessary samples. This paper, therefore, introduces a novel oversampling algorithm called data distribution and spectral clustering-based SMOTE (DDSC-SMOTE). This algorithm addresses the shortcomings of SMOTE by introducing three innovative data distribution-based improvement strategies: adaptive allocation of synthetic sample quantities strategy, seed sample adaptive selection strategy, and synthetic sample improvement strategy. First, we use the k-nearest neighbor sample labels and the local outlier factor algorithm to remove noisy and outlier samples. Next, we leverage spectral clustering to identify clusters within the minority class and propose a dual-weight factor that considers inter-cluster and intra-cluster distances to allocate the number of synthetic samples effectively, addressing interclass and intraclass imbalances. Furthermore, we introduce a relative position weight coefficient to determine the probability of selecting seed samples within the subcluster, ensuring that important minority samples have higher chances of being sampled. Finally, we improve the SMOTE sample synthesis formula for safer generation. Extensive comparisons on real datasets from the UCI repository demonstrate that DDSC-SMOTE outperforms seven state-of-the-art oversampling algorithms significantly in terms of G-mean and F1-score, presenting a data distribution-focused solution for addressing imbalanced data challenges. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
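The core SMOTE interpolation step that DDSC-SMOTE (entry 38) refines can be sketched as follows; the neighbor count and distance computation are illustrative, and the paper's clustering, weighting, and safe-generation strategies are omitted:

```python
import random

def smote_sample(minority, k=2, n_new=5, rng=None):
    """Minimal SMOTE: synthesize new minority samples by interpolating
    between a seed sample and one of its k nearest minority neighbors."""
    rng = rng or random.Random(0)
    synthetic = []
    for _ in range(n_new):
        seed = rng.choice(minority)
        # k nearest minority neighbors of the seed (squared Euclidean).
        neighbors = sorted(
            (p for p in minority if p is not seed),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(seed, p)),
        )[:k]
        nb = rng.choice(neighbors)
        gap = rng.random()
        # New point lies on the segment: seed + gap * (nb - seed).
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(seed, nb)))
    return synthetic
```

Because every synthetic point lies on a segment between two existing minority samples, naive SMOTE can generate points in noisy or overlapping regions, which is exactly the weakness the entry's distribution-aware strategies target.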
39. Deep Learning System for User Identification Using Sensors on Doorknobs.
- Author
-
Vegas, Jesús, Rao, A. Ravishankar, and Llamas, César
- Subjects
- *
DEEP learning , *SYSTEM identification , *MACHINE learning , *DOOR knobs , *ALGORITHMS , *GYROSCOPES - Abstract
Door access control systems are important for protecting the security and integrity of physical spaces. Accuracy and speed are key factors governing their performance. In this paper, we investigate a novel approach that identifies users by measuring patterns of their interactions with a doorknob via an embedded accelerometer and gyroscope and applying deep-learning-based algorithms to these measurements. Our identification results obtained from 47 users show an accuracy of 90.2%. When the sex of the user is used as an input feature, the accuracy is 89.8% for male individuals and 97.0% for female individuals. We study how accuracy is affected by sample duration, finding that it is possible to identify users from a 0.5 s sample with an accuracy of 68.5%. Our results demonstrate the feasibility of using patterns of motor activity for access control, thus extending the set of alternatives to be considered for behavioral biometrics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. A Fair Contribution Measurement Method for Federated Learning.
- Author
-
Guo, Peng, Yang, Yanqing, Guo, Wei, and Shen, Yanping
- Subjects
- *
FEDERATED learning , *COOPERATIVE game theory , *DATA privacy , *MACHINE learning , *ALGORITHMS - Abstract
Federated learning is an effective approach for preserving data privacy and security, enabling machine learning to occur in a distributed environment and promoting its development. However, an urgent open problem is how to encourage active client participation in federated learning. The Shapley value, a classical concept in cooperative game theory, has been utilized for data valuation in machine learning services. Nevertheless, existing numerical evaluation schemes based on the Shapley value are impractical, as they necessitate additional model training, leading to increased communication overhead. Moreover, participants' data may exhibit non-IID characteristics, posing a significant challenge to evaluating participant contributions: non-IID data greatly reduce the accuracy of the global model, weaken the marginal effect of each participant, and lead to underestimated contribution measurements. Current work often overlooks the impact of heterogeneity on model aggregation. This paper presents a fair federated learning contribution measurement scheme that eliminates the need for additional model computations. By introducing a novel aggregation weight, it enhances the accuracy of the contribution measurement. Experiments on the MNIST and Fashion-MNIST datasets show that the proposed method can accurately compute the contributions of participants. Compared to existing baseline algorithms, the model accuracy is significantly improved at a similar time cost. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. A Method for Reducing Training Time of ML-Based Cascade Scheme for Large-Volume Data Analysis.
- Author
-
Izonin, Ivan, Muzyka, Roman, Tkachenko, Roman, Dronyuk, Ivanna, Yemets, Kyrylo, and Mitoulis, Stergios-Aristoteles
- Subjects
- *
PRINCIPAL components analysis , *FEATURE extraction , *DATA analysis , *TRAINING needs , *ALGORITHMS - Abstract
We live in the era of large data analysis, where processing vast datasets has become essential for uncovering valuable insights across various domains of our lives. Machine learning (ML) algorithms offer powerful tools for processing and analyzing this abundance of information. However, the considerable time and computational resources needed for training ML models pose significant challenges, especially within cascade schemes, due to the iterative nature of training algorithms, the complexity of feature extraction and transformation processes, and the large sizes of the datasets involved. This paper proposes a modification to the existing ML-based cascade scheme for analyzing large biomedical datasets by incorporating principal component analysis (PCA) at each level of the cascade. We selected the number of principal components to replace the initial inputs so as to ensure 95% variance retention. Furthermore, we enhanced the training and application algorithms and demonstrated the effectiveness of the modified cascade scheme through comparative analysis, which showcased a significant reduction in training time while improving the generalization properties of the method and the accuracy of the large data analysis. The enhanced generalization properties of the scheme stemmed from the reduction in nonsignificant independent attributes in the dataset, which further improved its performance in intelligent large data analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
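The 95%-variance component-selection rule applied at each cascade level in entry 41 can be sketched as follows, assuming plain PCA via a covariance eigendecomposition (the helper name and threshold handling are illustrative):

```python
import numpy as np

def n_components_for_variance(X, threshold=0.95):
    """Return the smallest number of principal components whose
    cumulative explained-variance ratio reaches `threshold`."""
    Xc = X - X.mean(axis=0)                        # center the data
    cov = np.cov(Xc, rowvar=False)
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # variances along PCs
    ratios = np.cumsum(evals) / evals.sum()
    # Index of the first cumulative ratio >= threshold, plus one.
    return int(np.searchsorted(ratios, threshold) + 1)
```

Replacing the original inputs with this many principal components shrinks each level's feature space, which is the source of the training-time reduction the entry reports.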
42. Exterior-Point Optimization for Sparse and Low-Rank Optimization.
- Author
-
Das Gupta, Shuvomoy, Stellato, Bartolomeo, and Van Parys, Bart P. G.
- Subjects
- *
CONVEX functions , *MACHINE learning , *PROBLEM solving , *DATA science , *ALGORITHMS - Abstract
Many problems of substantial current interest in machine learning, statistics, and data science can be formulated as sparse and low-rank optimization problems. In this paper, we present the nonconvex exterior-point optimization solver (NExOS)—a first-order algorithm tailored to sparse and low-rank optimization problems. We consider the problem of minimizing a convex function over a nonconvex constraint set, where the set can be decomposed as the intersection of a compact convex set and a nonconvex set involving sparse or low-rank constraints. Unlike the convex relaxation approaches, NExOS finds a locally optimal point of the original problem by solving a sequence of penalized problems with strictly decreasing penalty parameters by exploiting the nonconvex geometry. NExOS solves each penalized problem by applying a first-order algorithm, which converges linearly to a local minimum of the corresponding penalized formulation under regularity conditions. Furthermore, the local minima of the penalized problems converge to a local minimum of the original problem as the penalty parameter goes to zero. We then implement and test NExOS on many instances from a wide variety of sparse and low-rank optimization problems, empirically demonstrating that our algorithm outperforms specialized methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
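The exterior-point idea in entry 42 (solving a sequence of penalized problems with strictly decreasing penalty parameters) can be illustrated on a toy sparse problem; the objective, squared-distance penalty, step-size rule, and penalty schedule below are all illustrative assumptions, not the NExOS solver itself:

```python
def project_sparse(x, s):
    """Projection onto {x : at most s nonzeros}: keep the s
    largest-magnitude entries, zero the rest."""
    keep = set(sorted(range(len(x)), key=lambda i: -abs(x[i]))[:s])
    return [xi if i in keep else 0.0 for i, xi in enumerate(x)]

def exterior_point_sketch(b, s, mus=(1.0, 0.1, 0.01, 0.001), steps=50):
    """Minimize f(x) = 0.5 * ||x - b||^2 over the sparse set by solving
    a sequence of penalized problems
        f(x) + (1/(2*mu)) * dist(x, sparse set)^2
    with strictly decreasing mu: as mu -> 0 the penalty dominates and
    the iterate is driven onto the sparse set."""
    x = list(b)
    for mu in mus:
        lr = mu / (1.0 + mu)      # step size matched to curvature 1 + 1/mu
        for _ in range(steps):
            p = project_sparse(x, s)
            # Gradient step on the penalized objective.
            x = [xi - lr * ((xi - bi) + (xi - pi) / mu)
                 for xi, bi, pi in zip(x, b, p)]
    return x
```

On this toy instance the dominant coordinate survives while the others are shrunk toward zero, mimicking how each penalized solution warm-starts the next, tighter one.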
43. Identifying divertor detachment using a machine learning model trained on divertor camera images from DIII-D.
- Author
-
Victor, B. S. and Scotti, F.
- Subjects
- *
MACHINE learning , *CONVOLUTIONAL neural networks , *CAMERAS , *ALGORITHMS , *GEOMETRY - Abstract
This paper describes the application of a machine learning (ML) algorithm using a convolutional neural network, first developed in Boyer et al. ["Classification and prediction of detachment in DIII-D using neural networks trained on C III imaging," Nucl. Fusion (submitted) (2024)], to detect divertor detachment in DIII-D. Detachment detection is based on images from tangentially viewing upper and lower filtered divertor cameras that measure CIII emission at 465 nm. Separate ML models are developed for lower single null and upper single null configurations with mostly closed divertor shapes. Due to the viewing angle and divertor geometry, camera images of the upper divertor show a stark contrast in CIII emission between attached and detached conditions, and the model identified detachment with 100% accuracy on the test dataset. For the lower divertor images, the contrast between attached and detached conditions is lower, and the model identifies detachment with 96% accuracy. This ML model will be applied to the image data after each shot to provide a rapid assessment of divertor detachment to aid the operation of DIII-D, with potential extension to other devices in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Combining random forest and multicollinearity modeling for index tracking.
- Author
-
Cao, Yuanyuan, Li, Hongying, and Yang, Yuehan
- Subjects
- *
MACHINE learning , *RANDOM forest algorithms , *INDEX mutual funds , *STATISTICAL models , *ALGORITHMS - Abstract
This paper studies the combination of random forest (RF) and classical statistical modeling. We propose two algorithms, RF cluster + ridge and RF regression + ridge, in which the RF reduces overfitting and provides good noise resistance, while ridge regression effectively handles correlated data via the ridge penalty. To test the performance of the proposed techniques, we model and track the S&P 500 index. Index tracking aims to replicate the returns of an index fund and is very popular in finance. We compare the performance of the proposed methods with those of several existing methods, quantifying the portfolio obtained by each technique using the tracking error. We find that the combination consistently achieves higher accuracy. Empirical results show that the proposed methods are qualified for index funds. Our findings shed light on the synthesis of machine learning and statistical modeling techniques and provide an effective practice of this combination. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
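The ridge-penalty stage in entry 44 can be illustrated in its simplest one-feature, no-intercept form (the function and data are illustrative; the paper applies ridge regression to many correlated index constituents):

```python
def ridge_1d(x, y, lam=1.0):
    """One-feature ridge regression without intercept.
    Closed form: w = sum(x_i * y_i) / (sum(x_i^2) + lam).
    The penalty lam shrinks the coefficient toward zero, which
    stabilizes estimates when predictors are noisy or correlated."""
    return sum(a * b for a, b in zip(x, y)) / (sum(a * a for a in x) + lam)

# lam = 0 recovers ordinary least squares; larger lam shrinks the slope.
w_ols = ridge_1d([1.0, 2.0, 3.0], [2.0, 4.0, 6.0], lam=0.0)     # slope 2.0
w_ridge = ridge_1d([1.0, 2.0, 3.0], [2.0, 4.0, 6.0], lam=14.0)  # slope 1.0
```

In the multivariate case the same idea becomes (XᵀX + λI)w = Xᵀy, where adding λI keeps the system well-conditioned even when index constituents are nearly collinear.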
45. Research on Unsupervised Feature Point Prediction Algorithm for Multigrid Image Stitching.
- Author
-
Li, Jun, Chen, Yufeng, and Mu, Aiming
- Subjects
- *
MACHINE learning , *MATRIX inversion , *FIX-point estimation , *ALGORITHMS , *PARAMETERIZATION - Abstract
The conventional feature point-based image stitching algorithm exhibits inconsistencies in the quality of feature points across diverse scenes. This may result in the deterioration of the alignment effect or even the inability to align two images. To address this issue, this paper presents an unsupervised multigrid image alignment method that integrates the conventional feature point-based image alignment algorithm with deep learning techniques. The method postulates that the feature points are uniformly distributed in the image and employs a deep learning network to predict their displacements, thereby enhancing the robustness of the feature points. Furthermore, the precision of image alignment is enhanced through the parameterization of APAP (As-projective-as-possible image stitching with moving DLT) multigrid deformation. Ultimately, based on the symmetry exhibited by the homography matrix and its inverse matrix throughout the projection process, image chunking inverse warping is introduced to obtain the stitched images for the multigrid deep learning network. Additionally, the mesh shape-preserving loss is introduced to constrain the shape of the multigrid. The experimental results demonstrate that in the real-world UDIS-D dataset, the method achieves notable improvements in feature point matching and homography estimation tasks, and exhibits superior alignment performance on the traditional image stitching dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Research on Model Selection-Based Weighted Averaged One-Dependence Estimators.
- Author
-
Zhang, Chengzhen, Chen, Shenglei, and Ke, Huihang
- Subjects
- *
BAYESIAN analysis , *MACHINE learning , *CLASSIFICATION , *ALGORITHMS - Abstract
The Averaged One-Dependence Estimators (AODE) is a popular and effective method of Bayesian classification. In AODE, selecting the optimal sub-model based on a cross-validated risk minimization strategy can further enhance classification performance. However, existing cross-validation risk minimization strategies do not consider the differences among attributes in classification decisions. Consequently, this paper introduces an algorithm for Model Selection-based Weighted AODE (SWAODE). To capture the differences among attributes in classification decisions, the one-dependence estimators (ODEs) corresponding to the attributes are weighted, with mutual information, commonly used in the field of machine learning, adopted as the weights. These weighted sub-models are then evaluated and selected using leave-one-out cross-validation (LOOCV) to determine the best model. The new method improves the accuracy and robustness of the model and better adapts to different data features, thereby enhancing the performance of the classification algorithm. Experimental results indicate that the algorithm merges the benefits of weighting with model selection, markedly enhancing the classification efficiency of the AODE algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
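The mutual-information attribute weights that SWAODE (entry 46) assigns to each one-dependence estimator can be sketched as an empirical estimate over discrete attribute and class values (the function name and the plug-in estimator are illustrative):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information I(X;Y) in nats between an
    attribute column xs and the class column ys, using plug-in
    frequency estimates: sum over (x, y) of p(x,y) * log(p(x,y) /
    (p(x) * p(y)))."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())
```

An attribute perfectly predictive of the class gets weight equal to the class entropy, while an independent attribute gets weight zero, so its ODE contributes nothing to the weighted ensemble.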
47. Three-layer data center-based intelligent slice admission control algorithm for C-RAN using approximate reinforcement learning.
- Author
-
Khani, Mohsen, Jamali, Shahram, and Sohrabi, Mohammad Karim
- Subjects
- *
MACHINE learning , *RADIO access networks , *5G networks , *ALGORITHMS , *REINFORCEMENT learning - Abstract
C-RAN (Cloud Radio Access Network) is a 5G architecture that consists of sites and three-layer Data Centers (DCs), which include the central office DC, local DC, and regional DC. Network slicing, which enables infrastructure providers (InP) to create independent logical networks, is essential in this architecture. By utilizing this technology, InPs can maximize the utility of the network by providing slices to service providers in response to their slice requests. However, almost all of the recent research on slice admission control (SAC) schemes has only considered one or two layers of DCs, which limits the efficiency of the slicing process and decreases network utility. To address these issues, this paper proposes an intelligent SAC scheme called ISAC that considers all three-layer DCs. Instead of relying on reinforcement learning algorithms like Q-learning, which are effective in discrete environments with limited state space but give poor performance in continuous environments, ISAC employs the Approximate Reinforcement Learning (ARL) algorithm. ARL is better suited for 5G network modeling because it can adapt to continuous environments, allowing for a more accurate representation of the underlying physical processes. Extensive simulation studies demonstrate that ISAC significantly improves performance in terms of slice request rejection rates, InP revenue, accepting more slices, and optimizing resource utilization. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. Privacy-Preserving Breast Cancer Prediction Based on Logistic Regression.
- Author
-
Chen, Shuangquan, Li, Jinguo, Zhang, Kai, Di, Aoran, and Lu, Mengli
- Subjects
- *
MACHINE learning , *ALGORITHMS , *BREAST cancer , *LOGISTIC regression analysis - Abstract
With the increasing strain on today's healthcare resources, there is a growing demand for pre-diagnosis testing. In response, researchers have suggested diverse machine learning models for disease prediction, among which logistic regression stands out as one of the most effective; the objective is to enhance the accuracy and efficiency of pre-diagnosis testing, thereby alleviating the burden on healthcare resources. However, when multiple medical institutions collaborate to train models, an untrusted cloud server may pose a risk of private data leakage, enabling participants to steal data from one another. Existing privacy-preserving methods often suffer from drawbacks such as high communication costs, long training times, and a lack of security proofs. It is therefore imperative to jointly train an excellent model collaboratively while upholding data privacy. In this paper, we develop a highly optimized two-party logistic regression algorithm based on the CKKS scheme. The algorithm optimizes ciphertext operations by employing ciphertext segmentation and minimizing the multiplication depth, resulting in time savings. Furthermore, it utilizes least squares to approximate, within specific intervals, the sigmoid function, which homomorphic encryption cannot evaluate directly. Finally, the proposed algorithm is evaluated on a breast cancer dataset, and simulation experiments demonstrate that the model's prediction accuracy after machine-learning training exceeds 96% on two-party encrypted data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
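The least-squares sigmoid approximation in entry 48 can be illustrated with a cubic surrogate on [-5, 5]; the interval, the degree, and the odd-polynomial form are illustrative choices, not necessarily the paper's (CKKS can evaluate only additions and multiplications, hence the polynomial stand-in):

```python
import math

def sigmoid_poly_coeffs(lo=-5.0, hi=5.0, n=201):
    """Least-squares fit of sigma(x) ~ 0.5 + a*x + b*x^3 on [lo, hi].
    Exploits that sigma(x) - 0.5 is odd, so only odd powers are needed;
    the 2x2 normal equations are solved by Cramer's rule."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    f = [1.0 / (1.0 + math.exp(-x)) - 0.5 for x in xs]
    s2 = sum(x ** 2 for x in xs)
    s4 = sum(x ** 4 for x in xs)
    s6 = sum(x ** 6 for x in xs)
    r1 = sum(x * y for x, y in zip(xs, f))
    r3 = sum(x ** 3 * y for x, y in zip(xs, f))
    det = s2 * s6 - s4 * s4
    a = (r1 * s6 - r3 * s4) / det
    b = (s2 * r3 - s4 * r1) / det
    return a, b

a, b = sigmoid_poly_coeffs()
# The encrypted evaluation then only needs 0.5 + a*x + b*x^3.
```

The approximation error is largest near the interval ends, which is why the fitting interval must cover the range of logits the model actually produces.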
49. An Intelligent Apple Identification Method via the Collaboration of YOLOv5 Algorithm and Fast-Guided Filter Theory.
- Author
-
Zhang, Eryue and Zhang, He
- Subjects
- *
MACHINE learning , *OBJECT recognition (Computer vision) , *RECOGNITION (Psychology) , *ORCHARDS , *ALGORITHMS , *DEEP learning , *APPLES , *APPLE orchards - Abstract
The apple-picking robot can promote the development of smart agriculture, and accurate object recognition in complex natural environments using deep learning algorithms is critical to it. However, research has shown that changes in illumination and object occlusion remain significant challenges for recognition. To improve the accuracy with which an apple-picking robot identifies and localizes apples in natural environments, a method combining YOLOv5 (You Only Look Once, YOLO) with a fast-guided filter is proposed. By introducing a fast-guided filtering module, the ability to extract image features is improved and the problem of inaccurate occluded-target and edge detection is alleviated; a K-means clustering algorithm is introduced into the improved YOLOv5 to automatically adjust image size and step size; and a BiFPN structure is introduced into the Neck network to add weighted feature fusion that highlights detailed features. The results show that the proposed algorithm can effectively remove noise such as occlusion-induced edge blurring from apple images under natural lighting. In a real orchard environment, the apple recognition accuracy reached 97.8%, the recall rate was 97.3%, and the recognition speed was about 26.84 fps. These results show that this approach, based on YOLOv5 and fast-guided filtering, achieves fast and accurate identification of apple fruits in natural environments and meets the practical requirements of real-time target detection. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. The HHL algorithm: Implementation and research directions.
- Author
-
Sambhaje, Varsha and Chaurasia, Anju
- Subjects
- *
RESEARCH implementation , *ALGORITHMS , *QUANTUM computing , *LINEAR systems , *ARTIFICIAL intelligence , *MACHINE learning , *QUANTUM computers - Abstract
Linear systems of equations lie at the heart of numerous scientific and engineering challenges. In cutting-edge arenas such as artificial intelligence, machine learning, and neuro-computation, these systems serve as a fundamental tool for mathematical modeling. Classical algorithms for solving linear systems have been extensively developed and form the backbone of diverse applications across various scientific disciplines. However, as data complexity increases, these algorithms often run up against scalability limits. The emerging field of quantum computing offers a revolutionary approach to such problems. The Harrow–Hassidim–Lloyd (HHL) algorithm tackles these challenges and opens new avenues for research. This study delves into the contemporary effectiveness of the HHL algorithm for solving systems of linear equations. By examining recent research in quantum machine learning, we aim to assess the HHL algorithm's potential to revolutionize the optimization of hyperparameters for machine learning models, resulting in increased efficiency and cost savings. This paper meticulously analyzes the HHL algorithm and explores its evolution from conception to the latest advancements. The investigation delves into the potential challenges and limitations that might hinder the practical deployment of the HHL algorithm; identifying these roadblocks will pave the way for future research and development efforts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF