108,537 results
Search Results
52. Quality assurance strategies for machine learning applications in big data analytics: an overview.
- Author
-
Ogrizović, Mihajlo, Drašković, Dražen, and Bojić, Dragan
- Subjects
MACHINE learning ,NATURAL language processing ,COMPUTER vision ,ARTIFICIAL intelligence ,DATA analytics ,DEEP learning - Abstract
Machine learning (ML) models have gained significant attention in a variety of applications, from computer vision to natural language processing, and are almost always based on big data. There are a growing number of applications and products with built-in machine learning models, and this is the area where software engineering, artificial intelligence and data science meet. The requirement for a system to operate in a real-world environment poses many challenges: how to design for wrong predictions the model may make; how to assure safety and security despite possible mistakes; which qualities matter beyond a model's prediction accuracy; and how to identify and measure important quality requirements, including learning and inference latency, scalability, explainability, fairness, privacy, robustness, and safety. It has become crucial to test these models thoroughly to assess their capabilities and potential errors. Existing software testing methods have been adapted and refined to discover faults in machine learning and deep learning models. This paper covers a taxonomy, a methodologically uniform presentation of all presented solutions to the aforementioned issues, as well as conclusions about possible future development trends. The main contributions of this paper are a classification that closely follows the structure of the ML pipeline, a precisely defined role for each team member within that pipeline, and an overview of trends and challenges in the combination of ML and big data analytics, with uses in the domains of industry and education. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
53. DPNet: Scene text detection based on dual perspective CNN-transformer.
- Author
-
Li, Yuan
- Subjects
MACHINE learning ,COMPUTER vision ,VISUAL fields ,TRANSFORMER models ,CONTEXTUAL learning ,DEEP learning - Abstract
With the continuous advancement of deep learning, research in scene text detection has evolved significantly. However, complex backgrounds and various text forms complicate the task of detecting text in images. A CNN is a deep learning algorithm that automatically extracts features through convolution operations; in scene text detection it can capture local text features in images, but it lacks global context. In recent years, transformers have been applied in the field of computer vision because they can capture the global information of an image and describe it intuitively. Therefore, this paper proposes scene text detection based on a dual-perspective CNN-transformer. The channel enhanced self-attention module (CESAM) and spatial enhanced self-attention module (SESAM) proposed in this paper are integrated into the traditional ResNet backbone network. This integration effectively facilitates the learning of global contextual information and positional relationships of text, thereby alleviating the challenge of detecting small target text. Furthermore, this paper introduces a feature decoder designed to refine the effective text information within the feature map and enhance the perception of detailed information. Experiments show that the method proposed in this paper significantly improves the robustness of the model for different types of text detection. Compared to the baseline, it achieves performance improvements of 2.51% (83.81 vs. 81.3) on the Total-Text dataset, 1.87% (86.07 vs. 84.2) on the ICDAR 2015 dataset, and 3.63% (86.72 vs. 83.09) on the MSRA-TD500 dataset, while also demonstrating better visual effects. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
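The CESAM and SESAM modules in the abstract above extend standard self-attention; the authors' code is not reproduced here, but the underlying scaled dot-product operation that gives a model global context can be sketched in a few lines of NumPy (sizes and random weights are illustrative assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a set of feature vectors.

    x          : (n, d) input features, e.g. flattened spatial positions
    wq, wk, wv : (d, d) learned projections (random here, for illustration)
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])   # (n, n) pairwise affinities
    return softmax(scores, axis=-1) @ v       # each output mixes all positions

rng = np.random.default_rng(0)
n, d = 16, 8                        # 16 spatial positions, 8 channels
x = rng.normal(size=(n, d))
w = [rng.normal(size=(d, d)) for _ in range(3)]
out = self_attention(x, *w)
print(out.shape)                    # same shape as input, globally mixed
```

Each output row is a weighted mixture of every input position, which is exactly the global information a plain convolution lacks.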
54. Trustworthy and ethical AI-enabled cardiovascular care: a rapid review.
- Author
-
Mooghali, Maryam, Stroud, Austin M., Yoo, Dong Whi, Barry, Barbara A., Grimshaw, Alyssa A., Ross, Joseph S., Zhu, Xuan, and Miller, Jennifer E.
- Subjects
DATA privacy ,LITERATURE reviews ,MEDICAL personnel ,MACHINE learning ,ARTIFICIAL intelligence ,BIBLIOGRAPHIC databases ,MEDICAL equipment - Abstract
Background: Artificial intelligence (AI) is increasingly used for prevention, diagnosis, monitoring, and treatment of cardiovascular diseases. Despite the potential for AI to improve care, ethical concerns and mistrust in AI-enabled healthcare exist among the public and medical community. Given the rapid and transformative recent growth of AI in cardiovascular care, to inform practice guidelines and regulatory policies that facilitate ethical and trustworthy use of AI in medicine, we conducted a literature review to identify key ethical and trust barriers and facilitators from patients' and healthcare providers' perspectives when using AI in cardiovascular care. Methods: In this rapid literature review, we searched six bibliographic databases to identify publications discussing transparency, trust, or ethical concerns (outcomes of interest) associated with AI-based medical devices (interventions of interest) in the context of cardiovascular care from patients', caregivers', or healthcare providers' perspectives. The search was completed on May 24, 2022 and was not limited by date or study design. Results: After reviewing 7,925 papers from six databases and 3,603 papers identified through citation chasing, 145 articles were included. Key ethical concerns included privacy, security, or confidentiality issues (n = 59, 40.7%); risk of healthcare inequity or disparity (n = 36, 24.8%); risk of patient harm (n = 24, 16.6%); accountability and responsibility concerns (n = 19, 13.1%); problematic informed consent and potential loss of patient autonomy (n = 17, 11.7%); and issues related to data ownership (n = 11, 7.6%). 
Major trust barriers included data privacy and security concerns, potential risk of patient harm, perceived lack of transparency about AI-enabled medical devices, concerns about AI replacing human aspects of care, concerns about prioritizing profits over patients' interests, and lack of robust evidence related to the accuracy and limitations of AI-based medical devices. Ethical and trust facilitators included ensuring data privacy and data validation, conducting clinical trials in diverse cohorts, providing appropriate training and resources to patients and healthcare providers and improving their engagement in different phases of AI implementation, and establishing further regulatory oversights. Conclusion: This review revealed key ethical concerns and barriers and facilitators of trust in AI-enabled medical devices from patients' and healthcare providers' perspectives. Successful integration of AI into cardiovascular care necessitates implementation of mitigation strategies. These strategies should focus on enhanced regulatory oversight on the use of patient data and promoting transparency around the use of AI in patient care. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
55. INTRODUCTION TO THE SPECIAL ISSUE ON NEXT GENERATION PERVASIVE RECONFIGURABLE COMPUTING FOR HIGH PERFORMANCE REAL TIME APPLICATIONS.
- Author
-
VENKATESAN, C., YU-DONG ZHANG, CHOW CHEE ONN, and YONG SHI
- Subjects
MACHINE learning ,REINFORCEMENT learning ,HIGH performance computing ,COMPUTER vision ,ARTIFICIAL intelligence ,PARSING (Computer grammar) ,DEEP learning - Abstract
This document introduces a special issue of the journal "Scalable Computing: Practice & Experience" focused on next-generation pervasive reconfigurable computing for high-performance real-time applications. The authors discuss the importance of adaptable platforms for real-time tasks and highlight the benefits of reconfigurable computing in accelerating applications like image processing and machine learning. The special issue aims to explore recent advancements in this field and includes research papers on topics such as network security, malware detection, software reliability prediction, and optimization algorithms for wing design. The papers cover a range of computer science and technology topics, showcasing advancements and their potential impact on various computing domains. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
56. A Non-Intrusive Load Decomposition Model Based on Multiple Electrical Parameters to Point.
- Author
-
Yang, Meng, Cheng, Zhiyou, and Liu, Xinyuan
- Subjects
CONVOLUTIONAL neural networks ,MACHINE learning ,INDUSTRIAL equipment ,BUSES - Abstract
The sliding window method is commonly used for non-intrusive load disaggregation. However, it is difficult to choose the appropriate window size, and the disaggregation effect is poor in low-frequency industrial environments. To better handle low-frequency industrial load data, in this paper, we propose a vertical non-intrusive load disaggregation model that is different from the sliding window method. By training multiple electrical parameters at a single point on the bus end with the corresponding load data at the branch end, the proposed method, called multiple electrical parameters to point (Mep2point), takes the electrical parameter data sampled at a single point on the bus end as its input and outputs the load data of the target device sampled at the corresponding point. First, the electrical parameters of the bus end are processed, and each item is normalized to the range 0–1. Then, the electrical parameters are vertically arranged by their time point, and a convolutional neural network (CNN) is used to train the model. The proposed method is analyzed on low-frequency industrial user data sampled at a frequency of 1/120 Hz in the real world. We compare our method with three advanced sliding window methods, achieving an average improvement ranging from 9.23% to 22.51% in evaluation metrics, while showing substantial superiority in the actual decomposed images. Compared with three classical machine learning algorithms, our model, using the same amount of data, significantly outperforms these methods. Finally, we also compared our method with the multi-channel low window sequence-to-point (MLSP) method, which also selects multiple electrical parameters. Our model's complexity is much less than that of the MLSP model, and its performance remains high.
The superiority of our model, as presented in this paper, is fully verified by experimental analysis, which can produce better actual load decomposition results from each branch and contribute to the analysis and monitoring of loads in industrial environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
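A minimal sketch of the preprocessing the abstract above describes: min-max normalization of each bus-end electrical parameter to the 0–1 range, followed by the "vertical", single-time-point input arrangement. Parameter names, value ranges, and array sizes are invented for illustration, and the CNN itself is omitted:

```python
import numpy as np

# Hypothetical bus-end measurements: rows = time points (sampled at 1/120 Hz),
# columns = electrical parameters (e.g. active power, reactive power, current,
# voltage). The column meanings and value ranges are illustrative only.
rng = np.random.default_rng(1)
bus = rng.uniform([0, 0, 0, 210], [50, 20, 30, 240], size=(1000, 4))

# Step 1: min-max normalize each parameter to the 0-1 range, as described.
lo, hi = bus.min(axis=0), bus.max(axis=0)
bus_norm = (bus - lo) / (hi - lo)

# Step 2: "vertical" arrangement -- at every time point the model sees the
# vector of all parameters at that single point (no sliding window), and
# maps it to the branch-end load sampled at the same point.
t = 123
model_input = bus_norm[t]           # shape (4,): one point, all parameters
print(model_input.round(3))
```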
57. Makine öğrenmesi yöntemleri ile hisse senedi fiyat tahmini: kâğıt firması örneği [Stock price prediction with machine learning methods: the case of a paper company].
- Author
-
BARDAK, Selahattin, ERSEN, Nadir, POLAT, Kinyas, and AKYÜZ, Kadri Cemil
- Subjects
STOCK prices ,MACHINE learning ,RANDOM forest algorithms ,ARTIFICIAL neural networks ,FINANCIAL ratios - Abstract
Copyright of Artvin Çoruh Üniversitesi Orman Fakültesi Dergisi is the property of Artvin Coruh University and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
58. Advancement in Paper-Based Electrochemical Biosensing and Emerging Diagnostic Methods.
- Author
-
Benjamin, Stephen Rathinaraj, de Lima, Fábio, Nascimento, Valter Aragão do, de Andrade, Geanne Matos, and Oriá, Reinaldo Barreto
- Subjects
THREE-dimensional printing ,POINT-of-care testing ,MACHINE learning ,MACHINE tools ,BIOSENSORS ,MACHINE theory - Abstract
The utilization of electrochemical detection techniques in paper-based analytical devices (PADs) has revolutionized point-of-care (POC) testing, enabling the precise and discerning measurement of a diverse array of (bio)chemical analytes. The application of electrochemical sensing and paper as a suitable substrate for point-of-care testing platforms has led to the emergence of electrochemical paper-based analytical devices (ePADs). The inherent advantages of these modified paper-based analytical devices have gained significant recognition in the POC field. In response, electrochemical biosensors assembled from paper-based materials have shown great promise for enhancing sensitivity and improving their range of use. In addition, paper-based platforms have numerous advantageous characteristics, including the self-sufficient conveyance of liquids, reduced resistance, minimal fabrication cost, and environmental friendliness. This study seeks to provide a concise summary of the present state and uses of ePADs with insightful commentary on their practicality in the field. Future developments in ePADs biosensors include developing novel paper-based systems, improving system performance with a novel biocatalyst, and combining the biosensor system with other cutting-edge tools such as machine learning and 3D printing. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
59. Paper-based fluorescence sensor array with functionalized carbon quantum dots for bacterial discrimination using a machine learning algorithm.
- Author
-
Wang, Fangbin, Xiao, Minghui, Qi, Jing, and Zhu, Liang
- Subjects
QUANTUM dots ,SENSOR arrays ,POLYMYXIN B ,FLUORESCENCE ,FLUORESCENCE quenching ,BACTERIAL cell surfaces - Abstract
The rapid discrimination of bacteria is currently an emerging trend in the fields of food safety, medical detection, and environmental observation. Traditional methods often require lengthy culturing processes, specialized analytical equipment, and bacterial recognition receptors. In response to this need, we have developed a paper-based fluorescence sensor array platform for identifying different bacteria. The sensor array is based on three unique carbon quantum dots (CQDs) as sensing units, each modified with a different antibiotic (polymyxin B, ampicillin, and gentamicin). These antibiotic-modified CQDs can aggregate on the bacterial surface, triggering aggregation-induced fluorescence quenching. The sensor array exhibits varying fluorescent responses to different bacterial species. To achieve low-cost and portable detection, CQDs were formulated into fluorescent ink and used with an inkjet printer to manufacture paper-based sensor arrays. A smartphone was used to collect the responses generated by the bacteria and platform. Diverse machine learning algorithms were utilized to discriminate bacterial types. Our findings showcase the platform's remarkable capability to differentiate among five bacterial strains, within a detection range spanning from 1.0 × 10³ CFU/mL to 1.0 × 10⁷ CFU/mL. Its practicality is further validated through the accurate identification of blind bacterial samples. With its cost-effectiveness, ease of fabrication, and high degree of integration, this platform holds significant promise for on-site detection of diverse bacteria. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
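The discrimination step in the abstract above can be illustrated with a deliberately simplified stand-in for the paper's machine learning algorithms: nearest-centroid classification of the three-channel fluorescence response vectors. All numbers below are fabricated for illustration:

```python
import numpy as np

# Toy stand-in for the platform's readout: each sample is a 3-element vector
# of fluorescence quenching responses, one per antibiotic-modified CQD
# (polymyxin B, ampicillin, gentamicin). Values are invented.
train = {
    "E. coli":   np.array([[0.80, 0.20, 0.40], [0.78, 0.22, 0.38]]),
    "S. aureus": np.array([[0.30, 0.70, 0.50], [0.32, 0.68, 0.52]]),
}

# Nearest-centroid discrimination: far simpler than the algorithms in the
# paper, but the same idea -- each species leaves a characteristic response
# pattern across the sensor array.
centroids = {name: resp.mean(axis=0) for name, resp in train.items()}

def classify(sample):
    return min(centroids, key=lambda n: np.linalg.norm(sample - centroids[n]))

blind = np.array([0.79, 0.21, 0.41])    # "blind" unknown sample
print(classify(blind))                   # -> E. coli
```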
60. Factors associating with or predicting more cited or higher quality journal articles: An Annual Review of Information Science and Technology (ARIST) paper.
- Author
-
Kousha, Kayvan and Thelwall, Mike
- Subjects
ABSTRACTING ,PUBLISHING ,READABILITY (Literary style) ,SERIAL publications ,METADATA ,BIBLIOGRAPHY ,CONFERENCES & conventions ,REGRESSION analysis ,MACHINE learning ,CITATION analysis ,INFORMATION science ,BIBLIOGRAPHICAL citations ,INTERPROFESSIONAL relations ,PERIODICAL articles ,IMPACT factor (Citation analysis) ,INFORMATION technology ,ABSTRACTING & indexing services ,MEDICAL research - Abstract
Identifying factors that associate with more cited or higher quality research may be useful to improve science or to support research evaluation. This article reviews evidence for the existence of such factors in article text and metadata. It also reviews studies attempting to estimate article quality or predict long‐term citation counts using statistical regression or machine learning for journal articles or conference papers. Although the primary focus is on document‐level evidence, the related task of estimating the average quality scores of entire departments from bibliometric information is also considered. The review lists a huge range of factors that associate with higher quality or more cited research in some contexts (fields, years, journals) but the strength and direction of association often depends on the set of papers examined, with little systematic pattern and rarely any cause‐and‐effect evidence. The strongest patterns found include the near universal usefulness of journal citation rates, author numbers, reference properties, and international collaboration in predicting (or associating with) higher citation counts, and the greater usefulness of citation‐related information for predicting article quality in the medical, health and physical sciences than in engineering, social sciences, arts, and humanities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
61. Introduction to ACSOS 2022 Special Issue.
- Author
-
Di Nitto, Elisabetta, Gerostathopoulos, Ilias, and Bellman, Kirstie
- Subjects
ARTIFICIAL neural networks ,MACHINE learning ,SELF-organizing systems ,REINFORCEMENT learning ,ARTIFICIAL intelligence ,CYBER physical systems ,INTRUSION detection systems (Computer security) - Published
- 2024
- Full Text
- View/download PDF
62. A Meta-Survey on Intelligent Energy-Efficient Buildings.
- Author
-
Islam, Md Babul, Guerrieri, Antonio, Gravina, Raffaele, and Fortino, Giancarlo
- Subjects
MACHINE learning ,REINFORCEMENT learning ,SMART cities ,DEEP learning ,INDUSTRIAL ecology ,INTELLIGENT buildings - Abstract
The rise of the Internet of Things (IoT) has enabled the development of smart cities, intelligent buildings, and advanced industrial ecosystems. When the IoT is matched with machine learning (ML), the advantages of the resulting enhanced environments can span, for example, from energy optimization to security improvement and comfort enhancement. Together, IoT and ML technologies are widely used in smart buildings, in particular, to reduce energy consumption and create Intelligent Energy-Efficient Buildings (IEEBs). In IEEBs, ML models are typically used to analyze and predict various factors such as temperature, humidity, light, occupancy, and human behavior with the aim of optimizing building systems. In the literature, many review papers have been presented so far in the field of IEEBs. Such papers mostly focus on specific subfields of ML or on a limited number of papers. This paper presents a systematic meta-survey, i.e., a review of review articles, that compares the state of the art in the field of IEEBs using the Prisma approach. In more detail, our meta-survey aims to give a broader view, with respect to the already published surveys, of the state-of-the-art in the IEEB field, investigating the use of supervised, unsupervised, semi-supervised, and self-supervised models in a variety of IEEB-based scenarios. Moreover, our paper aims to compare the already published surveys by answering five important research questions about IEEB definitions, architectures, methods/models used, datasets and real implementations utilized, and main challenges/research directions defined. This meta-survey provides insights that are useful both for newcomers to the field and for researchers who want to learn more about the methodologies and technologies used for IEEBs' design and implementation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
63. Comprehensive Review: Machine and Deep Learning in Brain Stroke Diagnosis.
- Author
-
Fernandes, João N. D., Cardoso, Vitor E. M., Comesaña-Campos, Alberto, and Pinheira, Alberto
- Subjects
DEEP learning ,STROKE ,MACHINE learning ,ELECTRONIC data processing ,DIAGNOSIS ,PATIENT monitoring - Abstract
Brain stroke, or a cerebrovascular accident, is a devastating medical condition that disrupts the blood supply to the brain, depriving it of oxygen and nutrients. Each year, according to the World Health Organization, 15 million people worldwide experience a stroke. This results in approximately 5 million deaths and another 5 million individuals suffering permanent disabilities. The complex interplay of various risk factors highlights the urgent need for sophisticated analytical methods to more accurately predict stroke risks and manage their outcomes. Machine learning and deep learning technologies offer promising solutions by analyzing extensive datasets including patient demographics, health records, and lifestyle choices to uncover patterns and predictors not easily discernible by humans. These technologies enable advanced data processing, analysis, and fusion techniques for a comprehensive health assessment. We conducted a comprehensive review of 25 review papers published between 2020 and 2024 on machine learning and deep learning applications in brain stroke diagnosis, focusing on classification, segmentation, and object detection. Furthermore, all these reviews explore the performance evaluation and validation of advanced sensor systems in these areas, enhancing predictive health monitoring and personalized care recommendations. Moreover, we also provide a collection of the most relevant datasets used in brain stroke analysis. The selection of the papers was conducted according to PRISMA guidelines. Furthermore, this review critically examines each domain, identifies current challenges, and proposes future research directions, emphasizing the potential of AI methods in transforming health monitoring and patient care. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
64. Forecasting Future Research Trends in the Construction Engineering and Management Domain Using Machine Learning and Social Network Analysis.
- Author
-
Ali, Gasser G., El-adaway, Islam H., Ahmed, Muaz O., Eissa, Radwa, Nabi, Mohamad Abdul, Elbashbishy, Tamima, and Khalef, Ramy
- Subjects
ENGINEERING management ,INDUSTRIAL engineering ,SOCIAL network analysis ,CONSTRUCTION management ,MACHINE learning ,CLASSIFICATION algorithms - Abstract
Construction Engineering and Management (CEM) is a broad domain with publications covering interrelated subdisciplines and considered a key source of knowledge sharing. Previous studies used scientometric methods to assess the current impact of CEM publications; however, there is a need to predict future citations of CEM publications to identify the expected high-impact trends in the future and guide new research efforts. To tackle this gap in the literature, the authors conducted a study using Machine Learning (ML) algorithms and Social Network Analysis (SNA) to predict CEM-related citation metrics. Using a dataset of 93,868 publications, the authors trained and tested two machine learning classification algorithms: Random Forest and XGBoost. Validation of the RF and XGBoost resulted in a balanced accuracy of 79.1% and 79.5%, respectively. Accordingly, XGBoost was selected. Testing of the XGBoost model revealed a balanced accuracy of 80.71%. Using SNA, it was found that while the top CEM subdisciplines in terms of the number of predicted impactful papers are "Project planning and design", "Organizational issues", and "Information technologies, robotics, and automation"; the lowest was "Legal and contractual issues". This paper contributes to the body of knowledge by studying the citation level, strength, and interconnectivity between CEM subdisciplines as well as identifying areas more likely to result in highly cited publications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
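The study above reports balanced accuracy rather than plain accuracy; since highly cited papers are a minority class, the distinction matters. A small self-contained implementation of the metric:

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls, the metric reported above (~80.71%).

    Unlike plain accuracy, it is not inflated by a dominant class, which
    matters when impactful papers are a small minority of the dataset.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += (t == p)
    return sum(correct[c] / total[c] for c in total) / len(total)

# Toy labels: 1 = predicted-impactful publication, 0 = not.
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]
print(balanced_accuracy(y_true, y_pred))   # 0.6875 = (7/8 + 1/2) / 2
```

Plain accuracy on the same toy labels would be 0.8, flattered by the majority class; the balanced score exposes the weak recall on the minority class.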
65. Special Issue of Natural Logic Meets Machine Learning (NALOMA): Selected Papers from the First Three Workshops of NALOMA.
- Author
-
Kalouli, Aikaterini-Lida, Abzianidze, Lasha, and Chatzikyriakidis, Stergios
- Subjects
DEEP learning ,MACHINE learning ,QUESTION answering systems ,LANGUAGE models ,NATURAL language processing ,ARTIFICIAL neural networks ,MACHINE translating - Abstract
The text discusses the intersection of natural language understanding (NLU) and reasoning in the context of large language models (LLMs) and traditional logic-based approaches. It highlights the strengths and weaknesses of both approaches and explores the potential for hybrid models that combine symbolic and distributional representations. The text also mentions specific applications of hybrid approaches in natural language inference, question-answering, sentiment analysis, and dialog. The document concludes by introducing a special issue that features selected contributions from the NALOMA workshop series, which focuses on hybrid methods in NLU. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
66. Smartphone-based pH titration for liquid food applications.
- Author
-
Xiao, Yuhui, Huang, Yaqiu, Qiu, Junhong, Cai, Honghao, and Ni, Hui
- Abstract
pH detection helps control food quality, prevent spoilage, determine storage methods, and monitor additive levels. In previous studies, colorimetric pH detection involved manual capture of target regions and classification of acid–base categories, leading to time-consuming processes. Additionally, some researchers relied solely on R*G*B* or H*S*V* to build regression models, potentially limiting their generalizability and robustness. To address these limitations, this study proposed a colorimetric method that combines pH paper, a smartphone, computer vision, and machine learning for fast and precise pH detection. Advantages of the computer vision model YOLOv5 include its ability to quickly capture the target region of the pH paper and automatically categorize it as either acidic or basic. Subsequently, recursive feature elimination was applied to filter out irrelevant features from the R*G*B*, H*S*V*, L*a*b*, Gray, X_R, X_G, and X_B features. Finally, support vector regression was used to develop the regression model for pH value prediction. YOLOv5 demonstrated exceptional performance, with a mean average precision of 0.995, a classification accuracy of 100%, and a detection time of 4.9 ms. The pH prediction model achieved a mean absolute error (MAE) of 0.023 for acidity and 0.061 for alkalinity, a notable advancement compared to the MAE range of 0.03–0.46 observed in previous studies. The proposed approach shows potential for improving the dependability and effectiveness of pH detection, specifically in resource-constrained scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
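The feature-selection-plus-regression pipeline in the abstract above can be sketched with a greedy backward elimination in the spirit of recursive feature elimination. This is not the authors' pipeline (they pair RFE with support vector regression); here a plain least-squares fit stands in, and all data are fabricated:

```python
import numpy as np

# Fabricated calibration data: 60 pH-paper photos x 6 color features
# (channel names and values are invented; only 4 features truly matter).
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(60, 6))
true_w = np.array([2.0, -1.5, 0.8, 0.0, 0.0, 0.3])
y = 7.0 + X @ true_w + rng.normal(0, 0.01, 60)

def fit_mae(cols):
    # Least-squares fit on a feature subset; return mean absolute error.
    A = np.c_[np.ones(len(X)), X[:, cols]]
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.mean(np.abs(A @ w - y))

# Backward elimination: repeatedly drop the feature whose removal hurts the
# fit least, and stop once any removal would clearly degrade the error.
cols = list(range(6))
while len(cols) > 1:
    errs = {c: fit_mae([k for k in cols if k != c]) for c in cols}
    cheapest = min(errs, key=errs.get)       # least useful feature
    if errs[cheapest] > 2 * fit_mae(cols):   # removal now hurts: stop
        break
    cols.remove(cheapest)
print(sorted(cols))                           # irrelevant features dropped
```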
67. A new fusion neural network model and credit card fraud identification.
- Author
-
Jiang, Shan, Liao, Xiaofeng, Feng, Yuming, Gao, Zilin, and Onasanya, Babatunde Oluwaseun
- Subjects
ARTIFICIAL neural networks ,CREDIT card fraud ,MACHINE learning ,IDENTIFICATION cards ,FRAUD ,DEEP learning - Abstract
Credit card fraud identification is an important issue in risk prevention and control for banks and financial institutions. In order to establish an efficient credit card fraud identification model, this article studied the relevant factors that affect fraud identification. A credit card fraud identification model based on neural networks was constructed, and in-depth discussions and research were conducted. First, the layers of the neural network were deepened to improve the prediction accuracy of the model; second, the hidden layer width of the neural network was increased to improve the prediction accuracy of the model. This article proposes a new fusion neural network model by combining deep neural networks and wide neural networks, and applies the model to credit card fraud identification. The model is characterized by relatively high prediction accuracy and F1 score. Finally, the model is trained using stochastic gradient descent. On the test set, the proposed method has an accuracy of 96.44% and an F1 value of 96.17%, demonstrating good fraud recognition performance. After comparison, the method proposed in this paper is superior to machine learning models, ensemble learning models, and deep learning models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
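The deep-plus-wide fusion described in the abstract above can be sketched as a single NumPy forward pass. Layer sizes, the logit fusion rule, and all weights are assumptions for illustration, not the authors' published configuration:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def wide_deep_forward(x, deep_ws, wide_w, b):
    """One forward pass of a deep-plus-wide fusion network.

    Deep path: stacked ReLU layers learn nonlinear feature interactions
    (deepening the network, as in the abstract).
    Wide path: a single linear map over the raw features (widening it).
    The two logits are summed before the sigmoid fraud score.
    """
    h = x
    for w in deep_ws:                  # deepened layers raise capacity
        h = relu(h @ w)
    deep_logit = h.sum(axis=1)         # collapse deep features to a logit
    wide_logit = x @ wide_w            # wide linear path over raw features
    return sigmoid(deep_logit + wide_logit + b)

rng = np.random.default_rng(3)
x = rng.normal(size=(5, 10))           # 5 transactions, 10 features
deep_ws = [rng.normal(scale=0.3, size=(10, 16)),
           rng.normal(scale=0.3, size=(16, 8))]
wide_w = rng.normal(scale=0.3, size=10)
scores = wide_deep_forward(x, deep_ws, wide_w, b=-1.0)
print(scores.round(3))                 # one fraud probability per transaction
```

In training, the weights would be fitted with stochastic gradient descent, as the abstract states; only the forward computation is shown here.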
68. Research on fire accident prediction and risk assessment algorithm based on data mining and machine learning.
- Author
-
Zhang, Ziyang, Tan, Lingye, and Tiong, Robert
- Subjects
ARTIFICIAL neural networks ,FOREST fires ,FIRE risk assessment ,DATA mining ,DATABASES - Abstract
Forest fire is a kind of natural disaster that is destructive, easy to spread, and difficult to extinguish. It greatly harms the balance of ecosystems and human life and property. Prediction and risk assessment of forest fire accidents can detect forest fires as early as possible so that corresponding remedial measures can be taken and losses minimized. This paper first collected the relevant data in the Lesser Hinggan Mountain region, established a database of fire driving factors, and then analyzed the impact and distribution characteristics of forest fire driving factors such as temperature and rainfall. Finally, a fire accident prediction model was built based on a deep neural network, and the model's performance was compared with SVM and RF models. The analysis results show that this paper's fire accident prediction model is more accurate. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
69. Editorial Preview.
- Author
-
Su-Cheng Haw
- Subjects
MACHINE learning ,ARTIFICIAL intelligence ,DATA mining ,SOFTWARE engineering ,RECOMMENDER systems - Abstract
Effective from Volume 3, JIWE has transitioned to a triannual publication schedule, with releases each February, June, and October. This October 2024 release contains 12 papers within the regular section that cover a broad range of applications concerning Machine Learning (ML), Artificial Intelligence (AI), Data Mining (DM), Software Engineering, Recommender Systems, Cybersecurity, Healthcare, and other key areas in web engineering. Additionally, this edition presents a captivating collection of 7 papers curated by our Thematic Editor, Dr. Ji-Jian Chin, under the theme "Pervasive Computing." In his editorial, Dr. Ji-Jian Chin highlights cutting-edge research on the integration of computing into everyday environments. Moreover, these papers are aligned with some of the United Nations Sustainable Development Goals (SDGs), particularly SDG 3 (Good Health and Well-being) through advancements in healthcare technologies, SDG 9 (Industry, Innovation, and Infrastructure) through software and systems innovation research, and SDG 16 (Peace, Justice, and Strong Institutions) through contributions to privacy and cybersecurity research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
70. Reinforcement Learning-Based Multimodal Model for the Stock Investment Portfolio Management Task.
- Author
-
Du, Sha and Shen, Hailong
- Subjects
REINFORCEMENT learning ,INVESTORS ,MACHINE learning ,SENTIMENT analysis ,INVESTMENT management - Abstract
Machine learning has been applied by more and more scholars in the field of quantitative investment, but traditional machine learning methods cannot provide high returns and strong stability at the same time. In this paper, a multimodal model based on reinforcement learning (RL) is constructed for the stock investment portfolio management task. Most previous RL-based methods have chosen value-based RL, yet policy gradient-based RL methods have been shown to be superior to value-based methods by a growing body of research. Commonly used policy gradient-based reinforcement learning methods are DDPG, TD3, SAC, and PPO. We conducted comparative experiments to select the most suitable method for the dataset in this paper; the final choice was DDPG. Furthermore, previous methods rarely refine the raw data before training the agent. The stock market has a large amount of data, and the data are complex: if raw stock market data are fed directly to the agent, the agent cannot learn the information in the data efficiently and quickly. We use state representation learning (SRL) to process the raw stock data and then feed the processed data to the agent. It is not enough to train the agent using only stock data, so we also added comment text data and image data. The comment text data come from investors' comments on stock bars. The image data are derived from pictures that can represent the overall direction of the market. We conducted experiments on three datasets and compared our proposed model with 11 other methods, using three evaluation indicators. Taken together, our proposed model works best. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
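The preprocessing idea in the abstract above, refining raw market data into a compact state before handing it to the RL agent, can be sketched minimally. This is not the authors' SRL model; it is a hypothetical Python stand-in, and all names and the chosen summary statistics are illustrative:

```python
import statistics

def srl_state(prices, volumes):
    """Compress a raw market window into a compact state vector.
    A hand-crafted stand-in for a learned state representation:
    mean/volatility of returns, overall trend, and relative volume."""
    returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
    mu = statistics.mean(returns)
    sigma = statistics.pstdev(returns)
    trend = (prices[-1] - prices[0]) / prices[0]
    vol_ratio = volumes[-1] / (statistics.mean(volumes) or 1.0)
    return [mu, sigma, trend, vol_ratio]
```

An agent would then consume this fixed-length vector instead of the raw, noisy price series.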
71. CONSTRUCTION AND DEMOLITION WASTE MANAGEMENT AND ARTIFICIAL INTELLIGENCE-A SYSTEMATIC REVIEW.
- Author
-
de Melo Nunes Lopes, Carolina, Abrahão Cury, Alexandre, and Castro Mendes, Júlia
- Subjects
MACHINE learning ,DECISION support systems ,CONSTRUCTION & demolition debris ,ARTIFICIAL intelligence ,WASTE recycling - Abstract
Copyright of Environmental & Social Management Journal / Revista de Gestão Social e Ambiental is the property of Environmental & Social Management Journal and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
72. Extraction of Features for Time Series Classification Using Noise Injection.
- Author
-
Kim, Gyu Il and Chung, Kyungyong
- Subjects
DIGITAL signal processing ,DATA augmentation ,DEEP learning ,FOURIER transforms ,FEATURE extraction - Abstract
Time series data often display complex, time-varying patterns, which pose significant challenges for effective classification due to data variability, noise, and imbalance. Traditional time series classification techniques frequently fall short in addressing these issues, leading to reduced generalization performance. Therefore, there is a need for innovative methodologies to enhance data diversity and quality. In this paper, we introduce a method for the extraction of features for time series classification using noise injection to address these challenges. By employing noise injection techniques for data augmentation, we enhance the diversity of the training data. Utilizing digital signal processing (DSP), we extract key frequency features from time series data through sampling, quantization, and Fourier transformation. This process enhances the quality of the training data, thereby maximizing the model's generalization performance. We demonstrate the superiority of our proposed method by comparing it with existing time series classification models. Additionally, we validate the effectiveness of our approach through various experimental results, confirming that data augmentation and DSP techniques are potent tools in time series data classification. Ultimately, this research presents a robust methodology for time series data analysis and classification, with potential applications across a broad spectrum of data analysis problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
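The augmentation-plus-DSP pipeline described in the abstract above can be illustrated with a small sketch: Gaussian noise injection for training-data diversity, then low-order DFT magnitudes as frequency features. A minimal Python illustration, not the authors' implementation:

```python
import cmath
import random

def augment_with_noise(series, sigma=0.05, seed=0):
    """Data augmentation: add zero-mean Gaussian noise to a series."""
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, sigma) for x in series]

def dft_magnitudes(series, k=4):
    """Magnitudes of the first k DFT coefficients as frequency features."""
    n = len(series)
    feats = []
    for f in range(k):
        coeff = sum(x * cmath.exp(-2j * cmath.pi * f * t / n)
                    for t, x in enumerate(series))
        feats.append(abs(coeff) / n)
    return feats
```

A classifier would be trained on `dft_magnitudes(augment_with_noise(x))` rather than on the raw samples.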
73. Cloud-Edge Collaborative Defect Detection Based on Efficient Yolo Networks and Incremental Learning.
- Author
-
Lei, Zhenwu, Zhang, Yue, Wang, Jing, and Zhou, Meng
- Subjects
MACHINE learning ,MANUFACTURING defects ,FEATURE extraction ,ELECTRONICS manufacturing ,MANUFACTURING processes ,DEEP learning - Abstract
Defect detection constitutes one of the most crucial processes in industrial production. With a continuous increase in the number of defect categories and samples, defect detection models underpinned by deep learning find it challenging to expand to new categories, and the accuracy and real-time performance of product defect detection are also confronted with severe challenges. This paper addresses the insufficient detection accuracy of existing lightweight models on resource-constrained edge devices by presenting a new lightweight YoloV5 model, which integrates four modules: SCDown, GhostConv, RepNCSPELAN4, and ScalSeq; this paper abbreviates it as SGRS-YoloV5n. Through the incorporation of these modules, the model notably enhances feature extraction and computational efficiency while reducing the model size and computational load, making it more conducive to deployment on edge devices. Furthermore, a cloud-edge collaborative defect detection system is constructed to improve detection accuracy and efficiency through initial detection by edge devices, followed by additional inspection by cloud servers. An incremental learning mechanism is also introduced, enabling the model to adapt promptly to new defect categories and update its parameters accordingly. Experimental results reveal that the SGRS-YoloV5n model exhibits superior detection accuracy and real-time performance, validating its value and stability for deployment in resource-constrained environments. This system presents a novel solution for achieving efficient and accurate real-time defect detection. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
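The cloud-edge collaboration step described above, where edge devices keep confident detections and escalate uncertain ones to the cloud for re-inspection, can be sketched as a simple confidence router. A hypothetical Python sketch; the field name and threshold are assumptions, not the paper's actual protocol:

```python
def route_detection(edge_results, conf_threshold=0.6):
    """Split edge-device detections: confident ones are accepted locally,
    uncertain ones are escalated to the cloud model for re-inspection."""
    accepted, escalate = [], []
    for det in edge_results:
        (accepted if det["conf"] >= conf_threshold else escalate).append(det)
    return accepted, escalate
```

Only the `escalate` list would be transmitted upstream, which is what makes the scheme bandwidth-friendly.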
74. Clinical named entity extraction for extracting information from medical data.
- Author
-
Kuttaiyapillai, Dhanasekaran, Madasamy, Anand, Ayyavu, Shobanadevi, and Sayeed, Md Shohel
- Subjects
CONVOLUTIONAL neural networks ,DATA mining ,DATA analytics ,MACHINE learning ,RESEARCH personnel ,DEEP learning - Abstract
Clinical named entity extraction (NER) based on deep learning has gained much attention among researchers and data analysts. This paper proposes a NER approach to extract valuable Parkinson's disease-related information. To develop an effective NER method and to handle problems in disease data analytics, a unique NER technique applies a "recognize-map-extract (RME)" mechanism and aims to deal with complex relationships present in the data. Owing to fast-growing medical data, developing suitable deep-learning methods for NER is a challenge. Furthermore, traditional machine learning approaches rely on the time-consuming process of creating corpora and, in certain situations, cannot extract information for specific needs and locations. This paper presents a clinical NER approach based on a convolutional neural network (CNN) for better use of specific features around medical entities and analyzes the performance of the proposed approach through fine-tuning NER with effective pre-training on the BC5CDR dataset. The proposed method first annotates entities for various medical concepts; the second stage develops the clinical NER method. The proposed method shows promising results, achieving a precision of 92.57%, a recall of 92.22%, and an F1-measure of 91.6%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
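The recognize-map-extract idea can be caricatured with a dictionary lookup, although the paper's actual tagger is a CNN fine-tuned on BC5CDR. A toy Python sketch with a hypothetical mini-lexicon; terms and labels are invented for illustration:

```python
GAZETTEER = {  # hypothetical mini-lexicon of Parkinson's-related terms
    "levodopa": "DRUG",
    "tremor": "SYMPTOM",
    "parkinson's disease": "DISEASE",
}

def recognize_map_extract(text):
    """Toy RME pass: recognize lexicon terms, map each to an entity
    type, and extract (span, type) pairs from the input text."""
    found = []
    lowered = text.lower()
    for term, label in GAZETTEER.items():
        start = lowered.find(term)
        if start != -1:
            found.append((text[start:start + len(term)], label))
    return sorted(found)
```

A learned model replaces the lexicon with contextual features, but the recognize/map/extract stages stay the same.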
75. Investigating Japanese Government Actions During the Pandemic and the Implications of The Nudge Theory.
- Author
-
Miyata, Eisuke
- Subjects
COVID-19 pandemic ,MACHINE learning ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,NATURAL language processing - Abstract
As the COVID-19 pandemic has impacted the world, face masks and vaccinations have become near-daily necessities. Japan, in particular, maintained remarkably high mask usage and vaccination rates throughout the pandemic, and the Japanese government was able to implement very effective policies. We review papers that collect data on the circumstances Japan faced during the pandemic, compile statistics on mask usage and vaccination rates from other countries, and take a quantitative approach to this question. We then assess whether Japan used nudge theory to mitigate the effects of the COVID-19 pandemic and provide some suggestions for future policies. Although we cannot fully assert that the Japanese government intentionally used nudge theory, given the remarkable success it had, we conclude that nudge theory did in fact play a major role in mitigating the effects of the pandemic in Japan. Viewing the COVID-19 situation through nudge theory, an extension of behavioral economics, is something that few papers have investigated. Behavioral economics is crucial in our day-to-day lives. Applying it to recent catastrophes, such as the earthquake that hit Japan at the start of 2024 or the recession caused by lockdowns and restrictions during the pandemic, enables us to view them in different ways; for example, we can focus on how people react and behave in these events and analyze that behavior to understand human nature better. We can further prepare for upcoming pandemics by effectively implementing these insights in policies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
76. DDOS ATTACKS DETECTION USING DIFFERENT DECISION TREE ALGORITHMS.
- Author
-
DAYANANDAM, G., REDDY, E. SRINIVASA, and BABU, D. BUJJI
- Subjects
MACHINE learning ,CART algorithms ,DENIAL of service attacks ,BANKING industry ,INSURANCE companies ,DECISION trees - Abstract
In today's world, the banking sector, government organizations, and users in the finance and insurance sectors have grown exponentially, making them primary targets for attackers. The main focus of these attackers is to disrupt services for legitimate users. Recently, attackers targeted banks in Ukraine during the Russia-Ukraine war, causing a shortage of money in banks and making it difficult for people to withdraw funds. These attacks fall under the category of Distributed Denial of Service (DDoS) attacks; their primary objectives are to gain financial control and damage the reputation of the affected organization or country. The purpose of this paper is to detect DDoS attacks using various decision tree classifiers in machine learning. We utilized the 'caret' package in R, which is well known for classification and regression training. We split the KDD'99 dataset based on the outcome variable and employed the 'rpart' method to classify the dataset using the CART and C4.5 algorithms. Experimental results indicate that our classification methods achieve a better accuracy rate compared to other decision tree methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
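The CART criterion that the 'rpart' method optimizes, a split minimizing weighted Gini impurity, can be shown in a few lines. The paper itself works in R with 'caret'; this is a purely illustrative Python sketch for a single numeric feature:

```python
def gini(labels):
    """Gini impurity of a label multiset: 1 - sum of squared class shares."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(xs, ys):
    """Find the threshold on one feature that minimizes the weighted
    Gini impurity of the two resulting partitions (the CART criterion)."""
    best = (float("inf"), None)
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        best = min(best, (score, t))
    return best
```

A full tree recurses this split on each partition; DDoS detection then reduces to routing a flow record down the tree.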
77. Decision Support Systems for Disease Detection and Diagnosis.
- Author
-
Rizzi, Maria
- Subjects
CLINICAL decision support systems ,MACHINE learning ,MEDICAL personnel ,DECISION support systems ,CONVOLUTIONAL neural networks ,BREAST ,DEEP learning - Abstract
This document discusses the recent advancements in decision support systems (DSSs) for disease detection and diagnosis. The combination of biomedical studies and information technology has led to the development of accessible and accurate solutions that can improve patient survival rates. The document highlights several research papers that cover a wide range of topics, including multiple sclerosis detection, neurodegenerative disease detection, breast lesion classification, COVID-19 mortality prediction, melanoma diagnosis, and prediction of second primary skin cancer. The adoption of efficient DSSs can aid clinical assessment, reduce misdiagnosis, and facilitate evidence-based decision-making. However, challenges such as validation, training, and user interface design need to be addressed for widespread application of DSSs in clinical practice. The document concludes by emphasizing the importance of future studies and developments in overcoming limitations and expanding the use of DSSs in different contexts. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
78. Navigating the Multimodal Landscape: A Review on Integration of Text and Image Data in Machine Learning Architectures.
- Author
-
Binte Rashid, Maisha, Rahaman, Md Shahidur, and Rivas, Pablo
- Subjects
MACHINE learning ,TECHNOLOGICAL innovations ,IMAGE fusion ,DATA integration ,MACHINE parts - Abstract
Images and text have become essential parts of the multimodal machine learning (MMML) framework in today's world: data are always available, technological breakthroughs bring disparate forms together, and while text adds semantic richness and narrative to images, images capture visual subtleties and emotions. Together, these two media improve knowledge beyond what would be possible with either one alone. This paper investigates feature extraction and advancement from text and image data using pre-trained models in MMML. It offers a thorough analysis of fusion architectures, outlining text and image data integration and evaluating their overall advantages and effects. Furthermore, it draws attention to the shortcomings and difficulties that MMML currently faces and identifies areas that need more research and development. To accomplish this, we gathered 341 research articles from five digital library databases; following a thorough assessment procedure, 88 research papers remained, enabling us to evaluate MMML in detail. Our findings demonstrate that pre-trained models, such as BERT for text and ResNet for images, are predominantly employed for feature extraction due to their robust performance in diverse applications. Fusion techniques, ranging from simple concatenation to advanced attention mechanisms, are extensively adopted to enhance the representation of multimodal data. Despite these advancements, MMML models face significant challenges, including handling noisy data, optimizing dataset size, and ensuring robustness against adversarial attacks. Our findings highlight the necessity for further research to address these challenges, particularly in developing methods to improve the robustness of MMML models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
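The two fusion families the review contrasts, simple concatenation versus weighted mixing of modalities, can be sketched minimally. These are hypothetical Python stand-ins operating on already-extracted feature vectors (e.g. from BERT and ResNet), not any surveyed model's code:

```python
def concat_fusion(text_feat, image_feat):
    """Early fusion by concatenation: the simplest integration scheme,
    producing one long joint feature vector."""
    return list(text_feat) + list(image_feat)

def weighted_fusion(text_feat, image_feat, w_text=0.5):
    """Toy weighted fusion: a scalar mixing weight (a stand-in for a
    learned attention score) blends the two modalities element-wise.
    Assumes both feature vectors have the same dimensionality."""
    w_img = 1.0 - w_text
    return [w_text * t + w_img * v for t, v in zip(text_feat, image_feat)]
```

Real attention mechanisms learn `w_text` per element and per example; the structural idea is the same.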
79. Gradient Boosted Trees and Denoising Autoencoder to Correct Numerical Wave Forecasts.
- Author
-
Yanchin, Ivan and Guedes Soares, C.
- Subjects
MACHINE learning ,PREDICTION models ,FORECASTING ,NOISE ,BUOYS - Abstract
This paper is dedicated to correcting the WAM/ICON numerical wave model predictions by reducing the residue between the model's predictions and the actual buoy observations. The two parameters used in this paper are significant wave height and wind speed. The paper proposes two machine learning models to solve this task. Both models are multioutput models and correct the significant wave height and wind speed simultaneously. The first machine learning model is based on gradient boosted trees, which is trained to predict the residue between the model's forecasts and the actual buoy observations using the other parameters predicted by the numerical model as inputs. This paper demonstrates that this model can significantly reduce errors for all used geographical locations. This paper also uses SHapley Additive exPlanation values to investigate the influence that the numerically predicted wave parameters have when the machine learning model predicts the residue. To design the second model, it is assumed that the residue can be modelled as noise added to the actual values. Therefore, this paper proposes to use the denoising autoencoder to remove this noise from the numerical model's prediction. The results demonstrate that denoising autoencoders can remove the noise for the wind speed parameter, but their performance is poor for the significant wave height. This paper provides some explanations as to why this may happen. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
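The first model's core idea, learning the residue between numerical forecasts and buoy observations and adding it back at prediction time, can be sketched with the simplest possible "residual model": a constant bias. A hypothetical Python illustration of the correction scheme, not the gradient-boosted model itself:

```python
def fit_residual_correction(forecasts, observations):
    """Fit the simplest stand-in for the residual model: the mean
    residue (observation minus forecast) over the training set."""
    residues = [obs - f for f, obs in zip(forecasts, observations)]
    return sum(residues) / len(residues)

def correct(forecasts, bias):
    """Apply the learned correction to new numerical-model forecasts."""
    return [f + bias for f in forecasts]
```

The paper's gradient-boosted trees predict a per-sample residue from the other forecast parameters instead of a single constant, but the corrected output is formed the same way: forecast plus predicted residue.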
80. A Semi-Automated Solution Approach Recommender for a Given Use Case: a Case Study for AI/ML in Oncology via Scopus and OpenAI.
- Author
-
Kılıç, Deniz Kenan, Vasegaard, Alex Elkjær, Desoeuvres, Aurélien, and Nielsen, Peter
- Subjects
ARTIFICIAL intelligence ,LITERATURE reviews ,MACHINE learning ,DATABASES - Abstract
Nowadays, literature review is a necessary task when trying to solve a given problem. However, an exhaustive literature review is very time-consuming in today's vast literature landscape; it can take weeks, even when looking only at abstracts or surveys. Moreover, choosing a method among others, and targeting searches within relevant problem and solution domains, are not easy tasks, especially for young researchers or engineers starting to work in their field. Even if surveys that provide methods used to solve a specific problem already exist, an automatic way to do this for any use case is missing, especially for those who do not know the existing literature. Our proposed tool, SARBOLD-LLM, allows discovering and choosing among methods related to a given problem, providing additional information about their uses in the literature to derive decision-making insights, in only a few hours. The SARBOLD-LLM comprises three modules: (1: Scopus search) paper selection using a keyword selection scheme to query the Scopus API; (2: Scoring and method extraction) relevancy and popularity score calculation and solution method extraction from papers utilizing the OpenAI API (GPT-3.5); (3: Analysis) sensitivity analysis and post-analyses, which reveal trends, relevant papers, and methods. Comparing the SARBOLD-LLM to manual ground truth using precision, recall, and F1-score metrics, the performance results in the AI-in-oncology case study are 0.68, 0.9, and 0.77, respectively. SARBOLD-LLM demonstrates successful outcomes across various domains, showcasing its robustness and effectiveness. The SARBOLD-LLM addresses engineers more than researchers, as it proposes methods and trends without adding pros and cons. It is a useful tool for selecting which methods to investigate first and comes as a complement to surveys. This can limit the global search and accumulation of knowledge for the end user.
However, it can be used as a director or recommender for future implementation to solve a problem. Highlights: Automated support for literature choice and solution selection for any use case. A generalized keyword selection scheme for literature database queries. Trends in literature: detecting AI methods for a case study using Scopus and OpenAI. A better understanding of the tool through sensitivity analyses for Scopus and OpenAI. A robust tool for different domains with promising OpenAI performance results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
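The evaluation reported above (0.68 / 0.9 / 0.77) is standard set-based precision/recall/F1 against a manually curated ground truth, which can be computed as follows. A Python sketch; the method names in the test are made up for illustration:

```python
def prf1(predicted, ground_truth):
    """Precision, recall, and F1 of an extracted method set against
    a manually curated ground-truth set."""
    pred, truth = set(predicted), set(ground_truth)
    tp = len(pred & truth)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(truth) if truth else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```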
81. A Review Paper on Exploring the Concept of Data Science: A Comprehensive Analysis.
- Author
-
Gaikwad, Samiksha, Chaudhari, Parimal, Jadhav, Dipali, Bodade, Punam, and Shirbhate, Dhiraj
- Subjects
DATA science ,TECHNOLOGICAL innovations ,ARTIFICIAL intelligence ,BLOCKCHAINS ,EDGE computing ,MACHINE learning - Abstract
Data science is a rapidly growing technology in the technical world that fulfills the requirements for data and various data aspects. It is at the core of all emerging technologies, including machine learning, artificial intelligence, robotics, edge computing, and blockchain technology. In this review paper, we consider the detailed concept of data science: where data are generated, the skills needed to handle data, its growth, how it works, and the impact of data science on other technologies. The basic aim of this review paper is to provide a summary of data science that everyone can easily understand. [ABSTRACT FROM AUTHOR]
- Published
- 2024
82. An efficient and adaptive design of reinforcement learning environment to solve job shop scheduling problem with soft actor-critic algorithm.
- Author
-
Si, Jinghua, Li, Xinyu, Gao, Liang, and Li, Peigen
- Subjects
DEEP reinforcement learning ,PRODUCTION scheduling ,REINFORCEMENT learning ,MACHINE learning ,FLOW shops - Abstract
Shop scheduling is deeply involved in manufacturing. In order to improve scheduling efficiency and fit dynamic scenarios, many Deep Reinforcement Learning (DRL) methods have been studied to solve scheduling problems like job shop and flow shop. But most studies focus on using the latest algorithms while ignoring that the environment plays an important role in agent learning. In this paper, we design an effective, robust and size-agnostic environment for job shop scheduling. The proposed environment design uses centralised training and decentralised execution (CTDE) to implement a multi-agent architecture, and, together with the observation space we design, eliminates as much as possible the environmental information that is irrelevant to the current decision. The proposed action space enlarges the decision space of agents, which performs better than the traditional way. Finally, the Soft Actor-Critic (SAC) algorithm is adapted to learning within this environment. By comparing with traditional scheduling rules, other reinforcement learning algorithms, and relevant literature, the superiority of the results obtained in this study is demonstrated. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
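To see what a scheduling agent competes against, the "traditional scheduling rules" baseline mentioned above can be sketched for the much simpler single-machine case: the Shortest-Processing-Time (SPT) dispatch rule and a total-flow-time objective. A toy Python sketch, far simpler than the paper's multi-agent job-shop environment:

```python
def total_flow_time(order, proc):
    """Sum of job completion times for a sequence on one machine --
    a classic objective that SPT provably minimizes in this setting."""
    t, total = 0, 0
    for j in order:
        t += proc[j]
        total += t
    return total

def spt_order(proc):
    """Shortest-Processing-Time rule: sequence jobs by ascending
    processing time. A standard dispatch-rule baseline."""
    return sorted(range(len(proc)), key=lambda j: proc[j])
```

An RL agent is worthwhile only when it beats such rules on the harder multi-machine, dynamic variants the paper targets.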
83. Genetic algorithms for planning and scheduling engineer-to-order production: a systematic review.
- Author
-
Neumann, Anas, Hajji, Adnene, Rekik, Monia, and Pellerin, Robert
- Subjects
GENETIC algorithms ,MACHINE learning ,PRODUCTION scheduling ,ENGINEERING design ,EVOLUTIONARY algorithms - Abstract
This paper provides a systematic review of the Genetic Algorithms (GAs) proposed to solve planning and scheduling problems in Engineer-To-Order (ETO) contexts. Our review focuses on how the key characteristics of ETO projects affect both the problem studied and the GA algorithmic features. Typical ETO projects consist of one-of-a-kind products with complex structures and uncertain designs. A deep analysis of the papers published between 2000 and 2022 enables identifying 10 main characteristics of ETO projects, six activity types, 10 decision types, eight groups of constraints, and 10 optimisation objectives. Our study shows that none of the reported papers integrates all 10 ETO characteristics. The least studied ETO characteristics are the incorporation of design and engineering information in the problem definition and design uncertainty. Our review also identifies 10 recurrent encoding formats and emphasises the most frequently used genetic operators. We observed that most planning and scheduling problems consider objectives and decisions related to product customisation or supply chain configuration, yielding multi-objective problems. Most multi-objective GAs use a weighted sum or are based on NSGA-II. Diversity maintenance methods, adaptive and parameter-tuning mechanisms, and hybridisation with machine learning models are still not used in this context. Highlights: a systematic review of genetic algorithms dedicated to industrial planning and scheduling; analysis of how the characteristics of ETO projects impact the design of genetic representations and operators; recommendations on approaches employed to reach high-quality solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
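Typical GA machinery for sequencing problems of the kind reviewed, an order-preserving crossover and a swap mutation over permutations, can be sketched as follows. A generic Python illustration, not tied to any reviewed paper's encoding:

```python
import random

def order_crossover(p1, p2, rng):
    """OX crossover: copy a random slice from parent 1, then fill the
    remaining positions in parent-2 order -- the child is always a
    valid permutation, which plain one-point crossover cannot guarantee."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in child]
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def swap_mutation(perm, rng):
    """Exchange two genes: a standard mutation for sequencing encodings."""
    i, j = rng.sample(range(len(perm)), 2)
    perm = list(perm)
    perm[i], perm[j] = perm[j], perm[i]
    return perm
```

Permutation-safe operators like these are why sequencing encodings dominate the formats the review catalogues.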
84. Fostering Undergraduate Academic Research: Rolling out a Tech Stack with AI-Powered Tools in a Library.
- Author
-
Michalak, Russell
- Subjects
ARTIFICIAL intelligence ,ACADEMIC libraries ,UNIVERSITY research ,UNDERGRADUATES ,RESEARCH personnel ,MACHINE learning - Abstract
With the increasing integration of AI tools like Yewno Discover, Scholarcy, and Grammarly in academic libraries, undergraduate research has witnessed transformative changes. These tools, while elevating the research process, also bring forth challenges rooted in ethics and application. This paper explores the synergy between modern technology and academic exploration, highlighting the benefits and potential pitfalls of using AI in the research workflow. It emphasizes that while Yewno Discover and similar tools offer streamlined navigation of vast information databases, it is imperative for undergraduates to remain cognizant of potential biases and other ethical considerations. This paper underscores the need for proactive measures in academic settings, including specialized training and policy development, to ensure that undergraduate researchers harness the power of AI responsibly and efficiently. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
85. Featured Papers on Network Security and Privacy.
- Author
-
Mongay Batalla, Jordi
- Subjects
COMPUTER network security ,MACHINE learning ,INTERNET domain naming system ,PRIVACY ,DATA privacy ,UNIFORM Resource Locators - Abstract
This document is a summary of a journal article titled "Featured Papers on Network Security and Privacy." The article discusses the importance of security-by-design in networks and the need for security to be considered throughout the entire lifecycle of a network. It distinguishes between security and privacy in networks and highlights the Zero Trust approach as a means of increasing network privacy protection. The article also provides an overview of several published articles on network security and privacy, including topics such as cryptographic methods, artificial intelligence (AI) techniques, homoglyph replacement detection, privacy preservation in blockchain technology, trust models, and click fraud detection. The authors emphasize the role of AI and machine learning (ML) in improving network security and protecting network assets. They also discuss the challenges of protecting end devices and propose ML/AI algorithms for mitigating availability threats. Overall, the article highlights the importance of incorporating security and privacy measures in network design and the potential of ML/AI in enhancing network security. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
86. Unveiling Recent Trends in Biomedical Artificial Intelligence Research: Analysis of Top-Cited Papers.
- Author
-
Glicksberg, Benjamin S. and Klang, Eyal
- Subjects
ARTIFICIAL intelligence ,PROTEIN structure prediction ,TECHNOLOGICAL innovations ,MEDICAL education ,INDIVIDUALIZED medicine - Abstract
This review analyzes the most influential artificial intelligence (AI) studies in health and life sciences from the past three years, delineating the evolving role of AI in these fields. We identified and analyzed the top 50 cited articles on AI in biomedicine, revealing significant trends and thematic categorizations, including Drug Development, Real-World Clinical Implementation, and Ethical and Regulatory Aspects, among others. Our findings highlight a predominant focus on AI's application in clinical settings, particularly in diagnostics, telemedicine, and medical education, accelerated by the COVID-19 pandemic. The emergence of AlphaFold marked a pivotal moment in protein structure prediction, catalyzing a cascade of related research and signifying a broader shift towards AI-driven approaches in biological research. The review underscores AI's pivotal role in disease subtyping and patient stratification, facilitating a transition towards more personalized medicine strategies. Furthermore, it illustrates AI's impact on biology, particularly in parsing complex genomic and proteomic data, enhancing our capabilities to disentangle complex, interconnected molecular processes. As AI continues to permeate the health and life sciences, balancing its rapid technological advancements with ethical stewardship and regulatory vigilance will be crucial for its sustainable and effective integration into healthcare and research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
87. A Graph-Based Topic Modeling Approach to Detection of Irrelevant Citations.
- Author
-
Pham, Phu, Le, Hieu, Tam, Nguyen Thanh, and Tran, Quang-Dieu
- Subjects
NATURAL language processing ,DEEP learning ,MACHINE learning ,INFORMATION retrieval - Abstract
In recent years, academic paper influence analysis has been widely studied due to its potential applications in multiple areas of scientific information metrics and retrieval. By identifying the academic influence of papers, authors, etc., we can directly support researchers in easily reaching academic papers. These recommended candidate papers are not only highly relevant to their desired research topics but also highly attended to by the research community within these topics. In very recent years, the rapid development of academic networks, like Google Scholar, ResearchGate, CiteSeerX, etc., has significantly boosted the number of new papers published annually. It also helps to strengthen borderless cooperation between researchers who are interested in the same research topics. However, these current academic networks still lack the capability of guiding researchers toward the most influential papers. They also largely ignore quasi-relevant and irrelevant papers, which are not fully related to researchers' current topics of interest. Moreover, the distributions of topics within these academic papers vary, and it is difficult to extract the main concentrated topics in these papers. This leads to challenges for researchers in finding appropriate, high-quality reference resources while doing research. To overcome this limitation, in this paper, we propose a novel approach to paper influence analysis through content-based and citation relationship-based analyses within the bibliographic network. In order to effectively extract topic-based relevance from papers, we apply integrated graph-based citation relationship analysis with a topic modeling approach to automatically learn the distributions of keyword-based labeled topics in the form of an unsupervised learning approach, named TopCite. Then, we use the constructed graph-based paper–topic structure to identify relevancy levels.
Based on the identified relevancy levels between papers, we can support improving the accuracy performance of other bibliographic network mining tasks, such as paper similarity measurement, recommendation, etc. Extensive experiments on the real-world AMiner bibliographic dataset demonstrate the effectiveness of the ideas proposed in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
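The relevancy-level idea, scoring each citation by topic overlap with the citing paper and flagging low-overlap ones, can be caricatured with Jaccard similarity over topic labels. A hypothetical Python sketch standing in for the TopCite graph/topic-model pipeline; the threshold and labels are invented:

```python
def topic_relevance(topics_a, topics_b):
    """Jaccard overlap of topic labels between two papers -- a simple
    stand-in for the learned graph-based relevancy score."""
    a, b = set(topics_a), set(topics_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_irrelevant_citations(paper_topics, cited, threshold=0.2):
    """Flag cited papers whose topic overlap with the citing paper
    falls below a threshold."""
    return [c for c, t in cited.items()
            if topic_relevance(paper_topics, t) < threshold]
```

The actual method learns topic distributions unsupervised and scores over the citation graph, but the flagging logic has this shape.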
88. Recent advances in crack detection technologies for structures: a survey of 2022-2023 literature.
- Author
-
Kaveh, Hessam, Alhajj, Reda, Shah, Pritesh, and Kulkarni, Sanjay
- Subjects
TRANSFORMER models ,FRACTURE mechanics ,INFRASTRUCTURE (Economics) ,MAINTENANCE costs ,MACHINE learning - Abstract
Introduction: Cracks, as structural defects or fractures in materials like concrete, asphalt, and metal, pose significant challenges to the stability and safety of various structures. Addressing crack detection is of paramount importance due to its implications for public safety, infrastructure integrity, maintenance costs, asset longevity, preventive maintenance, economic impact, and environmental considerations. Methods: In this survey paper, we present a comprehensive analysis of recent advancements and developments in crack detection technologies for structures, with a specific focus on articles published between 2022 and 2023. Our methodology involves an exhaustive search of the Scopus database using keywords related to crack detection and machine learning techniques. Among the 129 papers reviewed, 85 were closely aligned with our research focus. Results: We explore datasets that underpin crack detection research, categorizing them as public datasets, papers with their own datasets, and those using a hybrid approach. The prevalence and usage patterns of public datasets are presented, highlighting datasets like Crack500, Crack Forest Dataset (CFD), and Deep Crack. Furthermore, papers employing proprietary datasets and those combining public and proprietary sources are examined. The survey comprehensively investigates the algorithms and methods utilized, encompassing CNN, YOLO, UNet, ResNet, and others, elucidating their contributions to crack detection. Evaluation metrics such as accuracy, precision, recall, F1-score, and IoU are discussed in the context of assessing model performance. The results of the 85 papers are summarized, demonstrating advancements in crack detection accuracy, efficiency, and applicability. Discussion: Notably, we observe a trend towards using modern and novel algorithms, such as Vision Transformers (ViT), and a shift away from traditional methods. 
The conclusion encapsulates the current state of crack detection research, highlighting the integration of multiple algorithms, expert models, and innovative data collection techniques. As a future direction, the adoption of emerging algorithms like ViT is suggested. This survey paper serves as a valuable resource for researchers, practitioners, and engineers working in the field of crack detection, offering insights into the latest trends, methodologies, and challenges. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
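The evaluation metrics named in the abstract above (precision, recall, F1-score, IoU) are standard pixel-level measures for crack segmentation and can be computed directly from confusion counts. A minimal sketch, with an illustrative function name and example counts (not taken from any surveyed paper):

```python
def segmentation_metrics(tp, fp, fn):
    """Compute precision, recall, F1-score, and IoU from pixel-level
    true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    # IoU: overlap of predicted and true crack pixels over their union
    iou = tp / (tp + fp + fn)
    return precision, recall, f1, iou

# Example: 80 crack pixels detected correctly, 20 false alarms, 20 missed
p, r, f1, iou = segmentation_metrics(tp=80, fp=20, fn=20)
```

Note that F1 and IoU rank models identically (both are monotone in tp/(tp+fp+fn)), which is why surveyed papers often report only one of the two.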
89. Data Analytics for the Effect Of Digital Inclusive Finance on Farmers' Entrepreneurship Decisions in Rural Areas Using the Supervised Learning-Based Regression in Rural China.
- Author
-
Zheng Li, Ho-Jyun Wong, Shiwei Tan, and Weikun Zhang
- Subjects
MACHINE learning ,FINANCIAL inclusion ,DATA analytics ,HIGH technology industries ,DIGITAL inclusion - Abstract
Farmers' entrepreneurship is essential to developing the countryside and improving farmers' income. With the arrival of the digital economy, rural formal finance (FF), brought about by digital inclusive finance (DIF), has promoted entrepreneurship among farmers. Based on data analytics from 2021 to 2023 in China, this paper empirically performs a quantitative analysis of the effect of DIF on farmers' entrepreneurship decisions (FED). It further addresses the role of FF as an alternative to informal finance (IF) in the process of FED from a social network (SN) perspective, using supervised learning-based regression. The results show that DIF can significantly promote farmers' entrepreneurship (FE), especially self-employed off-farm entrepreneurship. DIF has a positive effect on FE among farmers with low levels of education and weak social networks, suggesting that DIF plays a truly "inclusive" role. Further mechanistic studies found that DIF, through FF, compensates for farmers' former reliance on SN for financing through IF channels, increasing their financial accessibility, reducing their borrowing costs, and thus promoting their entrepreneurship. Our estimation results, obtained from a supervised learning algorithm with an instrumental variable, provide scientific implications for promoting DIF alongside improvement of the rural credit system to support farmers' production and operation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
90. Comment on Martínez-Delgado et al. Using Absorption Models for Insulin and Carbohydrates and Deep Leaning to Improve Glucose Level Predictions. Sensors 2021, 21, 5273.
- Author
-
Misplon, Josiah Z. R., Saini, Varun, Sloves, Brianna P., Meerts, Sarah H., and Musicant, David R.
- Subjects
INSULIN ,CARBOHYDRATES ,TYPE 1 diabetes ,GLUCOSE ,MACHINE learning ,ABSORPTION - Abstract
The paper "Using Absorption Models for Insulin and Carbohydrates and Deep Leaning to Improve Glucose Level Predictions" (Sensors 2021, 21, 5273) proposes a novel approach to predicting blood glucose levels for people with type 1 diabetes mellitus (T1DM). By building exponential models from raw carbohydrate and insulin data to simulate the absorption in the body, the authors reported a reduction in their model's root-mean-square error (RMSE) from 15.5 mg/dL (raw) to 9.2 mg/dL (exponential) when predicting blood glucose levels one hour into the future. In this comment, we demonstrate that the experimental techniques used in that paper are flawed, which invalidates its results and conclusions. Specifically, after reviewing the authors' code, we found that the model validation scheme was malformed: training and test data from the same time intervals were mixed. This means that the reported RMSE numbers in the referenced paper did not accurately measure the predictive capabilities of the approaches that were presented. We repaired the measurement technique by appropriately isolating the training and test data, and we discovered that their models actually performed dramatically worse than was reported in the paper. In fact, the models presented in that paper do not appear to perform any better than a naive model that predicts future glucose levels to be the same as the current ones. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
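The validation flaw described in the comment above (training and test data mixed from the same time intervals) is avoided by splitting time-series data chronologically rather than at random, so that every test sample lies strictly after all training samples. A hypothetical sketch of the idea (function name and data are illustrative, not from the commented paper):

```python
def chronological_split(samples, train_fraction=0.8):
    """Split time-ordered samples so that every test sample comes
    strictly after every training sample. A random split would leak
    temporally adjacent, near-duplicate readings into both sets and
    inflate apparent accuracy."""
    cut = int(len(samples) * train_fraction)
    return samples[:cut], samples[cut:]

# Stand-in for 100 time-ordered glucose readings
readings = list(range(100))
train, test = chronological_split(readings)
assert max(train) < min(test)  # no temporal overlap between the sets
```

The same principle underlies the naive baseline mentioned in the abstract: a model that simply carries the current glucose value forward is only beatable under a leak-free chronological evaluation.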
91. Machine Learning and Graph Signal Processing Applied to Healthcare: A Review.
- Author
-
Calazans, Maria Alice Andrade, Ferreira, Felipe A. B. S., Santos, Fernando A. N., Madeiro, Francisco, and Lima, Juliano B.
- Subjects
PATTERN recognition systems ,SIGNAL processing ,DEEP learning ,GRAPH theory ,SIGNALS & signaling - Abstract
Signal processing is a very useful field of study for interpreting signals in many everyday applications. For applications with time-varying signals, one possibility is to model them as graphs, giving rise to graph signal processing, which extends classical methods to the non-Euclidean domain. In addition, machine learning techniques have been widely used for pattern recognition in a wide variety of tasks, including the health sciences. The objective of this work is to identify and analyze the papers in the literature that apply machine learning to graph signal processing in the health sciences. A search was performed in four databases (Science Direct, IEEE Xplore, ACM, and MDPI), using search strings to identify papers within the scope of this review. Finally, 45 papers were included in the analysis, the first published in 2015, which indicates an emerging area. Among the gaps found, we note the need for better clinical interpretability of the reported results, that is, not restricting the results or conclusions to performance metrics alone. In addition, a possible research direction is the use of new transforms. It is also important to make new public datasets available that can be used to train the models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
92. TILIME CROP YIELD PREDICTION USING MACHINE LEARNING ALGORITHMS.
- Author
-
KAMANGIRA, STEVE OSCAR and MEDI, CHIPATSO
- Subjects
CROP yields ,SUBSISTENCE farming ,MACHINE learning ,SUSTAINABLE agriculture ,SOIL quality - Abstract
Agriculture stands as the bedrock of Malawi's economy, involving nearly 90% of the population in subsistence farming. However, the sector faces challenges arising from unpredictable weather patterns, climate shifts, and environmental factors that threaten its sustainability. This paper proposes a pioneering solution leveraging Machine Learning (ML) to address these challenges, presenting a robust decision support system for Crop Yield Prediction (CYP). By harnessing ML capabilities, the system aids in crucial decisions related to crop selection and management throughout the growing season, specifically tailored for the unique agricultural landscape of Malawi. This approach aims to empower farmers by providing valuable insights into soil quality, composition, and nutrients, enabling informed decisions to maximize crop yield. Through the integration of advanced technology into the agricultural domain, this paper seeks to usher in a transformative era for Malawian agriculture, fostering resilience and sustainability in the face of evolving environmental dynamics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
93. Machine-Learning-Based Prediction Modeling for Debris Flow Occurrence: A Meta-Analysis.
- Author
-
Yang, Lianbing, Ge, Yonggang, Chen, Baili, Wu, Yuhong, and Fu, Runde
- Subjects
DEBRIS avalanches ,PREDICTION models ,EVIDENCE gaps ,SCIENCE databases ,WEB databases ,MACHINE learning - Abstract
Machine learning (ML) has become increasingly popular in the prediction of debris flow occurrence, but the various ML models utilized as baseline predictors reported in previous studies are typically limited to individual case bases. A comprehensive and systematic evaluation of existing empirical evidence on the utilization of ML as baseline predictors for debris flow occurrence is lacking. To address this gap, we conducted a meta-analysis of ML-based prediction modeling of debris flow occurrence by retrieving papers published between 2000 and 2023 from the Scopus and Web of Science databases. The general findings were as follows: (1) A total of 84 papers were published in this period, distributed across 37 different journals, reflecting an overall upward trend. (2) Debris flow disasters occur throughout the world, and a total of 13 countries carried out research on the prediction of debris flow occurrence based on ML; China made significant contributions, but more research efforts in African countries should be considered. (3) A total of 36 categories of ML models were utilized as baseline predictors for debris flow occurrence, with logistic regression (LR) and random forest (RF) emerging as the most popular choices. (4) Feature engineering and model comparison were the most commonly utilized strategies in predicting debris flow occurrence based on ML (53 and 46 papers, respectively). (5) Interpretation methods were rarely utilized, with only 16 papers reporting their use. (6) Searching by data materials was the most important sample data source, topographic factors were the most commonly utilized category of candidate variables, and the area under the ROC curve (AUROC) was the most frequently reported evaluation metric. 
(7) LR's prediction performance for debris flow occurrence was inferior to that of RF, BPNN, and SVM; SVM was comparable to RF, and all superior to BPNN. (8) The application process for the prediction of debris flow occurrence based on ML consisted of three main steps: data preparation, model construction and evaluation, and prediction outcomes. The research gaps in predicting debris flow occurrence based on ML include utilizing new ML techniques and enhancing the interpretability of ML. Consequently, this study contributes both to academic ML research and to practical applications in the prediction of debris flow occurrence. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
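AUROC, the metric most frequently reported in the papers surveyed above, can be computed without any ML library via its rank interpretation: the probability that a randomly chosen positive (debris-flow) site is scored above a randomly chosen negative one, counting ties as half. A minimal sketch with illustrative scores and labels:

```python
def auroc(scores, labels):
    """Area under the ROC curve via the pairwise-ranking interpretation:
    the fraction of (positive, negative) pairs the model orders
    correctly, with ties counted as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Example: a model that scores debris-flow-prone sites above safe ones
score = auroc(scores=[0.9, 0.8, 0.3, 0.2], labels=[1, 1, 0, 0])  # 1.0
```

This O(P·N) pairwise form is fine for illustration; production code would typically use a sorted-rank (Mann-Whitney U) computation or a library routine instead.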
94. Deep Learning for 3D Reconstruction, Augmentation, and Registration: A Review Paper.
- Author
-
Vinodkumar, Prasoon Kumar, Karabulut, Dogus, Avots, Egils, Ozcinar, Cagri, and Anbarjafari, Gholamreza
- Subjects
DEEP learning ,COMPUTER vision ,GRAPH neural networks ,ARTIFICIAL intelligence ,MACHINE learning ,GENERATIVE adversarial networks - Abstract
The research groups in computer vision, graphics, and machine learning have dedicated a substantial amount of attention to the areas of 3D object reconstruction, augmentation, and registration. Deep learning is the predominant method used in artificial intelligence for addressing computer vision challenges. However, deep learning on three-dimensional data presents distinct obstacles and is now in its nascent phase. There have been significant advancements in deep learning specifically for three-dimensional data, offering a range of ways to address these issues. This study offers a comprehensive examination of the latest advancements in deep learning methodologies. We examine many benchmark models for the tasks of 3D object registration, augmentation, and reconstruction. We thoroughly analyse their architectures, advantages, and constraints. In summary, this report provides a comprehensive overview of recent advancements in three-dimensional deep learning and highlights unresolved research areas that will need to be addressed in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
95. Distributed Solar Generation: Current Knowledge and Future Trends.
- Author
-
Ali, Gasser G. and El-adaway, Islam H.
- Subjects
DISTRIBUTED power generation ,SOCIAL network analysis ,MACHINE learning ,INTERDISCIPLINARY research ,ARTIFICIAL intelligence - Abstract
Distributed solar generation (DSG) has been growing over the previous years because of its numerous advantages of being sustainable, flexible, reliable, and increasingly affordable. DSG is a broad and multidisciplinary research field because it relates to various fields in engineering, social sciences, economics, public policy, and others. Developing a holistic understanding of the state of research related to DSG can be difficult. Motivated to provide that understanding, the goal of this paper is to explore current and emerging multidisciplinary research trends associated with DSG. To achieve that, (1) a large data set of approximately 66,000 publications was collected; (2) the papers were labeled using keywords for topics including "Batteries and Storage," "Solar," "Complex Modeling," "Machine Learning (ML) and Artificial Intelligence (AI)," "Resilience, Vulnerability, and Disasters," "Policies and Incentives," "Social Aspects," "Economics," "Smart Grid," "Finance," "Social Equity," "Microgrid," and "Virtual Power Plant"; and (3) the data set was analyzed using scientometric and social network analysis (SNA) in respect to publication counts, citation counts, and interconnectivity between the topics. Notable findings were analyzed to describe current and emerging trends. It was found that social equity has high citation counts contrasted by few publications, indicating a possible strong need for research. There is also rapidly growing research in ML and AI in the context of DSG during recent years. Other research topics, such as smart grids, have been attracting fewer publications. The results also highlight the need for multidisciplinary research connecting the topics. To conclude, future research is suggested to explore research needs in the areas of social aspects, social justice and equity, public policy and incentives, and ML/AI. 
The findings should benefit researchers and stakeholders with a holistic understanding of multidisciplinary DSG-related research and provide insight for planning new research and funding opportunities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
96. A systematic literature review of weak signal identification and evolution for corporate foresight.
- Author
-
Zhao, Dongyuan, Tang, Zhongjun, and He, Duokui
- Subjects
MACHINE theory ,SIGNAL processing ,THREE-dimensional modeling ,MACHINE learning ,HUMAN resources departments - Abstract
Purpose: With the intensification of market competition, there is a growing demand for weak signal identification and evolutionary analysis for enterprise foresight. For decades, many scholars have conducted relevant research. However, the existing research only cuts in from a single angle and lacks a systematic and comprehensive overview. In this paper, the authors summarize the articles related to weak signal recognition and evolutionary analysis, in an attempt to contribute to relevant research. Design/methodology/approach: The authors develop a systematic overview framework based on the most classical three-dimensional space model of weak signals. The framework comprehensively summarizes the current research insights and knowledge from three dimensions: research field, identification methods and interpretation methods. Findings: The research results show that it is necessary to improve the automation level in the process of weak signal recognition and analysis and transfer valuable human resources to the decision-making stage. In addition, it is necessary to coordinate multiple types of data sources, expand research subfields and optimize weak signal recognition and interpretation methods, with a view to expanding weak signal future research, making theoretical and practical contributions to enterprise foresight, and providing reference for the government to establish weak signal technology monitoring, evaluation and early warning mechanisms. Originality/value: The authors develop a systematic overview framework based on the most classical three-dimensional space model of weak signals. It comprehensively summarizes the current research insights and knowledge from three dimensions: research field, identification methods and interpretation methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
97. How to use no-code artificial intelligence to predict and minimize the inventory distortions for resilient supply chains.
- Author
-
Jauhar, Sunil Kumar, Jani, Shashank Mayurkumar, Kamble, Sachin S., Pratap, Saurabh, Belhadi, Amine, and Gupta, Shivam
- Subjects
MACHINE learning ,ARTIFICIAL intelligence ,SUPPLY chains ,INVENTORY control ,ELECTRONIC commerce ,INVENTORIES ,SUPPLY chain disruptions - Abstract
Dramatic swings in consumer demand have a pernicious effect throughout the supply chain. They exacerbate inventory distortion, which causes significant revenue loss through stock-level issues. Despite the availability of several forecasting techniques, large organisations, manufacturing firms, and e-commerce websites collectively lose around $1.8 trillion annually to inventory distortion. If this problem were solved, sales could increase by 10.3 percent. Businesses are keen to mitigate this loss. Artificial intelligence (AI) can play a significant role in building resilient supply chains. However, developing AI models consumes time and cost. In this paper, we propose No Code Artificial Intelligence (NCAI), enabling non-technical companies to build machine learning models for production quantity and inventory replenishment. Development of an NCAI model is fast and inexpensive. However, little research deals with applying NCAI to operations and supply chain problems. Addressing this gap, we show the application of NCAI in the retail industry. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
98. Towards knowledge graph reasoning for supply chain risk management using graph neural networks.
- Author
-
Kosasih, Edward Elson, Margaroli, Fabrizio, Gelli, Simone, Aziz, Ajmal, Wildgoose, Nick, and Brintrup, Alexandra
- Subjects
GRAPH neural networks ,SUPPLY chain management ,KNOWLEDGE graphs ,SUPPLY chain disruptions ,MACHINE learning - Abstract
Modern supply chains are complex, interconnected systems that contain emergent, invisible dependencies. Lack of visibility often hinders effective risk planning and results in delayed discovery of supply chain problems, with examples ranging from product contamination, unsustainable production practices, or exposure to suppliers clustered in geographical areas prone to natural or man-made disasters. Initiatives that rely on manual collection of data often fail due to supply chain complexity and unwillingness of suppliers to share data. In this paper, we propose a neurosymbolic machine learning technique to proactively uncover hidden risks in supply chains and discover new information. Our method uses a combination of graph neural networks and knowledge graph reasoning. Unlike existing research our model is able to infer multiple types of hidden relationship risks, presenting a step change in automated supply chain surveillance. The approach has been tested on two empirical datasets from the automotive and energy industries, illustrating that it can provide inference in multiple types of links such as companies, products, production capabilities, certifications; thereby facilitating complex queries that go beyond who-supplies-whom. As such, additional risk insights can emerge from graph structure, providing practitioners with new knowledge. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
99. Operational policies and performance analysis for overhead robotic compact warehousing systems with bin reshuffling.
- Author
-
Wang, Rong, Yang, Peng, Gong, Yeming, and Chen, Cheng
- Subjects
AUTOMATED storage retrieval systems ,POLICY analysis ,ROBOTICS ,REINFORCEMENT learning ,WAREHOUSES ,BINS - Abstract
This paper studies a novel robotic warehousing system, the overhead robotic compact storage and retrieval system, which frees up floor space at low cost. Bins, the basic storage containers, are stacked on top of each other to form bin stacks. Along overhead tracks, bin-picking robots transport bins between storage/retrieval positions and workstations with the aid of track-changing robots. Little research has studied operational policies and performance analysis for this new robotic compact warehousing system. We propose a nested queuing network model that considers two transportation resources and apply reinforcement learning on real data to improve reshuffling efficiency. We find that the reinforcement learning-based reshuffling policy greatly reduces the reshuffling distance and saves computation time compared to existing policies. We also find that the storage policy of stacks affects the optimal width/length ratio regardless of the system height. Interestingly, we obtain the number of robots that stabilises the system and avoids an explosion of the order queue; two more robots than that number produce relatively low throughput times. Compared to an AutoStore system, our system reduces cost by 30% with a slight increase in throughput time. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
100. A sequential cross-product knowledge accumulation, extraction and transfer framework for machine learning-based production process modelling.
- Author
-
Xie, Jiarui, Zhang, Chonghui, Sage, Manuel, Safdar, Mutahar, and Zhao, Yaoyao Fiona
- Subjects
MACHINE tools ,MANUFACTURING processes ,AUXETIC materials ,FEATURE selection ,GAS turbines - Abstract
Machine learning is a promising method to model production processes and predict product quality. It is challenging to accurately model complex systems due to data scarcity, as mass customisation leads to various high-variety low-volume products. This study conceptualised knowledge accumulation, extraction, and transfer (KAET) to exploit the knowledge embedded in similar entities to address data scarcity. A sequential cross-product KAET framework (SeqTrans) is proposed, integrating data preparation and preprocessing, feature selection (FS), feature learning (FL), and transfer learning (TL). The FS and FL modules conduct knowledge extraction and help address various practical challenges such as changing operating conditions and unbalanced datasets. In this paper, sequential TL is introduced to production modelling to conduct knowledge transfer among multiple entities. The first case study, auxetic material performance prediction, demonstrates the effectiveness of sequential TL. Compared with conventional TL, sequential TL can achieve the same test mean square errors with 300 fewer training examples when facing data scarcity. In the second case study, balancing anomaly detection models were constructed for two gas turbines in the same series using real-world production data. With SeqTrans, the F1-score of the anomaly detection model of the data-poor engine was improved from 0.769 to 0.909. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF