Search Results (507 results)
2. Numbers Do Not Lie: A Bibliometric Examination of Machine Learning Techniques in Fake News Research.
- Author
- Sandu, Andra, Ioanăș, Ioana, Delcea, Camelia, Florescu, Margareta-Stela, and Cotfas, Liviu-Adrian
- Subjects
- FAKE news, MACHINE learning, BIBLIOMETRICS, WEB analytics, RESEARCH personnel, ELECTRONIC publications, NEWS websites
- Abstract
Fake news is an explosive subject, being undoubtedly among the most controversial and difficult challenges facing society in the present-day environment of technology and information, which greatly affects individuals who are vulnerable and easily influenced, shaping their decisions, actions, and even beliefs. In the course of discussing the gravity and dissemination of the fake news phenomenon, this article aims to clarify the distinctions between fake news, misinformation, and disinformation, along with conducting a thorough analysis of the most widely read academic papers that have tackled the topic of fake news research using various machine learning techniques. Utilizing specific keywords for dataset extraction from Clarivate Analytics' Web of Science Core Collection, the bibliometric analysis spans six years, offering valuable insights aimed at identifying key trends, methodologies, and notable strategies within this multidisciplinary field. The analysis encompasses the examination of prolific authors, prominent journals, collaborative efforts, prior publications, covered subjects, keywords, bigrams, trigrams, theme maps, co-occurrence networks, and various other relevant topics. One noteworthy aspect related to the extracted dataset is the remarkable growth rate observed in association with the analyzed subject, indicating an impressive increase of 179.31%. The growth rate value, coupled with the relatively short timeframe, further emphasizes the research community's keen interest in this subject. In light of these findings, the paper draws attention to key contributions and gaps in the existing literature, providing researchers and decision-makers with innovative viewpoints and perspectives on the ongoing battle against the spread of fake news in the age of information. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Comparison of Reinforcement Learning Algorithms for Edge Computing Applications Deployed by Serverless Technologies.
- Author
- Femminella, Mauro and Reali, Gianluca
- Subjects
- MACHINE learning, ARTIFICIAL intelligence, EDGE computing, COMPUTER systems, DATA protection
- Abstract
Edge computing is one of the technological areas currently considered among the most promising for the implementation of many types of applications. In particular, IoT-type applications can benefit from reduced latency and better data protection. However, the price typically paid for these opportunities is the need to operate with fewer resources than in the traditional cloud environment. Indeed, it may happen that only one computing node can be used. In these situations, it is essential to introduce computing and memory resource management techniques that allow resources to be optimized while still guaranteeing acceptable performance, in terms of latency and probability of rejection. For this reason, the use of serverless technologies, managed by reinforcement learning algorithms, is an active area of research. In this paper, we explore and compare the performance of some machine learning algorithms for managing horizontal function autoscaling in a serverless edge computing system. In particular, we make use of open serverless technologies, deployed in a Kubernetes cluster, to experimentally fine-tune the performance of the algorithms. The results obtained provide both an understanding of some basic mechanisms of edge computing systems and related technologies that determine system performance, and guidance for the configuration choices of systems in operation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
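The autoscaling problem described in the abstract above can be framed as a small reinforcement learning loop. As a hedged illustration (not the paper's setup: the state discretization, reward function, and all names below are invented for the sketch), a tabular Q-learning agent choosing replica counts might look like:

```python
import random

# Minimal tabular Q-learning sketch for horizontal function autoscaling.
# States are discretized load levels paired with the replica count;
# actions adjust the replica count. The toy reward is illustrative.

ACTIONS = (-1, 0, +1)          # remove a replica, hold, add a replica
N_LOAD_LEVELS = 5              # discretized request-rate buckets
MAX_REPLICAS = 8

def reward(load, replicas):
    # Penalize rejections (under-provisioning) and idle resources.
    capacity = replicas                 # toy assumption: 1 load unit per replica
    rejected = max(0, load - capacity)
    idle = max(0, capacity - load)
    return -(2.0 * rejected + 0.5 * idle)

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {}                              # (load, replicas) -> [q-value per action]
    for _ in range(episodes):
        load, replicas = rng.randrange(N_LOAD_LEVELS), 1
        for _ in range(20):             # fixed-length episode
            s = (load, replicas)
            qs = q.setdefault(s, [0.0, 0.0, 0.0])
            # epsilon-greedy action selection
            a = rng.randrange(3) if rng.random() < eps else max(range(3), key=qs.__getitem__)
            replicas = min(MAX_REPLICAS, max(1, replicas + ACTIONS[a]))
            load = rng.randrange(N_LOAD_LEVELS)      # load fluctuates randomly
            r = reward(load, replicas)
            next_qs = q.setdefault((load, replicas), [0.0, 0.0, 0.0])
            qs[a] += alpha * (r + gamma * max(next_qs) - qs[a])
    return q

policy = train()
```

In a real deployment the "state" would come from cluster metrics and the "action" would drive the serverless platform's scaler; the sketch only shows the learning loop itself.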
4. Domain-Specific Few-Shot Table Prompt Question Answering via Contrastive Exemplar Selection.
- Author
- Mo, Tianjin, Xiao, Qiao, Zhang, Hongyi, Li, Ren, and Wu, Yunsong
- Subjects
- LANGUAGE models, NATURAL language processing, SQL, DESIGN templates, NATURAL languages, QUESTION answering systems
- Abstract
As a crucial task in natural language processing, table question answering has garnered significant attention from both the academic and industrial communities. It enables intelligent querying and question answering over structured data by translating natural language into corresponding SQL statements. Recently, there have been notable advancements in the general domain table question answering task, achieved through prompt learning with large language models. However, in specific domains, where tables often have a higher number of columns and questions tend to be more complex, large language models are prone to generating invalid SQL or NoSQL statements. To address the above issue, this paper proposes a novel few-shot table prompt question answering approach. Specifically, we design a prompt template construction strategy for structured SQL generation. It utilizes prompt templates to restructure the input for each test instance and standardizes the model output, which can enhance the integrity and validity of generated SQL. Furthermore, this paper introduces a contrastive exemplar selection approach based on the question patterns and formats in domain-specific contexts. This enables the model to quickly retrieve the relevant exemplars and learn the characteristics of a given question. Experimental results on two datasets in the domains of electric energy and structural inspection show that the proposed approach outperforms the baseline models across all comparison settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
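The prompt-template construction strategy sketched in the abstract above (restructuring each test input and standardizing the output format) can be illustrated with a toy builder. The template wording, schema, and exemplars below are hypothetical, not the paper's:

```python
# Sketch of the prompt-template idea: restructure each test question with
# the table schema and a few retrieved exemplars so the model emits SQL
# in a fixed, easy-to-validate format.

def build_prompt(schema, exemplars, question):
    lines = ["Translate the question into a single SQL statement.",
             f"Table schema: {schema}", ""]
    for ex_q, ex_sql in exemplars:          # few-shot demonstrations
        lines += [f"Question: {ex_q}", f"SQL: {ex_sql}", ""]
    lines += [f"Question: {question}", "SQL:"]
    return "\n".join(lines)

schema = "readings(station TEXT, ts TIMESTAMP, voltage REAL)"
exemplars = [
    ("What is the highest voltage at station S1?",
     "SELECT MAX(voltage) FROM readings WHERE station = 'S1';"),
]
prompt = build_prompt(schema, exemplars, "How many readings does station S2 have?")
```

The paper's contrastive exemplar selection would decide which exemplars go into the list; here a single fixed exemplar stands in for that retrieval step.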
5. Information Retrieval and Machine Learning Methods for Academic Expert Finding.
- Author
- de Campos, Luis M., Fernández-Luna, Juan M., Huete, Juan F., Ribadas-Pena, Francisco J., and Bolaños, Néstor
- Subjects
- MACHINE learning, INFORMATION retrieval, DEEP learning, RECOMMENDER systems, ATTRIBUTION of authorship
- Abstract
In the context of academic expert finding, this paper investigates and compares the performance of information retrieval (IR) and machine learning (ML) methods, including deep learning, to approach the problem of identifying academic figures who are experts in different domains when a potential user requests their expertise. IR-based methods construct multifaceted textual profiles for each expert by clustering information from their scientific publications. Several methods fully tailored for this problem are presented in this paper. In contrast, ML-based methods treat expert finding as a classification task, training automatic text classifiers using publications authored by experts. By comparing these approaches, we contribute to a deeper understanding of academic-expert-finding techniques and their applicability in knowledge discovery. These methods are tested with two large datasets from the biomedical field: PMSC-UGR and CORD-19. The results show that the IR techniques were, in general, more robust and more suitable than the ML-based ones on both datasets, with some exceptions among the latter showing good performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
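The ML view described in the abstract above (expert finding as text classification over experts' publications) can be sketched minimally. The TF-IDF + nearest-centroid classifier and the toy corpus below are illustrative stand-ins, not the paper's models or data:

```python
import math
from collections import Counter

# Sketch: treat expert finding as classification, training on publications
# authored by each expert. A tiny TF-IDF + nearest-centroid classifier.

def tfidf_vectors(docs):
    df = Counter()
    for d in docs:
        df.update(set(d.split()))       # document frequency per word
    n = len(docs)
    vecs = []
    for d in docs:
        tf = Counter(d.split())
        vecs.append({w: tf[w] * math.log((1 + n) / (1 + df[w])) for w in tf})
    return vecs

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def centroid(vecs):
    c = Counter()
    for v in vecs:
        for w, x in v.items():
            c[w] += x / len(vecs)
    return dict(c)

# Toy corpus: two "experts", two publications each.
pubs = {"expert_a": ["protein folding structure prediction",
                     "molecular dynamics protein simulation"],
        "expert_b": ["wireless network routing protocols",
                     "network congestion control algorithms"]}
all_docs = [d for ds in pubs.values() for d in ds]
vecs = tfidf_vectors(all_docs)
centroids, i = {}, 0
for name, ds in pubs.items():
    centroids[name] = centroid(vecs[i:i + len(ds)])
    i += len(ds)

def find_expert(query):
    qv = tfidf_vectors(all_docs + [query])[-1]
    return max(centroids, key=lambda n: cosine(qv, centroids[n]))
```

The IR-based methods in the paper would instead build multifaceted profiles per expert; this sketch shows only the classification framing.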
6. Artificial Intelligence in Modeling and Simulation.
- Author
- Fachada, Nuno and David, Nuno
- Subjects
- ARTIFICIAL intelligence, ARTIFICIAL neural networks, GENERATIVE artificial intelligence, AUTOMATED storage retrieval systems, SCIENTIFIC knowledge
- Abstract
This document is a summary of a journal article titled "Artificial Intelligence in Modeling and Simulation." The article discusses the integration of artificial intelligence (AI) into modeling and simulation (M&S) processes. It highlights the various applications of AI in fields such as engineering, physics, social sciences, and biology. The article also provides an overview of 11 selected papers from a special issue on AI and M&S, covering topics such as AI techniques for simulation and optimization, AI in agent-based modeling, AI for data processing and classification models, and artificial neural network (ANN) methods for improved M&S. The papers explore different methodologies and approaches to enhance the efficiency and validity of modeling and simulation using AI. The article concludes by emphasizing the progress and diverse uses of AI in M&S and expressing gratitude to the authors, reviewers, and editorial team involved in the special issue. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
7. Linear System Identification-Oriented Optimal Tampering Attack Strategy and Implementation Based on Information Entropy with Multiple Binary Observations.
- Author
- Bai, Zhongwei, Yu, Peng, Liu, Yan, and Guo, Jin
- Subjects
- STRATEGIC planning, PARTICLE swarm optimization, CYBER physical systems, COMPUTER engineering, TELECOMMUNICATION, LINEAR systems, ENTROPY (Information theory)
- Abstract
With the rapid development of computer technology, communication technology, and control technology, cyber-physical systems (CPSs) have been widely used and developed. However, there are massive information interactions in CPSs, which lead to an increase in the amount of data transmitted over the network. Data communication, once attacked over the network, will seriously affect the security and stability of the system. In this paper, for the data tampering attack existing in linear systems with multiple binary observations, in the case where the estimation algorithm of the defender is unknown, an optimization index is constructed based on information entropy from the attacker's point of view, and the problem is modeled. For this problem of multi-parameter optimization with energy constraints, this paper uses particle swarm optimization (PSO) to obtain the optimal data tampering attack solution set, and gives an estimation method for the case of unknown parameters. To improve the real-time performance of the online implementation, a BP neural network is designed. Finally, the validity of the conclusions is verified through numerical simulation. This means that the attacker can construct effective metrics based on information entropy without knowledge of the defender's discrimination algorithm. In addition, the optimal attack strategy implementation based on PSO and BP is also effective. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
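The abstract above relies on particle swarm optimization to search the attack-parameter space. A generic, minimal PSO sketch follows; the toy objective stands in for the paper's entropy-based index, and every parameter value is illustrative:

```python
import random

# Minimal particle swarm optimization (PSO): minimize an objective under
# a box constraint. Velocities blend inertia, a pull toward each
# particle's personal best, and a pull toward the global best.

def pso(objective, dim=2, n_particles=20, iters=100,
        bounds=(-5.0, 5.0), w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: a shifted sphere with minimum 0 at (1, 2).
best, best_val = pso(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2)
```

The paper's energy constraint would enter either through the bounds or as a penalty term in the objective; neither is modeled in this toy run.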
8. Bi-Objective, Dynamic, Multiprocessor Open-Shop Scheduling: A Hybrid Scatter Search–Tabu Search Approach.
- Author
- Abdelmaguid, Tamer F.
- Subjects
- GREY Wolf Optimizer algorithm, SEARCH algorithms, METAHEURISTIC algorithms, GENETIC algorithms, NP-hard problems, TABU search algorithm
- Abstract
This paper presents a novel, multi-objective scatter search algorithm (MOSS) for a bi-objective, dynamic, multiprocessor open-shop scheduling problem (Bi-DMOSP). The considered objectives are the minimization of the maximum completion time (makespan) and the minimization of the mean weighted flow time. Both are particularly important for improving machines' utilization and customer satisfaction level in maintenance and healthcare diagnostic systems, in which the studied Bi-DMOSP is mostly encountered. Since the studied problem is NP-hard for both objectives, fast algorithms are needed to fulfill the requirements of real-life circumstances. Previous attempts have included the development of an exact algorithm and two metaheuristic approaches based on the non-dominated sorting genetic algorithm (NSGA-II) and the multi-objective gray wolf optimizer (MOGWO). The exact algorithm is limited to small-sized instances; meanwhile, NSGA-II was found to produce better results compared to MOGWO in both small- and large-sized test instances. The proposed MOSS in this paper attempts to provide more efficient non-dominated solutions for the studied Bi-DMOSP. This is achievable via its hybridization with a novel, bi-objective tabu search approach that utilizes a set of efficient neighborhood search functions. Parameter tuning experiments are conducted first using a subset of small-sized benchmark instances for which the optimal Pareto front solutions are known. Then, detailed computational experiments on small- and large-sized instances are conducted. Comparisons with the previously developed NSGA-II metaheuristic demonstrate the superiority of the proposed MOSS approach for small-sized instances. For large-sized instances, it proves its capability of producing competitive results for instances with low and medium density. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. A Virtual Machine Platform Providing Machine Learning as a Programmable and Distributed Service for IoT and Edge On-Device Computing: Architecture, Transformation, and Evaluation of Integer Discretization.
- Author
- Bosse, Stefan
- Subjects
- INSTRUCTION set architecture, FLOATING-point arithmetic, VIRTUAL machine systems, SENSOR networks, DISTRIBUTED sensors
- Abstract
Data-driven models used for predictive classification and regression tasks are commonly computed using floating-point arithmetic and powerful computers. We address constraints in distributed sensor networks like the IoT, edge, and material-integrated computing, providing only low-resource embedded computers with sensor data that are acquired and processed locally. Sensor networks are characterized by strongly heterogeneous systems. This work introduces and evaluates a virtual machine architecture that provides ML as a service layer (MLaaS) on the node level and addresses very low-resource distributed embedded computers (with less than 20 kB of RAM). The VM provides a unified ML instruction set architecture that can be programmed to implement decision trees, ANN, and CNN model architectures using scaled integer arithmetic only. Models are trained primarily offline using floating-point arithmetic and finally converted by an iterative scaling and transformation process, demonstrated in this work by two tests based on simulated and synthetic data. This paper is an extended version of the FedCSIS 2023 conference paper, providing new algorithms and ML applications, including ANN/CNN-based regression and classification tasks studying the effects of discretization on classification and regression accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. An Efficient Optimization of the Monte Carlo Tree Search Algorithm for Amazons.
- Author
- Zhang, Lijun, Zou, Han, and Zhu, Yungang
- Subjects
- CONTESTS, PARALLEL algorithms, BOARD games, SEARCH algorithms, NATIONAL championships
- Abstract
Amazons is a computerized board game with complex positions that are highly challenging for humans. In this paper, we propose an efficient optimization of the Monte Carlo tree search (MCTS) algorithm for Amazons, fusing the 'Move Groups' strategy and the 'Parallel Evaluation' optimization strategy (MG-PEO). Specifically, we explain the high efficiency of the Move Groups strategy by defining a new criterion: the winning convergence distance. We also highlight the strategy's potential issue of falling into a local optimum and propose that the Parallel Evaluation mechanism can compensate for this shortcoming. Moreover, we conducted rigorous performance analyses and experiments. Performance analysis results indicate that the MCTS algorithm with the Move Groups strategy can improve the playing ability of the Amazons game by 20–30 times compared to the traditional MCTS algorithm. The Parallel Evaluation optimization further enhances the playing ability of the Amazons game by 2–3 times. Experimental results show that the MCTS algorithm with the MG-PEO strategy achieves a 23% higher game-winning rate on average compared to the traditional MCTS algorithm. Additionally, the MG-PEO Amazons program proposed in this paper won first prize in the Amazons Competition at the 2023 China Collegiate Computer Games Championship & National Computer Games Tournament. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
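The 'Move Groups' strategy mentioned in the abstract above reduces the branching factor by selecting first among groups of moves, then among moves within the chosen group. A toy two-stage UCB sketch follows (a bandit stand-in, not the Amazons engine; the groups and reward means are invented):

```python
import math, random

# Two-stage "move groups" selection: run UCB over groups first, then over
# the moves inside the chosen group, shrinking the effective branching
# factor compared with flat UCB over all moves.

def ucb(total_reward, visits, parent_visits, c=1.4):
    if visits == 0:
        return float("inf")
    return total_reward / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_move(groups, stats, parent_visits):
    # Stage 1: group-level statistics are the sums over member moves.
    def group_score(g):
        r = sum(stats[m][0] for m in groups[g])
        v = sum(stats[m][1] for m in groups[g])
        return ucb(r, v, parent_visits)
    g = max(groups, key=group_score)
    # Stage 2: ordinary UCB restricted to moves in the chosen group.
    in_group_visits = sum(stats[m][1] for m in groups[g]) or 1
    return max(groups[g], key=lambda m: ucb(stats[m][0], stats[m][1], in_group_visits))

# Toy bandit standing in for playouts: each move has a true win rate.
true_mean = {"a1": 0.2, "a2": 0.8, "b1": 0.4, "b2": 0.3}
groups = {"A": ["a1", "a2"], "B": ["b1", "b2"]}
stats = {m: [0.0, 0] for m in true_mean}   # move -> [total reward, visits]
rng = random.Random(0)
for t in range(1, 2001):
    m = select_move(groups, stats, t)
    stats[m][0] += 1.0 if rng.random() < true_mean[m] else 0.0
    stats[m][1] += 1
most_visited = max(stats, key=lambda m: stats[m][1])
```

In a full MCTS this selection rule would sit inside the tree-descent step; the Parallel Evaluation part of MG-PEO is not modeled here.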
11. Algorithm for Assessment of the Switching Angles in the Unipolar SPWM Technique for Single-Phase Inverters.
- Author
- Ponce-Silva, Mario, Sánchez-Vargas, Óscar, Cortés-García, Claudia, Aguayo-Alquicira, Jesús, and De León-Aldaco, Susana Estefany
- Subjects
- ELECTRIC inverters, DC-AC converters, MATHEMATICAL analysis, ELECTRIC motors, RENEWABLE energy sources
- Abstract
The main contribution of this paper is to present a simple algorithm that theoretically and numerically assesses the switching angles of an inverter operated with the SPWM technique. This technique is the most widely used for eliminating harmonics in DC-AC converters for powering motors, renewable energy applications, household appliances, etc. Unlike conventional implementations of the SPWM technique based on the analog or digital comparison of a sinusoidal signal with a triangular signal, this paper mathematically performs this comparison. It proposes a simple solution to solve the transcendental equations arising from the mathematical analysis numerically. The technique is validated by calculating the total harmonic distortion (THD) of the generated signal theoretically and numerically, and the results indicate that the calculated angles produce the same distribution of harmonics calculated analytically and numerically. The algorithm is limited to single-phase inverters with unipolar SPWM. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
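The core idea of the abstract above (replacing the analog sine/triangle comparison with a numerical solution of the resulting transcendental equations) can be sketched as a crossing finder. The carrier shape, modulation index, and frequency ratio below are illustrative assumptions, not the paper's values:

```python
import math

# Find unipolar SPWM switching angles by solving m*sin(theta) = carrier(theta)
# numerically: scan for sign changes, then refine each crossing by bisection.

def carrier(theta, mf=15):
    # Unit-amplitude triangular carrier, mf periods per fundamental cycle.
    phase = (theta * mf / (2 * math.pi)) % 1.0
    return 4 * phase - 1 if phase < 0.5 else 3 - 4 * phase

def switching_angles(m=0.8, mf=15, steps=20000):
    """Locate crossings of m*sin(theta) and the carrier on [0, pi]."""
    f = lambda t: m * math.sin(t) - carrier(t, mf)
    angles, prev_t, prev_f = [], 0.0, f(0.0)
    for i in range(1, steps + 1):
        t = math.pi * i / steps
        ft = f(t)
        if prev_f == 0.0 or prev_f * ft < 0:        # sign change => crossing
            lo, hi = prev_t, t
            for _ in range(50):                     # bisection refinement
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            angles.append(0.5 * (lo + hi))
        prev_t, prev_f = t, ft
    return angles

angles = switching_angles()
```

The resulting angle list is what a hardware comparator would produce implicitly; the paper additionally validates the angles via the THD of the reconstructed waveform, which this sketch omits.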
12. An Improved Adam's Algorithm for Stomach Image Classification.
- Author
- Sun, Haijing, Yu, Hao, Shao, Yichuan, Wang, Jiantao, Xing, Lei, Zhang, Le, and Zhao, Qian
- Subjects
- OPTIMIZATION algorithms, IMAGE recognition (Computer vision), MACHINE learning, GASTRIC diseases, DIAGNOSTIC imaging
- Abstract
Current stomach disease detection and diagnosis is challenged by data complexity and high dimensionality and requires effective deep learning algorithms to improve diagnostic accuracy. To address this challenge, in this paper, an improved strategy based on the Adam algorithm is proposed, which aims to alleviate the influence of local optimal solutions, overfitting, and slow convergence rates by controlling the restart strategy and the gradient norm joint clipping technique. This improved algorithm is abbreviated as the CG-Adam algorithm. The control restart strategy periodically checks the number of optimization steps and performs a restart operation once the count reaches a preset restart period. After the restart is completed, the algorithm resumes the optimization process. This helps the algorithm avoid falling into a local optimum and maintain convergence stability. Meanwhile, gradient norm joint clipping combines both gradient clipping and norm clipping techniques, which can avoid gradient explosion and gradient vanishing problems and help accelerate the convergence of the optimization process by restricting the gradient and norm to a suitable range. In order to verify the effectiveness of the CG-Adam algorithm, experimental validation is carried out on the MNIST, CIFAR10, and Stomach datasets and compared with the Adam algorithm as well as other currently popular optimization algorithms. The experimental results demonstrate that the improved algorithm proposed in this paper achieves an accuracy of 98.59%, 70.7%, and 73.2% on the MNIST, CIFAR10, and Stomach datasets, respectively, surpassing the Adam algorithm. The experimental results not only prove the significant effect of the CG-Adam algorithm in accelerating model convergence and improving generalization performance but also demonstrate its wide potential and practical application value in the field of medical image recognition. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
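The two mechanisms the abstract above attributes to CG-Adam (a periodic restart of the optimizer state and joint gradient/norm clipping) can be sketched on a toy problem. This is a hedged reconstruction of the idea, not the authors' implementation; all hyperparameter values are invented:

```python
import math

# Sketch: Adam with (1) a restart of the moment estimates every
# `restart_period` steps and (2) element-wise gradient clipping combined
# with global norm clipping, applied to a toy quadratic.

def clip(gs, value_limit=1.0, norm_limit=5.0):
    gs = [max(-value_limit, min(value_limit, g)) for g in gs]   # element clip
    norm = math.sqrt(sum(g * g for g in gs))
    if norm > norm_limit:                                       # norm clip
        gs = [g * norm_limit / norm for g in gs]
    return gs

def cg_adam(grad_fn, params, steps=500, lr=0.05, beta1=0.9, beta2=0.999,
            eps=1e-8, restart_period=100):
    m = [0.0] * len(params)
    v = [0.0] * len(params)
    t = 0
    for step in range(1, steps + 1):
        if step % restart_period == 0:      # periodic restart of the state
            m, v, t = [0.0] * len(params), [0.0] * len(params), 0
        t += 1
        gs = clip(grad_fn(params))
        for i, g in enumerate(gs):
            m[i] = beta1 * m[i] + (1 - beta1) * g
            v[i] = beta2 * v[i] + (1 - beta2) * g * g
            m_hat = m[i] / (1 - beta1 ** t)     # bias-corrected moments
            v_hat = v[i] / (1 - beta2 ** t)
            params[i] -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return params

# Toy problem: minimize (x-3)^2 + (y+1)^2 starting from the origin.
result = cg_adam(lambda p: [2 * (p[0] - 3), 2 * (p[1] + 1)], [0.0, 0.0])
```

The paper evaluates this idea on image-classification networks; the quadratic here only demonstrates that the restart-plus-clipping loop still converges.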
13. Fuzzy Fractional Brownian Motion: Review and Extension.
- Author
- Urumov, Georgy, Chountas, Panagiotis, and Chaussalet, Thierry
- Subjects
- WIENER processes, POISSON processes, UNCERTAIN systems, PRICES, FUZZY systems
- Abstract
In traditional finance, option prices are typically calculated using crisp sets of variables. However, as reported in the literature, these parameters possess a degree of fuzziness or uncertainty. This allows participants to estimate option prices based on their risk preferences and beliefs, considering a range of possible values for the parameters. This paper presents a comprehensive review of existing work on fuzzy fractional Brownian motion and proposes an extension in the context of financial option pricing. In this paper, we define a unified framework combining fractional Brownian motion with fuzzy processes, creating a joint product measure space that captures both randomness and fuzziness. The approach allows for the consideration of individual risk preferences and beliefs about parameter uncertainties. By extending Merton's jump-diffusion model to include fuzzy fractional Brownian motion, this paper addresses the modelling needs of hybrid systems with uncertain variables. The proposed model, which includes fuzzy Poisson processes and fuzzy volatility, demonstrates advantageous properties such as long-range dependence and self-similarity, providing a robust tool for modelling financial markets. By incorporating fuzzy numbers and the belief degree, this approach provides a more flexible framework for practitioners to make their investment decisions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
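For reference, the long-range dependence and self-similarity properties invoked in the abstract above are standard features of fractional Brownian motion with Hurst index H; the following are textbook definitions, not results from the paper:

```latex
% Covariance of fractional Brownian motion B_H with Hurst index H in (0,1):
\mathbb{E}\!\left[B_H(t)\,B_H(s)\right]
  = \tfrac{1}{2}\left( t^{2H} + s^{2H} - |t-s|^{2H} \right), \qquad t, s \ge 0.

% Self-similarity: for any a > 0, equality in distribution
\{B_H(at)\}_{t \ge 0} \overset{d}{=} \{a^{H} B_H(t)\}_{t \ge 0}.

% Increments exhibit long-range dependence precisely when H > 1/2;
% H = 1/2 recovers standard Brownian motion.
```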
14. Artificial Intelligence Algorithms for Healthcare.
- Author
- Chumachenko, Dmytro and Yakovlev, Sergiy
- Subjects
- ARTIFICIAL intelligence, DEEP learning, ALGORITHMS, MACHINE learning, INFORMATION technology, MEDICAL care, MOTION capture (Human mechanics), MEDICAL technology
- Abstract
Artificial intelligence (AI) algorithms are playing a crucial role in transforming healthcare by enhancing the quality, accessibility, and efficiency of medical care, research, and operations. These algorithms enable healthcare providers to offer more accurate diagnoses, predict outcomes, and customize treatments to individual patient needs. AI also improves operational efficiency by automating routine tasks and optimizing resource management. However, there are challenges to adopting AI in healthcare, such as data privacy concerns and potential biases in algorithms. Collaboration among stakeholders is necessary to ensure ethical use of AI and its positive impact on the field. AI also has applications in medical research, preventive medicine, and public health. It is important to recognize that AI should augment, not replace, the expertise and compassionate care provided by healthcare professionals. The ethical implications and societal impact of AI in healthcare must be carefully considered, guided by fairness, transparency, and accountability principles. Several research papers in this special issue explore the application of AI algorithms in various aspects of healthcare, such as gait analysis for Parkinson's disease diagnosis, human activity recognition, heart disease prediction, compliance assessment with clinical protocols, epidemic management, neurological complications identification, fall prevention, leukemia diagnosis, and genetic clinical pathways. These studies demonstrate the potential of AI in improving medical diagnostics, patient monitoring, and personalized care. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
15. Application of Split Coordinate Channel Attention Embedding U2Net in Salient Object Detection.
- Author
- Wu, Yuhuan and Wu, Yonghong
- Subjects
- OBJECT recognition (Computer vision), FEATURE extraction, DEEP learning, TRACKING algorithms, LEARNING ability
- Abstract
Salient object detection (SOD) aims to identify the most visually striking objects in a scene, simulating the function of the biological visual attention system. The attention mechanism in deep learning is commonly used as an enhancement strategy which enables the neural network to concentrate on the relevant parts when processing input data, effectively improving the model's learning and prediction abilities. Existing saliency object detection methods based on RGB deep learning typically treat all regions equally by using the extracted features, overlooking the fact that different regions have varying contributions to the final predictions. Based on the U2Net algorithm, this paper incorporates the split coordinate channel attention (SCCA) mechanism into the feature extraction stage. SCCA conducts spatial transformation in width and height dimensions to efficiently extract the location information of the target to be detected. While pixel-level semantic segmentation based on annotation has been successful, it assigns the same weight to each pixel which leads to poor performance in detecting the boundary of objects. In this paper, the Canny edge detection loss is incorporated into the loss calculation stage to improve the model's ability to detect object edges. Based on the DUTS and HKU-IS datasets, experiments confirm that the proposed strategies effectively enhance the model's detection performance, resulting in a 0.8% and 0.7% increase in the F1-score of U2Net. This paper also compares the traditional attention modules with the newly proposed attention, and the SCCA attention module achieves a top-three performance in prediction time, mean absolute error (MAE), F1-score, and model size on both experimental datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. A Review of Machine Learning's Role in Cardiovascular Disease Prediction: Recent Advances and Future Challenges.
- Author
- Naser, Marwah Abdulrazzaq, Majeed, Aso Ahmed, Alsabah, Muntadher, Al-Shaikhli, Taha Raad, and Kaky, Kawa M.
- Subjects
- MACHINE learning, CARDIOVASCULAR diseases, ARTIFICIAL intelligence, EARLY diagnosis, TREATMENT delay (Medicine)
- Abstract
Cardiovascular disease is the leading cause of global mortality and responsible for millions of deaths annually. The mortality rate and overall consequences of cardiac disease can be reduced with early disease detection. However, conventional diagnostic methods encounter various challenges, including delayed treatment and misdiagnoses, which can impede the course of treatment and raise healthcare costs. The application of artificial intelligence (AI) techniques, especially machine learning (ML) algorithms, offers a promising pathway to address these challenges. This paper emphasizes the central role of machine learning in cardiac health and focuses on precise cardiovascular disease prediction. In particular, this paper is driven by the urgent need to fully utilize the potential of machine learning to enhance cardiovascular disease prediction. In light of the continued progress in machine learning and the growing public health implications of cardiovascular disease, this paper aims to offer a comprehensive analysis of the topic. This review paper encompasses a wide range of topics, including the types of cardiovascular disease, the significance of machine learning, feature selection, the evaluation of machine learning models, data collection and preprocessing, evaluation metrics for cardiovascular disease prediction, and recent trends and suggestions for future work. In addition, this paper offers a holistic view of machine learning's role in cardiovascular disease prediction and public health. We believe that our comprehensive review will contribute significantly to the existing body of knowledge in this essential area. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Minimizing Query Frequency to Bound Congestion Potential for Moving Entities at a Fixed Target Time †.
- Author
- Evans, William and Kirkpatrick, David
- Subjects
- INTERSECTION graph theory, ONLINE algorithms
- Abstract
Consider a collection of entities moving continuously with bounded speed, but otherwise unpredictably, in some low-dimensional space. Two such entities encroach upon one another at a fixed time if their separation is less than some specified threshold. Encroachment, of concern in many settings such as collision avoidance, may be unavoidable. However, the associated difficulties are compounded if there is uncertainty about the precise location of entities, giving rise to potential encroachment and, more generally, potential congestion within the full collection. We adopt a model in which entities can be queried for their current location (at some cost) and the uncertainty region associated with an entity grows in proportion to the time since that entity was last queried. The goal is to maintain low potential congestion, measured in terms of the (dynamic) intersection graph of uncertainty regions, at specified (possibly all) times, using the lowest possible query cost. Previous work in the same uncertainty model addressed the problem of minimizing the congestion potential of point entities using location queries of some bounded frequency. It was shown that it is possible to design query schemes that are O (1) -competitive, in terms of worst-case congestion potential, with other, even clairvoyant query schemes (that exploit knowledge of the trajectories of all entities), subject to the same bound on query frequency. In this paper, we initiate the treatment of a more general problem with the complementary optimization objective: minimizing the query frequency, measured as the reciprocal of the minimum time between queries (granularity), while guaranteeing a fixed bound on congestion potential of entities with positive extent at one specified target time. This complementary objective necessitates quite different schemes and analyses. Nevertheless, our results parallel those of the earlier papers, specifically tight competitive bounds on required query frequency. 
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Multiobjective Path Problems and Algorithms in Telecommunication Network Design—Overview and Trends.
- Author
- Craveirinha, José, Clímaco, João, Girão-Silva, Rita, and Pascoal, Marta
- Subjects
- TELECOMMUNICATION systems, ALGORITHMS, QUALITY of service
- Abstract
A major area of application of multiobjective path problems and resolution algorithms is telecommunication network routing design, taking into account the extremely rapid technological and service evolutions. The need for explicit consideration of heterogeneous Quality of Service metrics makes it advantageous for the development of routing models where various technical–economic aspects, often conflicting, should be tackled. Our work is focused on multiobjective path problem formulations and resolution methods and their applications to routing methods. We review basic concepts and present main formulations of multiobjective path problems, considering different types of objective functions. We outline the different types of resolution methods for these problems, including a classification and overview of relevant algorithms concerning different types of problems. Afterwards, we outline background concepts on routing models and present an overview of selected papers considered as representative of different types of applications of multiobjective path problem formulations and algorithms. A broad characterization of major types of path problems relevant in this context is shown regarding the overview of contributions in different technological and architectural network environments. Finally, we outline research trends in this area, in relation to recent technological evolutions in communication networks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
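As a concrete illustration of the multiobjective path problems surveyed in entry 18, a minimal label-setting algorithm with dominance pruning enumerates the Pareto-optimal (cost, delay) pairs between two nodes. The graph encoding and the two-objective choice are assumptions for the sketch, not the authors' formulation.

```python
import heapq

def pareto_shortest_paths(graph, source, target):
    """Enumerate Pareto-optimal (cost, delay) labels from source to target.

    graph: dict node -> list of (neighbor, cost, delay) edges.
    Returns the non-dominated set of (cost, delay) pairs at target.
    """
    heap = [(0, 0, source)]                 # labels ordered by (cost, delay)
    labels = {node: [] for node in graph}   # settled non-dominated labels

    def dominated(c, d, node):
        # A label is pruned if some settled label is at least as good
        # in both objectives.
        return any(c2 <= c and d2 <= d for c2, d2 in labels[node])

    while heap:
        c, d, u = heapq.heappop(heap)
        if dominated(c, d, u):
            continue
        labels[u].append((c, d))
        for v, dc, dd in graph[u]:
            if not dominated(c + dc, d + dd, v):
                heapq.heappush(heap, (c + dc, d + dd, v))
    return sorted(labels[target])
```

On a graph with a cheap-but-slow route and a fast-but-expensive one, both labels survive, which is the Pareto frontier a routing designer would then choose from against QoS constraints.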
19. Comparative Analysis of Classification Methods and Suitable Datasets for Protocol Recognition in Operational Technologies.
- Author
-
Holasova, Eva, Fujdiak, Radek, and Misurec, Jiri
- Subjects
COMPUTER network traffic ,INFORMATION technology ,CLASSIFICATION ,COMPARATIVE studies ,COMPARATIVE method - Abstract
The interconnection of Operational Technology (OT) and Information Technology (IT) has created new opportunities for remote management, data storage in the cloud, real-time data transfer over long distances, or integration between different OT and IT networks. OT networks require increased attention due to the convergence of IT and OT, mainly due to the increased risk of cyber-attacks targeting these networks. This paper focuses on the analysis of different methods and data processing for protocol recognition and traffic classification in the context of OT specifics. Therefore, this paper summarizes the methods used to classify network traffic, analyzes the methods used to recognize and identify the protocol used in the industrial network, and describes machine learning methods to recognize industrial protocols. The output of this work is a comparative analysis of approaches specifically for protocol recognition and traffic classification in OT networks. In addition, publicly available datasets are compared in relation to their applicability for industrial protocol recognition. Research challenges are also identified, highlighting the lack of relevant datasets and defining directions for further research in the area of protocol recognition and classification in OT environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Anomaly Detection in Blockchain Networks Using Unsupervised Learning: A Survey.
- Author
-
Cholevas, Christos, Angeli, Eftychia, Sereti, Zacharoula, Mavrikos, Emmanouil, and Tsekouras, George E.
- Subjects
DATA structures ,MACHINE learning ,PRIVATE networks ,BLOCKCHAINS ,ALGORITHMS - Abstract
In decentralized systems, the quest for heightened security and integrity within blockchain networks becomes an issue. This survey investigates anomaly detection techniques in blockchain ecosystems through the lens of unsupervised learning, delving into the intricacies and going through the complex tapestry of abnormal behaviors by examining avant-garde algorithms to discern deviations from normal patterns. By seamlessly blending technological acumen with a discerning gaze, this survey offers a perspective on the symbiotic relationship between unsupervised learning and anomaly detection by reviewing this problem with a categorization of algorithms that are applied to a variety of problems in this field. We propose that the use of unsupervised algorithms in blockchain anomaly detection should be viewed not only as an implementation procedure but also as an integration procedure, where the merits of these algorithms can effectively be combined in ways determined by the problem at hand. In that sense, the main contribution of this paper is a thorough study of the interplay between various unsupervised learning algorithms and how this can be used in facing malicious activities and behaviors within public and private blockchain networks. The result is the definition of three categories, the characteristics of which are recognized in terms of the way the respective integration takes place. When implementing unsupervised learning, the structure of the data plays a pivotal role. Therefore, this paper also provides an in-depth presentation of the data structures commonly used in unsupervised learning-based blockchain anomaly detection. The above analysis is encircled by a presentation of the typical anomalies that have occurred so far along with a description of the general machine learning frameworks developed to deal with them. Finally, the paper spotlights challenges and directions that can serve as a comprehensive compendium for future research efforts. 
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
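A minimal unsupervised baseline of the kind surveys like entry 20 categorize (illustrative only; the surveyed algorithms are far richer) flags anomalous transaction values by their modified z-score around the median, which needs no labels:

```python
import statistics

def mad_outliers(values, cutoff=3.5):
    """Flag indices whose modified z-score exceeds the cutoff.

    Uses the median and the median absolute deviation (MAD), a common
    unsupervised baseline; the 0.6745 factor makes the score comparable
    to a standard z-score under normality. The cutoff value is an
    assumption, not taken from the survey.
    """
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        # Degenerate case: all deviations zero; flag anything off-median.
        return [i for i, v in enumerate(values) if v != med]
    return [i for i, v in enumerate(values)
            if abs(0.6745 * (v - med) / mad) > cutoff]
```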
21. Intelligent Ship Scheduling and Path Planning Method for Maritime Emergency Rescue.
- Author
-
Ying, Wen, Wang, Zhaohui, Li, Hui, Du, Sheng, and Zhao, Man
- Subjects
NAVIGATION in shipping ,EMERGENCY management ,RESCUE work ,MARITIME safety ,SHIPS ,INTELLIGENT buildings ,CONTAINER terminals ,SAILING - Abstract
Intelligent ship navigation scheduling and planning is of great significance for ensuring the safety of maritime production and life and promoting the development of the marine economy. In this paper, an intelligent ship scheduling and path planning method is proposed for a practical application scenario wherein the emergency rescue center receives rescue messages and dispatches emergency rescue ships to the incident area for rescue. Firstly, the large-scale sailing route of the task ship is pre-planned in the voyage planning stage by using the improved A* algorithm. Secondly, the full-coverage path planning algorithm is used to plan the ship's search route in the regional search stage by updating the ship's navigation route in real time. In order to verify the effectiveness of the proposed algorithm, comparative experiments were carried out with the conventional algorithm in the two operation stages of rushing to the incident sea area and regional search and rescue. The experimental results show that the proposed algorithm can adapt to emergency search and rescue tasks in the complex setting of the sea area and can effectively improve the efficiency of the operation, ensure the safety of the operation process, and provide a more intelligent and efficient solution for the planning of maritime emergency rescue tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
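The voyage planning stage in entry 21 builds on an improved A* algorithm. A textbook A* on a 4-connected occupancy grid (a baseline sketch, not the paper's improved variant; the grid encoding is an assumption) looks like this:

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle).

    Returns a list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan distance: admissible on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, None)]   # (f = g + h, g, cell, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:                 # already settled
            continue
        came_from[cell] = parent
        if cell == goal:                      # reconstruct path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nxt = (nr, nc)
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if g + 1 < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = g + 1
                    heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cell))
    return None
```

A sea-area planner would replace the binary grid with charted obstacles and currents, and the unit step cost with sailing cost, which is where such papers' improvements live.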
22. The Algorithm of Gu and Eisenstat and D-Optimal Design of Experiments.
- Author
-
Forbes, Alistair
- Subjects
OPTIMAL designs (Statistics) ,EXPERIMENTAL design ,FACTORIZATION ,ALGORITHMS - Abstract
This paper addresses the following problem: given m potential observations to determine n parameters, m > n , what is the best choice of n observations. The problem can be formulated as finding the n × n submatrix of the complete m × n observation matrix that has maximum determinant. An algorithm by Gu and Eisenstat for determining a strongly rank-revealing QR factorisation of a matrix can be adapted to address this latter formulation. The algorithm starts with an initial selection of n rows of the observation matrix and then performs a sequence of row interchanges, with the determinant of the current submatrix strictly increasing at each step until no further improvement can be made. The algorithm implements rank-one updating strategies, which lead to a compact and efficient algorithm. The algorithm does not necessarily determine the global optimum but provides a practical approach to designing an effective measurement strategy. In this paper, we describe how the Gu–Eisenstat algorithm can be adapted to address the problem of optimal experimental design and used with the QR algorithm with column pivoting to provide effective designs. We also describe implementations of sequential algorithms to add further measurements that optimise the information gain at each step. We illustrate performance on several metrology examples. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
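The row-interchange idea in entry 22 can be sketched directly: start from n rows of the m × n observation matrix and greedily swap rows in and out while the absolute determinant of the selected submatrix strictly increases. This toy version recomputes exact determinants from scratch instead of using the Gu–Eisenstat rank-one updates, so it is illustrative only and finds the same kind of local optimum the abstract describes.

```python
from fractions import Fraction

def det(M):
    """Exact determinant via Gaussian elimination over Fractions."""
    A = [[Fraction(x) for x in row] for row in M]
    n, sign, d = len(A), 1, Fraction(1)
    for k in range(n):
        p = next((i for i in range(k, n) if A[i][k] != 0), None)
        if p is None:
            return Fraction(0)
        if p != k:
            A[k], A[p] = A[p], A[k]
            sign = -sign
        d *= A[k][k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
    return sign * d

def greedy_d_optimal(X, rows):
    """Improve a selection of n rows of X by single-row interchanges
    until |det| of the selected submatrix stops increasing (local optimum,
    matching the abstract's caveat about global optimality)."""
    rows = list(rows)
    best = abs(det([X[i] for i in rows]))
    improved = True
    while improved:
        improved = False
        for pos in range(len(rows)):
            for r in range(len(X)):
                if r in rows:
                    continue
                trial = rows[:pos] + [r] + rows[pos + 1:]
                d = abs(det([X[i] for i in trial]))
                if d > best:
                    rows, best, improved = trial, d, True
    return sorted(rows), best
```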
23. An Overview of Demand Analysis and Forecasting Algorithms for the Flow of Checked Baggage among Departing Passengers.
- Author
-
Jiang, Bo, Ding, Guofu, Fu, Jianlin, Zhang, Jian, and Zhang, Yong
- Subjects
BAGGAGE handling in airports ,ECONOMIC demand ,DEMAND forecasting ,AIRPORTS ,TRAFFIC estimation ,LUGGAGE ,ARTIFICIAL neural networks - Abstract
The research on baggage flow plays a pivotal role in achieving the efficient and intelligent allocation and scheduling of airport service resources, as well as serving as a fundamental element in determining the design, development, and process optimization of airport baggage handling systems. This paper examines baggage checked in by departing passengers at airports. The current state of the research on baggage flow demand is first reviewed and analyzed. Then, using examples of objective data, it is concluded that while there is a significant correlation between airport passenger flow and baggage flow, an increase in passenger flow does not necessarily result in a proportional increase in baggage flow. According to the existing research results on the influencing factors of baggage flow sorting and classification, the main influencing factors of baggage flow are divided into two categories: macro-influencing factors and micro-influencing factors. When studying the relationship between the economy and baggage flow, it is recommended to use a comprehensive analysis that includes multiple economic indicators, rather than relying solely on GDP. This paper provides a brief overview of prevalent transportation flow prediction methods, categorizing algorithmic models into three groups: those based on mathematical and statistical models, intelligent algorithm-based models, and combined algorithmic models utilizing artificial neural networks. The structures, strengths, and weaknesses of various transportation flow prediction algorithms are analyzed, as well as their application scenarios. The potential advantages of using artificial neural network-based combined prediction models for baggage flow forecasting are explained. It concludes with an outlook on research regarding the demand for baggage flow. This review may provide further research assistance to scholars in airport management and baggage handling system development. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Diabetic Retinopathy Lesion Segmentation Method Based on Multi-Scale Attention and Lesion Perception.
- Author
-
Bian, Ye, Si, Chengyong, and Wang, Lei
- Subjects
DIABETIC retinopathy ,DEEP learning ,VISION disorders ,RETINAL imaging ,STIMULUS generalization ,IMAGE segmentation - Abstract
The early diagnosis of diabetic retinopathy (DR) can effectively prevent irreversible vision loss and assist ophthalmologists in providing timely and accurate treatment plans. However, the existing deep learning-based methods have a weak ability to perceive information at different scales in retinal fundus images, and their capability to segment subtle lesions is also insufficient. This paper aims to address these issues and proposes MLNet for DR lesion segmentation, which mainly consists of the Multi-Scale Attention Block (MSAB) and the Lesion Perception Block (LPB). The MSAB is designed to capture multi-scale lesion features in fundus images, while the LPB perceives subtle lesions in depth. In addition, a novel loss function with tailored lesion weight is designed to reduce the influence of imbalanced datasets on the algorithm. The performance comparison between MLNet and other state-of-the-art methods is carried out in the DDR dataset and DIARETDB1 dataset, and MLNet achieves the best results of 51.81% mAUPR, 49.85% mDice, and 37.19% mIoU in the DDR dataset, and 67.16% mAUPR and 61.82% mDice in the DIARETDB1 dataset. The generalization experiment of MLNet in the IDRiD dataset achieves 59.54% mAUPR, which is the best among other methods. The results show that MLNet has outstanding DR lesion segmentation ability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Efficient Algorithm for Proportional Lumpability and Its Application to Selfish Mining in Public Blockchains.
- Author
-
Piazza, Carla, Rossi, Sabina, and Smuseva, Daria
- Subjects
POLYNOMIAL time algorithms ,MARKOV processes ,BLOCKCHAINS ,ALGORITHMS ,STOCHASTIC models ,PETRI nets - Abstract
This paper explores the concept of proportional lumpability as an extension of the original definition of lumpability, addressing the challenges posed by the state space explosion problem in computing performance indices for large stochastic models. Lumpability traditionally relies on state aggregation techniques and is applicable to Markov chains demonstrating structural regularity. Proportional lumpability extends this idea, proposing that the transition rates of a Markov chain can be modified by certain factors, resulting in a lumpable new Markov chain. This concept facilitates the derivation of precise performance indices for the original process. This paper establishes the well-defined nature of the problem of computing the coarsest proportional lumpability that refines a given initial partition, ensuring a unique solution exists. Additionally, a polynomial time algorithm is introduced to solve this problem, offering valuable insights into both the concept of proportional lumpability and the broader realm of partition refinement techniques. The effectiveness of proportional lumpability is demonstrated through a case study that consists of designing a model to investigate selfish mining behaviors on public blockchains. This research contributes to a better understanding of efficient approaches for handling large stochastic models and highlights the practical applicability of proportional lumpability in deriving exact performance indices. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
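For background to entry 25, ordinary lumpability, the baseline notion that proportional lumpability extends, is easy to check and apply for a given partition: every state in a block must have the same aggregate rate into each other block. The generator-matrix encoding below is an assumption for the sketch.

```python
def is_lumpable(Q, partition):
    """Check ordinary lumpability of a generator matrix Q w.r.t. a partition.

    Q: square list-of-lists of transition rates; partition: list of lists
    of state indices. Lumpable iff every state in a block has the same
    aggregate rate into each other block.
    """
    for block in partition:
        for other in partition:
            if other is block:
                continue
            rates = {sum(Q[s][t] for t in other) for s in block}
            if len(rates) > 1:
                return False
    return True

def lump(Q, partition):
    """Aggregate Q into the lumped generator (assumes is_lumpable holds,
    so any representative state of a block gives the block's rates)."""
    k = len(partition)
    L = [[0.0] * k for _ in range(k)]
    for a, block in enumerate(partition):
        rep = block[0]
        for b, other in enumerate(partition):
            if a != b:
                L[a][b] = sum(Q[rep][t] for t in other)
        L[a][a] = -sum(L[a][b] for b in range(k) if b != a)
    return L
```

Proportional lumpability relaxes this test by allowing each state's rates to be rescaled by a state-dependent factor before the aggregate-rate condition is required to hold.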
26. Background Subtraction for Dynamic Scenes Using Gabor Filter Bank and Statistical Moments.
- Author
-
Romero-González, Julio-Alejandro, Córdova-Esparza, Diana-Margarita, Terven, Juan, Herrera-Navarro, Ana-Marcela, and Jiménez-Hernández, Hugo
- Subjects
FILTER banks ,VIDEO surveillance ,GABOR filters ,OBJECT recognition (Computer vision) ,COMPUTER vision - Abstract
This paper introduces a novel background subtraction method that utilizes texture-level analysis based on the Gabor filter bank and statistical moments. The method addresses the challenge of accurately detecting moving objects that exhibit similar color intensity variability or texture to the surrounding environment, which conventional methods struggle to handle effectively. The proposed method accurately distinguishes between foreground and background objects by capturing different frequency components using the Gabor filter bank and quantifying the texture level through statistical moments. Extensive experimental evaluations use datasets featuring varying lighting conditions, uniform and non-uniform textures, shadows, and dynamic backgrounds. The performance of the proposed method is compared against other existing methods using metrics such as sensitivity, specificity, and false positive rate. The experimental results demonstrate that the proposed method outperforms other methods in accuracy and robustness. It effectively handles scenarios with complex backgrounds, lighting changes, and objects that exhibit similar texture or color intensity as the background. Our method retains object structure while minimizing false detections and noise. This paper provides valuable insights into computer vision and object detection, offering a promising solution for accurate foreground detection in various applications such as video surveillance and motion tracking. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Solar Irradiance Forecasting with Natural Language Processing of Cloud Observations and Interpretation of Results with Modified Shapley Additive Explanations.
- Author
-
Matrenin, Pavel V., Gamaley, Valeriy V., Khalyasmaa, Alexandra I., and Stepanova, Alina I.
- Subjects
NATURAL language processing ,ARTIFICIAL intelligence ,SOLAR power plants ,PHOTOVOLTAIC power systems ,SURFACE of the earth ,SOLAR technology ,FORECASTING ,MACHINE learning - Abstract
Forecasting the generation of solar power plants (SPPs) requires taking into account meteorological parameters that influence the difference between the solar irradiance at the top of the atmosphere calculated with high accuracy and the solar irradiance at the tilted plane of the solar panel on the Earth's surface. One of the key factors is cloudiness, which can be presented not only as a percentage of the sky area covered by clouds but also many additional parameters, such as the type of clouds, the distribution of clouds across atmospheric layers, and their height. The use of machine learning algorithms to forecast the generation of solar power plants requires retrospective data over a long period and formalising the features; however, retrospective data with detailed information about cloudiness are normally recorded in the natural language format. This paper proposes an algorithm for processing such records to convert them into a binary feature vector. Experiments conducted on data from a real solar power plant showed that this algorithm increases the accuracy of short-term solar irradiance forecasts by 5–15%, depending on the quality metric used. At the same time, adding features makes the model less transparent to the user, which is a significant drawback from the point of view of explainable artificial intelligence. Therefore, the paper uses an additive explanation algorithm based on the Shapley vector to interpret the model's output. It is shown that this approach allows the machine learning model to explain why it generates a particular forecast, which will provide a greater level of trust in intelligent information systems in the power industry. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
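The conversion of natural-language cloud records into a binary feature vector described in entry 27 can be sketched as simple vocabulary matching. The vocabulary below is hypothetical, not the paper's feature set:

```python
# Hypothetical vocabulary of cloud attributes; real meteorological records
# and the paper's actual feature set will differ.
VOCABULARY = [
    "cumulus", "stratus", "cirrus", "cumulonimbus",
    "scattered", "broken", "overcast",
]

def encode_observation(record):
    """Turn a free-text cloud observation into a binary feature vector,
    one element per vocabulary term (1 if the term occurs, else 0)."""
    tokens = record.lower().replace(",", " ").split()
    return [1 if term in tokens else 0 for term in VOCABULARY]
```

Vectors of this form can then be appended to the numeric meteorological features fed to the forecasting model, which is the step the abstract credits with the 5–15% accuracy gain.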
28. Active Data Selection and Information Seeking.
- Author
-
Parr, Thomas, Friston, Karl, and Zeidman, Peter
- Subjects
INFORMATION-seeking behavior ,OPTIMAL designs (Statistics) ,SUBSET selection ,BAYESIAN field theory ,EXPERIMENTAL design ,LABORATORY animals - Abstract
Bayesian inference typically focuses upon two issues. The first is estimating the parameters of some model from data, and the second is quantifying the evidence for alternative hypotheses—formulated as alternative models. This paper focuses upon a third issue. Our interest is in the selection of data—either through sampling subsets of data from a large dataset or through optimising experimental design—based upon the models we have of how those data are generated. Optimising data-selection ensures we can achieve good inference with fewer data, saving on computational and experimental costs. This paper aims to unpack the principles of active sampling of data by drawing from neurobiological research on animal exploration and from the theory of optimal experimental design. We offer an overview of the salient points from these fields and illustrate their application in simple toy examples, ranging from function approximation with basis sets to inference about processes that evolve over time. Finally, we consider how this approach to data selection could be applied to the design of (Bayes-adaptive) clinical trials. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
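A toy version of the active data selection principle in entry 28 (not the authors' Bayesian machinery): query the candidate input where a set of plausible models disagree most, using predictive variance as a crude information proxy.

```python
def next_design_point(candidates, models):
    """Greedy active selection: return the input where plausible models
    disagree most, so the observation there is maximally informative.

    candidates: list of inputs x; models: list of callables x -> prediction
    (standing in for posterior samples of a model).
    """
    def predictive_variance(x):
        preds = [m(x) for m in models]
        mean = sum(preds) / len(preds)
        return sum((p - mean) ** 2 for p in preds) / len(preds)

    return max(candidates, key=predictive_variance)
```

In a full Bayesian treatment the models would be posterior draws and the score an expected information gain, but the greedy structure is the same.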
29. Object Detection in Autonomous Vehicles under Adverse Weather: A Review of Traditional and Deep Learning Approaches.
- Author
-
Tahir, Noor Ul Ain, Zhang, Zuping, Asim, Muhammad, Chen, Junhong, and ELAffendi, Mohammed
- Subjects
OBJECT recognition (Computer vision) ,DEEP learning ,INTELLIGENT transportation systems ,COMPUTER vision ,PEDESTRIANS ,LITERATURE reviews ,GEOGRAPHICAL perception ,WEATHER ,AUTONOMOUS vehicles - Abstract
Enhancing the environmental perception of autonomous vehicles (AVs) in intelligent transportation systems requires computer vision technology to be effective in detecting objects and obstacles, particularly in adverse weather conditions. Adverse weather circumstances present serious difficulties for object-detecting systems, which are essential to contemporary safety procedures, infrastructure for monitoring, and intelligent transportation. AVs primarily depend on image processing algorithms that utilize a wide range of onboard visual sensors for guidance and decision-making. Ensuring the consistent identification of critical elements such as vehicles, pedestrians, and road lanes, even in adverse weather, is a paramount objective. This paper not only provides a comprehensive review of the literature on object detection (OD) under adverse weather conditions but also delves into the ever-evolving realm of the architecture of AVs, challenges for automated vehicles in adverse weather, the basic structure of OD, and explores the landscape of traditional and deep learning (DL) approaches for OD within the realm of AVs. These approaches are essential for advancing the capabilities of AVs in recognizing and responding to objects in their surroundings. This paper further investigates previous research that has employed both traditional and DL methodologies for the detection of vehicles, pedestrians, and road lanes, effectively linking these approaches with the evolving field of AVs. Moreover, this paper offers an in-depth analysis of the datasets commonly employed in AV research, with a specific focus on the detection of key elements in various environmental conditions, and then summarizes the evaluation matrix. We expect that this review paper will help scholars to gain a better understanding of this area of research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Root Cause Tracing Using Equipment Process Accuracy Evaluation for Looper in Hot Rolling.
- Author
-
Jing, Fengwei, Li, Fenghe, Song, Yong, Li, Jie, Feng, Zhanbiao, and Guo, Jin
- Subjects
HOT rolling ,ROLLING (Metalwork) ,PRODUCT quality - Abstract
The concept of production stability in hot strip rolling encapsulates the ability of a production line to consistently maintain its output levels and uphold the quality of its products, thus embodying the steady and uninterrupted nature of the production yield. This scholarly paper focuses on the paramount looper equipment in the finishing rolling area, utilizing it as a case study to investigate approaches for identifying the origins of instabilities, specifically when faced with inadequate looper performance. Initially, the paper establishes the equipment process accuracy evaluation (EPAE) model for the looper, grounded in the precision of the looper's operational process, to accurately depict the looper's functioning state. Subsequently, it delves into the interplay between the EPAE metrics and overall production stability, advocating for the use of EPAE scores as direct indicators of production stability. The study further introduces a novel algorithm designed to trace the root causes of issues, categorizing them into material, equipment, and control factors, thereby facilitating on-site fault rectification. Finally, the practicality and effectiveness of this methodology are substantiated through its application on the 2250 hot rolling equipment production line. This paper provides a new approach for fault tracing in the hot rolling process. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Application of Genetic Algorithms for Periodicity Recognition and Finite Sequences Sorting.
- Author
-
Zhassuzak, Mukhtar, Akhmet, Marat, Amirgaliyev, Yedilkhan, and Buribayev, Zholdas
- Subjects
GENETIC algorithms ,BINARY sequences ,CHAOS theory - Abstract
Unpredictable strings are sequences of data with complex and erratic behavior, which makes them an object of interest in various scientific fields. Unpredictable strings related to chaos theory were investigated using a genetic algorithm. This paper presents a new genetic algorithm for converting large binary sequences into their periodic form. The MakePeriod method is also presented, which is aimed at optimizing the search for such periodic sequences, which significantly reduces the number of generations to achieve the result of the problem under consideration. The analysis of the deviation of a nonperiodic sequence from its considered periodic transformation was carried out, and methods of crossover and mutation were investigated. The proposed algorithm and its associated conclusions can be applied to processing large sequences and different values of the period, and also emphasize the importance of choosing the right methods of crossover and mutation when applying genetic algorithms to this task. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
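Entry 31 searches for a periodic form of a binary sequence. The deviation measure such a genetic algorithm would minimize can be sketched as a per-residue-class majority vote, with brute force over candidate periods standing in for the GA here (the fitness definition is an illustrative assumption, not the paper's):

```python
def deviation(seq, p):
    """Hamming distance between seq and its best period-p approximation:
    within each residue class mod p, the minority bits are mismatches."""
    n = len(seq)
    mismatches = 0
    for r in range(p):
        idx = range(r, n, p)
        ones = sum(seq[i] for i in idx)
        mismatches += min(ones, len(idx) - ones)
    return mismatches

def best_period(seq, max_p):
    """Brute-force stand-in for the GA's search: the period with the
    smallest deviation, ties broken toward the smaller period."""
    return min(range(1, max_p + 1), key=lambda p: (deviation(seq, p), p))
```

A GA earns its keep when the sequence and the candidate-period space are too large for this exhaustive scan, which is the regime the abstract targets.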
32. A Heterogeneity-Aware Car-Following Model: Based on the XGBoost Method.
- Author
-
Zhu, Kefei, Yang, Xu, Zhang, Yanbo, Liang, Mengkun, and Wu, Jun
- Subjects
DRIVER assistance systems - Abstract
With the rising popularity of the Advanced Driver Assistance System (ADAS), there is an increasing demand for more human-like car-following performance. In this paper, we consider the role of heterogeneity in car-following behavior within car-following modeling. We incorporate car-following heterogeneity factors into the model features. We employ the eXtreme Gradient Boosting (XGBoost) method to build the car-following model. The results show that our model achieves optimal performance with a mean squared error of 0.002181, surpassing the model that disregards heterogeneity factors. Furthermore, utilizing model importance analysis, we determined that the cumulative importance score of heterogeneity factors in the model is 0.7262. The results demonstrate the significant impact of heterogeneity factors on car-following behavior prediction and highlight the importance of incorporating heterogeneity factors into car-following models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Following the Writer's Path to the Dynamically Coalescing Reactive Chains Design Pattern.
- Author
-
Oliveira Marum, João Paulo, Cunningham, H. Conrad, Jones, J. Adam, and Liu, Yi
- Subjects
WEB-based user interfaces ,USER interfaces ,AUGMENTED reality ,SOFTWARE architecture ,DESIGN software ,POKEMON Go - Abstract
Two recent studies addressed the problem of reducing transitional turbulence in applications developed in C# on .NET. The first study investigated this problem in desktop and Web GUI applications and the second in virtual and augmented reality applications using the Unity3D game engine. The studies used similar solution approaches, but both were somewhat embedded in the details of their applications and implementation platforms. This paper examines these two families of applications and seeks to extract the common aspects of their problem definitions and solution approaches and codify the problem-solution pair as a new software design pattern. To do so, the paper adopts Wellhausen and Fiesser's writer's path methodology and follows it systematically to discover and write the pattern, recording the reasoning at each step. To evaluate the pattern, the paper applies it to an arbitrary C#/.NET GUI application. The resulting design pattern is named Dynamically Coalescing Reactive Chains (DCRC). It enables the approach to transitional turbulence reduction to be reused across a range of related applications, languages, and user interface technologies. The detailed example of the writer's path can assist future pattern writers in navigating through the complications and subtleties of the pattern-writing process. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Pedestrian Detection Based on Feature Enhancement in Complex Scenes.
- Author
-
Su, Jiao, An, Yi, Wu, Jialin, and Zhang, Kai
- Subjects
PEDESTRIANS ,TRANSPORTATION security measures ,COMPUTER vision ,KNOWLEDGE transfer ,PROBLEM solving - Abstract
Pedestrian detection has always been a difficult and hot spot in computer vision research. At the same time, pedestrian detection technology plays an important role in many applications, such as intelligent transportation and security monitoring. In complex scenes, pedestrian detection often faces some challenges, such as low detection accuracy and misdetection due to small target sizes and scale variations. To solve these problems, this paper proposes a pedestrian detection network PT-YOLO based on the YOLOv5. The pedestrian detection network PT-YOLO consists of the YOLOv5 network, the squeeze-and-excitation module (SE), the weighted bi-directional feature pyramid module (BiFPN), the coordinate convolution (coordconv) module and the wise intersection over union loss function (WIoU). The SE module in the backbone allows it to focus on the important features of pedestrians and improves accuracy. The weighted BiFPN module enhances the fusion of multi-scale pedestrian features and information transfer, which can improve fusion efficiency. The prediction head design uses the WIoU loss function to reduce the regression error. The coordconv module allows the network to better perceive the location information in the feature map. The experimental results show that the pedestrian detection network PT-YOLO is more accurate compared with other target detection methods in pedestrian detection and can effectively accomplish the task of pedestrian detection in complex scenes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. A Survey on Swarm Robotics for Area Coverage Problem.
- Author
-
Muhsen, Dena Kadhim, Sadiq, Ahmed T., and Raheem, Firas Abdulrazzaq
- Subjects
AGGREGATION (Robotics) - Abstract
The area coverage problem is one of the vital research areas that can benefit from swarm robotics. The greatest challenge to the swarm robotics system is to complete the task of covering an area effectively. Domains where area coverage is essential include exploration, surveillance, mapping, foraging, and several other applications. This paper introduces a survey of swarm robotics in area coverage research papers from 2015 to 2022 regarding the algorithms and methods used, hardware, and applications in this domain. Different types of algorithms and hardware were dealt with and analysed; according to the analysis, the characteristics and advantages of each of them were identified, and we determined their suitability for different applications in covering the area for many goals. This study demonstrates that naturally inspired algorithms have the most significant role in swarm robotics for area coverage compared to other techniques. In addition, modern hardware has more capabilities suitable for supporting swarm robotics to cover an area, even if the environment is complex and contains static or dynamic obstacles. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Algorithms for Various Trigonometric Power Sums.
- Author
-
Kowalenko, Victor
- Subjects
POLYNOMIALS ,ALGORITHMS - Abstract
In this paper, algorithms for different types of trigonometric power sums are developed and presented. Although interesting in their own right, these trigonometric power sums arise during the creation of an algorithm for the four types of twisted trigonometric power sums defined in the introduction. The primary aim in evaluating these sums is to obtain exact results in a rational form, as opposed to standard or direct evaluation, which often results in machine-dependent decimal values that can be affected by round-off errors. Moreover, since the variable, m, appearing in the denominators of the arguments of the trigonometric functions in these sums, can remain algebraic in the algorithms/codes, one can also obtain polynomial solutions in powers of m and the variable r that appears in the cosine factor accompanying the trigonometric power. The degrees of these polynomials are found to be dependent upon v, the value of the trigonometric power in the sum, which must always be specified. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
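Entry 36 contrasts exact rational evaluation with round-off-prone floating point. A small example of the phenomenon (a classical identity, not one of the paper's twisted sums): S(m) = Σ_{k=1}^{m-1} sin²(kπ/m) equals m/2 exactly, while direct summation only approximates it.

```python
import math
from fractions import Fraction

def sin_square_sum(m):
    """Direct floating-point evaluation of S(m) = sum_{k=1}^{m-1} sin^2(k*pi/m),
    subject to the round-off the abstract warns about."""
    return sum(math.sin(k * math.pi / m) ** 2 for k in range(1, m))

def sin_square_sum_exact(m):
    """Exact rational value: sin^2 x = (1 - cos 2x)/2 together with
    sum_{k=1}^{m-1} cos(2*pi*k/m) = -1 gives S(m) = m/2."""
    return Fraction(m, 2)
```

Obtaining such closed rational forms symbolically, for far more intricate twisted trigonometric power sums, is the point of the algorithms in that paper.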
37. Extended General Malfatti's Problem.
- Author
-
Chiang, Ching-Shoei
- Subjects
COMPUTER-aided design ,DATA visualization ,TRIANGLES ,ALGORITHMS - Abstract
Malfatti's problem involves three circles (called Malfatti circles) that are tangent to each other and two sides of a triangle. In this study, our objective is to extend the problem to find 6, 10, ..., ∑_{i=1}^{n} i (n > 2) circles inside the triangle so that the three corner circles are tangent to two sides of the triangle, the boundary circles are tangent to one side of the triangle and four other circles (at least two of them being boundary or corner circles), and the inner circles are tangent to six other circles. We call this problem the extended general Malfatti's problem, or the Tri(Tn) problem, where Tri means that the boundary of these circles is a triangle, and Tn is the number of circles inside the triangle. In this paper, we propose an algorithm to solve the Tri(Tn) problem. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Determining Thresholds for Optimal Adaptive Discrete Cosine Transformation.
- Author
-
Khanov, Alexander, Shulzhenko, Anastasija, Voroshilova, Anzhelika, Zubarev, Alexander, Karimov, Timur, and Fahmi, Shakeeb
- Subjects
DISCRETE cosine transforms ,VIDEO compression ,IMAGE segmentation ,VIDEO surveillance ,SEARCH algorithms ,IMAGE compression - Abstract
The discrete cosine transform (DCT) is widely used for image and video compression. Lossy algorithms such as JPEG, WebP, BPG, and many others are based on it. Multiple modifications of DCT have been developed to improve its performance. One of them is the adaptive DCT (ADCT), designed to deal with heterogeneous image structure; it may be found, for example, in the HEVC video codec. Adaptivity means that the image is divided into an uneven grid of squares: smaller ones retain information about details better, while larger squares are efficient for homogeneous backgrounds. The practical use of adaptive DCT algorithms is complicated by the lack of optimal threshold search algorithms for image partitioning procedures. In this paper, we propose a novel method for optimal threshold search in ADCT using a metric based on tonal distribution. We define two thresholds: p_m, the threshold defining solid mean coloring, and p_s, defining quadtree fragment splitting. In our algorithm, the values of these thresholds are calculated via polynomial functions of the tonal distribution of a particular image or fragment. The polynomial coefficients are determined using a dedicated optimization procedure on a dataset containing images from a specific domain, urban road scenes in our case. In the experimental part of the study, we show that ADCT allows a higher compression ratio compared to non-adaptive DCT at the same level of quality loss, up to 66% for acceptable quality. The proposed algorithm may be used directly for image compression, or as the core of a video compression framework in traffic-demanding applications, such as urban video surveillance systems. [ABSTRACT FROM AUTHOR]
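The roles of the two thresholds described above can be pictured with a minimal quadtree sketch (hypothetical code, not the authors' implementation; their tonal-distribution metric is simplified here to the value range of a block): blocks flatter than p_m collapse to their mean color, blocks busier than p_s split into quadrants, and everything in between is coded with a DCT as-is.

```python
def tonal_spread(img, x, y, size):
    # crude stand-in for the paper's tonal-distribution metric: value range
    vals = [img[y + j][x + i] for j in range(size) for i in range(size)]
    return max(vals) - min(vals)

def quadtree(img, x, y, size, p_m, p_s, min_size=2):
    """Partition the size x size block at (x, y) into an uneven grid."""
    spread = tonal_spread(img, x, y, size)
    if spread <= p_m or size <= min_size:
        return ('mean', x, y, size)         # flat block: store mean color only
    if spread >= p_s:
        h = size // 2                       # busy block: recurse into quadrants
        return ('split', [quadtree(img, x,     y,     h, p_m, p_s, min_size),
                          quadtree(img, x + h, y,     h, p_m, p_s, min_size),
                          quadtree(img, x,     y + h, h, p_m, p_s, min_size),
                          quadtree(img, x + h, y + h, h, p_m, p_s, min_size)])
    return ('dct', x, y, size)              # moderate detail: DCT-code as is

# demo image: flat left half, high-contrast checkerboard right half
img = [[100] * 4 + [(255 if (i + j) % 2 else 0) for i in range(4)]
       for j in range(8)]
tree = quadtree(img, 0, 0, 8, p_m=5, p_s=50)
```

On this demo the flat left quadrants become single mean-colored leaves, while the checkerboard quadrants are recursively split down to the minimum block size.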
- Published
- 2024
- Full Text
- View/download PDF
39. Multi-Objective Unsupervised Feature Selection and Cluster Based on Symbiotic Organism Search.
- Author
-
AL-Gburi, Abbas Fadhil Jasim, Nazri, Mohd Zakree Ahmad, Yaakub, Mohd Ridzwan Bin, and Alyasseri, Zaid Abdi Alkareem
- Subjects
ARTIFICIAL intelligence ,FEATURE selection ,MACHINE learning ,SUPERVISED learning ,DATA analytics - Abstract
Unsupervised learning is a type of machine learning that learns from data without human supervision. Unsupervised feature selection (UFS) is crucial in data analytics, playing a vital role in enhancing the quality of results and reducing computational complexity in huge feature spaces. The UFS problem has been addressed in several research efforts. Recent studies have witnessed a surge in innovative techniques like nature-inspired algorithms for clustering and UFS problems. However, very few studies consider the UFS problem as a multi-objective problem to find the optimal trade-off between the number of selected features and model accuracy. This paper proposes a multi-objective symbiotic organism search algorithm for unsupervised feature selection (SOSUFS) and a symbiotic organism search-based clustering (SOSC) algorithm to generate the optimal feature subset for more accurate clustering. The efficiency and robustness of the proposed algorithm are investigated on benchmark datasets. The SOSUFS method, combined with SOSC, demonstrated the highest f-measure, whereas the KHCluster method resulted in the lowest f-measure; SOSUFS also effectively reduced the number of features by more than half, and the combined approach was identified as the top-performing clustering approach. In summary, this empirical study indicates that the proposed algorithm significantly surpasses state-of-the-art algorithms in both efficiency and effectiveness. Unsupervised learning in artificial intelligence involves machine-learning techniques that learn from data without human supervision. Unlike supervised learning, unsupervised machine-learning models work with unlabeled data to uncover patterns and insights independently, without explicit guidance or instruction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Enhancing Indoor Positioning Accuracy with WLAN and WSN: A QPSO Hybrid Algorithm with Surface Tessellation.
- Author
-
Scavino, Edgar, Abd Rahman, Mohd Amiruddin, Farid, Zahid, Ahmad, Sadique, and Asim, Muhammad
- Subjects
WIRELESS LANs ,WIRELESS sensor networks ,GLOBAL Positioning System ,PARTICLE swarm optimization ,TILES - Abstract
In large indoor environments, accurate positioning and tracking of people and autonomous equipment have become essential requirements. The application of increasingly automated moving transportation units in large indoor spaces demands precise knowledge of their positions, for both efficiency and safety reasons. Moreover, satellite-based Global Positioning System (GPS) signals are likely to be unusable in deep indoor spaces, and technologies like WiFi and Bluetooth are susceptible to signal noise and fading effects. For these reasons, a hybrid approach that employs at least two different signal typologies has proved to be more effective, resilient, robust, and accurate in determining localization in indoor environments. This paper proposes an improved hybrid technique that implements fingerprinting-based indoor positioning using Received Signal Strength (RSS) information from available Wireless Local Area Network (WLAN) access points and Wireless Sensor Network (WSN) technology. Six signals were recorded on a regular grid of anchor points covering the research surface. For optimization purposes, appropriate raw signal weighting was applied in accordance with previous research on the same data. The novel approach in this work consisted of performing a virtual tessellation of the considered indoor surface with a regular set of tiles encompassing the whole area. The optimization process focused on varying the size of the tiles as well as their position relative to the signal acquisition grid, with the goal of minimizing the average distance error based on tile identification accuracy. The optimization was conducted using a standard Quantum Particle Swarm Optimization (QPSO) algorithm, while the position error estimate for each tile configuration was performed using a 3-layer Multilayer Perceptron (MLP) neural network. The experimental results showed a 16% reduction in the positioning error when a suitable tile configuration was calculated in the optimization process. Our final achieved value of 0.611 m of location uncertainty shows a sensible improvement compared to our previous results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Identification of Crude Distillation Unit: A Comparison between Neural Network and Koopman Operator.
- Author
-
Abubakar, Abdulrazaq Nafiu, Khaldi, Mustapha Kamel, Aldhaifallah, Mujahed, Patwardhan, Rohit, and Salloum, Hussain
- Subjects
LINEAR operators ,SYSTEM identification ,DISTILLATION ,GENERALIZATION ,COMPARATIVE studies - Abstract
In this paper, we aimed to identify the dynamics of a crude distillation unit (CDU) using closed-loop data with NARX−NN and the Koopman operator in both linear (KL) and bilinear (KB) forms. A comparative analysis was conducted to assess the performance of each method under different experimental conditions, such as gain, delay, and time-constant mismatches, tight constraints, nonlinearities, and poor tuning. Although the NARX−NN showed good training performance with the lowest Mean Squared Error (MSE), the KB demonstrated better generalization and robustness, outperforming the other methods. The KL showed a significant decline in performance in the presence of nonlinearities in the inputs, yet it remained competitive with the KB under other circumstances. The use of the bilinear form proved to be crucial, as it offered a more accurate representation of the CDU dynamics, resulting in enhanced performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. On the Complexity of the Bipartite Polarization Problem: From Neutral to Highly Polarized Discussions.
- Author
-
Alsinet, Teresa, Argelich, Josep, Béjar, Ramón, and Martínez, Santi
- Subjects
COMBINATORIAL optimization ,POLARIZATION (Social sciences) ,WEIGHTED graphs ,SOCIAL networks ,SOCIAL media ,BIPARTITE graphs - Abstract
The bipartite polarization problem is an optimization problem whose goal is to find the most highly polarized bipartition of a weighted and labeled graph that represents a debate developed through some social network, where nodes represent users' opinions and edges represent agreement or disagreement between users. This problem can be seen as a generalization of the maxcut problem, and in previous work, approximate and exact solutions have been obtained for real instances derived from Reddit discussions, showing that such real instances seem to be very easy to solve. In this paper, we further investigate the complexity of this problem by introducing an instance generation model in which a single parameter controls the polarization of the instances in such a way that it correlates with the average complexity of solving those instances. The average complexity results we obtain are consistent with our hypothesis: the higher the polarization of the instance, the easier it is to find the corresponding polarized bipartition. In view of the experimental results, it is computationally feasible to implement transparent mechanisms to monitor polarization in online discussions and to inform about solutions for creating healthier social media environments. [ABSTRACT FROM AUTHOR]
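For intuition only (the scoring function below is a simplified stand-in, not the paper's polarization measure), the search space can be explored by brute force on small signed graphs: a bipartition scores well when disagreement edges cross it and agreement edges stay inside one side, which is the maxcut-style flavor the abstract describes.

```python
from itertools import product

def best_bipartition(n, edges):
    """edges: (u, v, w) triples with w > 0 for agreement, w < 0 for
    disagreement. Returns (score, side) with side[i] in {0, 1}.
    Exhaustive over 2**n bipartitions, so only viable for tiny graphs."""
    best_score, best_side = float('-inf'), None
    for side in product((0, 1), repeat=n):
        score = 0.0
        for u, v, w in edges:
            if w < 0 and side[u] != side[v]:
                score += -w          # disagreeing users separated: good
            elif w > 0 and side[u] == side[v]:
                score += w           # agreeing users kept together: good
        if score > best_score:
            best_score, best_side = score, side
    return best_score, best_side

# toy debate: users 0 and 1 agree, both disagree with user 2
edges = [(0, 1, 2.0), (0, 2, -3.0), (1, 2, -1.0)]
score, side = best_bipartition(3, edges)
```

On this toy instance the optimum puts users 0 and 1 on one side and user 2 on the other, collecting all three edge weights in absolute value.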
- Published
- 2024
- Full Text
- View/download PDF
43. Augmented Dataset for Vision-Based Analysis of Railroad Ballast via Multi-Dimensional Data Synthesis.
- Author
-
Ding, Kelin, Luo, Jiayi, Huang, Haohang, Hart, John M., Qamhia, Issam I. A., and Tutumluer, Erol
- Subjects
COMPUTER vision ,DEEP learning ,RAILROAD management ,POINT cloud ,DATABASES ,BALLAST (Railroads) - Abstract
Ballast serves a vital structural function in supporting railroad tracks under continuous loading. The degradation of ballast can result in issues such as inadequate drainage, lateral instability, excessive settlement, and potential service disruptions, necessitating efficient evaluation methods to ensure safe and reliable railroad operations. The incorporation of computer vision techniques into ballast inspection processes has proven effective in enhancing accuracy and robustness. Given the data-driven nature of deep learning approaches, the efficacy of these models is intrinsically linked to the quality of the training datasets, thereby emphasizing the need for a comprehensive and meticulously annotated ballast aggregate dataset. This paper presents the development of a multi-dimensional ballast aggregate dataset, constructed using empirical data collected from field and laboratory environments, supplemented with synthetic data generated by a proprietary ballast particle generator. The dataset comprises both two-dimensional (2D) data, consisting of ballast images annotated with 2D masks for particle localization, and three-dimensional (3D) data, including heightmaps, point clouds, and 3D annotations for particle localization. The data collection process encompassed various environmental lighting conditions and degradation states, ensuring extensive coverage and diversity within the training dataset. A previously developed 2D ballast particle segmentation model was trained on this augmented dataset, demonstrating high accuracy in field ballast inspections. This comprehensive database will be utilized in subsequent research to advance 3D ballast particle segmentation and shape completion, thereby facilitating enhanced inspection protocols and the development of effective ballast maintenance methodologies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. An Image Processing-Based Correlation Method for Improving the Characteristics of Brillouin Frequency Shift Extraction in Distributed Fiber Optic Sensors.
- Author
-
Konstantinov, Yuri, Krivosheev, Anton, and Barkov, Fedor
- Subjects
OPTICAL fiber detectors ,SIGNAL processing ,BRILLOUIN scattering ,CURVE fitting ,IMAGE sensors ,OPTICAL time-domain reflectometry - Abstract
This paper demonstrates how the processing of Brillouin gain spectra (BGS) by two-dimensional correlation methods improves the accuracy of Brillouin frequency shift (BFS) extraction in distributed fiber optic sensor systems based on the BOTDA/BOTDR (Brillouin optical time domain analysis/reflectometry) principles. First, the spectra corresponding to different spatial coordinates of the fiber sensor are resampled. Subsequently, the resampled spectra are aligned by the position of the maximum by shifting in frequency relative to each other. The spectra aligned by the position of the maximum are then averaged, which effectively increases the signal-to-noise ratio (SNR). Finally, the Lorentzian curve fitting (LCF) method is applied to the spectrum with improved characteristics, including a reduced scanning step and an increased SNR. Simulations and experiments have demonstrated that the method is particularly efficacious when the signal-to-noise ratio does not exceed 8 dB and the frequency scanning step is coarser than 4 MHz. This is particularly relevant when designing high-speed sensors, as well as when using non-standard laser sources, such as a self-scanning frequency laser, for distributed fiber-optic sensing. [ABSTRACT FROM AUTHOR]
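The align-then-average step in the abstract above can be sketched in a few lines (illustrative only; the paper's resampling and Lorentzian curve-fitting stages are omitted, and the spectra are assumed to already share a common frequency grid):

```python
def align_and_average(spectra):
    """spectra: list of equal-length lists of Brillouin gain samples.
    Shift each spectrum so its maximum lines up with the first one's,
    then average column-wise to raise the SNR before curve fitting."""
    peaks = [s.index(max(s)) for s in spectra]
    ref = peaks[0]                      # align everything to the first peak
    aligned = []
    for s, p in zip(spectra, peaks):
        shift = ref - p
        # shift with edge padding so all spectra keep the same length
        if shift >= 0:
            aligned.append([s[0]] * shift + s[:len(s) - shift])
        else:
            aligned.append(s[-shift:] + [s[-1]] * (-shift))
    n = len(aligned)
    return [sum(col) / n for col in zip(*aligned)]

# two toy spectra whose peaks sit at different frequency indices
s1 = [0.0, 1.0, 5.0, 1.0, 0.0, 0.0, 0.0]
s2 = [0.0, 0.0, 0.0, 1.0, 5.0, 1.0, 0.0]
avg = align_and_average([s1, s2])
```

After alignment the two toy peaks coincide, so averaging reinforces the peak instead of smearing it, which is the SNR gain the method exploits.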
- Published
- 2024
- Full Text
- View/download PDF
45. Computer Vision Algorithms on a Raspberry Pi 4 for Automated Depalletizing.
- Author
-
Greco, Danilo, Fasihiany, Majid, Ranjbar, Ali Varasteh, Masulli, Francesco, Rovetta, Stefano, and Cabri, Alberto
- Subjects
OBJECT recognition (Computer vision) ,SINGLE-board computers ,INDUSTRIAL robots ,COMPUTER algorithms ,RASPBERRY Pi - Abstract
The primary objective of a depalletizing system is to automate the process of detecting and locating specific variable-shaped objects on a pallet, allowing a robotic system to accurately unstack them. Although many solutions exist for the problem in industrial and manufacturing settings, the application to small-scale scenarios such as retail vending machines and small warehouses has not received much attention so far. This paper presents a comparative analysis of four different computer vision algorithms for the depalletizing task, implemented on a Raspberry Pi 4, a very popular single-board computer with low computing power suitable for the IoT and edge computing. The algorithms evaluated include the following: pattern matching, the scale-invariant feature transform, Oriented FAST and Rotated BRIEF, and the Haar cascade classifier. Each technique is described and its implementation is outlined. The evaluation is performed on the task of box detection and localization in test images to assess each algorithm's suitability in a depalletizing system. The performance of the algorithms is given in terms of accuracy, robustness to variability, computational speed, detection sensitivity, and resource consumption. The results reveal the strengths and limitations of each algorithm, providing valuable insights for selecting the most appropriate technique based on the specific requirements of a depalletizing system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Frequency-Domain and Spatial-Domain MLMVN-Based Convolutional Neural Networks.
- Author
-
Aizenberg, Igor and Vasko, Alexander
- Subjects
CONVOLUTIONAL neural networks ,MACHINE learning ,IMAGE recognition (Computer vision) ,MATHEMATICAL optimization ,FOURIER transforms - Abstract
This paper presents a detailed analysis of a convolutional neural network based on multi-valued neurons (CNNMVN) and a fully connected multilayer neural network based on multi-valued neurons (MLMVN), employed here as a convolutional neural network in the frequency domain. We begin by providing an overview of the fundamental concepts underlying CNNMVN, focusing on the organization of convolutional layers and the CNNMVN learning algorithm. The error backpropagation rule for this network is justified and presented in detail. Subsequently, we consider how MLMVN can be used as a convolutional neural network in the frequency domain. It is shown that each neuron in the first hidden layer of MLMVN may work as a frequency-domain convolutional kernel, utilizing the Convolution Theorem. Essentially, these neurons create Fourier transforms of the feature maps that would have resulted from the convolutions in the spatial domain performed in regular convolutional neural networks. Furthermore, we discuss optimization techniques for both networks and compare the resulting convolutions to explore which features they extract from images. Finally, we present experimental results showing that both approaches can achieve high accuracy in image recognition. [ABSTRACT FROM AUTHOR]
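The frequency-domain view described above rests on the Convolution Theorem, which a short numerical check makes concrete (a generic 1-D illustration with a naive DFT, not the MLMVN code itself): circular convolution in the spatial domain equals element-wise multiplication of the spectra.

```python
import cmath

def dft(x, sign=-1):
    """Naive O(n^2) discrete Fourier transform (sign=+1 for the inverse core)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def idft(X):
    n = len(X)
    return [v / n for v in dft(X, sign=+1)]

def circ_conv(x, h):
    """Circular convolution, the spatial-domain counterpart."""
    n = len(x)
    return [sum(x[k] * h[(j - k) % n] for k in range(n)) for j in range(n)]

x = [1.0, 2.0, 3.0, 4.0]           # toy feature map
h = [1.0, 0.0, -1.0, 0.0]          # toy convolutional kernel
spatial = circ_conv(x, h)
spectral = [v.real for v in idft([a * b for a, b in zip(dft(x), dft(h))])]
```

The two result vectors agree to floating-point precision, which is exactly why a frequency-domain neuron multiplying spectra can stand in for a spatial convolution kernel.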
- Published
- 2024
- Full Text
- View/download PDF
47. Exploring Clique Transversal Variants on Distance-Hereditary Graphs: Computational Insights and Algorithmic Approaches.
- Author
-
Lee, Chuan-Min
- Subjects
GRAPH theory ,DYNAMIC programming ,TRANSVERSAL lines ,ALGORITHMS ,INTERSECTION graph theory - Abstract
The clique transversal problem is a central concept in graph theory, focused on identifying a minimum subset of vertices that intersects all maximal cliques in a graph. This problem and its variations—such as the k-fold clique, {k}-clique, minus clique, and signed clique transversal problems—have received significant interest due to their theoretical importance and practical applications. This paper examines the k-fold clique, {k}-clique, minus clique, and signed clique transversal problems on distance-hereditary graphs. Known for their distinctive structural properties, distance-hereditary graphs provide an ideal framework for studying these problem variants. By exploring these problems in the context of distance-hereditary graphs, this research enhances the understanding of the computational challenges involved and of the potential for developing efficient algorithms to address them. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. A System Design Perspective for Business Growth in a Crowdsourced Data Labeling Practice.
- Author
-
Hajipour, Vahid, Jalali, Sajjad, Santos-Arteaga, Francisco Javier, Vazifeh Noshafagh, Samira, and Di Caprio, Debora
- Subjects
SYSTEMS design ,CROWDSOURCING ,PAYMENT ,PARTICIPATION ,PROFITABILITY - Abstract
Data labeling systems are designed to facilitate the training and validation of machine learning algorithms under the umbrella of crowdsourcing practices. The current paper presents a novel approach for designing a customized data labeling system, emphasizing two key aspects: an innovative payment mechanism for users and an efficient configuration of output results. The main problem addressed is the labeling of datasets where golden items are utilized to verify user performance and assure the quality of the annotated outputs. Our proposed payment mechanism is enhanced through a modified skip-based golden-oriented function that balances user penalties and prevents spam activities. Additionally, we introduce a comprehensive reporting framework to measure aggregated results and accuracy levels, ensuring the reliability of the labeling output. Our findings indicate that the proposed solutions are pivotal in incentivizing user participation, thereby reinforcing the applicability and profitability of newly launched labeling systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. A Non-Smooth Numerical Optimization Approach to the Three-Point Dubins Problem (3PDP).
- Author
-
Piazza, Mattia, Bertolazzi, Enrico, and Frego, Marco
- Subjects
OPTIMIZATION algorithms ,ROBOTIC path planning ,TRAVELING salesman problem ,TRIGONOMETRY ,CURVATURE - Abstract
This paper introduces a novel non-smooth numerical optimization approach for solving the Three-Point Dubins Problem (3PDP). The 3PDP requires determining the shortest path of bounded curvature that connects given initial and final positions and orientations while traversing a specified waypoint. The inherent discontinuity of this problem precludes the use of conventional optimization algorithms. We propose two innovative methods specifically designed to address this challenge. These methods not only effectively solve the 3PDP but also offer significant computational efficiency improvements over existing state-of-the-art techniques. Our contributions include the formulation of these new algorithms, a detailed analysis of their theoretical foundations, and their implementation. Additionally, we provide a thorough comparison with current leading approaches, demonstrating the superior performance of our methods in terms of accuracy and computational speed. This work advances the field of path planning in robotics, providing practical solutions for applications requiring efficient and precise motion planning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. Precedence Table Construction Algorithm for CFGs Regardless of Being OPGs.
- Author
-
Lizcano, Leonardo, Angulo, Eduardo, and Márquez, José
- Subjects
PROBLEM solving ,GRAMMAR ,SIGNS & symbols ,ALGORITHMS ,LANGUAGE & languages - Abstract
Operator precedence grammars (OPG) are context-free grammars (CFG) that are characterized by the absence of two adjacent non-terminal symbols in the body of each production (right-hand side). Operator precedence languages (OPL) are deterministic and context-free. Three possible precedence relations between pairs of terminal symbols are established for these languages. Many CFGs are not OPGs because the operator precedence cannot be applied to them as they do not comply with the basic rule. To solve this problem, we have conducted a thorough redefinition of the Left and Right sets of terminals that are the basis for calculating the precedence relations, and we have defined a new Leftmost set. The algorithms for calculating them are also described in detail. Our work's most significant contribution is that we establish precedence relationships between terminals by overcoming the basic rule of not having two consecutive non-terminals using an algorithm that allows building the operator precedence table for a CFG regardless of whether it is an OPG. The paper shows the complexities of the proposed algorithms and possible exceptions to the proposed rules. We present examples by using an OPG and two non-OPGs to illustrate the operation of the proposed algorithms. With these, the operator precedence table is built, and bottom-up parsing is carried out correctly. [ABSTRACT FROM AUTHOR]
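As background for the sets the abstract redefines, a minimal sketch of the classical Left-set computation may help (a textbook fixed-point construction, not the authors' extended algorithm, and it does not cover the adjacent-non-terminal cases their redefinition targets): LEFT(A) collects the terminals that can appear as the first terminal of any string derivable from A.

```python
def left_sets(grammar):
    """grammar: dict mapping non-terminal -> list of production bodies,
    each body a list of symbols; a symbol is a non-terminal iff it is a
    key of the dict. Computes LEFT(A) = {t : A =>+ t... or A =>+ B t...}
    by iterating to a fixed point."""
    left = {A: set() for A in grammar}
    changed = True
    while changed:
        changed = False
        for A, prods in grammar.items():
            for body in prods:
                add = set()
                if body[0] in grammar:              # leading non-terminal
                    add |= left[body[0]]
                    if len(body) > 1 and body[1] not in grammar:
                        add.add(body[1])            # terminal right after it
                else:
                    add.add(body[0])                # leading terminal
                if not add <= left[A]:
                    left[A] |= add
                    changed = True
    return left

# the classic expression grammar: E -> E + T | T ; T -> T * F | F ;
# F -> ( E ) | id
grammar = {'E': [['E', '+', 'T'], ['T']],
           'T': [['T', '*', 'F'], ['F']],
           'F': [['(', 'E', ')'], ['id']]}
L = left_sets(grammar)
```

On this grammar the fixed point yields LEFT(F) = {(, id}, LEFT(T) adds *, and LEFT(E) adds +, which is the raw material from which precedence relations between terminal pairs are then derived.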
- Published
- 2024
- Full Text
- View/download PDF