1,098 results
Search Results
2. Breast Cancer Detection Using Machine Learning in Medical Imaging – A Survey.
- Author
-
P, Harsha Latha, Ravi, S., and A, Saranya
- Subjects
COMPUTER-assisted image analysis (Medicine) ,DIAGNOSTIC imaging ,EARLY detection of cancer ,MACHINE learning ,BREAST cancer - Abstract
Breast cancer (BC) is a significant disease in women and a leading cause of death worldwide. The analysis and diagnosis of breast cancer using medical imaging is a promising research area that supports diagnostic decision-making. This paper surveys recent research on breast cancer detection (BCD) using machine learning (ML) in medical imaging. The research articles are tabulated with their advantages, disadvantages, machine learning methods, the datasets used in their experiments, and the performance metrics obtained from their experimental results. Research challenges and suggestions for future work on breast cancer detection using ML in medical imaging are also discussed. This survey is especially helpful for beginners in this research field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. The current research status of solving blockchain scalability issue.
- Author
-
Lincopinis, Darllaine R. and Llantos, Orven E.
- Subjects
BLOCKCHAINS ,SCALABILITY ,ARTIFICIAL intelligence ,TECHNOLOGICAL innovations ,MACHINE learning ,BIG data - Abstract
Blockchain is an emerging technology alongside Big Data, Artificial Intelligence, and Machine Learning. It has disrupted industries such as health, education, manufacturing, and banking. However, the increasing popularity of Blockchain exposes the scalability issues of major public blockchain platforms (e.g., Bitcoin and Ethereum) and dramatically affects its development. The scalability problem manifests as low throughput, high transaction latency, and massive energy consumption. Several reviews and studies cover these factors and their potential solutions, yet they offer little information on actual application to real systems or projects. This study investigates all relevant papers on current research solutions for public blockchain scalability issues. The scope of this paper is to explore the implementation of different state-of-the-art scalability solutions in real systems and projects while highlighting the results. This study discusses the methods and techniques used and the challenges encountered that future researchers have yet to explore. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Comprehensive Empirical Study of Python JWT Libraries.
- Author
-
Shatnawi, Ahmed S., Al-Duwairi, Basheer, and Samarneh, Ala' A.
- Subjects
PYTHON programming language ,EMPIRICAL research ,DEAF children - Abstract
JSON Web Token (JWT) is a simple, compact way to share claims in a space-constrained environment. JWT is part of the interoperable JSON-based identity suite. Many libraries that provide JWT-based authentication and authorization exist. While the JWT standard itself is secure, some implementations fall short of it. This research paper delves into a comprehensive analysis of the prominent Python libraries utilized for JWT authentication. By meticulously examining these libraries, we aim to provide an in-depth understanding of their features and capabilities. Our investigation encompasses an enumeration of the distinct signing algorithms that are supported by each of these JWT Python libraries. To ensure the robustness and security of these libraries, we employ a multifaceted approach that utilizes various Static Application Security Testing (SAST) tools. These tools play a pivotal role in our assessment by not only evaluating the adherence of the codebase to the PEP8 standard but also by meticulously scanning for common security vulnerabilities and bugs that could potentially compromise the integrity of the authentication process. Our research goes beyond mere identification; we analyze each warning generated by the SAST tools, emphasizing those warnings that carry the greatest significance regarding potential security risks. Furthermore, our investigation extends to gauging the popularity and adoption of each library. To achieve this, we leverage GitHub statistics and harness the power of the Sourcegraph code search utility. By delving into these metrics, we gain a comprehensive view of the community's engagement, usage trends, and overall traction of each library. In summary, this paper thoroughly explores the landscape of JWT authentication in Python, encompassing library evaluation, security assessment, warning analysis, and popularity metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
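The HS256 signing flow that these surveyed libraries implement can be sketched with the Python standard library alone. This is an illustrative sketch of the JWT structure (RFC 7519), not a replacement for a vetted library such as PyJWT:

```python
# Minimal HS256 JWT sign/verify sketch using only the standard library,
# illustrating what the surveyed Python JWT libraries do internally.
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    # base64url without padding, as required by the JWT spec
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def encode_jwt(claims: dict, secret: str) -> str:
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url(sig)}"

def verify_jwt(token: str, secret: str) -> dict:
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    # constant-time comparison prevents timing side channels
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    pad = "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload + pad))
```

Note that real libraries also validate the header's `alg` field against an allow-list; skipping that check is exactly the kind of implementation flaw SAST tools help catch.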
5. Analysing The Patient Sentiments in Healthcare Domain Using Machine Learning.
- Author
-
Madan, Prof. Mamta, Madan, Ms. Rishima, and Thakur, Dr. Praveen
- Subjects
AFFECTIVE computing ,ARTIFICIAL intelligence ,PATIENT experience ,HEALTH facilities ,PATIENTS' attitudes - Abstract
Emotion AI deals with the sentiments of human beings across various domains. It is named emotion AI because the capabilities of AI are used to interpret and analyse human emotions. The objective of this paper is to learn from the experience of patients with healthcare facilities by studying and analysing patient sentiments using machine learning. This paper focuses on training the machine to read reviews given by patients who have used various healthcare facilities. The machine is trained to understand the polarity for each healthcare facility in terms of cleanliness, availability of doctors, interaction of doctors with patients, etc. The code is implemented in Python with the various libraries required for machine learning. It extracts the polarity and handles the emotions of the patients for the questions answered in the dataset. The paper helps patients decide which healthcare facility to choose based on the experiences of other patients. To this end, a goodness score for every healthcare facility is calculated and implemented using machine learning. It is a contribution of artificial intelligence and machine learning to the healthcare domain. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
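The polarity and goodness-score idea above can be illustrated with a minimal lexicon-based sketch. The lexicon and aggregation rule here are hypothetical; the paper's actual feature extraction and ML model are not specified in the abstract:

```python
# Hypothetical sketch: score patient reviews with a tiny polarity lexicon
# and aggregate a per-facility "goodness score" as the mean review polarity.
POLARITY = {"clean": 1, "helpful": 1, "good": 1, "dirty": -1, "rude": -1, "bad": -1}

def review_polarity(text: str) -> float:
    # average polarity of the lexicon words present in the review
    words = text.lower().split()
    scores = [POLARITY[w] for w in words if w in POLARITY]
    return sum(scores) / len(scores) if scores else 0.0

def goodness_scores(reviews_by_facility: dict) -> dict:
    # goodness score per facility = mean polarity over its reviews
    return {facility: sum(map(review_polarity, reviews)) / len(reviews)
            for facility, reviews in reviews_by_facility.items()}
```

A trained classifier would replace the fixed lexicon, but the aggregation into a per-facility score would look much the same.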
6. Employing Collective Intelligence at the IoT Edge for Spatial Decisions.
- Author
-
Spezzano, Giandomenico
- Subjects
SWARM intelligence ,MACHINE learning ,INTERNET of things ,COMPUTER performance ,EDGE computing - Abstract
This paper explores the synergies between collective intelligence and edge computing, presenting a novel paradigm that harnesses decentralized processing power for collaborative problem-solving. Edge devices, such as sensors and IoT devices, collect spatial data in real time from their local environments. They can incorporate machine learning algorithms to analyze spatial data, enabling quicker and more context-aware decision-making. Spatial clustering, a pivotal strategy in IoT edge computing, is examined to optimize localized data processing, enhance resource efficiency, and enable real-time analytics in decentralized environments. By leveraging the physical proximity of devices, spatial clustering contributes to the effectiveness and sustainability of IoT deployments at the edge. The paper introduces an innovative approach to adaptive spatial clustering by adopting swarm intelligence, drawing inspiration from the collective behavior of a flock of birds. Building upon the classical flock model of Reynolds, our extended model incorporates movement in a multi-dimensional space and introduces different types of birds. In this context, the birds serve as agents for discovering points with specific characteristics in a multidimensional space. The integration of swarm intelligence into spatial clustering presents a promising avenue for addressing the challenges of decentralized processing in edge computing environments, paving the way for more efficient and responsive IoT deployments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
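The classical Reynolds flocking rules that the extended model builds on (cohesion, separation, alignment) can be sketched in 2D. The weights and neighborhood radius here are illustrative; the paper generalizes the model to multi-dimensional spaces with several types of birds:

```python
# Sketch of one update step of a Reynolds-style boid model.
def step(positions, velocities, r=1.0, w_coh=0.01, w_sep=0.05, w_ali=0.05):
    new_vel = []
    for i, (p, v) in enumerate(zip(positions, velocities)):
        # neighbors within radius r (Manhattan distance for brevity)
        nbrs = [j for j, q in enumerate(positions)
                if j != i and abs(p[0] - q[0]) + abs(p[1] - q[1]) < r]
        if not nbrs:
            new_vel.append(v)
            continue
        cx = sum(positions[j][0] for j in nbrs) / len(nbrs)
        cy = sum(positions[j][1] for j in nbrs) / len(nbrs)
        coh = (cx - p[0], cy - p[1])                     # steer toward local center
        sep = (sum(p[0] - positions[j][0] for j in nbrs),
               sum(p[1] - positions[j][1] for j in nbrs))  # steer away from neighbors
        ali = (sum(velocities[j][0] for j in nbrs) / len(nbrs) - v[0],
               sum(velocities[j][1] for j in nbrs) / len(nbrs) - v[1])  # match velocity
        new_vel.append((v[0] + w_coh * coh[0] + w_sep * sep[0] + w_ali * ali[0],
                        v[1] + w_coh * coh[1] + w_sep * sep[1] + w_ali * ali[1]))
    positions[:] = [(p[0] + nv[0], p[1] + nv[1]) for p, nv in zip(positions, new_vel)]
    velocities[:] = new_vel
```

In the paper's adaptation, the birds act as agents that converge on regions of the data space with specific characteristics, turning this local-rule dynamic into a spatial clustering mechanism.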
7. Enhancing privacy in VANETs through homomorphic encryption in machine learning applications.
- Author
-
Ameur, Yulliwas and Bouzefrane, Samia
- Subjects
MACHINE learning ,INTELLIGENT transportation systems ,IN-vehicle computing ,INFRASTRUCTURE (Economics) ,VEHICULAR ad hoc networks ,PRIVACY ,K-nearest neighbor classification ,DATA privacy ,IMAGE encryption - Abstract
This paper presents a novel framework for enhancing privacy in Vehicular Ad Hoc Networks (VANETs) by integrating homomorphic encryption with machine learning applications. VANETs, essential for Intelligent Transport Systems (ITS), face significant challenges in privacy and security due to their highly dynamic and heterogeneous nature. Our framework addresses these challenges by employing a simplified but effective machine learning algorithm, the K-nearest neighbors (KNN), to ensure the security and privacy of the network. The flexibility of the framework allows for the incorporation of other machine learning algorithms, enhancing its adaptability and efficiency in various VANET scenarios. Key to this framework is the use of homomorphic encryption (HE), a cryptographic technique that enables computations on encrypted data without the need for decryption. This feature preserves data confidentiality and allows for secure third-party computations. Our paper discusses the evolution and types of homomorphic encryption, emphasizing the importance of Fully Homomorphic Encryption (FHE) for its ability to evaluate complex polynomial functions. The paper also highlights the different domains of cybersecurity concerns in VANETs, including in-vehicle systems, ad-hoc and infrastructure networks, and data analysis. The proposed framework aims to mitigate these vulnerabilities, particularly focusing on preventing common attacks like DoS and location tracking. A significant advantage of our approach is its general nature, making it applicable to various privacy issues in VANETs. We propose the potential integration of homomorphic encryption with other privacy-preserving techniques, such as differential privacy or secure multi-party computation, to enhance computation times while ensuring robust privacy protection. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
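A plaintext sketch of the KNN step in the framework above. In the proposed framework the distance computations would run homomorphically over encrypted feature vectors, so the server never sees the plaintext; the data and choice of k here are illustrative:

```python
# Plaintext KNN classifier sketch; squared Euclidean distance avoids the
# square root, which is awkward to evaluate under homomorphic encryption.
from collections import Counter

def knn_predict(train, labels, query, k=3):
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), y)
        for x, y in zip(train, labels)
    )
    top = [y for _, y in dists[:k]]
    return Counter(top).most_common(1)[0][0]
```

Comparisons and sorting are themselves expensive under FHE, which is one reason the paper considers combining HE with other privacy-preserving techniques to improve computation times.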
8. Navigating the Digital Landscape of Diabetes Care: Current State of the Art and Future Directions.
- Author
-
Gonçalves, Helena, Silva, Firmino, Rodrigues, Catarina, and Godinho, António
- Subjects
BLOOD sugar monitors ,CONTINUOUS glucose monitoring ,INSULIN pumps ,MACHINE learning ,GLYCEMIC control ,MEDICAL personnel ,DIABETES - Abstract
Diabetes mellitus remains a global health challenge, requiring innovative solutions for effective disease management. This paper offers a thorough analysis of diabetes technologies, highlighting their various roles in diabetes care. Through a review of the literature and analysis of emerging trends, we explore the multifaceted impact of technology on diabetes care. We investigate the key role of continuous glucose monitoring systems, insulin pumps and smart insulin pens in achieving optimal glycaemic control. The paper also evaluates the integration of artificial intelligence and machine learning algorithms in predictive modelling for early detection of glucose fluctuations, ultimately preventing diabetes-related complications. Additionally, it studies the potential of telemedicine and mobile applications in enhancing patient engagement and self-management. Moreover, the review covers advancements in closed-loop insulin delivery systems, offering insights into their clinical effectiveness and potential to revolutionize diabetes care. Ethical and privacy considerations related to the use of patient data in these technologies are discussed, emphasizing the importance of striking a balance between technological innovation and patient security. This paper's evidence synthesis underscores the increasing influence of diabetes technologies on patient outcomes, quality of life, and healthcare systems. It highlights the need for multidisciplinary collaboration between healthcare professionals, researchers and technology developers to ensure the seamless integration and accessibility of these tools to patients living with diabetes. This study serves as a valuable resource for clinicians, researchers, and policymakers, providing a comprehensive view of evolving diabetes technologies and their potential in the field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. ChatGPT based recommendation system for retail shops.
- Author
-
Duwadi, Saroj and Cautinho, Carlos
- Subjects
RECOMMENDER systems ,CHATGPT ,ARTIFICIAL intelligence ,SATISFACTION ,MACHINE learning - Abstract
The rapid growth of e-commerce platforms has emphasized the significance of personalized recommendation systems in enhancing user engagement and satisfaction. This research paper presents the development and evaluation of an innovative Product Recommendation System that leverages advanced Artificial Intelligence (AI) techniques to provide tailored product suggestions. The primary objective is to create a user-centric experience by integrating an AI assistant, enabling natural and interactive interactions. Through a comprehensive survey conducted to understand customer behaviour when purchasing products using AI, the study aims to assess the system's effectiveness in delivering accurate recommendations and providing a seamless purchasing experience. The paper contributes to the field by showcasing the practical implementation of AI-driven recommendation systems, highlighting their potential to transform e-commerce interactions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Towards an AI-Driven User Interface Design for Web Applications.
- Author
-
Costa, André, Silva, Firmino, and Moreira, José Joaquim
- Subjects
USER interfaces ,WEB-based user interfaces ,LANDSCAPE design ,NATURAL language processing ,WEB design ,ARTIFICIAL intelligence ,DESIGN techniques - Abstract
The increasing exploitation of Artificial Intelligence (AI) technologies has made integrating AI capabilities into user interface design crucial in the modern digital landscape. Exploring the main features and best practices for designing user interfaces for Web applications, which effectively support and leverage AI functionalities, is currently one of the relevant topics in this context. This research work discusses the fundamental principles of user interface (UI) design, and the challenges posed by the integration of AI into web applications. It emphasizes the need to strike a balance between the AI advanced capabilities and the users' ability to understand and control the system. Furthermore, the paper highlights the importance of creating intuitive and engaging UI designs that empower users to interact with AI-driven features effortlessly. The study presents a comprehensive analysis of various UI design techniques specifically tailored for AI-enabled web application user interfaces. Additionally, the paper explores the incorporation of AI-driven recommendation systems, personalized interfaces, and adaptive designs, which dynamically adapt to users' preferences and behavior. To validate the proposed user interface design principles, the study presents a proposal for a guidelines structure that promotes empirical evaluations through user studies and usability testing. Results were collected via a survey measuring the effectiveness of, and user satisfaction with, AI-enabled Web interfaces. User interfaces in real-life scenarios are presented, providing information on the impact of UI design decisions on user interaction and overall experience. The outcomes of this research work contribute to a deeper understanding of UI design for AI-supported Web application user interfaces and offer practical guidelines for designers and developers. 
By embracing the suggested principles, organizations and designers can create Web interfaces that effectively harness the power of AI while prioritizing user-centricity, accessibility, and ethical considerations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. A Smart System Facilitating Emotional Regulation in Neurodivergent Children.
- Author
-
Tejasvi, Prarthana and Kumar, Tarun
- Subjects
EMOTION regulation ,MACHINE learning ,REINFORCEMENT learning ,DIGITAL technology ,PSYCHOLOGICAL stress ,SCHOOL children - Abstract
This paper acknowledges the need for a user-centric solution that helps with emotional regulation and stress management in children with ADHD. The paper presents a unique and comprehensive solution that integrates Reinforcement Learning (RL) algorithms to enhance user experience and help children with ADHD regulate their emotions and behaviours through a reward-based system. Through careful analysis of existing literature and a user requirements assessment, a comprehensive framework that integrates machine learning algorithms with physical and digital solution components through a user-centric design approach has been proposed. The core objective is to design and develop a sensory regulation system specifically tailored to the requirements of children with ADHD. Through an engaging and impactful sensory regulation system, children can experience the social and academic aspects of school positively while also having the opportunity to expand their social circle through inclusive play environments, ultimately improving their daily experiences. This paper aims to address the pressing need for emotional regulation and stress management tools catering to children with ADHD. By incorporating Reinforcement Learning (RL) algorithms with reward-based interaction, this paper aims to solve critical challenges faced by children with ADHD, such as emotional regulation difficulties, stress management, poor social skills, and academic performance issues, so that they can lead more holistic lives. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
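The reward-based RL loop described above can be illustrated with a tabular Q-learning update. The states, action set, and reward signal here are hypothetical; the paper does not publish its exact algorithm:

```python
# Hypothetical sketch: a tabular Q-learning update where calming actions
# that reduce the child's measured stress earn positive reward.
ACTIONS = ("music", "fidget", "break")

def q_update(Q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    # standard Q-learning rule: Q <- Q + alpha * (r + gamma * max_a' Q' - Q)
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```

Over repeated interactions, the system would learn which regulation activity works best for each observed state, which is what makes the experience adaptive rather than fixed.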
12. Software Fault Prediction Using Optimal Classifier Selection: An Ensemble Approach.
- Author
-
Agrawalla, Bikash and Reddy, B Ramachandra
- Subjects
PLURALITY voting ,SYSTEMS software ,COMPUTER software ,VIDEO coding ,FORECASTING ,MACHINE learning - Abstract
Fault prediction is the process of using data analysis and machine learning models to anticipate potential defects or faults in a software system. Using only base machine learning models for software fault prediction leads to limited performance, difficulty handling non-linear relationships and imbalanced data, inadequate feature representation, and limited complexity handling. Hence, to overcome these challenges, this paper proposes a new technique for selecting the classifiers that form a heterogeneous ensemble. The main goal is to remove or trim out the classifiers that show poor performance compared to the other base classifiers, which can result in a more effective ensemble and better results. The algorithm proposed in this paper finds a set of classifiers that can perform better than using all the classifiers. A key challenge was identifying the poor-performing classifiers; this is addressed by experimenting with different threshold values to choose the trimmed set of classifiers. For evaluation of the proposed model, 8 different benchmark software fault datasets were used, taken from PROMISE and the Apache repository, with AUC as the performance measure. The results obtained after the experimental analysis demonstrate the effectiveness of our algorithm compared to the traditional approaches, which used all the base classifiers. There is a significant increase in the AUC values for 6 datasets out of 8, and using the average of probabilities and majority voting, there is improvement in 7 out of 8 datasets. The best-performing dataset using the average of probabilities is ARC, where the AUC value increases from 0.6505 to 0.694, and using majority voting, the best-performing dataset is XALAN, where the AUC value increases from 0.5455 to 0.679. 
From this, it can be seen that the proposed ensemble approach achieved higher AUC values for the tested datasets when compared to the base machine learning classifiers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Handling incomplete data using Radial basis Kernelized Intuitionistic Fuzzy C-Means.
- Author
-
Sethia, Kavita, Singh, Jaspreeti, and Gosain, Anjana
- Subjects
MULTIPLE imputation (Statistics) ,MISSING data (Statistics) ,MACHINE learning ,FUZZY clustering technique ,CENTROID ,FUZZY sets ,EUCLIDEAN metric ,METRIC spaces - Abstract
Missing data imputation is a critical task in the data pre-processing stage to ensure the quality, stability, and reliability of machine learning models. If missing values are imputed incorrectly, the result can be erroneous predictions and inconsistent model performance. Traditional imputation methods often struggle with complex data patterns exhibiting non-linearity and uncertainty. By integrating soft clustering principles, the proposed work provides a flexible framework for imputing missing data that takes into account the underlying inherent data pattern. To address these challenges, this paper presents a novel imputation technique called Radial Basis Kernel Intuitionistic Fuzzy C-Means Imputation (KIFCMI), which builds upon the standard Intuitionistic Fuzzy C-Means (IFCM) technique. KIFCMI explores centroid-based imputation using IFCM by incorporating an RBF kernel-induced metric in the data space as a replacement for the original Euclidean norm. By employing a kernel function, KIFCMI enables the clustering of data that lacks linear separability in the original space, allowing the formation of homogeneous clusters in a higher-dimensional space. The effectiveness of the proposed imputation technique is validated on ten diverse real-world datasets obtained from the public UCI repository with 10% and 20% missing data. The comparative analysis of the proposed technique, KIFCMI, is carried out against three conventional techniques, namely fuzzy c-means imputation (FCMI), kernelized fuzzy c-means imputation (KFCMI), and intuitionistic fuzzy c-means imputation (IFCMI). The experimental results using two performance measures, RMSE and MAE, showcase the robustness and versatility of the proposed technique over the compared imputation techniques. This research paper contributes to the evolving landscape of missing data imputation, offering insights into the practical applications of fuzzy clustering techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Air writing with Effective Communication Enhancement for Dyslexic Learners.
- Author
-
Shravya, Vattikuti, Revilla, Yaswitha, M, Sree Neha, and M., Supriya
- Subjects
DEEP learning ,CHILDREN with dyslexia ,PATTERN recognition systems ,IMAGE recognition (Computer vision) ,COMPUTER vision ,PEOPLE with dyslexia - Abstract
Air writing is a unique method that allows individuals to freely write or draw in the air without the need for sensor devices. One significant application addresses the challenges faced by dyslexic children, who often struggle with reading, writing, and spelling. Currently, there is a lack of tailored applications to guide dyslexic children through their learning process. This paper presents a groundbreaking solution designed to assist dyslexic individuals in mastering the art of writing alphabets, digits, and words. Using various hand gestures, children can effortlessly write any character, digit, or word on a virtual canvas and memorize them easily through the hand movements. This approach not only makes learning enjoyable but also boosts a child's confidence. Additionally, the application allows children to unleash their imagination by creating doodles. While existing air writing methods have limitations, such as reliance on devices and restricting content to digits and characters within predefined boundaries, these limitations have been overcome in our proposed approach. Leveraging affordable technologies, our system caters to the needs of dyslexic learners and serves as an interactive tool to enhance their writing skills. It also provides a means of communication for the deaf and mute community. Utilizing cutting-edge technologies such as machine learning, deep learning, computer vision, and web development, we have developed a practical application that seamlessly tracks finger movements and gestures, capturing trajectories and images for classification. Our model exhibits remarkable performance, achieving a training accuracy rate of 98.57% for digit recognition, 98.80% for character recognition, and 97.09% for doodle recognition. 
This paper contributes valuable insights into the development of an effective and engaging tool to improve learning skills for students, particularly dyslexic children, while also addressing communication barriers for the deaf and mute community. Our approach opens up new possibilities for inclusive and accessible learning experiences. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
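A hypothetical preprocessing sketch for the captured fingertip trajectories (the paper's exact pipeline is not given in the abstract): resample each trajectory to a fixed number of points and normalize it into the unit square before it is rendered on the virtual canvas and passed to the classifier:

```python
# Hypothetical sketch: scale a fingertip trajectory into the unit square
# and resample it to a fixed length so the classifier sees uniform input.
def normalize_trajectory(points, n=32):
    xs, ys = zip(*points)
    min_x, min_y = min(xs), min(ys)
    # uniform scale preserves the aspect ratio of the drawn shape
    span = max(max(xs) - min_x, max(ys) - min_y) or 1.0
    scaled = [((x - min_x) / span, (y - min_y) / span) for x, y in points]
    # simple index-based resampling to exactly n points
    step = (len(scaled) - 1) / (n - 1)
    return [scaled[round(i * step)] for i in range(n)]
```

Fixing the length and scale this way lets the same model handle characters, digits, and doodles drawn at different sizes and speeds.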
15. A Knowledge-Based Deep Learning Approach for Automatic Fake News Detection using BERT on Twitter.
- Author
-
Nair, Vinita, Pareek, Dr. Jyoti, and Bhatt, Sanskruti
- Subjects
DEEP learning ,FAKE news ,MACHINE learning ,NATURAL language processing ,DIGITAL technology ,TRANSFORMER models - Abstract
Fake news generation and propagation is a major challenge of the digital age, resulting in various social impacts, namely bandwagon, validity, and echo chamber effects, and deceiving the public with spam, misinformation, malicious content, and more. The widespread proliferation of fake news not only fosters misinformation but also undermines the credibility of news sources. The veracity of information is a major concern at all stages of generation, publication, and propagation. To address this pervasive problem, this research paper presents a framework for the automatic detection of fake news using a knowledge-based approach. An automatic fact-checking mechanism is applied using concepts from Information Retrieval (IR), Natural Language Processing (NLP), and graph theory. The knowledge base is generated from a Twitter dataset and contains four attributes: Subject-Predicate-Object (SPO) triplet, SPO sentiment polarity, SPO occurrence, and topic modeling. These attributes serve as pivotal indicators for the development of the knowledge base, subsequently employed to detect prevalent patterns and traits linked to deceptive or false information. We employ a Named Entity Recognition (NER) model to extract SPO triples and Latent Dirichlet Allocation (LDA) for topic modeling, thereby contributing to knowledge base generation. To evaluate the efficacy and efficiency of our proposed model, we utilize deep learning algorithms such as RNN, GRU, LSTM, GPT-3, and the BERT Transformer, which provide an acceptable level of accuracy. This research paper delivers valuable insights into addressing the proliferation of fake news on Twitter, employing data-driven approaches and advanced deep learning algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
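The knowledge-base lookup idea can be sketched with a toy schema (the schema and tolerance are ours, hypothetical): each entry stores an SPO triple with its running-mean sentiment polarity and occurrence count, and a claim is flagged when its triple is absent or contradicts the stored polarity:

```python
# Hypothetical sketch of the knowledge-base lookup used for fact checking.
def build_kb(triples_with_polarity):
    kb = {}
    for (s, p, o), pol in triples_with_polarity:
        entry = kb.setdefault((s, p, o), {"polarity": 0.0, "count": 0})
        entry["count"] += 1
        # running mean keeps the aggregate polarity per triple
        entry["polarity"] += (pol - entry["polarity"]) / entry["count"]
    return kb

def check_claim(kb, triple, polarity, tol=0.5):
    entry = kb.get(triple)
    if entry is None:
        return "unverified"
    return "consistent" if abs(entry["polarity"] - polarity) <= tol else "contradicted"
```

In the paper, the triples come from NER-based SPO extraction over the Twitter dataset, and topic modeling narrows which part of the knowledge base a claim is checked against.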
16. Software Fault Prediction Using FeatBoost Feature Selection Algorithm.
- Author
-
Medicharla, Sirisha, Kumar, Shubham, Devarakonda, Praphul, Agrawalla, Bikash, and Reddy, B Ramachandra
- Subjects
FEATURE selection ,COMPUTER software testing ,MACHINE learning ,ALGORITHMS ,SOFTWARE engineering ,SOFTWARE reliability - Abstract
Software fault prediction is a critical aspect of software engineering; it aims to identify and prevent errors in software systems before release that could cause failures or issues for users. Various techniques and tools have been developed to detect software faults, including static code analysis, dynamic testing, and machine learning-based approaches. In the past few years, there has been growing interest in the use of ML models for predicting software faults, as they can effectively analyse high-dimensional datasets and detect complex patterns that are difficult for human experts to find. However, developing accurate and reliable software fault detection models requires careful selection of data, feature engineering, and model evaluation. The purpose of this paper is to present a comprehensive analysis of potential applications and future research directions in the field of software fault detection. The study emphasizes the importance of identifying and addressing software faults to ensure the reliability and efficiency of software systems. Additionally, the paper outlines various approaches and techniques that can be employed for effective software fault detection. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. A TinyML Model for Gesture-Based Air Handwriting Arabic Numbers Recognition.
- Author
-
Lamaakal, Ismail, Makkaoui, Khalid El, Ouahbi, Ibrahim, and Maleh, Yassine
- Subjects
CONVOLUTIONAL neural networks ,MACHINE learning ,HANDWRITING - Abstract
In an era where the demand for efficient and practical machine learning (ML) solutions on resource-constrained devices is ever-growing, the realm of tiny machine learning (TinyML) emerges as a promising frontier. Motivated by the need for lightweight, low-power models that can be deployed on edge devices, this research paper presents an innovative TinyML model tailored to recognize Arabic hand gestures executed in mid-air. With a primary emphasis on the precise classification of Arabic numbers through these expressive hand movements, the paper unveils a comprehensive dataflow architecture. This intricate architecture processes accelerometer and gyroscope data to derive exact 2D gesture coordinates, a fundamental component of the recognition process. The cornerstone of the proposed model is the integration of Convolutional Neural Networks (CNNs), elucidating their exceptional role in achieving an impressive 93.8% accuracy rate in the classification of diverse Arabic number gestures. This remarkable level of precision underscores the model's efficacy and resilience, rendering it an ideal candidate for real-time deployment in various gesture recognition scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
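One step of the described dataflow, deriving coordinates from inertial samples, can be sketched as naive double integration. This is a hypothetical simplification; the paper's filtering and gyroscope fusion are not reproduced here:

```python
# Hypothetical sketch: double-integrate 2D acceleration samples over fixed
# time steps to recover a drawn gesture trajectory.
def integrate_positions(accels, dt=0.01):
    vx = vy = x = y = 0.0
    coords = []
    for ax, ay in accels:
        # acceleration -> velocity -> position (Euler integration)
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        coords.append((x, y))
    return coords
```

In practice raw double integration drifts quickly, which is why a real pipeline like the paper's fuses gyroscope data and filters the signal before the CNN classifies the resulting 2D gesture.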
18. A Brief Review of Energy Consumption Forecasting Using Machine Learning Models.
- Author
-
Eddaoudi, Zahra, Aarab, Zineb, Boudmen, Khadija, Elghazi, Asmae, and Rahmani, Moulay Driss
- Subjects
ENERGY consumption forecasting ,ENERGY consumption ,CONSUMPTION (Economics) ,MACHINE learning - Abstract
Energy consumption forecasting plays a pivotal role in modern resource management and sustainable development. This paper presents a concise overview of state-of-the-art techniques and methodologies employed in the field of energy consumption forecasting, with a particular emphasis on the application of Machine Learning (ML) models. The paper surveys recent advancements, addresses key challenges, and identifies promising directions for future research in this critical domain. By examining the current landscape of energy consumption forecasting through the lens of machine learning, this review aims to offer researchers and practitioners valuable insights and guidance for enhancing the accuracy and efficiency of energy consumption pattern prediction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. A Smart System Facilitating Emotional Regulation in Neurodivergent Children.
- Author
-
Tejasvi, Prarthana and Kumar, Tarun
- Subjects
EMOTION regulation ,MACHINE learning ,REINFORCEMENT learning ,DIGITAL technology ,PSYCHOLOGICAL stress ,SCHOOL children - Abstract
This paper acknowledges the need for a user-centric solution that helps with emotional regulation and stress management in children with ADHD. The paper presents a unique and comprehensive solution that integrates Reinforcement Learning (RL) algorithms to enhance user experience and aid children with ADHD to regulate their emotions and behaviours through a reward-based system. Through careful analysis of existing literature, and user requirements assessment, a comprehensive framework that integrates machine learning algorithms, physical and digital solution components through a user-centric design approach has been proposed. The core objective is to design and develop a sensory regulation system specifically tailored to the requirements of children with ADHD. Through the development of an engaging and impactful sensory regulation system, children can experience social and academic aspects of school positively while also having the opportunity to expand their social circle through inclusive play environments and ultimately improving their daily experiences. This paper aims to address the imminent need for emotional regulation and stress management tools catering to children with ADHD. By incorporating Reinforcement Learning (RL) algorithms with a reward-based interaction, this paper aims to solve critical challenges faced by children with ADHD, like emotional regulation difficulties, stress management, poor social skills, and academic performance issues so that they can lead more holistic lives. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Software Fault Prediction Using Optimal Classifier Selection: An Ensemble Approach.
- Author
-
Agrawalla, Bikash and Reddy, B Ramachandra
- Subjects
PLURALITY voting ,SYSTEMS software ,COMPUTER software ,VIDEO coding ,FORECASTING ,MACHINE learning - Abstract
Fault prediction is the process of using data analysis and machine learning models to anticipate potential defects or faults in a software system. Using only base machine learning models for software fault prediction leads to limited performance, difficulty in handling non-linear relationships and imbalanced data, inadequate feature representation, and limited complexity handling. Hence, to overcome these challenges, this paper proposes a new technique for selecting the classifiers that form a heterogeneous ensemble. The main goal is to remove or trim out the classifiers that perform poorly compared to the other base classifiers, which can result in a more effective ensemble and better results. The algorithm proposed in this paper finds a set of classifiers that can perform better than using all the classifiers. The challenge faced was how to identify the poor-performing classifiers; this was dealt with by experimenting with different threshold values to choose the trimmed set of classifiers. For evaluation of the proposed model, 8 different benchmark software fault datasets were used, taken from the PROMISE and Apache repositories, and AUC is used as the performance measure. The results obtained after the experimental analysis demonstrate the effectiveness of our algorithm compared to the traditional approaches, which used all the base classifiers. There is a significant increase in the AUC values for 6 datasets out of 8; using the average of probabilities and majority voting, improvement was seen in 7 out of 8 datasets. The best-performing dataset using the average of probabilities is ARC, where the AUC value increases from 0.6505 to 0.694, and using majority voting, the best-performing dataset is XALAN, where the AUC value increases from 0.5455 to 0.679.
From this, it can be seen that the proposed ensemble approach achieved higher AUC values for the tested datasets when compared to the base machine learning classifiers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
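The trimming step described in entry 20 can be sketched in a few lines: drop base classifiers whose validation AUC falls below a threshold, then combine the survivors by average of probabilities or majority voting. The classifier names, AUC values, and threshold below are illustrative, not the paper's.

```python
def trim_classifiers(val_auc, threshold):
    """Keep only classifiers whose validation AUC meets the threshold."""
    return [name for name, auc in val_auc.items() if auc >= threshold]

def average_of_probabilities(probs):
    """Combine per-classifier fault probabilities by averaging."""
    return sum(probs) / len(probs)

def majority_vote(labels):
    """Combine per-classifier 0/1 fault predictions by majority."""
    return int(sum(labels) > len(labels) / 2)

# Illustrative validation AUCs for five hypothetical base classifiers.
val_auc = {"NB": 0.55, "LR": 0.68, "RF": 0.72, "SVM": 0.66, "KNN": 0.52}
kept = trim_classifiers(val_auc, threshold=0.60)
```

The paper's contribution lies in choosing the threshold empirically; the combiners themselves are the standard ones named in the abstract.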
21. BiT5: A Bidirectional NLP Approach for Advanced Vulnerability Detection in Codebase.
- Author
-
GS, Prabith, M, Rohit Narayanan, A, Arya, R, Aneesh Nadh, and PK, Binu
- Subjects
NATURAL language processing ,COMPUTER software security - Abstract
In this research paper, a detailed investigation presents the utilization of the BiT5 Bidirectional NLP model for detecting vulnerabilities within codebases. The study addresses the pressing need for techniques enhancing software security by effectively identifying vulnerabilities. Methodologically, the paper introduces BiT5, specifically designed for code analysis and vulnerability detection, encompassing dataset collection, preprocessing steps, and model fine-tuning. The key findings underscore BiT5's efficacy in pinpointing vulnerabilities within code snippets, notably reducing both false positives and false negatives. This research contributes by offering a methodology for leveraging BiT5 in vulnerability detection, thus significantly bolstering software security and mitigating risks associated with code vulnerabilities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Applications of AI/ML in Maritime Cyber Supply Chains.
- Author
-
Diaz, Rafael, Ungo, Ricardo, Smith, Katie, Haghnegahdar, Lida, Singh, Bikash, and Phuong, Tran
- Subjects
REAL-time computing ,SUPPLY chains ,ARTIFICIAL intelligence ,SUPPLY chain management ,SHIPBUILDING ,MACHINE learning ,CYBER physical systems - Abstract
Digital transformation is a new trend that describes enterprise efforts to transition manual and likely outdated processes and activities to digital formats dominated by the extensive use of Industry 4.0 elements, including the pervasive use of cyber-physical systems to increase efficiency, reduce waste, and increase responsiveness. A new domain at the intersection of supply chain management and cybersecurity emerges as enterprise processes increasingly require the convergence and synchronization of resources and information flows in data-driven environments to support planning and execution activities. Protecting the information becomes imperative as big data flows must be parsed and translated into actions requiring speed and accuracy. Machine learning and artificial intelligence have become critical in supporting extensive data collection and real-time processing to assist decision-makers in configuring scarce resources. In this paper, we present four different applications that investigate issues related to the broader maritime supply chain security domain affecting the planning, execution, and performance of complex systems while exploring novel frontiers in cyber research and education. This paper focuses on Machine Learning and AI applications on Unmanned Aerial Systems and cryptography related to cybersecurity in the maritime and shipbuilding spheres. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Practical Aspects of Designing a Human-centred AI System in Manufacturing.
- Author
-
Yamamoto, Yuji, Muñoz, Alvaro Aranda, and Sandström, Kristian
- Subjects
ARTIFICIAL intelligence ,MANUFACTURING processes ,SOCIOTECHNICAL systems ,SYSTEMS design ,DESIGN science - Abstract
An increasing number of manufacturing companies have initiated designing and implementing AI systems in manufacturing, however, with limited success. Within our overarching research objective of establishing a methodology for the development of AI systems in manufacturing with socio-technical system consideration, this paper focuses on the early design phase of the development life cycle and aims to identify factors that are essential in this phase but whose importance has been less addressed in the manufacturing literature. To this aim, a case study was conducted adopting a design science approach. The case company was developing an ML-based anomaly detection system for a casting process. The researcher organised an AI system design workshop where participants from the company used the Human-AI design guidelines created by a leading large software company. The workshop enabled the participants to explore a wide range of design concerns. However, it also caused a confusing experience: participants had to deal with too many questions simultaneously without clear guidance. Analysing this negative experience led to identifying four design issues requiring further attention in the research. One example is that the interdependency of design decisions on operational procedures, human-machine interfaces, ML models, pre-processing, and input data makes it challenging to design these elements in isolation. The study found that a structured approach to dealing with the identified issues was currently lacking. This paper contributes to the manufacturing research community by highlighting practical details of designing AI systems in manufacturing and the key unresolved issues they reveal. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Machine Learning based calibration SDR in Digital Twin application.
- Author
-
Leiras, Valdemar, Dixe, Sandra, Azevedo, L. Filipe, Dias, Sérgio, Faria, Sérgio, Fonseca, Jaime C., Moreira, António H.J., and Borges, João
- Subjects
DIGITAL twins ,SOFTWARE radio ,MACHINE learning ,RADIO (Medium) ,RADIO frequency ,RADIO technology - Abstract
Software-defined radios are radio communication devices that have been growing and developing on a larger scale in recent years. Communications are intrinsically embedded in our day-to-day lives, which motivates the use of software-defined radios due to their attractive cost. However, they present technical limitations. This paper addresses one such problem: the non-linear behaviour of gain and frequency in the LimeSDR-USB. This equipment is used to produce an FM signal with an associated frequency and gain, parameterised according to the internal parameters of each software-defined radio. Each software-defined radio presents a frequency and gain response of its own, which affects the signals generated at the output. To compensate for this, machine learning networks were trained to adapt to the non-linearity of these devices and correct it without the user noticing. This way, the user sets a desired frequency and gain for the signal at the output of the software-defined radio, and a neural network calculates the values with which the software-defined radio should be parameterised, thus mitigating the non-linear behaviour. This paper presents the evaluation of a laboratory prototype based on low-cost commercial software-defined radio equipment, intended to replace expensive, metrologically calibrated equipment used for radio frequency tests on a new concept of industrial test station, with a description of the integration of Digital Twins, with their physical and virtual parts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. A Framework for Monitoring Stability of Tailings Dams in Realtime Using Digital Twin Simulation and Machine Learning.
- Author
-
Mwanza, Joseph, Mashumba, Peter, and Telukdarie, Arnesh
- Subjects
TAILINGS dams ,DIGITAL twins ,DIGITAL computer simulation ,MACHINE learning ,MINING engineering ,DAM failures ,GEOTECHNICAL engineering - Abstract
Tailings dam failures cause catastrophic impact on the environment and surrounding communities. Incidences of failure in the recent past have caused industrialists and researchers to seek innovative ways to proactively manage safety and disaster mitigation. Given the Industry 4.0 technologies now available, researchers are looking to develop digital tools for cost-effective, realtime monitoring of tailings dams. However, published literature indicates that a reliable framework is still lacking. This paper proposes a framework for developing a data-driven system for monitoring tailings dam stability and early warning detection. The framework relies upon digital twin simulation and machine-learning (ML) techniques, and comprises four main components: realtime data collection, digital twin modelling, ML-based early detection and prediction, and intelligence-driven decision support. Sensors gather real-time geophysical data from the monitored structure, and the digital twin uses this data to simulate dam behaviour. ML algorithms analyse the data and simulations to enable early detection of instability and failure prediction. Literature suggests that digital twin and ML-based approaches may have advantages over traditional monitoring techniques and other AI-based methods. The paper concludes with a discussion of the framework's limitations, opportunities for improvement, and potential for application in mining and geotechnical engineering. The paper serves as a basis for model development and future research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Digitization Workflow for Data Mining in Production Technology applied to a Feed Axis of a CNC Milling Machine.
- Author
-
Drowatzky, Lucas, Mälzer, Mauritz, Wejlupek, Kim A., Wiemer, Hajo, and Ihlenfeldt, Steffen
- Subjects
DATA mining ,NUMERICAL control of machine tools ,MILLING-machines ,MACHINE learning ,MINING engineering - Abstract
Condition monitoring and predictive maintenance applications receive ongoing scientific attention in production technology. Larger companies, especially machine and component manufacturers, already offer related products. Small and medium-sized enterprises (SMEs) in particular show interest in developing and offering solutions in this market themselves to gain economic advantages, to improve resource utilization of their machines, or to be able to offer these advantages to their own customers. In the development process, however, they often encounter problems already in the digitization of the machines. The first hurdle is to obtain an analysis-capable data set. This is because common and established general data mining development process models, such as CRISP-DM, do not focus on production technology, causing difficulties for engineers during deployment. A problem with existing process models is their limited practicality in the engineering domain due to restricted adaptability. In a previous paper, a guideline for engineers on digitizing production machines in a data-mining-suitable way was developed to solve these problems. The related results were provided in the context of a project for condition monitoring of mixing machines. In this paper, the proposed method is applied to components of a 5-axis CNC milling machine in three different monitoring use cases. A complete workflow is presented, including effect analysis, sensor selection, formulation of predictive scenarios, data preparation, training of machine learning algorithms, and visualization. Data and documentation are provided alongside this publication. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. AI-based Integrated Approach for the Development of Intelligent Document Management System (IDMS).
- Author
-
Pandey, Mrinal, Arora, Mamta, Arora, Shraddha, Goyal, Charu, Gera, Varun Kumar, and Yadav, Harsh
- Subjects
OPTICAL character recognition ,ARTIFICIAL intelligence ,RECORDS management ,NATURAL language processing ,ELECTRONIC data processing ,DIGITAL technology - Abstract
In the digital age, organizations confront the challenge of managing diverse documents efficiently while ensuring security, accuracy, and accessibility. Conventional document management approaches often fall short, leading to inefficiencies and increased costs. This paper introduces the Intelligent Document Management System (IDMS), which employs advanced technologies such as Artificial Intelligence (AI), Machine Learning (ML), Natural Language Processing (NLP), and Optical Character Recognition (OCR) to enhance document workflows. This research extends the capabilities of IDMS to encompass the extraction and processing of data from three important document types: medical bills, Aadhar cards, and PAN cards. The research and development efforts in this paper have concentrated on seamlessly integrating these models into the IDMS framework, offering a comprehensive solution for extracting and processing data from various document types. In this paper, two approaches, namely Easy OCR and a hybrid approach combining NLP (Regular Expression) and CV (OCR), have been applied and compared. The results revealed that the proposed hybrid approach (NLPCV) is better, with higher accuracies of 97%, 71%, and 78% for hospital invoices, Aadhar cards, and PAN cards, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
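The regular-expression half of the hybrid NLP+OCR approach in entry 27 can be illustrated with stdlib patterns for the two ID formats mentioned. The PAN pattern (five letters, four digits, one letter) and the 12-digit Aadhaar pattern follow the publicly documented formats; the paper's actual expressions are not given in the abstract, so these are assumptions about what such patterns might look like.

```python
import re

# Publicly documented ID formats; the paper's own patterns may differ.
PAN_RE = re.compile(r"\b[A-Z]{5}[0-9]{4}[A-Z]\b")
AADHAAR_RE = re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b")

def extract_ids(ocr_text):
    """Pull candidate PAN and Aadhaar numbers out of raw OCR output."""
    return {
        "pan": PAN_RE.findall(ocr_text),
        "aadhaar": AADHAAR_RE.findall(ocr_text),
    }

sample = "Name: A. Kumar  PAN: ABCDE1234F  Aadhaar: 1234 5678 9012"
ids = extract_ids(sample)
```

In the full pipeline these patterns would run over OCR output rather than clean text, which is where the reported accuracy differences between document types arise.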
28. Intelligent RGV Dynamic Scheduling Virtual Simulation Technology Based on Machine Learning.
- Author
-
Wang, Jianghan and Qi, Xiaojing
- Subjects
PARTICLE swarm optimization ,MACHINE learning ,OPTIMIZATION algorithms ,MODULAR construction ,SYSTEM failures ,MODULAR design - Abstract
With the development of workshop automation, the complexity of RGV (Rail Guided Vehicle) dynamic scheduling schemes using virtual simulation technology is increasing. For the widely valued intelligent machining systems, machine learning based optimization algorithms can effectively respond to increasingly complex RGV dynamic intelligent scheduling. In the whole model construction, how to complete the modular design of the intelligent processing system and optimize the solution is the key problem that urgently needs to be solved. This paper studied the use of particle swarm optimization to design the RGV dynamic scheduling model, aiming to improve the material processing production efficiency of RGV dynamic scheduling and reduce the system failure rate. Through problem modeling, solution, and simulation experiment analysis, this paper applied particle swarm optimization based on machine learning, combined with RGV structure modular design and task parameter test set samples. From the data results, the following conclusions can be drawn. In the context of an intelligent logistics system, the RGV dynamic scheduling model using particle swarm optimization had higher material processing production efficiency than the traditional scheduling method in all job test samples, with an average increase of 13.25%. Meanwhile, in terms of system failures, the optimization algorithm was better than traditional scheduling methods, with an average reduction of 4.6%. This shows that the RGV dynamic scheduling model based on particle swarm optimization has a better practical application effect. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
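The particle swarm optimizer at the core of the scheduling model in entry 28 can be sketched in its generic form. The objective below is a toy sphere function standing in for the paper's scheduling cost, and the swarm parameters are conventional defaults, not the paper's settings.

```python
import random

def pso_minimize(f, dim, iters=100, swarm=20, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic particle swarm optimization over the box [-5, 5]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                 # per-particle best position
    pbest_val = [f(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for a scheduling cost: sphere function, minimum at the origin.
best, best_val = pso_minimize(lambda p: sum(x * x for x in p), dim=2)
```

For the RGV problem the position vector would encode scheduling decisions and `f` would evaluate simulated throughput and failure rate rather than this analytic function.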
29. Analysis and Prediction of Differential Operation and Maintenance Cost of Power Transmission and Transformation.
- Author
-
Yang, Fan, Chen, Fulei, Zhao, Chen, Li, Jianqing, and Kang, Jian
- Subjects
MAINTENANCE costs ,PARTICLE swarm optimization ,ELECTRICITY pricing ,COST control ,BIG data ,MACHINE learning ,POWER transmission ,ECONOMIC conditions in China - Abstract
The operation and maintenance expenses of power transmission and transformation projects, a significant power supply carrier of the nation, continue to rise as a result of the sustained, rapid expansion of China's social economy and the rapid growth of the country's power demand. Power grid businesses are under considerable market pressure. To increase the level of lean management of the operation and maintenance costs of power transmission and transformation projects, power grid enterprises must significantly enhance their capacity to estimate these costs in advance. With the ongoing development of big data technology, machine learning algorithms are gradually being applied to the operation and maintenance cost prediction of power transmission and transformation projects, effectively increasing prediction accuracy. In this paper, by analyzing the variables affecting the differential operation and maintenance cost of power transmission and transformation projects, a scientific and reasonable investment analysis model is constructed using a random forest algorithm tuned by particle swarm optimization. The variables affecting the differential operation and maintenance cost of substations and transmission lines are obtained, and the trend of the prediction model is shown to be consistent with the actual situation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
30. Research on improving the key indicators of enterprise ESG rating.
- Author
-
Shi, Jiahao, Zhang, Chuhan, Wen, Jiaxin, Zhang, Zihan, and Wu, Tian
- Abstract
ESG ratings, as a metric for assessing corporate social responsibility, have garnered escalating interest from both domestic and international investors within China. However, critical indicators for augmenting ESG scores remain underexplored. Utilizing data from 2017 to 2020 on CSI 300 and CSI 500 constituent stocks, this study employs advanced machine learning algorithms, including XGBoost (XGB), LightGBM (LGB), and Random Forest (RF), to conduct a quantitative analysis of corporate feature values and their correlation with institutional rating outcomes. The findings indicate a predominant influence of corporate governance indicators, particularly in the realm of information disclosure, followed by social responsibility and environmental metrics. The paper further delineates specific, actionable short-term and long-term strategies that corporations can adopt across environmental, social, and governance dimensions, tailored to the nuances of various sub-indicators. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. A Brief Survey on Graph Anomaly Detection.
- Author
-
Song, Chengxi, Niu, Lingfeng, and Lei, Minglong
- Abstract
Graph anomaly detection (GAD) has been extensively studied in recent years. GAD aims to detect nodes, edges, and subgraphs that exhibit characteristics and distributions different from those of the majority of graph data. With the advancement of deep learning, many researchers have applied machine learning to address anomaly detection at various scales. In this paper, we classify GAD methods into detector-based and classifier-based approaches and provide a brief introduction and summary of relevant articles from the past three years. Finally, we analyze the challenges and future development directions in the field of GAD. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Real-time Tiredness Detection System using Nvidia Jetson Nano and OpenCV.
- Author
-
Florian, Nicolae, Popescu, Dumitru, and Hossu, Andrei
- Abstract
In this paper, we explore how to use a Nvidia Jetson Nano and Python to create a system that detects weariness in a person's face using computer vision and machine learning techniques. The system captures the person's face in real time, preprocesses the picture to reduce noise, then detects the face using Haar cascades. Next, using computer vision algorithms, characteristics associated with weariness, such as the state of the eyes, are extracted. These characteristics are then used to build a machine learning model that can predict weariness in the live stream feed. Lastly, the findings are shown using a graphical user interface, and the system may be fine-tuned to increase accuracy. This device might be used in applications such as driver monitoring and traffic safety. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
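One widely used eye-based weariness feature is the eye aspect ratio (EAR) over six eye landmarks, which drops toward zero as the eye closes over consecutive frames. The abstract of entry 32 does not name the authors' exact feature, so EAR here is a representative stand-in, and the landmark coordinates are invented for illustration.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(eye):
    """EAR over six eye landmarks (p1..p6): the two vertical gaps divided
    by twice the horizontal gap. Sustained low values suggest closed eyes."""
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Hypothetical landmark sets: a wide-open eye and a nearly closed one.
open_eye   = [(0, 0), (1, 3), (2, 3), (3, 0), (2, -3), (1, -3)]
closed_eye = [(0, 0), (1, 0.3), (2, 0.3), (3, 0), (2, -0.3), (1, -0.3)]
```

In a live system the landmarks would come from the face-detection stage, and a threshold on EAR over a window of frames would feed the weariness classifier.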
33. Enhancing IoT Security: Effective Botnet Attack Detection Through Machine Learning.
- Author
-
Zhukabayeva, Tamara, Zholshiyeva, Lazzat, Ven-Tsen, Khu, Adamova, Aigul, Mardenov, Yerik, and Karabayev, Nurdaulet
- Subjects
CYBERTERRORISM ,RANDOM forest algorithms ,MACHINE learning ,INTERNET of things ,BOTNETS ,ALGORITHMS - Abstract
One of the most dangerous threats in WSNs is botnet attacks, in which attackers use mutual communications between IoT devices to launch large-scale malicious activities. In this regard, developments in the field of effective and reliable means of defence against this type of threat, in particular reliable methods for detecting, identifying, and countering botnet attacks, are becoming increasingly important and relevant. This paper presents a comprehensive study that applies machine learning techniques, namely Random Forest and XGBoost, to effectively identify botnet attacks on the IoT. These algorithms are analyzed, compared, and shown to be highly effective in detecting complex patterns indicative of botnet activity, thus achieving a significant improvement in IoT security. The conducted research aims to make a useful contribution to the problem of securing WSNs and the IoT in general. The study demonstrated high accuracy in detecting attacks: 99.18% for XGBoost, while Random Forest showed an accuracy of 99.21%. Thus, it was shown that machine learning techniques such as Random Forest and XGBoost can be among the key approaches in combating botnet attacks and securing the IoT. The results of the work emphasize the promise of machine learning techniques for effective defense against cyber threats and highlight the importance of further research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Forecasting Future Behavior: Agents in Board Game Strategy.
- Author
-
Damette, Nathan, Szymanski, Maxime, Mualla, Yazan, Tchappi, Igor, Najjar, Amro, and Adda, Mehdi
- Subjects
REINFORCEMENT learning ,MACHINE learning ,ARTIFICIAL intelligence ,BOARD games ,DEEP learning - Abstract
This paper presents findings on machine learning agent behavior prediction in a board game application developed by a group of students. The goal of this research is to create a model facilitating collaboration between a user and an AI to play together in the board game using a Human-in-the-Loop architecture. By injecting explainability, the aim is to enhance communication and understanding between the user and the AI agent. Featuring a competitive Artificial Intelligence (AI) based on the Proximal Policy Optimization model, this research explores methods to make AI decisions transparent for enhanced player understanding. Two predictive models, a Decision Tree (DT) and a Deep Learning (DL) classifier, were developed and compared. The results show that the DT model is effective for short-term predictions but limited in broader applications, while the DL classifier shows potential for long-term prediction without requiring direct access to the game's AI. This study contributes to understanding human-AI interaction in gaming and offers insights into AI decision-making processes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Security threat detection performance analysis of a distributed architecture WSN.
- Author
-
Arreaga, Nestor X., Estrada, Rebeca, Blanc, Sara, Noboa, Andres, and Vera, Nelson
- Subjects
COMPUTER network traffic ,ARTIFICIAL neural networks ,WIRELESS sensor networks ,MACHINE learning ,DATA entry - Abstract
IoT technologies are becoming more and more common in our daily activities because the networks they create are capable of collecting information, monitoring and controlling remotely. However, these devices are not exempt from security attacks, as they become vulnerable entry points to data networks. The use of traditional methods to secure networks (e.g., Next Generation Firewalls (NGFW), encryption, etc.) is not recommended because the devices used in this type of network are limited in terms of computing power and storage availability (e.g., nodeMCU). In this paper, we propose to design two intrusion detection systems in embedded systems using machine learning (ML) algorithms, Artificial Neural Networks and K-means. In a distributed architecture Wireless Sensor Network scenario (WSN), we evaluate their performance in terms of connection and response times, detection accuracy and intruder detection time. Simulation results show that both models are able to find irregularities in network traffic within milliseconds. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
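Of the two detectors evaluated in entry 35, K-means is simple enough to sketch with the stdlib: cluster the features of normal traffic, then flag new samples whose distance to the nearest centroid exceeds a threshold. The 2D features, data points, and threshold below are illustrative, not taken from the paper.

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm; returns the k centroids."""
    rng = random.Random(seed)
    cents = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, cents[c]))
            groups[i].append(p)
        for i, g in enumerate(groups):
            if g:  # keep the old centroid if a cluster empties out
                cents[i] = tuple(sum(x) / len(g) for x in zip(*g))
    return cents

def is_intrusion(sample, cents, threshold):
    """Flag samples far from every centroid of normal traffic."""
    return min(math.dist(sample, c) for c in cents) > threshold

# Illustrative normalised 2D features (e.g. packet rate, payload size).
normal = [(0.1, 0.1), (0.12, 0.09), (0.9, 0.9), (0.88, 0.91)]
cents = kmeans(normal, k=2)
```

On a constrained node such as a nodeMCU, only `is_intrusion` needs to run online; the centroids can be fitted offline, which is part of why clustering suits this setting.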
36. One Hop Routing Optimization Approach Using Machine Learning.
- Author
-
Kaja, Siddardha, Shakshuki, Elhadi, Malik, Haroon, and Yasar, Ansar
- Subjects
AD hoc computer networks ,DATA packeting ,MACHINE learning ,COMPUTER performance ,MOBILE learning - Abstract
Every data packet must pass through a few intermediate nodes to reach its destination. Among other reasons, the tremendous growth in internet devices encourages those intermediate nodes to drop data packets. Optimizing the data packet route is an effective way to deal with packet loss. Advanced machine learning approaches have been identified as a powerful support tool for routing optimization in node networks. Cloud computing has kept pace with the continuously developing hardware infrastructure; improved connectivity, processing power, and memory enable real-time machine learning. This paper proposes and evaluates an approach, called the Cloud Acknowledgement Scheme (CAS), for optimizing the packet path by one hop using intermediate nodes as a backup. It offers information on the transmission trend and the tendencies of certain adjacent nodes or groups of neighboring nodes in a network. The proposed CAS has been validated via a series of machine learning experiments using real-world node data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
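The one-hop idea in entry 36, using each neighbour's observed transmission trend to pick a backup relay, can be reduced to a toy selection rule. The smoothing scheme, node names, and outcome sequences below are illustrative; the paper's CAS learns these tendencies with machine learning rather than a fixed table.

```python
def update_delivery_rate(history, node, delivered, alpha=0.2):
    """Exponentially smoothed per-neighbour delivery rate
    (a simple stand-in for the learned transmission trend)."""
    old = history.get(node, 0.5)
    history[node] = (1 - alpha) * old + alpha * (1.0 if delivered else 0.0)

def pick_one_hop(history, neighbours):
    """Choose the backup relay with the best observed delivery tendency."""
    return max(neighbours, key=lambda n: history.get(n, 0.5))

history = {}
for outcome in [True, True, True, False]:   # n1 mostly delivers
    update_delivery_rate(history, "n1", outcome)
for outcome in [False, False, True]:        # n2 mostly drops
    update_delivery_rate(history, "n2", outcome)
relay = pick_one_hop(history, ["n1", "n2"])
```

A learned model would replace the smoothed table with predictions conditioned on richer features, but the selection step stays the same.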
37. Artificial Intelligence Models: A literature review addressing Industry 4.0 approach.
- Author
-
Castro, Hélio, Câmara, Eduardo, Ávila, Paulo, Cruz-Cunha, Manuela, and Ferreira, Luís
- Subjects
ARTIFICIAL intelligence ,LITERATURE reviews ,INFORMATION technology ,INDUSTRY 4.0 ,DEEP learning - Abstract
Industry 4.0 has brought modernization to the production system through the network integration of the constituent entities which, combined with the evolution of information technology, has enabled an increase in productivity, product quality, optimization of production costs, and product customization to customer needs. Despite the complexity of human thought, artificial intelligence tries to replicate it in algorithms, creating models capable of processing databases with a high volume of information and generating valuable information for decision making. Within this area, there are subfields, such as Machine Learning and Deep Learning, which, through mathematical models, define patterns to predict output data from known input data. In addition to this type of algorithm, there are metaheuristic models capable of optimizing the parameters required in Machine Learning and Deep Learning algorithms. These intelligent systems have applications in various areas such as industry, construction, health, logistics processes, and maintenance management, among others. This paper focuses on Artificial Intelligence models addressing the Industry 4.0 approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Image-based Lung Analysis in the Context of Digital Pathology: a Brief Review.
- Author
-
Shahrabadi, Somayeh, Carias, João, Peres, Emanuel, Magalhães, Luís G., Guevara López, Miguel A., Silva, Luís Bastião, and Adão, Telmo
- Subjects
ARTIFICIAL intelligence ,COMPUTER vision ,LUNGS ,IMAGE analysis ,PATHOLOGY ,DEEP learning - Abstract
Lung cancer is the 2nd most diagnosed cancer worldwide. The corresponding histopathological analysis, being both costly and time-consuming, demands the commitment of skilled professionals who, while engrossed in this task, experience constraints on their ability to attend to other crucial responsibilities. Moreover, as it is a human-driven process, mistakes may lead to incorrect diagnosis and treatment. Given the disease frequency and mortality, automated diagnostic systems, using Artificial Intelligence (AI), represent valuable improvements in diagnostic timing and overall performance. Recently, Deep Learning (DL) has been widely used for extracting features from histopathologic images approaching more accurate and expeditious analysis. With this line of research in mind, a brief review of recent technical/scientific works within the scope of lung Digital Pathology (DP) image analysis is provided in this paper, covering different computer vision tasks including classification, segmentation, and detection. Furthermore, available datasets and open-source annotation tools capable of providing support to the aforementioned DP-related tasks are also overviewed. Afterward, a summary table and a discussion around the reviewed approaches is provided, consolidating critical information such as technique/DL architecture, involved datasets, metrics, etc. From this study, it was observed that the ARA-CNN technique achieved the highest Area Under the Curve (AUC) ranging from 0.72 to 0.99 for classification. On the other hand, the multimodal-based approach, with an AUC of 0.95, performed better for the segmentation task. As for the detection task, the BCNN approach stood out, achieving a high AUC of 0.988. This review work aims to provide a comprehensive overview of recent advancements in lung DP image analysis and serves as a foundation for future research in this critical area. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Port request classification automation through NLP.
- Author
-
Beecher Martins, Samuel António, Garrido, Nuno, and Sebastião, Pedro
- Subjects
AUTOMATIC classification ,RANDOM forest algorithms ,DECISION trees - Abstract
This paper describes a suggested prototype to carry out the automatic classification of requests from a Port Help Desk. It intends to ascertain whether the implementation of this framework is viable for this sector. For this purpose, different models were employed, such as SVM, Decision Tree, Random Forest, LSTM, BERT, and a hierarchical SVM model. To verify their efficiency, these models were evaluated using Precision, Recall, and F1-Score metrics. We obtained F1-Scores of 94.36% and 92.48% when classifying the request's category and group, respectively. An F1-Score of 93.41% was obtained with an SVM model for category classification when employing a hierarchical classification architecture. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. CalAid – A mobile fitness application using machine learning for tracking energy expenditure.
- Author
-
Asoltanei, Maria and Vasilățeanu, Andrei
- Subjects
MOBILE apps ,MACHINE learning ,MOTION detectors ,METABOLIC equivalent ,PHYSICAL fitness mobile apps ,POPULATION aging - Abstract
This paper presents the design, implementation, and validation of a novel fitness mobile application called CalAid, aimed at promoting an active lifestyle in the context of ever-growing sedentarism in populations of all ages. The application uses motion sensors present in most modern smartphones, such as the 3-axial accelerometer, the gyroscope, and the step counter, to track energy expenditure and obtain a more precise approximation of consumed calories. By using machine learning, the application can recognize the activities performed by the user and, using metabolic equivalents, estimate the energy expenditure. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Adopting Artificial Intelligence in a Decision Support System – Learnings from Comment Moderation Systems.
- Author
-
Riehle, Dennis M., Wolters, Anna, and Müller, Kilian
- Subjects
DECISION support systems ,ARTIFICIAL intelligence ,NATURAL language processing ,VIRTUAL communities ,DESIGN science ,MACHINE learning ,DEEP learning - Abstract
Enterprise Information Systems (EIS) comprise components for decision-making in an organization. While traditional Decision Support Systems (DSS) rely on carefully designed decision models, this approach has its limitations when it comes to making decisions based on large amounts of unstructured data. In this paper, we present the case of online community management, where moderators need to decide whether user content (i.e., comments) can be published. We have implemented a moderation platform that utilizes Natural Language Processing (NLP) and Machine Learning (ML) to support moderators in their decision-making. From the development process and adoption of our platform, which were carried out as a Design Science Research (DSR) project, we have derived six design principles that assist in designing ML-based DSS. Our results imply that an ML-based DSS should be implemented as an open, customizable system, where decisions are made transparent and interpretable to users. Users need special onboarding and should always have the possibility to overrule the system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Using machine learning to understand driving behavior patterns.
- Author
-
Valente, Jorge, Ramalho, Cláudia, Vinha, Pedro, Mora, Carlos, and Jardim, Sandra
- Subjects
PYTHON programming language ,MACHINE learning ,MOTOR vehicle driving ,SUPPORT vector machines ,PROGRAMMING languages ,SCIENTIFIC computing - Abstract
Driver behavior is one of the principal factors associated with road accidents. In much research to date, machine learning technology has been successfully applied to identifying driving styles and recognizing unsafe behaviors. In this paper, the development of an Android mobile application, DriverAlert, is described, with the aim of collecting data from mobile phone sensors to identify certain patterns and understand drivers' behaviors. Additional information regarding weather and traffic was recorded using public APIs, to complement the data collected directly from the vehicle. Four machine learning models (K-Means, Agglomerative Hierarchical clustering, Random Forest, and Support Vector Machines) were tested and compared to identify different driver profiles. The native mobile application was developed to collect data and make it available, through an online dashboard, to drivers and researchers. Due to its available tools and libraries, the Python language was used, as it is a powerful programming language for workloads in data science, machine learning, and scientific computing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. Predictive Monitoring of Resources Consumption in Project Management.
- Author
-
Laurent-Burle, Guillaume and Quafafou, Mohamed
- Subjects
PROJECT management ,MACHINE learning - Abstract
Monitoring resource consumption is a critical concern in project management, as it can significantly impact a project's success. Project management experts often rely on their experience and heuristics to navigate management challenges and adhere to predefined constraints. However, relying solely on experts' judgments can lead to disagreements and biases, hindering consensus and optimal decision-making. In this context, a data-driven approach leverages accumulated data and machine learning to develop unbiased models suitable for automation, including prediction, classification, anomaly detection, and more. In this paper, we propose a method for predicting resource consumption, encompassing expected, unplanned, and surpassing tasks. Our experiments demonstrate strong predictive performance using real-world datasets across various task types. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Proof of concept: a practical approach to machine learning in video consultations' acceptance in higher education institutions.
- Author
-
Tavares, Jorge and Pereira, Filipe Viana
- Subjects
UNIVERSITIES & colleges ,PROOF of concept ,ARTIFICIAL intelligence ,STATISTICAL learning ,RANDOM forest algorithms ,MACHINE learning ,MEDICAL telematics - Abstract
Video consultations and telemedicine play a key role in healthcare. It is crucial to assist institutions in identifying which patients are more likely to use them, increasing the adoption rate. Even though Artificial Intelligence models are important, certain cases require a human-centric approach to inform patients and doctors, ensuring trust in the tools provided. This can be achieved through interpretable Machine Learning models and Statistical Analysis. In this paper we show that, even though the accuracy of the Machine Learning models used (Decision Trees, Logistic Regression, and Random Forest) was between 84% and 89% on the training set and between 69% and 76% on the test set, the models lacked a good ability to identify the users. Additional statistical analysis then evidenced that the main driver for using video consultations, in a sample of participants with higher-than-average education, is the presence of one or more chronic diseases. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Phylogenetic modelling scripts for identifying script versions.
- Author
-
Salman, Osama A. and Hosszú, Gábor
- Subjects
PHYLOGENETIC models ,SCRIPTS ,FEATURE selection ,DIGITAL technology ,STONE ,MACHINE learning - Abstract
As the digital and physical worlds connect, machine-learning tools are increasingly being used by businesses and institutions, including archaeological research institutes and museums. The article presents research that, based on the evolution of human scripts, provides a tool for identifying script variants used in old manuscripts and stone inscriptions. This makes the work of paleography institutes and individual researchers more efficient. Using this research, it is possible to provide some information about millions of hitherto unidentified inscriptions using machine learning. The developed and tested method applies the biological phylogenetics tool to script evolution, an area that has yet to be investigated with machine learning tools. The final aim of the research is to use these tools to clarify the connections between many Arabic, Aramaic and Central Iranian script variants. The described procedure gives a new solution to one of the fundamental issues of phylogenetic analysis, feature selection, the effectiveness of which is verified with test runs. The paper presents a new feature selection method to identify the strength or effect of features before phylogenetic processing. This type of dimensionality reduction removes unnecessary features from the dataset for these analyses and interpretations. The method was applied to some versions of Arabic, Aramaic and Middle Iranian scripts, but the procedure is not script-specific. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Simple CNN as an alternative for large pretrained models for medical image classification - MedMNIST case study.
- Author
-
Wilhelmi, Maciej and Rusiecki, Andrzej
- Subjects
IMAGE recognition (Computer vision) ,COMPUTER-assisted image analysis (Medicine) ,MEDICAL coding ,DIAGNOSTIC imaging ,MACHINE learning - Abstract
For real-life image classification problems, publicly available pretrained deep convolutional neural networks (CNNs) are often applied. Families of architectures such as ResNet, VGG, or Inception are usually the first choice because they are widely used and can provide satisfying results. Though universal, such approaches can be too sophisticated and computationally expensive when fitted to certain medical image classification tasks, especially those where images are not varied and do not contain the features learnt by the models in the pretraining phase. The question of whether simplified models, dedicated to such tasks, can be designed is of great interest. We performed a case study, using three datasets from the MedMNIST data collection, to test neural network classifiers with simple architectures. This paper presents simple convolutional neural network architectures that, on the considered datasets, perform similarly to much more sophisticated pretrained models and automated ML environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Interpretable Success Prediction in a Computer Networks Curricular Unit Using Machine Learning.
- Author
-
de Oliveira, Catarina Félix, Sobral, Sónia Rolland, Ferreira, Maria João, and Moreira, Fernando
- Subjects
MACHINE learning ,AT-risk students ,SCHOOL dropouts ,UNIVERSITIES & colleges ,COMPUTER networks ,DATA mining - Abstract
Today, higher education institutions are focused on understanding which factors are associated with the failure or success of students so that, early on, they can implement measures to reduce students' low performance and even dropout. The retention rate is influenced, both positively and negatively, by factors belonging to several dimensions (personal, environmental, and institutional). We aim to use information from those dimensions to identify students enrolled in a Computer Networks course who are at risk of failing the subject. Moreover, this needs to happen as early as possible, in order to provide the students, for example, with extra support or resources to try to prevent that negative outcome. For predicting the grade level on the first test, the best accuracy obtained was 55%. However, most C-level grades were correctly classified, with 63% accuracy in predicting the students most at risk of failing, which is one of our main objectives. As for the prediction of the second test's grade level, the best accuracy obtained was 89% and concerned data regarding the students' interaction with the LMS together with the students' grade history. All the C-level grades were correctly classified (100% accuracy), so we were able to correctly predict every student at a high risk of failing. Using the procedure described in this paper, we are able to anticipate which students need extra support and provide them with different resources, to try to prevent their negative outcome. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. NLP in SMEs for industry 4.0: opportunities and challenges.
- Author
-
Bourdin, Mathieu, Paviot, Thomas, Pellerin, Robert, and Lamouri, Samir
- Subjects
LANGUAGE models ,SMALL business ,INDUSTRY 4.0 ,NATURAL language processing ,LITERATURE reviews ,SCIENTIFIC literature - Abstract
Natural Language Processing is the field of Computer Science that focuses on analyzing and processing natural language, mainly human text or speech. Recent trends in Natural Language Processing have led to the development of Large Language Models (LLMs): huge models trained on high amounts of data that achieve unprecedented performance in many tasks, such as answering questions, summarizing texts, or coding. These new tools have a wide range of applications and are being developed by many companies. However, Small and Medium Enterprises (SMEs) struggle to implement these new technologies, mainly because of a lack of resources. This paper aims to show the opportunities and challenges related to NLP-based solutions in SMEs, based on a literature review. The main result is that NLP-based solutions have a wide range of applications in various companies, including SMEs, and may lead to many changes. However, there are still many obstacles to developing these tools in SMEs: SMEs lack the specialized know-how to develop these solutions and often do not have standardized data. Moreover, there is almost no support for SMEs in the scientific literature to develop these tools. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. A Method for Extracting BPMN Models from Textual Descriptions Using Natural Language Processing.
- Author
-
Licardo, Josip Tomo, Tanković, Nikola, and Etinger, Darko
- Subjects
NATURAL language processing ,LANGUAGE models ,MACHINE translating ,BUSINESS process modeling ,DEEP learning ,MACHINE learning ,GENERATIVE pre-trained transformers - Abstract
Business Process Model and Notation (BPMN) is a standard for formally modeling complex business processes. Manual creation of BPMN models can be time-consuming and error-prone, prompting a need for automation. Existing approaches, such as rule-based methods, machine learning, and machine translation, have progressed but face accuracy and real-world applicability challenges. In this research paper, we propose a novel method for automated extraction of BPMN models from textual descriptions using natural language processing (NLP) tools and deep learning models, including the spaCy library for text processing, a fine-tuned BERT model, and state-of-the-art large language models like GPT-3.5-Turbo and GPT-4. We utilize Graphviz, an open-source graph visualization software, to visualize the extracted processes. Our method supports representing tasks, exclusive gateways, parallel gateways, and start and end events in the generated BPMN models. The evaluation of 31 textual descriptions shows that our method generates process models with 96% accuracy using GPT-4 and 80% accuracy using GPT-3.5-Turbo large language models. Although subject to certain limitations, such as occasional inaccuracies in model outputs and reliance on well-formed input text, our approach offers a valuable contribution to the growing body of research on automating BPMN model generation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. Using sentiment analysis to assess PMBOK knowledge areas' compatibility with agile methodology.
- Author
-
David, I. and Gelbard, R.
- Subjects
AGILE software development ,SENTIMENT analysis ,SOFTWARE compatibility ,KNOWLEDGE management ,MACHINE learning ,RESEARCH personnel - Abstract
The agile approach to software development has significantly impacted the industry, given its ability to cope with requirements. Despite this, many aspects of the agile paradigm's compatibility with various software project characteristics have not been thoroughly researched. To address this lacuna, the study presents a systematic literature review (SLR) that aims to assess the compatibility of the agile methodology with each of the knowledge areas included in the Project Management Body of Knowledge (PMBOK). By employing Machine Learning (ML) text-mining and sentiment analysis techniques, the paper explores the sentiment regarding each PMBOK knowledge area (PM-KA). It thus provides valuable insights to assist practitioners and researchers in assessing the compatibility of the agile methodology with each PM-KA and identifying potential gaps or significant challenges within the agile paradigm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF