Search Results (2,559 results)
2. A review on over-sampling techniques in classification of multi-class imbalanced datasets: insights for medical problems.
- Author
-
Yang, Yuxuan, Khorshidi, Hadi Akbarzadeh, and Aickelin, Uwe
- Subjects
DATABASE management ,PREDICTION models ,MEDICAL informatics ,STATISTICAL sampling ,ARTIFICIAL intelligence ,RESEARCH bias ,MACHINE learning ,ALGORITHMS - Abstract
There has been growing attention to multi-class classification problems, particularly the challenges posed by imbalanced class distributions. To address these challenges, various strategies, including data-level re-sampling treatments and ensemble methods, have been introduced to bolster the performance of predictive models and Artificial Intelligence (AI) algorithms in scenarios where an excessive level of imbalance is present. While most research and algorithm development has focused on binary classification problems, in health informatics there is increasing interest in addressing multi-class classification in imbalanced datasets. Multi-class imbalance problems bring forth more complex challenges, as a delicate approach is required to generate synthetic data while simultaneously maintaining the relationships between the multiple classes. The aim of this review paper is to examine over-sampling methods tailored for medical and other datasets with multi-class imbalance. Out of 2,076 peer-reviewed papers identified through searches, 197 eligible papers were chosen and thoroughly reviewed for inclusion, with 37 studies selected for in-depth analysis. These studies are categorised into four categories: metric, adaptive, structure-based, and hybrid approaches. The most significant finding is the emerging trend toward hybrid resampling methods that combine the strengths of various techniques to effectively address the problem of imbalanced data. This paper provides an extensive analysis of each selected study, discusses their findings, and outlines directions for future research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
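[Editor's note] The "data-level re-sampling treatment" this review surveys can be illustrated by its simplest member, plain random over-sampling of minority classes; the sketch below is an editorial illustration only, not any reviewed paper's method, and the function name and toy data are invented:

```python
import random
from collections import Counter

def random_oversample(X, y, seed=0):
    """Duplicate minority-class samples (with replacement) until every
    class matches the majority class count."""
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())
    X_out, y_out = list(X), list(y)
    for label, n in counts.items():
        idx = [i for i, lab in enumerate(y) if lab == label]
        for _ in range(target - n):
            i = rng.choice(idx)  # pick an existing minority sample to copy
            X_out.append(X[i])
            y_out.append(label)
    return X_out, y_out

# Toy 3-class imbalanced dataset: 3 "a", 2 "b", 1 "c"
X = [[0], [1], [2], [3], [4], [5]]
y = ["a", "a", "a", "b", "b", "c"]
X_bal, y_bal = random_oversample(X, y)
```

The methods the review categorises (metric, adaptive, structure-based, hybrid) replace the naive duplication step with synthetic-sample generation that respects inter-class structure.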
3. Applying Machine Learning in Marketing: An Analysis Using the NMF and k-Means Algorithms.
- Author
-
Gallego, Victor, Lingan, Jessica, Freixes, Alfons, Juan, Angel A., and Osorio, Celia
- Subjects
K-means clustering ,MACHINE learning ,ARTIFICIAL intelligence ,ADVERTISING effectiveness ,DATABASES - Abstract
The integration of machine learning (ML) techniques into marketing strategies has become increasingly relevant in modern business. Utilizing scientific manuscripts indexed in the Scopus database, this article explores how this integration is being carried out. Initially, a focused search is undertaken for academic articles containing both the terms "machine learning" and "marketing" in their titles, which yields a pool of papers. These papers have been processed using the Supabase platform. The process has included steps like text refinement and feature extraction. In addition, our study uses two key ML methodologies: topic modeling through NMF and a comparative analysis utilizing the k-means clustering algorithm. Through this analysis, three distinct clusters emerged, thus clarifying how ML techniques are influencing marketing strategies, from enhancing customer segmentation practices to optimizing the effectiveness of advertising campaigns. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
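[Editor's note] The k-means clustering step used in this article's comparative analysis can be sketched in a few lines; this is a generic textbook k-means on invented toy points (k=2 rather than the paper's three clusters), not the authors' pipeline:

```python
import math
import random

def kmeans(points, k, iters=20, seed=1):
    """Plain Lloyd's algorithm: assign each point to its nearest center,
    then move each center to the mean of its members."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[j].append(p)
        for j, members in enumerate(clusters):
            if members:  # leave a center untouched if its cluster emptied
                centers[j] = tuple(sum(col) / len(members)
                                   for col in zip(*members))
    return centers, clusters

# Two well-separated blobs of three points each
pts = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centers, clusters = kmeans(pts, 2)
```

In a topic-modeling workflow like the one described, the input points would be NMF topic-weight vectors rather than raw coordinates.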
4. Taming Algorithmic Priority Inversion in Mission-Critical Perception Pipelines.
- Author
-
Liu, Shengzhong, Yao, Shuochao, Fu, Xinzhe, Tabish, Rohan, Yu, Simon, Bansal, Ayoosh, Yun, Heechul, Sha, Lui, and Abdelzaher, Tarek
- Subjects
ALGORITHMS ,SYSTEMS design ,CYBER physical systems ,COMPUTER scheduling ,ARTIFICIAL intelligence ,ARTIFICIAL neural networks ,FIRST in, first out (Queuing theory) - Abstract
The paper discusses algorithmic priority inversion in mission-critical machine inference pipelines used in modern neural-network-based perception subsystems and describes a solution to mitigate its effect. In general, priority inversion occurs in computing systems when computations that are "less important" are performed together with or ahead of those that are "more important." Significant priority inversion occurs in existing machine inference pipelines when they do not differentiate between critical and less critical data. We describe a framework to resolve this problem and demonstrate that it improves a perception system's ability to react to critical inputs, while at the same time reducing platform cost. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
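[Editor's note] The priority inversion the paper describes is easy to see in miniature: a FIFO queue makes a critical input wait behind earlier, less important ones, while a priority queue processes it first. The sketch below is an editorial illustration with invented task names, not the paper's framework:

```python
import heapq
from collections import deque

def fifo_schedule(tasks):
    """Process in arrival order -- critical items wait behind earlier ones."""
    q = deque(tasks)
    return [q.popleft()[1] for _ in range(len(q))]

def priority_schedule(tasks):
    """Process by criticality (lower number = more critical), breaking
    ties by arrival order to keep the schedule deterministic."""
    heap = [(prio, i, name) for i, (prio, name) in enumerate(tasks)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

# (criticality, item): a pedestrian in the vehicle's path outranks scenery
arrivals = [(2, "roadside tree"), (2, "billboard"),
            (0, "pedestrian ahead"), (1, "merging car")]
print(fifo_schedule(arrivals))      # critical input processed last
print(priority_schedule(arrivals))  # critical input processed first
```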
5. Overview of Pest Detection and Recognition Algorithms.
- Author
-
Guo, Boyu, Wang, Jianji, Guo, Minghui, Chen, Miao, Chen, Yanan, and Miao, Yisheng
- Subjects
ARTIFICIAL intelligence ,CROP growth ,FOOD production ,PESTS ,DEEP learning ,ALGORITHMS - Abstract
Detecting and recognizing pests are paramount for ensuring the healthy growth of crops, maintaining ecological balance, and enhancing food production. With the advancement of artificial intelligence technologies, traditional pest detection and recognition algorithms based on manually selected pest features have gradually been substituted by deep learning-based algorithms. In this review paper, we first introduce the primary neural network architectures and evaluation metrics in the field of pest detection and pest recognition. Subsequently, we summarize widely used public datasets for pest detection and recognition. Following this, we present various pest detection and recognition algorithms proposed in recent years, providing detailed descriptions of each algorithm and their respective performance metrics. Finally, we outline the challenges that current deep learning-based pest detection and recognition algorithms encounter and propose future research directions for related algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. What do algorithms explain? The issue of the goals and capabilities of Explainable Artificial Intelligence (XAI).
- Author
-
Renftle, Moritz, Trittenbach, Holger, Poznic, Michael, and Heil, Reinhard
- Subjects
ARTIFICIAL intelligence ,MACHINE learning ,ALGORITHMS - Abstract
The increasing ubiquity of machine learning (ML) motivates research on algorithms to "explain" models and their predictions—so-called Explainable Artificial Intelligence (XAI). Despite many publications and discussions, the goals and capabilities of such algorithms are far from being well understood. We argue that this is because of a problematic reasoning scheme in the literature: Such algorithms are said to complement machine learning models with desired capabilities, such as interpretability or explainability. These capabilities are in turn assumed to contribute to a goal, such as trust in a system. But most capabilities lack precise definitions and their relationship to such goals is far from obvious. The result is a reasoning scheme that obfuscates research results and leaves an important question unanswered: What can one expect from XAI algorithms? In this paper, we clarify the modest capabilities of these algorithms from a concrete perspective: that of their users. We show that current algorithms can only answer user questions that can be traced back to the question: "How can one represent an ML model as a simple function that uses interpreted attributes?". Answering this core question can be trivial, difficult or even impossible, depending on the application. The result of the paper is the identification of two key challenges for XAI research: the approximation and the translation of ML models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Explainable Rules and Heuristics in AI Algorithm Recommendation Approaches: A Systematic Literature Review and Mapping Study.
- Author
-
García-Peñalvo, Francisco José, Vázquez-Ingelmo, Andrea, and García-Holgado, Alicia
- Subjects
ARTIFICIAL intelligence ,LITERATURE reviews ,SOFTWARE engineering ,ALGORITHMS ,HEURISTIC ,SOFTWARE engineers - Abstract
The exponential use of artificial intelligence (AI) to solve and automate complex tasks has catapulted its popularity, generating some challenges that need to be addressed. While AI is a powerful means to discover interesting patterns and obtain predictive models, the use of these algorithms comes with a great responsibility, as an incomplete or unbalanced set of training data or an improper interpretation of the models' outcomes could result in misleading conclusions that ultimately could become very dangerous. For these reasons, it is important to rely on expert knowledge when applying these methods. However, not every user can count on this specific expertise; non-AI-expert users could also benefit from applying these powerful algorithms to their domain problems, but they need basic guidelines to get the most out of AI models. The goal of this work is to present a systematic review of the literature to analyze studies whose outcomes are explainable rules and heuristics for selecting suitable AI algorithms given a set of input features. The systematic review follows the methodology proposed by Kitchenham and other authors in the field of software engineering. As a result, 9 papers that tackle AI algorithm recommendation through tangible and traceable rules and heuristics were collected. The reduced number of retrieved papers suggests a lack of explicitly reported rules and heuristics when testing the suitability and performance of AI algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
8. Reproducibility of Deep Learning Algorithms Developed for Medical Imaging Analysis: A Systematic Review.
- Author
-
Moassefi, Mana, Rouzrokh, Pouria, Conte, Gian Marco, Vahdati, Sanaz, Fu, Tianyuan, Tahmasebi, Aylin, Younis, Mira, Farahani, Keyvan, Gentili, Amilcare, Kline, Timothy, Kitamura, Felipe C., Huo, Yuankai, Kuanar, Shiba, Younis, Khaled, Erickson, Bradley J., and Faghani, Shahriar
- Subjects
DEEP learning ,RESEARCH evaluation ,SYSTEMATIC reviews ,ARTIFICIAL intelligence ,DIAGNOSTIC imaging ,DESCRIPTIVE statistics ,ALGORITHMS ,WORLD Wide Web - Abstract
Since 2000, there have been more than 8000 publications on radiology artificial intelligence (AI). AI breakthroughs allow complex tasks to be automated and even performed beyond human capabilities. However, the lack of details on the methods and algorithm code undercuts its scientific value. Many science subfields have recently faced a reproducibility crisis, eroding trust in processes and results, and influencing the rise in retractions of scientific papers. For the same reasons, conducting research in deep learning (DL) also requires reproducibility. Although several valuable manuscript checklists for AI in medical imaging exist, they are not focused specifically on reproducibility. In this study, we conducted a systematic review of recently published papers in the field of DL to evaluate if the description of their methodology could allow the reproducibility of their findings. We focused on the Journal of Digital Imaging (JDI), a specialized journal that publishes papers on AI and medical imaging. We used the keyword "Deep Learning" and collected the articles published between January 2020 and January 2022. We screened all the articles and included the ones which reported the development of a DL tool in medical imaging. We extracted the reported details about the dataset, data handling steps, data splitting, model details, and performance metrics of each included article. We found 148 articles. Eighty were included after screening for articles that reported developing a DL model for medical image analysis. Five studies have made their code publicly available, and 35 studies have utilized publicly available datasets. We provided figures to show the ratio and absolute count of reported items from included studies. According to our cross-sectional study, in JDI publications on DL in medical imaging, authors infrequently report the key elements of their study to make it reproducible. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
9. DM–AHR : A Self-Supervised Conditional Diffusion Model for AI-Generated Hairless Imaging for Enhanced Skin Diagnosis Applications.
- Author
-
Benjdira, Bilel, M. Ali, Anas, Koubaa, Anis, Ammar, Adel, and Boulila, Wadii
- Subjects
SKIN diseases ,MEDICAL technology ,HAIR removal ,RESEARCH funding ,DIAGNOSTIC imaging ,ARTIFICIAL intelligence ,DESCRIPTIVE statistics ,DATA analysis software ,ALGORITHMS - Abstract
Simple Summary: Skin diseases can be serious, and early detection is key to effective treatment. Unfortunately, the quality of images used to diagnose these diseases often suffers due to interference from hair, making accurate diagnosis challenging. This research introduces a novel technology, the DM–AHR, a self-supervised conditional diffusion model designed specifically to generate clear, hairless images for better skin disease diagnosis. Our work not only presents a new, advanced model that expertly identifies and removes hair from dermoscopic images but also introduces a specialized dataset, DERMAHAIR, to further research and improve diagnostic processes. The enhancements in image quality provided by DM–AHR significantly improve the accuracy of skin disease diagnoses, and it promises to be a valuable tool in medical imaging. Accurate skin diagnosis through end-user applications is important for early detection and cure of severe skin diseases. However, the low quality of dermoscopic images hampers this mission, especially with the presence of hair on these kinds of images. This paper introduces DM–AHR, a novel, self-supervised conditional diffusion model designed specifically for the automatic generation of hairless dermoscopic images to improve the quality of skin diagnosis applications. The current research contributes in three significant ways to the field of dermatologic imaging. First, we develop a customized diffusion model that adeptly differentiates between hair and skin features. Second, we pioneer a novel self-supervised learning strategy that is specifically tailored to optimize performance for hairless imaging. Third, we introduce a new dataset, named DERMAHAIR (DERMatologic Automatic HAIR Removal Dataset), that is designed to advance and benchmark research in this specialized domain. These contributions significantly enhance the clarity of dermoscopic images, improving the accuracy of skin diagnosis procedures. 
We elaborate on the architecture of DM–AHR and demonstrate its effective performance in removing hair while preserving critical details of skin lesions. Our results show an enhancement in the accuracy of skin lesion analysis when compared to existing techniques. Given its robust performance, DM–AHR holds considerable promise for broader application in medical image enhancement. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Artificial intelligence assisted IoT-fog based framework for emergency fire response in smart buildings.
- Author
-
Saini, Munish, Sengupta, Eshan, and Thakur, Suraaj
- Subjects
FIRE management ,ARTIFICIAL intelligence ,EMERGENCY management ,INTERNET of things ,FLOOR plans ,ALGORITHMS - Abstract
Anthropogenic hazards are an unrelenting threat to lives and property, with human irresponsibility emerging as a leading source of urban as well as industrial fires. The complexity of urban structures and crowded layouts make these kinds of fires more lethal. This paper presents an Artificial Intelligence (AI) based framework designed for smart buildings as a solution to the devastating obstacles caused by fire crises. Our system creates a 3D model of the building using floor plans and the A* algorithm for escape route identification. The proposed framework includes a YOLO-based smart monitoring system for the identification and counting of people caught in a fire, with the ability to distinguish between conscious and unconscious persons. The proposed system informs inhabitants in the case of a fire and directs them to the closest exit for a safe evacuation. Moreover, fire and rescue officials receive real-time information on affected persons, such as the number and location of adults and children who are conscious and unconscious. Perhaps most significantly, the suggested framework performs exceptionally well, scoring 96% for precision and 98% for recall in the detection of fire and humans. These findings highlight the effectiveness of the model in locating people within infrastructures affected by fire. The framework considerably outperforms the most advanced algorithms in terms of speed and efficiency for shortest-path detection, greatly improving the ability of fire rescue teams to quickly find and aid residents who are trapped in a fire. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
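[Editor's note] The A* escape-route step this abstract mentions can be sketched on a toy floor grid; this is the standard A* algorithm with a Manhattan-distance heuristic on an invented 3x3 plan, not the paper's 3D building model:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid (1 = wall). Manhattan distance is an
    admissible heuristic here, so the returned path is shortest."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start, [start])]  # (f, g, node, path)
    seen = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_heap,
                               (g + 1 + h((nr, nc)), g + 1,
                                (nr, nc), path + [(nr, nc)]))
    return None  # no route: goal unreachable

floor = [[0, 0, 0],
         [1, 1, 0],   # a blocked corridor forces the route around
         [0, 0, 0]]
route = astar(floor, (0, 0), (2, 0))
```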
11. AI in teacher education: Unlocking new dimensions in teaching support, inclusive learning, and digital literacy.
- Author
-
Zhang, Jia and Zhang, Zhuo
- Subjects
TEACHER education ,DIGITAL technology ,SCHOOL environment ,INTELLECT ,SCALE analysis (Psychology) ,PSYCHOLOGY of teachers ,T-test (Statistics) ,ARTIFICIAL intelligence ,TEACHING methods ,QUANTITATIVE research ,PSYCHOLOGICAL adaptation ,EDUCATIONAL technology ,DESCRIPTIVE statistics ,COMPUTER literacy ,CONCEPTUAL structures ,COLLEGE teacher attitudes ,PROFESSIONAL employee training ,ABILITY ,LEARNING strategies ,SOCIAL support ,TEACHER-student relationships ,STUDENT attitudes ,PSYCHOLOGY of college students ,COMPUTER assisted instruction ,INTERPERSONAL relations ,DATA analysis software ,ALGORITHMS ,TRAINING - Abstract
Background: AI can positively influence teaching by offering support for classroom management, creating inclusive learning environments, enhancing digital skills, personalizing teaching methods, and strengthening teacher-student relationships. Objectives: This quantitative research study investigates the opportunities, difficulties, and consequences of incorporating AI into teacher education. Methods: Data were collected through structured questionnaires from 202 college students and 68 staff members. The analysis was conducted using SPSS software. Results: The study provides a novel contribution through its thorough investigation of the diverse effects of AI on teacher education. It offers beneficial perspectives on the possible benefits and challenges, illuminating the far-reaching changes that AI could bring to learning, instruction, and teaching methods in the years to come. The research sought to assess the effect of AI adoption in teacher education across five main dimensions: (i) its influence on teaching support and classroom management, (ii) its role in creating inclusive and accessible learning environments, (iii) its contribution to improving teachers' digital literacy and computer skills and enhancing access to digital teaching resources, (iv) its positive influence on identifying students' learning styles and facilitating the adoption of diverse teaching methods, and (v) its role in strengthening teacher-student relationships through improved interactions. Conclusion: The findings elucidate the promising opportunities that AI presents in the field of teacher education, along with the obstacles that require resolution for the effective integration of AI into educational settings. 
Lay Description: What is currently known about this topic?: AI has the potential to enhance various aspects of teaching, including classroom management and personalizing teaching methods.Incorporating AI into education has garnered significant interest due to its perceived benefits in improving learning outcomes. What does this paper add?: This paper provides a comprehensive investigation into the effects of AI adoption in teacher education, highlighting both the opportunities and challenges associated with its implementation.It offers insights into how AI can influence different dimensions of teaching, such as classroom management, learning environment inclusivity, and teacher‐student relationships. Implications for practice/or policy: The findings of this study underscore the importance of integrating AI into teacher education programs to leverage its potential benefits in enhancing teaching practices.Policymakers and educators should consider the implications of AI adoption in education and develop strategies to address challenges while maximizing the advantages of AI technologies in teaching and learning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Differential Evolution Algorithm with Three Mutation Operators for Global Optimization.
- Author
-
Wang, Xuming and Yu, Xiaobing
- Subjects
EVOLUTIONARY algorithms ,ARTIFICIAL intelligence ,GLOBAL optimization ,ALGORITHMS ,DIFFERENTIAL evolution ,BENCHES - Abstract
Differential evolution algorithm is a very powerful and recently proposed evolutionary algorithm. Generally, only a mutation operator and predefined parameter values of differential evolution algorithm are utilized to solve various optimization problems, which limits the performance of the algorithm. In this paper, six commonly used mutation operators are divided into three categories according to their own features. A mutation pool is established based on the three categories. A parameter pool with three predefined values is designed. During evolution, three mutation operators are randomly chosen from the three categories, and three parameter values are also randomly selected from the parameter pool. The three groups of mutation operators and parameter values are employed to produce trial vectors. The proposed algorithm makes good use of different mutation operators. Three recently proposed differential evolution variants and three non-differential evolution algorithms are used to make comparisons on the 29 testing functions from CEC. The experimental results have demonstrated that the proposed algorithm is very competitive. The proposed algorithm is utilized to solve three real applications, and the results are superior. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
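[Editor's note] The paper's core idea (drawing each trial vector's mutation operator and parameter values at random from small pools) can be sketched as follows. This is an editorial simplification on the sphere test function, with invented pool contents; the paper's actual operator categories and parameter pool may differ:

```python
import random

def de_mutation_pool(f, bounds, np_=20, gens=100, seed=0):
    """Differential evolution where each trial vector draws one of three
    mutation operators and one (F, CR) pair at random from pools."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(*bounds[d]) for d in range(dim)] for _ in range(np_)]
    fit = [f(x) for x in pop]
    params = [(0.5, 0.9), (0.8, 0.2), (1.0, 0.5)]  # illustrative (F, CR) pool
    for _ in range(gens):
        best = pop[min(range(np_), key=fit.__getitem__)]
        for i in range(np_):
            F, CR = rng.choice(params)
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            op = rng.randrange(3)
            if op == 0:    # rand/1: exploration
                v = [a[d] + F * (b[d] - c[d]) for d in range(dim)]
            elif op == 1:  # best/1: exploitation
                v = [best[d] + F * (b[d] - c[d]) for d in range(dim)]
            else:          # current-to-best/1: a balance of the two
                v = [pop[i][d] + F * (best[d] - pop[i][d])
                     + F * (b[d] - c[d]) for d in range(dim)]
            jr = rng.randrange(dim)  # binomial crossover, one forced gene
            trial = [v[d] if (rng.random() < CR or d == jr) else pop[i][d]
                     for d in range(dim)]
            tf = f(trial)
            if tf <= fit[i]:  # greedy selection
                pop[i], fit[i] = trial, tf
    return min(fit)

sphere = lambda x: sum(v * v for v in x)
best_val = de_mutation_pool(sphere, [(-5, 5)] * 3)
```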
13. Dual-Neighborhood Tabu Search for Computing Stable Extensions in Abstract Argumentation Frameworks.
- Author
-
Ke, Yuanzhi, Hu, Xiaogang, Sun, Junjie, Wu, Xinyun, Xiong, Caiquan, and Luo, Mao
- Subjects
ARTIFICIAL intelligence ,TABOO ,EVALUATION methodology ,TABU search algorithm ,ALGORITHMS - Abstract
Abstract argumentation has become one of the important fields of artificial intelligence. This paper proposes a dual-neighborhood tabu search (DNTS) method specifically designed to find a single stable extension in abstract argumentation frameworks. The proposed algorithm implements an improved dual-neighborhood strategy incorporating a fast neighborhood evaluation method. In addition, by introducing techniques such as tabu and perturbation, this algorithm is able to jump out of the local optimum, which significantly improves the performance of the algorithm. In order to evaluate the effectiveness of the method, the performance of the algorithm on more than 300 randomly generated benchmark datasets was studied and compared with the algorithm in the literature. In the experiment, DNTS outperforms the other method regarding time consumption in more than 50 instances and surpasses the other meta-heuristic method in the number of solved cases. Further analysis shows that the initialization method, the tabu strategy, and the perturbation technique help guarantee the efficiency of the proposed DNTS. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
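[Editor's note] The object DNTS searches for has a compact definition: a stable extension is a conflict-free set of arguments that attacks every argument outside it. The brute-force checker below makes the definition concrete on an invented 3-argument framework; it is exponential, which is exactly why the paper resorts to tabu search on large instances:

```python
from itertools import combinations

def stable_extensions(args, attacks):
    """Enumerate all stable extensions of an abstract argumentation
    framework: sets that are conflict-free and attack every outsider."""
    attacks = set(attacks)
    result = []
    for r in range(len(args) + 1):
        for s in map(set, combinations(args, r)):
            conflict_free = not any((a, b) in attacks
                                    for a in s for b in s)
            attacks_rest = all(any((a, x) in attacks for a in s)
                               for x in set(args) - s)
            if conflict_free and attacks_rest:
                result.append(s)
    return result

# Chain a -> b -> c has exactly one stable extension: {a, c}
exts = stable_extensions(["a", "b", "c"], [("a", "b"), ("b", "c")])
```

An odd attack cycle (a -> b -> c -> a) famously has no stable extension at all, which is one reason solvers need to report unsatisfiability as well as solutions.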
14. Strategies to improve fairness in artificial intelligence: A systematic literature review.
- Author
-
Trigo, António, Stein, Nubia, and Paulo Belfo, Fernando
- Subjects
ARTIFICIAL intelligence ,LITERATURE reviews ,PREJUDICES ,ALGORITHMS ,DISCRIMINATION (Sociology) - Abstract
Decisions based on artificial intelligence can reproduce biases or prejudices present in biased historical data and poorly formulated systems, presenting serious social consequences for underrepresented groups of individuals. This paper presents a systematic literature review of technical, feasible, and practicable solutions to improve fairness in artificial intelligence classified according to different perspectives: fairness metrics, moment of intervention (pre-processing, processing, or post-processing), research area, datasets, and algorithms used in the research. The main contribution of this paper is to establish common ground regarding the techniques to be used to improve fairness in artificial intelligence, defined as the absence of bias or discrimination in the decisions made by artificial intelligence systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
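[Editor's note] One of the fairness metrics such reviews classify, demographic parity, reduces to comparing positive-prediction rates across groups. The sketch below is a generic two-group illustration with invented data, not any reviewed paper's formulation:

```python
def demographic_parity_diff(y_pred, groups):
    """Absolute gap in positive-prediction rates between exactly two
    groups; 0 means parity under this metric."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    a, b = rates.values()
    return abs(a - b)

# Group "x" receives 3/4 positive predictions, group "y" only 1/4
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["x", "x", "x", "x", "y", "y", "y", "y"]
gap = demographic_parity_diff(y_pred, groups)
```

Pre-, in-, and post-processing interventions (the review's "moment of intervention" axis) all aim to shrink gaps like this one without destroying predictive accuracy.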
15. A Comprehensive analysis of Deployment Optimization Methods for CNN-Based Applications on Edge Devices.
- Author
-
Li, Qi, Su, Zhenling, and Meng, Lin
- Subjects
CONVOLUTIONAL neural networks ,ARTIFICIAL intelligence ,ALGORITHMS - Abstract
Copyright of Electrotechnical Review / Elektrotehniski Vestnik is the property of Electrotechnical Society of Slovenia and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
16. An Innovative K-Anonymity Privacy-Preserving Algorithm to Improve Data Availability in the Context of Big Data.
- Author
-
Yuan, Linlin, Zhang, Tiantian, Chen, Yuling, Yang, Yuxiang, and Li, Huang
- Subjects
BIG data ,GREEDY algorithms ,INFORMATION theory ,ALGORITHMS ,ARTIFICIAL intelligence ,STATISTICS ,BLOCKCHAINS - Abstract
The development of technologies such as big data and blockchain has brought convenience to life, but at the same time privacy and security issues are becoming more and more prominent. The K-anonymity algorithm is an effective, low-computational-complexity privacy-preserving algorithm that can safeguard users' privacy by anonymizing big data. However, the algorithm currently suffers from the problem of focusing only on improving user privacy while ignoring data availability. In addition, ignoring the impact of quasi-identifier attributes on sensitive attributes reduces the usability of the processed data for statistical analysis. Based on this, we propose a new K-anonymity algorithm to solve the privacy security problem in the context of big data while guaranteeing improved data usability. Specifically, we construct a new information loss function based on information quantity theory. Considering that different quasi-identifier attributes have different impacts on sensitive attributes, we set weights for each quasi-identifier attribute when designing the information loss function. In addition, to reduce information loss, we improve K-anonymity in two ways. First, we make the loss of information smaller than in the original table, while guaranteeing privacy, based on common artificial intelligence algorithms, i.e., the greedy algorithm and the 2-means clustering algorithm. Second, we improve the 2-means clustering algorithm by designing a mean-center method to select the initial center of mass. We then design the K-anonymity algorithm of this scheme based on the constructed information loss function, the improved 2-means clustering algorithm, and the greedy algorithm, which reduces the information loss. Finally, we experimentally demonstrate the effectiveness of the algorithm in improving the effect of 2-means clustering and reducing information loss. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
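[Editor's note] The k-anonymity property itself is simple to check and to establish by generalization; the sketch below uses a generic check plus the simplest possible generalization step (fixed-width age bands) on an invented toy table, not the paper's weighted, clustering-based scheme:

```python
from collections import Counter

def is_k_anonymous(rows, quasi_ids, k):
    """A table is k-anonymous when every combination of quasi-identifier
    values is shared by at least k rows."""
    combos = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    return min(combos.values()) >= k

def generalize_age(rows, width=10):
    """Coarsen ages into fixed-width bands: the basic trade of
    information loss for anonymity that the paper tries to minimize."""
    out = []
    for r in rows:
        lo = (r["age"] // width) * width
        out.append({**r, "age": f"{lo}-{lo + width - 1}"})
    return out

table = [
    {"age": 23, "zip": "120", "disease": "flu"},
    {"age": 27, "zip": "120", "disease": "cold"},
    {"age": 42, "zip": "130", "disease": "flu"},
    {"age": 45, "zip": "130", "disease": "asthma"},
]
print(is_k_anonymous(table, ["age", "zip"], 2))                  # False
print(is_k_anonymous(generalize_age(table), ["age", "zip"], 2))  # True
```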
17. Algorithms for Liver Segmentation in Computed Tomography Scans: A Historical Perspective.
- Author
-
Niño, Stephanie Batista, Bernardino, Jorge, and Domingues, Inês
- Subjects
COMPUTED tomography ,IMAGE processing ,COMPUTER-assisted image analysis (Medicine) ,ARTIFICIAL intelligence ,ALGORITHMS ,IMAGE reconstruction algorithms - Abstract
Oncology has emerged as a crucial field of study in the domain of medicine. Computed tomography has gained widespread adoption as a radiological modality for the identification and characterisation of pathologies, particularly in oncology, enabling precise identification of affected organs and tissues. However, achieving accurate liver segmentation in computed tomography scans remains a challenge due to the presence of artefacts and the varying densities of soft tissues and adjacent organs. This paper compares artificial intelligence algorithms and traditional medical image processing techniques to assist radiologists in liver segmentation in computed tomography scans and evaluates their accuracy and efficiency. Despite notable progress in the field, the limited availability of public datasets remains a significant barrier to broad participation in research studies and replication of methodologies. Future directions should focus on increasing the accessibility of public datasets, establishing standardised evaluation metrics, and advancing the development of three-dimensional segmentation techniques. In addition, maintaining a collaborative relationship between technological advances and medical expertise is essential to ensure that these innovations not only achieve technical accuracy, but also remain aligned with clinical needs and realities. This synergy ensures their applicability and effectiveness in real-world healthcare environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Artificial Intelligence Algorithms for Healthcare.
- Author
-
Chumachenko, Dmytro and Yakovlev, Sergiy
- Subjects
ARTIFICIAL intelligence ,DEEP learning ,ALGORITHMS ,MACHINE learning ,INFORMATION technology ,MEDICAL care ,MOTION capture (Human mechanics) ,MEDICAL technology - Abstract
Artificial intelligence (AI) algorithms are playing a crucial role in transforming healthcare by enhancing the quality, accessibility, and efficiency of medical care, research, and operations. These algorithms enable healthcare providers to offer more accurate diagnoses, predict outcomes, and customize treatments to individual patient needs. AI also improves operational efficiency by automating routine tasks and optimizing resource management. However, there are challenges to adopting AI in healthcare, such as data privacy concerns and potential biases in algorithms. Collaboration among stakeholders is necessary to ensure ethical use of AI and its positive impact on the field. AI also has applications in medical research, preventive medicine, and public health. It is important to recognize that AI should augment, not replace, the expertise and compassionate care provided by healthcare professionals. The ethical implications and societal impact of AI in healthcare must be carefully considered, guided by fairness, transparency, and accountability principles. Several research papers in this special issue explore the application of AI algorithms in various aspects of healthcare, such as gait analysis for Parkinson's disease diagnosis, human activity recognition, heart disease prediction, compliance assessment with clinical protocols, epidemic management, neurological complications identification, fall prevention, leukemia diagnosis, and genetic clinical pathways. These studies demonstrate the potential of AI in improving medical diagnostics, patient monitoring, and personalized care. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
19. Predicting Money Laundering Using Machine Learning and Artificial Neural Networks Algorithms in Banks.
- Author
-
Lokanan, Mark E.
- Subjects
ARTIFICIAL neural networks ,MONEY laundering ,MACHINE learning ,ALGORITHMS ,RANDOM forest algorithms - Abstract
This paper aims to build a machine learning and a neural network model to detect the probability of money laundering in banks. The paper's data came from a simulation of actual transactions flagged for money laundering in Middle Eastern banks. The main findings highlight that criminal networks mainly use the integration stage to integrate money into the financial system. Fraudsters prefer to launder funds in the early morning hours, followed by the afternoon intervals of the business day. Additionally, the Naïve Bayes and Random Forest classifiers were identified as the two best-performing models for predicting bank money laundering transactions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. AI GODS, JEANS GODS, AND THRIFT GODS: RESPONDING TO RESPONSES TO THE BLESSED BY THE ALGORITHM PAPER (SINGLER 2020).
- Author
-
Singler, Beth
- Subjects
GODS ,ARTIFICIAL intelligence ,ALGORITHMS ,THRIFT institutions - Published
- 2023
- Full Text
- View/download PDF
21. Data Mining Algorithm Based on Fusion Computer Artificial Intelligence Technology.
- Author
-
Yingqian Bai, Kepeng Bao, and Tao Xu
- Subjects
ARTIFICIAL intelligence ,DATA mining ,ALGORITHMS ,DISTRIBUTED databases ,ENTROPY (Information theory) - Abstract
INTRODUCTION: The paper constructs a massive data mining model of distributed spatiotemporal databases for the Internet of Things. Then, a homologous data fusion method based on information entropy is proposed. The storage space required by the tree structure is reduced by constructing the data schema tree of the merged data set. Secondly, the optimal dynamic support degree is obtained by using a neural network and a genetic algorithm. Frequent items in the Internet of Things data are mined to normalize the clustered feature data based on the threshold value. Experiments show that the F-measure of the data mining algorithm improves efficiency by 15.64% and 18.25%, respectively, compared with two other methods from the literature, and RI increased by 21.17% and 26.07%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
22. Smart Random Walk Distributed Secured Edge Algorithm Using Multi-Regression for Green Network.
- Author
-
Saba, Tanzila, Haseeb, Khalid, Rehman, Amjad, Damaševičius, Robertas, and Bahaj, Saeed Ali
- Subjects
RANDOM walks ,ALGORITHMS ,ARTIFICIAL intelligence ,INTERNET of things ,ELECTRONIC paper ,INTERNET traffic - Abstract
Smart communication has significantly advanced with the integration of the Internet of Things (IoT). Many devices and online services are utilized in the network system to cope with data gathering and forwarding. Recently, many traffic-aware solutions have explored autonomous systems to attain the intelligent routing and flowing of internet traffic with the support of artificial intelligence. However, the inefficient usage of nodes' batteries and long-range communication degrades the connectivity time for the deployed sensors with the end devices. Moreover, trustworthy route identification is another significant research challenge for formulating a smart system. Therefore, this paper presents a smart Random walk Distributed Secured Edge algorithm (RDSE), using a multi-regression model for IoT networks, which aims to enhance the stability of the chosen IoT network with the support of an optimal system. In addition, by using secured computing, the proposed architecture increases the trustworthiness of smart devices with the least node complexity. The proposed algorithm differs from other works in terms of the following factors. Firstly, it uses the random walk to form the initial routes with certain probabilities, and later, by exploring a multi-variant function, it attains long-lasting communication with a high degree of network stability. This helps to improve the optimization criteria for the nodes' communication, and efficiently utilizes energy with the combination of mobile edges. Secondly, the trusted factors successfully identify the normal nodes even when the system is compromised. Therefore, the proposed algorithm reduces data risks and offers a more reliable and private system. In addition, simulation-based testing reveals the significant performance of the proposed algorithm in comparison to existing work. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
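The abstract above describes forming initial routes via a random walk before refining them with a multi-regression model. As an illustration only (not the RDSE algorithm itself, whose probabilities and trust factors are not specified here), a random-walk route from a sensor node to an edge/sink node on a toy topology might be sketched as follows; the graph, node roles, and hop limit are all assumptions.

```python
import random

def random_walk_route(adjacency, source, sink, max_hops=500, seed=1):
    """Form an initial route by a random walk: at each node, hop to a
    uniformly chosen neighbour until the sink (edge node) is reached."""
    rng = random.Random(seed)
    route = [source]
    while route[-1] != sink and len(route) <= max_hops:
        neighbours = adjacency[route[-1]]
        route.append(rng.choice(neighbours))
    return route if route[-1] == sink else None

# Toy IoT topology: node 0 is the source sensor, node 4 the edge/sink.
adjacency = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}
route = random_walk_route(adjacency, source=0, sink=4)
```

In RDSE-style schemes the walk would be repeated to generate candidate routes, which are then scored (here, for example, by length or by learned trust/energy factors) before one is committed.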
23. Improvement of action recognition based on ANN-BP algorithm for auto driving cars.
- Author
-
Yong Tian and Jun Tan
- Subjects
ARTIFICIAL neural networks ,AUTONOMOUS vehicles ,AUTOMOBILE driving ,ARTIFICIAL intelligence ,ALGORITHMS ,TIME-frequency analysis - Abstract
Introduction: With the development of artificial intelligence and autonomous driving technology, the application of motion recognition in automotive autonomous driving is becoming more and more important. The traditional feature extraction method uses adaptive search hybrid learning and needs to design the feature extraction process manually, which is difficult to meet the recognition requirements in complex environments. Methods: In this paper, a fusion algorithm is proposed to classify driving characteristics through time-frequency analysis and to perform a backpropagation operation in an artificial neural network to improve the convergence speed of the algorithm. The performance analysis experiments of the study were carried out on Autov data sets, and the results were compared with those of three other algorithms. Results: When the vehicle action coefficient is 227, the judgment accuracy of the four algorithms is 0.98, 0.94, 0.93 and 0.95, respectively, indicating that the fusion algorithm is stable. When the road sample is 547, the vehicle driving ability of the fusion algorithm is 4.7, the best performance among the four algorithms, indicating that the fusion algorithm has strong adaptability. Discussion: The results show that the fusion algorithm has practical significance in improving the autonomous operation ability of autonomous vehicles and reducing the frequency of vehicle accidents during driving, contributing to the development of production, daily life, and society. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Ensemble Deep Learning-Based Image Classification for Breast Cancer Subtype and Invasiveness Diagnosis from Whole Slide Image Histopathology.
- Author
-
Balasubramanian, Aadhi Aadhavan, Al-Heejawi, Salah Mohammed Awad, Singh, Akarsh, Breggia, Anne, Ahmad, Bilal, Christman, Robert, Ryan, Stephen T., and Amal, Saeed
- Subjects
BREAST tumor diagnosis ,CANCER invasiveness ,TASK performance ,MEDICAL technology ,BIOINDICATORS ,BREAST tumors ,ARTIFICIAL intelligence ,MEDICAL care ,HOSPITALS ,CAUSES of death ,EVALUATION of medical care ,DESCRIPTIVE statistics ,DEEP learning ,COMPUTER-aided diagnosis ,ARTIFICIAL neural networks ,DIGITAL image processing ,ALGORITHMS ,CARCINOMA in situ - Abstract
Simple Summary: Breast cancer is a significant cause of female cancer-related deaths in the US. Checking how severe the cancer is helps in planning treatment. Modern AI methods are good at grading cancer, but they are not used much in hospitals yet. We developed and utilized ensemble deep learning algorithms for addressing the tasks of classifying (1) breast cancer subtype and (2) breast cancer invasiveness from whole slide image (WSI) histopathology slides. The ensemble models used were based on convolutional neural networks (CNNs) known for extracting distinctive features crucial for accurate classification. In this paper, we provide a comprehensive analysis of these models and the methodology used for breast cancer diagnosis tasks. Cancer diagnosis and classification are pivotal for effective patient management and treatment planning. In this study, a comprehensive approach is presented utilizing ensemble deep learning techniques to analyze breast cancer histopathology images. We used two widely employed datasets from different centers for two different tasks: BACH and BreakHis. Within the BACH dataset, a proposed ensemble strategy was employed, incorporating VGG16 and ResNet50 architectures to achieve precise classification of breast cancer histopathology images. Introducing a novel image patching technique to preprocess a high-resolution image facilitated a focused analysis of localized regions of interest. The annotated BACH dataset encompassed 400 WSIs across four distinct classes: Normal, Benign, In Situ Carcinoma, and Invasive Carcinoma. In addition, the proposed ensemble was used on the BreakHis dataset, utilizing VGG16, ResNet34, and ResNet50 models to classify microscopic images into eight distinct categories (four benign and four malignant). For both datasets, a five-fold cross-validation approach was employed for rigorous training and testing. 
Preliminary experimental results indicated a patch classification accuracy of 95.31% (for the BACH dataset) and WSI image classification accuracy of 98.43% (BreakHis). This research significantly contributes to ongoing endeavors in harnessing artificial intelligence to advance breast cancer diagnosis, potentially fostering improved patient outcomes and alleviating healthcare burdens. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
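The ensemble strategy this abstract describes (combining predictions from multiple CNN backbones such as VGG16 and ResNet50) is commonly realized by averaging per-class probabilities, i.e. soft voting. The sketch below shows only that combination step on mock softmax outputs; the averaging rule and the dummy numbers are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# The four BACH classes named in the abstract.
CLASSES = ["Normal", "Benign", "In Situ Carcinoma", "Invasive Carcinoma"]

def ensemble_predict(prob_list):
    """Soft-voting ensemble: average the softmax outputs of several models.

    prob_list: list of arrays of shape (n_patches, n_classes).
    Returns the predicted class index per patch.
    """
    avg = np.mean(prob_list, axis=0)   # element-wise mean over models
    return avg.argmax(axis=1)

# Two mock "models" disagreeing on patch 0's runner-up class; soft voting
# resolves each patch by the averaged confidence.
vgg_probs = np.array([[0.6, 0.2, 0.1, 0.1],
                      [0.1, 0.2, 0.6, 0.1]])
resnet_probs = np.array([[0.5, 0.3, 0.1, 0.1],
                         [0.3, 0.4, 0.2, 0.1]])
preds = ensemble_predict([vgg_probs, resnet_probs])
```

Per-patch hard voting is a common alternative; soft voting is preferred when the models' confidence scores are well calibrated.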
25. Artificial Intelligence-Based Atrial Fibrillation Recognition Method for Motion Artifact-Contaminated Electrocardiogram Signals Preprocessed by Adaptive Filtering Algorithm.
- Author
-
Zhang, Huanqian, Zhao, Hantao, and Guo, Zhang
- Subjects
ARTIFICIAL intelligence ,ADAPTIVE filters ,ARRHYTHMIA ,ELECTROCARDIOGRAPHY ,RECOGNITION (Psychology) ,ATRIAL fibrillation ,ALGORITHMS - Abstract
Atrial fibrillation (AF) is a common arrhythmia, and out-of-hospital, wearable, long-term electrocardiogram (ECG) monitoring can help with the early detection of AF. The presence of a motion artifact (MA) in ECG can significantly affect the characteristics of the ECG signal and hinder early detection of AF. Studies have shown that (a) using reference signals with a strong correlation with MAs in adaptive filtering (ADF) can eliminate MAs from the ECG, and (b) artificial intelligence (AI) algorithms can recognize AF when there is no presence of MAs. However, no literature has been reported on whether ADF can improve the accuracy of AI for recognizing AF in the presence of MAs. Therefore, this paper investigates the accuracy of AI recognition for AF when ECGs are artificially introduced with MAs and processed by ADF. In this study, 13 types of MA signals with different signal-to-noise ratios ranging from +8 dB to −16 dB were artificially added to the AF ECG dataset. Firstly, the accuracy of AF recognition using AI was obtained for a signal with MAs. Secondly, after removing the MAs by ADF, the signal was further identified using AI to obtain the accuracy of the AF recognition. We found that after undergoing ADF, the accuracy of AI recognition for AF improved under all MA intensities, with a maximum improvement of 60%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
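The adaptive-filtering (ADF) step this abstract relies on — using a reference signal strongly correlated with the motion artifact to subtract the artifact from the ECG — is classically implemented as an LMS adaptive filter. The following is a minimal illustrative sketch, not the authors' code; the toy signals, tap count, and step size `mu` are assumptions.

```python
import numpy as np

def lms_adaptive_filter(primary, reference, n_taps=8, mu=0.01):
    """Remove an artifact from `primary` using a correlated `reference`.

    Returns the cleaned signal (primary minus the estimated artifact).
    """
    w = np.zeros(n_taps)                      # adaptive filter weights
    cleaned = np.zeros(len(primary))
    for n in range(len(primary)):
        # Most recent reference samples, newest first (zero-padded at start).
        x = reference[max(0, n - n_taps + 1):n + 1][::-1]
        x = np.pad(x, (0, n_taps - len(x)))
        y = w @ x                             # estimated artifact component
        e = primary[n] - y                    # error = cleaned ECG sample
        w += 2 * mu * e * x                   # LMS weight update
        cleaned[n] = e
    return cleaned

# Toy demo: a sine wave stands in for the clean ECG; the artifact is a
# scaled copy of a known reference signal (hence strongly correlated).
rng = np.random.default_rng(0)
t = np.arange(2000)
clean = np.sin(2 * np.pi * t / 100)
reference = rng.standard_normal(2000)
noisy = clean + 0.8 * reference
recovered = lms_adaptive_filter(noisy, reference)
# Compare residual error after the filter has had time to converge.
err_before = np.mean((noisy[1000:] - clean[1000:]) ** 2)
err_after = np.mean((recovered[1000:] - clean[1000:]) ** 2)
```

Because the clean signal is uncorrelated with the reference, minimizing the output power drives the filter output toward the artifact, leaving the cleaned signal as the error term.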
26. Applying "Two Heads Are Better Than One" Human Intelligence to Develop Self-Adaptive Algorithms for Ridesharing Recommendation Systems.
- Author
-
Hsieh, Fu-Shiung
- Subjects
RECOMMENDER systems ,EVOLUTIONARY algorithms ,RIDESHARING ,ARTIFICIAL intelligence ,EVOLUTIONARY computation ,SELF-adaptive software ,ALGORITHMS - Abstract
Human beings have created numerous laws, sayings and proverbs that still influence behaviors and decision-making processes of people. Some of the laws, sayings or proverbs are used by people to understand the phenomena that may take place in daily life. For example, Murphy's law states that "Anything that can go wrong will go wrong." Murphy's law is helpful for project planning with analysis and the consideration of risk. Similar to Murphy's law, the old saying "Two heads are better than one" also influences the determination of the ways for people to get jobs done effectively. Although the old saying "Two heads are better than one" has been extensively discussed in different contexts, there is a lack of studies about whether this saying is valid and can be applied in evolutionary computation. Evolutionary computation is an important optimization approach in artificial intelligence. In this paper, we attempt to study the validity of this saying in the context of evolutionary computation approach to the decision making of ridesharing systems with trust constraints. We study the validity of the saying "Two heads are better than one" by developing a series of self-adaptive evolutionary algorithms for solving the optimization problem of ridesharing systems with trust constraints based on the saying, conducting several series of experiments and comparing the effectiveness of these self-adaptive evolutionary algorithms. The new finding is that the old saying "Two heads are better than one" is valid in most cases and hence can be applied to facilitate the development of effective self-adaptive evolutionary algorithms. Our new finding paves the way for developing a better evolutionary computation approach for ridesharing recommendation systems based on sayings created by human beings or human intelligence. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. From algorithmic governance to govern algorithm.
- Author
-
Xu, Zichun
- Subjects
ALGORITHMS ,ARTIFICIAL intelligence ,MODERNIZATION (Social science) ,BIG data ,NETWORK governance ,BLOCKCHAINS - Abstract
Algorithms are the core category and basic method of the digital age; advanced technologies such as big data, artificial intelligence, and blockchain all rely on various algorithm designs or take algorithms as their underlying principle. However, due to the characteristics of algorithm design, application, and the technology itself, concerns such as algorithmic black boxes, algorithmic discrimination, and difficulty in accountability arise to varying degrees during operation. This paper summarizes these problems in three aspects: unexplainability, self-reinforcement, and autonomy. Facing the opportunities and risks generated by the application of algorithms in national governance, it is necessary not only to actively promote the development of algorithm technology to advance the modernization of national governance, but also to strengthen the governance of algorithms. Practice has shown that enhancing the interpretability of algorithms, optimizing algorithm design, and adopting legal regulation of algorithms are the basic approaches to effective algorithm regulation in the era of intelligent governance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. The Challenges of Algorithm Management: The Spanish Perspective.
- Author
-
Prado, Daniel Perez del
- Subjects
ALGORITHMS ,LABOR laws ,DISRUPTIVE innovations ,ARTIFICIAL intelligence ,DIGITAL technology - Abstract
This paper focuses on how Spain's labour and employment law is dealing with technological disruption and, particularly, with algorithm management, looking for a harmonious equilibrium between traditional structures and profound changes. It pays special attention to the different actors affected and the most recent normative changes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Early Breast Cancer Risk Assessment: Integrating Histopathology with Artificial Intelligence.
- Author
-
Ivanova, Mariia, Pescia, Carlo, Trapani, Dario, Venetis, Konstantinos, Frascarelli, Chiara, Mane, Eltjona, Cursano, Giulia, Sajjadi, Elham, Scatena, Cristian, Cerbelli, Bruna, d'Amati, Giulia, Porta, Francesca Maria, Guerini-Rocco, Elena, Criscitiello, Carmen, Curigliano, Giuseppe, and Fusco, Nicola
- Subjects
BREAST tumor risk factors ,RISK assessment ,MEDICAL protocols ,CANCER relapse ,ARTIFICIAL intelligence ,EARLY detection of cancer ,CYTOCHEMISTRY ,TUMOR markers ,DECISION making in clinical medicine ,IMMUNOHISTOCHEMISTRY ,PATIENT-centered care ,DEEP learning ,ARTIFICIAL neural networks ,MACHINE learning ,ONCOLOGISTS ,INDIVIDUALIZED medicine ,MOLECULAR pathology ,HEALTH care teams ,ALGORITHMS ,DISEASE risk factors - Abstract
Simple Summary: Risk assessment in early breast cancer is critical for clinical decisions, but defining risk categories poses a significant challenge. The integration of conventional histopathology and biomarkers with artificial intelligence (AI) techniques, including machine learning and deep learning, has the potential to offer more precise information. AI applications extend beyond detection to histological subtyping, grading, and molecular feature identification. The successful integration of AI into clinical practice requires collaboration between histopathologists, molecular pathologists, computational pathologists, and oncologists to optimize patient outcomes. Effective risk assessment in early breast cancer is essential for informed clinical decision-making, yet consensus on defining risk categories remains challenging. This paper explores evolving approaches in risk stratification, encompassing histopathological, immunohistochemical, and molecular biomarkers alongside cutting-edge artificial intelligence (AI) techniques. Leveraging machine learning, deep learning, and convolutional neural networks, AI is reshaping predictive algorithms for recurrence risk, thereby revolutionizing diagnostic accuracy and treatment planning. Beyond detection, AI applications extend to histological subtyping, grading, lymph node assessment, and molecular feature identification, fostering personalized therapy decisions. With rising cancer rates, it is crucial to implement AI to accelerate breakthroughs in clinical practice, benefiting both patients and healthcare providers. However, it is important to recognize that while AI offers powerful automation and analysis tools, it lacks the nuanced understanding, clinical context, and ethical considerations inherent to human pathologists in patient care. Hence, the successful integration of AI into clinical practice demands collaborative efforts between medical experts and computational pathologists to optimize patient outcomes. 
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Feature-Selection-Based DDoS Attack Detection Using AI Algorithms.
- Author
-
Raza, Muhammad Saibtain, Sheikh, Mohammad Nowsin Amin, Hwang, I-Shyan, and Ab-Rahman, Mohammad Syuhaimi
- Subjects
DENIAL of service attacks ,CONVOLUTIONAL neural networks ,ARTIFICIAL intelligence ,MACHINE learning ,ALGORITHMS - Abstract
SDN has the ability to transform network design by providing increased versatility and effective regulation. Its programmable centralized controller gives network administrators more authority, allowing for more seamless supervision. However, centralization makes it vulnerable to a variety of attack vectors, with distributed denial of service (DDoS) attacks posing a serious concern. Feature-selection-based Machine Learning (ML) techniques are more effective than traditional signature-based Intrusion Detection Systems (IDS) at identifying new threats when defending against DDoS attacks. In this study, NGBoost is compared with four additional machine learning (ML) algorithms: convolutional neural network (CNN), Stochastic Gradient Descent (SGD), Decision Tree, and Random Forest, in order to assess the effectiveness of DDoS detection on the CICDDoS2019 dataset. It focuses on important measures such as F1 score, recall, accuracy, and precision. We examine NetBIOS, a layer-7 attack, and SYN, a layer-4 attack, in our paper. Our investigation shows that Natural Gradient Boosting and Convolutional Neural Networks, in particular, show promise with tabular data categorization. In conclusion, we discuss specific study findings on defending against DDoS attacks. These experimental findings offer a framework for decision making. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Implementation of a Long Short-Term Memory Neural Network-Based Algorithm for Dynamic Obstacle Avoidance.
- Author
-
Mulás-Tejeda, Esmeralda, Gómez-Espinosa, Alfonso, Escobedo Cabello, Jesús Arturo, Cantoral-Ceballos, Jose Antonio, and Molina-Leal, Alejandra
- Subjects
MOBILE robots ,HUMAN-robot interaction ,AUTONOMOUS robots ,ANGULAR velocity ,LINEAR velocity ,MOTION capture (Human mechanics) ,ALGORITHMS - Abstract
Autonomous mobile robots are essential to industry, and human–robot interactions are becoming more common nowadays. These interactions require that the robots navigate scenarios with static and dynamic obstacles safely, avoiding collisions. This paper presents a physical implementation of a method for dynamic obstacle avoidance using a long short-term memory (LSTM) neural network that uses information from the mobile robot's LiDAR so that it can navigate through scenarios with static and dynamic obstacles while avoiding collisions and reaching its goal. The model is implemented using a TurtleBot3 mobile robot within an OptiTrack motion capture (MoCap) system for obtaining its position at any given time. The user operates the robot through these scenarios, recording its LiDAR readings, target point, position inside the MoCap system, and its linear and angular velocities, all of which serve as the input for the LSTM network. The model is trained on data from multiple user-operated trajectories across five different scenarios, outputting the linear and angular velocities for the mobile robot. Physical experiments prove that the model is successful in allowing the mobile robot to reach the target point in each scenario while avoiding the dynamic obstacle, with a validation accuracy of 98.02%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. AI/ML-Powered IoT Solutions for Smart Buildings and Energy Efficiency.
- Author
-
Dasgupta, Rohan
- Subjects
INTELLIGENT buildings ,INTERNET of things ,ENERGY consumption ,ARTIFICIAL intelligence ,RETROFITTING of buildings - Abstract
Energy efficiency is a crucial issue impacting the lives of people around the world. Over the past few years, we have all encountered the adverse effects of high energy costs in our personal households as well as the global economy. The application of Artificial Intelligence and Machine Learning (AI/ML) on Internet of Things (IoT) data in the context of improving the energy efficiency of buildings holds immense potential. This paper aims to utilize the power of AI/ML and IoT to devise predictions and provide recommendations to improve the energy efficiency of buildings. The concept of "smart buildings" has recently become a key focus of AI/ML research, wherein the building collects energy consumption data from devices and sensors to analyze energy usage and provide concrete recommendations for enhancing the energy efficiency of a building. Similar research has been undertaken to improve the energy efficiency in smart buildings through retrofitting intervention via IoT components in buildings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Two-Stage Probe-Based Search Optimization Algorithm for the Traveling Salesman Problems.
- Author
-
Rahman, Md. Azizur and Ma, Jinwen
- Subjects
OPTIMIZATION algorithms ,SEARCH algorithms ,COMBINATORIAL optimization ,OPERATIONS research ,ARTIFICIAL intelligence ,ALGORITHMS - Abstract
As a classical combinatorial optimization problem, the traveling salesman problem (TSP) has been extensively investigated in the fields of Artificial Intelligence and Operations Research. Due to being NP-complete, it is still rather challenging to solve both effectively and efficiently. Because of its high theoretical significance and wide practical applications, great effort has been undertaken to solve it from the point of view of intelligent search. In this paper, we propose a two-stage probe-based search optimization algorithm for solving both symmetric and asymmetric TSPs through the stages of route development and a self-escape mechanism. Specifically, in the first stage, a reasonable proportion threshold filter of potential basis probes or partial routes is set up at each step during the complete route development process. In this way, the poor basis probes with longer routes are filtered out automatically. Moreover, four local augmentation operators are further employed to improve these potential basis probes at each step. In the second stage, a self-escape mechanism or operation is further implemented on the obtained complete routes to prevent the probe-based search from being trapped in a locally optimal solution. The experimental results on a collection of benchmark TSP datasets demonstrate that our proposed algorithm is more effective than other state-of-the-art optimization algorithms. In fact, it achieves the best-known TSP benchmark solutions in many datasets, while, in certain cases, it even generates solutions that are better than the best-known TSP benchmark solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
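The route-development stage described in this abstract — extending partial routes step by step while a proportion threshold filters out the poorer probes — resembles a beam search over partial tours. The sketch below illustrates only that filtering idea, without the four local augmentation operators or the self-escape mechanism; the `keep_fraction` value and the tiny instance are assumptions for demonstration.

```python
def probe_search_tsp(dist, keep_fraction=0.5):
    """Build tours by extending partial routes (probes), keeping only the
    best `keep_fraction` of partial routes at each development step."""
    n = len(dist)
    beam = [([0], 0.0)]                        # (partial route, its length)
    for _ in range(n - 1):
        candidates = []
        for route, length in beam:
            for city in range(n):
                if city not in route:
                    candidates.append((route + [city],
                                       length + dist[route[-1]][city]))
        candidates.sort(key=lambda rc: rc[1])  # shorter partial routes first
        keep = max(1, int(len(candidates) * keep_fraction))
        beam = candidates[:keep]               # filter out poor probes
    # Close each surviving tour back to the start and return the best one.
    tours = [(r + [0], l + dist[r[-1]][0]) for r, l in beam]
    return min(tours, key=lambda rc: rc[1])

# 4-city symmetric instance; the optimal tour 0-1-2-3-0 has length 5.
dist = [[0, 1, 9, 2],
        [1, 0, 1, 9],
        [9, 1, 0, 1],
        [2, 9, 1, 0]]
route, length = probe_search_tsp(dist)
```

Because only a fraction of probes survives each step, the per-step cost stays bounded, which is the point of the proportion threshold; the paper's self-escape mechanism then addresses the local optima that such greedy filtering can cause.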
35. A survey on energy‐efficient workflow scheduling algorithms in cloud computing.
- Author
-
Verma, Prateek, Maurya, Ashish Kumar, and Yadav, Rama Shankar
- Subjects
SERVER farms (Computer network management) ,CLOUD computing ,CARBON emissions ,WORKFLOW ,ALGORITHMS ,ARTIFICIAL intelligence - Abstract
The advancements in computing and storage capabilities of machines and their fusion with new technologies like the Internet of Things (IoT), 5G networks, and artificial intelligence, to name a few, have resulted in a paradigm shift in the way computing is done in a cloud environment. In addition, the ever‐increasing user demand for cloud services and resources has resulted in cloud service providers (CSPs) expanding the scale of their data center facilities. This has increased energy consumption, leading to higher carbon dioxide emission levels. Hence, it becomes all the more important to design scheduling algorithms that optimize the use of cloud resources with minimum energy consumption. This paper surveys state‐of‐the‐art algorithms for scheduling workflow tasks to cloud resources with a focus on reducing energy consumption. For this, we categorize different workflow scheduling algorithms based on the scheduling approaches used and provide an analytical discussion of the algorithms covered in the paper. Further, we provide a detailed classification of different energy‐efficient strategies used by CSPs for energy saving in data centers. Finally, we describe some of the popular real‐world workflow applications as well as highlight important emerging trends and open issues in cloud computing for future research directions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Comparative Evaluation of NeRF Algorithms on Single Image Dataset for 3D Reconstruction.
- Author
-
Condorelli, Francesca and Perticarini, Maurizio
- Subjects
ARTIFICIAL intelligence ,THREE-dimensional imaging ,HISTORIC sites ,ALGORITHMS ,COMPUTER vision ,IMAGE reconstruction algorithms - Abstract
The reconstruction of three-dimensional scenes from a single image represents a significant challenge in computer vision, particularly in the context of cultural heritage digitisation, where datasets may be limited or of poor quality. This paper addresses this challenge by conducting a study of the latest and most advanced algorithms for single-image 3D reconstruction, with a focus on applications in cultural heritage conservation. Exploiting different single-image datasets, the research evaluates the strengths and limitations of various artificial intelligence-based algorithms, in particular Neural Radiance Fields (NeRF), in reconstructing detailed 3D models from limited visual data. The study includes experiments on scenarios such as inaccessible or non-existent heritage sites, where traditional photogrammetric methods fail. The results demonstrate the effectiveness of NeRF-based approaches in producing accurate, high-resolution reconstructions suitable for visualisation and metric analysis. The results contribute to advancing the understanding of NeRF-based approaches in handling single-image inputs and offer insights for real-world applications such as object location and immersive content generation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Enhancements in Radiological Detection of Metastatic Lymph Nodes Utilizing AI-Assisted Ultrasound Imaging Data and the Lymph Node Reporting and Data System Scale.
- Author
-
Chudobiński, Cezary, Świderski, Bartosz, Antoniuk, Izabella, and Kurek, Jarosław
- Subjects
LYMPH nodes ,RECEIVER operating characteristic curves ,EARLY detection of cancer ,ARTIFICIAL intelligence ,MULTIPLE regression analysis ,ULTRASONIC imaging ,METASTASIS ,QUALITY assurance ,ALGORITHMS - Abstract
Simple Summary: A novel approach for automatic detection of neoplastic lesions in lymph nodes is presented, which incorporates machine learning methods and the new LN-RADS scale. The presented solution incorporates different network structures with diverse datasets to improve the overall effectiveness. Final findings demonstrate that incorporating the LN-RADS scale labels improved the overall diagnosis, especially when compared with current, standard practices. The presented solution is meant as an aid in the diagnosis process. The paper presents a novel approach for the automatic detection of neoplastic lesions in lymph nodes (LNs). It leverages the latest advances in machine learning (ML) with the LN Reporting and Data System (LN-RADS) scale. By integrating diverse datasets and network structures, the research investigates the effectiveness of ML algorithms in improving diagnostic accuracy and automation potential. Both Multinomial Logistic Regression (MLR)-integrated and fully connected neuron layers are included in the analysis. The methods were trained using three variants of combinations of histopathological data and LN-RADS scale labels to assess their utility. The findings demonstrate that the LN-RADS scale improves prediction accuracy. MLR integration is shown to achieve higher accuracy, while the fully connected neuron approach excels in AUC performance. All of the above suggests a possibility for significant improvement in the early detection and prognosis of cancer using AI techniques. The study underlines the importance of further exploration into combined datasets and network architectures, which could potentially lead to even greater improvements in the diagnostic process. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. An efficient beaconing of bluetooth low energy by decision making algorithm.
- Author
-
Fujisawa, Minoru, Yasuda, Hiroyuki, Isogai, Ryosuke, Arai, Maki, Yoshida, Yoshifumi, Li, Aohan, Kim, Song-Ju, and Hasegawa, Mikio
- Subjects
ARTIFICIAL intelligence ,DECISION making ,WIRELESS communications ,ALGORITHMS - Abstract
Ongoing research endeavors are exploring the potential of artificial intelligence to enhance the efficiency of wireless communication systems. Nevertheless, complex computational mechanisms, such as those inherent in neural networks, are not optimally suited for applications where the reduction of computational intricacy is of paramount importance. The rise in Bluetooth-enabled devices has led to the widespread adoption of Bluetooth Low Energy (BLE) in various IoT applications, primarily due to its low power consumption. For specific applications, such as lost-and-found tags which operate on small batteries, it is especially important to further reduce power usage. With the objective of achieving low power consumption by optimally selecting channels and advertisement intervals, this paper introduces a parameter selection method derived from the Multi-Armed Bandit (MAB) algorithm, a technique known for addressing human decision-making challenges. In this study, we evaluate our proposed method using simulations in diverse environments. The outcomes indicate that, without compromising much on reliability, our approach can reduce power consumption by up to 40% depending on the wireless surroundings. Additionally, when this method was implemented on an actual BLE device, it demonstrated effectiveness in reducing power consumption by about 35% in real environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
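The MAB-based parameter selection described in result 38 can be sketched in Python. This is an illustrative epsilon-greedy bandit, not the authors' algorithm: the arm set, the reward model, and the assumption that one channel is less congested are all hypothetical.

```python
import random

def epsilon_greedy_mab(arms, reward_fn, rounds=2000, epsilon=0.1, seed=0):
    """Epsilon-greedy Multi-Armed Bandit over BLE advertising parameters.

    Each arm is one (channel, interval) combination; the reward trades off
    delivery success against the energy cost of advertising more often.
    """
    rng = random.Random(seed)
    counts = [0] * len(arms)
    values = [0.0] * len(arms)  # running mean reward per arm
    for _ in range(rounds):
        if rng.random() < epsilon:
            i = rng.randrange(len(arms))  # explore a random arm
        else:
            i = max(range(len(arms)), key=lambda k: values[k])  # exploit
        r = reward_fn(arms[i], rng)
        counts[i] += 1
        values[i] += (r - values[i]) / counts[i]  # incremental mean update
    return max(range(len(arms)), key=lambda k: values[k])

# Hypothetical arms: (advertising channel, advertising interval in ms).
arms = [(37, 100), (37, 500), (38, 100), (39, 1000)]

def reward(arm, rng):
    channel, interval = arm
    # Toy model: longer intervals cost less energy; channel 38 is assumed
    # to be the least congested in this simulated environment.
    success = rng.random() < (0.9 if channel == 38 else 0.6)
    energy_cost = 100.0 / interval
    return (1.0 if success else 0.0) - 0.1 * energy_cost

best = epsilon_greedy_mab(arms, reward)
```

In the paper's setting the reward would come from observed advertisement outcomes on the device rather than a simulated success probability.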
39. Towards a common European ethical and legal framework for conducting clinical research: the GATEKEEPER experience.
- Author
-
Maccaro, Alessia, Tsiompanidou, Vasiliki, Piaggio, Davide, Gallego Montejo, Alba M., Cea Sánchez, Gloria, de Batlle, Jordi, Quesada Rodriguez, Adrian, Fico, Giuseppe, and Pecchia, Leandro
- Subjects
MEDICAL research laws ,DATA security ,MEDICAL protocols ,HUMAN services programs ,DIFFUSION of innovations ,COST effectiveness ,PROFESSIONAL ethics ,DIGITAL health ,CLINICAL medicine research ,ARTIFICIAL intelligence ,DECISION making ,MEDICAL research ,CONCEPTUAL structures ,RULES ,ALGORITHMS - Abstract
This paper examines the ethical and legal challenges encountered during the GATEKEEPER Project and how these challenges informed the development of a comprehensive framework for future Large-Scale Pilot (LSP) projects. GATEKEEPER is a LSP Project with 48 partners conducting 30 implementation studies across Europe with 50,000 target participants grouped into 9 Reference Use Cases. The project underscored the complexity of obtaining ethical approval across various jurisdictions with divergent regulations and procedures. Through a detailed analysis of the issues faced and the strategies employed to navigate these challenges, this study proposes an ethical and legal framework. This framework, derived from a comparative analysis of ethical application forms and regulations, aims to streamline the ethical approval process for future LSP research projects. By addressing the hurdles encountered in GATEKEEPER, the proposed framework offers a roadmap for more efficient and effective project management, ensuring smoother implementation of similar projects in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Piquing artificial intelligence towards drug discovery: Tools, techniques, and applications.
- Author
-
Agu, Peter Chinedu and Obulose, Chidiebere Nwiboko
- Subjects
DRUG discovery ,ARTIFICIAL intelligence ,DRUG design ,DRUG development ,NANOMEDICINE ,DRUG toxicity - Abstract
The purpose of this study was to discuss how artificial intelligence (AI) methods have affected the field of drug development. It looks at how AI models and data resources are reshaping the drug development process by offering more affordable and expedient options to conventional approaches. The paper opens with an overview of well‐known information sources for drug development. The discussion then moves on to molecular representation techniques that make it possible to convert data into representations that computers can understand. The paper also gives a general overview of the algorithms used in the creation of drug discovery models based on AI. In particular, the paper looks at how AI algorithms might be used to forecast drug toxicity, drug bioactivity, and drug physicochemical properties. De novo drug design, binding affinity prediction, and other AI‐based models for drug–target interaction were covered in deeper detail. Modern applications of AI in nanomedicine design and pharmacological synergism/antagonism prediction were also covered. The potential advantages of AI in drug development are highlighted as the evaluation comes to a close. It underlines how AI may greatly speed up and improve the efficiency of drug discovery, resulting in the creation of new and better medicines. To fully realize the promise of AI in drug discovery, the review acknowledges the difficulties that come with its uses in this field and advocates for more study and development. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. SH-GAT: Software-hardware co-design for accelerating graph attention networks on FPGA.
- Author
-
Wang, Renping, Li, Shun, Tang, Enhao, Lan, Sen, Liu, Yajing, Yang, Jing, Huang, Shizhen, and Hu, Hailong
- Subjects
COMPUTER software ,ARTIFICIAL intelligence ,TECHNOLOGICAL innovations ,MACHINE learning ,ALGORITHMS - Abstract
Graph convolution networks (GCN) have demonstrated success in learning graph structures; however, they are limited in inductive tasks. Graph attention networks (GAT) were proposed to address the limitations of GCN and have shown high performance in graph-based tasks. Despite this success, GAT faces challenges in hardware acceleration, including: 1) The GAT algorithm has difficulty adapting to hardware; 2) challenges in efficiently implementing Sparse matrix multiplication (SPMM); and 3) complex addressing and pipeline stall issues due to irregular memory accesses. To this end, this paper proposed SH-GAT, an FPGA-based GAT accelerator that achieves more efficient GAT inference. The proposed approach employed several optimizations to enhance GAT performance. First, this work optimized the GAT algorithm using split weights and softmax approximation to make it more hardware-friendly. Second, a load-balanced SPMM kernel was designed to fully leverage potential parallelism and improve data throughput. Lastly, data preprocessing was performed by pre-fetching the source node and its neighbor nodes, effectively addressing the pipeline stall and complex addressing issues arising from irregular memory access. SH-GAT was evaluated on the Xilinx FPGA Alveo U280 accelerator card with three popular datasets. Compared to existing CPU, GPU, and state-of-the-art (SOTA) FPGA-based accelerators, SH-GAT can achieve speedups of up to 3283×, 13×, and 2.3×, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
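As a software reference for the SPMM kernel and load balancing that result 41 accelerates in hardware, here is a minimal sketch: a CSR-format sparse-by-dense multiply and a greedy nonzero-balanced row assignment. The paper's actual kernel and scheduler are not reproduced; this only illustrates the operations being accelerated.

```python
import numpy as np

def csr_spmm(indptr, indices, data, B):
    """Sparse (CSR) x dense multiply, the SPMM at the heart of GAT
    aggregation. Rows are independent, which is what a load-balanced
    hardware kernel parallelizes across processing elements."""
    n = len(indptr) - 1
    out = np.zeros((n, B.shape[1]))
    for i in range(n):
        for k in range(indptr[i], indptr[i + 1]):
            out[i] += data[k] * B[indices[k]]
    return out

def balance_rows(indptr, n_pe):
    """Greedy load balancing: assign each row to the processing element
    with the fewest accumulated nonzeros, approximating the balanced
    SPMM scheduling the abstract alludes to."""
    loads = [0] * n_pe
    assign = []
    for i in range(len(indptr) - 1):
        nnz = indptr[i + 1] - indptr[i]
        pe = min(range(n_pe), key=lambda p: loads[p])
        loads[pe] += nnz
        assign.append(pe)
    return assign, loads
```

For example, the 2x3 sparse matrix [[1,0,2],[0,3,0]] is encoded as `indptr=[0,2,3]`, `indices=[0,2,1]`, `data=[1,2,3]`; multiplying by the 3x3 identity recovers the dense matrix.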
42. Intelligent Algorithms Enable Photocatalyst Design and Performance Prediction.
- Author
-
Wang, Shifa, Mo, Peilin, Li, Dengfeng, and Syed, Asad
- Subjects
PHOTOCATALYSTS ,ARTIFICIAL neural networks ,OPTIMIZATION algorithms ,PHOTOCATALYSIS ,ALGORITHMS ,ARTIFICIAL intelligence ,POLLUTANTS - Abstract
Photocatalysts have made great contributions to the degradation of pollutants to achieve environmental purification. The traditional method of developing new photocatalysts is to design and perform a large number of experiments in the hope of obtaining efficient pollutant-degrading photocatalysts, an approach that is time-consuming, costly, and does not necessarily achieve the best performance. The development of photocatalysis has been accelerated by rapid advances in artificial intelligence. Intelligent algorithms can be utilized to design photocatalysts and predict photocatalytic performance, resulting in a reduction in development time and the cost of new catalysts. In this paper, the intelligent algorithms for photocatalyst design and photocatalytic performance prediction are reviewed, especially the artificial neural network model and the model optimized by an intelligent algorithm. A detailed discussion is given on the advantages and disadvantages of the neural network model, as well as its application in photocatalysis optimized by intelligent algorithms. The use of intelligent algorithms in photocatalysis remains a long-term challenge due to the lack of neural network models suited to predicting the photocatalytic performance of photocatalysts. The prediction of photocatalytic performance can be aided by the combination of various intelligent optimization algorithms and neural network models, but it is only useful in the early stages. Intelligent algorithms can be used to design photocatalysts and predict their photocatalytic performance, which is a promising technology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. The Algorithm Holy: TikTok, Technomancy, and the Rise of Algorithmic Divination.
- Author
-
St. Lawrence, Emma
- Subjects
SOCIAL media mobile apps ,WITCHCRAFT ,DIVINATION ,DANCE ,ALGORITHMS ,SINGING ,SUBCULTURES ,POPULAR music - Abstract
The social media app TikTok was launched in the US in 2017 with a very specific purpose: sharing 15-s clips of singing and dancing to popular songs. Seven years and several billion downloads later, it is now the go-to app for Gen Z Internet users and much better known for its ultra-personalized algorithm, AI-driven filters, and network of thriving subcultures. Among them is a growing community of magical and spiritual practitioners, frequently collectivized as Witchtok, who use the app not only to share their craft and create community but also to treat the technology itself as a powerful partner with which to conduct readings, channel deities, connect to a collective consciousness, and transcend the communicative boundaries between the human and spirit realms—a practice that can be understood as algorithmic divination. In analyzing contemporary witchcraft on TikTok and contextualizing it within the larger history of technospirituality, this paper aims to explore algorithmic divination as an increasingly popular and powerful practice of technomancy open to practitioners of diverse creed and belief. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Algorithms and Faith: The Meaning, Power, and Causality of Algorithms in Catholic Online Discourse.
- Author
-
Sierocki, Radosław
- Subjects
ONLINE algorithms ,ALGORITHMS ,ARTIFICIAL intelligence ,COMPUTER programming ,DISCOURSE analysis - Abstract
The purpose of this article is to present grassroots concepts and ideas about "the algorithm" in the religious context. The power and causality of algorithms are based on lines of computer code, making a society influenced by "black boxes" or "enigmatic technologies" (as they are incomprehensible to most people). On the other hand, the power of algorithms lies in the meanings that we attribute to them. The extent of the power, agency, and control that algorithms have over us depends on how much power, agency, and control we are willing to give to algorithms and artificial intelligence, which involves building the idea of their omnipotence. The key question is about the meanings and the ideas about algorithms that are circulating in society. This paper is focused on the analysis of "vernacular/folk" theories on algorithms, reconstructed based on posts made by the users of Polish Catholic forums. The qualitative analysis of online discourse makes it possible to point out several themes, i.e., according to the linguistic concept, "algorithm" is the source domain used in explanations of religious issues (God as the creator of the algorithm, the soul as the algorithm); algorithms and the effects of their work are combined with the individualization and personalization of religion; algorithms are perceived as ideological machines. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. HYPERGRAPH HORN FUNCTIONS.
- Author
-
BÉRCZI, KRISTÓF, BOROS, ENDRE, and KAZUHISA MAKINO
- Subjects
ARTIFICIAL intelligence ,COMPUTER science ,POLYNOMIAL time algorithms ,DATABASES ,BOOLEAN functions ,SEMIDEFINITE programming - Abstract
Horn functions form a subclass of Boolean functions possessing interesting structural and computational properties. These functions play a fundamental role in algebra, artificial intelligence, combinatorics, computer science, database theory, and logic. In the present paper, we introduce the subclass of hypergraph Horn functions that generalizes matroids and equivalence relations. We provide multiple characterizations of hypergraph Horn functions in terms of implicate-duality and the closure operator, which are, respectively, regarded as generalizations of matroid duality and the Mac Lane-Steinitz exchange property of matroid closure. We also study algorithmic issues on hypergraph Horn functions and show that the recognition problem (i.e., deciding if a given definite Horn CNF represents a hypergraph Horn function) and key realization (i.e., deciding if a given hypergraph is realized as a key set by a hypergraph Horn function) can be done in polynomial time, while implicate sets can be generated with polynomial delay. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
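For background on the closure operator that result 45 generalizes: the closure of a variable set under a definite Horn CNF can be computed by forward chaining in polynomial time. This is the standard textbook construction, sketched here rather than the paper's hypergraph-specific machinery.

```python
def horn_closure(seed, rules):
    """Forward chaining: the smallest superset of `seed` closed under
    definite Horn rules (body -> head). Each pass adds the head of any
    rule whose body is already contained in the closure, so the loop
    runs at most O(|rules| * |variables|) rule checks overall."""
    closed = set(seed)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in closed and set(body) <= closed:
                closed.add(head)
                changed = True
    return closed

# Example rules: a & b -> c, and c -> d.
rules = [({"a", "b"}, "c"), ({"c"}, "d")]
```

Starting from {a, b}, the first pass derives c and the second derives d, giving the closure {a, b, c, d}; starting from {a} alone, no rule fires.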
46. SPCTRE: sparsity-constrained fully-digital reservoir computing architecture on FPGA.
- Author
-
Abe, Yuki, Nishida, Kohei, Ando, Kota, and Asai, Tetsuya
- Subjects
ARCHITECTURAL design ,ARTIFICIAL intelligence ,PARALLEL processing ,PARALLEL programming ,ALGORITHMS - Abstract
This paper proposes an unconventional architecture and algorithm for implementing reservoir computing on FPGA. An architecture-oriented algorithm with improved throughput and architecture designed to reduce memory and hardware resource requirements are presented. The proposed architecture exhibits good performance in terms of benchmarks for reservoir computing. A prediction accelerator for reservoir computing that operates on 55.45 mW at 450 K fps with <3000 LEs is realized by implementing the architecture on FPGA. The proposed approach presents a novel FPGA implementation of reservoir computing focussing on both algorithms and architecture that may serve as a basis for applications of AI at network edge. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
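Result 46 targets hardware, but the underlying reservoir computing model is simple enough to sketch in software: a fixed random recurrent reservoir plus a trained linear readout. Only the readout is learned, which is what makes the approach cheap enough for edge hardware. The reservoir size, spectral radius, and the sine-prediction task below are illustrative choices, not the paper's configuration.

```python
import numpy as np

def esn_readout(u, y, n_res=50, rho=0.9, seed=0):
    """Minimal echo state network: drive a fixed random tanh reservoir
    with input u, then fit a ridge-regression readout to target y."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, size=n_res)
    W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius
    x = np.zeros(n_res)
    states = []
    for t in range(len(u)):
        x = np.tanh(W_in * u[t] + W @ x)  # reservoir update
        states.append(x.copy())
    X = np.array(states)
    # Ridge regression readout: W_out = (X^T X + lam I)^-1 X^T y
    lam = 1e-6
    W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)
    return X @ W_out, W_out

# One-step-ahead prediction of a sine wave.
t = np.arange(300)
u = np.sin(0.1 * t)
y = np.sin(0.1 * (t + 1))  # target: next sample
pred, _ = esn_readout(u, y)
mse = float(np.mean((pred[50:] - y[50:]) ** 2))  # skip reservoir warm-up
```

A hardware implementation like the one in the paper would additionally quantize the weights and states and constrain reservoir sparsity to fit FPGA resources.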
47. Artificial Intelligence-Based Algorithms in Medical Image Scan Segmentation and Intelligent Visual Content Generation—A Concise Overview.
- Author
-
Rudnicka, Zofia, Szczepanski, Janusz, and Pregowska, Agnieszka
- Subjects
ARTIFICIAL intelligence ,COMPUTER-assisted image analysis (Medicine) ,DIAGNOSTIC imaging ,IMAGE segmentation ,ALGORITHMS - Abstract
Recently, artificial intelligence (AI)-based algorithms have revolutionized the medical image segmentation processes. Thus, the precise segmentation of organs and their lesions may contribute to an efficient diagnostics process and a more effective selection of targeted therapies, as well as increasing the effectiveness of the training process. In this context, AI may contribute to the automatization of the image scan segmentation process and increase the quality of the resulting 3D objects, which may lead to the generation of more realistic virtual objects. In this paper, we focus on the AI-based solutions applied in medical image scan segmentation and intelligent visual content generation, i.e., computer-generated three-dimensional (3D) images in the context of extended reality (XR). We consider different types of neural networks used with a special emphasis on the learning rules applied, taking into account algorithm accuracy and performance, as well as open data availability. This paper attempts to summarize the current development of AI-based segmentation methods in medical imaging and intelligent visual content generation that are applied in XR. It concludes with possible developments and open challenges in AI applications in extended reality-based solutions. Finally, future lines of research and development directions of artificial intelligence applications, both in medical image segmentation and extended reality-based medical solutions, are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. Economic Dispatch Optimization Strategies and Problem Formulation: A Comprehensive Review.
- Author
-
Marzbani, Fatemeh and Abdelfatah, Akmal
- Subjects
EVIDENCE gaps ,MATHEMATICAL optimization ,COMPUTER performance ,ENERGY management ,ALGORITHMS - Abstract
Economic Dispatch Problems (EDP) refer to the process of determining the power output of generation units such that the electricity demand of the system is satisfied at a minimum cost while technical and operational constraints of the system are satisfied. This procedure is vital in the efficient energy management of electricity networks since it can ensure the reliable and efficient operation of power systems. As power systems transition from conventional to modern ones, new components and constraints are introduced to power systems, making the EDP increasingly complex. This highlights the importance of developing advanced optimization techniques that can efficiently handle these new complexities to ensure optimal operation and cost-effectiveness of power systems. This review paper provides a comprehensive exploration of the EDP, encompassing its mathematical formulation and the examination of commonly used problem formulation techniques, including single and multi-objective optimization methods. It also explores the progression of paradigms in economic dispatch, tracing the journey from traditional methods to contemporary strategies in power system management. The paper categorizes the commonly utilized techniques for solving EDP into four groups: conventional mathematical approaches, uncertainty modelling methods, artificial intelligence-driven techniques, and hybrid algorithms. It identifies critical research gaps, a predominant focus on single-case studies that limit the generalizability of findings, and the challenge of comparing research due to arbitrary system choices and formulation variations. The present paper calls for the implementation of standardized evaluation criteria and the inclusion of a diverse range of case studies to enhance the practicality of optimization techniques in the field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
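The basic EDP formulation reviewed in result 48 has a classic solution by equal incremental cost: with quadratic costs C_i(P) = a_i + b_i P + c_i P^2 and no generator limits, every unit at the optimum satisfies dC_i/dP = b_i + 2 c_i P_i = lambda, so P_i = (lambda - b_i) / (2 c_i), and lambda is chosen so total output meets demand. A lambda-iteration (bisection) sketch, with hypothetical two-unit coefficients:

```python
def economic_dispatch(costs, demand, tol=1e-6):
    """Lambda iteration for the unconstrained quadratic-cost EDP.

    `costs` is a list of (a, b, c) coefficients per generating unit.
    Total output is monotone increasing in lambda, so bisection on
    lambda converges to the demand-balancing incremental cost.
    """
    lo, hi = 0.0, 1000.0
    while hi - lo > tol:
        lam = (lo + hi) / 2
        total = sum((lam - b) / (2 * c) for _, b, c in costs)
        if total < demand:
            lo = lam
        else:
            hi = lam
    lam = (lo + hi) / 2
    return [(lam - b) / (2 * c) for _, b, c in costs]

# Hypothetical 2-unit system with 300 MW demand.
units = [(100, 2.0, 0.01), (120, 1.5, 0.02)]
dispatch = economic_dispatch(units, 300.0)
```

For these coefficients lambda converges to 35/6 ≈ 5.833, dispatching about 191.7 MW and 108.3 MW. The modern formulations surveyed in the paper add generator limits, losses, and non-convex cost curves, which is where the AI-driven and hybrid methods come in.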
49. A Comprehensive Overview of Control Algorithms, Sensors, Actuators, and Communication Tools of Autonomous All-Terrain Vehicles in Agriculture.
- Author
-
Etezadi, Hamed and Eshkabilov, Sulaymon
- Subjects
DATA transmission systems ,AUTONOMOUS vehicles ,ACTUATORS ,AGRICULTURAL technology ,COMPUTER vision ,DETECTORS ,ALGORITHMS - Abstract
This review paper discusses the development trends of agricultural autonomous all-terrain vehicles (AATVs) from four cornerstones, such as (1) control strategy and algorithms, (2) sensors, (3) data communication tools and systems, and (4) controllers and actuators, based on 221 papers published in peer-reviewed journals for 1960–2023. The paper highlights a comparative analysis of commonly employed control methods and algorithms by highlighting their advantages and disadvantages. It gives comparative analyses of sensors, data communication tools, actuators, and hardware-embedded controllers. In recent years, many novel developments in AATVs have been made due to advancements in wireless and remote communication, high-speed data processors, sensors, computer vision, and broader applications of AI tools. Technical advancements in fully autonomous control of AATVs remain limited, requiring research into accurate estimation of terrain mechanics, identifying uncertainties, and making fast and accurate decisions, as well as utilizing wireless communication and edge cloud computing. Furthermore, most of the developments are at the research level and have many practical limitations due to terrain and weather conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. Guided Intelligent Hyper-Heuristic Algorithm for Critical Software Application Testing Satisfying Multiple Coverage Criteria.
- Author
-
Rani, S. Alagu, Akila, C., and Raja, S. P.
- Subjects
COMPUTER software testing ,APPLICATION software ,DECISION support systems ,ALGORITHMS ,INTELLIGENT agents ,OPTIMIZATION algorithms - Abstract
This paper proposes a novel algorithm that combines symbolic execution and data flow testing to generate test cases satisfying multiple coverage criteria of critical software applications. The coverage criteria considered are data flow coverage as the primary criterion, software safety requirements, and equivalence partitioning as sub-criteria. The characteristics of the subjects used for the study include high-precision floating-point computation and iterative programs. The work proposes an algorithm that aids the tester in automated test data generation, satisfying multiple coverage criteria for critical software. The algorithm adapts itself and selects different heuristics based on program characteristics. The algorithm has an intelligent agent as its decision support system to accomplish this adaptability. Intelligent agent uses the knowledge base to select different low-level heuristics based on the current state of the problem instance during each generation of genetic algorithm execution. The knowledge base mimics the expert's decision in choosing the appropriate heuristics. The algorithm outperforms by accomplishing 100% data flow coverage for all subjects. In contrast, the simple genetic algorithm, random testing and a hyper-heuristic algorithm could accomplish a maximum of 83%, 67% and 76.7%, respectively, for the subject program with high complexity. The proposed algorithm covers other criteria, namely equivalence partition coverage and software safety requirements, with fewer iterations. The results reveal that test cases generated by the proposed algorithm are also effective in fault detection, with 87.2% of mutants killed when compared to a maximum of 76.4% of mutants killed for the complex subject with test cases of other methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF