1,759 results
Search Results
2. Application of artificial intelligence technology in AI music creation.
- Author
-
Li, Haoyang
- Subjects
- *
ARTIFICIAL intelligence , *MUSICAL analysis , *MUSICAL intervals & scales , *MUSICAL composition , *ALGORITHMS - Abstract
Aiming at the problem of poor-quality music source files and sound in the process of AI music creation, this paper proposes an automatic text generation algorithm to assist the analysis of AI music creation. Firstly, for the text generation task over independent multi-source syntactic structure graph data, the relationships between multi-source input documents are modeled in terms of semantic association and syntactic dependence, so as to generate the final music-writing text. Secondly, the algorithm addresses the difficulty of locating related work within massive collections of music texts. Finally, the actual effect of the AI music is comprehensively judged. The results show that the proposed automatic text generation algorithm can optimize the multimodal encoder, make an overall judgment on the internal data, network data, and graph structure of music text, and improve the encoding rate of information and semantics. The algorithm can therefore control the music unit, identify the characteristics of multiple modes in music creation, and improve the effect of AI music creation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Canadian Association of Radiologists White Paper on De-identification of Medical Imaging: Part 2, Practical Considerations.
- Author
-
Parker, William, Jaremko, Jacob L., Cicero, Mark, Azar, Marleine, El-Emam, Khaled, Gray, Bruce G., Hurrell, Casey, Lavoie-Cardinal, Flavie, Desjardins, Benoit, Lum, Andrea, Sheremeta, Lori, Lee, Emil, Reinhold, Caroline, Tang, An, and Bromwich, Rebecca
- Subjects
- *
ALGORITHMS , *ARTIFICIAL intelligence , *DATA encryption , *DATABASE management , *DIAGNOSTIC imaging , *HEALTH services accessibility , *MACHINE learning , *MEDICAL protocols , *DICOM (Computer network protocol) , *COVID-19 pandemic - Abstract
The application of big data, radiomics, machine learning, and artificial intelligence (AI) algorithms in radiology requires access to large data sets containing personal health information. Because machine learning projects often require collaboration between different sites or data transfer to a third party, precautions are required to safeguard patient privacy. Safety measures are required to prevent inadvertent access to and transfer of identifiable information. The Canadian Association of Radiologists (CAR) is the national voice of radiology committed to promoting the highest standards in patient-centered imaging, lifelong learning, and research. The CAR has created an AI Ethical and Legal standing committee with the mandate to guide the medical imaging community in terms of best practices in data management, access to health care data, de-identification, and accountability practices. Part 2 of this article will inform CAR members on the practical aspects of medical imaging de-identification, strengths and limitations of de-identification approaches, list of de-identification software and tools available, and perspectives on future directions. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
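The rule-based removal of direct identifiers that de-identification pipelines perform can be illustrated with a minimal sketch; the attribute names below are illustrative stand-ins for DICOM tags such as PatientName (0010,0010), and real workflows should use the confidentiality profiles and dedicated tools the white paper surveys.

```python
# Attributes treated as direct identifiers (illustrative subset; real DICOM
# de-identification follows the standard's confidentiality profiles and the
# tools surveyed in the paper).
DIRECT_IDENTIFIERS = {"PatientName", "PatientID", "PatientBirthDate",
                      "InstitutionName", "ReferringPhysicianName"}

def deidentify(record):
    """Replace direct identifiers with a fixed placeholder; keep other fields."""
    return {k: ("REMOVED" if k in DIRECT_IDENTIFIERS else v)
            for k, v in record.items()}

scan = {"PatientName": "DOE^JANE", "PatientID": "12345",
        "Modality": "CT", "StudyDate": "20200101"}
print(deidentify(scan))
# {'PatientName': 'REMOVED', 'PatientID': 'REMOVED', 'Modality': 'CT', 'StudyDate': '20200101'}
```

Note that scrubbing listed tags is only one layer; the paper also discusses burned-in pixel data and free-text fields, which simple lookups like this cannot catch.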
4. Physics driven behavioural clustering of free-falling paper shapes.
- Author
-
Howison, Toby, Hughes, Josie, Giardina, Fabio, and Iida, Fumiya
- Subjects
- *
PHYSICS , *SET functions , *MACHINE learning , *PHENOMENOLOGICAL theory (Physics) , *CONTINUUM mechanics - Abstract
Many complex physical systems exhibit a rich variety of discrete behavioural modes. Often, the system complexity limits the applicability of standard modelling tools. Hence, understanding the underlying physics of different behaviours and distinguishing between them is challenging. Although traditional machine learning techniques could predict and classify behaviour well, typically they do not provide any meaningful insight into the underlying physics of the system. In this paper we present a novel method for extracting physically meaningful clusters of discrete behaviour from limited experimental observations. This method obtains a set of physically plausible functions that both facilitate behavioural clustering and aid in system understanding. We demonstrate the approach on the V-shaped falling paper system, a new falling paper type system that exhibits four distinct behavioural modes depending on a few morphological parameters. Using just 49 experimental observations, the method discovered a set of candidate functions that distinguish behaviours with an error of 2.04%, while also aiding insight into the physical phenomena driving each behaviour. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
5. AI in teacher education: Unlocking new dimensions in teaching support, inclusive learning, and digital literacy.
- Author
-
Zhang, Jia and Zhang, Zhuo
- Subjects
- *
TEACHER education , *DIGITAL technology , *SCHOOL environment , *INTELLECT , *SCALE analysis (Psychology) , *PSYCHOLOGY of teachers , *T-test (Statistics) , *ARTIFICIAL intelligence , *TEACHING methods , *QUANTITATIVE research , *PSYCHOLOGICAL adaptation , *EDUCATIONAL technology , *DESCRIPTIVE statistics , *COMPUTER literacy , *CONCEPTUAL structures , *COLLEGE teacher attitudes , *PROFESSIONAL employee training , *ABILITY , *LEARNING strategies , *SOCIAL support , *TEACHER-student relationships , *STUDENT attitudes , *PSYCHOLOGY of college students , *COMPUTER assisted instruction , *INTERPERSONAL relations , *DATA analysis software , *ALGORITHMS , *TRAINING - Abstract
Background: AI can positively influence teaching by offering support for classroom management, creating inclusive learning environments, enhancing digital skills, personalizing teaching methods, and strengthening teacher‐student relationships. Objectives: This quantitative research study investigates the opportunities, difficulties, and consequences of incorporating AI into teacher education. Methods: Data were collected through structured questionnaires from 202 college students and 68 staff members. The analysis was conducted using SPSS software. Results: The study provides a novel contribution by its thorough investigation of the diverse effects of AI on teacher education. It offers beneficial perspectives on the possible benefits and challenges, illuminating the far‐reaching changes that AI could bring to the terrain of learning, instruction, and teaching methods in the years to come. The research sought to assess the effect of AI adoption in teacher education across five main dimensions: (i) its influence on teaching support and classroom management, (ii) its role in creating inclusive and accessible learning environments, (iii) its contribution to improving teachers' digital literacy and computer skills, and enhancing access to digital teaching resources, (iv) its positive influence on identifying students' learning styles and facilitating the adoption of diverse teaching methods, and (v) its role in strengthening teacher‐student relationships through improved interactions. Conclusion: The findings elucidate the promising opportunities that AI presents in the field of teacher education, along with the obstacles that require resolution for the effective fusion of AI into educational settings.
Lay Description: What is currently known about this topic?: AI has the potential to enhance various aspects of teaching, including classroom management and personalizing teaching methods. Incorporating AI into education has garnered significant interest due to its perceived benefits in improving learning outcomes. What does this paper add?: This paper provides a comprehensive investigation into the effects of AI adoption in teacher education, highlighting both the opportunities and challenges associated with its implementation. It offers insights into how AI can influence different dimensions of teaching, such as classroom management, learning environment inclusivity, and teacher‐student relationships. Implications for practice and/or policy: The findings of this study underscore the importance of integrating AI into teacher education programs to leverage its potential benefits in enhancing teaching practices. Policymakers and educators should consider the implications of AI adoption in education and develop strategies to address challenges while maximizing the advantages of AI technologies in teaching and learning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Systematic literature review of AI algorithms applied to unmanned aerial vehicle images.
- Author
-
Khadir, Kenza Ait El, Fadil, Abdelhamid, and El Brirchi, El Hassan
- Subjects
- *
ARTIFICIAL intelligence , *ALGORITHMS , *IMAGE processing , *OBJECT recognition (Computer vision) , *RESEARCH questions - Abstract
Artificial Intelligence (AI) combined with image processing has shown significant improvements through new techniques such as Machine Learning (ML) models. This paper introduces the key methods and algorithms used for drone image processing. We discuss the benefits and limitations of using ML models instead of classical techniques. Our goal is to classify, categorize and describe the methods that are used in realistic settings of diverse domains of applications. We conducted a systematic literature review where systems presented in the papers were analysed based on their domain, task, technology, and efficiency. By extensively reviewing the existing literature, we successfully identified key themes and trends that emerged across the various research questions. The overall findings of the research emphasise the potential of AI and drone imagery in numerous fields. However, the review also uncovered several challenges that necessitate attention, such as issues related to data quality and the requirement for more advanced AI algorithms. The paper outlines significant innovations in the field and offers recommendations for future research directions. By highlighting cross-disciplinary insights, it delves into methodological approaches, exploring commonalities in AI algorithms and UAV technologies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Strategies to improve fairness in artificial intelligence: A systematic literature review.
- Author
-
Trigo, António, Stein, Nubia, and Paulo Belfo, Fernando
- Subjects
- *
ARTIFICIAL intelligence , *LITERATURE reviews , *PREJUDICES , *ALGORITHMS , *DISCRIMINATION (Sociology) - Abstract
Decisions based on artificial intelligence can reproduce biases or prejudices present in biased historical data and poorly formulated systems, presenting serious social consequences for underrepresented groups of individuals. This paper presents a systematic literature review of technical, feasible, and practicable solutions to improve fairness in artificial intelligence classified according to different perspectives: fairness metrics, moment of intervention (pre-processing, processing, or post-processing), research area, datasets, and algorithms used in the research. The main contribution of this paper is to establish common ground regarding the techniques to be used to improve fairness in artificial intelligence, defined as the absence of bias or discrimination in the decisions made by artificial intelligence systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
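As a concrete example of the fairness metrics the review classifies, one widely used measure is the demographic parity difference: the gap in positive-prediction rates between protected groups. A minimal sketch with invented predictions and group labels:

```python
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, same length
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, grp in zip(predictions, groups):
        totals[grp] += 1
        positives[grp] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Invented example: group "a" approved 3/4, group "b" approved 1/4.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value of 0 indicates parity; pre-, in-, or post-processing interventions of the kind the review catalogues aim to shrink this gap without sacrificing accuracy.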
8. Applying Machine Learning in Marketing: An Analysis Using the NMF and k-Means Algorithms.
- Author
-
Gallego, Victor, Lingan, Jessica, Freixes, Alfons, Juan, Angel A., and Osorio, Celia
- Subjects
- *
K-means clustering , *MACHINE learning , *ARTIFICIAL intelligence , *ADVERTISING effectiveness , *DATABASES - Abstract
The integration of machine learning (ML) techniques into marketing strategies has become increasingly relevant in modern business. Utilizing scientific manuscripts indexed in the Scopus database, this article explores how this integration is being carried out. Initially, a focused search is undertaken for academic articles containing both the terms "machine learning" and "marketing" in their titles, which yields a pool of papers. These papers have been processed using the Supabase platform. The process has included steps like text refinement and feature extraction. In addition, our study uses two key ML methodologies: topic modeling through NMF and a comparative analysis utilizing the k-means clustering algorithm. Through this analysis, three distinct clusters emerged, thus clarifying how ML techniques are influencing marketing strategies, from enhancing customer segmentation practices to optimizing the effectiveness of advertising campaigns. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
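The clustering step described above can be sketched with a minimal pure-Python k-means; the two-dimensional "topic weight" vectors stand in for NMF outputs and are invented for illustration (the study itself used library implementations on Scopus-derived features).

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal Lloyd's k-means; returns a cluster index for each point."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # initial centroids drawn from the data
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by Euclidean distance.
        assign = [min(range(k), key=lambda c: math.dist(p, centroids[c]))
                  for p in points]
        # Update step: each centroid moves to the mean of its members.
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    return assign

# Invented two-topic NMF weights for six papers: three lean to topic 1,
# three to topic 2.
docs = [(0.9, 0.1), (0.8, 0.2), (0.85, 0.1),
        (0.1, 0.9), (0.2, 0.8), (0.15, 0.95)]
labels = kmeans(docs, 2)
print(labels[0] == labels[1] == labels[2], labels[3] == labels[4] == labels[5])  # True True
```

With well-separated topic weights like these, the two clusters recover the topic split regardless of which points seed the centroids.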
9. Canadian Association of Radiologists White Paper on Ethical and Legal Issues Related to Artificial Intelligence in Radiology.
- Author
-
Jaremko, Jacob L., Azar, Marleine, Bromwich, Rebecca, Lum, Andrea, Alicia Cheong, Li Hsia, Gibert, Martin, Laviolette, François, Gray, Bruce, Reinhold, Caroline, Cicero, Mark, Chong, Jaron, Shaw, James, Rybicki, Frank J., Hurrell, Casey, Lee, Emil, and Tang, An
- Subjects
- *
ARTIFICIAL intelligence laws , *ACQUISITION of property , *ALGORITHMS , *ARTIFICIAL intelligence , *AUTONOMY (Psychology) , *CONCEPTUAL structures , *MEDICAL ethics , *MEDICAL practice , *MEDICAL specialties & specialists , *PRIVACY , *RADIOLOGISTS , *DATA security - Abstract
Artificial intelligence (AI) software that analyzes medical images is becoming increasingly prevalent. Unlike earlier generations of AI software, which relied on expert knowledge to identify imaging features, machine learning approaches automatically learn to recognize these features. However, the promise of accurate personalized medicine can only be fulfilled with access to large quantities of medical data from patients. This data could be used for purposes such as predicting disease, diagnosis, treatment optimization, and prognostication. Radiology is positioned to lead development and implementation of AI algorithms and to manage the associated ethical and legal challenges. This white paper from the Canadian Association of Radiologists provides a framework for study of the legal and ethical issues related to AI in medical imaging, related to patient data (privacy, confidentiality, ownership, and sharing); algorithms (levels of autonomy, liability, and jurisprudence); practice (best practices and current legal framework); and finally, opportunities in AI from the perspective of a universal health care system. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
10. Taming Algorithmic Priority Inversion in Mission-Critical Perception Pipelines.
- Author
-
Liu, Shengzhong, Yao, Shuochao, Fu, Xinzhe, Tabish, Rohan, Yu, Simon, Bansal, Ayoosh, Yun, Heechul, Sha, Lui, and Abdelzaher, Tarek
- Subjects
- *
ALGORITHMS , *SYSTEMS design , *CYBER physical systems , *COMPUTER scheduling , *ARTIFICIAL intelligence , *ARTIFICIAL neural networks , *FIRST in, first out (Queuing theory) - Abstract
The paper discusses algorithmic priority inversion in mission-critical machine inference pipelines used in modern neural-network-based perception subsystems and describes a solution to mitigate its effect. In general, priority inversion occurs in computing systems when computations that are "less important" are performed together with or ahead of those that are "more important." Significant priority inversion occurs in existing machine inference pipelines when they do not differentiate between critical and less critical data. We describe a framework to resolve this problem and demonstrate that it improves a perception system's ability to react to critical inputs, while at the same time reducing platform cost. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
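The inversion the paper describes can be illustrated with a toy scheduling sketch: a FIFO queue serves critical perception inputs behind unimportant ones, while a criticality-aware priority queue does not. The labels and priority values are invented for illustration; this is not the authors' framework.

```python
import heapq
from collections import deque

# Toy frame regions: (criticality, label); 0 = most critical. Invented data.
frames = [(2, "background"), (0, "pedestrian"), (1, "vehicle"), (2, "sky")]

def fifo_order(frames):
    """Baseline pipeline: regions served strictly in arrival order, so a
    critical region can wait behind unimportant ones (priority inversion)."""
    q = deque(frames)
    return [q.popleft()[1] for _ in range(len(q))]

def critical_first_order(frames):
    """Criticality-aware scheduling: a min-heap serves the most critical
    region first; the arrival index breaks ties, keeping FIFO among equals."""
    heap = [(crit, i, label) for i, (crit, label) in enumerate(frames)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

print(fifo_order(frames))            # ['background', 'pedestrian', 'vehicle', 'sky']
print(critical_first_order(frames))  # ['pedestrian', 'vehicle', 'background', 'sky']
```

In the FIFO baseline the pedestrian waits behind background regions; the priority queue removes that wait, which is the effect the paper's framework targets at pipeline scale.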
11. The Indian approach to Artificial Intelligence: an analysis of policy discussions, constitutional values, and regulation.
- Author
-
Biju, P. R. and Gayathri, O.
- Subjects
- *
DIGITAL technology , *GOVERNMENT policy , *ARTIFICIAL intelligence , *MONETARY incentives , *SOCIAL problems - Abstract
India has produced several drafts of data policies. In this work, they are referred to as [1] JBNSCR 2018, [2] DPDPR 2018, [3] NSAI 2018, [4] RAITF 2018, [5] PDPB 2019, [6] PRAI 2021, [7] JPCR 2021, [8] IDAUP 2022, and [9] IDABNUP 2022. All of them consider Artificial Intelligence (AI) a solver of social problems at the societal level, not merely an incentive for economic growth. However, these policy drafts warn of the social disruptions caused by algorithms and encourage the careful use of computational technologies in various social contexts. Hence, the emerging data society and its implications in India's social contexts demand immense social-science attention, which remains insufficient in the policy drafts, primarily because they are creations of industry stakeholders, technocrats, bureaucrats, and experts from tech schools. In the larger social milieu of emerging digital infrastructure, the fundamental question is whether India's national philosophy, envisioned in the Indian constitution, is reflected in the policy papers. The paper enquires whether the national data policy upholds the core values dispersed through the philosophy of the Indian constitution, which, among other things, is not confined only to inclusion, diversity, rights, liberty, justice and equality. By focusing on constitutional values, the paper seeks to offer a broader and more critical understanding of India's approach to AI policy by bringing together analyses of a wide array of policy documents available in the public realm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Don't Fear the Artificial Intelligence: A Systematic Review of Machine Learning for Prostate Cancer Detection in Pathology.
- Author
-
Frewing, Aaryn, Gibson, Alexander B., Robertson, Richard, Urie, Paul M., and Della Corte, Dennis
- Subjects
- *
FEAR , *ARTIFICIAL intelligence , *DIGITAL diagnostic imaging , *PROSTATE tumors , *TUMOR grading , *DIAGNOSTIC errors , *LEARNING strategies , *ALGORITHMS ,RESEARCH evaluation - Abstract
Context.--Automated prostate cancer detection using machine learning technology has led to speculation that pathologists will soon be replaced by algorithms. This review covers the development of machine learning algorithms and their reported effectiveness specific to prostate cancer detection and Gleason grading. Objective.--To examine current algorithms regarding their accuracy and classification abilities. We provide a general explanation of the technology and how it is being used in clinical practice. The challenges to the application of machine learning algorithms in clinical practice are also discussed. Data Sources.--The literature for this review was identified and collected using a systematic search. Criteria were established prior to the sorting process to effectively direct the selection of studies. A 4-point system was implemented to rank the papers according to their relevance. For papers accepted as relevant to our metrics, all cited and citing studies were also reviewed. Studies were then categorized based on whether they implemented binary or multiclass classification methods. Data were extracted from papers that contained accuracy, area under the curve (AUC), or κ values in the context of prostate cancer detection. The results were visually summarized to present accuracy trends between classification abilities. Conclusions.--It is more difficult to achieve high accuracy metrics for multiclass classification tasks than for binary tasks. The clinical implementation of an algorithm that can assign a Gleason grade to clinical whole slide images (WSIs) remains elusive. Machine learning technology is currently not able to replace pathologists but can serve as an important safeguard against misdiagnosis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
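One of the agreement statistics the review extracts, Cohen's kappa, is computed directly from paired labels as kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is chance agreement. A minimal sketch with invented slide-level labels:

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: agreement between two raters corrected for chance.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the agreement expected from each rater's label frequencies.
    """
    n = len(y_true)
    labels = set(y_true) | set(y_pred)
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    p_e = sum((y_true.count(l) / n) * (y_pred.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Invented example: pathologist vs algorithm on 10 slides (0 = benign, 1 = cancer).
truth = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
preds = [0, 0, 0, 0, 1, 1, 1, 1, 1, 0]
print(round(cohens_kappa(truth, preds), 2))  # 0.6
```

Here raw agreement is 80%, but because a coin-flip rater with these label frequencies would agree 50% of the time, the chance-corrected kappa is only 0.6 — which is why the review treats kappa and raw accuracy as distinct metrics.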
13. Piquing artificial intelligence towards drug discovery: Tools, techniques, and applications.
- Author
-
Agu, Peter Chinedu and Obulose, Chidiebere Nwiboko
- Subjects
- *
DRUG discovery , *ARTIFICIAL intelligence , *DRUG design , *DRUG development , *NANOMEDICINE , *DRUG toxicity - Abstract
The purpose of this study was to discuss how artificial intelligence (AI) methods have affected the field of drug development. It looks at how AI models and data resources are reshaping the drug development process by offering more affordable and expedient options to conventional approaches. The paper opens with an overview of well‐known information sources for drug development. The discussion then moves on to molecular representation techniques that make it possible to convert data into representations that computers can understand. The paper also gives a general overview of the algorithms used in the creation of drug discovery models based on AI. In particular, the paper looks at how AI algorithms might be used to forecast drug toxicity, drug bioactivity, and drug physicochemical properties. De novo drug design, binding affinity prediction, and other AI‐based models for drug–target interaction were covered in deeper detail. Modern applications of AI in nanomedicine design and pharmacological synergism/antagonism prediction were also covered. The potential advantages of AI in drug development are highlighted as the evaluation comes to a close. It underlines how AI may greatly speed up and improve the efficiency of drug discovery, resulting in the creation of new and better medicines. To fully realize the promise of AI in drug discovery, the review acknowledges the difficulties that come with its uses in this field and advocates for more study and development. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Comparison of A* algorithm with hierarchical pathfinding A* algorithm in 3D maze runner game.
- Author
-
Anwar, Yusuf and Thamrin, Husni
- Subjects
- *
ALGORITHMS , *ARTIFICIAL intelligence , *MAZE tests , *MAZE puzzles , *PROGRAMMING languages - Abstract
Artificial Intelligence (AI) is an essential component of modern games. With AI, players can feel the challenges in the game, and the game feels more real. AI has several branches, one of which is pathfinding: finding the shortest path between two points. The main problem in pathfinding is how to find a path accurately while requiring fewer computational resources (CPU and memory). This paper describes the results of research that tested the A* and Hierarchical Pathfinding A* algorithms using the Unity 3D platform and the C# programming language. The graph, or search space, used is a graph with 8 branch nodes, and the benchmarks are the generated path and the processing time of each algorithm. The paper concludes that the path generated by the A* algorithm is shorter than that of the Hierarchical Pathfinding A* algorithm, while the number of paths processed by the A* algorithm is greater. The total execution time of the A* algorithm is smaller than that of the Hierarchical Pathfinding A* algorithm. The Hierarchical Pathfinding A* algorithm experiences spikes in execution time more often than the A* algorithm; in total spike time, however, the hierarchical algorithm is lower than A*. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
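For reference, plain A* on a small grid can be sketched in a few lines. This is a generic textbook version with a Manhattan heuristic, not the Unity/C# implementation benchmarked in the paper:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid; 0 = free cell, 1 = wall.
    Returns the shortest path as a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, node, path)
    seen = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                if nxt not in seen:
                    heapq.heappush(open_set,
                                   (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

maze = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = astar(maze, (0, 0), (0, 2))
print(len(path) - 1)  # 6 moves around the wall
```

Hierarchical Pathfinding A* (HPA*) layers an abstraction on top of this: it partitions the map into clusters, searches an abstract graph of cluster entrances first, then refines locally, trading path optimality for cheaper searches on large maps — which matches the paper's finding that HPA* paths are slightly longer.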
15. Building Human-Like Artificial Agents: A General Cognitive Algorithm for Emulating Human Decision-Making in Dynamic Environments.
- Author
-
Gonzalez, Cleotilde
- Subjects
- *
RESEARCH funding , *ARTIFICIAL intelligence , *DECISION making , *LEARNING , *DESCRIPTIVE statistics , *DECISION trees , *JUDGMENT (Psychology) , *COGNITION , *ALGORITHMS - Abstract
One of the early goals of artificial intelligence (AI) was to create algorithms that exhibited behavior indistinguishable from human behavior (i.e., human-like behavior). Today, AI has diverged, often aiming to excel in tasks inspired by human capabilities and outperform humans, rather than replicating human cognition and action. In this paper, I explore the overarching question of whether computational algorithms have achieved this initial goal of AI. I focus on dynamic decision-making, approaching the question from the perspective of computational cognitive science. I present a general cognitive algorithm that intends to emulate human decision-making in dynamic environments, as defined in instance-based learning theory (IBLT). I use the cognitive steps proposed in IBLT to organize and discuss current evidence that supports some of the human-likeness of the decision-making mechanisms. I also highlight the significant gaps in research that are required to improve current models and to create higher fidelity in computational algorithms to represent human decision processes. I conclude with concrete steps toward advancing the construction of algorithms that exhibit human-like behavior with the ultimate goal of supporting human dynamic decision-making. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. DM–AHR : A Self-Supervised Conditional Diffusion Model for AI-Generated Hairless Imaging for Enhanced Skin Diagnosis Applications.
- Author
-
Benjdira, Bilel, M. Ali, Anas, Koubaa, Anis, Ammar, Adel, and Boulila, Wadii
- Subjects
- *
SKIN diseases , *MEDICAL technology , *HAIR removal , *RESEARCH funding , *DIAGNOSTIC imaging , *ARTIFICIAL intelligence , *DESCRIPTIVE statistics , *DATA analysis software , *ALGORITHMS - Abstract
Simple Summary: Skin diseases can be serious, and early detection is key to effective treatment. Unfortunately, the quality of images used to diagnose these diseases often suffers due to interference from hair, making accurate diagnosis challenging. This research introduces a novel technology, the DM–AHR, a self-supervised conditional diffusion model designed specifically to generate clear, hairless images for better skin disease diagnosis. Our work not only presents a new, advanced model that expertly identifies and removes hair from dermoscopic images but also introduces a specialized dataset, DERMAHAIR, to further research and improve diagnostic processes. The enhancements in image quality provided by DM–AHR significantly improve the accuracy of skin disease diagnoses, and it promises to be a valuable tool in medical imaging. Accurate skin diagnosis through end-user applications is important for early detection and cure of severe skin diseases. However, the low quality of dermoscopic images hampers this mission, especially with the presence of hair on these kinds of images. This paper introduces DM–AHR, a novel, self-supervised conditional diffusion model designed specifically for the automatic generation of hairless dermoscopic images to improve the quality of skin diagnosis applications. The current research contributes in three significant ways to the field of dermatologic imaging. First, we develop a customized diffusion model that adeptly differentiates between hair and skin features. Second, we pioneer a novel self-supervised learning strategy that is specifically tailored to optimize performance for hairless imaging. Third, we introduce a new dataset, named DERMAHAIR (DERMatologic Automatic HAIR Removal Dataset), that is designed to advance and benchmark research in this specialized domain. These contributions significantly enhance the clarity of dermoscopic images, improving the accuracy of skin diagnosis procedures. 
We elaborate on the architecture of DM–AHR and demonstrate its effective performance in removing hair while preserving critical details of skin lesions. Our results show an enhancement in the accuracy of skin lesion analysis when compared to existing techniques. Given its robust performance, DM–AHR holds considerable promise for broader application in medical image enhancement. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Artificial intelligence assisted IoT-fog based framework for emergency fire response in smart buildings.
- Author
-
Saini, Munish, Sengupta, Eshan, and Thakur, Suraaj
- Subjects
- *
FIRE management , *ARTIFICIAL intelligence , *EMERGENCY management , *INTERNET of things , *FLOOR plans , *ALGORITHMS - Abstract
Anthropogenic hazards are an unrelenting threat to lives and property, with human irresponsibility emerging as a leading source of urban as well as industrial fires. The complexity of urban structures and crowded layouts make these kinds of fires more lethal. This paper presents an Artificial Intelligence (AI) based framework designed for smart buildings as a solution to the devastating obstacles caused by fire crises. Our system creates a 3D model of the building using floor plans and the A* algorithm for escape route identification. The proposed framework includes a YOLO-based smart monitoring system for the identification and counting of people caught in a fire, with the ability to distinguish between conscious and unconscious persons. The proposed system informs inhabitants in the case of a fire and directs them to the closest exit for a safe evacuation. Moreover, fire and rescue officials receive real-time information on affected persons, such as the number and location of adults and children who are conscious and unconscious. Perhaps most significantly, the suggested framework performs exceptionally well, scoring 96% for precision and 98% for recall in the detection of fire and humans. These findings highlight the effectiveness of the model in locating people within infrastructures affected by fire. The framework considerably outperforms the most advanced algorithms in terms of speed and efficiency for shortest path detection, greatly improving the ability of fire rescue teams to quickly find and aid residents who are trapped in a fire. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
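The precision and recall figures quoted above follow from the standard definitions over true positives, false positives, and false negatives; a minimal sketch, with count values invented to reproduce the reported 96%/98%:

```python
def precision_recall(tp, fp, fn):
    """precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Invented counts chosen to illustrate the reported figures:
# 96 correct detections, 4 false alarms, 2 missed persons.
p, r = precision_recall(96, 4, 2)
print(round(p, 2), round(r, 2))  # 0.96 0.98
```

In an evacuation setting the two errors are not symmetric: a false alarm (lower precision) wastes responder time, while a missed person (lower recall) can cost a life, which is why both metrics are reported.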
18. A checklist for reporting, reading and evaluating Artificial Intelligence Technology Enhanced Learning (AITEL) research in medical education.
- Author
-
Masters, Ken and Salcedo, Daniel
- Subjects
- *
READING , *PUBLIC health laws , *MEDICAL education , *ARTIFICIAL intelligence , *TECHNOLOGY , *MEDICAL research , *COMPUTER assisted instruction , *LEARNING strategies , *QUALITY assurance , *ALGORITHMS - Abstract
Advances in Artificial Intelligence (AI) have led to AI systems' being used increasingly in medical education research. Current methods of reporting on the research, however, tend to follow patterns of describing an intervention and reporting on results, with little description of the AI in the system, or the many concerns about the use of AI. In essence, the readers do not actually know anything about the system itself. This paper proposes a checklist for reporting on AI systems, and covers the initial protocols and scoping, modelling and code, algorithm design, training data, testing and validation, usage, comparisons, real-world requirements, results and limitations, and ethical considerations. The aim is to have a systematic reporting process so that readers can have a comprehensive understanding of the AI system that was used in the research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Differential Evolution Algorithm with Three Mutation Operators for Global Optimization.
- Author
-
Wang, Xuming and Yu, Xiaobing
- Subjects
- *
EVOLUTIONARY algorithms , *ARTIFICIAL intelligence , *GLOBAL optimization , *ALGORITHMS , *DIFFERENTIAL evolution , *BENCHES - Abstract
Differential evolution algorithm is a very powerful and recently proposed evolutionary algorithm. Generally, only a single mutation operator and predefined parameter values of the differential evolution algorithm are utilized to solve various optimization problems, which limits the performance of the algorithm. In this paper, six commonly used mutation operators are divided into three categories according to their own features. A mutation pool is established based on the three categories, and a parameter pool with three predefined values is designed. During evolution, three mutation operators are randomly chosen from the three categories, and three parameter values are also randomly selected from the parameter pool. The three groups of mutation operators and parameter values are employed to produce trial vectors. The proposed algorithm makes good use of different mutation operators. Three recently proposed differential evolution variants and three non-differential-evolution algorithms are used to make comparisons on 29 benchmark functions from the CEC suite. The experimental results have demonstrated that the proposed algorithm is very competitive. The proposed algorithm is also applied to three real-world applications, with superior results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
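The mutation-pool and parameter-pool scheme described in the abstract above can be sketched in a few lines. This is a minimal illustrative implementation, not the authors' code: the three operator categories (DE/rand/1, DE/best/1, DE/current-to-best/1), the three F values, and the sphere test function are all assumptions.

```python
import random

def rand_1(pop, i, F):
    # DE/rand/1: v = x_r1 + F * (x_r2 - x_r3)
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    return [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(len(pop[i]))]

def best_1(pop, i, F, best):
    # DE/best/1: v = x_best + F * (x_r1 - x_r2)
    r1, r2 = random.sample([j for j in range(len(pop)) if j != i], 2)
    return [best[d] + F * (pop[r1][d] - pop[r2][d]) for d in range(len(best))]

def cur_to_best_1(pop, i, F, best):
    # DE/current-to-best/1: v = x_i + F*(x_best - x_i) + F*(x_r1 - x_r2)
    r1, r2 = random.sample([j for j in range(len(pop)) if j != i], 2)
    return [pop[i][d] + F * (best[d] - pop[i][d]) + F * (pop[r1][d] - pop[r2][d])
            for d in range(len(best))]

def sphere(x):
    return sum(v * v for v in x)

def de_with_pools(dim=5, n=20, gens=100, cr=0.9, seed=1):
    random.seed(seed)
    f_pool = [0.5, 0.8, 1.0]                      # parameter pool
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    fit = [sphere(x) for x in pop]
    for _ in range(gens):
        best = pop[min(range(n), key=lambda j: fit[j])]
        for i in range(n):
            F = random.choice(f_pool)             # random parameter value
            op = random.randrange(3)              # random mutation category
            if op == 0:
                v = rand_1(pop, i, F)
            elif op == 1:
                v = best_1(pop, i, F, best)
            else:
                v = cur_to_best_1(pop, i, F, best)
            # binomial crossover, then greedy selection
            jr = random.randrange(dim)
            u = [v[d] if (random.random() < cr or d == jr) else pop[i][d]
                 for d in range(dim)]
            fu = sphere(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    return min(fit)

print(de_with_pools())  # best fitness; should approach 0 for the sphere function
```

On this convex toy function the random choice of operator and F per individual converges quickly; the paper's claim is that the same diversity helps on harder multimodal benchmarks.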
20. Automated government form filling for aged and monolingual people using interactive tool.
- Author
-
Hegde, Adarsh R., Sujala Reddy, R. S., Kruthika, P., Pragathi, B. C., Sai Lahari, Sreerama, Deepamala, N., and Shobha, G.
- Subjects
- *
AUTOMATIC speech recognition , *CONVERSATION , *ARTIFICIAL intelligence , *DESCRIPTIVE statistics , *MULTILINGUALISM , *GOVERNMENT programs , *COMMUNICATION devices for people with disabilities , *COMMUNICATION , *AUTOMATION , *ALGORITHMS - Abstract
The Government of India offers various schemes for various classes of citizens. Most scheme application forms are in English, and monolingual individuals find it difficult to access and fill them. This paper addresses the challenges faced by monolingual individuals in India, particularly the elderly, people with impairments, and those from marginalized communities. The proposed work creates an interactive system called the "Dhvani" voicebot, specifically designed for the Kannada language. It helps users identify suitable government schemes and fills forms in English. The proposed system is developed using the RASA chatbot framework and NLP techniques to comprehend user utterances. RNN and SVM algorithms are employed to ensure smooth conversation flow and interaction with users. To enhance scheme-suggestion accuracy, a knowledge graph is created containing relevant data on government schemes. The intent classification model achieves an accuracy of 97%, indicating its ability to accurately understand user intentions. The integration of the knowledge graph improves the accuracy of scheme identification and suggestion to users. The system automates the process of filling out government scheme forms based on user inputs. The Dhvani voicebot presents a practical solution to the challenges faced by monolingual individuals in accessing government schemes in India. The high accuracy of intent classification and the use of a knowledge graph contribute to the system's effectiveness, and the study suggests that the system can be extended to other languages. The automated "Dhvani" tool will help aged, illiterate, and physically challenged persons fill forms in post offices and banks, through which most schemes, pension funds, cash withdrawals, and cash deposits are handled; the tool thus makes the process easier for these persons without requiring the help of others.
An intent recognition and interactive tool was developed in Kannada, a language widely spoken in Karnataka, India, for which digital resources are very sparse. Technologies such as the interactive tool, a knowledge graph, RNNs, and SVMs are used in the development of the tool. Interactive government scheme recommendation lets users choose a suitable scheme faster. The form is filled automatically and can be edited to rectify mistakes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. The HHL algorithm: Implementation and research directions.
- Author
-
Sambhaje, Varsha and Chaurasia, Anju
- Subjects
- *
RESEARCH implementation , *ALGORITHMS , *QUANTUM computing , *LINEAR systems , *ARTIFICIAL intelligence , *MACHINE learning , *QUANTUM computers - Abstract
Linear systems of equations lie at the heart of numerous scientific and engineering challenges. In cutting-edge arenas such as artificial intelligence, machine learning, and neuro-computation, these systems serve as a fundamental tool for mathematical modeling. Classical algorithms for solving linear systems have been extensively developed and form the backbone of diverse applications across various scientific disciplines, but they often become computationally prohibitive as data complexity increases. The emerging field of quantum computing offers a revolutionary approach to such problems. The Harrow–Hassidim–Lloyd (HHL) algorithm tackles these challenges and opens new avenues for research. This study examines the contemporary effectiveness of the HHL algorithm for solving systems of linear equations. By examining recent research in quantum machine learning, we aim to assess the HHL algorithm's potential to revolutionize the optimization of hyperparameters for machine learning models, resulting in increased efficiency and cost savings. This paper analyzes the HHL algorithm and its evolution from conception to the latest advancements, and investigates the potential challenges and limitations that might hinder its practical deployment. Identifying these roadblocks will pave the way for future research and development efforts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Economic Dispatch Optimization Strategies and Problem Formulation: A Comprehensive Review.
- Author
-
Marzbani, Fatemeh and Abdelfatah, Akmal
- Subjects
- *
EVIDENCE gaps , *MATHEMATICAL optimization , *COMPUTER performance , *ENERGY management , *ALGORITHMS - Abstract
Economic Dispatch Problems (EDP) refer to the process of determining the power output of generation units such that the electricity demand of the system is satisfied at a minimum cost while technical and operational constraints of the system are satisfied. This procedure is vital in the efficient energy management of electricity networks since it can ensure the reliable and efficient operation of power systems. As power systems transition from conventional to modern ones, new components and constraints are introduced to power systems, making the EDP increasingly complex. This highlights the importance of developing advanced optimization techniques that can efficiently handle these new complexities to ensure optimal operation and cost-effectiveness of power systems. This review paper provides a comprehensive exploration of the EDP, encompassing its mathematical formulation and the examination of commonly used problem formulation techniques, including single and multi-objective optimization methods. It also explores the progression of paradigms in economic dispatch, tracing the journey from traditional methods to contemporary strategies in power system management. The paper categorizes the commonly utilized techniques for solving EDP into four groups: conventional mathematical approaches, uncertainty modelling methods, artificial intelligence-driven techniques, and hybrid algorithms. It identifies critical research gaps, a predominant focus on single-case studies that limit the generalizability of findings, and the challenge of comparing research due to arbitrary system choices and formulation variations. The present paper calls for the implementation of standardized evaluation criteria and the inclusion of a diverse range of case studies to enhance the practicality of optimization techniques in the field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
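For readers unfamiliar with the classical formulation this review surveys, a minimal lambda-iteration sketch of economic dispatch with quadratic cost curves (and no generator limits) looks like the following; the cost coefficients and demand below are hypothetical. Each unit's output satisfies the equal-incremental-cost condition b_i + 2*c_i*P_i = lambda, and bisection on lambda enforces the demand balance.

```python
def dispatch(units, demand, tol=1e-9):
    """units: list of (b, c) quadratic cost coefficients; returns per-unit outputs."""
    lo, hi = 0.0, 1000.0
    while hi - lo > tol:
        lam = (lo + hi) / 2
        # equal incremental cost: P_i = (lambda - b_i) / (2 * c_i)
        p = [(lam - b) / (2 * c) for b, c in units]
        if sum(p) < demand:
            lo = lam          # marginal cost too low to cover demand
        else:
            hi = lam
    return p

units = [(2.0, 0.010), (3.0, 0.015), (4.0, 0.020)]   # hypothetical (b, c) per unit
p = dispatch(units, demand=300.0)
print(p, sum(p))  # outputs sum to the demand; cheaper units carry more load
```

Modern formulations reviewed in the paper add the constraints this sketch omits: generator limits, ramp rates, valve-point effects, and renewable uncertainty, which is what motivates the AI-driven and hybrid solvers.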
23. Improved adaptive-phase fuzzy high utility pattern mining algorithm based on tree-list structure for intelligent decision systems.
- Author
-
Chen, Jing, Liu, Aijun, Zhang, Hongjun, Yang, Shengyi, Zheng, Hui, Zhou, Ning, and Li, Peng
- Subjects
- *
ARTIFICIAL intelligence , *SMART structures , *ALGORITHMS , *DATA mining , *BIG data - Abstract
With the rapid development of AI and big data mining technologies, computerized medical decision-making has become increasingly prominent. The aim of high-utility pattern mining (HUPM) is to discover meaningful patterns in medical databases that maximize utility from the perspective of diagnosis. However, HUPM pays little attention to the interpretability and explainability of these patterns in medical decision-making scenarios. This paper proposes a novel algorithm called Improved Fuzzy High-Utility Pattern Mining (IF-HUPM) to address this problem. First, the paper applies a fuzzy preprocessing method to divide a medical quantitative data set into fuzzy intervals, which enhances the fuzziness and interpretability of the data. Next, in the IF-HUPM process, both fuzzy tree and list structures are employed to calculate fuzzy high-utility values. By combining the characteristics of the one-stage and two-stage HUPM algorithms, an adaptive-phase fuzzy HUPM hybrid framework is proposed. The experimental results demonstrate that the proposed IF-HUPM algorithm enhances both accuracy and efficiency, and that the mining process requires less time and space on average. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
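The fuzzy preprocessing step, dividing a quantitative attribute into fuzzy intervals, can be illustrated with triangular membership functions. This is a generic sketch, not the IF-HUPM preprocessing itself; the breakpoints and set names (Low/Medium/High) are assumptions.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify(value, lo=0.0, hi=10.0):
    """Map a quantitative value to membership degrees in three fuzzy sets."""
    mid = (lo + hi) / 2
    return {
        "Low": triangular(value, lo - (mid - lo), lo, mid),
        "Medium": triangular(value, lo, mid, hi),
        "High": triangular(value, mid, hi, hi + (hi - mid)),
    }

m = fuzzify(3.0)
print(m)  # 3.0 is partly Low (0.4), partly Medium (0.6), not High
```

With overlapping sets like these, each record contributes fractional "fuzzy utility" to several patterns, which is what makes the mined patterns easier to interpret than crisp interval cuts.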
24. Disparities in Breast Cancer Diagnostics: How Radiologists Can Level the Inequalities.
- Author
-
Pesapane, Filippo, Tantrige, Priyan, Rotili, Anna, Nicosia, Luca, Penco, Silvia, Bozzini, Anna Carla, Raimondi, Sara, Corso, Giovanni, Grasso, Roberto, Pravettoni, Gabriella, Gandini, Sara, and Cassano, Enrico
- Subjects
- *
BREAST tumor diagnosis , *OCCUPATIONAL roles , *HEALTH policy , *DIVERSITY & inclusion policies , *EQUALITY , *HEALTH services accessibility , *MINORITIES , *GENDER affirming care , *TELERADIOLOGY , *ARTIFICIAL intelligence , *RADIATION , *DIAGNOSTIC imaging , *LABOR supply , *CULTURAL competence , *HEALTH , *COMMUNICATION , *HEALTH equity , *PHYSICIANS , *ALGORITHMS - Abstract
Simple Summary: This paper delves into the persistent issue of unequal access to medical imaging, with a particular focus on breast cancer screening and its impact on marginalized communities and racial/ethnic minorities. Central to our discussion is the role of scientific mobility among radiologists in fostering healthcare policy changes that promote diversity and cultural competence. We propose various strategies to bridge this gap, including cultural education, sensitivity training, and diversifying the radiology workforce. These measures aim to improve communication with diverse patient groups and reduce healthcare disparities. Additionally, we explore the challenges and advantages of teleradiology as a means to extend medical imaging services to underserved areas. In the context of artificial intelligence, we emphasize the critical need to validate algorithms across diverse populations to ensure unbiased and equitable healthcare outcomes. Overall, this paper underscores the importance of international collaboration in addressing global access barriers, presenting it as a key to mitigating disparities in medical imaging access and contributing to the pursuit of equitable healthcare. Access to medical imaging is pivotal in healthcare, playing a crucial role in the prevention, diagnosis, and management of diseases. However, disparities persist in this scenario, disproportionately affecting marginalized communities, racial and ethnic minorities, and individuals facing linguistic or cultural barriers. This paper critically assesses methods to mitigate these disparities, with a focus on breast cancer screening. We underscore scientific mobility as a vital tool for radiologists to advocate for healthcare policy changes: it not only enhances diversity and cultural competence within the radiology community but also fosters international cooperation and knowledge exchange among healthcare institutions. 
Efforts to ensure cultural competency among radiologists are discussed, including ongoing cultural education, sensitivity training, and workforce diversification. These initiatives are key to improving patient communication and reducing healthcare disparities. This paper also highlights the crucial role of policy changes and legislation in promoting equal access to essential screening services like mammography. We explore the challenges and potential of teleradiology in improving access to medical imaging in remote and underserved areas. In the era of artificial intelligence, this paper emphasizes the necessity of validating its models across a spectrum of populations to prevent bias and achieve equitable healthcare outcomes. Finally, the importance of international collaboration is illustrated, showcasing its role in sharing insights and strategies to overcome global access barriers in medical imaging. Overall, this paper offers a comprehensive overview of the challenges related to disparities in medical imaging access and proposes actionable strategies to address these challenges, aiming for equitable healthcare delivery. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Predicting Money Laundering Using Machine Learning and Artificial Neural Networks Algorithms in Banks.
- Author
-
Lokanan, Mark E.
- Subjects
- *
ARTIFICIAL neural networks , *MONEY laundering , *MACHINE learning , *ALGORITHMS , *RANDOM forest algorithms - Abstract
This paper aims to build a machine learning model and a neural network model to detect the probability of money laundering in banks. The paper's data came from a simulation of actual transactions flagged for money laundering in Middle Eastern banks. The main findings highlight that criminal networks mainly use the integration stage to integrate money into the financial system. Fraudsters prefer to launder funds in the early morning hours, followed by the afternoon intervals of the business day. Additionally, the Naïve Bayes and Random Forest classifiers were identified as the two best-performing models for predicting bank money laundering transactions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
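To illustrate the kind of classifier the paper evaluates, a toy Gaussian Naive Bayes on a single synthetic feature, transaction hour, echoing the finding that laundering clusters in the early morning, can be written in pure Python. The data and all names here are made up; this is not the paper's model or dataset.

```python
import math, random

def fit_gaussian(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs) + 1e-9   # variance (smoothed)
    return m, v

def log_pdf(x, m, v):
    # log of the Gaussian density N(m, v) at x
    return -0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)

def nb_train(hours, labels):
    model = {}
    for cls in (0, 1):
        xs = [h for h, y in zip(hours, labels) if y == cls]
        model[cls] = (fit_gaussian(xs), math.log(len(xs) / len(labels)))
    return model

def nb_predict(model, h):
    # pick the class maximizing log-prior + log-likelihood
    scores = {cls: prior + log_pdf(h, *mv) for cls, (mv, prior) in model.items()}
    return max(scores, key=scores.get)

random.seed(7)
# legitimate transactions centred mid-day, laundering centred ~4 a.m. (synthetic)
hours = [random.gauss(13, 3) for _ in range(500)] + [random.gauss(4, 2) for _ in range(500)]
labels = [0] * 500 + [1] * 500
model = nb_train(hours, labels)
acc = sum(nb_predict(model, h) == y for h, y in zip(hours, labels)) / len(hours)
print(acc)  # the two classes separate well, so accuracy is high
```

A Random Forest, the paper's other top performer, would replace the per-class Gaussians with an ensemble of decision trees over many transaction features.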
26. A review on kidney tumor segmentation and detection using different artificial intelligence algorithms.
- Author
-
Patel, Vinitkumar Vasantbhai and Yadav, Arvind R.
- Subjects
- *
ARTIFICIAL intelligence , *KIDNEY tumors , *ALGORITHMS , *DEEP learning , *DATA warehousing , *MACHINE learning - Abstract
The kidney is one of the most important organs in the human body: it filters the blood, balances fluids, removes waste, and maintains electrolyte and hormone levels. Any disorder or dysfunction of the kidney therefore needs to be detected in time in order to preserve life. Kidney tumor segmentation is a critical task in the medical field, and many conventional methods have been employed for early prediction of kidney abnormalities, but with limitations such as high cost and extended computation and analysis time over huge amounts of data. Because of these problems, the prediction rate and accuracy have been reduced considerably. To overcome these challenges, Artificial Intelligence (AI) technology has penetrated the field of medicine, particularly the renal department. The evolution of AI in kidney therapies improves the diagnostic process through several Machine Learning (ML) and Deep Learning (DL) algorithms, which can learn from massive data and apply that knowledge to differentiate between circumstances. The storage of larger datasets and AI-assisted segmentation are highly helpful for analyzing the occurrence of the disease, and AI algorithms have predicted the severity of tumor stages with good accuracy. Hence, this paper provides a critical review of different AI-based algorithms used in kidney tumor prognostication. Their numerous benefits in segmentation are surveyed from existing works, providing insight into the contribution of AI to kidney disease prediction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Research progress of multi-objective path planning optimization algorithms.
- Author
-
Ding, Ziyan
- Subjects
- *
OPTIMIZATION algorithms , *POTENTIAL field method (Robotics) , *MOBILE robots , *ENVIRONMENTAL mapping , *ARTIFICIAL intelligence , *ALGORITHMS - Abstract
The field of robotics research has developed considerably through the combination of computers and machinery. With the advancement of artificial intelligence, mobile robots are attracting more attention, and path planning is a crucial foundational technology for mobile robots to complete transportation requirements and reach relevant target areas. In an efficiency-conscious society, single-goal planning is gradually failing to meet the needs of enterprises and factories for efficient operation; path planning that can simultaneously find optimal routes and reach many target points is increasingly replacing conventional single-goal path planning. However, more factors must be considered in real, complex environments with various complex road conditions, and for this reason various single or hybrid algorithms are being optimized for this kind of problem. This paper summarizes the main methods of path planning for simulating obstacles and environmental scene maps under various conditions, focusing on several basic algorithms and their hybrids for solving multi-objective path planning problems in global and local path planning, as well as improvements and innovations on the basic algorithms. The paper's primary idea is to divide the multi-path planning process into components and to assign each part a suitable algorithm and model to solve separately, so as to accomplish the task of reaching multiple target points efficiently. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
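The decomposition idea in the abstract above, splitting multi-target planning into sub-problems each solved by a suitable algorithm, can be sketched as pairwise BFS shortest paths on a grid plus a greedy nearest-target visiting order. The grid, targets, and the greedy ordering are illustrative assumptions, not any specific method from the survey.

```python
from collections import deque

GRID = ["....#...",
        ".##.#.#.",
        "....#.#.",
        ".##...#.",
        "........"]    # '#' cells are obstacles (made-up map)

def bfs_dist(start, goal):
    """Shortest 4-connected path length on GRID, or None if unreachable."""
    rows, cols = len(GRID), len(GRID[0])
    seen, q = {start}, deque([(start, 0)])
    while q:
        (r, c), d = q.popleft()
        if (r, c) == goal:
            return d
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and
                    GRID[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                q.append(((nr, nc), d + 1))
    return None

def greedy_tour(start, targets):
    """Visit all targets, always moving to the nearest remaining one."""
    order, total, cur = [], 0, start
    remaining = list(targets)
    while remaining:
        nxt = min(remaining, key=lambda t: bfs_dist(cur, t))
        total += bfs_dist(cur, nxt)
        order.append(nxt)
        remaining.remove(nxt)
        cur = nxt
    return order, total

order, cost = greedy_tour((0, 0), [(4, 7), (0, 7), (4, 0)])
print(order, cost)
```

Each component can then be swapped independently, e.g. A* or an artificial potential field for the local legs, and a metaheuristic for the visiting order, which is exactly the substitution the paper describes.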
28. Scientific papers and artificial intelligence. Brave new world?
- Author
-
Nexøe, Jørgen
- Subjects
- *
COMPUTERS , *MANUSCRIPTS , *ARTIFICIAL intelligence , *MACHINE learning , *DATA analysis , *MEDICAL literature , *MEDICAL research , *ALGORITHMS - Published
- 2023
- Full Text
- View/download PDF
29. Artificial Intelligence-Based Atrial Fibrillation Recognition Method for Motion Artifact-Contaminated Electrocardiogram Signals Preprocessed by Adaptive Filtering Algorithm.
- Author
-
Zhang, Huanqian, Zhao, Hantao, and Guo, Zhang
- Subjects
- *
ARTIFICIAL intelligence , *ADAPTIVE filters , *ARRHYTHMIA , *ELECTROCARDIOGRAPHY , *RECOGNITION (Psychology) , *ATRIAL fibrillation , *ALGORITHMS - Abstract
Atrial fibrillation (AF) is a common arrhythmia, and out-of-hospital, wearable, long-term electrocardiogram (ECG) monitoring can help with the early detection of AF. The presence of a motion artifact (MA) in the ECG can significantly affect the characteristics of the ECG signal and hinder early detection of AF. Studies have shown that (a) using reference signals strongly correlated with MAs in adaptive filtering (ADF) can eliminate MAs from the ECG, and (b) artificial intelligence (AI) algorithms can recognize AF when no MAs are present. However, no literature has reported on whether ADF can improve the accuracy of AI recognition of AF in the presence of MAs. Therefore, this paper investigates the accuracy of AI recognition of AF when MAs are artificially introduced into ECGs and then removed by ADF. In this study, 13 types of MA signals with signal-to-noise ratios ranging from +8 dB to −16 dB were artificially added to the AF ECG dataset. Firstly, the accuracy of AI-based AF recognition was obtained for signals with MAs. Secondly, after the MAs were removed by ADF, the signals were identified again using AI to obtain the accuracy of AF recognition. We found that after ADF, the accuracy of AI recognition of AF improved under all MA intensities, with a maximum improvement of 60%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
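A standard least-mean-squares (LMS) adaptive filter, a common form of the reference-driven ADF described above, can be sketched as follows. The signals are synthetic assumptions: a sine stands in for the ECG, Gaussian noise for the MA reference, and the artifact is a short filtered version of that reference; the tap count and step size are also assumptions.

```python
import math, random

def lms_cancel(primary, reference, taps=8, mu=0.01):
    """Return the error signal e = primary - y, i.e. the cleaned estimate."""
    w = [0.0] * taps
    out = []
    for n in range(len(primary)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))             # filter output
        e = primary[n] - y                                   # cleaned sample
        w = [wk + 2 * mu * e * xk for wk, xk in zip(w, x)]   # LMS weight update
        out.append(e)
    return out

random.seed(0)
N = 2000
clean = [math.sin(2 * math.pi * 1.2 * n / 250) for n in range(N)]  # "ECG" rhythm
ref = [random.gauss(0, 1) for _ in range(N)]                       # MA reference
artifact = [0.8 * ref[n] + 0.4 * (ref[n - 1] if n else 0.0) for n in range(N)]
noisy = [c + a for c, a in zip(clean, artifact)]

cleaned = lms_cancel(noisy, ref)
# mean squared error vs. the clean signal, skipping the adaptation transient
err_before = sum((x - c) ** 2 for x, c in zip(noisy, clean)) / N
err_after = sum((e - c) ** 2 for e, c in zip(cleaned[500:], clean[500:])) / (N - 500)
print(err_before, err_after)  # the residual error drops substantially after ADF
```

Because the reference is uncorrelated with the ECG but strongly correlated with the artifact, the filter converges to subtract only the artifact, which is the property the paper exploits before AI-based AF recognition.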
30. Ensemble Deep Learning-Based Image Classification for Breast Cancer Subtype and Invasiveness Diagnosis from Whole Slide Image Histopathology.
- Author
-
Balasubramanian, Aadhi Aadhavan, Al-Heejawi, Salah Mohammed Awad, Singh, Akarsh, Breggia, Anne, Ahmad, Bilal, Christman, Robert, Ryan, Stephen T., and Amal, Saeed
- Subjects
- *
BREAST tumor diagnosis , *CANCER invasiveness , *TASK performance , *MEDICAL technology , *BIOINDICATORS , *BREAST tumors , *ARTIFICIAL intelligence , *MEDICAL care , *HOSPITALS , *CAUSES of death , *EVALUATION of medical care , *DESCRIPTIVE statistics , *DEEP learning , *COMPUTER-aided diagnosis , *ARTIFICIAL neural networks , *DIGITAL image processing , *ALGORITHMS , *CARCINOMA in situ - Abstract
Simple Summary: Breast cancer is a significant cause of female cancer-related deaths in the US. Checking how severe the cancer is helps in planning treatment. Modern AI methods are good at grading cancer, but they are not used much in hospitals yet. We developed and utilized ensemble deep learning algorithms for addressing the tasks of classifying (1) breast cancer subtype and (2) breast cancer invasiveness from whole slide image (WSI) histopathology slides. The ensemble models used were based on convolutional neural networks (CNNs) known for extracting distinctive features crucial for accurate classification. In this paper, we provide a comprehensive analysis of these models and the used methodology for breast cancer diagnosis tasks. Cancer diagnosis and classification are pivotal for effective patient management and treatment planning. In this study, a comprehensive approach is presented utilizing ensemble deep learning techniques to analyze breast cancer histopathology images. Our datasets were based on two widely employed datasets from different centers for two different tasks: BACH and BreakHis. Within the BACH dataset, a proposed ensemble strategy was employed, incorporating VGG16 and ResNet50 architectures to achieve precise classification of breast cancer histopathology images. Introducing a novel image patching technique to preprocess a high-resolution image facilitated a focused analysis of localized regions of interest. The annotated BACH dataset encompassed 400 WSIs across four distinct classes: Normal, Benign, In Situ Carcinoma, and Invasive Carcinoma. In addition, the proposed ensemble was used on the BreakHis dataset, utilizing VGG16, ResNet34, and ResNet50 models to classify microscopic images into eight distinct categories (four benign and four malignant). For both datasets, a five-fold cross-validation approach was employed for rigorous training and testing. 
Preliminary experimental results indicated a patch classification accuracy of 95.31% (for the BACH dataset) and WSI image classification accuracy of 98.43% (BreakHis). This research significantly contributes to ongoing endeavors in harnessing artificial intelligence to advance breast cancer diagnosis, potentially fostering improved patient outcomes and alleviating healthcare burdens. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Malnutrition risk assessment using a machine learning‐based screening tool: A multicentre retrospective cohort.
- Author
-
Parchure, Prathamesh, Besculides, Melanie, Zhan, Serena, Cheng, Fu‐yuan, Timsina, Prem, Cheertirala, Satya Narayana, Kersch, Ilana, Wilson, Sara, Freeman, Robert, Reich, David, Mazumdar, Madhu, and Kia, Arash
- Subjects
- *
MALNUTRITION diagnosis , *RISK assessment , *DIETETICS , *MALNUTRITION , *MEDICAL quality control , *HUMAN services programs , *HOSPITAL care , *NUTRITIONAL assessment , *ARTIFICIAL intelligence , *RETROSPECTIVE studies , *DESCRIPTIVE statistics , *LONGITUDINAL method , *PRE-tests & post-tests , *RESEARCH , *METROPOLITAN areas , *MACHINE learning , *QUALITY assurance , *LENGTH of stay in hospitals , *ALGORITHMS , *DISEASE risk factors ,ELECTRONIC health record standards - Abstract
Background: Malnutrition is associated with increased morbidity, mortality, and healthcare costs. Early detection is important for timely intervention. This paper assesses the ability of a machine learning screening tool (MUST‐Plus) implemented in registered dietitian (RD) workflow to identify malnourished patients early in the hospital stay and to improve the diagnosis and documentation rate of malnutrition. Methods: This retrospective cohort study was conducted in a large, urban health system in New York City comprising six hospitals serving a diverse patient population. The study included all patients aged ≥ 18 years, who were not admitted for COVID‐19 and had a length of stay of ≤ 30 days. Results: Of the 7736 hospitalisations that met the inclusion criteria, 1947 (25.2%) were identified as being malnourished by MUST‐Plus‐assisted RD evaluations. The lag between admission and diagnosis improved with MUST‐Plus implementation. The usability of the tool output by RDs exceeded 90%, showing good acceptance by users. When compared pre‐/post‐implementation, the rate of both diagnoses and documentation of malnutrition showed improvement. Conclusion: MUST‐Plus, a machine learning‐based screening tool, shows great promise as a malnutrition screening tool for hospitalised patients when used in conjunction with adequate RD staffing and training about the tool. It performed well across multiple measures and settings. Other health systems can use their electronic health record data to develop, test and implement similar machine learning‐based processes to improve malnutrition screening and facilitate timely intervention. Key points/Highlights: Malnutrition is prevalent among hospitalised patients and frequently goes unrecognised, with the potential for severe sequelae. 
Accurate diagnosis, documentation and treatment of malnutrition have the potential of having a positive impact on morbidity rate, mortality rate, length of inpatient stay, readmission rate and hospital revenue. The tool's successful application highlights its potential to optimise malnutrition screening in healthcare systems, offering potential benefits for patient outcomes and hospital finances. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Early Breast Cancer Risk Assessment: Integrating Histopathology with Artificial Intelligence.
- Author
-
Ivanova, Mariia, Pescia, Carlo, Trapani, Dario, Venetis, Konstantinos, Frascarelli, Chiara, Mane, Eltjona, Cursano, Giulia, Sajjadi, Elham, Scatena, Cristian, Cerbelli, Bruna, d'Amati, Giulia, Porta, Francesca Maria, Guerini-Rocco, Elena, Criscitiello, Carmen, Curigliano, Giuseppe, and Fusco, Nicola
- Subjects
- *
BREAST tumor risk factors , *RISK assessment , *MEDICAL protocols , *CANCER relapse , *ARTIFICIAL intelligence , *EARLY detection of cancer , *CYTOCHEMISTRY , *TUMOR markers , *DECISION making in clinical medicine , *IMMUNOHISTOCHEMISTRY , *PATIENT-centered care , *DEEP learning , *ARTIFICIAL neural networks , *MACHINE learning , *ONCOLOGISTS , *INDIVIDUALIZED medicine , *MOLECULAR pathology , *HEALTH care teams , *ALGORITHMS , *DISEASE risk factors - Abstract
Simple Summary: Risk assessment in early breast cancer is critical for clinical decisions, but defining risk categories poses a significant challenge. The integration of conventional histopathology and biomarkers with artificial intelligence (AI) techniques, including machine learning and deep learning, has the potential to offer more precise information. AI applications extend beyond detection to histological subtyping, grading, and molecular feature identification. The successful integration of AI into clinical practice requires collaboration between histopathologists, molecular pathologists, computational pathologists, and oncologists to optimize patient outcomes. Effective risk assessment in early breast cancer is essential for informed clinical decision-making, yet consensus on defining risk categories remains challenging. This paper explores evolving approaches in risk stratification, encompassing histopathological, immunohistochemical, and molecular biomarkers alongside cutting-edge artificial intelligence (AI) techniques. Leveraging machine learning, deep learning, and convolutional neural networks, AI is reshaping predictive algorithms for recurrence risk, thereby revolutionizing diagnostic accuracy and treatment planning. Beyond detection, AI applications extend to histological subtyping, grading, lymph node assessment, and molecular feature identification, fostering personalized therapy decisions. With rising cancer rates, it is crucial to implement AI to accelerate breakthroughs in clinical practice, benefiting both patients and healthcare providers. However, it is important to recognize that while AI offers powerful automation and analysis tools, it lacks the nuanced understanding, clinical context, and ethical considerations inherent to human pathologists in patient care. Hence, the successful integration of AI into clinical practice demands collaborative efforts between medical experts and computational pathologists to optimize patient outcomes. 
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Application of artificial intelligence in the diagnosis and treatment of cardiac arrhythmia.
- Author
-
Guo, Rong‐Xin, Tian, Xu, Bazoukis, George, Tse, Gary, Hong, Shenda, Chen, Kang‐Yin, and Liu, Tong
- Subjects
- *
ARRHYTHMIA diagnosis , *ARRHYTHMIA treatment , *ARTIFICIAL intelligence , *HEART , *WEARABLE technology , *TELEMEDICINE , *VENTRICULAR arrhythmia , *PATHOGENESIS , *CARDIAC pacing , *PATIENT monitoring , *PREDICTIVE validity , *ELECTRODES , *ALGORITHMS - Abstract
The rapid growth in computational power, sensor technology, and wearable devices has provided a solid foundation for all aspects of cardiac arrhythmia care. Artificial intelligence (AI) has been instrumental in bringing about significant changes in the prevention, risk assessment, diagnosis, and treatment of arrhythmia. This review examines the current state of AI in the diagnosis and treatment of atrial fibrillation, supraventricular arrhythmia, ventricular arrhythmia, hereditary channelopathies, and cardiac pacing. Furthermore, ChatGPT, which has gained attention recently, is addressed in this paper along with its potential applications in the field of arrhythmia. Additionally, the accuracy of arrhythmia diagnosis can be improved by identifying electrode misplacement or erroneous swapping of electrode position using AI. Remote monitoring has expanded greatly due to the emergence of contactless monitoring technology as wearable devices continue to develop and flourish. Parallel advances in AI computing power, ChatGPT, availability of large data sets, and more have greatly expanded applications in arrhythmia diagnosis, risk assessment, and treatment. More precise algorithms based on big data, personalized risk assessment, telemedicine and mobile health, smart hardware and wearables, and the exploration of rare or complex types of arrhythmia are the future direction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. From algorithmic governance to govern algorithm.
- Author
-
Xu, Zichun
- Subjects
- *
ALGORITHMS , *ARTIFICIAL intelligence , *MODERNIZATION (Social science) , *BIG data , *NETWORK governance , *BLOCKCHAINS - Abstract
Algorithms are the core category and basic method of the digital age; advanced technologies such as big data, artificial intelligence, and blockchain all rely on various algorithm designs or take algorithms as their underlying principle. However, owing to the characteristics of algorithm design, application, and the technology itself, concerns such as algorithmic black boxes, algorithmic discrimination, and difficulty in accountability arise to varying degrees during operation. This paper summarizes these problems in three aspects: unexplainability, self-reinforcement, and autonomy. Facing the opportunities and risks generated by the application of algorithms in national governance, while actively promoting the development of algorithm technology to advance the modernization of national governance, it is also necessary to strengthen the governance of algorithms. Practice has proved that enhancing the interpretability of algorithms, optimizing algorithm design, and adopting legal regulation of algorithms are the basic approaches to effective algorithm regulation in the era of intelligent governance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Implementation of a Long Short-Term Memory Neural Network-Based Algorithm for Dynamic Obstacle Avoidance.
- Author
-
Mulás-Tejeda, Esmeralda, Gómez-Espinosa, Alfonso, Escobedo Cabello, Jesús Arturo, Cantoral-Ceballos, Jose Antonio, and Molina-Leal, Alejandra
- Subjects
- *
MOBILE robots , *HUMAN-robot interaction , *AUTONOMOUS robots , *ANGULAR velocity , *LINEAR velocity , *MOTION capture (Human mechanics) , *ALGORITHMS - Abstract
Autonomous mobile robots are essential to the industry, and human–robot interactions are becoming more common nowadays. These interactions require that the robots navigate scenarios with static and dynamic obstacles in a safe manner, avoiding collisions. This paper presents a physical implementation of a method for dynamic obstacle avoidance using a long short-term memory (LSTM) neural network that obtains information from the mobile robot's LiDAR so that it is capable of navigating through scenarios with static and dynamic obstacles while avoiding collisions and reaching its goal. The model is implemented using a TurtleBot3 mobile robot within an OptiTrack motion capture (MoCap) system for obtaining its position at any given time. The user operates the robot through these scenarios, recording its LiDAR readings, target point, position inside the MoCap system, and its linear and angular velocities, all of which serve as the input for the LSTM network. The model is trained on data from multiple user-operated trajectories across five different scenarios, outputting the linear and angular velocities for the mobile robot. Physical experiments prove that the model is successful in allowing the mobile robot to reach the target point in each scenario while avoiding the dynamic obstacle, with a validation accuracy of 98.02%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. A Comprehensive analysis of Deployment Optimization Methods for CNN-Based Applications on Edge Devices.
- Author
-
Qi Li, Zhenling Su, and Lin Meng
- Subjects
- *
CONVOLUTIONAL neural networks , *ARTIFICIAL intelligence , *ALGORITHMS - Abstract
The development of the promising Artificial Intelligence of Things (AIoT) technology increases the demand for implementing Convolutional Neural Network (CNN) algorithms on edge devices. However, implementing huge CNN-based applications on resource-constrained edge devices is considered challenging. Therefore, several CNN optimization methods are integrated into the deployment tools of edge devices. Since this field evolves rapidly, relevant tools adopt non-uniform deployment optimization flows, and the optimization details are poorly explained. This fact hinders developers from further analyzing the bottlenecks of CNN-based applications on edge devices. Hence, this paper comprehensively analyzes the deployment optimization methods for CNN-based applications on edge devices. Optimization methods are classified into Hardware-Agnostic and Hardware-Specific methods. Their ideas and processing details are analyzed, and some suggestions are proposed according to deployment experiments with different architecture models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
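The record above classifies deployment optimizations without naming them individually; a canonical hardware-agnostic example from this class is post-training integer quantization. The sketch below is illustrative only (the paper's own method set is not given here) and shows symmetric per-tensor int8 quantization of a weight list.

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats to [-128, 127]
    using a single scale derived from the largest absolute weight."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # avoid zero scale
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [v * scale for v in q]
```

A round trip `dequantize(*quantize_int8(ws))` reproduces each weight to within half a quantization step, which is the accuracy/footprint trade-off such deployment tools exploit.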
37. Exploring Neutrosophic Numeral System Algorithms for Handling Uncertainty and Ambiguity in Numerical Data: An Overview and Future Directions.
- Author
-
Salama, A. A., Shams, Mahmoud Y., Elseuofi, Sherif, and Khalid, Huda E.
- Subjects
- *
NUMBER systems , *AMBIGUITY , *DECIMAL system , *PATTERN recognition systems , *ARTIFICIAL intelligence , *SET theory , *ALGORITHMS - Abstract
The Neutrosophic Numeral System Algorithms are a set of techniques designed to handle uncertainty and ambiguity in numerical data. These algorithms use Neutrosophic Set Theory, a mathematical framework that deals with incomplete, indeterminate, and inconsistent information. In this paper, we provide an overview of different approaches used in Neutrosophic Numeral System Algorithms, including the Neutrosophic Binary System, Neutrosophic Decimal System, and Neutrosophic Octal System. These systems use different bases and representations to account for degrees of truth, indeterminacy, and falsity in numerical data. We also explore the relationship between Neutrosophic Numeral System Algorithms and Number Neutrosophic Systems, which are another type of Neutrosophic System used for representing numerical data. Number Neutrosophic Systems use Neutrosophic Numbers to represent degrees of truth, indeterminacy, and falsity in numerical data, and they can be used in conjunction with Neutrosophic Numeral System Algorithms to handle uncertainty and ambiguity in decision-making and artificial intelligence applications. Moreover, we discuss the advantages and disadvantages of each algorithm and their potential applications in various fields. Finally, we highlight the importance of Neutrosophic cryptography in addressing uncertainty and ambiguity in decision-making and artificial intelligence and discuss future research directions. Understanding Neutrosophic Numeral System Algorithms and their relationship with Number Neutrosophic Systems is crucial for developing effective techniques for handling uncertainty and ambiguity in numerical data in decision-making, pattern recognition, and artificial intelligence applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
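The abstract above describes values carrying degrees of truth, indeterminacy, and falsity. The paper's own numeral-system encodings are not reproduced here, but the standard single-valued neutrosophic triple and its union/intersection operators can be sketched as follows (a minimal illustration, not the authors' algorithms):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NeutrosophicValue:
    """Single-valued neutrosophic triple: truth, indeterminacy, falsity."""
    t: float
    i: float
    f: float

    def union(self, other):
        # union favors truth: max T, min I, min F
        return NeutrosophicValue(max(self.t, other.t),
                                 min(self.i, other.i),
                                 min(self.f, other.f))

    def intersection(self, other):
        # intersection is conservative: min T, max I, max F
        return NeutrosophicValue(min(self.t, other.t),
                                 max(self.i, other.i),
                                 max(self.f, other.f))
```

For example, combining an uncertain reading (0.6, 0.2, 0.3) with a second source (0.4, 0.5, 0.1) by union yields (0.6, 0.2, 0.1): the most optimistic consistent assessment.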
38. Dark sides of artificial intelligence: The dangers of automated decision‐making in search engine advertising.
- Author
-
Schultz, Carsten D., Koch, Christian, and Olbrich, Rainer
- Subjects
- *
DECISION support systems , *ARTIFICIAL intelligence , *EMPIRICAL research , *DESCRIPTIVE statistics , *CONSUMERS , *TIME series analysis , *ADVERTISING , *SEARCH engines , *ASSOCIATIONS, institutions, etc. , *RESEARCH methodology , *AUTOMATION , *COMPARATIVE studies , *BUDGET , *ALGORITHMS , *REGRESSION analysis - Abstract
With the growing use of artificial intelligence, search engine providers are increasingly pushing advertisers to use automated bidding strategies based on machine learning. Such automated decision‐making systems leave advertisers in the dark about the data being used and how they can influence the outcome of the decision‐making process. Previous literature on artificial intelligence lacks an understanding of the dangers related to artificially intelligent systems and their lack of transparency. In response, our paper addresses the inherent risks of the automated optimization of advertisers' bidding strategies in search engine advertising. The selected empirical case of a service company demonstrates how data availability can trigger a long‐term decline in advertising performance and how search engine advertising performance metrics develop before and after an event of data scarcity. Based on data collected for 525 days, difference‐in‐differences analysis shows that the algorithmic approach has a considerable and lasting negative impact on advertising performance. Furthermore, the empirical case indicates that self‐regulated learning can initialize a downward spiral that gradually impairs advertising performance. Thus, the aim of this study is to increase awareness regarding automated decision‐making dangers in search engine advertising and help advertisers take preventive measures to reduce the risks of algorithm missteps. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
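The study above relies on difference-in-differences analysis. The estimator itself is simple enough to sketch (the numbers below are hypothetical, not the paper's 525-day data): the effect is the treated group's pre/post change minus the control group's pre/post change.

```python
def diff_in_differences(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD estimate: (treated change) minus (control change).
    Each argument is a list of outcome observations."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))
```

If a control campaign's metric rises by 2 over the event window while the treated campaign rises by only 0.5, the estimated effect of the event is -1.5, i.e., a relative decline, which is the kind of lasting negative impact the abstract reports.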
39. Two-Stage Probe-Based Search Optimization Algorithm for the Traveling Salesman Problems.
- Author
-
Rahman, Md. Azizur and Ma, Jinwen
- Subjects
- *
OPTIMIZATION algorithms , *SEARCH algorithms , *COMBINATORIAL optimization , *OPERATIONS research , *ARTIFICIAL intelligence , *ALGORITHMS - Abstract
As a classical combinatorial optimization problem, the traveling salesman problem (TSP) has been extensively investigated in the fields of Artificial Intelligence and Operations Research. Due to being NP-complete, it is still rather challenging to solve both effectively and efficiently. Because of its high theoretical significance and wide practical applications, great effort has been undertaken to solve it from the point of view of intelligent search. In this paper, we propose a two-stage probe-based search optimization algorithm for solving both symmetric and asymmetric TSPs through the stages of route development and a self-escape mechanism. Specifically, in the first stage, a reasonable proportion threshold filter of potential basis probes or partial routes is set up at each step during the complete route development process. In this way, the poor basis probes with longer routes are filtered out automatically. Moreover, four local augmentation operators are further employed to improve these potential basis probes at each step. In the second stage, a self-escape mechanism or operation is further implemented on the obtained complete routes to prevent the probe-based search from being trapped in a locally optimal solution. The experimental results on a collection of benchmark TSP datasets demonstrate that our proposed algorithm is more effective than other state-of-the-art optimization algorithms. In fact, it achieves the best-known TSP benchmark solutions in many datasets, while, in certain cases, it even generates solutions that are better than the best-known TSP benchmark solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
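The route-development stage described above grows partial routes ("probes") step by step while a threshold filter discards the poor ones. The paper's exact filter, augmentation operators, and self-escape mechanism are not reproduced here; the following is a minimal beam-search-style sketch of the filtering idea alone, with an assumed fixed beam width in place of the proportion threshold.

```python
def beam_tsp(dist, beam_width=5):
    """Grow partial TSP routes from city 0, keeping only the best few
    probes at each step (a stand-in for the proportion-threshold filter)."""
    n = len(dist)
    probes = [(0.0, [0])]  # each probe: (length so far, partial route)
    for _ in range(n - 1):
        candidates = []
        for length, route in probes:
            last = route[-1]
            for city in range(n):
                if city not in route:
                    candidates.append((length + dist[last][city], route + [city]))
        candidates.sort(key=lambda p: p[0])
        probes = candidates[:beam_width]  # filter out poor probes
    # close each surviving tour back to the start and return the best
    best = min(probes, key=lambda p: p[0] + dist[p[1][-1]][0])
    return best[0] + dist[best[1][-1]][0], best[1]
```

On four cities at the corners of a unit square this recovers the perimeter tour of length 4; the real algorithm adds local augmentation and self-escape on top of this skeleton to avoid local optima on large instances.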
40. A new intrusion detection system based on SVM–GWO algorithms for Internet of Things.
- Author
-
Ghasemi, Hamed and Babaie, Shahram
- Subjects
- *
INTERNET of things , *INTRUSION detection systems (Computer security) , *INTELLIGENT transportation systems , *SUPPORT vector machines , *ALGORITHMS , *ARTIFICIAL intelligence , *METAHEURISTIC algorithms - Abstract
Internet of Things (IoT) as an emerging technology is widely used in various applications such as remote healthcare, smart environment, and intelligent transportation systems. It is necessary to address users' concerns about cost, ease of use, privacy, and comprehensive security to grow the popularity of this technology. Intrusion Detection System (IDS) plays an indispensable role in security, preventing unauthorized users from accessing network resources by analyzing network patterns. Several techniques such as metaheuristic algorithms, machine learning, fuzzy logic, and artificial intelligence algorithms can be applied to increase the accuracy of IDS, feature selection, and network pattern classification. In this paper, a hybrid intrusion detection system based on Support Vector Machine (SVM) and Grey Wolf Optimization (GWO) is presented that utilizes the advantages of these algorithms. In the proposed approach, the support vector machine has been used to train on and differentiate anomaly records from normal records, and grey wolf optimization has been used to find the kernel function, select features, and adjust optimal parameters for the SVM in order to improve the classification. The conducted simulations prove that the proposed approach outperforms existing methods in terms of detection accuracy, precision, recall, and F-score on both the NSL-KDD and TON_IoT datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
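The abstract above uses Grey Wolf Optimization to tune SVM parameters. The paper's pseudocode is not available here, so the following is a generic GWO sketch in which a simple quadratic surrogate (a hypothetical stand-in for SVM cross-validation error over, say, C and gamma) is minimized; the alpha/beta/delta leadership structure and the decaying exploration coefficient are the standard GWO mechanics.

```python
import random

def gwo_minimize(objective, dim, bounds, n_wolves=20, n_iter=100, seed=42):
    """Grey Wolf Optimization: the three best wolves (alpha, beta, delta)
    guide the rest of the pack toward the optimum."""
    rng = random.Random(seed)
    lo, hi = bounds
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(n_iter):
        wolves.sort(key=objective)          # rank the pack
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2 - 2 * t / n_iter              # exploration coefficient decays 2 -> 0
        for i in range(n_wolves):
            new_pos = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A = 2 * a * r1 - a
                    C = 2 * r2
                    D = abs(C * leader[d] - wolves[i][d])
                    x += leader[d] - A * D
                new_pos.append(min(hi, max(lo, x / 3)))  # average of 3 pulls, clamped
            wolves[i] = new_pos
    return min(wolves, key=objective)
```

Swapping the surrogate for an actual cross-validated SVM score (e.g., over kernel parameters) turns this into the kind of hybrid tuner the paper evaluates.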
41. Enhancements in Radiological Detection of Metastatic Lymph Nodes Utilizing AI-Assisted Ultrasound Imaging Data and the Lymph Node Reporting and Data System Scale.
- Author
-
Chudobiński, Cezary, Świderski, Bartosz, Antoniuk, Izabella, and Kurek, Jarosław
- Subjects
- *
LYMPH nodes , *RECEIVER operating characteristic curves , *EARLY detection of cancer , *ARTIFICIAL intelligence , *MULTIPLE regression analysis , *ULTRASONIC imaging , *METASTASIS , *QUALITY assurance , *ALGORITHMS - Abstract
Simple Summary: A novel approach for automatic detection of neoplastic lesions in lymph nodes is presented, which incorporates machine learning methods and the new LN-RADS scale. The presented solution incorporates different network structures with diverse datasets to improve the overall effectiveness. Final findings demonstrate that incorporating the LN-RADS scale labels improved the overall diagnosis, especially when compared with current, standard practices. The presented solution is meant as an aid in the diagnosis process. The paper presents a novel approach for the automatic detection of neoplastic lesions in lymph nodes (LNs). It leverages the latest advances in machine learning (ML) with the LN Reporting and Data System (LN-RADS) scale. By integrating diverse datasets and network structures, the research investigates the effectiveness of ML algorithms in improving diagnostic accuracy and automation potential. Both Multinomial Logistic Regression (MLR)-integrated and fully connected neuron layers are included in the analysis. The methods were trained using three variants of combinations of histopathological data and LN-RADS scale labels to assess their utility. The findings demonstrate that the LN-RADS scale improves prediction accuracy. MLR integration is shown to achieve higher accuracy, while the fully connected neuron approach excels in AUC performance. All of the above suggests a possibility for significant improvement in the early detection and prognosis of cancer using AI techniques. The study underlines the importance of further exploration into combined datasets and network architectures, which could potentially lead to even greater improvements in the diagnostic process. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. HYPERGRAPH HORN FUNCTIONS.
- Author
-
BÉRCZI, KRISTÓF, BOROS, ENDRE, and KAZUHISA MAKINO
- Subjects
- *
ARTIFICIAL intelligence , *COMPUTER science , *POLYNOMIAL time algorithms , *DATABASES , *BOOLEAN functions , *SEMIDEFINITE programming - Abstract
Horn functions form a subclass of Boolean functions possessing interesting structural and computational properties. These functions play a fundamental role in algebra, artificial intelligence, combinatorics, computer science, database theory, and logic. In the present paper, we introduce the subclass of hypergraph Horn functions that generalizes matroids and equivalence relations. We provide multiple characterizations of hypergraph Horn functions in terms of implicate-duality and the closure operator, which are, respectively, regarded as generalizations of matroid duality and the Mac Lane-Steinitz exchange property of matroid closure. We also study algorithmic issues on hypergraph Horn functions and show that the recognition problem (i.e., deciding if a given definite Horn CNF represents a hypergraph Horn function) and key realization (i.e., deciding if a given hypergraph is realized as a key set by a hypergraph Horn function) can be done in polynomial time, while implicate sets can be generated with polynomial delay. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
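The polynomial-time results above rest on the classical forward-chaining closure for definite Horn formulas: repeatedly fire any rule whose body is already derived. A minimal sketch of that closure operator (generic Horn machinery, not the paper's hypergraph-specific recognition algorithm):

```python
def horn_closure(clauses, seed):
    """Closure of `seed` under definite Horn rules by forward chaining.

    `clauses` is a list of (body, head) pairs: whenever every variable
    in `body` is derived, `head` becomes derived too. Runs in polynomial
    time since each pass either adds a variable or terminates.
    """
    closed = set(seed)
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if head not in closed and set(body) <= closed:
                closed.add(head)
                changed = True
    return closed
```

With rules a -> b and bc -> d, the closure of {a} is {a, b}, while the closure of {a, c} also fires the second rule and yields {a, b, c, d}; the paper's characterizations constrain which such closure operators arise from hypergraph Horn functions.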
43. SH-GAT: Software-hardware co-design for accelerating graph attention networks on FPGA.
- Author
-
Wang, Renping, Li, Shun, Tang, Enhao, Lan, Sen, Liu, Yajing, Yang, Jing, Huang, Shizhen, and Hu, Hailong
- Subjects
- *
COMPUTER software , *ARTIFICIAL intelligence , *TECHNOLOGICAL innovations , *MACHINE learning , *ALGORITHMS - Abstract
Graph convolution networks (GCN) have demonstrated success in learning graph structures; however, they are limited in inductive tasks. Graph attention networks (GAT) were proposed to address the limitations of GCN and have shown high performance in graph-based tasks. Despite this success, GAT faces challenges in hardware acceleration, including: 1) the GAT algorithm has difficulty adapting to hardware; 2) challenges in efficiently implementing sparse matrix multiplication (SPMM); and 3) complex addressing and pipeline-stall issues due to irregular memory accesses. To this end, this paper proposed SH-GAT, an FPGA-based GAT accelerator that achieves more efficient GAT inference. The proposed approach employed several optimizations to enhance GAT performance. First, this work optimized the GAT algorithm using split weights and softmax approximation to make it more hardware-friendly. Second, a load-balanced SPMM kernel was designed to fully leverage potential parallelism and improve data throughput. Lastly, data preprocessing was performed by pre-fetching the source node and its neighbor nodes, effectively addressing the pipeline-stall and complex-addressing issues arising from irregular memory access. SH-GAT was evaluated on the Xilinx FPGA Alveo U280 accelerator card with three popular datasets. Compared to existing CPU, GPU, and state-of-the-art (SOTA) FPGA-based accelerators, SH-GAT achieves speedups of up to 3283×, 13×, and 2.3×, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
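The abstract mentions a softmax approximation as one hardware-friendly optimization. The paper's specific approximation is not given here; one common approach in FPGA accelerators, shown below for illustration, replaces e^x with 2^x so the exponential maps onto shifts and small lookup tables in fixed-point logic.

```python
import math

def softmax_exact(xs):
    """Reference softmax with the usual max-subtraction for stability."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def softmax_pow2(xs):
    """Hardware-friendly variant: base-2 exponentials instead of base-e.
    Preserves the ranking of attention scores, only reshapes the weights."""
    m = max(xs)
    exps = [2.0 ** (x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]
```

Both variants produce a valid probability distribution and agree on which neighbor gets the largest attention weight, which is why such substitutions degrade accuracy only mildly while removing the costly exponential unit.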
44. Intelligent Algorithms Enable Photocatalyst Design and Performance Prediction.
- Author
-
Wang, Shifa, Mo, Peilin, Li, Dengfeng, and Syed, Asad
- Subjects
- *
PHOTOCATALYSTS , *ARTIFICIAL neural networks , *OPTIMIZATION algorithms , *PHOTOCATALYSIS , *ALGORITHMS , *ARTIFICIAL intelligence , *POLLUTANTS - Abstract
Photocatalysts have made great contributions to the degradation of pollutants to achieve environmental purification. The traditional method of developing new photocatalysts is to design and perform a large number of experiments, continually trying to obtain efficient photocatalysts that can degrade pollutants, which is time-consuming, costly, and does not necessarily achieve the best performance of the photocatalyst. The rapid development of photocatalysis has been accelerated by the rapid development of artificial intelligence. Intelligent algorithms can be utilized to design photocatalysts and predict photocatalytic performance, resulting in a reduction in development time and the cost of new catalysts. In this paper, the intelligent algorithms for photocatalyst design and photocatalytic performance prediction are reviewed, especially the artificial neural network model and the model optimized by an intelligent algorithm. A detailed discussion is given on the advantages and disadvantages of the neural network model, as well as its application in photocatalysis optimized by intelligent algorithms. The use of intelligent algorithms in photocatalysis remains a long-term challenge due to the lack of suitable neural network models for predicting the photocatalytic performance of photocatalysts. The prediction of photocatalytic performance can be aided by the combination of various intelligent optimization algorithms and neural network models, but so far only in the early stages of development. Intelligent algorithms can be used to design photocatalysts and predict their photocatalytic performance, which is a promising technology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Insider employee-led cyber fraud (IECF) in Indian banks: from identification to sustainable mitigation planning.
- Author
-
Roy, Neha Chhabra and Prabhakaran, Sreeleakha
- Subjects
- *
BANKING laws , *FRAUD prevention , *CORRUPTION , *ORGANIZATIONAL behavior , *RISK assessment , *DATA security , *RANDOM forest algorithms , *COMPUTERS , *FOCUS groups , *DATA security failures , *INTERVIEWING , *DEBT , *QUESTIONNAIRES , *ARTIFICIAL intelligence , *LOGISTIC regression analysis , *IDENTITY theft , *SECURITY systems , *FINANCIAL stress , *RESEARCH methodology , *CONCEPTUAL structures , *JOB stress , *ARTIFICIAL neural networks , *MACHINE learning , *ALGORITHMS - Abstract
This paper explores the different insider employee-led cyber frauds (IECF) based on the recent large-scale fraud events of prominent Indian banking institutions. Examining the different types of fraud and appropriate control measures will protect the banking industry from fraudsters. In this study, we identify and classify Cyber Fraud (CF), map the severity of the fraud on a scale of priority, test the mitigation effectiveness, and propose optimal mitigation measures. The identification and classification of CF losses were based on a literature review and focus group discussions with risk and vigilance officers and cyber cell experts. The CF was analyzed using secondary data. We predicted and prioritized CF using the machine learning algorithm Random Forest (RF). An efficient fraud mitigation model was developed based on an offender-victim-centric approach. Mitigation is advised both before and after fraud occurs. Through the findings of this research, banks and fraud investigators can prevent CF by detecting it quickly and controlling it on time. This study proposes a structured, sustainable CF mitigation plan that protects banks, employees, regulators, customers, and the economy, thus saving time, resources, and money. Further, these mitigation measures will improve the reputation of the Indian banking industry and ensure its survival. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Object detection algorithm for indoor switchgear components in substations based on improved YOLOv5s.
- Author
-
Wu Changdong and Liu Rui
- Subjects
- *
OBJECT recognition (Computer vision) , *ELECTRIC power equipment , *FEATURE extraction , *ARTIFICIAL intelligence , *ALGORITHMS , *PYRAMIDS - Abstract
With the continuous progress of science and technology, electric power equipment detection systems are developing in the direction of artificial intelligence. To achieve good automatic detection results, a high-quality and speedy algorithm is designed to intelligently detect indoor switchgear components in substations. This proposed method can detect the status of components based on image processing technology, which belongs to the field of condition monitoring. In this paper, the targets to be detected include multi-colour buttons or lights and the ammeters or voltmeters of the electrical switchgear. Two hybrid improved algorithms are used to optimise the you only look once v5s (YOLOv5s) network framework for increasing the detection speed and performance. Firstly, deeper feature map extraction is achieved using HorNet recursive gated convolution to replace the original C3 module for more efficient results. Then, a bidirectional feature pyramid network (BiFPN) algorithm is used to achieve the bidirectional propagation of feature information in the feature pyramid. This method can promote better fusion of feature information at different levels and help to convey feature and location information in the image. Finally, the improved YOLOv5s-BH model is used to detect the targets in substations. The experimental results show that the proposed method provides encouraging detection results for indoor switchgear components in substations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Beyond the Algorithm: Understanding How ChatGPT Handles Complex Library Queries.
- Author
-
Yang, Sharon Q. and Mason, Sarah
- Subjects
- *
WORLD Wide Web , *LIBRARY reference services , *T-test (Statistics) , *PLAGIARISM , *ARTIFICIAL intelligence , *STATISTICAL sampling , *QUESTIONNAIRES , *ACADEMIC libraries , *LIBRARIANS , *DESCRIPTIVE statistics , *INFORMATION services , *INFORMATION retrieval , *CONFIDENCE intervals , *ALGORITHMS , *REFERENCE interviews (Library science) - Abstract
The introduction of ChatGPT 3.5 in November 2022 ignited a sensation in the academic community, leaving many astounded by its capabilities. This new release more closely emulates human responses than its predecessors. Among its remarkable capabilities, it can answer questions, catalog items in MARC21, recommend reading lists, and make suggestions on a wide array of topics. To assess ChatGPT’s efficacy in aiding library users, the authors of this paper conducted an experiment comparing ChatGPT’s performance with that of librarians in answering reference questions. Thirty questions were randomly selected from the transaction log of the reference inquiries between June 1, 2023 to July 31, 2023 at the Rider University Libraries. These queries constituted 34% of the total user questions during this two-month period. The authors compared the answers by ChatGPT and those by reference librarians for their accuracy, relevance, and friendliness. The findings indicate that reference librarians markedly outperformed their robotic counterpart. An evident issue arises from ChatGPT’s deficiency in understanding local policies and practices. This consequently hinders its ability to provide satisfactory answers in those areas. OpenAI posits that ChatGPT’s proficiency can be enhanced through targeted fine-tuning using locally specific information. At the moment, ChatGPT remains a great tool for librarians. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. The Algorithm Holy: TikTok, Technomancy, and the Rise of Algorithmic Divination.
- Author
-
St. Lawrence, Emma
- Subjects
- *
SOCIAL media mobile apps , *WITCHCRAFT , *DIVINATION , *DANCE , *ALGORITHMS , *SINGING , *SUBCULTURES , *POPULAR music - Abstract
The social media app TikTok was launched in the US in 2017 with a very specific purpose: sharing 15-second clips of singing and dancing to popular songs. Seven years and several billion downloads later, it is now the go-to app for Gen Z Internet users and much better known for its ultra-personalized algorithm, AI-driven filters, and network of thriving subcultures. Among them is a growing community of magical and spiritual practitioners, frequently collectivized as Witchtok, who use the app not only to share their craft and create community but also consider the technology itself a powerful partner with which to conduct readings, channel deities, connect to a collective conscious, and transcend the communicative boundaries between the human and spirit realms—a practice that can be understood as algorithmic divination. In analyzing contemporary witchcraft on TikTok and contextualizing it within the larger history of technospirituality, this paper aims to explore algorithmic divination as an increasingly popular and powerful practice of technomancy open to practitioners of diverse creed and belief. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. Algorithms and Faith: The Meaning, Power, and Causality of Algorithms in Catholic Online Discourse.
- Author
-
Sierocki, Radosław
- Subjects
- *
ONLINE algorithms , *ALGORITHMS , *ARTIFICIAL intelligence , *COMPUTER programming , *DISCOURSE analysis - Abstract
The purpose of this article is to present grassroots concepts and ideas about "the algorithm" in the religious context. The power and causality of algorithms are based on lines of computer code, making a society influenced by "black boxes" or "enigmatic technologies" (as they are incomprehensible to most people). On the other hand, the power of algorithms lies in the meanings that we attribute to them. The extent of the power, agency, and control that algorithms have over us depends on how much power, agency, and control we are willing to give to algorithms and artificial intelligence, which involves building the idea of their omnipotence. The key question is about the meanings and the ideas about algorithms that are circulating in society. This paper is focused on the analysis of "vernacular/folk" theories on algorithms, reconstructed based on posts made by the users of Polish Catholic forums. The qualitative analysis of online discourse makes it possible to point out several themes, i.e., according to the linguistic concept, "algorithm" is the source domain used in explanations of religious issues (God as the creator of the algorithm, the soul as the algorithm); algorithms and the effects of their work are combined with the individualization and personalization of religion; algorithms are perceived as ideological machines. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. SPCTRE: sparsity-constrained fully-digital reservoir computing architecture on FPGA.
- Author
-
Abe, Yuki, Nishida, Kohei, Ando, Kota, and Asai, Tetsuya
- Subjects
- *
ARCHITECTURAL design , *ARTIFICIAL intelligence , *PARALLEL processing , *PARALLEL programming , *ALGORITHMS - Abstract
This paper proposes an unconventional architecture and algorithm for implementing reservoir computing on FPGA. An architecture-oriented algorithm with improved throughput and an architecture designed to reduce memory and hardware resource requirements are presented. The proposed architecture exhibits good performance in terms of benchmarks for reservoir computing. A prediction accelerator for reservoir computing that operates on 55.45 mW at 450 K fps with <3000 LEs is realized by implementing the architecture on FPGA. The proposed approach presents a novel FPGA implementation of reservoir computing focusing on both algorithms and architecture that may serve as a basis for applications of AI at the network edge. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
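The accelerator above targets reservoir computing, whose defining trait is a fixed random recurrent network whose states are read out by a trained linear layer. The sketch below shows only the fixed reservoir recurrence x(t+1) = tanh(W x(t) + W_in u(t)) for a 1-D input; the sparsity (here 20% nonzero weights) is the property the paper's architecture exploits, and the hard-coded 0.5 scaling stands in for proper spectral-radius normalization. The trained readout and any fixed-point details are omitted.

```python
import math, random

def esn_states(inputs, n_res=30, seed=0):
    """Drive a small sparse random reservoir with a 1-D input sequence
    and return the state vectors (the fixed, untrained part of an ESN)."""
    rng = random.Random(seed)
    w_in = [rng.uniform(-0.5, 0.5) for _ in range(n_res)]
    # sparse random recurrent weights: ~20% nonzero
    w = [[rng.uniform(-0.5, 0.5) if rng.random() < 0.2 else 0.0
          for _ in range(n_res)] for _ in range(n_res)]
    x = [0.0] * n_res
    states = []
    for u in inputs:
        pre = [sum(w[i][j] * x[j] for j in range(n_res)) + w_in[i] * u
               for i in range(n_res)]
        x = [math.tanh(0.5 * p) for p in pre]  # 0.5: crude stability scaling
        states.append(x[:])
    return states
```

Because only the linear readout over these states is trained, the heavy recurrent part can be frozen into hardware, which is what makes reservoir computing attractive for low-power FPGA accelerators like the one described.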