1,375 results
Search Results
202. Creating non-discriminatory Artificial Intelligence systems: balancing the tensions between code granularity and the general nature of legal rules.
- Author
-
Soriano Arnanz, Alba
- Subjects
ARTIFICIAL intelligence, LEGAL instruments, CIVIL rights - Abstract
- Published
- 2023
- Full Text
- View/download PDF
203. Analysis of Artificial Intelligence-Based Approaches Applied to Non-Invasive Imaging for Early Detection of Melanoma: A Systematic Review.
- Author
-
Patel, Raj H., Foltz, Emilie A., Witkowski, Alexander, and Ludzik, Joanna
- Subjects
MELANOMA diagnosis, ONLINE information services, MEDICAL databases, DERMATOLOGISTS, DEEP learning, MEDICAL information storage & retrieval systems, IN vivo studies, MICROSCOPY, SYSTEMATIC reviews, EARLY detection of cancer, ARTIFICIAL intelligence, MACHINE learning, DIAGNOSTIC imaging, OPTICAL coherence tomography, DERMOSCOPY, DESCRIPTIVE statistics, MEDLINE, SENSITIVITY & specificity (Statistics), ARTIFICIAL neural networks, ALGORITHMS - Abstract
Simple Summary: Melanoma is the most dangerous type of skin cancer worldwide. Early detection of melanoma is crucial for better outcomes but can often be challenging. This research explores the use of artificial intelligence (AI) techniques combined with non-invasive imaging methods to improve melanoma detection. The authors aim to evaluate the current state of AI-based techniques using tools including dermoscopy, optical coherence tomography (OCT), and reflectance confocal microscopy (RCM). The findings demonstrate that several AI algorithms perform as well as or better than dermatologists in detecting melanoma, particularly in the analysis of dermoscopy images. This research highlights the potential of AI to enhance diagnostic accuracy, leading to improved patient outcomes. Further studies are needed to address limitations and ensure the reliability and effectiveness of AI-based techniques. Background: Melanoma, the deadliest form of skin cancer, poses a significant public health challenge worldwide. Early detection is crucial for improved patient outcomes. Non-invasive skin imaging techniques allow for improved diagnostic accuracy; however, their use is often limited by the need for skilled practitioners trained to interpret images in a standardized fashion. Recent innovations in artificial intelligence (AI)-based techniques for skin lesion image interpretation show potential for the use of AI in the early detection of melanoma. Objective: The aim of this study was to evaluate the current state of AI-based techniques used in combination with non-invasive diagnostic imaging modalities including reflectance confocal microscopy (RCM), optical coherence tomography (OCT), and dermoscopy. We also aimed to determine whether the application of AI-based techniques can lead to improved diagnostic accuracy of melanoma. Methods: A systematic search was conducted via the Medline/PubMed, Cochrane, and Embase databases for eligible publications between 2018 and 2022.
Screening adhered to the 2020 version of the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Included studies utilized AI-based algorithms for melanoma detection and directly addressed the review objectives. Results: We retrieved 40 papers across the three databases. All studies directly comparing the performance of AI-based techniques with dermatologists reported superior or equivalent performance of the AI-based techniques in detecting melanoma. In studies directly comparing algorithm performance on dermoscopy images against dermatologists, AI-based algorithms achieved a higher area under the ROC curve (>80%) in the detection of melanoma. In these comparative studies using dermoscopic images, the mean algorithm sensitivity was 83.01% and the mean algorithm specificity was 85.58%. Studies evaluating machine learning in conjunction with OCT reported an accuracy of 95%, while studies evaluating RCM reported a mean accuracy of 82.72%. Conclusions: Our results demonstrate the robust potential of AI-based techniques to improve diagnostic accuracy and patient outcomes through the early identification of melanoma. Further studies are needed to assess the generalizability of these AI-based techniques across different populations and skin types, improve standardization in image processing, and further compare the performance of AI-based techniques with board-certified dermatologists to evaluate clinical applicability. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
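The sensitivity and specificity figures quoted in entry 203 are standard confusion-matrix ratios; as a small illustration of how they are computed (the counts below are made up for the example, not taken from the review):

```python
def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: fraction of actual melanomas the classifier catches."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: fraction of benign lesions correctly ruled out."""
    return tn / (tn + fp)

# Illustrative counts for a hypothetical dermoscopy classifier
assert abs(sensitivity(tp=83, fn=17) - 0.83) < 1e-9
assert abs(specificity(tn=86, fp=14) - 0.86) < 1e-9
```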
204. Design of Intelligent Controller for Aero-engine Based on TD3 Algorithm.
- Author
-
Jianming Zhu, Wei Tang, and Jianhua Dong
- Subjects
INTELLIGENT control systems, DEEP reinforcement learning, TURBOFAN engines, ARTIFICIAL intelligence, ALGORITHMS, REINFORCEMENT learning - Abstract
Recently, the growing structural complexity and performance requirements of aero-engines have placed higher demands on their control systems. With the rapid development of artificial intelligence technology, intelligent controllers with self-learning ability can make a great difference. In this paper, we propose an aero-engine intelligent controller design method based on the twin delayed deep deterministic policy gradient (TD3) algorithm. The method allows the intelligent controller to interact autonomously with the aero-engine system to acquire the optimal control sequence. The JT9D turbofan engine is used to illustrate the proposed controller design workflow. First, the aero-engine control problem is formulated as a Markov decision process for deep reinforcement learning (DRL) algorithms. Second, a complete intelligent controller design process is constructed by carefully designing the network structures and reward function. Finally, comparison simulations are carried out to verify the performance of the design method. The simulation results indicate that the low-pressure turbine speed has no overshoot and the settling time does not exceed 0.88 s during engine acceleration. During deceleration, the overshoot of the low-pressure turbine speed is limited to 0.74% and the settling time does not exceed about 0.6 s. The results show that the TD3 controller outperforms both deep deterministic policy gradient (DDPG) and proportional-integral-derivative (PID) control in speed tracking. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
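Entry 204 names TD3 without detail. As a hedged sketch of generic TD3 (not the paper's aero-engine controller), the two ingredients that distinguish TD3 from DDPG, clipped double-Q targets and target-policy smoothing, look roughly like this:

```python
import random

def smoothed_target_action(action, noise_std=0.2, noise_clip=0.5,
                           low=-1.0, high=1.0):
    """Target-policy smoothing: add clipped Gaussian noise to the target
    action, then clip back into the valid action range."""
    noise = max(-noise_clip, min(noise_clip, random.gauss(0.0, noise_std)))
    return max(low, min(high, action + noise))

def td3_target(reward, gamma, next_q1, next_q2, done):
    """Clipped double-Q: bootstrap from the MINIMUM of the twin target
    critics, countering the overestimation bias of plain DDPG."""
    return reward + gamma * (0.0 if done else min(next_q1, next_q2))
```

The hyperparameter defaults above are the conventional ones from the TD3 literature, not values reported in this paper.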
205. A MADDPG-based multi-agent antagonistic algorithm for sea battlefield confrontation.
- Author
-
Chen, Wei and Nie, Jing
- Subjects
DEEP reinforcement learning, MACHINE learning, REINFORCEMENT learning, ALGORITHMS, ARTIFICIAL intelligence, INTELLIGENT buildings - Abstract
There is a concerted effort to build intelligent maritime systems, and numerous artificial intelligence technologies have been explored. At present, more and more researchers are working on deep reinforcement learning algorithms, whose mainstream application is in games. Reinforcement learning has mastered chess, a complete-information game, and Texas hold'em poker, an incomplete-information game, and has reached or even surpassed top human performance in e-sports games with huge state spaces and complex action spaces. However, reinforcement learning still faces great challenges in fields such as autonomous driving, mainly because training requires an environment in which agents can interact: realistic simulation scenes are very difficult to construct, and there is no guarantee that the agent will not encounter states it has never seen. It is therefore necessary to explore simulation scenarios first, and this paper studies reinforcement learning in a simulation scenario. Migrating such methods to real-world applications remains a major challenge, especially for sea missions. For the heterogeneous multi-agent game confrontation scenario, this paper proposes a sea-battlefield confrontation decision algorithm based on multi-agent deep deterministic policy gradient. The algorithm combines long short-term memory with the actor-critic framework, which both achieves convergence in huge state and action spaces and addresses the problem of sparse real rewards. Imitation learning is also integrated into the decision algorithm, which not only improves the convergence speed but also greatly improves the effectiveness of the algorithm.
The results show that the algorithm can handle a variety of tactical sea-battlefield scenarios, make flexible decisions in response to changes by the enemy, and achieve an average winning rate close to 90%. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
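MADDPG's core idea, centralized training with decentralized execution, is easy to state in code: each agent's critic conditions on every agent's observation and action, while each actor sees only its own observation. A minimal structural sketch (input shapes only, no networks, and not the paper's LSTM-augmented variant):

```python
def critic_input(observations, actions):
    """MADDPG critic input: concatenation of ALL agents' observations and
    actions (centralized training)."""
    joint = []
    for obs in observations:
        joint.extend(obs)
    for act in actions:
        joint.extend(act)
    return joint

def actor_input(observations, agent_idx):
    """Each actor acts on its own local observation only
    (decentralized execution)."""
    return list(observations[agent_idx])

obs = [[0.1, 0.2], [0.3, 0.4]]   # two agents, 2-D observations each
acts = [[1.0], [0.0]]            # two agents, 1-D actions each
assert len(critic_input(obs, acts)) == 6
assert actor_input(obs, 1) == [0.3, 0.4]
```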
206. Early health prediction framework using XGBoost ensemble algorithm in intelligent environment.
- Author
-
Kumar, Dheeraj, Sood, Sandeep Kumar, and Rawat, Keshav Singh
- Subjects
MACHINE learning, ARTIFICIAL intelligence, VIRUS diseases, INTERNET of things, ALGORITHMS - Abstract
Amidst the COVID-19 humanitarian catastrophe, the Internet of Things (IoT) and Artificial Intelligence (AI) have emerged as premier technologies in the healthcare domain. This global health emergency highlights the need to bolster current healthcare systems for future preparedness. The current paper presents a non-invasive, AI-empowered model for passive health monitoring and prediction of COVID-19 infection in the home environment. It consists of four notable layers: fully automated data acquisition; data analysis and Bayesian probabilistic classification; temporal COVID-19 severity prediction; and a communication layer. These layers comprise IoT sensors embedded in an intelligent toilet system to collect the required data, processing and analysis of the urine parametric data at the fog layer, and forecasting of COVID-19 severity using the XGBoost machine learning model at the cloud layer. The model was evaluated over 53,550 data instances in a simulated environment. The results show that the proposed AI framework outperformed state-of-the-art strategies in terms of temporal approximation (94.53 s), reliability (92.69%), stability (0.89%), and predictive performance (95.26%). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
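XGBoost, named in entry 206, belongs to the gradient-boosted tree family. While the framework's internals are beyond the abstract, the additive-ensemble prediction it relies on can be sketched in a few lines (the stump "trees" below are hand-made stand-ins, not the paper's trained model):

```python
def boosted_predict(x, trees, learning_rate=0.1, base_score=0.5):
    """Boosted ensemble prediction: a base score plus the shrunken sum of
    the outputs of sequentially fitted trees."""
    return base_score + learning_rate * sum(tree(x) for tree in trees)

# Two hand-made decision stumps standing in for fitted trees
stumps = [
    lambda x: 1.0 if x > 2.0 else -1.0,
    lambda x: 1.0 if x > 4.0 else -1.0,
]
assert abs(boosted_predict(5.0, stumps) - 0.7) < 1e-9  # 0.5 + 0.1*(1+1)
assert abs(boosted_predict(1.0, stumps) - 0.3) < 1e-9  # 0.5 + 0.1*(-1-1)
```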
207. Booth-Encoded Karatsuba: A Novel Hardware-Efficient Multiplier.
- Author
-
JAIN, Riya, PAHWA, Khushbu, and PANDEY, Neeta
- Subjects
EMERGENCY management, ARTIFICIAL intelligence, ALGORITHMS, INTERNET of things - Abstract
There is a recent boom in emerging areas such as the IoMT (Internet of Medical Things), artificial intelligence for healthcare, and disaster management. These novel research frontiers are hardware-critical and cannot afford to compromise accuracy or reliability. The multiplier, one of the most heavily used components, is therefore crucial in these applications; if optimized, multipliers can improve the overall performance of the system. Thus, this paper attempts to determine the potential of accurate multipliers that meet minimal hardware requirements. We propose a novel Booth-Encoded Karatsuba multiplier and compare it with a Booth-Encoded Wallace tree multiplier. The architectures have been developed using two types of Booth encoding, Radix-4 and Radix-8, for 16-bit, 32-bit, and 64-bit multiplications. The algorithm is parameterizable to different bit widths, offering greater flexibility. The proposed multiplier delivers enhanced performance with a significant reduction in hardware while only negligibly trading off the Power Delay Product (PDP). The advantage of the proposed architecture grows with increasing multiplier size, owing to a significant reduction in hardware against only a slight increase in PDP. All architectures have been implemented in Verilog HDL using the Xilinx Vivado Design Suite. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
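The two building blocks combined in entry 207 are both classical. A software sketch of each at the integer level (the paper's contribution is the hardware architecture, which this does not reproduce): Karatsuba replaces the schoolbook four half-width products with three, and radix-4 Booth recoding roughly halves the number of partial products.

```python
def karatsuba(x, y):
    """Karatsuba multiplication: three recursive half-width products
    instead of the schoolbook four."""
    if x < 16 or y < 16:
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    mask = (1 << half) - 1
    x_hi, x_lo = x >> half, x & mask
    y_hi, y_lo = y >> half, y & mask
    hi = karatsuba(x_hi, y_hi)
    lo = karatsuba(x_lo, y_lo)
    mid = karatsuba(x_hi + x_lo, y_hi + y_lo) - hi - lo  # the saved product
    return (hi << (2 * half)) + (mid << half) + lo

def booth_radix4_digits(n, bits=16):
    """Radix-4 Booth recoding: signed digits in {-2,-1,0,1,2}, one per
    overlapping 3-bit window, so n == sum(d_i * 4**i)."""
    table = {0: 0, 1: 1, 2: 1, 3: 2, 4: -2, 5: -1, 6: -1, 7: 0}
    padded = n << 1  # supplies the implicit y_{-1} = 0 bit
    return [table[(padded >> i) & 0b111] for i in range(0, bits, 2)]

assert karatsuba(12345, 67890) == 12345 * 67890
digits = booth_radix4_digits(11)
assert sum(d * 4 ** i for i, d in enumerate(digits)) == 11
```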
208. How to replace a physiotherapist: artificial intelligence and the redistribution of expertise.
- Author
-
Rowe, Michael, Nicholls, David A., and Shaw, James
- Subjects
PHYSICAL therapy, NATURAL language processing, ARTIFICIAL intelligence, PATIENT-centered care, ROBOTICS, CLINICAL competence, AUTOMATION, ALGORITHMS - Abstract
The convergence of large datasets, increased computational power, and enhanced algorithm design has led to increasing success for machine learning (ML) and artificial intelligence (AI) across a wide variety of healthcare professions, but these technologies have so far eluded formal discussion in physiotherapy. This is a concern as we begin to see accelerating performance improvements in AI research in general and, specifically, increasing competence within narrow domains of practice in clinical AI. In this paper we argue that the introduction of AI-based systems within the health sector is likely to have a significant influence on physiotherapy practice, leading to the automation of tasks that we might consider core to the discipline. We present examples of such AI-based systems in clinical practice, specifically video analysis, natural language processing (NLP), robotics, personalized healthcare, expert systems, and prediction algorithms. We address some of the key ethical implications of these emerging technologies, discuss the implications for physiotherapists, and explore how the resultant changes may challenge some long-held assumptions about the status of the profession in society. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
209. Application of Federated Learning Algorithm Based on K-Means in Electric Power Data.
- Author
-
Weimin He and Lei Zhao
- Subjects
ELECTRIC power, ALGORITHMS, ARTIFICIAL intelligence, MACHINE learning, DATA - Abstract
Accurate electricity forecasting is the key basis for guiding the power sector in arranging operation plans and guaranteeing the profitability of electric power companies. However, with the increasing demand from enterprises and departments for data security, the phenomenon of "isolated data islands" is becoming more and more serious, degrading the accuracy of traditional electricity prediction models. Federated learning, an emerging artificial intelligence technology, is designed to ensure data privacy while carrying out efficient machine learning, providing a new way to address the isolated-data-island problem in electricity forecasting. Nonetheless, owing to the popularity of smart meters, the collected electricity data are unevenly distributed and huge in volume, so an electricity prediction model generated by federated learning alone is difficult to apply in practice. To solve this problem, a clustering federated learning method (C-FL) is proposed to protect data privacy while improving the accuracy of power prediction. First, C-FL uses the K-means algorithm to cluster power data locally within power enterprises, and then builds an accurate power forecasting model for each class of power data jointly with other local clients through federated learning. Extensive experimental results show that the proposed clustering federated learning method outperforms existing federated learning models in the accuracy of electric power forecasting. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
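The C-FL pipeline described in entry 209, local K-means clustering followed by per-cluster federated averaging, can be sketched with stand-in components (1-D data and plain parameter vectors; the paper's actual models and features are not given in the abstract):

```python
def kmeans_1d(values, centers, iters=10):
    """Lloyd's algorithm on scalar data: assign each value to its nearest
    center, then move each center to its cluster mean."""
    clusters = [[] for _ in centers]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

def fedavg(client_params, client_sizes):
    """Federated averaging: weight each client's parameter vector by its
    local sample count."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [sum(p[j] * s for p, s in zip(client_params, client_sizes)) / total
            for j in range(dim)]

centers, _ = kmeans_1d([1.0, 2.0, 10.0, 11.0], centers=[0.0, 5.0])
assert centers == [1.5, 10.5]
assert fedavg([[1.0, 1.0], [3.0, 3.0]], [1, 3]) == [2.5, 2.5]
```

In C-FL the averaging would be run once per K-means cluster, so clients with similar load profiles share a model.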
210. Optimization of Discrete Wavelet Transform Feature Representation and Hierarchical Classification of G-Protein Coupled Receptor Using Firefly Algorithm and Particle Swarm Optimization.
- Author
-
Kamal, Nor Ashikin Mohamad, Bakar, Azuraliza Abu, and Zainudin, Suhaila
- Subjects
DISCRETE wavelet transforms, PARTICLE swarm optimization, WAVELET transforms, ALGORITHMS, ARTIFICIAL intelligence, AMINO acid sequence, MATHEMATICAL optimization - Abstract
Ineffective protein feature representation poses problems for protein classification in hierarchical structures. The discrete wavelet transform (DWT) is a feature representation method that generates global and local features based on different wavelet families and decomposition levels. To represent protein sequences, a proper wavelet family and decomposition level must be selected. This paper proposes a hybrid optimization method using particle swarm optimization and the firefly algorithm (FAPSO) to choose a suitable wavelet family and decomposition level of the wavelet transformation for protein feature representation. The approach improves on earlier work in which the wavelet family and decomposition level were, in most cases, selected manually based on experience rather than on the data. The paper also applies virtual class methods to overcome error propagation in hierarchical classification. The effectiveness of the proposed method was tested on a G-Protein Coupled Receptor (GPCR) data set consisting of 5 classes at the family level, 38 classes at the subfamily level, and 87 classes at the sub-subfamily level. Based on the results obtained, FAPSO most frequently selected Biorthogonal wavelets and decomposition level 1 to represent the GPCR classes. The experimental results show that representing GPCR proteins with the FAPSO algorithm and virtual classes yields 97.9%, 86.9%, and 81.3% classification accuracy at the family, subfamily, and sub-subfamily levels, respectively. In conclusion, the wavelet family and decomposition level optimized by the FAPSO algorithm, together with the virtual class method, can potentially serve as a feature representation and hierarchical classification method for GPCR proteins.
[ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
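The particle-swarm half of the FAPSO hybrid in entry 210 follows the standard PSO update. A sketch of a single particle step (the inertia weight w and acceleration constants c1, c2 below are conventional defaults, not the paper's tuned values; in FAPSO the searched dimensions, wavelet family and decomposition level, are discrete, so real use would round positions to valid choices):

```python
import random

def pso_step(position, velocity, personal_best, global_best,
             w=0.7, c1=1.5, c2=1.5):
    """Standard PSO update: an inertia term plus stochastic pulls toward
    the particle's own best and the swarm's global best positions."""
    new_pos, new_vel = [], []
    for x, v, pb, gb in zip(position, velocity, personal_best, global_best):
        v_next = (w * v
                  + c1 * random.random() * (pb - x)
                  + c2 * random.random() * (gb - x))
        new_pos.append(x + v_next)
        new_vel.append(v_next)
    return new_pos, new_vel

# A particle already sitting at both bests with zero velocity stays put
pos, vel = pso_step([1.0], [0.0], [1.0], [1.0])
assert pos == [1.0] and vel == [0.0]
```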
211. Emergency Information Communication Structure by Using Multimodel Fusion and Artificial Intelligence Algorithm.
- Author
-
Lei, Liping
- Subjects
ARTIFICIAL intelligence, EMERGENCY management, DATA extraction, INFORMATION resources management, ALGORITHMS, CONSTRUCTION management - Abstract
As society develops, social incidents are increasing, and emergency management has gradually become the main instrument for resolving crises in the public domain. Many countries and regions around the world frequently experience various types of public crises, which severely affect people's daily lives, safety, and property. Long-term research and analysis show that the emergency management mechanism currently established in China has certain shortcomings. Poor communication of emergency information can prevent emergency work from proceeding smoothly; likewise, problems in emergency-information communication channels can hamper cooperation between departments during emergency management and reduce the government's efficiency in handling problems in real scenarios. To improve the efficiency of emergency information management, this paper addresses the various problems existing in the construction of emergency management systems. On this basis, it analyzes and organizes the integration of relevant emergency-information-management planning models, building on research into the development of artificial intelligence algorithms, and comprehensively reviews and evaluates the main research results on emergency information management at home and abroad. Finally, a QG algorithm based on multi-model fusion is developed. In the analysis, artificial intelligence algorithms are used to build a multi-mode prediction model, with the data needed to build the model collected by random extraction; different data sets are analyzed and used as the basic training data for prediction.
Comprehensive analysis shows that the model constructed in this paper can, to a certain extent, promote the sharing of emergency information among departments. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
212. Artificial Intelligence Aerobics Action Image Simulation Based on the Image Segmentation Algorithm.
- Author
-
Jiang, Tao
- Subjects
ARTIFICIAL intelligence, SATISFACTION, ALGORITHMS, ACTIVE learning, IMAGING systems, AEROBIC exercises, IMAGE segmentation - Abstract
At present, aerobics is becoming increasingly popular with the continuous development of cultural needs. Because aerobics involves many movements, rapid changes, strong complexity, and difficult moves that are hard to perform, current aerobics teaching still suffers from shortcomings such as a low teaching level and limited teacher resources and energy, making it difficult to meet students' actual learning needs. Artificial intelligence can therefore be used to simulate and guide the technical movements of aerobics and teach students effectively. In this paper, an artificial intelligence aerobics image simulation system is researched and developed, based mainly on the GrabCut image segmentation algorithm. After analyzing some shortcomings of the algorithm, cascaded and graph-based refinements of GrabCut are selected to optimize it, laying a solid system foundation on which the aerobics artificial intelligence image simulation system is then built. Finally, the paper analyzes the practical problems of aerobics teaching activities in colleges and universities, focusing on the problems, achievements, and personal satisfaction of students who used the system in actual learning, and shows that the system can effectively assist aerobics teaching activities. By studying image segmentation algorithms and artificial intelligence technology and applying them to aerobics action image simulation, this paper aims to promote the field's technological development. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
213. An Intelligent Predictive Algorithm for the Anti-Rollover Prevention of Heavy Vehicles for Off-Road Applications.
- Author
-
Tota, Antonio, Dimauro, Luca, Velardocchia, Filippo, Paciullo, Genny, and Velardocchia, Mauro
- Subjects
OFF-road vehicles, RECURRENT neural networks, TRAFFIC accidents, ALGORITHMS, ARTIFICIAL intelligence, VEHICLE models, AGGRESSIVE driving - Abstract
Rollover detection and prevention are among the most critical aspects of the stability and safety assessment of heavy vehicles, especially in off-road driving applications. The topic has been studied in depth in terms of vehicle modelling and the design of control algorithms able to prevent rollover risk. However, it still represents a serious problem for carmakers, since rollover remains among the main causes of traffic accidents. The risk is even harder to predict for off-road heavy vehicles, where incipient rollover may be triggered by external factors, i.e., road irregularities and bank angles, as well as by aggressive driver input. Recent advances in road-profile measurement and estimation systems make road-preview-based algorithms a viable solution for rollover detection. This paper describes a model-based formulation to analytically evaluate the load transfer dynamics and their variation due to road perturbations, i.e., road bank angle and irregularities. An algorithm to detect and predict rollover risk for heavy vehicles is also presented, valid even for irregular road profiles, with the calculation of the ISO-LTR Predictive Time through phase-plane analysis. Furthermore, an artificial intelligence technique based on recurrent neural networks is presented as a preliminary solution for a realistic implementation of the methodology. The paper finally assesses the efficacy of the proposed rollover predictive algorithm through numerical results from simulations of severe maneuvers in realistic off-road driving scenarios, demonstrating its promising predictive capabilities. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
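The load transfer ratio (LTR) underlying the ISO-LTR index in entry 213 is simple to state. A sketch, assuming the usual convention of comparing the summed vertical tire loads on the two sides of the vehicle:

```python
def load_transfer_ratio(fz_left, fz_right):
    """LTR = (Fz_right - Fz_left) / (Fz_right + Fz_left).
    0 means balanced vertical loads; |LTR| = 1 means one side's wheels
    carry no load, i.e. incipient rollover."""
    return (fz_right - fz_left) / (fz_right + fz_left)

assert load_transfer_ratio(5000.0, 5000.0) == 0.0   # balanced
assert load_transfer_ratio(0.0, 10000.0) == 1.0     # left wheels unloaded
```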
214. Adaptive Compute Offloading Algorithm for Metasystem Based on Deep Reinforcement Learning.
- Author
-
Wang, Chunxin, Wang, Wensheng, Li, Wenjing, Liu, Zhu, Zhu, Jinhong, and Zhang, Nan
- Subjects
ADAPTIVE computing systems, REINFORCEMENT learning, ALGORITHMS, TIME-varying networks, SEARCH algorithms - Abstract
There has been a great deal of research on edge-computing task offloading with deep reinforcement learning (DRL). DRL is one of the important algorithm families in the current AI field, but there is still room for improvement in its time cost and adaptive correction ability. This paper studies the application of DRL algorithms to edge-computing task offloading. Its key innovation is the proposed MADRLCO algorithm, which builds on the Actor-Critic framework: a DNN model acts as the Actor and, through iterative training, locates the initial decision more accurately, while an LSTM model optimizes the Critic so that the optimal decision can be located in a short time. The main work of this paper is divided into three parts: (1) The Actor-Critic algorithm from DRL is applied to edge-computing task offloading. (2) To address the weak generalization ability of the basic Actor-Critic algorithm in multi-objective optimization, sequential quantitative correction and an adaptive correction parameter K are used to optimize the Critic, improving the model's generalization in multi-objective decision-making and the rationality of its decisions. (3) To address the large time cost of the model's Critic framework, a search algorithm for resource-allocation parameters based on time-series prediction is proposed (time-series forecasting being a research branch of pattern recognition), reducing the algorithm's time overhead and improving the model's adaptive correction capability. The algorithm can adapt both to time-varying network channel states and to a time-varying number of device connections.
Finally, experiments show that, compared with a DRL computation offloading algorithm based on a DNN plus binary search, MADRLCO reduces model training time by 66.27%, and in an environment with a time-varying number of devices in the metasystem, the model's average standard calculation rate is 0.0403 higher than that of the current optimal algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
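Entry 214's abstract does not publish the algorithm's internals. As one illustration of how a DNN actor's relaxed output is commonly turned into binary offloading decisions in DRL offloading work (an assumption about the general approach, not a reconstruction of MADRLCO), order-preserving thresholding generates k candidate binary decisions for the critic to score:

```python
def binary_offload_candidates(relaxed, k):
    """Quantize a relaxed (0..1) per-device offloading score vector into k
    candidate binary decisions by thresholding at each of the k largest
    scores (order-preserving quantization)."""
    thresholds = sorted(relaxed, reverse=True)[:k]
    return [[1 if score >= t else 0 for score in relaxed] for t in thresholds]

# Three devices; generate two candidates and let a critic pick the best
cands = binary_offload_candidates([0.9, 0.2, 0.6], k=2)
assert cands == [[1, 0, 0], [1, 0, 1]]
```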
215. Advances in Cuffless Continuous Blood Pressure Monitoring Technology Based on PPG Signals.
- Author
-
Qin, Caijie, Wang, Xiaohua, Xu, Guangjun, and Ma, Xibo
- Subjects
PLETHYSMOGRAPHY, ONLINE information services, SYSTEMATIC reviews, ARTIFICIAL intelligence, BLOOD pressure measurement, MEDLINE, ALGORITHMS - Abstract
Objective. To review progress in research on photoplethysmography (PPG)-based cuffless continuous blood pressure monitoring technologies and to outline the challenges that remain. Methods. Using Web of Science and PubMed as search engines, the literature on cuffless continuous blood pressure studies using PPG signals from the past five years was searched. Results. Based on the retrieved literature, this paper describes the available open datasets, commonly used signal preprocessing methods, and model evaluation criteria. Early research employed multisite PPG signals to calculate pulse wave velocity or transit time and predicted blood pressure with a simple linear equation. Later, extensive research was dedicated to mining features of PPG signals related to blood pressure and regressing blood pressure with machine learning models. Most recently, many studies have experimented with complex deep learning models that predict blood pressure directly from the raw PPG signal. Conclusion. This paper summarizes the methods in the retrieved literature, provides insight into the artificial intelligence algorithms employed, and concludes with a discussion of the challenges and opportunities for the development of cuffless continuous blood pressure monitoring technologies. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
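The "simple linear equation" era that entry 215 describes maps pulse transit time (PTT) to blood pressure through subject-specific calibration. A sketch (the coefficients a and b below are placeholders that would come from fitting against cuff readings per subject, not physiological constants):

```python
def pulse_transit_time(ecg_r_peak_t, ppg_foot_t):
    """PTT: delay between the ECG R-peak and the foot of the PPG pulse
    arriving at a distal site, in seconds."""
    return ppg_foot_t - ecg_r_peak_t

def linear_bp_estimate(ptt, a, b):
    """Early-style cuffless estimate: BP = a * PTT + b, with a and b
    calibrated per subject (shorter transit time -> higher pressure)."""
    return a * ptt + b

ptt = pulse_transit_time(0.10, 0.35)            # 0.25 s
bp = linear_bp_estimate(ptt, a=-200.0, b=170.0) # ~120 with these toy coefficients
```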
216. Good Intention, Bad Intention, and Algorithm: Rethinking the Value of Nudge in the Era of Artificial Intelligence.
- Author
-
Chang-Yun Ku
- Subjects
ARTIFICIAL intelligence, INTENTION, ALGORITHMS - Abstract
The algorithm not only amplifies every detail of human society but also performs the same function as the famous nudge technique, i.e. choice architecture, which pushes people in a certain direction while letting them assume the choice is made of their own free will. Given this nudge-like function of the algorithm, I want to reevaluate a long-controversial issue in the concept of nudge: is the nudge technique harmless? And if it isn't, can we still use it even with good intention? I'll start by introducing the concepts of nudge and sludge and then discuss their main issues. Third, I'll use three algorithmic examples to demonstrate the consequences of the nudge technique. Fourth, I'll address the nature of the nudge technique and the meaning of intention in nudging. Fifth, I'll push the discussion further to an important philosophical issue: the white lie. Finally, I'll summarize my argument and conclude the paper. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
217. A checklist for reporting, reading and evaluating Artificial Intelligence Technology Enhanced Learning (AITEL) research in medical education.
- Author
-
Masters, Ken and Salcedo, Daniel
- Subjects
READING, PUBLIC health laws, MEDICAL education, ARTIFICIAL intelligence, TECHNOLOGY, MEDICAL research, COMPUTER assisted instruction, LEARNING strategies, QUALITY assurance, ALGORITHMS - Abstract
Advances in Artificial Intelligence (AI) have led to AI systems' being used increasingly in medical education research. Current methods of reporting on the research, however, tend to follow patterns of describing an intervention and reporting on results, with little description of the AI in the system, or the many concerns about the use of AI. In essence, the readers do not actually know anything about the system itself. This paper proposes a checklist for reporting on AI systems, and covers the initial protocols and scoping, modelling and code, algorithm design, training data, testing and validation, usage, comparisons, real-world requirements, results and limitations, and ethical considerations. The aim is to have a systematic reporting process so that readers can have a comprehensive understanding of the AI system that was used in the research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
218. Never mind AI, what is automated testing and what are your responsibilities as a test user?
- Author
-
Redman, Alan
- Subjects
ARTIFICIAL intelligence ,APPRAISERS ,TEST scoring ,AUTOMATION ,DECISION making - Abstract
Key digested message Automated testing developed as an inevitable response to the greater testing volumes triggered by online assessment for selection in occupational settings. The technology has advanced to a point where the underlying decision-making based on test scores can be obscured from assessors. This article explores the responsibilities of assessors when adopting automated testing and inevitably discusses the impact of AI. There are five jokes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
219. A Data Augmentation Methodology to Reduce the Class Imbalance in Histopathology Images.
- Author
-
Escobar Díaz Guerrero, Rodrigo, Carvalho, Lina, Bocklitz, Thomas, Popp, Juergen, and Oliveira, José Luis
- Subjects
COMPUTER-assisted image analysis (Medicine) ,RESEARCH funding ,ARTIFICIAL intelligence ,KRUSKAL-Wallis Test ,DESCRIPTIVE statistics ,BIOINFORMATICS ,DEEP learning ,ARTIFICIAL neural networks ,CELL nuclei ,DIGITAL image processing ,ALGORITHMS - Abstract
Deep learning techniques have recently yielded remarkable results across various fields. However, the quality of these results depends heavily on the quality and quantity of data used during the training phase. One common issue in multi-class and multi-label classification is class imbalance, where one or several classes make up a substantial portion of the total instances. This imbalance causes the neural network to prioritize features of the majority classes during training, as their detection leads to higher scores. In the context of object detection, two types of imbalance can be identified: (1) an imbalance between the space occupied by the foreground and background and (2) an imbalance in the number of instances for each class. This paper aims to address the second type of imbalance without exacerbating the first. To achieve this, we propose a modification of the copy-paste data augmentation technique, combined with weight-balancing methods in the loss function. This strategy was specifically tailored to improve the performance in datasets with a high instance density, where instance overlap could be detrimental. To validate our methodology, we applied it to a highly unbalanced dataset focused on nuclei detection. The results show that this hybrid approach improves the classification of minority classes without significantly compromising the performance of majority classes. [ABSTRACT FROM AUTHOR]
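The weight-balancing half of this hybrid approach can be made concrete. The sketch below computes inverse-frequency class weights and applies them in a weighted cross-entropy; this is one common balancing scheme, assumed for illustration, since the abstract does not specify the paper's exact weighting:

```python
import math
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class by total / (n_classes * count): rarer classes
    get larger weights (one common balancing scheme)."""
    counts = Counter(labels)
    total, k = len(labels), len(counts)
    return {c: total / (k * n) for c, n in counts.items()}

def weighted_cross_entropy(probs, labels, weights):
    """Mean of -w_y * log p[y] over the batch, where p is the predicted
    class-probability vector for each sample."""
    return sum(-weights[y] * math.log(p[y])
               for p, y in zip(probs, labels)) / len(labels)

labels = [0] * 8 + [1] * 2                 # class 1 is the minority
w = inverse_frequency_weights(labels)      # {0: 0.625, 1: 2.5}
probs = [[0.9, 0.1]] * 8 + [[0.6, 0.4]] * 2
loss = weighted_cross_entropy(probs, labels, w)
assert w[1] > w[0]                         # minority errors cost more
```

With these weights, mistakes on the minority class contribute roughly four times as much to the loss as mistakes on the majority class, counteracting the instance-count imbalance.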
- Published
- 2024
- Full Text
- View/download PDF
220. Automated government form filling for aged and monolingual people using interactive tool.
- Author
-
Hegde, Adarsh R., Sujala Reddy, R. S., Kruthika, P., Pragathi, B. C., Sai Lahari, Sreerama, Deepamala, N., and Shobha, G.
- Subjects
AUTOMATIC speech recognition ,CONVERSATION ,ARTIFICIAL intelligence ,DESCRIPTIVE statistics ,MULTILINGUALISM ,GOVERNMENT programs ,COMMUNICATION devices for people with disabilities ,COMMUNICATION ,AUTOMATION ,ALGORITHMS - Abstract
The Government of India offers various schemes for various classes of citizens. Most application forms for these schemes are in English, and monolingual individuals find it difficult to access and fill them in. This paper addresses the challenges faced by monolingual individuals in India, particularly the elderly, people with impairments, and those from marginalized communities. The proposed work is to create an interactive system, the "Dhvani" voicebot, specifically designed for the Kannada language. It helps users identify suitable government schemes and fills in the forms in English. The proposed system is developed using the RASA chatbot framework and NLP techniques to comprehend user utterances. RNN and SVM algorithms are employed to ensure smooth conversation flow and interaction with the users. To enhance scheme suggestion accuracy, a knowledge graph is created, containing relevant data on government schemes. The intent classification model achieves an accuracy of 97%, indicating its ability to accurately understand user intentions. The integration of a knowledge graph improves the accuracy of scheme identification and suggestion to users. The system automates the process of filling out government scheme forms based on user inputs. The Dhvani voicebot system presents a practical solution to the challenges faced by monolingual individuals in accessing government schemes in India. The high accuracy of intent classification and the use of a knowledge graph contribute to the system's effectiveness. The study suggests that the system can be extended to other languages. The automated "Dhvani" tool will solve the problem of aged, illiterate, and physically challenged persons filling in forms at post offices and banks. Most schemes, pension funds, cash withdrawals, and cash deposits are handled through these organizations, so the tool makes the process easier for the above-mentioned persons without the help of others.
The intent recognition and interactive tool is developed in Kannada, a language widely spoken in Karnataka, India, for which digital resources are very sparse. Technologies such as the interactive tool, a knowledge graph, RNN, and SVM are used in its development. Interactive government scheme recommendation lets users choose a suitable scheme faster. The form is filled automatically and can be edited to rectify mistakes. [ABSTRACT FROM AUTHOR]
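The shape of the intent classification task can be sketched at toy scale. The RASA pipeline, RNN, and SVM details are not given in the abstract, so the stand-in below uses a nearest-centroid bag-of-words classifier over hypothetical English stand-ins for Kannada utterances; the intents and training phrases are invented for illustration:

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words term counts for one utterance."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical training utterances per intent (English stand-ins for Kannada)
TRAIN = {
    "pension_scheme": ["apply for pension", "old age pension form"],
    "cash_withdrawal": ["withdraw cash", "cash withdrawal from account"],
}
CENTROIDS = {intent: sum((bow(u) for u in utts), Counter())
             for intent, utts in TRAIN.items()}

def classify(utterance):
    """Nearest-centroid intent classification by cosine similarity."""
    return max(CENTROIDS, key=lambda i: cosine(bow(utterance), CENTROIDS[i]))

assert classify("I want to apply for the pension form") == "pension_scheme"
```

A production system like Dhvani replaces the bag-of-words centroids with learned embeddings and a trained classifier, but the input-to-intent mapping it must learn is the same.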
- Published
- 2024
- Full Text
- View/download PDF
221. Integrating AI with MBSE for Data Extraction from Medical Standards.
- Author
-
Ghanawi, Ibrahim, Chami, Mohammad Wissam, Chami, Mohammad, Coric, Marko, and Abdoun, Nabil
- Subjects
LANGUAGE models ,SYSTEMS engineering ,ARTIFICIAL intelligence ,DIGITIZATION ,ALGORITHMS - Abstract
The growing adoption of Model‐Based Systems Engineering (MBSE) in the medical sector has prompted a significant emphasis on the digitization of medical standards into norm models. This transformation promotes consistency and allows for tracing system model elements to the corresponding norm model elements. Despite these efforts, the current digitization activities heavily rely on manual extraction and transformation, particularly from PDF documents into SysML models. Concurrently, the proliferation of Artificial Intelligence (AI) applications in recent years presents an opportunity to automate such activities. This paper contributes to the integration of AI with MBSE, focusing solely on the extraction and transformation of medical standards information from documents into SysML norm models. It explores the initial outcomes of augmenting data extraction from medical standards using recent AI algorithms and integrating them into MBSE practices. The evaluation involves two approaches, an open‐source multimodal classifier model and a proprietary large language model. The study assesses these approaches on a medical standard and outlines future work, including the exploration of an open‐source large language model approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
222. Accommodating Machine Learning Algorithms in Professional Service Firms.
- Author
-
Faulconbridge, James R., Sarwar, Atif, and Spring, Martin
- Subjects
MACHINE learning ,PROFESSIONAL corporations ,PROFESSIONAL employee training ,ARTIFICIAL intelligence ,INTELLIGENCE service - Abstract
Machine learning algorithms, as one form of artificial intelligence, are significant for professional work because they create the possibility for some predictions, interpretations and judgements that inform decision-making to be made by algorithms. However, little is known about whether it is possible to transform professional work to incorporate machine learning while also addressing negative responses from professionals whose work is changed by inscrutable algorithms. Through original empirical analysis of the effects of machine learning algorithms on the work of accountants and lawyers, this paper identifies the role of accommodating machine learning algorithms in professional service firms. Accommodating machine learning algorithms involves strategic responses that both justify adoption in the context of the possibilities and new contributions of machine learning algorithms and respond to the algorithms' limitations and opaque and inscrutable nature. The analysis advances understanding of the processes that enable or inhibit the cooperative adoption of artificial intelligence in professional service firms and develops insights relevant when examining the long-term impacts of machine learning algorithms as they become ever more sophisticated. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
223. A Novel Binary Drawer Algorithm for Feature Selection in AI Application.
- Author
-
Azeez, Hasan
- Subjects
FEATURE selection ,ALGORITHMS ,METAHEURISTIC algorithms ,ARTIFICIAL intelligence - Abstract
In artificial intelligence (AI) applications, to make informed decisions, relevant data must be gathered from vast databases. Choosing only the relevant and desirable characteristics would have a significant impact on the accuracy of the model. The primary goal of feature selection is to remove unneeded features, reducing complexity. This paper presents Binary Drawer Algorithm (BinDA) for feature selection. The Drawer Algorithm (DA) is a novel metaheuristic algorithm inspired by the process of selecting objects from several drawers to build an optimal combination. The standard DA has been enhanced with major features to increase its overall performance. The local search algorithm is a novel addition to the DA algorithm that improves its exploitation capacity. In order to determine how well the algorithm works, it is compared to others and tested on a collection of 20 datasets. The proposed BinDA is assessed in contrast to 7 modern wrapper feature selection techniques. The results show that the proposed BinDA algorithm regularly performs better than existing algorithms. [ABSTRACT FROM AUTHOR]
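The local-search component that BinDA adds for exploitation can be illustrated in isolation. The sketch below runs bit-flip hill climbing over a binary feature mask against a toy wrapper objective; the objective is a hypothetical stand-in for classifier accuracy (the real algorithm evaluates a learner on each feature subset):

```python
import random

def fitness(mask, relevant, alpha=0.01):
    """Toy wrapper objective: reward selected relevant features, lightly
    penalize subset size (stand-in for accuracy minus complexity)."""
    hits = sum(1 for i in relevant if mask[i])
    return hits - alpha * sum(mask)

def local_search(mask, relevant, iters=200, seed=0):
    """Bit-flip hill climbing: flip one random bit, keep the flip only if
    fitness improves (the kind of exploitation step BinDA adds)."""
    rng = random.Random(seed)
    best, best_f = mask[:], fitness(mask, relevant)
    for _ in range(iters):
        cand = best[:]
        cand[rng.randrange(len(cand))] ^= 1
        f = fitness(cand, relevant)
        if f > best_f:
            best, best_f = cand, f
    return best

relevant = {1, 4, 7}
sel = local_search([0] * 10, relevant)
assert {i for i, b in enumerate(sel) if b} == relevant
```

In the full metaheuristic, this exploitation step refines candidate drawers produced by the population-level search rather than starting from an empty mask.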
- Published
- 2024
- Full Text
- View/download PDF
224. Signal Processing with Computational Topology.
- Author
-
Oghuz, Kamala Shirin and Safarov, Fazil Tarlan
- Subjects
SIGNAL processing ,COMPUTER software development ,HEALTH outcome assessment ,ARTIFICIAL intelligence ,ALGORITHMS - Abstract
The objective of this research is to analyze the mathematical methods of signal processing on abstract geometric spaces of the given sensor network, develop an algorithm in a tangible software development environment, and visualize the usefulness of those algorithms. Specifically, the scope of this research covers the notions of persistence in computational topology and how this notion can be observed in certain mathematical features of the given network. Moreover, we will be analyzing sheaves constructed over a network of sensors in the topological space, from the point of view of persistence modules and persistent sheaf cohomology, visualizing the persistent sheaf cohomology over barcode diagrams. The research objective is analyzed in detail in the introduction. In the first part of the main body, we have analyzed the mathematical background and previous research works in this domain to derive insight into developing an algorithm. In the second section of the main body, we discuss and develop the algorithms based on computational-topology-theoretic approaches. In this paper, we will develop the algorithms to construct simplicial complexes, obtain filtrations, construct sheaves, and compute persistent sheaf cohomology. The resulting barcode diagrams are analyzed in terms of certain notions, e.g., robustness, stability, and the Betti numbers. In the end, the primary findings will be succinctly expressed in several points. [ABSTRACT FROM AUTHOR]
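The Betti numbers underlying such barcode analyses can be computed concretely from boundary-matrix ranks. The sketch below handles ordinary simplicial homology over GF(2) only (no sheaves or filtrations), using the identity β_k = dim C_k − rank ∂_k − rank ∂_{k+1}:

```python
from itertools import combinations

def rank_gf2(rows):
    """Rank of a GF(2) matrix; each row is an int bitmask (xor-basis trick)."""
    basis = []
    for row in rows:
        for b in basis:
            row = min(row, row ^ b)
        if row:
            basis.append(row)
    return len(basis)

def betti_numbers(vertices, edges, triangles):
    """beta_0 and beta_1 of a simplicial complex over GF(2):
    beta_k = dim C_k - rank d_k - rank d_(k+1)."""
    v_index = {v: i for i, v in enumerate(vertices)}
    e_index = {e: i for i, e in enumerate(edges)}
    d1 = [0] * len(vertices)            # rows = vertices, column bits = edges
    for j, (a, b) in enumerate(edges):
        d1[v_index[a]] |= 1 << j
        d1[v_index[b]] |= 1 << j
    d2 = [0] * len(edges)               # rows = edges, column bits = triangles
    for j, t in enumerate(triangles):
        for face in combinations(t, 2):
            d2[e_index[face]] |= 1 << j
    r1, r2 = rank_gf2(d1), rank_gf2(d2)
    return len(vertices) - r1, len(edges) - r1 - r2

# Hollow triangle = a circle: one component, one loop
assert betti_numbers([0, 1, 2], [(0, 1), (0, 2), (1, 2)], []) == (1, 1)
# Filling in the triangle kills the loop
assert betti_numbers([0, 1, 2], [(0, 1), (0, 2), (1, 2)], [(0, 1, 2)]) == (1, 0)
```

A persistence computation repeats this rank bookkeeping across a filtration, tracking when each homology class is born and dies; those lifetimes are the bars in the barcode diagrams the paper visualizes.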
- Published
- 2024
- Full Text
- View/download PDF
225. Echoing Mechanism of Juvenile Delinquency Prevention and Occupational Therapy Education Guidance Based on Artificial Intelligence.
- Author
-
Hou, Fang
- Subjects
PREVENTION of juvenile delinquency ,CRIME prevention ,MEMORY ,CULTURE ,OCCUPATIONAL therapy education ,INTERNET ,ARTIFICIAL intelligence ,TEENAGERS' conduct of life ,DESCRIPTIVE statistics ,DATA analytics ,ARTIFICIAL neural networks ,NEEDS assessment ,LOGISTIC regression analysis ,ALGORITHMS - Abstract
In this paper, in-depth research and analysis of juvenile delinquency prevention and occupational therapy education guidance using artificial intelligence are conducted, and the corresponding response mechanism is designed. Two crime type prediction algorithms are proposed: one based on time-crime-type count vectorization and a dense neural network, and one based on the fusion of a dense neural network and a long short-term memory (LSTM) neural network. The outputs of both are fed into a new neural network for training to achieve the fusion of the two networks. The dense neural network can effectively fit the relationship between the constructed features and crime types. The behavioral manifestations and causes of the formation of deviant behavior in adolescents are discussed. The neural networks can only read numerical data, but much information closely related to the training effect lies in the textual data, so knowledge must be extracted from it when experimenting and building applications. Practical work with adolescents exhibiting deviant behaviors is carried out through group work and casework, respectively, with problem diagnosis, needs assessment, and service plan development for specific clients. The causes of juvenile delinquency in Internet culture are discussed in terms of the Internet environment, juvenile use of the Internet, Internet supervision, and crime prevention education, respectively. The fourth chapter focuses on the analysis of prevention and control measures for juvenile delinquency in cyberculture.
In response to the above-mentioned causes of juvenile delinquency in cyberculture, the prevention and control measures are discussed in four aspects, namely, strengthening the construction of cyberculture and building a healthy cyber environment, strengthening the capacity building of guiding juveniles to use cyber correctly, building a prevention and supervision system to promote the improvement of the legal system, and improving and innovating the crime prevention education in the cyber era. [ABSTRACT FROM AUTHOR]
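The time-crime-type count vectorization that feeds the dense network can be sketched simply. This is our reading of the feature construction from the abstract; the window width and crime-type set below are illustrative, not the paper's:

```python
from collections import Counter

def count_vectorize(events, crime_types, window):
    """Bucket (timestamp, crime_type) events into fixed-width time windows
    and emit one count vector per window, ordered by `crime_types`."""
    buckets = {}
    for ts, ct in events:
        buckets.setdefault(ts // window, Counter())[ct] += 1
    return [[c[t] for t in crime_types] for _, c in sorted(buckets.items())]

events = [(0, "theft"), (5, "fraud"), (12, "theft"), (13, "theft")]
vectors = count_vectorize(events, ["theft", "fraud"], window=10)
# window [0,10): one theft, one fraud; window [10,20): two thefts
assert vectors == [[1, 1], [2, 0]]
```

Each resulting vector is one numerical training sample, which is exactly why the abstract notes that information living only in textual data must first be extracted into such features.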
- Published
- 2022
- Full Text
- View/download PDF
226. Animation Design of Multisensor Data Fusion Based on Optimized AVOD Algorithm.
- Author
-
Ding, Li, Wei, Guobing, and Zhang, Kai
- Subjects
MULTISENSOR data fusion ,OBJECT recognition (Computer vision) ,ALGORITHMS ,DATABASES ,ARTIFICIAL intelligence ,3-D animation ,MACHINE learning - Abstract
With the development of artificial intelligence, the Internet of Things, machine learning, and many other technologies, animation design based on algorithm theory has become a research hotspot in the field. In recent years, perception technology has gradually become the key technology of animation design, and it is also the key research content in the current field. Whether the perception system can design animation quickly and accurately is the key question of this research. Compared with other algorithms, using the AVOD (Aggregate View Object Detection) algorithm for animation design has obvious advantages. The original AVOD algorithm has some problems, such as low clustering efficiency, an insufficiently deep feature extraction network, and high memory consumption. Based on this, this paper proposes to use the GoogLeNet network for feature extraction and k-means++ initialization, establishing an optimized AVOD algorithm. At the same time, in order to illustrate the effectiveness of the optimization method, two typical cases are introduced to provide scientific guidance and reference for research in this field. [ABSTRACT FROM AUTHOR]
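The k-means++ initialization mentioned above is easy to show in one dimension (illustrative only; AVOD-style pipelines cluster multi-dimensional anchor geometry, not scalars):

```python
import random

def kmeans_pp_init(points, k, seed=0):
    """k-means++ seeding: after a uniform first pick, each new center is
    drawn with probability proportional to its squared distance from the
    nearest center chosen so far."""
    rng = random.Random(seed)
    centers = [rng.choice(points)]
    while len(centers) < k:
        d2 = [min((p - c) ** 2 for c in centers) for p in points]
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers

points = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]   # two well-separated clusters
centers = kmeans_pp_init(points, k=2)
assert len(centers) == 2
assert min(centers) < 5 < max(centers)       # seeds land in both clusters
```

Because the second seed is drawn in proportion to squared distance, it almost always lands in the cluster the first seed missed, which is exactly the spread that plain random initialization fails to guarantee.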
- Published
- 2022
- Full Text
- View/download PDF
227. Machine English Translation Evaluation System Based on BP Neural Network Algorithm.
- Author
-
Han, Yanlin and Meng, Shaoxiu
- Subjects
MACHINE translating ,ARTIFICIAL intelligence ,ALGORITHMS ,QUALITY of service ,ERROR rates - Abstract
In order to solve the problems of machine translation efficiency and translation quality, this paper proposes an English translation evaluation system based on the BP neural network algorithm. This method provides users with a more intelligent machine translation service experience. With the help of the BP neural network algorithm, and taking English online translation as the research object, Google's translation quality is found to be the best, with an error frequency of only 167, while Baidu Translate and iFLYTEK Translate have much higher error frequencies of 266 and 301, respectively. A machine translation evaluation model based on the neural network algorithm is proposed to better overcome the disadvantages of traditional English machine translation. The results show that the machine translation system based on the neural network algorithm can further mitigate existing problems in machine translation, such as insufficient use of information and the large scale of model parameters, and further improve the performance of neural machine translation. [ABSTRACT FROM AUTHOR]
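The BP (backpropagation) algorithm behind such an evaluation model can be shown at its smallest useful scale: a 2-2-1 sigmoid network trained on XOR with hand-written gradients. This is a generic sketch of BP itself, not the paper's evaluation model:

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_xor(epochs=3000, lr=0.5, seed=0):
    """Minimal BP network, 2 inputs -> 2 hidden -> 1 output, sigmoid units."""
    rng = random.Random(seed)
    W1 = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # +bias
    W2 = [rng.uniform(-1, 1) for _ in range(3)]                      # +bias
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

    def forward(x):
        h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in W1]
        return h, sigmoid(W2[0] * h[0] + W2[1] * h[1] + W2[2])

    def loss():
        return sum((forward(x)[1] - t) ** 2 for x, t in data)

    start = loss()
    for _ in range(epochs):
        for x, t in data:
            h, y = forward(x)
            dy = (y - t) * y * (1 - y)                    # output delta
            dh = [dy * W2[i] * h[i] * (1 - h[i]) for i in range(2)]
            for i in range(2):                            # backpropagate
                W2[i] -= lr * dy * h[i]
                W1[i][0] -= lr * dh[i] * x[0]
                W1[i][1] -= lr * dh[i] * x[1]
                W1[i][2] -= lr * dh[i]
            W2[2] -= lr * dy
    return start, loss()

start, end = train_xor()
assert end < start   # training reduces squared error
```

The evaluation system in the abstract uses the same gradient machinery, only with features describing candidate translations instead of XOR bits.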
- Published
- 2022
- Full Text
- View/download PDF
228. A Russian Continuous Speech Recognition System Based on the DTW Algorithm under Artificial Intelligence.
- Author
-
Yu, Chunping and Wang, Xin
- Subjects
AUTOMATIC speech recognition ,ARTIFICIAL intelligence ,SPEECH perception ,ALGORITHMS - Abstract
In order to improve the effect of continuous speech recognition, this paper combines the DTW algorithm to construct a continuous Russian speech recognition system and proposes a continuous Russian speech detection method based on VGDTW-MPCA with an unequal interval process. Moreover, considering the influence of the correlation between variables on the synchronization of the DTW algorithm, this paper constructs a DTW algorithm on a local data set to synchronize in different variable groups. Then, this paper integrates the obtained data into complete 3D data for modeling. It can be seen from the simulation research that the Russian continuous speech recognition system based on DTW proposed in this paper has a high continuous Russian speech recognition accuracy. [ABSTRACT FROM AUTHOR]
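The core DTW recurrence that such recognizers build on is short enough to show directly. This is the textbook algorithm, not the paper's VGDTW-MPCA variant:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences: the minimum
    cumulative |a_i - b_j| cost over all monotonic alignments."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# identical sequences align at zero cost, and so does a time-stretched copy,
# which is why DTW suits speech spoken at varying rates
assert dtw_distance([1, 2, 3], [1, 2, 3]) == 0.0
assert dtw_distance([1, 2, 3], [1, 1, 2, 2, 3, 3]) == 0.0
```

In speech recognition the scalars are replaced by acoustic feature vectors per frame, but the warping recurrence is unchanged.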
- Published
- 2022
- Full Text
- View/download PDF
229. Cognitive Artificial Intelligence Using Bayesian Computing Based on Hybrid Monte Carlo Algorithm.
- Author
-
Park, Sangsung and Jun, Sunghae
- Subjects
COGNITIVE computing ,ARTIFICIAL intelligence ,EMOTIONS ,ALGORITHMS ,BAYESIAN field theory ,HUMAN experimentation - Abstract
Cognitive artificial intelligence (CAI) is an intelligent machine that thinks and behaves similar to humans. CAI also has an ability to mimic human emotions. With the development of AI in various fields, the interest and demand for CAI are continuously increasing. Most of the current AI research focuses on the realization of intelligence that can make optimal decisions. Existing AI studies have not conducted in-depth research on human emotions and cognitive perspectives. However, in the future, the demand for the use of AI that can imitate human emotions in various fields, such as healthcare and education, will continue. Therefore, we propose a method to build CAI in this paper. We also use Bayesian inference and computing based on the hybrid Monte Carlo algorithm for CAI development. To show how the proposed method for CAI can be applied to practical problems, we create an experiment using simulation data. [ABSTRACT FROM AUTHOR]
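The hybrid (Hamiltonian) Monte Carlo machinery the authors use centers on the leapfrog integrator, sketched here for a standard-normal target; the target and step sizes are illustrative, not the paper's model:

```python
def leapfrog(q, p, grad_u, eps, steps):
    """Leapfrog integration of Hamiltonian dynamics: half-step momentum,
    alternating full steps, closing half-step momentum."""
    p -= 0.5 * eps * grad_u(q)
    for _ in range(steps - 1):
        q += eps * p
        p -= eps * grad_u(q)
    q += eps * p
    p -= 0.5 * eps * grad_u(q)
    return q, p

# Target: standard normal, U(q) = q^2 / 2, so grad U(q) = q
U = lambda q: 0.5 * q * q
grad_U = lambda q: q

q0, p0 = 1.0, 0.5
q1, p1 = leapfrog(q0, p0, grad_U, eps=0.01, steps=100)
H0 = U(q0) + 0.5 * p0 * p0
H1 = U(q1) + 0.5 * p1 * p1
# the integrator nearly conserves the Hamiltonian, which is what keeps
# the Metropolis acceptance rate high in hybrid Monte Carlo
assert abs(H1 - H0) < 1e-3
```

A full HMC sampler wraps this in a loop: sample a fresh momentum, run leapfrog, then accept or reject the endpoint with probability min(1, exp(H0 − H1)).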
- Published
- 2022
- Full Text
- View/download PDF
230. Design of New Working Environment Based on Artificial Intelligence Algorithm.
- Author
-
Zhou, Boping and Qu, Lihong
- Subjects
ARTIFICIAL intelligence ,TECHNOLOGICAL innovations ,WORK design ,ALGORITHMS - Abstract
With the three industrial revolutions sweeping the world, and especially since the third, the complexity of human work has greatly increased. In the new era of technology and information, workers have new standards and requirements for their work environment, and a new form of work environment design has come into being. In this paper, a work environment system is designed with an artificial intelligence algorithm to improve workers' environments by assessing the quality of their natural and social work environments. An intelligent service system, also designed with an artificial intelligence algorithm, not only analyzes and processes the work environment assessment results but also acts on reasonable requests made by the workers, helping them maintain a good mood at work and improve their efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
231. Effect of Intelligent Medical Data Technology in Postoperative Nursing Care.
- Author
-
Duan, Ninggui and Lin, Guangbo
- Subjects
NURSING ,POSTOPERATIVE care ,ARTIFICIAL intelligence ,MACHINE learning ,SURGERY ,PATIENTS ,INTERNET of things ,DATA analytics ,ALGORITHMS - Abstract
Conventional surgery leaves relatively large wounds, and patients often experience varying pain and postural discomfort after surgery. With the ever-changing standards of medical care and patient care requirements, providing high-quality care to postoperative patients is an important measure to reduce complications and promote rapid recovery. However, with traditional postsurgical nursing methods, patients are often matched to the wrong records, patient data are disorganized, and medicines are counted incorrectly, which directly leads to a rapid decline in nursing efficiency. In the context of the rapid development of artificial intelligence and big data, intelligent medical data analysis technology has gradually been integrated into the medical field. This paper analyzes and studies the application effect of intelligent medical data analysis technology in postoperative nursing. It aims to change traditional postoperative nursing methods and improve nursing efficiency, and it provides important suggestions for the development of postoperative nursing in the new era. Combining big data and Internet of Things technology, this paper builds a smart medical Internet of Things framework and an intelligent postoperative care system and uses machine learning algorithms to preprocess relevant medical data. The final experimental results show that intelligent medical data analysis technology has a good effect on improving nursing efficiency after surgery, with nursing efficiency increasing by 6.9%. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
232. Performance of Deep Learning-Based Algorithm for Detection of Pediatric Intussusception on Abdominal Ultrasound Images.
- Author
-
Li, Zheming, Song, Chunze, Huang, Jian, Li, Jing, Huang, Shoujiang, Qian, Baoxin, Chen, Xing, Hu, Shasha, Shu, Ting, and Yu, Gang
- Subjects
ULTRASONIC imaging ,INTESTINAL intussusception ,CHILDREN'S hospitals ,ALGORITHMS ,DEEP learning ,ARTIFICIAL intelligence ,SIGNAL convolution - Abstract
Background and Aims. Diagnosing pediatric intussusception from ultrasound images can be a difficult task in many primary care hospitals that lack experienced radiologists. To address this challenge, this study developed an artificial intelligence (AI)-based system for automatic detection of "concentric circles" signs on ultrasound images, thereby improving the efficiency and accuracy of pediatric intussusception diagnosis. Methods. A total of 440 cases (373 pediatric intussusception and 67 normal cases) were retrospectively collected from Children's Hospital affiliated to Zhejiang University School of Medicine from January 2020 to December 2020. An improved Faster RCNN deep learning framework was used to detect "concentric circle" signs. Finally, an independent validation set was used to evaluate the performance of the developed AI tool. Results. The pediatric intussusception data were divided in a ratio of 8:2 into a training set (298 pediatric intussusception cases) and a validation set (75 pediatric intussusception and 67 normal cases). In the "concentric circle" detection model, the detection rate, recall, specificity, and F1 score assessed on the validation set were 92.8%, 95.0%, 92.2%, and 86.4%, respectively. Pediatric intussusception was classified by "concentric circle" signs, and the accuracy, recall, specificity, and F1 score were 93.0%, 92.0%, 94.1%, and 93.2% on the validation set, respectively. Conclusion. The model established in this paper can realize the automatic detection of "concentric circle" signs in the ultrasound images of abdominal intussusception in children; the AI tool can improve the diagnosis speed of pediatric intussusception. It is necessary to further develop an artificial intelligence system for real-time detection of "concentric circles" in ultrasound images for the judgment of children with intussusception. [ABSTRACT FROM AUTHOR]
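The reported metrics follow directly from confusion-matrix counts. A small sketch with hypothetical counts (not the study's data) shows the definitions:

```python
def detection_metrics(tp, fp, tn, fn):
    """Precision, recall (sensitivity), specificity and F1 from
    confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, specificity, f1

# hypothetical counts for illustration only
p, r, s, f1 = detection_metrics(tp=90, fp=10, tn=50, fn=5)
assert abs(p - 0.9) < 1e-9
```

Note that recall and specificity trade off through the detection threshold, which is why the study reports both alongside F1 rather than accuracy alone.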
- Published
- 2022
- Full Text
- View/download PDF
233. In-process identification of milling parameters based on digital twin driven intelligent algorithm.
- Author
-
Zheng, Charles Ming, Zhang, Lu, Kang, Yaw-Hong, Zhan, Youji, and Xu, Yongchao
- Subjects
DIGITAL twins ,PARAMETER identification ,CYBER physical systems ,ALGORITHMS ,INDUSTRY 4.0 ,ARTIFICIAL intelligence - Abstract
The potential benefits of Industry 4.0 have led to an increased interest in smart manufacturing. To facilitate the self-diagnosis and adaptive ability in smart milling system, a digital twin–driven intelligent algorithm for monitoring in-process milling parameters is proposed here. The algorithm can extract the radial width of cut, axial depth of cut, cutter runout parameters, and cutting constants in the end milling process at the same time only by using force sensor. It is an important breakthrough in this paper to converge two different force models to realize cyber-physical fusion for identifying milling parameters in the milling process. By using the convolution force model, digital twin technology can extract the approximate solution of milling parameters in the machining process in advance, so as to narrow the range of solution. Furthermore, the subsequent artificial intelligence algorithm can find the accurate solution of the current milling parameters in a short calculation time by cyber-physical fusion with the numerical force model considering cutter runout effect. Milling experiments are carried out to validate the proposed algorithm. It is shown that due to the complementary advantages of the convolution force model and numerical force model, the algorithm proposed in this paper can give consider to the identification accuracy and calculation efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
234. Study Film and Television Postproduction and Innovation Strategy Based on an Artificial Intelligence Algorithm.
- Author
-
Han, Jing and Shao, Lin
- Subjects
TELEVISION broadcasting of films ,ARTIFICIAL intelligence ,TELEVISION production & direction ,FILMMAKING ,ALGORITHMS ,DIGITAL video - Abstract
In the process of choosing the best scheme in the artificial intelligence algorithm, it is impossible to accurately judge the nonlinear relationship between the innovation strategy and the film and television postproduction scheme. An improved artificial intelligence algorithm based on the integration of dynamic factors and the artificial intelligence algorithm is proposed to reduce the disturbance ability of the artificial intelligence algorithm and improve the analysis level of film and television postproduction and innovation strategy. Firstly, the initial innovation strategy set of the production set is established by using dynamic factors, which makes it discrete and reduces the influence of the scheme selection error on the results. Then, the production set is divided into dynamic subproduction sets by using the film and television production theory, and each subproduction set seeks its own parallel innovation strategy. Finally, under the guidance of film and television production theory, each subproduction set shares the matching of optimal solutions. Through MATLAB simulation analysis and verification, the improved dynamic artificial intelligence algorithm can improve the accuracy of judging the innovation strategy of film and television works in an uncertain environment and shorten the convergence time of global feature solution and is superior to the original selection method of film and television production strategy. In addition, under that condition the initial weight scheme and the threshold scheme are set. The artificial intelligence algorithm is used to analyze the innovation strategy selection of youth idol works. 
The results show that under different film and television production requirements, the innovation strategy selection judgment of the artificial intelligence algorithm is accurate and superior to the original film and television production strategy selection method, which further verifies the effectiveness of the artificial intelligence algorithm proposed in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
235. A Survey of Research on YOLO-Based Object Detection for Autonomous Driving [基于YOLO的自动驾驶目标检测研究综述].
- Author
-
茅智慧, 朱佳利, 吴鑫, and 李君
- Subjects
TRAFFIC signs & signals ,ARTIFICIAL intelligence ,TRAFFIC violations ,INTELLIGENT transportation systems ,PEDESTRIANS ,AUTONOMOUS vehicles ,ALGORITHMS ,DRIVERLESS cars - Abstract
Copyright of Journal of Computer Engineering & Applications is the property of Beijing Journal of Computer Engineering & Applications Journal Co Ltd. and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2022
- Full Text
- View/download PDF
236. Adjusting the Stiffness of Supports during Milling of a Large-Size Workpiece Using the Salp Swarm Algorithm.
- Author
-
Kaliński, Krzysztof J., Galewski, Marek A., Stawicka-Morawska, Natalia, Mazur, Michał, and Parus, Arkadiusz
- Subjects
WORKPIECES ,FINITE element method ,SWARM intelligence ,ALGORITHMS ,ARTIFICIAL intelligence - Abstract
This paper concerns the problem of vibration reduction during milling. For this purpose, it is proposed that the standard supports of the workpiece be replaced with adjustable stiffness supports. This affects the modal parameters of the whole system, i.e., object and its supports, which is essential from the point of view of the relative tool–workpiece vibrations. To reduce the vibration level during milling, it is necessary to appropriately set the support stiffness coefficients, which are obtained from numerous milling process simulations. The simulations utilize the model of the workpiece with adjustable supports in the convention of a Finite Element Model (FEM) and a dynamic model of the milling process. The FEM parameters are tuned based on modal tests of the actual workpiece. For assessing simulation results, the proper indicator of vibration level must be selected, which is also discussed in the paper. However, simulating the milling process is time consuming and the total number of simulations needed to search the entire available range of support stiffness coefficients is large. To overcome this issue, the artificial intelligence salp swarm algorithm is used. Finally, for the best combination of stiffness coefficients, the vibration reduction is obtained and a significant reduction in search time for determining the support settings makes the approach proposed in the paper attractive from the point of view of practical applications. [ABSTRACT FROM AUTHOR]
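The salp swarm search itself can be sketched on a toy objective. The update rules below follow the common formulation (a leader exploring around the best-known "food" position with a decaying coefficient, followers chaining by midpoints); the sphere function stands in for the paper's milling-simulation objective, which cannot be reproduced here:

```python
import math, random

def salp_swarm(obj, dim, lb, ub, n_salps=20, iters=200, seed=1):
    """Minimal salp swarm optimizer: leader explores around the best
    position found ("food"), followers chain toward their predecessor."""
    rng = random.Random(seed)
    X = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_salps)]
    food, food_f = min(((x[:], obj(x)) for x in X), key=lambda t: t[1])
    for t in range(1, iters + 1):
        c1 = 2 * math.exp(-(4 * t / iters) ** 2)   # exploration -> exploitation
        for i in range(n_salps):
            if i == 0:  # leader
                for d in range(dim):
                    step = c1 * ((ub - lb) * rng.random() + lb)
                    X[i][d] = food[d] + step if rng.random() < 0.5 else food[d] - step
            else:       # follower: midpoint with predecessor
                X[i] = [(X[i][d] + X[i - 1][d]) / 2 for d in range(dim)]
            X[i] = [min(ub, max(lb, v)) for v in X[i]]
            f = obj(X[i])
            if f < food_f:
                food, food_f = X[i][:], f
    return food, food_f

sphere = lambda x: sum(v * v for v in x)
best, best_f = salp_swarm(sphere, dim=3, lb=-5.0, ub=5.0)
assert best_f < 1.0    # swarm converges near the minimum at the origin
```

In the paper's setting, each objective evaluation is a full milling-process simulation over candidate support stiffness coefficients, which is exactly why reducing the number of evaluations matters.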
- Published
- 2022
- Full Text
- View/download PDF
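The salp swarm search described in entry 236 can be sketched in miniature. The following is an illustrative, generic salp swarm algorithm, not the authors' implementation: the sphere objective stands in for the milling-simulation cost, and all parameter values (swarm size, iteration count, bounds) are assumptions.

```python
import math
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def salp_swarm(obj, dim, lb, ub, n_salps=30, max_iter=200):
    # Initialize salp positions uniformly in [lb, ub]
    swarm = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n_salps)]
    food = min(swarm, key=obj)[:]  # best solution found so far ("food source")

    for t in range(1, max_iter + 1):
        # Coefficient c1 decays over iterations: explore first, exploit later
        c1 = 2 * math.exp(-(4 * t / max_iter) ** 2)
        for i in range(n_salps):
            if i == 0:  # the leader moves around the food source
                for d in range(dim):
                    c2, c3 = random.random(), random.random()
                    step = c1 * ((ub - lb) * c2 + lb)
                    swarm[i][d] = food[d] + step if c3 >= 0.5 else food[d] - step
            else:       # followers move toward the salp ahead of them
                for d in range(dim):
                    swarm[i][d] = (swarm[i][d] + swarm[i - 1][d]) / 2
            swarm[i] = [min(max(v, lb), ub) for v in swarm[i]]  # keep in bounds
            if obj(swarm[i]) < obj(food):
                food = swarm[i][:]
    return food

# Toy stand-in for the stiffness-coefficient search: minimize a sphere function.
best = salp_swarm(lambda x: sum(v * v for v in x), dim=3, lb=-5.0, ub=5.0)
print(best)  # coordinates close to the optimum [0, 0, 0]
```

In the paper's setting, each objective evaluation is a full milling simulation, which is exactly why a derivative-free swarm search that needs relatively few evaluations is attractive.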
237. Error-Resistant Movement Detection Algorithm for the Elderly with Smart Mirror.
- Author
-
Yang, Bo-Seung, Kang, Tae-Won, Choi, Yong-Sik, and Jung, Jin-Woo
- Subjects
POSE estimation (Computer vision) ,OLDER people ,MIRRORS ,ALGORITHMS - Abstract
As the elderly population increases globally, the demand for systems and algorithms that target the elderly is increasing. Focusing on the extendibility of smart mirrors, our purpose is to create a motion detection system based on video input from an attached device (an RGB camera). The motion detection system presented in this paper is based on an algorithm that returns a Boolean value indicating the detection of motion based on skeletal information. We analyzed the problems that occur when the adjacent frame subtraction method (AFSM) is applied to the skeleton output of a pose estimation model, and compared the motion recognition rate for slow motion between the previously used AFSM and the vector sum method (VSM) proposed in this paper. In the experiments, the slow-motion detection rate increased by 30–70%. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
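The contrast between adjacent frame subtraction and a vector sum of skeletal displacements in entry 237 can be illustrated with a toy sketch. The joint format, window, and thresholds below are assumptions for illustration, not the paper's implementation.

```python
import math

def afsm_detect(joints, threshold=5.0):
    """Adjacent frame subtraction: motion is detected only if some joint moves
    more than `threshold` pixels between two consecutive frames."""
    moved = 0.0
    for prev, curr in zip(joints, joints[1:]):
        moved = max(moved, max(math.dist(p, c) for p, c in zip(prev, curr)))
    return moved > threshold

def vsm_detect(joints, threshold=5.0):
    """Vector sum: accumulate each joint's displacement vectors across the
    whole window, so slow but consistent motion still crosses the threshold."""
    n_joints = len(joints[0])
    sums = [[0.0, 0.0] for _ in range(n_joints)]
    for prev, curr in zip(joints, joints[1:]):
        for j, (p, c) in enumerate(zip(prev, curr)):
            sums[j][0] += c[0] - p[0]
            sums[j][1] += c[1] - p[1]
    return any(math.hypot(dx, dy) > threshold for dx, dy in sums)

# One wrist joint drifting 1 px/frame for 10 frames: too slow per frame for
# AFSM, but the summed displacement (10 px) is caught by the vector sum.
slow = [[(100.0 + t, 200.0)] for t in range(11)]
print(afsm_detect(slow), vsm_detect(slow))  # False True
```

This is the failure mode the abstract describes: per-frame differences of a slowly moving elderly person fall below any reasonable noise threshold, while their vector sum over a window does not.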
238. Blockchain Smart Contract to Prevent Forgery of Degree Certificates: Artificial Intelligence Consensus Algorithm.
- Author
-
Kim, Seong-Kyu
- Subjects
BLOCKCHAINS ,ARTIFICIAL intelligence ,NATURAL language processing ,ALGORITHMS ,FORGERY ,DATA protection - Abstract
Certificates such as diplomas and transcripts are often falsified. Many schools and educational institutions have therefore begun to issue diplomas online; although diplomas can then be issued conveniently anytime and anywhere, in many cases they are still forged through hacking. This paper presents the required Blockchain-based diploma system. In addition, an automatic translation system incorporating natural language processing performs verification without requiring an existing public certificate, and a hash algorithm is used for security authentication. This paper also proposes the use of these security protocols to provide more secure data protection. Transaction histories recording whether a diploma is genuine may vary in length as text, but converting them with a hash function always yields a fixed-length digest (SHA-512 or stronger), which is then verified together with the timestamp values; the chaining codes are designed accordingly. This paper also describes the experimental environment: at least 10 nodes are constructed, Blockchain platform development applies and references Blockchain standardization, and a platform test, measurement test, and performance measurement test assess the smart contract development and performance. Averaging 200 runs over 500 nodes, agreement on a Blockchain-based diploma file was reached simultaneously, with a throughput of about 4100 TPS. In addition, an analysis of the artificial intelligence distribution diagram using a four-point method showed an even distribution, confirming the diploma with the highest similarity, and the verified values were analyzed. This paper proposes these natural language processing-based Blockchain algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
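The fixed-length hashing and timestamp idea in entry 238 can be illustrated as follows. This is a minimal sketch of SHA-512 record verification only, not the paper's consensus algorithm or smart contract; the record fields are assumptions.

```python
import hashlib
import json
import time

def issue_record(diploma: dict) -> dict:
    """Hash a diploma of any length into a fixed-size SHA-512 digest and
    timestamp it; in the paper's scheme such a record would be chained."""
    payload = json.dumps(diploma, sort_keys=True).encode()
    return {
        "digest": hashlib.sha512(payload).hexdigest(),  # always 128 hex chars
        "timestamp": int(time.time()),
    }

def verify(diploma: dict, record: dict) -> bool:
    """Recompute the digest from the presented diploma and compare."""
    payload = json.dumps(diploma, sort_keys=True).encode()
    return hashlib.sha512(payload).hexdigest() == record["digest"]

diploma = {"name": "Jane Doe", "degree": "BSc", "year": 2022}
record = issue_record(diploma)
print(verify(diploma, record))                       # True
print(verify({**diploma, "degree": "MSc"}, record))  # False: tampering detected
```

The point the abstract makes is visible here: inputs of any length map to a digest of fixed length, so any alteration of the diploma text changes the digest and fails verification.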
239. Virtual Patient Diagnosis and Treatment, Networked Medical Devices, and Artificial Intelligence-based Diagnostic Algorithms in the Healthcare Metaverse and Immersive 3D Worlds.
- Author
-
Popescu, Gheorghe H.
- Subjects
SHARED virtual environments ,SIMULATED patients ,MEDICAL equipment ,AVATARS (Virtual reality) ,ALGORITHMS - Abstract
This paper provides a systematic literature review of studies investigating 3D metaverse experiences in Internet of Medical Things-based virtually simulated environments requiring healthcare artificial intelligence algorithms in relation to immersive virtual reality avatars. The analysis highlights that simulation-based digital twins, remote sensing data fusion algorithms, and smart wearable Internet of Medical Things technologies configure virtual healthcare environments and services for disease prevention and treatment, monitoring vital physiological parameters. Throughout June 2022, I performed a quantitative literature review of the Web of Science, Scopus, and ProQuest databases, with search terms including "the healthcare metaverse and immersive 3D worlds" + "virtual patient diagnosis and treatment," "networked medical devices," and "artificial intelligence-based diagnostic algorithms." Of the research published in 2022, only 138 articles satisfied the eligibility criteria. After eliminating controversial findings, outcomes unsubstantiated by replication, overly imprecise material, and sources with similar titles, I selected 24 mostly empirical sources. Data visualization tools: Dimensions (bibliometric mapping) and VOSviewer (layout algorithms). Reporting quality assessment tool: PRISMA. Methodological quality assessment tools: AMSTAR, Distiller SR, MMAT, and ROBIS. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
240. The Future of AI: Engineering and Computing Graduate Students Perspectives on AI and Ethics.
- Author
-
Hooper, Kerrie Danielle and Fletcher, Trina L.
- Subjects
ARTIFICIAL intelligence ,COMPUTING platforms ,ALGORITHMS ,GRADUATE students - Abstract
The Artificial Intelligence (AI) revolution continues to engage with the engineering and computing education world. A machine learning algorithm, or AI application itself, does not always cater to human ideals or ethical considerations. There is a need to be aware of this lack of contextual knowledge in order to design models accordingly. When considering our modern world and striving for diversity, equity, and inclusion, it is essential to ensure that technology works for all. Even though there is an excitement for the advancement of AI, there is also a need to enhance our understanding and consideration of the ethical implications of AI to inform future generations and future AI technology. The education system has a significant role in molding the minds of future AI pioneers and engineers. Therefore, it is vital to understand the attitudes and beliefs of undergraduate and graduate students who will play a pivotal role in the ethical implications of AI advancements. This work-in-progress paper focuses on a survey analysis to examine engineering and computing students' perspectives on ethics in AI before and after taking a course that includes AI and ethics within the syllabus. The following research questions will guide this study: What are the attitudes of engineering and computing students before and after taking a course that covers AI and ethics? In addition, how do their attitudes vary by demographics such as age, gender, and experience? Our goal is to present our current research and survey instrument to the American Society for Engineering Education (ASEE) audience to receive insight and feedback before finalizing the Institutional Review Board (IRB) and distributing it on the target campus. This work-in-progress closes out with the next steps, future work, implications, and concluding thoughts. [ABSTRACT FROM AUTHOR]
- Published
- 2022
241. Automating the assembly planning process to enable design for assembly using reinforcement learning.
- Author
-
Parzeller, Rafael, Koziol, Dominik, Dagner, Tizian, and Gerhard, Detlef
- Subjects
ARTIFICIAL intelligence ,REINFORCEMENT learning ,ALGORITHMS ,COMPUTER-aided design ,LANGUAGE & languages - Abstract
This paper introduces a new concept for the automation of the assembly planning process, to enable Design for Assembly (DfA). The approach involves the application of reinforcement learning (RL) to assembly sequence planning (ASP) based on a 3D-CAD model. The ASP algorithm determines assembly sequences through assembly by disassembly. The assembly sequence is then used for the generation of subassemblies by considering the product contact information. The approach aims to support the creation of the manufacturing bill of materials (MBOM) by automating the assembly planning process. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
242. A review on kidney tumor segmentation and detection using different artificial intelligence algorithms.
- Author
-
Patel, Vinitkumar Vasantbhai and Yadav, Arvind R.
- Subjects
ARTIFICIAL intelligence ,KIDNEY tumors ,ALGORITHMS ,DEEP learning ,DATA warehousing ,MACHINE learning - Abstract
The kidney is one of the most important organs in the human body: it filters the blood, balances fluids, removes waste, and maintains electrolyte and hormone levels. Any disorder or dysfunction of the kidney therefore needs to be detected in time in order to preserve life. Segmentation of kidney tumors is a critical task in the medical field, and many conventional methods have been employed for early prediction of kidney abnormalities, but with limitations such as high cost and extended time for computation and analysis over huge amounts of data. Because of these problems, prediction rates and accuracy have suffered considerably. To overcome these challenges, Artificial Intelligence (AI) technology has penetrated the field of medicine, particularly the renal department. The adoption of AI in kidney care improves the process of diagnosis through several Machine Learning (ML) and Deep Learning (DL) algorithms, which are capable of learning from massive data and applying that knowledge to differentiate between conditions. Large-scale data storage and AI-assisted segmentation are highly helpful for analyzing the occurrence of the disease, and AI algorithms have predicted the severity of tumor stages with good accuracy. Hence, this paper provides a critical review of the different AI-based algorithms used in kidney tumor prognostication. Their numerous benefits for segmentation are surveyed from existing works, and the paper provides insight into the contribution of AI to kidney disease prediction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
243. University of Florence Researcher Broadens Understanding of Artificial Intelligence (A Comprehensive Review of Fault Diagnosis and Prognosis Techniques in High Voltage and Medium Voltage Electrical Power Lines).
- Subjects
ELECTRIC power ,ARTIFICIAL intelligence ,ELECTRIC lines ,HIGH voltages ,FAULT diagnosis ,DIAGNOSIS methods - Abstract
A new report from the University of Florence in Italy provides an extensive review of monitoring methods for electrical power lines, with a focus on high-voltage and medium-voltage systems. The objective of these techniques is to prevent catastrophic failures by detecting partial damage or deterioration of components and allowing for organized maintenance operations. The paper discusses the coordination of protection devices and the implementation of artificial intelligence algorithms to improve the reliability of the network. It also highlights diagnostic techniques, protection devices, and prognostic methods, emphasizing the role of artificial intelligence and offering guidelines for choosing between different approaches. [Extracted from the article]
- Published
- 2023
244. IMPLEMENTATION OF AI CHAT BOT USING PROGRAMMING LANGUAGE GO.
- Author
-
Vukotić, Nikola, Stošović, Slavimir, Stefanović, Dušan, and Cvetković, Aleksandar
- Subjects
ARTIFICIAL intelligence ,ARTIFICIAL neural networks ,NATURAL language processing ,ALGORITHMS ,COMPUTER software - Abstract
In recent years, the field of technical and technological sciences has been marked by the rapid development of artificial intelligence, which has also found wide application in many other areas. The use of neural networks and deep neural networks has made artificial intelligence algorithms work much better than before, and their application has therefore become wider than ever. This progress has driven the rapid development of software such as virtual assistants and chat bots. This paper aims to introduce the reader to the current achievements in the field of natural language processing, using the Wolfram Alpha system. A chat bot was developed that integrates into the popular Slack application, which many teams use in their work. The possibilities for training the model used by the chat bot are discussed, and the results of several different test scenarios are presented. The Go programming language, which is increasingly relevant in the field of Computer Science and has found wide application in many highly reliable systems, was researched and used. [ABSTRACT FROM AUTHOR]
- Published
- 2023
245. FORMALIZATION OF METHODS FOR CONSTRUCTING AUTONOMOUS ARTIFICIAL INTELLIGENCE SYSTEMS.
- Author
-
ЗГУРОВСЬКИЙ, М. З., КАСЬЯНОВ, П. О., and ЛЕВЕНЧУК, Л. Б.
- Subjects
ARTIFICIAL intelligence ,DYNAMIC programming ,MATHEMATICAL models ,MARKOV processes ,ALGORITHMS ,COMPUTATIONAL complexity - Abstract
This paper explores the problem of formalizing the development of autonomous artificial intelligence systems (AAIS) whose mathematical models may be complex or non-identifiable. Using the value-iteration method for Q-functions of rewards, a methodology for constructing ε-optimal strategies with a given accuracy has been developed. The results allow us to outline classes of systems (including dual-use ones) for which the construction of optimal and ε-optimal strategies can be rigorously justified, even when the models are identifiable but the computational complexity of standard dynamic programming algorithms may not be strictly polynomial. [ABSTRACT FROM AUTHOR]
- Published
- 2023
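The value-iteration construction of ε-optimal strategies mentioned in entry 245 can be illustrated on a toy Markov decision process. The two-state MDP and the stopping rule below are standard illustrative assumptions, not the paper's formalism.

```python
# Q-value iteration on a toy 2-state, 2-action MDP; iteration stops when the
# sup-norm change guarantees an epsilon-optimal greedy strategy (the standard
# contraction bound for discounted MDPs).
GAMMA, EPS = 0.9, 1e-3
# P[s][a] = list of (probability, next_state); R[s][a] = immediate reward
P = {0: {0: [(1.0, 0)], 1: [(1.0, 1)]},
     1: {0: [(1.0, 0)], 1: [(1.0, 1)]}}
R = {0: {0: 0.0, 1: 1.0},
     1: {0: 0.0, 1: 2.0}}

Q = {s: {a: 0.0 for a in (0, 1)} for s in (0, 1)}
while True:
    # Bellman update: Q(s,a) = R(s,a) + gamma * E[max_a' Q(s',a')]
    new_Q = {s: {a: R[s][a] + GAMMA * sum(p * max(Q[s2].values())
                                          for p, s2 in P[s][a])
                 for a in (0, 1)} for s in (0, 1)}
    delta = max(abs(new_Q[s][a] - Q[s][a]) for s in (0, 1) for a in (0, 1))
    Q = new_Q
    if delta < EPS * (1 - GAMMA) / (2 * GAMMA):  # epsilon-optimality stopping rule
        break

# The greedy strategy with respect to the converged Q is epsilon-optimal.
policy = {s: max(Q[s], key=Q[s].get) for s in (0, 1)}
print(policy)  # {0: 1, 1: 1}: always move to / stay in state 1
```

Here Q(1,1) converges to 2/(1 − 0.9) = 20, and the greedy policy is recovered without ever enumerating full strategies, which is the practical appeal of the Q-function value-iteration approach.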
246. CAM-NAS: An Efficient and Interpretable Neural Architecture Search Model Based on Class Activation Mapping.
- Author
-
Zhang, Zhiyuan, Wang, Zhan, and Joe, Inwhee
- Subjects
IMAGE recognition (Computer vision) ,ARTIFICIAL intelligence ,GRAPHICS processing units ,ALGORITHMS ,GRAPH algorithms - Abstract
Artificial intelligence (AI) has made rapid progress in recent years, but as the complexity of AI models and the need to deploy them on multiple platforms gradually increases, the design of network model structures for specific platforms becomes more difficult. A neural network architecture search (NAS) serves as a solution to help experts discover new network structures that are suitable for different tasks and platforms. However, traditional NAS algorithms often consume time and many computational resources, especially when dealing with complex tasks and large-scale models, and the search process can become exceptionally time-consuming and difficult to interpret. In this paper, we propose a class activation graph-based neural structure search method (CAM-NAS) to address these problems. Compared with traditional NAS algorithms, CAM-NAS does not require full training of submodels, which greatly improves the search efficiency. Meanwhile, CAM-NAS uses the class activation graph technique, which makes the searched models have better interpretability. In our experiments, we tested CAM-NAS on an NVIDIA RTX 3090 graphics card and showed that it can evaluate a submodel in only 0.08 seconds, which is much faster than traditional NAS methods. In this study, we experimentally evaluated CAM-NAS using the CIFAR-10 and CIFAR-100 datasets as benchmarks. The experimental results show that CAM-NAS achieves very good results. This not only proves the efficiency of CAM-NAS, but also demonstrates its powerful performance in image classification tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
247. Unmanned Vessel Collision Avoidance Algorithm by Dynamic Window Approach Based on COLREGs Considering the Effects of the Wind and Wave.
- Author
-
Yuan, Xiaoyu, Tong, Chengchang, He, Guoxiang, and Wang, Hongbo
- Subjects
COLLISIONS at sea ,MARITIME shipping ,ENERGY consumption ,ALGORITHMS ,ARTIFICIAL intelligence - Abstract
In recent years, the rapid development of artificial intelligence algorithms has promoted the intelligent transformation of the ship industry; unmanned surface vessels (USVs) have become a widely used representative product. The dynamic window approach (DWA) is an effective robotic collision avoidance algorithm; however, it has deficiencies when applied to ships. First, the DWA algorithm does not consider the International Regulations for Preventing Collisions at Sea (COLREGs), which must be met for ship collision avoidance to ensure the navigational safety of the USV and other ships. Second, the DWA algorithm does not consider the influence of wind and waves on the collision avoidance of USVs in actual navigational environments. Reasonable use of windy and wavy environments not only improves navigational safety but also saves navigational time and fuel consumption, thereby improving the economy. Therefore, this paper proposes an improved DWA algorithm, referred to as utility DWA (UDWA), that is based on the COLREGs and considers the sailing environment. The velocity sampling area is improved by dividing it into priority regions, and the velocity term in the objective function is enhanced to convert the effect of wind and waves on the USV into a change in velocity. The simulation results showed that, compared to a COLREGs-compliant DWA algorithm, the UDWA algorithm optimized the distance to the obstacle ship by 43.25%, 31.36%, and 67.81% in head-on, crossing, and overtaking situations, respectively. The improved algorithm not only follows the COLREGs but also has better flexibility in emergency collision avoidance and can safely and economically navigate and complete collision avoidance in windy and wavy environments. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
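The velocity-sampling core of the dynamic window approach underlying entry 247 can be sketched as follows. This is a generic single-step DWA without the COLREGs priority regions or wind/wave velocity terms that UDWA adds; all weights, ranges, and the collision radius are assumptions.

```python
import math

def dwa_step(x, y, theta, goal, obstacles,
             v_range=(0.0, 2.0), w_range=(-1.0, 1.0), dt=1.0):
    """One dynamic-window step: sample (v, w) pairs from the velocity window,
    simulate a short motion, and score each by heading-to-goal, obstacle
    clearance, and speed; return the best admissible pair."""
    best, best_score = (0.0, 0.0), -math.inf
    for i in range(11):                     # sampled linear velocities
        v = v_range[0] + (v_range[1] - v_range[0]) * i / 10
        for j in range(11):                 # sampled angular velocities
            w = w_range[0] + (w_range[1] - w_range[0]) * j / 10
            nt = theta + w * dt             # simple unicycle motion model
            nx = x + v * math.cos(nt) * dt
            ny = y + v * math.sin(nt) * dt
            clearance = min((math.dist((nx, ny), o) for o in obstacles),
                            default=10.0)
            if clearance < 0.5:             # would collide: inadmissible
                continue
            heading = -math.dist((nx, ny), goal)     # closer to goal is better
            score = heading + 0.2 * min(clearance, 3.0) + 0.1 * v
            if score > best_score:
                best, best_score = (v, w), score
    return best

# An obstacle sits directly on the straight line to the goal, so the chosen
# command keeps full speed but picks a nonzero turn rate to swerve around it.
v, w = dwa_step(0.0, 0.0, 0.0, goal=(10.0, 0.0), obstacles=[(2.0, 0.0)])
print(v, w)
```

UDWA, per the abstract, reshapes this same sampling step: the window is partitioned by COLREGs priority, and the velocity term of the objective absorbs the wind and wave effects.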
248. Improved Transportation Model with Internet of Things Using Artificial Intelligence Algorithm.
- Author
-
Al-Ani, Ayman Khallel, Ul Arfeen Laghari, Shams, Manoharan, Hariprasath, Selvarajan, Shitharth, and Uddin, Mueen
- Subjects
ARTIFICIAL intelligence ,INTERNET of things ,ALGORITHMS ,COMPUTER systems - Abstract
In this paper, the application of transportation systems in real-time traffic conditions is evaluated with data handling representations. The proposed method is designed to detect the number of loads present in a vehicle, where functionality tasks are computed in the system. Compared to the existing approach, the proposed design divides the computing areas into several cluster regions, thereby simplifying the monitoring system and minimizing control errors. Furthermore, a route management technique is combined with an Artificial Intelligence (AI) algorithm to transmit the data to appropriate central servers. The combined objective case studies are examined under both minimization and maximization criteria, increasing the efficiency of the proposed method. Finally, four scenarios are chosen to investigate the projected design's effectiveness. Across all simulated metrics, the proposed approach provides better operational outcomes, averaging 97%, thereby reducing the amount of traffic in real-time conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
249. Intelligent Model Predictive Control for Convenient FPGA Deployment.
- Author
-
李星辰, 赵斐然, 孟庆辉, and 游科友
- Subjects
ARTIFICIAL neural networks ,FIELD programmable gate arrays ,ARTIFICIAL intelligence ,PARALLEL programming ,ALGORITHMS ,OPENFLOW (Computer network protocol) - Abstract
Copyright of Control Theory & Applications / Kongzhi Lilun Yu Yinyong is the property of Editorial Department of Control Theory & Applications and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
250. How AI can learn from the law: putting humans in the loop only on appeal.
- Author
-
Cohen, I. Glenn, Babic, Boris, Gerke, Sara, Xia, Qiong, Evgeniou, Theodoros, and Wertenbroch, Klaus
- Subjects
HUMAN rights ,JUDGMENT (Psychology) ,ARTIFICIAL intelligence ,MACHINE learning ,PATIENT readmissions ,DECISION making ,LEGAL procedure ,ALGORITHMS ,SOCIAL psychology ,FEDERAL government - Abstract
While the literature on putting a "human in the loop" in artificial intelligence (AI) and machine learning (ML) has grown significantly, limited attention has been paid to how human expertise ought to be combined with AI/ML judgments. This design question arises because of the ubiquity and quantity of algorithmic decisions being made today in the face of widespread public reluctance to forgo human expert judgment. To resolve this conflict, we propose that human expert judges be included via appeals processes for review of algorithmic decisions. Thus, the human intervenes only in a limited number of cases and only after an initial AI/ML judgment has been made. Based on an analogy with appellate processes in judiciary decision-making, we argue that this is, in many respects, a more efficient way to divide the labor between a human and a machine. Human reviewers can add more nuanced clinical, moral, or legal reasoning, and they can consider case-specific information that is not easily quantified and, as such, not available to the AI/ML at an initial stage. In doing so, the human can serve as a crucial error correction check on the AI/ML, while retaining much of the efficiency of AI/ML's use in the decision-making process. In this paper, we develop these widely applicable arguments while focusing primarily on examples from the use of AI/ML in medicine, including organ allocation, fertility care, and hospital readmission. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF