123 results
Search Results
2. Clinical Pearl: The Clinical Relevance of Neonatal Informatics.
- Author
-
Falciglia, Gustave H., Hageman, Joseph R., Hussain, Walid, Alkureishi, Lolita Alcocer, Shah, Kshama, and Goldstein, Mitchell
- Subjects
MEDICAL logic, CRITICALLY ill, PATIENTS, ARTIFICIAL intelligence, NEONATAL intensive care units, ACUTE kidney failure in children, COMPUTER science, NEONATAL intensive care, HOSPITAL nurseries, INFORMATION science, ELECTRONIC health records, WATER-electrolyte balance (Physiology), QUALITY assurance, ALGORITHMS, CHILDREN - Abstract
The article focuses on the importance of clinical informatics in neonatal care, highlighting its potential to provide critical resources for clinicians. Topics include the specialized data needed for neonatal care, the challenges in transitioning from paper to electronic health records, and the impact of informatics on real-time patient management and research.
- Published
- 2024
3. Advances on intelligent algorithms for scientific computing: an overview.
- Author
-
Cheng Hua, Xinwei Cao, Bolin Liao, and Shuai Li
- Subjects
ARTIFICIAL intelligence, OPTIMIZATION algorithms, SCIENTIFIC computing, ALGORITHMS, COMPUTER science - Abstract
The field of computer science has undergone rapid expansion due to the increasing interest in improving system performance. This has resulted in the emergence of advanced techniques, such as neural networks, intelligent systems, optimization algorithms, and optimization strategies. These innovations have created novel opportunities and challenges in various domains. This paper presents a thorough examination of three intelligent methods: neural networks, intelligent systems, and optimization algorithms and strategies. It discusses the fundamental principles and techniques employed in these fields, as well as the recent advancements and future prospects. Additionally, this paper analyzes the advantages and limitations of these intelligent approaches. Ultimately, it serves as a comprehensive summary and overview of these critical and rapidly evolving fields, offering an informative guide for novices and researchers interested in these areas. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
4. Software tools for learning artificial intelligence algorithms.
- Author
-
Stamenković, Srećko, Jovanović, Nenad, Vasović, Bojan, Cvjetković, Miloš, and Jovanović, Zoran
- Subjects
ARTIFICIAL intelligence, SOFTWARE development tools, EDUCATIONAL technology, COMPUTER science, ALGORITHMS, SIMULATION software - Abstract
In recent years, artificial intelligence has become an important discipline in the field of computer science. Students without basic prior knowledge may have difficulty following the material when they first encounter complex and abstract artificial intelligence algorithms. Numerous researchers and educators point out that using simulation systems and software tools to illustrate the dynamic behavior of an algorithm can be an effective solution, and the introduction and adoption of new technologies in learning and teaching has evolved rapidly. This conceptual review paper explores the emergence of innovative educational technologies in the teaching and learning of artificial intelligence. It analyzes existing representative educational tools for learning topics in the field of artificial intelligence, highlighting their characteristics and the areas they cover, so that readers can more easily draw conclusions about the possible use of the analyzed systems. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
5. HYPERGRAPH HORN FUNCTIONS.
- Author
-
BÉRCZI, KRISTÓF, BOROS, ENDRE, and KAZUHISA MAKINO
- Subjects
ARTIFICIAL intelligence, COMPUTER science, POLYNOMIAL time algorithms, DATABASES, BOOLEAN functions, SEMIDEFINITE programming - Abstract
Horn functions form a subclass of Boolean functions possessing interesting structural and computational properties. These functions play a fundamental role in algebra, artificial intelligence, combinatorics, computer science, database theory, and logic. In the present paper, we introduce the subclass of hypergraph Horn functions that generalizes matroids and equivalence relations. We provide multiple characterizations of hypergraph Horn functions in terms of implicate-duality and the closure operator, which are, respectively, regarded as generalizations of matroid duality and the Mac Lane-Steinitz exchange property of matroid closure. We also study algorithmic issues on hypergraph Horn functions and show that the recognition problem (i.e., deciding if a given definite Horn CNF represents a hypergraph Horn function) and key realization (i.e., deciding if a given hypergraph is realized as a key set by a hypergraph Horn function) can be done in polynomial time, while implicate sets can be generated with polynomial delay. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Design of New Word Retrieval Algorithm for Chinese-English Bilingual Parallel Corpus.
- Author
-
Zhang, Liting
- Subjects
MACHINE translating, NATURAL language processing, ALGORITHMS, NEW words, ARTIFICIAL intelligence, COMPUTER science - Abstract
Natural language processing is an important direction in the fields of computer science and artificial intelligence, providing theories and methods for effective natural-language communication between humans and computers. Machine translation is a branch of natural language processing research that relies on a large-scale English-Chinese database. Because the alignment corpus of English and Chinese bilingual sentences containing unknown words is relatively poor, machine translation is unprofessional and unbalanced; this is the problem studied in this paper. The purpose of this paper is to design and implement a length-based system for sentence alignment between English and Chinese bilingual texts. The research content is divided into the following parts. First, the evaluation function for bilingual sentence alignment is designed; on this basis, a length-based bilingual sentence alignment algorithm and an optimal sentence-pair sequence search algorithm are designed. China National Knowledge Infrastructure (CNKI) is selected as the source of English-Chinese bilingual candidate pages, and English-Chinese bilingual web pages are downloaded. After the downloaded pages are analyzed, non-text content such as page tags is removed, and the bilingual text information is stored so as to establish an English-Chinese bilingual corpus based on segment alignment, retaining the English-Chinese bilingual keywords in the web pages. Second, a dictionary is extracted from the software StarDict: the original dictionary format is analyzed and converted into a custom format that is more convenient for the bilingual sentence alignment system, which helps expand the number of dictionary entries and increase the professionalism of the vocabulary. 
Finally, the stems of English words are extracted from the established corpus to simplify English word processing, reduce the noise caused by part-of-speech conversion, and improve operating efficiency. A length-based bilingual sentence alignment system is implemented, and the system parameters are adjusted in comparative experiments to test its performance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
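The abstract above describes its length-based sentence alignment only at a high level. As a hedged illustration, a Gale-Church-style dynamic program over sentence lengths can be sketched as follows; the linear cost model and the fixed skip penalty are assumptions for illustration, not the paper's actual design.

```python
# Minimal length-based sentence alignment sketch (Gale-Church style).
# Cost model: a 1-1 match pays the absolute length mismatch; leaving a
# sentence unaligned pays a fixed skip_cost.  Both are assumptions.

def align(src_lens, tgt_lens, ratio=1.0, skip_cost=5.0):
    """Align sentences by length with a DP over 1-1, 1-0, 0-1 moves."""
    n, m = len(src_lens), len(tgt_lens)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if cost[i][j] == INF:
                continue
            if i < n and j < m:  # 1-1 match: penalize length mismatch
                c = cost[i][j] + abs(src_lens[i] * ratio - tgt_lens[j])
                if c < cost[i + 1][j + 1]:
                    cost[i + 1][j + 1], back[i + 1][j + 1] = c, (i, j)
            if i < n:            # 1-0: source sentence left unaligned
                c = cost[i][j] + skip_cost
                if c < cost[i + 1][j]:
                    cost[i + 1][j], back[i + 1][j] = c, (i, j)
            if j < m:            # 0-1: target sentence left unaligned
                c = cost[i][j] + skip_cost
                if c < cost[i][j + 1]:
                    cost[i][j + 1], back[i][j + 1] = c, (i, j)
    # Recover the 1-1 pairs by walking the back pointers.
    pairs, i, j = [], n, m
    while (i, j) != (0, 0):
        pi, pj = back[i][j]
        if i - pi == 1 and j - pj == 1:
            pairs.append((pi, pj))
        i, j = pi, pj
    return list(reversed(pairs))
```

For example, `align([10, 100, 20], [11, 19])` pairs the first and last source sentences with the two targets and leaves the middle one unaligned.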
7. Toward the consolidation of a multimetric-based journal ranking and categorization system for computer science subject areas.
- Author
-
Hameed, Abdul, Omar, Muhammad, Bilal, Muhammad, and Han Woo Park
- Subjects
COMPUTER science, BIBLIOMETRICS, ARTIFICIAL intelligence, PATTERN recognition systems, CLUSTER analysis (Statistics), COMPUTER systems, SYSTEMS theory, PRINCIPAL components analysis - Abstract
The evaluation of scientific journals poses challenges owing to the existence of various impact measures. This is because journal ranking is a multidimensional construct that may not be assessed effectively using a single metric such as an impact factor. A few studies have proposed an ensemble of metrics to prevent the bias induced by an individual metric. In this study, a multi-metric journal ranking method based on the standardized average index (SA index) was adopted to develop an extended standardized average index (ESA index). The ESA index utilizes six metrics: the CiteScore, Source Normalized Impact per Paper (SNIP), SCImago Journal Rank (SJR), Hirsch index (H-index), Eigenfactor Score, and Journal Impact Factor from three well-known databases (Scopus, SCImago Journal & Country Rank, and Web of Science). Experiments were conducted in two computer science subject areas: (1) artificial intelligence and (2) computer vision and pattern recognition. Comparing the results of the multi-metric-based journal ranking system with the SA index, it was demonstrated that the multi-metric ESA index exhibited high correlation with all other indicators and significantly outperformed the SA index. To further evaluate the performance of the model and determine the aggregate impact of bibliometric indices with the ESA index, we employed unsupervised machine learning techniques such as clustering coupled with principal component analysis (PCA) and t-distributed stochastic neighbor embedding (t-SNE). These techniques were utilized to measure the clustering impact of various bibliometric indicators on both the complete set of bibliometric features and the reduced set of features. Furthermore, the results of the ESA index were compared with those of other ranking systems, including the internationally recognized Scopus, SJR, and HEC Journal Recognition System (HJRS) used in Pakistan. 
These comparisons demonstrated that the multi-metric-based ESA index can serve as a valuable reference for publishers, journal editors, researchers, policymakers, librarians, and practitioners in journal selection, decision making, and professional assessment. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
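The ESA index described above aggregates six bibliometric indicators into one score per journal. A minimal sketch, assuming a simple scale-by-maximum standardization followed by averaging; the paper's exact standardization may differ.

```python
# Sketch of an extended standardized average (ESA-style) index: each
# metric is scaled to [0, 1] by the maximum observed value, then the
# scaled metrics are averaged.  Max-scaling is an illustrative
# assumption, not necessarily the paper's standardization.

def esa_index(journals):
    """journals: dict name -> dict of metric -> value (metrics shared)."""
    metrics = next(iter(journals.values())).keys()
    max_by_metric = {m: max(j[m] for j in journals.values()) for m in metrics}
    scores = {}
    for name, vals in journals.items():
        scaled = [vals[m] / max_by_metric[m]
                  for m in metrics if max_by_metric[m] > 0]
        scores[name] = sum(scaled) / len(scaled)
    return scores
```

A journal that leads on every metric scores 1.0; all others score proportionally lower.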
8. Application of Machine Learning Algorithms in Food Odor Characterization.
- Author
-
李帅, 柴春祥, and 刘建福
- Subjects
FOOD aroma, MACHINE learning, PROCESS capability, ARTIFICIAL intelligence, COMPUTER science
- Published
- 2024
- Full Text
- View/download PDF
9. Comparison of Algorithms for the AI-Based Fault Diagnostic of Cable Joints in MV Networks.
- Author
-
Negri, Virginia, Mingotti, Alessandro, Tinarelli, Roberto, and Peretto, Lorenzo
- Subjects
ARTIFICIAL intelligence, MACHINE learning, ALGORITHMS, COMPUTER science, JOINTS (Anatomy), CABLE structures - Abstract
Electrical utilities and system operators (SOs) are constantly looking for solutions to problems in the management and control of the power network. For this purpose, SOs are exploring new research fields, which might bring contributions to the power system environment. A clear example is the field of computer science, within which artificial intelligence (AI) has been developed and is being applied to many fields. In power systems, AI could support the fault prediction of cable joints. Despite the availability of many legacy methods described in the literature, fault prediction is still critical, and it needs new solutions. For this purpose, in this paper, the authors made a further step in the evaluation of machine learning methods (ML) for cable joint health assessment. Six ML algorithms have been compared and assessed on a consolidated test scenario. It simulates a distributed measurement system which collects measurements from medium-voltage (MV) cable joints. Typical metrics have been applied to compare the performance of the algorithms. The analysis is then completed considering the actual in-field conditions and the SOs' requirements. The results demonstrate: (i) the pros and cons of each algorithm; (ii) the best-performing algorithm; (iii) the possible benefits from the implementation of ML algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
10. GLOBAL OPTIMIZATION FOR THE FORWARD NEURAL NETWORKS AND THEIR APPLICATIONS.
- Author
-
REDDY, K. SUNIL MANOHAR, BABU, G. RAVINDRA, and RAO, S. KRISHNA MOHAN
- Subjects
NEURAL computers, ARTIFICIAL intelligence, ARTIFICIAL neural networks, COMPUTER science, ALGORITHMS - Abstract
This paper describes and evaluates several global optimization issues of Artificial Neural Networks (ANN) and their applications. The authors examine the properties of feed-forward neural networks and the process of determining appropriate network inputs and architecture, and build a short-term gas load forecasting system, the Tell Future system. This system, built on various Back-Propagation (BP) algorithms, performs very well for short-term gas load forecasting. The standard BP algorithm for training feed-forward neural networks has proven robust even for difficult problems. To forecast the future load from the trained networks, historical loads, temperature, wind velocity, and calendar information should be used in addition to the predicted future temperature and wind velocity. Compared to other regression methods, neural networks allow more flexible relationships between temperature, wind, calendar information, and load pattern. Feed-forward neural networks can be used in many kinds of forecasting in different industrial areas; similar models can be built for electric load forecasting, daily water consumption forecasting, stock and market forecasting, and traffic flow and product sales forecasting. [ABSTRACT FROM AUTHOR]
- Published
- 2015
11. A new algorithmic decision for categorical syllogisms via Carroll's diagrams.
- Author
-
Kircali Gursoy, Necla, Senturk, Ibrahim, Oner, Tahsin, and Gursoy, Arif
- Subjects
SYLLOGISM, ALGORITHMS, COMPUTER science, CHARTS, diagrams, etc., ARTIFICIAL intelligence, COMPUTER engineering - Abstract
In this paper, we propose a new effective algorithm for categorical syllogisms using a calculus system, Syllogistic Logic with Carroll Diagrams, which provides a formal approach to logical reasoning with diagrams for representing the fundamental Aristotelian categorical syllogisms. We show that this logical reasoning is closed under the syllogistic criterion of inference; the calculus system therefore yields a formalism that combines bilateral and trilateral diagrammatic representations with a naive algorithmic nature. Moreover, no specific knowledge or exclusive ability is needed to understand this decision procedure or to use it in an algorithmic system. The empirical contribution of this paper is to design, for the first time in the literature, a polynomial-time algorithm that can help researchers in the different areas of science that use categorical syllogisms, such as artificial intelligence, engineering, and computer science. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
12. Regularized Negative Correlation Learning for Neural Network Ensembles.
- Author
-
Huanhuan Chen and Xin Yao
- Subjects
ARTIFICIAL neural networks, SET theory, ALGORITHMS, ARTIFICIAL intelligence, COMPUTER science - Abstract
Negative correlation learning (NCL) is a neural network ensemble learning algorithm that introduces a correlation penalty term to the cost function of each individual network so that each neural network minimizes its mean square error (MSE) together with the correlation of the ensemble. This paper analyzes NCL and reveals that the training of NCL (when λ = 1) corresponds to training the entire ensemble as a single learning machine that only minimizes the MSE without regularization. This analysis explains why NCL is prone to overfitting the noise in the training set. This paper also demonstrates that tuning the correlation parameter λ in NCL by cross-validation cannot overcome the overfitting problem. The paper analyzes this problem and proposes the regularized negative correlation learning (RNCL) algorithm, which incorporates an additional regularization term for the whole ensemble. RNCL decomposes the ensemble's training objectives, including MSE and regularization, into a set of sub-objectives, each implemented by an individual neural network. In this paper, we also provide a Bayesian interpretation for RNCL and an automatic algorithm to optimize regularization parameters based on Bayesian inference. The RNCL formulation is applicable to any nonlinear estimator minimizing the MSE. Experiments on synthetic as well as real-world data sets demonstrate that RNCL achieves better performance than NCL, especially when the noise level in the data set is nontrivial. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
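The analysis above hinges on the NCL cost, whose correlation penalty for one ensemble member algebraically reduces to the negative squared deviation of that member from the ensemble mean. A minimal sketch of the per-member cost in the standard NCL form (the 1/2 factor is conventional):

```python
# Sketch of the NCL cost for one training example: each ensemble member
# pays its own squared error plus lambda times a correlation penalty
# that equals minus its squared deviation from the ensemble mean.

def ncl_costs(outputs, target, lam):
    """outputs: per-network predictions f_i; returns per-network costs."""
    f_bar = sum(outputs) / len(outputs)
    costs = []
    for f_i in outputs:
        err = 0.5 * (f_i - target) ** 2
        # p_i = (f_i - f_bar) * sum_{j != i} (f_j - f_bar)
        #     = -(f_i - f_bar)^2
        penalty = -(f_i - f_bar) ** 2
        costs.append(err + lam * penalty)
    return costs
```

With λ = 0 each member trains independently on its own MSE; raising λ trades individual accuracy for ensemble diversity, which is the trade-off the paper regularizes.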
13. Sensitivity to Noise in Bidirectional Associative Memory (BAM).
- Author
-
Du, Shengzhi, Zengqiang Chen, Zhuzhi Yuan, and Xinghui Zhang
- Subjects
MEMORY, SENSORY perception, ALGORITHMS, ARTIFICIAL neural networks, ARTIFICIAL intelligence, COMPUTER science - Abstract
The original Hebbian encoding scheme of bidirectional associative memory (BAM) provides poor pattern capacity and recall performance. Based on Rosenblatt's perceptron learning algorithm, the pattern capacity of BAM can be enlarged and perfect recall of all training pattern pairs guaranteed. However, these methods emphasize pattern capacity rather than error correction capability, which is another critical aspect of BAM. This paper analyzes the sensitivity to noise in BAM and obtains an interesting idea for improving its noise immunity. Some researchers have found that the noise sensitivity of BAM relates to the minimum absolute value of net inputs (MAV). However, the analysis of failure associations in this paper shows that it is related not only to MAV but also to the variance of the weights associated with synapse connections. In fact, it is a positive monotone increasing function of the quotient of MAV divided by the variance of the weights. This idea provides a useful principle for improving the error correction capability of BAM. Some revised encoding schemes, such as small variance learning for BAM (SVBAM), evolutionary pseudorelaxation learning for BAM (EPRLAB), and evolutionary bidirectional learning (EBL), are introduced to illustrate the performance of this principle. All these methods perform better than their original versions in noise immunity, and they have no negative effect on the pattern capacity of BAM. The convergence of these methods is also discussed: if solutions exist, EPRLAB and EBL always converge to a globally optimal solution in the sense of both pattern capacity and noise immunity, while the convergence of SVBAM may be affected by a preset function. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
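The abstract above takes as its starting point the original Hebbian encoding of BAM, which it notes has poor capacity and recall. A minimal sketch of that baseline scheme for bipolar (+1/-1) pattern pairs, for illustration only; the paper's revised schemes (SVBAM, EPRLAB, EBL) are not shown.

```python
# Sketch of the original Hebbian encoding for BAM over bipolar patterns:
# W = sum_k x_k y_k^T, and recall thresholds the net input W^T x.

def bam_train(pairs):
    """pairs: list of (x, y) bipolar pattern pairs; returns weight matrix."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = [[0] * m for _ in range(n)]
    for x, y in pairs:
        for i in range(n):
            for j in range(m):
                W[i][j] += x[i] * y[j]
    return W

def bam_recall(W, x):
    """One forward pass: y_j = sign(sum_i W[i][j] * x[i])."""
    m = len(W[0])
    net = [sum(W[i][j] * x[i] for i in range(len(x))) for j in range(m)]
    return [1 if v >= 0 else -1 for v in net]
```

The minimum absolute value of the net inputs in `bam_recall` is the MAV quantity whose ratio to the weight variance the paper ties to noise immunity.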
14. Variable Search Space Converging Genetic Algorithm for Solving System of Non-linear Equations.
- Author
-
SS, Venkatesh and Mishra, Deepak
- Subjects
GENETIC algorithms, ALGORITHMS, EQUATIONS, ABSOLUTE value, GENETIC code, SPACE, SPACE (Architecture) - Abstract
This paper introduces a new variant of the Genetic Algorithm developed to handle multivariable, multi-objective, and very high search space optimization problems such as solving systems of non-linear equations. It is an integer-coded Genetic Algorithm with conventional crossover and mutation, but it varies its search space by varying its digit length on every cycle, performing a fine search followed by a coarse search, so that its solution to the optimization problem converges to a precise value over the cycles. Every equation of the system is considered as a single minimization objective function, and the multiple objectives are converted to a single fitness function by summing their absolute values. Some difficult test functions for optimization and applications are used to evaluate this algorithm. The results prove that this algorithm is capable of producing promising and precise results. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
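The abstract above folds multiple objectives, one per equation, into a single fitness by summing their absolute values. That folding step can be sketched as follows; the example system is hypothetical.

```python
# Sketch of the multi-objective-to-single-fitness conversion: each
# equation is written as a residual f(x) = 0, and the fitness to be
# minimized is the sum of absolute residuals (0 exactly at a solution).

def fitness(equations, candidate):
    """equations: callables f(x) that should equal 0 at a solution."""
    return sum(abs(f(candidate)) for f in equations)

# Hypothetical example system: x + y = 3, x * y = 2
# (solutions (1, 2) and (2, 1)).
system = [
    lambda v: v[0] + v[1] - 3,
    lambda v: v[0] * v[1] - 2,
]
```

A GA then minimizes `fitness(system, candidate)` over candidate vectors; any exact root of the system attains fitness 0.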
15. Machine Learning Applied to Diagnosis of Human Diseases: A Systematic Review.
- Author
-
Caballé-Cervigón, Nuria, Castillo-Sequera, José L., Gómez-Pulido, Juan A., Gómez-Pulido, José M., and Polo-Luque, María L.
- Subjects
MACHINE learning, DIAGNOSIS, META-analysis, ALGORITHMS, COMPUTER science, MEDLINE - Abstract
Human healthcare is one of the most important topics for society. It aims to find correct, effective, and robust disease detection as soon as possible so that patients can receive the appropriate care. Because this detection is often a difficult task, the medical field must seek support from other fields such as statistics and computer science. These disciplines face the challenge of exploring new techniques beyond the traditional ones. The large number of emerging techniques makes it necessary to provide a comprehensive overview that avoids very particular aspects. To this end, we propose a systematic review of machine learning applied to the diagnosis of human diseases. This review focuses on modern techniques for applying machine learning to diagnosis in the medical field, in order to discover interesting patterns and make non-trivial predictions that are useful for decision-making. In this way, this work can help researchers discover and, if necessary, determine the applicability of machine learning techniques in their particular specialties. We provide examples of the algorithms used in medicine, analyzing trends in terms of the goal pursued, the algorithm used, and the area of application. We detail the advantages and disadvantages of each technique, as reported by several authors, to help choose the most appropriate one in each real-life situation. The authors searched the Scopus, Journal Citation Reports (JCR), Google Scholar, and MedLine databases from approximately the 1980s up to the present, with English language restrictions, for studies matching the objectives mentioned above. Based on a protocol for data extraction defined and evaluated by all authors using the PRISMA methodology, 141 papers were included in this advanced review. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
16. Backpropagation Algorithms for a Broad Class of Dynamic Networks.
- Author
-
De Jesús, Orlando and Hagan, Martin T.
- Subjects
BACK propagation, ARTIFICIAL neural networks, ARTIFICIAL intelligence, ALGORITHMS, COMPUTER science - Abstract
This paper introduces a general framework for describing dynamic neural networks—the layered digital dynamic network (LDDN). This framework allows the development of two general algorithms for computing the gradients and Jacobians for these dynamic networks: backpropagation-through-time (BPTT) and real-time recurrent learning (RTRL). The structure of the LDDN framework enables an efficient implementation of both algorithms for arbitrary dynamic networks. This paper demonstrates that the BPTT algorithm is more efficient for gradient calculations, but the RTRL algorithm is more efficient for Jacobian calculations. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
17. Contrastive Analysis and Feature Selection for Korean Modal Expression in Chinese-Korean Machine Translation System.
- Author
-
LI, JIN-JI, ROH, JI-EUN, KIM, DONG-IL, and LEE, JONG-HYEOK
- Subjects
MACHINE translating, ALGORITHMS, ARTIFICIAL intelligence, NATURAL language processing, ELECTRONIC data processing, HUMAN-computer interaction, COMPUTER science - Abstract
To generate a proper Korean predicate, a natural modal expression is the most important factor for a machine translation (MT) system. Tense, aspect, mood, negation, and voice are the major constituents related to modal expression. The linguistic encoding of a modal expression is quite different between Chinese and Korean in terms of linguistic typology and genealogy. In this paper, a new applicable categorization of Korean modality system viz. tense, aspect, mood, negation, and voice, will be proposed through a contrastive analysis of Chinese and Korean from the viewpoint of a practical MT system. In order to precisely determine the modal expression, effective feature selection frameworks for Chinese are presented with a variety of machine learning methods. As a result, our proposed approach achieved an accuracy of 83.10%. [ABSTRACT FROM AUTHOR]
- Published
- 2005
18. A Generalized Growing and Pruning RBF (GGAP-RBF) Neural Network for Function Approximation.
- Author
-
Huang, Guang-Bin, Saratchandran, P., and Sundararajan, Narasimhan
- Subjects
RADIAL basis functions, ALGORITHMS, ARTIFICIAL neural networks, ARTIFICIAL intelligence, COMPUTER science, APPROXIMATION theory - Abstract
This paper presents a new sequential learning algorithm for radial basis function (RBF) networks referred to as the generalized growing and pruning algorithm for RBF (GGAP-RBF). The paper first introduces the concept of significance for the hidden neurons and then uses it in the learning algorithm to realize parsimonious networks. The growing and pruning strategy of GGAP-RBF is based on linking the required learning accuracy with the significance of the nearest or intentionally added new neuron, where the significance of a neuron is a measure of its average information content. The GGAP-RBF algorithm can be used for any arbitrary sampling density of the training samples and is derived from a rigorous statistical point of view. Simulation results for benchmark problems in the function approximation area show that GGAP-RBF outperforms several other sequential learning algorithms in terms of learning speed, network size, and generalization performance, regardless of the sampling density function of the training data. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
19. Robust Image Segmentation Using FCM With Spatial Constraints Based on New Kernel-Induced Distance Measure.
- Author
-
Songcan Chen and Daoqiang Zhang
- Subjects
ARTIFICIAL intelligence, ALGORITHMS, FUZZY logic, FUZZY systems, ROBUST control, COMPUTER science - Abstract
Fuzzy c-means clustering (FCM) with spatial constraints (FCM_S) is an effective algorithm suitable for image segmentation. Its effectiveness is due not only to the introduction of fuzziness for the belongingness of each pixel but also to the exploitation of spatial contextual information. Although the contextual information raises its insensitivity to noise to some extent, FCM_S still lacks robustness to noise and outliers and is not suitable for revealing the non-Euclidean structure of the input data, due to its use of the Euclidean (L2-norm) distance. In this paper, to overcome these problems, we first propose two variants of FCM_S, FCM_S1 and FCM_S2, that aim to simplify its computation, and then extend them, together with FCM_S, to the corresponding robust kernelized versions KFCM_S, KFCM_S1, and KFCM_S2 via kernel methods. Our main motives for using kernel methods are: inducing a class of robust non-Euclidean distance measures for the original data space to derive new objective functions, and thus clustering non-Euclidean structures in the data; enhancing the robustness of the original clustering algorithms to noise and outliers; and retaining computational simplicity. Experiments on artificial and real-world datasets show that our proposed algorithms, especially those with spatial constraints, are more effective. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
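The kernelized variants above replace the Euclidean distance with a kernel-induced one: with a kernel K, the feature-space squared distance is 2 * (1 - K(x, v)), which is bounded and therefore damps the influence of far-away outliers. A minimal sketch with a Gaussian kernel; the kernel choice and width are illustrative assumptions.

```python
import math

# Sketch of the kernel-induced distance used by kernelized FCM: with a
# Gaussian kernel K (so K(x, x) = 1), the squared distance in feature
# space is 2 * (1 - K(x, v)).  It saturates below 2 for distant points,
# unlike the unbounded Euclidean distance, which damps outliers.

def gaussian_kernel(x, v, sigma=1.0):
    sq = sum((a - b) ** 2 for a, b in zip(x, v))
    return math.exp(-sq / (sigma ** 2))

def kernel_distance_sq(x, v, sigma=1.0):
    return 2.0 * (1.0 - gaussian_kernel(x, v, sigma))
```

Substituting this distance into the FCM objective is what yields the robust non-Euclidean clustering the abstract describes.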
20. Applications of Artificial Intelligence Algorithms in the Energy Sector.
- Author
-
Szczepaniuk, Hubert and Szczepaniuk, Edyta Karolina
- Subjects
ARTIFICIAL intelligence, ENERGY industries, ALGORITHMS, RENEWABLE energy sources, COMPUTER science, MULTIDIMENSIONAL databases - Abstract
The digital transformation of the energy sector toward the Smart Grid paradigm, intelligent energy management, and distributed energy integration poses new requirements for computer science. Issues related to the automation of power grid management, multidimensional analysis of data generated in Smart Grids, and optimization of decision-making processes require urgent solutions. The article aims to analyze the use of selected artificial intelligence (AI) algorithms to support the abovementioned issues. In particular, machine learning methods, metaheuristic algorithms, and intelligent fuzzy inference systems were analyzed. Examples of the analyzed algorithms were tested in crucial domains of the energy sector. The study analyzed cybersecurity, Smart Grid management, energy saving, power loss minimization, fault diagnosis, and renewable energy sources. For each domain of the energy sector, specific engineering problems were defined, for which the use of artificial intelligence algorithms was analyzed. Research results indicate that AI algorithms can improve the processes of energy generation, distribution, storage, consumption, and trading. Based on conducted analyses, we defined open research challenges for the practical application of AI algorithms in critical domains of the energy sector. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
21. Research on OpenCL optimization for FPGA deep learning application.
- Author
-
Zhang, Shuo, Wu, Yanxia, Men, Chaoguang, He, Hongtao, and Liang, Kai
- Subjects
COMPUTER science, MACHINE learning, GRAPHICS processing units, COGNITIVE science, COMPUTER software, ARTIFICIAL intelligence, DEEP learning - Abstract
In recent years, with the development of computer science, deep learning has been held as competent enough to solve problems of inference and learning in high-dimensional spaces, and it has therefore received unprecedented attention from both academia and the business community. Compared with CPUs/GPUs, FPGAs have attracted much attention for their high energy efficiency, short development cycle, and reconfigurability for deep learning algorithms. However, because research on OpenCL optimization of deep learning algorithms on FPGAs is limited, OpenCL tools and models that apply to CPUs/GPUs cannot be used directly on FPGAs. This makes it difficult for software programmers to obtain rewarding performance when implementing deep learning algorithms on an FPGA. To solve this problem, this paper proposes an OpenCL computational model based on an FPGA template architecture to optimize the time-consuming convolution layer in deep learning. A comparison between a program applying the computational model and the corresponding optimized program provided by Xilinx indicates that the former achieves 8-40 times higher performance. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
22. Line spectral frequency-based features and extreme learning machine for voice activity detection from audio signal.
- Author
-
Mukherjee, Himadri, Obaidullah, Sk. Md., Santosh, K. C., Phadikar, Santanu, and Roy, Kaushik
- Subjects
ARTIFICIAL intelligence ,MACHINE learning ,SPECTRAL analysis (Phonetics) ,ALGORITHMS ,COMPUTER science - Abstract
Voice activity detection (VAD) refers to the task of identifying vocal segments in an audio clip. It reduces computational overhead and improves the recognition performance of speech-based systems by discarding the non-vocal portions of an input signal. In this paper, a VAD technique is presented that uses line spectral frequency-based statistical features, namely LSF-S, coupled with extreme learning machine-based classification. The experiments were performed on a database of more than 350 h of data drawn from multifarious sources. We obtained an encouraging overall accuracy of 99.43%. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
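As a rough illustration of the extreme learning-based classification step described in the abstract above: an ELM draws random hidden-layer weights and solves only the output weights in closed form. The sketch below uses synthetic features as a stand-in for the paper's LSF-S statistics; all names and sizes are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=64):
    """Train an extreme learning machine: random hidden layer,
    output weights solved in closed form via the pseudo-inverse."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                 # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy stand-in for frame-level audio features: class +1 = "voice",
# class -1 = "non-voice", separable by the feature mean.
X = rng.normal(size=(200, 10))
y = np.where(X.mean(axis=1) > 0, 1.0, -1.0)
W, b, beta = elm_train(X, y)
acc = np.mean(np.sign(elm_predict(X, W, b, beta)) == y)
```

Because only `beta` is fitted (one pseudo-inverse), training is a single linear solve, which is where ELM's speed advantage over backpropagation comes from.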
23. Enhancing the robustness of recommender systems against spammers.
- Author
-
Zhang, Chengjun, Liu, Jin, Qu, Yanzhen, Han, Tianqi, Ge, Xujun, and Zeng, An
- Subjects
RECOMMENDER systems ,INFORMATION science ,ROBUST control ,COMPUTER algorithms ,COMPUTER science - Abstract
The accuracy and diversity of recommendation algorithms have long been research hotspots in recommender systems. A good recommender system should not only have high accuracy and diversity but also be adequately robust against spammer attacks. However, the issue of recommendation robustness has received relatively little attention in the literature. In this paper, we systematically study the influence of different spammer behaviors on the recommendation results of various recommendation algorithms. We further propose an improved algorithm that incorporates the inner-similarity of a user's purchased items into the classic KNN approach. The new algorithm effectively enhances robustness against spammer attacks and thus outperforms traditional algorithms in recommendation accuracy and diversity when spammers are present in online commercial systems. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
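The baseline the paper above improves on, a plain user-based KNN recommender, can be sketched as follows. This is a toy purchase matrix of our own; the paper's inner-similarity weighting is not reproduced here.

```python
import numpy as np

def knn_scores(R, k=2):
    """User-based KNN: score unseen items by similarity-weighted votes
    of the k most similar users. R is a binary user-item matrix."""
    norms = np.linalg.norm(R, axis=1, keepdims=True)
    S = (R @ R.T) / (norms @ norms.T + 1e-12)    # cosine similarity
    np.fill_diagonal(S, 0.0)                     # a user is not its own neighbour
    scores = np.zeros_like(R, dtype=float)
    for u in range(R.shape[0]):
        nbrs = np.argsort(S[u])[-k:]             # k nearest neighbours
        scores[u] = S[u, nbrs] @ R[nbrs]         # weighted item votes
    return np.where(R > 0, -np.inf, scores)      # mask already-purchased items

# Rows = users, columns = items; 1 means the user purchased the item.
R = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 0, 0]], dtype=float)
rec = int(np.argmax(knn_scores(R)[0]))           # top recommendation for user 0
```

Spammer profiles distort the similarity matrix `S`, which is exactly the attack surface the paper's inner-similarity modification is designed to harden.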
24. Intrusion detection system using Online Sequence Extreme Learning Machine (OS-ELM) in advanced metering infrastructure of smart grid.
- Author
-
Li, Yuancheng, Qiu, Rixuan, and Jing, Sitong
- Subjects
COMPUTER science ,APPLIED mathematics ,MACHINE learning ,COMPUTER security ,ARTIFICIAL neural networks - Abstract
Advanced Metering Infrastructure (AMI), the core component of the smart grid, realizes two-way communication of electricity data by interconnecting with a computer network. At the same time, it brings many new security threats, and traditional intrusion detection methods cannot satisfy the security requirements of AMI. In this paper, an intrusion detection system based on the Online Sequence Extreme Learning Machine (OS-ELM) is established to detect attacks in AMI, and a comparative analysis with other algorithms is carried out. Simulation results show that, compared with other intrusion detection methods, the OS-ELM-based method is superior in both detection speed and accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
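What distinguishes OS-ELM from batch ELM is a recursive least-squares update applied to each newly arriving data chunk, with no retraining on old data. A minimal sketch on synthetic data follows (not the AMI traffic features used in the paper; sizes and the toy labeling rule are our own assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)
n_hidden, n_feat = 30, 5
W = rng.normal(size=(n_feat, n_hidden))          # fixed random hidden layer
b = rng.normal(size=n_hidden)
hidden = lambda X: np.tanh(X @ W + b)

# Initial batch: closed-form least squares, as in standard ELM.
X0 = rng.normal(size=(60, n_feat))
y0 = (X0[:, :1] > 0).astype(float)               # toy label: sign of feature 0
H0 = hidden(X0)
P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(n_hidden))
beta = P @ H0.T @ y0

def os_elm_update(P, beta, X, y):
    """Recursive least-squares update on a new data chunk --
    the sequential phase of OS-ELM (old data is never revisited)."""
    H = hidden(X)
    K = np.linalg.inv(np.eye(len(X)) + H @ P @ H.T)
    P = P - P @ H.T @ K @ H @ P
    beta = beta + P @ H.T @ (y - H @ beta)
    return P, beta

for _ in range(5):                               # stream arrives in chunks
    Xc = rng.normal(size=(20, n_feat))
    yc = (Xc[:, :1] > 0).astype(float)
    P, beta = os_elm_update(P, beta, Xc, yc)

Xt = rng.normal(size=(200, n_feat))
yt = (Xt[:, :1] > 0).astype(float)
acc = np.mean((hidden(Xt) @ beta > 0.5) == yt)
```

The per-chunk cost depends only on the chunk and hidden-layer sizes, which is why the method suits streaming intrusion detection.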
25. Is chess the drosophila of artificial intelligence? A social history of an algorithm.
- Author
-
Ensmenger, Nathan
- Subjects
ARTIFICIAL intelligence ,COMPUTER chess ,CHESS ,ALGORITHMS ,DROSOPHILA ,COMPUTER science ,VIDEO games ,COMPUTER software - Abstract
Since the mid 1960s, researchers in computer science have famously referred to chess as the ‘drosophila’ of artificial intelligence (AI). What they seem to mean by this is that chess, like the common fruit fly, is an accessible, familiar, and relatively simple experimental technology that nonetheless can be used productively to produce valid knowledge about other, more complex systems. But for historians of science and technology, the analogy between chess and drosophila assumes a larger significance. As Robert Kohler has ably described, the decision to adopt drosophila as the organism of choice for genetics research had far-reaching implications for the development of 20th century biology. In a similar manner, the decision to focus on chess as the measure of both human and computer intelligence had important and unintended consequences for AI research. This paper explores the emergence of chess as an experimental technology, its significance in the developing research practices of the AI community, and the unique ways in which the decision to focus on chess shaped the program of AI research in the decade of the 1970s. More broadly, it attempts to open up the virtual black box of computer software – and of computer games in particular – to the scrutiny of historical and sociological analysis. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
26. Advances in medical decision support systems.
- Author
-
Übeyli, Elif Derya
- Subjects
MEDICAL decision making ,DECISION support systems ,ARTIFICIAL intelligence ,DATA mining ,ALGORITHMS ,COMPUTER science ,COMPUTER network resources - Abstract
The article presents an introduction to articles published within this edition of the journal, focusing on medical decision support systems and automated diagnostic systems. These include "Augmentation of a nearest neighbour clustering algorithm with a partial supervision strategy for biomedical data classification," by Salem et al., "Comparison of different classifier algorithms for diagnosing macular and optic nerve diseases," by Polat et al., and "Electromyography signal analysis using wavelet transform and higher order statistics to determine muscle contraction," by Hussain et al.
- Published
- 2009
- Full Text
- View/download PDF
27. Migrating Techniques from Search-Based Multi-Agent Path Finding Solvers to SAT-Based Approach.
- Author
-
Surynek, Pavel, Stern, Roni, Boyarski, Eli, and Felner, Ariel
- Subjects
DEEP learning ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,ALGORITHMS ,COMPUTER science - Abstract
In the multi-agent path finding problem (MAPF) we are given a set of agents each with respective start and goal positions. The task is to find paths for all agents while avoiding collisions, aiming to minimize a given objective function. Many MAPF solvers were introduced in the past decade for optimizing two specific objective functions: sum-of-costs and makespan. Two prominent categories of solvers can be distinguished: search-based solvers and compilation-based solvers. Search-based solvers were developed and tested for the sum-of-costs objective, while the most prominent compilation-based solvers that are built around Boolean satisfiability (SAT) were designed for the makespan objective. Very little is known about the performance and relevance of solvers from the compilation-based approach on the sum-of-costs objective. In this paper, we start to close the gap between these cost functions in the compilation-based approach. Our main contribution is a new SAT-based MAPF solver called MDD-SAT, aimed directly at optimally solving the MAPF problem under the sum-of-costs objective function. Using both a lower bound on the sum-of-costs and an upper bound on the makespan, MDD-SAT is able to generate a reasonable number of Boolean variables in our SAT encoding. We then further improve the encoding by borrowing ideas from ICTS, a search-based solver. In addition, we show that concepts applicable in search-based solvers like ICTS and ICBS are applicable in the SAT-based approach as well. Specifically, we integrate independence detection, a generic technique for decomposing an MAPF instance into independent subproblems, into our SAT-based approach, and we design a relaxation of our optimal SAT-based solver that results in a bounded suboptimal SAT-based solver.
Experimental evaluation on several domains shows that there are many scenarios where our SAT-based methods outperform state-of-the-art sum-of-costs search-based solvers, such as variants of the ICTS and ICBS algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
28. A General Framework for Learning Rules From Data.
- Author
-
Apolloni, Bruno, Esposito, Anna, Maichiodi, Dario, Orovas, Christos, Palmas, Giorgio, and Taylor, John G.
- Subjects
BOOLEAN algebra ,ALGORITHMS ,ALGEBRA ,ARTIFICIAL intelligence ,COMPUTER science ,ARTIFICIAL neural networks - Abstract
With the aim of getting understandable symbolic rules to explain a given phenomenon, we split the task of learning these rules from sensory data in two phases: a multilayer perceptron maps features into propositional variables and a set of subsequent layers operated by a PAC-like algorithm learns Boolean expressions on these variables. The special features of this procedure are that: i) the neural network is trained to produce a Boolean output having the principal task of discriminating between classes of inputs; ii) the symbolic part is directed to compute rules within a family that is not known a priori; iii) the welding point between the two learning systems is represented by a feedback based on a suitability evaluation of the computed rules. The procedure we propose is based on a computational learning paradigm set up recently in some papers in the fields of theoretical computer science, artificial intelligence and cognitive systems. The present article focuses on information management aspects of the procedure. We deal with the lack of prior information about the rules through learning strategies that affect both the meaning of the variables and the description length of the rules into which they combine. The paper uses the task of learning to formally discriminate among several emotional states as both a working example and a test bench for a comparison with previous symbolic and subsymbolic methods in the field. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
29. PERFORMANCE BOUNDARIES OF OPTIMAL WEIGHTED MEDIAN FILTERS.
- Author
-
Lukac, Rastislav
- Subjects
DISTRIBUTION (Probability theory) ,ALGORITHMS ,GENETIC algorithms ,GENETIC programming ,ARTIFICIAL intelligence ,IMAGE processing ,COMPUTER science ,SIGNAL processing ,FILTERS (Mathematics) - Abstract
This paper focuses on image filtering using weighted median (WM) filters, a nonlinear filter class that takes advantage of robust order-statistic theory and the capability to adapt filter behavior to a variety of statistics of the desired signals and noise distributions. The main contribution of the paper is the analysis of four WM optimization schemes, namely genetic WM optimization, a non-adaptive WM optimization algorithm, and adaptive WM filtering utilizing the linear and the sigmoidal approximations of the sign function. The analysis is done through extensive simulations, in which several aspects such as noise reduction, edge preservation, error estimation, and the dependence of error criteria on the degree of impulse noise corruption are examined. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
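The weighted median operation underlying all four optimization schemes above can be illustrated directly: sort the window samples and pick the first value at which the cumulative weight reaches half the total. A toy 1-D sketch (the weights here are illustrative, not optimized as in the paper):

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median: sort values, then pick the first one at which
    the cumulative weight reaches half of the total weight."""
    order = np.argsort(values)
    v, w = np.asarray(values, float)[order], np.asarray(weights, float)[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def wm_filter(signal, weights):
    """Slide a window (same length as `weights`) over the signal."""
    k = len(weights)
    pad = np.pad(signal, k // 2, mode='edge')
    return np.array([weighted_median(pad[i:i + k], weights)
                     for i in range(len(signal))])

# An impulse on a flat signal is removed; uniform weights = plain median.
noisy = np.array([5.0, 5.0, 90.0, 5.0, 5.0])
clean = wm_filter(noisy, weights=[1, 2, 1])
```

Equivalently, the weighted median is the value minimizing the weighted sum of absolute deviations, which is what makes WM filters robust to impulse noise while the weights let them be tuned toward edge preservation.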
30. A Local Search Approach to Modelling and Solving Interval Algebra Problems.
- Author
-
Thornton, J., Beaumont, M., Sattar, A., and Maher, Michael
- Subjects
COMPUTER logic ,ARTIFICIAL intelligence ,MATHEMATICAL logic ,ALGORITHMS ,COMPUTER science ,PROBLEM solving ,ALGEBRA - Abstract
Local search techniques have attracted considerable interest in the artificial intelligence community since the development of GSAT and the min-conflicts heuristic for solving propositional satisfiability (SAT) problems and binary constraint satisfaction problems (CSPs) respectively. Newer techniques, such as the discrete Lagrangian method (DLM), have significantly improved on GSAT and can also be applied to general constraint satisfaction and optimization. However, local search has yet to be successfully employed in solving temporal constraint satisfaction problems (TCSPs). This paper argues that current formalisms for representing TCSPs are inappropriate for a local search approach, and proposes an alternative CSP-based end-point ordering model for temporal reasoning. The paper looks at modelling and solving problems formulated using Allen's interval algebra (IA) and proposes a new constraint weighting algorithm derived from DLM. Using a set of randomly generated IA problems, it is shown that local search outperforms existing consistency-enforcing algorithms on those problems that the existing techniques find most difficult. [ABSTRACT FROM PUBLISHER]
- Published
- 2004
- Full Text
- View/download PDF
31. A survey on algorithms for Nash equilibria in finite normal-form games.
- Author
-
Li, Hanyu, Huang, Wenhan, Duan, Zhijian, Mguni, David Henry, Shao, Kun, Wang, Jun, and Deng, Xiaotie
- Subjects
NASH equilibrium ,ALGORITHMS ,GAME theory ,ARTIFICIAL intelligence ,COMPUTER science - Abstract
Nash equilibrium is one of the most influential solution concepts in game theory. With the development of computer science and artificial intelligence, there is an increasing demand for Nash equilibrium computation, especially for Internet economics and multi-agent learning. This paper reviews various algorithms for computing Nash equilibria and their approximations in finite normal-form games from both theoretical and empirical perspectives. For the theoretical part, we classify algorithms in the literature and present basic ideas on algorithm design and analysis. For the empirical part, we present a comprehensive comparison of the algorithms in the literature over different kinds of games. Based on these results, we provide practical suggestions on implementations and uses of these algorithms. Finally, we present a series of open problems from both theoretical and practical considerations. • A classification of Nash equilibrium algorithms in the literature, with basic ideas on design and analysis presented. • A comprehensive comparison of Nash equilibrium algorithms in the literature over different kinds of games. • Practical suggestions on implementations and uses of Nash equilibrium algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
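For the smallest nontrivial case covered by such surveys, a 2x2 bimatrix game, a mixed Nash equilibrium can be computed from the indifference principle: each player mixes so that the other player's two actions pay off equally. A sketch on our own toy example (this is a textbook construction, not an algorithm taken from the survey):

```python
import numpy as np

def mixed_nash_2x2(A, B):
    """Mixed Nash equilibrium of a 2x2 bimatrix game (A = row player's
    payoffs, B = column player's payoffs) via the indifference principle.
    Assumes a fully mixed equilibrium exists (denominators nonzero)."""
    # Column player mixes (q, 1-q) so the row player is indifferent:
    q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
    # Row player mixes (p, 1-p) so the column player is indifferent:
    p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] - B[1, 0] + B[1, 1])
    return p, q

# Matching pennies: zero-sum, no pure equilibrium, unique mixed one.
A = np.array([[1, -1], [-1, 1]], dtype=float)    # row player's payoffs
p, q = mixed_nash_2x2(A, -A)                     # B = -A for zero-sum
```

Beyond 2x2 this closed form disappears, which is exactly why the survey's catalogue of support-enumeration, homotopy, and learning-based algorithms is needed.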
32. RCRA 2009 Experimental Evaluation of Algorithms for Solving Problems with Combinatorial Explosion.
- Author
-
Gavanelli, Marco, Mancini, Toni, and Pettorossi, Alberto
- Subjects
SCIENTIFIC community ,ALGORITHMS ,PROBLEM solving ,COMBINATORICS ,COMPUTER science ,ARTIFICIAL intelligence ,CONFERENCES & conventions - Published
- 2011
- Full Text
- View/download PDF
33. ARTIFICIAL INTELLIGENCE APPROACHES IN FINANCE AND ACCOUNTING.
- Author
-
KRÁJNIK, Izabella and DEMETER, Robert
- Subjects
ARTIFICIAL intelligence ,COMPUTER science ,PROBLEM solving ,INTELLIGENT tutoring systems ,MEMORIZATION ,ALGORITHMS - Abstract
This paper takes an integrated approach to accounting from several perspectives, starting from a study of the international literature and transposing the main Society 5.0 benchmarks to highlight their impact. Artificial intelligence refers to the field of computer science that goes beyond classical computing, aiming to solve efficiently problems for which no classical computational algorithm exists. An intelligent system must be able to do more than solve problems that require computing power, memorization and retrieval of knowledge, or reasoning control; it should be able to see, speak, hear, understand, reason, and command similarly to humans. These are big challenges for intelligent systems, which is why the attention of researchers and the business world is increasingly turning to artificial intelligence. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
34. A New Asymptotic Notation: Weak Theta.
- Author
-
Mogoş, Andrei-Horia, Mogoş, Bianca, and Florea, Adina Magda
- Subjects
THETA functions ,ALGORITHMS ,ARTIFICIAL intelligence ,COMPARATIVE studies ,COMPUTER science - Abstract
Algorithms are one of the fundamental concerns of computer science, and asymptotic notations are widely accepted as the main tool for estimating the complexity of algorithms. Over the years a number of asymptotic notations have been proposed. Each of these notations is based on comparing complexity functions against a single given complexity function. In this paper, we define a new asymptotic notation, called “Weak Theta,” that compares complexity functions against two given complexity functions. The Weak Theta notation is especially useful for characterizing complexity functions whose behaviour is hard to approximate with a single complexity function. In addition, in order to highlight the main particularities of Weak Theta, we propose and prove several theoretical results: properties of Weak Theta, criteria for comparing two complexity functions, and properties of a new set of complexity functions (also defined in the paper) based on Weak Theta. Furthermore, to illustrate the usefulness of our notation, we discuss an application of Weak Theta in artificial intelligence. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
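The abstract above does not state the formal definition. One plausible reading of a notation built on two given complexity functions, explicitly an assumption on our part (the paper's exact definition may differ), is:

```latex
% Assumed reading: f is "weakly Theta" of the pair (g_1, g_2) when
% g_1 bounds it below and g_2 bounds it above, asymptotically.
f(n) \in \mathrm{W\Theta}(g_1, g_2)
  \iff
  \exists\, c_1, c_2 > 0,\ \exists\, n_0 \in \mathbb{N} :\quad
  c_1\, g_1(n) \;\le\; f(n) \;\le\; c_2\, g_2(n)
  \quad \text{for all } n \ge n_0 .
```

Under this reading, f ∈ Ω(g_1) ∩ O(g_2), and the classical Θ(g) is recovered as the special case g_1 = g_2 = g, matching the abstract's claim that Weak Theta handles functions that no single bounding function characterizes well.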
35. An Optimization Model Based on Game Theory.
- Author
-
Yang Shi, Yongkang Xing, Chao Mou, and Zhuqing Kuang
- Subjects
GAME theory ,COMPUTER science ,GLOBAL optimization ,PERTURBATION theory ,ALGORITHMS ,ARTIFICIAL intelligence ,NASH equilibrium - Abstract
Game theory has a wide range of applications in economics, but it is seldom used in computer science, especially in optimization algorithms. In this paper, we integrate game-theoretic thinking into an optimization algorithm and propose a new optimization model that can be widely used in optimization processing. The model comes in two variants, called "complete consistency" and "partial consistency"; partial consistency adds a disturbance strategy on top of complete consistency. When the model's consistency is satisfied, the Nash equilibrium of the optimization model is globally optimal; when consistency is not met, the perturbation strategy broadens the applicability of the algorithm. Basic experiments suggest that this optimization model has broad applicability and better performance, and it offers a new idea for some intractable problems in the field of artificial intelligence. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
36. Personalized Knowledge Mining in Large Text Sets.
- Author
-
Chudzian, Cezary, Granat, Janusz, Klimasara, Edward, Sobieszek, Jarosław, and Wierzbicki, Andrzej P.
- Subjects
EXPERT systems ,ONTOLOGIES (Information retrieval) ,AUTOMATIC classification ,DATA mining ,KNOWLEDGE management ,COMPUTER science ,ALGORITHMS ,ARTIFICIAL intelligence - Abstract
The paper starts with a discussion of the concept of knowledge engineering, in particular ontological engineering. Consequently, the paper presents assumptions accepted as a basis for a group research on a radically personalized system of ontological knowledge mining, relying on the perspective of human centered computing and combining ontological concepts of the user with an ontology resulting from an automatic classification of a given set of textual data. The paper presents a pilot system PrOnto that supports research work in two aspects: searching for information interesting for a user according to her/his personalized ontological profile, and supporting research cooperation in a group of users (Virtual Research Community) according, e.g., to a comparison of such personalized ontological profiles. The paper concludes with suggestions concerning diverse applications of ontological engineering tools and future work. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
37. Noise Reduction Based on Modified Spectral Subtraction Method.
- Author
-
Verteletskaya, Ekaterina and Simak, Boris
- Subjects
BROADBAND communication systems ,ALGORITHMS ,INTELLIGIBILITY of speech ,MATHEMATICAL functions ,ARTIFICIAL intelligence ,SPECTRUM analysis ,COMPUTER science - Abstract
In this paper, we propose a method for enhancing speech corrupted by broadband noise. The method is based on the spectral subtraction technique. The performance of spectral subtraction, its limitations, the artifacts it introduces, and modifications for eliminating those artifacts are discussed in detail. To eliminate musical noise, one of the artifacts introduced by conventional spectral subtraction, we propose a reduced, varying scaling factor for spectral subtraction, followed by the application of a weighting function. The weighting function used in the proposed algorithm attenuates frequency spectrum components lying outside identified formant regions. The algorithm achieves a substantial reduction of musical noise without significantly distorting the speech. Listening tests were performed to determine the subjective quality and intelligibility of speech enhanced by our method. The proposed noise reduction algorithm is compared to conventional spectral subtraction in terms of the SNR improvement each provides. Spectrograms of speech enhanced by the proposed algorithm and by other modified spectral subtraction algorithms, which show the algorithms' performance and degree of speech distortion, are also presented in the paper. [ABSTRACT FROM AUTHOR]
- Published
- 2011
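The conventional baseline that the paper above modifies, magnitude spectral subtraction with a fixed over-subtraction factor and a spectral floor, can be sketched as follows. The signal is a synthetic tone plus noise, and all parameter values are illustrative, not the paper's.

```python
import numpy as np

def spectral_subtract(x, frame=256, noise_frames=4, alpha=2.0, floor=0.01):
    """Basic magnitude spectral subtraction: estimate the noise spectrum
    from the first few (assumed speech-free) frames, subtract a scaled
    copy from every frame, and floor the result to limit musical noise."""
    n = len(x) // frame
    frames = x[:n * frame].reshape(n, frame)
    spec = np.fft.rfft(frames, axis=1)
    noise_mag = np.abs(spec[:noise_frames]).mean(axis=0)
    mag = np.abs(spec) - alpha * noise_mag          # over-subtraction
    mag = np.maximum(mag, floor * noise_mag)        # spectral floor
    out = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frame, axis=1)
    return out.ravel()

rng = np.random.default_rng(0)
t = np.arange(4096) / 8000.0
clean = np.sin(2 * np.pi * 440 * t)                 # stand-in for speech
noisy = np.concatenate([np.zeros(1024), clean]) + 0.3 * rng.normal(size=5120)
enhanced = spectral_subtract(noisy)                 # first 1024 samples: noise only
```

The random bins that survive the fixed floor are the source of the "musical noise" artifact; the paper replaces the fixed `alpha` with a varying scaling factor and adds formant-based weighting to suppress it.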
38. Comparative Study of the Inference Mechanisms in PROLOG and SPIDER.
- Author
-
Golemanova, Emilia, Golemanov, Tzanko, and Kratchanov, Kostadin
- Subjects
PROLOG (Computer program language) ,ALGORITHMS ,COMPUTER science ,MATHEMATICAL optimization ,ARTIFICIAL intelligence - Abstract
Control Network Programming (CNP) is a graphical nonprocedural programming style whose built-in inference engine (interpreter) is based on search in a recursive network. This paper is the third in a series of reports that share a common objective - comparison between the CNP language SPIDER and the logic programming language PROLOG. The focus here is on the comparative investigation of their interpreters, presented in a generic formal frame - reduction of goals. As a result of juxtaposing their pseudo-codes the advantages of SPIDER are outlined. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
39. Partially Observable Markov Decision Processes: A Geometric Technique and Analysis.
- Author
-
Zhang, Hao
- Subjects
DYNAMIC programming ,MARKOV processes ,ALGORITHMS ,COMPUTATIONAL complexity ,MATHEMATICS ,COMPUTER science ,ARTIFICIAL intelligence - Abstract
This paper presents a novel framework for studying partially observable Markov decision processes (POMDPs) with finite state, action, observation sets, and discounted rewards. The new framework is solely based on future-reward vectors associated with future policies, which is more parsimonious than the traditional framework based on belief vectors. It reveals the connection between the POMDP problem and two computational geometry problems, i.e., finding the vertices of a convex hull and finding the Minkowski sum of convex polytopes, which can help solve the POMDP problem more efficiently. The new framework can clarify some existing algorithms over both finite and infinite horizons and shed new light on them. It also facilitates the comparison of POMDPs with respect to their degree of observability, as a useful structural result. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
40. NARROW PROOFS MAY BE SPACIOUS: SEPARATING SPACE AND WIDTH IN RESOLUTION.
- Author
-
NORDSTRÖM, JAKOB
- Subjects
COMPUTATIONAL complexity ,ALGORITHMS ,PLEONASM ,REASONING ,COMPUTER science ,AUTOMATIC theorem proving ,ARTIFICIAL intelligence ,PROOF theory ,SPACE - Abstract
The width of a resolution proof is the maximal number of literals in any clause of the proof. The space of a proof is the maximal number of clauses kept in memory simultaneously if the proof is only allowed to infer new clauses from clauses currently in memory. Both of these measures have previously been studied and related to the resolution refutation size of unsatisfiable conjunctive normal form (CNF) formulas. Also, the minimum refutation space of a formula has been proven to be at least as large as the minimum refutation width, but it has been open whether space can be separated from width or the two measures coincide asymptotically. We prove that there is a family of k-CNF formulas for which the refutation width in resolution is constant but the refutation space is nonconstant, thus solving a problem mentioned in several previous papers. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
41. SYMMETRY AXIS BASED OBJECT RECOGNITION UNDER TRANSLATION, ROTATION AND SCALING.
- Author
-
HYDER, MASHUD, ISLAM, MD. MONIRUL, AKHAND, M. A. H., and MURASE, KAZUYUKI
- Subjects
ARTIFICIAL neural networks ,OPTICAL pattern recognition ,ALGORITHMS ,ARTIFICIAL intelligence ,FEATURE extraction ,COMPUTER science - Abstract
This paper presents a new approach, known as symmetry axis based feature extraction and recognition (SAFER), for recognizing objects under translation, rotation and scaling. Unlike most previous invariant object recognition (IOR) systems, SAFER puts emphasis on both the simplicity and the accuracy of the recognition system. To achieve simplicity, it uses simple formulae for extracting invariant features from an object. The feature extraction scheme is based on the axis of symmetry and the angles of concentric circles drawn around the object. SAFER divides the extracted features into a number of groups based on their similarity. To improve recognition performance, SAFER uses a number of neural networks (NNs) instead of a single NN for training and recognition of the extracted features. SAFER has been tested on two real-world problems: English characters in two different fonts and images of different shapes. The experimental results show that SAFER produces good recognition performance in comparison with other algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
42. Beam-ACO for Simple Assembly Line Balancing.
- Author
-
Blum, Christian
- Subjects
MATHEMATICAL optimization ,OPERATIONS research ,MATHEMATICAL analysis ,PRODUCTION scheduling ,HEURISTIC ,MANUFACTURED products ,ANT algorithms ,ALGORITHMS ,MANUFACTURING processes - Abstract
Assembly line balancing problems are concerned with the optimization of manufacturing processes. In this paper we consider the so-called simple assembly line balancing problem with the objective of minimizing the number of used workstations. This problem is denoted by SALB-1 in the literature. For tackling this problem, we present a so-called Beam-ACO approach. This technique results from hybridizing the metaheuristic ant colony optimization with beam search. The experimental results show that our algorithm is a state-of-the-art method for this problem. It can solve 263 of 269 existing benchmark instances to optimality. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
43. ANALYSIS OF THE RESUME LEARNING PROCESS FOR SPIKING NEURAL NETWORKS.
- Author
-
Ponulak, Filip
- Subjects
LEARNING ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,COMPUTER science ,ALGORITHMS ,INFORMATION technology - Abstract
In this paper we perform an analysis of the learning process with the ReSuMe method and spiking neural networks (Ponulak, 2005; Ponulak, 2006b). We investigate how the particular parameters of the learning algorithm affect the process of learning. We consider the issue of speeding up the adaptation process, while maintaining the stability of the optimal solution. This is an important issue in many real-life tasks where the neural networks are applied and where the fast learning convergence is highly desirable. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
44. A neural networks-based negative selection algorithm in fault diagnosis.
- Author
-
Gao, X. Z., Ovaska, S. J., Wang, X., and Chow, M. Y.
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,DETECTORS ,ALGORITHMS ,COMPUTER science - Abstract
Inspired by the self/nonself discrimination theory of the natural immune system, the negative selection algorithm (NSA) is an emerging computational intelligence method. Generally, detectors in the original NSA are first generated in a random manner. However, those detectors matching the self samples are eliminated thereafter. The remaining detectors can therefore be employed to detect any anomaly. Unfortunately, conventional NSA detectors are not adaptive for dealing with time-varying circumstances. In the present paper, a novel neural networks-based NSA is proposed. The principle and structure of this NSA are discussed, and its training algorithm is derived. Taking advantage of efficient neural networks training, it has the distinguishing capability of adaptation, which is well suited for handling dynamical problems. A fault diagnosis scheme using the new NSA is also introduced. Two illustrative simulation examples of anomaly detection in chaotic time series and inner raceway fault diagnosis of motor bearings demonstrate the efficiency of the proposed neural networks-based NSA. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
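The censoring/monitoring structure of the original (non-neural) negative selection algorithm that the paper above builds on can be sketched as follows, using toy 2-D "self" data; the detector count, matching radius, and data layout are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_detectors(self_set, n_detectors=200, radius=0.12):
    """Censoring phase: generate random candidate detectors and keep
    only those that do NOT match any 'self' sample."""
    detectors = []
    while len(detectors) < n_detectors:
        d = rng.random(2)                        # candidate in the unit square
        if np.min(np.linalg.norm(self_set - d, axis=1)) > radius:
            detectors.append(d)                  # survives censoring
    return np.array(detectors)

def is_anomaly(x, detectors, radius=0.12):
    """Monitoring phase: a sample matched by any detector is non-self."""
    return bool(np.min(np.linalg.norm(detectors - x, axis=1)) <= radius)

# 'Self' = normal operating points clustered in one corner of the unit square.
self_set = 0.25 * rng.random((300, 2))
detectors = train_detectors(self_set)
normal_flag = is_anomaly(np.array([0.1, 0.1]), detectors)    # inside self region
anomaly_flag = is_anomaly(np.array([0.9, 0.9]), detectors)   # far from self
```

These fixed detectors cannot track a drifting notion of "self", which is the limitation the paper's neural-network-based, adaptively trained NSA is designed to remove.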
45. Understanding the Biases of Generalised Recombination: Part II.
- Author
-
Poli, Riccardo and Stephens, Christopher R.
- Subjects
EVOLUTIONARY computation ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,COMPUTER science ,ALGORITHMS - Abstract
This is the second part of a two-part paper where we propose, model theoretically and study a general notion of recombination for fixed-length strings where homologous recombination, inversion, gene duplication, gene deletion, diploidy and more are just special cases. In Part I, we derived both microscopic and coarse-grained evolution equations for strings and schemata for a selecto-recombinative GA using generalised recombination, and we explained the hierarchical nature of the schema evolution equations. In this part, we provide a variety of fixed points for evolution in the case where recombination is used alone, thereby generalising Geiringer's theorem. In addition, we numerically integrate the infinite-population schema equations for some interesting problems, where selection and recombination are used together to illustrate how these operators interact. Finally, to assess by how much genetic drift can make a system deviate from the infinite-population-model predictions we discuss the results of real GA runs for the same model problems with generalised recombination, selection and finite populations of different sizes. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
46. Neural and Wavelet Network Models for Financial Distress Classification.
- Author
-
Becerra, Victor M., Galvão, Roberto K. H., Abou-Seada, Magda, and Webb, Geoff
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,ALGORITHMS ,COMPUTER programming ,DISCRIMINANT analysis ,COMPUTER science - Abstract
This work analyzes the use of linear discriminant models, multi-layer perceptron neural networks and wavelet networks for corporate financial distress prediction. Although simple and easy to interpret, linear models require statistical assumptions that may be unrealistic. Neural networks are able to discriminate patterns that are not linearly separable, but the large number of parameters involved in a neural model often causes generalization problems. Wavelet networks are classification models that implement nonlinear discriminant surfaces as the superposition of dilated and translated versions of a single ‘mother wavelet’ function. In this paper, an algorithm is proposed to select dilation and translation parameters that yield a wavelet network classifier with good parsimony characteristics. The models are compared in a case study involving failed and continuing British firms in the period 1997–2000. Problems associated with over-parameterized neural networks are illustrated and the Optimal Brain Damage pruning technique is employed to obtain a parsimonious neural model. The results, supported by a re-sampling study, show that both neural and wavelet networks may be a valid alternative to classical linear discriminant models. [ABSTRACT FROM AUTHOR]
- Published
- 2005
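A wavelet network of the kind this abstract describes builds its discriminant as a linear combination of dilated and translated copies of a mother wavelet. The sketch below uses a Mexican-hat wavelet and a fixed grid of dilations/translations on invented 1-D data; the paper's parsimony-driven parameter-selection algorithm is not reproduced:

```python
import numpy as np

def mexican_hat(u):
    """A common mother-wavelet choice (not necessarily the paper's)."""
    return (1 - u**2) * np.exp(-u**2 / 2)

def design_matrix(x, translations, dilations):
    # Each column is one dilated/translated copy of the mother wavelet.
    return np.column_stack([mexican_hat((x - t) / d)
                            for t, d in zip(translations, dilations)])

rng = np.random.default_rng(0)
# Toy two-class problem: class +1 clustered at 0, class -1 at +/-3.
x = np.concatenate([rng.normal(0, 0.5, 100),
                    rng.normal(3, 0.5, 50), rng.normal(-3, 0.5, 50)])
y = np.concatenate([np.ones(100), -np.ones(100)])

# A fixed grid stands in for the paper's selection procedure.
translations = np.array([-3.0, 0.0, 3.0])
dilations = np.array([1.0, 1.0, 1.0])

Phi = design_matrix(x, translations, dilations)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # linear output weights
pred = np.sign(Phi @ w)
accuracy = np.mean(pred == y)
print(accuracy)
```

With the wavelet parameters fixed, only the output weights remain, so the fit reduces to ordinary least squares, which is what keeps the model parsimonious relative to a fully trained neural network.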
47. A generalised regression algorithm for Web page categorisation.
- Author
-
Anagnostopoulos, Ioannis, Anagnostopoulos, Christos, Kouzas, George, and Vergados, Dimitrios
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,ALGORITHMS ,ALGEBRA ,COMPUTER science - Abstract
This paper proposes an information system that classifies Web pages according to a taxonomy mainly used by seven search engines/directories. The proposed classifier is a four-layer generalised regression neural network (GRNN) that performs information segmentation using information-filtering techniques based on content descriptor vectors. Eight categories of Web pages were used to evaluate the robustness of the method, and no restrictions were imposed except for the language of the content, which is English. The system can be used as an assistant and consultative tool for classification purposes, as well as for estimating the population of Web pages at any given point in time. [ABSTRACT FROM AUTHOR]
- Published
- 2004
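A GRNN's four layers (input, pattern, summation, output) reduce to a kernel-weighted average of stored training targets. A minimal sketch on hypothetical "content descriptor" vectors (the descriptors and two-category taxonomy below are invented for illustration, not taken from the paper):

```python
import numpy as np

def grnn_predict(X_train, Y_train, X_query, sigma=0.5):
    """Input layer -> pattern layer (one Gaussian unit per training
    sample) -> summation layer (numerator/denominator sums) -> output
    layer (their ratio)."""
    # pattern layer: Gaussian activation for each stored training vector
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))
    # summation + output layers: kernel-weighted average of the targets
    return K @ Y_train / K.sum(axis=1, keepdims=True)

# Hypothetical 3-dim descriptor vectors for two page categories.
X = np.array([[1.0, 0.0, 0.0], [0.9, 0.1, 0.0],   # category 0
              [0.0, 0.1, 1.0], [0.1, 0.0, 0.9]])  # category 1
Y = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)  # one-hot

query = np.array([[0.95, 0.05, 0.0], [0.0, 0.05, 0.95]])
probs = grnn_predict(X, Y, query)
labels = probs.argmax(axis=1)
print(labels)
```

Because a GRNN stores its training set rather than iterating weight updates, it trains in one pass; only the smoothing parameter `sigma` needs tuning.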
48. SPEAKER-INDEPENDENT SPEECH RECOGNITION BASED ON FAST NEURAL NETWORK.
- Author
-
Cui Tao, N. and Zhang Taiyi, N.
- Subjects
ARTIFICIAL neural networks ,SPEECH perception ,COMPUTER science ,INFORMATION processing ,ALGORITHMS ,ARTIFICIAL intelligence - Abstract
This paper presents a fast neural network algorithm in which the learning step is treated as a function of the error and of the output function of the network node, and the weights are updated using these varying steps. Adopting the fast NN algorithm, we developed a speaker-independent speech recognition system. Experiments show that the new algorithm is over 10 times faster than the traditional BP algorithm and has better performance and generalisation ability. [ABSTRACT FROM AUTHOR]
- Published
- 2003
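One plausible reading of "the step is a function of the error and the output function of the network node" is a delta-rule update whose step size scales with the current error and the node's output slope. The sketch below trains a single sigmoid unit this way on toy data; the paper's exact step rule and network architecture are not reproduced here:

```python
import math
import random

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def train(samples, epochs=1000, eta0=2.0):
    """Online training where the step size itself depends on the error
    (abs(err)) and on the node's output function slope (y * (1 - y))."""
    random.seed(1)
    w = [random.uniform(-0.1, 0.1) for _ in range(3)]  # 2 inputs + bias
    for _ in range(epochs):
        for x, target in samples:
            xb = x + [1.0]
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, xb)))
            err = target - y
            # error-dependent step: large when the error is large,
            # shrinking automatically as the network converges
            step = eta0 * abs(err) * y * (1 - y)
            w = [wi + step * err * xi for wi, xi in zip(w, xb)]
    return w

# Toy linearly separable task (AND function), stand-in for speech features.
data = [([0.0, 0.0], 0), ([0.0, 1.0], 0), ([1.0, 0.0], 0), ([1.0, 1.0], 1)]
w = train(data)
preds = [round(sigmoid(sum(wi * xi for wi, xi in zip(w, x + [1.0]))))
         for x, _ in data]
print(preds)
```

The speed-up claimed over plain BP comes from the step adapting per sample instead of staying at one fixed learning rate.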
49. Interpretable algorithmic forensics.
- Author
-
Garrett, Brandon L. and Rudin, Cynthia
- Subjects
CRIMINAL procedure ,LEGAL judgments ,ARTIFICIAL intelligence ,JUDGE-made law ,COMPUTER science - Abstract
One of the most troubling trends in criminal investigations is the growing use of “black box” technology, in which law enforcement rely on artificial intelligence (AI) models or algorithms that are either too complex for people to understand or that simply conceal how they function. In criminal cases, black box systems have proliferated in forensic areas such as DNA mixture interpretation, facial recognition, and recidivism risk assessments. The champions and critics of AI argue, mistakenly, that we face a catch-22: while black box AI is not understandable by people, they assume that it produces more accurate forensic evidence. In this Article, we question this assertion, which has so powerfully affected judges, policymakers, and academics. We describe a mature body of computer science research showing how “glass box” AI—designed to be interpretable—can be more accurate than black box alternatives. Indeed, black box AI performs predictably worse in settings like the criminal system. Debunking the black box performance myth has implications for forensic evidence, constitutional criminal procedure rights, and legislative policy. Absent some compelling—or even credible—government interest in keeping AI as a black box, and given the constitutional rights and public safety interests at stake, we argue that a substantial burden rests on the government to justify black box AI in criminal cases. We conclude by calling for judicial rulings and legislation to safeguard a right to interpretable forensic AI. [ABSTRACT FROM AUTHOR]
- Published
- 2023
50. Revival of Classical Algorithms: A Bibliometric Study on the Trends of Neural Networks and Genetic Algorithms.
- Author
-
Lou, Ta-Feng and Hung, Wei-Hsi
- Subjects
GENETIC algorithms ,ARTIFICIAL intelligence ,COMPUTER science ,ALGORITHMS ,BIBLIOMETRICS ,SYNTHETIC biology ,DEEP learning - Abstract
The purpose of our bibliometric research was to capture and analyze the trends of two types of well-known classical artificial intelligence (AI) algorithms: neural networks (NNs) and genetic algorithms (GAs). Symmetry is a popular international and interdisciplinary scientific journal that covers six major research subjects (mathematics, computer science, engineering science, physics, biology, and chemistry), all of which are related to our research on classical AI algorithms; we therefore referred to the most innovative research articles on classical AI algorithms published in Symmetry, which have also introduced new advanced applications for NNs and GAs. Furthermore, we used the keywords "neural network algorithm" or "artificial neural network" to search the SSCI database from 2002 to 2021 and obtained 951 NN publications. For comparison purposes, we also analyzed GA trends by using the keyword "genetic algorithm" to search the SSCI database over the same period, obtaining 878 GA publications. All of the NN and GA publication results were categorized into eight groups for deep analysis so as to investigate their current trends and forecasts. Furthermore, we applied the Kolmogorov–Smirnov test (K–S test) to check whether our bibliometric research complied with Lotka's law. In summary, we found that the number of applications for both NNs and GAs continues to grow, but the use of NNs is increasing more sharply than the use of GAs due to the boom in deep learning development. We hope that our research can serve as a roadmap for other NN and GA researchers, helping them to save time and stay at the cutting edge of AI research trends. [ABSTRACT FROM AUTHOR]
- Published
- 2023
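The K–S test against Lotka's law mentioned in the abstract compares an empirical author-productivity distribution with the n^-alpha power law. A minimal sketch with hypothetical author counts (not the study's data), using the 1.63/sqrt(N) critical value commonly applied in bibliometrics at the 0.01 significance level:

```python
import math

# Hypothetical table: papers per author -> number of such authors.
# The real study's counts are not reproduced here.
counts = {1: 600, 2: 150, 3: 70, 4: 35, 5: 25}
N = sum(counts.values())

# Lotka's law: the fraction of authors with n papers is ~ C * n**-alpha
# (classically alpha = 2); normalise C over the observed n values.
alpha = 2.0
ns = sorted(counts)
C = 1 / sum(n ** -alpha for n in ns)
theoretical = [C * n ** -alpha for n in ns]

# Kolmogorov-Smirnov statistic: the largest gap between the empirical
# and theoretical cumulative distributions.
emp_cum = the_cum = D = 0.0
for n, th in zip(ns, theoretical):
    emp_cum += counts[n] / N
    the_cum += th
    D = max(D, abs(emp_cum - the_cum))

critical = 1.63 / math.sqrt(N)  # 0.01 level, large-N approximation
print(round(D, 4), round(critical, 4), D < critical)
```

If D falls below the critical value, the productivity data are consistent with Lotka's law; the invented counts above were chosen close to the n^-2 curve, so the test passes.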