255 results
Search Results
2. Guided Intelligent Hyper-Heuristic Algorithm for Critical Software Application Testing Satisfying Multiple Coverage Criteria.
- Author
-
Rani, S. Alagu, Akila, C., and Raja, S. P.
- Subjects
COMPUTER software testing ,APPLICATION software ,DECISION support systems ,ALGORITHMS ,INTELLIGENT agents ,OPTIMIZATION algorithms - Abstract
This paper proposes a novel algorithm that combines symbolic execution and data flow testing to generate test cases satisfying multiple coverage criteria of critical software applications. The coverage criteria considered are data flow coverage as the primary criterion, with software safety requirements and equivalence partitioning as sub-criteria. The subjects used for the study are characterized by high-precision floating-point computation and iterative programs. The proposed algorithm aids the tester in automated test data generation satisfying multiple coverage criteria for critical software, adapting itself by selecting different heuristics based on program characteristics. To accomplish this adaptability, the algorithm uses an intelligent agent as its decision support system. The intelligent agent consults a knowledge base to select different low-level heuristics based on the current state of the problem instance during each generation of the genetic algorithm's execution; the knowledge base mimics an expert's decisions in choosing the appropriate heuristics. The algorithm outperforms the alternatives, accomplishing 100% data flow coverage for all subjects, whereas a simple genetic algorithm, random testing, and a hyper-heuristic algorithm accomplish at most 83%, 67%, and 76.7%, respectively, for the subject program with high complexity. The proposed algorithm also covers the other criteria, namely equivalence partition coverage and software safety requirements, with fewer iterations. The results reveal that test cases generated by the proposed algorithm are also effective in fault detection, killing 87.2% of mutants for the complex subject, compared to at most 76.4% for the test cases of the other methods. [ABSTRACT FROM AUTHOR]
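As a rough illustration of the selection mechanism the abstract describes, the sketch below pairs a symbolic problem state with a low-level heuristic via a lookup table each generation. The state labels, heuristic names, and coverage gains are invented for illustration and are not from the paper.

```python
# Toy knowledge base: maps an observed problem state to a preferred
# low-level heuristic, mimicking the expert decisions described above.
# State names and heuristics here are illustrative, not from the paper.
KNOWLEDGE_BASE = {
    "low_coverage":  "aggressive_mutation",
    "plateau":       "restart_subpopulation",
    "near_optimum":  "local_search",
}

def classify_state(coverage, stagnant_gens):
    """Reduce the current GA state to a symbolic label for the agent."""
    if coverage > 0.9:
        return "near_optimum"
    if stagnant_gens >= 3:
        return "plateau"
    return "low_coverage"

def run_hyper_heuristic(generations=10):
    """Each generation, the 'intelligent agent' consults the knowledge
    base and picks a low-level heuristic for the next step."""
    coverage, stagnant, trace = 0.2, 0, []
    for _ in range(generations):
        state = classify_state(coverage, stagnant)
        heuristic = KNOWLEDGE_BASE[state]
        trace.append(heuristic)
        # Stand-in for actually applying the heuristic to the population:
        gain = {"aggressive_mutation": 0.15,
                "restart_subpopulation": 0.10,
                "local_search": 0.02}[heuristic]
        coverage = min(1.0, coverage + gain)
        stagnant = stagnant + 1 if gain < 0.05 else 0
    return coverage, trace

cov, trace = run_hyper_heuristic()
```

On this toy trajectory the agent applies the exploratory heuristic while coverage is low and switches to local search near the optimum, which is the adaptive behavior the paper attributes to its knowledge base.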
- Published
- 2024
- Full Text
- View/download PDF
3. GRAPH MATCHING AND LEARNING IN PATTERN RECOGNITION IN THE LAST 10 YEARS.
- Author
-
FOGGIA, PASQUALE, PERCANNELLA, GENNARO, and VENTO, MARIO
- Subjects
PATTERN perception ,GRAPHIC methods ,ARTIFICIAL intelligence ,COMPUTER software ,ALGORITHMS - Abstract
In this paper, we examine the main advances registered over the last ten years in Pattern Recognition methodologies based on graph matching and related techniques, analyzing more than 180 papers; the aim is to provide a systematic framework presenting the recent history and the current developments. This is done by introducing a categorization of graph-based techniques and reporting, for each class, the main contributions and the most outstanding research results. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
4. Adaptive Compute Offloading Algorithm for Metasystem Based on Deep Reinforcement Learning.
- Author
-
Wang, Chunxin, Wang, Wensheng, Li, Wenjing, Liu, Zhu, Zhu, Jinhong, and Zhang, Nan
- Subjects
ADAPTIVE computing systems ,REINFORCEMENT learning ,ALGORITHMS ,TIME-varying networks ,SEARCH algorithms - Abstract
There has been a lot of research on edge-computing task offloading using deep reinforcement learning (DRL). Deep reinforcement learning is one of the important algorithm families in the current AI field, but there is still room for improvement in its time cost and adaptive correction ability. This paper studies the application of DRL algorithms to edge-computing task offloading. Its key innovation is the MADRLCO algorithm, which is based on the design idea of the Actor–Critic framework: a DNN model acts as the Actor and, through iterative training, locates the initial decision more accurately, while an LSTM model optimizes the Critic so that the optimal decision can be located in a short period of time. The main work of this paper is divided into three parts: (1) The Actor–Critic (AC) algorithm from DRL is applied to edge-computing task offloading. (2) To address the weak generalization ability of the basic Actor–Critic algorithm in multi-objective optimization, sequential quantitative correction and an adaptive correction parameter K are used to optimize the Critic, thereby improving the generalization ability of the model in multi-objective decision-making and the rationality of its decisions. (3) To address the large time cost of the model's Critic framework, a search algorithm for resource-allocation parameters based on time-series prediction is proposed (time-series forecasting is a research branch of pattern recognition), which reduces the time overhead of the algorithm and improves the adaptive correction capability of the model. The algorithm in this paper adapts not only to the time-varying network channel state, but also to the time-varying number of device connections.
Finally, experiments show that, compared with a DRL computation-offloading algorithm based on a DNN plus binary search, the MADRLCO algorithm reduces model training time by 66.27%, and in an environment with a time-varying number of devices in the metasystem, its average standard calculation rate is 0.0403 higher than that of the current optimal algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
5. The HHL algorithm: Implementation and research directions.
- Author
-
Sambhaje, Varsha and Chaurasia, Anju
- Subjects
RESEARCH implementation ,ALGORITHMS ,QUANTUM computing ,LINEAR systems ,ARTIFICIAL intelligence ,MACHINE learning ,QUANTUM computers - Abstract
Linear systems of equations lie at the heart of numerous scientific and engineering challenges. In cutting-edge arenas like artificial intelligence, machine learning and neuro-computation, these systems serve as a fundamental tool for mathematical modeling. Classical algorithms for solving linear systems have been extensively developed and form the backbone of diverse applications across various scientific disciplines. However, classical algorithms often encounter complexity limitations as data size increases. The emerging field of quantum computing offers a revolutionary approach to such problems. The Harrow–Hassidim–Lloyd (HHL) algorithm tackles these challenges and opens new avenues for research. This study examines the contemporary effectiveness of the HHL algorithm for solving systems of linear equations. By examining recent research in quantum machine learning, we assess the HHL algorithm's potential to revolutionize hyperparameter optimization for machine learning models, resulting in increased efficiency and cost savings. The paper analyzes the HHL algorithm and its evolution from conception to the latest advancements, and investigates the potential challenges and limitations that might hinder its practical deployment. Identifying these roadblocks will pave the way for future research and development efforts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. A Novel YOLOv5 Deep Learning Model for Handwriting Detection and Recognition.
- Author
-
Moustapha, Maliki, Tasyurek, Murat, and Ozturk, Celal
- Subjects
DEEP learning ,OBJECT recognition (Computer vision) ,ARTIFICIAL intelligence ,COMPUTER vision ,HANDWRITING ,ALGORITHMS - Abstract
Computer Vision (CV) has become an essential field in Artificial Intelligence applications. Object detection and recognition (ODR) is one of the fundamental tasks of computer vision implementations. However, developing an efficient ODR model is still a significant problem. The model's execution time and speed are the most critical features during the inference, detection, and recognition process, and need to be improved using the latest object detection architectures. In this paper, a handwritten detection and recognition (HDR) model is first developed based on previously known, proven algorithms such as Faster R-CNN and YOLOv4. Then, two new models capable of detecting and recognizing handwritten digits using the latest ODR algorithms are proposed: one based on the latest YOLO family architecture (YOLOv5-HDR), with high speed and accuracy, and the other using the transformer architecture (DETR). To the best of our knowledge, this is the first study to provide a detailed comparison between YOLOv5 and transformer-based models in handwritten digit detection. The performance analysis shows that the YOLOv4-based model achieved testing inference 13% faster than Faster R-CNN, while the proposed YOLOv5-based model outperformed both the YOLOv4-based and transformer-based models, with testing execution 25% faster than YOLOv4 and three times faster than the DETR model. A further adversarial attack test was conducted to verify the robustness of the proposed model. Furthermore, numerical experiment results and their analyses demonstrate the robustness and effectiveness of the proposed YOLOv5-based model, which proved the most stable for handwritten digit detection and recognition tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
7. Time-Invariance Coefficients Tests with the Adaptive Multi-Factor Model.
- Author
-
Zhu, Liao, Jarrow, Robert A., and Wells, Martin T.
- Subjects
ASSETS (Accounting) ,ARBITRAGE pricing theory ,ALGORITHMS ,MACHINE learning ,ARTIFICIAL intelligence - Abstract
This paper tests a multi-factor asset pricing model that does not assume that the return's beta coefficients are constants. This is done by estimating the generalized arbitrage pricing theory (GAPT) using price differences. An implication of the GAPT is that when using price differences instead of returns, the beta coefficients are constant. We employ the adaptive multi-factor (AMF) model to test the GAPT utilizing a Groupwise Interpretable Basis Selection (GIBS) algorithm to identify the relevant factors from among all traded exchange-traded funds. We compare the performance of the AMF model with the Fama–French 5-factor (FF5) model. For nearly all time periods less than six years, the beta coefficients are time-invariant for the AMF model, but not for the FF5 model. This implies that the AMF model with a rolling window (such as five years) is more consistent with realized asset returns than is the FF5 model. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
8. Artificial Intelligence Control Algorithm for the Steering Motion of Wheeled Soccer Robot.
- Author
-
Xiong, Xiaowei
- Subjects
ARTIFICIAL intelligence ,SOCCER ,ROBOTS ,ALGORITHMS ,ROBOT programming ,ROBOT control systems - Abstract
In this paper, an artificial intelligence control algorithm for the steering motion of a wheeled soccer robot is studied. The steering movement is controlled by the artificial intelligence control algorithm and is modeled and simulated. First, the characteristics of artificial neurons are simulated and a similar control model is constructed. Compared with the traditional intelligent model, the artificial intelligence control algorithm has a dynamic feedback term, which yields better steering control of the wheeled soccer robot. In this paper, the parameters of the control algorithm are optimized, the control signal output of each steering part of the wheeled soccer robot is simulated, and the control of the robot's steering action by the algorithm is verified by experiments. The algorithm then forms the connection structure. This method provides a good reference for steering control of wheeled soccer robots. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
9. Artificial Intelligence in Psychomotor Learning: Modeling Human Motion from Inertial Sensor Data.
- Author
-
Santos, Olga C.
- Subjects
PERCEPTUAL motor learning ,LEARNING ,ARTIFICIAL intelligence ,MOTION ,HUMAN activity recognition ,EDUCATIONAL technology ,AUTOMOTIVE navigation systems - Abstract
Recent trends in educational technology focus on designing systems that can support students while learning complex psychomotor skills, such as those required when practicing sports and martial arts, dancing or playing a musical instrument. In this context, artificial intelligence can be key to personalizing the development of these psychomotor skills by enabling the provision of effective feedback when the instructor is not present, or by scaling up to a larger pool of students the feedback that an instructor would typically provide one-on-one. This paper presents the modeling of human motion gathered with inertial sensors, aimed at offering personalized support to students when learning complex psychomotor skills. In particular, when comparing learner data with those of an expert during the psychomotor learning process, artificial intelligence algorithms can: (i) recognize specific motion learning units and (ii) assess learning performance in a motion unit. However, this field still seems to be emerging, since a systematic review yields hardly any works that use artificial intelligence techniques to model complex human activities measured with inertial sensors. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
10. A Geometry-Based Distributed Connectivity Maintenance Algorithm for Discrete-time Multi-Agent Systems with Visual Sensing Constraints.
- Author
-
Li, Xiaoli, Fu, Jinyun, Liu, Mingliang, Xu, Yangmengfei, Tan, Ying, Xin, Yangbin, Pu, Ye, and Oetomo, Denny
- Subjects
MULTIAGENT systems ,DISCRETE-time systems ,MOBILE robots ,DISTRIBUTED algorithms ,PARTICLE swarm optimization ,ALGORITHMS ,ROBOT dynamics ,ARTIFICIAL intelligence - Abstract
This article presents a novel approach to maintaining connectivity within a multi-agent system (MAS) using directional visual sensors. The approach leverages a mathematical model of the sensors and employs an optimization method to determine the position and orientation constraints for each sensor. A three-step procedure computes the desired position and orientation of a group of agents: an optimization problem is formulated for a given task and solved with a gradient-descent approach. The control objective is to design appropriate control actions for each agent to track a sequence of targets while respecting the limitations of the visual sensors; the main results establish sufficient conditions for fulfilling this objective and design upper bounds for linear and angular velocity. A three-step control law, fully distributed by design, maintains connectivity between agents in the network despite limited and directional sensing capabilities, while keeping each sensor's control variables within their upper bounds. The effectiveness of the approach is validated through simulation and experimental results for a range of tasks within MAS. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
11. Graph Computing for Financial Crime and Fraud Detection: Trends, Challenges and Outlook.
- Author
-
Kurshan, Eren and Shen, Hongda
- Subjects
CRIMINAL investigation ,COMMERCIAL crimes ,FRAUD investigation ,MACHINE learning ,ARTIFICIAL intelligence - Abstract
The rise of digital payments has caused consequential changes in the financial crime landscape. As a result, traditional fraud detection approaches such as rule-based systems have largely become ineffective. Artificial intelligence (AI) and machine learning solutions using graph computing principles have gained significant interest in recent years. Graph-based techniques provide unique solution opportunities for financial crime detection. However, implementing such solutions at industrial-scale in real-time financial transaction processing systems has brought numerous application challenges to light. In this paper, we discuss the implementation difficulties current and next-generation graph solutions face. Furthermore, financial crime and digital payments trends indicate emerging challenges in the continued effectiveness of the detection techniques. We analyze the threat landscape and argue that it provides key insights for developing graph-based solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
12. EDITORIAL — HYBRID SOFT COMPUTING AND APPLICATIONS.
- Author
-
ABRAHAM, AJITH
- Subjects
COMPUTATIONAL intelligence ,ARTIFICIAL intelligence ,COMPUTER software ,ELECTRONIC data processing ,ALGORITHMS - Abstract
No abstract received. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
13. Research on Data Mining Algorithm Based on Pattern Recognition.
- Author
-
Zhang, Xuelong
- Subjects
DATA mining ,PATTERN recognition systems ,SUPPORT vector machines ,DATA warehousing ,ALGORITHMS - Abstract
With the advent of the era of big data, people are eager to extract valuable knowledge from rapidly expanding data so that they can use these massive stored data more effectively. Traditional data processing technology can only achieve basic functions such as data query and statistics, and cannot extract the knowledge within the data to predict future trends. Therefore, along with the rapid development of database technology and the rapid improvement of computing power, data mining (DM) came into existence. Research on DM algorithms draws on knowledge from fields such as databases, statistics, pattern recognition and artificial intelligence. Pattern recognition mainly extracts features from known data samples. A DM algorithm using pattern recognition technology is a good way to obtain effective information from massive data, thus providing decision support, and has good application prospects. The support vector machine (SVM) is a pattern recognition algorithm proposed in recent years that avoids the curse of dimensionality through dimension raising and linearization. On this basis, this paper studies DM algorithms based on pattern recognition and proposes a DM algorithm based on SVM. The algorithm divides the vectors of the SV set into two different types and, through multiple iterations, obtains a classifier that converges to the final result. Finally, cross-validation simulation experiments show that the DM algorithm based on pattern recognition can effectively reduce training time and solve the problem of mining massive data, indicating that the algorithm is reasonable and feasible. [ABSTRACT FROM AUTHOR]
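The paper's iterative SV-set scheme is not spelled out in the abstract; as a minimal stand-in for SVM training, the sketch below runs Pegasos-style subgradient descent on the hinge loss over synthetic separable data. The data, hyperparameters, and the omission of a bias term (the classes are centered symmetrically) are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two linearly separable 2-D classes (labels +1 / -1); a stand-in for
# the "massive data" in the paper, which uses a different SV-set scheme.
X = np.vstack([rng.normal(2.0, 0.4, (50, 2)),
               rng.normal(-2.0, 0.4, (50, 2))])
y = np.array([1.0] * 50 + [-1.0] * 50)

def train_linear_svm(X, y, lam=0.01, epochs=50):
    """Pegasos-style subgradient descent on the hinge loss (no bias
    term; the synthetic classes are symmetric about the origin)."""
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in range(len(y)):
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            margin = y[i] * (X[i] @ w)
            w *= (1 - eta * lam)           # regularization shrinkage
            if margin < 1:                 # hinge-loss subgradient
                w += eta * y[i] * X[i]
    return w

w = train_linear_svm(X, y)
accuracy = float((np.sign(X @ w) == y).mean())
```

The margin check identifies which samples currently act as support vectors, which is the quantity the paper's iterative scheme tracks across passes.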
- Published
- 2020
- Full Text
- View/download PDF
14. 3D Shape Reconstruction of Hybrid Reflectance using the LMS Algorithm.
- Author
-
Lee, Mal-Rey
- Subjects
PHOTOMETRY ,ALGORITHMS ,ARTIFICIAL intelligence - Abstract
In this paper, we present a new approach for determining the reflectance properties of surfaces and recovering 3D shape from intensity images. We determine the reflectance parameters that minimize the sum of squared differences in intensity distribution between the image of a sample sphere and the calculated image. The estimated reflectance parameters provide the range data with intensity distributions. We therefore generate three reference images of a range sphere, which has the same diameter as the sample, from the same viewpoint but with different light directions. Direct matching of the object images to the references can precisely reconstruct the shape of the object. This paper uses plate diffuse illumination to alleviate the effects of specular spikes and highlights. The simulation results show that the proposed method can estimate the reflectance properties of a hybrid surface and also recover the object shape. [ABSTRACT FROM AUTHOR]
- Published
- 2001
- Full Text
- View/download PDF
15. EDITORIAL.
- Author
-
Karaca, Yeliz, Baleanu, Dumitru, Moonis, Majaz, Muhammad, Khan, Zhang, Yu-Dong, and Gervasi, Osvaldo
- Subjects
SPACE sciences ,QUINTIC equations ,APPLIED sciences ,ALGORITHMS ,FRACTIONAL differential equations ,ARTIFICIAL intelligence
- Published
- 2021
- Full Text
- View/download PDF
16. A VECTOR MATRIX REAL TIME RECURSIVE BACKPROPAGATION ALGORITHM FOR RECURRENT NEURAL NETWORKS THAT APPROXIMATE MULTI-VALUED PERIODIC FUNCTIONS.
- Author
-
STUBBERUD, PETER
- Subjects
ARTIFICIAL intelligence ,ALGORITHMS ,ARTIFICIAL neural networks ,PERIODIC functions ,MACHINE theory - Abstract
Unlike feedforward neural networks (FFNN) which can act as universal function approximators, recursive, or recurrent, neural networks can act as universal approximators for multi-valued functions. In this paper, a real time recursive backpropagation (RTRBP) algorithm in a vector matrix form is developed for a two-layer globally recursive neural network that has multiple delays in its feedback path. This algorithm has been evaluated on two GRNNs that approximate both an analytic and nonanalytic periodic multi-valued function that a feedforward neural network is not capable of approximating. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
17. Developing an optimized artificial intelligence model for S&P 500 option pricing: A hybrid GARCH model.
- Author
-
Hajizadeh, Ehsan
- Subjects
ARTIFICIAL intelligence ,PARTICLE swarm optimization ,ALGORITHMS ,FUZZY sets ,ARTIFICIAL neural networks - Abstract
In this paper, we propose two hybrid models to relax some limitations of existing approaches and enhance the results. Three popular GARCH-type models are utilized for more accurate estimation of volatility, the most important parameter for option pricing. Furthermore, two non-parametric models, based on Artificial Neural Networks and Neuro-Fuzzy Networks tuned by a Particle Swarm Optimization algorithm, are proposed to price call options on the S&P 500 index. Comparing the results obtained with these models, we conclude that both the Neural Network and Neuro-Fuzzy Network models outperform the Black–Scholes model. [ABSTRACT FROM AUTHOR]
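A minimal sketch of the conditional-variance recursion underlying GARCH-type models, which the paper feeds into its neural-network pricers: σ²ₜ = ω + α·r²ₜ₋₁ + β·σ²ₜ₋₁. The parameter values and return series below are illustrative, not fitted to S&P 500 data.

```python
def garch11_variance(returns, omega=1e-6, alpha=0.09, beta=0.90):
    """One-step-ahead conditional variance path from a GARCH(1,1).
    Parameter values are illustrative, not fitted to market data."""
    # Start at the unconditional variance omega / (1 - alpha - beta).
    var = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        var.append(omega + alpha * r * r + beta * var[-1])
    return var

# A tiny synthetic return series: calm, then a 5% shock, then calm again.
rets = [0.001, 0.0, 0.05, 0.0, 0.001]
path = garch11_variance(rets)
# Variance should jump right after the shock and then decay back.
```

The jump-then-decay pattern is the volatility clustering that makes GARCH estimates more accurate inputs to option pricing than a constant historical variance.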
- Published
- 2020
- Full Text
- View/download PDF
18. AGGREGATION OF MULTIPLE REINFORCEMENT LEARNING ALGORITHMS.
- Author
-
JIANG, JU, KAMEL, MOHAMED S., and CHEN, LEI
- Subjects
REINFORCEMENT learning ,MACHINE learning ,REINFORCEMENT (Psychology) ,ALGORITHMS ,ARTIFICIAL intelligence ,COGNITIVE science ,LOGIC machines ,MACHINE theory - Abstract
Reinforcement learning (RL) has been successfully used in many fields. With the increasing complexity of environments and tasks, it is difficult for a single learning algorithm to cope with complicated problems with high performance. This paper proposes a new multiple learning architecture, "Aggregated Multiple Reinforcement Learning System (AMRLS)", which aggregates different RL algorithms in each learning step to make more appropriate sequential decisions than those made by individual learning algorithms. This architecture was tested on a Cart-Pole system. The presented simulation results confirm our prediction and reveal that aggregation not only provides robustness and fault tolerance, but also produces smoother learning curves and needs fewer learning steps than individual learning algorithms. [ABSTRACT FROM AUTHOR]
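A toy version of the aggregation idea: several learners' action-value estimates are averaged and the best aggregate action is chosen at each step. The learners, actions, and values below are invented; the abstract does not specify AMRLS's actual aggregation rules.

```python
def aggregate(action_values):
    """Pick the action with the highest average value across learners.

    action_values: list of per-learner dicts {action: estimated value}.
    Averaging is one simple aggregation rule; AMRLS's exact rules are
    not given in the abstract.
    """
    actions = action_values[0].keys()
    return max(actions,
               key=lambda a: sum(av[a] for av in action_values)
               / len(action_values))

# Three stand-in "learners" scoring two Cart-Pole-style actions.
learners = [{"left": 0.2, "right": 0.8},
            {"left": 0.6, "right": 0.4},
            {"left": 0.1, "right": 0.9}]
choice = aggregate(learners)
```

Even though the second learner disagrees, the averaged preference favors "right", illustrating the fault tolerance the paper reports: one misbehaving learner cannot dominate the joint decision.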
- Published
- 2006
- Full Text
- View/download PDF
19. Contrastive Analysis and Feature Selection for Korean Modal Expression in Chinese-Korean Machine Translation System.
- Author
-
LI, JIN-JI, ROH, JI-EUN, KIM, DONG-IL, and LEE, JONG-HYEOK
- Subjects
MACHINE translating ,ALGORITHMS ,ARTIFICIAL intelligence ,NATURAL language processing ,ELECTRONIC data processing ,HUMAN-computer interaction ,COMPUTER science - Abstract
To generate a proper Korean predicate, a natural modal expression is the most important factor for a machine translation (MT) system. Tense, aspect, mood, negation, and voice are the major constituents related to modal expression. The linguistic encoding of a modal expression is quite different between Chinese and Korean in terms of linguistic typology and genealogy. In this paper, a new applicable categorization of the Korean modality system, viz. tense, aspect, mood, negation, and voice, is proposed through a contrastive analysis of Chinese and Korean from the viewpoint of a practical MT system. In order to precisely determine the modal expression, effective feature selection frameworks for Chinese are presented with a variety of machine learning methods. As a result, our proposed approach achieved an accuracy of 83.10%. [ABSTRACT FROM AUTHOR]
- Published
- 2005
20. HANDWRITTEN WORD RECOGNITION USING CLASSIFIER ENSEMBLES GENERATED FROM MULTIPLE PROTOTYPES.
- Author
-
Günter, Simon and Bunke, Horst
- Subjects
PATTERN recognition systems ,ARTIFICIAL intelligence ,MARKOV processes ,AUTOGRAPHS ,STOCHASTIC processes ,ALGORITHMS - Abstract
Handwritten text recognition is one of the most difficult problems in the field of pattern recognition. In this paper, we describe our efforts towards improving the performance of state-of-the-art handwriting recognition systems through the use of classifier ensembles. There are many examples of classification problems in the literature where multiple classifier systems increase the performance over single classifiers. Normally one of the two following approaches is used to create a multiple classifier system. (1) Several classifiers are developed completely independent of each other and combined in a last step. (2) Several classifiers are created out of one prototype classifier by using so-called classifier ensemble creation methods. In this paper an algorithm which combines both approaches is introduced and it is used to increase the recognition rate of a hidden Markov model (HMM) based handwritten word recognizer. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
21. WASD Algorithm with Pruning-While-Growing and Twice-Pruning Techniques for Multi-Input Euler Polynomial Neural Network.
- Author
-
Zhang, Yunong, Wang, Ying, Li, Weibing, Chou, Yao, and Zhang, Zhijun
- Subjects
BACK propagation ,ALGORITHMS ,ARTIFICIAL intelligence ,EULER polynomials ,ARTIFICIAL neural networks ,NUMERICAL analysis - Abstract
Differing from the conventional back-propagation (BP) neural networks, a novel multi-input Euler polynomial neural network, in short, MIEPNN (specifically, 4-input Euler polynomial neural network, 4IEPNN) is established and investigated in this paper. In order to achieve satisfactory performance of the established MIEPNN, a weights and structure determination (WASD) algorithm with pruning-while-growing (PWG) and twice-pruning (TP) techniques is built up for the established MIEPNN. By employing the weights direct determination (WDD) method, the WASD algorithm not only determines the optimal connecting weights between hidden layer and output layer directly, but also obtains the optimal number of hidden-layer neurons. Specifically, a sub-optimal structure is obtained via the PWG technique, then the redundant hidden-layer neurons are further pruned via the TP technique. Consequently, the optimal structure of the MIEPNN is obtained. To provide a reasonable choice in practice, several different MATLAB computing routines related to the WDD method are studied. Comparative numerical-experiment results of the 4IEPNN using these different MATLAB computing routines and the standard multi-layer perceptron (MLP) neural network further verify the superior performance and efficacy of the proposed MIEPNN equipped with the WASD algorithm including PWG and TP techniques in terms of training, testing and predicting. [ABSTRACT FROM AUTHOR]
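The core of weights direct determination (WDD) is that, with the hidden layer fixed, the output weights solve a linear least-squares problem in one shot, with no back-propagation. The sketch below illustrates this with plain monomial bases standing in for the Euler polynomials, plus a crude sweep over hidden-layer sizes in the spirit of the growing phase; the target function and sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 200)
y = np.sin(2 * x) + 0.01 * rng.normal(size=x.size)  # toy target function

def hidden_outputs(x, n_neurons):
    """Each 'hidden neuron' emits one basis polynomial x**k (monomials
    here; the paper uses Euler polynomials)."""
    return np.vstack([x ** k for k in range(n_neurons)]).T

def wdd(x, y, n_neurons):
    """Weights direct determination: one linear least-squares solve."""
    H = hidden_outputs(x, n_neurons)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return w

def mse(x, y, n_neurons):
    H = hidden_outputs(x, n_neurons)
    return float(np.mean((H @ wdd(x, y, n_neurons) - y) ** 2))

# Growing the structure: training error drops as neurons are added,
# which is the signal the WASD growing phase monitors.
errors = [mse(x, y, n) for n in (2, 4, 8)]
```

The paper's pruning-while-growing and twice-pruning phases would then remove redundant neurons; here only the monotone error decrease of the growing sweep is shown.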
- Published
- 2016
- Full Text
- View/download PDF
22. Enhancing betweenness algorithm for detecting communities in complex networks.
- Author
-
Chen, Benyan, Xiang, Ju, Hu, Ke, and Tang, Yi
- Subjects
ALGORITHMS ,COMPUTER networks ,ARTIFICIAL intelligence ,STRUCTURAL analysis (Engineering) ,STATISTICAL physics - Abstract
Community structure is an important topological property common to many social, biological and technological networks. First, by using the concept of the structural weight, we introduced an improved version of the betweenness algorithm of Girvan and Newman to detect communities in networks without (intrinsic) edge weight, and then extended it to networks with (intrinsic) edge weight. The improved algorithm was tested on both artificial and real-world networks, and the results show that it can more effectively detect communities in networks both with and without (intrinsic) edge weight. Moreover, the technique for improving the betweenness algorithm in this paper may be directly applied to other community detection algorithms. [ABSTRACT FROM AUTHOR]
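A compact sketch of the underlying Girvan–Newman step: compute shortest-path edge betweenness (Brandes-style accumulation) and remove the highest-scoring edge. This shows the baseline algorithm only; the paper's structural-weight improvement is not reproduced here, and the example graph is invented.

```python
from collections import deque

def edge_betweenness(adj):
    """Shortest-path edge betweenness via Brandes-style accumulation,
    for an unweighted undirected graph given as {node: set_of_neighbors}."""
    bet = {}
    for s in adj:
        dist, sigma = {s: 0}, {s: 1.0}
        preds = {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:                       # BFS counting shortest paths
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    sigma[w] = 0.0
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}      # back-propagate dependencies
        for w in reversed(order):
            for v in preds[w]:
                c = sigma[v] / sigma[w] * (1.0 + delta[w])
                e = tuple(sorted((v, w)))
                bet[e] = bet.get(e, 0.0) + c
                delta[v] += c
    return bet

def girvan_newman_step(adj):
    """Remove the highest-betweenness edge in place (one GN iteration)."""
    bet = edge_betweenness(adj)
    u, v = max(bet, key=bet.get)
    adj[u].discard(v)
    adj[v].discard(u)
    return (u, v)

# Two triangles joined by a single bridge; the bridge should go first.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
removed = girvan_newman_step(adj)
```

All shortest paths between the two triangles cross the bridge edge (2, 3), so it accumulates the most betweenness and is cut first, splitting the graph into its two communities.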
- Published
- 2014
- Full Text
- View/download PDF
23. ENHANCEMENT OF COGNITIVE CREATIVITY BY DIVERSITY CLUSTERING.
- Author
-
PETRY, FREDERICK E. and YAGER, RONALD R.
- Subjects
CLUSTER analysis (Statistics) ,COGNITIVE science ,CREATIVE ability in science ,ALGORITHMS ,ENTROPY (Information theory) ,TOPOLOGICAL spaces ,ARTIFICIAL intelligence - Abstract
In this paper, the formation of diverse groups of individuals based on attributes is discussed, together with two diversity measures. The formation of clusters in a diversity space is described, and an algorithm is given for diverse clustering based on separation in the space rather than nearness, using application-based diversity thresholds and a specified number of clusters. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
24. PHASE TRANSITIONS OF EXPSPACE-COMPLETE PROBLEMS.
- Author
-
ZHOU, JUNPING, HUANG, PING, YIN, MINGHAO, ZHOU, CHUNGUANG, and Cai, Jin-Yi
- Subjects
PHASE transitions ,PROBLEM solving ,ALGORITHMS ,MATHEMATICAL proofs ,ARTIFICIAL intelligence ,CONSTRAINT satisfaction ,EMPIRICAL research - Abstract
This paper explores the phase transitions of the EXPSPACE-complete problems, which mainly focus on the conformant planning problems. The research presents two conformant planning algorithms: the CONFORMANT PLAN-NONEXISTENCE algorithm and the CONFORMANT PLAN-EXISTENCE algorithm. By analyzing the features of the two algorithms, the phase transition area of the conformant planning problems is obtained. If the number of the operators isn't greater than θ_ub, the CONFORMANT PLAN-NONEXISTENCE algorithm can prove that nearly all the conformant planning instances have no solution. If the number of the operators isn't lower than θ_lb, the CONFORMANT PLAN-EXISTENCE algorithm can prove that nearly all the conformant planning instances have solutions. The results of the experiments show that there exist phase transitions from a region where almost all the conformant planning instances have no solution to a region where almost all the conformant planning instances have solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
25. A REPAIR ALGORITHM FOR RADIAL BASIS FUNCTION NEURAL NETWORK AND ITS APPLICATION TO CHEMICAL OXYGEN DEMAND MODELING.
- Author
-
QIAO, JUN-FEI and HAN, HONG-GUI
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,COMPUTER architecture ,CHEMICAL oxygen demand ,ALGORITHMS - Abstract
This paper presents a repair algorithm for the design of a Radial Basis Function (RBF) neural network. The proposed repair RBF (RRBF) algorithm starts from a single prototype randomly initialized in the feature space. The algorithm has two main phases: an architecture learning phase and a parameter adjustment phase. The architecture learning phase uses a repair strategy based on a sensitivity analysis (SA) of the network's output to judge when and where hidden nodes should be added to the network. New nodes are added to repair the architecture when the prototype does not meet the requirements. The parameter adjustment phase uses an adjustment strategy where the capabilities of the network are improved by modifying all the weights. The algorithm is applied to two application areas: approximating a non-linear function, and modeling the key parameter, chemical oxygen demand (COD) used in the waste water treatment process. The results of simulation show that the algorithm provides an efficient solution to both problems. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
26. OutRank: A GRAPH-BASED OUTLIER DETECTION FRAMEWORK USING RANDOM WALK.
- Author
-
MOONESINGHE, H. D. K. and TAN, PANG-NING
- Subjects
ALGORITHMS ,MARKOV processes ,FALSE alarms ,ARTIFICIAL intelligence ,ERRORS - Abstract
This paper introduces a stochastic graph-based algorithm, called OutRank, for detecting outliers in data. We consider two approaches for constructing a graph representation of the data, based on the object similarity and number of shared neighbors between objects. The heart of this approach is the Markov chain model that is built upon this graph, which assigns an outlier score to each object. Using this framework, we show that our algorithm is more robust than the existing outlier detection schemes and can effectively address the inherent problems of such schemes. Empirical studies conducted on both real and synthetic data sets show that significant improvements in detection rate and false alarm rate are achieved using the proposed framework. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
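The OutRank idea described in the abstract above can be sketched in a few lines: build a similarity graph over the objects, turn it into a Markov transition matrix, and let a damped random walk assign each object a connectivity score, with the lowest score flagging the strongest outlier. The Gaussian similarity function, damping factor, and toy data below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def outrank_scores(X, sigma=1.0, damping=0.9, iters=200):
    """Connectivity of each object under a damped random walk on a
    Gaussian similarity graph; the lowest score marks the strongest
    outlier (illustrative parameters, not the paper's exact model)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    S = np.exp(-d2 / (2 * sigma ** 2))                   # similarity graph
    np.fill_diagonal(S, 0.0)                             # no self-loops
    P = S / S.sum(axis=1, keepdims=True)                 # Markov transition matrix
    n = len(X)
    c = np.full(n, 1.0 / n)
    for _ in range(iters):
        # damped power iteration, in the style of PageRank random walks
        c = (1 - damping) / n + damping * (c @ P)
    return c

# Four tightly clustered points plus one isolated point: the isolated
# point receives the smallest connectivity score.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1], [5.0, 5.0]])
scores = outrank_scores(X)
```

Because the walk keeps returning to well-connected objects, no explicit distance threshold is needed; ranking by score suffices.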
27. INCREMENTAL FILTERING ALGORITHMS FOR PRECEDENCE AND DEPENDENCY CONSTRAINTS.
- Author
-
BARTÁK, ROMAN and ČEPEK, ONDŘEJ
- Subjects
ALGORITHMS ,ARTIFICIAL intelligence ,INFORMATION filtering ,LOGIC ,INFORMATION retrieval - Abstract
Precedence constraints specify that an activity must finish before another activity starts and hence such constraints play a crucial role in planning and scheduling problems. Many real-life problems also include dependency constraints expressing logical relations between the activities – for example, an activity requires presence of another activity in the plan. For such problems a typical objective is a maximization of the number of activities satisfying the precedence and dependency constraints. In the paper we propose new incremental filtering rules integrating propagation through both precedence and dependency constraints. We also propose a new filtering rule using the information about the requested number of activities in the plan. We demonstrate efficiency of the proposed rules on log-based reconciliation problems and min-cutset problems. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
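The precedence filtering discussed above can be illustrated with standard time-window propagation: for each constraint "a finishes before b starts", raise b's earliest start and lower a's latest completion until a fixed point. This is a plain propagation sketch under assumed inputs, not the paper's incremental filtering rules or its dependency-constraint handling.

```python
def filter_precedence(windows, durations, precedences):
    """Fixed-point time-window filtering for precedence constraints
    (a, b) meaning 'a finishes before b starts': raise b's earliest
    start and lower a's latest completion until nothing changes."""
    est = {x: w[0] for x, w in windows.items()}   # earliest start times
    lct = {x: w[1] for x, w in windows.items()}   # latest completion times
    changed = True
    while changed:
        changed = False
        for a, b in precedences:
            if est[a] + durations[a] > est[b]:    # push b later
                est[b] = est[a] + durations[a]
                changed = True
            if lct[b] - durations[b] < lct[a]:    # pull a earlier
                lct[a] = lct[b] - durations[b]
                changed = True
    return {x: (est[x], lct[x]) for x in windows}

# Three chained activities A -> B -> C, each of duration 3, all starting
# with window (0, 10): propagation tightens every window.
result = filter_precedence(
    {"A": (0, 10), "B": (0, 10), "C": (0, 10)},
    {"A": 3, "B": 3, "C": 3},
    [("A", "B"), ("B", "C")],
)
```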
28. AN IMPROVED ALGORITHM OF OPTICAL FORMULA EXTRACTION WITH FUZZY CLASSIFICATION.
- Author
-
Ming-Hu Ha and Xue-Dong Tian
- Subjects
ALGORITHMS ,FUZZY systems ,KERNEL functions ,ARTIFICIAL intelligence ,ARTIFICIAL neural networks - Abstract
Formula extraction is the first stage of optical formula recognition, which converts printed scientific documents into their corresponding electronic format. So far, little research has been done in this area. In this paper, an improved method using fuzzy classification and an irregularity-rate feature is proposed to separate formulas from text in printed documents. First, according to a statistical distance threshold, connected components are extracted and merged to form the areas of characters and lines. Second, isolated formulas are extracted based on line features. Finally, the formula symbols in the remaining lines are labeled using irregularity degree, and the embedded formulas are located by extending kernel symbols using the propagation of context. In these steps, the fuzzy classification algorithm and the irregularity-degree feature are introduced to solve problems existing in traditional methods and improve extraction accuracy. The experimental results show that the method is of great significance in both theory and practice. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
29. AN AXIOMATIC DEFINITION OF FUZZY DIVERGENCE MEASURES.
- Author
-
COUSO, INÉS and MONTES, SUSANA
- Subjects
ARTIFICIAL intelligence ,ALGORITHMS ,STOCHASTIC processes ,FUZZY systems ,UNCERTAINTY (Information theory) - Abstract
The representation of the degree of difference between two fuzzy subsets by means of a real number has been proposed in previous papers, and it seems to be useful in some situations. However, the requirement of assigning a precise number may lead us to the loss of essential information about this difference. Thus, (crisp) divergence measures studied in previous papers may not distinguish whether the differences between two fuzzy subsets are in low or high membership degrees. In this paper we propose a way of measuring these differences by means of a fuzzy valued function which we will call a fuzzy divergence measure. We formulate a list of natural axioms that these measures should satisfy. We derive additional properties from these axioms, some of which are related to the properties required of crisp divergence measures. We finish the paper by establishing a one-to-one correspondence between families of crisp and fuzzy divergence measures. This result provides us with a method to build a fuzzy divergence measure from a crisp valued one. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
30. TERRAIN CLASSIFICATION USING 3D CO-OCCURRENCE FEATURES AND NEURAL NETWORKS.
- Author
-
WOO, DONG-MIN, PARK, DONG-CHUL, and NGUYEN, QUOC-DAT
- Subjects
NEURAL computers ,ALGORITHMS ,MATRICES (Mathematics) ,CLASSIFICATION ,ARTIFICIAL intelligence - Abstract
Texture analysis has been efficiently utilized in the area of terrain classification, where features have traditionally been obtained in the 2D image domain. In this paper we suggest 3D co-occurrence texture features by extending the concept of the co-occurrence feature to the 3D world. The suggested 3D features are described as a 3D co-occurrence matrix computed from a co-occurrence histogram of digital elevations at two contiguous positions. The practical construction of the co-occurrence matrix limits the number of levels of digital elevation. If the digital elevation is quantized into a few levels over the whole DEM (Digital Elevation Map), distinctive features cannot be obtained. To resolve this quantization problem, we employ a local quantization technique which preserves the variation of elevations with a small number of quantization levels. SOM (Self-Organizing Maps), FCM (Fuzzy C-means) and GBFCM (Gradient-Based Fuzzy C-means) clustering algorithms are employed to implement the terrain classifier, since these ANN clustering algorithms are known to be robust against the high-dimensionality problem in the classification process. Experimental results show that the classification accuracy with the addition of 3D co-occurrence features is significantly improved over the conventional classification method with only 2D features. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
31. HANDWRITTEN CHARACTER RECOGNITION USING NONSYMMETRICAL PERCEPTUAL ZONING.
- Author
-
FREITAS, CINTHIA O. A., OLIVEIRA, LUIZ S., BORTOLOZZI, FLÁVIO, and AIRES, SIMONE B. K.
- Subjects
VISUAL perception ,ALPHABET ,ARTIFICIAL intelligence ,ARTIFICIAL neural networks ,GRAPHOLOGY ,ALGORITHMS - Abstract
In this paper we present an alternative strategy to define zoning for handwriting recognition, based on nonsymmetrical perceptual zoning. The idea is to extract some knowledge from the confusion matrices in order to make the zoning process less empirical. The feature set considered in this work is based on concavity/convexity deficiencies, which are obtained by labeling the background pixels of the input image. To better assess the nonsymmetrical zoning we carried out experiments using four different zoning strategies. Experiments show that nonsymmetrical zoning can be considered a tool to build more reliable handwriting recognition systems. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
32. ENHANCED DIRECT LEAST SQUARE FITTING OF ELLIPSES.
- Author
-
Maini, Eliseo Stefano
- Subjects
ALGORITHMS ,LEAST squares ,MATHEMATICAL statistics ,ELLIPSES (Geometry) ,COMPUTER vision ,ARTIFICIAL intelligence - Abstract
This paper presents a robust and direct algorithm for the least-square fitting of ellipses to scattered data. The proposed algorithm makes use of well-known techniques that improve the robustness of the direct least-square fitting with a modest increase in the computational burden. Furthermore, by trivial modifications of the constrained minimization problem the algorithm may be converted to perform the specific fitting of other types of conics, such as hyperbolas. The method is simple and accurate and can be implemented with a fixed computation time. These characteristics, coupled with its robustness and specificity, make the algorithm well-suited for applications requiring real-time machine vision. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
33. EXPLORING CONDITIONS FOR THE OPTIMALITY OF NAÏVE BAYES.
- Author
-
ZHANG, HARRY
- Subjects
BAYESIAN analysis ,ALGORITHMS ,MACHINE learning ,DATA mining ,DATABASE searching ,ARTIFICIAL intelligence - Abstract
Naïve Bayes is one of the most efficient and effective inductive learning algorithms for machine learning and data mining. Its competitive performance in classification is surprising, because the conditional independence assumption on which it is based is rarely true in real-world applications. An open question is: what is the true reason for the surprisingly good performance of Naïve Bayes in classification? In this paper, we propose a novel explanation for the good classification performance of Naïve Bayes. We show that, essentially, dependence distribution plays a crucial role. Here dependence distribution means how the local dependence of an attribute distributes in each class, evenly or unevenly, and how the local dependences of all attributes work together, consistently (supporting a certain classification) or inconsistently (canceling each other out). Specifically, we show that no matter how strong the dependences among attributes are, Naïve Bayes can still be optimal if the dependences distribute evenly in classes, or if the dependences cancel each other out. We propose and prove a sufficient and necessary condition for the optimality of Naïve Bayes. Further, we investigate the optimality of Naïve Bayes under the Gaussian distribution. We present and prove a sufficient condition for the optimality of Naïve Bayes in which dependences among attributes exist. This provides evidence that dependences may cancel each other out. Our theoretical analysis can be used in designing learning algorithms. In fact, a major class of learning algorithms for Bayesian networks is conditional independence-based (or CI-based); these are essentially based on dependence. We design a dependence distribution-based algorithm by extending the Chow-Liu algorithm, a widely used CI-based algorithm. Our experiments show that the new algorithm outperforms the Chow-Liu algorithm, which also provides empirical evidence to support our new explanation. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
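The conditional independence assumption the paper analyses is easy to see in code: each class is scored by its log prior plus a sum of per-attribute Gaussian log-likelihoods, i.e. attributes contribute additively in log space as if they were independent. The toy implementation and data below are a minimal illustrative sketch, not the paper's algorithm or experiments.

```python
import math

def gaussian_nb_predict(X_train, y_train, x):
    """Minimal Gaussian Naïve Bayes: score each class by its log prior
    plus a sum of per-attribute Gaussian log-likelihoods, the conditional
    independence assumption in action (toy sketch, illustrative only)."""
    best, best_score = None, -math.inf
    for c in sorted(set(y_train)):
        rows = [xi for xi, yi in zip(X_train, y_train) if yi == c]
        score = math.log(len(rows) / len(X_train))        # log prior
        for j in range(len(x)):
            vals = [r[j] for r in rows]
            mu = sum(vals) / len(vals)
            var = sum((v - mu) ** 2 for v in vals) / len(vals) + 1e-9
            # independence: attributes contribute additively in log space
            score += -0.5 * math.log(2 * math.pi * var) \
                     - (x[j] - mu) ** 2 / (2 * var)
        if score > best_score:
            best, best_score = c, score
    return best

# Hypothetical toy data: two well-separated classes in two attributes.
X_train = [[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
           [3.0, 3.0], [3.1, 2.9], [2.9, 3.2]]
y_train = [0, 0, 0, 1, 1, 1]
```

The paper's point is that even when the additive decomposition above is wrong in detail, uneven or mutually canceling dependences can leave the argmax unchanged.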
34. AN UNSUPERVISED KERNEL BASED FUZZY C-MEANS CLUSTERING ALGORITHM WITH KERNEL NORMALISATION.
- Author
-
ZHOU, SHANG-MING and GAN, JOHN Q.
- Subjects
KERNEL functions ,FUZZY systems ,ALGORITHMS ,COMPUTATIONAL intelligence ,ARTIFICIAL intelligence - Abstract
In this paper, a novel procedure for normalising Mercer kernels is first suggested. Then, the normalised Mercer kernel techniques are applied to the fuzzy c-means (FCM) algorithm, which leads to a normalised kernel based FCM (NKFCM) clustering algorithm. In the NKFCM algorithm, implicit assumptions about the shapes of clusters in the FCM algorithm are removed, so the new algorithm possesses strong adaptability to cluster structures within data samples. Moreover, a new method for calculating the prototypes of clusters in input space is also proposed, which is essential for data clustering applications. Experimental results on several benchmark datasets have demonstrated the promising performance of the NKFCM algorithm in different scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
35. A PARALLEL GENETIC ALGORITHM FOR THE GEOMETRICALLY CONSTRAINED SITE LAYOUT PROBLEM WITH UNEQUAL-SIZE FACILITIES.
- Author
-
HARMANANI, HAIDAR M., ZOUEIN, PIERRETTE P., and HAJAR, AOUNI M.
- Subjects
GENETIC algorithms ,COMBINATORIAL optimization ,ALGORITHMS ,GENETICS ,COMPUTATIONAL intelligence ,ARTIFICIAL intelligence - Abstract
Parallel genetic algorithm techniques have been used in a variety of computer engineering and science areas. This paper presents a parallel genetic algorithm to solve the site layout problem with unequal-size and constrained facilities. The problem involves coordinating the use of limited space to accommodate temporary facilities subject to geometric constraints. The problem is characterised by affinity weights used to model transportation costs between facilities, and by geometric constraints between relative positions of facilities on site. The algorithm is parallelised based on a message-passing SPMD architecture using parallel search and chromosome migration. The algorithm is tested on a variety of layout problems to illustrate its performance, specifically: (1) loosely versus tightly constrained layouts with equal levels of interaction between facilities, (2) loosely versus tightly packed layouts with variable levels of interaction between facilities, and (3) loosely versus tightly constrained layouts. Favorable results are reported. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
36. OPTIMAL CONTROL OF A HYSTERESIS SYSTEM BY MEANS OF CO-OPERATIVE CO-EVOLUTION.
- Author
-
BOONLONG, KITTIPONG, CHAIYARATANA, NACHOL, and KUNTANAPREEDA, SUWAT
- Subjects
HYSTERESIS ,GENETIC algorithms ,COMBINATORIAL optimization ,ALGORITHMS ,COMPUTATIONAL intelligence ,ARTIFICIAL intelligence - Abstract
This paper presents the use of a co-operative co-evolutionary genetic algorithm (CCGA) for solving optimal control problems in a hysteresis system. The hysteresis system is a hybrid control system which can be described by a continuous multivalued state-space representation that can switch between two possible discrete modes. The problems investigated cover the optimal control of the hysteresis system with fixed and free final state/time requirements. With the use of the Pontryagin maximum principle, the optimal control problems can be formulated as optimisation problems. In this case, the decision variables consist of the value of control signal when a switch between discrete modes occurs while the objective value is calculated from an energy cost function. The simulation results indicate that the use of the CCGA is proven to be highly efficient in terms of the minimal energy cost obtained in comparison to the results given by the searches using a standard genetic algorithm and a dynamic programming technique. This helps to confirm that the CCGA can handle complex optimal control problems by exploiting a co-evolutionary effect in an efficient manner. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
37. PERFORMANCE BOUNDARIES OF OPTIMAL WEIGHTED MEDIAN FILTERS.
- Author
-
Lukac, Rastislav
- Subjects
DISTRIBUTION (Probability theory) ,ALGORITHMS ,GENETIC algorithms ,GENETIC programming ,ARTIFICIAL intelligence ,IMAGE processing ,COMPUTER science ,SIGNAL processing ,FILTERS (Mathematics) - Abstract
This paper focuses on image filtering using weighted median (WM) filters, a nonlinear filter class that takes advantage of robust order-statistic theory and the capability to adapt filter behavior to a variety of statistics related to the desired signals and the noise distributions. The main contribution of the paper is the analysis of four WM optimization schemes, namely genetic WM optimization, a non-adaptive WM optimization algorithm, and adaptive WM filtering utilizing the linear and the sigmoidal approximations of the sign function. The analysis is done by extensive simulations, in which several features such as noise reduction, edge preservation, error estimation and the dependence of error criteria on the degree of impulse noise corruption are examined. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
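The weighted median underlying the filter class above has a simple definition: replicate each window sample according to its weight and take the ordinary median, which is why WM filters remove impulses while preserving edges. The sketch below shows that definition and a 1-D sliding-window filter; the window length, weights, and signal are illustrative, and none of the paper's optimization schemes are implemented.

```python
def weighted_median(window, weights):
    """Weighted median of a window: equivalent to replicating each sample
    weights[i] times and taking the ordinary median (lower median on ties)."""
    pairs = sorted(zip(window, weights))          # order samples, carry weights
    total = sum(weights)
    acc = 0
    for x, w in pairs:
        acc += w
        if 2 * acc >= total:                      # weighted midpoint reached
            return x

def wm_filter(signal, weights):
    """Slide a weighted-median window over a 1-D signal, replicating the
    edge samples (window length and weights are illustrative)."""
    k = len(weights) // 2
    padded = [signal[0]] * k + list(signal) + [signal[-1]] * k
    return [weighted_median(padded[i:i + len(weights)], weights)
            for i in range(len(signal))]
```

With centre weight 2 and side weights 1, a lone impulse in a flat region is replaced by its neighbours' value, while a genuine step edge passes through unchanged.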
38. Associative Memory Design Using Overlapping Decomposition and Generalized Brain-State-in-a-Box Neural Networks.
- Author
-
Oh, Cheolhwan and Żak, Stanislaw H.
- Subjects
ARTIFICIAL neural networks ,COMPUTER storage devices ,ALGORITHMS ,ARTIFICIAL intelligence - Abstract
This paper is concerned with large scale associative memory design. A serious problem with neural associative memories is the quadratic growth of the number of interconnections with the problem size. An overlapping decomposition algorithm is proposed to attack this problem. Specifically, a pattern to be processed is decomposed into overlapping sub-patterns. Then, neural sub-networks are constructed that process the sub-patterns. An error correction algorithm operates on the outputs of each sub-network in order to correct the mismatches between sub-patterns that are obtained from the independent recall processes of individual sub-networks. The performance of the proposed large scale associative memory is illustrated using two-dimensional images. It is shown that the proposed method reduces the computing cost of the design of the associative memories compared with non-interconnected associative memories. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
39. FILTER SELECTION FOR REMOVING NOISE FROM CT SCAN IMAGES USING DIGITAL IMAGE PROCESSING ALGORITHM.
- Author
-
Siddiqi, Ayesha Amir
- Subjects
COMPUTED tomography ,COMPUTER-aided diagnosis ,DIGITAL image processing ,IMAGE compression ,ALGORITHMS ,ARTIFICIAL intelligence ,NOISE - Published
- 2024
- Full Text
- View/download PDF
40. A Path Construction Algorithm for Translation Validation Using PRES+ Models.
- Author
-
Bandyopadhyay, Soumyadip, Sarkar, Dipankar, Mandal, Chittaranjan, Banerjee, Kunal, and Duddu, Krishnam Raju
- Subjects
EMBEDDED computer systems ,ARTIFICIAL intelligence ,MULTICORE processors ,ALGORITHMS ,MATHEMATICAL equivalence - Abstract
Multi-core and multi-processor architectures have come to dominate the domain of embedded systems, permitting easy mapping of concurrent applications to such architectures. The programs, in general, are subjected to significant optimizing and parallelizing transformations, automated and also human guided, before being mapped to an architecture. Modelling parallel behaviour and formally verifying that its functionality is preserved during synthesis are challenging tasks. Untimed PRES+ models are found to be suitable for the specification of parallel behaviour. Path-cover-oriented equivalence checking methods have been found to be quite effective for sequential behaviour. Path construction for parallel behaviour, however, is significantly more complex than that for sequential behaviour due to all possible interleavings of the parallel operations. Identification of the path covers depends upon choosing appropriate cut-points. In this paper, the need for introducing cut-points dynamically is underlined and a mechanism to achieve this task is proposed. Details on how to construct a path cover using dynamic cut-points are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
41. A User Experience Study on Short Video Social Apps Based on Content Recommendation Algorithm of Artificial Intelligence.
- Author
-
Qi, Wen and Li, Danyang
- Subjects
USER experience ,ARTIFICIAL intelligence ,ALGORITHMS ,USER-generated content ,VIDEOS - Abstract
As short video social apps develop rapidly, the feed has become the main approach, or algorithm, for presenting recommended content to users in such apps. There are big differences in the ways that video apps make use of feed flow based on artificial intelligence algorithms. Two short video social apps, DouYin and KuaiShou, are studied with a user experiment in this paper. Several indicators are established to quantify the user experience differences between these two apps. The results are analyzed with correlation analysis to find the relationship between user experience performance and the content presentation mode of the feed flow. The differences found in the results are explained from the perspectives of user cognition and behavior. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
42. Consolidating Heterogeneous Enterprise Data for Named Entity Linking and Web Intelligence.
- Author
-
Weichselbraun, Albert, Streiff, Daniel, and Scharl, Arno
- Subjects
ARTIFICIAL intelligence ,BUSINESS enterprises ,INFORMATION resources management ,DATA mining ,ELECTRONIC data processing ,ALGORITHMS - Abstract
Linking named entities to structured knowledge sources paves the way for state-of-the-art Web intelligence applications which assign sentiment to the correct entities, identify trends, and reveal relations between organizations, persons and products. For this purpose this paper introduces Recognyze, a named entity linking component that uses background knowledge obtained from linked data repositories, and outlines the process of transforming heterogeneous data silos within an organization into a linked enterprise data repository which draws upon popular linked open data vocabularies to foster interoperability with public data sets. The presented examples use comprehensive real-world data sets from Orell Füssli Business Information, Switzerland's largest business information provider. The linked data repository created from these data sets comprises more than nine million triples on companies, the companies' contact information, key people, products and brands. We identify the major challenges of tapping into such sources for named entity linking, and describe required data pre-processing techniques to use and integrate such data sets, with a special focus on disambiguation and ranking algorithms. Finally, we conduct a comprehensive evaluation based on business news from the New Journal of Zurich and AWP Financial News to illustrate how these techniques improve the performance of the Recognyze named entity linking component. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
43. Edge Weight Method for Community Detection on Mixed Scale-Free Networks.
- Author
-
Jarukasemratana, Sorn and Murata, Tsuyoshi
- Subjects
ARTIFICIAL intelligence ,DATA extraction ,GAUSSIAN distribution ,POWER law (Mathematics) ,ALGORITHMS ,SCALE-free network (Statistical physics) - Abstract
In this paper, we propose an edge weight method for performing community detection on mixed scale-free networks. We use the phrase 'mixed scale-free networks' for networks where some communities have node degrees that follow a power law, similar to scale-free networks, while others have node degrees that follow a normal distribution. In this type of network, community detection algorithms designed for scale-free networks will have reduced accuracy because some communities do not have scale-free properties. On the other hand, algorithms not designed for scale-free networks will also have reduced accuracy because some communities do have scale-free properties. To solve this problem, our algorithm consists of two community detection steps: one aimed at extracting communities whose node degree follows a power-law distribution (scale-free), and one aimed at extracting communities whose node degree follows a normal distribution (non-scale-free). To evaluate our method, we use NMI (Normalized Mutual Information) to measure our results on both synthetic and real-world datasets, comparing with both scale-free and non-scale-free community detection methods. The results show that our method outperforms all baseline methods on mixed scale-free networks. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
44. Simulation-Based Algorithms for Markov Decision Processes: Monte Carlo Tree Search from AlphaGo to AlphaZero.
- Author
-
Fu, Michael C.
- Subjects
MARKOV processes ,MONTE Carlo method ,OPERATIONS research ,REINFORCEMENT learning ,ARTIFICIAL intelligence ,ALGORITHMS - Abstract
AlphaGo and its successors AlphaGo Zero and AlphaZero made international headlines with their incredible successes in game playing, which have been touted as further evidence of the immense potential of artificial intelligence, and in particular, machine learning. AlphaGo defeated the reigning human world champion Go player Lee Sedol 4 games to 1, in March 2016 in Seoul, Korea, an achievement that surpassed previous computer game-playing program milestones by IBM's Deep Blue in chess and by IBM's Watson in the U.S. TV game show Jeopardy. AlphaGo then followed this up by defeating the world's number one Go player Ke Jie 3-0 at the Future of Go Summit in Wuzhen, China in May 2017. Then, in December 2017, AlphaZero stunned the chess world by dominating the top computer chess program Stockfish (which has a far higher rating than any human) in a 100-game match by winning 28 games and losing none (72 draws) after training from scratch for just four hours! The deep neural networks of AlphaGo, AlphaZero, and all their incarnations are trained using a technique called Monte Carlo tree search (MCTS), whose roots can be traced back to an adaptive multistage sampling (AMS) simulation-based algorithm for Markov decision processes (MDPs) published in Operations Research back in 2005 [Chang, HS, MC Fu, J Hu and SI Marcus (2005). An adaptive sampling algorithm for solving Markov decision processes. Operations Research, 53, 126–139.] (and introduced even earlier in 2002). After reviewing the history and background of AlphaGo through AlphaZero, the origins of MCTS are traced back to simulation-based algorithms for MDPs, and its role in training the neural networks that essentially carry out the value/policy function approximation used in approximate dynamic programming, reinforcement learning, and neuro-dynamic programming is discussed, including some recently proposed enhancements building on statistical ranking & selection research in the operations research simulation community. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
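The adaptive sampling idea connecting AMS to MCTS can be shown on the smallest possible case, a one-node tree (a multi-armed bandit): at each step, pick the action maximising mean reward plus a UCB-style exploration bonus, so that simulation effort concentrates on the best action while every action keeps being explored. The bandit setting, constant c, and noise model below are illustrative assumptions, not the specific algorithms surveyed in the paper.

```python
import math, random

def uct_choose(counts, values, c=1.4):
    """UCB1/UCT-style selection rule used at each node of Monte Carlo tree
    search: pick the action maximising mean reward plus an exploration
    bonus (the constant c is an illustrative choice)."""
    total = sum(counts)
    def ucb(a):
        if counts[a] == 0:
            return math.inf                       # try every action once
        return values[a] / counts[a] + c * math.sqrt(math.log(total) / counts[a])
    return max(range(len(counts)), key=ucb)

def run_bandit(reward_means, rounds=2000, seed=0):
    """Adaptive sampling on a toy multi-armed bandit: UCT concentrates
    visits on the best arm while still exploring the others."""
    rng = random.Random(seed)
    counts = [0] * len(reward_means)
    values = [0.0] * len(reward_means)
    for _ in range(rounds):
        a = uct_choose(counts, values)
        counts[a] += 1
        values[a] += reward_means[a] + rng.gauss(0, 0.1)
    return counts
```

In full MCTS the same rule is applied recursively down the tree, with rollouts (or a value network, in AlphaZero's case) supplying the reward estimates.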
45. NEW MDS AND CLUSTERING BASED ALGORITHMS FOR PROTEIN MODEL QUALITY ASSESSMENT AND SELECTION.
- Author
-
WANG, QINGGUO, SHANG, CHARLES, XU, DONG, and SHANG, YI
- Subjects
PROTEIN structure ,K-means clustering ,MULTIDIMENSIONAL scaling ,ALGORITHMS ,FEATURE selection ,ARTIFICIAL intelligence ,PREDICTION models - Abstract
In protein tertiary structure prediction, assessing the quality of predicted models is an essential task. Over the past years, many methods have been proposed for the protein model quality assessment (QA) and selection problem. Despite significant advances, the discerning power of current methods is still unsatisfactory. In this paper, we propose two new algorithms, CC-Select and MDS-QA, based on multidimensional scaling and k-means clustering. For the model selection problem, CC-Select combines consensus with clustering techniques to select the best models from a given pool. Given a set of predicted models, CC-Select first calculates a consensus score for each structure based on its average pairwise structural similarity to other models. Then, similar structures are grouped into clusters using multidimensional scaling and clustering algorithms. In each cluster, the one with the highest consensus score is selected as a candidate model. For the QA problem, MDS-QA combines single-model scoring functions with consensus to determine more accurate assessment score for every model in a given pool. Using extensive benchmark sets of a large collection of predicted models, we compare the two algorithms with existing state-of-the-art quality assessment methods and show significant improvement. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
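The first step of CC-Select, as described in the abstract, is a plain consensus computation: each model's score is its average pairwise structural similarity to every other model in the pool. The sketch below shows just that step on an assumed symmetric similarity matrix; the clustering and MDS stages are not implemented.

```python
def consensus_scores(similarity):
    """Consensus step of the model-selection idea described above: each
    model's score is its average pairwise similarity to every other model
    in the pool (the similarity values here are illustrative)."""
    n = len(similarity)
    return [sum(similarity[i][j] for j in range(n) if j != i) / (n - 1)
            for i in range(n)]

# Model 0 agrees most with the rest of the pool, so it gets the top score.
sim = [[1.0, 0.9, 0.8],
       [0.9, 1.0, 0.7],
       [0.8, 0.7, 1.0]]
scores = consensus_scores(sim)
```

In the full method, models are then grouped by similarity and the highest-scoring model within each cluster becomes a candidate.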
46. ON THE LEARNING POTENTIAL OF THE APPROXIMATED QUANTRON.
- Author
-
LABIB, RICHARD and DE MONTIGNY, SIMON
- Subjects
MACHINE learning ,PERCEPTRONS ,ARTIFICIAL intelligence ,COMPARATIVE studies ,PATTERN recognition systems ,ALGORITHMS - Abstract
The quantron is a hybrid neuron model related to perceptrons and spiking neurons. The activation of the quantron is determined by the maximum of a sum of input signals, which is difficult to use in classical learning algorithms. Thus, training the quantron to solve classification problems requires heuristic methods such as direct search. In this paper, we present an approximation of the quantron trainable by gradient search. We show this approximation improves the classification performance of direct search solutions. We also compare the quantron and the perceptron's performance in solving the IRIS classification problem. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
47. THE ORDERED DISTRIBUTE CONSTRAINT.
- Author
-
PETIT, THIERRY and RÉGIN, JEAN-CHARLES
- Subjects
CONSTRAINT satisfaction ,CARDINAL numbers ,DISTRIBUTION (Probability theory) ,MATHEMATICAL variables ,ARTIFICIAL intelligence ,ALGORITHMS ,COMPUTATIONAL complexity - Abstract
In this paper we introduce a new cardinality constraint: Ordered Distribute. Given a set of variables, this constraint limits for each value v the number of times v or any value greater than v is taken. It extends the global cardinality constraint, that constrains only the number of times a value v is taken by a set of variables and does not consider at the same time the occurrences of all the values greater than v. We design an algorithm for achieving generalized arc-consistency on Ordered Distribute, with a time complexity linear in the sum of the number of variables and the number of values in the union of their domains. In addition, we give some experiments showing the advantage of this new constraint for problems where values represent levels whose overrunning has to be under control. Finally, we present three extensions of our constraint that can be particularly useful in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
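The Ordered Distribute condition itself is easy to state in code: for each capped value v, count how many variables take a value greater than or equal to v. The brute-force checker below illustrates the semantics only; it is not the paper's linear-time generalized arc-consistency propagator.

```python
def ordered_distribute_ok(assignment, limits):
    """Brute-force check of the Ordered Distribute condition: for every
    value v with a cap, the number of variables taking a value >= v must
    not exceed that cap (a semantic checker, not the paper's propagator)."""
    return all(sum(1 for x in assignment if x >= v) <= cap
               for v, cap in limits.items())

# Illustrative caps: at most 5 variables at level >= 1, 3 at level >= 2,
# and only 1 at level >= 3.
limits = {1: 5, 2: 3, 3: 1}
```

This cumulative reading is exactly what distinguishes the constraint from a plain global cardinality constraint, which counts only exact occurrences of each value.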
48. MULTI-APPROACH SATELLITE IMAGES FUSION BASED ON BLIND SOURCES SEPARATION.
- Author
-
BOULILA, WADII and FARAH, IMED RIADH
- Subjects
REMOTE-sensing images ,BLIND source separation ,DATA mining ,IMAGE analysis ,ALGORITHMS ,ARTIFICIAL intelligence ,DECISION support systems ,ARTIFICIAL neural networks - Abstract
The development of satellite image acquisition tools helped improving the extraction of information about natural scenes. In the proposed approach, we try to minimize imperfections accompanying the image interpretation process and to maximize useful information extracted from these images through the use of blind source separation (BSS) and fusion methods. In order to extract maximum information from multi-sensor images, we propose to use three algorithms of BSS that are FAST- ICA2D, JADE2D, and SOBI2D. Then by employing various fusion methods such as the probability, possibility, and evidence methods we can minimize both imprecision and uncertainty. In this paper, we propose a hybrid approach based on five main steps. The first step is to apply the three BSS algorithms to the satellites images; it results in obtaining a set of image sources representing each a facet of the land cover. A second step is to choose the image having the maximum of kurtosis and negentropy. After the BSS evaluation, we proceed to the training step using neural networks. The goal of this step is to provide learning regions which are useful for the fusion step. The next step consists in choosing the best adapted fusion method for the selected source images through a case-based reasoning (CBR) module. If the CBR module does not contain a case similar to the one we are seeking, we proceed to apply the three fusion methods. The evaluation of fusion methods is a necessary step for the learning process of our CBR. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
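The source-selection step in the abstract above ranks BSS outputs by non-Gaussianity. A minimal sketch of the kurtosis half of that criterion, assuming flattened pixel lists as input (this is an illustration, not the authors' code; the function names are made up):

```python
# Illustrative sketch (assumed, not the paper's implementation): selecting,
# among the source images produced by BSS, the one with the highest excess
# kurtosis -- a standard non-Gaussianity measure used in ICA-style methods.

def excess_kurtosis(pixels):
    """Sample excess kurtosis E[(x-mu)^4]/sigma^4 - 3 of a flat pixel list."""
    n = len(pixels)
    mu = sum(pixels) / n
    var = sum((x - mu) ** 2 for x in pixels) / n
    if var == 0:
        return 0.0
    m4 = sum((x - mu) ** 4 for x in pixels) / n
    return m4 / var ** 2 - 3.0

def select_best_source(sources):
    """sources: dict name -> flat pixel list; return the name whose
    pixels have the highest excess kurtosis."""
    return max(sources, key=lambda name: excess_kurtosis(sources[name]))

sources = {
    "fastica": [0, 0, 0, 10, 0, 0, 0, 0],  # spiky -> high kurtosis
    "jade":    [1, 2, 3, 4, 5, 6, 7, 8],   # flat -> low (negative) kurtosis
}
print(select_best_source(sources))  # fastica
```

The paper also uses negentropy; a full selection step would combine both measures.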
49. ON THE IMPORTANCE OF BEING QUANTUM.
- Author
-
AKL, SELIM G.
- Subjects
ARTIFICIAL intelligence ,NEURAL computers ,MACHINE theory ,ALGORITHMS ,ROBOTICS - Abstract
Game playing is commonly cited in debates concerning human versus machine intelligence, and Chess is often at the center of such debates. However, the role of Chess in delineating the difference between natural and artificial intelligence has been significantly diminished since a World Chess Champion lost in a tournament against a computer. Computer brute force is regularly blamed for the human defeat. This paper proposes a Quantum Chess Board in an attempt to bring back some equilibrium, putting humans and computers on an ostensibly equal footing when faced with the uncertainties of quantum physics. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
50. DISCOVERING IMPORTANT SEQUENTIAL PATTERNS WITH LENGTH-DECREASING WEIGHTED SUPPORT CONSTRAINTS.
- Author
-
YUN, UNIL and RYU, KEUN HO
- Subjects
MATHEMATICAL sequences ,ALGORITHMS ,CONSTRAINT satisfaction ,ARTIFICIAL intelligence ,DATA mining - Abstract
Sequential pattern mining with constraints has been developed to improve the efficiency and effectiveness of the mining process. Specifically, there are two interesting constraints for sequential pattern mining. First, some sequences are more important than others: weight constraints consider the importance of sequences and of the items within them. Second, patterns containing only a few items are interesting if they have high support, while long patterns can be interesting even though their supports are relatively small. Weight constraints and length-decreasing support constraints are two paradigms aimed at finding important sequential patterns and reducing uninteresting ones. Although both are vital elements, it is hard to consider the two constraints together with previous approaches. In this paper, we integrate weight and length-decreasing support constraints by pushing both into the prefix-projection growth method. For pruning, we define the Weighted Smallest Valid Extension property and apply it to our pruning methods to reduce the search space. In performance tests, we show that our algorithm mines important sequential patterns under length-decreasing support constraints. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
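The two constraints combined in the abstract above can be sketched together: a pattern is kept when its weighted support meets a threshold that shrinks as the pattern grows. This is a simplified illustration under assumed definitions (average item weights, a linear decreasing threshold), not the paper's prefix-projection algorithm or its WSVE pruning.

```python
# Illustrative sketch (assumptions: item weights averaged over the pattern,
# a simple linear length-decreasing threshold). Long patterns need less
# support than short ones to be considered important.

def weighted_support(pattern, sequences, weights):
    """Fraction of sequences containing pattern as a subsequence,
    scaled by the average weight of the pattern's items."""
    def contains(seq, pat):
        it = iter(seq)                      # subsequence test: items of pat
        return all(item in it for item in pat)  # appear in order in seq
    support = sum(contains(s, pattern) for s in sequences) / len(sequences)
    avg_w = sum(weights.get(i, 1.0) for i in pattern) / len(pattern)
    return support * avg_w

def min_support(length, base=0.6, step=0.1, floor=0.2):
    """Length-decreasing support threshold: base for length 1,
    dropping by step per extra item, never below floor."""
    return max(floor, base - step * (length - 1))

sequences = [list("abcd"), list("abce"), list("abd"), list("acd")]
weights = {"a": 1.0, "b": 0.9, "c": 1.2, "d": 0.8}
pat = ["a", "b", "c"]
ws = weighted_support(pat, sequences, weights)
print(ws >= min_support(len(pat)))  # True: ~0.52 against a threshold of 0.4
```

In the paper, this interplay is what makes pruning hard: a currently infrequent pattern may still satisfy the lower threshold of a longer extension, which is exactly what the WSVE property is designed to bound.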