Search Results (2,308 results)
2. Problem formulation in inventive design using Doc2vec and Cosine Similarity as Artificial Intelligence methods and Scientific Papers
- Author
-
Masih Hanifi, Hicham Chibane, Remy Houssin, and Denis Cavallucci, Laboratoire des sciences de l'ingénieur, de l'informatique et de l'imagerie (ICube), Université de Strasbourg (UNISTRA) / INSA Strasbourg / ENGEES / CNRS / Inria
- Subjects
Artificial Intelligence, Control and Systems Engineering, Computer Aided Engineering, Electrical and Electronic Engineering - Published
- 2022
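The entry above pairs Doc2vec document embeddings with cosine similarity to match problem formulations against scientific papers. As a minimal sketch of the similarity half only (the vectors below are made-up stand-ins for Doc2vec outputs, not the paper's data):

```python
import math

def cosine_similarity(u, v):
    # cos(theta) = (u . v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical Doc2vec-style vectors for a problem description and a paper
problem_vec = [0.12, 0.80, 0.31]
paper_vec = [0.10, 0.75, 0.40]
print(round(cosine_similarity(problem_vec, paper_vec), 3))  # → 0.993
```

In the paper's setting the vectors would come from a trained Doc2vec model; here they are three-dimensional toys so the arithmetic is checkable by hand.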
3. A neural network predictive control system for paper mill wastewater treatment
- Author
-
Hong Liang Liu, Guohe Huang, Yu Peng Lin, G. M. Zeng, Xiaosheng Qin, and L. He
- Subjects
Artificial neural network, Computer science, Control engineering, Nonlinear programming, Nonlinear system, Model predictive control, Gradient descent, Artificial Intelligence, Control and Systems Engineering, Electrical and Electronic Engineering - Abstract
This paper presents a neural network predictive control scheme for the coagulation process of wastewater treatment in a paper mill. A multi-layer back-propagation neural network is employed to model the nonlinear relationships between pollutant removal rates and chemical dosages, so that the system can adapt to a variety of operating conditions and acquire a more flexible learning ability. The system includes a neural network emulator of the reaction process, a neural network controller, and an optimization procedure based on a performance function that is used to identify the desired control inputs. The gradient descent algorithm is used to realize the optimization procedure. The results indicate that reasonable forecasting and control performance has been achieved with the developed system.
- Published
- 2003
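The abstract above finds control inputs (chemical dosages) by gradient descent on a performance function evaluated through an emulator. A minimal sketch of that loop, with a made-up linear dose-to-removal map standing in for the neural-network emulator (an assumption for illustration, not the paper's model):

```python
def emulator(dose):
    # Stand-in for the paper's neural-network process emulator:
    # a made-up linear dose -> removal-rate map (assumption)
    return 7.5 * dose

def performance(dose, target):
    # Performance function: squared tracking error on the removal rate
    return (target - emulator(dose)) ** 2

def find_dose(target, dose=1.0, lr=0.001, steps=2000, h=1e-5):
    for _ in range(steps):
        # Numerical gradient of the performance function w.r.t. the control input
        grad = (performance(dose + h, target) - performance(dose - h, target)) / (2 * h)
        dose -= lr * grad
    return dose

dose = find_dose(target=75.0)
print(round(dose, 2), round(emulator(dose), 1))  # → 10.0 75.0
```

With the real emulator being a trained network, the same loop applies; only the gradient would typically be obtained by back-propagating through the network rather than by finite differences.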
4. Eliciting knowledge for material design in steel making using paper models and codification scheme
- Author
-
X.D. Fang and S.S. Shivathaya
- Subjects
Knowledge management, Knowledge representation and reasoning, Computer science, Knowledge-based systems, Risk analysis (engineering), Structured interview, Engineering design process, Artificial Intelligence, Control and Systems Engineering, Electrical and Electronic Engineering - Abstract
Knowledge elicitation (KEL) is the most important stage, but often the principal bottleneck, in the development of knowledge-based systems. Owing to the difficulties of the knowledge-elicitation process, developing a knowledge-based system for material design in the steel-making industry is a complex task. This paper presents a new approach to knowledge elicitation for material design problems in that industry. It centres on the human aspects and is based on practical experience gained while developing a knowledge-based system for material design at BHP Steel, Australia. The approach involves codifying the customer's special requirements to identify the knowledge sources involved in the design process, followed by the use of paper models to improve the efficiency of the KEL process. A second stage of structured interviews, based on the customer's special requirement codes, elicits missing information and clarifies ambiguities or inconsistencies. The paper also discusses the use of non-interviewing techniques to elicit expert knowledge, in order to reduce the use of expensive interview time. The knowledge-representation scheme developed for the material design system aims at reducing search time and storage space by using a codification scheme to classify knowledge sources into appropriate categories.
- Published
- 1995
5. An expert system study for evaluating technical papers: Decision-making for an IPC
- Author
-
Boris Tamm
- Subjects
Knowledge management, Computer science, Management science, Probabilistic logic, Expert system, Data dictionary, Subject-matter expert, Problem domain, Artificial Intelligence, Control and Systems Engineering, Electrical and Electronic Engineering - Abstract
Evaluation of scientific contributions is a typical expert task that can be formalized to only a moderate extent. Therefore, International Programme Committees collect the opinions of different experts, leaving the final decision to one expert or a small group. The aim of this paper is to model this problem domain, which does not fit ordinary fixed or probabilistic knowledge-base structures. In this case, the knowledge must be derived and measured by robust structures that reflect expert reasoning. Methods for structuring the rule base and the data dictionary, as well as logical distances between the values of the decision factors, are discussed.
- Published
- 1996
6. Preface of the special section on selected best papers of the Ninth International Workshop on Cooperative Information Agents (CIA-2004)
- Author
-
Sascha Ossowski, Matthias Klusch, and Rainer Unland
- Subjects
Information agents, Computer science, Artificial Intelligence, Control and Systems Engineering, Electrical and Electronic Engineering - Published
- 2005
7. Thermal modeling of power transformers using evolving fuzzy systems
- Author
-
André Paim Lemos, W. C. Boaventura, Walmir Matos Caminhas, and L. M. Souza
- Subjects
Computer science, Electrical insulation paper, Stiffness, Fuzzy control system, Distribution transformer, Fuzzy logic, Reliability engineering, Thermal, Transformer, Artificial Intelligence, Control and Systems Engineering, Electrical and Electronic Engineering - Abstract
Thermal models for distribution transformers with the core immersed in oil are of utmost importance for transformer lifetime studies. The hot-spot temperature determines the degradation speed of the insulating paper. High temperatures cause loss of mechanical stiffness, generating failures. Since the paper is the most fragile component of the transformer, its degradation determines the lifetime limits. Thus, good thermal models are needed to generate reliable data for lifetime-forecasting methodologies. It is also desirable that thermal models be able to adapt to changes in transformer behaviour caused by structural changes, maintenance and so on. In this work we apply an evolving fuzzy model to build adaptive thermal models of distribution transformers. The model is able to adapt both its parameters and its structure based on a stream of data. The proposed model is evaluated using actual data from an experimental transformer, and the results suggest that evolving fuzzy models are a promising approach for adaptive thermal modelling of distribution transformers.
- Published
- 2012
8. Online prediction of pulp brightness using fuzzy logic models
- Author
-
Mokhtar Benaoudia, Sofiane Achiche, Marek Balazinski, and Luc Baron
- Subjects
Brightness, Computer science, Pulp (paper), Process variable, Raw material, Fuzzy logic, Process engineering, Artificial Intelligence, Control and Systems Engineering, Electrical and Electronic Engineering - Abstract
The quality of thermomechanical pulp (TMP) is influenced by a large number of variables. To control the pulp and paper process, the operator has to choose the influencing variables manually, and these can change significantly depending on the quality of the raw material (wood chips). Very little knowledge exists about the relationships between the quality of pulp obtained by the TMP process and wood-chip properties. The research proposed in this paper uses genetically generated knowledge bases to model these relationships, drawing on measurements of wood-chip quality, process parameter data and properties of raw materials such as bleaching agents. The rule bases provide a better understanding of the relationships between the different influencing variables (inputs and outputs).
- Published
- 2007
9. A review of soft techniques for SMS spam classification: Methods, approaches and applications
- Author
-
Adebayo Abayomi-Alli, Olusola Abayomi-Alli, Modupe Odusami, and Sanjay Misra
- Subjects
Information retrieval, Computer science, Android (operating system), Client-side, Artificial Intelligence, Control and Systems Engineering, Electrical and Electronic Engineering - Abstract
Background: The accessibility and simplicity of the Short Message Service (SMS) have made it attractive to malicious users, imposing unnecessary costs on mobile users and network providers' resources. Aim: This paper identifies and reviews the state of the art in SMS spam filtering against several metrics: AI methods and techniques, approaches, deployment environment, and the overall acceptability of existing SMS applications. Methodology: The study explored eleven databases (IEEE, Science Direct, Springer, Wiley, ACM, DBLP, Emerald, SU, Sage, Google Scholar, and Taylor and Francis), which returned 1198 publications. Several screening criteria were applied, including duplicate removal, removal of irrelevant papers, and exclusion of abstracts with undefined methodology. Finally, 83 papers were selected for in-depth analysis. A quantitative evaluation was conducted on the selected studies using seven search strategies (SS): source, methods/techniques, AI approach, architecture, status, datasets, and SMS spam mobile applications. Result: The quantitative analysis (QA) shows that machine learning is the most common classification methodology at 49%, with Bayesian and support vector machine algorithms showing the highest usage, compared with 39% for statistical analysis and 12% for evolutionary algorithms. The QA of feature selection methods shows that most studies use document frequency, term frequency, and n-gram techniques. Content-based, non-content and hybrid approaches account for 83%, 5%, and 12% of existing work respectively. The QA of architectures shows that 25% of existing solutions are deployed client-side, 19% server-side, 6% collaboratively, and 50% unspecified. Regarding the status of existing research, 35% of studies propose new methods using existing algorithms, 29% only evaluate existing algorithms, and 20% propose new methods only. Conclusion: The majority of existing SMS spam filtering solutions remain at the "Proposed" or "Proposed and Evaluated" status. A taxonomy of existing methodologies is developed, and only 8.23% of Android users are found to actually use existing SMS anti-spam applications. There is a need for researchers to exploit all security methods and algorithms to secure SMS and to extend classification to other short-message platforms. A new English SMS spam dataset is also provided for future research in text mining and telemarketing to reduce global spam activity.
- Published
- 2019
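The review above reports Bayesian classifiers among the most-used SMS spam filters. A minimal multinomial naive Bayes sketch with Laplace smoothing, on a made-up four-message corpus (illustrative only, not one of the surveyed datasets):

```python
import math
from collections import Counter

def train(corpus):
    # corpus: list of (label, text); returns per-class word counts and doc counts
    counts = {"spam": Counter(), "ham": Counter()}
    docs = Counter()
    for label, text in corpus:
        docs[label] += 1
        counts[label].update(text.lower().split())
    return counts, docs

def classify(text, counts, docs):
    vocab = set(counts["spam"]) | set(counts["ham"])
    best, best_lp = None, float("-inf")
    for label in counts:
        lp = math.log(docs[label] / sum(docs.values()))  # class prior
        total = sum(counts[label].values())
        for w in text.lower().split():
            # Laplace smoothing so unseen words do not zero the likelihood
            lp += math.log((counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

corpus = [
    ("spam", "win cash prize now"),
    ("spam", "free prize claim now"),
    ("ham", "are you coming home"),
    ("ham", "call me when home"),
]
counts, docs = train(corpus)
print(classify("claim your free cash", counts, docs))  # → spam
```

A real filter would add the feature-selection steps the review highlights (document frequency, n-grams) before counting.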
10. A MEMCIF-IN method for safety risk assessment in oil and gas industry based on interval numbers and risk attitudes
- Author
-
Donghong Tian, Bing Wang, Meng Zhou, and Chunlan Zhao
- Subjects
Computer science, Probability density function, Interval (mathematics), Risk matrix, Safety risk, Petroleum industry, Multiple criteria, Statistics, Artificial Intelligence, Control and Systems Engineering, Electrical and Electronic Engineering - Abstract
This paper proposes a novel method for constructing a risk matrix to assess safety risks in the oil and gas industry. Safety risk assessment problems often involve multiple experts and multiple criteria, and the assessment data are often given as interval numbers. To assess risks better, a definition of interval numbers with distribution and utility functions is proposed. The frequency and the consequence of a risk are the only two indicators needed in a risk matrix, and their values must be crisp. A multi-expert and multi-criterion information fusion based on interval numbers (MEMCIF-IN) model is therefore built. Firstly, a multi-expert and multi-criterion fusion model is constructed to combine individual interval numbers into a collective interval number and to integrate multiple criteria into a comprehensive index. In the fusion model, the weights of assessment experts are calculated from both objective and subjective weights, and the information in the individual interval numbers is preserved without loss in the final result. Secondly, a Continuous Weighted Ordered Weighted Aggregation (C-WOWA) operator is proposed, in which the position weights generated by the utility function and the importance weights generated by the probability density function are considered at the same time. The position weights correct for the impact of experts' risk attitudes, while the importance weights reflect the importance of the points within an interval number. Finally, a risk matrix is constructed to show which risks are high and which are low. An application is presented to show the practicality and rationality of the proposed method.
- Published
- 2019
11. Islanding and non-islanding disturbance detection in microgrid using optimized modes decomposition based robust random vector functional link network
- Author
-
Pradipta Kishore Dash, Lipsa Priyadarshini, Badri Narayan Sahu, and Tatiana Chakravorti
- Subjects
IEEE 1547, Computer science, Control theory, Harmonics, Islanding, Waveform, Firefly algorithm, Microgrid, Voltage, Artificial Intelligence, Control and Systems Engineering, Electrical and Electronic Engineering - Abstract
This paper presents the detection and classification of islanding and non-islanding disturbances in a PV-based microgrid. To design an effective pattern recognition scheme, the microgrid is operated in grid-synchronous mode (according to IEEE 1547) as well as islanding mode. Various non-islanding disturbances such as sag, swell, harmonics and load switching, along with islanding events (during grid-synchronous operation, according to UL 1741, section 46), are generated in the microgrid, and the non-stationary voltage signal samples for each event have been extracted. To cope with the requirement for a dynamic detection threshold, parameter adaptive Variational Mode Decomposition (PAVMD) with a Robust Regularized Random Vector Functional Link Network (RRVFLN) is introduced in this paper. The extracted waveforms are processed by the proposed PAVMD algorithm, in which the firefly algorithm is used for parameter optimization, and distinguishable features are extracted from its output. The applicability of PAVMD with RRVFLN has been tested for different disturbances in both grid-connected and islanding modes, which is a new contribution to the existing literature. The proposed algorithm has also been tested under noisy conditions, and the classification accuracy achieved is satisfactory and acceptable.
- Published
- 2019
12. Sentiment analysis on stock social media for stock price movement prediction
- Author
-
Hamid Beigy and Ali Derakhshan
- Subjects
Social network, Computer science, Sentiment analysis, Data science, The Internet, Social media, Stock market, Graphical model, Artificial Intelligence, Control and Systems Engineering, Electrical and Electronic Engineering - Abstract
The opinions of other people are an essential piece of information for making informed decisions. With increasing use of the Internet, the web has become an excellent source of user viewpoints in different domains. However, the growing volume of opinionated text on the one hand, and the complexity caused by contrasts in user opinions on the other, make it almost impossible to read all of these reviews and reach an informed decision. These requirements have encouraged a new line of research on mining user reviews, called opinion mining. User viewpoints can change over time, which is an important issue for companies. One of the most challenging problems in opinion mining is model-based opinion mining, which aims to model the generation of words by modeling their probabilities. In this paper, we address model-based opinion mining by introducing a part-of-speech graphical model to extract user opinions, and test it on two datasets, one in English and one in Persian, where the Persian dataset is gathered for this paper from an Iranian stock-market social network. In predicting the stock market with this model, we achieved better accuracy than methods that use explicit sentiment labels for comments.
- Published
- 2019
13. Multi-category ternion support vector machine
- Author
-
Reshma Rastogi, Pooja Saigal, and Suresh Chandra
- Subjects
Training set, Computational complexity theory, Computer science, Pattern recognition, System of linear equations, Support vector machine, Binary classification, Hyperplane, Quadratic programming, Artificial Intelligence, Control and Systems Engineering, Electrical and Electronic Engineering - Abstract
This paper proposes a three-class classifier termed the 'Ternion support vector machine' (TerSVM) and a tree-based multi-category classification approach termed the 'Multi-category ternion support vector machine' (M-TerSVM). The proposed classifier, TerSVM, is motivated by twin multi-class support vector classification (Twin-KSVC) and evaluates data patterns for three outputs (+1, −1, 0). Twin-KSVC has very high computational complexity, which makes it infeasible for real-world problems. TerSVM overcomes this limitation by formulating three unconstrained minimization problems (UMPs) instead of the quadratic programming problems solved by Twin-KSVC. The UMPs of TerSVM are solved as systems of linear equations that determine three proximal nonparallel hyperplanes. TerSVM can also be used as a binary classifier. This work also proposes a multi-category classification algorithm, M-TerSVM, which extends the three-class classifier into a multi-category framework. For a K-class problem, M-TerSVM constructs a classifier model in the form of a ternion tree of height ⌊K/2⌋, where the data are partitioned into three groups at each level. The algorithm uses a novel procedure to identify a reduced training set, which improves its learning time. Numerical experiments on synthetic and benchmark datasets indicate that M-TerSVM outperforms other classical multi-category approaches such as one-against-all and Twin-KSVC in terms of generalization ability and learning time. The paper also proposes the application of M-TerSVM to handwritten digit recognition and color image classification.
- Published
- 2019
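The abstract above replaces quadratic programs with unconstrained problems solved as linear systems. A loose sketch of that general idea, fitting a proximal-style hyperplane to (+1, −1, 0) targets via regularized least squares, which likewise reduces to a small linear system (this illustrates the technique only; it is not TerSVM's actual formulation):

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for a small linear system
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_hyperplane(points, targets, lam=1e-3):
    # Regularized least squares: (X^T X + lam*I) w = X^T y, with a bias column
    X = [p + [1.0] for p in points]
    n = len(X[0])
    XtX = [[sum(X[k][i] * X[k][j] for k in range(len(X))) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Xty = [sum(X[k][i] * targets[k] for k in range(len(X))) for i in range(n)]
    return solve(XtX, Xty)

# Toy three-output targets (+1, -1, 0) in the spirit of a ternion scheme
points = [[2.0, 2.0], [3.0, 2.5], [-2.0, -2.0], [-3.0, -2.5], [0.1, -0.1], [-0.1, 0.2]]
targets = [1.0, 1.0, -1.0, -1.0, 0.0, 0.0]
w = fit_hyperplane(points, targets)
score = lambda p: w[0] * p[0] + w[1] * p[1] + w[2]
print(score([2.5, 2.0]) > 0.5, score([-2.5, -2.0]) < -0.5)  # → True True
```

The point of the sketch is the cost trade-off the abstract describes: a linear solve is much cheaper than a quadratic program of the same size.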
14. A hybrid particle swarm optimization for the selective pickup and delivery problem with transfers
- Author
-
Marie-Ange Manier, Zhihao Peng, Zaher Al Chami, and Hervé Manier, Franche-Comté Électronique Mécanique, Thermique et Optique - Sciences et Technologies (FEMTO-ST, UMR 6174), UTBM / ENSMM / Université de Franche-Comté / Université Bourgogne Franche-Comté / CNRS
- Subjects
Mathematical optimization, Linear programming, Computer science, Transportation, Vehicle routing problem, Selective problem, Transfers, Particle swarm optimization, Metaheuristic, Artificial Intelligence, Control and Systems Engineering, Electrical and Electronic Engineering - Abstract
In pickup and delivery problems, all demands should be transported from pickup points (suppliers) to delivery points (customers) by vehicles while respecting a set of constraints. Honoring all demands is sometimes impossible when all the constraints are taken into account, so a selective aspect is added to relax the constraint that every demand must be satisfied. This paper studies a variant called the selective pickup and delivery problem with transfers (SPDPT), in which some demands can be transferred from one vehicle to another, giving a chance to find more solutions. A mixed integer linear program is first proposed to describe the studied problem, with two objectives: maximizing profit and minimizing distance. The model is validated on newly generated instances. Due to the complexity of the problem, large instances could not be solved to optimality in a reasonable time; as an alternative, a new metaheuristic based on a hybrid particle swarm optimization is developed to tackle this bi-objective problem. The results show that the proposed method is efficient and competitive.
- Published
- 2019
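The entry above builds its metaheuristic on particle swarm optimization. A minimal PSO sketch minimizing a toy surrogate objective (the sphere function standing in for a routing cost; all parameter values are illustrative, not the paper's):

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=42):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + cognitive pull to pbest + social pull to gbest
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy surrogate objective: minimize a sum of squares
best, value = pso(lambda x: sum(v * v for v in x), dim=3)
print(f"best value: {value:.2e}")
```

The paper's hybrid applies the same swarm dynamics to encoded routes with transfer moves; here the search space is continuous so the mechanics stay visible.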
15. Heuristic design of fuzzy inference systems: A review of three decades of research
- Author
-
Varun Kumar Ojha, Vaclav Snasel, and Ajith Abraham
- Subjects
Fuzzy inference, Computer science, Evolutionary algorithm, Fuzzy logic, Fuzzy control system, Genetic fuzzy systems, Interpretability, Heuristic, Curse of dimensionality, Artificial Intelligence, Control and Systems Engineering, Electrical and Electronic Engineering - Abstract
This paper provides an in-depth review of the optimal design of type-1 and type-2 fuzzy inference systems (FIS) using five well-known computational frameworks, some of which are linked to each other: genetic-fuzzy systems (GFS), neuro-fuzzy systems (NFS), hierarchical fuzzy systems (HFS), evolving fuzzy systems (EFS), and multi-objective fuzzy systems (MFS). The heuristic design of GFS uses evolutionary algorithms to optimize both Mamdani-type and Takagi-Sugeno-Kang-type fuzzy systems, whereas the NFS combines the FIS with neural network learning to improve approximation ability. An HFS combines two or more low-dimensional fuzzy logic units in a hierarchical design to overcome the curse of dimensionality; an EFS handles data streaming by evolving the system incrementally; and an MFS addresses multi-objective trade-offs such as the simultaneous maximization of interpretability and accuracy. This paper offers a synthesis of these dimensions and explores their potentials, challenges, and opportunities in FIS research. The review also examines the complex relations among these dimensions and the possibilities of combining one or more computational frameworks with another dimension: deep fuzzy systems.
- Published
- 2019
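For readers new to the systems this review covers, a minimal zero-order Takagi-Sugeno fuzzy inference sketch with two illustrative rules (the temperature/fan rules and membership functions are assumptions for the example, not taken from the review):

```python
def tri(x, a, b, c):
    # Triangular membership function peaking at b, zero outside [a, c]
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def tsk_fis(temp):
    # Two zero-order Takagi-Sugeno rules (illustrative):
    #   IF temp is cold THEN fan = 10
    #   IF temp is hot  THEN fan = 90
    # Inputs are assumed to lie inside at least one rule's support.
    w_cold = tri(temp, -10, 0, 25)
    w_hot = tri(temp, 15, 40, 50)
    # Weighted-average defuzzification
    return (w_cold * 10 + w_hot * 90) / (w_cold + w_hot)

print(round(tsk_fis(20.0), 1))  # → 50.0
```

The frameworks the review surveys differ mainly in how the rule parameters above are obtained: tuned by evolutionary search (GFS), learned like network weights (NFS), or grown online from data streams (EFS).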
16. Implementing Deep Learning for comprehensive aircraft icing and actuator/sensor fault detection/identification
- Author
-
Yiqun Dong
- Subjects
Computer science, Deep learning, Control engineering, Fault detection and isolation, Flight dynamics, Actuator, Transfer learning, Icing, Artificial Intelligence, Control and Systems Engineering, Electrical and Electronic Engineering - Abstract
The detection and identification of aircraft icing and actuator/sensor faults has been a lasting topic in flight-safety research. Current algorithms are usually tailored to specific cases (fault/icing location, magnitude, etc.). Although performance in the designated cases may be good, transferring an algorithm to different cases is usually costly, as parameter tuning or even algorithm redesign may be required. In this paper, the author explores a comprehensive scheme that balances good performance with wide transferability across cases. Following recent advances in other research communities, we adopt state-of-the-art Deep Learning (DL) and transfer learning (TL) concepts. A scheme for icing/actuator fault detection using DL is first constructed. TL is then used to transfer this scheme to other tasks, e.g. fault/icing identification and sensor fault detection. Test results show that the TL-enhanced DL scheme not only performs well on the designated detection task but also transfers flexibly at low tuning effort. Through this paper the author advocates further exploring the potential of DL and TL techniques to advance research in the flight dynamics and control realm.
- Published
- 2019
17. An interval-valued intuitionistic fuzzy DEMATEL method combined with Choquet integral for sustainable solid waste management
- Author
-
Huchang Liao, Norsyahida Zulkifli, Enrique Herrera-Viedma, Abdullah Al-Barakati, and Lazim Abdullah
- Subjects
Solid waste management, Mathematical optimization, Computer science, Intuitionistic fuzzy, Interval valued, Multiple-criteria decision analysis, Choquet integral, Artificial Intelligence, Control and Systems Engineering, Electrical and Electronic Engineering - Abstract
The decision-making trial and evaluation laboratory (DEMATEL) is a pragmatic method for constructing the structural correlation of criteria in a multi-criteria decision making (MCDM) problem. This paper proposes modifications to DEMATEL. Unlike the typical DEMATEL, which uses crisp numbers, this modification introduces interval-valued intuitionistic fuzzy numbers to enhance judgements in a group decision-making environment. We use the interval-valued intuitionistic fuzzy weighted averaging operator instead of the typical averaging operator to aggregate decision-makers' preferences. The paper combines interval-valued intuitionistic fuzzy numbers, DEMATEL and the Choquet integral to allow a strong interrelationship between the sum of rows and the sum of columns of criteria in the structural correlation analysis. The feasibility and applicability of the proposed method are illustrated by a numerical example on sustainable solid waste management. The result indicates that 'collaboration and synergy' is the most influential criterion for sustainable solid waste management, and comparative results are presented to show the validity of the proposed method.
- Published
- 2019
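The method above relies on the Choquet integral to capture interaction between criteria that a plain weighted average cannot. A minimal discrete Choquet integral sketch with a hypothetical super-additive capacity over two criteria (the capacities and scores are made up for illustration, not the paper's data):

```python
def choquet(values, mu):
    # Discrete Choquet integral of `values` (criterion -> score) w.r.t. the
    # fuzzy measure `mu` (frozenset of criteria -> capacity in [0, 1])
    items = sorted(values, key=values.get)  # criteria in ascending score order
    total, prev = 0.0, 0.0
    for i, c in enumerate(items):
        coalition = frozenset(items[i:])  # criteria scoring at least values[c]
        total += (values[c] - prev) * mu[coalition]
        prev = values[c]
    return total

# Hypothetical capacities: mu(both) > mu(one) + mu(other) would fail here,
# but mu is super-additive (1.0 > 0.7 + 0.2), rewarding the criteria jointly
mu = {
    frozenset({"collaboration", "technology"}): 1.0,
    frozenset({"collaboration"}): 0.7,
    frozenset({"technology"}): 0.2,
    frozenset(): 0.0,
}
scores = {"collaboration": 0.6, "technology": 0.9}
print(round(choquet(scores, mu), 2))  # → 0.66
```

With an additive measure the same function collapses to an ordinary weighted sum; the non-additive capacity is what lets the integral model synergy between criteria.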
18. A Stackelberg security Markov game based on partial information for strategic decision making against unexpected attacks
- Author
-
Silvia E. Albarran and Julio B. Clempner
- Subjects
Computer Science::Computer Science and Game Theory ,0209 industrial biotechnology ,Mathematical optimization ,Markov chain ,Computer science ,02 engineering and technology ,Markov model ,Nonlinear programming ,Variable (computer science) ,020901 industrial engineering & automation ,Artificial Intelligence ,Control and Systems Engineering ,Complete information ,0202 electrical engineering, electronic engineering, information engineering ,Stackelberg competition ,Resource allocation ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering - Abstract
This paper considers an important class of Stackelberg security problems, which is characterized by the fact that defenders and attackers have incomplete information at each stage about the value of the current state. The inability to observe the exact state is motivated by the fact that it is impossible to measure exactly the state variables of the defenders and attackers. Most existing approaches for computing Stackelberg security games provide no guarantee if the estimated model is inaccurate. To address this drawback, this paper presents several important results. First, it provides a novel solution for computing the Stackelberg security games for multiple players, considering finite resource allocation in domains with incomplete information. This new model is restricted to a partially observable Markov model. Next, we suggest a two-step iterative proximal/gradient method for computing the Stackelberg equilibrium for the security game: in each step, the algorithm solves an independent convex nonlinear programming problem implementing a regularized penalty approach. Regularization ensures the convergence to one (unique) of the possible equilibria of the Stackelberg game. To make the problem computationally tractable, we define the c-variable method for partially observable Markov games. Third, we show by simulation that our proposed model overcomes the disadvantages of previous Stackelberg security games solvers. Hence, as our final contribution, we present a new random walk method based on partial information. A numerical example for protecting airport terminals suggests the effectiveness of the proposed method, presenting the resulting patrolling strategies and two different realizations of the Stackelberg security game employing the partially observable random walk algorithm.
- Published
- 2019
19. Reinforcement learning for pricing strategy optimization in the insurance industry
- Author
-
Fernando Fernández, Elena Krasheninnikova, Javier García, Roberto Maestre, Comunidad de Madrid, and Ministerio de Economía y Competitividad (España)
- Subjects
Informática ,0209 industrial biotechnology ,Customer retention ,Computer science ,business.industry ,02 engineering and technology ,Maximization ,Microeconomics ,020901 industrial engineering & automation ,Artificial Intelligence ,Control and Systems Engineering ,Carry (investment) ,Reinforcement learning ,0202 electrical engineering, electronic engineering, information engineering ,Pricing strategy optimization ,Revenue ,020201 artificial intelligence & image processing ,Markov decision process ,Electrical and Electronic Engineering ,business ,Constraint (mathematics) ,Insurance industry ,Financial services - Abstract
Pricing is a fundamental problem in the banking sector, and is closely related to a number of financial products such as credit scoring or insurance. In the insurance industry an important question arises, namely: how can insurance renewal prices be adjusted? Such an adjustment has two conflicting objectives. On the one hand, insurers are forced to retain existing customers, while on the other hand insurers are also forced to increase revenue. Intuitively, one might assume that revenue increases by offering high renewal prices; however, this might also cause many customers to terminate their contracts. Conversely, low renewal prices help retain most existing customers, but could negatively affect revenue. Therefore, adjusting renewal prices is a non-trivial problem for the insurance industry. In this paper, we propose a novel formulation of the renewal price adjustment problem as a sequential decision problem and, consequently, as a Markov decision process (MDP). In particular, this study analyzes two different strategies to carry out this adjustment. The first maximizes revenue while analyzing the effect of this maximization on customer retention, while the second maximizes revenue subject to the client retention level not falling below a given threshold. The former case is related to MDPs with a single criterion to be optimized. The latter case is related to Constrained MDPs (CMDPs) with two criteria, where the first one is related to optimization, while the second is subject to a constraint. This paper also contributes the resolution of these models by means of a model-free Reinforcement Learning algorithm. Results have been reported using real data from the insurance division of BBVA, one of the largest Spanish companies in the banking sector. This work has been partially funded by the TIN2015-65686-C5 Spanish Ministerio de Economía y Competitividad project and FEDER funds.
Javier García is partially supported by the Comunidad de Madrid (Spain) funds under the project 2016-T2/TIC-1712.
- Published
- 2019
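The single-criterion MDP strategy described in the abstract above, maximizing revenue while observing the effect on retention, can be sketched with tabular Q-learning on a toy renewal-pricing simulator. The state space, transition model, and reward numbers below are invented for illustration and are not BBVA's model; the paper's model-free algorithm and real data are far richer.

```python
import random

# Toy renewal-pricing MDP (illustrative only): state = churn-risk bucket
# (0 = loyal, 2 = at risk), action = price change in {-1, 0, +1}.
def simulate(state, action, rng):
    # A higher price raises per-contract revenue but lowers retention probability.
    retain_prob = [0.9, 0.7, 0.5][state] - 0.1 * action
    retained = rng.random() < retain_prob
    reward = (1.0 + 0.2 * action) if retained else 0.0
    # A lost renewal pushes the segment toward the at-risk bucket.
    next_state = state if retained else min(2, state + 1)
    return next_state, reward

def q_learning(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * 3 for _ in range(3)]  # Q[state][action_index]
    for _ in range(episodes):
        s = 0
        for _ in range(10):  # fixed-length episode of renewal rounds
            if rng.random() < eps:
                a = rng.randrange(3)                       # explore
            else:
                a = max(range(3), key=lambda i: Q[s][i])   # exploit
            s2, r = simulate(s, a - 1, rng)
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

The constrained (CMDP) strategy from the abstract would additionally track a retention criterion and reject policies whose retention falls below the threshold; that extension is omitted here for brevity.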
20. Industry 4.0: A bibliometric analysis and detailed overview
- Author
-
Pranab K. Muhuri, Ajith Abraham, and Amit K. Shukla
- Subjects
Structure (mathematical logic) ,0209 industrial biotechnology ,Bibliometric analysis ,Industry 4.0 ,Computer science ,02 engineering and technology ,Data science ,Field (computer science) ,020901 industrial engineering & automation ,Artificial Intelligence ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,Citation ,Industrial Revolution - Abstract
With the arrival of Industry 4.0, the overall transformation using digital integration and intelligent engineering has taken a giant leap towards futuristic technology. All devices today are equipped with machine learning, automation has become a priority and thus another industrial revolution is in the making. In this state-of-the-art paper, we have performed bibliometric analysis and an extensive survey on recent developments in the field of “Industry 4.0”. In the bibliometric analysis, different performance metrics are extracted, such as: total papers, total citations, and citations per paper. Further, the top 10 most productive and most highly cited authors, major subject areas, sources or journals, countries, and institutions are evaluated. A list of highly influential papers is also assessed. A detailed discussion of the most cited papers is then provided, together with a sectional classification. This paper summarizes the growth structure of Industry 4.0 during the last 5 years and provides a concise background overview of Industry 4.0 related works and various application areas.
- Published
- 2019
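The performance metrics listed in the abstract above (total papers, total citations, citations per paper, most productive authors) reduce to simple aggregations over a publication record set. The record schema below (`authors`, `citations` keys) is a hypothetical stand-in for whatever export format the bibliometric database provides.

```python
from collections import Counter

def bibliometric_summary(records):
    """Compute basic bibliometric indicators from publication records.

    Each record is assumed to be a dict with an 'authors' list and a
    'citations' count (an illustrative schema, not a real database format).
    """
    total_papers = len(records)
    total_citations = sum(r['citations'] for r in records)
    cpp = total_citations / total_papers if total_papers else 0.0
    papers_by_author = Counter()
    for r in records:
        for author in r['authors']:
            papers_by_author[author] += 1
    return {
        'total_papers': total_papers,
        'total_citations': total_citations,
        'citations_per_paper': cpp,
        'most_productive': papers_by_author.most_common(),  # ranked author list
    }
```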
21. Solving a new bi-objective hierarchical hub location problem with an M∕M∕c queuing framework
- Author
-
Melahat Khodemani-Yazdi, Mahdi Bashiri, Reza Tavakkoli-Moghaddam, and Yaser Rahimi
- Subjects
0209 industrial biotechnology ,Queueing theory ,Mathematical optimization ,Computer science ,Sorting ,02 engineering and technology ,Poisson distribution ,Fuzzy logic ,Variable (computer science) ,symbols.namesake ,020901 industrial engineering & automation ,Artificial Intelligence ,Control and Systems Engineering ,Genetic algorithm ,Simulated annealing ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,Fixed cost - Abstract
This paper presents a bi-objective hierarchical hub location problem with hub facilities as servicing centers. The objectives are to minimize the total cost (i.e., fixed cost of establishing hub facilities and transportation cost) and the maximum route length, simultaneously. Hub facilities are categorized as central and local ones. The queuing frameworks for these types of facilities are considered as M/M/c and M/M/1, respectively. Moreover, the travelling time and the number of entities are assumed to follow exponential and Poisson distributions, respectively. The presented mathematical model is solved by a new game-theoretic variable neighborhood fuzzy invasive weed optimization (GVIWO) algorithm introduced in this paper. To evaluate the efficiency of this proposed algorithm, some experiments are conducted and the related results are compared with the non-dominated sorting genetic algorithm (NSGA-II) and the hybrid simulated annealing (HSA) algorithm with respect to some comparison metrics. The results show that the proposed GVIWO algorithm outperforms both NSGA-II and HSA.
- Published
- 2019
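For the M/M/c queuing framework the abstract above assigns to central hubs, the standard Erlang-C formulas give the probability that an arriving entity must wait and the mean waiting time. This is a textbook sketch of those formulas, not the paper's full bi-objective model.

```python
from math import factorial

def mmc_wait(lam, mu, c):
    """Erlang-C analysis of an M/M/c queue.

    lam: Poisson arrival rate, mu: exponential service rate per server,
    c: number of servers. Returns (P_wait, mean waiting time in queue).
    """
    rho = lam / (c * mu)            # server utilization
    assert rho < 1, "queue must be stable (lam < c * mu)"
    a = lam / mu                    # offered load in Erlangs
    # Normalizing constant 1/P0 of the stationary distribution
    p0_inv = sum(a**k / factorial(k) for k in range(c)) \
             + a**c / (factorial(c) * (1 - rho))
    # Erlang-C: probability an arrival finds all c servers busy
    p_wait = (a**c / (factorial(c) * (1 - rho))) / p0_inv
    wq = p_wait / (c * mu - lam)    # mean time in queue (Little's law)
    return p_wait, wq
```

With c = 1 the formulas collapse to the familiar M/M/1 results P_wait = rho and Wq = rho / (mu - lam), which is a convenient consistency check.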
22. GAPN-LA: A framework for solving graph problems using Petri nets and learning automata
- Author
-
Mohammad Reza Meybodi, Seyed Mehdi Vahidipour, Alireza Rezvanian, and Mehdi Esnaashari
- Subjects
Vertex (graph theory) ,Structure (mathematical logic) ,0209 industrial biotechnology ,Theoretical computer science ,Learning automata ,Computer science ,02 engineering and technology ,Petri net ,Graph ,Domain (software engineering) ,020901 industrial engineering & automation ,Artificial Intelligence ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,Graph (abstract data type) ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering - Abstract
A fusion of learning automata and Petri nets, referred to as APN-LA, has been recently introduced in the literature for achieving adaptive Petri nets. A number of extensions to this adaptive Petri net have also been introduced; together we name them the APN-LA family. Members of this family can be utilized for solving problems in the domain of graph problems; each member is suitable for a specific category within this domain. In this paper, we aim to generalize this family into a single framework, called generalized APN-LA (GAPN-LA), which can be considered as a framework for solving graph-based problems. This framework is an adaptive Petri net, organized into a graph structure. Each place or transition in the underlying Petri net is mapped into exactly one vertex of the graph, and each vertex of the graph represents a part of the underlying Petri net. A vertex in GAPN-LA can be considered as a module, which, in cooperation with other modules in the framework, helps in solving the problem at hand. To demonstrate the problem-solving capability of GAPN-LA, several graph-based problems are solved in this paper using the proposed framework.
- Published
- 2019
23. Orienteering-based informative path planning for environmental monitoring
- Author
-
Alessandro Farinelli, Lorenzo Bottarelli, Manuele Bicego, and Jason Blum
- Subjects
Informative path planning ,Mobile sensors ,Active learning ,Gaussian process ,Orienteering ,Computer science ,Active learning (machine learning) ,020209 energy ,Computation ,Real-time computing ,Process (computing) ,Context (language use) ,02 engineering and technology ,Level set ,Artificial Intelligence ,Control and Systems Engineering ,Path (graph theory) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Motion planning ,Electrical and Electronic Engineering - Abstract
The use of robotic mobile sensors for environmental monitoring applications has gained increasing attention in recent years. In this context, a common application is to determine the region of space where the analyzed phenomenon is above or below a given threshold level — this problem is known as level set estimation. One example is the analysis of water in a lake, where the operators might want to determine where the dissolved oxygen level is above a critical threshold value. Recent research proposes to model the spatial phenomena of interest using Gaussian Processes, and then use an informative path planning procedure to determine where to gather data. In this paper, in contrast to previous works, we consider the case where a mobile platform with low computational power can continuously acquire measurements with a negligible energy cost. This scenario imposes a change in perspective, since now efficiency is achieved by reducing the distance traveled by the mobile platform and the computation required by this path selection process. In this paper we propose two active learning algorithms aimed at addressing this issue: specifically, (i) SBOLSE, which casts informative path planning into an orienteering problem, and (ii) PULSE, which exploits a less accurate but computationally faster path selection procedure. Evaluations of our algorithms on both a real-world and a synthetic dataset show that our approaches can compute informative paths that achieve a high-quality classification, while significantly reducing the travel distance and the computation time.
- Published
- 2019
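The level set estimation step described in the abstract above reduces, once a Gaussian Process posterior is available, to classifying each location as above, below, or ambiguous with respect to the threshold. The sketch below assumes the GP posterior mean and standard deviation at each candidate location are already computed; the confidence rule is a generic one, not necessarily the exact criterion used by SBOLSE or PULSE.

```python
from math import erf, sqrt

def classify_level_set(mean, std, threshold=0.0, conf=0.95):
    """Classify locations against a threshold under a Gaussian posterior.

    mean, std: GP posterior mean and standard deviation per location
    (assumed precomputed). A location is 'above'/'below' only when the
    posterior probability of exceeding the threshold is conclusive;
    'ambiguous' locations are where more measurements are needed.
    """
    labels = []
    for m, s in zip(mean, std):
        if s > 0:
            # P(latent value > threshold) from the Gaussian CDF
            p = 0.5 * (1 + erf((m - threshold) / (s * sqrt(2))))
        else:
            p = float(m > threshold)
        labels.append('above' if p > conf
                      else 'below' if p < 1 - conf
                      else 'ambiguous')
    return labels
```

An informative path planner would then steer the platform toward the 'ambiguous' locations, trading travel distance against the expected reduction in ambiguity.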
24. Robust structure low-rank representation in latent space
- Author
-
Cong-Zhe You, Vasile Palade, and Xiaojun Wu
- Subjects
0209 industrial biotechnology ,Rank (linear algebra) ,Computer science ,business.industry ,Dimensionality reduction ,Pattern recognition ,Image processing ,02 engineering and technology ,Matrix (mathematics) ,ComputingMethodologies_PATTERNRECOGNITION ,020901 industrial engineering & automation ,Discriminant ,Artificial Intelligence ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Noise (video) ,Artificial intelligence ,Electrical and Electronic Engineering ,Cluster analysis ,Representation (mathematics) ,business - Abstract
Subspace clustering algorithms are usually used when processing high-dimensional data, such as in computer vision. This paper presents a robust low-rank representation (LRR) method that incorporates structure constraints and dimensionality reduction for subspace clustering. The existing LRR and its extensions use noisy data as the dictionary, which degrades the final clustering results. The method proposed in this paper uses a discriminant dictionary for matrix recovery and completion in order to find the lowest-rank representation of the data matrix. As the algorithm performs clustering operations in low-dimensional latent space, its computational efficiency is higher, which is a major advantage of the proposed algorithm. Extensive experiments on standard datasets show the efficiency and effectiveness of the proposed method in subspace clustering problems.
- Published
- 2019
25. A review of state-of-the-art techniques for abnormal human activity recognition
- Author
-
Dinesh Kumar Vishwakarma and Chhavi Dhiman
- Subjects
0209 industrial biotechnology ,business.industry ,Computer science ,Representation (systemics) ,Homeland security ,Context (language use) ,02 engineering and technology ,Variation (game tree) ,Machine learning ,computer.software_genre ,Activity recognition ,020901 industrial engineering & automation ,Artificial Intelligence ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,Key (cryptography) ,Feature (machine learning) ,020201 artificial intelligence & image processing ,Smart environment ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,computer - Abstract
The concept of intelligent visual identification of abnormal human activity has raised the standards of surveillance systems, situation cognizance, homeland safety and smart environments. However, abnormal human activity is highly diverse in itself due to aspects such as (a) the fundamental definition of anomaly, (b) the feature representation of an anomaly, (c) its application, and hence (d) the dataset. This paper aims to summarize various existing abnormal human activity recognition (AbHAR) handcrafted and deep approaches under variation of the type of information available, such as two-dimensional or three-dimensional data. Features play a vital role in the excellent performance of an AbHAR system. This survey reviews feature designs for abnormal human activity recognition in video with respect to the context or application, such as fall detection, Ambient Assisted Living (AAL), homeland security, surveillance or crowd analysis, using RGB, depth and skeletal evidence. The key contributions and limitations of every feature design technique, under each category (2D and 3D AbHAR) and in its respective context, are tabulated, providing insight into various abnormal action detection approaches. Finally, the paper outlines newly released AbHAR datasets, with added complexities, for method validation.
- Published
- 2019
26. An efficient unsupervised image quality metric with application for condition recognition in kiln
- Author
-
Lianhong Wang, Xiaogang Zhang, Yicong Zhou, Hua Chen, Dingxiang Wang, and Leyuan Wu
- Subjects
Computational complexity theory ,Image quality ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Pattern recognition ,Video quality ,Naturalness ,Image texture ,Artificial Intelligence ,Control and Systems Engineering ,Metric (mathematics) ,Feature (machine learning) ,Benchmark (computing) ,Artificial intelligence ,Electrical and Electronic Engineering ,business - Abstract
In this paper, we propose an unsupervised textural-intensity-based natural image quality evaluator (TI-NIQE) by modelling the texture, structure and naturalness of an image. In detail, an effective quality-aware feature named textural intensity (TI) is proposed in this paper to detect image texture. The image structure is captured by the distribution of gradients and basis images. The naturalness is characterized through the distributions of the locally mean subtracted and contrast normalized (MSCN) coefficients and the products of pairs of adjacent MSCN coefficients. Furthermore, a new application pattern of image quality assessment (IQA) measures is proposed by taking the quality scores as the essential input of the recognition model. Using statistics of video quality scores computed by TI-NIQE as input features, an automatic IQA-based visual recognition model is proposed for condition recognition in rotary kilns. Extensive experiments on benchmark datasets demonstrate that TI-NIQE shows better performance in both accuracy and computational complexity than other state-of-the-art unsupervised IQA methods, and experimental results on real-world data show that the recognition model has high prediction accuracy for condition recognition in rotary kilns.
- Published
- 2022
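The MSCN coefficients used in the abstract above to characterize naturalness are computed by subtracting a local mean and dividing by a local contrast estimate at every pixel. The sketch below uses a uniform (box) local window to stay dependency-free; NIQE-style models conventionally use a Gaussian-weighted window, so this is an approximation of that step, not the paper's exact feature pipeline.

```python
import numpy as np

def mscn(image, k=7, C=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients.

    image: 2D grayscale array; k: local window size; C: stabilizing
    constant that avoids division by zero in flat regions.
    """
    img = image.astype(float)
    pad = k // 2
    padded = np.pad(img, pad, mode='reflect')
    # All k-by-k neighbourhoods as a (H, W, k, k) view
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    mu = windows.mean(axis=(-1, -2))     # local mean
    sigma = windows.std(axis=(-1, -2))   # local contrast
    return (img - mu) / (sigma + C)
```

On natural images the resulting coefficients are approximately unit-variance and Gaussian-like, and deviations from that statistical behaviour are what a no-reference evaluator such as TI-NIQE scores.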
27. Fault diagnosis of modular multilevel converter based on adaptive chirp mode decomposition and temporal convolutional network
- Author
-
Qun Guo, Xinhao Zhang, Jing Li, and Gang Li
- Subjects
business.industry ,Computer science ,Noise (signal processing) ,Reliability (computer networking) ,Modular design ,Fault (power engineering) ,Signal ,Artificial Intelligence ,Control and Systems Engineering ,Robustness (computer science) ,Modulation ,Chirp ,Electrical and Electronic Engineering ,business ,Algorithm - Abstract
The reliability of the insulated gate bipolar transistors (IGBTs) is essential to the stable operation of the modular multilevel converter (MMC) system. However, there are a large number of IGBTs in the MMC system, and their open-circuit faults are usually so subtle that they are difficult to detect. Therefore, this article proposes a fault diagnosis framework based on a temporal convolutional network (TCN) integrating adaptive chirp mode decomposition (ACMD) and the silhouette coefficient (SC). First, ACMD is used to extract and reconstruct signal components from the original signal. Then, in order to avoid manual selection of signal components, the silhouette coefficient is introduced to characterize the importance of each component. Finally, the TCN model automatically extracts the features of the signal components and outputs the classification results. The main contributions are as follows: (1) A complete fault diagnosis framework that can adaptively extract features and perform fault classification is proposed in the paper. (2) For an MMC using the carrier-phase-shifted pulsewidth modulation strategy, the fault can be located to the IGBT from the output current. (3) Under certain noise conditions, the fault diagnosis method proposed in the paper still has good robustness. (4) The signal visualization of different residual blocks and channels explains the working mechanism of the ACMD-SC-TCN framework.
- Published
- 2022
28. Multi-camera joint spatial self-organization for intelligent interconnection surveillance
- Author
-
Jiayang Nie, Tao Yang, Zhaoyang Lu, Yuguang Xie, Congcong Li, and Jing Li
- Subjects
Self-organization ,Interconnection ,Computer science ,Real-time computing ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Artificial Intelligence ,Control and Systems Engineering ,Feature (computer vision) ,Robustness (computer science) ,Smart city ,Key (cryptography) ,Redundancy (engineering) ,Noise (video) ,Electrical and Electronic Engineering - Abstract
The construction of smart cities makes information interconnection play an increasingly important role in intelligent surveillance systems. In particular, the interconnection among massive numbers of cameras is the key to realizing the evolution from current fragmented monitoring to interconnected surveillance. However, it remains a challenging problem in practical systems due to large sensor quantity, various camera types, and complex spatial layout. Aimed at this problem, this paper proposes a novel multi-camera joint spatial self-organization approach, which realizes interconnected surveillance by unifying cameras into one imaging space. Differing from the existing back-end data association strategy, our method takes front-end data calibration as the breakthrough for relating surveillance data. Specifically, this paper first initializes camera spatial parameters by sequential complementary feature integration. By integrating complementarity and redundancy among sequence features, our method is robust to dynamic scene changes and noise. Then, we propose a multi-camera joint optimization method based on common monitoring coverage correlation analysis to estimate a more accurate relative relationship. By leveraging the two strategies, the spatial relationship and visual data association across monitoring cameras are finally obtained. The system self-organizes all cameras into a unified imaging space. Extensive experimental evaluations in an actual campus environment demonstrate that our method achieves remarkable performance.
- Published
- 2022
29. IMA health state evaluation using deep feature learning with quantum neural network
- Author
-
Zhiyue Liu, Yige Luo, Zehai Gao, and Cunbao Ma
- Subjects
0209 industrial biotechnology ,Computer science ,business.industry ,Deep learning ,Noise reduction ,Reliability (computer networking) ,02 engineering and technology ,Integrated modular avionics ,computer.software_genre ,Data set ,Quantum neural network ,020901 industrial engineering & automation ,Artificial Intelligence ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,State (computer science) ,Data mining ,Electrical and Electronic Engineering ,business ,Feature learning ,computer - Abstract
Integrated modular avionics (IMA) is one of the most advanced avionics architectures, and its performance directly affects the operating condition of an aircraft. In order to enhance the safety and reliability of aircraft, the health state of the integrated modular avionics should be evaluated accurately. In this paper, a novel deep learning method is developed to evaluate the health state. Firstly, stacked denoising autoencoders, one of the deep learning methods, are used to extract features directly from the raw data so as to retain the original information. Secondly, the extracted features are fed into a quantum neural network to classify the data set. The loss function of the quantum neural network is adapted to improve the classification performance. Experiments conducted on standard datasets show that the proposed method is more effective and robust than four other conventional algorithms. Finally, this paper builds an integrated modular avionics degradation model from the changing probability of soft-fault occurrence over the whole service life, and the proposed method is applied to the health state evaluation.
- Published
- 2018
30. A novel projection nonparallel support vector machine for pattern classification
- Author
-
Liming Liu, Ling Jing, Qiuling Hou, and Ling Zhen
- Subjects
0209 industrial biotechnology ,Optimization problem ,Computer science ,02 engineering and technology ,Support vector machine ,020901 industrial engineering & automation ,Kernel method ,Hyperplane ,Artificial Intelligence ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,Structural risk minimization ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,Algorithm ,Classifier (UML) - Abstract
In this paper, we propose a novel nonparallel classifier termed projection nonparallel support vector machine (PNPSVM), which differs substantially from existing nonparallel classifiers. The new classifier needs two steps to obtain the optimal proximal hyperplanes. The first step is to obtain two projection directions, which achieve maximum class separability by minimizing the within-class distance and maximizing the between-class distance simultaneously, to be treated as the normal vectors of the optimal proximal hyperplanes; the second step is to determine the specific locations of the optimal proximal hyperplanes based on an appropriate central sample. Furthermore, the improved successive overrelaxation (SOR) algorithm is applied to solve our PNPSVM. The main advantages of this paper can be summarized as follows: (1) implementing the structural risk minimization principle in the primal problems; (2) utilizing the potential structural information of data by considering both the tightness between similar patterns and the discrepancy between dissimilar pairs; (3) the kernel trick can be applied directly, since the dual problems have the same elegant formulation as that of the standard support vector machine; (4) the SOR technique is introduced to solve our optimization problems. Comprehensive experimental results on an artificial dataset and twenty-four UCI datasets demonstrate the effectiveness of our method in classification accuracy.
- Published
- 2018
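The successive overrelaxation (SOR) technique mentioned in the abstract above is, for SVM-type duals, a coordinate-wise sweep with projection onto the box constraints. The sketch below shows a generic projected SOR iteration for a box-constrained quadratic program of the form that SVM duals take; the paper's "improved" SOR variant and its specific dual matrix are not reproduced here.

```python
import numpy as np

def sor_qp(M, C=1.0, omega=1.0, iters=200):
    """Projected SOR for: minimize 0.5 * x'Mx - e'x  s.t.  0 <= x <= C.

    M must be symmetric positive definite with nonzero diagonal;
    omega in (0, 2) is the relaxation factor (omega = 1 gives
    projected Gauss-Seidel).
    """
    n = M.shape[0]
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            grad_i = M[i] @ x - 1.0                      # partial derivative
            # Relaxed coordinate step, clipped to the box [0, C]
            x[i] = np.clip(x[i] - omega * grad_i / M[i, i], 0.0, C)
    return x
```

Because each coordinate update uses the freshest values of the others, the sweep converges for positive definite M, which is why SOR-type solvers scale well on SVM duals.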
31. A MapReduce implementation of posterior probability clustering and relevance models for recommendation
- Author
-
Álvaro Barreiro, Daniel Valcarce, and Javier Parapar
- Subjects
Computer science ,business.industry ,Big data ,Probabilistic logic ,020207 software engineering ,02 engineering and technology ,Recommender system ,Machine learning ,computer.software_genre ,Ranking ,Artificial Intelligence ,Control and Systems Engineering ,Scalability ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Relevance (information retrieval) ,Language model ,Artificial intelligence ,Electrical and Electronic Engineering ,Cluster analysis ,business ,computer - Abstract
Relevance-Based Language Models are a formal probabilistic approach for explicitly introducing the concept of relevance into the Statistical Language Modelling framework. Recently, they have been shown to be a very effective way of computing recommendations. When combining this new recommendation approach with Posterior Probabilistic Clustering for computing neighbourhoods, the item ranking is further improved, radically surpassing rating-prediction recommendation techniques. Nevertheless, as ever more recommendation scenarios reach big-data scale, high effectiveness alone is not enough. In this paper, we address one urgent and common need of recommender systems: algorithm scalability. In particular, we adapted these highly effective algorithms to the functional MapReduce paradigm, which has previously been proven an adequate tool for making recommenders scalable. We evaluated the performance of our approach under realistic circumstances, showing good scalability with the number of nodes in the MapReduce cluster. Additionally, as a result of being able to execute our algorithms distributively, we report measurements on a much bigger collection, supporting the results presented in the seminal paper.
- Published
- 2018
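The functional MapReduce paradigm referred to in the abstract above splits a computation into a stateless map phase, a shuffle that groups intermediate pairs by key, and a reduce phase per key. The sketch below simulates that structure in-process on toy user profiles; the per-item averaging reducer is a hypothetical stand-in for the paper's relevance-model estimates, and the profile schema is invented for illustration.

```python
from collections import defaultdict
from itertools import chain

def map_phase(user, profile):
    # Map: emit (item, rating) pairs from one user's profile,
    # independently of all other users.
    return [(item, rating) for item, rating in profile.items()]

def shuffle(pairs):
    # Shuffle: group intermediate values by key, as the MapReduce
    # runtime would between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: collapse each item's rating list into a score
    # (a simple mean here, standing in for a relevance estimate).
    return {item: sum(vals) / len(vals) for item, vals in groups.items()}

def run(profiles):
    mapped = chain.from_iterable(map_phase(u, p) for u, p in profiles.items())
    return reduce_phase(shuffle(mapped))
```

The scalability argument in the abstract rests on exactly this decomposition: map and reduce tasks are independent per user and per item, so they can be distributed across cluster nodes without shared state.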
32. ThermalNet: A deep reinforcement learning-based combustion optimization system for coal-fired boiler
- Author
-
Yin Cheng, Bo Pang, Weidong Zhang, and Yuexin Huang
- Subjects
Computer science ,business.industry ,020209 energy ,Boiler (power generation) ,Thermal power station ,02 engineering and technology ,Coal fired ,Combustion ,Supercritical fluid ,Nonlinear system ,Artificial Intelligence ,Control and Systems Engineering ,Optimization system ,0202 electrical engineering, electronic engineering, information engineering ,Reinforcement learning ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,Distributed control system ,Process engineering ,business - Abstract
This paper presents a combustion optimization system for coal-fired boilers that includes a trade-off between emissions control and boiler efficiency. Designing an optimizer for this nonlinear, multiple-input multiple-output problem is challenging. This paper describes the development of an integrated combustion optimization system called ThermalNet, which is based on a deep Q-network (DQN) and a long short-term memory (LSTM) module. ThermalNet is a highly automated system consisting of an LSTM–ConvNet predictor and a DQN optimizer. The LSTM–ConvNet extracts the features of boiler behavior from the distributed control system (DCS) operational data of a supercritical thermal plant. The DQN reinforcement learning optimizer contributes to the online development of policies based on static and dynamic states. ThermalNet establishes a sequence of control actions that reduce emissions and simultaneously enhance fuel utilization. The internal structure of the DQN optimizer demonstrates a greater representation capacity than does the shallow multilayer optimizer. The presented experiments indicate the effectiveness of the proposed optimization system.
- Published
- 2018
33. Application of new training methods for neural model reference control
- Author
-
Amir Jafari and Martin T. Hagan
- Subjects
Training set ,Computer science ,System identification ,Extrapolation ,020206 networking & telecommunications ,02 engineering and technology ,computer.software_genre ,Data set ,Maxima and minima ,Recurrent neural network ,Artificial Intelligence ,Control and Systems Engineering ,Search algorithm ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Data mining ,Electrical and Electronic Engineering ,Cluster analysis ,computer - Abstract
In this paper, we introduce new, more efficient, methods for training recurrent neural networks (RNNs) for system identification and Model Reference Control (MRC). These methods are based on a new understanding of the error surfaces of RNNs that has been developed in recent years. These error surfaces contain spurious valleys that disrupt the search for global minima. The spurious valleys are caused by instabilities in the networks, which become more pronounced with increased prediction horizons. The new methods described in this paper increase the prediction horizons in a principled way that enables the search algorithms to avoid the spurious valleys. The work also presents a novelty sampling method for collecting new data wisely. A clustering method determines when an RNN is extrapolating, which occurs when the RNN operates outside the region spanned by the training set, where adequate performance cannot be guaranteed. The new method presented in this paper uses a clustering method for extrapolation detection, and then the novel data is added to the original training set. The network performance is improved when additional training is performed with the augmented data set. The new techniques are applied to the model reference control of a magnetic levitation system. The techniques are tested on both simulated and experimental versions of the system.
- Published
- 2018
34. Block-Matching Fuzzy C-Means clustering algorithm for segmentation of color images degraded with Gaussian noise
- Author
-
Francisco J. Gallegos-Funes, Alberto J. Rosales-Silva, Fernando Gamino-Sánchez, Isabel V. Hernández-Gutiérrez, Eduardo Ramos-Diaz, Dante Mújica-Vargas, Jean Marie Vianney Kinani, and Blanca E. Carvajal-Gámez
- Subjects
Computer science ,Noise reduction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,02 engineering and technology ,Color space ,symbols.namesake ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Electrical and Electronic Engineering ,Cluster analysis ,business.industry ,Color image ,020206 networking & telecommunications ,Pattern recognition ,Sparse approximation ,Noise ,Additive white Gaussian noise ,Control and Systems Engineering ,Gaussian noise ,Computer Science::Computer Vision and Pattern Recognition ,symbols ,RGB color model ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Subspace topology - Abstract
In this paper, we present the Block-Matching Fuzzy C-Means (BMFCM) clustering algorithm to segment RGB color images degraded with Additive White Gaussian Noise (AWGN). The contribution of this paper is threefold: noise level estimation, denoising, and segmentation. First, two AWGN estimation algorithms are proposed to compute the noise variance of the observed noisy color image. Second, we propose an image denoising method based on enhanced sparse representation using a Block-Matching approach. Third, the Block-Matching Fuzzy C-Means clustering algorithm is proposed. The motivation behind the proposed clustering algorithm is to improve the characteristics of the standard Fuzzy C-Means algorithm and apply them to segment noisy color images. To this end, the local information of every color component is incorporated in the Fuzzy C-Means using the proposed Block-Matching based filter as an AWGN estimator to determine whether the central pixel in a sliding window is noisy. The presented AWGN estimation algorithms are used in the proposed Block-Matching method to improve its accuracy. The chromatic subspace of the IJK color space is also applied in the proposed clustering approach, providing better segmentation results and reducing the processing time, because the algorithm reduces to a bi-dimensional clustering approach. Finally, visual and numerical experiments demonstrate that the proposed algorithms provide better segmentation results in the presence and absence of AWGN in comparison with other segmentation methods.
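The standard Fuzzy C-Means iteration that BMFCM builds on alternates between a membership update and a center update. A minimal 1-D sketch of plain FCM (without the paper's block-matching or noise-estimation extensions):

```python
def fuzzy_cmeans(xs, c=2, m=2.0, iters=30):
    """Plain FCM on 1-D data: alternate membership and center updates."""
    lo, hi = min(xs), max(xs)
    # spread initial centers across the data range
    centers = [lo + (hi - lo) * (j + 0.5) / c for j in range(c)]
    for _ in range(iters):
        u = []
        for x in xs:
            d = [max(abs(x - cj), 1e-12) for cj in centers]
            # standard FCM membership formula with fuzzifier m
            row = [1.0 / sum((d[j] / d[k]) ** (2 / (m - 1)) for k in range(c))
                   for j in range(c)]
            u.append(row)
        # centers are the membership-weighted means
        centers = [sum(u[i][j] ** m * xs[i] for i in range(len(xs))) /
                   sum(u[i][j] ** m for i in range(len(xs)))
                   for j in range(c)]
    return centers, u

xs = [0.0, 0.1, 0.05, 0.9, 1.0, 0.95]
centers, u = fuzzy_cmeans(xs)
```

BMFCM extends this per-pixel update with local (block-matched) neighborhood information before computing memberships.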
- Published
- 2018
35. Tree Growth Algorithm (TGA): A novel approach for solving optimization problems
- Author
-
Mohammad Mahdi Paydar, Armin Cheraghalipour, and Mostafa Hajiaghaei-Keshteli
- Subjects
0209 industrial biotechnology ,Optimization problem ,Computer science ,02 engineering and technology ,Tree (data structure) ,020901 industrial engineering & automation ,Artificial Intelligence ,Control and Systems Engineering ,Robustness (computer science) ,Metaheuristic algorithms ,0202 electrical engineering, electronic engineering, information engineering ,Benchmark (computing) ,Combinatorial optimization ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,Metaheuristic ,Algorithm - Abstract
Nowadays, most real-world problems are complex and cannot be solved by exact methods, so approximate methods such as metaheuristics must generally be used. A significant number of metaheuristic algorithms have been proposed, differing from one another in their motivation and steps. In this spirit, this paper presents the Tree Growth Algorithm (TGA), a novel approach to optimization tasks inspired by the competition among trees for light and food. The diversification and intensification phases, and the trade-off between them, are detailed in the paper. The proposed algorithm is verified on both mathematical and engineering benchmarks commonly used in this research area. TGA is compared with well-known optimization algorithms, and the comparison with the standard versions of these algorithms shows the superiority of TGA on these problems. Convergence analysis and significance tests via nonparametric techniques confirm the efficiency and robustness of TGA. According to the results of the conducted tests, TGA can be considered a successful metaheuristic suitable for optimization problems, achieving better results especially on continuous problems, owing to its nature-inspired behavior. Highlights: a novel metaheuristic algorithm called TGA, inspired by tree behavior, is developed; a comprehensive literature review of metaheuristics is provided; the Taguchi method is used to tune the parameters of the algorithms; TGA is evaluated using performance evaluation and statistical analysis on thirty benchmark functions and five engineering problems.
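The abstract does not specify TGA's update rules, so as context here is a generic population-based metaheuristic skeleton showing the diversification-to-intensification trade-off the abstract mentions. This is explicitly not TGA; the shrinking-step rule and all parameters are illustrative:

```python
import random

def sphere(x):
    """Classic continuous benchmark: minimum 0 at the origin."""
    return sum(v * v for v in x)

def population_search(f, dim=2, pop=20, iters=200, seed=1):
    """Generic explore-then-exploit loop (not TGA's actual rules)."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    best = min(xs, key=f)
    for t in range(iters):
        step = 1.0 * (1 - t / iters)  # shrink step: diversify -> intensify
        for i, x in enumerate(xs):
            cand = [xi + rng.gauss(0, step) for xi in x]
            if f(cand) < f(x):        # greedy acceptance
                xs[i] = cand
        best = min(xs + [best], key=f)
    return best

best = population_search(sphere)
```

Any concrete metaheuristic (TGA included) replaces the candidate-generation and acceptance rules with its own nature-inspired operators.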
- Published
- 2018
36. Solving task allocation problem in multi Unmanned Aerial Vehicles systems using Swarm intelligence
- Author
-
Edison Pignaton de Freitas, Ana L. C. Bazzan, Iulisloi Zacarias, Janaina Schwarzrock, Ricardo Queiroz de Araujo Fernandes, and Leonardo Henrique Moreira
- Subjects
Strategic planning ,0209 industrial biotechnology ,Heuristic ,Heuristic (computer science) ,Computer science ,business.industry ,02 engineering and technology ,Swarm intelligence ,Complement (complexity) ,Task (project management) ,020901 industrial engineering & automation ,Artificial Intelligence ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,Electrical and Electronic Engineering ,business - Abstract
The envisaged usage of multiple Unmanned Aerial Vehicles (UAVs) to perform cooperative tasks is a promising concept for future autonomous military systems. An important step toward making this usage a reality is solving the task allocation problem in these cooperative systems. This paper addresses the allocation of tasks among agents representing UAVs, where the tasks are created by a central entity but the decision of which task will be performed by each agent is made by the agents themselves rather than by that central entity. The assumption that tasks are created by a central entity is a reasonable one, given the way strategic planning is carried out in military operations. To enable the UAVs to decide which tasks to perform, concepts from swarm intelligence and the multi-agent system approach are used. Heuristic methods are commonly used to solve this problem, but they present drawbacks; for example, many tasks end up not being performed even if the UAVs have enough resources to execute them. To cope with this problem, this paper proposes three algorithm variants that complement each other to form a new method aiming to increase the number of performed tasks, so that a better task allocation is achieved. Through experiments in a simulated environment, the proposed method was evaluated, yielding improved results for the addressed problem compared with existing methods reported in the literature.
- Published
- 2018
37. Generalizable surrogate model features to approximate stress in 3D trusses
- Author
-
Javier Irizarry, Mehdi Nourbakhsh, and John Haymaker
- Subjects
Artificial neural network ,Computer science ,Truss ,020101 civil engineering ,02 engineering and technology ,Finite element method ,0201 civil engineering ,Set (abstract data type) ,020303 mechanical engineering & transports ,Surrogate model ,0203 mechanical engineering ,Artificial Intelligence ,Control and Systems Engineering ,Feature (machine learning) ,Boundary value problem ,Electrical and Electronic Engineering ,Algorithm ,Parametric statistics - Abstract
Existing neural network (NN) models that predict finite element analysis (FEA) results for 3D trusses are not generalizable. For example, a model designed for a ten-bar truss cannot accurately predict the analysis results of a 12-bar truss. Such changes require new sample data and model retraining, reducing the time-saving value of the approach. This paper introduces Generalizable Surrogate Models (GSMs) that use a set of feature descriptors of physical structures to aggregate analysis data from various structures, enabling a more general model that predicts performance for a variety of geometric classes, topologies and boundary conditions. The paper presents the training of generalizable models on parametric dome, wall, and slab structures, and demonstrates the accuracy and generalizability of these GSMs compared to traditional NNs. The results demonstrate, first, how to combine and use analysis data from various structures to predict the performance of the members of structures of the same class with different topology and boundary conditions. They further demonstrate that these GSMs predict FEA results more closely than NN models created exclusively for a specific structure. The methodology of this study can be adopted by researchers and engineers to create predictive models for the approximation of FEA.
- Published
- 2018
38. Inter-domain routing for communication networks using Hierarchical Hopfield Neural Networks
- Author
-
Hitalo Oliveira da Silva and Carmelo J. A. Bastos-Filho
- Subjects
Hierarchy (mathematics) ,Artificial neural network ,Computer science ,Inter-domain ,Distributed computing ,020207 software engineering ,02 engineering and technology ,Telecommunications network ,Reduction (complexity) ,Artificial Intelligence ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,Routing (electronic design automation) - Abstract
This paper presents the Hierarchical Hopfield Neural Networks (HHNN), a novel Hopfield Neural Network (HNN) approach. HHNN is composed of a hierarchy of self-sufficient HNNs, aiming to reduce the neural network structure and mitigate convergence problems. The HHNN composition depends on the problem at hand; in this paper, the problem approached is inter-domain routing for communication networks. Thus, the hierarchy of HNNs mimics the structure of communication networks (domains, nodes, and links). The proof of concept and the comparison between HHNN and the state-of-the-art HNN are carried out using implementations of both in the Java programming language. In addition, the performance analysis of the HHNN runs on a parallel hardware platform developed in VHDL. The results demonstrate reductions of 93.75% and 99.98% in the number of neurons and connections needed to build the neural network, respectively. Furthermore, the mean time to achieve convergence of HHNN is roughly 1.52% of the total time needed by the current state-of-the-art HNN approach. It is also less susceptible to early convergence problems when used in communication networks with a large number of nodes. Last, but not least, the VHDL implementation shows that the convergence time of HHNN is comparable to that of routing algorithms used in practical applications.
- Published
- 2018
39. Design of optimal high pass and band stop FIR filters using adaptive Cuckoo search algorithm
- Author
-
Pradeep Kumar Das, Rutuparna Panda, Ajith Abraham, and Shubhendu Kumar Sarangi
- Subjects
0209 industrial biotechnology ,Finite impulse response ,Computer science ,Attenuation ,Value (computer science) ,02 engineering and technology ,020901 industrial engineering & automation ,Artificial Intelligence ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,Benchmark (computing) ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,Cuckoo search ,High-pass filter ,Algorithm - Abstract
This paper presents an efficient design of digital FIR high pass and band stop filters using an adaptive cuckoo search algorithm (ACSA). The important features of the ACSA are that (i) the step size is independent and (ii) it is decided accurately from the current fitness value within each iteration, which increases the convergence speed. Five other global optimizers are also used for the optimization, and the optimal solutions obtained by the ACSA are compared with them on the CEC 2005 benchmark test functions. The results are compared in terms of convergence speed, accuracy, deviation from the desired response, minimum stop-band attenuation, and maximum pass-band attenuation. A statistical analysis (t-test) is performed to support the claimed superiority of the proposed approach. The simulation results presented in this paper reveal that the performance of the ACSA is better than that of the other algorithms.
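The abstract says only that the step size is derived from the current fitness value. One plausible illustrative rule (not the paper's actual formula) scales each nest's step by how far its fitness is from the current best, so near-best solutions take small refining steps and poor ones explore more widely:

```python
def adaptive_step(f_i, f_best, f_worst, base=0.5):
    """Illustrative fitness-driven step size for a cuckoo-search-style
    update: 0 for the best nest, `base` for the worst. The exact rule
    used by the ACSA paper may differ."""
    spread = max(f_worst - f_best, 1e-12)  # guard against division by zero
    return base * (f_i - f_best) / spread

# the best nest barely moves, the worst nest takes the largest step
steps = [adaptive_step(f, f_best=0.0, f_worst=10.0) for f in (0.0, 5.0, 10.0)]
```

Making the step depend on current fitness rather than a fixed schedule is what lets such a variant converge faster than standard cuckoo search.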
- Published
- 2018
40. Single-step prediction method of burning zone temperature based on real-time wavelet filtering and KELM
- Author
-
Shizeng Lu, Xiaohong Wang, Hongliang Yu, Huijun Dong, and Yongjian Sun
- Subjects
Basis (linear algebra) ,Mean squared error ,Computer science ,020208 electrical & electronic engineering ,Training (meteorology) ,02 engineering and technology ,Support vector machine ,Operator (computer programming) ,Electronic stability control ,Artificial Intelligence ,Control and Systems Engineering ,Wavelet filtering ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,Algorithm - Abstract
The single-step prediction of the burning zone temperature plays an important role in the safety and stability control of a cement rotary kiln, because abnormal temperature events can then be found as early as possible and the operator can take effective emergency measures in time. In this paper, a burning zone temperature single-step prediction method based on real-time wavelet filtering and a kernel extreme learning machine is studied. Firstly, a visual inspection device is used to detect the burning zone temperature. Then, an amplitude-limited filtering method is used to weaken the effect of temperature anomalies. On this basis, real-time filtering of the burning zone temperature is realized by combining a sliding time window with wavelet filtering. After that, single-step prediction of the burning zone temperature is realized by combining the sliding time window with the kernel extreme learning machine. Finally, the prediction method is validated. The minimum root mean squared error over 5 consecutive days is 0.4259 °C. The average running time of model training and prediction with the kernel extreme learning machine is much less than that of support vector regression, which is very helpful for online prediction of the burning zone temperature. The results show that the single-step prediction method proposed in this paper is feasible and effective.
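The pipeline described here (amplitude-limited filtering, then a sliding-window one-step-ahead predictor) can be sketched compactly. As a stand-in for the paper's kernel extreme learning machine, the sketch uses a simple Nadaraya-Watson kernel regressor over lagged windows; the wavelet stage is omitted, and all parameters are illustrative:

```python
import math

def amplitude_limit(xs, max_jump=5.0):
    """Clamp sample-to-sample jumps larger than max_jump to suppress
    spurious temperature spikes."""
    out = [xs[0]]
    for x in xs[1:]:
        prev = out[-1]
        out.append(prev + max(-max_jump, min(max_jump, x - prev)))
    return out

def predict_next(history, window=5, gamma=10.0):
    """One-step-ahead prediction: compare the latest lag window against
    past windows with a Gaussian kernel (KELM stand-in, not KELM)."""
    X = [history[i:i + window] for i in range(len(history) - window)]
    y = history[window:]
    q = history[-window:]
    def k(a, b):
        return math.exp(-gamma * sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    w = [k(x, q) for x in X]
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

filtered = amplitude_limit([0.0, 100.0, 1.0], max_jump=5.0)
pred = predict_next([i * 0.1 for i in range(20)])
```

A real KELM would instead solve a regularized kernel system for the output weights, but the windowing and filtering stages are the same shape.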
- Published
- 2018
41. Link prediction in weighted social networks using learning automata
- Author
-
Behnaz Moradabadi and Mohammad Reza Meybodi
- Subjects
Computer Science::Machine Learning ,Training set ,Social network ,Learning automata ,business.industry ,Computer science ,02 engineering and technology ,Task (project management) ,Set (abstract data type) ,Action (philosophy) ,Artificial Intelligence ,Control and Systems Engineering ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,Electrical and Electronic Engineering ,Link (knot theory) ,business ,Social network analysis - Abstract
Link prediction is an important task in Social Network Analysis. The present paper addresses predicting the emergence of future relationships among nodes in a social network. Our study focuses on a learning automata strategy for link prediction in weighted social networks. In this paper, we try to estimate the weight of each test link directly from the weight information in the network. To do so, we take advantage of learning automata, intelligent tools that try to learn the optimal action based on reinforcement signals. In the proposed method, there is one learning automaton for each test link to be predicted, and each automaton tries to learn the true weight of its corresponding link based on the weights of the links in the current network. All learning automata iteratively select an action as the weight of their corresponding link. The set of selected actions is then used to calculate the weights of the training links, and each automaton is rewarded or punished according to its influence on estimating the true weights of the training set. A final prediction is then performed based on the estimated weights. Our preliminary link prediction experiments with co-authorship and email networks have provided satisfactory results when weights are considered.
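The reward/punish loop at the heart of a learning automaton is typically one of the classic linear schemes. A minimal sketch of the widely used linear reward-inaction (L_RI) update, shown here generically rather than as this paper's exact scheme:

```python
def lri_update(probs, chosen, reward, a=0.1):
    """Linear reward-inaction: on reward, shift probability mass toward
    the chosen action; on penalty, leave the distribution unchanged."""
    if not reward:
        return probs[:]
    return [p + a * (1 - p) if i == chosen else p * (1 - a)
            for i, p in enumerate(probs)]

# an automaton choosing among 3 candidate link weights; the environment
# (here, agreement with training-link weights) keeps rewarding action 1
probs = [1 / 3, 1 / 3, 1 / 3]
for _ in range(100):
    probs = lri_update(probs, chosen=1, reward=True)
```

In the paper's setting each automaton's action is a candidate link weight, and the reinforcement signal comes from how well the joint choices reproduce the known training-link weights.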
- Published
- 2018
42. The challenge of advanced model-based FDIR for real-world flight-critical applications
- Author
-
Ali Zolghadri
- Subjects
020301 aerospace & aeronautics ,0209 industrial biotechnology ,business.industry ,Computer science ,Scale (chemistry) ,Survivability ,02 engineering and technology ,Fault (power engineering) ,Fault detection and isolation ,Identification (information) ,020901 industrial engineering & automation ,Key factors ,0203 mechanical engineering ,Artificial Intelligence ,Control and Systems Engineering ,Systems engineering ,Electrical and Electronic Engineering ,Aerospace ,business ,Simulation - Abstract
This paper aims at providing a brief perspective on advanced model-based Fault Detection, Identification and Recovery (FDIR) for aerospace and flight-critical systems. A number of practical key factors for designing credible technological options are emphasized. Such considerations are decisive for the survivability of the design during ground/flight Validation & Verification (V&V) activities. The views reported in this paper are based on lessons learnt and results achieved through actions undertaken with Airbus during the last decade. As an illustrative example, a model-based fault monitoring technique is presented which has reached level 5 on the Technology Readiness Level (TRL) scale under V&V investigations at Airbus.
- Published
- 2018
43. Hyperspectral classification based on spectral–spatial convolutional neural networks
- Author
-
Feng Jiang, Chifu Yang, Congcong Chen, Zhiguo Liu, Weizheng Shen, Shaohui Liu, and Seungmin Rho
- Subjects
Computer science ,business.industry ,0211 other engineering and technologies ,Hyperspectral imaging ,Pattern recognition ,02 engineering and technology ,Convolutional neural network ,Support vector machine ,Artificial Intelligence ,Control and Systems Engineering ,Computer Science::Computer Vision and Pattern Recognition ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Classifier (UML) ,021101 geological & geomatics engineering - Abstract
Hyperspectral image classification is an important task in remote sensing image analysis. Traditional machine learning techniques have difficulty dealing with hyperspectral images directly, because hyperspectral images have many redundant spectral channels. In this paper we propose a novel method for hyperspectral image classification in which spectral and spatial features are jointly exploited. Firstly, considering the local similarity in the spatial domain, we employ a large spatial window to extract image blocks from the hyperspectral image. Secondly, each spectral channel of the image block is filtered to extract spatial and spectral features, after which the features are merged by convolutional layers. Finally, fully-connected layers are used to obtain the classification result. Compared with other state-of-the-art techniques, the proposed method pays more attention to the correlation of the spatial neighborhood by using a large spatial window in the network. In addition, we combine the proposed network with a traditional support vector machine (SVM) classifier to improve the performance of hyperspectral image classification. Moreover, an adaptive method for selecting the spatial window size is proposed. Experimental results conducted on the AVIRIS and ROSIS datasets demonstrate that the proposed method outperforms state-of-the-art techniques.
- Published
- 2018
44. Evolving model identification for process monitoring and prediction of non-linear systems
- Author
-
Goran Andonovski, Sašo Blažič, Gašper Mušič, and Igor Škrjanc
- Subjects
Structure (mathematical logic) ,business.industry ,Process (engineering) ,Computer science ,020208 electrical & electronic engineering ,System identification ,Cloud computing ,02 engineering and technology ,Fuzzy logic ,Automation ,Industrial engineering ,Identification (information) ,Nonlinear system ,Artificial Intelligence ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,Benchmark (computing) ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,business - Abstract
This paper tackles the problem of model identification for monitoring non-linear processes using evolving fuzzy models. To ensure high production quality and meet economic requirements, industrial processes are becoming increasingly complicated in both their structure and their degree of automation. Evolving systems, because of their data-driven and adaptive nature, therefore appear to be a useful tool for modeling such complex and non-linear processes. In this paper, the identification of evolving cloud-based fuzzy models is treated for process monitoring purposes, and the evolving part of the algorithm is improved with the inclusion of new cloud-management mechanisms. To evaluate the proposed method, two different processes, both complex and non-linear, were used: the first is the simulated Tennessee Eastman benchmark process model, while the second is a real water-chiller plant.
- Published
- 2018
45. The fusion of multispectral palmprints using the information set based features and classifier
- Author
-
Madasu Hanmandlu and Jyotsana Grover
- Subjects
Fusion ,Information set ,Computer science ,business.industry ,Multispectral image ,Feature extraction ,Pattern recognition ,02 engineering and technology ,01 natural sciences ,Artificial Intelligence ,Control and Systems Engineering ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Entropy (information theory) ,020201 artificial intelligence & image processing ,Artificial intelligence ,Electrical and Electronic Engineering ,010306 general physics ,business - Abstract
This paper presents three texture features, viz., topothesy-fractal dimension, the Hanman transform, and a structure function based transform, for multispectral palmprint based authentication. It introduces the notion of an information set originating from the Hanman–Anirban entropy, from which the Hanman transform features are derived. The topothesy-fractal dimension features arise from the structure function representing the intensity variation on the texture surface, and the structure function based transform features are derived from both the structure function and the Hanman transform. Apart from the feature extraction, a fuzzy classifier based on information processing is also developed, and a novel score level fusion is proposed using triangular norms (t-norms) and triangular conorms (t-conorms). Thus the paper's contribution is threefold: (i) new features for multispectral palmprints, (ii) a novel classifier for authentication, and (iii) score level fusion for improving the accuracy. Rigorous experimental results certify that the proposed approaches make a substantial improvement in authentication accuracy and outperform contemporary approaches.
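Score-level fusion with t-norms and t-conorms can be illustrated with the simplest such pair, min and max, blended by a weight. This is a generic sketch; the paper's particular operators and weighting are not given in the abstract:

```python
def fuse_scores(s1, s2, lam=0.5):
    """Blend a t-norm (min, pessimistic) and a t-conorm (max, optimistic)
    of two matcher scores in [0, 1]. `lam` is an illustrative parameter."""
    t = min(s1, s2)   # t-norm: both matchers must agree for a high score
    s = max(s1, s2)   # t-conorm: one confident matcher suffices
    return lam * t + (1 - lam) * s

fused = fuse_scores(0.2, 0.8)
```

Other t-norm/t-conorm pairs (product/probabilistic sum, Łukasiewicz, etc.) drop into the same structure.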
- Published
- 2018
46. The object-oriented dynamic task assignment for unmanned surface vessels
- Author
-
Xiaotong Cheng, Du Bin, Xuesong Zou, Yu Lu, and Weidong Zhang
- Subjects
Object-oriented programming ,Computer science ,media_common.quotation_subject ,Real-time computing ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,Kinematics ,Bidding ,Auction algorithm ,Task (project management) ,Artificial Intelligence ,Control and Systems Engineering ,Proportional navigation ,Electrical and Electronic Engineering ,Interception ,Function (engineering) ,media_common - Abstract
This paper investigates the task assignment and guidance issues of unmanned surface vessel (USV) interception. When the USV formation is invaded by moving objects during its escort, the unmanned system must assign defenders to prevent attackers from approaching the vulnerable target in antagonistic scenarios. This requires efficient guidance and task assignment strategies. With this in mind, this paper presents Integral Proportional Navigation Guidance (IPNG) with a Tabu Dynamic Consensus-Based Auction Algorithm (TDCBAA) in a marine interception scenario. First, IPNG is introduced in the interception game considering the USV kinematic model, which effectively reduces the individual interception time. Second, a new bidding function is designed for moving object interception that takes into account the attackers' types, positions and interception times. Finally, the TDCBAA is designed to solve the task assignment subproblem, resulting in a shorter overall interception time and a higher interception success rate. Simulations demonstrate that the proposed algorithm can optimize the allocation of defenders in real time and intercept the attackers more quickly than other classical algorithms, making it more suitable for situations where attackers approach from all directions.
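The auction idea behind such assignment schemes can be shown in its simplest form: award defender-attacker pairs greedily by descending bid. This is a one-shot simplification of the consensus-based auction family, not TDCBAA itself, with a hypothetical distance-based bid:

```python
def greedy_assign(defenders, attackers, bid):
    """One-shot greedy auction: repeatedly award the highest remaining
    bid, each defender and attacker being matched at most once."""
    pairs = sorted(((bid(d, a), d, a) for d in defenders for a in attackers),
                   reverse=True)
    taken_d, taken_a, plan = set(), set(), {}
    for b, d, a in pairs:
        if d not in taken_d and a not in taken_a:
            plan[d] = a
            taken_d.add(d)
            taken_a.add(a)
    return plan

# defenders and attackers as 1-D positions; the bid favors short intercepts
plan = greedy_assign([0, 10], [1, 9], lambda d, a: -abs(d - a))
```

A real consensus-based auction distributes this bidding across the agents and iterates to resolve conflicting awards, which is what lets the USVs decide without the central entity.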
- Published
- 2021
47. Prediction of crime rate in urban neighborhoods based on machine learning
- Author
-
Jingyi He and Hao Zheng
- Subjects
Artificial neural network ,Computer science ,Sample (statistics) ,Plan (drawing) ,Floor plan ,Planner ,Transport engineering ,Hotspot (Wi-Fi) ,Artificial Intelligence ,Control and Systems Engineering ,Test set ,Electrical and Electronic Engineering ,Set (psychology) ,computer ,computer.programming_language - Abstract
As the impact of crime on the lives of residents has increased, a number of methods have been proposed for predicting where crime will occur, but they tend to explore only the association between a single factor and the distribution of crime. In order to visualize and predict crime distribution in different neighborhoods more accurately and quickly, and to provide a basis for security planning and design, this paper uses GAN neural networks to build a prediction model relating city floor plans to corresponding crime distribution maps. We take Philadelphia as the research sample, use more than 2 million crime records from Philadelphia between 2006 and 2018 to draw the crime hotspot distribution map, collect the corresponding map of Philadelphia, and train the model for predicting the crime rate of the city with more than two thousand sets of one-to-one corresponding images as the training set. Once training is complete, a floor plan can be fed directly to the model, which immediately feeds back a hotspot map reflecting the crime distribution. Using the untrained Philadelphia data as the test set, the model accurately predicts crime concentration areas, and the predicted concentration areas are similar to those identified in previous studies. With the feedback from the model, the city layout can be adjusted by the planner and the crime rate greatly reduced when the adjustments are tuned into the city plan. In addition, the ideas in this paper can be applied as a set of methodologies to predict other relevant urban characteristic parameters and visualize them.
- Published
- 2021
48. Secured communication using efficient artificial neural synchronization
- Author
-
Abdulfattah Noorwali, Mohammad Zubair Khan, and Arindam Sarkar
- Subjects
Binary tree ,Artificial neural network ,Computer science ,business.industry ,Computer Science::Neural and Evolutionary Computation ,Hash function ,Set (abstract data type) ,Artificial Intelligence ,Control and Systems Engineering ,Synchronization (computer science) ,Key (cryptography) ,Session key ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Key exchange - Abstract
In this paper, an efficient artificial neural group synchronization is proposed for secured neural key exchange over public channels. To share the key over a public network, two Artificial Neural Networks (ANNs) are coordinated by mutual learning. The primary issue of neural coordination is assessing the synchronization of the two parties' ANNs in the absence of the other party's weights. Existing techniques suffer a delay in coordination measurement, which affects the confidentiality of neural coordination. Furthermore, research into the mutual learning of a cluster of ANNs is limited. This paper introduces a mutual learning methodology for measuring the full synchronization of a set of ANNs quickly and efficiently. The measure of coordination is determined by the frequency with which the two networks have had the same outcome in prior rounds. When a particular threshold is reached, a hash is used to decide whether all networks are properly coordinated. The modified methodology uses the hash value of the weight vectors to achieve full coordination between the two communicating entities. This technique has several advantages, including: (1) generation of a session key via complete binary tree-based group mutual neural synchronization of ANNs over the public channel; (2) unlike existing methods, the suggested method allows two communicating entities to recognize full coordination faster; (3) brute force, geometric, impersonation, and majority attacks are all considered in the proposed scheme. Tests to validate the performance of the proposed methodology are carried out, and the results show that it outperforms similar approaches already in use.
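The classic primitive behind neural key exchange is the tree parity machine (TPM): two networks exchange only their outputs on shared random inputs and apply a Hebbian update when the outputs agree, until their weights coincide and form the shared key. A minimal two-party sketch with deliberately tiny, insecure parameters; the paper's group scheme and hash-based coordination check are omitted:

```python
import random

K, N, L = 3, 4, 2          # hidden units, inputs per unit, weight bound

def tpm_output(w, x):
    """TPM output tau: product of the hidden units' signs."""
    sig = [1 if sum(wi * xi for wi, xi in zip(wk, xk)) > 0 else -1
           for wk, xk in zip(w, x)]
    t = 1
    for s in sig:
        t *= s
    return t, sig

def hebbian(w, x, sig, t):
    """Update only hidden units that agree with the output, clipping
    weights to [-L, L]."""
    for k in range(K):
        if sig[k] == t:
            w[k] = [max(-L, min(L, wi + xi * t))
                    for wi, xi in zip(w[k], x[k])]

rng = random.Random(7)
wa = [[rng.randint(-L, L) for _ in range(N)] for _ in range(K)]
wb = [[rng.randint(-L, L) for _ in range(N)] for _ in range(K)]
for _ in range(20000):
    if wa == wb:           # synchronized: weights form the shared key
        break
    x = [[rng.choice([-1, 1]) for _ in range(N)] for _ in range(K)]
    ta, sa = tpm_output(wa, x)
    tb, sb = tpm_output(wb, x)
    if ta == tb:           # only the outputs cross the public channel
        hebbian(wa, x, sa, ta)
        hebbian(wb, x, sb, tb)
```

The paper's contribution sits on top of this kind of loop: detecting the moment of full coordination quickly (via output-agreement frequency plus a weight-vector hash) instead of waiting out a fixed delay.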
- Published
- 2021
49. Towards dense people detection with deep learning and depth images
- Author
-
Carlos A. Luna, Cristina Losada-Gutierrez, Daniel Pizarro, David Casillas-Perez, Javier Macias-Guarasa, Roberto Martin-Lopez, and David Fuentes-Jimenez
- Subjects
Fine-tuning ,Artificial neural network ,Computer science ,business.industry ,Deep learning ,Synthetic data ,Image (mathematics) ,Range (mathematics) ,Artificial Intelligence ,Control and Systems Engineering ,Position (vector) ,Identity (object-oriented programming) ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business - Abstract
This paper describes a novel DNN-based system, named PD3net, that detects multiple people from a single depth image in real time. The proposed neural network processes a depth image and outputs a likelihood map in image coordinates, where each detection corresponds to a Gaussian-shaped local distribution centered at each person's head. This likelihood map encodes both the number of detected people and their positions in the image, from which the 3D positions can be computed. The proposed DNN includes spatially separated convolutions to increase performance, and runs in real time on low-budget GPUs. We use synthetic data to initially train the network, followed by fine-tuning with a small amount of real data. This allows adapting the network to different scenarios without needing large, manually labeled image datasets. As a result, the people detection system presented in this paper has numerous potential applications in different fields, such as capacity control, automatic video-surveillance, people or group behavior analysis, healthcare, or monitoring and assistance of elderly people in ambient assisted living environments. In addition, the use of depth information does not allow recognizing the identity of people in the scene, thus enabling their detection while preserving their privacy. The proposed DNN has been experimentally evaluated and compared with other state-of-the-art approaches, including both classical and DNN-based solutions, under a wide range of experimental conditions. The achieved results allow concluding that the proposed architecture and training strategy are effective, and that the network generalizes to scenes different from those used during training. We also demonstrate that our proposal outperforms existing methods and can accurately detect people in scenes with significant occlusions.
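The likelihood-map output described here is easy to make concrete: render one Gaussian bump per head position (the network's regression target), then recover detections as thresholded local maxima. A pure-Python sketch with illustrative map size, sigma and threshold; the actual network and post-processing may differ:

```python
import math

def likelihood_map(h, w, heads, sigma=2.0):
    """Render a Gaussian-shaped bump at each head position (row, col),
    merging overlapping bumps with max."""
    m = [[0.0] * w for _ in range(h)]
    for (r0, c0) in heads:
        for r in range(h):
            for c in range(w):
                g = math.exp(-((r - r0) ** 2 + (c - c0) ** 2)
                             / (2 * sigma ** 2))
                m[r][c] = max(m[r][c], g)
    return m

def detect(m, thresh=0.5):
    """Recover detections as local maxima above a threshold."""
    h, w = len(m), len(m[0])
    out = []
    for r in range(h):
        for c in range(w):
            v = m[r][c]
            if v >= thresh and all(
                    v >= m[rr][cc]
                    for rr in range(max(0, r - 1), min(h, r + 2))
                    for cc in range(max(0, c - 1), min(w, c + 2))):
                out.append((r, c))
    return out

m = likelihood_map(24, 30, [(5, 5), (15, 20)])
found = detect(m)
```

Counting the recovered peaks gives the number of people, and each peak's image coordinates plus the depth value at that pixel yield the 3D head position.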
- Published
- 2021
50. Octonion continuous orthogonal moments and their applications in color stereoscopic image reconstruction and zero-watermarking
- Author
-
Wu Xiaoming, Hongling Gao, Chunpeng Wang, Zhiqiu Xia, Bin Ma, Qixian Hao, and Jian Li
- Subjects
Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Zero (complex analysis) ,Stereoscopy ,Image processing ,Iterative reconstruction ,Stability (probability) ,Octonion ,law.invention ,Artificial Intelligence ,Control and Systems Engineering ,law ,Robustness (computer science) ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Digital watermarking ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Continuous orthogonal moments (COMs) are effective image features widely used in many fields of image processing. However, most existing COMs are designed for flat images and are not suitable for color stereoscopic images. For this reason, this paper first proposes an octonion representation for color stereoscopic images: all color components are encoded in the imaginary parts of an octonion and processed as a whole, so the internal relations among the components are preserved. This paper then combines the octonion theory with COMs to propose the octonion continuous orthogonal moments (OCOMs). The OCOMs fully reflect and retain the specific correlations between the left- and right-view components of color stereoscopic images, and provide good image description capability. Experimental results show that OCOMs have strong stability and good reconstruction performance when processing color stereoscopic images. Compared with other zero-watermarking methods, the zero-watermarking method based on OCOMs is more robust.
- Published
- 2021