20,040 results
Search Results
2. Chaotic time series prediction for the game, Rock-Paper-Scissors
- Author
-
Paolo Patelli, Franco Salvetti, and Simone Nicolo
- Subjects
Computer Science::Computer Science and Game Theory ,Exploit ,business.industry ,ComputingMilieux_PERSONALCOMPUTING ,Chaotic ,Lyapunov exponent ,Chaos theory ,symbols.namesake ,symbols ,Entropy (information theory) ,Reinforcement learning ,Artificial intelligence ,Time series ,business ,Algorithm ,Game theory ,Software ,Mathematics - Abstract
Two players of Rock-Paper-Scissors are modeled as adaptive agents that use a reinforcement learning algorithm and exhibit chaotic behavior in terms of trajectories of probability in mixed-strategy space. This paper demonstrates that an external super-agent can exploit the behavior of the other players to predict favorable moments to play, against one of the other players, the symbol suggested by a sub-optimal strategy. This third agent does not affect the learning process of the other two players, whose only goal is to beat each other. The choice of the best moment to play is based on a threshold associated with the Local Lyapunov Exponent or the Entropy, each computed from the time series of symbols played by one of the other players. A method for automatically adapting such a threshold is presented and evaluated. The results show that these techniques can be used effectively by a super-agent in a game involving adaptive agents that exhibit collective chaotic behavior.
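As a toy illustration of the thresholding idea described above (a sketch of my own, not the authors' code), the super-agent below watches one opponent's symbol stream, computes a sliding-window Shannon entropy, and commits to a counter-move only when the entropy drops below a threshold; all parameter values are illustrative assumptions.

```python
import math
from collections import Counter

def window_entropy(symbols):
    """Shannon entropy (bits) of a window of played symbols."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

BEATS = {"R": "P", "P": "S", "S": "R"}   # the symbol that beats each move

def super_agent_move(history, window=20, threshold=1.2):
    """Play only when the opponent's recent stream looks predictable."""
    if len(history) < window:
        return None                      # not enough data; abstain
    recent = history[-window:]
    if window_entropy(recent) < threshold:
        predicted = Counter(recent).most_common(1)[0][0]
        return BEATS[predicted]          # counter the most likely symbol
    return None                          # abstain while behavior is chaotic

print(super_agent_move(list("RRPRRRPRRRRSRRRRRPRR")))  # -> 'P'
```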
- Published
- 2007
3. Determination of color changes of inks on the uncoated paper with the offset printing during drying using artificial neural networks
- Author
-
Erdoğan Köse
- Subjects
Artificial neural network ,Inkwell ,Color vision ,Process (computing) ,Color management ,law.invention ,Wavelength ,Chart ,Artificial Intelligence ,law ,visual_art ,visual_art.visual_art_medium ,Offset printing ,Algorithm ,Software ,Mathematics - Abstract
This study attempts to determine time-dependent color changes in inks applied to the surface of wood-free uncoated paper with offset printing during drying. The study consists of two main parts: (1) Experimental analysis: a test page prepared according to the ISO 12647-2 standard was printed on an offset press, and test prints were applied to 120 g/m2 wood-free uncoated paper using an ECI 2002 CMYK test chart. Each print was measured every 15 min for the first 2 h, then hourly between 2 and 12 h, every 4 h between 12 and 24 h, and every 6 h between 24 and 48 h. CIELAB and reflectance values between 380 and 720 nm were obtained for the 1,485 target colors of the test chart. To observe the drying and color changes of the ink on paper, the changes were determined by printing on the paper and applying an artificial neural network (ANN) to the spectrophotometer data at the stated time intervals. (2) Empirical analysis: the ANN is proposed as a numerical approach to obtain empirical equations for the color changes of inks applied to the surface of wood-free uncoated paper with offset printing during drying. Based on the outputs of the study, the ANN model together with the acquired equations can be used, without further experimental work, to estimate with high confidence the effects of digital proofing systems used in color management on print quality. Since colors are defined in terms of wavelength, changes at selected wavelengths, rather than at every wavelength, were taken into consideration.
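For intuition only, here is a minimal sketch (not the study's model or data) of fitting a small feed-forward ANN that maps drying time to CIELAB values with scikit-learn; the measurements below are fabricated placeholders and the network size is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

t = np.array([[15], [30], [60], [120], [240]])      # minutes after printing
lab = np.array([[52.1, 48.3, -3.2],                 # hypothetical CIELAB rows
                [52.4, 47.9, -3.0],
                [52.8, 47.2, -2.7],
                [53.1, 46.8, -2.5],
                [53.3, 46.5, -2.4]])

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(t, lab)                                   # multi-output regression
print(model.predict([[90]]))                        # estimated L*a*b* at 90 min
```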
- Published
- 2014
4. Binary vs. non-binary constraints (This paper includes results that first appeared in [1,4,23]. This research has been supported in part by the Canadian Government through their NSERC and IRIS programs, and by the EPSRC Advanced Research Fellowship program.)
- Author
-
Peter van Beek, Fahiem Bacchus, Toby Walsh, and Xinguang Chen
- Subjects
Linguistics and Language ,Mathematical optimization ,Dual encoding ,Backtracking ,Constraint satisfaction ,Language and Linguistics ,Artificial Intelligence ,Hidden variable encoding ,Constraint satisfaction dual problem ,Constraint logic programming ,Local consistency ,Decomposition method (constraint satisfaction) ,Non-binary constraints ,Look-ahead ,Algorithm ,Constraint satisfaction problem ,Mathematics - Abstract
There are two well known transformations from non-binary constraints to binary constraints applicable to constraint satisfaction problems (CSPs) with finite domains: the dual transformation and the hidden (variable) transformation. We perform a detailed formal comparison of these two transformations. Our comparison focuses on two backtracking algorithms that maintain a local consistency property at each node in their search tree: the forward checking and maintaining arc consistency algorithms. We first compare local consistency techniques such as arc consistency in terms of their inferential power when they are applied to the original (non-binary) formulation and to each of its binary transformations. For example, we prove that enforcing arc consistency on the original formulation is equivalent to enforcing it on the hidden transformation. We then extend these results to the two backtracking algorithms. We are able to give either a theoretical bound on how much one formulation is better than another, or examples that show such a bound does not exist. For example, we prove that the performance of the forward checking algorithm applied to the hidden transformation of a problem is within a polynomial bound of the performance of the same algorithm applied to the dual transformation of the problem. Our results can be used to help decide if applying one of these transformations to all (or part) of a constraint satisfaction model would be beneficial.
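To make the hidden (variable) transformation concrete, here is a minimal sketch under my own encoding assumptions: each non-binary constraint becomes a hidden variable whose domain is the constraint's set of allowed tuples, linked to each original variable by a binary compatibility constraint.

```python
def hidden_transform(variables, constraints):
    """variables: name -> domain; constraints: list of (scope, allowed_tuples)."""
    new_vars = dict(variables)
    binary = []                                   # (hidden_var, orig_var, position)
    for k, (scope, tuples) in enumerate(constraints):
        h = f"h{k}"
        new_vars[h] = list(tuples)                # hidden domain = allowed tuples
        for pos, v in enumerate(scope):
            # compatibility: hidden value t agrees with v iff t[pos] == value of v
            binary.append((h, v, pos))
    return new_vars, binary

vars_ = {"x": [0, 1], "y": [0, 1], "z": [0, 1]}
cons = [(("x", "y", "z"), [(0, 1, 1), (1, 0, 1)])]  # one ternary constraint
print(hidden_transform(vars_, cons))
```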
- Published
- 2002
5. Looking for a Simple Big Five Factorial Structure in the Domain of Adjectives (The original data upon which this paper is based are available at www.hhpub.com/journals/ejpa)
- Author
-
Marco Perugini, Stefano Livi, and Marcello Gallucci
- Subjects
Structure (mathematical logic) ,Factorial ,business.industry ,Sample (statistics) ,computer.software_genre ,Domain (software engineering) ,Set (abstract data type) ,Simple (abstract algebra) ,Benchmark (surveying) ,Generalizability theory ,Artificial intelligence ,business ,Algorithm ,computer ,Applied Psychology ,Natural language processing ,Mathematics - Abstract
Summary: The Big Five factor structure is currently the benchmark for personality dimensions. In the domain of adjectives, various instruments have been developed to measure the Big Five. In this contribution we propose a methodology for finding a simple factorial structure and apply it to the domain of the Big Five as measured by adjectives. Using data collected on a sample of 337 subjects, we propose a five-factor benchmark structure derived from the 50 best marker adjectives selected from the adjectives contained in three instruments specifically developed to measure the Big Five (i.e., Goldberg's 100-adjective list, IASR-B5, and SACBIF). We use this common factor structure (or benchmark structure) to investigate the differences and similarities between the three operationalizations of the Big Five, and to investigate the placements of the full set of adjectives contained in the three instruments. The main features of the proposed methodology and the generalizability of the obtained results are discussed.
- Published
- 2000
6. A note on the paper 'Minimizing total tardiness on parallel machines with preemptions' by Kravchenko and Werner (2012)
- Author
-
Chams Lahlou, Odile Bellenguez-Morineau, and Damien Prot
- Subjects
Discrete mathematics ,021103 operations research ,Supply chain management ,Tardiness ,0211 other engineering and technologies ,General Engineering ,0102 computer and information sciences ,02 engineering and technology ,[INFO.INFO-RO]Computer Science [cs]/Operations Research [cs.RO] ,Management Science and Operations Research ,Mathematical proof ,01 natural sciences ,Scheduling (computing) ,010201 computation theory & mathematics ,Artificial Intelligence ,Algorithm ,Software ,Mathematics - Abstract
In this note, we point out two major errors in the paper "Minimizing total tardiness on parallel machines with preemptions" by Kravchenko and Werner (2012). More precisely, they claimed to have proved that both problems P|pmtn|∑Tj and P|rj, pj=p, pmtn|∑Tj are NP-hard. We give a counter-example to their proofs, leaving the complexity of these two problems open.
- Published
- 2013
7. Smartphone-Based Artificial Intelligence–Assisted Prediction for Eyelid Measurements: Algorithm Development and Observational Validation Study
- Author
-
Yen-Chang Hsiao, Hung-Chang Chen, Erh-Chien Hung, Oscar K. Lee, Ruei-Feng Chen, and Shin-Shi Tzeng
- Subjects
Adult ,Validation study ,limit ,Intraclass correlation ,Health Informatics ,Image processing ,margin reflex distance 2 ,smartphone ,margin reflex distance 1 ,symbols.namesake ,medicine ,Blepharoptosis ,Humans ,image ,observational ,Mathematics ,Original Paper ,Data collection ,algorithm ,business.industry ,Levator muscle ,deep learning ,Eyelids ,prediction ,Middle Aged ,artificial intelligence ,eye ,Pearson product-moment correlation coefficient ,medicine.anatomical_structure ,AI ,symbols ,Observational study ,levator muscle function ,processing ,Artificial intelligence ,Eyelid ,measurement ,business ,Algorithm ,Algorithms - Abstract
Background Margin reflex distance 1 (MRD1), margin reflex distance 2 (MRD2), and levator muscle function (LF) are crucial metrics for ptosis evaluation and management. However, manual measurements of MRD1, MRD2, and LF are time-consuming, subjective, and prone to human error. Smartphone-based artificial intelligence (AI) image processing is a potential solution to overcome these limitations. Objective We propose the first smartphone-based AI-assisted image processing algorithm for MRD1, MRD2, and LF measurements. Methods This observational study included 822 eyes of 411 volunteers aged over 18 years from August 1, 2020, to April 30, 2021. Six orbital photographs (bilateral primary gaze, up-gaze, and down-gaze) were taken using a smartphone (iPhone 11 Pro Max). The gold-standard measurements and normalized eye photographs were obtained from these orbital photographs and compiled using AI-assisted software to create MRD1, MRD2, and LF models. Results The Pearson correlation coefficients between the gold-standard measurements and the predicted values obtained with the MRD1 and MRD2 models were excellent (r=0.91 and 0.88, respectively) and that obtained with the LF model was good (r=0.73). The intraclass correlation coefficient demonstrated excellent agreement between the gold-standard measurements and the values predicted by the MRD1 and MRD2 models (0.90 and 0.84, respectively), and substantial agreement with the LF model (0.69). The mean absolute errors were 0.35 mm, 0.37 mm, and 1.06 mm for the MRD1, MRD2, and LF models, respectively. The 95% limits of agreement were –0.94 to 0.94 mm for the MRD1 model, –0.92 to 1.03 mm for the MRD2 model, and –0.63 to 2.53 mm for the LF model. Conclusions We developed the first smartphone-based AI-assisted image processing algorithm for eyelid measurements. MRD1, MRD2, and LF measures can be taken in a quick, objective, and convenient manner. Furthermore, by using a smartphone, the examiner can check these measurements anywhere and at any time, which facilitates data collection.
- Published
- 2021
8. A Quasi-Minimal Model for Paper-Like Surfaces
- Author
-
Adrien Bartoli and Mathieu Perriollat
- Subjects
Surface (mathematics) ,Smoothness ,Developable surface ,business.industry ,Bent molecular geometry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,[INFO.INFO-CV]Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV] ,020207 software engineering ,Geometry ,Reconstruction algorithm ,02 engineering and technology ,Real image ,Minimal model ,Generative model ,[INFO.INFO-CV] Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV] ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Algorithm ,ComputingMilieux_MISCELLANEOUS ,ComputingMethodologies_COMPUTERGRAPHICS ,Mathematics - Abstract
Smoothly bent paper-like surfaces are developable. They are however difficult to minimally parameterize since the number of meaningful parameters is intrinsically dependent on the actual deformation. Previous generative models are either incomplete, i.e. limited to subsets of developable surfaces, or depend on huge parameter sets. We propose a generative model governed by a quasi-minimal set of intuitive parameters, namely rules and angles. More precisely, a flat mesh is bent along guiding rules, while a number of extra rules controls the level of smoothness. The generated surface is guaranteed to be developable. A fully automatic multi-camera three dimensional reconstruction algorithm, including model-based bundle-adjustment, demonstrates our model on real images.
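The rule-based bending can be sketched in a few lines (my reading of the model, not the authors' implementation): vertices of a flat mesh on one side of a guiding rule are rotated about that line in 3D, which preserves developability for a smooth bend.

```python
import numpy as np

def bend(points, p0, direction, angle):
    """Rotate the points on the positive side of the in-plane rule
    (line through p0 with the given direction) about it, via Rodrigues."""
    k = direction / np.linalg.norm(direction)
    normal = np.array([-k[1], k[0], 0.0])          # in-plane normal to the rule
    side = (points - p0) @ normal > 0              # vertices to move
    v = points[side] - p0
    cos, sin = np.cos(angle), np.sin(angle)
    rotated = (v * cos + np.cross(k, v) * sin
               + np.outer(v @ k, k) * (1 - cos))   # Rodrigues' rotation formula
    out = points.copy()
    out[side] = rotated + p0
    return out

flat = np.array([[x, y, 0.0] for x in range(3) for y in range(3)])
bent = bend(flat, p0=np.array([1.0, 0.0, 0.0]),
            direction=np.array([0.0, 1.0, 0.0]), angle=np.pi / 6)
```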
- Published
- 2007
9. Deep Learning Approach for Imputation of Missing Values in Actigraphy Data: Algorithm Development Study
- Author
-
Chang Hyung Hong, Jong-Hwan Jang, Sang Joon Son, Eun Young Kim, Dukyong Yoon, Hyun Woong Roh, Jung-Gu Choi, and Tae Young Kim
- Subjects
Adult ,Male ,020205 medical informatics ,Mean squared error ,Statistical assumption ,imputation ,Health Informatics ,02 engineering and technology ,Information technology ,03 medical and health sciences ,0302 clinical medicine ,0202 electrical engineering, electronic engineering, information engineering ,Humans ,Imputation (statistics) ,Mathematics ,autoencoder ,Original Paper ,business.industry ,Deep learning ,deep learning ,030229 sport sciences ,Middle Aged ,Missing data ,T58.5-58.64 ,Actigraphy ,Autoencoder ,Data set ,accelerometer ,Female ,Artificial intelligence ,Public aspects of medicine ,RA1-1270 ,business ,Bayesian linear regression ,Algorithm ,Algorithms - Abstract
Background Data collected by an actigraphy device worn on the wrist or waist can provide objective measurements for studies related to physical activity; however, some data may contain intervals where values are missing. In previous studies, statistical methods have been applied to impute missing values on the basis of statistical assumptions. Deep learning algorithms, however, can learn features from the data without any such assumptions and may outperform previous approaches in imputation tasks. Objective The aim of this study was to impute missing values in data using a deep learning approach. Methods To develop an imputation model for missing values in accelerometer-based actigraphy data, a denoising convolutional autoencoder was adopted. We trained and tested our deep learning–based imputation model with the National Health and Nutrition Examination Survey data set and validated it with the external Korea National Health and Nutrition Examination Survey and the Korean Chronic Cerebrovascular Disease Oriented Biobank data sets which consist of daily records measuring activity counts. The partial root mean square error and partial mean absolute error of the imputed intervals (partial RMSE and partial MAE, respectively) were calculated using our deep learning–based imputation model (zero-inflated denoising convolutional autoencoder) as well as using other approaches (mean imputation, zero-inflated Poisson regression, and Bayesian regression). Results The zero-inflated denoising convolutional autoencoder exhibited a partial RMSE of 839.3 counts and partial MAE of 431.1 counts, whereas mean imputation achieved a partial RMSE of 1053.2 counts and partial MAE of 545.4 counts, the zero-inflated Poisson regression model achieved a partial RMSE of 1255.6 counts and partial MAE of 508.6 counts, and Bayesian regression achieved a partial RMSE of 924.5 counts and partial MAE of 605.8 counts. Conclusions Our deep learning–based imputation model performed better than the other methods when imputing missing values in actigraphy data.
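A compressed sketch of the core architecture described above, with layer sizes and the zero-masking scheme as my assumptions (PyTorch, not the authors' released code):

```python
import torch
import torch.nn as nn

class DenoisingConvAE(nn.Module):
    """1-D convolutional autoencoder for activity-count sequences."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, 5, stride=2, padding=2, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(16, 1, 5, stride=2, padding=2, output_padding=1),
            nn.ReLU(),                          # counts are non-negative
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingConvAE()
x = torch.rand(8, 1, 1440)                      # one day of minute-level counts
mask = (torch.rand_like(x) > 0.1).float()       # 1 = observed, 0 = missing
corrupted = x * mask                            # missing intervals zeroed out
recon = model(corrupted)
loss = nn.functional.mse_loss(recon * (1 - mask), x * (1 - mask))
loss.backward()                                 # train to reconstruct missing spans
```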
- Published
- 2020
10. Interpretable Conditional Recurrent Neural Network for Weight Change Prediction: Algorithm Development and Validation Study
- Author
-
Yu Rang Park, Youngin Kim, and Ho Heon Kim
- Subjects
Male ,obesity ,Validation study ,020205 medical informatics ,Psychological intervention ,Health Informatics ,Information technology ,02 engineering and technology ,explainable AI ,03 medical and health sciences ,0302 clinical medicine ,Weight loss ,Weight Loss ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Humans ,030212 general & internal medicine ,development ,mHealth ,intervention ,Mathematics ,validation ,Original Paper ,behavior modification ,Weight change ,weight ,interpretable AI ,artificial intelligence ,T58.5-58.64 ,United States ,Weighting ,Weight Reduction Programs ,Cross-Sectional Studies ,Recurrent neural network ,Mean absolute percentage error ,Neural Networks, Computer ,Public aspects of medicine ,RA1-1270 ,medicine.symptom ,Algorithm - Abstract
Background In recent years, mobile-based interventions have received more attention as an alternative to on-site obesity management. Despite increased mobile interventions for obesity, there are lost opportunities to achieve better outcomes due to the lack of a predictive model using existing longitudinal and cross-sectional health data. Noom (Noom Inc) is a mobile app that provides various lifestyle-related logs including food logging, exercise logging, and weight logging. Objective The aim of this study was to develop a weight change predictive model using an interpretable artificial intelligence algorithm for mobile-based interventions and to explore contributing factors to weight loss. Methods Lifelog mobile app (Noom) user data of individuals who used the weight loss program for 16 weeks in the United States were used to develop an interpretable recurrent neural network algorithm for weight prediction that considers both time-variant and time-fixed variables. From a total of 93,696 users in the coaching program, we excluded users who did not take part in the 16-week weight loss program or who were not overweight or obese or had not entered weight or meal records for the entire 16-week program. This interpretable model was trained and validated with 5-fold cross-validation (training set: 70%; testing: 30%) using the lifelog data. Mean absolute percentage error between actual weight loss and predicted weight was used to measure model performance. To better understand the behavioral factors contributing to weight loss or gain, we calculated contribution coefficients in test sets. Results A total of 17,867 users’ data were included in the analysis. The overall mean absolute percentage error of the model was 3.50%, and the error of the model declined from 3.78% to 3.45% by the end of the program. The time-level attention weighting was shown to be equally distributed at 0.0625 each week, but this gradually decreased (from 0.0626 to 0.0624) as it approached 16 weeks. Factors such as usage pattern, weight input frequency, meal input adherence, exercise, and sharp decreases in weight trajectories had negative contribution coefficients of –0.021, –0.032, –0.015, and –0.066, respectively. For time-fixed variables, being male had a contribution coefficient of –0.091. Conclusions An interpretable algorithm, with both time-variant and time-fixed data, was used to precisely predict weight loss while preserving model transparency. This week-to-week prediction model is expected to improve weight loss and provide a global explanation of contributing factors, leading to better outcomes.
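A generic sketch of this kind of interpretable sequence model (my own toy design, not Noom's model or data): a GRU over weekly lifelog features with softmax attention across time steps, so per-week attention weights can be inspected, plus a linear head over time-fixed covariates whose coefficients are readable.

```python
import torch
import torch.nn as nn

class AttentiveGRU(nn.Module):
    def __init__(self, n_feat, n_fixed, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_feat, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)          # scores each week
        self.fixed = nn.Linear(n_fixed, 1)        # time-fixed covariates
        self.head = nn.Linear(hidden, 1)

    def forward(self, x_seq, x_fixed):
        h, _ = self.gru(x_seq)                     # (batch, weeks, hidden)
        a = torch.softmax(self.attn(h).squeeze(-1), dim=1)   # per-week weights
        context = (a.unsqueeze(-1) * h).sum(dim=1)
        return self.head(context) + self.fixed(x_fixed), a

model = AttentiveGRU(n_feat=5, n_fixed=2)
pred, attn = model(torch.rand(4, 16, 5), torch.rand(4, 2))  # 16 weeks of logs
```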
- Published
- 2021
11. Querying temporal and spatial constraint networks in PTIME (A preliminary version of this paper appeared in the proceedings of AAAI-99. A different version of this paper, aimed at a database audience, appeared in the proceedings of the workshop STDBM99.)
- Author
-
Spiros Skiadopoulos and Manolis Koubarakis
- Subjects
Linguistics and Language ,Complexity of constraint satisfaction ,Computational complexity theory ,Temporal reasoning ,Constraint networks ,Language and Linguistics ,Constraint (information theory) ,Computational complexity ,TheoryofComputation_MATHEMATICALLOGICANDFORMALLANGUAGES ,Closure (mathematics) ,Artificial Intelligence ,Tractable reasoning ,Constraint databases ,Variable elimination ,Computational problem ,Boolean satisfiability problem ,Algorithm ,Computer Science::Databases ,Mathematics ,Spatial reasoning - Abstract
We start with the assumption that temporal and spatial knowledge usually captured by constraint networks can be represented and queried more effectively by using the scheme of indefinite constraint databases. Because query evaluation in this scheme is in general a hard computational problem, we seek tractable instances of query evaluation. We assume that we have a class of constraints C with some reasonable computational and closure properties (the computational properties of interest are that the satisfiability problem and an appropriate version of the variable elimination problem for C should be solvable in PTIME). Under this assumption, we exhibit general classes of indefinite constraint databases and first-order modal queries for which query evaluation can be done with PTIME data complexity. We then search for tractable instances of C among the subclasses of Horn disjunctive linear constraints over the rationals. From previous research we know that the satisfiability problem for Horn disjunctive linear constraints is solvable in PTIME, but not the variable elimination problem. Thus we try to discover subclasses of Horn disjunctive linear constraints with tractable variable elimination problems. The class of UTVPI≠ constraints is the largest class that we show to have this property. Finally, we restate our general tractability results with C ranging over the newly discovered tractable classes. Interesting tractable query answering problems for indefinite temporal and spatial constraint databases are identified in this way. We close our complexity analysis by precisely outlining the frontier between tractable and possibly intractable query answering problems.
- Full Text
- View/download PDF
12. Semantics and computation of the generalized modus ponens: The long paper
- Author
-
Roger Martin-Clouaire
- Subjects
Deduction theorem ,Dependency (UML) ,implication functions ,Semantics (computer science) ,Antecedent (logic) ,deduction rules ,t-norms ,Applied Mathematics ,approximate reasoning systems ,Fuzzy logic ,Theoretical Computer Science ,imprecision ,Modus ponendo tollens ,Constructive dilemma ,Artificial Intelligence ,Calculus ,fuzzy logic ,generalized modus ponens ,Modus ponens ,uncertainty ,Algorithm ,Software ,trapezoidal possibility distributions ,Mathematics - Abstract
The generalized modus ponens is a fuzzy-logic pattern of reasoning that permits inferences to be made with rules having imprecise information in both their antecedent and consequent parts. Several alternatives are available to represent the meaning one wishes to assign to a given rule. This paper first explores four of the most often encountered possibilities, in the case where a single rule is considered at a time. Second, the behavior of two of them (which seem sufficient for practical use in deduction systems) is investigated in the situation where the dependency between antecedent and consequent variables is described by a collection of rules rather than a single rule. Conjectures are made about what is semantically important in the result yielded by the exact computation of the generalized modus ponens. Under these hypotheses it is shown that one can obtain a meaningful approximation of what is produced by the generalized modus ponens technique and also avoid the well-known inefficiency problem associated with its computation.
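A worked toy computation of the generalized modus ponens on a discrete universe may help; this sketch (mine, not the paper's) uses sup-min composition and leaves the implication function as a parameter, since the paper compares several choices.

```python
def kleene_dienes(a, b):
    """One of several implication functions that can encode a rule."""
    return max(1 - a, b)

def gmp(a_observed, a_rule, b_rule, impl):
    """B'(y) = sup_x min( A'(x), I(A(x), B(y)) )  (sup-min composition)."""
    return [max(min(ap, impl(ax, by)) for ap, ax in zip(a_observed, a_rule))
            for by in b_rule]

A  = [0.0, 0.5, 1.0, 0.5, 0.0]   # rule antecedent: "x is medium"
B  = [0.0, 0.6, 1.0, 0.6, 0.0]   # rule consequent: "y is medium"
A1 = [0.0, 0.3, 1.0, 0.7, 0.0]   # observation: "x is more or less medium"
print(gmp(A1, A, B, kleene_dienes))
```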
- Full Text
- View/download PDF
13. From binary temporal relations to non-binary ones and back (Parts of this paper have been published in [36] and in [39].)
- Author
-
Steffan Staab
- Subjects
Linguistics and Language ,Theoretical computer science ,Binary relation ,Temporal reasoning ,Binary number ,Interval (mathematics) ,Language and Linguistics ,Consistency (database systems) ,Artificial Intelligence ,Local consistency ,Granularity ,Non-binary constraints ,Representation (mathematics) ,Abstraction ,Algorithm ,Constraint satisfaction problem ,Mathematics - Abstract
In this paper a new approach towards temporal reasoning is presented that scales up from the temporal relations commonly used in Allen's qualitative interval calculus and in quantitative temporal constraint satisfaction problems to include interval relations with distances, temporal rules and other non-binary relations in the reasoning scheme. For this purpose, we generalize well-known methods for constraint propagation, determination of consistency and computation of the minimal network from simpler schemes that only allow for binary relations. Thereby, we find that levels of granularity play a major role in applying these techniques in our more expressive framework. Indeed, the technical preliminaries we provide are especially apt for investigating the switching between different granularities of representation, hence elucidating and exploiting the tradeoff between expressiveness and efficiency of temporal reasoning schemes on the one side and between expressiveness and understandability on the other.
- Full Text
- View/download PDF
14. State aggregation for fast likelihood computations in molecular evolution
- Author
-
Nicolas Salamin, Iakov I. Davydov, and Marc Robinson-Rechavi
- Subjects
0106 biological sciences ,0301 basic medicine ,Statistics and Probability ,Computer science ,Computation ,Markov process ,Markov model ,Machine learning ,computer.software_genre ,010603 evolutionary biology ,01 natural sciences ,Biochemistry ,Sensitivity and Specificity ,Evolution, Molecular ,03 medical and health sciences ,symbols.namesake ,Encoding (memory) ,State space ,Molecular Biology ,Selection (genetic algorithm) ,030304 developmental biology ,Mathematics ,Probability ,0303 health sciences ,Markov chain ,Models, Genetic ,Heuristic ,business.industry ,Computational Biology ,State (functional analysis) ,Original Papers ,Markov Chains ,Computer Science Applications ,Phylogenetics ,Computational Mathematics ,030104 developmental biology ,Computational Theory and Mathematics ,symbols ,State (computer science) ,Artificial intelligence ,business ,Algorithm ,computer ,Algorithms ,Software - Abstract
Motivation Codon models are widely used to identify the signature of selection at the molecular level and to test for changes in selective pressure during the evolution of genes encoding proteins. The large size of the state space of the Markov processes used to model codon evolution makes it difficult to use these models with large biological datasets. We propose here to use state aggregation to reduce the state space of codon models and, thus, improve the computational performance of likelihood estimation on these models. Results We show that this heuristic speeds up the computations of the M0 and branch-site models up to 6.8 times. We also show through simulations that state aggregation does not introduce a detectable bias. We analyzed a real dataset and show that aggregation provides highly correlated predictions compared to the full likelihood computations. Finally, state aggregation is a very general approach and can be applied to any continuous-time Markov process-based model with large state space, such as amino acid and coevolution models. We therefore discuss different ways to apply state aggregation to Markov models used in phylogenetics. Availability and Implementation The heuristic is implemented in the godon package (https://bitbucket.org/Davydov/godon) and in a version of FastCodeML (https://gitlab.isb-sib.ch/phylo/fastcodeml). Supplementary information Supplementary data are available at Bioinformatics online.
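The aggregation idea can be illustrated on a plain discrete-time Markov matrix (a simplification of my own; the paper works with continuous-time codon models, and this is not the godon or FastCodeML code): states in a group are lumped by averaging rows with given weights and summing columns.

```python
import numpy as np

def aggregate(P, groups, weights):
    """P: n x n transition matrix; groups: partition of states as index lists;
    weights: per-state weights (e.g., approximate stationary frequencies)."""
    k = len(groups)
    A = np.zeros((k, k))
    for i, gi in enumerate(groups):
        wi = weights[gi] / weights[gi].sum()     # within-group weights
        for j, gj in enumerate(groups):
            # weighted prob. of leaving group i and landing anywhere in group j
            A[i, j] = wi @ P[np.ix_(gi, gj)].sum(axis=1)
    return A

P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.2, 0.6]])
pi = np.array([0.4, 0.3, 0.3])
print(aggregate(P, [[0], [1, 2]], pi))           # 2 x 2 aggregated matrix
```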
- Published
- 2016
15. Using evolutionary Expectation Maximization to estimate indel rates
- Author
-
Ian Holmes
- Subjects
Statistics and Probability ,DNA Mutational Analysis ,Molecular Sequence Data ,Evolutionary algorithm ,Machine learning ,computer.software_genre ,Biochemistry ,Evolution, Molecular ,Artificial Intelligence ,Stochastic grammar ,Expectation–maximization algorithm ,Computer Simulation ,Hidden Markov model ,Indel ,Molecular Biology ,Phylogeny ,Sequence Deletion ,Mathematics ,Likelihood Functions ,Sequence ,Models, Statistical ,Base Sequence ,Models, Genetic ,Markov chain ,business.industry ,Sequence Analysis, DNA ,Original Papers ,Markov Chains ,Computer Science Applications ,Phylogenetics ,Computational Mathematics ,Computational Theory and Mathematics ,DNA Transposable Elements ,Forward algorithm ,Artificial intelligence ,business ,Sequence Alignment ,Algorithm ,computer ,Algorithms - Abstract
Motivation The Expectation Maximization (EM) algorithm, in the form of the Baum–Welch algorithm (for hidden Markov models) or the Inside-Outside algorithm (for stochastic context-free grammars), is a powerful way to estimate the parameters of stochastic grammars for biological sequence analysis. To use this algorithm for multiple-sequence evolutionary modelling, it would be useful to apply the EM algorithm to estimate not only the probability parameters of the stochastic grammar, but also the instantaneous mutation rates of the underlying evolutionary model (to facilitate the development of stochastic grammars based on phylogenetic trees, also known as Statistical Alignment). Recently, we showed how to do this for the point substitution component of the evolutionary process; here, we extend these results to the indel process. Results We present an algorithm for maximum-likelihood estimation of insertion and deletion rates from multiple sequence alignments, using EM, under the single-residue indel model due to Thorne, Kishino and Felsenstein (the ‘TKF91’ model). The algorithm converges extremely rapidly, gives accurate results on simulated data that are an improvement over parsimonious estimates (which are shown to underestimate the true indel rate), and gives plausible results on experimental data (coronavirus envelope domains). Owing to the algorithm's close similarity to the Baum–Welch algorithm for training hidden Markov models, it can be used in an ‘unsupervised’ fashion to estimate rates for unaligned sequences, or to estimate several sets of rates for sequences with heterogeneous rates. Availability Software implementing the algorithm and the benchmark is available under GPL from http://www.biowiki.org/ Contact ihh@berkeley.edu
- Published
- 2005
16. Erratum to: Correction to the paper 'On the theory of statistical decision functions'
- Author
-
Kameo Matusita
- Subjects
Statistics and Probability ,business.industry ,Evidential reasoning approach ,Decision rule ,Artificial intelligence ,Statistical theory ,business ,Algorithm ,Mathematics - Published
- 1952
17. Predicting protein contact map using evolutionary and physical constraints by integer programming
- Author
-
Jinbo Xu and Zhiyong Wang
- Subjects
Statistics and Probability ,Mathematical optimization ,Protein contact map ,Protein Structure and Function ,Biochemistry ,Protein Structure, Secondary ,Evolution, Molecular ,03 medical and health sciences ,Matrix (mathematics) ,Artificial Intelligence ,Molecular Biology ,Integer programming ,030304 developmental biology ,Mathematics ,0303 health sciences ,Sequence ,Ismb/Eccb 2013 Proceedings Papers Committee July 21 to July 23, 2013, Berlin, Germany ,Sequence Homology, Amino Acid ,030302 biochemistry & molecular biology ,Large numbers ,Proteins ,Mutual information ,Programming, Linear ,Original Papers ,Computer Science Applications ,Computational Mathematics ,Computational Theory and Mathematics ,Key (cryptography) ,Pairwise comparison ,Algorithm - Abstract
Motivation: Protein contact map describes the pairwise spatial and functional relationship of residues in a protein and contains key information for protein 3D structure prediction. Although studied extensively, it remains challenging to predict contact maps using only sequence information. Most existing methods predict the contact map matrix element by element, ignoring correlation among contacts and the physical feasibility of the whole contact map. A couple of recent methods predict contact maps using mutual information, taking into consideration contact correlation and enforcing a sparsity restraint, but these methods demand a very large number of sequence homologs for the protein under consideration, and the resultant contact map may still be physically infeasible. Results: This article presents a novel method, PhyCMAP, for contact map prediction, integrating both evolutionary and physical restraints by machine learning and integer linear programming. The evolutionary restraints are much more informative than mutual information, and the physical restraints specify more concrete relationships among contacts than the sparsity restraint. As such, our method greatly reduces the solution space of the contact map matrix and thus significantly improves prediction accuracy. Experimental results confirm that PhyCMAP outperforms currently popular methods no matter how many sequence homologs are available for the protein under consideration. Availability: http://raptorx.uchicago.edu. Contact: jinboxu@gmail.com
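As a toy stand-in for the paper's formulation (assuming the PuLP package; the degree cap and minimum sequence separation below merely represent the kind of physical restraints meant, not PhyCMAP's actual constraint set), one can select a physically consistent subset of high-scoring contacts by integer linear programming:

```python
import itertools
import random
import pulp

random.seed(0)
n = 8
# candidate residue pairs with a minimum sequence separation of 3
pairs = [(i, j) for i, j in itertools.combinations(range(n), 2) if j - i >= 3]
score = {p: random.random() for p in pairs}           # predicted contact scores

x = pulp.LpVariable.dicts("c", pairs, cat="Binary")
model = pulp.LpProblem("contact_map", pulp.LpMaximize)
model += pulp.lpSum(score[p] * x[p] for p in pairs)   # favor high-scoring pairs
for r in range(n):                                    # physical restraint: degree cap
    model += pulp.lpSum(x[p] for p in pairs if r in p) <= 2
model.solve(pulp.PULP_CBC_CMD(msg=False))
print([p for p in pairs if x[p].value() == 1])
```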
- Published
- 2013
18. A 3D shape matching framework
- Author
-
Ee-Chien Chang, Mohan S. Kankanhalli, Rong Xu, and Zhiyong Huang
- Subjects
Similarity (geometry) ,business.industry ,Short paper ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Process (computing) ,Volume (computing) ,Value (computer science) ,Principal component analysis ,Shape matching ,Computer vision ,Artificial intelligence ,Representation (mathematics) ,business ,Algorithm ,Mathematics - Abstract
Research on 3D shape matching has recently begun to attract more attention. In this short paper, we present a 3D shape matching framework. First, we give a general definition of the similarity value of two objects. Then, an algorithm is proposed consisting of Principal Component Analysis (PCA), voxelization, and iterative coarse-to-fine alignment of the two objects using a multi-resolution volume representation. This process aligns the two objects to have the maximum overlap. Then, we can compute the similarity value for shape matching. The experimental results show the effectiveness of our framework.
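The pipeline in the abstract can be condensed into a rough sketch (grid resolution and the overlap-based similarity below are my assumptions for illustration): PCA-align each point cloud, voxelize it, and score the overlap of occupied voxels.

```python
import numpy as np

def pca_align(points):
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt.T                       # axes ordered by variance

def voxelize(points, res=16):
    p = pca_align(points)
    p = (p - p.min(axis=0)) / (np.ptp(p, axis=0) + 1e-9)  # scale to [0, 1]
    idx = np.minimum((p * res).astype(int), res - 1)
    grid = np.zeros((res, res, res), dtype=bool)
    grid[tuple(idx.T)] = True
    return grid

def similarity(a, b, res=16):
    ga, gb = voxelize(a, res), voxelize(b, res)
    return (ga & gb).sum() / (ga | gb).sum()     # overlap of occupied voxels

rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3))
print(similarity(cloud, cloud * 1.5))            # scaled copy scores highly
```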
- Published
- 2003
19. Design of variable elliptical filters with direct tunability in 2D domain
- Author
-
K. R. Sreelekha and T. S. Bindiya
- Subjects
Mean squared error ,Applied Mathematics ,2D Filters ,Bandwidth (signal processing) ,Ripple ,Filter (signal processing) ,Computer Science Applications ,Filter design ,Artificial Intelligence ,Hardware and Architecture ,Approximation error ,Signal Processing ,Elliptic filter ,Algorithm ,Software ,Information Systems ,Mathematics - Abstract
This paper proposes the design of transformation-based two-dimensional (2D) elliptical filters using a multi-objective artificial bee colony (MOABC) algorithm. The MOABC algorithm finds the one-dimensional (1D) cut-off frequencies and McClellan transformation coefficients by optimizing the contour approximation errors in the pass-band and stop-band regions of elliptical filters. This design approach is compared with state-of-the-art approaches in terms of contour approximation error, pass-band ripple and stop-band attenuation to demonstrate its efficiency. It is seen that an efficient mapping of a 1D filter to a 2D filter is possible with minimum contour approximation error using the proposed approach. The second proposal in this paper is the design of low-complexity variable-bandwidth and variable-orientation elliptical filters with direct 2D tunability. This is achieved by mapping the coefficients of different 2D filters into a 2D Farrow structure. The performance of the proposed variable elliptical filter is measured in terms of mean error, mean square error, root mean square error, pass-band ripple, stop-band attenuation and 2D cut-off frequencies. The main feature of the proposed 2D variable filter design lies in the considerable reduction in the number of multipliers when compared to existing methods, which in turn reduces resource utilization and power consumption.
- Published
- 2021
20. Interval joint robust regression method
- Author
-
Ullysses da N. Rosendo, Francisco de A. T. de Carvalho, and Eufrásio de Andrade Lima Neto
- Subjects
Artificial Intelligence ,Iterative method ,Cognitive Neuroscience ,Outlier ,Regression analysis ,Interval (mathematics) ,Radius ,Algorithm ,Time complexity ,Regression ,Computer Science Applications ,Robust regression ,Mathematics - Abstract
Interval-valued data are needed to manage either the uncertainty related to measurements or the variability inherent in the description of complex objects representing groups of individuals. A number of regression methods suitable for interval variables describing the variability of complex objects are already available. However, less attention has been given to methods that simultaneously take into account the full interval information and are resistant to interval outlier observations, despite the frequent presence of atypical observations in interval-valued data sets. This paper proposes a new robust linear regression method for interval variables, where the presence of outliers either in the center or in the radius penalizes both the center and the radius regression models. Moreover, interval observations with outliers in both center and radius are penalized more than observations with outliers only in the center (or only in the radius). Besides, this paper provides a suitable iterative algorithm to estimate the parameters of the proposed method. The algorithm estimates the parameters of the center (or of the radius) model taking into account information from both the center and the radius. The convergence and time complexity of the iterative algorithm are also presented. Finally, the performance of the new method is compared with some previous robust regression approaches and evaluated on synthetic and real interval-valued data sets.
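For orientation only, a much-simplified sketch: fit separate robust (Huber) regressions to the centers and radii of interval-valued data. The paper's method is joint, with outliers in one model penalizing both, so this only conveys the center/radius decomposition; all data here are synthetic.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(0)
x_center = rng.uniform(0, 10, 60).reshape(-1, 1)    # centers of the predictor
y_center = 2.0 * x_center.ravel() + 1.0 + rng.normal(0, 0.3, 60)
y_radius = 0.5 + 0.1 * x_center.ravel() + rng.normal(0, 0.05, 60)
y_center[:5] += 15                                  # inject center outliers

center_model = HuberRegressor().fit(x_center, y_center)  # resistant to outliers
radius_model = HuberRegressor().fit(x_center, y_radius)
print(center_model.coef_, radius_model.coef_)
```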
- Published
- 2021
21. Parameterized and exact algorithms for finding a read-once resolution refutation in 2CNF formulas
- Author
-
K. Subramani and Piotr J. Wojciechowski
- Subjects
True quantified Boolean formula ,Applied Mathematics ,Parameterized complexity ,Computer Science::Computational Complexity ,Resolution (logic) ,Satisfiability ,Decidability ,TheoryofComputation_MATHEMATICALLOGICANDFORMALLANGUAGES ,Artificial Intelligence ,Computer Science::Logic in Computer Science ,Conjunctive normal form ,Boolean satisfiability problem ,Algorithm ,Time complexity ,Mathematics - Abstract
In this paper, we discuss algorithms for the problem of finding read-once resolution refutations of unsatisfiable 2CNF formulas within the resolution refutation system. Broadly, a read-once resolution refutation is one in which each constraint (input or derived) is used at most once. Read-once resolution refutations have been widely studied in the literature for a number of constraint system-refutation system pairs. For instance, read-once resolution has been analyzed for boolean formulas in conjunctive normal form (CNF) and read-once cutting planes have been analyzed for polyhedral systems. By definition, read-once refutations are compact, and hence valuable in applications that place a premium on visualization. The satisfiability problem (SAT) is concerned with finding a satisfying assignment for a boolean formula in CNF. While SAT is NP-complete in general, there exist some interesting subclasses of CNF formulas, for which it is decidable in polynomial time. One such subclass is the class of 2CNF formulas, i.e., CNF formulas in which each clause has at most two literals. The existence of efficient algorithms for satisfiability checking in 2CNF formulas (2SAT), makes this class useful from the perspective of modeling selected logic programs. The work in this paper is concerned with the read-once refutability problem (under resolution) in this subclass. Although 2SAT is decidable in polynomial time, the problem of finding a read-once resolution refutation of an unsatisfiable 2CNF formula is NP-complete. We design non-trivial, parameterized and exact exponential algorithms for this problem. Additionally, we study the computational complexity of finding a shortest read-once resolution refutation of a 2CNF formula.
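The resolution rule, and what "read-once" means, can be seen in a tiny brute-force sketch (for intuition only; it is exponential, unlike the parameterized and exact algorithms the paper designs). Literals are signed integers, and each clause, input or derived, is consumed when used.

```python
from itertools import combinations

def resolve(c1, c2):
    """Resolve two clauses on the first complementary literal found."""
    for lit in c1:
        if -lit in c2:
            return tuple(sorted((set(c1) | set(c2)) - {lit, -lit}))
    return None

def read_once_refutation(clauses):
    """Can the empty clause be derived using each clause at most once?"""
    if () in clauses:
        return True
    for a, b in combinations(range(len(clauses)), 2):
        r = resolve(clauses[a], clauses[b])
        if r is not None:
            rest = [c for i, c in enumerate(clauses) if i not in (a, b)]
            if read_once_refutation(rest + [r]):
                return True
    return False

# 2CNF chain: (x1), (-x1 v x2), (-x2) has a read-once refutation.
print(read_once_refutation([(1,), (-1, 2), (-2,)]))   # True
```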
- Published
- 2021
22. A detection algorithm for cherry fruits based on the improved YOLO-v4 model
- Author
-
Rongli Gai, Na Chen, and Hai Yuan
- Subjects
0209 industrial biotechnology ,Backbone network ,Small volume ,Feature extraction ,Network structure ,02 engineering and technology ,Ripeness ,020901 industrial engineering & automation ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Shading ,Orchard ,Algorithm ,Software ,Digital agriculture ,Mathematics - Abstract
"Digital" agriculture is rapidly affecting the value of agricultural output. Robotic picking of the ripe agricultural product enables accurate and rapid picking, making agricultural harvesting intelligent. How to increase product output has also become a challenge for digital agriculture. During the cherry growth process, realizing the rapid and accurate detection of cherry fruits is the key to the development of cherry fruits in digital agriculture. Due to the inaccurate detection of cherry fruits, environmental problems such as shading have become the biggest challenge for cherry fruit detection. This paper proposes an improved YOLO-V4 deep learning algorithm to detect cherry fruits. This model is suitable for cherry fruits with a small volume. It is proposed to increase the network based on the YOLO-V4 backbone network CSPDarknet53 network, combined with DenseNet The density between layers, the a priori box in the YOLO-V4 model, is changed to a circular marker box that fits the shape of the cherry fruit. Based on the improved YOLO-V4 model, the feature extraction is enhanced, the network structure is deepened, and the detection speed is improved. To verify the effectiveness of this method, different deep learning algorithms of YOLO-V3, YOLO-V3-dense and YOLO-V4 are compared. The results show that the mAP (average accuracy) value obtained by using the improved YOLO-V4 model (YOLO-V4-dense) network in this paper is 0.15 higher than that of yolov4. In actual orchard applications, cherries with different ripeness of cherries in the same area can be detected, and the fruits with larger ripeness differences can be artificially intervened, and finally, the yield of cherry fruits can be increased.
- Published
- 2021
23. Real-Time Arrhythmia Classification Algorithm Using Time-Domain ECG Feature Based on FFNN and CNN
- Author
-
Guangda Liu, Mengkun Dong, Jing Cai, Xinlei Hu, Weiguang Ni, and Ge Zhou
- Subjects
Article Subject ,Heartbeat ,Computer science ,General Mathematics ,0206 medical engineering ,02 engineering and technology ,QRS complex ,Classifier (linguistics) ,QA1-939 ,0202 electrical engineering, electronic engineering, information engineering ,Segmentation ,Time domain ,Latency (engineering) ,business.industry ,Deep learning ,General Engineering ,Engineering (General). Civil engineering (General) ,020601 biomedical engineering ,ComputingMethodologies_PATTERNRECOGNITION ,Feature (computer vision) ,020201 artificial intelligence & image processing ,Artificial intelligence ,TA1-2040 ,business ,Algorithm ,Mathematics - Abstract
To solve the problem of real-time arrhythmia classification, this paper proposes a real-time arrhythmia classification algorithm using deep learning with low latency, high practicality, and high reliability, which can easily be applied to a real-time arrhythmia classification system. In the algorithm, a classifier detects the QRS complex position in real time for heartbeat segmentation. Then, the ECG_RRR feature is constructed according to the heartbeat segmentation result. Finally, another classifier classifies the arrhythmia in real time using the ECG_RRR feature. This article uses the MIT-BIH arrhythmia database and divides the 44 qualified records into two groups (DS1 and DS2) for training and evaluation, respectively. The results show that the recall rate, precision rate, and overall accuracy of the algorithm's interpatient QRS complex position prediction are 98.0%, 99.5%, and 97.6%, respectively. The overall accuracy for 5-class and 13-class interpatient arrhythmia classification is 91.5% and 75.6%, respectively. Furthermore, the real-time arrhythmia classification algorithm proposed in this paper has the advantages of practicality and low latency. The algorithm is easy to deploy since the input is the original ECG signal with no feature processing required, and the latency of the classification is only the duration of one heartbeat cycle.
- Published
- 2021
24. Electric short-term load forecast integrated method based on time-segment and improved MDSC-BP
- Author
-
Lu Jing, Chen Shiwen, and Wang Rui
- Subjects
0209 industrial biotechnology ,Control and Optimization ,bp neural network ,Artificial neural network ,Control engineering systems. Automatic machinery (General) ,Maximum deviation ,maximum deviation similarity criterion ,02 engineering and technology ,Term (time) ,load forecast ,Systems engineering ,TA168 ,020901 industrial engineering & automation ,Similarity criterion ,Artificial Intelligence ,Control and Systems Engineering ,TJ212-225 ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Algorithm ,Mathematics ,Time segment - Abstract
In this paper, an integrated forecasting method based on multi-resource data is proposed, which improves the maximum deviation similarity criterion (MDSC) with a time-segment BP neural network. Existing short-term load forecasting methods for power systems suffer low accuracy, or even fail, because multi-stage load changes and weather fluctuation factors are not considered. To deal with this problem, an improved similar-day category screening method is combined with a time-segment BP neural network model: a regional load characteristic law is used to divide the load into seven time periods, from which the time-segment BP neural network model is built. Based on the feature vector and the real-time meteorological data of the forecast day, the trained model can provide the load value of the forecast day and overcome the restrictions of historical load data. Meanwhile, the prediction accuracy and the training time are also improved under fluctuating meteorological conditions. Finally, a load forecast for a certain area shows that the prediction accuracy for different types of days can exceed 96%, illustrating the effectiveness of the proposed method.
- Published
- 2021
25. Research and Application of Combined Algorithm Based on Sustainable Computing and Artificial Intelligence
- Author
-
Bo Hu
- Subjects
Sustainable development ,Article Subject ,business.industry ,Computer science ,General Mathematics ,Big data ,General Engineering ,Information technology ,020206 networking & telecommunications ,02 engineering and technology ,Engineering (General). Civil engineering (General) ,Green computing ,Search algorithm ,QA1-939 ,0202 electrical engineering, electronic engineering, information engineering ,Information system ,020201 artificial intelligence & image processing ,Ecosystem ,The Internet ,Artificial intelligence ,TA1-2040 ,business ,Algorithm ,Mathematics - Abstract
The Internet is a popular form of information technology development in the new century; it organizes and analyzes big data, taking effective measures to find useful information. Manual processing is clearly insufficient for such a huge information system, so sustainable computing and artificial intelligence have become the core of large-scale data processing at this stage. This paper studies the application of a combined algorithm based on sustainable computing and artificial intelligence. A new combined intelligent search algorithm is proposed by combining sustainable computing with artificial intelligence. The combined algorithm first analyzes value from the perspectives of the ecological environment and economic benefits and studies the overall evaluation of sustainable development capability. Second, an energy analysis method is used to establish a reasonable comprehensive ecosystem and evaluate its impact on the sustainable development of the environment and economy. Finally, the impact of resource consumption, wind speed detection, waste discharge, and utilization of renewable resources in a certain area is analyzed by simulation. The experimental results show, on the one hand, that the data obtained by the combined algorithm are more accurate than those of a single algorithm; on the other hand, the combined algorithm can be generalized and widely applied to other data-detection tasks. The combined algorithm proposed in this paper can effectively detect the required data and has high applicability.
- Published
- 2021
26. Relation and Application Method of Deep Learning Sea Target Detection and Segmentation Algorithm
- Author
-
Zheng Wang, Guangfu Li, and Jia Ren
- Subjects
Relation (database) ,Computer science ,business.industry ,General Mathematics ,Deep learning ,0211 other engineering and technologies ,General Engineering ,Wavelet transform ,Cascade algorithm ,Speckle noise ,02 engineering and technology ,Engineering (General). Civil engineering (General) ,Field (computer science) ,Constant false alarm rate ,QA1-939 ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Segmentation ,Artificial intelligence ,TA1-2040 ,business ,Algorithm ,Mathematics ,021101 geological & geomatics engineering - Abstract
Target detection and segmentation algorithms have long been among the main research directions in the field of computer vision, especially in sea-surface image understanding, where the two tasks often need to work in concert, placing high demands on processor performance. This article studies deep learning sea target detection and segmentation algorithms. A wavelet transform-based filtering method is used for speckle noise suppression, a deep learning-based method for land masking, and an improved CFAR cascade algorithm for the target detection part. Finally, the best separable features are selected to eliminate false alarms. To further illustrate the feasibility of the scheme, measured data and simulation data are used for verification, and the effects of different signal-to-noise ratios, sea target types, and attitudes on the algorithm's performance are discussed. The research data show that the deep learning sea target detection and segmentation algorithm has good detection performance and is generally applicable to ship target detection for different types and attitudes. The results show that the algorithm fully takes into account the irregular shape and texture of interfering targets detected in optical remote sensing images, so that the accuracy rate is 32.7% higher and the efficiency is increased by about 1.3 times. The deep learning sea target detection, compared with the segmentation algorithm alone, has strong target characterization ability and can be applied to ship targets of different scales.
- Published
- 2020
27. Two-Module Weight-Based Sum Code in Residue Ring Modulo M=4
- Author
-
Valery Sapozhnikov, Dmitry Efanov, and Vladimir Sapozhnikov
- Subjects
Adder ,Computer Networks and Communications ,Applied Mathematics ,Modulo ,Generalized algorithm ,Data vector ,Data bits ,Weighting ,Artificial Intelligence ,Control and Systems Engineering ,Error detection and correction ,Algorithm ,Encoder ,Mathematics - Abstract
The paper describes research results on features of error detection in data vectors by sum codes. The task is relevant in this setting, first of all, for the use of sum codes in the implementation of checkable discrete systems and the technical means for the diagnosis of their components. Methods for constructing sum codes are described, and a brief overview of the field is provided. The article highlights codes for which the values of all data bits are taken into account once, by summing their values or the values of the bits' weight coefficients, when the check vector is formed. The paper also highlights codes that are formed when the data vectors are initially divided into subsets, in particular into two subsets. An extension of the sum code class is proposed, obtained by isolating two independent parts in the data vectors and weighting the bits of the data vectors at the code-construction stage. The paper provides a generalized algorithm for constructing two-module weighted codes and describes the features obtained by weighting one data bit in each of the subvectors with a non-unit weight coefficient when the total weight is calculated. Particular attention is paid to the two-module weight-based sum code for which the total weight of the data vector is determined in the residue ring modulo M = 4. It is shown that making the bits of the data vector unequal in some cases improves the error-detection characteristics compared with the well-known two-module codes. Some modifications of the proposed two-module weighted codes are described. A method for calculating the total number of undetectable errors in two-module sum codes in the residue ring modulo M = 4 with one weighted bit in each of the subsets is proposed. Detailed characteristics of error detection by the considered codes are given, both by the multiplicities of undetectable errors and by their types (unidirectional, symmetrical and asymmetrical errors). The proposed codes are compared with known codes. A method for the synthesis of two-module sum encoders on a standard element base of single-signal adders is proposed. A classification of two-module sum codes is presented.
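My reading of the construction, for illustration only: split the data vector into two subvectors, give one designated bit in each subvector a non-unit weight, and take the total weight modulo M = 4 as the check symbol. The weighted positions and the weight value 2 are assumptions, not the paper's exact parameters.

```python
def check_symbol(bits, M=4, weighted_pos=(0, 0), weight=2):
    """Two-module weighted sum: one weighted bit per subvector, mod M."""
    half = len(bits) // 2
    parts = (bits[:half], bits[half:])
    total = 0
    for part, wpos in zip(parts, weighted_pos):
        for i, b in enumerate(part):
            total += b * (weight if i == wpos else 1)
    return total % M

data = [1, 0, 1, 1, 0, 1, 1, 0]
print(check_symbol(data))   # check value stored/transmitted with the data bits
```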
- Published
- 2020
28. Bounded Fuzzy Possibilistic Method
- Author
-
Hossein Yazdani
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,0209 industrial biotechnology ,Fuzzy clustering ,Computer Science - Artificial Intelligence ,Logic ,Fuzzy set ,Machine Learning (stat.ML) ,02 engineering and technology ,Fuzzy logic ,Machine Learning (cs.LG) ,Artificial Intelligence (cs.AI) ,020901 industrial engineering & automation ,Statistics - Machine Learning ,Artificial Intelligence ,Bounded function ,0202 electrical engineering, electronic engineering, information engineering ,Feature (machine learning) ,020201 artificial intelligence & image processing ,Set theory ,Cluster analysis ,Algorithm ,Membership function ,Mathematics - Abstract
This paper introduces the Bounded Fuzzy Possibilistic Method (BFPM) by addressing several issues that previous clustering/classification methods have not considered. In fuzzy clustering, an object's membership values must sum to 1; hence, any object may obtain full membership in at most one cluster. Possibilistic clustering methods remove this restriction. However, BFPM differs from previous fuzzy and possibilistic clustering approaches by allowing the membership function to take larger values with respect to all clusters. Furthermore, in BFPM, a data object can have full membership in multiple clusters or even in all clusters. BFPM relaxes the boundary conditions (restrictions) in membership assignment. The proposed methodology satisfies the necessity of obtaining full memberships and overcomes the issues of conventional methods in dealing with overlap. Analysing objects' movements from their own cluster to another (mutation) is also proposed in this paper. BFPM has been applied in different domains in geometry, set theory, anomaly detection, risk management, disease diagnosis, and other disciplines. Validity and comparison indexes have also been used to evaluate the accuracy of BFPM. BFPM has been evaluated in terms of accuracy, fuzzification constant (different norms), objects' movement analysis, and covering diversity. The promising results demonstrate the importance of considering the proposed methodology in learning methods to track the behaviour of data objects, in addition to obtaining accurate results.
- Published
- 2020
29. The Distance Induced OWA Operator with Application to Multi-criteria Group Decision Making
- Author
-
Weiwei Liu, Ying Zhou, Yuzhen Hu, Chengju Gong, and Yi Su
- Subjects
Group (mathematics) ,Computational intelligence ,Hamming distance ,02 engineering and technology ,Interval (mathematics) ,Measure (mathematics) ,Distance measures ,Theoretical Computer Science ,Group decision-making ,Operator (computer programming) ,Computational Theory and Mathematics ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Algorithm ,Software ,Mathematics - Abstract
Some aggregation operators with distance measures have been proposed, but in them the distance measure values play the role of argument variables. In this paper, the induced ordered weighted averaging (IOWA) operator with the Hamming distance is proposed, termed the distance induced ordered weighted averaging (DIOWA) operator. The most distinctive characteristic of the DIOWA operator is that the distance measure values play the role of order-inducing variables. Some properties and special cases of the DIOWA operator are analyzed, and the operator is further extended to uncertain situations represented by interval numbers. A new multi-criteria group decision-making (MCGDM) method based on the proposed operators is also studied. Finally, a numerical example on selecting the best candidate shows how to use the DIOWA operator with interval numbers in group decision making.
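A minimal sketch of the order-inducing idea follows, under the assumption that Hamming distances to an ideal alternative drive the reordering; the sort direction and the example data are hypothetical.

```python
# Minimal sketch of the DIOWA idea under stated assumptions: Hamming
# distances to an ideal alternative act as order-inducing variables,
# and the arguments are reordered by those distances (descending here)
# before the OWA weights are applied.

def hamming(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def diowa(arguments, references, ideal, weights):
    inducers = [hamming(r, ideal) for r in references]
    # sort argument values by their inducing distance, largest first
    ordered = [arg for _, arg in sorted(zip(inducers, arguments),
                                        key=lambda p: p[0], reverse=True)]
    return sum(w * a for w, a in zip(weights, ordered))

args = [0.7, 0.4, 0.9]                    # expert evaluations to aggregate
refs = [[1, 0, 1], [0, 0, 1], [1, 1, 1]]  # profiles inducing the order
ideal = [1, 1, 1]
print(diowa(args, refs, ideal, [0.5, 0.3, 0.2]))  # 0.59
```

The key distinction from earlier distance-based operators is visible in the code: the distances never enter the weighted sum, they only decide the ordering.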
- Published
- 2020
30. Combinatorial Iterative Algorithms for Computing the Centroid of an Interval Type-2 Fuzzy Set
- Author
-
Shu-Ping Wan and Xianliang Liu
- Subjects
Lebesgue measure ,Iterative method ,Applied Mathematics ,Fuzzy set ,Centroid ,02 engineering and technology ,Interval (mathematics) ,Type (model theory) ,Computational Theory and Mathematics ,Artificial Intelligence ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Almost everywhere ,Differentiable function ,Algorithm ,Mathematics - Abstract
Computing the centroid of an interval type-2 fuzzy set (IT2 FS) is an important type-reduction method. The aim of this paper is to develop a new method to calculate the centroid of an IT2 FS when the centroid computation problem is continuous. For the continuous case, the structures of the optimal solutions are rigorously proven for the first time in this paper. Furthermore, we prove that the structures of the optimal solutions are unique in the sense of being equal almost everywhere, i.e., if there are two optimal solutions $f_1(x)$ and $f_2(x)$, the Lebesgue measure of $\{x\,|\,f_1(x)\neq f_2(x)\}$ is equal to 0. Subsequently, a combinatorial iterative (CI) method is proposed to find the roots of the sufficiently differentiable objective functions; the convergence of the proposed iterative method is proven to be at least of sixth order. Based on this iterative method, two algorithms, called CI algorithms, are devised to compute the centroid of an IT2 FS. The efficiency of the CI algorithms is demonstrated by comparing them with the continuous Karnik–Mendel algorithms and Halley's method on three numerical examples.
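For background, the discrete Karnik–Mendel iteration that the CI algorithms are benchmarked against can be sketched as follows; this is a standard textbook form on sampled membership functions, not the paper's continuous CI method.

```python
import numpy as np

# Classic discrete Karnik-Mendel iteration: the centroid endpoints are
# found by moving a switch point at which the weighting changes between
# the lower and upper membership samples (lmf, umf) on the domain x.

def km_left(x, lmf, umf, tol=1e-9):
    theta = 0.5 * (lmf + umf)
    y = np.dot(x, theta) / theta.sum()
    while True:
        theta = np.where(x <= y, umf, lmf)   # upper MF left of switch
        y_new = np.dot(x, theta) / theta.sum()
        if abs(y_new - y) < tol:
            return y_new
        y = y_new

def km_right(x, lmf, umf, tol=1e-9):
    theta = 0.5 * (lmf + umf)
    y = np.dot(x, theta) / theta.sum()
    while True:
        theta = np.where(x <= y, lmf, umf)   # lower MF left of switch
        y_new = np.dot(x, theta) / theta.sum()
        if abs(y_new - y) < tol:
            return y_new
        y = y_new

x = np.linspace(0, 10, 101)
umf = np.exp(-0.5 * ((x - 5) / 2) ** 2)      # example upper MF
lmf = 0.6 * umf                              # example lower MF
print(km_left(x, lmf, umf), km_right(x, lmf, umf))
```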
- Published
- 2020
31. Artificial intelligence : A powerful paradigm for scientific research
- Author
-
Xu, Yongjun, Liu, Xin, Cao, Xin, Huang, Changping, Liu, Enke, Qian, Sen, Liu, Xingchen, Wu, Yanjun, Dong, Fengliang, Qiu, Cheng-Wei, Qiu, Junjun, Hua, Keqin, Su, Wentao, Wu, Jian, Xu, Huiyu, Han, Yong, Fu, Chenguang, Yin, Zhigang, Liu, Miao, Roepman, Ronald, Dietmann, Sabine, Virta, Marko, Kengara, Fredrick, Zhang, Ze, Zhang, Lifu, Zhao, Taolan, Dai, Ji, Yang, Jialiang, Lan, Liang, Luo, Ming, Liu, Zhaofeng, An, Tao, Zhang, Bin, He, Xiao, Cong, Shan, Liu, Xiaohong, Zhang, Wei, Lewis, James P., Tiedje, James M., Wang, Qi, An, Zhulin, Wang, Fei, Zhang, Libo, Huang, Tao, Lu, Chuan, Cai, Zhipeng, Wang, Fang, and Zhang, Jiabao
- Subjects
geoscience ,materials science ,IDENTIFICATION ,mathematics ,life science ,DEEP ,PREDICTION ,SYMMETRY ,MODELS ,deep learning ,medical science ,information science ,artificial intelligence ,chemistry ,113 Computer and information sciences ,CATALOG ,machine learning ,NEUROSCIENCE ,NEURAL-NETWORKS ,ALGORITHM ,OPTIMIZATION ,physics - Abstract
Artificial intelligence (AI), coupled with promising machine learning (ML) techniques well known from computer science, is broadly affecting many aspects of various fields, including science and technology, industry, and even our day-to-day life. ML techniques have been developed to analyze high-throughput data and to obtain useful insights, categorize, predict, and make evidence-based decisions in novel ways, which will promote the growth of novel applications and sustain the booming of AI. This paper undertakes a comprehensive survey of the development and application of AI in different aspects of fundamental sciences: information science, mathematics, medical science, materials science, geoscience, life science, physics, and chemistry. The challenges that each discipline meets, and the potential of AI techniques to handle them, are discussed in detail. Moreover, we shed light on new research trends entailing the integration of AI into each scientific discipline. The aim of this paper is to provide a broad research guideline on fundamental sciences with potential infusion of AI, to help motivate researchers to deeply understand the state-of-the-art AI-based applications in fundamental sciences, and thereby to help promote the continuous development of these fundamental sciences.
- Published
- 2021
32. A correction method for the near field approximated model based localization techniques
- Author
-
Pascal Charge, Yide Wang, and Parth Raj Singh (Institut d'Électronique et des Technologies du numéRique (IETR), Université de Nantes, Université de Rennes 1, INSA Rennes, CentraleSupélec, CNRS)
- Subjects
Systematic error ,Mathematical optimization ,Correction method ,Computational complexity theory ,[SPI] Engineering Sciences [physics] ,Near and far field ,02 engineering and technology ,Signal ,[SPI]Engineering Sciences [physics] ,Artificial Intelligence ,Angle of arrival ,0202 electrical engineering, electronic engineering, information engineering ,Range (statistics) ,Approximated model ,Electrical and Electronic Engineering ,Mathematics ,Wavefront ,Applied Mathematics ,020208 electrical & electronic engineering ,020206 networking & telecommunications ,Fresnel approximation error ,near field sources localization ,[SPI.TRON] Engineering Sciences [physics]/Electronics ,[SPI.TRON]Engineering Sciences [physics]/Electronics ,Computational Theory and Mathematics ,Signal Processing ,Computer Vision and Pattern Recognition ,Statistics, Probability and Uncertainty ,Algorithm - Abstract
Almost all existing near field source localization techniques use an approximated model of the spherical wavefront to reduce the computational complexity. This approximation adds a systematic bias to the estimated range and angle of arrival (AOA) when the received signal has a spherical wavefront. In this paper, we propose an efficient correction method to mitigate the systematic error introduced by the use of the approximated model in existing near field source localization techniques. The performance of the correction method is shown by simulation results.
- Published
- 2017
33. Alzheimer Identification through DNA Methylation and Artificial Intelligence Techniques
- Author
-
Javier Caballero Villarraso and Gerardo Alfonso Perez
- Subjects
Identification ,animal structures ,Computer science ,General Mathematics ,Value (computer science) ,Machine learning ,computer.software_genre ,Reduction (complexity) ,Computer Science (miscellaneous) ,QA1-939 ,Engineering (miscellaneous) ,algorithm ,business.industry ,fungi ,Process (computing) ,food and beverages ,Support vector machine ,Maxima and minima ,Algorithm ,Nonlinear system ,Identification (information) ,embryonic structures ,Alzheimer ,identification ,Artificial intelligence ,business ,human activities ,computer ,Mathematics ,Curse of dimensionality - Abstract
A nonlinear approach to identifying combinations of CpG DNA methylation data as biomarkers for Alzheimer's disease (AD) is presented in this paper. It is shown that the presented algorithm can substantially reduce the number of CpGs used while generating forecasts that are more accurate than using all the available CpGs. Since the underlying process can in principle be non-linear, a non-linear approach might be more appropriate. The proposed algorithm selects which CpGs to use as input data in a classification problem that tries to distinguish between patients suffering from AD and healthy control individuals; this type of classification problem is well suited to techniques such as support vector machines. The algorithm was applied both at the single-dataset level and across multiple datasets. Developing robust algorithms for multiple datasets is challenging, due to the impact that small differences in laboratory procedures have on the obtained data, but the approach followed in the paper can be expanded to multiple datasets, allowing for a gradually more granular understanding of the underlying process. A 92% successful classification rate was obtained using the proposed method, a higher value than the result obtained using all the available CpGs. This is likely due to the reduction in the dimensionality of the data achieved by the algorithm, which in turn helps reduce the risk of converging to a local minimum.
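The selection idea can be sketched as a greedy wrapper around an SVM classifier. This is an assumed simplification for illustration, not the authors' exact algorithm, and the synthetic data below stand in for real methylation values.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Greedy wrapper sketch: repeatedly add the CpG feature that most
# improves cross-validated SVM accuracy, stopping when nothing helps.
# X holds methylation values (samples x CpGs), y the AD/control labels.

def greedy_cpg_selection(X, y, max_feats=10):
    selected, best_score = [], 0.0
    for _ in range(max_feats):
        best_j = None
        for j in range(X.shape[1]):
            if j in selected:
                continue
            cols = selected + [j]
            score = cross_val_score(SVC(kernel="rbf"), X[:, cols], y,
                                    cv=5).mean()
            if score > best_score:
                best_score, best_j = score, j
        if best_j is None:
            break
        selected.append(best_j)
    return selected, best_score

rng = np.random.default_rng(0)
X = rng.random((60, 40))                       # toy "methylation" matrix
y = (X[:, 3] + X[:, 17] > 1).astype(int)       # labels driven by 2 CpGs
print(greedy_cpg_selection(X, y, max_feats=3))
```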
- Published
- 2021
34. A Novel Data Driven Machine Learning Algorithm For Fuzzy Estimates of Optimal Portfolio Weights and Risk Tolerance Coefficient
- Author
-
Alex Paseka, Md. Erfanul Hoque, You Liang, Ruppa K. Thulasiram, and Aerambamoorthy Thavaneswaran
- Subjects
business.industry ,Risk measure ,Statistics::Other Statistics ,Machine learning ,computer.software_genre ,Fuzzy logic ,Rate of return on a portfolio ,Lasso (statistics) ,Computer Science::Computational Engineering, Finance, and Science ,Fuzzy number ,Portfolio ,Artificial intelligence ,Portfolio optimization ,Volatility (finance) ,business ,computer ,Algorithm ,Mathematics - Abstract
Recently, there has been growing interest in portfolio optimization using the graphical LASSO (GL) machine learning method, under the assumption of normality for asset returns. However, a major drawback is that most asset returns follow non-normal distributions, and sample percentiles are then used to study portfolio optimization with Value-at-Risk (VaR) as the risk measure. In this paper, a data-driven random weights algorithm (RWA) and a sign-correlation-based portfolio return distribution are used to study fuzzy portfolio optimization. The superiority of RWA over the commonly used genetic algorithm (GA) in computing the optimal portfolio weights is demonstrated by comparing computing times; when comparing the estimate of the risk tolerance coefficient with the theoretical value for tangency portfolios with volatility as the risk measure, RWA also outperforms the GA (smaller absolute error). The novelty of this paper is the use of RWA and GA to calculate fuzzy estimates (interval estimates) of the risk tolerance coefficient and optimal weights, and the use of sign correlation to obtain the data-driven distribution of portfolio returns. More specifically, the fuzzy estimates of the risk tolerance coefficient and portfolio weights are obtained by modelling the portfolio volatility as an asymmetric triangular fuzzy number fitted to the observed portfolio volatilities. In particular, the proposed RWA, as well as the GA, leads to machine learning solutions for portfolio optimization problems without a closed-form solution, and provides fuzzy estimates of the risk tolerance coefficient and the optimal portfolio weights.
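A minimal sketch of what a random weights algorithm can look like, assuming a long-only simplex search and a mean-variance utility with risk tolerance coefficient lam; the paper's version additionally produces fuzzy interval estimates.

```python
import numpy as np

# Random-weights search sketch (assumed form of an RWA): draw many
# random weight vectors on the simplex and keep the one maximizing a
# mean-variance objective. No gradient or closed form is needed, which
# is the appeal of this family of methods.

def rwa(returns, lam=1.0, n_draws=20000, seed=0):
    rng = np.random.default_rng(seed)
    mu, cov = returns.mean(axis=0), np.cov(returns, rowvar=False)
    best_w, best_val = None, -np.inf
    for _ in range(n_draws):
        w = rng.random(len(mu))
        w /= w.sum()                          # long-only, weights sum to 1
        val = w @ mu - lam * (w @ cov @ w)    # mean-variance utility
        if val > best_val:
            best_val, best_w = val, w
    return best_w, best_val

rets = np.random.default_rng(1).normal(0.001, 0.02, size=(250, 4))
w, v = rwa(rets)
print(np.round(w, 3), v)
```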
- Published
- 2021
35. Model Order Reduction Based on Agglomerative Hierarchical Clustering
- Author
-
Donald C. Wunsch and Seaar Al-Dabooni
- Subjects
Model order reduction ,Computer Networks and Communications ,02 engineering and technology ,Transfer function ,Computer Science Applications ,Hierarchical clustering ,Inverted pendulum ,Artificial Intelligence ,Robustness (computer science) ,Full state feedback ,0202 electrical engineering, electronic engineering, information engineering ,Padé approximant ,020201 artificial intelligence & image processing ,Cluster analysis ,Algorithm ,Software ,Mathematics - Abstract
This paper presents an improved method for reducing high-order dynamical system models via clustering. Agglomerative hierarchical clustering based on performance evaluation (HC-PE) is introduced for model order reduction. This method computes the reduced-order denominator of the transfer function model by clustering system poles in a hierarchical dendrogram. The base layer represents an $n$th-order system, from which each successive layer is calculated to reduce the model order until finally reaching a second-order system. HC-PE uses a mean-squared error (MSE) at every reduced order, which modifies the pole placement process. The coefficients of the numerator of the reduced model are calculated by using the Padé approximation (PA) or, alternatively, a genetic algorithm (GA). Several numerical examples of reduction techniques are taken from the literature for comparison with HC-PE. Two classes of results are shown in this paper: the first are single-input single-output models ranging from simple models to 48th-order systems; the second is a multi-input multi-output model. We demonstrate the best performance for HC-PE through minimum MSEs compared with other methods. Furthermore, the robustness of HC-PE combined with PA or GA is confirmed by evaluating the third-order reduced model of the triple-link inverted pendulum under a disturbance impulse signal and changed model parameters. The relevant stability proofs are provided in Appendixes A and B in the supplementary material. HC-PE with PA slightly outperforms its GA variant, but both approaches are attractive alternatives to other published methods.
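A simplified sketch of the pole-clustering step is given below. HC-PE's performance-evaluation criterion is replaced here by plain average-linkage clustering, and the numerator is fixed by DC-gain matching rather than the Padé approximation, so this illustrates only the core idea.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Pole-clustering sketch: hierarchically cluster the poles of a
# high-order model, keep one representative pole per cluster, and match
# the DC gain to fix a constant reduced numerator.

def reduce_order(poles, dc_gain, n_reduced=2):
    pts = np.column_stack([poles.real, poles.imag])
    labels = fcluster(linkage(pts, method="average"),
                      t=n_reduced, criterion="maxclust")
    reps = np.array([poles[labels == k].mean()
                     for k in range(1, n_reduced + 1)])
    den = np.real_if_close(np.poly(reps))     # reduced denominator
    num = np.array([dc_gain * den[-1]])       # matched at s = 0
    return num, den

poles = np.array([-1.1, -0.9, -10.0, -12.0, -30.0])  # 5th-order model
num, den = reduce_order(poles, dc_gain=2.0)
print(num, den)   # 2nd-order approximation keeping the dominant dynamics
```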
- Published
- 2019
36. Forecasting seasonal time series based on fuzzy techniques
- Author
-
Vilém Novák and Linh Nguyen
- Subjects
0209 industrial biotechnology ,Series (mathematics) ,Logic ,Natural logic ,02 engineering and technology ,Fuzzy logic ,020901 industrial engineering & automation ,Artificial Intelligence ,Component (UML) ,Pattern recognition (psychology) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Autoregressive integrated moving average ,Algorithm ,Mathematics - Abstract
This paper is devoted to a method for forecasting seasonal time series. The core of our approach is based on the fuzzy transform and fuzzy natural logic (FNL) techniques. Under the assumption that a time series can be additively decomposed into a trend-cycle, a seasonal component and an irregular fluctuation, the forecast is a combination of individual forecasts of each of these constituents. More precisely, the trend-cycle and the seasonal component are predicted with the help of the fuzzy transform, pattern recognition and fuzzy natural logic techniques, while the irregular component is modeled with the Box–Jenkins approach. In the paper, we compare the suggested method with two other well-known methods, namely STL (see [14]) and ARIMA.
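The assumed additive decomposition can be illustrated with a crude numeric stand-in, where a moving average replaces the fuzzy-transform trend-cycle estimate and seasonal means replace the FNL component.

```python
import numpy as np

# Additive decomposition sketch: y = trend-cycle + seasonal + irregular.
# The estimators here (moving average, seasonal means) are deliberately
# simple placeholders for the paper's fuzzy-transform/FNL machinery.

def decompose(y, period):
    trend = np.convolve(y, np.ones(period) / period, mode="same")  # crude
    detrended = y - trend
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(seasonal, len(y) // period + 1)[:len(y)]
    irregular = y - trend - seasonal
    return trend, seasonal, irregular

t = np.arange(120)
y = (0.05 * t + 2 * np.sin(2 * np.pi * t / 12)
     + np.random.default_rng(2).normal(0, 0.3, 120))
trend, seas, irr = decompose(y, period=12)
# naive one-step forecast: extend the trend linearly, repeat the season
print(trend[-1] + (trend[-1] - trend[-2]) + seas[len(y) % 12])
```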
- Published
- 2019
37. General 3-D Type-II Fuzzy Logic Systems in the Polar Frame: Concept and Practice
- Author
-
Hamzeh Zakeri, Fereidoon Moghadas Nejad, and Ahmad Fahimifar
- Subjects
Applied Mathematics ,Fuzzy set ,Marsaglia polar method ,Fuzzy logic ,Electronic mail ,Smoothing spline ,Computational Theory and Mathematics ,Artificial Intelligence ,Control and Systems Engineering ,Control theory ,Fuzzy set operations ,Spline interpolation ,Algorithm ,Membership function ,Mathematics - Abstract
This paper deals with the concept and application of general three-dimensional (3-D) type-2 fuzzy logic systems in the polar frame (G3DT2FLS). (Because many acronyms are used in this paper, they are summarized in Table I for the convenience of the reader.) We focus on the automatic membership function (MF) generator, general type-2 polar fuzzy membership, new geometric operators in the polar frame, and inference consisting of fuzzy 3-D polar rules and antecedent/consequent $\theta$-slices and $\alpha$-planes. The cubic smoothing spline is introduced to generate the upper and lower MFs according to information theory. Three indices of compactness, smoothness, and entropy are employed for tuning the MFs. A measure of ultrafuzziness, along with min, max, equal, and reduced ultrafuzziness, is suggested for the polar method. Additionally, the set-theoretic operations of 3-D type-2 fuzzy sets in the polar frame are discussed, and we prove the join, meet, centroid, and type-reduction operations in the polar frame. Several rule sets are given to show the usefulness and complexity of the proposed method. The performance of the partial general 3-D polar type-2 fuzzy logic system showed linear growth as the number of rules was increased. Computation time tests showed that the algorithm reduces the computation time by up to 67% compared with discrete MFs, and by up to 98% for the geometric procedure. These results indicate significant improvements in computation time for the spline interpolation over the existing methods.
- Published
- 2019
38. Granular fuzzy rough sets based on fuzzy implicators and coimplicators
- Author
-
Bao Qing Hu and Bo Wen Fang
- Subjects
0209 industrial biotechnology ,Mathematics::General Mathematics ,Logic ,Axiomatic system ,02 engineering and technology ,Constructive ,Fuzzy logic ,ComputingMethodologies_PATTERNRECOGNITION ,020901 industrial engineering & automation ,Artificial Intelligence ,Approximation error ,Approximation operators ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,ComputingMethodologies_GENERAL ,Limit (mathematics) ,Fuzzy rough sets ,Algorithm ,Mathematics ,Variable precision - Abstract
This paper introduces granular fuzzy rough sets from the viewpoint of fuzzy implicators and fuzzy coimplicators, and discusses constructive and axiomatic approaches to fuzzy granules based on them. Moreover, we study the connection between fuzzy granules and fuzzy relations, and discuss the relationship between existing granular fuzzy rough set models and the one proposed in this paper. Considering an absolute error limit, we introduce the concept of granular variable precision fuzzy rough sets based on fuzzy implicators and coimplicators. We then present four propositions ensuring that the approximation operators can be calculated efficiently.
- Published
- 2019
39. Fuzzy Monotonic K-Nearest Neighbor Versus Monotonic Fuzzy K-Nearest Neighbor
- Author
-
Ran Wang, Hong Zhu, and Xizhao Wang
- Subjects
Selection (relational algebra) ,Applied Mathematics ,Value (computer science) ,Monotonic function ,Fuzzy logic ,k-nearest neighbors algorithm ,Noise ,ComputingMethodologies_PATTERNRECOGNITION ,Computational Theory and Mathematics ,Artificial Intelligence ,Control and Systems Engineering ,Range (statistics) ,Sensitivity (control systems) ,Algorithm ,Mathematics - Abstract
In real-life applications, monotonic classification is a widespread task in which improving a particular input value cannot result in an inferior output. A common drawback of existing algorithms for monotonic classification is their sensitivity to noise data, which in the monotonic setting particularly refers to monotonicity violations. Motivated by weakening the impact of noise, the Fuzzy Monotonic K-Nearest Neighbor (FMKNN) is proposed in this paper; it constructs monotonic classifiers by taking advantage of the fuzzy dominance relation between pairs of instances, especially between incomparable instances, for the first time. By tuning the thresholds on fuzzy dominance degrees, FMKNN decreases the disturbance caused by noise, which affects the selection range of the K nearest neighbors to different extents. The experimental results show that the best average improvement degrees of FMKNN over the KNN-based and non-KNN-based classifiers on all the involved datasets reach 28%, 11% and 29% with respect to ACCU, MAE and NMI, respectively, demonstrating the superiority of FMKNN over other state-of-the-art monotonic classifiers, including the Monotonic Fuzzy K-Nearest Neighbor (MFKNN), which disperses the impact of noise data by converting crisp class labels into class membership vectors.
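One possible form of a fuzzy dominance degree is sketched below for illustration; the sigmoid-of-differences formula and the parameter beta are assumptions, not the paper's definition.

```python
import numpy as np

# Illustrative fuzzy dominance degree between two instances: average,
# over attributes, of a sigmoid of the attribute difference. Unlike a
# crisp dominance relation (0 or 1), incomparable instances receive an
# intermediate degree that a threshold can then accept or reject.

def fuzzy_dominance(x, y, beta=5.0):
    return float(np.mean(1.0 / (1.0 + np.exp(-beta * (x - y)))))

a = np.array([0.8, 0.2, 0.6])
b = np.array([0.5, 0.4, 0.6])
print(fuzzy_dominance(a, b))   # intermediate: a and b are incomparable
print(fuzzy_dominance(a, a))   # 0.5 by this convention (ties)
```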
- Published
- 2022
40. Angular Radon spectrum for rotation estimation
- Author
-
Dario Lodi Rizzini
- Subjects
Radon transform ,020207 software engineering ,02 engineering and technology ,Correlation function (astronomy) ,Mixture model ,Translation (geometry) ,Parallel ,Point distribution model ,Artificial Intelligence ,Orientation (geometry) ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Algorithm ,Rotation (mathematics) ,Software ,Mathematics - Abstract
This paper presents a robust method for rotation estimation of planar point sets using the Angular Radon Spectrum (ARS). Given a Gaussian Mixture Model (GMM) representing the point distribution, the ARS is a continuous function derived from the Radon Transform of that distribution. The ARS characterizes the orientation of a point distribution by measuring its alignment with a pencil of parallel lines. By exploiting its invariance to translation and angular shift, the rotation angle between two point sets can be estimated through the correlation of the corresponding spectra. Besides this definition, the novel contributions of this paper include the efficient computation of the ARS and of the correlation function through their Fourier expansions, and a new algorithm for assessing the rotation between two point sets. Moreover, experiments with standard benchmark datasets assess the performance of the proposed algorithm and other state-of-the-art methods in the presence of noisy and incomplete data.
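The spectrum-correlation idea can be imitated with a simpler translation-invariant statistic. The sketch below correlates histograms of pairwise point directions via the FFT; it approximates the approach rather than computing the actual ARS.

```python
import numpy as np

# Rotation via circular cross-correlation: build a histogram of pairwise
# point directions (translation invariant) for each set, then find the
# rotation as the circular shift maximizing their correlation (FFT).

def direction_histogram(pts, bins=360):
    d = pts[None, :, :] - pts[:, None, :]
    ang = np.arctan2(d[..., 1], d[..., 0]) % np.pi   # undirected lines
    ang = ang[~np.eye(len(pts), dtype=bool)]
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi))
    return hist.astype(float)

def estimate_rotation(src, dst, bins=360):
    h1 = direction_histogram(src, bins)
    h2 = direction_histogram(dst, bins)
    corr = np.fft.ifft(np.fft.fft(h2) * np.conj(np.fft.fft(h1))).real
    return np.argmax(corr) * np.pi / bins            # best shift -> angle

rng = np.random.default_rng(3)
src = rng.random((200, 2))
theta = 0.4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(estimate_rotation(src, src @ R.T))             # ~0.4 rad
```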
- Published
- 2018
41. Intelligent Analysis of Core Identification Based on Intelligent Algorithm of Core Identification
- Author
-
Xiaolin Zhu and Wei Lv
- Subjects
Article Subject ,Computer science ,Machine vision ,business.industry ,Deep learning ,Process (computing) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,System model ,Identification (information) ,Mobile phone ,Modeling and Simulation ,Line (geometry) ,Core (graph theory) ,QA1-939 ,Artificial intelligence ,business ,Algorithm ,Mathematics - Abstract
The communication-line recognition of mobile phone cores is a test of the development of machine vision: the core is very small, so it is difficult to identify small defects. Based on an in-depth study of the algorithm, and combined with the actual needs of core identification, this paper improves the algorithm and proposes an intelligent algorithm suitable for core identification. According to the actual needs of core wire recognition, the paper provides an intelligent analysis of the core wire recognition process, improves the traditional communication image recognition algorithm, and analyzes the recognition data according to the shape and image characteristics of the mobile phone core. Finally, after constructing the functional structure of the proposed system model, the model is verified and analyzed, and on this basis the performance of the improved core recognition algorithm is evaluated. In online monitoring and recognition, the statistical accuracy of mobile phone core video recognition is about 90%, higher than that of traditional recognition algorithms for core image recognition. The core line recognition algorithm based on deep learning and machine vision is thus effective and has good practical performance.
- Published
- 2021
42. Granular representation of OWA-based fuzzy rough sets
- Author
-
Salvatore Greco, Chris Cornelis, Roman Słowiński, and Marko Palangetić
- Subjects
0209 industrial biotechnology ,Logic ,Rule induction ,Fuzzy set ,Granular computing ,02 engineering and technology ,Extension (predicate logic) ,Fuzzy logic ,Interpretation (model theory) ,020901 industrial engineering & automation ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Rough set ,Representation (mathematics) ,Algorithm ,Mathematics - Abstract
Granular representations of crisp and fuzzy sets play an important role in rule induction algorithms based on rough set theory. In particular, arbitrary fuzzy sets can be approximated using unions of simple fuzzy sets called granules, which in turn have a straightforward interpretation in terms of human-readable fuzzy "if..., then..." rules. In this paper, we consider a fuzzy rough set model based on ordered weighted average (OWA) aggregation over the considered values. We show that this robust extension of the classical fuzzy rough set model, which has been applied successfully in various machine learning tasks, also allows for a granular representation. In particular, we prove that when the approximations are defined using a directionally convex t-norm and its residual implicator, the OWA-based lower and upper approximations are definable as unions of fuzzy granules. This result has practical implications for rule induction from such fuzzy rough approximations.
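A minimal numeric sketch of OWA-based approximations follows, under assumed choices: the Kleene–Dienes implicator, the minimum t-norm, and simple soft-min/soft-max weight vectors (not the directionally convex pair used in the paper's proofs).

```python
import numpy as np

# OWA-based fuzzy rough approximations (assumed operator choices). The
# classic model takes a hard min / max over all objects; the OWA model
# replaces them with soft versions, so one noisy object no longer
# dictates the approximation value.

def owa(values, weights):
    """OWA: weights applied to values sorted in descending order."""
    return float(np.sort(values)[::-1] @ weights)

def owa_approximations(R_row, A, n_soft=3):
    n = len(A)
    w_max = np.zeros(n); w_max[:n_soft] = 1.0 / n_soft   # soft maximum
    w_min = np.zeros(n); w_min[-n_soft:] = 1.0 / n_soft  # soft minimum
    lower = owa(np.maximum(1.0 - R_row, A), w_min)  # I(R(x,y), A(y))
    upper = owa(np.minimum(R_row, A), w_max)        # T(R(x,y), A(y))
    return lower, upper

R_row = np.array([1.0, 0.8, 0.7, 0.2, 0.1])   # similarities of y's to x
A     = np.array([0.9, 1.0, 0.0, 0.3, 0.8])   # fuzzy set to approximate
print(owa_approximations(R_row, A))
```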
- Published
- 2022
43. Further Results on Sampled-Data $H_{\infty }$ Filtering for T–S Fuzzy Systems With Asynchronous Premise Variables
- Author
-
Yongsik Jin, Wookyong Kwon, and Sang-Moon Lee
- Subjects
Applied Mathematics ,Linear matrix inequality ,Fuzzy control system ,Filter (signal processing) ,Function (mathematics) ,Fuzzy logic ,Filter design ,Matrix (mathematics) ,Computational Theory and Mathematics ,Artificial Intelligence ,Control and Systems Engineering ,Affine transformation ,Algorithm ,Mathematics - Abstract
This paper presents a new sampled-data fuzzy filter design method for Takagi–Sugeno fuzzy systems with asynchronous premise variables. In the new method, the membership functions of the filter are affine transformations (scalings and biasings) of the system's membership functions; taking advantage of this, the asynchrony of the premise variables between the system and the filter is easily resolved. Based on a looped function and a modified free-weighting matrix inequality, the sampled system output is handled, and the filter design condition is formulated as a parameterized linear matrix inequality with affinely matched fuzzy parameter vectors; a modified Finsler's lemma is devised to handle these vectors. By utilizing the relationship between the transformed membership functions, the filter design condition for $H_{\infty}$ performance is enhanced with larger allowable maximum bounds on the variable sampling intervals. Lastly, the superiority of the presented method is verified by comparing numerical simulations with existing methods.
- Published
- 2022
44. PM2.5 concentration forecasting at surface monitoring sites using GRU neural network based on empirical mode decomposition
- Author
-
Xinyi Li, Bing Zhang, Jiadong Ren, and Guoyan Huang
- Subjects
Sequence ,Environmental Engineering ,010504 meteorology & atmospheric sciences ,Series (mathematics) ,Artificial neural network ,Mean squared error ,business.industry ,Deep learning ,010501 environmental sciences ,01 natural sciences ,Pollution ,Hilbert–Huang transform ,Component (UML) ,Environmental Chemistry ,Artificial intelligence ,Symmetric mean absolute percentage error ,business ,Waste Management and Disposal ,Algorithm ,0105 earth and related environmental sciences ,Mathematics - Abstract
The main component of haze is particulate matter (PM) 2.5, and exploring the laws governing PM2.5 concentration changes is a central task of air quality prediction. Given the temporal and non-linear characteristics of PM2.5 concentration series, more and more deep learning methods are being applied to PM2.5 prediction, but most of them ignore the non-stationarity of the time series, which lowers the accuracy of model predictions. To address this issue, an integrated method of gated recurrent unit neural networks based on empirical mode decomposition (EMD-GRU) for predicting PM2.5 concentration is proposed in this paper. The method first uses empirical mode decomposition (EMD) to decompose the PM2.5 concentration sequence, and then feeds the resulting stationary sub-sequences, together with meteorological features, into the constructed GRU neural network for training and prediction. Finally, the predicted sub-sequences are summed to obtain the PM2.5 concentration forecast. The results for the case studied in this paper show that the EMD-GRU model reduces the RMSE by 44%, the MAE by 40.82%, and the SMAPE by 11.63% compared to a single GRU model.
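The pipeline can be sketched compactly with the PyEMD and Keras libraries. The lag window, network size, and synthetic series below are placeholders, and the paper's version also feeds meteorological features alongside the sub-sequences.

```python
import numpy as np
from PyEMD import EMD                    # pip install EMD-signal
import tensorflow as tf

# EMD-GRU sketch: decompose the series into IMFs with EMD, train one
# small GRU per sub-series on lag windows, and sum the sub-forecasts to
# rebuild the next-step prediction.

def windows(series, lookback=24):
    X = np.stack([series[i:i + lookback]
                  for i in range(len(series) - lookback)])
    return X[..., None], series[lookback:]

series = (np.sin(np.linspace(0, 60, 1200))
          + 0.1 * np.random.default_rng(4).normal(size=1200))
imfs = EMD()(series)                     # rows: IMFs plus final residue

forecast = 0.0
for imf in imfs:
    X, y = windows(imf)
    model = tf.keras.Sequential([
        tf.keras.layers.GRU(16, input_shape=X.shape[1:]),
        tf.keras.layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=3, verbose=0)
    forecast += model.predict(X[-1:], verbose=0)[0, 0]
print(forecast)                          # summed next-step sub-forecasts
```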
- Published
- 2020
45. Automatic Urinary Sediments Visible Component Detection Based on Improved YOLO Algorithm
- Author
-
Shengyu Zhang, Lin Jiao, Wang Qijin, and Shifeng Dong
- Subjects
Feature fusion ,business.industry ,Deep learning ,05 social sciences ,Detector ,Feature extraction ,010501 environmental sciences ,01 natural sciences ,Object detection ,Kernel (image processing) ,Urinary sediment ,0502 economics and business ,Urine sediment ,Artificial intelligence ,050207 economics ,business ,Algorithm ,0105 earth and related environmental sciences ,Mathematics - Abstract
In this paper, an end-to-end object detection algorithm based on deep learning is used to analyze the visible components of urinary sediment. To further improve the detection accuracy of the YOLOv3 algorithm on a urine sediment visible component dataset, this paper presents a new way of determining training samples, which enhances their quality, and changes the convolution kernel receptive field size of the feature fusion layer to increase detection precision. The experimental results on the dataset show that the improved YOLOv3 has higher detection accuracy across the 5 categories of urinary sediment visible components, achieving a best mean average precision (mAP) of 90.1%, 0.6% higher than the original YOLOv3 model. The average detection time of the model is 0.047 s per frame at 800 × 600 resolution.
- Published
- 2020
46. OPUS-TASS: a protein backbone torsion angles and secondary structure predictor based on ensemble neural networks
- Author
-
Qinghua Wang, Jianpeng Ma, and Gang Xu
- Subjects
Statistics and Probability ,Multi-task learning ,Inference ,01 natural sciences ,Biochemistry ,Protein Structure, Secondary ,03 medical and health sciences ,Protein structure ,0103 physical sciences ,Molecular Biology ,Protein secondary structure ,030304 developmental biology ,Mathematics ,0303 health sciences ,010304 chemical physics ,Artificial neural network ,business.industry ,Deep learning ,Proteins ,Protein structure prediction ,Computer Science Applications ,Computational Mathematics ,Recurrent neural network ,Computational Theory and Mathematics ,Artificial intelligence ,Neural Networks, Computer ,business ,Algorithm - Abstract
Motivation Predictions of protein backbone torsion angles (ϕ and ψ) and secondary structure from sequence are crucial subproblems in protein structure prediction. With the development of deep learning approaches, their accuracies have been significantly improved. To capture the long-range interactions, most studies integrate bidirectional recurrent neural networks into their models. In this study, we introduce and modify a recently proposed architecture named Transformer to capture the interactions between the two residues theoretically with arbitrary distance. Moreover, we take advantage of multitask learning to improve the generalization of neural network by introducing related tasks into the training process. Similar to many previous studies, OPUS-TASS uses an ensemble of models and achieves better results. Results OPUS-TASS uses the same training and validation sets as SPOT-1D. We compare the performance of OPUS-TASS and SPOT-1D on TEST2016 (1213 proteins) and TEST2018 (250 proteins) proposed in the SPOT-1D paper, CASP12 (55 proteins), CASP13 (32 proteins) and CASP-FM (56 proteins) proposed in the SAINT paper, and a recently released PDB structure collection from CAMEO (93 proteins) named as CAMEO93. On these six test sets, OPUS-TASS achieves consistent improvements in both backbone torsion angles prediction and secondary structure prediction. On CAMEO93, SPOT-1D achieves the mean absolute errors of 16.89 and 23.02 for ϕ and ψ predictions, respectively, and the accuracies for 3- and 8-state secondary structure predictions are 87.72 and 77.15%, respectively. In comparison, OPUS-TASS achieves 16.56 and 22.56 for ϕ and ψ predictions, and 89.06 and 78.87% for 3- and 8-state secondary structure predictions, respectively. In particular, after using our torsion angles refinement method OPUS-Refine as the post-processing procedure for OPUS-TASS, the mean absolute errors for final ϕ and ψ predictions are further decreased to 16.28 and 21.98, respectively. Availability and implementation The training and the inference codes of OPUS-TASS and its data are available at https://github.com/thuxugang/opus_tass. Supplementary information Supplementary data are available at Bioinformatics online.
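One detail worth making explicit: torsion angles are periodic, so the mean absolute errors quoted above must be computed on wrapped differences (179° and -179° are only 2° apart). A common formulation, assumed here since the exact convention may differ:

```python
import numpy as np

# Angular MAE on wrapped differences: reduce each absolute difference
# modulo 360 and take the shorter way around the circle.

def angular_mae(pred_deg, true_deg):
    d = np.abs(pred_deg - true_deg) % 360.0
    return np.mean(np.minimum(d, 360.0 - d))

print(angular_mae(np.array([179.0, 30.0]),
                  np.array([-179.0, 25.0])))   # (2 + 5) / 2 = 3.5
```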
- Published
- 2020
47. Plant Disease Identification Based on Deep Learning Algorithm in Smart Farming
- Author
-
Yan Guo, Jin Zhang, Xiaonan Hu, Wei Wang, Zhipeng Xue, Chengxin Yin, and Yu Zou
- Subjects
0106 biological sciences ,Decision support system ,Article Subject ,business.industry ,Computer science ,Deep learning ,02 engineering and technology ,01 natural sciences ,Plant disease ,Identification (information) ,Agriculture ,Modeling and Simulation ,QA1-939 ,0202 electrical engineering, electronic engineering, information engineering ,Feature (machine learning) ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Transfer of learning ,computer ,Algorithm ,Mathematics ,010606 plant biology & botany ,Rust (programming language) ,computer.programming_language - Abstract
The identification of plant disease is the premise of preventing plant disease efficiently and precisely in complex environments. With the rapid development of smart farming, plant disease identification is becoming digitalized and data-driven, enabling advanced decision support, smart analyses, and planning. This paper proposes a mathematical model of plant disease detection and recognition based on deep learning, which improves accuracy, generality, and training efficiency. Firstly, a region proposal network (RPN) is utilized to recognize and localize leaves in complex surroundings. Then, the images segmented from the RPN results, which contain the symptom features, are obtained with the Chan–Vese (CV) algorithm. Finally, the segmented leaves are input into a transfer learning model trained on a dataset of diseased leaves against simple backgrounds. The model is examined on black rot, bacterial plaque, and rust diseases. The results show that the accuracy of the method is 83.57%, better than the traditional method, thus reducing the influence of disease on agricultural production and favoring the sustainable development of agriculture. Therefore, the deep learning algorithm proposed in the paper is of great significance for intelligent agriculture, ecological protection, and agricultural production.
- Published
- 2020
48. Algorithms to compute the largest invariant set contained in an algebraic set for continuous-Time and discrete-Time nonlinear systems
- Author
-
Antonio Tornambè, Laura Menini, and Corrado Possieri
- Subjects
0209 industrial biotechnology ,Polynomial ,Settore ING-INF/04 ,Asymptotic stability ,020208 electrical & electronic engineering ,Stability (learning theory) ,discrete-Time systems ,02 engineering and technology ,Algebraic geometry ,Analytic set ,Set (abstract data type) ,020901 industrial engineering & automation ,Artificial Intelligence ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,invariance ,Vector field ,Invariant (mathematics) ,Variety (universal algebra) ,nonlinear systems ,Algorithm ,Information Systems ,Mathematics - Abstract
In this paper, some computational tools are proposed to determine the largest invariant set, with respect to either a continuous-time or a discrete-time system, that is contained in an algebraic set. In particular, it is shown that if the vector field governing the dynamics of the system is polynomial and the considered analytic set is a variety, then algorithms from algebraic geometry can be used to solve the problem. Examples of applications of the method, ranging from the characterization of stability to the computation of the zero dynamics, are given throughout the paper.
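One classical rendition of such tools is to saturate the ideal of the algebraic set with Lie derivatives along the vector field until a Groebner basis stops changing. The sympy sketch below illustrates that idea; it is not necessarily the authors' exact procedure.

```python
import sympy as sp

# Invariant-set sketch: keep adding Lie derivatives of the constraints
# along the polynomial vector field until every derivative already lies
# in the ideal (Groebner remainder 0); the resulting variety is invariant.

x, y = sp.symbols("x y")
f = (-y, x)                        # vector field: dx/dt = -y, dy/dt = x
constraints = [x**2 + y**2 - 1]    # algebraic set: the unit circle

def lie(h, field, vars_):
    """Lie derivative of h along the vector field."""
    return sum(sp.diff(h, v) * fi for v, fi in zip(vars_, field))

basis = list(constraints)
while True:
    G = sp.groebner(basis, x, y, order="lex")
    new = [sp.expand(lie(h, f, (x, y))) for h in basis]
    new = [h for h in new if G.reduce(h)[1] != 0]   # escapes the ideal?
    if not new:
        break                      # ideal is invariant under the flow
    basis.extend(new)

print(sp.groebner(basis, x, y, order="lex"))  # circle: already invariant
```

For the circle and this rotational field the loop terminates immediately, since the Lie derivative of x² + y² - 1 is identically zero; in general, extra generators shrink the variety toward the largest invariant subset.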
- Published
- 2020
49. The optimized algorithm based on machine learning for inverse kinematics of two painting robots with non-spherical wrist
- Author
-
Lerui Chen, Xiaoqi Wang, Heyu Hu, and Jianfu Cao
- Subjects
0209 industrial biotechnology ,Kinematics ,Computer science ,Velocity ,02 engineering and technology ,Wrist ,computer.software_genre ,Machine Learning ,020901 industrial engineering & automation ,Skeletal Joints ,Medicine and Health Sciences ,0202 electrical engineering, electronic engineering, information engineering ,Musculoskeletal System ,Multidisciplinary ,Basis (linear algebra) ,Physics ,Applied Mathematics ,Simulation and Modeling ,Classical Mechanics ,Robotics ,Arms ,medicine.anatomical_structure ,Physical Sciences ,Engineering and Technology ,Medicine ,020201 artificial intelligence & image processing ,Anatomy ,Robots ,Algorithm ,Algorithms ,Research Article ,Computer and Information Sciences ,Movement ,Science ,Research and Analysis Methods ,Machine learning ,Motion ,Machine Learning Algorithms ,Artificial Intelligence ,medicine ,Least-Squares Analysis ,Inverse kinematics ,business.industry ,Mechanical Engineering ,Gauss ,Biology and Life Sciences ,Body Limbs ,Robot ,Artificial intelligence ,business ,computer ,Mathematics - Abstract
This paper studies the inverse kinematics of two non-spherical wrist configurations of painting robots. The simplest analytical solution for the orthogonal wrist configuration is deduced here for the first time. The oblique wrist configuration admits no analytical solution, so it must be solved by general methods, which cannot match the precision and speed of an analytic solution. Two general methods are therefore optimized in this paper. Firstly, the elimination method is optimized, reducing the solving time to 20% of the original, and the completeness of the method is supplemented. Secondly, based on the Gauss damped least squares method, a new optimization is proposed to improve the solving speed: an enhanced step length coefficient is introduced and studied with machine learning methods. It is proved that, while ensuring the stability of motion, the number of iterations can be effectively reduced, with the average below 5, which effectively improves the speed of solution. This is verified in both simulation and experimental environments.
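For context, a damped least-squares IK step, the family to which Gauss damped least squares belongs, can be sketched on a toy two-link arm; the damping factor and the arm model are illustrative choices, not the paper's robot.

```python
import numpy as np

# Damped least-squares IK: the joint update J^T (J J^T + lam^2 I)^{-1} e
# stays well-behaved near singular configurations, at the cost of a
# small tracking bias controlled by the damping factor lam.

def dls_step(J, pose_error, lam=0.1):
    JJt = J @ J.T
    return J.T @ np.linalg.solve(JJt + lam**2 * np.eye(JJt.shape[0]),
                                 pose_error)

def solve_ik(q0, forward, jacobian, target, iters=50, lam=0.1):
    q = q0.copy()
    for _ in range(iters):
        e = target - forward(q)
        if np.linalg.norm(e) < 1e-6:
            break
        q += dls_step(jacobian(q), e, lam)
    return q

# toy 2-link planar arm with unit link lengths
def fwd(q):
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def jac(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])

print(solve_ik(np.array([0.3, 0.3]), fwd, jac, np.array([1.0, 1.2])))
```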
- Published
- 2020
50. Performance assessment of a V-trough photovoltaic system and prediction of power output with different machine learning algorithms
- Author
-
Ali Etem Gürel, İlhan Ceylan, Ümit Ağbulut, Alper Ergün, and [Belirlenecek]
- Subjects
Coefficient of determination ,Design ,Mean squared error ,CPV ,020209 energy ,Strategy and Management ,02 engineering and technology ,Machine learning algorithms ,Solar ,Machine learning ,computer.software_genre ,Modules ,Industrial and Manufacturing Engineering ,Degradation ,0202 electrical engineering, electronic engineering, information engineering ,V-trough ,0505 law ,General Environmental Science ,Mathematics ,Energy ,Artificial neural network ,Cleaner production ,Renewable Energy, Sustainability and the Environment ,business.industry ,Deep learning ,05 social sciences ,Photovoltaic system ,Building and Construction ,Concentrator ,Power (physics) ,Support vector machine ,Power prediction ,Kernel (statistics) ,050501 criminology ,Artificial intelligence ,business ,Ann ,Algorithm ,computer - Abstract
This study was carried out in two stages. In the first stage, four different-sized layers were designed and manufactured for a concentrated photovoltaic system; these layers were used to change the concentration ratio and area ratio of the system. Furthermore, a new power coefficient equation for determining the system performance is proposed to the literature with this paper. In the second stage, the measured power outputs were predicted with four machine-learning algorithms, namely support vector machine, artificial neural network, kernel and nearest-neighbor, and deep learning. To evaluate the success of these algorithms, the coefficient of determination (R²), root mean squared error (RMSE), mean bias error (MBE), t-statistics (t-stat) and mean absolute bias error (MABE) are discussed in the paper. The experimental results demonstrated that the double-layer application for the concentrator ensured better results and enhanced the power by 16%. The average concentration ratio for the double layer was calculated to be 1.8. Based on these data, the optimum area ratio was determined to be 9 for this V-trough concentrator, and the power coefficient was calculated to be 1.35 at the optimum area ratio. The R² of all algorithms is greater than 0.96; the support vector machine algorithm generally presented the best predictions, with very satisfying R², RMSE, MBE, and MABE of 0.9921, 0.7082 W, 0.3357 W, and 0.6238 W, respectively, followed closely by the kernel nearest-neighbor, artificial neural network, and deep learning algorithms. In conclusion, this paper reports that the proposed new power coefficient approach gives more reliable results than efficiency data, and that the power output of concentrated photovoltaic systems can be predicted to a high degree with machine learning algorithms. This study was supported by the Karabuk University Scientific Research Projects Coordination Unit, Project Number KBU-BAP-15/1-YL-019.
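The reported error measures follow standard definitions, which can be restated compactly; the sample numbers below are made up for illustration.

```python
import numpy as np

# Standard regression metrics used to rank the algorithms: coefficient
# of determination, root mean squared error, mean bias error, and mean
# absolute bias error between measured and predicted power output.

def metrics(y_true, y_pred):
    resid = y_pred - y_true
    ss_res = np.sum(resid**2)
    ss_tot = np.sum((y_true - y_true.mean())**2)
    return {"R2":   1 - ss_res / ss_tot,
            "RMSE": np.sqrt(np.mean(resid**2)),
            "MBE":  np.mean(resid),
            "MABE": np.mean(np.abs(resid))}

y = np.array([50.0, 62.0, 71.5, 80.2, 95.0])   # measured power (W)
p = np.array([51.2, 60.8, 73.0, 79.5, 93.8])   # predicted power (W)
print(metrics(y, p))
```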
- Published
- 2020