1,497 results
Search Results
2. Design of variable elliptical filters with direct tunability in 2D domain
- Author
-
K. R. Sreelekha and T. S. Bindiya
- Subjects
Mean squared error ,Applied Mathematics ,2D Filters ,Bandwidth (signal processing) ,Ripple ,Filter (signal processing) ,Computer Science Applications ,Filter design ,Artificial Intelligence ,Hardware and Architecture ,Approximation error ,Signal Processing ,Elliptic filter ,Algorithm ,Software ,Information Systems ,Mathematics - Abstract
This paper proposes the design of transformation-based two-dimensional (2D) elliptical filters using a multi-objective artificial bee colony (MOABC) algorithm. The MOABC algorithm finds the one-dimensional (1D) cut-off frequencies and McClellan transformation coefficients by optimizing the contour approximation errors at the pass-band and stop-band regions of elliptical filters. This design approach is compared with state-of-the-art approaches in terms of contour approximation error, pass-band ripple and stop-band attenuation to demonstrate its efficiency. It is seen that an efficient mapping of the 1D filter to the 2D filter, with minimum contour approximation error, is possible using the proposed approach. The second proposal in this paper is the design of low-complexity variable-bandwidth and variable-orientation elliptical filters with direct 2D tunability. This is achieved by mapping the coefficients of different 2D filters into a 2D Farrow structure. The performance of the proposed variable elliptical filter is measured in terms of mean error, mean square error, root mean square error, pass-band ripple, stop-band attenuation and 2D cut-off frequencies. The main feature of the proposed 2D variable filter design lies in the considerable reduction in the number of multipliers when compared to existing methods, which in turn reduces resource utilization and power consumption.
- Published
- 2021
3. Parameterized and exact algorithms for finding a read-once resolution refutation in 2CNF formulas
- Author
-
K. Subramani and Piotr J. Wojciechowski
- Subjects
True quantified Boolean formula ,Applied Mathematics ,Parameterized complexity ,Computer Science::Computational Complexity ,Resolution (logic) ,Satisfiability ,Decidability ,TheoryofComputation_MATHEMATICALLOGICANDFORMALLANGUAGES ,Artificial Intelligence ,Computer Science::Logic in Computer Science ,Conjunctive normal form ,Boolean satisfiability problem ,Algorithm ,Time complexity ,Mathematics - Abstract
In this paper, we discuss algorithms for the problem of finding read-once resolution refutations of unsatisfiable 2CNF formulas within the resolution refutation system. Broadly, a read-once resolution refutation is one in which each constraint (input or derived) is used at most once. Read-once resolution refutations have been widely studied in the literature for a number of constraint system-refutation system pairs. For instance, read-once resolution has been analyzed for boolean formulas in conjunctive normal form (CNF) and read-once cutting planes have been analyzed for polyhedral systems. By definition, read-once refutations are compact, and hence valuable in applications that place a premium on visualization. The satisfiability problem (SAT) is concerned with finding a satisfying assignment for a boolean formula in CNF. While SAT is NP-complete in general, there exist some interesting subclasses of CNF formulas, for which it is decidable in polynomial time. One such subclass is the class of 2CNF formulas, i.e., CNF formulas in which each clause has at most two literals. The existence of efficient algorithms for satisfiability checking in 2CNF formulas (2SAT), makes this class useful from the perspective of modeling selected logic programs. The work in this paper is concerned with the read-once refutability problem (under resolution) in this subclass. Although 2SAT is decidable in polynomial time, the problem of finding a read-once resolution refutation of an unsatisfiable 2CNF formula is NP-complete. We design non-trivial, parameterized and exact exponential algorithms for this problem. Additionally, we study the computational complexity of finding a shortest read-once resolution refutation of a 2CNF formula.
- Published
- 2021
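As a toy illustration of the object studied in the abstract above, here is a minimal Python sketch of read-once resolution over clauses encoded as sets of signed integers. The brute-force search is an assumption for illustration only; the paper's parameterized and exact exponential algorithms are far more refined.

```python
def resolve(c1, c2):
    """Resolvent of two clauses on the first complementary pair, or None.
    Tautological resolvents are discarded."""
    for lit in c1:
        if -lit in c2:
            res = (c1 - {lit}) | (c2 - {-lit})
            if any(-l in res for l in res):   # tautology
                return None
            return frozenset(res)
    return None

def has_read_once_refutation(clauses):
    """Brute-force search for a read-once refutation: every clause, input or
    derived, is used as a premise at most once.  Only viable for toy inputs,
    since the decision problem is NP-complete even for 2CNF."""
    def search(pool):
        if frozenset() in pool:               # empty clause derived
            return True
        for i in range(len(pool)):
            for j in range(i + 1, len(pool)):
                r = resolve(pool[i], pool[j])
                if r is None:
                    continue
                # both premises are consumed, the resolvent becomes available
                rest = [c for k, c in enumerate(pool) if k not in (i, j)]
                if search(rest + [r]):
                    return True
        return False
    return search([frozenset(c) for c in clauses])

# x, (!x | y), !y : unsatisfiable, and a read-once refutation exists
print(has_read_once_refutation([{1}, {-1, 2}, {-2}]))   # True
```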
4. A detection algorithm for cherry fruits based on the improved YOLO-v4 model
- Author
-
Rongli Gai, Na Chen, and Hai Yuan
- Subjects
0209 industrial biotechnology ,Backbone network ,Small volume ,Feature extraction ,Network structure ,02 engineering and technology ,Ripeness ,020901 industrial engineering & automation ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Shading ,Orchard ,Algorithm ,Software ,Digital agriculture ,Mathematics - Abstract
"Digital" agriculture is rapidly affecting the value of agricultural output. Robotic picking of ripe produce enables accurate and rapid picking, making agricultural harvesting intelligent. How to increase product output has also become a challenge for digital agriculture. During the cherry growth process, rapid and accurate detection of cherry fruits is the key to the development of cherry fruits in digital agriculture. Environmental problems such as shading are the biggest challenge for cherry fruit detection. This paper proposes an improved YOLO-V4 deep learning algorithm to detect cherry fruits; the model is suited to the small size of cherry fruits. The YOLO-V4 backbone network CSPDarknet53 is extended with DenseNet-style dense connections between layers, and the a priori box in the YOLO-V4 model is changed to a circular marker box that fits the shape of the cherry fruit. Based on the improved YOLO-V4 model, feature extraction is enhanced, the network structure is deepened, and the detection speed is improved. To verify the effectiveness of this method, the deep learning algorithms YOLO-V3, YOLO-V3-dense and YOLO-V4 are compared. The results show that the mAP (mean average precision) value obtained by the improved YOLO-V4 model (YOLO-V4-dense) network in this paper is 0.15 higher than that of YOLO-V4. In actual orchard applications, cherries of different ripeness in the same area can be detected, fruits with large ripeness differences can be singled out for manual intervention, and finally the yield of cherry fruits can be increased.
- Published
- 2021
5. The Distance Induced OWA Operator with Application to Multi-criteria Group Decision Making
- Author
-
Weiwei Liu, Ying Zhou, Yuzhen Hu, Chengju Gong, and Yi Su
- Subjects
Group (mathematics) ,Computational intelligence ,Hamming distance ,02 engineering and technology ,Interval (mathematics) ,Measure (mathematics) ,Distance measures ,Theoretical Computer Science ,Group decision-making ,Operator (computer programming) ,Computational Theory and Mathematics ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Algorithm ,Software ,Mathematics - Abstract
Some aggregation operators with distance measures have been proposed, but in them the distance-measure values play the role of argument variables. In this paper, the induced ordered weighted averaging (IOWA) operator with the Hamming distance is proposed, termed the distance induced ordered weighted averaging (DIOWA) operator. The most distinctive characteristic of the DIOWA operator is that the distance-measure values play the role of order-inducing variables. Some properties and special cases of the DIOWA operator are analyzed. This new operator is further extended to uncertain situations represented by interval numbers. A new multi-criteria group decision-making (MCGDM) method with the proposed operators is also studied. Finally, a numerical example on selecting the best candidate is given to show how to use the DIOWA operator with interval numbers in group decision making.
- Published
- 2020
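A minimal numeric sketch of the idea may help: in an induced OWA aggregation, the order-inducing values decide the reordering, and in the DIOWA case those inducing values are distances. The use of per-component absolute (Hamming) distances below is my reading of the abstract, not the paper's exact formulation.

```python
import numpy as np

def iowa(order_inducing, arguments, weights):
    """Induced OWA: reorder the arguments by decreasing value of the
    order-inducing variables, then take the weighted average."""
    idx = np.argsort(order_inducing)[::-1]          # decreasing inducing values
    return float(np.dot(weights, np.asarray(arguments, float)[idx]))

def diowa(x, y, arguments, weights):
    """DIOWA sketch: per-component Hamming (absolute) distances between two
    evaluation vectors x and y serve as the order-inducing variables."""
    d = np.abs(np.asarray(x, float) - np.asarray(y, float))
    return iowa(d, arguments, weights)

# toy example: three criteria, OWA weights summing to 1
print(diowa([0.9, 0.4, 0.7], [0.6, 0.5, 0.2],
            arguments=[0.8, 0.3, 0.6], weights=[0.5, 0.3, 0.2]))
```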
6. Mixed-order sampling of 2-D frequency distributions by using the concept of common superset
- Author
-
Toshihiro Hori
- Subjects
Applied Mathematics ,Crystal system ,Sampling (statistics) ,020206 networking & telecommunications ,02 engineering and technology ,Subset and superset ,Computer Science Applications ,Artificial Intelligence ,Hardware and Architecture ,Lattice (order) ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Cluster (physics) ,020201 artificial intelligence & image processing ,Frequency distribution ,Multidimensional systems ,Algorithm ,Computer Science::Databases ,Software ,Information Systems ,Mathematics - Abstract
It has been found that tiling clusters and pair regions of frequency distributions play an important role in sampling problems. When a given 2-D frequency distribution is made up of one tiling cluster, first-order sampling can be used, and when it is made up of two tiling clusters with the same periodicity lattice system, second-order sampling can be used by dividing the tiling clusters into sets of pair regions. However, what kind of sampling can be used for other, more complicated 2-D frequency distributions has remained unknown. This paper discusses the sampling of frequency distributions which are made up of tiling clusters that belong to different periodicity lattice systems. We introduce the concept of the common superset: the lattice system which is the common superset of all the lattice systems derived from the individual tiling clusters. As a practical example, the sampling of a 2-D frequency distribution made up of one main-body component and two debris components is calculated in this paper. This type of sampling is a mixture of first-order and higher-order samplings. The sampling method can be applied to other 2-D frequency distributions whenever a common superset exists for the given frequency distribution.
- Published
- 2018
7. Synthetic Correlation Coefficient Between Hesitant Fuzzy Sets with Applications
- Author
-
Guidong Sun, Xin Guan, Xiao Yi, and Zheng Zhou
- Subjects
0209 industrial biotechnology ,Correlation coefficient ,HFSS ,Fuzzy set ,02 engineering and technology ,Variance (accounting) ,Fuzzy logic ,Theoretical Computer Science ,Weighting ,020901 industrial engineering & automation ,Computational Theory and Mathematics ,Artificial Intelligence ,Pattern recognition (psychology) ,0202 electrical engineering, electronic engineering, information engineering ,Feature (machine learning) ,020201 artificial intelligence & image processing ,Algorithm ,Software ,Mathematics - Abstract
Hesitant fuzzy sets (HFSs) are becoming more and more popular in the fuzzy domain and attract great attention. As an important research orientation for HFSs, the measurement of correlation coefficients between HFSs is a hot topic. Although some correlation coefficients have been proposed in previous papers, we have to claim that the existing correlation coefficients are more or less counter-intuitive under some circumstances. On one hand, they consider only one feature of the HFSs and ignore other important features contributing to the correlation coefficient. On the other hand, they require the lengths of the memberships of each hesitant fuzzy element (HFE) in the HFSs to be the same. Therefore, we point out the shortcomings of the existing correlation coefficients in this paper and propose the synthetic correlation coefficient between HFSs, considering the integrality, the distribution and the length of the membership. Firstly, we define such basic concepts as the mean, the variance and the length rate of HFEs and HFSs to represent the integrality, the distribution and the length. Secondly, based on these basic concepts, we define the mean, variance and length correlation coefficients. Furthermore, we construct the synthetic correlation coefficient by weighting these three basic correlation coefficients. In addition, to cope with practical issues, we extend the synthetic correlation coefficient to a weighted form. Finally, we apply the synthetic correlation coefficient to such information fusion problems as data association, pattern recognition, medical diagnosis, decision making and cluster analysis. Along with some practical examples, the superiority of the proposed synthetic correlation coefficient in validity, discrimination, accuracy, intuitiveness and efficiency is illustrated in detail.
- Published
- 2018
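The following Python sketch illustrates the three-feature construction described above, using the mean, variance and length rate of each hesitant fuzzy element as stand-ins for the paper's "integrality", "distribution" and "length" features; the actual definitions and weighting in the paper may differ.

```python
import numpy as np

def corr(u, v):
    """Correlation-coefficient form commonly used for fuzzy sets:
    inner product over the geometric mean of the energies."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(np.sum(u * v) / max(np.sqrt(np.sum(u**2) * np.sum(v**2)), 1e-12))

def synthetic_corr(A, B, w=(1/3, 1/3, 1/3)):
    """Sketch of a synthetic correlation coefficient between two HFSs A and B
    (lists of hesitant fuzzy elements, each a list of memberships).
    Note the HFEs may have different lengths; no padding is required."""
    max_len = max(len(h) for h in A + B)
    rho = (corr([np.mean(h) for h in A], [np.mean(h) for h in B]),   # integrality
           corr([np.var(h) for h in A],  [np.var(h) for h in B]),    # distribution
           corr([len(h) / max_len for h in A],
                [len(h) / max_len for h in B]))                      # length
    return float(np.dot(w, rho))

A = [[0.2, 0.4], [0.6, 0.7, 0.9]]   # two HFEs with different lengths
B = [[0.3], [0.5, 0.8]]
print(synthetic_corr(A, B))
```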
8. Performance analysis and improvement of direct position determination based on Doppler frequency shifts in presence of model errors: case of known waveforms
- Author
-
Jiexin Yin, Ding Wang, Ruirui Liu, Hongyi Yu, and Yunlong Wang
- Subjects
Mean squared error ,Applied Mathematics ,Estimator ,020206 networking & telecommunications ,02 engineering and technology ,Function (mathematics) ,Computer Science Applications ,Reduction (complexity) ,Artificial Intelligence ,Hardware and Architecture ,Position (vector) ,Robustness (computer science) ,Signal Processing ,Hyperparameter optimization ,0202 electrical engineering, electronic engineering, information engineering ,Partial derivative ,020201 artificial intelligence & image processing ,Algorithm ,Software ,Information Systems ,Mathematics - Abstract
The direct position determination (DPD) method based on Doppler frequency shifts for signals with known waveforms was first proposed by Amar and Weiss. Although this method exhibits excellent asymptotic performance, it does not account for the effects of uncertainties in the receiver positions and velocities. These uncertainties may result in a considerable reduction of localization accuracy. In this paper, the statistical performance of the DPD estimator is investigated under receiver position and velocity errors (also called model errors). We derive an analytical expression for the mean square error in the estimated source location for the case where the estimator assumes that the receiver positions and velocities are accurate but they in fact contain errors. The main difficulty in the mathematical analysis is that the DPD cost function is not explicit with respect to the emitter position. Consequently, some algebraic manipulation is required to derive closed-form expressions for the first- and second-order partial derivatives of the cost function. The Cramér–Rao bounds (CRBs) for the target position estimation are also deduced in the presence and absence of model errors. These CRBs provide insights into the effects of model errors on the localization accuracy. Two improved DPD methods are developed based on our analysis results. The first has lower complexity than the common grid search, and the second exhibits increased robustness to model errors. Simulation results support and corroborate the theoretical developments in this paper.
- Published
- 2018
9. On characteristic cones of discrete nD autonomous systems: theory and an algorithm
- Author
-
Debasattam Pal and Mousumi Mukherjee
- Subjects
Constant coefficients ,Applied Mathematics ,Open problem ,Structure (category theory) ,020206 networking & telecommunications ,02 engineering and technology ,Symbolic computation ,Cone (formal languages) ,Domain (mathematical analysis) ,Computer Science Applications ,Artificial Intelligence ,Hardware and Architecture ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Algebraic number ,Multidimensional systems ,Algorithm ,Software ,Information Systems ,Mathematics - Abstract
In this paper, we provide a complete answer to the question of characteristic cones for discrete autonomous nD systems, with arbitrary $$n\geqslant 2$$ , described by linear partial difference equations with real constant coefficients. A characteristic cone is a special subset (having the structure of a cone) of the domain (here $$\mathbb {Z}^n$$ ) such that the knowledge of the trajectories on this set uniquely determines them over the whole domain. Despite its importance in numerous system-theoretic issues, the question of characteristic sets for multidimensional systems has not been answered in its full generality except for Valcher's seminal work on the special case of 2D systems (Valcher in IEEE Trans Circuits Syst Part I Fundam Theory Appl 47(3):290–302, 2000). This apparent lack of progress is perhaps due to the inapplicability of a crucial intermediate result by Valcher to cases with $$n\geqslant 3$$ . We illustrate this inapplicability of the above-mentioned result in Sect. 3 with the help of an example. We then provide an answer to this open problem of characterizing characteristic cones for discrete nD autonomous systems with general n; we prove an algebraic condition that is necessary and sufficient for a given cone to be a characteristic cone for a given system of linear partial difference equations with real constant coefficients. In the second part of the paper, we convert this necessary and sufficient condition into another equivalent algebraic condition, which is better suited from an algorithmic perspective. Using this result, we provide an algorithm, based on Gröbner bases, that is implementable using standard computer algebra packages, for testing whether a given cone is a characteristic cone for a given discrete autonomous nD system.
- Published
- 2018
10. DWT–SVD–WHT Watermarking Using Varying Strength Factor Derived From Means of the WHT Coefficients
- Author
-
Navneet Yadav
- Subjects
Discrete wavelet transform ,Multidisciplinary ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020207 software engineering ,Watermark ,02 engineering and technology ,Image (mathematics) ,Feature (computer vision) ,Singular value decomposition ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Segmentation ,Artificial intelligence ,business ,Digital watermarking ,Algorithm ,Mathematics ,Block (data storage) - Abstract
A hybrid watermarking technique is presented in this paper. This robust watermarking technique uses the discrete wavelet transform (DWT), singular value decomposition (SVD) and Walsh–Hadamard transform (WHT) in tandem with each other. There are two novelties in this paper: the first is the combined use of DWT, SVD and WHT, and the second is the first-ever use of $$4\times 4$$ block segmentation in robust watermarking. Another unique feature of the presented technique is that the visual quality of the watermarked image has been made controllable. A varying strength factor, taking a different value for every block ( $$8\times 8$$ or $$4\times 4$$ ) of the image, has been used. The proposed scheme is very secure, as the safety of the embedded watermark is not compromised even if the side information is intercepted. The proposed technique shows remarkable performance when compared with seven contemporary watermarking schemes.
- Published
- 2017
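A sketch of the block-wise strength-factor computation named in the title: the mean of each block's Walsh–Hadamard coefficients modulates the embedding strength. The WHT normalization and the scaling formula `base * (1 + m / 255)` are illustrative assumptions, not the paper's derivation.

```python
import numpy as np
from scipy.linalg import hadamard

def wht2(block):
    """2-D Walsh-Hadamard transform of a square block whose side is a power
    of 2 (one common normalization; others divide by sqrt(n) per axis)."""
    H = hadamard(block.shape[0])
    return H @ block @ H / block.size

def strength_factors(image, bsize=4, base=0.05):
    """One strength factor per bsize x bsize block, derived from the mean
    magnitude of that block's WHT coefficients.  Assumes the image sides are
    multiples of bsize.  These factors would then scale the watermark
    contribution embedded in each block's singular values."""
    h, w = image.shape
    factors = np.zeros((h // bsize, w // bsize))
    for i in range(0, h, bsize):
        for j in range(0, w, bsize):
            m = np.mean(np.abs(wht2(image[i:i + bsize, j:j + bsize].astype(float))))
            factors[i // bsize, j // bsize] = base * (1 + m / 255.0)
    return factors

img = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 "image"
print(strength_factors(img, bsize=4))
```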
11. Gravity Sliding Algorithm for Linear Programming
- Author
-
Pei-Zhuang Wang, Ho-Chung Lui, Hai-Tao Liu, and Sicong Guo
- Subjects
Simplex ,Linear programming ,020209 energy ,Feasible region ,02 engineering and technology ,Strongly polynomial algorithm ,Computer Science Applications ,Artificial Intelligence ,Norm (mathematics) ,0202 electrical engineering, electronic engineering, information engineering ,Business, Management and Accounting (miscellaneous) ,020201 artificial intelligence & image processing ,Statistics, Probability and Uncertainty ,Algorithm ,Subspace topology ,Mathematics - Abstract
An algorithm named Gravity Sliding is presented in this paper, which emulates gravity-driven sliding motion in a feasible region D constrained by a group of hyperplanes in $$R^{m}$$ . At each stage point P of the sliding path, we need to calculate the projection of the gravity vector g on the constraint planes: if a constraint plane blocks the way at P, then it may change the direction of the sliding path. The core technique is the combined treatment of multiple blocking planes, which is a basic problem of structural adjustment in practice, while the whole path provides the solution of a linear program. Existing LP algorithms have no intuitive picture with which to emulate gravity sliding; therefore, their paths cannot avoid circling and roving, and they cannot provide a best direction at each step for structural adjustment. The first author presented the Cone Cutting algorithm (Wang in Inf Technol Decis Mak 10(1):65–82, 2011), which provides an intuitive explanation for Simplex pivoting, and then the Gradient Falling algorithm (Wang in Ann Data Sci 1(1):41–71, 2014. doi: 10.1007/s40745-014-0005-9 ), which emulates gradient motion on the feasible region. This paper is an improvement of the gradient falling algorithm: in place of a description focusing on the null space of the normal vectors, we focus the description on the subspace spanned by those vectors, which makes the projection calculation easier and faster. We conjecture that the sliding path realized by the algorithm is the optimal path and that the number of stage points of the path is bounded by a polynomial function of the dimension and the number of constraint planes.
- Published
- 2017
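The core projection step the abstract describes, computing the sliding direction once blocking planes are encountered, can be sketched as an orthogonal projection onto the complement of the span of the blocking planes' normal vectors. This is a plain linear-algebra reading of the abstract, not the paper's full treatment of multiple blocking planes.

```python
import numpy as np

def slide_direction(g, normals):
    """Project the gravity vector g onto the subspace orthogonal to the
    normals of the blocking planes; the result is the direction in which
    the point can keep sliding without leaving the feasible region."""
    if len(normals) == 0:
        return np.asarray(g, float)
    N = np.asarray(normals, float).T             # columns are plane normals
    P = np.eye(len(g)) - N @ np.linalg.pinv(N)   # projector onto span(N)-perp
    return P @ np.asarray(g, float)

g = np.array([0.0, 0.0, -1.0])                   # "gravity" in R^3
n = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)       # one blocking plane
print(slide_direction(g, [n]))                   # component of g along the plane
```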
12. A complete ranking of trapezoidal fuzzy numbers and its applications to multi-criteria decision making
- Author
-
Dhanasekaran Ponnialagan, Jeevaraj Selvaraj, and Lakshmana Gomathi Nayagam Velu
- Subjects
0209 industrial biotechnology ,Mathematical optimization ,Fuzzy classification ,Fuzzy measure theory ,Fuzzy set ,02 engineering and technology ,Type-2 fuzzy sets and systems ,Defuzzification ,020901 industrial engineering & automation ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Fuzzy number ,Fuzzy set operations ,020201 artificial intelligence & image processing ,Algorithm ,Software ,Membership function ,Mathematics - Abstract
Problems (or scenarios) involving qualitative or imprecise information are not solvable by classical set theory. To overcome this shortcoming of classical set theory, Zadeh (Inf Control 8(3):338–356, 26) introduced the concept of fuzzy sets, which generalizes the concept of classical sets. Fuzzy set theory allows modelling and handling of imprecise information in an effective way. As a special class of fuzzy sets, fuzzy numbers (FNs), which are very important in decision making, were introduced by Dubois and Prade (Int J Syst Sci 9:631–626, 12). The available methods for solving multi-criteria decision-making (MCDM) problems are problem-dependent in nature due to the partial ordering on the class of FNs. A total ordering on the class of FNs by a countable number of real-valued parameters was achieved by Wang and Wang (Fuzzy Sets Syst 243:131–141, 21). This paper achieves a complete ranking on the class of trapezoidal fuzzy numbers (TrFNs) using a finite number of score functions: a new (complete) ranking procedure on the class of TrFNs using the concepts of mid-point, radius, and left and right fuzziness of a TrFN is proposed, and further we introduce a method for solving fuzzy multi-criteria decision-making (fuzzy MCDM) problems. Finally, comparisons of our proposed method with familiar existing methods are listed.
- Published
- 2017
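A compact sketch of what a lexicographic total order on trapezoidal fuzzy numbers can look like, built from the four quantities named in the abstract (mid-point, radius, left and right fuzziness). The ordering of the tie-breakers and their signs are assumptions for illustration; the paper fixes its own sequence of score functions.

```python
from typing import NamedTuple

class TrFN(NamedTuple):
    """Trapezoidal fuzzy number with support [a, d] and core [b, c]."""
    a: float
    b: float
    c: float
    d: float

def score_vector(t: TrFN):
    """Tuple of score functions; sorting lexicographically on it yields a
    total order.  Ties on mid-point go to the smaller radius, then to the
    smaller left and right fuzziness (assumed tie-breaking for this sketch)."""
    midpoint = (t.b + t.c) / 2
    radius = (t.c - t.b) / 2
    left_fuzz = t.b - t.a
    right_fuzz = t.d - t.c
    return (midpoint, -radius, -left_fuzz, -right_fuzz)

xs = [TrFN(1, 2, 3, 4), TrFN(0, 2, 3, 5), TrFN(1.5, 2.5, 2.5, 3.5)]
print(sorted(xs, key=score_vector, reverse=True))   # best-ranked first
```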
13. Sensitivity Analysis of Intuitionistic Fuzzy Solid Transportation Problem
- Author
-
Shashi Aggarwal and Chavi Gupta
- Subjects
Mathematical optimization ,021103 operations research ,0211 other engineering and technologies ,Intuitionistic fuzzy ,Computational intelligence ,02 engineering and technology ,Theoretical Computer Science ,Computational Theory and Mathematics ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Sensitivity (control systems) ,Degeneracy (mathematics) ,Algorithm ,Software ,Mathematics ,Solid transportation problem - Abstract
This paper focuses on sensitivity analysis of the intuitionistic fuzzy solid transportation problem (IFSTP). In real-world situations, the IFSTP seems more realistic than the solid transportation problem (STP), as the available information is uncertain. Due to the degeneracy of the IFSTP, or of the STP, Type I sensitivity analysis seems impractical. Hence, in this paper, a new algorithm is proposed to determine the Type II sensitivity ranges to overcome this problem. A numerical example demonstrates the feasibility of the proposed algorithm and the efficiency of the recommended method over an existing method.
- Published
- 2017
14. Image Retrieval Based on Discrete Fractional Fourier Transform Via Fisher Discriminant
- Author
-
Jiangzhong Cao, Xiao-Zhi Zhang, Qingyun Dai, Bingo Wing-Kuen Ling, and Daniel P. K. Lun
- Subjects
Optimization problem ,business.industry ,Applied Mathematics ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020206 networking & telecommunications ,Pattern recognition ,02 engineering and technology ,Linear discriminant analysis ,Stationary point ,Discrete Fourier transform ,Fractional Fourier transform ,Image (mathematics) ,Classification rule ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Algorithm ,Image retrieval ,Mathematics - Abstract
The discrete fractional Fourier transform (DFrFT) is a powerful signal processing tool. This paper proposes a method for DFrFT-based image retrieval via the Fisher discriminant and a 1-NN classification rule. First, this paper proposes extending the conventional discrete Fourier transform (DFT) descriptors to DFrFT descriptors for representing the edges of images. The DFrFT descriptors extracted from the training images are employed to construct a dictionary, for which the corresponding optimal rotational angles of the DFrFTs have to be determined. This dictionary design problem is formulated as an optimization problem in which the Fisher discriminant is the objective function to be minimized. This optimization problem is nonconvex (Guan et al. in IEEE Trans Image Process 20(7):2030–2048, 2011; Ho et al. in IEEE Trans Signal Process 58(8):4436–4441, 2010). Furthermore, both the intraclass separation and the interclass separation of the DFrFT descriptors are independent of the rotational angles if these separations are defined in terms of the 2-norm operator. To tackle these difficulties, the 1-norm operator is employed. However, the reformulated optimization problem is nonsmooth. To solve this problem, the nondifferentiable points of the objective function are found. Then, the stationary points between any two consecutive nondifferentiable points are identified. The objective function values are evaluated at these nondifferentiable points and these stationary points. The smallest L objective function values are picked and the corresponding rotational angles are determined, which are then used to construct the dictionary. Here, L is the total number of rotational angles of the DFrFTs used to construct the dictionary. Finally, a 1-NN classification rule is applied to perform the image retrieval. Application examples and experimental results show that our proposed method outperforms the conventional DFT approach.
- Published
- 2016
15. Multi-level interval-valued fuzzy concept lattices and their attribute reduction
- Author
-
Lifeng Li
- Subjects
Fuzzy classification ,Theoretical computer science ,02 engineering and technology ,Type-2 fuzzy sets and systems ,Defuzzification ,Fuzzy logic ,Artificial Intelligence ,020204 information systems ,Fuzzy mathematics ,0202 electrical engineering, electronic engineering, information engineering ,Fuzzy number ,Fuzzy set operations ,020201 artificial intelligence & image processing ,Fuzzy associative matrix ,Computer Vision and Pattern Recognition ,Algorithm ,Software ,Mathematics - Abstract
The paper introduces the multi-level interval-valued fuzzy concept lattices in an interval-valued fuzzy formal context. It introduces the notion of multi-level attribute reductions in an interval-valued fuzzy formal context and investigates related properties. In addition, the paper formulates a corresponding attribute reduction method by constructing a discernibility matrix and its associated Boolean function. The paper also proposes the multi-level granule representation in interval-valued fuzzy formal contexts.
- Published
- 2016
16. Minimum class variance support vector ordinal regression
- Author
-
Zengxi Huang, Jinrong Hu, and Xiaoming Wang
- Subjects
0301 basic medicine ,Ordinal data ,business.industry ,Generalization ,Contrast (statistics) ,02 engineering and technology ,Variance (accounting) ,Machine learning ,computer.software_genre ,Ordinal regression ,Support vector machine ,Ordinal optimization ,03 medical and health sciences ,030104 developmental biology ,Artificial Intelligence ,Kernelization ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,computer ,Algorithm ,Software ,Mathematics - Abstract
The support vector ordinal regression (SVOR) method is derived from the support vector machine and developed to tackle ordinal regression problems. However, it ignores the distribution characteristics of the data. In this paper, we propose a novel method to handle ordinal regression problems, referred to as minimum class variance support vector ordinal regression (MCVSVOR). In contrast with SVOR, MCVSVOR explicitly takes into account the distribution of the categories and achieves better generalization performance. Moreover, the problem of MCVSVOR can be transformed into one of SVOR; thus, existing SVOR software can be used to solve the MCVSVOR problem. In this paper, we first discuss the linear case of MCVSVOR and then develop the nonlinear MCVSVOR using the kernel trick. Comprehensive experimental results show that the proposed method is effective and achieves better generalization performance than SVOR.
- Published
- 2016
17. Relevance vector machines using weighted expected squared distance for ore grade estimation with incomplete data
- Author
-
Keyou You, Shiji Song, Xunan Zhang, Yukui Zhang, and Cheng Wu
- Subjects
business.industry ,Inverse ,Computational intelligence ,Regression analysis ,02 engineering and technology ,010502 geochemistry & geophysics ,Machine learning ,computer.software_genre ,01 natural sciences ,Weighting ,Relevance vector machine ,Artificial Intelligence ,Spatial reference system ,0202 electrical engineering, electronic engineering, information engineering ,Entropy (information theory) ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Algorithm ,computer ,Software ,0105 earth and related environmental sciences ,Mathematics ,Extreme learning machine - Abstract
Accurate ore grade estimation is crucial to mineral resources evaluation and exploration. In this paper, we consider the borehole data collected from the Solwara 1 deposit, where the hydrothermal sulfide ore body is quite complicated and the ore grade values are incomplete. To solve this estimation problem, the relevance vector machine (RVM) and the expected squared distance (ESD) algorithm are incorporated into one regression model. Moreover, we improve the ESD algorithm by weighting the attributes of the data set, yielding the weighted expected squared distance (WESD). In this paper, we uncover the symbiosis characteristics among different elements of the deposits by statistical analysis, which makes it possible to estimate a certain metal based on the data of other elements instead of on geographical position. The proposed WESD-RVM features high sparsity and accuracy, as well as the capability of handling incomplete data. The effectiveness of the proposed model is demonstrated by comparison with other estimation algorithms: the inverse distance weighted method and the Kriging algorithm, which utilize only geographical spatial coordinates as inputs; the extreme learning machine, which is unable to deal with incomplete data; and the ordinary ESD-based RVM regression model without the entropy-weighted distance. The experimental results show that the proposed WESD-RVM outperforms the other methods with considerable predictive and generalizing ability.
- Published
- 2016
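The expected-squared-distance idea for incomplete data admits a short sketch: where an attribute is missing, the squared difference is replaced by its expectation under a per-attribute Gaussian model, and attributes are weighted (the paper uses entropy weights). The formulas below follow from standard Gaussian moments; the paper's exact derivation is the authoritative one.

```python
import numpy as np

def wesd(x, y, weights, means, variances):
    """Weighted expected squared distance between two samples, with NaN
    marking a missing attribute.  Assumes attribute j ~ N(means[j],
    variances[j]) when unobserved."""
    d = 0.0
    for j, w in enumerate(weights):
        xm, ym = np.isnan(x[j]), np.isnan(y[j])
        if not xm and not ym:
            term = (x[j] - y[j]) ** 2                    # both observed
        elif xm and ym:
            term = 2 * variances[j]                      # E[(X - Y)^2], X, Y iid
        else:
            obs = y[j] if xm else x[j]
            term = variances[j] + (obs - means[j]) ** 2  # E[(X - obs)^2]
        d += w * term
    return d

means, variances = [0.5, 1.0], [0.04, 0.25]
x = [0.6, float("nan")]            # second attribute missing
y = [0.4, 1.2]
print(wesd(x, y, weights=[0.7, 0.3], means=means, variances=variances))
```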
18. Branch pipe routing based on 3D connection graph and concurrent ant colony optimization algorithm
- Author
-
Dan Jiang, Yanfeng Qu, and Qingyan Yang
- Subjects
Optimal design ,0209 industrial biotechnology ,Mathematical optimization ,Optimization problem ,Ant colony optimization algorithms ,System optimization ,02 engineering and technology ,Industrial and Manufacturing Engineering ,Pipeline transport ,020901 industrial engineering & automation ,Pipe routing ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Graph (abstract data type) ,020201 artificial intelligence & image processing ,Algorithm ,Software ,Branch point ,Mathematics - Abstract
Pipe routing, in particular for branch pipes with multiple terminals, has an important influence on product performance and reliability. This paper develops a new rectilinear branch pipe routing approach for the automatic generation of optimal rectilinear branch pipe routes in constrained spaces. Firstly, this paper presents a new 3D connection graph, constructed by extending a new 2D connection graph that is built according to five criteria in discrete Manhattan spaces. The 3D connection graph can model the 3D constrained layout space efficiently. The length of the pipelines and the number of bends are modeled as the design objectives, considering the number of branch points and three types of engineering constraints; the three types of engineering constraints are modeled by this 3D graph and a potential value. Secondly, a new concurrent Max–Min Ant System optimization algorithm, which adopts a concurrent search strategy and a dynamic update mechanism, is used to solve the rectilinear branch pipe routing optimization problem. This algorithm improves the search efficiency in the 3D constrained layout space. Numerical comparisons with other current approaches in the literature demonstrate the efficiency and effectiveness of the proposed approach. Finally, a case study of pipe routing for aero-engines is conducted to validate this approach.
- Published
- 2016
19. Multiple straight-line fitting using a Bayes factor
- Author
-
Leonardo Romero, Cuauhtemoc Gomez, and Carlos Lara-Alvarez
- Subjects
Statistics and Probability ,Observational error ,business.industry ,Applied Mathematics ,Bayesian probability ,0211 other engineering and technologies ,020302 automobile design & engineering ,Robotics ,Bayes factor ,02 engineering and technology ,Machine learning ,computer.software_genre ,Computer Science Applications ,Set (abstract data type) ,0203 mechanical engineering ,Artificial intelligence ,business ,computer ,Algorithm ,021101 geological & geomatics engineering ,Mathematics - Abstract
This paper introduces a Bayesian approach to the problem of fitting multiple straight lines to a set of 2D points. Whereas other approaches use many arbitrary parameters and threshold values, the proposed criterion uses only the parameters of the measurement errors. Models with multiple lines are useful in many applications; this paper analyzes the performance of the new approach on a classical problem in robotics: finding a map of lines from laser measurements. Tests show that the Bayesian approach obtains reliable models.
- Published
- 2016
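The model-comparison flavor of the approach can be sketched with a BIC score, a large-sample surrogate for the Bayes factor, over candidate one-line and two-line models; the paper instead derives the Bayes factor directly from the measurement-error model. The total-least-squares residual and the two-parameters-per-line count below are my assumptions.

```python
import numpy as np

def line_rss(pts):
    """Orthogonal residual sum of squares of the best total-least-squares
    line: the squared smallest singular value of the centered points."""
    centered = pts - pts.mean(axis=0)
    return float(np.linalg.svd(centered, compute_uv=False)[-1] ** 2)

def bic_lines(pts, k, labels):
    """BIC of a k-line model under a given point-to-line assignment;
    lower is better.  Each line contributes two parameters."""
    pts = np.asarray(pts, float)
    n = len(pts)
    rss = sum(line_rss(pts[labels == j]) for j in range(k))
    return n * np.log(max(rss, 1e-12) / n) + 2 * k * np.log(n)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 30)
l1 = np.c_[t, 2 * t] + 0.01 * rng.standard_normal((30, 2))
l2 = np.c_[t, 1 - t] + 0.01 * rng.standard_normal((30, 2))
pts = np.vstack([l1, l2])
labels2 = np.r_[np.zeros(30, int), np.ones(30, int)]
# the two-line model should win (smaller BIC) on this two-line dataset
print(bic_lines(pts, 1, np.zeros(60, int)), bic_lines(pts, 2, labels2))
```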
20. Semiparametric Bayesian inference on generalized linear measurement error models
- Author
-
Nian-Sheng Tang, De-Wang Li, and An-Min Tang
- Subjects
Statistics and Probability ,Kullback–Leibler divergence ,Bayesian probability ,Multivariate normal distribution ,Bayesian inference ,01 natural sciences ,010104 statistics & probability ,symbols.namesake ,0504 sociology ,Statistics::Methodology ,0101 mathematics ,Cook's distance ,Mathematics ,business.industry ,05 social sciences ,050401 social sciences methods ,Pattern recognition ,Variable-order Bayesian network ,Statistics::Computation ,symbols ,Artificial intelligence ,Statistics, Probability and Uncertainty ,Bayesian linear regression ,business ,Algorithm ,Gibbs sampling - Abstract
The classical assumption in generalized linear measurement error models (GLMEMs) is that measurement errors (MEs) for covariates are distributed as a fully parametric distribution such as the multivariate normal distribution. This paper uses a centered Dirichlet process mixture model to relax the fully parametric distributional assumption of MEs, and develops a semiparametric Bayesian approach to simultaneously obtain Bayesian estimations of parameters and covariates subject to MEs by combining the stick-breaking prior and the Gibbs sampler together with the Metropolis–Hastings algorithm. Two Bayesian case-deletion diagnostics are proposed to identify influential observations in GLMEMs via the Kullback–Leibler divergence and Cook’s distance. Computationally feasible formulae for evaluating Bayesian case-deletion diagnostics are presented. Several simulation studies and a real example are used to illustrate our proposed methodologies.
- Published
- 2016
21. Reaching optimized parameter set: protein secondary structure prediction using neural network
- Author
-
Jyotshna Dongardive and Siby Abraham
- Subjects
0301 basic medicine ,030102 biochemistry & molecular biology ,Artificial neural network ,Correlation coefficient ,Biomolecules (q-bio.BM) ,68T45 ,Quantitative Biology - Quantitative Methods ,Set (abstract data type) ,03 medical and health sciences ,030104 developmental biology ,Quantitative Biology - Biomolecules ,Rate of convergence ,Artificial Intelligence ,FOS: Biological sciences ,Encoding (memory) ,Convergence (routing) ,Sensitivity (control systems) ,Layer (object-oriented design) ,Algorithm ,Quantitative Methods (q-bio.QM) ,Software ,Mathematics - Abstract
We propose an optimized parameter set for protein secondary structure prediction using a three-layer feed-forward back-propagation neural network. The methodology uses four parameters, viz. encoding scheme, window size, number of neurons in the hidden layer, and type of learning algorithm. The input layer of the network consists of a number of neurons varying from 3 to 19, corresponding to different window sizes. The hidden layer takes a natural number from 1 to 20 as its number of neurons. The output layer consists of three neurons, each corresponding to one of the known secondary structural classes, viz. alpha helix, beta strand and coil. The study also uses eight different learning algorithms and nine encoding schemes. Exhaustive experiments were performed using a non-homologous dataset. The experimental results were compared using performance measures such as Q3, sensitivity, specificity, Matthews correlation coefficient and accuracy. The paper also discusses the process of obtaining a stabilized cluster of 2530 records from a collection of 11340 records. The graphs of these stabilized clusters of records with respect to accuracy are concave, convergence is monotonically increasing, and the rate of convergence is uniform. The paper identifies BLOSUM62 as the encoding scheme, 19 as the window size, 19 as the number of neurons in the hidden layer, and One-Step Secant as the learning algorithm, with the highest accuracy of 78%. These parameter values are proposed as the optimized parameter set for the three-layer feed-forward back-propagation neural network for protein secondary structure prediction.
- Published
- 2016
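To make the windowing concrete: each residue is encoded together with its neighbors in a sliding window, producing one fixed-length input vector per residue for the network. The one-hot (sparse orthogonal) encoding below is just one of the nine schemes the paper compares; its best configuration uses BLOSUM62 rows with a window of 19.

```python
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def window_features(seq, w=19):
    """Sliding-window one-hot encoding: one (w * 20)-dimensional vector per
    residue.  Positions padded past the sequence ends encode as all zeros."""
    pad = w // 2
    padded = "X" * pad + seq + "X" * pad      # "X" = padding, not in AMINO
    feats = []
    for i in range(len(seq)):
        win = padded[i:i + w]
        vec = np.zeros((w, len(AMINO)))
        for j, aa in enumerate(win):
            if aa in AMINO:
                vec[j, AMINO.index(aa)] = 1.0
        feats.append(vec.ravel())
    return np.array(feats)

X = window_features("MKTAYIAKQR")
print(X.shape)   # (10, 380): one row per residue, fed to the input layer
```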
22. General relation-based variable precision rough fuzzy set
- Author
-
Weimin Ma, Eric C. C. Tsang, and Bingzhen Sun
- Subjects
0209 industrial biotechnology ,Fuzzy classification ,Fuzzy set ,Dominance-based rough set approach ,02 engineering and technology ,Type-2 fuzzy sets and systems ,computer.software_genre ,Fuzzy logic ,020901 industrial engineering & automation ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Fuzzy number ,Fuzzy set operations ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Rough set ,Data mining ,Algorithm ,computer ,Software ,Mathematics - Abstract
In order to effectively handle the real-valued data sets that arise in practice, it is valuable from both theoretical and practical aspects to combine the fuzzy rough set and the variable precision rough set so that a powerful tool can be developed: a model of fuzzy variable precision rough sets that not only can handle numerical data but also is less sensitive to misclassification and perturbation. In this paper, we propose a new variable precision rough fuzzy set by introducing a variable precision parameter into the generalized rough fuzzy set, i.e., the variable precision rough fuzzy set based on a general relation. We define, by a constructive approach, the variable precision rough lower and upper approximations of any fuzzy set and its level set with respect to the variable precision parameter. We also present the properties of the proposed model in detail. Meanwhile, we establish the relationship between the variable precision rough approximation of a fuzzy set and the rough approximation of the level set of a fuzzy set. Furthermore, we give a new approach to uncertainty measurement for the variable precision rough fuzzy set established in this paper, in order to overcome the limitations of traditional methods. Finally, some numerical examples are used to illustrate the validity of the conclusions given in this paper.
- Published
- 2015
23. A lagrangian propagator for artificial neural networks in constraint programming
- Author
-
Michele Lombardi, Stefano Gualandi, Lombardi, Michele, and Gualandi, Stefano
- Subjects
Mathematical optimization ,02 engineering and technology ,symbols.namesake ,Artificial Intelligence ,Computational Theory and Mathematic ,Constraint logic programming ,0202 electrical engineering, electronic engineering, information engineering ,Constraint programming ,Discrete Mathematics and Combinatorics ,Overhead (computing) ,Subgradient method ,Discrete Mathematics and Combinatoric ,Mathematics ,Lagrangian relaxation ,Artificial neural network ,020208 electrical & electronic engineering ,Constraint satisfaction ,Neural network ,Constraint (information theory) ,Computational Theory and Mathematics ,symbols ,020201 artificial intelligence & image processing ,Algorithm ,Software - Abstract
This paper discusses a new method to perform propagation over a (two-layer, feed-forward) Neural Network embedded in a Constraint Programming model. The method is meant to be employed in Empirical Model Learning, a technique designed to enable optimal decision making over systems that cannot be modeled via conventional declarative means. The key step in Empirical Model Learning is to embed a Machine Learning model into a combinatorial model. It has been shown that Neural Networks can be embedded in a Constraint Programming model by simply encoding each neuron as a global constraint, which is then propagated individually. Unfortunately, this decomposition approach may lead to weak bounds. To overcome this limitation, we propose a new network-level propagator based on a non-linear Lagrangian relaxation that is solved with a subgradient algorithm. The method proved capable of dramatically reducing the search tree size on a thermal-aware dispatching problem on multicore CPUs. The overhead of optimizing the Lagrangian multipliers is kept at a reasonable level via a few simple techniques. This paper is an extended version of [27], featuring an improved structure, a new filtering technique for the network inputs, a set of overhead-reduction techniques, and a thorough experimentation.
- Published
- 2015
24. An efficient pruning strategy for approximate string matching over suffix tree
- Author
-
Jianzhong Li, Hong Gao, Huan Hu, and Hongzhi Wang
- Subjects
Compressed suffix array ,Suffix tree ,String (computer science) ,Generalized suffix tree ,Commentz-Walter algorithm ,020207 software engineering ,02 engineering and technology ,String searching algorithm ,Approximate string matching ,law.invention ,Human-Computer Interaction ,Artificial Intelligence ,Hardware and Architecture ,law ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Depth-first search ,Algorithm ,Software ,Information Systems ,Mathematics - Abstract
Approximate string matching over a suffix tree with depth-first search (ASM_ST_DFS), a classical algorithm in the field of approximate string matching, was originally proposed by Ricardo A. Baeza-Yates and Gaston H. Gonnet in 1990. The algorithm is one of the most effective algorithms for approximate string matching when combined with other indexing techniques. However, its time complexity is sensitive to the length of the pattern string because it searches $$m+k$$ characters on each path from the root before backtracking. In this paper, we propose an efficient pruning strategy to solve this problem, and we prove its correctness and efficiency in theory. In particular, we prove that if the pruning strategy is adopted, the algorithm searches O(k) characters on average on each path before backtracking, instead of O(m). Considering that each internal node of a suffix tree has multiple branches, the pruning strategy should work very well. We also show experimentally that when k is much smaller than m, the efficiency improves hundreds of times, and when k is not much smaller than m, it is still several times faster. This is the first paper that tries to solve the backtracking problem of ASM_ST_DFS in both theory and practice.
- Published
- 2015
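The backtracking behavior being improved here can be sketched over a naive suffix trie (a stand-in for the suffix tree): a depth-first search carries one edit-distance DP row per traversed character and backtracks as soon as no entry of the row is within the budget k. The pruning rule shown is the classical cutoff; the paper's strategy strengthens it so that each path costs O(k) characters on average.

```python
def build_suffix_trie(text):
    """Naive O(n^2) suffix trie; nodes are plain dicts keyed by character."""
    root = {}
    for i in range(len(text)):
        node = root
        for ch in text[i:]:
            node = node.setdefault(ch, {})
    return root

def approx_search(trie, pattern, k):
    """DFS with one edit-distance DP row per traversed character.  Reports
    every text substring (= prefix of a suffix) within distance k of the
    pattern.  Backtracks as soon as min(row) > k: no extension can recover."""
    m, hits = len(pattern), []
    def dfs(node, prefix, row):
        if row[m] <= k:
            hits.append((prefix, row[m]))
        if min(row) > k:                      # pruning cutoff
            return
        for ch, child in node.items():
            new = [row[0] + 1]
            for j in range(1, m + 1):
                cost = 0 if pattern[j - 1] == ch else 1
                new.append(min(new[j - 1] + 1,        # insertion
                               row[j] + 1,            # deletion
                               row[j - 1] + cost))    # (mis)match
            dfs(child, prefix + ch, new)
    dfs(trie, "", list(range(m + 1)))
    return hits

trie = build_suffix_trie("banana")
print(approx_search(trie, "bana", 1))   # e.g. ("ban", 1), ("bana", 0), ...
```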
25. Fast algorithms of attribute reduction for covering decision systems with minimal elements in discernibility matrix
- Author
-
Ming Sun, Yanyan Yang, and Ze Dong
- Subjects
0209 industrial biotechnology ,Reduction (recursion theory) ,Relation (database) ,Computation ,Computational intelligence ,02 engineering and technology ,Matrix (mathematics) ,020901 industrial engineering & automation ,Artificial Intelligence ,Pattern recognition (psychology) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Rough set ,Algorithm ,Software ,Maximal element ,Mathematics - Abstract
Covering rough sets, which generalize traditional rough sets by considering coverings instead of partitions, were introduced to deal with set-valued, missing-valued or real-valued data sets. For decision systems with such kinds of data sets, attribute reduction with covering rough sets aims to delete superfluous attributes, and fast algorithms for finding reducts are clearly meaningful for practical problems. In the existing studies of attribute reduction with covering rough sets, the approach of the discernibility matrix is the theoretical foundation. However, it always incurs a heavy computational load and a large storage space, because all elements in the discernibility matrix must be found and stored. In this paper, we find that the minimal elements in the discernibility matrix alone are sufficient for finding reducts. This fact motivates us to develop algorithms that find reducts by employing only the minimal elements, without computing the other elements of the discernibility matrix. We first define the relative discernible relation of a covering to characterize the relationship between the minimal elements in the discernibility matrix and particular sample pairs in the covering decision system. By employing this relative discernible relation, we then develop algorithms to search for the minimal elements in the discernibility matrix and find reducts for the covering decision system. Finally, experimental comparisons with other existing algorithms for covering rough sets on several data sets demonstrate that the proposed algorithms can greatly reduce the running time of finding reducts.
- Published
- 2015
26. A new fuzzy clustering algorithm for the segmentation of brain tumor
- Author
-
T. Kalaiselvi, Pagavathigounder Balasubramaniam, and V. P. Ananthi
- Subjects
0209 industrial biotechnology ,Fuzzy classification ,Fuzzy clustering ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Pattern recognition ,02 engineering and technology ,Thresholding ,Fuzzy logic ,Theoretical Computer Science ,020901 industrial engineering & automation ,Histogram ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Segmentation ,Geometry and Topology ,Artificial intelligence ,Cluster analysis ,business ,Algorithm ,Software ,Membership function ,Mathematics - Abstract
This paper introduces a new clustering method based on interval-valued intuitionistic fuzzy sets (IVIFSs), generated from intuitionistic fuzzy sets, to analyze tumors in magnetic resonance (MR) images while reducing time complexity and errors. In fuzzy clustering, numerous sources of uncertainty arise during segmentation, involving the membership function, the distance measure, the fuzzifier, and so on. Due to the poor illumination of medical images, uncertainty also emerges in their gray levels. This paper concentrates on the uncertainty in the allotment of membership values to uncertain pixels. The proposed method first pre-processes the brain MR images to remove noise, standardize intensity, and extract the brain region. Subsequently, IVIFSs are constructed for use in the clustering algorithm. The results are compared with the segmented images obtained using histogram thresholding, k-means, fuzzy c-means, intuitionistic fuzzy c-means, and interval type-2 fuzzy c-means algorithms, and it is shown that the proposed method is more effective.
- Published
- 2015
27. A scheme for distributed compressed video sensing based on hypothesis set optimization techniques
- Author
-
Jian Chen, Yonghong Kuo, and Kai Wu
- Subjects
Elastic net regularization ,Applied Mathematics ,020206 networking & telecommunications ,02 engineering and technology ,Computer Science Applications ,Term (time) ,Set (abstract data type) ,Distance-vector routing protocol ,Sampling (signal processing) ,Artificial Intelligence ,Hardware and Architecture ,Norm (mathematics) ,Signal Processing ,Euclidean geometry ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Algorithm ,Software ,Information Systems ,Mathematics ,Block (data storage) - Abstract
The multi-hypothesis prediction technique can take great advantage of the correlation between video frames to obtain high-quality performance. In this paper, we propose a scheme for distributed compressive video sensing based on hypothesis set optimization techniques, which further enhances the reconstruction quality and reconstruction speed of video compared with existing schemes. The innovations in this paper comprise four parts: (1) a superb-hypotheses-selection-based hybrid hypothesis prediction technique, which selects the best hypotheses from the original hypothesis set corresponding to the block to be reconstructed in the video sequence to form a new set, and then implements hybrid hypothesis prediction (HHP) with the new one; (2) a hypothesis-set-update-based hybrid hypothesis prediction technique, which selects the high-quality hypotheses, derives new hypotheses by interpolation, and then replaces the noisy hypotheses with the new ones; (3) an advanced hybrid hypothesis prediction technique, which improves the judgment formula of the HHP model by averaging the Euclidean distances to each measurement, achieving adaptive judgment of the HHP model at various sampling rates; (4) an adaptive weighted elastic net (AWEN) technique, which combines the $$\ell _1$$ and $$\ell _2$$ norms and then weights both of them with the distance vector to form the AWEN penalty term. The simulation results show that our proposal outperforms state-of-the-art schemes that do not use the hypothesis set optimization techniques.
- Published
- 2015
28. 3D face recognition in the Fourier domain using deformed circular curves
- Author
-
Hamid Krim and Deokwoo Lee
- Subjects
Geodesic ,0211 other engineering and technologies ,Image processing ,02 engineering and technology ,Facial recognition system ,Matrix (mathematics) ,symbols.namesake ,Dimension (vector space) ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Point (geometry) ,Computer vision ,Mathematics ,021110 strategic, defence & security studies ,business.industry ,Applied Mathematics ,Computer Science Applications ,Fourier transform ,Hardware and Architecture ,Face (geometry) ,Signal Processing ,symbols ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Algorithm ,Software ,Information Systems - Abstract
One of the most significant problems in image and vision applications is the efficient representation of a target image containing a large amount of data with high complexity. The ability to analyze high-dimensional signals in a lower dimension without losing their information has been crucial in the field of image processing. This paper proposes an approach to 3D face recognition using dimensionality reduction based on deformed circular curves, on the shortest geodesic distances, and on the properties of the Fourier transform. The measured geodesic distance information generates a matrix whose entries are geodesic distances between the reference point and an arbitrary point on a 3D object, and a one-dimensional vector is generated by reshaping the matrix without losing the original properties of the target object. By the symmetry of the magnitude response of the Fourier transform, the original signal can be analyzed in the lower-dimensional space without loss of inherent characteristics. This paper mainly deals with the efficient representation and recognition algorithm using deformed circular curves, and the simulations show promising results for the recognition of geometric face information.
- Published
- 2015
29. Robustness of classification ability of spiking neural networks
- Author
-
Yan Liu, Pingping Zhang, and Jie Yang
- Subjects
FOS: Computer and information sciences ,Gaussian ,Computer Science::Neural and Evolutionary Computation ,MathematicsofComputing_NUMERICALANALYSIS ,Xor problem ,Aerospace Engineering ,Perturbation (astronomy) ,Machine Learning (stat.ML) ,Ocean Engineering ,Machine Learning (cs.LG) ,symbols.namesake ,Statistics - Machine Learning ,Electrical and Electronic Engineering ,Mathematics ,Spiking neural network ,Artificial neural network ,business.industry ,Applied Mathematics ,Mechanical Engineering ,Computer Science - Learning ,Control and Systems Engineering ,symbols ,Probability distribution ,Artificial intelligence ,business ,Algorithm - Abstract
It is well known that the robustness of artificial neural networks (ANNs) is important for their wide range of applications. In this paper, we focus on the robustness of the classification ability of a spiking neural network which receives perturbed inputs. In principle, the perturbation is allowed to be of arbitrary style; however, Gaussian perturbation and other regular perturbations have rarely been investigated. For classification problems, the closer to the desired point, the more perturbed points there are in the input space. In addition, the perturbation may be periodic. Based on these facts, we consider only sinusoidal and Gaussian perturbations in this paper. With the SpikeProp algorithm, we perform extensive experiments on the classical XOR problem and three other benchmark datasets. The numerical results show that there is no significant reduction in the classification ability of the network if the input signals are subject to sinusoidal and Gaussian perturbations.
- Published
- 2015
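The experimental protocol sketched below mirrors the one described above: perturb the inputs with a sinusoidal or Gaussian signal and compare classification accuracy before and after. The `model_predict` callable is a placeholder for any trained classifier, standing in for the paper's SpikeProp-trained spiking networks.

```python
import numpy as np

def perturb_sinusoidal(X, amp=0.1, freq=2.0, phase=0.0):
    """Add a deterministic sinusoidal perturbation across the feature axis."""
    t = np.arange(X.shape[1])
    return X + amp * np.sin(2 * np.pi * freq * t / X.shape[1] + phase)

def perturb_gaussian(X, sigma=0.1, rng=None):
    """Add i.i.d. Gaussian noise with standard deviation sigma."""
    rng = rng or np.random.default_rng(0)
    return X + rng.normal(0.0, sigma, size=X.shape)

def robustness_drop(model_predict, X, y, perturb):
    """Accuracy on clean vs. perturbed inputs."""
    clean = float(np.mean(model_predict(X) == y))
    noisy = float(np.mean(model_predict(perturb(X)) == y))
    return clean, noisy

# toy usage with a trivial threshold "classifier" on 1-feature inputs
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = (X[:, 0] > 0).astype(int)
predict = lambda Z: (Z[:, 0] > 0).astype(int)
print(robustness_drop(predict, X, y, lambda Z: perturb_gaussian(Z, 0.2)))
```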
30. Quantified maximum satisfiability
- Author
-
Alexey Ignatiev, Mikoláš Janota, and Joao Marques-Silva
- Subjects
060201 languages & linguistics ,Polynomial hierarchy ,True quantified Boolean formula ,06 humanities and the arts ,02 engineering and technology ,Extension (predicate logic) ,Function (mathematics) ,Satisfiability ,Set (abstract data type) ,TheoryofComputation_MATHEMATICALLOGICANDFORMALLANGUAGES ,Computational Theory and Mathematics ,Artificial Intelligence ,0602 languages and literature ,Maximum satisfiability problem ,0202 electrical engineering, electronic engineering, information engineering ,Discrete Mathematics and Combinatorics ,020201 artificial intelligence & image processing ,Algorithm ,Software ,Linear search ,Mathematics - Abstract
In recent years, there have been significant improvements in algorithms both for Quantified Boolean Formulas (QBF) and for Maximum Satisfiability (MaxSAT). This paper studies an optimization extension of QBF and considers the problem in a quantified MaxSAT setting. More precisely, the general QMaxSAT problem is defined for QBFs with a set of soft clausal constraints and consists in finding the largest subset of the soft constraints such that the remaining QBF is true. Two approaches are investigated. One is based on relaxing the soft clauses and performing an iterative search on the cost function. The other approach, which is the main contribution of the paper, is inspired by recent work on MaxSAT, and exploits the iterative identification of unsatisfiable cores. The paper investigates the application of these approaches to the two concrete problems of computing smallest minimal unsatisfiable subformulas (SMUS) and smallest minimal equivalent subformulas (SMES), decision versions of which are well-known problems in the second level of the polynomial hierarchy. Experimental results, obtained on representative problem instances, indicate that the core-guided approach for the SMUS and SMES problems outperforms the use of iterative search over the values of the cost function. More significantly, the core-guided approach to SMUS also outperforms the state-of-the-art SMUS extractor Digger.
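For intuition, a brute-force sketch of the first approach (iterative search on the cost function) for plain, unquantified MaxSAT; the paper's setting replaces the exponential enumeration below with QBF oracle calls and, in its main contribution, with core-guided search:

```python
from itertools import product

def satisfies(clause, assign):
    # clause: iterable of ints; positive literal v means var v true, -v false
    return any((lit > 0) == assign[abs(lit)] for lit in clause)

def maxsat_linear_search(hard, soft, n_vars):
    """Brute-force sketch: over all assignments satisfying the hard clauses,
    find the largest number of simultaneously satisfied soft clauses."""
    best = -1
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        if all(satisfies(c, assign) for c in hard):
            best = max(best, sum(satisfies(c, assign) for c in soft))
    return best  # size of the largest satisfiable subset of soft clauses

# example: hard = (x1 or x2); soft = {not x1, not x2, x1}
print(maxsat_linear_search([[1, 2]], [[-1], [-2], [1]], 2))  # -> 2
```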
- Published
- 2015
31. A non-negative representation learning algorithm for selecting neighbors
- Author
-
Jiancheng Lv, Zhang Yi, and Lili Li
- Subjects
Manifold alignment ,Convex geometry ,020206 networking & telecommunications ,Polytope ,02 engineering and technology ,Combinatorics ,Artificial Intelligence ,Convex polytope ,0202 electrical engineering, electronic engineering, information engineering ,Local tangent space alignment ,Tangent space ,020201 artificial intelligence & image processing ,Convex combination ,Cluster analysis ,Algorithm ,Computer Science::Databases ,Software ,Mathematics - Abstract
A crucial step in many manifold clustering and embedding algorithms involves selecting the neighbors of a data point. In this paper, a non-negative representation learning algorithm is proposed to select the neighbors of a data point so that they almost always lie in the tangent space of the manifold at the given query point. This is very important for multi-manifold clustering and manifold embedding. Convex geometry theory states that a data point on a face of a convex polytope can be a convex combination of the vertices of the polytope, and that the non-zero combination scalars correspond to the vertices of the face the point lies on. The intent of this paper is to use points that are near a given point as the vertices of a convex polytope, and to select the vertices on the same face as this point to be its neighbors. Clearly, the point can then be a convex combination of its neighbors. We observe that these neighbors almost always lie in the tangent space of the manifold at the query point and preserve the local manifold structure well. Based on this basic idea, we construct an objective function, and an augmented Lagrange multipliers method is proposed for solving it to derive a non-negative representation, from which the neighbors of a data point can be obtained. The result of our neighbor selection algorithm can be used by clustering methods such as LEM or manifold embedding methods such as LLE. We demonstrate the effectiveness and efficiency of the proposed method through experiments on synthetic data and a real-world problem.
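A rough approximation of the idea using off-the-shelf non-negative least squares in place of the authors' augmented Lagrange multipliers solver; the tolerance and the handling of the candidate set are assumptions:

```python
import numpy as np
from scipy.optimize import nnls

def select_neighbors(candidates, query, tol=1e-6):
    """Represent the query as a non-negative combination of nearby points;
    points with non-zero coefficients act as the selected neighbors
    (vertices of the polytope face the query lies on)."""
    # candidates: (n_points, dim) array of points near the query
    coeffs, _ = nnls(candidates.T, query)   # columns are candidate points
    return np.flatnonzero(coeffs > tol), coeffs
```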
- Published
- 2015
32. Fast, flexible MUS enumeration
- Author
-
Mark H. Liffiton, Ammar Malik, Alessandro Previti, and Joao Marques-Silva
- Subjects
Structure (mathematical logic) ,020207 software engineering ,Enumeration algorithm ,02 engineering and technology ,Time cost ,Constraint (information theory) ,Computational Theory and Mathematics ,Artificial Intelligence ,Face (geometry) ,0202 electrical engineering, electronic engineering, information engineering ,Enumeration ,Discrete Mathematics and Combinatorics ,020201 artificial intelligence & image processing ,Algorithm ,Software ,Mathematics - Abstract
The problem of enumerating minimal unsatisfiable subsets (MUSes) of an infeasible constraint system is challenging due first to the complexity of computing even a single MUS and second to the potentially intractable number of MUSes an instance may contain. In the face of the latter issue, when complete enumeration is not feasible, a partial enumeration of MUSes can be valuable, ideally with a time cost for each MUS output no greater than that needed to extract a single MUS. Recently, two papers independently presented a new MUS enumeration algorithm well suited to partial MUS enumeration (Liffiton and Malik, 2013, Previti and Marques-Silva, 2013). The algorithm exhibits good anytime performance, steadily producing MUSes throughout its execution; it is constraint agnostic, applying equally well to any type of constraint system; and its flexible structure allows it to incorporate advances in single MUS extraction algorithms and eases the creation of further improvements and modifications. This paper unifies and expands upon the earlier work, presenting a detailed explanation of the algorithm's operation in a framework that also enables clearer comparisons to previous approaches, and we present a new optimization of the algorithm as well. Expanded experimental results illustrate the algorithm's improvement over past approaches and newly explore some of its variants.
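As background, a tiny sketch of the single-MUS extraction subroutine that such an enumerator invokes repeatedly: deletion-based shrinking over a brute-force SAT check (a stand-in for a real solver). This is generic textbook machinery under stated assumptions, not the paper's specific algorithm:

```python
from itertools import product

def sat(clauses, n_vars):
    # brute-force SAT check for tiny CNFs (stand-in for a real solver)
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        if all(any((l > 0) == assign[abs(l)] for l in c) for c in clauses):
            return True
    return False

def shrink_to_mus(clauses, n_vars):
    """Deletion-based extraction of one MUS from an unsatisfiable CNF:
    drop each clause in turn and keep the removal whenever the remainder
    stays unsatisfiable."""
    mus = list(clauses)
    i = 0
    while i < len(mus):
        candidate = mus[:i] + mus[i + 1:]
        if not sat(candidate, n_vars):
            mus = candidate          # clause i was not needed
        else:
            i += 1                   # clause i is necessary, keep it
    return mus

# (x1) & (-x1) & (x1 | x2) is unsatisfiable; its MUS is {(x1), (-x1)}
print(shrink_to_mus([[1], [-1], [1, 2]], 2))  # -> [[1], [-1]]
```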
- Published
- 2015
33. An optimizing model to solve the nesting problem of rectangle pieces based on genetic algorithm
- Author
-
Lang Huang, Shuwei Liu, Xixing Li, Shunsheng Guo, Hongtao Tang, and Li Li
- Subjects
0209 industrial biotechnology ,Mathematical optimization ,021103 operations research ,Fitness function ,0211 other engineering and technologies ,Process (computing) ,Nesting (process) ,02 engineering and technology ,Horizontal line test ,Industrial and Manufacturing Engineering ,020901 industrial engineering & automation ,Artificial Intelligence ,Search algorithm ,Genetic algorithm ,Largest empty rectangle ,Rectangle ,Algorithm ,Software ,Mathematics - Abstract
In the process of cement equipment manufacturing, the demand for rectangular pieces of steel structure is very large. Traditional manual nesting, in which pieces are simply cut by hand according to an arrangement of their number and sizes, causes low efficiency and material waste. To solve this problem, this paper proposes an optimizing model for the nesting problem of rectangle pieces. Firstly, with the aim of maximizing the utilization ratio of the sheet, the mathematical optimization model for the nesting problem of rectangle pieces is established, and the lowest horizontal line searching algorithm is described in detail. Secondly, the mathematical model is solved to get the optimal solution by the combination of a genetic algorithm and the lowest horizontal line searching algorithm. In the solution process, this paper presents the methods of gene encoding and decoding, the definition of the fitness function, the design of the genetic operators and the design of the algorithm's operating parameters. Finally, we use one sheet as an example to illustrate the proposed model and algorithm process. Experimental results show that the proposed approach is able to achieve nesting of rectangle pieces with the maximum material utilization ratio.
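A simplified sketch of the lowest-horizontal-line placement rule; real implementations also merge adjacent skyline segments and handle pieces that fit no segment, and a genetic algorithm would evolve the order (and rotations) of the input rectangles, with the resulting utilization ratio as fitness:

```python
def lowest_horizontal_line_pack(rects, sheet_width):
    """Place each (w, h) rectangle on the lowest skyline segment that is
    wide enough, then raise that part of the skyline."""
    skyline = [[0.0, float(sheet_width), 0.0]]  # segments as [x, width, height]
    placements = []
    for w, h in rects:
        seg = min((s for s in skyline if s[1] >= w),
                  key=lambda s: s[2], default=None)
        if seg is None:           # no segment wide enough; real code would
            continue              # merge segments or reject the ordering
        x, width, y = seg
        placements.append((x, y, w, h))
        skyline.remove(seg)
        skyline.append([x, w, y + h])              # raised part under the piece
        if width > w:
            skyline.append([x + w, width - w, y])  # leftover of the old segment
    return placements
```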
- Published
- 2015
34. Image inpainting algorithm based on TV model and evolutionary algorithm
- Author
-
Yunshan Wei, Kangshun Li, Zhen Yang, and Wenhua Wei
- Subjects
Pixel ,Matching (graph theory) ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Evolutionary algorithm ,Inpainting ,020206 networking & telecommunications ,Image processing ,Computational intelligence ,02 engineering and technology ,Theoretical Computer Science ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Visual communication ,Geometry and Topology ,Artificial intelligence ,business ,Algorithm ,Software ,Mathematics ,FSA-Red Algorithm - Abstract
With the development of modern image processing techniques, the number of images transmitted over networks is increasing rapidly. As a new form of visual communication, images are widely used in network transmission; however, image information may be lost during transmission. In view of this, we are motivated to restore images effectively and efficiently in order to save network bandwidth. At present, there are two main methods for digital image restoration: the texture-based method and the non-texture-based method. Among texture-based methods, the Criminisi algorithm is widely used; however, its inaccurate completion order and its inefficiency in searching for matching patches are two main limitations. To overcome these shortcomings, an exemplar-based image completion method built on an evolutionary algorithm is proposed in this paper. Among non-texture-based methods, the total variation (TV) method is a typical algorithm, and an improved total variation algorithm is proposed in this paper. In the improved algorithm, the diffusion coefficients are defined according to the distance and direction between the damaged pixel and its neighborhood pixels. Experimental results show that the proposed algorithms have better general performance in image completion; these two new algorithms can improve the experience of network browsing and reduce the network communication cost.
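A minimal sketch of TV-style diffusion inpainting in the spirit described above, with an inverse-gradient edge-stopping weight; the paper's improved coefficients additionally depend on distance and direction, which this simplification only gestures at:

```python
import numpy as np

def tv_inpaint(img, mask, n_iter=200, eps=1e-6):
    """Damaged pixels (mask == True) are repeatedly replaced by an average
    of their four neighbours weighted by inverse local gradient magnitude,
    so diffusion is weaker across edges."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        gy, gx = np.gradient(u)
        w = 1.0 / np.sqrt(gx ** 2 + gy ** 2 + eps)   # edge-stopping weight
        for i, j in zip(*np.nonzero(mask)):
            nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            nbrs = [(a, b) for a, b in nbrs
                    if 0 <= a < u.shape[0] and 0 <= b < u.shape[1]]
            ws = np.array([w[a, b] for a, b in nbrs])
            vs = np.array([u[a, b] for a, b in nbrs])
            u[i, j] = float(np.sum(ws * vs) / np.sum(ws))
    return u
```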
- Published
- 2014
35. A best linear threshold classification with scale mixture of skew normal populations
- Author
-
Hea-Jung Kim
- Subjects
Statistics and Probability ,Scale (ratio) ,business.industry ,Skew ,Process (computing) ,Inference ,Pattern recognition ,Latent variable ,Computational Mathematics ,Bayes' theorem ,Distribution (mathematics) ,Robustness (computer science) ,Artificial intelligence ,Statistics, Probability and Uncertainty ,business ,Algorithm ,Mathematics - Abstract
This paper describes a threshold classification with $$K$$ populations whose membership category is associated with the threshold process of a latent variable. It is seen that the optimal procedure (the Bayes procedure) for the classification involves a nonlinear classification rule and hence its analytic properties and an efficient estimation cannot be explored due to its complex distribution. As an alternative, this paper proposes the best linear procedure and verifies its effectiveness. For this, the present paper provides the necessary theories for deriving the linear rule and its properties, an efficient inference, and a simulation study that sheds light on the performance of the best linear procedure. It also provides three real data examples to demonstrate the applicability of the best linear procedure.
- Published
- 2014
36. Weights and structure determination of multiple-input feed-forward neural network activated by Chebyshev polynomials of Class 2 via cross-validation
- Author
-
Dongsheng Guo, Yunong Zhang, Xiaotian Yu, Zhijun Zhang, and Yonghua Yin
- Subjects
Chebyshev polynomials ,Mathematical optimization ,Artificial neural network ,Artificial Intelligence ,Generalization ,Structure (category theory) ,Feedforward neural network ,Class (biology) ,Algorithm ,Software ,Cross-validation ,Backpropagation ,Mathematics - Abstract
Differing from conventional improvements on the backpropagation (BP) neural network, a novel neural network, called the multiple-input feed-forward neural network activated by Chebyshev polynomials of Class 2 (MINN-CP2), is proposed and investigated in this paper to overcome the weaknesses of the BP neural network. In addition, to obtain the optimal number of hidden-layer neurons and the optimal linking weights of the MINN-CP2, the paper develops an algorithm of weights and structure determination (WASD) via cross-validation. Numerical studies show the effectiveness and superior abilities (in terms of approximation and generalization) of the MINN-CP2 equipped with the algorithm of WASD via cross-validation. Moreover, an application to gray image denoising demonstrates the effective implementation and application prospect of the proposed MINN-CP2 equipped with the algorithm of WASD via cross-validation.
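A single-input sketch of the WASD idea under stated assumptions: a hold-out split stands in for full cross-validation, and a pseudoinverse performs the direct weight determination (no backpropagation). The MINN-CP2 itself is multiple-input, which this sketch does not cover:

```python
import numpy as np

def chebyshev_u_features(x, n_neurons):
    # Chebyshev polynomials of Class 2 (second kind):
    # U0 = 1, U1 = 2x, U_{k+1}(x) = 2x U_k(x) - U_{k-1}(x)
    U = [np.ones_like(x), 2 * x]
    for _ in range(n_neurons - 2):
        U.append(2 * x * U[-1] - U[-2])
    return np.column_stack(U[:n_neurons])

def wasd_fit(x, y, max_neurons=30, val_frac=0.3):
    """For each hidden-layer size, solve the output weights directly by a
    pseudoinverse and keep the size with the lowest validation error."""
    n_val = int(len(x) * val_frac)
    xt, yt, xv, yv = x[n_val:], y[n_val:], x[:n_val], y[:n_val]
    best = (np.inf, None, None)
    for n in range(2, max_neurons + 1):
        w = np.linalg.pinv(chebyshev_u_features(xt, n)) @ yt
        err = np.mean((chebyshev_u_features(xv, n) @ w - yv) ** 2)
        if err < best[0]:
            best = (err, n, w)
    return best  # (validation MSE, chosen neuron count, weights)
```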
- Published
- 2014
37. A Modified Learning Algorithm for Interval Perceptrons with Interval Weights
- Author
-
Zhengxue Li, Huisheng Zhang, Wei Wu, Yan Liu, and Dakun Yang
- Subjects
Computer Science::Machine Learning ,Computer Networks and Communications ,General Neuroscience ,Absolute value ,Interval (mathematics) ,Function (mathematics) ,Perceptron ,Term (time) ,Interval arithmetic ,Error function ,Quadratic equation ,Artificial Intelligence ,Algorithm ,Software ,Mathematics - Abstract
In many applications, it is natural to use interval data to describe various kinds of uncertainties. This paper is concerned with a one-layer interval perceptron with the weights and the outputs being intervals and the inputs being real numbers. In the original learning method for this interval perceptron, an absolute value function is applied to the newly learned radii of the interval weights, so as to force the radii to be positive. This approach seems somewhat unnatural and might cause oscillation in the learning procedure, as indicated in our numerical experiments. In this paper, a modified learning method is proposed for this one-layer interval perceptron. We do not use the absolute value function; instead, we replace, in the error function, the radius of each interval weight by a quadratic term. This simple trick does not cause any additional computational work for the learning procedure, but it brings about the following three advantages: First, the radii of the interval weights are guaranteed to be positive during the learning procedure without the help of the absolute value function. Second, the oscillation mentioned above is eliminated and the convergence of the learning procedure is improved, as indicated by our numerical experiments. Finally, as a by-product, the convergence analysis of the learning procedure is now an easy job, while the analysis for the original learning method is at least difficult, if not impossible, due to the non-smoothness of the absolute value function involved.
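A small gradient-descent sketch of the quadratic-term trick for a one-layer interval perceptron, using the standard interval arithmetic for a real input times an interval weight (center $$c_i x_i$$, radius $$r_i |x_i|$$); learning rate, initialization and the squared-error form are placeholder assumptions:

```python
import numpy as np

def train_interval_perceptron(X, C, R, lr=0.01, n_epochs=200):
    """Real inputs X; target output intervals with centers C and radii R.
    Each interval weight is [c_i - s_i**2, c_i + s_i**2]: parameterizing the
    radius as s**2 keeps it positive automatically and keeps the error
    function smooth (no absolute value needed)."""
    d = X.shape[1]
    c = np.zeros(d)          # weight centers
    s = np.full(d, 0.1)      # radius parameters; radius_i = s_i**2
    for _ in range(n_epochs):
        for x, tc, tr in zip(X, C, R):
            oc = c @ x                        # output interval center
            orad = (s ** 2) @ np.abs(x)       # output interval radius
            ec, er = oc - tc, orad - tr
            c -= lr * ec * x                  # gradient w.r.t. centers
            s -= lr * er * 2 * s * np.abs(x)  # chain rule through s**2
    return c, s ** 2                          # centers and learned radii
```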
- Published
- 2014
38. The model order reduction using LS, RLS and MV estimation methods
- Author
-
Ehsan Malekshahi and Seyed-Mohammad-Ali Mohammadi
- Subjects
Model order reduction ,Recursive least squares filter ,business.industry ,Computation ,System identification ,Robotics ,Mechatronics ,Least squares ,Computer Science Applications ,Minimum-variance unbiased estimator ,Control and Systems Engineering ,Artificial intelligence ,business ,Algorithm ,Mathematics - Abstract
Reducing the order of high-order systems eliminates their complexity, so that system analysis and controller design can be carried out more easily. Many model reduction methods have been presented to date. In this paper, LS (Least Squares), RLS (Recursive Least Squares) and MV (Minimum Variance) estimation are presented as model reduction methods. MV is an adaptive estimation theory whose concept we use to reduce the order of systems; this idea has not been presented before. One advantage of this method over the others is that it has several parameters for reaching the desired reduced system. Further advantages are its higher speed and lower computational load compared to the other methods. In this paper, after explaining the three methods, two illustrative examples are provided to demonstrate their applicability.
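An illustrative LS-based reduction sketch: identify a low-order ARX model from input/output data of the high-order system by ordinary least squares. The paper's RLS and MV variants estimate the same kind of parameters recursively or under a minimum-variance criterion; this numpy version is an assumption-laden stand-in, not the authors' procedure:

```python
import numpy as np

def ls_reduced_arx(u, y, order):
    """Fit y[k] = sum_i a_i y[k-i] + sum_i b_i u[k-i] by least squares and
    use the fitted (a, b) as the reduced-order model."""
    rows = []
    for k in range(order, len(y)):
        rows.append(np.concatenate([y[k - order:k][::-1],
                                    u[k - order:k][::-1]]))
    Phi, target = np.asarray(rows), y[order:]
    theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    return theta[:order], theta[order:]   # (a coefficients, b coefficients)
```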
- Published
- 2014
39. Algorithms for the theory of restrictions of scalar $$n$$-D systems to proper subspaces of $$\mathbb{R}^n$$
- Author
-
Harish K. Pillai and Debasattam Pal
- Subjects
Discrete mathematics ,Applied Mathematics ,Scalar (mathematics) ,Solution set ,Linear subspace ,Upper and lower bounds ,Computer Science Applications ,Artificial Intelligence ,Hardware and Architecture ,Signal Processing ,Algebraic number ,Algorithm ,Software ,Subspace topology ,Information Systems ,Mathematics - Abstract
In this paper, we study the restrictions of solutions of a scalar system of PDEs to a proper subspace of the domain $$\mathbb{R}^n$$. The object of study is associated with certain intersection ideals. In the paper, we provide explicit algorithms to calculate these intersection ideals. We next deal with the question of when a given subspace is "free" with respect to the solution set of a system of PDEs; this notion of freeness is related to restrictions and intersection ideals. We again provide algorithms and a checkable algebraic criterion to answer the question of freeness of a subspace. Finally, we provide an upper bound on the dimension of free subspaces that can be associated with the solution set of a system of PDEs.
- Published
- 2014
40. A note on the preemptive scheduling to minimize total completion time with release time and deadline constraints
- Author
-
Zhichao Geng, Long Wan, and Jinjiang Yuan
- Subjects
Schedule ,Job shop scheduling ,General Engineering ,Preemption ,Management Science and Operations Research ,Release time ,Combinatorics ,Set (abstract data type) ,Reduction (complexity) ,Artificial Intelligence ,Interval scheduling ,Time complexity ,Algorithm ,Software ,Mathematics - Abstract
In this paper, we consider two problems about the preemptive scheduling of a set of jobs with release times on a single machine. In the first problem, each job has a deadline. The objective is to find a feasible schedule which minimizes the total completion time of the jobs. In the second problem (called the two-agent scheduling problem), the set of jobs is partitioned into two subsets $$\mathcal{J}^{(1)}$$ and $$\mathcal{J}^{(2)}$$. Each job in $$\mathcal{J}^{(2)}$$ has a deadline. The objective is to find a feasible schedule which minimizes the total completion time of the jobs in $$\mathcal{J}^{(1)}$$. For the first problem, Du and Leung (Journal of Algorithms 14:45-68, 1993) showed that the problem is NP-hard. We show in this paper that there is a flaw in their NP-hardness proof. For the second problem, Leung et al. (Operations Research 58:458-469, 2010) showed that the problem can be solved in polynomial time. Yuan et al. (Private Communication) showed that their polynomial-time algorithm is invalid, so the complexity of the second problem is still open. In this paper, by a modification of Du and Leung's NP-hardness proof, we show that the first problem is NP-hard even when the jobs have only two distinct deadlines. Using the same reduction, we also show that the second problem is NP-hard even when the jobs in $$\mathcal{J}^{(2)}$$ have a common deadline $$D>0$$ and a common release time 0.
- Published
- 2014
41. The Remote Sensing Image Matching Algorithm Based on the Normalized Cross-Correlation and SIFT
- Author
-
Xingxing Shen and Wenxing Bao
- Subjects
Matching (statistics) ,Computational complexity theory ,Cross-correlation ,business.industry ,Geography, Planning and Development ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Scale-invariant feature transform ,Pattern recognition ,Image (mathematics) ,Range (mathematics) ,Transformation (function) ,Robustness (computer science) ,Earth and Planetary Sciences (miscellaneous) ,Computer vision ,Artificial intelligence ,business ,Algorithm ,Remote sensing ,Mathematics - Abstract
SIFT (scale-invariant feature transform) is one of the most robust and widely used image matching algorithms based on local features, but its computational complexity is high. In order to reduce the matching time, an improved feature matching algorithm is proposed in this paper under the premise of stable registration accuracy: a remote sensing image matching algorithm that combines normalized cross-correlation with SIFT. The basic idea of the algorithm is to perform a spatial geometric transformation of the input image with reference to the base image. The normalized cross-correlation then captures the relevant part of the remote sensing images, which reduces the matching range, so some unnecessary calculations are properly omitted. By utilizing the SIFT algorithm, we match the preprocessed remote sensing images and obtain the registration points. This shortens the matching time, improves the matching accuracy, and correspondingly increases robustness. The experimental results show that the proposed normalized cross-correlation plus SIFT algorithm is faster than the standard SIFT algorithm, while its performance compares favorably with the standard SIFT algorithm when matching structured scene images. The experimental results confirm the feasibility of our methods.
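A hedged OpenCV sketch of the two-stage pipeline (normalized cross-correlation to localize the relevant region, then SIFT with a ratio test inside it); the template choice, the margin around the correlation hit and the ratio threshold are assumptions, and the geometric pre-transformation of the input image is assumed to have been done already:

```python
import cv2

def ncc_then_sift(base, inp, patch, ratio=0.75):
    """base, inp: grayscale images; patch: a template taken from inp.
    Returns the ratio-test-filtered SIFT matches inside the NCC region."""
    res = cv2.matchTemplate(base, patch, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(res)          # best correlation position
    h, w = patch.shape[:2]
    m = 50                                        # margin around the hit
    roi = base[max(0, y - m): y + h + m, max(0, x - m): x + w + m]

    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(roi, None)
    k2, d2 = sift.detectAndCompute(inp, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [a for a, b in matches if a.distance < ratio * b.distance]
    return good
```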
- Published
- 2013
42. Fault diagnosis of timed discrete event systems using Dioid Algebra
- Author
-
Sobhi Baniardalani and Javad Askari
- Subjects
business.industry ,Concurrency ,Linear system ,Time evolution ,Robotics ,Mechatronics ,Computer Science Applications ,Automaton ,Nondeterministic algorithm ,Control and Systems Engineering ,Dioid algebra ,Artificial intelligence ,business ,Algorithm ,Mathematics - Abstract
This paper deals with the fault diagnosis problem in a concurrent Timed Discrete Event System (TDES). In a TDES, concurrency leads to more complexity in the diagnoser and appears where, at a certain time, some user must choose among several resources. To cope with this problem, a new model-based diagnoser is proposed in this paper. This diagnoser uses the Durational Graph (DG), a main subclass of timed automata, for representing the time evolution of the TDES. The proposed diagnoser predicts all possible timed-event trajectories that may be generated by the DG. This prediction procedure is complicated for nondeterministic DGs, which are obtained for concurrent TDESs. To solve this problem, a new Dioid Algebra, the Union-Plus Algebra, is introduced in this paper. Based on this algebra, a reachability matrix is defined for a DG; it plays an essential role in predicting the time behavior of the TDES. Using the reachability matrix, a prediction procedure is carried out via an effective set of equations that is similar to linear system state equations in ordinary algebra. These results provide a suitable framework for designing an observer-based diagnoser, which is illustrated by an example.
- Published
- 2013
43. Improving three-dimensional point reconstruction from image correspondences using surface curvatures
- Author
-
Chin-Hung Teng
- Subjects
Mean curvature ,Optimization problem ,business.industry ,Gaussian ,3D reconstruction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Triangulation (computer vision) ,Topology ,Synthetic data ,Computer Science Applications ,symbols.namesake ,Hardware and Architecture ,Gaussian curvature ,symbols ,Point (geometry) ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Algorithm ,Software ,ComputingMethodologies_COMPUTERGRAPHICS ,Mathematics - Abstract
Recovering three-dimensional (3D) points from image correspondences is an important and fundamental task in computer vision. Traditionally, the task is completed by triangulation, whose accuracy is limited in some applications. In this paper, we present a framework that incorporates surface characteristics such as Gaussian and mean curvatures into 3D point reconstruction to enhance the reconstruction accuracy. A Gaussian and mean curvature estimation scheme suitable for the proposed framework is also introduced in this paper. Based on this estimation scheme and the proposed framework, the 3D point recovery from image correspondences is formulated as an optimization problem with the surface curvatures modeled as soft constraints. To analyze the performance of the proposed 3D reconstruction approach, we generated synthetic data, including points on the surfaces of a plane, a cylinder and a sphere, to test the approach. The experimental results demonstrated that the proposed framework can indeed improve the accuracy of 3D point reconstruction. Some real-image data were also tested, and the results confirm this point.
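For reference, the standard linear (DLT) triangulation baseline that such frameworks refine; the curvature soft constraints would be added in a subsequent nonlinear optimization, which is not shown here:

```python
import numpy as np

def triangulate_dlt(P1, P2, pt1, pt2):
    """Linear triangulation of one 3D point from a correspondence
    (pt1, pt2) seen by cameras with 3x4 projection matrices P1, P2."""
    A = np.vstack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # solution is the null vector of A
    X = Vt[-1]
    return X[:3] / X[3]              # dehomogenize
```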
- Published
- 2013
44. Matching pursuit decomposition for high-resolution direction of arrival
- Author
-
Sedigheh Ghofrani
- Subjects
Underdetermined system ,Covariance matrix ,Applied Mathematics ,Speech recognition ,Direction of arrival ,Array processing ,Instantaneous phase ,Matching pursuit ,Computer Science Applications ,Narrowband ,Artificial Intelligence ,Hardware and Architecture ,Robustness (computer science) ,Signal Processing ,Algorithm ,Software ,Information Systems ,Mathematics - Abstract
Time-frequency analysis was combined with array processing to develop a direction of arrival (DOA) estimation method. The spatial time-frequency distribution (STFD) was introduced as a natural means to deal with source signals that are localizable in the time-frequency (TF) domain. It was shown that estimates of the signal and noise subspaces are improved by constructing the subspaces from the TF signatures of the signal arrivals rather than from the spatial data covariance matrix, which is commonly used in conventional multiple signal classification (MUSIC). However, the instantaneous frequency signature is needed in real applications of STFD. For this purpose, identification of auto-term regions is required, which is often difficult for closely spaced sources because cross-terms mask the auto-terms; that is, the cross-term amplitude is greater than that of the auto-terms. In this paper, three high-resolution DOA estimation approaches for non-stationary narrowband signals using matching pursuit (MP) are developed. We demonstrate the proposed technique's source discriminatory capability, its robustness against noise, and its applicability to the underdetermined problem. We consider the first sensor output as the reference and decompose it using MP based on Gabor and chirplet dictionaries. The coefficients of MP contain the steering vector information, so they can be used to estimate the DOA. In addition, the chosen MP atoms are used to implement the modified STFD based on the Wigner-Ville distribution and the Rihaczek time-frequency distribution. We show that using either the coefficients or the chosen atoms to estimate the DOA in array processing outperforms conventional MUSIC in different scenarios. Simulation results showing the performance of the three proposed MP-based approaches, together with their advantages and drawbacks, are presented.
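A generic matching pursuit sketch with a Gabor-style dictionary, to make the decomposition step concrete; the DOA estimation from the coefficients/atoms is not shown, and the dictionary parameters are placeholders:

```python
import numpy as np

def gabor_atom(n, center, width, freq):
    t = np.arange(n)
    g = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t)
    return g / (np.linalg.norm(g) + 1e-12)       # unit-norm atom

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy MP: repeatedly pick the atom most correlated with the
    residual and subtract its contribution.
    dictionary: array whose rows are unit-norm atoms."""
    residual = signal.astype(float).copy()
    chosen = []
    for _ in range(n_atoms):
        corrs = dictionary @ residual
        k = int(np.argmax(np.abs(corrs)))
        chosen.append((k, corrs[k]))             # (atom index, coefficient)
        residual -= corrs[k] * dictionary[k]
    return chosen, residual
```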
- Published
- 2013
45. A Novel Indefinite Kernel Dimensionality Reduction Algorithm: Weighted Generalized Indefinite Kernel Discriminant Analysis
- Author
-
Jing Yang and Liya Fan
- Subjects
Discrete mathematics ,Computer Networks and Communications ,General Neuroscience ,Kernel principal component analysis ,Kernel method ,Artificial Intelligence ,Variable kernel density estimation ,Kernel embedding of distributions ,Kernel (statistics) ,Radial basis function kernel ,Kernel Fisher discriminant analysis ,Generalized singular value decomposition ,Algorithm ,Software ,Mathematics - Abstract
Kernel methods are becoming increasingly popular for many real-world learning problems, and these methods for data analysis are frequently considered to be restricted to positive definite kernels. In practice, however, indefinite kernels arise and demand application in pattern analysis. In this paper, we present several formal extensions of kernel discriminant analysis (KDA) methods which can be used with indefinite kernels. In particular, they include indefinite KDA (IKDA) based on generalized singular value decomposition (IKDA/GSVD), pseudo-inverse IKDA, null space IKDA and range space IKDA. Similar to the case of LDA-based algorithms, IKDA-based algorithms also fail to consider the different contribution of each pair of classes to the discrimination. To remedy this problem, weighted schemes are incorporated into the IKDA extensions in this paper, and the resulting methods are called weighted generalized IKDA algorithms. Experiments on two real-world data sets are performed to test and evaluate the effectiveness of the proposed algorithms and the effect of weights on indefinite kernel functions. The results show that the effect of the weighted schemes is very significant.
- Published
- 2013
46. Copula selection for graphical models in continuous Estimation of Distribution Algorithms
- Author
-
Rogelio Salinas-Gutiérrez, Arturo Hernández-Aguirre, and Enrique Villa-Diharce
- Subjects
Statistics and Probability ,business.industry ,Gaussian ,Pattern recognition ,Mutual information ,Separable space ,Copula (probability theory) ,Computational Mathematics ,symbols.namesake ,Estimation of distribution algorithm ,EDAS ,symbols ,Graphical model ,Artificial intelligence ,Statistics, Probability and Uncertainty ,business ,Likelihood function ,Algorithm ,Mathematics - Abstract
This paper presents the use of graphical models and copula functions in Estimation of Distribution Algorithms (EDAs) for solving multivariate optimization problems. It is shown in this work how the incorporation of copula functions and graphical models for modeling the dependencies among variables provides some theoretical advantages over traditional EDAs. By means of copula functions and two well-known graphical models, this paper presents a novel approach for defining new EDAs. Each dependence is modeled by a copula function chosen from a predefined set of six functions that aim to cover a wide range of inter-relations. It is also shown how the use of mutual information in the learning of graphical models implies a natural way of employing copula entropies. The experimental results on separable and non-separable functions show that the two new EDAs, which adopt copula functions to model dependencies, perform better than their original version with Gaussian variables.
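A sketch of one EDA sampling step using a Gaussian copula (one of the six candidate families) with empirical marginals; the graphical-model factorization and the likelihood-based copula selection of the paper are omitted, and all parameter choices here are assumptions:

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_eda_sample(selected, n_new, rng=np.random.default_rng(0)):
    """selected: (n, d) array of the selected individuals.
    Map them to normal scores by rank, estimate the score correlation,
    sample new normal vectors, and map back through empirical quantiles."""
    n, d = selected.shape
    ranks = np.argsort(np.argsort(selected, axis=0), axis=0) + 1
    z = norm.ppf(ranks / (n + 1.0))              # normal scores per variable
    corr = np.corrcoef(z, rowvar=False)          # copula correlation estimate
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_new)
    u_new = norm.cdf(z_new)                      # copula samples in [0,1]^d
    out = np.empty((n_new, d))
    for j in range(d):                           # back-transform marginals
        out[:, j] = np.quantile(selected[:, j], u_new[:, j])
    return out
```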
- Published
- 2013
47. A graph-based pipe routing algorithm in aero-engine rotational space
- Author
-
Chengen Wang and Qiang Liu
- Subjects
Equal-cost multi-path routing ,Visibility graph ,Industrial and Manufacturing Engineering ,Physics::Fluid Dynamics ,Private Network-to-Network Interface ,Artificial Intelligence ,Search algorithm ,Graph (abstract data type) ,K shortest path routing ,Destination-Sequenced Distance Vector routing ,Suurballe's algorithm ,Algorithm ,Software ,Mathematics - Abstract
Appropriately routing pipes poses a considerable challenge to complex product developments such as aero-engine engineering. This paper presents a graph-based routing algorithm that tries to find the shortest collision-free pipe paths in an aero-engine's circumferential space between the hub and the casing. The routing algorithm extends the visibility graph, originally used for robot path planning in 2D spaces, to the aero-engine's 3D circumferential space by incorporating geodesics and a number of engineering rules. Subsequently, this paper presents two adaptive strategies to automatically determine the circumferential layers and regions on which pipes are to be routed. Pipe configurations with the shortest total lengths are then found using a graph searching algorithm. Finally, numerical computations have shown that the proposed routing algorithm outperforms previous algorithms in terms of pipe lengths and computation efficiency.
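The shortest-path search underneath such a router is standard; below is a minimal Dijkstra sketch over an abstract visibility graph, where in the paper's setting the edge weights would be geodesic lengths on a circumferential layer, filtered by engineering rules (an assumption of this sketch, not the authors' code):

```python
import heapq

def dijkstra(graph, source, target):
    """graph: dict mapping node -> list of (neighbor, edge_length)."""
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], target               # reconstruct the path
    while node in prev or node == source:
        path.append(node)
        if node == source:
            break
        node = prev[node]
    return dist.get(target, float("inf")), path[::-1]
```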
- Published
- 2013
48. Improving the efficiency of traditional DTW accelerators
- Author
-
Romain Tavenard and Laurent Amsaleg
- Subjects
Indexing Trees ,Lower Bounds ,Dynamic time warping ,Upper Bounds ,[INFO.INFO-DB]Computer Science [cs]/Databases [cs.DB] ,Quantization (signal processing) ,Computation ,Search engine indexing ,Response time ,Upper and lower bounds ,Human-Computer Interaction ,Artificial Intelligence ,Hardware and Architecture ,Bounding overwatch ,Simple function ,Indexing ,Algorithm ,Software ,Dynamic Time Warping ,Information Systems ,Mathematics - Abstract
Dynamic time warping (DTW) is the most popular approach for evaluating the similarity of time series, but its computation is costly. Therefore, simple functions lower bounding DTW distances have been designed, accelerating searches by quickly pruning sequences that could not possibly be best matches. The tighter the bounds, the more they prune and the better the performance. Designing new functions that are even tighter is difficult because their computation is likely to become complex, canceling the benefits of their pruning. It is possible, however, to design simple functions with a higher pruning power by relaxing the no false dismissal assumption, resulting in approximate lower bound functions. This paper describes how very popular approaches accelerating DTW such as $$\text{LB\_Keogh}$$ and $$\text{LB\_PAA}$$ can be made more efficient via approximations. The accuracy of approximations can be tuned, ranging from no false dismissal to potential losses when aggressively set for great response time savings. At very large scale, indexing time series is mandatory. This paper also describes how approximate lower bound functions can be used with iSAX. Furthermore, it shows that a $$k$$-means-based quantization step for iSAX gives significant performance gains.
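For reference, a standard numpy implementation of the exact LB_Keogh bound with a Sakoe-Chiba band; the paper's contribution is approximate variants of such bounds, which are not reproduced here:

```python
import numpy as np

def lb_keogh(query, candidate, r):
    """Exact LB_Keogh lower bound on DTW(query, candidate) with band
    radius r: build the upper/lower envelope of the query and sum the
    squared amounts by which the candidate escapes it."""
    n = len(query)
    upper = np.array([query[max(0, i - r): i + r + 1].max() for i in range(n)])
    lower = np.array([query[max(0, i - r): i + r + 1].min() for i in range(n)])
    above = np.clip(candidate - upper, 0, None)
    below = np.clip(lower - candidate, 0, None)
    return float(np.sum(above ** 2 + below ** 2))

# pruning: skip full DTW whenever lb_keogh(q, c, r) already exceeds the
# best distance found so far
```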
- Published
- 2013
49. Inductive conformal anomaly detection for sequential detection of anomalous sub-trajectories
- Author
-
Göran Falkman and Rikard Laxhammar
- Subjects
Local outlier factor ,Applied Mathematics ,Conformal anomaly ,Detector ,Overfitting ,computer.software_genre ,Artificial Intelligence ,Outlier ,Trajectory ,Anomaly detection ,Data mining ,Anomaly (physics) ,Algorithm ,computer ,Mathematics - Abstract
Detection of anomalous trajectories is an important problem for which many algorithms based on learning of normal trajectory patterns have been proposed. Yet, these algorithms are typically designed for offline anomaly detection in databases and are insensitive to local sub-trajectory anomalies. Moreover, previous anomaly detection algorithms often require tuning of many parameters, including ad-hoc anomaly thresholds, which may result in overfitting and high alarm rates. The main contributions of this paper are two-fold: The first is the proposal and analysis of the Inductive Conformal Anomaly Detector (ICAD), which is a general and parameter-light anomaly detection algorithm that has a well-calibrated alarm rate. ICAD is a generalisation of the previously proposed Conformal Anomaly Detector (CAD) based on the concept of Inductive Conformal Predictors. The main advantage of ICAD compared to CAD is the improved computational efficiency. The only design parameter of ICAD is the Non-Conformity Measure (NCM). The second contribution of this paper concerns the proposal and investigation of the Sub-Sequence Local Outlier (SSLO) NCM, which is designed for sequential detection of anomalous sub-trajectories in the framework of ICAD. SSLO-NCM is based on the Local Outlier Factor (LOF) and is therefore sensitive to local sub-trajectory anomalies. The results from the empirical investigations on an unlabelled set of vessel trajectories illustrate the most anomalous trajectories detected for different parameter values of SSLO-NCM, and confirm that the empirical alarm rate is indeed well-calibrated.
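The conformal core of such a detector is compact; a sketch of the inductive p-value computation, with any NCM (e.g., a LOF-based sub-trajectory score, as in the paper's SSLO-NCM) plugged in upstream to produce the scores:

```python
import numpy as np

def icad_pvalue(calibration_scores, score):
    """Inductive conformal p-value of a new example: the (smoothed)
    fraction of held-out calibration non-conformity scores at least as
    large as the new score. Alarm if p < epsilon; by construction the
    alarm rate is calibrated at epsilon."""
    n = len(calibration_scores)
    return (np.sum(np.asarray(calibration_scores) >= score) + 1) / (n + 1)

# usage: alarms = [icad_pvalue(cal_scores, s) < 0.01 for s in test_scores]
```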
- Published
- 2013
50. Minimizing sequence-dependent setup costs in feeding batch processes under due date restrictions
- Author
-
Stefan Bock and Kathrin Klamroth
- Subjects
Mathematical optimization ,Supply chain management ,General Engineering ,Binary number ,Management Science and Operations Research ,Type (model theory) ,Dynamic programming ,Artificial Intelligence ,Enumeration ,Minification ,State (computer science) ,Constant (mathematics) ,Algorithm ,Software ,Mathematics - Abstract
This paper addresses the minimization of sequence-dependent setup costs in feeding batch processes. Since feeding batch processes often supply subsequent time-critical stages with modules, hard due date restrictions have to be met. It is common that feeding batch processes possess a specific structure of setup costs that are proportional to resulting machine state differences. If each job has a different batch type, the integration of hard due dates leads to a problem that is equivalent to a specific variant of the Line-TSPTW with general processing times and deadlines. We show that this well-known problem, whose complexity status has been unknown for a long time, is binary $$\mathcal{NP}$$-hard. Although the more relevant version with a constant number of batch types is known to be strongly polynomial, no practically applicable exact solution method can be found in the literature. Therefore, this paper proposes new solution algorithms. Specifically, Dynamic Programming and Branch&Bound approaches are developed. By making use of a modified problem definition, a new enumeration scheme, and a specifically designed dominance rule, even complex problem instances with up to 200 jobs are optimally solved by a new best-first Branch&Bound algorithm. Apart from a detailed complexity analysis, the efficiency of the proposed approaches is validated by computational experiments.
- Published
- 2013