1,815 results for "Ramer–Douglas–Peucker algorithm"
Search Results
2. Spatial Factors Affecting User’s Perception in Map Simplification: An Empirical Analysis
- Author
-
Del Fatto, Vincenzo; Paolino, Luca; Sebillo, Monica; Vitiello, Giuliana; Tortora, Genoveffa; Bertolotto, Michela (ed.); Ray, Cyril (ed.); Li, Xiang (ed.)
- Published
- 2008
- Full Text
- View/download PDF
3. ECG compression with Douglas-Peucker algorithm and fractal interpolation
- Author
-
Hafedh Belmabrouk, Abdullah Bajahzar, and Hichem Guedri
- Subjects
ECG compression, Douglas–Peucker algorithm (DP), fractal interpolation, iterated function system (IFS), electrocardiogram (ECG), compression ratio, data compression, Ramer–Douglas–Peucker algorithm - Abstract
In this paper, we propose a new ECG compression method based on the fractal technique. The proposed approach exploits the fact that ECG signals are fractal curves. The algorithm consists of three steps: first, the original ECG signals are processed and converted into a 2-D array; second, the Douglas–Peucker (DP) algorithm is used to detect critical points (compression phase); finally, fractal interpolation with an Iterated Function System (IFS) is used to regenerate the missing points (decompression phase). The proposed methodology is tested on different records selected from the PhysioNet database. The results show that the method achieves average compression ratios between 3.19 and 27.58 with a relatively low percentage root-mean-square difference (PRD) compared to other methods. The results also show that the ECG signal adequately retains its detailed structure when the PSNR exceeds 40 dB.
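The critical-point detection step above is the classic Ramer–Douglas–Peucker recursion: keep both endpoints, find the interior point farthest from the chord joining them, and recurse if that distance exceeds a tolerance. A minimal Python sketch of the general algorithm (not the authors' implementation; the tolerance value is illustrative):

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker simplification of a polyline.

    Keeps both endpoints; if the interior point farthest from the
    chord is within epsilon, the run collapses to the chord,
    otherwise the polyline is split there and both halves recurse.
    """
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy)

    def dist(p):
        # Perpendicular distance from p to the chord (falls back to
        # point distance when the chord is degenerate).
        if norm == 0.0:
            return math.hypot(p[0] - x1, p[1] - y1)
        return abs(dy * p[0] - dx * p[1] + x2 * y1 - y2 * x1) / norm

    idx, dmax = max(((i, dist(p)) for i, p in enumerate(points[1:-1], 1)),
                    key=lambda t: t[1])
    if dmax <= epsilon:
        return [points[0], points[-1]]
    left, right = rdp(points[:idx + 1], epsilon), rdp(points[idx:], epsilon)
    return left[:-1] + right
```

For ECG compression, the retained points play the role of the "critical points"; decompression then reconstructs the discarded samples (in the abstract's method, via fractal interpolation).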
- Published
- 2021
4. Validation of Black-and-White Topology Optimization Designs
- Abstract
Topology optimization has seen rapid development, with algorithms getting better and faster all the time. These new algorithms help reduce the lead time from concept development to a finished product. Simulation and post-processing of geometry are among the major development costs, and post-processing in particular is time-consuming and depends on the quality of the geometry output from the solver before the product is ready for rapid prototyping or final production. This thesis deals with post-processing of results from topology optimization algorithms that output the result as a 2D image. A methodology is presented in which this image is processed and converted into a CAD geometry while minimizing deviation in geometry, compliance and volume fraction. A validation of the designs is then performed to measure the extracted geometry's deviation from the post-processed result. The workflow is coded in MATLAB and uses an image-based post-processing approach. It is tested on several numerical examples to assess its performance, limitations and numerical instabilities. The code for the entire workflow is included as an appendix and can be downloaded from https://github.com/M87K452b/postprocessing-topopt.
- Published
- 2021
- Full Text
- View/download PDF
6. Research on 3D geological modeling method based on section thinning-densification and close-range photogrammetry
- Author
-
Ming Hao, Jianlong Zhang, Chao Deng, Zhengwei He, Fan Deng, Donghui Wang, and Ling Xiaoming
- Subjects
3D geological modeling, section thinning-densification, close-range photogrammetry, Ramer–Douglas–Peucker algorithm, rendering (computer graphics) - Abstract
When a 3D model of a geological body is built from sections, points that are too close together or fold points that are too dense cause problems such as structural planes that cannot be closed, slow model rendering, and uneven, long and narrow section triangle networks. To solve these problems, this paper proposes a section thinning-densification method based on the Douglas–Peucker algorithm and an equidistance algorithm. The original section data are first thinned with the Douglas–Peucker algorithm and then densified at equal spacing. The resulting section eliminates unevenly distributed break points on the original section line without affecting its shape, effectively solving the modeling problems caused by break points that are too dense or too sparse in 3D geological modeling. Through research on the modeling method for above-ground structures, an application demonstration of integrated above-ground and underground 3D model construction is carried out in Chengdu International Biological City, which verifies that the integrated modeling technology is feasible, applicable and worthy of promotion.
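Of the two steps described, thinning is plain Douglas–Peucker; the densification step is equidistant resampling of each remaining segment. A sketch of the densification half (a hypothetical helper, not the paper's code; `step` is an assumed spacing parameter):

```python
import math

def densify(points, step):
    """Insert points along each segment of a thinned polyline so that
    consecutive vertices are at most `step` apart (equidistant resampling)."""
    out = [points[0]]
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        seg = math.hypot(x2 - x1, y2 - y1)
        n = max(1, math.ceil(seg / step))  # subdivisions for this segment
        for k in range(1, n + 1):
            t = k / n
            out.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return out
```

Running the thinned section line through such a resampler yields the evenly spaced break points the abstract describes.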
- Published
- 2020
7. A Wireless Sensor Network Model Using Douglas-Peucker Algorithm in The IoT Environment
- Author
-
Se-Jung Lim
- Subjects
Ramer–Douglas–Peucker algorithm, wireless sensor networks, Internet of Things, real-time computing - Published
- 2019
8. Validation of Black-and-White Topology Optimization Designs
- Author
-
Garla Venkatakrishnaiah, Sharath Chandra and Varadaraju, Harivinay
- Subjects
Topology optimization, geometry smoothing, applied mechanics, 88-line code, Ramer–Douglas–Peucker algorithm, post-processing, Savitzky–Golay filter, CAD, tanh filter, interpolation, threshold projection filter - Abstract
Topology optimization has seen rapid development, with algorithms getting better and faster all the time. These new algorithms help reduce the lead time from concept development to a finished product. Simulation and post-processing of geometry are among the major development costs, and post-processing in particular is time-consuming and depends on the quality of the geometry output from the solver before the product is ready for rapid prototyping or final production. This thesis deals with post-processing of results from topology optimization algorithms that output the result as a 2D image. A methodology is presented in which this image is processed and converted into a CAD geometry while minimizing deviation in geometry, compliance and volume fraction. A validation of the designs is then performed to measure the extracted geometry's deviation from the post-processed result. The workflow is coded in MATLAB and uses an image-based post-processing approach. It is tested on several numerical examples to assess its performance, limitations and numerical instabilities. The code for the entire workflow is included as an appendix and can be downloaded from https://github.com/M87K452b/postprocessing-topopt.
- Published
- 2021
9. A Chain-Based Wireless Sensor Network Model Using the Douglas-Peucker Algorithm in the Iot Environment
- Author
-
Se-Jung Lim
- Subjects
BCFA, chain formation, data collection, Douglas–Peucker line-simplification algorithm, Internet of Things (IoT), PEGASIS, wireless sensor networks (WSNs), Ramer–Douglas–Peucker algorithm, computer networks - Abstract
WSNs, a major component of the IoT, mainly use interconnected intelligent wireless sensors. These sensors monitor and gather data from their surroundings and then deliver them to users or to remotely connected IoT devices. One of the main issues in WSNs is that sensor nodes are generally battery-powered, and in rugged environments it is difficult to replenish their energy. Another issue is unbalanced energy consumption among sensor nodes caused by an uneven distribution of sensors. For these reasons, nodes die as their energy is exhausted and network performance can decrease rapidly. Hence, efficient algorithms for prolonging the network lifetime of WSNs are an important challenge. In this paper, a chain-based wireless sensor network model is proposed to improve network performance through balanced energy consumption by solving the long-distance communication problem. The proposed algorithm consists of three phases: segmentation, chain formation, and data collection. In the segmentation phase, an optimal distance tolerance is determined and the network field is divided into small sub-regions according to its value. Chain formation starts from the sub-region farthest from the sink and is then extended; sensed data are collected along a chain and transmitted to the sink. Simulations comparing the model with PEGASIS and Enhanced PEGASIS were performed using the OMNeT++ simulator. The results show that the proposed algorithm prolongs the network lifetime by achieving balanced energy consumption compared to PEGASIS and Enhanced PEGASIS, and it can be used in any application to improve the network performance of WSNs.
- Published
- 2021
10. lFIT: an unsupervised discretization method based on the Ramer–Douglas–Peucker algorithm
- Author
-
Furkan Goz, Alev Mutlu, and Orhan Akbulut
- Subjects
discretization, line fitting, Ramer–Douglas–Peucker algorithm, standard error of the estimate, unsupervised learning, preprocessing, benchmarking - Abstract
Discretization is the process of converting continuous values into discrete values. It is a preprocessing step of several machine learning and data mining algorithms, and the quality of discretization can drastically affect their performance. In this study we propose a discretization algorithm, line fitting-based discretization (lFIT), based on the Ramer–Douglas–Peucker algorithm. It is a static, univariate, unsupervised, splitting-based, global, and incremental discretization method in which intervals are determined by the Ramer–Douglas–Peucker algorithm and the quality of a partitioning is assessed by the standard error of the estimate. To evaluate the proposed method, experiments are conducted on ten benchmark datasets and the results are compared to those of eight state-of-the-art discretization methods. The experiments show that lFIT achieves higher predictive accuracy and produces fewer inconsistencies while generating a larger number of intervals. The results are also validated with Friedman's test and Holm's post hoc test, which show that lFIT produces discretization schemes statistically comparable to both supervised and unsupervised discretization methods.
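The core idea, treating the sorted attribute as a curve and letting Ramer–Douglas–Peucker pick the breakpoints that become interval boundaries, can be sketched as follows. This illustrates the general idea only, not the published lFIT method: the standard-error quality check is omitted and `eps` is an assumed tolerance.

```python
import math

def _rdp(points, eps):
    # Standard Ramer-Douglas-Peucker on a list of (x, y) pairs.
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy)

    def dist(p):
        if norm == 0.0:
            return math.hypot(p[0] - x1, p[1] - y1)
        return abs(dy * p[0] - dx * p[1] + x2 * y1 - y2 * x1) / norm

    i, dmax = max(((i, dist(p)) for i, p in enumerate(points[1:-1], 1)),
                  key=lambda t: t[1])
    if dmax <= eps:
        return [points[0], points[-1]]
    return _rdp(points[:i + 1], eps)[:-1] + _rdp(points[i:], eps)

def rdp_cut_points(values, eps):
    """Sort the attribute, view (rank, value) as a curve, simplify it
    with RDP, and return the retained values as interval boundaries."""
    curve = list(enumerate(sorted(values)))
    return [v for _, v in _rdp(curve, eps)]
```

Values that sit on a near-linear stretch of the sorted curve collapse into one interval, while jumps in the value distribution survive as cut points.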
- Published
- 2019
11. A method for simplifying ship trajectory based on improved Douglas–Peucker algorithm
- Author
-
Liangbin Zhao and Guo-You Shi
- Subjects
automatic identification system (AIS), trajectory compression, Ramer–Douglas–Peucker algorithm, ocean engineering, compression time - Abstract
The automatic identification system (AIS) provides massive ship trajectory data that are valuable for mining information about water traffic. However, their large size makes the data difficult to store, query, and process. In the present study, to better compress ship trajectory data with respect to compression time and efficiency, a method based on an improved Douglas–Peucker (DP) algorithm is presented. During compression, the proposed method considers the shape of the vessel trajectory derived from the course information of the track points. Parallel experiments were conducted on AIS data gathered over one month around the Chinese Zhoushan islands. The results indicate that the method effectively compresses ship trajectory information. Compared with the traditional DP algorithm, it significantly reduces compression time and performs better at high compression strengths; it also outperforms other existing trajectory compression algorithms in terms of compression time.
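The key idea, using course (heading) change at track points to steer the simplification, can be illustrated with a simple pre-filter that keeps points where the heading turns sharply. This is a hedged sketch of the general idea, not the authors' exact algorithm; `course_tol_deg` is an illustrative parameter:

```python
import math

def course_keypoints(track, course_tol_deg=15.0):
    """Keep track points where the heading change between successive
    legs exceeds course_tol_deg; both endpoints are always kept."""
    def bearing(p, q):
        return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

    kept = [track[0]]
    for prev, cur, nxt in zip(track, track[1:], track[2:]):
        turn = abs(bearing(prev, cur) - bearing(cur, nxt))
        turn = min(turn, 360.0 - turn)  # wrap the difference into [0, 180]
        if turn > course_tol_deg:
            kept.append(cur)
    kept.append(track[-1])
    return kept
```

In a full course-aware DP method, such turn points would be protected as key points and the remaining runs simplified by distance threshold as usual.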
- Published
- 2018
12. RectMap: A Boundary‐Reserved Map Deformation Approach for Visualizing Geographical Map
- Author
-
Zhai Shuangpo, Si Li, Liang Ronghua, and Guodao Sun
- Subjects
cartographic generalization, geographic information systems, big data, information visualization, Ramer–Douglas–Peucker algorithm, map deformation - Abstract
Spatial visualization has always been a primary part of information visualization and analysis, especially in the era of big data. The map, the most fundamental component of spatial visualization, is a simple, intuitive and popular way to visualize geographic information. Traditional maps are inconvenient for overlaying complex elements because of their complex fill colors and actual geographical boundaries. We aim to cut off the dusty foliage of such maps and deliver the main structure of the map visualization. We propose RectMap, a boundary-reserved map deformation approach for visualizing geographical maps that maintains the mental map of the original. The approach integrates the traditional Douglas–Peucker algorithm with our gridding algorithm: the Douglas–Peucker algorithm generates a simplified map, and the gridding algorithm optimizes this initial simplification. A case study and a user study are conducted to demonstrate the effectiveness and usefulness of the new-style map.
- Published
- 2018
13. Reconstruction of Medical Images Using Artificial Bee Colony Algorithm
- Author
-
Wan Zuki Azman Wan Muhamad, Nurshazneem Roslan, Zainor Ridzuan Yahya, and Nur ‘Afifah Rusdi
- Subjects
binary images, boundary detection, Bézier curves, artificial bee colony algorithm, DICOM, Ramer–Douglas–Peucker algorithm, curve fitting, parameterization - Abstract
The goal of this study is to assess the efficiency of the Artificial Bee Colony (ABC) algorithm in finding the optimal solution of the curve fitting problem, specifically for medical images. Computed tomography (CT) images from two different patients were collected. The curve fitting procedure for medical images includes conversion of Digital Imaging and Communications in Medicine (DICOM) images to binary images, boundary and corner point detection, parameterization, and curve reconstruction using the ABC algorithm. The sum of squared errors (SSE) was then used to measure the distance between the fitted cubic Bézier curve and the boundary of the original images. Based on the calculations and parameter tuning performed, the smallest errors for the two skulls are 57.5754 and 28.8628, respectively. The findings illustrate that the proposed method efficiently produces fitted Bézier curves that resemble the original medical images. In addition, the Douglas–Peucker algorithm helps improve the performance of the proposed method by reducing computational time. This study shows that the proposed method can serve as an alternative way to reconstruct or redesign medical images, since it produces a small error. In future work, we plan to apply the ABC algorithm to reconstruct missing parts of the skull, which could reduce both the time and the cost of producing skull implants.
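The fitness that an ABC search would minimize in such a setup is the SSE between a sampled cubic Bézier curve and the extracted boundary points. A small sketch of those two ingredients (illustrative helpers, not the authors' code; the pairing of curve samples with boundary points is an assumption):

```python
def bezier3(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]
    using the Bernstein form."""
    s = 1.0 - t
    return tuple(s**3 * a + 3 * s**2 * t * b + 3 * s * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def sse(curve_pts, boundary_pts):
    """Sum of squared distances between paired curve samples and
    boundary points (the quantity an ABC search would minimize)."""
    return sum((xc - xb) ** 2 + (yc - yb) ** 2
               for (xc, yc), (xb, yb) in zip(curve_pts, boundary_pts))
```

In the full method, the swarm proposes candidate control points p1, p2 for each boundary segment and keeps the solution with the smallest SSE.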
- Published
- 2018
14. Design and implementation of a line simplification algorithm for network measurement system.
- Author
-
Liu, Ziluan, Jin, Yuehui, Cui, Yidong, and Wang, Qiyao
- Abstract
This paper is based on TRAK, an existing distributed system for network performance measurement. In TRAK, SVG is used to draw a series of result line charts; however, it is impossible to display all the data in a single webpage. In this paper we introduce an improved line simplification algorithm that reduces the result data set while preserving the perceptual characteristics of the result line as much as possible and retaining the special characteristics of network performance measurement. Moreover, when the result charts are partially magnified, we apply an anti-simplification procedure to add detail back. The paper uses the one-way delay result chart as an example to introduce these procedures in detail, and experimental tests show that the improved line simplification algorithm is both computationally efficient and preserves the characteristics of the original line well.
- Published
- 2011
- Full Text
- View/download PDF
15. A Novel Global Pattern Recognition Algorithm
- Author
-
Joseph M. Stoffa and Alfred H. Stiller
- Subjects
pattern recognition, fingerprint recognition, binary images, equal error rate, Fingerprint Verification Competition, Ramer–Douglas–Peucker algorithm - Abstract
The background, development, performance assessment, and analysis of a novel pattern recognition algorithm that is applicable to any set of binary images are discussed. The efficacy of the algorithm when applied to the problem of fingerprint recognition is quantified. The conclusion was that the algorithm is relatively poor as a fingerprint identification algorithm, averaging an equal error rate of approximately 19% as calculated by the rules specified in the Year 2000 Fingerprint Verification Competition. The positive attributes of the algorithm were its ultra-fast matching times, orientation independence, lack of rejection events, relative insensitivity to resolution difference, and one-way transformations. The mechanism of algorithm operation as applied to fingerprints was investigated using integral geometry. This investigation showed that the algorithm was an indirect measure of ridge width, which explained the algorithm’s relatively poor performance. Another set of experiments suggests that the algorithm may be well-suited to other pattern recognition problems, specifically cloud and precipitation particle recognition and camouflage recognition. In summary, the research extends the field of pattern recognition by developing, assessing the performance, and determining the mechanism of operation of a novel pattern recognition algorithm that is applicable to any set of binary images.
- Published
- 2019
16. Feature Extraction for Eye Movement Video Data
- Author
-
Qingxiang Wang and Yifang Yuan
- Subjects
feature extraction, eye tracking, eye movement video, Hough transform, Ramer–Douglas–Peucker algorithm, computer vision - Abstract
Random objects in videos are common stimuli in eye tracker based studies, and their locations and times of appearance need to be detected in related research such as depression detection. In this paper, we propose a new method to extract features from eye movement video data captured by the SMI eye tracker. First, we provide a feature extraction method that uses the circle Hough transform and the Douglas–Peucker algorithm to extract features from each frame of the eye movement video, and verify its validity on eye movement video data. Second, because the eye tracker's recorded timestamps are more accurate than the on-screen time of the exported video, we extract the eye tracker's timestamps to improve the quality of feature extraction. Finally, we add a batch processing function to improve the efficiency of the experiment. Experimental results show that the method extracts eye movement features from the video accurately and effectively.
- Published
- 2019
17. A New Algorithm of Stroke Generation Considering Geometric and Structural Properties of Road Network
- Author
-
Wenjing Li and Yi Liu
- Subjects
road networks, stroke generation, angle threshold, information entropy, Douglas–Peucker algorithm, stroke selection, Ramer–Douglas–Peucker algorithm - Abstract
Strokes are considered an elementary unit of road networks and have been widely used in their analysis and application. However, most conventional stroke generation methods are based solely on a fixed angle threshold, which ignores the geometric and structural properties of road networks. To remedy this, this paper proposes an algorithm for generating strokes that takes these additional geometric and structural properties into account and reduces the impact of stroke generation on road network quality. To this end, we introduce a model of feature-based information entropy and use it to calculate a road network's information volume at both the elemental and neighborhood levels. To make the experimental results more objective, we use the Douglas–Peucker algorithm to simplify the information change curve and obtain the optimal angle threshold range for generating strokes for different road network structures. Finally, we apply the model to three different road networks; the optimal threshold ranges are 54°–63° (Chicago), 61° (Moscow), and 45°–48° (Monaco). Taking Monaco as an example, we conduct stroke selection experiments. The results demonstrate that the proposed algorithm yields better connectivity and wider coverage than algorithms based on a common angle threshold (60°).
- Published
- 2019
18. Boxed-constraint least mean square algorithm and its performance analysis
- Author
-
Haiquan Zhao and Wenyuan Wang
- Subjects
least mean squares filter, Karush–Kuhn–Tucker conditions, constrained adaptive filtering, upper and lower bounds, Ramer–Douglas–Peucker algorithm, signal processing - Abstract
In this paper, a novel adaptive filter algorithm, the boxed-constraint least mean square (BXCLMS) algorithm, is proposed for identifying a boxed-constrained system in which the parameter to be estimated is limited to a range between a lower and an upper bound. The algorithm is derived using the Karush–Kuhn–Tucker (KKT) conditions and a fixed-point iteration algorithm. In addition, the stochastic behavior of the proposed algorithm is analyzed in terms of mean and mean-square performance. Finally, simulations are carried out to demonstrate the performance of the BXCLMS algorithm and verify the correctness of the analytical results.
- Published
- 2018
19. A method for compressing AIS trajectory data based on the adaptive-threshold Douglas-Peucker algorithm
- Author
-
Huaran Yan, Jiahuan Zhao, Yingjie Xiao, Yuanqing Tang, Chunhua Tang, and Han Wang
- Subjects
automatic identification system (AIS), trajectory compression, adaptive threshold, data compression ratio, synchronous Euclidean distance, sliding window, Ramer–Douglas–Peucker algorithm - Abstract
The deficiencies of massive data, such as storage difficulty, computational inefficiency, and information redundancy, call for ship trajectory compression. Most studies on ship trajectory compression suffer from one or more drawbacks: low compression efficiency; errors introduced when the distance threshold is set from inaccurate ship static information such as length or width; and poor compression quality for some trajectories, caused by experience-based selection of the optimal threshold. To solve these problems, we propose the adaptive-threshold Douglas–Peucker (ADP) algorithm based on the Douglas–Peucker (DP) algorithm. By determining the key points of each trajectory through the threshold change rate, ADP no longer relies on ship static information and makes the threshold easier to determine, which traditional algorithms cannot achieve. Additionally, we use matrix operations and a point-reduction method to improve the algorithm's computational efficiency. To verify the feasibility and superiority of the proposed algorithm, we compared it with the DP algorithm, the Partition-DP algorithm, and the Sliding Window algorithm in four respects: compression rate, synchronous Euclidean distance, length loss rate, and running time. The experimental results show that our algorithm outperforms the other three, especially in threshold setting.
- Published
- 2021
20. Performance analysis of diamond search algorithm over full search algorithm
- Author
-
Rampal Bhadu, Vijay Nath, Surender Kumar Soni, and Rahul Priyadarshi
- Subjects
motion estimation, block-matching algorithms, diamond search, full search, video compression, peak signal-to-noise ratio, Ramer–Douglas–Peucker algorithm - Abstract
Motion estimation is a process used to estimate motion vectors between two or more images with a high degree of temporal redundancy. It is commonly used in video compression to attain high compression ratios, and in several applications for object tracking. In this paper a novel approach to the diamond search algorithm is recommended to overcome the problems encountered by several existing block-matching algorithms, especially the full search algorithm, in terms of peak signal-to-noise ratio, required number of search points, and computational complexity. Simulation results show that the recommended algorithm performs well compared with all existing algorithms. Experimentally, 88-99% of the motion vectors are found inside a circle of radius 3 pixels centered at the zero-motion position. The proposed algorithm is applied to various standards such as MPEG-1 and MPEG-4.
- Published
- 2017
21. A two-phase algorithm for point-feature cartographic label placement
- Author
-
Yuan Ding, Changbin Wu, Nan Jiang, and Xinxin Zhou
- Subjects
021103 operations research ,Automatic label placement ,Backtracking ,Heuristic (computer science) ,0211 other engineering and technologies ,Initialization ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,Exact algorithm ,010201 computation theory & mathematics ,Ramer–Douglas–Peucker algorithm ,Feature (computer vision) ,General Earth and Planetary Sciences ,Greedy algorithm ,Cartography ,Algorithm ,Mathematics - Abstract
Point-feature cartographic label placement (PFCLP) involves placing labels adjacent to their corresponding point features on a map. A widely accepted goal of PFCLP is to maximize the number of conflict-free labels. This paper presents an algorithm for PFCLP based on the four-slider (4S) model. The algorithm is composed of two phases: an initialization phase, during which an initial solution is constructed by an exact algorithm and a heuristic method to maximize the probability of conflict-free labels, followed by an improvement phase that adopts a backtracking greedy search. The exact algorithm can find a portion of the conflict-free labels in an optimal solution, and an extension of the exact algorithm is provided that can find additional conflict-free labels. Computational tests were performed on instances based on standard sets. The two-phase algorithm generated better solutions than all methods previously reported in the literature. It also executes at a reasonable speed and is more stable than most other methods.
- Published
- 2017
22. A general iterative algorithm for vector equilibrium problem
- Author
-
Jin-xia Huang, San-hua Wang, and Jia-yu Mao
- Subjects
Linde–Buzo–Gray algorithm ,Algebra and Number Theory ,Ramer–Douglas–Peucker algorithm ,Iterative method ,Equilibrium problem ,Algorithm ,Analysis ,Mathematics - Published
- 2017
23. Quantum Algorithm for K-Nearest Neighbors Classification Based on the Metric of Hamming Distance
- Author
-
Yue Ruan, Xiling Xue, Jianing Tan, Xi Li, and Heng Liu
- Subjects
Theoretical computer science ,Physics and Astronomy (miscellaneous) ,General Mathematics ,Hamming distance ,01 natural sciences ,010305 fluids & plasmas ,k-nearest neighbors algorithm ,Hamming graph ,Ramer–Douglas–Peucker algorithm ,0103 physical sciences ,Metric (mathematics) ,Quantum algorithm ,010306 general physics ,Hamming weight ,Algorithm ,FSA-Red Algorithm ,Mathematics - Abstract
K-nearest neighbors (KNN) is a common algorithm used for classification, and also a subroutine in various complicated machine learning tasks. In this paper, we present a quantum algorithm (QKNN) implementing this algorithm based on the metric of Hamming distance. We put forward a quantum circuit for computing the Hamming distance between a testing sample and each feature vector in the training set. Taking advantage of this method, we realize a good analog of the classical KNN algorithm by setting a distance threshold value t to select the k nearest neighbors. As a result, QKNN achieves O(n³) performance, which depends only on the dimension of the feature vectors, and high classification accuracy, outperforming Lloyd's algorithm (Lloyd et al. 2013) and Wiebe's algorithm (Wiebe et al. 2014).
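The classical KNN-over-Hamming-distance procedure that QKNN emulates can be sketched as follows (a classical sketch only; the quantum circuit and threshold mechanism are not reproduced):

```python
from collections import Counter

def hamming(a, b):
    """Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def knn_hamming(train, test_vec, k):
    """Classify test_vec by majority vote among the k nearest training
    vectors under Hamming distance. train is a list of (bits, label)."""
    neighbors = sorted(train, key=lambda item: hamming(item[0], test_vec))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```

The quantum version replaces the per-vector distance loop with a circuit that evaluates all training distances in superposition.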
- Published
- 2017
24. Block composition algorithm for constructing orthogonal n-ary operations
- Author
-
Fedir Sokhatsky and Iryna V. Fryz
- Subjects
Discrete mathematics ,Push–relabel maximum flow algorithm ,Binary GCD algorithm ,Series (mathematics) ,Composition (combinatorics) ,Arity ,Theoretical Computer Science ,Combinatorics ,TheoryofComputation_MATHEMATICALLOGICANDFORMALLANGUAGES ,Ramer–Douglas–Peucker algorithm ,Block (programming) ,Discrete Mathematics and Combinatorics ,Algorithm ,Mathematics - Abstract
We propose an algorithm for constructing orthogonal n-ary operations, called here a block composition algorithm. The input to the algorithm is two series of operations of different arities, distributed in blocks. The algorithm consists of two parts: a composition algorithm for constructing n-ary operations with orthogonal retracts from given blocks of operations, and a block-wise recursive algorithm for constructing orthogonal n-ary operations from the obtained operations. The results are illustrated by examples of orthogonal n-ary operations that are constructible by the block-wise recursive algorithm but not by the well-known trivial recursive algorithm.
- Published
- 2017
25. A clustering algorithm with affine space-based boundary detection
- Author
-
Xiangli Li, Baozhi Qiu, and Qiong Han
- Subjects
DBSCAN ,Clustering high-dimensional data ,Mathematical optimization ,Fuzzy clustering ,Computer science ,Correlation clustering ,Single-linkage clustering ,02 engineering and technology ,Matrix (mathematics) ,Artificial Intelligence ,CURE data clustering algorithm ,Ramer–Douglas–Peucker algorithm ,020204 information systems ,Nearest-neighbor chain algorithm ,0202 electrical engineering, electronic engineering, information engineering ,Cluster analysis ,k-medians clustering ,Harris affine region detector ,k-medoids ,ComputingMethodologies_PATTERNRECOGNITION ,Data stream clustering ,Affine space ,Canopy clustering algorithm ,Affinity propagation ,020201 artificial intelligence & image processing ,Affine transformation ,Algorithm - Abstract
Clustering is an important technique in data mining. The innovative algorithm proposed in this paper obtains clusters by first identifying boundary points as opposed to existing methods that calculate core cluster points before expanding to the boundary points. To achieve this, an affine space-based boundary detection algorithm was employed to divide data points into cluster boundary and internal points. A connection matrix was then formed by establishing neighbor relationships between internal and boundary points to perform clustering. Our clustering algorithm with an affine space-based boundary detection algorithm accurately detected clusters in datasets with different densities, shapes, and sizes. The algorithm excelled at dealing with high-dimensional datasets.
- Published
- 2017
26. From Al-Khwarizmi to Algorithm
- Author
-
Bahman Mehri
- Subjects
Freivalds' algorithm ,Shortest Path Faster Algorithm ,Dinic's algorithm ,Computer science ,Ramer–Douglas–Peucker algorithm ,Population-based incremental learning ,Suurballe's algorithm ,Yen's algorithm ,Algorithm ,FSA-Red Algorithm - Published
- 2017
27. Scaling iterative closest point algorithm using dual number quaternions
- Author
-
Shaokun Han, Wenze Xia, Jingya Cao, Jie Cao, and Haoyong Yu
- Subjects
0209 industrial biotechnology ,Computer science ,Feature extraction ,Point cloud ,Iterative closest point ,Point set registration ,02 engineering and technology ,Atomic and Molecular Physics, and Optics ,Electronic, Optical and Magnetic Materials ,020901 industrial engineering & automation ,Transformation (function) ,Ramer–Douglas–Peucker algorithm ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Point (geometry) ,Electrical and Electronic Engineering ,Quaternion ,Algorithm ,Scaling - Abstract
This paper presents a novel extended iterative closest point algorithm that can handle the 3D point set registration problem with a scaling factor. In each iteration of this algorithm, a scaling factor is introduced into the objective function used to estimate all the transformation parameters, and this objective function is solved using dual-number quaternions. If the corresponding point pairs have been established properly, this algorithm can give all the globally optimal transformation parameters in a one-step calculation rather than an iterative process. This algorithm is independent of shape representation and feature extraction, and is thereby general for scaling registration of 3D point sets. The effectiveness of the proposed algorithm is demonstrated in several experiments on simulated 3D curves and real 3D point clouds. Furthermore, the execution time of the proposed algorithm is analyzed in a comparative experiment. The final results demonstrate that this algorithm can successfully handle the 3D point set registration problem with a scaling factor and runs faster on large 3D point sets.
- Published
- 2017
28. Fast source term estimation using the PGA-NM hybrid method
- Author
-
Hui Li and Jianwen Zhang
- Subjects
Mathematical optimization ,Optimization problem ,Simplex ,010504 meteorology & atmospheric sciences ,Computer simulation ,Computer science ,Computation ,Initialization ,02 engineering and technology ,01 natural sciences ,Hybrid algorithm ,Simplex algorithm ,Artificial Intelligence ,Control and Systems Engineering ,Robustness (computer science) ,Ramer–Douglas–Peucker algorithm ,Genetic algorithm ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Electrical and Electronic Engineering ,Algorithm ,0105 earth and related environmental sciences - Abstract
There are significant challenges in estimating the source term of an atmospheric release. Driven by the need for robots to perform emergency response tasks, a fast and accurate algorithm for this inversion problem is indispensable. The NM simplex algorithm is often efficient for such optimization problems, but its convergence can break down numerically, even for smooth and well-behaved functions. In contrast, parallel genetic algorithms (PGA) achieve full convergence, but comparatively slowly. In this paper we combine the PGA and the NM simplex algorithm by initializing the simplex from the final individuals of the PGA results and then obtaining the best vertex through the simplex algorithm. A numerical simulation of the proposed algorithm shows a noteworthy improvement in efficiency and robustness compared with the PGA or the NM algorithm alone. Highlights: We shortened the computation time on an embedded system to less than half a second, which meets the need to perform emergency tasks. The algorithm is tolerant of measurement errors. We combined the PGA and NM algorithms, which has not been reported in other papers. The PGA-NM hybrid algorithm could be developed and applied to other optimization problems in a similar way.
- Published
- 2017
29. Copyright Protection Based on Zero Watermarking and Blockchain for Vector Maps
- Author
-
Yazhou Zhao, Changqing Zhu, Xu Dingjie, Qifei Zhou, and Na Ren
- Subjects
blockchain ,Computer science ,Data_MISCELLANEOUS ,Geography, Planning and Development ,ComputingMilieux_LEGALASPECTSOFCOMPUTING ,02 engineering and technology ,computer.software_genre ,Data type ,Robustness (computer science) ,Ramer–Douglas–Peucker algorithm ,0202 electrical engineering, electronic engineering, information engineering ,Earth and Planetary Sciences (miscellaneous) ,Vector map ,vector data ,copyright protection ,Computers in Earth Sciences ,Bitwise operation ,Digital watermarking ,Lossless compression ,Geography (General) ,020208 electrical & electronic engineering ,Watermark ,zero watermarking ,G1-922 ,020201 artificial intelligence & image processing ,Data mining ,computer ,Douglas–Peucker algorithm - Abstract
Zero watermarking does not alter the original information contained in vector map data and provides perfect imperceptibility. The use of zero watermarking for data copyright protection has become a significant trend in digital watermarking research. However, zero watermarking faces tremendous obstacles to its development and application because it requires copyright information to be stored with a third party and has difficulty confirming copyright ownership. Aiming at these shortcomings of existing zero watermarking technology, this paper proposes a new zero watermarking construction method based on the angular features of vector data, which stores the zero watermark and copyright information on the blockchain after an XOR operation. When the watermark is extracted, the copyright information can be recovered with an XOR operation against the information stored on the blockchain. Experimental results show that the combination of zero watermarking and blockchain proposed in this paper leverages the advantages of both technologies and protects the copyright of data in a lossless fashion. Compared with traditional zero watermarking algorithms, the proposed algorithm exhibits stronger robustness. Moreover, the proposed data copyright protection framework combining zero watermarking and blockchain can also be applied to other data types, such as images, audio, video, and remote sensing images.
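The XOR construction and extraction steps can be sketched at the bit level (a minimal illustration; the angular-feature extraction from the vector data is abstracted into a hypothetical `feature_bits` input, and the blockchain storage is omitted):

```python
def construct_zero_watermark(feature_bits, copyright_bits):
    """XOR the data-derived feature bits with the copyright bits; the result
    is what gets registered (e.g. on a blockchain), leaving the data untouched."""
    assert len(feature_bits) == len(copyright_bits)
    return [f ^ c for f, c in zip(feature_bits, copyright_bits)]

def extract_copyright(feature_bits, zero_watermark):
    """XOR the registered zero-watermark with freshly recomputed feature bits
    to recover the copyright bits (XOR is its own inverse)."""
    return [f ^ w for f, w in zip(feature_bits, zero_watermark)]
```

Because the data itself is never modified, robustness reduces to how stable the recomputed feature bits are under attacks on the vector data.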
- Published
- 2021
30. MODIFIED SELECTION OF INITIAL CENTROIDS FOR K-MEANS ALGORITHM
- Author
-
Aleta C. Fabregas, Bobby D. Gerardo, and Bartolome T. Tanguilig
- Subjects
0301 basic medicine ,business.industry ,Population-based incremental learning ,k-means clustering ,Initialization ,Pattern recognition ,03 medical and health sciences ,030104 developmental biology ,Ramer–Douglas–Peucker algorithm ,Canopy clustering algorithm ,Artificial intelligence ,Cluster analysis ,business ,Selection (genetic algorithm) ,FSA-Red Algorithm ,Mathematics - Abstract
This study focuses on improved initialization of the initial centroids, instead of random selection, for the K-means algorithm. The random selection of initial seeds is a major drawback of the original K-means algorithm because it leads to less reliable clustering results. The modified K-means algorithm integrates the computation of a weighted mean to improve seed initialization. This paper compares the K-means and modified K-means algorithms using a first simple dataset of four objects and a dataset of service vehicles. The two simple applications show that the modified K-means selection of initial centroids is more reliable than that of the original K-means algorithm, and clustering is better achieved by the modified algorithm. Article DOI: http://dx.doi.org/10.20319/mijst.2016.22.4864 This work is licensed under the Creative Commons Attribution-Non-commercial 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
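Lloyd's k-means iteration with caller-supplied seeds can be sketched as below; since the abstract does not fully specify the weighted-mean seeding, the initialization is left as a parameter (a sketch under that assumption, not the paper's implementation):

```python
def kmeans(points, k, init_centroids, iters=100):
    """Lloyd's k-means on 2-D points with caller-supplied initial centroids.
    Returns (centroids, clusters); better seeds -> more reliable clusters."""
    centroids = list(init_centroids)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: (p[0] - centroids[i][0]) ** 2
                                + (p[1] - centroids[i][1]) ** 2)
            clusters[j].append(p)
        # Update step: move each centroid to the mean of its cluster.
        new = [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
               if c else centroids[i] for i, c in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return centroids, clusters
```

With well-chosen (e.g. weighted-mean) seeds the assignment stabilizes in few iterations; with random seeds the same data can converge to different partitions.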
- Published
- 2017
31. ℓp-Based complex approximate message passing with application to sparse stepped frequency radar
- Author
-
Le Zheng, Xiaodong Wang, Quanhua Liu, and Arian Maleki
- Subjects
Mathematical optimization ,Computer science ,Message passing ,0211 other engineering and technologies ,020206 networking & telecommunications ,02 engineering and technology ,Least squares ,Signal ,Oracle ,law.invention ,Compressed sensing ,Sampling (signal processing) ,Lasso (statistics) ,Control and Systems Engineering ,law ,Ramer–Douglas–Peucker algorithm ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Radar ,Algorithm ,Software ,021101 geological & geomatics engineering - Abstract
Compressed sensing exploits the sparsity of the signal to reduce the sampling rate while keeping the resolution fixed, and has been widely used. In this paper we propose a new algorithm, called adaptive ℓp-CAMP, and show its application to sparse stepped frequency radar signal processing. Our algorithm is inspired by the complex approximate message passing (CAMP) algorithm that solves the complex-valued LASSO. The following properties make the proposed algorithm superior to existing ones: (1) All parameters of the algorithm are tuned dynamically and optimally; the algorithm does not require any information about the signal, yet tunes its parameters as well as an oracle that has all the signal information. (2) Adaptive ℓp-CAMP is designed to solve the complex-valued ℓp-regularized least squares problem for 0 < p ≤ 1, and hence can outperform CAMP. The performance of the proposed algorithm is verified by simulations and by data collected from a real radar system. Highlights: A new compressed sensing algorithm called adaptive ℓp-CAMP is proposed. The proposed algorithm can be applied to sparse stepped frequency radar signal processing. The performance of the proposed algorithm is verified by simulations and real data.
- Published
- 2017
32. Recursive prediction algorithm for non-stationary Gaussian Process
- Author
-
Guiming Luo and Yulai Zhang
- Subjects
Computer science ,Inference ,Recursive partitioning ,02 engineering and technology ,Field (computer science) ,Term (time) ,Data set ,symbols.namesake ,Hardware and Architecture ,Ramer–Douglas–Peucker algorithm ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,020201 artificial intelligence & image processing ,Gaussian process ,Algorithm ,Software ,Information Systems ,FSA-Red Algorithm - Abstract
Highlights: The original exact inference algorithm of the GP model runs very slowly. We developed a recursive inference algorithm as an improvement; the new algorithm obtains the same result in a shorter time and works well for real-time online prediction problems. The Gaussian Process is a theoretically rigorous model for prediction problems. One deficiency of this model is that its original exact inference algorithm is computationally intractable, so its applications in real-time online prediction are limited. In this paper, a recursive prediction algorithm based on the Gaussian Process model is proposed. In recursive algorithms, the computation time of the next step can be greatly reduced by reusing the intermediate results of the current step. The proposed recursive algorithm accelerates prediction while avoiding any loss of accuracy. Experiments on an ultra-short-term electric load data set demonstrate the accuracy and efficiency of the new algorithm.
- Published
- 2017
33. Normalised Spline Adaptive Filtering Algorithm for Nonlinear System Identification
- Author
-
Zhi Li and Sihai Guan
- Subjects
Mathematical optimization ,Nonlinear system identification ,Computational complexity theory ,Computer Networks and Communications ,General Neuroscience ,020206 networking & telecommunications ,02 engineering and technology ,Least mean squares filter ,Adaptive filter ,Spline (mathematics) ,Artificial Intelligence ,Autocorrelation matrix ,Ramer–Douglas–Peucker algorithm ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Algorithm ,Software ,Mathematics ,FSA-Red Algorithm - Abstract
This paper proposes a normalised spline adaptive filtering algorithm to improve the stability of the spline adaptive filtering (SAF) algorithm against the eigenvalue spread of the autocorrelation matrix of the input signal. The new adaptive filtering algorithm, called SAF-NLMS, is based on the normalised least mean square (NLMS) approach, and the admissible range of its learning rate is specified. First, the derivation of the SAF-NLMS algorithm is given. Second, detailed convergence and computational complexity analyses are carried out. Finally, the performance of the proposed algorithm is tested on artificial and real datasets. The results show good performance, so in practical engineering the algorithm can be used for the modeling or identification of nonlinear systems.
- Published
- 2017
34. An iterative algorithm for solving sparse linear equations
- Author
-
Stephen G. Walker
- Subjects
Statistics and Probability ,Mathematical optimization ,Iterative proportional fitting ,Matrix-free methods ,Iterative method ,05 social sciences ,MathematicsofComputing_NUMERICALANALYSIS ,010103 numerical & computational mathematics ,System of linear equations ,01 natural sciences ,Generalized minimal residual method ,law.invention ,Invertible matrix ,law ,Ramer–Douglas–Peucker algorithm ,Modeling and Simulation ,Cuthill–McKee algorithm ,0502 economics and business ,Statistics ,Applied mathematics ,050211 marketing ,0101 mathematics ,Mathematics - Abstract
This article introduces a new iterative technique for solving systems of linear equations of the kind Ax = b. Convergence, at a given rate, is guaranteed when the square nonsingular matrix A is non-negative. The iterative algorithm depends on a scheme derived from Bayesian updating. The algorithm is shown to compare very favorably with the widely used GMRES routine. Being easy to code, it has the potential to be highly usable.
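The abstract does not detail the Bayesian-updating scheme itself; as a point of comparison, a standard stationary iterative solver of the same Ax = b form (Jacobi iteration, a swapped-in technique, convergent for strictly diagonally dominant A) can be sketched:

```python
def jacobi(A, b, iters=200):
    """Solve Ax = b by Jacobi iteration:
    x_i <- (b_i - sum_{j != i} A_ij * x_j) / A_ii.
    Converges when A is strictly diagonally dominant."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x
```

Like the paper's method, each sweep touches only matrix-vector products, so the per-iteration cost is O(n²) for dense A.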
- Published
- 2017
35. Parallel Computing Aspects in Improved Edge Cover Based Graph Coloring Algorithm
- Author
-
Prasun Chakrabarti, Harish Patidar, and Amrit Ghosh
- Subjects
Matching (graph theory) ,Computer science ,Dinic's algorithm ,Parallel algorithm ,02 engineering and technology ,Parallel computing ,Floyd–Warshall algorithm ,Edge cover ,Greedy coloring ,030507 speech-language pathology & audiology ,03 medical and health sciences ,Ramer–Douglas–Peucker algorithm ,Graph power ,0202 electrical engineering, electronic engineering, information engineering ,Graph coloring ,Time complexity ,Sequential algorithm ,Blossom algorithm ,FSA-Red Algorithm ,List coloring ,Hopcroft–Karp algorithm ,Multidisciplinary ,Graph ,Vertex (geometry) ,Edge coloring ,Graph bandwidth ,Independent set ,Reverse-delete algorithm ,020201 artificial intelligence & image processing ,Johnson's algorithm ,Suurballe's algorithm ,Fractional coloring ,0305 other medical science - Abstract
Objective: To improve the Edge Cover based Graph Coloring Algorithm (ECGCA) using independent sets by incorporating parallel computing into the algorithm; finding optimal time complexity is one of the main objectives of this paper. Methods/Statistical Analysis: This paper introduces some modifications to ECGCA. The algorithm is implemented and tested in the Java programming language, using Java multithreading to achieve parallelism, and evaluated on DIMACS graph instances. Findings: The algorithm was tested on more than 75 DIMACS graph instances. To analyze the time complexity, the execution time in seconds is calculated by the program. The test data analyzed in this paper show that the proposed algorithm executes in optimal time for large graphs, and that the proposed parallel algorithm has lower time complexity than the sequential algorithm. Most exact graph coloring algorithms are not suitable for large graphs (more than 100 vertices), but the proposed algorithm was tested on many large graphs and achieved a high execution success rate. Application: This paper shows experimental results on different types of application data, indicating that the algorithm can be used for a wide range of applications.
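As a baseline for comparison, the simple sequential greedy vertex coloring that edge-cover and parallel approaches like the one above are typically measured against can be sketched (a generic textbook sketch, not the ECGCA itself):

```python
def greedy_coloring(adj):
    """Color vertices in sorted order, giving each the smallest color not
    already used by a colored neighbor. adj maps vertex -> set of neighbors.
    Returns a dict vertex -> color (colors are 0, 1, 2, ...)."""
    colors = {}
    for v in sorted(adj):
        used = {colors[u] for u in adj[v] if u in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors
```

The greedy result is valid but not generally optimal; exact and heuristic improvements (and their parallelizations) aim to reduce the color count and the running time on large instances.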
- Published
- 2017
36. A parameter selection method of the deterministic anti-annealing algorithm for network exploring
- Author
-
Jian Yu, Jinghong Wang, and Bianfang Chai
- Subjects
Cognitive Neuroscience ,Mixture model ,01 natural sciences ,Upper and lower bounds ,010305 fluids & plasmas ,Computer Science Applications ,Maxima and minima ,Local optimum ,Rate of convergence ,Artificial Intelligence ,Ramer–Douglas–Peucker algorithm ,0103 physical sciences ,Convergence (routing) ,Expectation–maximization algorithm ,010306 general physics ,Algorithm ,Mathematics - Abstract
The traditional expectation maximization (EM) algorithm for the mixture model can explore the structural regularities of a network efficiently, but it often gets trapped in local maxima. A deterministic annealing EM (DAEM) algorithm has been put forward to solve this problem; however, it slows convergence. A deterministic anti-annealing expectation maximization (DAAEM) algorithm not only avoids poor local optima but also improves convergence speed; thus, the DAAEM algorithm is used to estimate the parameters of the mixture model. This algorithm usually sets its initial parameter β0 by experience, which may produce meaningless results if β0 is too small, or converge to local maxima more frequently if β0 is too large. A parameter selection method for β0 is designed. In our method, the convergence rate of the DAAEM algorithm for the mixture model is first derived from the Jacobian matrix of the posterior probabilities. Then the theoretical lower bound of β0 is computed based on the convergence rate at meaningless points. In our experiments we select β0 by rounding the lower bound up to the nearest tenth. Experiments on real and synthetic networks demonstrate that the parameter selection method is valid, and that the DAAEM algorithm starting from the selected parameter performs better than the EM and DAEM algorithms for the mixture model. In addition, we find that the convergence rate of the DAAEM algorithm is affected by the assortative mixing by degree of a network.
- Published
- 2017
37. Acceleration of the EM algorithm using the Vector Aitken method and its Steffensen form
- Author
-
Qiu yue Li, Xu Guo, and Wang li Xu
- Subjects
Mathematical optimization ,Iterative and incremental development ,InformationSystems_INFORMATIONSYSTEMSAPPLICATIONS ,Applied Mathematics ,010102 general mathematics ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,Aitken's delta-squared process ,01 natural sciences ,Steffensen's method ,010104 statistics & probability ,Acceleration ,Rate of convergence ,Ramer–Douglas–Peucker algorithm ,Computer Science::Multimedia ,Expectation–maximization algorithm ,Convergence (routing) ,0101 mathematics ,Algorithm ,Mathematics - Abstract
Based on the Vector Aitken (VA) method, we propose an accelerated Expectation-Maximization (EM) algorithm, the VA-accelerated EM algorithm, whose convergence is faster than that of the EM algorithm. The VA-accelerated EM algorithm does not use the information matrix but only the sequence of estimates obtained from iterations of the EM algorithm; thus it keeps the flexibility and simplicity of the EM algorithm. Considering the Steffensen iterative process, we also give the Steffensen form of the VA-accelerated EM algorithm, and prove that the resulting process converges quadratically. Numerical experiments illustrate that the proposed methods are efficient and faster than the EM algorithm.
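The scalar Aitken Δ² step underlying the VA method can be sketched directly; applied to a linearly convergent sequence of iterates it extrapolates toward the limit (a generic Δ² sketch, not the paper's vector-valued EM variant):

```python
def aitken_delta_squared(seq):
    """Apply Aitken's Delta^2 to a list of iterates x_n:
    x'_n = x_n - (x_{n+1} - x_n)^2 / (x_{n+2} - 2*x_{n+1} + x_n).
    Falls back to x_{n+2} when the denominator vanishes."""
    out = []
    for x0, x1, x2 in zip(seq, seq[1:], seq[2:]):
        denom = x2 - 2 * x1 + x0
        out.append(x2 if denom == 0 else x0 - (x1 - x0) ** 2 / denom)
    return out
```

For an exactly geometric error (x_n = L + c·r^n), the formula recovers the limit L in one step, which is why it accelerates the linearly convergent EM sequence.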
- Published
- 2017
38. Approximation algorithms for visibility computation and testing over a terrain
- Author
-
Morteza Golkari, Sharareh Alipour, Uğur Güdükbay, Mohammad Ghodsi, and Güdükbay, Uğur
- Subjects
Randomized algorithm ,021103 operations research ,Theoretical computer science ,Computational complexity theory ,Computation ,Geography, Planning and Development ,Visibility (geometry) ,0211 other engineering and technologies ,Approximation algorithm ,Terrain ,02 engineering and technology ,Environmental Science (miscellaneous) ,Visibility computation ,Painter's algorithm ,Visibility counting problem ,Ramer–Douglas–Peucker algorithm ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Earth and Planetary Sciences (miscellaneous) ,Engineering (miscellaneous) ,Visibility testing problem ,ComputingMethodologies_COMPUTERGRAPHICS ,Mathematics - Abstract
Given a 2.5D terrain and a query point p on or above it, we want to find the triangles of terrain that are visible from p. We present an approximation algorithm to solve this problem. We implement the algorithm and test it on real data sets. The experimental results show that our approximate solution is very close to the exact solution and compared to the other similar works, the computational cost of our algorithm is lower. We analyze the computational complexity of the algorithm. We consider the visibility testing problem where the goal is to test whether a given triangle of the terrain is visible or not with respect to p. We present an algorithm for this problem and show that the average running time of this algorithm is the same as the running time of the case where we want to test the visibility between two query points p and q. We also propose a randomized algorithm for providing an estimate of the portion of the visible region of a terrain for a query point. © 2016, Società Italiana di Fotogrammetria e Topografia (SIFET).
- Published
- 2017
39. Fast motion estimation algorithm using multilevel distortion search in Walsh–Hadamard domain
- Author
-
Liang Dong and Zhibin Pan
- Subjects
Computational complexity theory ,020208 electrical & electronic engineering ,02 engineering and technology ,Peak signal-to-noise ratio ,Redundancy (information theory) ,Hadamard transform ,Ramer–Douglas–Peucker algorithm ,Search algorithm ,Motion estimation ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Algorithm ,Software ,Mathematics ,Data compression - Abstract
Block-matching motion estimation (BME) can efficiently reduce the temporal redundancy between successive video frames in a video compression coding system. In this study, a fast BME algorithm using multilevel distortion search in the Walsh–Hadamard domain is proposed to reduce the computational burden and speed up the coding process. First, the proposed algorithm divides the block into several sub-blocks. Then, the Walsh–Hadamard transform is applied to these sub-blocks. Finally, the proposed algorithm calculates the partial block matching distortion using a novel back-diagonal search scheme that can quickly reject unnecessary candidate blocks in a multilevel manner. Experimental results show that the proposed algorithm effectively reduces the number of operations in the block distortion calculation while maintaining the best motion estimation matching quality. Compared with full search, the proposed algorithm reduces computational complexity by 87.19% without any degradation of the peak signal-to-noise ratio. In addition, compared with the partial distortion search algorithm, the successive elimination algorithm, the multilevel successive elimination algorithm, and the transform-domain successive elimination algorithm, the proposed algorithm saves 68.27%, 70.09%, 37.81% and 37.44% of the computational complexity, respectively. Moreover, the proposed algorithm can easily be incorporated into any block-based template search motion estimation algorithm.
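The Walsh–Hadamard transform applied to the sub-blocks can be sketched with the standard fast butterfly (a generic unnormalized FWHT on a 1-D signal of length 2^k, not the paper's specific implementation):

```python
def walsh_hadamard(vec):
    """Unnormalized fast Walsh-Hadamard transform of a length-2^k list,
    computed with the standard in-place butterfly in O(n log n)
    additions/subtractions (no multiplications)."""
    data = list(vec)
    h = 1
    while h < len(data):
        for i in range(0, len(data), 2 * h):
            for j in range(i, i + h):
                a, b = data[j], data[j + h]
                data[j], data[j + h] = a + b, a - b
        h *= 2
    return data
```

Because the transform is multiplication-free and energy-compacting, partial distortions computed on a few transform coefficients can reject candidate blocks cheaply before the full comparison.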
- Published
- 2017
40. The Hierarchical Iterative Identification Algorithm for Multi-Input-Output-Error Systems with Autoregressive Noise
- Author
-
Jiling Ding
- Subjects
0209 industrial biotechnology ,Multidisciplinary ,Iterative proportional fitting ,Article Subject ,General Computer Science ,Computer science ,Iterative method ,Computer Science::Information Retrieval ,Astrophysics::High Energy Astrophysical Phenomena ,Population-based incremental learning ,02 engineering and technology ,Non-linear iterative partial least squares ,Least squares ,lcsh:QA75.5-76.95 ,Parameter identification problem ,Levenberg–Marquardt algorithm ,020901 industrial engineering & automation ,Ramer–Douglas–Peucker algorithm ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,lcsh:Electronic computers. Computer science ,Difference-map algorithm ,Algorithm ,FSA-Red Algorithm - Abstract
This paper considers the identification problem of multi-input-output-error autoregressive systems. A hierarchical gradient based iterative (H-GI) algorithm and a hierarchical least squares based iterative (H-LSI) algorithm are presented by using the hierarchical identification principle. A gradient based iterative (GI) algorithm and a least squares based iterative (LSI) algorithm are presented for comparison. The simulation results indicate that the H-LSI algorithm can obtain more accurate parameter estimates than the LSI algorithm, and the H-GI algorithm converges faster than the GI algorithm.
- Published
- 2017
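For orientation, the non-hierarchical baseline the abstract above compares against amounts to a least-squares fit of a linear input-output model. Below is a plain single-input ARX least-squares estimate, not the paper's hierarchical H-LSI decomposition; all names and orders are illustrative:

```python
import numpy as np

def fit_arx(u, y, na, nb):
    """Ordinary least-squares fit of an ARX model
    y[t] = sum_i a_i*y[t-i] + sum_j b_j*u[t-j] + e[t]."""
    n = max(na, nb)
    rows, targets = [], []
    for t in range(n, len(y)):
        row = [y[t - i] for i in range(1, na + 1)]      # past outputs
        row += [u[t - j] for j in range(1, nb + 1)]     # past inputs
        rows.append(row)
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta  # [a_1..a_na, b_1..b_nb]
```

On noise-free data this recovers the true coefficients exactly; the hierarchical algorithms in the paper address the harder case where the information vector contains unknown noise terms.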
41. An extended depth-first search algorithm for optimal triangulation of Bayesian networks
- Author
-
Maomi Ueno and Chao Li
- Subjects
Clique ,Mathematical optimization ,Junction tree algorithm ,Applied Mathematics ,Population-based incremental learning ,optimal triangulation ,dynamic clique maintenance ,0102 computer and information sciences ,02 engineering and technology ,01 natural sciences ,Theoretical Computer Science ,Bayesian network ,010201 computation theory & mathematics ,Artificial Intelligence ,Ramer–Douglas–Peucker algorithm ,Search algorithm ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,probabilistic inference ,Depth-first search ,Time complexity ,Software ,Mathematics ,FSA-Red Algorithm ,MathematicsofComputing_DISCRETEMATHEMATICS - Abstract
The junction tree algorithm is currently the most popular algorithm for exact inference on Bayesian networks. To improve the time complexity of the junction tree algorithm, we need to find a triangulation with the optimal total table size. For this purpose, Ottosen and Vomlel have proposed a depth-first search (DFS) algorithm. They also introduced several techniques to improve the DFS algorithm, including dynamic clique maintenance and coalescing map pruning. Nevertheless, the efficiency and scalability of that algorithm leave much room for improvement. First, the dynamic clique maintenance may redundantly recompute some cliques. Second, in the worst case, the DFS algorithm explores the search space of all elimination orders, which has size n!, where n is the number of variables in the Bayesian network. To mitigate these problems, we propose an extended depth-first search (EDFS) algorithm. The new EDFS algorithm introduces the following two techniques as improvements to the DFS algorithm: (1) a new dynamic clique maintenance algorithm that computes only those cliques that contain a new edge, and (2) a new pruning rule, called pivot clique pruning. The new dynamic clique maintenance algorithm explores a smaller search space and runs faster than the Ottosen and Vomlel approach. This improvement decreases the overhead cost of the DFS algorithm, and the pivot clique pruning reduces the size of the search space by a factor of O(n^2). Our empirical results show that our proposed algorithm finds an optimal triangulation markedly faster than the state-of-the-art algorithm does. Highlights: A state-of-the-art algorithm for optimal triangulation is proposed. The impact of employing the total table size as the optimality criterion is evaluated. A novel pruning rule for the triangulation algorithm is proposed. A fast dynamic clique maintenance algorithm is proposed.
- Published
- 2017
42. Intellectual Approaches to Improvement of the Classification Decisions Quality on the Base of the SVM Classifier
- Author
-
Liliya Demidova, N. Tyart, N. Stepanov, Y. Sokolova, and I. Klyueva
- Subjects
Computer science ,Population-based incremental learning ,Computer Science::Neural and Evolutionary Computation ,02 engineering and technology ,Machine learning ,computer.software_genre ,Regularization (mathematics) ,k-nearest neighbors algorithm ,Ramer–Douglas–Peucker algorithm ,0202 electrical engineering, electronic engineering, information engineering ,Difference-map algorithm ,General Environmental Science ,FSA-Red Algorithm ,business.industry ,Particle swarm optimization ,020206 networking & telecommunications ,020207 software engineering ,Pattern recognition ,Hybrid algorithm ,Support vector machine ,ComputingMethodologies_PATTERNRECOGNITION ,Kernel (statistics) ,Hyperparameter optimization ,General Earth and Planetary Sciences ,Artificial intelligence ,business ,computer - Abstract
In this paper, hybrid and modified versions of the PSO algorithm, applied to improve the search characteristics of the classical PSO algorithm in the development of the SVM classifier, have been offered and investigated. Herewith, the two hybrid versions of the PSO algorithm assume the use of the classical Grid Search (GS) algorithm and the Design of Experiments (DOE) algorithm, respectively, while the modified version of the PSO algorithm realizes a simultaneous search over the kernel function type, the values of the kernel function parameters, and the value of the regularization parameter. Besides, the applicability of the k-nearest neighbors (kNN) algorithm in the development of the SVM classifier has been considered.
- Published
- 2017
- Full Text
- View/download PDF
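The entry above hybridises PSO with grid search and DOE for SVM hyperparameter tuning. The underlying PSO velocity/position update can be sketched minimally as follows; this is a generic PSO on a toy objective, not the paper's hybrid, and every name and parameter value is illustrative:

```python
import numpy as np

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Bare-bones particle swarm optimiser (minimisation)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, (n_particles, dim))     # positions
    v = np.zeros_like(x)                            # velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Toy usage: minimise a shifted sphere function in 2D.
best, val = pso(lambda p: ((p - 3) ** 2).sum(), [(-10, 10), (-10, 10)])
```

In the SVM setting, `f` would be a cross-validation error evaluated at a candidate (kernel parameter, regularization parameter) point rather than an analytic function.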
43. Semi‐real‐time algorithm for fast pattern matching
- Author
-
Jing Dong and Haibo Liu
- Subjects
0209 industrial biotechnology ,Competitive analysis ,Triangle inequality ,Population-based incremental learning ,Cornacchia's algorithm ,02 engineering and technology ,020901 industrial engineering & automation ,Search algorithm ,Ramer–Douglas–Peucker algorithm ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Pattern matching ,Electrical and Electronic Engineering ,Algorithm ,Software ,Mathematics ,FSA-Red Algorithm - Abstract
A semi-real-time pattern-matching algorithm consisting of an offline and an online stage is proposed. The approach is to perform a significant amount of the calculation required by pattern matching in the offline stage, so that only a small amount of calculation is needed in the online stage to reject a great number of mismatched positions. The proposed algorithm first uses the triangle inequality and orthogonal decomposition to derive lower bounds on the distances between the pattern and the candidate windows of the base image. Then, mismatched candidate windows are rejected if their lower bounds exceed an adaptive threshold. The proposed method accelerates the online processing effectively while yielding a result identical to that of a full search algorithm. The proposed algorithm was compared with other state-of-the-art algorithms, and the results confirm that it has a distinct speed advantage over the other algorithms for online processing.
- Published
- 2016
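The rejection idea in the abstract above can be illustrated with the simplest triangle-inequality bound: | ||p|| - ||w|| | <= ||p - w||, so window norms precomputed offline give a cheap lower bound on the full distance. This sketch uses only that single bound (the paper derives tighter ones via orthogonal decomposition); names are illustrative:

```python
import numpy as np

def match_with_rejection(pattern, windows, threshold):
    """Keep only windows within `threshold` of the pattern, rejecting most
    candidates with a norm-based lower bound before the full distance."""
    p = pattern.ravel()
    pn = np.linalg.norm(p)
    # Offline stage: candidate-window norms can be precomputed once.
    wnorms = np.array([np.linalg.norm(w.ravel()) for w in windows])
    matches = []
    for i, w in enumerate(windows):
        if abs(pn - wnorms[i]) > threshold:              # cheap rejection
            continue
        if np.linalg.norm(p - w.ravel()) <= threshold:   # exact check
            matches.append(i)
    return matches
```

Because the bound never exceeds the true distance, the result is identical to an exhaustive full search, which mirrors the exactness claim in the abstract.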
44. On the numerical implementation of the Closest Point Projection algorithm in anisotropic elasto-plasticity with nonlinear mixed hardening
- Author
-
Mar Miñano, M.A. Caminero, and Francisco J. Montáns
- Subjects
Applied Mathematics ,Cornacchia's algorithm ,General Engineering ,02 engineering and technology ,01 natural sciences ,Computer Graphics and Computer-Aided Design ,Finite element method ,010101 applied mathematics ,Nonlinear system ,020303 mechanical engineering & transports ,0203 mechanical engineering ,Rate of convergence ,Ramer–Douglas–Peucker algorithm ,Robustness (computer science) ,0101 mathematics ,Implementation ,Algorithm ,Analysis ,Dykstra's projection algorithm ,Mathematics - Abstract
In finite element analysis, among the possible frameworks for stress-point integration algorithms in computational elastoplasticity, the implicit Closest Point Projection (CPP) algorithm is probably the most used one. The idea behind this algorithm is that all necessary variables, including the flow and hardening directions, are iteratively updated and enforced at the final solution. Therefore the algorithm is fully implicit and the final solution is independent of previous iterations. However, there are several possible implementations of the ideas behind the CPP algorithm. Even though asymptotic quadratic convergence may be obtained in all implementations if the algorithm is properly linearized, these different possibilities result in a different number of local iterations and in a different computational effort. Small-strain stress-integration algorithms are frequently the only iterative core of large-strain elastoplastic formulations. At the same time they are responsible for a large share of the overall computational time in finite element simulations and are key to the overall robustness. In this work we present a new algorithm based on the ideas of the Closest Point Projection algorithm for anisotropic elastoplasticity with mixed hardening. We also compare our proposal with other possible implementations of the CPP algorithm for the same problem, namely the General CPP implementation and the Governing Parameter Method. We show that our proposal is in general more efficient. Highlights: We propose an integration algorithm for anisotropic elastoplasticity. Our algorithm is fully implicit, based on the backward-Euler integration rule. We compare our proposal with the GCPP algorithm and the GPM. Our proposal needs significantly fewer iterations than GCPP and GPM. We show some demonstrative examples in small and large strains.
- Published
- 2016
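The elastic-predictor/plastic-corrector idea behind the Closest Point Projection can be shown in one dimension, where the projection onto the yield surface has a closed form. This is a textbook 1D sketch with linear isotropic hardening, not the paper's anisotropic mixed-hardening formulation; the material constants are arbitrary illustrative values (MPa):

```python
def return_map_1d(strain_inc, state, E=200e3, H=10e3, sigma_y=250.0):
    """One step of the return-mapping (elastic predictor / plastic corrector)
    scheme in 1D with linear isotropic hardening.

    state = (stress, accumulated plastic strain alpha).
    """
    sigma, alpha = state
    sigma_trial = sigma + E * strain_inc                 # elastic predictor
    f = abs(sigma_trial) - (sigma_y + H * alpha)         # trial yield function
    if f <= 0:
        return sigma_trial, alpha                        # elastic step
    dgamma = f / (E + H)                                 # closed-form corrector
    sign = 1.0 if sigma_trial > 0 else -1.0
    return sigma_trial - E * dgamma * sign, alpha + dgamma
```

In the multi-axial anisotropic case discussed in the abstract, the corrector has no closed form and the local Newton iterations it requires are exactly where the competing CPP implementations differ in cost.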
45. A differential-based harmony search algorithm for the optimization of continuous problems
- Author
-
Nur Fatimah As'Sahra, Shafaatunnur Hasan, Hosein Abedinpourshotorban, and Siti Mariyam Shamsuddin
- Subjects
Continuous optimization ,0209 industrial biotechnology ,Mathematical optimization ,Meta-optimization ,Population-based incremental learning ,General Engineering ,Initialization ,02 engineering and technology ,HS algorithm ,Computer Science Applications ,020901 industrial engineering & automation ,Artificial Intelligence ,Ramer–Douglas–Peucker algorithm ,0202 electrical engineering, electronic engineering, information engineering ,Harmony search ,020201 artificial intelligence & image processing ,Metaheuristic ,Algorithm ,Mathematics - Abstract
Highlights: We introduce a new harmony memory initialization method. We introduce a new pitch adjustment method based on the DE/best/1 mutation strategy. We comprehensively study the parameter settings of our algorithm. We compare our algorithm with state-of-the-art variants of the HS algorithm. We compare our algorithm with state-of-the-art variants of the DE algorithm. The performance of the Harmony Search (HS) algorithm is highly dependent on the parameter settings and the initialization of the Harmony Memory (HM). To address these issues, this paper presents a new variant of the HS algorithm, called the DH/best algorithm, for the optimization of globally continuous problems. The proposed DH/best algorithm introduces a new improvisation method that differs from conventional HS in two respects. First, the random initialization of the HM is replaced with a new method that effectively initializes the harmonies and reduces randomness. Second, the conventional pitch adjustment method is replaced by a new pitch adjustment method inspired by a Differential Evolution (DE) mutation strategy known as DE/best/1. Two sets of experiments are performed to evaluate the proposed algorithm. In the first experiment, the DH/best algorithm is compared with other variants of HS on 12 optimization functions. In the second experiment, the complete CEC2014 problem set is used to compare the performance of the DH/best algorithm with six well-known optimization algorithms from different families. The experimental results demonstrate the superiority of the proposed algorithm in convergence, precision, and robustness.
- Published
- 2016
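For context, the classical harmony search that the DH/best variant modifies can be sketched as below. This is the conventional algorithm (random HM initialization, random pitch adjustment), i.e. exactly the two components the paper replaces; all names and parameter values are illustrative:

```python
import numpy as np

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=2000, seed=0):
    """Classical harmony search (minimisation)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    hm = rng.uniform(lo, hi, (hms, dim))        # harmony memory
    vals = np.array([f(h) for h in hm])
    for _ in range(iters):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:             # memory consideration
                new[d] = hm[rng.integers(hms), d]
                if rng.random() < par:          # random pitch adjustment
                    new[d] += bw * rng.uniform(-1, 1) * (hi[d] - lo[d])
            else:                               # random selection
                new[d] = rng.uniform(lo[d], hi[d])
        new = np.clip(new, lo, hi)
        v = f(new)
        worst = vals.argmax()
        if v < vals[worst]:                     # replace worst harmony
            hm[worst], vals[worst] = new, v
    best = vals.argmin()
    return hm[best], vals[best]

# Toy usage: minimise the sphere function in 2D.
best, val = harmony_search(lambda p: (p ** 2).sum(), [(-5, 5), (-5, 5)])
```

The DH/best improvisation in the abstract replaces the `par`-controlled random perturbation with a DE/best/1-style move toward the best harmony, which is what drives its improved convergence.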
46. Fast Shape Matching Algorithm Based on the Improved Douglas-Peucker Algorithm
- Author
-
Myoung-sup Sim, Ju-hyun Kwak, and Chang-hoon Lee
- Subjects
Ramer–Douglas–Peucker algorithm ,business.industry ,Computer science ,Active shape model ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Shape matching ,Pattern recognition ,02 engineering and technology ,Artificial intelligence ,business ,Algorithm
- Published
- 2016
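Entry 46 builds on the Douglas-Peucker algorithm under which this whole result list is indexed. No abstract is provided, so for reference, a minimal recursive implementation of the classical (unimproved) polyline simplification looks like this; names are illustrative:

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker polyline simplification.

    Keeps the endpoints; recursively keeps the interior point farthest
    from the chord whenever that distance exceeds epsilon.
    """
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    seg_len = math.hypot(dx, dy)

    def dist(p):
        # Perpendicular distance from p to the chord (x1,y1)-(x2,y2).
        if seg_len == 0:
            return math.hypot(p[0] - x1, p[1] - y1)
        return abs(dy * p[0] - dx * p[1] + x2 * y1 - y2 * x1) / seg_len

    idx, dmax = max(((i, dist(p)) for i, p in enumerate(points[1:-1], 1)),
                    key=lambda t: t[1])
    if dmax <= epsilon:
        return [points[0], points[-1]]
    left = rdp(points[:idx + 1], epsilon)
    right = rdp(points[idx:], epsilon)
    return left[:-1] + right       # splice, dropping the duplicated pivot
```

Fast shape-matching variants such as the one in this entry typically tune how the split point and tolerance are chosen rather than changing this core recursion.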
47. An efficient cuckoo search algorithm based multilevel thresholding for segmentation of satellite images using different objective functions
- Author
-
Shyam Lal and Shilpa Suresh
- Subjects
Population-based incremental learning ,General Engineering ,Particle swarm optimization ,Brute-force search ,020207 software engineering ,Image processing ,02 engineering and technology ,Thresholding ,Computer Science Applications ,Artificial Intelligence ,Ramer–Douglas–Peucker algorithm ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Cuckoo search ,Algorithm ,Mathematics ,FSA-Red Algorithm - Abstract
This paper proposes a computationally efficient optimization algorithm for segmenting colour satellite images. A CS algorithm incorporating Mantegna's and McCulloch's methods for modeling Lévy flight is presented. The PSO, DPSO, ABC and CS algorithms are compared with the proposed algorithm. All these optimization algorithms are exploited using three different objective functions. Performance assessment metrics demonstrate the improvement in the efficiency of the proposed algorithm. Satellite image segmentation is challenging due to the presence of weakly correlated and ambiguous multiple regions of interest. Several bio-inspired algorithms were developed to generate optimum threshold values for segmenting such images efficiently. Their exhaustive search nature makes them computationally expensive when extended to multilevel thresholding. In this paper, we propose a computationally efficient image segmentation algorithm, called CSMcCulloch, incorporating McCulloch's method for Lévy flight generation in the Cuckoo Search (CS) algorithm. We have also investigated the impact of Mantegna's method for Lévy flight generation in the CS algorithm (CSMantegna) by comparing it with the conventional CS algorithm, which uses a simplified version of the same. The CSMantegna algorithm resulted in improved segmentation quality at the expense of computational time. The performance of the proposed CSMcCulloch algorithm is compared with other bio-inspired algorithms such as the Particle Swarm Optimization (PSO) algorithm, the Darwinian Particle Swarm Optimization (DPSO) algorithm, the Artificial Bee Colony (ABC) algorithm, the Cuckoo Search (CS) algorithm and the CSMantegna algorithm, using Otsu's method, Kapur entropy and Tsallis entropy as objective functions. Experimental results were validated by measuring PSNR, MSE, FSIM and CPU running time for all the cases investigated. The proposed CSMcCulloch algorithm evolved to be the most promising and computationally efficient for segmenting satellite images. Convergence rate analysis also reveals that the proposed algorithm outperforms the others in attaining stable global optimum thresholds. The experimental results encourage related research in computer vision, remote sensing and image processing applications.
- Published
- 2016
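Of the two Lévy-flight generators compared in the abstract above, Mantegna's method has a simple closed form: a step is the ratio of a scaled Gaussian to a Gaussian raised to the power 1/beta. A sketch of just that generator (McCulloch's method, which the paper favours, is not reproduced here; names are illustrative):

```python
import math
import numpy as np

def levy_step_mantegna(beta=1.5, size=1, rng=None):
    """Lévy-stable step lengths via Mantegna's method.

    sigma_u is chosen so that u / |v|^(1/beta), with u ~ N(0, sigma_u^2)
    and v ~ N(0, 1), approximates a symmetric Lévy distribution of index beta.
    """
    rng = rng or np.random.default_rng()
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.normal(0, sigma_u, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)
```

The heavy tail (occasional very long steps) is what lets cuckoo search escape local optima when searching for multilevel thresholds.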
48. A robust iterative super-resolution mosaicking algorithm using an adaptive and directional Huber-Markov regularization
- Author
-
Debabrata Ghosh, Naima Kaabouch, and Wen-Chen Hu
- Subjects
Markov chain ,020206 networking & telecommunications ,Reconstruction algorithm ,02 engineering and technology ,Regularization (mathematics) ,Tikhonov regularization ,Singular value ,Ramer–Douglas–Peucker algorithm ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Algorithm ,Performance metric ,Mathematics ,FSA-Red Algorithm - Abstract
We develop a super-resolution algorithm using directional Huber-Markov regularization. We compare our algorithm with two other state-of-the-art algorithms. We perform quantitative evaluation using six performance metrics. A robust spatial-domain-based super-resolution mosaicking algorithm is proposed. This technique incorporates a mosaicking algorithm and a super-resolution reconstruction algorithm. The main contribution of this paper is the development of a super-resolution algorithm using Huber-norm-based maximum likelihood (ML) estimation in combination with an adaptive directional Huber-Markov regularization. Another contribution is the development of a no-reference performance metric, based on the reciprocal singular value curve, for quantitative evaluation of the proposed algorithm. Along with the above-mentioned metric, five other performance measurement metrics are used to assess the efficiency of the algorithm. The performance of this algorithm is compared with that of two other algorithms: the Tikhonov regularization-based and the total variation (TV)-based super-resolution mosaicking algorithms. Results show that the proposed algorithm outperforms the other two techniques, producing the lowest amount of blur and noise in the output.
- Published
- 2016
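The Huber norm at the heart of the regularizer above is quadratic for small residuals and linear for large ones, which is why it smooths noise without blurring edges the way a pure quadratic (Tikhonov) prior does. A sketch of the penalty itself (the directional, adaptive weighting in the paper is not reproduced; names are illustrative):

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber penalty: 0.5*r^2 for |r| <= delta, linear beyond delta."""
    r = np.asarray(r, dtype=float)
    small = np.abs(r) <= delta
    return np.where(small, 0.5 * r ** 2, delta * (np.abs(r) - 0.5 * delta))
```

Applied to image gradients, small residuals (noise) are penalised quadratically while large residuals (edges) incur only a linear cost, so edges survive the regularization.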
49. Incremental augmented complex adaptive IIR algorithm for training widely linear ARMA model
- Author
-
Amir Rastegarnia, Wael M. Bazzi, Azam Khalili, and Reza G. Rahmati
- Subjects
Mathematical optimization ,Computer science ,Population-based incremental learning ,Stability (learning theory) ,020206 networking & telecommunications ,02 engineering and technology ,Adaptive filter ,Ramer–Douglas–Peucker algorithm ,Signal Processing ,Learning rule ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Adaptive learning ,Electrical and Electronic Engineering ,Algorithm ,Infinite impulse response ,FSA-Red Algorithm - Abstract
In this paper, we propose a distributed adaptive learning algorithm to train the coefficients of a widely linear autoregressive moving average model from measurements collected by the nodes of a network. We assume that each node uses the augmented complex adaptive infinite impulse response (ACA-IIR) filter as the learning rule, and that nodes interact with each other under an incremental mode of cooperation. To derive the proposed algorithm, called the incremental ACA-IIR (IACA-IIR), we first formulate the distributed adaptive learning problem as an unconstrained minimization problem. Then, we apply a stochastic gradient optimization argument to solve it and derive the proposed algorithm. We further find the step-size range within which the stability of the proposed algorithm is guaranteed. We also introduce a reduced-complexity version of the IACA-IIR algorithm. Since the proposed algorithm relies on augmented complex statistics, it can be used to model both types of complex-valued signals (proper and improper signals). To evaluate the performance of the proposed algorithm, we use both synthetic and real-world complex signals in our simulations. The results exhibit the superior performance of the proposed algorithm over the non-cooperative ACA-IIR algorithm.
- Published
- 2016
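The "augmented" (widely linear) modelling in the abstract above means the filter output depends on both the input and its complex conjugate, y = h^H x + g^H conj(x), which is what captures improper signals. A minimal FIR illustration of that idea with LMS updates follows; the paper's filter is IIR (ARMA) and distributed, which this single-node sketch does not attempt, and all names are illustrative:

```python
import numpy as np

def aclms(x, d, order=4, mu=0.05):
    """Augmented complex LMS: y[n] = h^H x_n + g^H conj(x_n)."""
    h = np.zeros(order, dtype=complex)
    g = np.zeros(order, dtype=complex)
    errs = []
    for n in range(order, len(x)):
        xn = x[n - order:n][::-1]                 # most recent sample first
        y = np.vdot(h, xn) + np.vdot(g, np.conj(xn))
        e = d[n] - y
        h = h + mu * np.conj(e) * xn              # standard CLMS branch
        g = g + mu * np.conj(e) * np.conj(xn)     # conjugate (augmented) branch
        errs.append(abs(e) ** 2)
    return h, g, errs
```

With g fixed at zero this reduces to ordinary CLMS, which can only model proper signals; the conjugate branch is what the augmented statistics buy.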
50. The game map design based on A* algorithm
- Author
-
Wang Minhui and Gao Xia
- Subjects
0209 industrial biotechnology ,Mathematical optimization ,Binary search algorithm ,Computer Networks and Communications ,Computer science ,Dinic's algorithm ,Bidirectional search ,Population-based incremental learning ,A* search algorithm ,Commentz-Walter algorithm ,Best-first search ,02 engineering and technology ,Min-conflicts algorithm ,law.invention ,020901 industrial engineering & automation ,law ,Search algorithm ,Ramer–Douglas–Peucker algorithm ,Algorithmics ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Difference-map algorithm ,Yen's algorithm ,FSA-Red Algorithm ,Fringe search ,020207 software engineering ,SSS ,Shortest Path Faster Algorithm ,Hardware and Architecture ,Beam search ,Algorithm design ,Suurballe's algorithm ,Software - Abstract
The A* algorithm is widely used for pathfinding in games and is currently one of the more popular heuristic search algorithms, but it suffers from long search times and winding paths. In this paper, a bidirectional A* search algorithm is proposed to improve search efficiency while ensuring the accuracy of the search, effectively solving the problem of paths with twists and turns. The algorithm is then validated by simulation experiments, which demonstrate its effectiveness and feasibility on large maps.
- Published
- 2016
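For reference, the baseline unidirectional A* that the entry above improves on can be sketched for a 4-connected grid map with a Manhattan-distance heuristic; the bidirectional variant runs two such searches (from start and from goal) and stops when the frontiers meet. Names and the grid encoding below are illustrative:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle.

    Returns the shortest path as a list of (row, col) cells, or None.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible heuristic
    open_heap = [(h(start), 0, start, None)]   # (f, g, cell, parent)
    came, gscore = {}, {start: 0}
    while open_heap:
        f, g, cur, parent = heapq.heappop(open_heap)
        if cur in came:                        # already expanded
            continue
        came[cur] = parent
        if cur == goal:                        # reconstruct path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                ng = g + 1
                if ng < gscore.get((nr, nc), float("inf")):
                    gscore[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc), cur))
    return None
```

Because the Manhattan heuristic never overestimates on a 4-connected grid, the first time the goal is popped its path is optimal.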