304 results for "parallel"
Search Results
2. Efficient parallel processing of high-dimensional spatial kNN queries.
- Author
-
Jiang, Tao, Zhang, Bin, Lin, Dan, Gao, Yunjun, and Li, Qing
- Subjects
- *
PARALLEL processing , *K-nearest neighbor classification , *USER experience , *PARALLEL algorithms , *ALGORITHMS - Abstract
Some efficient top-k algorithms, i.e., Fagin's Algorithm (FA), the threshold algorithm (TA), and the best position algorithm (BPA), can be used to answer k nearest neighbor (kNN) queries. However, extending the existing algorithms without further changes to the algorithms themselves would not be efficient, since kNN queries and top-k queries have different characteristics. First, kNN queries are sensitive to distance rather than to the position of data points. Second, it is necessary to add novel parallel heuristics and pruning policies for kNN queries. Third, FA, TA, and BPA still perform many redundant random accesses. In this paper, we address the aforementioned problems and adapt these algorithms to answer parallel kNN (PkNN) queries in spatial databases. We integrate the advantages of the B+-tree and OpenMP programming and propose three efficient parallel kNN query algorithms, namely distance priority-based PkNN, optimized PkNN, and partition-based PkNN. Our performance evaluation shows that our proposed algorithms achieve significant improvement in comparison with existing algorithms, i.e., BPA and BPA2. In addition, our approaches are also capable of returning kNN results incrementally, which greatly shortens the query response time and enhances the user experience. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
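The distance-priority idea in the abstract above can be illustrated with a small sketch (not the authors' B+-tree/OpenMP implementation): a kNN query answered by keeping a bounded max-heap of the k closest points seen so far, so the current k-th distance prunes every farther candidate.

```python
import heapq

def knn_query(points, q, k):
    """Return the k points nearest to q (nearest first), keeping a
    bounded max-heap of candidates (heapq is a min-heap, so squared
    distances are negated)."""
    heap = []  # entries: (-dist2, point)
    for p in points:
        d2 = sum((a - b) ** 2 for a, b in zip(p, q))
        if len(heap) < k:
            heapq.heappush(heap, (-d2, p))
        elif d2 < -heap[0][0]:  # closer than the current k-th neighbour
            heapq.heapreplace(heap, (-d2, p))
    # sort by ascending distance before returning
    return [p for _, p in sorted(heap, reverse=True)]

pts = [(0, 0), (1, 1), (5, 5), (2, 2), (9, 9)]
print(knn_query(pts, (0, 0), 3))  # [(0, 0), (1, 1), (2, 2)]
```

The incremental-result behaviour mentioned in the abstract would correspond to yielding neighbours as the heap stabilises; this sketch returns them only at the end.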
3. Comparison of Parallel and Non-parallel Approaches in Algorithms for CAD of Complex Systems with Higher Degree of Dependability
- Author
-
Drabowski, Mieczyslaw, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Zamojski, Wojciech, editor, Mazurkiewicz, Jacek, editor, Sugier, Jarosław, editor, and Walkowiak, Tomasz, editor
- Published
- 2020
- Full Text
- View/download PDF
4. Specialized processors and algorithms for computing standard functions
- Author
-
Turdimatov Mamirjon, Mukhtarov Farrukh, Abdurakhmonov Sultonali, Khudoynazarov Umidjon, and Muminova Mastura
- Subjects
microprogram ,parallel ,parallelization ,algorithm ,processor ,device ,memory ,micro command ,Environmental sciences ,GE1-350 - Abstract
This article discusses the design of a specialized processor device that increases the number of mathematical functions it can compute. The result is a single computation scheme for all elementary functions, an algorithm for lowering the degree of a polynomial by the Chebyshev polynomial method, and a single computation scheme based on Horner's scheme; the device is compared with a prototype obtained by introducing a specialized reprogrammable block into the processor.
- Published
- 2023
- Full Text
- View/download PDF
5. Error control with binary cyclic codes
- Author
-
Grymel, Martin-Thomas, Garside, James, and Furber, Stephen
- Subjects
621.39 ,Error Control ,Error Detection ,Error Correction ,Cyclic Code ,Cyclic Redundancy Checksum ,CRC ,Programmable ,Parallel ,Circuit ,LFSR ,Linear Feedback Shift Register ,Discrete Logarithm ,Algorithm ,Generic ,Deterministic ,Polynomial Ring ,Primitive Polynomial ,Maximum Length Sequence ,SpiNNaker - Abstract
Error-control codes provide a mechanism to increase the reliability of digital data being processed, transmitted, or stored under noisy conditions. Cyclic codes constitute an important class of error-control code, offering powerful error detection and correction capabilities. They can easily be generated and verified in hardware, which makes them particularly well suited to practical use as error-detecting codes. A cyclic code is based on a generator polynomial, which determines its properties, including its specific error detection strength. The optimal choice of polynomial depends on many factors that may be influenced by the underlying application. It is therefore advantageous to employ programmable cyclic code hardware that allows a flexible choice of polynomial to be applied to different requirements. A novel method is presented in this thesis to realise programmable cyclic code circuits that are fast, energy-efficient, and minimise implementation resources. It can be shown that the correction of a single-bit error on the basis of a cyclic code is equivalent to solving an instance of the discrete logarithm problem. A new approach is proposed for computing discrete logarithms; this leads to a generic deterministic algorithm for analysed group orders that equal Mersenne numbers with an exponent that is a power of two. The algorithm exhibits a worst-case runtime in the order of the square root of the group order and constant space requirements. This thesis establishes new relationships for finite fields represented as the polynomial ring over the binary field modulo a primitive polynomial. With a subset of these properties, a novel approach is developed for solving the discrete logarithm in the multiplicative groups of these fields. This leads to a deterministic algorithm for small group orders with linear space and linearithmic time requirements in the degree of the defining polynomial, enabling efficient correction of single-bit errors based on the corresponding cyclic codes.
- Published
- 2013
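The CRC mechanism this thesis builds on can be sketched in software (a bitwise sketch, not the thesis's hardware LFSR circuits): the message is divided by the generator polynomial over GF(2), the remainder is appended as the checksum, and any single-bit error leaves a non-zero remainder at the receiver.

```python
def poly_mod(bits, poly):
    """Remainder of the bit sequence (MSB first) modulo the generator
    polynomial, via XOR long division over GF(2)."""
    bits = list(bits)
    for i in range(len(bits) - len(poly) + 1):
        if bits[i]:
            for j, p in enumerate(poly):  # XOR in the generator
                bits[i + j] ^= p
    return bits[-(len(poly) - 1):]

poly = [1, 0, 1, 1]                    # x^3 + x + 1 (toy CRC-3 generator)
msg = [1, 1, 0, 1, 0, 1]
crc = poly_mod(msg + [0] * 3, poly)    # append deg(poly) zeros, keep remainder
codeword = msg + crc
print(poly_mod(codeword, poly))        # [0, 0, 0]: a valid codeword divides evenly

corrupted = codeword[:]
corrupted[2] ^= 1                      # single-bit error
print(poly_mod(corrupted, poly))       # non-zero syndrome reveals the error
```

The syndrome of a single-bit error at position k is x^k mod g(x); recovering k from the syndrome is exactly the discrete-logarithm connection the abstract describes.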
6. Torus–Connected Cycles: A Simple and Scalable Topology for Interconnection Networks
- Author
-
Bossard Antoine and Kaneko Keiichi
- Subjects
algorithm ,routing ,hamiltonian ,supercomputer ,parallel ,Mathematics ,QA1-939 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Supercomputers today are made up of hundreds of thousands of nodes. The interconnection network is responsible for connecting all these nodes to each other. Different interconnection networks have been proposed; high-performance topologies have been introduced as a replacement for the conventional topologies of recent decades. A high order, a low degree and a small diameter are the usual properties aimed for by such topologies. However, this is not sufficient to lead to actual hardware implementations. Network scalability and topology simplicity are two critical parameters, and they are two of the reasons why modern supercomputers are often based on torus interconnection networks (e.g., Fujitsu K, IBM Sequoia). In this paper we first describe a new topology, torus-connected cycles (TCCs), realizing a combination of a torus and a ring, thus retaining interesting properties of torus networks in addition to those of hierarchical interconnection networks (HINs). Then, we formally establish the diameter of a TCC and deduce a point-to-point routing algorithm. Next, we propose routing algorithms solving the Hamiltonian cycle problem and, in a two-dimensional TCC, the Hamiltonian path problem. Correctness and complexities are formally proved. The proposed algorithms are time-optimal.
- Published
- 2015
- Full Text
- View/download PDF
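The point-to-point routing the abstract mentions builds on ordinary torus routing; a dimension-order sketch (for a plain k-ary torus, not the TCC topology itself) picks, in each dimension, the shorter of the two wrap-around directions:

```python
def torus_route(src, dst, k):
    """Dimension-order routing on a d-dimensional k-ary torus:
    in each dimension, step in whichever direction is shorter
    (ties broken toward +1), wrapping modulo k."""
    path = [src]
    cur = list(src)
    for dim in range(len(src)):
        while cur[dim] != dst[dim]:
            forward = (dst[dim] - cur[dim]) % k
            step = 1 if forward <= k - forward else -1
            cur[dim] = (cur[dim] + step) % k
            path.append(tuple(cur))
    return path

route = torus_route((0, 0), (3, 1), 4)
print(route)           # [(0, 0), (3, 0), (3, 1)] -- dim 0 uses the wrap link
print(len(route) - 1)  # 2 hops: min(3, 1) = 1 in dim 0, plus 1 in dim 1
```

A TCC router would add an intra-cycle phase to reach the right node on each cycle; that refinement is omitted here.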
7. A New Hybrid Parallel Algorithm for MrBayes
- Author
-
Zhou, Jianfu, Wang, Gang, Liu, Xiaoguang, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Hsu, Ching-Hsien, editor, Yang, Laurence T., editor, Park, Jong Hyuk, editor, and Yeo, Sang-Soo, editor
- Published
- 2010
- Full Text
- View/download PDF
8. A Parallel Approach for Frequent Subgraph Mining in a Single Large Graph Using Spark.
- Author
-
Qiao, Fengcai, Zhang, Xin, Li, Pei, Ding, Zhaoyun, Jia, Shanshan, and Wang, Hui
- Subjects
DATA mining ,GRAPH theory ,SUBGRAPHS - Abstract
Frequent subgraph mining (FSM) plays an important role in graph mining, attracting a great deal of attention in many areas, such as bioinformatics, web data mining and social networks. In this paper, we propose SSIGRAM (Spark based Single Graph Mining), a Spark-based parallel frequent subgraph mining algorithm for a single large graph. To address the two computational challenges of FSM, we perform subgraph extension and support evaluation in parallel across all the distributed cluster worker nodes. In addition, we employ a heuristic search strategy and three novel optimizations in the support evaluation process: load balancing, pre-search pruning and top-down pruning, which significantly improve performance. Extensive experiments with four different real-world datasets demonstrate that the proposed algorithm outperforms the existing GRAMI (Graph Mining) algorithm by an order of magnitude for all datasets and can work with a lower support threshold. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
9. Efficient Implementation of Iterative Polynomial Matrix EVD Algorithms Exploiting Structural Redundancy and Parallelisation.
- Author
-
Coutts, Fraser K., Proudler, Ian K., and Weiss, Stephan
- Subjects
- *
MATRIX decomposition , *POLYNOMIALS , *ALGORITHMS , *REDUNDANCY in engineering , *MATRICES (Mathematics) - Abstract
A number of algorithms are capable of iteratively calculating a polynomial matrix eigenvalue decomposition (PEVD), which is a generalisation of the EVD and will diagonalise a parahermitian polynomial matrix via paraunitary operations. While offering promising results in various broadband array processing applications, the PEVD has seen limited deployment in hardware due to the high computational complexity of these algorithms. Akin to low complexity divide-and-conquer (DaC) solutions to eigenproblems, this paper addresses a partially parallelisable DaC approach to the PEVD. A novel algorithm titled parallel-sequential matrix diagonalisation exhibits significantly reduced algorithmic complexity and run-time when compared with existing iterative PEVD methods. The DaC approach, which is shown to be suitable for multi-core implementation, can improve eigenvalue resolution at the expense of decomposition mean squared error, and offers a trade-off between the approximation order and accuracy of the resulting paraunitary matrices. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
10. A new parallel data geometry analysis algorithm to select training data for support vector machine
- Author
-
Shu Lv, Kai-bo Shi, and Yunfeng Shi
- Subjects
Measure (data warehouse) ,Mahalanobis distance ,geometry analysis ,Computer science ,General Mathematics ,Centroid ,Geometry ,Sample (statistics) ,sample reduction ,Support vector machine ,ComputingMethodologies_PATTERNRECOGNITION ,parallel ,Outlier ,Sample space ,QA1-939 ,Trigonometric functions ,support vector machine ,mahalanobis distance ,Algorithm ,Mathematics - Abstract
Support vector machine (SVM) is one of the most powerful machine learning technologies and has attracted wide attention because of its remarkable performance. However, when dealing with the classification of large-scale datasets, the high complexity of the SVM model leads to low efficiency and becomes impractical. Exploiting the sparsity of SVM in the sample space, this paper presents a new parallel data geometry analysis (PDGA) algorithm to reduce the training set of SVM, which helps to improve the efficiency of SVM training. PDGA introduces the Mahalanobis distance to measure the distance from each sample to its centroid and, based on this, proposes a method that can identify non-support vectors and outliers at the same time to help remove redundant data. When the training set is further reduced, a cosine angle distance analysis method is proposed to determine whether samples are redundant, ensuring that valuable data are not removed. Unlike previous data geometry analysis methods, the PDGA algorithm is implemented in parallel, which greatly reduces the computational cost. Experimental results on an artificial dataset and 6 real datasets show that the algorithm can adapt to different sample distributions, significantly reducing the training time and memory requirements without sacrificing classification accuracy; its performance is clearly better than that of the other five competitive algorithms.
- Published
- 2021
- Full Text
- View/download PDF
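The Mahalanobis-distance screening step described above can be sketched for 2-D data (a toy illustration, not the paper's PDGA implementation): samples far from the centroid in Mahalanobis terms are outlier candidates, while those deep inside the distribution are unlikely to be support vectors.

```python
def mahalanobis_2d(samples):
    """Mahalanobis distance of each 2-D sample from the sample centroid,
    using an explicit 2x2 inverse of the (population) covariance matrix."""
    n = len(samples)
    mx = sum(x for x, _ in samples) / n
    my = sum(y for _, y in samples) / n
    sxx = sum((x - mx) ** 2 for x, _ in samples) / n
    syy = sum((y - my) ** 2 for _, y in samples) / n
    sxy = sum((x - mx) * (y - my) for x, y in samples) / n
    det = sxx * syy - sxy * sxy
    # inverse of [[sxx, sxy], [sxy, syy]]
    ixx, ixy, iyy = syy / det, -sxy / det, sxx / det
    dists = []
    for x, y in samples:
        dx, dy = x - mx, y - my
        d2 = dx * (ixx * dx + ixy * dy) + dy * (ixy * dx + iyy * dy)
        dists.append(d2 ** 0.5)
    return dists

data = [(0, 0), (1, 0), (0, 1), (1, 1), (10, 10)]  # last point is an outlier
d = mahalanobis_2d(data)
print(max(d) == d[-1])  # the outlier has the largest distance
```

A screening rule in the spirit of the abstract would then drop samples below a small threshold (clear non-support vectors) and above a large one (outliers).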
11. PSubCLUS: A Parallel Subspace Clustering Algorithm Based On Spark
- Author
-
Xiao Wen and Hu Juan
- Subjects
Big data applications ,Speedup ,General Computer Science ,Computer science ,General Engineering ,02 engineering and technology ,Image segmentation ,Intrusion detection system ,Load balancing (computing) ,Statistical classification ,ComputingMethodologies_PATTERNRECOGNITION ,parallel ,Parallel processing (DSP implementation) ,SUBCLU ,020204 information systems ,Spark (mathematics) ,0202 electrical engineering, electronic engineering, information engineering ,Unsupervised learning ,020201 artificial intelligence & image processing ,General Materials Science ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,clustering algorithms ,Cluster analysis ,lcsh:TK1-9971 ,Algorithm - Abstract
Clustering is one of the most important unsupervised machine learning tasks. It is widely used to solve problems in intrusion detection, text analysis, image segmentation, etc. Subspace clustering is the most important method for high-dimensional data clustering. To solve the problem of parallel subspace clustering for high-dimensional big data, this paper proposes a Spark-based parallel subspace clustering algorithm named PSubCLUS, inspired by SubCLU, a classical subspace clustering algorithm. Spark being the most popular big data parallel processing platform at present, PSubCLUS uses the Resilient Distributed Datasets (RDDs) provided by Spark to store data points in a distributed way. The two main stages of the algorithm, one-dimensional subspace clustering and iterative clustering, can be executed in parallel on each worker node of the cluster. PSubCLUS also uses a repartition method based on the number of data points to achieve load balancing. Experimental results show that PSubCLUS achieves good parallel speedup and ideal load balancing, making it suitable for parallel subspace clustering of high-dimensional big data.
- Published
- 2021
- Full Text
- View/download PDF
12. Inverse Radon Method Based on Electrical Field Lines for Dual-Modality Electrical Tomography
- Author
-
Lijun Xu, Zhang Cao, Wuqiang Yang, Gao Xin, and Tian Yu
- Subjects
Permittivity ,Parallel projection ,Iterative method ,Field line ,Computer science ,020208 electrical & electronic engineering ,02 engineering and technology ,Iterative reconstruction ,Inverse problem ,Parallel ,Matrix (mathematics) ,0202 electrical engineering, electronic engineering, information engineering ,Tomography ,Electrical and Electronic Engineering ,Instrumentation ,Algorithm - Abstract
Usually, sensitivity matrix-based algorithms are used for image reconstruction in electrical tomography. These algorithms assume that the inverse problem resembles that of well-established medical CT, and they have found successful applications. In medical CT, however, the inverse Radon transform and its derivatives dominate rather than matrix-based algorithms. These inverse transforms have not been introduced to electrical tomography because of the lack of a link between parallel projections and electrical distributions. This article presents a map from electrical field lines to the parallel lines of the inverse Radon transform, and a novel image reconstruction algorithm using the transform for dual-modality electrical tomography. The transform is realized by mapping coordinates in the electric field to the assumed parallel projections. The points of intersection between electrical field lines from different exciting electrodes are mapped to those of parallel lines from the related projections in a typical inverse Radon transform. Unlike medical CT, different excitation schemes correspond to distinct equivalent projections, and iterative scanning is used to apply all available projected data. Simulated and experimental data are used to validate the feasibility and effectiveness of the proposed algorithm, and performance comparisons are made with three algorithms: Calderon's method, an iterative method based on electrical field lines, and the typical Landweber method. The proposed method yields good-quality images in the least time, and dynamic flames are monitored. Dual-modality images of material distributions are obtained that capture the ignition and blowout of the flame evolution in a Bunsen burner.
- Published
- 2020
- Full Text
- View/download PDF
13. Fast chemical exchange saturation transfer imaging based on PROPELLER acquisition and deep neural network reconstruction
- Author
-
Tangi Roussel, Chenlu Guo, Wu Jian, Congbo Cai, Shuhui Cai, and Jens T. Rosenberg
- Subjects
Artificial neural network ,Computer simulation ,Phantoms, Imaging ,Computer science ,Propeller ,Brain ,Experimental data ,Parallel ,Magnetic Resonance Imaging ,Imaging phantom ,Rats ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Acceleration ,0302 clinical medicine ,Proof of concept ,Animals ,Female ,Radiology, Nuclear Medicine and imaging ,Neural Networks, Computer ,Chickens ,Algorithm ,Algorithms ,030217 neurology & neurosurgery - Abstract
Purpose To develop a method for fast chemical exchange saturation transfer (CEST) imaging. Methods The periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) sampling scheme was introduced to shorten the acquisition time. A deep neural network was employed to reconstruct the CEST contrast images. Numerical simulation and experiments on a creatine phantom, a hen egg, and an in vivo rat brain tumor were performed to test the feasibility of this method. Results The results from numerical simulation and experiments show that there is no significant difference between reference images and CEST-PROPELLER reconstructed images under an acceleration factor of 8. Conclusion Although the deep neural network was trained entirely on synthesized data, it works well on reconstructing experimental data. This proof-of-concept study demonstrates that the combination of the PROPELLER sampling scheme and a deep neural network enables considerable acceleration of saturated image acquisition and may find applications in CEST MRI.
- Published
- 2020
- Full Text
- View/download PDF
14. Risk Reduction in Line Grid Search for Elliptical Targets
- Author
-
Donald A. Singer
- Subjects
Square tiling ,Line search ,Orientation (computer vision) ,Computer science ,0208 environmental biotechnology ,02 engineering and technology ,010502 geochemistry & geophysics ,Parallel ,Grid ,01 natural sciences ,Standard deviation ,020801 environmental engineering ,Mathematics (miscellaneous) ,Line (geometry) ,Hyperparameter optimization ,General Earth and Planetary Sciences ,Algorithm ,0105 earth and related environmental sciences - Abstract
Structured search strategies have been well studied, but the consequences of frequently made assumptions can lead to "optimum" recommendations that are far from optimum, in that they ignore risks of failure. Common assumptions are that there are many targets, that targets have average sizes and shapes, and that they are randomly oriented. The effects of these common assumptions are examined for parallel and grid line searches. Equations are provided for estimating the probabilities of hitting elliptical targets with parallel and square grid lines, along with new simplified equations for when orientations are random. In a large portion of searches, the probability of missing the largest, most valuable target, dangerous ordnance, or key target is critical to the decision-maker. The risk of missing a target is frequently central to the decision. Where the target is elliptical in shape and has unknown orientation, square grid line search should be considered over line search. Square grid line search has a slightly lower expected probability of detecting an elliptical target with one or more hits than an equal-cost parallel line search, but the probability of missing regardless of orientation in square grid search is significantly lower. Where two or more hits are required, the square grid has a significantly higher probability of hitting and a slightly lower standard deviation than an equal-cost line search. For elliptical targets, the use of a square grid significantly reduces the chances of search failure.
- Published
- 2020
- Full Text
- View/download PDF
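For the simplest case underlying the abstract's equations, a circular target of diameter d dropped at random onto parallel lines with spacing s ≥ d, the hit probability is d/s. A Monte Carlo check of this base case (a toy model, not the paper's elliptical-target equations):

```python
import random

def hit_probability(d, s, trials=200_000, seed=1):
    """Estimate P(a circle of diameter d, centred uniformly at random,
    touches one of the parallel lines y = 0, s, 2s, ...)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        y = rng.uniform(0, s)              # centre position within one strip
        if y <= d / 2 or s - y <= d / 2:   # circle reaches a bounding line
            hits += 1
    return hits / trials

est = hit_probability(d=1.0, s=2.0)
print(est)  # close to the exact value d/s = 0.5
```

For an ellipse with unknown orientation, the effective width along the line normal varies with the orientation angle, which is what makes the paper's square-grid comparison non-trivial.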
15. Robust parallel hybrid artificial bee colony algorithms for the multi-dimensional numerical optimization
- Author
-
Bilgin Avenoglu, Selen Pehlivan, Tansel Dokeroglu, TED University, Department of Computer Science, Aalto-yliopisto, and Aalto University
- Subjects
Computer science ,Artificial bee colony ,Evolutionary algorithm ,Parallel algorithm ,Particle swarm optimization ,Parallel ,Hybrid algorithm ,Hybrid ,Theoretical Computer Science ,Local optimum ,Hardware and Architecture ,Metaheuristic algorithms ,Differential evolution ,TLBO ,Algorithm ,Metaheuristic ,Software ,Information Systems - Abstract
This study proposes a set of new robust parallel hybrid metaheuristic algorithms based on artificial bee colony (ABC) and teaching-learning-based optimization (TLBO) for multi-dimensional numerical problems. The best practices of ABC and TLBO are implemented to provide robust algorithms in a distributed-memory computation environment using MPI libraries. Island-parallel versions of the proposed hybrid algorithm are observed to obtain much better results than the sequential versions. Parallel pseudorandom number generators are used to provide diverse solution candidates and prevent stagnation in local optima. The performances of the proposed hybrid algorithms are compared with eight different metaheuristic algorithms: particle swarm optimization, differential evolution variants, ABC variants and an evolutionary algorithm. The empirical results show that the new hybrid parallel algorithms are scalable and the best-performing algorithms when compared to the state-of-the-art metaheuristics.
- Published
- 2020
- Full Text
- View/download PDF
16. Two Separate Circles With Same-Radius: Projective Geometric Properties and Applicability in Camera Calibration
- Author
-
Fengli Yang, Yue Zhao, and Xuechun Wang
- Subjects
General Computer Science ,Calibration (statistics) ,020208 electrical & electronic engineering ,General Engineering ,02 engineering and technology ,SSR circles ,Parallel ,Image (mathematics) ,Camera calibration ,Conic section ,conic dual ,Line (geometry) ,0202 electrical engineering, electronic engineering, information engineering ,Perpendicular ,020201 artificial intelligence & image processing ,General Materials Science ,lcsh:Electrical engineering. Electronics. Nuclear engineering ,Vanishing point ,parallel lines ,image of circular points ,lcsh:TK1-9971 ,Algorithm ,Camera resectioning ,Mathematics - Abstract
In the domain of computer vision, camera calibration is a key step in recovering the two-dimensional Euclidean structure. Circles are considered important image features similar to points, lines, and conics. In this paper, a novel linear calibration method is proposed using two separate same-radius (SSR) circles as the calibration pattern. We show that the distinct pair of dual circles encodes three lines, two of which are parallel to each other and perpendicular to the remaining line. When any two coplanar or parallel circles degenerate to SSR circles, a solution can be found to recover another pair of parallel lines based on the geometric properties of the SSR circles. Using the vanishing points obtained as the key helper for determining the imaged circular points and the orthogonal vanishing points, we deduce the constraints on the image of the absolute conic (IAC) and then employ it for complete camera calibration. Furthermore, a closed-form solution for the extrinsic parameters can be obtained based on the projective invariance of the conic dual to the circular points. Evaluations based on simulated and real data confirmed the effectiveness and feasibility of the proposed algorithms.
- Published
- 2020
- Full Text
- View/download PDF
17. Turbo-SMT: Parallel coupled sparse matrix-Tensor factorizations and applications.
- Author
-
Papalexakis, Evangelos E., Mitchell, Tom M., Sidiropoulos, Nicholas D., Faloutsos, Christos, Talukdar, Partha Pratim, and Murphy, Brian
- Subjects
- *
CALCULUS of tensors , *ALGORITHMS , *DATA analysis , *DATA mining , *FUNCTIONAL magnetic resonance imaging - Abstract
How can we correlate the neural activity in the human brain as it responds to typed words with properties of these terms (like 'edible', 'fits in hand')? In short, we want to find latent variables that jointly explain both the brain activity and the behavioral responses. This is one of many settings of the Coupled Matrix-Tensor Factorization (CMTF) problem. Can we enhance any CMTF solver so that it can operate on potentially very large datasets that may not fit in main memory? We introduce Turbo-SMT, a meta-method capable of doing exactly that: it boosts the performance of any CMTF algorithm (up to 65-fold), parallelizes it, and produces sparse and interpretable solutions. Additionally, we improve upon ALS, the work-horse algorithm for CMTF, with respect to efficiency and robustness to missing values. We apply Turbo-SMT to BrainQ, a dataset consisting of a (nouns, brain voxels, human subjects) tensor and a (nouns, properties) matrix, with coupling along the nouns dimension. Turbo-SMT is able to find meaningful latent variables, as well as to predict brain activity with competitive accuracy. Finally, we demonstrate the generality of Turbo-SMT by applying it to a FACEBOOK dataset (users, 'friends', wall-postings); there, Turbo-SMT spots spammer-like anomalies. © 2016 Wiley Periodicals, Inc. Statistical Analysis and Data Mining: The ASA Data Science Journal, 2016 [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
18. A Coverage Path Planning Algorithm for Self-Organizing Drone Swarms
- Author
-
Georgios Tsoumanis, Konstantinos Bezas, and Konstantinos Oikonomou
- Subjects
Point of interest ,Computer science ,Path (graph theory) ,Swarm behaviour ,Particle swarm optimization ,Motion planning ,Parallel ,Simple polygon ,Algorithm ,Drone - Abstract
Drone swarms are employed for multiple applications nowadays, such as early detection of forest fires and disaster response missions, with area coverage being the main hurdle these applications face. In this work, an algorithm is proposed for drone swarms that achieves full area coverage and Point of Interest (PoI) detection, with no decomposition, for areas with simple polygon shapes and no obstacles. It relies on small pieces of information exchanged among the swarm's drones in real time. Parallel-line and spiral coverage paths are examined in terms of the number of slowdowns in the swarm's speed needed to fully collect information from a detected PoI, the scanned PoIs, and the formation switches executed when the swarm has to alter its flying direction. Simulations are developed for a square area. The results showcase that the parallel coverage path is more efficient than the spiral in terms of cover time and travel distance, and that parallel coverage requires fewer slowdowns for the same percentage of scanned PoIs.
- Published
- 2021
- Full Text
- View/download PDF
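The parallel-line coverage path examined above can be sketched for a square area (an illustrative sketch under simplified assumptions: a single sweep footprint of width equal to the line spacing, no obstacles, no swarm formation logic):

```python
def parallel_line_path(side, spacing):
    """Boustrophedon waypoints over the square [0, side] x [0, side]:
    sweep along x, step y by `spacing`, reversing direction on
    alternate passes to avoid dead travel between lines."""
    waypoints = []
    y, left_to_right = 0.0, True
    while y <= side:
        xs = (0.0, float(side)) if left_to_right else (float(side), 0.0)
        waypoints.append((xs[0], y))
        waypoints.append((xs[1], y))
        left_to_right = not left_to_right
        y += spacing
    return waypoints

path = parallel_line_path(side=10, spacing=5)
print(path)
# [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0), (0.0, 5.0), (0.0, 10.0), (10.0, 10.0)]
```

The spiral alternative would instead shrink the square inward each lap; comparing total travel distance of the two patterns is essentially what the paper's simulations measure.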
19. A fault‐location algorithm for parallel line based on the long short‐term memory model using the distributed parameter line model
- Author
-
Mostafa Sedighizadeh, Behrooz Taheri, and Sirus Salehimehr
- Subjects
business.industry ,Computer science ,Deep learning ,Line model ,Energy Engineering and Power Technology ,Parallel ,Fault (power engineering) ,Long short term memory ,Modeling and Simulation ,Artificial intelligence ,Electrical and Electronic Engineering ,Power-system protection ,business ,Algorithm - Published
- 2021
- Full Text
- View/download PDF
20. A Boosted 3-D PCA Algorithm Based on Efficient Analysis Method
- Author
-
Chi-Ho Lin and Kyung Min Lee
- Subjects
Technology ,Computer science ,QH301-705.5 ,QC1-999 ,3-D PCA ,Data imbalance ,parallel ,General Materials Science ,AdaBoost ,image detection ,Biology (General) ,Instrumentation ,QD1-999 ,Analysis method ,Fluid Flow and Transfer Processes ,Process Chemistry and Technology ,Physics ,Detector ,General Engineering ,Image detection ,Engineering (General). Civil engineering (General) ,Adaboost algorithm ,Computer Science Applications ,Chemistry ,Benchmark (computing) ,TA1-2040 ,Algorithm ,performance - Abstract
In this paper, we propose a boosted 3-D PCA algorithm based on an efficient analysis method. The proposed method involves three steps that improve image detection. In the first step, the proposed method designs a new analysis method to solve the performance problem caused by data imbalance. In the second step, a parallel cross-validation structure is used to further enhance the new analysis method. We also design a modified AdaBoost algorithm to improve the detection accuracy of the new analysis method. To verify the performance of this system, we experimented with a benchmark dataset. The results show that the new analysis method is more efficient than other image detection methods.
- Published
- 2021
21. Slicing Algorithm and Partition Scanning Strategy for 3D Printing Based on GPU Parallel Computing
- Author
-
Xuhui Lai and Zhengying Wei
- Subjects
0209 industrial biotechnology ,Technology ,Marching squares ,Computer science ,02 engineering and technology ,Parallel ,Slicing ,Article ,020901 industrial engineering & automation ,General Materials Science ,Image warping ,Block (data storage) ,ComputingMethodologies_COMPUTERGRAPHICS ,Microscopy ,QC120-168.85 ,Binary image ,QH201-278.5 ,scan order ,021001 nanoscience & nanotechnology ,Engineering (General). Civil engineering (General) ,partition ,TK1-9971 ,Descriptive and experimental mechanics ,GPU slice ,Line (geometry) ,MS algorithm ,Electrical engineering. Electronics. Nuclear engineering ,curve interpolation ,TA1-2040 ,0210 nano-technology ,Normal ,Algorithm ,additive manufacturing - Abstract
Aiming at the problems of over-stacking, warping deformation and rapid adjustment of layer thickness in electron beam additive manufacturing, a 3D printing slicing algorithm and partition scanning strategy for numerical control systems are studied. The GPU (graphics processing unit) is used to slice the 3D model, and the STL (stereolithography) file is processed in parallel according to the normal vectors and vertex coordinates. The voxel information of a specified layer is dynamically obtained by adjusting the projection matrix to the slice height. The MS (marching squares) algorithm is used to extract the coordinate sequence from the binary image, and the ordered contour coordinates are output. To avoid shaking of the electron gun when the numerical control system forms micro-segment straight lines, and to reduce metal over-accumulation where curves are only C0 continuous, the NURBS (non-uniform rational B-splines) basis functions are used to interpolate the contour data. To address the deformation of large block components during forming, a hexagonal partition and parallel-line variable-angle scanning technique is adopted, and an effective temperature and deformation control strategy is formed by planning the scan order of the partitions according to Euclidean distance. The results show that the NURBS segmentation fits closer to the original polysurface cut line, and the error is reduced by 34.2% compared with the STL file slice data. As the number of triangular patches increases, the algorithm exhibits higher efficiency; STL files with 1,483,132 facets can be cut into 4488 layers in 89 s. The slicing algorithm in this research can be used as a general data-processing algorithm for additive manufacturing to reduce the waiting time of the contour extraction process. Combined with the partition strategy, it can provide new ideas for dynamic adjustment of layer thickness and deformation control in the forming of large parts.
- Published
- 2021
22. DESAIN NON-PLAYER CHARACTER PERMAINAN TIC-TAC-TOE DENGAN ALGORITMA MINIMAX
- Author
-
Gunadi Emanuel, R. Kristoforus Jawa Bendi, and Arieffianto Arieffianto
- Subjects
Visual Basic ,Video game development ,Unified Modeling Language ,Modeling language ,Computer science ,ComputingMilieux_PERSONALCOMPUTING ,Process (computing) ,Non-player character ,Parallel ,Minimax ,computer ,Algorithm ,computer.programming_language - Abstract
Tic-Tac-Toe is a board game played by filling the cells of the board with X or O so as to form a line of three marks vertically, horizontally, or diagonally. This study aims to design Non-Player Characters (NPC) for the tic-tac-toe game using the minimax algorithm. The game is designed with two modes: an easy mode, in which the NPC chooses its moves at random, and a minimax mode, in which the NPC determines the best move. The development process of the tic-tac-toe application follows a linear sequential process model. In the analysis phase, the NPC is designed based on the concept of minimax. The software was modeled using the Unified Modeling Language (UML) and coded in Visual Basic. Our tests show that NPCs with the minimax algorithm work well.
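The minimax NPC described above can be sketched as follows (a generic textbook formulation in Python rather than the authors' Visual Basic implementation; the board layout and scoring are illustrative assumptions):

```python
# Minimal minimax for tic-tac-toe: the NPC ('O') picks the move with the
# best guaranteed outcome, assuming the opponent ('X') plays optimally.
# Board: list of 9 cells containing 'X', 'O' or ' '.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Return (score, move) from the NPC's ('O') point of view."""
    w = winner(b)
    if w:
        return (1 if w == 'O' else -1), None
    moves = [i for i, c in enumerate(b) if c == ' ']
    if not moves:
        return 0, None                    # draw
    best = None
    for m in moves:
        b[m] = player
        score, _ = minimax(b, 'X' if player == 'O' else 'O')
        b[m] = ' '
        # 'O' maximizes, 'X' minimizes.
        if best is None or (player == 'O') == (score > best[0]):
            best = (score, m)
    return best

# 'O' to move; playing cell 2 completes the top row and wins outright.
board = ['O', 'O', ' ', 'X', 'X', ' ', ' ', ' ', ' ']
score, move = minimax(board, 'O')
```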
- Published
- 2019
- Full Text
- View/download PDF
23. Direct Parallel Computations of Second-Order Search Directions for Model Predictive Control
- Author
-
Isak Nielsen and Daniel Axehill
- Subjects
0209 industrial biotechnology ,Sequence ,Computational complexity theory ,Computer science ,Parallel algorithm ,02 engineering and technology ,Control Engineering ,Optimal control ,MPC ,optimization ,parallel ,Riccati recursion ,Computer Science Applications ,Model predictive control ,020901 industrial engineering & automation ,Reglerteknik ,Control and Systems Engineering ,Control system ,Variable elimination ,Electrical and Electronic Engineering ,Algorithm ,Linear equation - Abstract
Model predictive control (MPC) is one of the most widespread advanced control schemes in industry today. In MPC, a constrained finite-time optimal control (CFTOC) problem is solved at each iteration in the control loop. The CFTOC problem can be solved using, for example, second-order methods such as interior-point or active-set methods, where the computationally most demanding part often consists of computing the sequence of second-order search directions. Each search direction can be computed by solving a set of linear equations that corresponds to solving an unconstrained finite-time optimal control (UFTOC) problem. In this paper, different direct (noniterative) parallel algorithms for solving UFTOC problems are presented. The parallel algorithms are all based on a recursive variable elimination and solution propagation technique. Numerical evaluations of one of the parallel algorithms indicate that a significant boost in performance can be obtained, which can facilitate high-performance second-order MPC solvers. Funding Agencies|Swedish Research Council (VR); Center for Industrial Information Technology (CENIIT); Excellence Center at Linkoping-Lund on Information Technology (ELLIIT)
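The Riccati recursion underlying such UFTOC solves can be sketched for a scalar system (a standard sequential textbook form; the paper's contribution is a direct parallel evaluation of this recursion, which is not shown here):

```python
# Backward Riccati recursion for a scalar LTI system x+ = a*x + b*u with
# stage cost q*x^2 + r*u^2 over horizon N. This is the sequential form;
# the paper parallelizes the computation of such search directions.

def riccati_backward(a, b, q, r, N, p_terminal):
    p = p_terminal
    gains = []
    for _ in range(N):
        k = (a * b * p) / (r + b * b * p)              # feedback gain, u = -k*x
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
        gains.append(k)
    gains.reverse()                                     # gains[t] applies at stage t
    return p, gains

# With a = b = q = r = 1 the recursion converges to the DARE fixed point
# p = (1 + sqrt(5)) / 2 (the golden ratio).
p0, K = riccati_backward(a=1.0, b=1.0, q=1.0, r=1.0, N=50, p_terminal=1.0)
```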
- Published
- 2019
- Full Text
- View/download PDF
24. Hybridizing simulated annealing with variable neighborhood search for bipartite graph crossing minimization
- Author
-
Gintaras Palubeckis, Arūnas Tomkevičius, and Armantas Ostreika
- Subjects
0209 industrial biotechnology ,Computer science ,Applied Mathematics ,020206 networking & telecommunications ,02 engineering and technology ,Parallel ,Tabu search ,Vertex (geometry) ,Computational Mathematics ,020901 industrial engineering & automation ,Simulated annealing ,0202 electrical engineering, electronic engineering, information engineering ,Bipartite graph ,Embedding ,Algorithm ,Time complexity ,Variable neighborhood search ,MathematicsofComputing_DISCRETEMATHEMATICS - Abstract
Given a bipartite graph, the problem we address, called the bipartite graph crossing minimization problem (BGCMP), is to find an embedding of the parts of the graph along two parallel lines so that the number of edge crossings is minimized. It is assumed that each edge is drawn as a straight line segment and that edges sharing an end vertex do not cross. We propose an approach for the BGCMP which combines a simulated annealing (SA) method with a variable neighborhood search (VNS) scheme. These two algorithms are executed iteratively. At each iteration, the solution produced by SA is submitted as input to the VNS component of the approach. Our VNS algorithm uses a local search technique based on a fast insertion neighborhood exploration procedure. We show that the time complexity of this procedure is O(n^2), where n is the order of the graph. Another fast procedure is proposed for computing the gain in the objective function value obtained by swapping the positions of two vertices. We experimentally compare our algorithm (called SA-VNS) against the tabu search algorithm as well as the GRASP approach from the literature. Computational results are reported on four sets of bipartite graphs. The results demonstrate the superiority of SA-VNS over the state-of-the-art methods. The source code implementing SA-VNS is made publicly available as a benchmark for future comparisons.
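The objective being minimized can be sketched as follows (a generic crossing counter, not the authors' SA-VNS code; note that edges sharing an endpoint are never counted, matching the problem definition above):

```python
# Count edge crossings in a two-layer (bipartite) drawing. Two straight-line
# edges (a, b) and (c, d) cross iff their endpoint orders invert between the
# layers: a left of c on one line while b right of d on the other.
# Brute force O(m^2); counting inversions with a BIT gives O(m log m).

def crossings(edges, top_pos, bot_pos):
    m, total = len(edges), 0
    for i in range(m):
        a, b = edges[i]
        for j in range(i + 1, m):
            c, d = edges[j]
            # Shared endpoints make one factor zero -> not counted.
            if (top_pos[a] - top_pos[c]) * (bot_pos[b] - bot_pos[d]) < 0:
                total += 1
    return total

# K2,2 drawn with both natural orders: exactly one crossing.
edges = [('u1', 'v1'), ('u1', 'v2'), ('u2', 'v1'), ('u2', 'v2')]
top = {'u1': 0, 'u2': 1}
bot = {'v1': 0, 'v2': 1}
n_cross = crossings(edges, top, bot)
```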
- Published
- 2019
- Full Text
- View/download PDF
25. Geometric Model for Generation of Contour-Parallel Lines' Family for Cutting Tool's Path Automated Computation
- Author
-
I. V. Krysova, K. L. Panchuk, and T. M. Myasoedova
- Subjects
020301 aerospace & aeronautics ,0203 mechanical engineering ,Cutting tool ,Computer science ,Computation ,Path (graph theory) ,02 engineering and technology ,010402 general chemistry ,Parallel ,Geometric modeling ,01 natural sciences ,Algorithm ,0104 chemical sciences - Abstract
This paper proposes a geometric model for the problem of forming contour-parallel (equidistant) lines for a flat contour with an island, and obtains an analytical solution of the problem, which is relevant for the computer-aided design of cutting tools processing pocket surfaces on CNC machines. The proposed geometric model is based on a cyclographic mapping of space onto a plane. Beyond providing an analytical solution, the geometric model differs from the known algebraic models and their solutions for this forming problem in that it allows a more complete and evident representation of the relationships and interaction of all its geometric components at the stages of 3D computer visualization. A 3D geometric model based on a cyclographic mapping of space is proposed for obtaining families of equidistant lines for connected and multiply connected regions with closed contours, taken as a basis for pocket-surface modeling. An algorithm for the analytical solution of the problem of generating equidistant families is derived from the geometric model. All stages of the analytical solution are accompanied by a figurative representation of the geometric objects and their relations in the virtual electronic space of the geometric model. The algorithm proposed for the case of a doubly connected polygonal region can be used as a basis for generating equidistant families for multiply connected polygonal regions. The availability of an analytical solution greatly simplifies the automated calculation of the tool path and the preparation of control programs for manufacturing pocket surfaces on CNC machines. An example and an algorithm supporting the working capacity of the proposed geometric model are presented.
- Published
- 2019
- Full Text
- View/download PDF
26. Review for 'A fault‐location algorithm for parallel line based on the long short‐term memory model using the distributed parameter line model'
- Author
-
Arvind Singh
- Subjects
Long short term memory ,Computer science ,Line model ,Parallel ,Fault (power engineering) ,Algorithm - Published
- 2021
- Full Text
- View/download PDF
27. GPU implementation of linear morphological openings with arbitrary angle.
- Author
-
Karas, Pavel, Morard, Vincent, Bartovský, Jan, Grandpierre, Thierry, Dokládalová, Eva, Matula, Petr, and Dokládal, Petr
- Abstract
Linear morphological openings and closings are important non-linear operators from mathematical morphology. In practical applications, many different orientations of digital line segments must typically be considered. In this paper, we (1) review efficient sequential as well as parallel algorithms for the computation of linear openings and closings; (2) compare the performance of CPU implementations of four state-of-the-art algorithms; (3) describe GPU implementations of two recent efficient algorithms allowing arbitrary orientation of the line segments; (4) propose, as the main contribution, an efficient and optimized GPU implementation of linear openings; and (5) compare the performance of all implementations on real images from various applications. From our experimental results, it turned out that the proposed GPU implementation is suitable for applications with large, industrial images, running under severe timing constraints. [ABSTRACT FROM AUTHOR]
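A 1-D linear opening along a digitized line segment reduces to a sliding minimum followed by a sliding maximum (a naive O(nk) sketch; the state-of-the-art algorithms compared in the paper, and its GPU implementation, achieve O(n) per line independent of segment length):

```python
# Grayscale opening with a 1-D line segment: erosion (sliding minimum)
# followed by dilation (sliding maximum) with the same structuring element.
# Border handling is omitted, so the output shrinks by 2*(k-1) samples.

def erode(sig, k):
    return [min(sig[i:i + k]) for i in range(len(sig) - k + 1)]

def dilate(sig, k):
    return [max(sig[i:i + k]) for i in range(len(sig) - k + 1)]

def opening(sig, k):
    return dilate(erode(sig, k), k)

# A narrow peak (width < k) is removed; the wide plateau of 3s survives.
signal = [0, 0, 5, 0, 0, 3, 3, 3, 3, 0, 0]
out = opening(signal, 3)
```

The van Herk/Gil-Werman scheme replaces each naive sliding window with running min/max prefixes, which is what makes long or arbitrarily oriented segments affordable.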
- Published
- 2015
- Full Text
- View/download PDF
28. A gravitational torque-compensated 2-DOF planar robotic arm design and its active control
- Author
-
Erdinc Sahin Conkur and Yalçın Bulut
- Subjects
0209 industrial biotechnology ,Computer science ,hyper-redundant robots ,02 engineering and technology ,Servomotor ,Computer Science::Robotics ,020901 industrial engineering & automation ,Planar ,0203 mechanical engineering ,Control theory ,Counter-balanced mechanisms ,snake robot ,Mechanisms ,Sinc function ,Serial ,Mechanical Engineering ,Gravity Compensation ,robotic arm ,Parallel ,Mechanism (engineering) ,Manipulators ,Algorithm ,020303 mechanical engineering & transports ,gravitational torque compensation ,Robot ,Reduction (mathematics) ,Robotic arm ,Gravitational torque - Abstract
Serial robot manipulators have servo motors with reduction gears on the link joints. For hyper-redundant robots, this kind of joint actuation cannot be used, since it would make the robots too heavy; cable-driven mechanisms are preferred instead. However, positioning accuracy is negatively affected by the cables. This paper addresses the positioning accuracy problem of cable-driven hyper-redundant robots by employing a 2-DOF robotic arm whose modules are counter-balanced. While the actuators connected to the base actively do most of the work through cables and springs, light and compact actuators attached to the links produce precise motion. The method results in compact, light, and precise hyper-redundant robotic arms. The procedure, governed by control software that includes a purpose-built 2D simulator, is experimentally shown to be a feasible method for compensating the gravitational torque.
- Published
- 2021
29. The impact of unequal processing time variability on reliable and unreliable merging line performance
- Author
-
Rodrigo Romero-Silva, Zouhair Laarraf, Erika Marsillac, Sabry Shaaban, and Operations Analytics
- Subjects
Economics and Econometrics ,Coefficient of variation ,0211 other engineering and technologies ,02 engineering and technology ,Reverse logistics ,Management Science and Operations Research ,Parallel ,Industrial and Manufacturing Engineering ,0502 economics and business ,SDG 7 - Affordable and Clean Energy ,Innovation ,Remanufacturing ,Throughput (business) ,Mathematics ,Assembly line balancing ,021103 operations research ,Series (mathematics) ,05 social sciences ,Pareto principle ,General Business, Management and Accounting ,Throughput ,Variable (computer science) ,Average buffer level ,Line (geometry) ,and Infrastructure ,SDG 9 - Industry, Innovation, and Infrastructure ,SDG 9 - Industry ,Algorithm ,050203 business & management ,Simulation ,Unreliable merging lines - Abstract
Research on merging lines is expanding as their use grows significantly in the contexts of remanufacturing, reverse logistics and developing economies. This article is the first to study the behavior of unpaced, reliable, and unreliable merging assembly lines that are deliberately unbalanced with respect to their coefficients of variation (CV). Conducting a series of simulation runs with varying line lengths, buffer storage capacities and unbalanced CV patterns delivers intriguing results. For both reliable and unreliable lines, the best pattern for generating higher throughput is found to be a balanced configuration (equal CVs along both parallel lines), except for unreliable lines with a station buffer capacity of six. In that case, the highest throughput results from the descending configuration, i.e. concentrating the variable stations close to the beginning of both parallel lines and the steady stations towards the end of the line. Ordering from the least to most steady station also provides the best average buffer level. By exploring the experimental Pareto Frontier, this study shows the combined performance of unbalanced CV patterns for throughput and average buffer level. Study results suggest that caution should be exercised when assuming equivalent behavior from reliable and unreliable lines, or single serial lines and merging lines, since the relative throughput performance of some CV patterns changed between the different configurations.
- Published
- 2021
- Full Text
- View/download PDF
30. OPTIMIZATION METHOD OF SIMPLEX ALGORITHM SOLUTION.
- Author
-
POPOVICIU, Ioan
- Subjects
- *
LINEAR programming , *SPARSE matrices , *SIMPLEXES (Mathematics) , *INVERSE problems , *CALCULUS - Abstract
For linear programming problems of large dimension with a sparse system matrix solved by the simplex method, it is necessary at each iteration to calculate the inverse of the basis matrix, which destroys the sparsity of the matrix. The article proposes replacing the calculation of the inverse of the basis matrix with the solution, by parallel iterative methods, of a linear system with a sparse system matrix. [ABSTRACT FROM AUTHOR]
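The proposed replacement can be sketched with the simplest parallel-friendly iterative solver, Jacobi iteration, on a sparse system (a generic illustration under the usual diagonal-dominance assumption; the article does not prescribe this particular method):

```python
# Jacobi iteration for a sparse, diagonally dominant system A x = b, stored
# row-wise as {row: {col: value}}. Each sweep updates all components from the
# previous iterate only, so the updates are independent and parallelizable,
# unlike computing and applying an explicit basis inverse.

def jacobi(A, b, iters=100):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # The comprehension reads the old x throughout, then rebinds it.
        x = [(b[i] - sum(v * x[j] for j, v in A[i].items() if j != i))
             / A[i][i] for i in range(n)]
    return x

# 3x3 sparse system with a strictly dominant diagonal; exact solution (1,1,1).
A = {0: {0: 4.0, 1: 1.0},
     1: {0: 1.0, 1: 4.0, 2: 1.0},
     2: {1: 1.0, 2: 4.0}}
b = [5.0, 6.0, 5.0]
x = jacobi(A, b)
```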
- Published
- 2012
31. Fast A* on road networks using a scalable separator-based heuristic
- Author
-
Renjie Chen and Craig Gotsman
- Subjects
050210 logistics & transportation ,Admissible heuristic ,021103 operations research ,Binary tree ,Heuristic (computer science) ,Computer science ,05 social sciences ,0211 other engineering and technologies ,02 engineering and technology ,Parallel ,Set (abstract data type) ,Tree (data structure) ,0502 economics and business ,Scalability ,Road map ,Algorithm - Abstract
Fastest-path queries between two points in a very large road map are an increasingly important primitive in modern transportation and navigation systems, so very efficient computation of these paths is critical for system performance and throughput. We present a novel method to compute an effective admissible heuristic for the fastest-path travel time between two points on a road map, which can be used to significantly accelerate the classical A* algorithm when computing fastest paths. Our basic method, called the Hierarchical Separator Heuristic (HSH), is based on a hierarchical set of linear separators of the map represented by a binary tree, where all the separators are parallel lines in a specific direction. A preprocessing step computes a short vector of values per road junction based on the separator tree, which is then stored with the map and used to efficiently compute the heuristic at the online query stage. We demonstrate experimentally that this method scales well to any map size, providing a high-quality heuristic, and thus a very efficient A* search, for fastest-path queries between points at all distances, especially small and medium range. We show how to significantly improve the basic HSH method by combining separator hierarchies in multiple directions and by partitioning the linear separators. Experimental results on real-world road maps show that HSH achieves accuracy above 95% in estimating the true travel time between two junctions at the price of storing approximately 25 values per junction.
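The accelerated search itself is classical A*; a minimal sketch follows, using straight-line Euclidean distance as the admissible heuristic in place of the paper's separator-based HSH values (the graph and coordinates are invented for illustration):

```python
import heapq, math

# A* shortest-path search. h(v) must be an admissible lower bound on the
# remaining cost; here it is plain Euclidean distance on junction
# coordinates, valid because edge weights equal Euclidean edge lengths.

def astar(adj, pos, s, t):
    h = lambda v: math.dist(pos[v], pos[t])
    dist = {s: 0.0}
    pq = [(h(s), s)]
    while pq:
        f, u = heapq.heappop(pq)
        if u == t:
            return dist[t]
        if f > dist[u] + h(u) + 1e-12:       # stale queue entry
            continue
        for v, w in adj[u]:
            nd = dist[u] + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd + h(v), v))
    return float('inf')

# Unit square of junctions; every edge has length 1.
pos = {'a': (0, 0), 'b': (1, 0), 'c': (0, 1), 'd': (1, 1)}
adj = {'a': [('b', 1), ('c', 1)], 'b': [('a', 1), ('d', 1)],
       'c': [('a', 1), ('d', 1)], 'd': [('b', 1), ('c', 1)]}
shortest = astar(adj, pos, 'a', 'd')
```

A tighter heuristic (such as the HSH values) prunes more of the queue, which is exactly where the paper's speedup comes from.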
- Published
- 2020
- Full Text
- View/download PDF
32. Flexible calibration method for visual measurement using an improved target with vanishing constraints
- Author
-
Linghui Yang, Xiaoyun Chen, Jiarui Lin, Jigui Zhu, and Yanbiao Sun
- Subjects
business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Process (computing) ,Image processing ,Function (mathematics) ,Parallel ,01 natural sciences ,Atomic and Molecular Physics, and Optics ,Manufacturing cost ,Edge detection ,Electronic, Optical and Magnetic Materials ,010309 optics ,Optics ,0103 physical sciences ,Calibration ,Computer Vision and Pattern Recognition ,business ,Algorithm ,Camera resectioning - Abstract
In this paper, an improved calibration method based on vanishing constraints is proposed for calculating the extrinsic parameters of cameras. First, we propose an improved target based on the conventional target with two groups of orthogonal parallel lines. The novel target is composed of two groups of parallel lines whose angle may range from 80° to 90°, which reduces the difficulty of target production and the manufacturing cost. Furthermore, in the optimization process, we design a new function with a more robust penalty factor, instead of using empirical values, to obtain the extrinsic parameters of the binocular vision sensors. Finally, owing to the improved target and the novel optimization function, the proposed method is more flexible and robust compared with Zhang's method.
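The basic vanishing-point construction that such calibration rests on can be sketched in homogeneous coordinates (a standard building block, not the paper's full calibration or optimization procedure; the example points are invented):

```python
# Vanishing point of a group of parallel scene lines, computed in homogeneous
# image coordinates: a line through image points p and q is l = p x q, and
# two lines meet at v = l1 x l2 (the cross product in both cases).

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def vanishing_point(p1, q1, p2, q2):
    """p, q are (x, y) image points on two lines of the same parallel group."""
    l1 = cross((*p1, 1.0), (*q1, 1.0))
    l2 = cross((*p2, 1.0), (*q2, 1.0))
    x, y, w = cross(l1, l2)
    return x / w, y / w    # w == 0 would mean the lines stay parallel in the image

# Two image lines converging at (4, 2).
vp = vanishing_point((0, 0), (2, 1), (0, 4), (2, 3))
```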
- Published
- 2020
33. Impacts of Channel Network Type on the Unit Hydrograph
- Author
-
Sediqa Hassani, Almoatasem Abdoulhak, Jorge Gironás, Dario Mirossi, Kelsey A Czyzyk, and Jeffrey D. Niemann
- Subjects
lcsh:Hydraulic engineering ,010504 meteorology & atmospheric sciences ,Hydrological modelling ,Geography, Planning and Development ,Flow (psychology) ,0207 environmental engineering ,trellis ,Hydrograph ,02 engineering and technology ,Aquatic Science ,Structural basin ,Type (model theory) ,01 natural sciences ,Biochemistry ,rectangular ,Kinematic wave ,lcsh:Water supply for domestic and industrial purposes ,parallel ,lcsh:TC1-978 ,hydrologic response ,020701 environmental engineering ,0105 earth and related environmental sciences ,Water Science and Technology ,lcsh:TD201-500 ,pinnate ,Path (graph theory) ,dendritic ,Random variable ,Algorithm ,Geology - Abstract
Unit hydrographs (UHs) remain widely used in hydrologic modeling to predict the stormflow that is produced at a basin outlet in response to runoff generated throughout the basin. Numerous studies have demonstrated that a basin's UH depends on its geomorphic properties, and several methods estimate synthetic UHs using such properties. However, previous studies have not examined whether the channel network type (such as dendritic, parallel, pinnate, rectangular, and trellis) impacts the UH shape. The objectives of this study were to determine: (1) whether those five network types exhibit distinct UHs, and (2) whether knowledge of the network type is sufficient to replace the actual flow path structure in UH estimation. To achieve these objectives, a UH framework is developed based on kinematic wave theory. The framework allows the impacts of the network structure on the UH to be isolated into two random variables, which facilitates comparisons between network types. The framework is applied to 10 basins of each type. The results show that the five network types exhibit distinct instantaneous UHs, but the type allows accurate UH determination only for pinnate basins.
- Published
- 2020
34. Refficientlib: an efficient load-rebalanced adaptive mesh refinement algorithm for high-performance computational physics meshes
- Author
-
Joan Baiges, Camilo Bayona, Universitat Politècnica de Catalunya. Departament d'Enginyeria Civil i Ambiental, and Universitat Politècnica de Catalunya. ANiComp - Anàlisi numèrica i computació científica
- Subjects
Engineering, Civil ,finite differences ,Computer science ,Engineering, Multidisciplinary ,010103 numerical & computational mathematics ,Parallel computing ,01 natural sciences ,Hierarchical database model ,Matemàtiques i estadística::Anàlisi numèrica [Àrees temàtiques de la UPC] ,adaptivity ,parallel ,adaptive mesh refinement ,Polygon mesh ,Engineering, Ocean ,0101 mathematics ,Engineering, Aerospace ,Engineering, Biomedical ,finite volumes ,Anàlisi numèrica ,Quadrilateral ,Adaptive mesh refinement ,Applied Mathematics ,high-performance computing ,Supercomputer ,Computer Science, Software Engineering ,Engineering, Marine ,Computational physics ,010101 applied mathematics ,Engineering, Manufacturing ,Engineering, Mechanical ,Computational Mathematics ,Scalability ,Engineering, Industrial ,finite elements ,Distributed memory ,Hexahedron ,load rebalancing ,Algorithm ,Numerical analysis - Abstract
In this paper we present a novel algorithm for adaptive mesh refinement in computational physics meshes in a distributed memory parallel setting. The proposed method is developed for nodally based parallel domain partitions where the nodes of the mesh belong to a single processor, whereas the elements can belong to multiple processors. Some of the main features of the algorithm presented in this paper are its capability of handling multiple types of elements in two and three dimensions (triangular, quadrilateral, tetrahedral, and hexahedral), the small amount of memory required per processor, and the parallel scalability up to thousands of processors. The presented algorithm is also capable of dealing with nonbalanced hierarchical refinement, where multirefinement level jumps are possible between neighbor elements. An algorithm for dealing with load rebalancing is also presented, which allows us to move the hierarchical data structure between processors so that load unbalancing is kept below an acceptable level at all times during the simulation. A particular feature of the proposed algorithm is that arbitrary renumbering algorithms can be used in the load rebalancing step, including both graph partitioning and space-filling renumbering algorithms. The presented algorithm is packed in the Fortran 2003 object oriented library RefficientLib, whose interface calls, which allow it to be used from any computational physics code, are summarized. Finally, numerical experiments illustrating the performance and scalability of the algorithm are presented.
- Published
- 2020
35. Angular Radon spectrum for rotation estimation
- Author
-
Dario Lodi Rizzini
- Subjects
Radon transform ,020207 software engineering ,02 engineering and technology ,Correlation function (astronomy) ,Mixture model ,Translation (geometry) ,Parallel ,Point distribution model ,Artificial Intelligence ,Orientation (geometry) ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Algorithm ,Rotation (mathematics) ,Software ,Mathematics - Abstract
This paper presents a robust method for rotation estimation of planar point sets using the Angular Radon Spectrum (ARS). Given a Gaussian Mixture Model (GMM) representing the point distribution, the ARS is a continuous function derived from the Radon Transform of that distribution. The ARS characterizes the orientation of a point distribution by measuring its alignment w.r.t. a pencil of parallel lines. By exploiting its translation and angular-shift invariance, the rotation angle between two point sets can be estimated through the correlation of the corresponding spectra. Besides its definition, the novel contributions of this paper include the efficient computation of the ARS and of the correlation function through their Fourier expansion, and a new algorithm for assessing the rotation between two point sets. Moreover, experiments with standard benchmark datasets assess the performance of the proposed algorithm and other state-of-the-art methods in the presence of noisy and incomplete data.
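A discrete, simplified analogue of spectrum correlation can be sketched with centroid-relative angle histograms (the actual ARS is a continuous function of the direction of a pencil of parallel lines, correlated via its Fourier expansion; the histogram binning and test points here are illustrative assumptions):

```python
import math

# Rotation between two planar point sets estimated by circularly correlating
# centroid-relative angle histograms; the shift maximizing the correlation
# is the rotation estimate (exploiting translation invariance via the
# centroid and angular-shift invariance via circular correlation).

def angle_hist(points, bins=360):
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    h = [0] * bins
    for x, y in points:
        a = math.atan2(y - cy, x - cx) % (2 * math.pi)
        h[int(a / (2 * math.pi) * bins) % bins] += 1
    return h

def estimate_rotation_deg(pts_a, pts_b, bins=360):
    ha, hb = angle_hist(pts_a, bins), angle_hist(pts_b, bins)
    corr = [sum(ha[i] * hb[(i + s) % bins] for i in range(bins))
            for s in range(bins)]
    return corr.index(max(corr)) * 360.0 / bins

def rotate(points, deg):
    t = math.radians(deg)
    c, s = math.cos(t), math.sin(t)
    return [(x * c - y * s, x * s + y * c) for x, y in points]

# Five asymmetric points whose centroid is the origin (last point closes the sum).
base = [(1.0, 0.3), (-0.7, 0.9), (0.2, -1.1), (-0.9, -0.4), (0.4, 0.3)]
theta = estimate_rotation_deg(base, rotate(base, 30.0))
```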
- Published
- 2018
- Full Text
- View/download PDF
36. Synchronised Measurements Based Algorithm for Long Transmission Line Fault Analysis
- Author
-
Vladimir Terzija, V. Stanojevic, and Gary Preston
- Subjects
Engineering ,General Computer Science ,business.industry ,020209 energy ,02 engineering and technology ,Parallel ,Fault (power engineering) ,Fault indicator ,Electric power transmission ,Transmission line ,Line (geometry) ,0202 electrical engineering, electronic engineering, information engineering ,Algorithm design ,Fault model ,business ,Algorithm - Abstract
A novel numerical algorithm for the analysis of single line to ground faults on long overhead transmission lines is presented in this paper. The algorithm is an extension of previous work by the authors and takes into consideration the shunt capacitance of the transmission line, making it applicable to long overhead transmission lines. The algorithm utilizes synchronized data sampling from both line terminals and a non-recursive parameter estimation method. The algorithm was developed using an accurate fault model including arcing phenomena and tower footing resistance, and a dynamic arc model was included in the fault model used to test the algorithm. The algorithm was derived in the time domain and simultaneously estimates the arc voltage amplitude, tower footing resistance, and the fault location. The proposed algorithm was thoroughly tested using simulated fault cases. In particular, the sensitivity of the algorithm to fault resistance, harmonics, untransposed line models, and parallel lines was investigated.
- Published
- 2018
- Full Text
- View/download PDF
37. Structure‐based interpolation method for restoring the intensity of low‐density impulse noise
- Author
-
Payman Moallem, Payam Sanaee, and Farbod Razzazi
- Subjects
Pixel ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020206 networking & telecommunications ,02 engineering and technology ,Missing data ,Impulse noise ,Parallel ,Computer Science::Computer Vision and Pattern Recognition ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,Focus (optics) ,Algorithm ,Software ,Intensity (heat transfer) ,Image restoration ,ComputingMethodologies_COMPUTERGRAPHICS ,Interpolation - Abstract
Restoring pixel intensities corrupted by impulse noise has a great impact on the quality of decision-based filters. In this study, the authors' focus is on intensity restoration of noisy pixels. Their assumption is that noisy pixels are already established by the noise-detection unit being considered as missing data in the image. When the interpolation methods are adopted in the noise-restoration unit of the decision-based filters for the purpose of restoring the intensities of the noisy pixels, two unexpected problems emerge - jagged edges and blurred details. These drawbacks can be ameliorated by using extra information obtained from structures in the images. Their structure-based interpolation method comprises two steps: pre-interpolation and post-interpolation. In the first step (pre-interpolation), the Sibson natural neighbour interpolation is adopted for the initial estimation of the intensities of all noisy pixels. In the second step (post-interpolation, modifying-phase), for each noisy pixel in pre-interpolated image, the intensity variations of the pixels on two adjacent parallel lines, in different directions in their corresponding local windows, are analysed. Based on the obtained structural information, the intensity of the centred noisy pixel is restored more effectively. Since the structures in the images are far more noticeable at low-density impulse noise, the proposed method works more efficiently in this case; however, a gradual improvement is achieved for high-density impulse noise.
- Published
- 2018
- Full Text
- View/download PDF
38. A novel single-ended fault location scheme for parallel transmission lines using k-nearest neighbor algorithm
- Author
-
Aleena Swetapadma and Anamika Yadav
- Subjects
Scheme (programming language) ,Parallel transmission lines ,General Computer Science ,Computer science ,020209 energy ,020208 electrical & electronic engineering ,Hardware_PERFORMANCEANDRELIABILITY ,02 engineering and technology ,Fault (power engineering) ,Parallel ,Standard deviation ,Discrete Fourier transform ,k-nearest neighbors algorithm ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,Electrical and Electronic Engineering ,computer ,Algorithm ,Load angle ,computer.programming_language - Abstract
This paper proposes a k-nearest neighbour (k-NN)-based method for fault location estimation of all types of fault in parallel lines using one-terminal measurements. The Discrete Fourier Transform (DFT) is used for pre-processing the signals, and the standard deviations of one cycle of pre-fault and one cycle of post-fault samples are used as inputs to the k-NN algorithm. The results obtained under various fault conditions demonstrate the high accuracy of the proposed scheme in estimating the fault location. The accuracy of the k-NN-based fault location scheme is not affected by alterations in fault type (including inter-circuit faults), fault location, fault inception angle, fault resistance, or pre-fault load angle.
- Published
- 2018
- Full Text
- View/download PDF
39. Fault location algorithm for multi‐terminal mixed lines with shared tower and different voltage levels
- Author
-
Ahmed Yousuf Saber
- Subjects
Computer science ,020209 energy ,020208 electrical & electronic engineering ,Soil resistivity ,Energy Engineering and Power Technology ,Hardware_PERFORMANCEANDRELIABILITY ,02 engineering and technology ,Parallel ,Computer Science::Hardware Architecture ,symbols.namesake ,Electric power system ,Electric power transmission ,Control and Systems Engineering ,0202 electrical engineering, electronic engineering, information engineering ,Taylor series ,symbols ,Kirchhoff's circuit laws ,Electrical and Electronic Engineering ,MATLAB ,Computer Science::Operating Systems ,Algorithm ,computer ,Computer Science::Distributed, Parallel, and Cluster Computing ,computer.programming_language ,Voltage - Abstract
This study presents a new fault location algorithm for multiple-circuit shared tower transmission lines with different voltages utilising synchronised measurements. The formulation of the proposed algorithm is based on overhead transmission lines theory and Taylor series expansion of distributed-parameter line model. The untransposition of the lines and the mutual coupling between the parallel lines are considered to achieve precise fault location. Kirchhoff's current law and fault location equation are used to identify the faulty section and obtain the location of all normal shunt faults and inter-circuit faults. The power system is simulated with DIgSILENT Power Factory software and all calculations are performed by MATLAB. Numerous fault cases are conducted to demonstrate the efficacy of the proposed algorithm considering different values of fault resistances, different fault locations and all fault types. In addition, the effects of synchrophasors errors, measurement errors and the soil resistivity variations are investigated. The results of simulation prove the efficacy of the proposed algorithm for all fault cases.
- Published
- 2018
- Full Text
- View/download PDF
40. Genetic Algorithms, a Nature-Inspired Tool: A Survey of Applications in Materials Science and Related Fields: Part II.
- Author
-
Paszkowicz, Wojciech
- Subjects
GENETIC algorithms ,MATERIALS science ,CRYSTALLOGRAPHY ,SCIENCE & industry ,SURVEYS ,INDUSTRIAL efficiency - Abstract
Genetic algorithms (GAs) are a helpful tool for optimization, simulation, modelling, design, and prediction in various domains of science, including materials science, medicine, technology, economy, industry, and environment protection. Reported uses of GAs have led to the solution of numerous complex computational tasks. In materials science and related fields of science and technology, GAs are routinely used for materials modeling and design and for optimization of material properties; the method is also useful for organizing material or device production at the industrial scale. Here, the most recent (years 2008–2012) applications of GAs in materials science and related fields (solid state physics and chemistry, crystallography, production, and engineering) are reviewed. The representative examples selected from recent literature show how broadly useful this computational method is. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
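The survey above does not tie GAs to any single formulation, but the basic generational loop it assumes is standard. A minimal, self-contained sketch on a toy one-max problem (all parameter values and the `genetic_algorithm` helper are illustrative assumptions, not drawn from any application in the survey):

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=40, generations=60,
                      crossover_rate=0.9, mutation_rate=0.02, seed=0):
    """Minimal generational GA over fixed-length bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def tournament():
            # Binary tournament selection: the fitter of two random parents wins.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < crossover_rate:
                cut = rng.randrange(1, n_bits)          # single-point crossover
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):
                for i in range(n_bits):
                    if rng.random() < mutation_rate:    # bit-flip mutation
                        c[i] ^= 1
                children.append(c)
        pop = children[:pop_size]
        gen_best = max(pop, key=fitness)
        if fitness(gen_best) > fitness(best):
            best = gen_best[:]
    return best

# One-max toy problem: maximize the number of 1-bits in the string.
best = genetic_algorithm(sum)
```

The same skeleton carries over to the materials-design uses the survey reviews by swapping the bit-string encoding and the fitness function.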
41. Trade-offs Between the Size of Advice and Broadcasting Time in Trees.
- Author
-
Fusco, Emanuele and Pelc, Andrzej
- Subjects
- *
ALGORITHMS , *COMPUTER networks , *TOPOLOGY , *LINEAR algebra , *BROADCASTING industry , *DETERMINISTIC chaos - Abstract
We study the amount of information required to perform fast broadcasting in tree networks. The source, located at the root of a tree, has to disseminate a message to all nodes. In each round, each informed node can transmit to one child. Nodes do not know the topology of the tree, but an oracle that knows it can give a string of bits of advice to the source, which can then pass it down the tree with the source message. The quality of a broadcasting algorithm with advice is measured by its competitive ratio: the worst-case ratio, taken over n-node trees, between the time of this algorithm and the optimal broadcasting time in the given tree. Our goal is to find a trade-off between the size of advice and the best competitive ratio of a broadcasting algorithm for n-node trees. We establish such a trade-off with an approximation factor of O(n^ε), for an arbitrarily small positive constant ε. This is the first communication problem for which a trade-off between the size of advice and the efficiency of the solution is shown for arbitrary sizes of advice. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
42. Computation of Optimized Electrode Arrays for 3-D Electrical Resistivity Tomography Surveys
- Author
-
Dimitrios Oikonomou, Kleanthis Simyrdanis, and Nikos Papadopoulos
- Subjects
Technology ,010504 meteorology & atmospheric sciences ,QH301-705.5 ,Computer science ,QC1-999 ,Computation ,010502 geochemistry & geophysics ,Parallel ,01 natural sciences ,Inversion (discrete mathematics) ,Matrix (mathematics) ,symbols.namesake ,General Materials Science ,Electrical resistivity tomography ,Sensitivity (control systems) ,Biology (General) ,QD1-999 ,Instrumentation ,0105 earth and related environmental sciences ,Fluid Flow and Transfer Processes ,3-D ,Physics ,Process Chemistry and Technology ,General Engineering ,Engineering (General). Civil engineering (General) ,Computer Science Applications ,Data set ,Chemistry ,Jacobian matrix and determinant ,symbols ,ERT ,TA1-2040 ,optimization ,Algorithm ,Jacobian - Abstract
The present study explores the applicability and effectiveness of an optimization technique applied to electrical resistivity tomography data. The procedure is based on the Jacobian matrix: the most sensitive measurements are selected from a comprehensive data set to enhance the least-resolvable parameters of the reconstructed model. Two existing inversion programs, in two and three dimensions, are modified to incorporate this new approach. Both select the optimum data from an initial comprehensive data set comprised of merged conventional arrays. With the two-dimensional (2-D) optimization approach, the most sensitive measurements are selected from a 2-D survey profile, and a clone of the resulting optimum profile then reproduces a three-dimensional (3-D) optimum data set composed of equally spaced parallel lines. In a different approach, with the 3-D optimization technique, the optimum data are selected from a 3-D data set of equally spaced individual parallel lines. Both approaches are compared with Stummer's optimization technique, which is based on the resolution matrix. The Jacobian optimization approach has the advantage of selecting the optimum data set without requiring a solution of the inversion problem, since the Jacobian matrix is calculated as part of the forward resistivity problem; it is thus faster than previously published approaches based on the calculation of the sensitivity matrix. Synthetic 3-D data, based on the extension of previously published work on the 2-D optimization case, and field data from two case studies in Greece are tested, verifying the validity of the present study: a subset of the initial data set (about 15–50% of the measurements) is able to reconstruct a model similar to the one produced from the original comprehensive data set.
- Published
- 2021
- Full Text
- View/download PDF
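The Jacobian-based selection described above can be illustrated with a simplified greedy sketch: repeatedly add the measurement (row of the Jacobian) with the largest sensitivity to the currently least-resolved model parameter. The toy matrix, the specific selection rule, and the `select_optimal_measurements` helper are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def select_optimal_measurements(J, n_select):
    """Greedy sensitivity-based selection: at each step, add the measurement
    (Jacobian row) that contributes most to the model parameter (column)
    with the lowest accumulated sensitivity so far."""
    J = np.abs(np.asarray(J, dtype=float))
    n_meas, n_params = J.shape
    coverage = np.zeros(n_params)          # accumulated sensitivity per parameter
    selected, remaining = [], set(range(n_meas))
    for _ in range(min(n_select, n_meas)):
        weakest = int(np.argmin(coverage))               # least-resolved parameter
        best = max(remaining, key=lambda r: J[r, weakest])
        selected.append(best)
        remaining.remove(best)
        coverage += J[best]
    return selected

# Toy Jacobian: 6 candidate measurements, 3 model parameters.
J = [[0.9, 0.1, 0.0],
     [0.8, 0.2, 0.1],
     [0.1, 0.9, 0.0],
     [0.0, 0.1, 0.9],
     [0.2, 0.8, 0.1],
     [0.1, 0.0, 0.8]]
picks = select_optimal_measurements(J, 3)  # one strong measurement per parameter
```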
43. A message combining approach for efficient array redistribution in non-all-to-all communication networks.
- Author
-
Souravlas, Stavros and Roumeliotis, Manos
- Subjects
- *
PROGRAMMABLE array logic , *PARALLEL computers , *COMPUTATIONAL mathematics , *PERMUTATIONS , *ALGORITHMS - Abstract
Array redistribution lies at the heart of a number of applications in parallel computing. This paper presents a message-combining approach for scheduling runtime redistribution of one-dimensional arrays. The important contribution of the proposed scheme is that it eliminates the need for local data reorganization, as noted by Sundar in 2001: the blocks destined for each processor are combined into a series of messages exchanged between neighbouring nodes, so that the receiving processors do not need to reorganize the incoming data blocks before storing them to memory locations. Local data reorganization is of great importance, especially in networks where there is no direct communication between all nodes (such as tori, meshes, and trees). In such networks a block must travel through a number of relays before reaching the target processor, which increases the number of messages generated and, consequently, the number of data permutations that must be performed within the memory of each target processor to ensure correct data order. The strategy is based on a relation between groups of communicating processor pairs called superclasses. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
44. A Robust Lane Detection Method Based on Vanishing Point Estimation Using the Relevance of Line Segments
- Author
-
Seong-Whan Lee, Ju Han Yoo, Sung-Kee Park, and Donghwan Kim
- Subjects
050210 logistics & transportation ,Computer science ,business.industry ,Mechanical Engineering ,05 social sciences ,Feature extraction ,02 engineering and technology ,Image segmentation ,Parallel ,Computer Science Applications ,Line segment ,Robustness (computer science) ,0502 economics and business ,Automotive Engineering ,Lookup table ,Outlier ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Computer vision ,Artificial intelligence ,Vanishing point ,business ,Algorithm - Abstract
In this paper, a robust lane detection method based on vanishing point estimation is proposed. Estimating a vanishing point can be helpful in detecting lanes, because parallel lines converge on the vanishing point in a projected 2-D image. However, it is not easy to estimate the vanishing point correctly in an image with a complex background. Thus, a robust vanishing point estimation method is proposed that uses a probabilistic voting procedure based on intersection points of line segments extracted from an input image. The proposed voting function is defined with line segment strength that represents relevance of the extracted line segments. Next, candidate line segments for lanes are selected by considering geometric constraints. Finally, the host lane is detected by using the proposed score function, which is designed to remove outliers in the candidate line segments. Also, the detected host lane is refined by using inter-frame similarity that considers location consistency of the detected host lane and the estimated vanishing point in consecutive frames. Furthermore, in order to reduce computational costs in the vanishing point estimation process, a method using a lookup table is proposed. Experimental results show that the proposed method efficiently estimates the vanishing point and detects lanes in various environments.
- Published
- 2017
- Full Text
- View/download PDF
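The voting procedure in the abstract above can be sketched as follows: intersect pairs of extracted line segments, weight each intersection (here by the product of segment lengths, a simplified stand-in for the paper's line segment strength), and accumulate votes on a coarse grid. The helper names, grid size, and toy segments are hypothetical, not the authors' code:

```python
import itertools
import math
from collections import defaultdict

def intersect(seg1, seg2):
    """Intersection of the infinite lines through two segments,
    or None if they are (near-)parallel."""
    (x1, y1), (x2, y2) = seg1
    (x3, y3), (x4, y4) = seg2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

def estimate_vanishing_point(segments, cell=10):
    """Accumulate length-weighted votes from pairwise intersections on a
    coarse grid and return the centre of the winning cell."""
    votes = defaultdict(float)
    for s1, s2 in itertools.combinations(segments, 2):
        p = intersect(s1, s2)
        if p is None:
            continue
        w = math.dist(*s1) * math.dist(*s2)   # longer segments vote harder
        votes[(round(p[0] / cell), round(p[1] / cell))] += w
    (cx, cy), _ = max(votes.items(), key=lambda kv: kv[1])
    return cx * cell, cy * cell

# Three lane-like segments converging at (100, 0), plus one outlier segment.
segments = [((0, 200), (50, 100)), ((200, 200), (150, 100)),
            ((80, 200), (90, 100)), ((0, 50), (10, 300))]
vp = estimate_vanishing_point(segments)  # → (100, 0)
```

The grid accumulation plays the role of the paper's lookup-table speedup in spirit only: it caps the cost of locating the densest cluster of intersection points.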
45. A practical first-zone distance relaying algorithm for long parallel transmission lines
- Author
-
Marcos R. Araújo and Clever Pereira
- Subjects
Engineering ,Admittance ,Iterative method ,business.industry ,020209 energy ,Energy Engineering and Power Technology ,02 engineering and technology ,Parallel ,Symmetrical components ,Fault detection and isolation ,law.invention ,Relay ,law ,Line (geometry) ,0202 electrical engineering, electronic engineering, information engineering ,Electrical and Electronic Engineering ,business ,Electrical impedance ,Algorithm - Abstract
This paper presents a novel non-iterative first-zone distance relaying algorithm for single-line-to-ground fault detection in long parallel transmission lines. The conventional distance protection scheme is based on the series impedance line model, in which the shunt capacitance and propagation effects are neglected. Such a simplification, despite being adequate for short lines, can lead to significant errors in the apparent impedance estimation for long-distance faults. The proposed algorithm is derived from a distributed parameter line model applicable to long parallel lines, in which the shunt capacitance and the mutual impedance and admittance are all fully considered. Sliding fault simulations were carried out in MATLAB using symmetrical components and graph theory, assuming two long parallel transmission lines. It is shown that the proposed algorithm leads to very good agreement between the apparent and positive-sequence impedances to the faults for line lengths up to approximately 800 km, without using iterative methods. This indicates that the proposed algorithm prevents the distance relay from underreaching for long-distance faults in parallel lines, without additional computational cost.
- Published
- 2017
- Full Text
- View/download PDF
46. A Fault Location Algorithm for Two-End Series-Compensated Double-Circuit Transmission Lines Using the Distributed Parameter Line Model
- Author
-
Mirrasoul J. Mousavi, Gergely Gombos, Ning Kang, and Xiaoming Feng
- Subjects
Engineering ,business.industry ,020209 energy ,Mechanical Engineering ,020208 electrical & electronic engineering ,Phasor ,Energy Engineering and Power Technology ,Varistor ,02 engineering and technology ,Parallel ,Symmetrical components ,law.invention ,Electric power transmission ,law ,0202 electrical engineering, electronic engineering, information engineering ,Electrical and Electronic Engineering ,Transformer ,MATLAB ,business ,Algorithm ,computer ,Voltage ,computer.programming_language - Abstract
A new fault location algorithm for two-end series-compensated double-circuit transmission lines utilizing unsynchronized two-terminal current phasors and local voltage phasors is presented in this paper. The distributed parameter line model is adopted to take into account the shunt capacitance of the lines. The mutual coupling between the parallel lines in the zero-sequence network is also considered. The boundary conditions under different fault types are used to derive the fault location formulation. The developed algorithm directly uses the local voltage phasors on the line side of the series compensation (SC) and metal oxide varistor (MOV). However, when potential transformers are not installed on the line side of the SC and MOVs at the local terminal, these measurements can be calculated from the local terminal bus voltage and currents by estimating the voltages across the SC and MOVs. MATLAB SimPowerSystems is used to generate cases under diverse fault conditions to evaluate accuracy. The simulat...
- Published
- 2017
- Full Text
- View/download PDF
47. Parallel implementation of a transportation network model
- Author
-
O’Cearbhaill, Eoin A. and O’Mahony, Margaret
- Subjects
- *
TRANSPORTATION , *ELECTRONIC data processing , *COMPUTATIONAL neuroscience , *COMPUTER networks - Abstract
Abstract: This paper describes the parallel implementation of a transport network model. The 'Single-Program, Multiple-Data' (SPMD) paradigm is employed, using a simple data decomposition approach in which each processor runs the same program but acts on a different subset of the data. The objective is to reduce the execution time of the model. The computationally intensive part of the model lies within the assignment and simulation section, and this section is therefore parallelised and executed using 1, 2, 4, 8 and 16 processors. The convergence, accuracy and performance of the parallel model are then assessed and compared to the linear implementation. The results indicate a performance increase of over 8 for the parallelised module and a speedup of 5 for the total model when the model is run using 16 processors. The efficiency, average parallelism and efficiency-execution time profile are also discussed. In the context of time savings with 16 processors compared with 1, the time savings on the IBM SP2 are of the order of 80% and, compared to a linear implementation on a dual-processor Intel machine, of the order of 86%. [Copyright © Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
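The speedup and efficiency figures quoted above follow the standard definitions S(p) = T(1)/T(p) and E(p) = S(p)/p. A small sketch; the timing values are hypothetical, chosen only so the 16-processor total-model speedup matches the reported factor of 5:

```python
def parallel_metrics(t_serial, timings):
    """Speedup S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p for measured
    run times keyed by processor count."""
    out = {}
    for p, t_p in sorted(timings.items()):
        s = t_serial / t_p
        out[p] = {"speedup": round(s, 2), "efficiency": round(s / p, 2)}
    return out

# Hypothetical run times in seconds for 1..16 processors.
timings = {1: 100.0, 2: 55.0, 4: 31.0, 8: 22.0, 16: 20.0}
metrics = parallel_metrics(100.0, timings)  # metrics[16]["speedup"] → 5.0
```

As in the abstract, efficiency drops as processor count grows: communication and the unparallelised sections of the model dominate at 16 processors.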
48. Identifying Redundant Flow Limits on Parallel Lines
- Author
-
Daniel K. Molzahn
- Subjects
Optimization problem ,Linear programming ,Computer science ,020209 energy ,Energy Engineering and Power Technology ,02 engineering and technology ,AC power ,Parallel ,Ellipsoid ,Electric power system ,Test case ,0202 electrical engineering, electronic engineering, information engineering ,Redundancy (engineering) ,Electrical and Electronic Engineering ,Algorithm - Abstract
Many power system optimization problems constrain line flows with limits specified in terms of the magnitudes of apparent power or current flows. The set of line-flow constraints may have redundancies (i.e., the feasible space may be unchanged upon removal of a subset of the line-flow constraints), which unnecessarily complicate optimization problems. This letter describes an algorithm for identifying redundant line-flow constraints corresponding to certain parallel lines. After formulating the constraints as ellipsoids in the voltage variables, redundancies are detected from the absence of intersections between pairs of ellipsoids corresponding to parallel lines. This algorithm is demonstrated using several large test cases.
- Published
- 2018
- Full Text
- View/download PDF
49. Path Planning with Discrete Geometric Shape Patterns
- Author
-
Bruno Simon, Peter Riegl, and Andreas Gaull
- Subjects
050210 logistics & transportation ,020301 aerospace & aeronautics ,Orientation (computer vision) ,Computer science ,05 social sciences ,02 engineering and technology ,Geometric shape ,Construct (python library) ,Parallel ,Collision ,Line segment ,0203 mechanical engineering ,Simple (abstract algebra) ,0502 economics and business ,Motion planning ,Algorithm - Abstract
This paper describes a novel technique for generating vehicle trajectories, motivated by the need for fast and robust path planning in hazardous traffic situations. We use the well-known approach of assembling trajectories from circular arcs, line segments, and clothoids to construct curves and s-bends aligned to a given target line. To this end, we use a special geometric technique based on discrete shape patterns built from elementary blocks. This enables us to derive complex maneuvers from simple design specifications. We can use it to create curves that enclose an arbitrary angle and build s-bends that are aligned with a parallel line at any distance. If these two methods are combined, curves and s-bends can be generated that are aligned to lines of arbitrary orientation. Simulations confirm the correct operation of the proposed algorithm. The applicability of the approach was demonstrated in real driving tests using a collision evasion scenario.
- Published
- 2019
- Full Text
- View/download PDF
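The simplest s-bend the abstract mentions, built from two equal, opposite circular arcs (no clothoid transitions), has a closed form: a lateral shift `offset` requires each arc to turn by theta = acos(1 - offset/(2R)), advancing 2R·sin(theta) longitudinally. A sketch under these simplifying assumptions, with hypothetical lane-change numbers, not the paper's shape-pattern method itself:

```python
import math

def s_bend_geometry(offset, radius):
    """Symmetric s-bend from two equal, opposite circular arcs of radius R.
    Each arc turns by theta; the lateral shift is 2*R*(1 - cos(theta)) and
    the longitudinal advance is 2*R*sin(theta)."""
    if not 0 < offset <= 4 * radius:
        raise ValueError("offset must satisfy 0 < offset <= 4*radius")
    theta = math.acos(1 - offset / (2 * radius))
    advance = 2 * radius * math.sin(theta)
    return theta, advance

# Shift the path 2 m sideways using 10 m arcs (hypothetical values).
theta, advance = s_bend_geometry(2.0, 10.0)
```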
50. HLIBCov: Parallel hierarchical matrix approximation of large covariance matrices and likelihoods with applications in parameter identification
- Author
-
Ying Sun, David E. Keyes, Ronald Kriemann, Alexander Litvinenko, and Marc G. Genton
- Subjects
parallel hierarchical matrices, daily moisture, MLE, likelihood, matrix determinant, Cholesky, parameter estimation ,Covariance function ,Parameter identification ,Computer science ,Gaussian ,Clinical Biochemistry ,likelihood ,MathematicsofComputing_NUMERICALANALYSIS ,010501 environmental sciences ,01 natural sciences ,Large datasets ,03 medical and health sciences ,symbols.namesake ,daily moisture ,ddc:570 ,Matrix determinant ,lcsh:Science ,030304 developmental biology ,0105 earth and related environmental sciences ,ComputingMethodologies_COMPUTERGRAPHICS ,HLIBpro ,0303 health sciences ,Hierarchical matrices ,Random field ,Smoothness (probability theory) ,HLIBCov ,Estimation theory ,Hierarchical matrix ,Covariance ,Parallel ,Medical Laboratory Technology ,parallel hierarchical matrices ,Cholesky ,symbols ,lcsh:Q ,MLE ,Random fields ,matrix determinant ,parameter estimation ,Algorithm ,Mathematics ,Cholesky decomposition ,Matérn covariance - Abstract
We provide more technical details about the HLIBCov package, which uses parallel hierarchical (H-)matrices to: • Approximate large, dense, inhomogeneous covariance matrices with log-linear computational cost and storage requirements. • Compute matrix-vector products, Cholesky factorizations and inverses with log-linear complexity. • Identify unknown parameters of the covariance function (variance, smoothness, and covariance length). These unknown parameters are estimated by maximizing the joint Gaussian log-likelihood function. To demonstrate the numerical performance, we identify three unknown parameters in an example with 2,000,000 locations on a desktop PC.
- Published
- 2019
- Full Text
- View/download PDF
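The log-likelihood that HLIBCov maximizes can be evaluated for small problems with an ordinary dense Cholesky factorization; the H-matrix machinery replaces these O(n^3) dense operations with log-linear approximations. A dense-matrix sketch using an exponential covariance (the Matérn family at smoothness 1/2); the grid, data vector, and `ell` value are illustrative assumptions:

```python
import numpy as np

def gaussian_loglik(C, z):
    """Joint Gaussian log-likelihood of zero-mean data z under covariance C,
    via a Cholesky factorization C = L L^T:
    log p(z) = -0.5 * (z^T C^{-1} z + log det C + n log 2*pi)."""
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L, z)              # z^T C^{-1} z = ||L^{-1} z||^2
    logdet = 2.0 * np.sum(np.log(np.diag(L)))  # log det C from the Cholesky factor
    n = len(z)
    return -0.5 * (alpha @ alpha + logdet + n * np.log(2 * np.pi))

# Exponential covariance on a small 1-D grid; the covariance length ell is
# the kind of parameter HLIBCov estimates by maximizing this likelihood.
x = np.linspace(0, 1, 50)
ell = 0.3
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)
z = np.full(50, 0.1)
ll = gaussian_loglik(C, z)
```

Maximum-likelihood estimation then amounts to searching over (variance, smoothness, ell) for the parameter set that maximizes `ll`.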