125 results
Search Results
2. Maximum Matching Sans Maximal Matching: A New Approach for Finding Maximum Matchings in the Data Stream Model.
- Author
-
Feldman, Moran and Szarf, Ariel
- Subjects
- *
TRIANGLES , *DATA modeling , *COMBINATORIAL optimization , *GREEDY algorithms , *COMPUTER science , *ALGORITHMS - Abstract
The problem of finding a maximum size matching in a graph (known as the maximum matching problem) is one of the most classical problems in computer science. Despite a significant body of work dedicated to the study of this problem in the data stream model, the state-of-the-art single-pass semi-streaming algorithm for it is still a simple greedy algorithm that computes a maximal matching, and this way obtains a 1/2-approximation. Some previous works described two/three-pass algorithms that improve over this approximation ratio by using their second and third passes to improve the above-mentioned maximal matching. One contribution of this paper continues this line of work by presenting new three-pass semi-streaming algorithms that work along these lines and obtain improved approximation ratios of 0.6111 and 0.5694 for triangle-free and general graphs, respectively. Unfortunately, a recent work by Konrad and Naidu (Approximation, randomization, and combinatorial optimization. Algorithms and techniques, APPROX/RANDOM 2021, August 16–18, 2021. LIPIcs, vol 207, pp 19:1–19:18, 2021. https://doi.org/10.4230/LIPIcs.APPROX/RANDOM.2021.19) shows that the strategy of constructing a maximal matching in the first pass and then improving it in further passes has limitations. Additionally, this technique is unlikely to get us closer to single-pass semi-streaming algorithms obtaining a better-than-1/2 approximation. Therefore, it is interesting to come up with algorithms that do something else with their first pass (we term such algorithms non-maximal-matching-first algorithms). No such algorithms were previously known, and the main contribution of this paper is describing such algorithms that obtain approximation ratios of 0.5384 and 0.5555 in two and three passes, respectively, for general graphs. The main significance of our results is not in the numerical improvements, but in demonstrating the potential of non-maximal-matching-first algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
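For context on the baseline discussed in the abstract above, the following is a minimal sketch of the single-pass semi-streaming greedy that keeps a maximal matching and therefore guarantees a 1/2-approximation. It is not one of the paper's new multi-pass or non-maximal-matching-first algorithms; names are illustrative.

```python
def greedy_streaming_matching(edge_stream):
    """Single-pass greedy: keep an edge iff neither endpoint is already matched.
    The result is a maximal matching, hence at least half the size of a
    maximum matching."""
    matched = set()   # vertices already covered
    matching = []     # edges kept
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.add(u)
            matched.add(v)
    return matching

# toy usage: a path a-b-c-d arriving as a stream
print(greedy_streaming_matching([("a", "b"), ("b", "c"), ("c", "d")]))
# -> [('a', 'b'), ('c', 'd')]
```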
3. The limits of local search for weighted k-set packing.
- Author
-
Neuwohner, Meike
- Subjects
- *
COMBINATORIAL optimization , *COMPUTER science , *ALGORITHMS - Abstract
We consider the weighted k-set packing problem, where, given a collection S of sets, each of cardinality at most k, and a positive weight function w : S → Q>0, the task is to find a sub-collection of S consisting of pairwise disjoint sets of maximum total weight. As this problem does not permit a polynomial-time o(k/log k)-approximation unless P = NP (Hazan et al. in Comput Complex 15:20–39, 2006. https://doi.org/10.1007/s00037-006-0205-6), most previous approaches rely on local search. For twenty years, Berman's algorithm SquareImp (Berman, in: Scandinavian workshop on algorithm theory, Springer, 2000. https://doi.org/10.1007/3-540-44985-X%5f19), which yields a polynomial-time (k+1)/2 + ϵ-approximation for any fixed ϵ > 0, has remained unchallenged. Only recently, it could be improved to (k+1)/2 − 1/63,700,993 by Neuwohner (38th International symposium on theoretical aspects of computer science (STACS 2021), Leibniz international proceedings in informatics (LIPIcs), 2021. https://doi.org/10.4230/LIPIcs.STACS.2021.53). In her paper, she showed that instances for which the analysis of SquareImp is almost tight are "close to unweighted" in a certain sense. But for the unit weight variant, the best known approximation guarantee is (k+1)/3 + ϵ (Fürer and Yu in International symposium on combinatorial optimization, Springer, 2014. https://doi.org/10.1007/978-3-319-09174-7%5f35). Using this observation as a starting point, we conduct a more in-depth analysis of close-to-tight instances of SquareImp. This finally allows us to generalize techniques used in the unweighted case to the weighted setting. In doing so, we obtain approximation guarantees of (k + ϵ_k)/2, where lim_{k→∞} ϵ_k = 0. On the other hand, we prove that this is asymptotically best possible in that local improvements of logarithmically bounded size cannot produce an approximation ratio below k/2. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
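As an illustration of the local-search theme discussed above, here is a deliberately naive weighted set-packing improvement loop. It is not Berman's SquareImp and not the paper's analysis; it only shows the basic move of swapping in a set whose weight exceeds the total weight of the chosen sets it conflicts with.

```python
def local_search_set_packing(sets, weight):
    """Naive local search for weighted set packing (illustrative only):
    while possible, add a set whose weight exceeds the total weight of the
    currently chosen sets it intersects, removing those conflicting sets.
    Each step strictly increases total weight, so the loop terminates."""
    solution = []
    improved = True
    while improved:
        improved = False
        for s in sets:
            if s in solution:
                continue
            conflicts = [t for t in solution if set(s) & set(t)]
            if weight[s] > sum(weight[t] for t in conflicts):
                for t in conflicts:
                    solution.remove(t)
                solution.append(s)
                improved = True
    return solution

# toy usage with 2-element sets (k = 2)
sets = [("a", "b"), ("b", "c"), ("c", "d")]
weight = {("a", "b"): 2.0, ("b", "c"): 5.0, ("c", "d"): 2.0}
print(local_search_set_packing(sets, weight))  # -> [('b', 'c')]
```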
4. HYPERGRAPH HORN FUNCTIONS.
- Author
-
BÉRCZI, KRISTÓF, BOROS, ENDRE, and MAKINO, KAZUHISA
- Subjects
- *
ARTIFICIAL intelligence , *COMPUTER science , *POLYNOMIAL time algorithms , *DATABASES , *BOOLEAN functions , *SEMIDEFINITE programming - Abstract
Horn functions form a subclass of Boolean functions possessing interesting structural and computational properties. These functions play a fundamental role in algebra, artificial intelligence, combinatorics, computer science, database theory, and logic. In the present paper, we introduce the subclass of hypergraph Horn functions that generalizes matroids and equivalence relations. We provide multiple characterizations of hypergraph Horn functions in terms of implicate-duality and the closure operator, which are, respectively, regarded as generalizations of matroid duality and the Mac Lane-Steinitz exchange property of matroid closure. We also study algorithmic issues on hypergraph Horn functions and show that the recognition problem (i.e., deciding if a given definite Horn CNF represents a hypergraph Horn function) and key realization (i.e., deciding if a given hypergraph is realized as a key set by a hypergraph Horn function) can be done in polynomial time, while implicate sets can be generated with polynomial delay. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. MapReduce-based distributed tensor clustering algorithm.
- Author
-
Zhang, Hongjun, Li, Peng, Meng, Fanshuo, Fan, Weibei, and Xue, Zhuangzhuang
- Subjects
- *
K-means clustering , *CLUSTER analysis (Statistics) , *COMPUTER science , *ALGORITHMS , *PARTITION functions , *PARALLEL processing , *PARALLEL programming - Abstract
Cluster analysis is one of the most fundamental methods in data mining, and it has been widely used in economics, social sciences and computer science. However, with the rapid development of Internet technology, the volume of data required for various web applications has grown rapidly, making the traditional clustering analysis methods face technical challenges. How to obtain useful information in a large amount of data quickly and efficiently is an urgent problem in many industrial fields. With the continuous development of cloud computing technology, large amounts of data can be processed quickly and efficiently. Hadoop is an open source distributed cloud computing platform with HDFS (Hadoop Distributed File System) and MapReduce as its core. HDFS provides massive data storage, while MapReduce uses the MapReduce programming model to achieve parallel processing. Compared with the traditional parallel programming model, it contains basic functions such as data partitioning, task scheduling, and parallel processing, making it possible for users to develop distributed applications on their own without understanding the underlying details of distributed systems, thus facilitating the design of parallel programs. The K-means algorithm is a typical clustering analysis method that is widely used in industry, but its number of iterations increases significantly as the data volume grows, reducing computational efficiency. In order to better apply cluster analysis to large-scale data, this paper first implements a MapReduce-based parallelization of the algorithm on the Hadoop platform and then improves the K-means algorithm to address the blindness of its random initial cluster selection and its tendency to fall into local optima. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
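A minimal sketch of one MapReduce-style K-means iteration in plain Python, as a stand-in for the Hadoop job the abstract describes; the paper's improved initial-center selection is not reproduced here, and all names are illustrative.

```python
from collections import defaultdict

def mapper(point, centers):
    """Map step: emit (index of nearest center, (point, 1))."""
    best = min(range(len(centers)),
               key=lambda j: sum((p - c) ** 2 for p, c in zip(point, centers[j])))
    return best, (point, 1)

def reducer(key, values):
    """Reduce step: average all points assigned to one center."""
    total = [0.0] * len(values[0][0])
    count = 0
    for point, c in values:
        count += c
        total = [t + p for t, p in zip(total, point)]
    return key, [t / count for t in total]

def kmeans_iteration(points, centers):
    groups = defaultdict(list)
    for point in points:                      # "map" phase
        key, value = mapper(point, centers)
        groups[key].append(value)
    new_centers = list(centers)
    for key, values in groups.items():        # "reduce" phase
        _, mean = reducer(key, values)
        new_centers[key] = mean
    return new_centers

points = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
centers = [(0.0, 0.0), (10.0, 10.0)]
print(kmeans_iteration(points, centers))   # -> [[0.0, 0.5], [10.0, 10.5]]
```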
6. Toward the consolidation of a multimetric-based journal ranking and categorization system for computer science subject areas.
- Author
-
Hameed, Abdul, Omar, Muhammad, Bilal, Muhammad, and Park, Han Woo
- Subjects
- *
COMPUTER science , *BIBLIOTHERAPY , *ARTIFICIAL intelligence , *PATTERN recognition systems , *CLUSTER analysis (Statistics) , *COMPUTER systems , *SYSTEMS theory , *PRINCIPAL components analysis - Abstract
The evaluation of scientific journals poses challenges owing to the existence of various impact measures. This is because journal ranking is a multidimensional construct that may not be assessed effectively using a single metric such as an impact factor. A few studies have proposed an ensemble of metrics to prevent the bias induced by an individual metric. In this study, a multi-metric journal ranking method based on the standardized average index (SA index) was adopted to develop an extended standardized average index (ESA index). The ESA index utilizes six metrics: the CiteScore, Source Normalized Impact per Paper (SNIP), SCImago Journal Rank (SJR), Hirsch index (H-index), Eigenfactor Score, and Journal Impact Factor from three well-known databases (Scopus, SCImago Journal & Country Rank, and Web of Science). Experiments were conducted in two computer science subject areas: (1) artificial intelligence and (2) computer vision and pattern recognition. Comparing the results of the multi-metric-based journal ranking system with the SA index, it was demonstrated that the multi-metric ESA index exhibited high correlation with all other indicators and significantly outperformed the SA index. To further evaluate the performance of the model and determine the aggregate impact of bibliometric indices with the ESA index, we employed unsupervised machine learning techniques such as clustering coupled with principal component analysis (PCA) and t-distributed stochastic neighbor embedding (t-SNE). These techniques were utilized to measure the clustering impact of various bibliometric indicators on both the complete set of bibliometric features and the reduced set of features. Furthermore, the results of the ESA index were compared with those of other ranking systems, including the internationally recognized Scopus, SJR, and HEC Journal Recognition System (HJRS) used in Pakistan. These comparisons demonstrated that the multi-metric-based ESA index can serve as a valuable reference for publishers, journal editors, researchers, policymakers, librarians, and practitioners in journal selection, decision making, and professional assessment. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
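The abstract does not give the exact ESA formula, so the following sketch only illustrates the general multi-metric idea under an assumed scheme: standardize each indicator and average the standardized scores per journal. The journal values and metric columns are hypothetical.

```python
import numpy as np

# hypothetical values for three journals (rows) across six indicators
# (CiteScore, SNIP, SJR, H-index, Eigenfactor, JIF) -- illustrative only
metrics = np.array([
    [12.1, 2.8, 3.1, 210.0, 0.05, 8.2],
    [ 4.3, 1.1, 0.9,  85.0, 0.01, 2.4],
    [ 7.6, 1.9, 1.7, 140.0, 0.02, 4.9],
])

# z-score standardization per metric, then an unweighted average per journal
z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)
multi_metric_score = z.mean(axis=1)

ranking = np.argsort(-multi_metric_score)   # best journal first
print(multi_metric_score, ranking)
```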
7. Algorithms for the Uniqueness of the Longest Common Subsequence.
- Author
-
Wang, Yue
- Subjects
- *
ALGORITHMS , *COMPUTER science - Abstract
Given several number sequences, determining the longest common subsequence is a classical problem in computer science. This problem has applications in bioinformatics, especially determining transposable genes. Nevertheless, related works only consider how to find one longest common subsequence. In this paper, we consider how to determine the uniqueness of the longest common subsequence. If there are multiple longest common subsequences, we also determine which number appears in all/some/none of the longest common subsequences. We focus on four scenarios: (1) linear sequences without duplicated numbers; (2) circular sequences without duplicated numbers; (3) linear sequences with duplicated numbers; (4) circular sequences with duplicated numbers. We develop corresponding algorithms and apply them to gene sequencing data. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
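For reference, the classical dynamic program underlying any LCS computation; the toy input also shows a case where the LCS is not unique, which is exactly the situation the paper's algorithms are designed to detect.

```python
def lcs_length(a, b):
    """Standard O(len(a) * len(b)) dynamic program for the LCS length.
    dp[i][j] = LCS length of a[:i] and b[:j]."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

print(lcs_length([1, 3, 2, 5], [1, 2, 3, 5]))
# -> 3; both [1, 3, 5] and [1, 2, 5] attain it, so the LCS is not unique
```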
8. Resolution of the Burrows-Wheeler Transform Conjecture.
- Author
-
Kempa, Dominik and Kociumaka, Tomasz
- Subjects
- *
COMPUTER programming , *COMPUTERS in lexicography , *ALGORITHMS , *DATA structures , *COMPUTER science - Abstract
The Burrows-Wheeler Transform (BWT) is an invertible text transformation that permutes symbols of a text according to the lexicographical order of its suffixes. BWT is the main component of popular lossless compression programs (such as bzip2) as well as recent powerful compressed indexes (such as the r-index [7]), central in modern bioinformatics. The compressibility of BWT is quantified by the number r of equal-letter runs in the output. Despite the practical significance of BWT, no nontrivial upper bound on r is known. By contrast, the sizes of nearly all other known compression methods have been shown to be either always within a polylog n factor (where n is the length of the text) from z, the size of Lempel–Ziv (LZ77) parsing of the text, or much larger in the worst case (by an n^ε factor for ε > 0). In this paper, we show that r = O(z log² n) holds for every text. This result has numerous implications for text indexing and data compression; in particular: (1) it proves that many results related to BWT automatically apply to methods based on LZ77, for example, it is possible to obtain functionality of the suffix tree in O(z polylog n) space; (2) it shows that many text processing tasks can be solved in the optimal time assuming the text is compressible using LZ77 by a sufficiently large polylog n factor; and (3) it implies the first nontrivial relation between the number of runs in the BWT of the text and of its reverse. In addition, we provide an O(z polylog n)-time algorithm converting the LZ77 parsing into the run-length compressed BWT. To achieve this, we develop several new data structures and techniques of independent interest. In particular, we define compressed string synchronizing sets (generalizing the recently introduced powerful technique of string synchronizing sets [11]) and show how to efficiently construct them. Next, we propose a new variant of wavelet trees for sequences of long strings, establish a nontrivial bound on their size, and describe efficient construction algorithms. Finally, we develop new indexes that can be constructed directly from the LZ77 parsing and efficiently support pattern matching queries on text substrings. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
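A minimal sketch of the transform itself and of the run count r discussed in the abstract, using the naive sorted-rotations construction with a terminator symbol (nothing like the paper's O(z polylog n) machinery).

```python
def bwt(text, terminator="$"):
    """Naive Burrows-Wheeler Transform: sort all rotations of text+terminator
    and take the last column."""
    s = text + terminator
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def run_count(s):
    """Number r of equal-letter runs, the compressibility measure in the abstract."""
    return sum(1 for i in range(len(s)) if i == 0 or s[i] != s[i - 1])

transformed = bwt("banana")
print(transformed, run_count(transformed))   # -> annb$aa 5
```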
9. Clinical Pearl: The Clinical Relevance of Neonatal Informatics.
- Author
-
Falciglia, Gustave H., Hageman, Joseph R., Hussain, Walid, Alkureishi, Lolita Alcocer, Shah, Kshama, and Goldstein, Mitchell
- Subjects
- *
MEDICAL logic , *CRITICALLY ill , *PATIENTS , *ARTIFICIAL intelligence , *NEONATAL intensive care units , *ACUTE kidney failure in children , *COMPUTER science , *NEONATAL intensive care , *HOSPITAL nurseries , *INFORMATION science , *ELECTRONIC health records , *WATER-electrolyte balance (Physiology) , *QUALITY assurance , *ALGORITHMS , *CHILDREN - Abstract
The article focuses on the importance of clinical informatics in neonatal care, highlighting its potential to provide critical resources for clinicians. Topics include the specialized data needed for neonatal care, the challenges in transitioning from paper to electronic health records, and the impact of informatics on real-time patient management and research.
- Published
- 2024
10. Design of New Word Retrieval Algorithm for Chinese-English Bilingual Parallel Corpus.
- Author
-
Zhang, Liting
- Subjects
- *
MACHINE translating , *NATURAL language processing , *ALGORITHMS , *NEW words , *ARTIFICIAL intelligence , *COMPUTER science - Abstract
Natural language processing is an important direction in the field of computer science and artificial intelligence. It can realize various theories and methods of effective communication between humans and computers using natural language. Machine learning is a branch of natural language processing research, which is based on a large-scale English-Chinese database. Because the aligned corpus of English-Chinese bilingual sentences containing unknown words is relatively poor, machine translation is unprofessional and unbalanced; this is the problem studied in this paper. The purpose of this paper is to design and implement a length-based system for sentence alignment between English and Chinese bilingual texts. The research content of this paper is mainly divided into the following parts. First, the evaluation function of bilingual sentence alignment is designed, and on this basis, the length-based bilingual sentence alignment algorithm and the optimal sentence-pair sequence search algorithm are designed. In this paper, China National Knowledge Infrastructure (CNKI) is selected as an English-Chinese bilingual candidate website and English-Chinese bilingual web pages are downloaded. After analyzing the downloaded pages, nontext content such as page tags is removed, and bilingual text information is stored so as to establish an English-Chinese bilingual corpus based on segment alignment and retain English-Chinese bilingual keywords in the web pages. Second, a dictionary is extracted from the software StarDict; its original format is analyzed and converted into a custom dictionary format that is more convenient for the bilingual sentence alignment system, which helps expand the number of dictionary entries and increase the professionalism of the vocabulary. Finally, we extract the stems of English words from the established corpus to simplify the complexity of English word processing, reduce the noise caused by the conversion of word parts of speech, and improve the operation efficiency. A bilingual sentence alignment system based on length is implemented. Finally, the system parameters are adjusted for comparative experiments to test the system performance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
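A heavily simplified sketch of a length-based alignment cost in the spirit described above, assuming only 1-1 beads plus per-sentence skips; real systems (e.g., Gale-Church) also model 1-2/2-1 beads and use a statistical length cost. The penalty value and sentence lengths are arbitrary illustrations.

```python
def align_by_length(src_lens, tgt_lens, skip_penalty=8.0):
    """Simplified length-based sentence alignment.
    cost[i][j] = best cost of aligning the first i source and first j target
    sentences; a 1-1 bead costs the absolute length difference, and skipping
    a sentence on either side costs skip_penalty."""
    INF = float("inf")
    n, m = len(src_lens), len(tgt_lens)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i > 0 and j > 0:
                cost[i][j] = min(cost[i][j],
                                 cost[i - 1][j - 1] + abs(src_lens[i - 1] - tgt_lens[j - 1]))
            if i > 0:
                cost[i][j] = min(cost[i][j], cost[i - 1][j] + skip_penalty)
            if j > 0:
                cost[i][j] = min(cost[i][j], cost[i][j - 1] + skip_penalty)
    return cost[n][m]

# sentence lengths (e.g., in characters) for a toy Chinese/English segment pair
print(align_by_length([12, 30, 7], [14, 29, 9]))   # -> 5.0
```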
11. Comparison of Algorithms for the AI-Based Fault Diagnostic of Cable Joints in MV Networks.
- Author
-
Negri, Virginia, Mingotti, Alessandro, Tinarelli, Roberto, and Peretto, Lorenzo
- Subjects
- *
ARTIFICIAL intelligence , *MACHINE learning , *ALGORITHMS , *COMPUTER science , *JOINTS (Anatomy) , *CABLE structures - Abstract
Electrical utilities and system operators (SOs) are constantly looking for solutions to problems in the management and control of the power network. For this purpose, SOs are exploring new research fields, which might bring contributions to the power system environment. A clear example is the field of computer science, within which artificial intelligence (AI) has been developed and is being applied to many fields. In power systems, AI could support the fault prediction of cable joints. Despite the availability of many legacy methods described in the literature, fault prediction is still critical, and it needs new solutions. For this purpose, in this paper, the authors made a further step in the evaluation of machine learning methods (ML) for cable joint health assessment. Six ML algorithms have been compared and assessed on a consolidated test scenario. It simulates a distributed measurement system which collects measurements from medium-voltage (MV) cable joints. Typical metrics have been applied to compare the performance of the algorithms. The analysis is then completed considering the actual in-field conditions and the SOs' requirements. The results demonstrate: (i) the pros and cons of each algorithm; (ii) the best-performing algorithm; (iii) the possible benefits from the implementation of ML algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
12. Speed Proportional Integrative Derivative Controller: Optimization Functions in Metaheuristic Algorithms.
- Author
-
López, Luis Fernando de Mingo, García, Francisco Serradilla, Naranjo Hernández, José Eugenio, and Blas, Nuria Gómez
- Subjects
- *
METAHEURISTIC algorithms , *ALGORITHMS , *MATHEMATICAL functions , *COMPUTATIONAL intelligence , *MATHEMATICAL optimization , *COMPUTER science , *PID controllers , *SWARM intelligence - Abstract
Recent advancements in computer science include some optimization models that have been developed and used in real applications. Some metaheuristic search/optimization algorithms have been tested to obtain optimal solutions to speed controller applications in self-driving cars. Some metaheuristic algorithms are based on social behaviour, resulting in several search models, functions, and parameters, and thus algorithm-specific strengths and weaknesses. The present paper proposes a fitness function on the basis of the mathematical description of proportional integrative derivative controllers, showing that the mean square error is not always the best measure when looking for a solution to the problem. The fitness developed in this paper contains features and equations from the mathematical background of proportional integrative derivative controllers to calculate the best performance of the system. Such results are applied to quantitatively evaluate the performance of twenty-one optimization algorithms. Furthermore, improved versions of the fitness function are considered, in order to investigate which aspects are enhanced by applying the optimization algorithms. Results show that the right fitness function is key to good performance, regardless of the chosen algorithm. The aim of this paper is to present a novel objective function to carry out optimizations of the gains of a PID controller, using several computational intelligence techniques to perform the optimizations. The result of these optimizations will demonstrate the improved efficiency of the selected control schema. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
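A hedged sketch of the kind of objective such optimizers evaluate: a discrete PID loop on a hypothetical first-order plant, scored both by plain mean squared error and by a time-weighted error criterion. The paper's proposed fitness function is not reproduced here; this only illustrates why the choice of fitness matters.

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, steps=200, dt=0.05):
    """Discrete PID controlling a hypothetical first-order plant
    dy/dt = -y + u; returns the error trajectory."""
    y, integral, prev_err = 0.0, 0.0, setpoint
    errors = []
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        derivative = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * derivative
        y += (-y + u) * dt          # Euler step of the plant
        prev_err = err
        errors.append(err)
    return errors

def fitness_mse(errors):
    return sum(e * e for e in errors) / len(errors)

def fitness_itae(errors, dt=0.05):
    # time-weighted absolute error penalizes slow settling more than MSE does
    return sum((i * dt) * abs(e) for i, e in enumerate(errors)) * dt

errs = simulate_pid(kp=2.0, ki=1.0, kd=0.1)
print(fitness_mse(errs), fitness_itae(errs))
```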
13. Dual algorithm for truncated fractional variation based image denoising.
- Author
-
Liang, Haixia and Zhang, Juli
- Subjects
- *
ALGORITHMS , *IMAGE denoising , *IMAGE reconstruction , *IMAGE processing , *COMPUTER science , *PAPER arts - Abstract
The fractional-order derivative is attracting more and more attention from researchers in image processing because of its better property in restoring more texture than the total variation. To improve the performance of the fractional-order variation model in image restoration, a truncated fractional-order variation model was proposed in Chan and Liang [Truncated fractional-order variation model for image restoration, J. Oper. Res. Soc. China]. In this paper, we propose a dual approach to solve this truncated fractional-order variation model on noise removal. The proposed algorithm is based on the dual approach proposed by Chambolle [An algorithm for total variation minimisation and applications, J. Math Imaging Vis. 20 (2004), pp. 89–97]. Conversely, Chambolle's dual approach can be treated as a special case of the proposed algorithm with fractional order α = 1. The work of this paper modifies the result in Zhang et al. [Adaptive fractional-order multi-scale method for image denoising, J. Math. Imaging Vis. 43(1) (2012), pp. 39–49. Springer Netherlands 0924–9907, Computer Science, pp. 1–11, 2011], where the convergence is not analysed. Based on the truncation, the convergence of the proposed dual method can be analysed and the convergence criteria can be provided. In addition, the accuracy of the reconstruction is improved after the truncation is taken. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
14. SOLVING CSPs USING WEAK LOCAL CONSISTENCY.
- Author
-
KOZIK, MARCIN
- Subjects
- *
COMPUTER science , *POLYNOMIAL approximation , *CONSTRAINT satisfaction , *ALGEBRA , *LOGIC , *ALGORITHMS , *APPROXIMATION algorithms - Abstract
The characterization of all the constraint satisfaction problems solvable by local consistency checking (also known as CSPs of bounded width) was proposed by Feder and Vardi [SIAM J. Comput., 28 (1998), pp. 57–104]. It was confirmed by two independent proofs by Bulatov [Bounded Relational Width, manuscript, 2009] and Barto and Kozik [L. Barto and M. Kozik, 50th Annual IEEE Symposium on Foundations of Computer Science, 2009, pp. 595–603], [L. Barto and M. Kozik, J. ACM, 61 (2014), 3]. Later Barto [J. Logic Comput., 26 (2014), pp. 923–943] proved a collapse of the hierarchy of local consistency notions by showing that (2, 3) minimality solves all the CSPs of bounded width. In this paper we present a new consistency notion, jpq consistency, which also solves all the CSPs of bounded width. Our notion is strictly weaker than (2, 3) consistency, (2, 3) minimality, path consistency, and singleton arc consistency (SAC). This last fact allows us to answer the question of Chen, Dalmau, and Grußien [J. Logic Comput., 23 (2013), pp. 87–108] by confirming that SAC solves all the CSPs of bounded width. Moreover, as known algorithms work faster for SAC, the result implies that CSPs of bounded width can be, in practice, solved more efficiently. The definition of jpq consistency is closely related to a consistency condition obtained from the rounding of an SDP relaxation of a CSP instance. In fact, the main result of this paper is used by Dalmau et al. [Proceedings of the 28th Annual ACM-SIAM Symposium on Discrete Algorithms, SIAM, Philadelphia, ACM, New York, 2017, pp. 340–357] to show that CSPs with near unanimity polymorphisms admit robust approximation algorithms with polynomial loss. Finally, an algebraic characterization of some term conditions satisfied in algebras associated with templates of bounded width, first proved by Brady, is a direct consequence of our result. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
15. Optimal analysis for bandit learning in matching markets with serial dictatorship.
- Author
-
Wang, Zilong and Li, Shuai
- Subjects
- *
TIME perspective , *COMPUTER science , *ROBBERS , *REGRET , *ALGORITHMS - Abstract
The problem of two-sided matching markets is well-studied in computer science and economics, owing to its diverse applications across numerous domains. Since market participants are usually uncertain about their preferences in various online matching platforms, an emerging line of research is dedicated to the online setting where one-side participants (players) learn their unknown preferences through multiple rounds of interactions with the other side (arms). Sankararaman et al. [23] provide an Ω(N log(T)/Δ² + K log(T)/Δ) regret lower bound for this problem under the serial dictatorship assumption, where N is the number of players, K (≥ N) is the number of arms, Δ is the minimum reward gap across players and arms, and T is the time horizon. Serial dictatorship assumes arms have the same preferences, which is common in reality when one-side participants have a unified evaluation standard. Recently, the work of Kong and Li [10] proposes the ET-GS algorithm and achieves an O(K log(T)/Δ²) regret upper bound, which is the best upper bound attained so far. Nonetheless, a gap between the lower and upper bounds, ranging from N to K, persists. It remains unclear whether the lower bound or the upper bound needs to be improved. In this paper, we propose a multi-level successive selection algorithm that obtains an O(N log(T)/Δ² + K log(T)/Δ) regret bound when the market satisfies serial dictatorship. To the best of our knowledge, we are the first to propose an algorithm that matches the lower bound in the problem of matching markets with bandits. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Cube query interestingness: Novelty, relevance, peculiarity and surprise.
- Author
-
Gkitsakis, Dimos, Kaloudis, Spyridon, Mouselli, Eirini, Peralta, Veronika, Marcel, Patrick, and Vassiliadis, Panos
- Subjects
- *
MULTIDIMENSIONAL databases , *HUMAN behavior , *COMPUTER science , *HUMAN experimentation , *ALGORITHMS - Abstract
In this paper, we discuss methods to assess the interestingness of a query in an environment of data cubes. We assume a hierarchical multidimensional database, storing data cubes and level hierarchies. We start with a comprehensive review of related work in the fields of human behavior studies and computer science. We define the interestingness of a query as a vector of scores along different aspects, like novelty, relevance, surprise and peculiarity and complement this definition with a taxonomy of the information that can be used to assess each of these aspects of interestingness. We provide both syntactic (result-independent) and extensional (result-dependent) checks, measures and algorithms for assessing the different aspects of interestingness in a quantitative fashion. We also report our findings from a user study that we conducted, analyzing the significance of each aspect, its evolution over time and the behavior of the study's participants. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Reed–Muller Codes: Theory and Algorithms.
- Author
-
Abbe, Emmanuel, Shpilka, Amir, and Ye, Min
- Subjects
- *
CODING theory , *REED-Muller codes , *ALGORITHMS , *BOOLEAN functions , *COMPUTER science , *ELECTRICAL engineering , *DECODING algorithms - Abstract
Reed-Muller (RM) codes are among the oldest, simplest and perhaps most ubiquitous family of codes. They are used in many areas of coding theory in both electrical engineering and computer science. Yet, many of their important properties are still under investigation. This paper covers some of the recent developments regarding the weight enumerator and the capacity-achieving properties of RM codes, as well as some of the algorithmic developments. In particular, the paper discusses the recent connections established between RM codes, thresholds of Boolean functions, polarization theory, hypercontractivity, and the techniques of approximating low weight codewords using lower degree polynomials (when codewords are viewed as evaluation vectors of degree r polynomials in m variables). It then overviews some of the algorithms for decoding RM codes. It covers both algorithms with provable performance guarantees for every block length, as well as algorithms with state-of-the-art performances in practical regimes, which do not perform as well for large block length. Finally, the paper concludes with a few open problems. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
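A minimal sketch of the "evaluation vectors of degree r polynomials in m variables" view mentioned in the abstract: the generator matrix of RM(r, m) built from monomial evaluations over F_2^m, plus naive encoding. Names and the example parameters are illustrative.

```python
from itertools import combinations, product

def rm_generator_matrix(r, m):
    """Rows = evaluation vectors (over all 2^m points of F_2^m) of the
    monomials prod_{i in S} x_i with |S| <= r."""
    points = list(product([0, 1], repeat=m))
    rows = []
    for deg in range(r + 1):
        for S in combinations(range(m), deg):
            rows.append([int(all(p[i] for i in S)) for p in points])
    return rows

def encode(message_bits, G):
    """Codeword = message * G over F_2."""
    n = len(G[0])
    return [sum(b * row[j] for b, row in zip(message_bits, G)) % 2 for j in range(n)]

G = rm_generator_matrix(r=1, m=3)      # RM(1,3): the [8,4,4] extended Hamming code
print(len(G), len(G[0]))               # -> 4 8
print(encode([1, 0, 1, 1], G))
```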
18. Symmetric Cryptography on RISC-V: Performance Evaluation of Standardized Algorithms.
- Author
-
Nişancı, Görkem, Flikkema, Paul G., and Yalçın, Tolga
- Subjects
- *
CRYPTOGRAPHY , *ALGORITHMS , *QUANTUM computers , *COMPUTER security , *COMPUTER networks , *COMPUTER science - Abstract
The ever-increasing need for securing computing systems using cryptographic algorithms is spurring interest in the efficient implementation of common algorithms. While the algorithms can be implemented in software using base instruction sets, there is considerable potential to reduce memory cost and improve speed using specialized instructions and associated hardware. However, there is a need to assess the benefits and costs of software implementations and new instructions that implement key cryptographic algorithms in fewer cycles. The primary aim of this paper is to improve the understanding of the performance and cost of implementing cryptographic algorithms for the RISC-V instruction set architecture (ISA) in two cases: software implementations of the algorithms using the rv32i instruction set and using cryptographic instructions supported by dedicated hardware in additional functional units. For both cases, we describe a RISC-V processor with cryptography hardware extensions and hand-optimized RISC-V assembly language implementations of eleven cryptographic algorithms. Compared to implementations with only the rv32i instruction set, implementations with the cryptography set extension provide a 1.5× to 8.6× faster execution speed and 1.2× to 5.8× less program memory for five of the eleven algorithms. Based on our performance analyses, a new instruction is proposed to increase the implementation efficiency of the algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
19. Single Image Super-Resolution Quality Assessment: A Real-World Dataset, Subjective Studies, and an Objective Metric.
- Author
-
Jiang, Qiuping, Liu, Zhentao, Gu, Ke, Shao, Feng, Zhang, Xinfeng, Liu, Hantao, and Lin, Weisi
- Subjects
- *
HIGH resolution imaging , *HUMAN experimentation , *IMAGE segmentation , *ALGORITHMS , *COMPUTER science - Abstract
Numerous single image super-resolution (SISR) algorithms have been proposed during the past years to reconstruct a high-resolution (HR) image from its low-resolution (LR) observation. However, how to fairly compare the performance of different SISR algorithms/results remains a challenging problem. So far, the lack of comprehensive human subjective study on large-scale real-world SISR datasets and accurate objective SISR quality assessment metrics makes it unreliable to truly understand the performance of different SISR algorithms. We in this paper make efforts to tackle these two issues. Firstly, we construct a real-world SISR quality dataset (i.e., RealSRQ) and conduct human subjective studies to compare the performance of the representative SISR algorithms. Secondly, we propose a new objective metric, i.e., KLTSRQA, based on the Karhunen-Loève Transform (KLT) to evaluate the quality of SISR images in a no-reference (NR) manner. Experiments on our constructed RealSRQ and the latest synthetic SISR quality dataset (i.e., QADS) have demonstrated the superiority of our proposed KLTSRQA metric, achieving higher consistency with human subjective scores than relevant existing NR image quality assessment (NR-IQA) metrics. The dataset and the code will be made available at https://github.com/Zhentao-Liu/RealSRQ-KLTSRQA. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
20. THE COMMUNICATION COMPLEXITY OF SET INTERSECTION AND MULTIPLE EQUALITY TESTING.
- Author
-
HUANG, DAWEI, PETTIE, SETH, ZHANG, YIXIANG, and ZHANG, ZHIJUN
- Subjects
- *
DETERMINISTIC algorithms , *DISTRIBUTED computing , *ERROR probability , *COMPUTER science , *ALGORITHMS - Abstract
In this paper we explore fundamental problems in randomized communication complexity such as computing SetIntersection on sets of size k and EqualityTesting between vectors of length k. Sağlam and Tardos [Proceedings of the 54th Annual IEEE Symposium on Foundations of Computer Science, 2013, pp. 678-687] and Brody et al. [Algorithmica, 76 (2016), pp. 796-845] showed that for these types of problems, one can achieve optimal communication volume of O(k) bits, with a randomized protocol that takes O(log* k) rounds. They also proved that this is one point along the optimal round-communication trade-off curve. Aside from rounds and communication volume, there is a third parameter of interest, namely the error probability p_err, which we write as 2^{-E}. It is straightforward to show that protocols for SetIntersection or EqualityTesting need to send at least Ω(k + E) bits, regardless of the number of rounds. Is it possible to simultaneously achieve optimality in all three parameters, namely O(k + E) communication and O(log* k) rounds? In this paper we prove that there is no universally optimal algorithm, and we complement the existing round-communication trade-offs [M. Sağlam and G. Tardos, Proceedings of the 54th Annual IEEE Symposium on Foundations of Computer Science, 2013, pp. 678-687; J. Brody et al., Algorithmica, 76 (2016), pp. 796-845] with a new trade-off between rounds, communication, and probability of error. In particular, any protocol for solving multiple EqualityTesting in r rounds with failure probability p_err = 2^{-E} has communication volume Ω(E·k^{1/r}). We present several algorithms for multiple EqualityTesting (and its variants) that match or nearly match our lower bound and the lower bound of [M. Sağlam and G. Tardos, Proceedings of the 54th Annual IEEE Symposium on Foundations of Computer Science, 2013, pp. 678-687; J. Brody et al., Algorithmica, 76 (2016), pp. 796-845]. Lower bounds on EqualityTesting extend to SetIntersection for every r, k, and p_err (which is trivial); in the reverse direction, we prove that upper bounds on EqualityTesting for r, k, p_err imply similar upper bounds on SetIntersection with parameters r + 1, k, and p_err. Our original motivation for considering p_err as an independent parameter came from the problem of enumerating triangles in distributed (CONGEST) networks having maximum degree Δ. We prove that this problem can be solved in O(Δ/log n + log log Δ) time with high probability 1 - 1/poly(n). This beats the trivial (deterministic) O(Δ)-time algorithm and is superior to the Õ(n^{1/3}) algorithm of [Y. Chang, S. Pettie, and H. Zhang, Proceedings of the 30th Annual ACM-SIAM Symposium on Discrete Algorithms, 2019, pp. 821-840; Y. Chang and T. Saranurak, Proceedings of the ACM Symposium on Principles of Distributed Computing, 2019, pp. 66-73] when Δ = Õ(n^{1/3}). [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
21. A Hierarchical Universal Algorithm for Geometric Objects' Reflection Symmetry Detection.
- Author
-
Žalik, Borut, Strnad, Damjan, Kohek, Štefan, Kolingerová, Ivana, Nerat, Andrej, Lukač, Niko, and Podgorelec, David
- Subjects
- *
SYMMETRY , *SYMMETRIC functions , *ALGORITHMS , *POINT cloud , *COMPUTATIONAL geometry - Abstract
A new algorithm is presented for detecting the global reflection symmetry of geometric objects. The algorithm works for 2D and 3D objects which may be open or closed and may or may not contain holes. The algorithm accepts a point cloud obtained by sampling the object's surface at the input. The points are inserted into a uniform grid and so-called boundary cells are identified. The centroid of the boundary cells is determined, and a testing symmetry axis/plane is set through it. In this way, the boundary cells are split into two parts and they are faced with the symmetry estimation function. If the function estimates the symmetric case, the boundary cells are further split until a given threshold is reached or a non-symmetric result is obtained. The new testing axis/plane is then derived and tested by rotation around the centroid. This paper introduces three techniques to accelerate the computation. Competitive results were obtained when the algorithm was compared against the state of the art. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
22. Index-Based Solutions for Efficient Density Peak Clustering.
- Author
-
Rasool, Zafaryab, Zhou, Rui, Chen, Lu, Liu, Chengfei, and Xu, Jiajie
- Subjects
- *
DATA distribution , *DENSITY , *SCIENTIFIC community , *COST control , *APPROXIMATION algorithms - Abstract
Density Peak Clustering (DPC), a popular density-based clustering approach, has received considerable attention from the research community primarily due to its simplicity and fewer-parameter requirement. However, the resultant clusters obtained using DPC are influenced by the sensitive parameter $d_c$, which depends on data distribution and requirements of different users. Besides, the original DPC algorithm requires visiting a large number of objects, making it slow. To this end, this paper investigates index-based solutions for DPC. Specifically, we propose two list-based index methods viz. (i) a simple List Index, and (ii) an advanced Cumulative Histogram Index. Efficient query algorithms are proposed for these indices which significantly avoids irrelevant comparisons at the cost of space. For memory-constrained systems, we further introduce an approximate solution to the above indices which allows substantial reduction in the space cost, provided that slight inaccuracies are admissible. Furthermore, owing to considerably lower memory requirements of existing tree-based index structures, we also present effective pruning techniques and efficient query algorithms to support DPC using the popular Quadtree Index and R-tree Index. Finally, we practically evaluate all the above indices and present the findings and results, obtained from a set of extensive experiments on six synthetic and real datasets. The experimental insights obtained can help to guide in selecting a befitting index. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
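For orientation, a brute-force sketch of the two quantities Density Peak Clustering computes for every point: local density rho (neighbors within d_c) and distance delta to the nearest higher-density point; cluster centers are points where both are large. This O(n²) computation is exactly the cost the paper's index structures aim to avoid; the parameter d_c and the sample points are illustrative.

```python
import numpy as np

def dpc_scores(points, d_c):
    """Return (rho, delta) for Density Peak Clustering, brute force."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    rho = (d < d_c).sum(axis=1) - 1          # neighbors within d_c (excluding self)
    delta = np.empty(len(points))
    for i in range(len(points)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = d[i, higher].min() if len(higher) else d[i].max()
    return rho, delta

pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.05, 0.05],
                [5.0, 5.0], [5.1, 5.0], [5.05, 5.05]])
rho, delta = dpc_scores(pts, d_c=0.08)
print(rho)     # density of each point
print(delta)   # indices 3 and 6 stand out with both large rho and large delta
```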
23. Near-Optimal Sparsity-Constrained Group Testing: Improved Bounds and Algorithms.
- Author
-
Gebhard, Oliver, Hahn-Klimroth, Max, Parczyk, Olaf, Penschuck, Manuel, Rolvien, Maurice, Scarlett, Jonathan, and Tan, Nelvin
- Subjects
- *
ALGORITHMS , *DECODING algorithms , *ERROR probability , *RELIABILITY in engineering , *COMPUTER science - Abstract
Recent advances in noiseless non-adaptive group testing have led to a precise asymptotic characterization of the number of tests required for high-probability recovery in the sublinear regime $k = n^{\theta}$ (with $\theta \in (0,1)$), with $n$ individuals among which $k$ are infected. However, the required number of tests may increase substantially under real-world practical constraints, notably including bounds on the maximum number $\Delta$ of tests an individual can be placed in, or the maximum number $\Gamma$ of individuals in a given test. While previous works have given recovery guarantees for these settings, significant gaps remain between the achievability and converse bounds. In this paper, we substantially or completely close several of the most prominent gaps. In the case of $\Delta$-divisible items, we show that the definite defectives (DD) algorithm coupled with a random regular design is asymptotically optimal in dense scaling regimes, and optimal to within a factor of e more generally; we establish this by strengthening both the best known achievability and converse bounds. In the case of $\Gamma$-sized tests, we provide a comprehensive analysis of the regime $\Gamma = \Theta(1)$, and again establish a precise threshold proving the asymptotic optimality of SCOMP (a slight refinement of DD) equipped with a tailored pooling scheme. Finally, for each of these two settings, we provide near-optimal adaptive algorithms based on sequential splitting, and provably demonstrate gaps between the performance of optimal adaptive and non-adaptive algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
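A minimal sketch of the standard definite defectives (DD) decoder named in the abstract, run on a toy non-adaptive design; the paper's random regular designs and tailored pooling schemes are not reproduced.

```python
def dd_decode(tests, outcomes, n):
    """Definite Defectives (DD) decoding for non-adaptive group testing.
    tests: list of sets of item indices pooled together; outcomes: list of bools."""
    # Step 1: anything in a negative test cannot be defective.
    possibly_defective = set(range(n))
    for pool, positive in zip(tests, outcomes):
        if not positive:
            possibly_defective -= pool
    # Step 2: if a positive test contains exactly one possibly-defective item,
    # that item must be defective.
    definite = set()
    for pool, positive in zip(tests, outcomes):
        remaining = pool & possibly_defective
        if positive and len(remaining) == 1:
            definite |= remaining
    return definite

# toy instance: 5 items, item 2 is defective
tests = [{0, 1}, {1, 2, 3}, {2, 4}, {3, 4}]
outcomes = [False, True, True, False]
print(dd_decode(tests, outcomes, n=5))   # -> {2}
```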
24. GFocus: User Focus-Based Graph Query Autocompletion.
- Author
-
Yi, Peipei, Choi, Byron, Zhang, Zhiwei, Bhowmick, Sourav S, and Xu, Jianliang
- Subjects
- *
SUBMODULAR functions , *GENERATING functions , *COMPUTER science , *ALGORITHMS , *SOCIAL interaction , *HUMAN-computer interaction - Abstract
Graph query autocompletion (gQAC) generates a small list of ranked query suggestions during the graph query formulation process in a visual environment. The current state-of-the-art of gQAC provides suggestions that are formed by adding subgraph increments to arbitrary places of an existing (partial) user query. However, according to the research results on human-computer interaction (HCI), humans can only interact with a small number of recent software artifacts in hand. Hence, many of such suggestions could be irrelevant. In this paper, we present the GFocus framework that exploits a novel notion of user focus of graph query formulation (or simply focus). Intuitively, the focus is the subgraph that a user is working on. We formulate locality principles inspired by the HCI research to automatically identify and maintain the focus. We propose novel monotone submodular ranking functions for generating popular and comprehensive query suggestions only at the focus. In particular, the query suggestions of GFocus have high result counts (when they are used as queries) and maximally cover the possible suggestions at the focus. We propose efficient algorithms and an index for ranking the suggestions. Our results show that GFocus saves 12-32 percent more mouse clicks and is 35× more efficient than the state-of-the-art competitor. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
25. A new algorithmic decision for categorical syllogisms via Carroll's diagrams.
- Author
-
Kircali Gursoy, Necla, Senturk, Ibrahim, Oner, Tahsin, and Gursoy, Arif
- Subjects
- *
SYLLOGISM , *ALGORITHMS , *COMPUTER science , *CHARTS, diagrams, etc. , *ARTIFICIAL intelligence , *COMPUTER engineering - Abstract
In this paper, we propose a new effective algorithm for categorical syllogisms using the calculus system Syllogistic Logic with Carroll Diagrams, which provides a formal approach to logical reasoning with diagrams for representing the fundamental Aristotelian categorical syllogisms. We show that this logical reasoning is closed under the syllogistic criterion of inference. The calculus system is therefore implemented as a formalism that combines bilateral and trilateral diagrammatic representations with a simple algorithmic nature. Moreover, no specific knowledge or special ability is needed to understand this decision procedure or to use it in an algorithmic system. Consequently, the empirical contribution of this paper is to design, for the first time in the literature, a polynomial-time algorithm that can serve researchers in the different areas of science that use categorical syllogisms, such as artificial intelligence, engineering, and computer science. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
26. An O(log n) query time algorithm for reducing ϵ-NN to (c,r)-NN.
- Author
-
Ma, Hengzhao and Li, Jianzhong
- Subjects
- *
SPACETIME , *ALGORITHMS , *COMPUTER science - Abstract
The problem of reducing ϵ-NN to (c,r)-NN is very important in the theoretical computer science area and has attracted many research efforts. In this paper a new algorithm for this problem is proposed, which achieves O(log n) query time. Compared with the former algorithms for the same problem, this query time is the lowest, which is the main contribution of this paper. The low query time complexity is achieved by raising the preprocessing time and space complexity. How to reduce the cost added to these two complexities is also discussed in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
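A hedged sketch of the textbook reduction idea behind such results (not the paper's O(log n)-query-time construction): binary search over a geometric ladder of radii, calling a (c,r)-NN decision oracle at each probe. The oracle interface, parameters, and toy data are assumptions for illustration only.

```python
import math

def approx_nn(query, crnn_oracle, r_min, r_max, c=2.0, eps=0.5):
    """Reduce eps-NN to (c, r)-NN decision queries (illustrative sketch).
    crnn_oracle(q, r) must return some point within distance c*r of q whenever
    a point lies within distance r, and may return None otherwise.
    Binary search over the geometric radius ladder r_min * (1+eps)^i."""
    levels = int(math.ceil(math.log(r_max / r_min, 1.0 + eps))) + 1
    lo, hi = 0, levels
    best = crnn_oracle(query, r_max)        # fallback answer at the largest radius
    while lo < hi:
        mid = (lo + hi) // 2
        candidate = crnn_oracle(query, r_min * (1.0 + eps) ** mid)
        if candidate is not None:
            best, hi = candidate, mid       # success: try a smaller radius
        else:
            lo = mid + 1                    # failure: only larger radii can succeed
    return best

# toy 1-D usage with an exact oracle (which trivially satisfies the (c,r) contract)
points = [3.0, 7.5, 12.0]
oracle = lambda q, r: next((p for p in points if abs(p - q) <= r), None)
print(approx_nn(8.0, oracle, r_min=0.1, r_max=16.0))   # -> 7.5
```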
27. Error Detection and Correction Algorithms for the Firing Squad Synchronization Problem.
- Author
-
KYRITSIS, APOSTOLOS, LIOLIS, ORESTIS, and SIRAKOULIS, GEORGIOS CH.
- Subjects
- *
SYNCHRONIZATION , *CELLULAR automata , *ALGORITHMS , *COMPUTER science - Abstract
One of the most famous problems in computer science and cellular automata (CAs) is the Firing Squad Synchronization Problem (FSSP). In the FSSP, a one-dimensional CA with just a single active cell (initially called "General") eventually reaches a state in which all the CA cells (called "soldiers" in analogy to a real-world firing squad) are simultaneously active. Since the introduction of the FSSP as a well known problem in the 1960s, and during the last decades, many FSSP solution algorithms have been proposed in the literature. Nevertheless, the inherent error correction capabilities of these algorithms have not been extensively studied. In this paper, an error detection and correction algorithm that utilizes the patterns appearing in Mazoyer's 1-D 8-state solution is analyzed and presented. Furthermore, the patterns of Umeo's 2-D FSSP algorithm are also thoroughly investigated. In such a case, extra patterns emerge from CA cells grouping during the application of the 1-D FSSP solution to the 2-D CA grid, enhancing the error tolerance of the proposed algorithm, i.e. almost every possible error can be detected. Finally, another, even more efficient version of the error detection and correction algorithm, with a memory effect that makes use of previous CA states, is also introduced. Although the usage of previous CA states results in increased utilization of computational resources, at the same time it enables the proposed algorithm to detect every possible error state in every possible CA cell, advancing significantly the algorithm's performance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
28. An optimal algorithm for 2-bounded delay buffer management with lookahead.
- Author
-
Kobayashi, Koji M.
- Subjects
- *
ALGORITHMS , *ONLINE algorithms , *DETERMINISTIC algorithms , *QUALITY of service , *COMPUTER science - Abstract
• We consider a variant of the online buffer management problem. • This variant is called the 2-bounded delay buffer management problem with lookahead. • We design an optimal online algorithm for this variant. The bounded delay buffer management problem, which was proposed by Kesselman et al. (STOC 2001 and SIAM Journal on Computing 33(3), 2004), is an online problem focusing on buffer management of a switch supporting Quality of Service (QoS). The problem definition is as follows: Packets arrive to a buffer over time and each packet is specified by the release time, deadline and value. An algorithm can transmit at most one packet from the buffer at each integer time and can gain its value as the profit if transmitting the packet by its deadline after its release time. The objective of this problem is to maximize the gained profit. We say that an instance of the problem is s-bounded if for any packet, an algorithm has at most s chances to transmit it. For any s ≥ 2, Hajek (CISS 2001) showed that the competitive ratio of any deterministic algorithm is at least (1 + √5)/2 ≥ 1.618. Recently, Veselý et al. (SODA 2019) designed an online algorithm matching the lower bound. Böhm et al. (ISAAC 2016 and Theoretical Computer Science, 2019) introduced the lookahead ability to an online algorithm. At a time t, the algorithm obtains information about packets arriving at time t + 1, and showed that for s = 2, there is an algorithm which achieves the competitive ratio of (−1 + √13)/2 ≤ 1.303. Also, they showed that the competitive ratio of any deterministic algorithm is at least (1 + √17)/4 ≥ 1.280. In this paper, for the 2-bounded model with lookahead, we design an algorithm with a matching competitive ratio of (1 + √17)/4. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
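For contrast with the ratios discussed above, a sketch of the simplest lookahead-free baseline: at every integer time step, transmit the pending packet of maximum value. This greedy rule is known to be 2-competitive, far weaker than the algorithms in the abstract; names and the toy instance are illustrative.

```python
import heapq

def greedy_schedule(packets, horizon):
    """packets: list of (release_time, deadline, value).
    At each integer time step, transmit the highest-value packet that has been
    released and has not yet expired. Returns the total gained value."""
    packets = sorted(packets)                     # by release time
    pending, profit, i = [], 0.0, 0
    for t in range(horizon):
        while i < len(packets) and packets[i][0] <= t:
            _, d, v = packets[i]
            heapq.heappush(pending, (-v, d))      # max-heap on value
            i += 1
        while pending and pending[0][1] < t:      # drop expired packets at the top
            heapq.heappop(pending)
        if pending:
            neg_v, _ = heapq.heappop(pending)
            profit += -neg_v
    return profit

# three packets; only one can be sent per time step
pkts = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0)]
print(greedy_schedule(pkts, horizon=2))   # -> 9.0 (value 5 at t=0, value 4 at t=1)
```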
29. Connected Subgraph Defense Games.
- Author
-
Akrida, Eleni C., Deligkas, Argyrios, Melissourgos, Themistoklis, and Spirakis, Paul G.
- Subjects
- *
NASH equilibrium , *THERMODYNAMIC control , *ALGORITHMS , *COMPUTER science , *POLYNOMIAL time algorithms , *APPROXIMATION algorithms - Abstract
We study a security game over a network played between a defender and k attackers. Every attacker chooses, probabilistically, a node of the network to damage. The defender chooses, probabilistically as well, a connected induced subgraph of the network of λ nodes to scan and clean. Each attacker wishes to maximize the probability of escaping her cleaning by the defender. On the other hand, the goal of the defender is to maximize the expected number of attackers that she catches. This game is a generalization of the model from the seminal paper of Mavronicolas et al. (in: International symposium on mathematical foundations of computer science, MFCS, pp 717–728, 2006). We are interested in Nash equilibria of this game, as well as in characterizing defense-optimal networks which allow for the best equilibrium defense ratio; this is the ratio of k over the expected number of attackers that the defender catches in equilibrium. We provide a characterization of the Nash equilibria of this game and defense-optimal networks. The equilibrium characterizations allow us to show that even if the attackers are centrally controlled the equilibria of the game remain the same. In addition, we give an algorithm for computing Nash equilibria. Our algorithm requires exponential time in the worst case, but it is polynomial-time for λ constantly close to 1 or n. For the special case of tree-networks, we further refine our characterization which allows us to derive a polynomial-time algorithm for deciding whether a tree is defense-optimal and if this is the case it computes a defense-optimal Nash equilibrium. On the other hand, we prove that it is NP-hard to find a best-defense strategy if the tree is not defense-optimal. We complement this negative result with a polynomial-time constant-approximation algorithm that computes solutions that are close to optimal ones for general graphs. Finally, we provide asymptotically (almost) tight bounds for the Price of Defense for any λ; this is the worst equilibrium defense ratio over all graphs. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
30. Metaheuristics for pharmacometrics.
- Author
-
Kim, Seongho, Hooker, Andrew C., Shi, Yu, Kim, Grace Hyun J., and Wong, Weng Kee
- Subjects
- *
ALGORITHMS , *PARTICLE swarm optimization , *COMPUTER science , *COMPUTER engineers , *METAHEURISTIC algorithms - Abstract
Metaheuristics is a powerful optimization tool that is increasingly used across disciplines to tackle general purpose optimization problems. Nature-inspired metaheuristic algorithms are a subclass of metaheuristic algorithms and have been shown to be particularly flexible and useful in solving complicated optimization problems in computer science and engineering. A common practice with metaheuristics is to hybridize it with another suitably chosen algorithm for enhanced performance. This paper reviews metaheuristic algorithms and demonstrates some of their utility in tackling pharmacometric problems. Specifically, we provide three applications using one of its most celebrated members, particle swarm optimization (PSO), and show that PSO can effectively estimate parameters in complicated nonlinear mixed-effects models and gain insights into statistical identifiability issues in a complex compartment model. In the third application, we demonstrate how to hybridize PSO with sparse grid, which is an often-used technique to evaluate high dimensional integrals, to search for D-efficient designs for estimating parameters in nonlinear mixed-effects models with a count outcome. We also show the proposed hybrid algorithm outperforms its competitors when sparse grid is replaced by its competitor, adaptive Gaussian quadrature, to approximate the integral, or when PSO is replaced by three notable nature-inspired metaheuristic algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
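A minimal particle swarm optimization sketch, the metaheuristic highlighted in the abstract, minimizing a generic objective; the hybridization with sparse grid and the pharmacometric models are not reproduced here, and all parameter values are illustrative defaults.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization: each particle is pulled toward its
    own best position (c1) and the swarm's best position (c2)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# toy usage: minimize the sphere function
print(pso(lambda p: float((p ** 2).sum()), dim=3))
```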
31. Apparel Design and Development Based on 3D Scanning Technology.
- Author
-
Zhang, Guodong
- Subjects
- *
IMAGE registration , *DESIGN thinking , *ALGORITHMS , *COMPUTER science , *MOTION capture (Human mechanics) - Abstract
With the development of computer science, especially the application of 3D scanning technology in garment design, intelligent modeling is realized, which is impossible to achieve with traditional design methods. In this paper, we propose the 3D model construction of human garments based on the structure-from-motion (SfM) method. The eigenmatrix is obtained from the camera parameters, and the transformation matrix is calculated by matching the image feature points with the help of the scale-invariant feature transform (SIFT) algorithm to realize the 3D reconstruction technology of human garments based on multiview image sequences. The effectiveness of this method is verified through experiments, and it has good robustness and accuracy. Through the form of style modeling, the design thinking and method can be extended to form a more reasonable garment structure and guide the innovation of garment production mode. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
32. Towards data mining in IoT cloud computing networks : Collaborative filtering based recommended system.
- Author
-
Kumar, Mukesh, Kumar, Sushil, and Kashyap, Panjak Kumar
- Subjects
- *
DATA mining , *INTERNET of things , *ALGORITHMS , *COMPUTER science , *SAMPLE size (Statistics) , *CLOUD computing - Abstract
The ever-increasing volume of data due to the growing popularity of the Internet of Things (IoT) needs adequate storage to compute complex tasks with accuracy and low latency and, finally, to recommend the user's preferences in the near future. Cloud computing and data mining techniques have the ability to provide an open platform for communication and to generate precise recommendations. In this regard, this paper mainly addresses two aspects: firstly, we build a four-layer IoT cloud computing architecture that provides an open platform for communication with various heterogeneous multi-source things. Secondly, we present a recommended-system model based on a collaborative filtering algorithm to enhance the accuracy rate of the items in the top-priority recommendation list. The proposed model inherently utilizes the user-item scoring matrix, the asymmetric degree of influence between users on similar items, and a time-weight decay function for the user's preferences. Finally, extensive simulations are done to show the accuracy rate, loss rate and recall rate of recommendation for the proposed model. Further, comparative analysis of the results proves that our proposed model outperforms other state-of-the-art models in terms of the accuracy rate of recommendation on an item with respect to the data sample set size. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
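The following is a minimal, hedged sketch of two ingredients entry 32 names, a user-item scoring matrix with cosine item similarity and an exponential time-decay weight on past ratings. The decay constant and matrices are illustrative placeholders; the paper's actual similarity and influence-degree definitions may differ.

```python
import numpy as np

def predict_scores(ratings, timestamps, now, half_life=30.0):
    """Item-based collaborative filtering with exponential time decay.

    ratings: (n_users, n_items) matrix, 0 where unrated.
    timestamps: same shape, time each rating was given (0 where unrated).
    """
    # Older ratings count less: weight = 0.5 ** (age / half_life).
    decay = np.where(ratings > 0, 0.5 ** ((now - timestamps) / half_life), 0.0)
    weighted = ratings * decay
    # Cosine similarity between item columns of the decayed rating matrix.
    norms = np.linalg.norm(weighted, axis=0) + 1e-12
    sim = (weighted.T @ weighted) / np.outer(norms, norms)
    np.fill_diagonal(sim, 0.0)
    # Predicted score: similarity-weighted average of a user's decayed ratings.
    pred = (weighted @ sim) / (np.abs(sim).sum(axis=0) + 1e-12)
    return pred  # rank unrated items per user by this score

# Toy example: 3 users, 4 items.
R = np.array([[5, 0, 3, 0], [4, 2, 0, 1], [0, 5, 4, 0]], float)
T = np.array([[10, 0, 40, 0], [5, 30, 0, 45], [0, 20, 35, 0]], float)
print(predict_scores(R, T, now=50.0))
```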
33. On Algorithmic Statistics for Space-bounded Algorithms.
- Author
-
Milovanov, Alexey
- Subjects
- *
KOLMOGOROV complexity , *STATISTICS , *RANDOM numbers , *COMPUTER science , *ALGORITHMS , *COMPUTATIONAL complexity - Abstract
Algorithmic statistics looks for models of observed data that are good in the following sense: a model is simple (i.e., has small Kolmogorov complexity) and captures all the algorithmically discoverable regularities in the data. However, this idea cannot be used in practice as is because Kolmogorov complexity is not computable. In this paper we develop an algorithmic version of algorithmic statistics that uses space-bounded Kolmogorov complexity. We prove a space-bounded version of a basic result from "classical" algorithmic statistics, the connection between optimality and randomness deficiencies. The main tool is the Nisan–Wigderson pseudo-random generator. An extended abstract of this paper was presented at the 12th International Computer Science Symposium in Russia (Milovanov 10). [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
34. Almost linear time algorithms for minsum k-sink problems on dynamic flow path networks.
- Author
-
Higashikawa, Yuya, Katoh, Naoki, Teruyama, Junichi, and Watase, Koji
- Subjects
- *
POLYNOMIAL time algorithms , *ALGORITHMS , *DATA structures , *COMPUTER science , *UNITS of time - Abstract
• Study the minsum k-sink problems on dynamic flow path networks for the confluent/non-confluent flow model. • Develop algorithms which run in almost linear time regardless of the number of sinks k. • Improve upon the previous results for the same problem with the confluent flow model. • Provide the first polynomial time algorithm for the problem with the non-confluent flow model. • The main theoretical contribution is to construct novel data structures for solving subproblems in polylogarithmic time. We address the facility location problems on dynamic flow path networks. A dynamic flow path network consists of an undirected path with positive edge lengths, positive edge capacities, and positive vertex weights. A path can be considered as a road, an edge length as the distance along the road and a vertex weight as the number of people at the site. An edge capacity limits the number of people that can enter the edge per unit time. In the dynamic flow network, given particular points on edges or vertices, called sinks, all the people evacuate from the vertices to the sinks as quickly as possible. The problem is to find the location of sinks on a dynamic flow path network in such a way that the aggregate evacuation time (i.e., the sum of evacuation times for all the people) to sinks is minimized. We consider two models of the problem: the confluent flow model and the non-confluent flow model. In the former model, the way of evacuation is restricted so that all the people at a vertex have to evacuate to the same sink, and in the latter model, there is no such restriction. In this paper, for both models, we develop algorithms which run in almost linear time regardless of the number of sinks. It should be stressed that for the confluent flow model, our algorithm improves upon the previous result by Benkoczi et al. [Theoretical Computer Science, 2020], and the one for the non-confluent flow model is the first polynomial time algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
35. A novel algorithmic construction for deductions of categorical polysyllogisms by Carroll's diagrams.
- Author
-
Senturk, Ibrahim, Gursoy, Necla Kircali, Oner, Tahsin, and Gursoy, Arif
- Subjects
- *
ARTIFICIAL intelligence , *ALGORITHMS , *AUTHORSHIP in literature , *COMPUTER science , *SYLLOGISM - Abstract
In this work, with the help of a calculus system, syllogistic logic with Carroll's diagrams (SLCD), we construct a useful algorithm for the possible deductions of polysyllogisms (soriteses). This algorithm makes a general deduction in categorical syllogisms with the help of diagrams depicting each proposition of a polysyllogism. The developed calculus system PolySLCD (PSLCD) allows a formal deduction from a premise set by simultaneously combining biliteral and triliteral diagrammatic representations with a simple algorithmic structure. The algorithm can be used to deduce new conclusions, step by step, through recursive conclusion sets obtained from the premises of categorical polysyllogisms. The fundamental contributions of this paper are accurately deducing conclusions from sets corresponding to given premises, mirroring exact human reasoning, using a single algorithm, and designing this algorithm based on SLCD, which makes it well suited to computer-aided solutions. Since the algorithm is set-based, it is novel in the literature and can readily benefit researchers using polysyllogisms in different scientific branches, such as computer science, decision-making systems, and artificial intelligence. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
36. Subspace clustering based on latent low rank representation with Frobenius norm minimization.
- Author
-
Yu, Song and Yiquan, Wu
- Subjects
- *
FROBENIUS manifolds , *FROBENIUS algebras , *ARTIFICIAL neural networks , *ALGORITHMS , *COMPUTER science - Abstract
This paper considers the problem of subspace clustering, which refers to segmenting a collection of data samples approximately drawn from a union of linear subspaces. Among existing subspace clustering algorithms, low rank representation (LRR) based subspace clustering is a very powerful method and has demonstrated good performance. The latent low rank representation (LLRR) subspace clustering algorithm improves on the original LRR algorithm when the observed data samples are insufficient, and its clustering accuracy is higher than that of LRR. Recently, a Frobenius norm minimization based LRR algorithm has been proposed whose clustering accuracy is also higher than that of LRR, demonstrating the effectiveness of the Frobenius norm as another convex surrogate of the rank function. Combining LLRR and the Frobenius norm, a new low rank representation subspace clustering algorithm is proposed in this paper: the nuclear norm in the LLRR algorithm is replaced by the Frobenius norm, and the resulting optimization problem is solved via the alternating direction method of multipliers (ADMM). Experimental results show that, compared with LRR, LLRR, and several other state-of-the-art subspace clustering algorithms, the proposed algorithm achieves higher clustering accuracy, and compared with LLRR, its running time is reduced significantly. [ABSTRACT FROM AUTHOR] (A simplified Frobenius-norm sketch follows this entry.)
- Published
- 2018
- Full Text
- View/download PDF
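As a hedged illustration of the idea in entry 36, replacing the nuclear norm with the Frobenius norm in a self-representation model, the sketch below solves the simplified problem min_Z ||X - XZ||_F^2 + lam ||Z||_F^2, which has a closed form, and feeds the resulting affinity into spectral clustering. This simplification is not the paper's LLRR formulation (which involves latent data and is solved with ADMM); lam, the toy data, and the scikit-learn dependency are assumptions.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def frobenius_self_representation(X, lam=0.1):
    """Solve min_Z ||X - X Z||_F^2 + lam ||Z||_F^2 in closed form (ridge-like)."""
    # Z = (X^T X + lam I)^{-1} X^T X
    n = X.shape[1]
    G = X.T @ X
    return np.linalg.solve(G + lam * np.eye(n), G)

def subspace_cluster(X, n_clusters, lam=0.1):
    """Cluster columns of X (each column is a data sample) via a Z-based affinity."""
    Z = frobenius_self_representation(X, lam)
    W = np.abs(Z) + np.abs(Z.T)          # symmetric affinity matrix
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(W)

# Toy example: samples drawn near two 1-D subspaces in R^3.
rng = np.random.default_rng(0)
A = np.outer([1, 0, 0], rng.normal(size=20))
B = np.outer([0, 1, 1], rng.normal(size=20))
X = np.hstack([A, B]) + 0.01 * rng.normal(size=(3, 40))
print(subspace_cluster(X, n_clusters=2))
```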
37. Efficient Online String Matching Based on Characters Distance Text Sampling.
- Author
-
Faro, Simone, Marino, Francesco Pio, and Pavone, Arianna
- Subjects
- *
INFORMATION retrieval , *COMPUTER science , *ELECTRONIC information resource searching , *ALGORITHMS , *APPLICATION software , *HAMMING distance , *PATTERN matching , *NATURAL language processing - Abstract
Searching for all occurrences of a pattern in a text is a fundamental problem in computer science with applications in many other fields, like natural language processing, information retrieval and computational biology. Sampled string matching is an efficient approach recently introduced in order to overcome the prohibitive space requirements of an index construction, on the one hand, and drastically reduce searching time for the online solutions, on the other hand. In this paper we present a new algorithm for the sampled string matching problem, based on a characters distance sampling approach. The main idea is to sample the distances between consecutive occurrences of a given pivot character and then to search the sampled data online for any occurrence of the sampled pattern, before verifying the original text. From a theoretical point of view we prove that, under suitable conditions, our solution can achieve both linear worst-case time complexity and optimal average-time complexity. From a practical point of view it turns out that our solution shows a sub-linear behaviour in practice and speeds up online searching by a factor of up to 9, using limited additional space whose amount ranges from 11% down to 2.8% of the text size, with a gain of up to 50% compared with previous solutions. [ABSTRACT FROM AUTHOR] (A simplified sampling sketch follows this entry.)
- Published
- 2020
- Full Text
- View/download PDF
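The core idea of entry 37, sampling distances between consecutive occurrences of a pivot character and searching that sampled sequence before verifying in the original text, can be illustrated with the simplified sketch below. It ignores the paper's space-efficient encodings and edge-case handling (for example, patterns containing fewer than two pivot occurrences fall back to naive search here) and is only an assumption-laden toy version.

```python
def distance_sample(s, pivot):
    """Positions of the pivot character and distances between consecutive ones."""
    pos = [i for i, ch in enumerate(s) if ch == pivot]
    dists = [pos[i + 1] - pos[i] for i in range(len(pos) - 1)]
    return pos, dists

def sampled_search(text, pattern, pivot):
    """Find all occurrences of pattern in text via the sampled distance sequence."""
    t_pos, t_dist = distance_sample(text, pivot)
    p_pos, p_dist = distance_sample(pattern, pivot)
    if len(p_pos) < 2:                      # toy fallback: needs 2+ pivots in pattern
        return [i for i in range(len(text) - len(pattern) + 1)
                if text[i:i + len(pattern)] == pattern]
    hits = []
    m = len(p_dist)
    for j in range(len(t_dist) - m + 1):
        if t_dist[j:j + m] == p_dist:       # candidate from the sampled sequence
            start = t_pos[j] - p_pos[0]     # align the first pivot occurrences
            if 0 <= start and text[start:start + len(pattern)] == pattern:
                hits.append(start)          # verify against the original text
    return hits

print(sampled_search("abracadabra cadabra", "cadabra", pivot="a"))  # [4, 12]
```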
38. Reference-Based Framework for Spatio-Temporal Trajectory Compression and Query Processing.
- Author
-
Zheng, Kai, Zhao, Yan, Lian, Defu, Zheng, Bolong, Liu, Guanfeng, and Zhou, Xiaofang
- Subjects
- *
DATA compression , *GREEDY algorithms , *ALGORITHMS , *TELECOMMUNICATION , *DYNAMIC programming , *WIRELESS communications - Abstract
The pervasiveness of GPS-enabled devices and wireless communication technologies results in massive trajectory data, incurring expensive cost for storage, transmission, and query processing. To relieve this problem, in this paper we propose a novel framework for compressing trajectory data, REST (Reference-based Spatio-temporal trajectory compression), by which a raw trajectory is represented by concatenation of a series of historical (sub-)trajectories (called reference trajectories) that form the compressed trajectory within a given spatio-temporal deviation threshold. In order to construct a reference trajectory set that can most benefit the subsequent compression, we propose three kinds of techniques to select reference trajectories wisely from a large dataset such that the resulting reference set is more compact yet covering most footprints of trajectories in the area of interest. To address the computational issue caused by the large number of combinations of reference trajectories that may exist for resembling a given trajectory, we propose efficient greedy algorithms that run in the blink of an eye and dynamic programming algorithms that can achieve the optimal compression ratio. Compared to existing work on trajectory compression, our framework has few assumptions about data such as moving within a road network or moving with constant direction and speed, and better compression performance with fairly small spatio-temporal loss. In addition, by indexing the reference trajectories directly with an in-memory R-tree and building connections to the raw trajectories with inverted index, we develop an extremely efficient algorithm that can answer spatio-temporal range queries over trajectories in their compressed form. Extensive experiments on a real taxi trajectory dataset demonstrate the superiority of our framework over existing representative approaches in terms of both compression ratio and efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
39. Sublinear-Time Algorithms for Compressive Phase Retrieval.
- Author
-
Li, Yi and Nakos, Vasileios
- Subjects
- *
SCIENTIFIC computing , *ALGORITHMS , *IMAGE reconstruction algorithms , *APPROXIMATION algorithms - Abstract
In the problem of compressed phase retrieval, the goal is to reconstruct a sparse or approximately $k$-sparse vector $x \in \mathbb{C}^n$ given access to $y = |\Phi x|$, where $|v|$ denotes the vector obtained by taking the absolute value of $v \in \mathbb{C}^n$ coordinate-wise. In this paper we present sublinear-time algorithms for a few for-each variants of the compressive phase retrieval problem, which are akin to the variants considered for the classical compressive sensing problem in theoretical computer science. Our algorithms use purely combinatorial techniques and a near-optimal number of measurements. [ABSTRACT FROM AUTHOR] (A sketch of the measurement model follows this entry.)
- Published
- 2020
- Full Text
- View/download PDF
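To ground the measurement model $y = |\Phi x|$ in entry 39, here is a tiny numpy sketch that builds a $k$-sparse complex vector, takes magnitude-only Gaussian measurements, and checks the inherent global-phase ambiguity. It only illustrates the problem setup; it is not one of the paper's sublinear-time combinatorial algorithms, and the dimensions are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, m = 256, 5, 64

# A k-sparse complex signal x in C^n.
x = np.zeros(n, dtype=complex)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k) + 1j * rng.normal(size=k)

# Magnitude-only measurements y = |Phi x| with a complex Gaussian Phi.
Phi = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2 * m)
y = np.abs(Phi @ x)

# The measurements cannot distinguish x from e^{i*theta} * x (global phase).
theta = 0.7
assert np.allclose(y, np.abs(Phi @ (np.exp(1j * theta) * x)))
print("measurements:", y.shape, "sparsity:", np.count_nonzero(x))
```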
40. An Improved Decoding Algorithm to Decode Quadratic Residue Codes Based on the Difference of Syndromes.
- Author
-
Duan, Yunde and Li, Yong
- Subjects
- *
DECODING algorithms , *CONGRUENCES & residues , *ALGORITHMS , *CIPHERS , *SYNDROMES , *COMPUTER science - Abstract
The paper proposes an improved difference-of-syndromes decoding algorithm for decoding quadratic residue codes up to half the minimum distance. Therein, soft-decision information is not essential, but it can be used to speed up the computations. The improvement over the prior art is demonstrated by simulations. Moreover, the proposed algorithm is also compared with the classic Berlekamp-Massey algorithm by taking two BCH codes as examples. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
41. A comprehensive review of moth-flame optimisation: variants, hybrids, and applications.
- Author
-
Hussien, Abdelazim G., Amin, Mohamed, and Abd El Aziz, Mohamed
- Subjects
- *
WIRELESS sensor networks , *ALGORITHMS , *COMPUTER science - Abstract
The Moth-flame Optimisation Algorithm (MFO) is a metaheuristic optimisation algorithm presented by Mirjalili in 2015 and inspired by the navigation method of moths in nature. It has gained huge interest due to its impressive characteristics: no derivative information is needed in the initial phase, it has few parameters, and it is simple to implement, scalable, and flexible. To date, different MFO variants have been introduced to solve various optimisation problems, including binary, real (continuous), constrained, single-objective, multi-objective, and multimodal problems, and many research papers have been presented and summarised. In this review, a general overview of MFO is presented first. Then, different variants of MFO are described, classified into three classes: modified, hybridised, and multi-objective. Furthermore, applications of MFO in engineering, computer science, wireless sensor networks, and other fields are discussed. Finally, possible future directions are provided. [ABSTRACT FROM AUTHOR] (A sketch of the basic MFO update follows this entry.)
- Published
- 2020
- Full Text
- View/download PDF
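For readers new to the algorithm reviewed in entry 41, the sketch below implements a simplified version of the commonly cited MFO loop: the best solutions found so far act as flames, each moth flies around its assigned flame along a logarithmic spiral, and the number of flames shrinks over the iterations. The flame-memory bookkeeping of the original method is simplified here, and the hyperparameters are assumptions, so treat this as an illustration rather than a faithful reimplementation.

```python
import numpy as np

def mfo_minimize(objective, lo, hi, n_moths=30, n_iters=200, b=1.0, seed=0):
    """Simplified Moth-flame Optimisation (minimisation) sketch."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = lo.size
    moths = rng.uniform(lo, hi, size=(n_moths, dim))
    for it in range(n_iters):
        fitness = np.array([objective(m) for m in moths])
        flames = moths[np.argsort(fitness)]          # best solutions act as flames
        # Number of flames shrinks over time (exploration -> exploitation).
        n_flames = max(1, round(n_moths - it * (n_moths - 1) / n_iters))
        r = -1.0 - it / n_iters                      # r goes from -1 towards -2
        for i in range(n_moths):
            f = flames[min(i, n_flames - 1)]
            D = np.abs(f - moths[i])                 # distance to the assigned flame
            t = (r - 1.0) * rng.random(dim) + 1.0    # t drawn from [r, 1]
            # Logarithmic spiral flight around the flame.
            moths[i] = D * np.exp(b * t) * np.cos(2 * np.pi * t) + f
        moths = np.clip(moths, lo, hi)
    best = min(moths, key=objective)
    return best, objective(best)

best_x, best_f = mfo_minimize(lambda p: np.sum((p - 1.0) ** 2), [-5, -5], [5, 5])
```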
42. Multi-Agent Deep Reinforcement Learning for Urban Traffic Light Control in Vehicular Networks.
- Author
-
Wu, Tong, Zhou, Pan, Liu, Kai, Yuan, Yali, Wang, Xiumin, Huang, Huawei, and Wu, Dapeng Oliver
- Subjects
- *
CITY traffic , *REINFORCEMENT learning , *TRAFFIC engineering , *DEEP learning , *ROAD interchanges & intersections , *ALGORITHMS , *TRAFFIC congestion - Abstract
As urban traffic conditions are diverse and complicated, applying reinforcement learning to reduce traffic congestion has become a hot and promising topic. In particular, coordinating the traffic light controllers of multiple intersections is a key challenge for multi-agent reinforcement learning (MARL). Most existing MARL studies are based on traditional Q-learning, but an unstable environment leads to poor learning in complicated and dynamic traffic scenarios. In this paper, we propose a novel multi-agent recurrent deep deterministic policy gradient (MARDDPG) algorithm, based on the deep deterministic policy gradient (DDPG) algorithm, for traffic light control (TLC) in vehicular networks. Specifically, centralized learning in each critic network enables each agent to estimate the policies of other agents in the decision-making process so that agents can coordinate with each other, alleviating the poor learning performance caused by environmental instability, while decentralized execution enables each agent to make decisions independently. We share parameters in the actor networks to speed up training and reduce the memory footprint, and the addition of an LSTM helps alleviate the environmental instability caused by partially observable states. We utilize surveillance cameras and vehicular networks to collect status information for each intersection. Unlike previous work, we consider not only vehicles but also the pedestrians waiting to pass through the intersection, and we set different priorities for buses and ordinary vehicles. Experimental results in a vehicular network show that our method runs stably in various scenarios and coordinates multiple intersections, significantly reducing both vehicle and pedestrian congestion. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
43. ROAM: A Fundamental Routing Query on Road Networks with Efficiency.
- Author
-
Luo, Siqiang, Cheng, Reynold, Kao, Ben, Xiao, Xiaokui, Zhou, Shuigeng, and Hu, Jiafeng
- Subjects
- *
ALGORITHMS , *TRAVEL costs , *COST shifting , *PUBLIC transit , *GRAPH algorithms - Abstract
Novel road-network applications often recommend interesting services or tasks to a moving object (e.g., a vehicle) on its way to a destination. A taxi-sharing system, for instance, suggests a new passenger to a taxi while it is serving another one. The traveling cost is then shared among these passengers. A fundamental query is: given two nodes $s$ and $t$, and an area $\mathcal{A}$ on a road network graph $\mathcal{G}$, is there a "good" route (e.g., a short enough path) $P$ from $s$ to $t$ that crosses $\mathcal{A}$ in $\mathcal{G}$? In a taxi-sharing system, $s$ and $t$ can be a taxi's current and destined locations, and $\mathcal{A}$ contains all the places to which a person waiting for a taxi is willing to walk. Answering this Route and Area Matching (ROAM) query allows the application involved to recommend appropriate services to users efficiently. In this paper, we examine efficient ROAM query algorithms. In particular, we develop solutions for finding a $\rho$-route, which is an $s$-$t$ path that passes $\mathcal{A}$ with a length of at most $(1+\rho)$ times the shortest distance between $s$ and $t$. The existence of a $\rho$-route implies that a service or task located at $\mathcal{A}$ can be found for a given moving object $m$, and that $m$ only deviates slightly from its current route. We present comprehensive studies on index-free and index-based algorithms for answering ROAM queries. Comprehensive experiments show that our algorithm runs up to 30 times faster than baseline algorithms. [ABSTRACT FROM AUTHOR] (A sketch of a $\rho$-route check follows this entry.)
- Published
- 2020
- Full Text
- View/download PDF
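As a hedged illustration of the $\rho$-route notion from entry 43 (not the paper's index-free or index-based algorithms), the sketch below runs Dijkstra from $s$ and from $t$ and checks whether some node in the area lies on an $s$-$t$ path of length at most $(1+\rho)$ times the shortest $s$-$t$ distance. It treats "crossing the area" as visiting one of its nodes and assumes an undirected graph given as an adjacency dictionary; the toy network and weights are made up.

```python
import heapq
import math

def dijkstra(graph, src):
    """Shortest distances from src in an undirected weighted graph (adjacency dict)."""
    dist = {v: math.inf for v in graph}
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def has_rho_route(graph, s, t, area, rho):
    """Is there an s-t path through some node of `area` within (1+rho) x shortest?"""
    from_s = dijkstra(graph, s)
    from_t = dijkstra(graph, t)          # undirected, so this equals distance-to-t
    budget = (1.0 + rho) * from_s[t]
    detour = min(from_s[a] + from_t[a] for a in area)
    return detour <= budget

# Toy road network: s-u-t is the shortest route; node "a" is a small detour.
G = {
    "s": [("u", 1.0), ("a", 1.2)],
    "u": [("s", 1.0), ("t", 1.0)],
    "a": [("s", 1.2), ("t", 1.3)],
    "t": [("u", 1.0), ("a", 1.3)],
}
print(has_rho_route(G, "s", "t", area={"a"}, rho=0.3))   # True: 2.5 <= 1.3 * 2.0
```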
44. An Optimal XP Algorithm for Hamiltonian Cycle on Graphs of Bounded Clique-Width.
- Author
-
Bergougnoux, Benjamin, Kanté, Mamadou Moustapha, and Kwon, O-joung
- Subjects
- *
HAMILTONIAN graph theory , *ALGORITHMS , *MULTIGRAPH , *MATHEMATICAL connectedness , *COMPUTER science - Abstract
In this paper, we prove that, given a clique-width k-expression of an n-vertex graph, Hamiltonian Cycle can be solved in time $n^{O(k)}$. This improves the naive algorithm that runs in time $n^{O(k^2)}$ by Espelage et al. (Graph-theoretic concepts in computer science, vol 2204. Springer, Berlin, 2001), and it also matches the lower bound result by Fomin et al. that, unless the Exponential Time Hypothesis fails, there is no algorithm running in time $n^{o(k)}$ (Fomin et al. in SIAM J Comput 43:1541–1563, 2014). We present a technique of representative sets using two-edge-colored multigraphs on k vertices. The essential idea is that, for a two-edge-colored multigraph, the existence of an Eulerian trail that uses edges with different colors alternately can be determined from two pieces of information: the number of colored edges incident with each vertex, and the connectedness of the multigraph. With this idea, we avoid the bottleneck of the naive algorithm, which stores all the possible multigraphs on k vertices with at most n edges. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
45. Hardness and Structural Results for Half-Squares of Restricted Tree Convex Bipartite Graphs.
- Author
-
Le, Hoang-Oanh and Le, Van Bang
- Subjects
- *
PLANAR graphs , *BIPARTITE graphs , *GRAPH algorithms , *HARDNESS , *COMPUTER science , *ALGORITHMS - Abstract
Let B = (X, Y, E) be a bipartite graph. A half-square of B has one color class of B as vertex set, say X; two vertices are adjacent whenever they have a common neighbor in Y. Every planar graph is a half-square of a planar bipartite graph, namely of its subdivision. Until recently, only half-squares of planar bipartite graphs, also known as map graphs (Chen et al., in: Proceedings of the thirtieth annual ACM symposium on the theory of computing, STOC '98, pp 514–523. 10.1145/276698.276865, 1998; J ACM 49(2):127–138. 10.1145/506147.506148, 2002), have been investigated, and the most discussed problem is whether it is possible to recognize these graphs faster and more simply than Thorup's $O(n^{120})$-time algorithm (Thorup, in: Proceedings of the 39th IEEE symposium on foundations of computer science (FOCS), pp 396–405. 10.1109/SFCS.1998.743490, 1998). In this paper, we identify the first hardness case, namely that deciding if a graph is a half-square of a balanced bisplit graph is NP-complete. (Balanced bisplit graphs form a proper subclass of star convex bipartite graphs). For classical subclasses of tree convex bipartite graphs such as biconvex, convex, and chordal bipartite graphs, we give good structural characterizations of their half-squares that imply efficient recognition algorithms. As a by-product, we obtain new characterizations of unit interval graphs, interval graphs, and of strongly chordal graphs in terms of half-squares of biconvex bipartite, convex bipartite, and of chordal bipartite graphs, respectively. Good characterizations of half-squares of star convex and star biconvex bipartite graphs are also given, yielding linear-time recognition algorithms for these half-squares. [ABSTRACT FROM AUTHOR] (A sketch of the half-square definition follows this entry.)
- Published
- 2019
- Full Text
- View/download PDF
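The half-square definition at the start of entry 45 is easy to state in code: take one color class X of a bipartite graph B = (X, Y, E) and join two X-vertices whenever they share a neighbor in Y. The sketch below is just this definition applied naively; it is unrelated to the paper's recognition algorithms and hardness proof, and the toy input is made up.

```python
from itertools import combinations

def half_square(y_neighbors):
    """Half-square on X of a bipartite graph B = (X, Y, E).

    y_neighbors: dict mapping each y in Y to the set of its neighbors in X.
    Returns the edge set: x1, x2 are adjacent iff they share some y.
    """
    edges = set()
    for nbrs in y_neighbors.values():
        for x1, x2 in combinations(sorted(nbrs), 2):
            edges.add(frozenset((x1, x2)))
    return edges

# Toy bipartite graph: y1 sees {a, b}, y2 sees {b, c}, y3 sees {a, c, d}.
B = {"y1": {"a", "b"}, "y2": {"b", "c"}, "y3": {"a", "c", "d"}}
print(sorted(tuple(sorted(e)) for e in half_square(B)))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('c', 'd')]
```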
46. Research on OpenCL optimization for FPGA deep learning application.
- Author
-
Zhang, Shuo, Wu, Yanxia, Men, Chaoguang, He, Hongtao, and Liang, Kai
- Subjects
- *
COMPUTER science , *MACHINE learning , *GRAPHICS processing units , *COGNITIVE science , *COMPUTER software , *ARTIFICIAL intelligence , *DEEP learning - Abstract
In recent years, with the development of computer science, deep learning has come to be regarded as capable of solving the problem of inference and learning in high-dimensional spaces, and has therefore received unprecedented attention from both academia and the business community. Compared with CPUs/GPUs, FPGAs have attracted much attention for deep learning because of their high energy efficiency, short development cycle, and reconfigurability. However, because research on OpenCL optimization of deep learning algorithms on FPGAs is limited, OpenCL tools and models applied to CPUs/GPUs cannot be used directly on FPGAs, which makes it difficult for software programmers to obtain rewarding performance when implementing deep learning algorithms on FPGAs. To solve this problem, this paper proposes an OpenCL computational model based on an FPGA template architecture to optimize the time-consuming convolution layer in deep learning. A comparison between a program applying the computational model and the corresponding optimized program provided by Xilinx indicates that the former achieves 8-40 times higher performance than the latter. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
47. Efficient Top-k Dominating Computation on Massive Data.
- Author
-
Han, Xixian, Li, Jianzhong, and Gao, Hong
- Subjects
- *
DATA analysis , *ALGORITHMS , *MATHEMATICAL programming , *DATA binning , *DATA modeling - Abstract
In many applications, the top-k dominating query is an important operation that returns the k tuples with the highest domination scores in a potentially huge data space. Analysis shows that existing algorithms suffer performance problems when run on massive data. This paper proposes a novel table-scan-based TDTS algorithm to compute top-k dominating results efficiently. TDTS first presorts the table to enable early termination; an early-termination check is proposed, along with a theoretical analysis of scan depth, and a tuple-pruning operation is devised whose theoretical effect shows that the number of tuples maintained in TDTS can be reduced substantially. Extensive experiments, conducted on synthetic and real-life data sets, show that TDTS outperforms the existing algorithms significantly. [ABSTRACT FROM PUBLISHER] (A naive sketch of the query semantics follows this entry.)
- Published
- 2017
- Full Text
- View/download PDF
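For readers unfamiliar with the query in entry 47, the sketch below computes domination scores naively: tuple t dominates tuple u if t is at least as good on every attribute and strictly better on at least one (here, larger is assumed better), and the top-k tuples by number of dominated tuples are returned. This quadratic baseline only illustrates the query semantics; it is not the paper's table-scan-based TDTS algorithm, and the sample data is invented.

```python
def dominates(t, u):
    """t dominates u: >= on all attributes and > on at least one (larger is better)."""
    return all(a >= b for a, b in zip(t, u)) and any(a > b for a, b in zip(t, u))

def topk_dominating(tuples, k):
    """Naive O(n^2) top-k dominating query: rank tuples by how many others they dominate."""
    scores = [(sum(dominates(t, u) for u in tuples), t) for t in tuples]
    scores.sort(key=lambda x: x[0], reverse=True)
    return scores[:k]

data = [(5, 3), (4, 4), (2, 2), (1, 5), (3, 1)]
print(topk_dominating(data, k=2))   # [(2, (5, 3)), (2, (4, 4))]
```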
48. Needs, Pains, and Motivations in Autonomous Agents.
- Author
-
Starzyk, Janusz A., Graham, James, and Puzio, Leszek
- Subjects
- *
REINFORCEMENT learning , *MACHINE learning , *ALGORITHMS - Abstract
This paper presents the development of a motivated learning (ML) agent with symbolic I/O. Our earlier work on the ML agent was enhanced, giving it autonomy for interaction with other agents. Specifically, we equipped the agent with drives and pains that establish its motivations to learn how to respond to desired and undesired events and create related abstract goals. The purpose of this paper is to explore the autonomous development of motivations and memory in agents within a simulated environment. The ML agent has been implemented in a virtual environment created within the NeoAxis game engine. Additionally, to illustrate the benefits of an ML-based agent, we compared the performance of our algorithm against various reinforcement learning (RL) algorithms in a dynamic test scenario, and demonstrated that our ML agent learns better than any of the tested RL agents. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
49. Collaborative linear manifold learning for link prediction in heterogeneous networks.
- Author
-
Liu, JiaHui, Jin, Xu, Hong, YuXiang, Liu, Fan, Chen, QiXiang, Huang, YaLou, Liu, MingMing, Xie, MaoQiang, and Sun, FengChi
- Subjects
- *
ALGORITHMS , *COMPUTER science , *MANIFOLDS (Mathematics) , *TOPOLOGY - Abstract
Link prediction in heterogeneous networks aims to predict missing interactions between pairs of nodes with the help of the topology of the target network and interconnected auxiliary networks. It has attracted considerable attention from both the computer science and bioinformatics communities in recent years. In this paper, we introduce a novel Collaborative Linear Manifold Learning (CLML) algorithm, which optimizes the consistency of node similarities by collaboratively using the manifolds embedded between the target network and the auxiliary network. Experiments on four benchmark datasets demonstrate the outstanding advantages of CLML, not only in its high prediction performance compared to baseline methods, but also in its capability to predict unknown interactions in the target networks accurately and effectively. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
50. Introducing Cuts Into a Top-Down Process for Checking Tree Inclusion.
- Author
-
Chen, Yangjun and Chen, Yibin
- Subjects
- *
ALGORITHMS , *DATA mining , *PLANT genes , *PLANT genetics , *COMPUTER science - Abstract
The ordered tree inclusion problem asks whether a pattern tree P can be included in a target tree T, where the order of siblings in both P and T matters. This problem has many applications in practice, such as document retrieval, data mining, and RNA structure matching. In this paper, we propose an efficient algorithm for this problem. Its time complexity is bounded by $O(|T| \cdot \min\{h_P, |\mathrm{leaves}(P)|\})$, using $O(|T| + |P|)$ space, where $h_P$ ($h_T$) denotes the height of P (resp., T) and $\mathrm{leaves}(P)$ stands for the set of leaves of P. Up to now, the best algorithm for this problem needs $\Theta(|T| \cdot |\mathrm{leaves}(P)|)$ time and $O(|P| + |T|)$ space. Extensive experiments show that the new algorithm performs much better than the existing ones in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF