1,469 results
Search Results
2. Node similarity in the citation graph.
- Author
- Wangzhong Lu, Janssen, J., Milios, E., Japkowicz, N., and Yongzheng Zhang
- Subjects
- GRAPHIC methods, ALGORITHMS, COMPUTER science, INFORMATION retrieval, BIBLIOGRAPHICAL citations
- Abstract
Published scientific articles are linked together into a graph, the citation graph, through their citations. This paper explores the notion of similarity based on connectivity alone, and proposes several algorithms to quantify it. Our metrics take advantage of the local neighborhoods of the nodes in the citation graph. Two variants of link-based similarity estimation between two nodes are described, one based on the separate local neighborhoods of the nodes, and another based on the joint local neighborhood expanded from both nodes at the same time. The algorithms are implemented and evaluated on a subgraph of the citation graph of computer science in a retrieval context. The results are compared with text-based similarity, and demonstrate the complementarity of link-based and text-based retrieval. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
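The separate-neighborhood variant of link-based similarity that this abstract describes can be sketched roughly as follows. This is only an illustration using Jaccard overlap of local neighborhoods; the toy graph, node names, and the exact metric are assumptions, not the paper's algorithms:

```python
# Sketch (not the paper's exact metric): link-based similarity from local
# neighborhoods in a citation graph, measured as Jaccard overlap of each
# node's combined in/out links. The toy citation graph is invented.

def neighborhood(graph, node):
    """Local neighborhood: papers this node cites plus papers citing it."""
    out_links = graph.get(node, set())
    in_links = {u for u, cited in graph.items() if node in cited}
    return out_links | in_links

def link_similarity(graph, a, b):
    """Jaccard overlap of the two separate local neighborhoods."""
    na, nb = neighborhood(graph, a), neighborhood(graph, b)
    if not na and not nb:
        return 0.0
    return len(na & nb) / len(na | nb)

# Toy citation graph: node -> set of papers it cites.
citations = {
    "p1": {"p3", "p4"},
    "p2": {"p3", "p4", "p5"},
    "p3": set(),
    "p4": set(),
    "p5": set(),
}
print(link_similarity(citations, "p1", "p2"))  # ≈ 0.667: overlapping references
```

The joint-neighborhood variant the abstract also mentions would instead expand one neighborhood from both nodes simultaneously before comparing.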
3. Maximum Matching Sans Maximal Matching: A New Approach for Finding Maximum Matchings in the Data Stream Model.
- Author
- Feldman, Moran and Szarf, Ariel
- Subjects
- TRIANGLES, DATA modeling, COMBINATORIAL optimization, GREEDY algorithms, COMPUTER science, ALGORITHMS
- Abstract
The problem of finding a maximum size matching in a graph (known as the maximum matching problem) is one of the most classical problems in computer science. Despite a significant body of work dedicated to the study of this problem in the data stream model, the state-of-the-art single-pass semi-streaming algorithm for it is still a simple greedy algorithm that computes a maximal matching, and in this way obtains a 1/2-approximation. Some previous works described two/three-pass algorithms that improve on this approximation ratio by using their second and third passes to improve the above-mentioned maximal matching. One contribution of this paper continues this line of work by presenting new three-pass semi-streaming algorithms that work along these lines and obtain improved approximation ratios of 0.6111 and 0.5694 for triangle-free and general graphs, respectively. Unfortunately, a recent work by Konrad and Naidu (Approximation, randomization, and combinatorial optimization. Algorithms and techniques, APPROX/RANDOM 2021, August 16–18, 2021. LIPIcs, vol 207, pp 19:1–19:18, 2021. https://doi.org/10.4230/LIPIcs.APPROX/RANDOM.2021.19) shows that the strategy of constructing a maximal matching in the first pass and then improving it in further passes has limitations. Additionally, this technique is unlikely to get us closer to single-pass semi-streaming algorithms obtaining a better-than-1/2 approximation. Therefore, it is interesting to come up with algorithms that do something else with their first pass (we term such algorithms non-maximal-matching-first algorithms). No such algorithms were previously known, and the main contribution of this paper is describing such algorithms that obtain approximation ratios of 0.5384 and 0.5555 in two and three passes, respectively, for general graphs. The main significance of our results is not in the numerical improvements, but in demonstrating the potential of non-maximal-matching-first algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
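The single-pass greedy baseline the abstract refers to (keep an edge whenever both endpoints are still unmatched, which yields a maximal matching and hence a 1/2-approximation) can be sketched as follows, assuming the edge stream is a sequence of node pairs:

```python
# Sketch of the single-pass semi-streaming baseline mentioned above:
# greedily keep an edge whenever both endpoints are still unmatched.
# The result is a maximal matching, a 1/2-approximation of the maximum.

def greedy_maximal_matching(edge_stream):
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.add(u)
            matched.add(v)
    return matching

# A path a-b-c-d streamed in order: the greedy pass keeps (a, b) and (c, d).
edges = [("a", "b"), ("b", "c"), ("c", "d")]
print(greedy_maximal_matching(edges))  # [('a', 'b'), ('c', 'd')]
```

The paper's point is that the first pass need not build such a maximal matching at all; the sketch only shows the baseline being improved upon.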
4. MapReduce-based distributed tensor clustering algorithm.
- Author
- Zhang, Hongjun, Li, Peng, Meng, Fanshuo, Fan, Weibei, and Xue, Zhuangzhuang
- Subjects
- K-means clustering, CLUSTER analysis (Statistics), COMPUTER science, ALGORITHMS, PARTITION functions, PARALLEL processing, PARALLEL programming
- Abstract
Cluster analysis is one of the most fundamental methods in data mining, and it has been widely used in economics, social sciences and computer science. However, with the rapid development of Internet technology, the volume of data required for various web applications has grown rapidly, confronting traditional clustering analysis methods with technical challenges. How to obtain useful information from a large amount of data quickly and efficiently is an urgent problem in many industrial fields. With the continuous development of cloud computing technology, large amounts of data can be processed quickly and efficiently. Hadoop is an open source distributed cloud computing platform with HDFS (Hadoop Distributed File System) and MapReduce at its core. HDFS provides massive data storage, while MapReduce achieves parallel processing through the MapReduce programming model. Compared with traditional parallel programming models, it provides basic functions such as data partitioning, task scheduling, and parallel processing, making it possible for users to develop distributed applications on their own without understanding the fundamentals of distributed systems, thus facilitating the design of parallel programs. The K-means algorithm is a typical clustering analysis method that is widely used in industry, but its number of iterations increases significantly with the growth of data volume, reducing computational efficiency. In order to better suit the cluster analysis of large-scale data, this paper implements a parallelization of the algorithm on the Hadoop platform using the basic idea of MapReduce, and improves the K-means algorithm to address the blindness of random initial-center selection and its tendency to fall into local optima. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
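One iteration of K-means fits the MapReduce model naturally: the map step assigns each point to its nearest center, and the reduce step averages the points of each group into a new center. A minimal local sketch (plain Python standing in for Hadoop; this is not the paper's implementation or its improved center-selection scheme):

```python
# Minimal sketch of one K-means iteration in MapReduce style: map emits
# (center_id, point) pairs; reduce averages each center's points.
# Points and centers are invented toy data.

from collections import defaultdict

def kmeans_map(points, centers):
    for p in points:
        nearest = min(range(len(centers)),
                      key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
        yield nearest, p

def kmeans_reduce(pairs):
    groups = defaultdict(list)
    for center_id, p in pairs:
        groups[center_id].append(p)
    return {cid: tuple(sum(xs) / len(xs) for xs in zip(*pts))
            for cid, pts in groups.items()}

points = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
centers = [(0.0, 0.0), (10.0, 10.0)]
print(kmeans_reduce(kmeans_map(points, centers)))  # {0: (0.0, 0.5), 1: (10.0, 10.5)}
```

On Hadoop, `kmeans_map` and `kmeans_reduce` would run as distributed Map and Reduce tasks, with the driver iterating until the centers converge.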
5. Software tools for learning artificial intelligence algorithms.
- Author
- Stamenković, Srećko, Jovanović, Nenad, Vasović, Bojan, Cvjetković, Miloš, and Jovanović, Zoran
- Subjects
- ARTIFICIAL intelligence, SOFTWARE development tools, EDUCATIONAL technology, COMPUTER science, ALGORITHMS, SIMULATION software
- Abstract
In recent years, artificial intelligence has become an important discipline in the field of computer science. Students, in the absence of basic prior knowledge, may have difficulty following the material when they first encounter complex and abstract artificial intelligence algorithms. Numerous researchers and educators point out that the use of simulation systems and software tools to illustrate the dynamic behavior of an algorithm can prove to be an effective solution. The introduction and adoption of new technologies in learning and teaching has evolved rapidly. This conceptual review paper explores the emergence of innovative educational technologies in the teaching and learning of artificial intelligence. Its aim is to analyze existing representative educational tools for learning topics in the field of artificial intelligence and to highlight their characteristics and the areas they cover, so that readers can more easily draw conclusions about the possible use of the analyzed systems. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
6. Bit-Close: a fast incremental concept calculation method.
- Author
- Ke, Yunfeng, Li, Jinhai, and Li, Shen
- Subjects
- COGNITIVE learning, COMPUTER science, CONCEPT learning, DATA mining, ALGORITHMS
- Abstract
The theory of Formal Concept Analysis (FCA) finds diverse applications in fields like knowledge extraction, cognitive concept learning and data mining. The construction of a concept lattice significantly influences the effectiveness of formal concept analysis; hence, the development of high-performance algorithms for concept construction is crucial. In this paper, we introduce a novel algorithm called "Bit-Close" for formal concept construction. Bit-Close leverages bit representation and operations, fundamental to computer science, to enhance the In-Close algorithm. Furthermore, we explore the parallel method of Bit-Close. Our experimental results, obtained from multiple public and random datasets, demonstrate that Bit-Close outperforms In-Close by approximately 20% and is significantly better than other competing algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
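The bit-level idea the abstract credits for Bit-Close's speedup can be illustrated with a formal-concept closure computed over integer bitmasks. This is my own sketch of the general technique, not the In-Close/Bit-Close algorithm itself, and the toy context is invented:

```python
# Sketch of the bit-representation trick: encode each object's attribute
# set as an integer bitmask, so the closure B'' of an attribute set B in a
# formal context reduces to bitwise AND over the matching object rows.

def close(object_rows, b_mask, n_attrs):
    """Return the closure of attribute mask b_mask in the formal context."""
    full = (1 << n_attrs) - 1
    common = full
    for row in object_rows:
        if row & b_mask == b_mask:   # object has every attribute in B
            common &= row            # intersect its attribute set
    return common

# Context with 3 attributes (bit i = attribute i) and 3 objects.
rows = [0b011, 0b111, 0b110]
print(bin(close(rows, 0b100, 3)))  # 0b110: all objects with attr 2 also have attr 1
```

A single machine word thus intersects up to 64 attributes per AND, which is the kind of constant-factor saving the abstract's ~20% improvement over In-Close points at.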
7. Guest Editor's Introduction.
- Author
- Joe, Kazuki
- Subjects
- ALGORITHMS, COMPUTER multitasking, COMPILERS (Computer programs), COMPUTER programming, PARALLEL programming, COMPUTER science
- Abstract
This article introduces the themes of the papers featured in the April 2000 special issue of the "International Journal of Parallel Programming." The first paper describes a loop transformation algorithm. The second paper proposes a new way to develop concurrent programs, called hypersequential programming. The third paper introduces a workstation cluster implementing a distributed shared-memory mechanism. Lastly, the fourth paper discusses issues of multilevel parallelization and how they affect design of parallelizing compilers.
- Published
- 2000
8. Faster algorithms for sparse ILP and hypergraph multi-packing/multi-cover problems.
- Author
- Gribanov, Dmitry, Shumilov, Ivan, Malyshev, Dmitry, and Zolotykh, Nikolai
- Subjects
- HYPERGRAPHS, ALGORITHMS, SPARSE matrices, DOMINATING set, COMPUTER science, COMPUTATIONAL complexity
- Abstract
In our paper, we consider the following general problems: check feasibility, count the number of feasible solutions, find an optimal solution, and count the number of optimal solutions in P ∩ Z^n, assuming that P is a polyhedron defined by a system Ax ≤ b or Ax = b, x ≥ 0, with a sparse matrix A. We develop algorithms for these problems that outperform state-of-the-art ILP and counting algorithms on sparse instances with bounded elements in terms of computational complexity. Assuming that the matrix A has bounded elements, our complexity bounds have the form s^{O(n)}, where s is the minimum between the numbers of non-zeroes in the columns and rows of A, respectively. For s = o(log n), this bound outperforms the state-of-the-art ILP feasibility complexity bound (log n)^{O(n)}, due to Reis & Rothvoss (in: 2023 IEEE 64th Annual symposium on foundations of computer science (FOCS), IEEE, pp. 974–988). For s = ϕ^{o(log n)}, where ϕ denotes the input bit-encoding length, it outperforms the state-of-the-art ILP counting complexity bound ϕ^{O(n log n)}, due to Barvinok et al. (in: Proceedings of 1993 IEEE 34th annual foundations of computer science, pp. 566–572, https://doi.org/10.1109/SFCS.1993.366830, 1993), Dyer, Kannan (Math Oper Res 22(3):545–549, https://doi.org/10.1287/moor.22.3.545, 1997), Barvinok, Pommersheim (Algebr Combin 38:91–147, 1999), Barvinok (in: European Mathematical Society, ETH-Zentrum, Zurich, 2008). We use known and new methods to develop new exponential algorithms for Edge/Vertex Multi-Packing/Multi-Cover Problems on graphs and hypergraphs. This framework consists of many different problems, such as the Stable Multi-set, Vertex Multi-cover, Dominating Multi-set, Set Multi-cover, Multi-set Multi-cover, and Hypergraph Multi-matching problems, which are natural generalizations of the standard Stable Set, Vertex Cover, Dominating Set, Set Cover, and Maximum Matching problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
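Reading the abstract's sparsity parameter s as the minimum between the largest number of non-zeroes in any column and in any row of A (my reading of its wording, stated here as an assumption), it can be computed directly:

```python
# Compute s = min(max row non-zeroes, max column non-zeroes) for a small
# integer matrix A. The matrix is an invented example, not from the paper.

def sparsity_s(A):
    row_nnz = [sum(1 for x in row if x != 0) for row in A]
    col_nnz = [sum(1 for row in A if row[j] != 0) for j in range(len(A[0]))]
    return min(max(row_nnz), max(col_nnz))

A = [
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 1],
]
print(sparsity_s(A))  # 2: each row and each column holds two non-zeroes
```

The abstract's s^{O(n)} bounds improve on the general-purpose ones exactly when this quantity is small, e.g. s = o(log n).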
9. The limits of local search for weighted k-set packing.
- Author
- Neuwohner, Meike
- Subjects
- COMBINATORIAL optimization, COMPUTER science, ALGORITHMS
- Abstract
We consider the weighted k-set packing problem, where, given a collection S of sets, each of cardinality at most k, and a positive weight function w : S → Q_{>0}, the task is to find a sub-collection of S consisting of pairwise disjoint sets of maximum total weight. As this problem does not permit a polynomial-time o(k/log k)-approximation unless P = NP (Hazan et al. in Comput Complex 15:20–39, 2006. https://doi.org/10.1007/s00037-006-0205-6), most previous approaches rely on local search. For twenty years, Berman's algorithm SquareImp (Berman, in: Scandinavian workshop on algorithm theory, Springer, 2000. https://doi.org/10.1007/3-540-44985-X%5f19), which yields a polynomial-time ((k+1)/2 + ϵ)-approximation for any fixed ϵ > 0, has remained unchallenged. Only recently, it could be improved to (k+1)/2 − 1/63,700,993 by Neuwohner (38th International symposium on theoretical aspects of computer science (STACS 2021), Leibniz international proceedings in informatics (LIPIcs), 2021. https://doi.org/10.4230/LIPIcs.STACS.2021.53). In her paper, she showed that instances for which the analysis of SquareImp is almost tight are "close to unweighted" in a certain sense. But for the unit weight variant, the best known approximation guarantee is (k+1)/3 + ϵ (Fürer and Yu in International symposium on combinatorial optimization, Springer, 2014. https://doi.org/10.1007/978-3-319-09174-7%5f35). Using this observation as a starting point, we conduct a more in-depth analysis of close-to-tight instances of SquareImp. This finally allows us to generalize techniques used in the unweighted case to the weighted setting. In doing so, we obtain approximation guarantees of (k + ϵ_k)/2, where lim_{k→∞} ϵ_k = 0. On the other hand, we prove that this is asymptotically best possible in that local improvements of logarithmically bounded size cannot produce an approximation ratio below k/2. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
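The local-search paradigm discussed above can be illustrated with a much simpler improvement rule than SquareImp's (a naive sketch, not Berman's or the paper's algorithm): repeatedly swap in a set whose weight exceeds the total weight of the chosen sets it conflicts with. On the invented toy instance below, the search stops at a local optimum of weight 3 while the optimum packing {0, 2} has weight 4, the kind of gap the approximation analysis quantifies:

```python
# Naive local search for weighted set packing: accept a set whenever its
# weight beats the combined weight of the currently chosen sets it
# intersects. Each accepted swap strictly raises the total weight, so the
# loop terminates.

def local_search_packing(sets, weights):
    """sets: list of frozensets; weights: matching list of positive weights."""
    chosen = set()          # indices of pairwise-disjoint chosen sets
    improved = True
    while improved:
        improved = False
        for i, s in enumerate(sets):
            if i in chosen:
                continue
            conflicts = {j for j in chosen if sets[j] & s}
            if weights[i] > sum(weights[j] for j in conflicts):
                chosen -= conflicts     # swap out the lighter conflicting sets
                chosen.add(i)
                improved = True
    return chosen

sets = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})]
weights = [2, 3, 2]
print(local_search_packing(sets, weights))  # {1}: local optimum of weight 3
```

SquareImp and its successors use larger, more carefully weighted improvement steps precisely to push the guarantee from this naive rule's toward (k + ϵ_k)/2.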
10. An Efficient Physically-Based Model for Chinese Brush.
- Author
- Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Preparata, Franco P., Qizhi Fang, Bendu Bai, Kam-Wah Wong, and Yanning Zhang
- Abstract
This paper presents a novel physically-based model for virtual Chinese brush. Compared with previous works, the main advantage of our method lies in the use of physically based modeling methods that describe the behavior of the real brush's deformation in terms of the interaction of the external and internal forces with the virtual writing paper. Instead of simulating the brush using bristles, we use points to simulate the whole brush bundle, which can drastically decrease the complexity inherent in the conventional bristle-level approach. A spring network is derived to calculate the physical deflection of brush according to the force exerted on it. With this model, we can get a more effective simulation of real brush painting. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
11. A Grid Resource Discovery Method Based on Adaptive k-Nearest Neighbors Clustering.
- Author
- Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Dress, Andreas, Xu, Yinfeng, Zhu, Binhai, Zhang, Yan, and Jia, Yan
- Abstract
Several features of today's grid are based on centralized or hierarchical services. However, as the grid size increases, some of their functions, especially resource discovery, should be decentralized to avoid performance bottlenecks and guarantee scalability. A novel grid resource discovery method based on adaptive k-Nearest Neighbors clustering is presented in this paper. A class is formed by a collection of nodes with some similarities in their characteristics; each class is managed by a leader and consists of members that serve as workers. Resource requests are ideally forwarded to an appropriate class leader that then directs them to one of its workers. This method can handle resource requests by searching a small subset out of a large number of nodes by resource clustering, which improves resource query efficiency; on the other hand, it also achieves good scalability by managing grid resources with an adaptive mechanism. A series of experiments shows that the method presented in this paper achieves better scalability and more efficient lookup performance than other existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
12. The Tangent FFT.
- Author
- Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Boztaş, Serdar, Lu, Hsiao-Feng (Francis), and Bernstein, Daniel J.
- Abstract
The split-radix FFT computes a size-n complex DFT, when n is a large power of 2, using just $4n\lg n-6n+8$ arithmetic operations on real numbers. This operation count was first announced in 1968, stood unchallenged for more than thirty years, and was widely believed to be best possible. Recently James Van Buskirk posted software demonstrating that the split-radix FFT is not optimal. Van Buskirk's software computes a size-n complex DFT using only $(34/9+o(1))n\lg n$ arithmetic operations on real numbers. There are now three papers attempting to explain the improvement from 4 to 34/9: Johnson and Frigo, IEEE Transactions on Signal Processing, 2007; Lundy and Van Buskirk, Computing, 2007; and this paper. This paper presents the "tangent FFT," a straightforward in-place cache-friendly DFT algorithm having exactly the same operation counts as Van Buskirk's algorithm. This paper expresses the tangent FFT as a sequence of standard polynomial operations, and pinpoints how the tangent FFT saves time compared to the split-radix FFT. This description is helpful not only for understanding and analyzing Van Buskirk's improvement but also for minimizing the memory-access costs of the FFT. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
13. Remote patient monitoring and classifying using the internet of things platform combined with cloud computing.
- Author
- Iranpak, Somayeh, Shahbahrami, Asadollah, and Shakeri, Hassan
- Subjects
- INTERNET of things, PATIENT monitoring, ELECTRONIC data processing, ALGORITHMS, COMPUTER science, TELEMEDICINE, CLOUD computing
- Abstract
Many researchers have recently considered patients' health and sought an optimal and appropriate solution. With the advent of technologies such as cloud computing, the Internet of Things and 5G, information can be exchanged faster and more securely. The Internet of Things (IoT) offers many opportunities in the field of e-health. This technology can improve health services and lead to various innovations in this regard. Using cloud computing and IoT in this process can significantly improve the monitoring of patients. Therefore, it is important to provide a useful method in the medical industry and computer science to monitor the status of patients using connected sensors. Thus, owing to its efficiency, speed, and accuracy in data processing and classification, cloud computing is proposed for processing the data collected from remote patient sensors and the IoT platform. In this paper, a prioritization system is used to prioritize sensitive information in IoT, and in cloud computing an LSTM (long short-term memory) deep neural network is applied to classify and monitor patients' condition remotely, which can be considered an important innovative aspect of this paper. Sensor data in the IoT platform is sent to the cloud over the 5th-generation network. The core of the cloud computing layer uses the LSTM deep neural network algorithm. By simulating the proposed method and comparing the obtained results with other methods, it is observed that the accuracy of the proposed method is 97.13%, an average improvement of 10.41% over the other methods. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
14. A meta-algorithm for finding large k-plexes.
- Author
- Conte, Alessio, Firmani, Donatella, Patrignani, Maurizio, and Torlone, Riccardo
- Subjects
- MAGNITUDE (Mathematics), COMPUTER science, ALGORITHMS, METAHEURISTIC algorithms
- Abstract
We focus on the automatic detection of communities in large networks, a challenging problem in many disciplines (such as sociology, biology, and computer science). Humans tend to associate to form families, villages, and nations. Similarly, the elements of real-world networks naturally tend to form highly connected groups. A popular model to represent such structures is the clique, that is, a set of fully interconnected nodes. However, it has been observed that cliques are too strict to represent communities in practice. The k-plex relaxes the notion of clique, by allowing each node to miss up to k connections. Although k-plexes are more flexible than cliques, finding them is more challenging as their number is greater. In addition, most of them are small and not significant. In this paper we tackle the problem of finding only large k-plexes (i.e., comparable in size to the largest clique) and design a meta-algorithm that can be used on top of known enumeration algorithms to return only significant k-plexes in a fraction of the time. Our approach relies on: (1) methods for strongly reducing the search space and (2) decomposition techniques based on the efficient computation of maximal cliques. We demonstrate experimentally that known enumeration algorithms equipped with our approach can run orders of magnitude faster than full enumeration. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
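The k-plex relaxation described above, in its standard form, requires every node of the candidate set S to have at least |S| - k neighbours inside S (so a clique is a 1-plex, each node "missing" only itself). That condition is easy to state as a check; this small verifier is an illustration only, not the paper's enumeration machinery:

```python
# Checker for the k-plex condition: every node of the candidate set has
# at least |S| - k neighbours inside the set. The toy graph is invented.

def is_k_plex(adj, nodes, k):
    """adj: dict node -> set of neighbours; nodes: candidate vertex set."""
    return all(len(adj[v] & nodes) >= len(nodes) - k for v in nodes)

# Four nodes forming a clique minus the edge a-d.
adj = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "d"},
    "d": {"b", "c"},
}
s = {"a", "b", "c", "d"}
print(is_k_plex(adj, s, 1))  # False: 'a' and 'd' miss one connection each
print(is_k_plex(adj, s, 2))  # True: each node may miss up to 2 connections
```

Enumeration algorithms must consider every maximal set passing this test, which is why the paper's pruning of small, insignificant k-plexes pays off.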
15. An Efficient Algorithm of Star Subgraph Queries on Urban Traffic Knowledge Graph.
- Author
- Sun, Tao, Xu, Jianqiu, and Hu, Caiping
- Subjects
- KNOWLEDGE graphs, CITY traffic, ALGORITHMS, URBAN planning, COMPUTER science
- Abstract
Knowledge graphs have wide applications in the field of computer science. In the knowledge service environment, information is vast and growing explosively, and it is difficult to find knowledge about common phenomena. The urban traffic knowledge graph is a knowledge system that formally describes urban traffic concepts, entities and their interrelationships. It has great application potential in scenarios such as user travel, route planning, and urban planning. This paper first defines the urban traffic knowledge graph and the star subgraph query on it. Then, road network data and trajectory data are collected to extract urban traffic knowledge, and the urban traffic knowledge graph is constructed from this knowledge. Finally, a star subgraph query algorithm on the urban traffic knowledge graph is proposed. A discussion of the star subgraph query mode gives the corresponding application scenarios of our method in the urban traffic knowledge graph. Experimental results verify the performance advantages of this method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
16. A new algorithmic decision for categorical syllogisms via Carroll's diagrams.
- Author
- Kircali Gursoy, Necla, Senturk, Ibrahim, Oner, Tahsin, and Gursoy, Arif
- Subjects
- SYLLOGISM, ALGORITHMS, COMPUTER science, CHARTS, diagrams, etc., ARTIFICIAL intelligence, COMPUTER engineering
- Abstract
In this paper, we propose a new effective algorithm for categorical syllogisms using a calculus system, Syllogistic Logic with Carroll Diagrams, which provides a formal approach to logical reasoning with diagrams for representing the fundamental Aristotelian categorical syllogisms. We show that this logical reasoning is closed under the syllogistic criterion of inference. The calculus system is therefore implemented in a formalism that combines bilateral and trilateral diagrammatic representations with a simple algorithmic structure. Moreover, no specific knowledge or special ability is needed to understand this decision procedure, or to use it in an algorithmic system. Consequently, the empirical contribution of this paper is to design, for the first time in the literature, a polynomial-time algorithm that can assist researchers in the various areas of science that use categorical syllogisms, such as artificial intelligence, engineering, and computer science. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
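The decision problem the abstract addresses can be made concrete with a brute-force semantic check (an illustration of the problem only; the paper's method uses Carroll diagrams, not model enumeration). A syllogism is valid iff no assignment of the terms S, M, P to subsets of a universe satisfies both premises while falsifying the conclusion; a three-element universe is assumed here to suffice for exposing counterexamples to the classical forms:

```python
# Brute-force validity check for categorical syllogisms over a small
# universe: search for a countermodel among all subset assignments.

from itertools import combinations, product

def powerset(universe):
    return [frozenset(c) for r in range(len(universe) + 1)
            for c in combinations(universe, r)]

def all_are(a, b):
    return a <= b

def valid(premises, conclusion, universe=(0, 1, 2)):
    subsets = powerset(universe)
    for s, m, p in product(subsets, repeat=3):
        if all(f(s, m, p) for f in premises) and not conclusion(s, m, p):
            return False   # countermodel found
    return True

# Barbara: All M are P, All S are M |- All S are P  (valid)
print(valid([lambda s, m, p: all_are(m, p),
             lambda s, m, p: all_are(s, m)],
            lambda s, m, p: all_are(s, p)))    # True

# Undistributed middle: All S are M, All P are M |- All S are P  (invalid)
print(valid([lambda s, m, p: all_are(s, m),
             lambda s, m, p: all_are(p, m)],
            lambda s, m, p: all_are(s, p)))    # False
```

Unlike this exhaustive search, the paper's Carroll-diagram calculus decides validity in polynomial time.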
17. Extracting Information of Anti-AIDS Inhibitor from the Biological Literature Based on Ontology.
- Author
- Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Preparata, Franco P., Qizhi Fang, Chunyan Zhang, Jin Du, and Ruisheng Zhang
- Abstract
Nowadays, finding inhibitors of retrovirus, protease and integrase is still the primary problem in anti-AIDS drug design. However, the research and experimental results about anti-AIDS inhibitors mainly exist in large numbers of scientific papers, not in a machine-readable format. In this paper, we introduce an Ontology-based Information Extraction (OIE) approach to extract anti-AIDS inhibitors from the literature. Key to the approach is the construction of an anti-AIDS inhibitors ontology, which provides a semantic framework for information extraction and corpus annotation. Consequently, this paper primarily focuses on the architecture of OIE, on which we construct the anti-AIDS ontology using the Protégé tool and annotate the corpus. Finally, we employ a demonstration application scenario to show how to annotate PubMed articles based on the ontology we have constructed. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
18. A Sketch of a Dynamic Epistemic Semiring.
- Author
- Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Rangan, C. Pandu, Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Leivant, Daniel, de Queiroz, Ruy, and Solin, Kim
- Abstract
This paper proposes a semiring formulation for reasoning about an agent's changing beliefs: a dynamic epistemic semiring (DES). A DES is a modal semiring extended with epistemic-action operators. The paper concentrates on the revision operator by proposing an axiomatisation, developing a basic calculus and deriving the classical AGM revision axioms in the algebra. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
19. Logiweb - A System for Web Publication of Mathematics.
- Author
- Iglesias, Andrés, Takayama, Nobuki, and Grue, Klaus
- Abstract
Logiweb is a system for electronic publication and archival of machine checked mathematics of high typographic quality. It can verify the formal correctness of pages, i.e. mathematical papers expressed suitably. The present paper is an example of such a Logiweb page and the present paper is formally correct in the sense that it has been verified by Logiweb. The paper may of course contain informal errors like any other paper. Logiweb is neutral with respect to choice of logic and choice of notation and can support any kind of formal reasoning. Logiweb uses the World Wide Web to publish Logiweb pages and Logiweb pages can be viewed by ordinary Web browsers. Logiweb pages can reference definitions, lemmas, and proofs on previously referenced Logiweb pages across the Internet. When Logiweb verifies a Logiweb page, it takes all transitively referenced pages into account. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
20. Matlab-Based Problem-Solving Environment for Geometric Processing of Surfaces.
- Author
- Iglesias, Andrés, Takayama, Nobuki, Gálvez, A., and Iglesias, A.
- Abstract
In this paper a new problem-solving environment (PSE) for geometric processing of surfaces is introduced. The PSE has been designed to be responsive to the needs of our collaboration with an industrial partner, the Spanish company CANDEMAT S.A., which builds moulds and dies for the automotive industry. The PSE has been implemented in Matlab and aims to support the full range of activities carried out by our partner in the field of geometric processing of surfaces for the automotive industry. Firstly, the paper describes the architecture of the system and some implementation details. Then, some examples of its application to critical problems in the automotive industry - such as the computation of the intersection curves of surfaces, the generation of tool-path trajectories for NC machining, and the visualization of geometric entities stored in industrial files of several formats - are briefly described. The PSE has been shown to provide our partner with accurate, reliable solutions to these and other problems and to serve as a communication channel for the exchange of geometric data as well as a platform for trials and research support. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
21. On Genome Evolution with Innovation.
- Author
- Královič, Rastislav, Urzyczyn, Paweł, Wójtowicz, Damian, and Tiuryn, Jerzy
- Abstract
We introduce and analyse a simple probabilistic model of genome evolution. It is based on three fundamental evolutionary events: gene duplication, loss and innovation, and it is called the DLI model. The focus of the paper is the size distribution of gene families. Formulas for equilibrium gene family sizes are derived, showing that they follow a logarithmic distribution. We also consider a disjoint union of DLI models and present the results of this study. Some empirical results for microbial genomes are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
22. Robust Quantum Algorithms with ε-Biased Oracles.
- Author
- Chen, Danny Z., Lee, D. T., Suzuki, Tomoya, Yamashita, Shigeru, Nakanishi, Masaki, and Watanabe, Katsumasa
- Abstract
This paper considers the quantum query complexity of ε-biased oracles that return the correct value with probability only 1/2 + ε. In particular, we show a quantum algorithm to compute N-bit OR functions with $O(\sqrt{N}/{\varepsilon})$ queries to ε-biased oracles. This improves the known upper bound of $O(\sqrt{N}/{\varepsilon}^2)$ and matches the known lower bound; we answer the conjecture raised by the paper [1] affirmatively. We also show a quantum algorithm to cope with the situation in which we have no knowledge about the value of ε. This contrasts with the corresponding classical situation, where it is almost hopeless to achieve more than a constant success probability without knowing the value of ε. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
23. A Tight Bound on the Number of Mobile Servers to Guarantee the Mutual Transferability Among Dominating Configurations.
- Author
- Xiaotie Deng, Dingzhu Du, and Fujita, Satoshi
- Abstract
In this paper, we propose a new framework to provide continuous services to users by a collection of mobile servers distributed over an interconnection network. We model these mobile servers as a subset of host computers, and assume that a user host can receive the service if at least one adjacent host computer (including itself) plays the role of a server; i.e., we assume that the service cannot be routed via the interconnection network. The main results obtained in this paper are summarized as follows: for the class of trees with n hosts, ⌈(n+1)/2⌉ mobile servers are necessary and sufficient to realize continuous services by the mobile servers, and for the class of Hamiltonian graphs with n hosts, ⌈(n+1)/3⌉ mobile servers are necessary and sufficient. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
24. On deriving test suites for nondeterministic finite state machines with time-outs.
- Author
- Shabaldina, N. and Galimullin, R.
- Subjects
- ALGORITHMS, FINITE state machines, COMPUTER simulation, COMPUTER science, SCIENCE, ALGEBRA
- Abstract
In the paper, the non-separability relation for finite state machines with time-outs (TFSMs) is studied. A specific feature of such machines is integer-valued delays, or time-outs, which determine how long the machine will stay in one or another state if there are no input actions. If the time-out is over and no input symbol has been applied, then the TFSM state is changed according to the transition under time delay. In the paper, an algorithm for constructing a separating sequence for such finite state machines is presented. Here, a separating sequence is a timed input sequence for which the sets of output responses of the TFSMs do not intersect; hence, it is sufficient to apply the separating sequence once in order to distinguish the TFSMs by their output reactions. This algorithm underlies the algorithm for constructing test suites with respect to the non-separability relation in the case where the fault domain is specified by means of a mutation machine. Test suite derivation with respect to the non-separability relation by way of 'TFSM to FSM' transformation is also discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
25. On Algorithmic Statistics for Space-bounded Algorithms.
- Author
-
Milovanov, Alexey
- Subjects
KOLMOGOROV complexity ,STATISTICS ,RANDOM numbers ,COMPUTER science ,ALGORITHMS ,COMPUTATIONAL complexity - Abstract
Algorithmic statistics looks for models of observed data that are good in the following sense: a model is simple (i.e., has small Kolmogorov complexity) and captures all the algorithmically discoverable regularities in the data. However, this idea cannot be used in practice as is because Kolmogorov complexity is not computable. In this paper we develop an algorithmic version of algorithmic statistics that uses space-bounded Kolmogorov complexity. We prove a space-bounded version of a basic result from "classical" algorithmic statistics, the connection between optimality and randomness deficiencies. The main tool is the Nisan–Wigderson pseudo-random generator. An extended abstract of this paper was presented at the 12th International Computer Science Symposium in Russia (Milovanov 10). [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
26. Robust Evolutionary Algorithm Design for Socio-economic Simulation.
- Author
-
Alkemade, Floortje, Poutré, Han La, and Amman, Hans M.
- Subjects
COMPUTER science ,EVOLUTIONARY economics ,EVOLUTIONARY computation ,SOCIAL learning ,ALGORITHMS ,SIMULATION methods & models ,ECONOMIC models ,STOCHASTIC convergence - Abstract
Agent-based computational economics (ACE) combines elements from economics and computer science. In this paper, we focus on the relation between the evolutionary technique that is used and the economic problem that is modeled. In the field of ACE, economic simulations often derive parameter settings for the evolutionary algorithm directly from the values of the economic model parameters. In this paper, we compare two important approaches that are dominating ACE research and show that the above practice may hinder the performance of the evolutionary algorithm and thereby hinder agent learning. More specifically, we show that economic model parameters and evolutionary algorithm parameters should be treated separately by comparing the two widely used approaches to social learning with respect to their convergence properties and robustness. This leads to new considerations for the methodological aspects of evolutionary algorithm design within the field of ACE. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
27. Connected Subgraph Defense Games.
- Author
-
Akrida, Eleni C., Deligkas, Argyrios, Melissourgos, Themistoklis, and Spirakis, Paul G.
- Subjects
NASH equilibrium ,THERMODYNAMIC control ,ALGORITHMS ,COMPUTER science ,POLYNOMIAL time algorithms ,APPROXIMATION algorithms - Abstract
We study a security game over a network played between a defender and k attackers. Every attacker chooses, probabilistically, a node of the network to damage. The defender chooses, probabilistically as well, a connected induced subgraph of the network of λ nodes to scan and clean. Each attacker wishes to maximize the probability of escaping the defender's cleaning. On the other hand, the goal of the defender is to maximize the expected number of attackers that she catches. This game is a generalization of the model from the seminal paper of Mavronicolas et al. (in: International symposium on mathematical foundations of computer science, MFCS, pp 717–728, 2006). We are interested in Nash equilibria of this game, as well as in characterizing defense-optimal networks which allow for the best equilibrium defense ratio; this is the ratio of k over the expected number of attackers that the defender catches in equilibrium. We provide a characterization of the Nash equilibria of this game and defense-optimal networks. The equilibrium characterizations allow us to show that even if the attackers are centrally controlled the equilibria of the game remain the same. In addition, we give an algorithm for computing Nash equilibria. Our algorithm requires exponential time in the worst case, but it is polynomial-time for λ constantly close to 1 or n. For the special case of tree-networks, we further refine our characterization, which allows us to derive a polynomial-time algorithm for deciding whether a tree is defense-optimal and, if this is the case, computing a defense-optimal Nash equilibrium. On the other hand, we prove that it is NP-hard to find a best-defense strategy if the tree is not defense-optimal. We complement this negative result with a polynomial-time constant-approximation algorithm that computes solutions that are close to optimal ones for general graphs.
Finally, we provide asymptotically (almost) tight bounds for the Price of Defense for any λ ; this is the worst equilibrium defense ratio over all graphs. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
28. TKAP: Efficiently processing top-k query on massive data by adaptive pruning.
- Author
-
Han, Xixian, Liu, Xianmin, Li, Jianzhong, and Gao, Hong
- Subjects
ALGORITHMS ,DATA ,PROBABILITY theory ,COMPUTER science ,INFORMATION science - Abstract
In many applications, top-k query is an important operation to return a set of interesting points in a potentially huge data space. The existing algorithms, which either maintain too many candidates, require assistant structures built on a specific attribute subset, or return results with only a probabilistic guarantee, cannot process top-k queries on massive data efficiently. This paper proposes a sorted-list-based TKAP algorithm, which utilizes data structures of low space overhead, to efficiently compute top-k results on massive data. In round-robin retrieval on the sorted lists, TKAP performs an adaptive pruning operation and maintains the required candidates until the stop condition is satisfied. The adaptive pruning operation can be adjusted using the information obtained in round-robin retrieval to achieve a better pruning effect. The adaptive pruning rule is developed in this paper, along with its theoretical analysis. Extensive experimental results, obtained on synthetic and real-life data sets, show the significant advantage of TKAP over the existing algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
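TKAP's adaptive pruning rule is not given in the abstract above, so the sketch below shows only the generic round-robin/stop-condition idea it builds on (in the spirit of Fagin's threshold algorithm); the lists, scores, and scoring-by-sum scheme are all illustrative assumptions:

```python
import heapq

def topk_threshold(lists, scores, k):
    """lists: id sequences, each sorted by descending per-list score;
    scores: dict id -> tuple of per-list scores; returns top-k (score, id)."""
    heap, seen = [], set()          # min-heap of the k best totals so far
    depth = 0
    while depth < max(len(lst) for lst in lists):
        for li, lst in enumerate(lists):       # round-robin over the lists
            if depth >= len(lst):
                continue
            oid = lst[depth]
            if oid not in seen:
                seen.add(oid)
                heapq.heappush(heap, (sum(scores[oid]), oid))
                if len(heap) > k:
                    heapq.heappop(heap)
        # Stop condition: best total any unseen object could still reach.
        threshold = sum(scores[lst[depth]][li]
                        for li, lst in enumerate(lists) if depth < len(lst))
        if len(heap) == k and heap[0][0] >= threshold:
            break
        depth += 1
    return sorted(heap, reverse=True)

scores = {'a': (0.9, 0.8), 'b': (0.8, 0.9), 'c': (0.1, 0.2)}
print(topk_threshold([['a', 'b', 'c'], ['b', 'a', 'c']], scores, 2))
```

Note that the stop condition prunes 'c' without ever computing its total from both lists, which is the kind of saving TKAP pushes much further on massive data.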
29. A Loop Transformation Algorithm for Communication Overlapping.
- Author
-
Ishizaki, Kazuaki, Komatsu, Hideaki, and Nakatani, Toshio
- Subjects
COMPILERS (Computer programs) ,ALGORITHMS ,PARALLEL programming ,COMPUTER programming ,COMPUTER science - Abstract
Overlapping communication with computation is a well-known approach to improving performance. Previous research has focused on optimizations performed by the programmer. This paper presents a compiler algorithm that automatically determines the appropriate loop indices of a given nested loop and applies loop interchange and tiling in order to overlap communication with computation. The algorithm avoids generating redundant communication by providing a framework for combining information on data dependence, communication, and reuse. It also describes a method of generating messages to exchange data between processors for tiled loops on distributed memory machines. The algorithm has been implemented in our High Performance Fortran (HPF) compiler, and experimental results have shown its effectiveness on distributed memory machines, such as the RISC System/6000 Scalable POWERparallel System. This paper also discusses the architectural problems of efficient optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
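As a toy illustration of the tiling half of the transformation described above (this is not the paper's compiler algorithm, just the shape of the restructured loop): splitting an index range into tiles creates the boundaries at which generated code can issue communication for one tile while computing on another.

```python
def tiled_sum(a, tile):
    """Same result as sum(a), restructured into a loop nest over tiles."""
    total = 0
    for t in range(0, len(a), tile):   # outer loop over tile starts
        # A compiler overlapping communication with computation could
        # issue the data exchange for tile t+tile here, asynchronously.
        for x in a[t:t + tile]:        # inner loop within one tile
            total += x
    return total

print(tiled_sum(list(range(10)), 3))  # 45, same as the untiled loop
```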
30. Communication in Networks with Random Dependent Faults.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Kučera, Luděk, Kučera, Antonín, Kranakis, Evangelos, Paquette, Michel, and Pelc, Andrzej
- Abstract
The aim of this paper is to study communication in networks where nodes fail in a random dependent way. In order to capture fault dependencies, we introduce the neighborhood fault model, where damaging events, called spots, occur randomly and independently with probability p at nodes of a network, and cause faults in the given node and all of its neighbors. Faults at distance at most 2 become dependent in this model and are positively correlated. We investigate the impact of spot probability on feasibility and time of communication in the fault-free part of the network. We show a network which supports fast communication with high probability, if p ≤ 1/(c log n). We also show that communication is not feasible with high probability in most classes of networks, for constant spot probabilities. For smaller spot probabilities, high probability communication is supported even by bounded degree networks. It is shown that the torus supports communication with high probability when p decreases faster than 1/n^{1/2}, and does not when p ∈ 1/O(n^{1/2}). Furthermore, a network built of tori is designed, with the same fault-tolerance properties and additionally supporting fast communication. We show, however, that networks of degree bounded by a constant d do not support communication with high probability, if p ∈ 1/O(n^{1/d}). While communication in networks with independent faults was widely studied, this is the first analytic paper which investigates network communication for random dependent faults. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
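The neighborhood fault model described above is easy to simulate. A toy sketch (the 6-cycle graph is illustrative): a "spot" hits each node independently with probability p and knocks out that node together with all of its neighbors.

```python
import random

def faulty_nodes(adj, p, rng):
    """adj: dict node -> set of neighbors; returns the set of faulty nodes."""
    spots = {v for v in adj if rng.random() < p}        # independent spots
    return spots | {u for v in spots for u in adj[v]}   # spots plus neighbors

cycle = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}  # a 6-cycle
print(faulty_nodes(cycle, 0.3, random.Random(1)))
```

Note how the dependence arises: a single spot produces a whole faulty neighborhood, so faults at distance at most 2 are positively correlated, exactly as the abstract states.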
31. Series-Parallel Languages on Scattered and Countable Posets.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Kučera, Luděk, Kučera, Antonín, Bedon, Nicolas, and Rispal, Chloé
- Abstract
We initiate a study on automata recognizing labelled posets constructed from scattered and countable linear orderings. More precisely, the class of labelled posets considered in this paper is the smallest containing letters, closed under finite parallel operation and sequential product indexed by all countable and scattered linear orderings. The first result of this paper establishes that those labelled posets are precisely the N-free ones. The second result is a Kleene-like theorem, which establishes that the class of languages of labelled posets accepted by branching automata is exactly the class of rational languages. This generalizes both the finite [9] and ω-labelled posets [2,6] cases, and the Kleene-like theorem on words on linear orderings [3]. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
32. Space-Conscious Compression.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Kučera, Luděk, Kučera, Antonín, Gagie, Travis, and Manzini, Giovanni
- Abstract
Compression is most important when space is in short supply, so compression algorithms are often implemented in limited memory. Most analyses ignore memory constraints as an implementation detail, however, creating a gap between theory and practice. In this paper we consider the effect of memory limitations on compression algorithms. In the first part of the paper we assume the memory available is fixed and prove nearly tight upper and lower bounds on how much is needed to compress a string close to its k-th order entropy. In the second part we assume the memory available grows (slowly) as more and more characters are read. In this setting we show that the rate of growth of the available memory determines the speed at which the compression ratio approaches the entropy. In particular, we establish a relationship between the rate of growth of the sliding window in the LZ77 algorithm and its convergence rate. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
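The sliding-window trade-off discussed above can be seen in a minimal LZ77-style parse (illustrative only, not the paper's analysis): a larger window finds longer back-references and so produces fewer phrases.

```python
def lz77(text, window):
    """Greedy LZ77-style factorization; matches may not overlap position i."""
    out, i = [], 0
    while i < len(text):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):   # candidate match starts
            l = 0
            while j + l < i and i + l < len(text) and text[j + l] == text[i + l]:
                l += 1
            if l > best_len:
                best_len, best_off = l, i - j
        if best_len > 0:
            out.append((best_off, best_len))     # (back-offset, match length)
            i += best_len
        else:
            out.append(text[i])                  # literal character
            i += 1
    return out

print(lz77("abab", 4))  # ['a', 'b', (2, 2)]
```

Shrinking the window degrades the parse: on "aaaa", a window of 3 yields 3 phrases while a window of 1 needs 4, mirroring the convergence-rate phenomenon the abstract describes.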
33. Using Bit Selection to Do Routing Table Lookup.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Preparata, Franco P., Qizhi Fang, Zhenqiang Li, Dongqu Zheng, and Yan Ma
- Abstract
Tree-based algorithms, such as Patricia, LC-trie, and LPFST, are widely used to do longest prefix matching (LPM). These algorithms use all the bits of the prefix to build the tree, and the bits are used from the most significant bit to the least significant bit sequentially. Thus the tree is not balanced and the tree depth is high. In this paper, we propose the bit selection tree (BS-tree) to do LPM. The bits of the prefix are selected to build the BS-tree based on their costs, defined in this paper. The BS-tree has lower tree depth and is more balanced compared to other tree-based schemes. The BS-tree has good scalability to the length of the IP address and is suitable for both IPv4 and IPv6. We evaluate the performance of BS-tree using both IPv4 and IPv6, and specially refine it for IPv6 based on observations of IPv6 addresses and real IPv6 routing tables. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
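The core bit-selection idea above can be sketched in a few lines: instead of branching on bits most-significant-first, pick the bit position that splits the current set of prefixes most evenly, keeping the tree shallow. (Equal-length binary strings and the imbalance cost below are illustrative simplifications; the paper defines its own cost on real variable-length prefixes.)

```python
def best_bit(prefixes):
    """Return the bit index whose 0/1 split of the prefixes is most even."""
    n = len(prefixes)
    def imbalance(i):                  # distance from a perfect 50/50 split
        ones = sum(p[i] == '1' for p in prefixes)
        return abs(n - 2 * ones)
    return min(range(len(prefixes[0])), key=imbalance)

# Bit 0 never discriminates here and bit 3 is always 1; bit 1 splits 2/2.
print(best_bit(['0001', '0011', '0101', '0111']))  # 1
```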
34. On Kabatianskii-Krouk-Smeets Signatures.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Rangan, C. Pandu, Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Carlet, Claude, Sunar, Berk, Cayrel, Pierre-Louis, Otmani, Ayoub, and Vergnaud, Damien
- Abstract
Kabatianskii, Krouk and Smeets proposed in 1997 a digital signature scheme based on random error-correcting codes. In this paper we investigate the security and the efficiency of their proposal. We show that a passive attacker who may intercept just a few signatures can recover the private key. We give precisely the number of signatures required to achieve this goal. This enables us to prove that all the schemes given in the original paper can be broken with at most 20 signatures. We improve the efficiency of these schemes, firstly by providing parameters that make it possible to sign about 40 messages, and secondly by describing a way to extend these few-times signatures into classical multi-time signatures. We finally study their key sizes and a means of reducing them via more compact matrices. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
35. Inverted Edwards Coordinates.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Boztaş, Serdar, Lu, Hsiao-Feng (Francis), Bernstein, Daniel J., and Lange, Tanja
- Abstract
Edwards curves have attracted great interest for several reasons. When curve parameters are chosen properly, the addition formulas use only 10M + 1S. The formulas are strongly unified, i.e., work without change for doublings; even better, they are complete, i.e., work without change for all inputs. Dedicated doubling formulas use only 3M + 4S, and dedicated tripling formulas use only 9M + 4S. This paper introduces inverted Edwards coordinates. Inverted Edwards coordinates (X1:Y1:Z1) represent the affine point (Z1/X1,Z1/Y1) on an Edwards curve; for comparison, standard Edwards coordinates (X1:Y1:Z1) represent the affine point (X1/Z1,Y1/Z1). This paper presents addition formulas for inverted Edwards coordinates using only 9M + 1S. The formulas are not complete but still are strongly unified. Dedicated doubling formulas use only 3M + 4S, and dedicated tripling formulas use only 9M + 4S. Inverted Edwards coordinates thus save 1M for each addition, without slowing down doubling or tripling. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
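The coordinate maps stated in the abstract above, together with the standard unified affine Edwards addition law, can be sketched as follows. Hedged: the operation counts like 9M + 1S apply to the projective/inverted formulas, not to this affine form, and the curve d = -1 over the rationals is purely illustrative.

```python
from fractions import Fraction as F

def ed_add(p, q, d):
    """Unified affine addition on the Edwards curve x^2 + y^2 = 1 + d*x^2*y^2."""
    (x1, y1), (x2, y2) = p, q
    t = d * x1 * x2 * y1 * y2
    return ((x1 * y2 + x2 * y1) / (1 + t), (y1 * y2 - x1 * x2) / (1 - t))

def inverted_to_affine(X, Y, Z):
    return (F(Z, X), F(Z, Y))        # inverted (X:Y:Z) represents (Z/X, Z/Y)

def standard_to_affine(X, Y, Z):
    return (F(X, Z), F(Y, Z))        # standard (X:Y:Z) represents (X/Z, Y/Z)

d = F(-1)                            # illustrative curve x^2 + y^2 = 1 - x^2*y^2
P = (F(1), F(0))                     # a point on this curve; (0, 1) is neutral
print(ed_add(P, (F(0), F(1)), d))    # adding the neutral gives back P
print(ed_add(P, P, d))               # doubling P gives (0, -1)
```

The two `*_to_affine` helpers make the "1M saved per addition" bookkeeping concrete: the same projective triple denotes different affine points under the two conventions.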
36. Computing Upward Topological Book Embeddings of Upward Planar Digraphs.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Tokuyama, Takeshi, Giordano, F., Liotta, G., Mchedlidze, T., and Symvonis, A.
- Abstract
This paper studies the problem of computing an upward topological book embedding of an upward planar digraph G, i.e. a topological book embedding of G where all edges are monotonically increasing in the upward direction. Besides having its own inherent interest in the theory of upward book embeddability, the question has applications to well studied research topics of computational geometry and of graph drawing. The main results of the paper are as follows: (1) every upward planar digraph G with n vertices admits an upward topological book embedding such that every edge of G crosses the spine of the book at most once; (2) every upward planar digraph G with n vertices admits a point-set embedding on any set of n distinct points in the plane such that the drawing is upward and every edge of G has at most two bends; (3) every pair of upward planar digraphs sharing the same set of n vertices admits an upward simultaneous embedding with at most two bends per edge. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
37. Algorithms for Minimum m-Connected k-Dominating Set Problem.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Dress, Andreas, Xu, Yinfeng, Zhu, Binhai, Shang, Weiping, and Yao, Frances
- Abstract
In wireless sensor networks, virtual backbone has been proposed as the routing infrastructure to alleviate the broadcasting storm problem and perform some other tasks such as area monitoring. Previous work in this area has mainly focused on how to set up a small virtual backbone for high efficiency, which is modelled as the minimum Connected Dominating Set (CDS) problem. In this paper we consider how to establish a small virtual backbone to balance efficiency and fault tolerance. This problem can be formalized as the minimum m-connected k-dominating set problem, which is a general version of minimum CDS problem with m = 1 and k = 1. In this paper we will propose some approximation algorithms for this problem that beat the current best performance ratios. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
38. The Minimum Risk Spanning Tree Problem.
- Author
-
Hutchison, David, Kanade, Takeo, Kittler, Josef, Kleinberg, Jon M., Mattern, Friedemann, Mitchell, John C., Naor, Moni, Nierstrasz, Oscar, Pandu Rangan, C., Steffen, Bernhard, Sudan, Madhu, Terzopoulos, Demetri, Tygar, Doug, Vardi, Moshe Y., Weikum, Gerhard, Dress, Andreas, Xu, Yinfeng, Zhu, Binhai, Chen, Xujin, and Hu, Jie
- Abstract
This paper studies a spanning tree problem with interval data that finds diverse applications in network design. Given an underlying network G = (V,E), each link e ∈ E can be established by paying a cost $c_e\in[\underline{c}_e,\overline{c}_e]$, and accordingly takes a risk $\frac{\overline{c}_e-c_e}{\overline{c}_e-\underline{c}_e}$ of link failure. The minimum risk spanning tree (MRST) problem is to establish a spanning tree in G of total cost no more than a given constant so that the risk sum over the links on the spanning tree is minimized. In this paper, we propose an exact algorithm for the MRST problem that has time-complexity of O(m^2 log m log n (m + n log n)), where m = |E| and n = |V|. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
39. Reversal Distance for Strings with Duplicates: Linear Time Approximation Using Hitting Set.
- Author
-
Erlebach, Thomas, Kaklamanis, Christos, Kolman, Petr, and Waleń, Tomasz
- Abstract
In the last decade there has been an ongoing interest in string comparison problems; to a large extent the interest was stimulated by genome rearrangement problems in computational biology, but related problems appear in many other areas of computer science. Particular attention has been given to the problem of sorting by reversals (SBR): given two strings, A and B, find the minimum number of reversals that transform the string A into the string B (a reversal ρ(i,j), i
- Published
- 2007
- Full Text
- View/download PDF
40. Primal-Dual Enumeration for Multiparametric Linear Programming.
- Author
-
Iglesias, Andrés, Takayama, Nobuki, Jones, Colin N., and Maciejowski, Jan M.
- Abstract
Optimal control problems for constrained linear systems with a linear cost can be posed as multiparametric linear programs (pLPs) and solved explicitly offline. Several algorithms have recently been proposed in the literature that solve these pLPs in a fairly efficient manner, all of which have as a base operation the computation and removal of redundant constraints. For many problems, it is this redundancy elimination that requires the vast majority of the computation time. This paper introduces a new solution technique for multiparametric linear programs based on the primal-dual paradigm. The proposed approach reposes the problem as the vertex enumeration of a linearly transformed polytope and then simultaneously computes both its vertex and halfspace representations. Exploitation of the halfspace representation allows, for smaller problems, a very significant reduction in the number of redundancy elimination operations required, resulting in many cases in a much faster algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
41. MAX-SNP Hardness and Approximation of Selected-Internal Steiner Trees.
- Author
-
Chen, Danny Z., Lee, D. T., Sun-Yuan Hsieh, and Shih-Cheng Yang
- Abstract
In this paper, we consider an interesting variant of the well-known Steiner tree problem: Given a complete graph G = (V,E) with a cost function c: E → R+ and two subsets R and R′ satisfying $R'\subset R\subseteq V$, a selected-internal Steiner tree is a Steiner tree which contains (or spans) all the vertices in R such that no vertex in R′ is a leaf. The selected-internal Steiner tree problem is to find a selected-internal Steiner tree with the minimum cost. In this paper, we show that the problem is MAX SNP-hard even when the costs of all edges in the input graph are restricted to either 1 or 2. We also present an approximation algorithm for the problem. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
42. Enumerate and Expand: New Runtime Bounds for Vertex Cover Variants.
- Author
-
Chen, Danny Z., Lee, D. T., Mölle, Daniel, Richter, Stefan, and Rossmanith, Peter
- Abstract
The enumerate-and-expand paradigm for solving NP-hard problems has been introduced and applied to some Vertex Cover variants in a recently published preliminary paper. In this paper we improve on the runtime for Connected Vertex Cover, obtaining a bound of O*(2.7606^k) (the O*-notation is equivalent to the well-known Landau notation, except that polynomial factors may be suppressed; for instance, 2^k k^2 n^3 = O*(2^k)), and use the technique to obtain the fastest known method for counting the number of vertex covers in a graph, which takes O*(1.3803^k) time. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
43. Improved Algorithms for the Minmax Regret 1-Median Problem.
- Author
-
Chen, Danny Z., Lee, D. T., Yu, Hung-I, Lin, Tzu-Chin, and Wang, Biing-Feng
- Abstract
This paper studies the problem of finding the 1-median on a graph where vertex weights are uncertain and the uncertainty is characterized by given intervals. It is required to find a minmax regret solution, which minimizes the worst-case loss in the objective function. Averbakh and Berman gave an O(mn^2 log n)-time algorithm for the problem on a general graph, and an O(n log^2 n)-time algorithm on a tree. In this paper, we improve these two bounds to O(mn^2 + n^3 log n) and O(n log n), respectively. Keywords: location theory, minmax regret optimization, medians. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
44. Correlation Dimension and the Quality of Forecasts Given by a Neural Network.
- Author
-
Cooper, S. Barry, Löwe, Benedikt, Torenvliet, Leen, Michalak, Krzysztof, and Kwasnicka, Halina
- Abstract
The problem addressed in this paper is searching for a dependence between the correlation dimension of a time series and the mean square error (MSE) obtained when predicting the future time series values using a multilayer perceptron. The relation between the correlation dimension and the ability of a neural network to adapt to sample data, represented by the in-sample mean square error, is also studied. The dependence between correlation dimension and in-sample and out-of-sample MSE is found in many real-life as well as artificial time series. The results presented in the paper were obtained using various neural network sizes and various activation functions of the output layer neurons. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
45. The Church-Turing Thesis: Breaking the Myth.
- Author
-
Cooper, S. Barry, Löwe, Benedikt, Torenvliet, Leen, Goldin, Dina, and Wegner, Peter
- Abstract
According to the interactive view of computation, communication happens during the computation, not before or after it. This approach, distinct from concurrency theory and the theory of computation, represents a paradigm shift that changes our understanding of what computation is and how it is modeled. Interaction machines extend Turing machines with interaction to capture the behavior of concurrent systems, promising to bridge these two fields. This promise is hindered by the widespread belief, incorrectly known as the Church-Turing thesis, that no model of computation more expressive than Turing machines can exist. Yet Turing's original thesis only refers to the computation of functions and explicitly excludes other computational paradigms such as interaction. In this paper, we identify and analyze the historical reasons for this widespread belief. Only by accepting that it is false can we begin to properly investigate formal models of interaction machines. We conclude the paper by presenting one such model, Persistent Turing Machines (PTMs). PTMs capture sequential interaction, which is a limited form of concurrency; they allow us to formulate the Sequential Interaction Thesis, going beyond the expressiveness of Turing machines and of the Church-Turing thesis. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
46. A Polynomial Space and Polynomial Delay Algorithm for Enumeration of Maximal Motifs in a Sequence.
- Author
-
Xiaotie Deng, Dingzhu Du, Arimura, Hiroki, and Uno, Takeaki
- Abstract
In this paper, we consider the problem of enumerating all maximal motifs in an input string for the class of repeated motifs with wild cards. A maximal motif is a representative motif that is not properly contained in any larger motif with the same location list. Although the enumeration problem for maximal motifs with wild cards has been studied in (Parida et al., CPM'01), (Pisanti et al., MFCS'03) and (Pelfrene et al., CPM'03), its output-polynomial time computability is still open. The main result of this paper is a polynomial space polynomial delay algorithm for the maximal motif enumeration problem for repeated motifs with wild cards. This algorithm enumerates all maximal motifs in an input string of length n with O(n^3) time per motif, O(n^2) space and O(n^3) delay. The key of the algorithm is depth-first search on a tree-shaped search route over all maximal motifs based on a technique called prefix-preserving closure extension. We also show an exponential lower bound and a succinctness result on the number of maximal motifs, which indicate the limit of a straightforward approach. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
47. Efficient Online String Matching Based on Characters Distance Text Sampling.
- Author
-
Faro, Simone, Marino, Francesco Pio, and Pavone, Arianna
- Subjects
INFORMATION retrieval ,COMPUTER science ,ELECTRONIC information resource searching ,ALGORITHMS ,APPLICATION software ,HAMMING distance ,PATTERN matching ,NATURAL language processing - Abstract
Searching for all occurrences of a pattern in a text is a fundamental problem in computer science with applications in many other fields, like natural language processing, information retrieval and computational biology. Sampled string matching is an efficient approach recently introduced in order to overcome the prohibitive space requirements of an index construction, on the one hand, and drastically reduce searching time for the online solutions, on the other hand. In this paper we present a new algorithm for the sampled string matching problem, based on a characters distance sampling approach. The main idea is to sample the distances between consecutive occurrences of a given pivot character and then to search online the sampled data for any occurrence of the sampled pattern, before verifying the original text. From a theoretical point of view we prove that, under suitable conditions, our solution can achieve both linear worst-case time complexity and optimal average-time complexity. From a practical point of view it turns out that our solution shows a sub-linear behaviour in practice and speeds up online searching by a factor of up to 9, using limited additional space whose amount goes from 11 to 2.8% of the text size, with a gain up to 50% if compared with previous solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
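The characters-distance idea in the abstract above can be sketched directly (this is an illustrative reconstruction, not the authors' exact algorithm or data layout): store only the gaps between consecutive occurrences of a pivot character, match the pattern's gap sequence against them, then verify each candidate in the original text.

```python
def gaps(s, pivot):
    """Positions of the pivot in s, and the distances between them."""
    pos = [i for i, ch in enumerate(s) if ch == pivot]
    return pos, [b - a for a, b in zip(pos, pos[1:])]

def sampled_search(text, pat, pivot):
    tpos, tgaps = gaps(text, pivot)
    ppos, pgaps = gaps(pat, pivot)
    if not ppos:                        # pivot absent from pattern: fall back
        return [i for i in range(len(text) - len(pat) + 1)
                if text[i:i + len(pat)] == pat]
    hits = []
    for j in range(len(tpos) - len(ppos) + 1):
        if tgaps[j:j + len(pgaps)] == pgaps:
            start = tpos[j] - ppos[0]   # align the first pivot occurrence
            if start >= 0 and text[start:start + len(pat)] == pat:
                hits.append(start)      # verified against the original text
    return hits

print(sampled_search("abracadabra", "abra", "a"))  # [0, 7]
```

The sampled data here is just the integer gap list, which is why the approach can get away with a few percent of the text size.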
48. A Unified Active Learning Framework for Biomedical Relation Extraction.
- Author
-
Zhang, Hong-Tao, Huang, Min-Lie, and Zhu, Xiao-Yan
- Subjects
EXPERIENTIAL learning ,ACTIVE learning ,ALGORITHMS ,PROTEINS ,COMPUTER science - Abstract
Supervised machine learning methods have been employed with great success in the task of biomedical relation extraction. However, existing methods are not practical enough, since manual construction of large training data is very expensive. Therefore, active learning is urgently needed for designing practical relation extraction methods with little human effort. In this paper, we describe a unified active learning framework. Particularly, our framework systematically addresses some practical issues during active learning process, including a strategy for selecting informative data, a data diversity selection algorithm, an active feature acquisition method, and an informative feature selection algorithm, in order to meet the challenges due to the immense amount of complex and diverse biomedical text. The framework is evaluated on protein-protein interaction (PPI) extraction and is shown to achieve promising results with a significant reduction in editorial effort and labeling time. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
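The "strategy for selecting informative data" mentioned in the abstract above is commonly realised as entropy-based uncertainty sampling. The following is a minimal, hypothetical sketch of that general idea, not the paper's actual framework (which also covers diversity selection and feature acquisition); `predict_proba` stands in for any classifier that returns class probabilities.

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution (higher = more uncertain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_informative(unlabeled, predict_proba, batch_size):
    """Rank unlabeled examples by prediction entropy and pick the top batch
    for human annotation, the core loop of pool-based active learning."""
    scored = sorted(unlabeled, key=lambda x: entropy(predict_proba(x)), reverse=True)
    return scored[:batch_size]
```

Examples the model is least sure about are labeled first, which is how active learning reduces the labeling effort the abstract highlights.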
49. Special Issue Optimization for Machine Learning Guest Editorial.
- Author
-
Sanguineti, Marcello and Tànfani, Elena
- Subjects
MACHINE learning ,BOOLEAN functions ,FEEDFORWARD neural networks ,ALGORITHMS ,APPLIED mathematics ,COMPUTER science ,REINFORCEMENT learning ,OPERATIONS research - Abstract
This problem can benefit from machine learning techniques, thanks to their capability of finding "hidden" relationships among the available data and therefore generating placement actions. In society, data producers and data consumers continuously exchange their roles. Since the very beginning, there has been a fruitful exchange between machine learning (ML) and optimization. [Extracted from the article]
- Published
- 2021
- Full Text
- View/download PDF
50. Formalization and Implementation of Modern SAT Solvers.
- Author
-
Filip Marić
- Subjects
ALGORITHMS ,PROGRAMMING languages ,ELECTRONIC data processing ,COMPUTER science - Abstract
Most, if not all, state-of-the-art complete SAT solvers are complex variations of the DPLL procedure described in the early 1960s. Published descriptions of these modern algorithms and related data structures are given either as high-level state transition systems or, informally, as (pseudo) programming language code. The former, although often accompanied by (informal) correctness proofs, are usually very abstract and do not specify many details crucial for efficient implementation. The latter usually do not involve any correctness argument, and the given code is often hard to understand and modify. This paper aims to bridge this gap by presenting SAT solving algorithms that are formally proved correct and also contain the information required for efficient implementation. We use a tutorial, top-down approach and develop a SAT solver, starting from a simple design that is subsequently extended, step by step, with a requisite series of features. The heuristic parts of the solver are abstracted away, since they usually do not affect solver correctness (although they are very important for efficiency). All algorithms are given in pseudo-code and are accompanied by correctness conditions, given in Hoare logic style. The correctness proofs are formalized within the Isabelle theorem proving system and are available in the extended version of this paper. The given pseudo-code served as a basis for our SAT solver argo-sat. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF