11 results for "Vassos Hadzilacos"
Search Results
2. Message-optimal protocols for Byzantine Agreement
- Author
- Joseph Y. Halpern and Vassos Hadzilacos
- Subjects
- Protocol (science), Computer science, General Mathematics, Fault tolerance, Crash, Computer security, Upper and lower bounds, Agreement, Theoretical Computer Science, Quantum Byzantine agreement, Computational Theory and Mathematics, Theory of computation, Message authentication code
- Abstract
It is often important for the correct processes in a distributed system to reach agreement, despite the presence of some faulty processes. Byzantine Agreement (BA) is a paradigm problem that attempts to isolate the key features of reaching agreement. We focus here on the number of messages required to reach BA, with particular emphasis on the number of messages required in the failure-free runs, since these are the ones that occur most often in practice. The number of messages required is sensitive to the types of failures considered. In earlier work, Amdur et al. (1992) established tight upper and lower bounds on the worst- and average-case number of messages required in failure-free runs for crash failures. We provide tight upper and lower bounds for all remaining types of failures that have been considered in the literature on the BA problem: receiving omission, sending omission, and general omission failures, as well as arbitrary failures with or without message authentication. We also establish a tradeoff between the number of rounds and the number of messages in the failure-free runs required to reach agreement in the case of crash, sending omission, and general omission failures.
- Published
- 1993
- Full Text
- View/download PDF
3. Transaction synchronisation in object bases
- Author
- Vassos Hadzilacos and Thanasis Hadzilacos
- Subjects
- Theoretical computer science, Computer science, Non-lock concurrency control, Computer Networks and Communications, Concurrency, Multiversion concurrency control, God object, Object-oriented design, Concurrency control, Programming language, Applied Mathematics, Distributed concurrency control, Distributed object, Object resurrection, Timestamp-based concurrency control, Method, Data transfer object, Object code, Computational Theory and Mathematics, Serializability, Object model, Optimistic concurrency control
- Abstract
In this paper we investigate the problem of synchronising transactions in an object base. An object base is a collection of objects, much the way a database is a collection of data. An object, for our purposes, consists of a collection of variables (whose values at any point in time comprise the state of that object) and a set of operations, called methods, that are the only means of accessing (sensing or modifying) the object's variables.

There is a certain sense in which a traditional database is an object base. It consists of "objects" (records, tuples or what have you) each of which has a state that can be accessed only through the operations Read and Write. The main difference is that in an object base, each object supplies its own methods and these are arbitrary. In particular, a method for a certain object may call methods of other objects to carry out its task. In contrast to certain models in which objects correspond to "levels of abstraction", our model is completely general in this respect: for example, it is permissible for a method of object A to call a method of object B which, in turn, may call some other method of object A again.

One implication of this difference between data and object bases is that in the latter the assumption, commonly made in the former, that the operations which manipulate the state of the objects are short enough to be implemented serially (one at a time) is no longer valid. A related implication is that in object bases we are faced with the necessity of dealing with nested transactions, since the invocation of one method may result in further method invocations.

Another, less fundamental, difference between data and object bases is that, in addition to being of uniform type, the "objects" of a database are usually assumed to be of uniform size as well. In an object base one can imagine objects of widely differing sizes. A clock and the New York City telephone directory could be objects differing in size by orders of magnitude, yet co-existing in the same object base.

In spite of these differences it is possible to approach concurrency control in an object base in the following way. Each object is viewed as a database item. Further, each method invocation is treated as a group of Read or Write operations on those data items that were accessed as a result of that method invocation. With these analogies, any conventional database concurrency control method (two-phase locking, timestamp ordering, certification, and the whole lot) can be employed to synchronise concurrent transactions in the object base. This approach has the virtue of simplicity and may be well-suited to certain environments. It is, for example, the approach taken in the GemStone project and product (cf. Maier and Stein [1987], Purdy et al. [1987]).

We are interested in exploring approaches to concurrency control in object bases which take into account their special features and differences from databases. The hope is that this will lead to more efficient techniques.
More specifically, we would like to consider mechanisms that:
(a) take into account the nested nature of transactions;
(b) allow methods accessing an object to execute concurrently (but correctly); this seems especially important as multiprocessors become available, since forcing serial access to an object's methods restricts parallelism (bear in mind that each method could be a lengthy procedure);
(c) are modular, in that each object is responsible for synchronising the invocations of its own methods as it sees fit.

The first two of these points have been considered by others as well. For example, Argus (cf. Liskov and Scheifler [1983]) uses a synchronisation algorithm which is an adaptation of strict two-phase locking in a nested transaction environment. In addition, Argus allows multiple concurrent invocations of an object's methods.

We believe that the third point is a novel idea, so we elaborate on it a bit. In accordance with (b), multiple invocations of an object's methods may be active simultaneously. As these methods may operate on common data (the object's variables), they must be synchronised. That is, if we view the object's variables as a database, and the simultaneous method invocations as concurrent transactions, we have to solve the serialisability problem within a single object. We call this intra-object synchronisation.

It is not difficult to see that simply ensuring serialisability within each object is not, in itself, enough to guarantee serialisability of the overall computation. For instance, there may be two transactions T1 and T2, each accessing objects A and B, so that in object A the concurrent computation of the two transactions' method executions serialises T1 before T2, while the reverse holds in object B. The effect of such an execution is not the same as running the two transactions serially in either order, and the overall computation is therefore not serialisable, even though the computation at each object is. Thus, in addition to intra-object synchronisation, it is also necessary to exercise some inter-object synchronisation, whose goal will be to ensure the compatibility of the independent decisions made at each object.

The potential advantage of separating intra- from inter-object synchronisation is seen if one recalls our previous observation regarding the non-uniformity of objects in an object base. Accordingly, we may be able to allow each object to use, for intra-object synchronisation, the most suitable algorithm depending on its semantics, the implementation of its methods, and so on. For example, an object representing a set (with operations such as Insert, Search, and Delete) might be implemented as a B-tree. Thus, one of the many special B-tree algorithms could be used for intra-object synchronisation by this object (cf. Bayer and Schkolnick [1977], Ellis [1980], Kung and Lehman [1980], Kwong and Wood [1982], Lehman and Yao [1981], Manber and Ladner [1982], Samadi [1976]). That object would enjoy the efficiency of the special algorithm, even though that algorithm is not applicable to other types of objects. Of course, the viability of such a scheme depends on the existence of efficient inter-object synchronisation schemes that can be used with disparate intra-object synchronisation algorithms. Even though we have no definitive answer for this question, our work so far leaves us hopeful that this may indeed be possible.

The remainder of this paper is organised as follows.
In Section 2 we present a formal model for representing executions in object bases; we also define what serialisable (i.e. correct) executions are in this context. In Section 3 we present an extension of the Serialisability Theorem of "classical" concurrency control, which relates the serialisability of an execution to the acyclicity of a graph. We exhibit the utility of this theorem by using it to derive simple proofs of the correctness of Nested Two-Phase Locking (a slight generalisation of the algorithm used in Argus) and Nested Timestamp Ordering (an algorithm proposed by Reed [1978]). We also present a corollary to this theorem that we feel justifies our hopes for modular synchronisation mechanisms. We conclude with a description of further work in Section 4. This work draws on three main sources: "classical" serialisability theory (e.g. Bernstein et al. [1987], Papadimitriou [1986]), the theory of nested transactions (e.g. Beeri et al. [1980], Lynch and Merritt [1986]), and object-oriented systems (e.g. Stefik and Bobrow [1986]).
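The T1/T2 example above can be made concrete with a small sketch (ours, not code from the paper; all names are illustrative): each object reports the order in which it serialised the transactions that invoked its methods, and these per-object orders are combined into a precedence graph whose acyclicity is the kind of condition the Serialisability Theorem mentioned in the abstract relates to.

```python
# Minimal sketch: combine per-object serialization orders into a global
# "serialized-before" graph; the overall execution can be serialisable only
# if this graph is acyclic.

from collections import defaultdict

def has_cycle(edges, nodes):
    """Detect a cycle in a directed graph given as {u: set(v)} via DFS."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {n: WHITE for n in nodes}

    def dfs(u):
        color[u] = GREY
        for v in edges[u]:
            if color[v] == GREY:
                return True            # back edge: cycle found
            if color[v] == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in nodes)

# Each object's intra-object synchronisation decided a local order.
per_object_orders = {
    "A": ["T1", "T2"],   # object A serialises T1 before T2
    "B": ["T2", "T1"],   # object B serialises T2 before T1
}

edges = defaultdict(set)
nodes = set()
for order in per_object_orders.values():
    nodes.update(order)
    for earlier, later in zip(order, order[1:]):
        edges[earlier].add(later)

# The two local orders are incompatible: T1 -> T2 and T2 -> T1 form a cycle,
# so no serial execution has the same effect; inter-object synchronisation
# must prevent such combinations.
print("globally serialisable:", not has_cycle(edges, nodes))  # -> False
```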
- Published
- 1991
- Full Text
- View/download PDF
4. Constant-RMR implementations of CAS and other synchronization primitives using read and write operations
- Author
- Philipp Woelfel, Danny Hendler, Wojciech Golab, and Vassos Hadzilacos
- Subjects
- Compare-and-swap, Distributed shared memory, Shared memory, Programming language, Computer science, Asynchronous communication, Reading (computer), Synchronization (computer science), Cache-only memory architecture, Distributed memory, Parallel computing
- Abstract
We consider asynchronous multiprocessors where processes communicate only by reading or writing shared memory. We show how to implement consensus, all comparison primitives (such as CAS and TAS), and load-linked/store-conditional using only a constant number of remote memory references (RMRs), in both the cache-coherent and the distributed-shared-memory models of such multiprocessors. Our implementations are blocking, rather than wait-free: they ensure progress provided all processes that invoke the implemented primitive are live. Our results imply that any algorithm using read and write operations, comparison primitives, and load-linked/store-conditional can be simulated by an algorithm that uses read and write operations only, with at most a constant blowup in RMR complexity.
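As a rough illustration of what a blocking read/write implementation of a comparison primitive looks like, here is a toy two-process sketch built on Peterson's mutual-exclusion algorithm. This is our own example, not the authors' construction, and it makes no claim about RMR complexity.

```python
# Toy sketch: a *blocking* compare-and-swap object for two processes
# (ids 0 and 1) built only from reads and writes of shared variables,
# using Peterson's mutual-exclusion algorithm for the critical section.

import threading

class TwoProcessCAS:
    def __init__(self, initial=0):
        self.value = initial          # the simulated CAS register
        self.flag = [False, False]    # Peterson: "I want to enter"
        self.turn = 0                 # Peterson: whose turn it is to wait

    def _lock(self, i):
        j = 1 - i
        self.flag[i] = True
        self.turn = j
        while self.flag[j] and self.turn == j:
            pass                      # busy-wait (blocking, not wait-free)

    def _unlock(self, i):
        self.flag[i] = False

    def cas(self, i, expected, new):
        """Process i atomically checks value == expected and, if so, writes new."""
        self._lock(i)
        try:
            ok = (self.value == expected)
            if ok:
                self.value = new
            return ok
        finally:
            self._unlock(i)

r = TwoProcessCAS(0)
t = threading.Thread(target=lambda: r.cas(1, 0, 10))
t.start()
print(r.cas(0, 0, 5))   # exactly one of the two cas calls succeeds
t.join()
print(r.value)          # 5 or 10, never a mixture of both writes
```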
- Published
- 2007
- Full Text
- View/download PDF
5. On the complexity of greedy routing in ring-based peer-to-peer networks
- Author
- George Giakkoupis and Vassos Hadzilacos
- Subjects
- Combinatorics, Discrete mathematics, Random graph, Small worlds, Graph (abstract data type), Expected value, Peer-to-peer, Binary logarithm, Network topology, Upper and lower bounds, Mathematics
- Abstract
We investigate the complexity of greedy routing in uniform ring-based random graphs, a general model that captures many topologies that have been proposed for peer-to-peer and social networks. In this model the nodes form a ring; for each node u we independently draw the set of distances along the ring from u to its "long-range contacts" from a fixed distribution P (the same for all nodes), and connect u to the corresponding nodes as well as to its ring successor. We prove that, for any distribution P, in a graph with n nodes and an expected number ℓ of long-range contacts per node constructed in this fashion, the expected number of steps for greedy routing is Ω((log² n)/(ℓ · a^(log* n))) for some constant a > 1. This improves an earlier lower bound of Ω((log² n)/(ℓ log log n)) by Aspnes et al. and is very close to the upper bound of O((log² n)/ℓ) achieved by greedy routing in Kleinberg's (one-dimensional) "small-world" networks, a particular instance of uniform ring-based random graphs.
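A small simulation sketch (ours; the 1/d contact distribution and the single long-range contact per node are assumptions chosen to mimic Kleinberg's one-dimensional small world, itself an instance of the model above) shows the construction and counts greedy-routing steps:

```python
# Illustrative sketch: build a ring-based random graph where each node gets
# its ring successor plus one long-range contact whose ring distance d is
# drawn with probability proportional to 1/d, then route greedily.

import random
import math

def build_graph(n, rng):
    weights = [1.0 / d for d in range(1, n)]
    total = sum(weights)
    contacts = {}
    for u in range(n):
        r, d, acc = rng.random() * total, 1, 0.0
        while acc + weights[d - 1] < r and d < n - 1:
            acc += weights[d - 1]
            d += 1
        contacts[u] = [(u + 1) % n, (u + d) % n]   # successor + long-range contact
    return contacts

def clockwise_dist(u, v, n):
    return (v - u) % n

def greedy_route(contacts, src, dst, n):
    """Move to the neighbour with the smallest remaining clockwise distance
    to dst; the ring successor guarantees progress, so routing terminates."""
    steps, u = 0, src
    while u != dst:
        u = min(contacts[u], key=lambda w: clockwise_dist(w, dst, n))
        steps += 1
    return steps

rng = random.Random(1)
n = 1 << 12
g = build_graph(n, rng)
trials = [greedy_route(g, rng.randrange(n), rng.randrange(n), n) for _ in range(200)]
print("average greedy steps:", sum(trials) / len(trials),
      " (log^2 n =", round(math.log2(n) ** 2, 1), ")")
```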
- Published
- 2007
- Full Text
- View/download PDF
6. All of us are smarter than any of us
- Author
- Wai-Kau Lo and Vassos Hadzilacos
- Subjects
- Theoretical computer science, Artificial intelligence, Machine learning, Mathematics
- Published
- 1997
- Full Text
- View/download PDF
7. Safe locking policies for dynamic databases
- Author
- Vassos Hadzilacos and Vinay K. Chaudhri
- Subjects
- Concurrency control, Correctness issues, Schedule (computer science), Theoretical computer science, Database, Computer Networks and Communications, Generalization, Computer science, Applied Mathematics, Correctness proofs, Computational Theory and Mathematics, Dynamic database, If and only if
- Abstract
Yannakakis showed that a locking policy is not safe if and only if it allows a canonical nonserializable schedule of transactions in which all transactions except one are executed serially (Yannakakis, 1982). In the present paper, we study the generalization of this result to a dynamic database, that is, a database that may undergo insertions and deletions of entities. We illustrate the utility of this generalization by applying it to obtain correctness proofs of three locking policies that handle dynamic databases.
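For readers unfamiliar with locking policies over dynamic databases, here is a minimal sketch (ours, not one of the three policies analysed in the paper) of a strict two-phase locking discipline in which insertions and deletions of entities also acquire locks; handling such insertions and deletions is what makes the database "dynamic".

```python
# Minimal sketch: a transaction must hold an exclusive lock on an entity
# before deleting it or making a newly inserted entity visible, and all
# locks are released only at commit (strict two-phase locking).

class LockManager:
    def __init__(self):
        self.owner = {}                  # entity -> transaction holding its lock

    def acquire(self, txn, entity):
        holder = self.owner.get(entity)
        if holder is not None and holder != txn:
            raise RuntimeError(f"{txn} must wait: {entity} locked by {holder}")
        self.owner[entity] = txn

    def release_all(self, txn):
        for e in [e for e, t in self.owner.items() if t == txn]:
            del self.owner[e]

class Database:
    def __init__(self, entities):
        self.entities = set(entities)
        self.locks = LockManager()

    def insert(self, txn, entity):
        self.locks.acquire(txn, entity)  # lock before the entity becomes visible
        self.entities.add(entity)

    def delete(self, txn, entity):
        self.locks.acquire(txn, entity)
        self.entities.discard(entity)

    def commit(self, txn):
        self.locks.release_all(txn)      # strict 2PL: release everything at commit

db = Database({"x", "y"})
db.insert("T1", "z")
db.delete("T1", "x")
try:
    db.delete("T2", "z")                 # blocked (here: rejected) until T1 commits
except RuntimeError as e:
    print(e)
db.commit("T1")
db.delete("T2", "z")                     # now succeeds
```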
- Published
- 1995
- Full Text
- View/download PDF
8. Quantitative evaluation of a transaction facility for knowledge base management system
- Author
- Vassos Hadzilacos, Kenneth C. Sevcik, John Mylopoulos, and Vinay K. Chaudhri
- Subjects
- Database, Non-lock concurrency control, Computer science, Concurrency, Distributed computing, Open Knowledge Base Connectivity, Knowledge-based systems, Concurrency control, Knowledge base, Isolation (database systems), Database transaction, Optimistic concurrency control
- Abstract
Large knowledge bases that are intended for applications such as CAD, corporate repositories or process control will have to be shared by multiple users. For these systems to scale up, give acceptable performance, and exhibit consistent behavior, it is mandatory to synchronize user transactions using a concurrency control algorithm. In this paper, we examine a novel concurrency control policy called the Dynamic Directed Graph (or DDG) policy that effectively exploits the rich semantic structure of a knowledge base. Our analysis is carried out in the context of a real knowledge-based system application from which knowledge base structure and workload parameters are computed. These serve as a basis for studying the implementation alternatives that arise as a result of knowledge base characteristics. The implementation alternatives that we consider include selection of portions of the knowledge base structure to be exploited for concurrency control, and also the dependence of concurrency on the traversal strategy used to search through the knowledge base. We analyze the effects of various workload parameters and conclude that the DDG policy substantially improves the response time for short transactions when there is heavy data contention.
- Published
- 1994
- Full Text
- View/download PDF
9. Memory adaptive self-stabilizing protocols (extended abstract)
- Author
- Ran El-Yaniv, Efthymios Anagnostou, and Vassos Hadzilacos
- Subjects
- Scheme (programming language), Basis (linear algebra), Computer science, Function (mathematics), Security token, Network topology, Upper and lower bounds, Log-log plot, State (computer science), Computer network
- Abstract
We present a token-based diffusion scheme that forms the basis of efficient self-stabilizing protocols for a variety of problems, including unique naming, network topology, and token management. For the model where processors' initial knowledge about the network is restricted only to their neighbours, we introduce the concept of memory adaptive protocols. In these, once the system stabilizes, the size of the memory used by each processor is a function of the actual network size, even though the system may have been started in a state where each processor "thinks" that it is embedded in a network much larger (or smaller) than the actual one. For this model, we develop memory adaptive self-stabilizing protocols for the problems mentioned above that stabilize in time O(n log log n), where n is the number of processors. For the model where processors also know an upper bound D on the diameter of the network and an upper bound on n, we develop bounded-memory self-stabilizing protocols for the same problems that stabilize in O(min{D, n}) time. All our protocols are based on a token diffusion scheme and are uniform, in the sense that processors with the same number of neighbours execute the same program.
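The abstract does not spell out the token diffusion scheme itself. As general background on the self-stabilization property it refers to (explicitly not the authors' protocol), the following sketch runs Dijkstra's classic K-state token ring from an arbitrary initial state and shows it converging to a single circulating token:

```python
# Background sketch: Dijkstra's K-state self-stabilizing token ring. From an
# arbitrary (possibly corrupted) state, the ring converges to exactly one
# token (privilege) circulating, provided K > number of processors.

import random

def privileged(x, i, n, K):
    """Processor i holds the token iff its guard is enabled."""
    if i == 0:
        return x[0] == x[n - 1]
    return x[i] != x[i - 1]

def step(x, i, n, K):
    """Fire processor i's rule (only meaningful when it is privileged)."""
    if i == 0:
        x[0] = (x[0] + 1) % K
    else:
        x[i] = x[i - 1]

def tokens(x, n, K):
    return [i for i in range(n) if privileged(x, i, n, K)]

rng = random.Random(0)
n, K = 8, 9
x = [rng.randrange(K) for _ in range(n)]       # arbitrary starting state
print("tokens initially:", len(tokens(x, n, K)))

for _ in range(500):                           # a simple random scheduler
    i = rng.choice(tokens(x, n, K))            # some privileged processor moves
    step(x, i, n, K)

print("tokens after stabilization:", len(tokens(x, n, K)))   # -> 1
```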
- Published
- 1992
- Full Text
- View/download PDF
10. A theory of reliability in database systems
- Author
- Vassos Hadzilacos
- Subjects
- Global serializability, Schedule (computer science), Database, Computer science, Distributed computing, Distributed concurrency control, Commitment ordering, Artificial Intelligence, Hardware and Architecture, Control and Systems Engineering, Serializability, Two-phase locking, Isolation (database systems), Snapshot isolation, Software, Information Systems
- Abstract
Reliable concurrent processing of transactions in a database system is examined. Since serializability, the conventional concurrency control correctness criterion, is not adequate in the presence of common failures, another theory of correctness is proposed, involving the concepts of commit serializability, recoverability, and resiliency.
- Published
- 1988
- Full Text
- View/download PDF
11. An operational model for database system reliability
- Author
- Vassos Hadzilacos
- Subjects
- Physical data model, Database, Computer science, Database schema, Database design, Rollback, Database testing, Reliability (statistics), Database tuning, Database model
- Published
- 1983
- Full Text
- View/download PDF