36 results for "Luca de Alfaro"
Search Results
2. Wikipedia Vandalism Detection: Combining Natural Language, Metadata, and Reputation Features
- Author
-
Andrew G. West, B. Thomas Adler, Paolo Rosso, Luca de Alfaro, and Santiago M. Mola-Velasco
- Subjects
Database Management, Information retrieval, Computer science, Data Mining and Knowledge Discovery, Constructive, Task (project management), Set (abstract data type), World Wide Web, Metadata, Online encyclopedia, Natural language, Reputation - Abstract
Wikipedia is an online encyclopedia which anyone can edit. While most edits are constructive, about 7% are acts of vandalism: modifications made in bad faith, such as the introduction of spam and other inappropriate content. In this work, we present the results of an effort to integrate three of the leading approaches to Wikipedia vandalism detection: a spatio-temporal analysis of metadata (STiki), a reputation-based system (WikiTrust), and natural language processing features. The performance of the resulting joint system improves on the state of the art set by all previous methods and establishes a new baseline for Wikipedia vandalism detection. We examine in detail the contribution of the three approaches, both for the task of discovering fresh vandalism and for the task of locating vandalism in the complete set of Wikipedia revisions. The authors from Universitat Politècnica de València also thank the MICINN research project TEXT-ENTERPRISE 2.0 TIN2009-13391-C04-03 (Plan I+D+i). UPenn contributions were supported in part by ONR MURI N00014-07-1-0907. This research was partially supported by award 1R01GM089820-01A1 from the National Institute of General Medical Sciences, and by ISSDM, a UCSC-LANL educational collaboration.
- Published
- 2011
3. Analyzing the Impact of Change in Multi-threaded Programs
- Author
-
Krishnendu Chatterjee, César Sánchez, Luca de Alfaro, and Vishwanath Raman
- Subjects
Source code, Programming language, Computer science, Concurrency, Thread (computing), Change impact analysis, User-defined function, Test harness, Concurrency control, Debugging, Operating system - Abstract
We introduce a technique for debugging multi-threaded C programs and analyzing the impact of source code changes, and its implementation in the prototype tool DIRECT. Our approach uses a combination of source code instrumentation and runtime management. The source code, along with a test harness, is instrumented to monitor Operating System (OS) and user-defined function calls. DIRECT tracks all concurrency control primitives and, optionally, data from the program. DIRECT maintains an abstract global state that combines information from every thread, including the sequence of function calls and concurrency primitives executed. The runtime manager can insert delays, provoking thread interleavings that may exhibit bugs that are difficult to reach otherwise. The runtime manager collects an approximation of the reachable state space and uses this approximation to assess the impact of change in a new version of the program.
- Published
- 2010
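The delay-insertion idea in the DIRECT abstract above (instrument concurrency primitives, then perturb the schedule to provoke rare interleavings) can be sketched as follows. This is a toy Python illustration, not the paper's C instrumentation; all names, counts, and delay bounds are invented for the example.

```python
import random
import threading
import time

def delayed(fn, max_delay=0.001):
    """Wrap a callable so a random delay precedes each call,
    perturbing the thread schedule to provoke rare interleavings."""
    def wrapper(*args, **kwargs):
        time.sleep(random.uniform(0.0, max_delay))
        return fn(*args, **kwargs)
    return wrapper

counter = 0
lock = threading.Lock()
acquire = delayed(lock.acquire)  # "instrumented" concurrency primitive

def worker():
    global counter
    for _ in range(200):
        acquire()                # delay + acquire: schedule is perturbed
        counter += 1
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 800            # correct result despite perturbed schedules
```

A correctly synchronized program still passes under every perturbed schedule; a racy one (e.g. with the lock removed) becomes far more likely to fail.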
4. Stochastic Games with Lossy Channels
- Author
-
Richard Mayr, Noomene Ben Henda, Luca de Alfaro, Sven Sandberg, and Parosh Aziz Abdulla
- Subjects
Discrete mathematics, Theoretical computer science, Stochastic game, Probabilistic logic, Combinatorial game theory, Lossy compression, Decidability, Reachability, Markov decision process, Representation (mathematics), Mathematics - Abstract
We consider turn-based stochastic games on infinite graphs induced by game probabilistic lossy channel systems (GPLCS), the game version of probabilistic lossy channel systems (PLCS). We study games with Büchi (repeated reachability) objectives and almost-sure winning conditions. These games are pure memoryless determined and, under the assumption that the target set is regular, a symbolic representation of the set of winning states for each player can be effectively constructed. Thus, turn-based stochastic games on GPLCS are decidable. This generalizes the decidability result for PLCS-induced Markov decision processes in [10].
- Published
- 2008
5. The Complexity of Coverage
- Author
-
Luca de Alfaro, Krishnendu Chatterjee, and Rupak Majumdar
- Subjects
Mathematical optimization, Computer science, Vertex cover, Set (abstract data type), Repeated game, State space, Markov decision process, Special case, Reset (computing), Reactive system, Algorithm - Abstract
We study the problem of generating a test sequence that achieves maximal coverage for a reactive system under test. We formulate the problem as a repeated game between the tester and the system, where the system state space is partitioned according to some coverage criterion and the objective of the tester is to maximize the set of partitions (or coverage goals) visited during the game. We show that the maximal coverage problem is PSPACE-complete for non-deterministic systems, but NP-complete for deterministic systems. For the special case of non-deterministic systems with a re-initializing "reset" action, which represents running a new test input on a re-initialized system, we show that the complexity is coNP-complete. Our proof technique for reset games uses randomized testing strategies that circumvent the exponentially large memory requirement of deterministic testing strategies.
- Published
- 2008
6. The Symbolic Approach to Repeated Games (Abstract)
- Author
-
Luca de Alfaro
- Subjects
Successor cardinal, Discrete mathematics, Theoretical computer science, Finite-state machine, Computer science, Reachability, Repeated game, Probabilistic logic, Probability distribution, Notation, Real number - Abstract
We consider zero-sum repeated games with omega-regular goals. Such games are played on a finite state space over an infinite number of rounds: at every round, the players select moves, either in turns or simultaneously; the current state and the selected moves determine the successor state. A play of the game thus consists of an infinite path over the state space or, if randomization is present, of a probability distribution over paths. Omega-regular goals generalize the class of regular goals (those expressible by finite automata) to infinite sequences, and include many well-known goals, such as the reachability and safety goals, as well as the Büchi and parity goals. The algorithms for solving repeated games with omega-regular goals can be broadly divided into enumerative and symbolic algorithms. Enumerative algorithms consider each state individually; currently, they achieve the best worst-case complexity among the known algorithms. Symbolic algorithms compute in terms of sets of states, or functions from states to real numbers, rather than single states; such sets or functions can often be represented symbolically (hence the name of the algorithms). Even though symbolic algorithms often cannot match the worst-case complexity of the enumerative algorithms, they are often efficient in practice. We illustrate how symbolic algorithms provide uniform solutions for many classes of repeated games, from turn-based, non-randomized games where at each state one of the players can deterministically win, to concurrent and randomized games where the ability to win must be characterized in probabilistic fashion. We also show that the symbolic algorithms, and the notation used to express them, are closely related to game metrics which provide a notion of distance between game states. Roughly, the distance between two states measures how closely a player can match, from one state, the ability to win from the other state with respect to any omega-regular goal.
- Published
- 2007
7. Magnifying-Lens Abstraction for Markov Decision Processes
- Author
-
Luca de Alfaro and Pritam Roy
- Subjects
Lens (optics), Reachability, Computer science, Iterated function, Partially observable Markov decision process, State (functional analysis), Markov decision process, Algorithm, Upper and lower bounds, Abstraction (linguistics) - Abstract
We present a novel abstraction technique which allows the analysis of reachability and safety properties of Markov decision processes with very large state spaces. The technique, called magnifying-lens abstraction (MLA), copes with the state-explosion problem by partitioning the state space into regions, and by computing upper and lower bounds for reachability and safety properties on the regions, rather than on the states. To compute these bounds, MLA iterates over the regions, considering the concrete states of each region in turn, as if one were sliding a magnifying lens across the abstraction to view the concrete states. The algorithm adaptively refines the regions, using smaller regions where more detail is needed, until the difference between upper and lower bounds is smaller than a specified accuracy. We provide experimental results on three case studies illustrating that MLA can provide accurate answers, with savings in memory requirements.
- Published
- 2007
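The bounding idea in the MLA abstract above (bracket a reachability probability between a lower and an upper bound that are tightened iteratively) can be illustrated with plain interval value iteration over concrete states. This is a deliberately simplified sketch: MLA computes bounds per region and refines the partition, which this toy example omits; the chain and tolerances are invented.

```python
# States 0..7 of a symmetric random walk: 0 is an absorbing "fail" state,
# 7 is the target. The true probability of reaching 7 from state i is i/7.
N = 8
lo = [0.0] * N
hi = [1.0] * N
lo[7] = 1.0                      # target reached with probability 1
hi[0], hi[7] = 0.0, 1.0          # fail: probability 0; target: probability 1

def step(v):
    """One Bellman step for reachability on the interior states."""
    nxt = v[:]
    for i in range(1, N - 1):
        nxt[i] = 0.5 * v[i - 1] + 0.5 * v[i + 1]
    return nxt

# Iterate the lower bounds up and the upper bounds down until they meet.
while max(h - l for h, l in zip(hi, lo)) > 1e-6:
    lo, hi = step(lo), step(hi)

assert abs(lo[3] - 3 / 7) < 1e-5   # the bracketed value matches i/7
```

The lower iterates never exceed the true value and the upper iterates never fall below it, so the gap between them is a certified error bound at every step.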
8. Solving Games Via Three-Valued Abstraction Refinement
- Author
-
Luca de Alfaro and Pritam Roy
- Subjects
Theoretical computer science, Property (philosophy), Computer science, Computation, Set (abstract data type), Simple (abstract algebra), Reachability, Convergence (routing), Representation (mathematics), Abstraction (linguistics) - Abstract
Games that model realistic systems can have very large state spaces, making their direct solution difficult. We present a symbolic abstraction-refinement approach to the solution of two-player games. Given a property, an initial set of states, and a game representation, our approach starts by constructing a simple abstraction of the game, guided by the predicates present in the property and in the initial set. The abstraction is then refined until it is possible to either prove or disprove the property over the initial states. Specifically, we evaluate the property on the abstract game in three-valued fashion, computing an over-approximation (the may states) and an under-approximation (the must states) of the states that satisfy the property. If this computation fails to yield a certain yes/no answer to the validity of the property on the initial states, our algorithm refines the abstraction by splitting uncertain abstract states (states that are may-states, but not must-states). The approach lends itself to an efficient symbolic implementation. We discuss the properties required of the abstraction scheme in order to achieve convergence and termination of our technique. We present the results for reachability and safety properties, as well as for fully general ω-regular properties.
- Published
- 2007
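The may/must distinction in the abstract above can be sketched concretely for reachability: an abstract block "may" reach the target if some concrete member can take a step toward it, and "must" reach it if every member can; blocks that are may but not must are the candidates for splitting. This is a toy concrete-state sketch, not the paper's symbolic algorithm; the system and partition are invented.

```python
# Concrete system: states 0..3, deterministic successors, target = {3}.
succ = {0: {1}, 1: {3}, 2: {2}, 3: {3}}
target = {3}
blocks = [{0, 2}, {1}, {3}]          # initial abstraction (a partition)

def lift(blocks):
    """Compute the may- and must-reaching block indices by fixpoint."""
    may = {i for i, b in enumerate(blocks) if b & target}
    must = set(may)
    changed = True
    while changed:
        changed = False
        may_u = set().union(*(blocks[i] for i in may))
        must_u = set().union(*(blocks[i] for i in must))
        for i, b in enumerate(blocks):
            # may: some member has a successor in a may-block
            if i not in may and any(succ[s] & may_u for s in b):
                may.add(i); changed = True
            # must: every member has a successor in a must-block
            if i not in must and all(succ[s] & must_u for s in b):
                must.add(i); changed = True
    return may, must

may, must = lift(blocks)
uncertain = may - must               # blocks to split on refinement
assert uncertain == {0}              # {0, 2}: state 0 reaches 3, state 2 never does
```

Splitting block {0, 2} into {0} and {2} removes the uncertainty: {0} becomes a must-block and {2} falls outside may.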
9. The Complexity of Stochastic Rabin and Streett Games
- Author
-
Krishnendu Chatterjee, Thomas A. Henzinger, and Luca de Alfaro
- Subjects
Computer science, Stochastic process, Stochastic game, Graph theory, Graph, Automaton, Regular graph, Markov decision process, Artificial intelligence, Game theory, Mathematical economics - Abstract
The theory of graph games with ω-regular winning conditions is the foundation for modeling and synthesizing reactive processes. In the case of stochastic reactive processes, the corresponding stochastic graph games have three players, two of them (System and Environment) behaving adversarially, and the third (Uncertainty) behaving probabilistically. We consider two problems for stochastic graph games: the qualitative problem asks for the set of states from which a player can win with probability 1 (almost-sure winning); the quantitative problem asks for the maximal probability of winning (optimal winning) from each state. We show that for Rabin winning conditions, both problems are in NP. As these problems were known to be NP-hard, it follows that they are NP-complete for Rabin conditions, and dually, coNP-complete for Streett conditions. The proof proceeds by showing that pure memoryless strategies suffice for qualitatively and quantitatively winning stochastic graph games with Rabin conditions. This insight is of interest in its own right, as it implies that controllers for Rabin objectives have simple implementations. We also prove that for every ω-regular condition, optimal winning strategies are no more complex than almost-sure winning strategies.
- Published
- 2005
10. Information Flow in Concurrent Games
- Author
-
Marco Faella and Luca de Alfaro
- Subjects
Bayesian game, Non-cooperative game, Sequential game, Operations research, Computer science, Human–computer interaction, Complete information, Perfect information, Screening game, Extensive-form game - Abstract
We consider games where the players have perfect information about the game's state and history, and we focus on the information exchange that takes place at each round as the players choose their moves. The ability of a player to gather information on the opponent's choice of move in a round determines her ability to counteract the move, and win the game. When the game is played between teams, rather than single players, the amount of intra-team communication determines the ability of the team members to coordinate their moves and win the game. We consider games with quantitative bounds on inter-team and intra-team information flow, and we provide algorithms and complexity bounds for their solution.
- Published
- 2003
11. Discounting the Future in Systems Theory
- Author
-
Thomas A. Henzinger, Rupak Majumdar, and Luca de Alfaro
- Subjects
Inflation, Discounting, Systems theory, Computer science, Stochastic game, Probabilistic logic, Markov decision process, Fixed point, Value (mathematics), Game theory, Mathematical economics - Abstract
Discounting the future means that the value, today, of a unit payoff is 1 if the payoff occurs today, a if it occurs tomorrow, a² if it occurs the day after tomorrow, and so on, for some real-valued discount factor 0 ≤ a ≤ 1. Discounting (or inflation) is a key paradigm in economics and has been studied in Markov decision processes as well as game theory. We submit that discounting also has a natural place in systems engineering: for nonterminating systems, a potential bug in the faraway future is less troubling than a potential bug today. We therefore develop a systems theory with discounting. Our theory includes several basic elements: discounted versions of system properties that correspond to the ω-regular properties, fixpoint-based algorithms for checking discounted properties, and a quantitative notion of bisimilarity for capturing the difference between two states with respect to discounted properties. We present the theory in a general form that applies to probabilistic systems as well as multicomponent systems (games), but it readily specializes to classical transition systems. We show that discounting, besides its natural practical appeal, has also several mathematical benefits. First, the resulting theory is robust, in that small perturbations of a system can cause only small changes in the properties of the system. Second, the theory is computational, in that the values of discounted properties, as well as the discounted bisimilarity distance between states, can be computed to any desired degree of precision.
- Published
- 2003
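The discounted fixpoint computation described in the abstract above can be sketched for discounted reachability: the value of reaching a target is multiplied by the discount factor a at every transition, so a bug reachable in k steps contributes a^k. A toy fixpoint iteration, with an invented three-state graph, not the paper's full theory:

```python
a = 0.9                              # discount factor, 0 <= a <= 1
succ = {0: [1], 1: [2, 0], 2: [2]}   # tiny transition graph; target state = 2
v = {s: 0.0 for s in succ}

# Iterate v(s) = a * max over successors, with v(target) pinned to 1,
# until the discounted fixpoint is reached.
for _ in range(200):
    v = {s: 1.0 if s == 2 else a * max(v[t] for t in succ[s]) for s in succ}

# State 1 reaches the target in one step (value a), state 0 in two (a^2):
assert abs(v[1] - a) < 1e-9 and abs(v[0] - a * a) < 1e-9
```

With a < 1 the iteration is a contraction, which is the computational benefit the abstract mentions: values converge geometrically to any desired precision.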
12. The Element of Surprise in Timed Games
- Author
-
Thomas A. Henzinger, Marco Faella, Rupak Majumdar, Mariëlle Stoelinga, and Luca de Alfaro
- Subjects
Computer science, Concurrency, Divergence (computer science), Turns, rounds and time-keeping systems in games, Surprise, Action (philosophy), Reachability, Game theory, Mathematical economics - Abstract
We consider concurrent two-person games played in real time, in which the players decide both which action to play, and when to play it. Such timed games differ from untimed games in two essential ways. First, players can take each other by surprise, because actions are played with delays that cannot be anticipated by the opponent. Second, a player should not be able to win the game by preventing time from diverging. We present a model of timed games that preserves the element of surprise and accounts for time divergence in a way that treats both players symmetrically and applies to all ω-regular winning conditions. We prove that the ability to take each other by surprise adds extra power to the players. For the case that the games are specified in the style of timed automata, we provide symbolic algorithms for their solution with respect to all ω-regular winning conditions. We also show that for these timed games, memory strategies are more powerful than memoryless strategies already in the case of reachability objectives.
- Published
- 2003
13. Game Models for Open Systems
- Author
-
Luca de Alfaro
- Subjects
Theoretical computer science, Computer science, Transition system, Component-based software engineering, Game models, Open system (systems theory), Game theory - Abstract
An open system is a system whose behavior is jointly determined by its internal structure, and by the input it receives from the environment. To solve control and verification problems, open systems have often been modeled as games between the system and the environment; we argue that the game view of open systems should also be extended to the definitions of system refinement and composition. We give a symmetrical interpretation to games between system and environment: the moves of the system represent the outputs that the system can generate (the output guarantees), and symmetrically, the moves of the environment represent the inputs that the system can accept (the input assumptions). We argue in favor of defining refinement of open systems in terms of alternating simulation, which is the relation between games that plays the same role as simulation between transition systems. Alternating simulation captures the principle that a component refines another if it has weaker input assumptions and stronger output guarantees. Furthermore, we argue in favor of a notion of composition that accounts for the compatibility between input assumptions and output guarantees, and that enables the synthesis of new input assumptions for the composed system. These game-theoretical notions of refinement and compatibility are related to the type-theoretical notions of subtyping and type compatibility, and give rise to an expressive modeling framework for component-based design and verification.
- Published
- 2003
14. Resource Interfaces
- Author
-
Arindam Chakrabarti, Luca de Alfaro, Thomas A. Henzinger, and Mariëlle Stoelinga
- Published
- 2003
15. Quantitative Verification and Control via the Mu-Calculus
- Author
-
Luca de Alfaro
- Subjects
Bisimulation, Functional verification, Computer science, Runtime verification, Probabilistic logic, Calculus, System requirements specification, Markov decision process, Extension (predicate logic), Time complexity - Abstract
Linear-time properties and symbolic algorithms provide a widely used framework for system specification and verification. In this framework, the verification and control questions are phrased as boolean questions: a system either satisfies (or can be made to satisfy) a property, or it does not. These questions can be answered by symbolic algorithms expressed in the μ-calculus. We illustrate how the μ-calculus also provides the basis for two quantitative extensions of this approach: a probabilistic extension, where the verification and control problems are answered in terms of the probability with which the specification holds, and a discounted extension, in which events in the near future are weighted more heavily than events in the far away future.
- Published
- 2003
16. Timed Interfaces
- Author
-
Luca de Alfaro, Thomas A. Henzinger, and Mariëlle Stoelinga
- Published
- 2002
17. Synchronous and Bidirectional Component Interfaces
- Author
-
Luca de Alfaro, Arindam Chakrabarti, Freddy Y. C. Mang, and Thomas A. Henzinger
- Subjects
Computer science, Distributed computing, Design specification, Component-based software engineering, Compatibility (mechanics), Symbolic computation, Implementation - Abstract
We present interface models that describe both the input assumptions of a component, and its output behavior. By enabling us to check that the input assumptions of a component are met in a design, interface models provide a compatibility check for component-based design. When refining a design into an implementation, interface models require that the output behavior of a component satisfies the design specification only when the input assumptions of the specification are satisfied, yielding greater flexibility in the choice of implementations. Technically, our interface models are games between two players, Input and Output; the duality of the players accounts for the dual roles of inputs and outputs in composition and refinement. We present two interface models in detail, one for a simple synchronous form of interaction between components typical in hardware, and the other for more complex synchronous interactions on bidirectional connections. As an example, we specify the interface of a bidirectional bus, with the input assumption that at any time at most one component has write access to the bus. For these interface models, we present algorithms for compatibility and refinement checking, and we describe efficient symbolic implementations.
- Published
- 2002
18. Interface Compatibility Checking for Software Modules
- Author
-
Arindam Chakrabarti, Freddy Y. C. Mang, Luca de Alfaro, Marcin Jurdziński, and Thomas A. Henzinger
- Subjects
Computer science, Programming language, Suite, Software modules, File server, Software, Sensor node, Compatibility (mechanics), Component-based software engineering, Game theory - Abstract
We present a formal methodology and tool for uncovering errors in the interaction of software modules. Our methodology consists of a suite of languages for defining software interfaces, and algorithms for checking interface compatibility. We focus on interfaces that explain the method-call dependencies between software modules. Such an interface makes assumptions about the environment in the form of call and availability constraints. A call constraint restricts the accessibility of local methods to certain external methods. An availability constraint restricts the accessibility of local methods to certain states of the module. For example, the interface for a file server with local methods open and read may assert that a file cannot be read without having been opened. Checking interface compatibility requires the solution of games, and in the presence of availability constraints, of pushdown games. Based on this methodology, we have implemented a tool that has uncovered incompatibilities in TinyOS, a small operating system for sensor nodes in ad hoc networks.
- Published
- 2002
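The availability-constraint example in the abstract above (a file cannot be read without having been opened) can be sketched as a check of call sequences against a small interface automaton. This is a toy finite-state check, not the paper's pushdown-game algorithm; the state and method names are illustrative.

```python
# state -> {available method: next state}; `read` is only available
# in the "opened" state, encoding the availability constraint.
INTERFACE = {
    "closed": {"open": "opened"},
    "opened": {"read": "opened", "close": "closed"},
}

def compatible(calls, state="closed"):
    """Return True iff the call sequence respects the interface."""
    for method in calls:
        if method not in INTERFACE[state]:
            return False           # method unavailable in this state
        state = INTERFACE[state][method]
    return True

assert compatible(["open", "read", "read", "close"])
assert not compatible(["read"])    # read before open violates the constraint
```

The game-based checking in the paper is needed once the environment's choices matter; this sketch only validates a single fixed call sequence.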
19. Process Algebra and Probabilistic Methods. Performance Modelling and Verification
- Author
-
Luca de Alfaro and Stephen Gilmore
- Subjects
Probabilistic method, Theoretical computer science, Computer science, Process calculus - Published
- 2001
20. Symbolic Algorithms for Infinite-State Games
- Author
-
Thomas A. Henzinger, Rupak Majumdar, and Luca de Alfaro
- Subjects
Class (set theory), Range (mathematics), Reachability, Computer science, Hybrid system, Combinatorial game theory, Equivalence relation, Equivalence (measure theory), Algorithm, Symbolic data analysis - Abstract
A procedure for the analysis of state spaces is called symbolic if it manipulates not individual states, but sets of states that are represented by constraints. Such a procedure can be used for the analysis of infinite state spaces, provided termination is guaranteed. We present symbolic procedures, and corresponding termination criteria, for the solution of infinite-state games, which occur in the control and modular verification of infinite-state systems. To characterize the termination of symbolic procedures for solving infinite-state games, we classify these game structures into four increasingly restrictive categories: 1. Class 1 consists of infinite-state structures for which all safety and reachability games can be solved. 2. Class 2 consists of infinite-state structures for which all ω-regular games can be solved. 3. Class 3 consists of infinite-state structures for which all nested positive boolean combinations of ω-regular games can be solved. 4. Class 4 consists of infinite-state structures for which all nested boolean combinations of ω-regular games can be solved. We give a structural characterization for each class, using equivalence relations on the state spaces of games which range from game versions of trace equivalence to a game version of bisimilarity. We provide infinite-state examples for all four classes of games from control problems for hybrid systems. We conclude by presenting symbolic algorithms for the synthesis of winning strategies ("controller synthesis") for infinite-state games with arbitrary ω-regular objectives, and prove termination over all class-2 structures. This settles, in particular, the symbolic controller synthesis problem for rectangular hybrid systems.
- Published
- 2001
21. Model Checking the World Wide Web?
- Author
-
Luca de Alfaro
- Subjects
Model checking, Theoretical computer science, Computer science, Constructive, Social Semantic Web, World Wide Web, Web page, Web design, Mashup, The Internet, Software system, Web modeling - Abstract
Web design is an inherently error-prone process. To help with the detection of errors in the structure and connectivity of Web pages, we propose to apply model-checking techniques to the analysis of the World Wide Web. Model checking the Web is different in many respects from ordinary model checking of system models, since the Kripke structure of the Web is not known in advance, but can only be explored in a gradual fashion. In particular, the model-checking algorithms cannot be phrased in ordinary µ-calculus, since some operations, such as the computation of sets of predecessor Web pages and the computation of greatest fixpoints, are not possible on the Web. We introduce constructive µ-calculus, a fixpoint calculus similar to µ-calculus, but whose formulas can be effectively evaluated over the Web; and we show that its expressive power is very close to that of ordinary µ-calculus. Constructive µ-calculus can be used not only for phrasing Web model-checking algorithms, but also for the analysis of systems having a large, irregular state space that can be only gradually explored, such as software systems. On the basis of these ideas, we have implemented the Web model checker MCWEB, and we describe some of the issues that arose in its implementation, as well as the type of errors that it was able to find.
- Published
- 2001
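The constraint described in the abstract above, that on the Web only successors (outgoing links) can be computed, means properties must be checked by gradual forward exploration rather than by predecessor or greatest-fixpoint computations. A toy sketch of such a forward check, with an invented link graph standing in for real pages:

```python
# page -> outgoing links; only successors are computable, as on the Web.
links = {"home": ["docs", "news"], "docs": ["api"], "news": [], "api": []}
broken = {"api"}                     # pages failing some local check

def reachable_errors(root):
    """Explore forward from `root`, collecting reachable broken pages."""
    seen, frontier, errors = {root}, [root], set()
    while frontier:
        page = frontier.pop()
        if page in broken:
            errors.add(page)
        for nxt in links[page]:      # successor computation = following links
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return errors

assert reachable_errors("home") == {"api"}
```

This is a least-fixpoint (reachability) computation built purely from successor steps; a greatest-fixpoint property would need the predecessors the Web cannot provide, which is the paper's motivation for a constructive calculus.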
22. Interface Theories for Component-Based Design
- Author
-
Luca de Alfaro and Thomas A. Henzinger
- Subjects
Theoretical computer science, Function block diagram, Interface model, Computer science, Component (UML), Component-based software engineering, Software development - Abstract
We classify component-based models of computation into component models and interface models. A component model specifies for each component how the component behaves in an arbitrary environment; an interface model specifies for each component what the component expects from the environment. Component models support compositional abstraction, and therefore component-based verification. Interface models support compositional refinement, and therefore component-based design. Many aspects of interface models, such as compatibility and refinement checking between interfaces, are properly viewed in a game-theoretic setting, where the input and output values of an interface are chosen by different players.
- Published
- 2001
23. Compositional Methods for Probabilistic Systems
- Author
-
Luca de Alfaro, Thomas A. Henzinger, and Ranjit Jhala
- Subjects
Nondeterministic algorithm, Knowledge representation and reasoning, Stochastic process, Principle of compositionality, Computer science, Probabilistic logic, Probability distribution, Statistical model, Propositional calculus, Semantics, Algorithm, TRACE (psycholinguistics) - Abstract
We present a compositional trace-based model for probabilistic systems. The behavior of a system with probabilistic choice is a stochastic process, namely, a probability distribution on traces, or "bundle." Consequently, the semantics of a system with both nondeterministic and probabilistic choice is a set of bundles. The bundles of a composite system can be obtained by combining the bundles of the components in a simple mathematical way. Refinement between systems is bundle containment. We achieve assume-guarantee compositionality for bundle semantics by introducing two scoping mechanisms. The first mechanism, which is standard in compositional modeling, distinguishes inputs from outputs and hidden state. The second mechanism, which arises in probabilistic systems, partitions the state into probabilistically independent regions.
- Published
- 2001
24. The Control of Synchronous Systems, Part II
- Author
-
Thomas A. Henzinger, Luca de Alfaro, and Freddy Y. C. Mang
- Subjects
Set (abstract data type), Input/output, Computer science, Composability, Control theory, Dependency relation, Control system, State (computer science), Dependent type - Abstract
A controller is an environment for a system that achieves a particular control objective by providing inputs to the system without constraining the choices of the system. For synchronous systems, where system and controller make simultaneous and interdependent choices, the notion that a controller must not constrain the choices of the system can be formalized by type systems for composability. In a previous paper, we solved the control problem for static and dynamic types: a static type is a dependency relation between inputs and outputs, and composition is well-typed if it does not introduce cyclic dependencies; a dynamic type is a set of static types, one for each state. Static and dynamic types, however, cannot capture many important digital circuits, such as gated clocks, bidirectional buses, and random-access memory. We therefore introduce more general type systems, so-called dependent and bidirectional types, for modeling these situations, and we solve the corresponding control problems. In a system with a dependent type, the dependencies between inputs and outputs are determined gradually through a game of the system against the controller. In a system with a bidirectional type, the distinction between inputs and outputs is also resolved dynamically by such a game. The game proceeds in several rounds. In each round, the system and the controller choose to update some variables dependent on variables that have already been updated. The solution of the control problem for dependent and bidirectional types is based on algorithms for solving these games.
- Published
- 2001
25. Detecting Errors Before Reaching Them
- Author
-
Thomas A. Henzinger, Freddy Y. C. Mang, and Luca de Alfaro
- Subjects
Model checking ,Theoretical computer science ,Computer science ,TheoryofComputation_LOGICSANDMEANINGSOFPROGRAMS ,Overhead (computing) ,Invariant (physics) ,Error detection and correction ,Formal methods ,Formal verification ,Outcome (probability) ,Counterexample - Abstract
Any formal method or tool is almost certainly more often applied in situations where the outcome is failure (a counterexample) rather than success (a correctness proof). We present a method for symbolic model checking that can lead to significant time and memory savings for model-checking runs that fail, while incurring only a small overhead for model-checking runs that succeed. Our method discovers an error as soon as it cannot be prevented, which can be long before it actually occurs; for example, the violation of an invariant may become unpreventable many transitions before the invariant is violated.
- Published
- 2000
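The backward analysis described in the abstract above can be illustrated with a small fixpoint computation. This is a minimal explicit-state sketch, not the paper's symbolic method, and all names are hypothetical: a state is "doomed" if it is an error state, or if every outgoing transition leads to a doomed state, so an error can be reported as soon as a doomed state is entered.

```python
# Sketch (not the paper's symbolic algorithm): compute the states from
# which reaching an error can no longer be prevented, by iterating a
# backward fixpoint until no new doomed states are found.

def doomed_states(states, successors, errors):
    """Return the states from which reaching an error is unavoidable."""
    doomed = set(errors)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s not in doomed and successors[s] and \
                    all(t in doomed for t in successors[s]):
                doomed.add(s)
                changed = True
    return doomed

# Tiny example: from state 'b' every move leads into the error state,
# so the error can be reported at 'b', one transition before it occurs.
succ = {'a': {'b', 'c'}, 'b': {'err'}, 'c': {'a'}, 'err': set()}
assert doomed_states(succ.keys(), succ, {'err'}) == {'err', 'b'}
```

The fixpoint reaches back as many transitions as the system allows, which is where the savings for failing runs come from: the search can stop well before an actual error trace is completed.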
26. Symbolic Model Checking of Probabilistic Processes Using MTBDDs and the Kronecker Representation
- Author
-
Marta Kwiatkowska, Gethin Norman, Luca de Alfaro, Roberto Segala, and David Parker
- Subjects
Model checking ,Theoretical computer science ,Binary decision diagram ,Computer science ,Probabilistic logic ,Probabilistic argumentation ,Nondeterministic algorithm ,Reachability ,Formal specification ,Probabilistic CTL ,Probability distribution ,Temporal logic ,Markov decision process ,Formal verification ,Algorithm ,Boolean data type - Abstract
This paper reports on experimental results with symbolic model checking of probabilistic processes based on Multi-Terminal Binary Decision Diagrams (MTBDDs). We consider concurrent probabilistic systems as models; these allow nondeterministic choice between probability distributions and are particularly well suited to modelling distributed systems with probabilistic behaviour, e.g. randomized consensus algorithms and probabilistic failures. As a specification formalism we use the probabilistic branching-time temporal logic PBTL which allows one to express properties such as "under any scheduling of nondeterministic choices, the probability of Φ holding until ψ is true is at least 0.78/at most 0.04". We adapt the Kronecker representation of (Plateau 1985), which yields a very compact MTBDD encoding of the system. We implement an experimental model checker using the CUDD package and demonstrate that model construction and reachability-based model checking is possible in a matter of seconds for certain classes of systems consisting of up to 10^30 states.
- Published
- 2000
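The compactness of the Kronecker representation mentioned above comes from a simple algebraic fact: the transition matrix of independently composed components is the Kronecker product of the component matrices, so only the small component matrices need to be stored. The following is an illustrative dense-matrix sketch of that fact, not the paper's MTBDD implementation.

```python
# Illustrative sketch: the product system's transition matrix is the
# Kronecker product of the component matrices (here as nested lists).

def kron(a, b):
    """Kronecker product: (A (x) B)[(i,k)][(j,l)] = A[i][j] * B[k][l]."""
    return [[a[i][j] * b[k][l]
             for j in range(len(a[0])) for l in range(len(b[0]))]
            for i in range(len(a)) for k in range(len(b))]

# Two independent 2-state stochastic components ...
p = [[0.5, 0.5], [0.0, 1.0]]
q = [[0.9, 0.1], [0.2, 0.8]]
composed = kron(p, q)  # ... give a 4x4 matrix for the product system,
# which remains stochastic: every row still sums to 1.
assert all(abs(sum(row) - 1.0) < 1e-9 for row in composed)
```

In the paper's setting the factors are encoded symbolically as MTBDDs rather than stored as dense matrices, which is what makes systems with around 10^30 states tractable.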
27. The Control of Synchronous Systems
- Author
-
Freddy Y. C. Mang, Thomas A. Henzinger, and Luca de Alfaro
- Subjects
Input/output ,Nondeterministic algorithm ,Theoretical computer science ,Hierarchy (mathematics) ,Computer science ,Dependency relation ,Semantics (computer science) ,Transition system ,State (computer science) ,Semantics ,Algorithm - Abstract
In the synchronous composition of processes, one process may prevent another process from proceeding. Compositions without a well-defined product behavior can be ruled out semantically, by insisting on the existence of certain fixed points, or syntactically, by equipping processes with types, which make the dependencies between input and output signals transparent. We classify various typing mechanisms and study their effects on the control problem. A static type enforces fixed, acyclic dependencies between input and output ports. For example, synchronous hardware without combinational loops can be typed statically. A dynamic type may vary the dependencies from state to state, while maintaining acyclicity, as in level-sensitive latches. Then, two dynamically typed processes can be syntactically compatible, if all pairs of possible dependencies are compatible, or semantically compatible, if in each state the combined dependencies remain acyclic. For a given plant process and control objective, there may be a controller of a static type, or only a controller of a syntactically compatible dynamic type, or only a controller of a semantically compatible dynamic type. We show this to be a strict hierarchy of possibilities, and we present algorithms and determine the complexity of the corresponding control problems. Furthermore, we consider versions of the control problem in which the type of the controller (static or dynamic) is given. We show that the solution of these fixed-type control problems requires the evaluation of partially ordered (Henkin) quantifiers on boolean formulas, and is therefore harder (nondeterministic exponential time) than more traditional control questions.
- Published
- 2000
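The well-typedness condition for static types described above reduces to a cycle check on the union of the components' input-output dependency relations. The following is a minimal sketch of that check; the signal names and the dictionary encoding are illustrative, not taken from the paper.

```python
# Sketch of static-type composability: a composition is well-typed iff
# the combined dependency relation over all signals is acyclic.

def is_acyclic(deps):
    """deps maps each signal to the set of signals it depends on."""
    visited, on_stack = set(), set()

    def dfs(v):
        visited.add(v)
        on_stack.add(v)
        for w in deps.get(v, ()):
            if w in on_stack or (w not in visited and not dfs(w)):
                return False  # back edge: cyclic dependency
        on_stack.discard(v)
        return True

    return all(v in visited or dfs(v) for v in deps)

# Process P: output y reads input x; process Q: output x reads input y.
# Composing them introduces the combinational loop x -> y -> x.
assert is_acyclic({'y': {'x'}})                  # P alone is well-typed
assert not is_acyclic({'y': {'x'}, 'x': {'y'}})  # composition is ill-typed
```

For dynamic types the same check is applied per state, which is exactly the "semantically compatible" condition above.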
28. Automating Modular Verification
- Author
-
Thomas A. Henzinger, Luca de Alfaro, Freddy Y. C. Mang, and Rajeev Alur
- Subjects
Controllability ,Theoretical computer science ,business.industry ,Reachability ,Computer science ,Modulo ,Distributed computing ,Transition system ,Leverage (statistics) ,Systems design ,Modular design ,business ,Automation - Abstract
Modular techniques for automatic verification attempt to overcome the state-explosion problem by exploiting the modular structure naturally present in many system designs. Unlike other tasks in the verification of finite-state systems, current modular techniques rely heavily on user guidance. In particular, the user is typically required to construct module abstractions that are neither so detailed that state exploration yields insufficient benefit, nor so coarse that the desired system properties no longer hold. In this paper, we construct abstract modules automatically, using reachability and controllability information about the concrete modules. This allows us to leverage automatic verification techniques by applying them in layers: first we compute on the state spaces of system components, then we use the results for constructing abstractions, and finally we compute on the abstract state space of the system. Our experimental results indicate that if reachability and controllability information is used in the construction of abstractions, the resulting abstract modules are often significantly smaller than the concrete modules and can drastically reduce the space and time requirements for verification.
- Published
- 1999
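One ingredient of the layered approach above, restricting a module to its reachable states, can be sketched in a few lines. This is an illustrative explicit-state fragment with hypothetical names, not the paper's implementation, and it omits the controllability side entirely.

```python
# Sketch: an abstraction of a concrete module obtained by dropping its
# unreachable states, which are dead weight in any composition.

def reachable(init, succ):
    """Forward reachability by graph search from the initial states."""
    seen, frontier = set(init), list(init)
    while frontier:
        s = frontier.pop()
        for t in succ.get(s, ()):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# States 3 and 4 are never reachable from the initial state 0.
succ = {0: [1], 1: [0, 2], 2: [2], 3: [4], 4: [3]}
abstract = {s: succ[s] for s in reachable({0}, succ)}
assert set(abstract) == {0, 1, 2}  # the abstract module is smaller
```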
29. Stochastic transition systems
- Author
-
Luca de Alfaro
- Subjects
symbols.namesake ,Systems analysis ,Stochastic process ,Semantics (computer science) ,Computer science ,Distributed computing ,Probabilistic automaton ,symbols ,Markov process ,Workload ,Algorithm ,Stochastic transition ,Reliability (statistics) - Abstract
Traditional methods for the analysis of system performance and reliability generally assume a precise knowledge of the system and its workload. Here, we present methods that are suited for the analysis of systems that contain partly unknown or unspecified components, such as systems in their early design stages.
- Published
- 1998
30. Temporal logics for the specification of performance and reliability
- Author
-
Luca de Alfaro
- Subjects
Nondeterministic algorithm ,Theoretical computer science ,Computation tree logic ,Computer science ,Semantics (computer science) ,Computer Science::Logic in Computer Science ,Probabilistic logic ,Temporal logic ,Markov decision process ,Formal verification ,Reliability (statistics) - Abstract
In this paper we present a methodology for the verification of performance and reliability properties of discrete real-time systems. The methodology relies on a temporal logic that can express bounds on the probability of events and on the average time between them. The semantics of the logics is defined with respect to timed systems that exhibit both probabilistic and nondeterministic behavior. We present model-checking algorithms for the algorithmic verification of the specifications, and we discuss their complexity.
- Published
- 1997
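The "average time between events" mentioned in the abstract above can be grounded in the standard expected-hitting-time equations for a finite Markov chain. The sketch below is illustrative and uses plain fixpoint iteration, not the paper's model-checking algorithms; the chain and the state names are made up for the example.

```python
# Illustrative sketch: the expected time to reach a target state in a
# finite Markov chain satisfies
#   h(target) = 0,   h(s) = 1 + sum_t P(s, t) * h(t)   otherwise,
# and can be approximated by fixpoint iteration.

def expected_hitting_time(P, target, iterations=10_000):
    n = len(P)
    h = [0.0] * n
    for _ in range(iterations):
        h = [0.0 if s == target else
             1.0 + sum(P[s][t] * h[t] for t in range(n))
             for s in range(n)]
    return h

# Fair-coin chain: from state 0 we reach state 1 with probability 1/2
# per step, so the expected hitting time from state 0 is 2.
P = [[0.5, 0.5], [0.0, 1.0]]
h = expected_hitting_time(P, target=1)
assert abs(h[0] - 2.0) < 1e-6
```

When nondeterminism is also present, as in the systems the paper considers, the sum is replaced by a min or max over the nondeterministic choices, yielding bounds rather than a single value.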
31. Hybrid diagrams: A deductive-algorithmic approach to hybrid system verification
- Author
-
Zohar Manna, Arjun Kapur, and Luca de Alfaro
- Subjects
Safety property ,Transformation (function) ,Theoretical computer science ,Computer science ,Computer Science::Logic in Computer Science ,Hybrid system ,Verification problem ,Temporal logic ,Hybrid automaton - Abstract
We present a methodology for the verification of temporal properties of hybrid systems. The methodology is based on the deductive transformation of hybrid diagrams, which represent the system and its properties, and which can be algorithmically checked against the specification. This check either gives a positive answer to the verification problem, or provides guidance for the further transformation of the diagrams. The resulting methodology is complete for quantifier-free linear-time temporal logic.
- Published
- 1997
32. Temporal verification by diagram transformations
- Author
-
Zohar Manna and Luca de Alfaro
- Subjects
Theoretical computer science ,Computer science ,Simple (abstract algebra) ,Diagram ,Representation (systemics) ,Temporal logic ,Graph product - Abstract
This paper presents a methodology for the verification of temporal properties of systems based on the gradual construction and algorithmic checking of fairness diagrams. Fairness diagrams correspond to abstractions of the system and its progress properties, and have a simple graphical representation.
- Published
- 1996
33. STeP: The Stanford Temporal Prover
- Author
-
Zohar Manna, Edward Y. Chang, Jaejin Lee, Anca Browne, Henny B. Sipma, Arjun Kapur, Michael Colón, Luca de Alfaro, Nikolaj Bjørner, Tomás E. Uribe, and Harish Devarajan
- Subjects
Model checking ,High-level verification ,Functional verification ,Theoretical computer science ,Computer science ,Programming language ,Runtime verification ,Temporal logic ,Verification ,Gas meter prover ,computer.software_genre ,Formal verification ,computer - Abstract
We describe the Stanford Temporal Prover (STeP), a system being developed to support the computer-aided formal verification of concurrent and reactive systems based on temporal specifications. Unlike systems based on model-checking, STeP is not restricted to finite-state systems. It combines model checking and deductive methods to allow the verification of a broad class of systems, including programs with infinite data domains, N-process programs, and N-component circuit designs, for arbitrary N. In short, STeP has been designed with the objective of combining the expressiveness of deductive methods with the simplicity of model checking. The verification process is for the most part automatic. User interaction occurs mostly at the highest, most intuitive level, primarily through a graphical proof language of verification diagrams. Efficient simplification methods, decision procedures, and invariant generation techniques are then invoked automatically to prove resulting first-order verification conditions with minimal assistance. We describe the performance of the system when applied to several examples, including the N-process dining philosophers program, Szymanski's N-process mutual exclusion algorithm, and a distributed N-way arbiter circuit.
- Published
- 1995
34. Verification in continuous time by discrete reasoning
- Author
-
Zohar Manna and Luca de Alfaro
- Subjects
Linear temporal logic ,Computer science ,Interval temporal logic ,Algorithm - Published
- 1995
35. Model checking of probabilistic and nondeterministic systems
- Author
-
Andrea Bianco and Luca de Alfaro
- Subjects
Computer Science::Robotics ,Nondeterministic algorithm ,Model checking ,Theoretical computer science ,Computer science ,Computer Science::Logic in Computer Science ,Formal specification ,Probabilistic logic ,Temporal logic ,Probabilistic behavior ,Computer Science::Formal Languages and Automata Theory ,Reliability (statistics) - Abstract
The temporal logics pCTL and pCTL* have been proposed as tools for the formal specification and verification of probabilistic systems: as they can express quantitative bounds on the probability of system evolutions, they can be used to specify system properties such as reliability and performance. In this paper, we present model-checking algorithms for extensions of pCTL and pCTL* to systems in which the probabilistic behavior coexists with nondeterminism, and show that these algorithms have polynomial-time complexity in the size of the system. This provides a practical tool for reasoning on the reliability and performance of parallel systems.
- Published
- 1995
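The quantitative core of the model-checking algorithms described above is the computation of extremal reachability probabilities in a system mixing nondeterminism and probability. The following value-iteration sketch illustrates that core with made-up state and action names; it is a minimal explicit-state illustration, not the polynomial-time algorithms of the paper.

```python
# Sketch: the maximal probability, over all resolutions of the
# nondeterminism, of reaching a goal state satisfies a Bellman
# fixpoint, approximated here by value iteration.

def max_reach_prob(actions, goal, states, iterations=1000):
    """actions[s] = list of distributions, each a dict {successor: prob}."""
    p = {s: 1.0 if s in goal else 0.0 for s in states}
    for _ in range(iterations):
        p = {s: 1.0 if s in goal else
             max(sum(pr * p[t] for t, pr in dist.items())
                 for dist in actions[s])
             for s in states}
    return p

# In state 'a' the scheduler chooses between a move reaching the goal
# with probability 0.3 and one reaching it with probability 0.5 (else
# getting stuck); the maximizing choice gives probability 0.5.
acts = {'a': [{'g': 0.3, 's': 0.7}, {'g': 0.5, 's': 0.5}],
        'g': [{'g': 1.0}], 's': [{'s': 1.0}]}
p = max_reach_prob(acts, goal={'g'}, states=['a', 'g', 's'])
assert abs(p['a'] - 0.5) < 1e-9
```

A pCTL bound such as "the probability of reaching the goal is at least 0.5 under every scheduler" is then checked by comparing the computed minimal (dually, maximal) probability against the bound.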
36. A Ground-Complete Axiomatization of Finite State Processes in Process Algebra
- Author
-
Baeten, J.C.M., Bravetti, M., Abadi, M., de Alfaro, L., Formal System Analysis, MARTÍN ABADI, LUCA DE ALFARO, M. Bravetti, and J. Baeten
- Subjects
Recursion ,Process calculus ,Modulo ,System of linear equations ,GeneralLiterature_MISCELLANEOUS ,Operational semantics ,Algebra ,TheoryofComputation_MATHEMATICALLOGICANDFORMALLANGUAGES ,Congruence (geometry) ,TheoryofComputation_LOGICSANDMEANINGSOFPROGRAMS ,Computer Science::Logic in Computer Science ,ComputingMethodologies_SYMBOLICANDALGEBRAICMANIPULATION ,Transition system ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,concurrency ,axiomatization ,PROCESS ALGEBRA ,Axiom ,ComputingMethodologies_COMPUTERGRAPHICS ,Mathematics - Abstract
We consider a generic process algebra of which the standard process algebras ACP, CCS and CSP are subalgebras of reduced expressions. In particular such an algebra is endowed with a recursion operator which computes minimal fixpoint solutions of systems of equations over processes. As model for processes we consider finite-state transition systems modulo Milner's observational congruence and we define an operational semantics for the process algebra. Over such a generic algebra we show the following. We provide a syntactical characterization (allowing as many terms as possible) for the equations involved in recursion operators, which guarantees that transition systems generated by the operational semantics are indeed finite-state. Conversely, we show that every process admits a specification in terms of such a restricted form of recursion. We then present an axiomatization which is ground-complete over such a restricted signature: an equation can be derived from the axioms between closed terms exactly when the corresponding finite-state transition systems are observationally congruent. Notably, in presenting such an axiomatization, we also show that the two standard axioms of Milner for weakly unguarded recursion can be expressed by using just a single axiom.
- Published
- 2005