359 results
Search Results
2. A Cartesian Closed Category of Domains with Almost Algebraic Bases.
- Author
Kou, Hui and Lyu, Zhenchao
- Subjects
FUNCTION spaces, CONTINUOUS functions, ANALYTIC geometry, COMPUTER science
- Abstract
In this paper, we investigate the properties of almost algebraic domains introduced by G. Hamrin and V. Stoltenberg-Hansen in 2006. We introduce a notion of M-closed basis and define a new class of domains, called ωAML-domains, which are continuous L-domains endowed with countable, almost algebraic, and M-closed bases. The main result of this paper is: the class of ωAML-domains is closed under function spaces and finite Cartesian products. Hence, the category of ωAML-domains together with Scott continuous functions is Cartesian closed. The results of this paper give an answer to an open problem posed by G. Hamrin and V. Stoltenberg-Hansen in "Two categories of effective continuous cpos, Theoretical Computer Science, 365 (2006), 216–236". [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
3. Induced Topologies on the Poset of Finitely Generated Saturated Sets.
- Author
Zhang, Wenfeng and Xu, Xiaoquan
- Subjects
TOPOLOGICAL spaces, DISTRIBUTIVE lattices, TOPOLOGY, ORDERED sets, BANACH lattices, COMPUTER science
- Abstract
In [R. Heckmann, K. Keimel, Quasicontinuous Domains and the Smyth Powerdomain, Electronic Notes in Theoretical Computer Science 298 (2013), 215–232], Heckmann and Keimel proved that a dcpo P is quasicontinuous iff the poset Fin P of nonempty finitely generated upper sets ordered by reverse inclusion is continuous. We generalize this result to general topological spaces in this paper. More precisely, for any T0 space (X, τ) and U ∈ τ, we construct a topology τF generated by the basic open subsets UF = {↑F ∈ Fin X : F ⊆ U}. It is shown that a T0 space (X, τ) is a hypercontinuous lattice iff τF is a completely distributive lattice. In particular, we prove that if a poset P satisfies property DINT^op, then P is quasi-hypercontinuous iff Fin P is hypercontinuous. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
4. Linearization of Automatic Arrays and Weave Specifications.
- Author
Sprunger, David
- Subjects
ELECTRONIC linearization, COMPUTER science, CONFERENCES & conventions, AUTOMATIC control systems, MACHINE theory, OFFICE equipment & supplies
- Abstract
Abstract: Grabmayer, Endrullis, Hendriks, Klop, and Moss [C. Grabmayer, J. Endrullis, D. Hendriks, J.W. Klop, and L.S. Moss. Automatic sequences and zip specifications. In Proceedings of the 2012 27th Annual IEEE/ACM Symposium on Logic in Computer Science, pages 335–344. IEEE Computer Society, 2012] developed a method for defining automatic sequences in terms of 'zip specifications' and proved that a sequence is automatic [J.-P. Allouche and J. Shallit. Automatic Sequences. Cambridge University Press, 2003] iff it has a zip specification where all zip terms have the same arity. This paper begins by investigating a similar definitional scheme for the higher-dimensional counterpart of automatic sequences, automatic arrays. In the course of establishing the results required for this machinery, we find an isomorphism, closely related to the z-order curve [G.M. Morton. A computer oriented geodetic data base and a new technique in file sequencing. International Business Machines Company, 1966], between a final coalgebra for arrays and the standard final coalgebra for sequences. This isomorphism preserves automaticity properties: an array is k,l-automatic iff its corresponding sequence is kl-automatic. The former notion of automaticity (k,l-automatic, note the comma) is defined for arrays as in [J.-P. Allouche and J. Shallit. Automatic Sequences. Cambridge University Press, 2003], and the latter notion is the standard notion of automaticity for sequences. It also provides a convenient way to translate between stream zip specifications and array zip specifications. [Copyright © Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
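The z-order correspondence this abstract alludes to can be sketched concretely. The following toy Python (the function names are invented here, not taken from the paper) interleaves the bits of a 2D index to obtain a 1D Morton index, and uses it to read an array out as a sequence along the z-order curve:

```python
def morton(x: int, y: int, bits: int = 8) -> int:
    """Interleave the bits of x and y to get the z-order (Morton) index."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)      # bits of x land in even positions
        z |= ((y >> i) & 1) << (2 * i + 1)  # bits of y land in odd positions
    return z

def flatten_z_order(array: list) -> list:
    """Read a square 2^n x 2^n array out as a sequence in z-order."""
    n = len(array)
    cells = [(morton(x, y), array[y][x]) for y in range(n) for x in range(n)]
    return [value for _, value in sorted(cells)]
```

For a 2x2 array `[[1, 2], [3, 4]]` this yields `[1, 2, 3, 4]`; the point of the sketch is only that the bit-interleaving map is a bijection between array positions and sequence positions, which is the shape of the isomorphism the abstract describes.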
5. Self-Quotation in a Typed, Intensional Lambda-Calculus.
- Author
Jay, Barry
- Subjects
DATA structures, SYNTAX in programming languages, PROGRAMMING languages, COMPUTER science, COMPUTER software
- Abstract
Intensional lambda-calculus adds intensional combinators to lambda-calculus to facilitate analysis. In particular, they are used to factorise data structures and programs into their components, which can be used to convert abstractions into combinators. This paper shows how to type an intensional lambda-calculus using no more than the types of System F. Even the quotation function used to access program syntax can be defined and typed, as can its inverse: the calculus supports typed self-quotation. Thus, one may freely alternate between program analysis and program execution. Proofs of all results have been verified in Coq. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
6. Distributed LTL Model Checking with Hash Compaction.
- Author
Barnat, J., Havlíček, J., and Ročkai, P.
- Subjects
DISTRIBUTED computing, HASHING, ALGORITHMS, COMPUTER science, MATHEMATICAL models, SYSTEMS theory
- Abstract
Abstract: We extend a distributed-memory explicit-state LTL model checking algorithm (OWCTY) with hash compaction. We provide a detailed description of the improved algorithm and a correctness argument in the theoretical part of the paper. Additionally, we deliver an implementation of the algorithm as part of our parallel and distributed-memory model checker DiVinE, and use this implementation for a practical evaluation of the approach, on which we report in the experimental part of the paper. [Copyright © Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
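Hash compaction itself is easy to illustrate outside the distributed OWCTY setting of the paper. In the sequential Python sketch below (names and the toy transition system are invented for illustration), the visited set stores only a fixed-size hash of each state rather than the state itself, trading memory for a small probability that a collision makes an unvisited state look visited:

```python
import hashlib

def count_reachable_hash_compacted(initial, successors, hash_bytes=8):
    """Explicit-state exploration with hash compaction: the visited set
    holds fixed-size hashes of states instead of full state vectors."""
    def h(state):
        return hashlib.sha256(repr(state).encode()).digest()[:hash_bytes]
    visited = {h(initial)}
    stack = [initial]
    count = 1
    while stack:
        state = stack.pop()
        for succ in successors(state):
            hs = h(succ)
            if hs not in visited:  # a collision here would skip a real state
                visited.add(hs)
                stack.append(succ)
                count += 1
    return count
```

With 8-byte hashes the collision probability for modest state counts is negligible, which is the practical bet the technique makes; a real model checker stores millions of states, where the memory savings matter.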
7. New Undecidability Results for Properties of Term Rewrite Systems.
- Author
Verma, Rakesh
- Subjects
ALGORITHMS, REWRITING systems (Computer science), COMPUTER systems, MACHINE theory, MATHEMATICAL models, COMPUTER science
- Abstract
Abstract: This paper is on several basic properties of term rewrite systems: reachability, joinability, uniqueness of normal forms, unique normalization, confluence, and existence of normal forms, for subclasses of rewrite systems defined by syntactic restrictions on variables. All these properties are known to be undecidable for the general class and decidable for ground (variable-free) systems. Recently, there has been impressive progress on efficient algorithms or decidability results for many of these properties. The aim of this paper is to present new results and organize existing ones to clarify further the boundary between decidability and undecidability for these properties. Another goal is to spur research towards a complete classification of these properties for subclasses defined by syntactic restrictions on variables. The proofs of the presented results may be intrinsically interesting as well due to their economy, which is partly based on improved reductions between some of the properties. [Copyright © Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
8. Transparent First-class Futures and Distributed Components.
- Author
Cansado, Antonio, Henrio, Ludovic, and Madelaine, Eric
- Subjects
COMPUTER architecture, TECHNICAL specifications, PROGRAMMING languages, DATA modeling, SYNCHRONIZATION, COMPUTER science
- Abstract
Abstract: Futures are a special kind of value that allows the synchronisation of different processes. Futures are in fact identifiers for promised results of function calls that are still awaited. When the result is necessary for the computation, the process is blocked until the result is returned. In this paper we are interested in transparent first-class futures, and their use within distributed components. We say that futures are transparent if the result is automatically and implicitly awaited upon the first access to the value, and that futures are first-class if they can be transmitted between components as ordinary objects. Because of the difficulty of identifying future objects, analysing the behaviour of components using first-class transparent futures is challenging. This paper contributes, first, a static representation for futures; second, a means to detect local deadlocks in a component system with first-class futures; and finally, extensions to interface definitions in order to avoid such deadlocks. [Copyright © Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
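The future mechanics described in this abstract can be mimicked in a few lines of Python. The sketch below (class and function names are invented here) shows the first-class aspect, since a future is an ordinary object that can be stored and passed around, but the wait has to be made explicit via `get()`; Python cannot intercept "first access" implicitly the way the transparent futures of the paper do:

```python
from concurrent.futures import ThreadPoolExecutor

_pool = ThreadPoolExecutor(max_workers=4)

class Future:
    """A first-class future: it can flow through data structures freely;
    the promised value is awaited only when actually accessed."""
    def __init__(self, fn, *args):
        self._inner = _pool.submit(fn, *args)   # computation starts aside
    def get(self):
        # Synchronisation point: blocks until the result has arrived.
        return self._inner.result()

def slow_double(x):
    return 2 * x

f = Future(slow_double, 21)   # returns immediately
passed_around = [f, f]        # futures transmitted like ordinary objects
print(passed_around[1].get()) # the access blocks, then prints 42
```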
9. Functional Active Objects: Typing and Formalisation.
- Author
Henrio, Ludovic and Kammüller, Florian
- Subjects
AUTOMATIC theorem proving, CALCULUS, RMI (Computer architecture), PROGRAMMING languages, SEMANTIC computing, COMPUTER science
- Abstract
Abstract: This paper provides a sound foundation for autonomous objects communicating by remote method invocations and futures. As a distributed extension of the ς-calculus, we define ASPfun, a calculus of functional objects, behaving autonomously and communicating by a request-reply mechanism: requests are method calls handled asynchronously and futures represent awaited results for requests. This results in a well-structured distributed object language enabling a concise representation of asynchronous method invocations. This paper first presents the ASPfun calculus and its semantics. Secondly, we provide a type system for ASPfun, which guarantees the "progress" property. Most importantly, ASPfun and its properties have been formalised and proved using the Isabelle theorem prover, and we consider this a good step toward the formalisation of distributed languages. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
10. Implementation of an Orchestration Language as a Haskell Domain Specific Language.
- Author
Campos, Marco Devesas and Barbosa, L.S.
- Subjects
HASKELL (Computer program language), PROGRAMMING languages, CALCULUS, COMPUTER algorithms, COMPUTER science, SYNCHRONIZATION, COMPUTER software
- Abstract
Abstract: Even though concurrent programming has been a hot topic of discussion in Computer Science for the past 30 years, the community has yet to settle on one, or a few, standard approaches to implementing concurrent programs. But as more and more cores inhabit our CPUs and more and more services are made available on the web, the problem of coordinating different tasks becomes increasingly relevant. The present paper addresses this problem with an implementation of the orchestration language Orc as a domain specific language in Haskell. Orc was, therefore, realized as a combinator library using the lightweight threads and the communication and synchronization primitives of the Concurrent Haskell library. With this implementation it becomes possible to create orchestrations that re-use existing Haskell code and, conversely, to re-use orchestrations inside other Haskell programs. The complexity inherent to distributed computation entails the need for the classification of efficient, reusable, concurrent programming patterns. The paper discusses how the calculus of recursive schemes used in the derivation of functional programs scales up to a distributed setting. It is shown, in particular, how to parallelize the entire class of binary tree hylomorphisms. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
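To give a flavour of the combinator style the abstract describes, here is a Python analogue (not the paper's Haskell implementation; the name `par` and the modelling of sites as callables returning iterables are choices made here) of Orc's parallel combinator, which runs several "sites" concurrently and publishes every value any of them produces:

```python
import threading
import queue

def par(*sites):
    """Orc-style parallel combinator: run every site concurrently and
    publish each value any of them produces, in arrival order."""
    out = queue.Queue()
    DONE = object()                      # sentinel marking a finished site
    def run(site):
        for value in site():
            out.put(value)
        out.put(DONE)
    for site in sites:
        threading.Thread(target=run, args=(site,)).start()
    finished = 0
    while finished < len(sites):
        item = out.get()
        if item is DONE:
            finished += 1
        else:
            yield item
```

For example, `sorted(par(lambda: [1, 2], lambda: [3]))` yields `[1, 2, 3]`; the publication order across sites is nondeterministic, which is exactly the behaviour Orc's `|` combinator exposes.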
11. Extending Constructive Logic Negation with Types.
- Author
Munoz-Hernandez, Susana and Moreno-Navarro, Juan José
- Subjects
NEGATION (Logic), LOGIC programming, PROLOG (Computer program language), PROGRAMMING language semantics, COMPUTER science
- Abstract
Abstract: Negation has traditionally been a difficult issue in Logic Programming. Most Prolog programmers have been restricted to using just a weak negation technique, like negation as failure. Many alternative semantics for achieving constructive negation were proposed over the last 20 years, but for a long time no implementation was provided, because of its exponential complexity and the difficulty of developing it. The first effective implementations of constructive negation in standard Prolog compilers became available only recently, around 2003, provided by our previous work. In this paper we present an extension of our implementations that introduces types in programs, thus improving the usability and, in some cases, the efficiency of our implementations of constructive negation. This can make constructive negation an interesting approach for use in database querying, web search, filtered search, ontology querying, coding rules, business rules, etc. Thanks to the use of types, our constructive negation can provide concrete values as results, instead of constraints (as in our previous work). We provide details about the semantics and the implementation of our approaches to classical, finite constructive, and intensional negation. The paper also includes some practical examples that additionally allow us to provide measurements of computational behavior. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
12. A Tool for Generating a Symbolic Representation of tccp Executions.
- Author
Lescaylle, Alexei and Villanueva, Alicia
- Subjects
COMPUTER algorithms, CONSTRAINT programming, EMBEDDED computer systems, COMPUTER software, COMPUTER science
- Abstract
Abstract: The Timed Concurrent Constraint language (tccp) was defined by F. de Boer et al. as an extension of the Concurrent Constraint Paradigm (Saraswat, 1993) for specifying reactive and embedded systems. In this paper, we describe the StructGenerator system which, given the specification of a tccp program, constructs a symbolic representation (a tccp Structure) modeling the behavior of that program. The resulting structure allows one to verify the program by using a model-checking algorithm. It is similar to a Kripke Structure but, due to the nature of the ccp model, it differs from the classical approach in some important points that will be described throughout the paper. The StructGenerator system, implemented in C++, takes as input a file containing the specification of a tccp program and generates the associated tccp Structure. We cover the design and implementation of StructGenerator, and we also demonstrate its functionality by executing two practical examples. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
13. A Technique to Build Debugging Tools for Lazy Functional Logic Languages.
- Author
Braßel, Bernd
- Subjects
DEBUGGING, PROGRAMMING languages, INFERENCE (Logic), COMPUTER engineering, COMPUTER science
- Abstract
Abstract: This paper is based on a recently developed technique to build debugging tools for lazy functional programming languages. With this technique it is possible to replay the execution of a lazy program with a strict semantics by recording information about unevaluated expressions. The recorded information is called an oracle and is very compact: oracles contain the number of strict steps between discarding unevaluated expressions. The technique has already been successfully employed to construct a debugger for lazy functional languages. This paper extends the technique to cover lazy functional logic languages as well. A debugging tool built with the technique can be downloaded at www-ps.informatik.uni-kiel.de/~bbr. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
14. An Evolutionary Trust and Distrust Model.
- Author
Agudo, Isaac, Fernández-Gago, Carmen, and Lopez, Javier
- Subjects
COMPUTER reliability, TRUST, COMPUTER simulation, COMPUTER science
- Abstract
Abstract: In this paper we propose a trust model where, besides considering trust and distrust, we also consider another parameter that measures the reliability of the stability of trust or distrust. The inclusion of this new parameter will allow us to use trust in a more accurate way. We consider that trust is not static but dynamic, and trust values can change over time. Thus, we also take time into account, using it as a parameter of our model; very little work has been done on the influence of time on trust. We show the applicability of our model in the scenario of reviewing papers for a conference. In this kind of process the Chair of the conference must first find suitable reviewers, a selection that can be made using our model. Once the reviewers are selected, they send their reviews to the Chair, who can also use our model in order to make the final decision about the acceptance of papers. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
15. Labelled Transitions for Mobile Ambients (As Synthesized via a Graphical Encoding).
- Author
Bonchi, Filippo, Gadducci, Fabio, and Monreale, Giacoma Valentina
- Subjects
MACHINE theory, COMPUTER interfaces, COMPUTER science, GRAPHIC methods, CASE studies, AUTOMATION
- Abstract
Abstract: The paper presents a case study on the synthesis of labelled transition systems (LTSs) for process calculi, choosing as testbed Cardelli and Gordon's Mobile Ambients (MAs). The proposal is based on a graphical encoding: each process is mapped into a graph equipped with suitable interfaces, such that the denotation is fully abstract with respect to the usual structural congruence. Graphs with interfaces are amenable to the synthesis mechanism proposed by Ehrig and König and based on borrowed contexts (BCs), an instance of relative pushouts, introduced by Leifer and Milner. The BC mechanism allows the effective construction of an LTS that has graphs with interfaces as both states and labels, and such that the associated bisimilarity is automatically a congruence. Our paper focuses on the analysis of an LTS over (processes as) graphs with interfaces, as distilled by exploiting the graphical encoding of MAs. In particular, we use the LTS on graphs to recover a suitable LTS directly defined over the structure of MAs processes. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
16. On Decidability of LTL+Past Model Checking for Process Rewrite Systems.
- Author
Křetínský, Mojmír, Řehák, Vojtěch, and Strejček, Jan
- Subjects
MODEL validation, REWRITING systems (Computer science), DECIDABILITY (Mathematical logic), MATHEMATICAL logic, OPERATOR theory, COMPUTER science
- Abstract
Abstract: The paper [Bozzelli, L., M. Křetínský, V. Řehák and J. Strejček, On decidability of LTL model checking for process rewrite systems, in: FSTTCS 2006, LNCS 4337 (2006), pp. 248–259] shows that the model checking problem for (weakly extended) Process Rewrite Systems and properties given by LTL formulae with the temporal operators strict eventually and strict always is decidable. The same paper contains an open question: whether the problem remains decidable if the set of properties is extended by also allowing the past counterparts of the mentioned operators. The current paper gives a positive answer to this question. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
17. T-homotopy and Refinement of Observation (I): Introduction.
- Author
Gaucher, Philippe
- Subjects
HOMOTOPY theory, SCIENTIFIC observation, MODEL categories (Mathematics), COMPUTER science, MATHEMATICAL analysis, PARTIALLY ordered sets
- Abstract
Abstract: This paper is the extended introduction of a series of papers about modelling T-homotopy by refinement of observation. The notion of T-homotopy equivalence is discussed, a new one is proposed, and its behaviour with respect to other constructions in dihomotopy theory is explained. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
18. A Framework for Component Categories.
- Author
Haucourt, Emmanuel
- Subjects
CATEGORIES (Mathematics), FUNCTOR theory, ALGEBRAIC topology, PARTIALLY ordered spaces, HOMOTOPY theory, COMPUTER science, MORPHISMS (Mathematics), MONOIDS
- Abstract
Abstract: This paper provides further developments in the study of the component categories introduced in [Fajstrup, L., E. Goubault, E. Haucourt and M. Raußen, Component categories and the fundamental category, APCS 12 (2004), pp. 81–108]. In particular, the component category functor is seen to be a left adjoint and hence preserves pushouts. This property is applied to prove a Van Kampen-like theorem for component categories, a point which is very important for making effective calculations. The original purpose of component categories is to suitably reduce the size of fundamental categories, which are the directed counterpart of classical fundamental groupoids (see [Higgins, P.J., "Categories and Groupoids," Mathematical Studies 32, Van Nostrand Reinhold, 1971, freely available online at http://www.tac.mta.ca/tac/reprints/]). In concrete examples, the fundamental category is "big" while the component category is "finitely generated". We take advantage of this fact to define the cohomology of a directed geometrical shape as the cohomology of its component category. The cohomology of small categories is defined in [Baues, H.J. and G. Wirsching, Cohomology of small categories, Journal of Pure and Applied Algebra 38 (1985)] and [Baues, H.J., "Combinatorial Homotopy and 4-Dimensional Complexes," De Gruyter Expositions in Mathematics 2, Walter de Gruyter, 1991]. Still, in the recent paper [Husainov, A.A., On the homology of small categories and asynchronous transition systems, Homology, Homotopy and Applications 6 (2004), pp. 439–471], the homology of small categories is defined in a very similar way and applied to the study of asynchronous transition systems. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
19. Testing Non-deterministic Stream X-machine Models and P systems.
- Author
Ipate, Florentin and Gheorghe, Marian
- Subjects
MATHEMATICAL models, MACHINE theory, COMPUTER science, ALGORITHMS, COMPUTER software, COMPUTER networks, SCALABILITY
- Abstract
Abstract: Under certain well-defined conditions, the stream X-machine testing method can produce a test set that is guaranteed to determine the correctness of an implementation. The testing method originally assumed that an implementation of each processing function or relation is proven correct before the actual testing takes place. Such a limitation was removed in a subsequent paper, but only for deterministic X-machines. This paper extends that result to non-deterministic stream X-machines and considers a conformance relationship between a specification and an implementation, rather than mere equivalence. Furthermore, it shows how this method can be applied to test a P system by building a suitable stream X-machine from the derivation tree associated with a partial computation. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
20. A Random Bag Preserving Product Operation.
- Author
Schellekens, Michel
- Subjects
DATA structures, COMPUTER algorithms, PROGRAMMING languages, LINEAR programming, JAVA programming language, COMPUTER science
- Abstract
Abstract: The author's current research programme is the development of a modular calculus for the average cost of data structuring. This modular calculus provides a novel foundation for the analysis of algorithms. Its applicability to the analysis of algorithms has been demonstrated at the Center for Efficiency-Oriented Languages (CEOL) through the design of the novel programming language and the associated average-case analysis tool DISTRI-TRACK [M. Schellekens, D. Hickey and G. Bollella, ACETT, a Linearly-Compositional Programming Language for (semi-)automated Average-Case analysis, IEEE Real-Time Systems Symposium - Work In Progress Session, 2004; M. Boubekeur, D. Hickey, J. Mc Enery and M. Schellekens, A new Approach for Modular Average-Case Timing of Real-Time Java Programs, accepted for publication in WSEAS Transactions on Computers, 2007; M. Boubekeur, D. Hickey, J. Mc Enery and M. Schellekens, Towards Modular Average-Case Timing in Real-Time Languages: An Application to Real-Time Java, accepted for publication at the 6th WSEAS International Conference on Applied Computer Science (ACS'06), Tenerife, December 2006; M. Schellekens, A Modular Calculus for the Average Cost of Data Structuring, Springer book, 250 pages, to appear, May 2008; M. Schellekens, Compositional Average-Case Analysis, preprint, under review, 2006]. Modular computations of the average cost of data structuring are possible through the fundamental notion of random bag preservation. Random bag preserving operations enable the constructive tracking of the data and the distribution of the data states during a computation. This in turn enables the (semi-)automated derivation of the average cost of the operations. Two fundamental operations enable the creation and destruction of data structures: the product operation, which is the subject of this paper, and the delete operation, which forms the subject of [M. Schellekens, Compositional Average-Case Analysis, preprint, under review, 2006]. The introduction of the entire language is well beyond the scope of this paper and will be reported in a book [M. Schellekens, A Modular Calculus for the Average Cost of Data Structuring, Springer book, 250 pages, to appear, May 2008]. The language has been implemented at CEOL and automated derivations of the average cost of data structuring are under way. Here we report on a (simplified) version of the fundamental notion of random bag preservation and demonstrate that the central product operation possesses this crucial property. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
21. Animation and Interactive Programming: A Practical Approach.
- Author
Benachour, Phillip and Edwards, Reuben
- Subjects
COMPUTER-generated imagery, COMPUTER software, SECONDARY education, SCRIPTING languages (Computer science), PROGRAMMING languages, COMPUTER science, COMPUTER programmers
- Abstract
Abstract: This paper describes work in progress on using animation software tools to teach programming principles. The motivation behind this work is to encourage students in higher education who do not see themselves as serious programmers to engage with some of the concepts and methods used in the teaching of programming. In addition, this work was used in workshops with local and regional secondary schools, focusing on the use of animation as a tool for learning programming principles. The results presented in this paper are based on feedback from first-year undergraduate students. Initial feedback from teachers, pupils and schools has been very positive, and requests for additional visits have been made. This has created an opportunity to further engage with these schools through additional workshops. Analysis and feedback from these schools will be presented in the workshop. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
22. A Tool for Programming with Interaction Nets.
- Author
Almeida, José Bacelar, Pinto, Jorge Sousa, and Vilaça, Miguel
- Subjects
COMPUTER programming, DESIGN templates, COMPUTER graphics, COMPUTER science, MATHEMATICAL notation, GRAPHIC arts, TRANSLATORS (Computer programs)
- Abstract
Abstract: This paper introduces INblobs, a visual tool developed at Minho for integrated development with Interaction Nets. Most existing tools take as input interaction nets and interaction rules represented in a textual format. INblobs is first of all a visual editor that allows users to edit interaction systems (both interaction nets and interaction rules) graphically, and to convert them to textual notation, which can then be used as input to other tools that implement reduction of nets. INblobs also allows the user to reduce nets within the tool, and includes a mechanism that automatically selects the next active pair to be reduced, following one of the given reduction strategies. The paper also describes other features of the tool, such as the creation of rules from pre-defined templates. [Copyright © Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
23. General Refinement, Part Two: Flexible Refinement.
- Author
Reeves, Steve and Streader, David
- Subjects
INFORMATION theory, CYBERNETICS, INFORMATION science, COMPUTER science, INFORMATION technology
- Abstract
Abstract: In the previous, companion paper [Reeves, S. and D. Streader, General refinement, part one: interfaces, determinism and special refinement, Proceedings of Refine 2008, Electronic Notes in Theoretical Computer Science (2008)] we introduced our general model of refinement, discussed ideas around determinism and interfaces that the general definition raised, and gave several examples showing how the general definition could be specialised to the sorts of refinement we see in the literature. In this paper we continue the story and define vertical refinement on our general model. Vertical refinement can be seen as a generalisation of what, in the literature, has been called action refinement or non-atomic refinement. Alternatively, by viewing a special model (from the previous paper) as a logical theory, vertical refinement can be seen as a theory morphism, formalised as a Galois connection. We give an example of the utility of this definition by constructing a vertical refinement between broadcast processes and interactive branching programs, and we see how interactive branching programs can be implemented on a platform which provides broadcast communication. We also show how developments that fall outside the usual, special theories of refinement can be brought into the refinement world, by giving examples of developments which were thought not to be possible using refinement. Throughout, the central, simple idea of refinement as a development process that moves from abstract to concrete while preserving certain valuable guarantees will guide us. [Copyright © Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
24. AGAPIA v0.1: A Programming Language for Interactive Systems and Its Typing System.
- Author
Dragoi, Cezara and Stefanescu, Gheorghe
- Subjects
PROGRAMMING languages, ELECTRONIC data processing, COMPUTER programming, INTERACTIVE computer systems, COMPUTER systems, COMPUTER science
- Abstract
Abstract: A model (consisting of rv-systems), a core programming language (for developing rv-programs), several specification and analysis techniques appropriate for modeling, programming and reasoning about interactive computing systems have been recently introduced by Stefanescu using register machines and space-time duality, see [Stefanescu, G. Interactive systems with registers and voices. Fundamenta Informaticae 73 (2006), 285–306. (Early draft, School of Computing, National University of Singapore, July 2004.)]. After that, Dragoi and Stefanescu have developed structured programming techniques for rv-systems and their verification, see, e.g., [Dragoi, C., and G. Stefanescu. Structured programming for interactive rv-systems. Institute of Mathematics of the Romanian Academy, IMAR Preprint 9/2006, Bucharest 2006. Dragoi, C., and G. Stefanescu. Towards a Hoare-like logic for structured rv-programs. Institute of Mathematics of the Romanian Academy, IMAR Preprint 10/2006, Bucharest, 2006. Dragoi, C., and G. Stefanescu. Implementation and verification of ring termination detection protocols using structured rv-programs. Annals of University of Bucharest, Mathematics-Informatics Series, 55 (2006), 129–138. Dragoi, C., and G. Stefanescu. Structured interactive programs with registers and voices and their verification. Draft, Bucharest, January 2007. Dragoi, C., and G. Stefanescu. On compiling structured interactive programs with registers and voices. In: “Proc. SOFSEM 2008,” 259–270. LNCS 4910, Springer, 2008.]. In the present paper a kernel programming language AGAPIA v0.1 for interactive systems is introduced. The language contains definitions for complex spatial and temporal data, arithmetic and boolean expressions, modules, and while-programming statements with their temporal, spatial, and spatio-temporal versions. 
In AGAPIA v0.1 one can write programs for open processes located at various sites, each with its own temporal window of adequate reaction to the environment. The main technical part of the paper describes a typing system for AGAPIA v0.1 programs. [Copyright © Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
25. Validating for Liveness in Hidden Adversary Systems.
- Author
-
Mukherjee, Saikat, Srinivasa, Srinath, and D, Satish Chandra
- Subjects
INTERACTIVE computer systems ,MACHINE theory ,COMPUTER systems ,COMPUTER science ,COMPUTER programming - Abstract
Abstract: Multi-stream interactive systems can be seen as “hidden adversary” systems (HAS), where the observable behaviour on any interaction channel is affected by interactions happening on other channels. One way of modelling a HAS is as a multi-process I/O automaton, where each interacting process appears as a token in a shared state space. Constraints in the state space specify how the dynamics of one process affect other processes. We define the “liveness criterion” of each process as the end objective to be achieved by the process. The problem for each process is then to achieve this objective in the face of unforeseen interferences from other processes. In an earlier paper, it was proposed that this uncertainty can be mitigated by collaboration among the disparate processes. Two types of collaboration philosophies were also suggested: altruistic collaboration and pragmatic collaboration. This paper addresses the HAS validation problem where processes collaborate altruistically. [Copyright © Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
26. Building Certified Static Analysers by Modular Construction of Well-founded Lattices.
- Author
-
Pichardie, David
- Subjects
LATTICE theory ,MODULAR programming ,PROGRAMMING languages ,ELECTRONIC data processing ,COMPUTER science - Abstract
Abstract: This paper presents fixpoint calculations on lattice structures as an example of highly modular programming in a dependently typed functional language. We propose a library of Coq module functors for constructing complex lattices using efficient data structures. The lattice signature contains a well-foundedness proof obligation which ensures termination of generic fixpoint iteration algorithms. With this library, complex well-foundedness proofs can hence be constructed in a functorial fashion. This paper demonstrates the ability of the recent Coq module system in manipulating algebraic structures and extracting efficient OCaml implementations from them. The second contribution of this work is a generic result, based on the constructive notion of accessibility predicate, about preservation of accessibility properties when combining relations. [Copyright © Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
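The functorial construction described in the abstract above can be illustrated outside Coq. The following is a minimal Python sketch with all names hypothetical: a finite lattice packaged with its order and bottom element, a product "functor" combining two lattices, and a generic fixpoint iterator whose termination rests on finiteness (a simple stand-in for the paper's well-foundedness proof obligation).

```python
class FiniteLattice:
    def __init__(self, elements, leq, bottom):
        self.elements = list(elements)
        self.leq = leq        # the partial order: leq(x, y) -> bool
        self.bottom = bottom  # least element

    def join(self, x, y):
        # least upper bound, found by scanning; fine for small lattices
        ubs = [z for z in self.elements if self.leq(x, z) and self.leq(y, z)]
        for z in ubs:
            if all(self.leq(z, w) for w in ubs):
                return z
        raise ValueError("not a lattice: no least upper bound")


def product_lattice(L1, L2):
    # the "functor": builds the product lattice from two component lattices
    return FiniteLattice(
        elements=[(a, b) for a in L1.elements for b in L2.elements],
        leq=lambda p, q: L1.leq(p[0], q[0]) and L2.leq(p[1], q[1]),
        bottom=(L1.bottom, L2.bottom),
    )


def lfp(L, f):
    # generic fixpoint iteration from bottom; terminates here because the
    # lattice is finite (the analogue of the well-foundedness obligation)
    x = L.bottom
    while True:
        y = L.join(x, f(x))
        if y == x:
            return x
        x = y
```

For example, on the four-element chain 0 ≤ 1 ≤ 2 ≤ 3, iterating `f(x) = min(x + 1, 2)` from bottom stabilizes at 2.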
27. Services and Contracts: Coalgebraically.
- Author
-
Sun, Meng
- Subjects
COMPUTER systems ,COMPUTER architecture ,COMPUTER interfaces ,AXIOMS ,COMPUTER science - Abstract
Abstract: The popularity of service-oriented computing has not been accompanied by the necessary formalization of the notions involved. This paper focuses on the development of a coalgebraic framework to support service-oriented application design. The concepts are separated into three hierarchies: interfaces, contracts and services. Interfaces are specified by functors, and services are shown to be coalgebras of such functors, which should satisfy the axioms given in the corresponding contracts. Different interfaces, contracts and services are related by the morphisms between them, and the notion of bisimulation for services, which captures the observational equivalence of services, is derived from service morphisms. [Copyright © Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
28. Integrating Refinement into Software Development Tools.
- Author
-
Yang, Lu and Stolz, Volker
- Subjects
COMPUTER software development ,COMPUTER programming ,SYSTEMS design ,SYSTEMS development ,COMPUTER science ,COMPUTER architecture - Abstract
Abstract: Providing automatic tool support for formal design by refinement transformations is a challenge. In this paper, we bring this matter to the attention of the research community and discuss a component-based model transformational approach for integrating refinement into software development tools. Models, and their consistency and correctness, in an object-oriented and component-based development process are defined in rCOS, a refinement calculus recently developed at UNU-IIST. Correctness-preserving transformations between models are formalized and proved as refinement rules in rCOS. We then discuss how these transformations can be implemented in the relations language of Query/View/Transformation (QVT), standardized by the OMG. [Copyright © Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
29. Using Matrix Graph Grammars for the Analysis of Behavioural Specifications: Sequential and Parallel Independence.
- Author
-
Pérez Velasco, Pedro Pablo and de Lara, Juan
- Subjects
PROGRAMMING languages ,ELECTRONIC data processing ,COMPUTER science ,BOOLEAN algebra ,MATRICES (Mathematics) - Abstract
Abstract: In this paper we present a new approach for the analysis of rule-based specifications of system dynamics. We model system states as simple digraphs, which can be represented with boolean matrices. Rules modelling the different state changes of the system can also be represented with boolean matrices, and therefore the rewriting is expressed using boolean operations only. The conditions for sequential independence between pairs of rules are well known in the categorical approaches to graph transformation (e.g. single and double pushout). These conditions state when two rules can be applied in either order yielding the same result. In this paper, we study the concept of sequential independence in our framework and extend it to consider derivations of arbitrary finite length. Instead of studying one-step rule advances, we study independence of rule permutations in sequences of arbitrary finite length. We also analyse the conditions under which a sequence is applicable to a given host graph. We introduce rule composition and give some preliminary results regarding parallel independence. Moreover, we improve our framework by making explicit the elements which, if present, disable the application of a rule or a sequence. [Copyright © Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
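The boolean-matrix encoding summarized in the abstract above can be sketched in a few lines. This is an illustrative approximation, not the authors' exact formulation: it omits matching and node deletion, and encodes a rule as a pair of erase/add matrices over a fixed node set, so that applying a rule is pure boolean algebra.

```python
# A simple digraph over a fixed node set is a boolean adjacency matrix.
# A rule is a pair (erase, add) of boolean matrices over the same nodes;
# applying it computes: result = add OR (G AND NOT erase).

def applicable(G, erase):
    # every edge the rule erases must be present in the host graph
    n = len(G)
    return all(G[i][j] or not erase[i][j]
               for i in range(n) for j in range(n))

def apply_rule(G, erase, add):
    n = len(G)
    return [[add[i][j] or (G[i][j] and not erase[i][j]) for j in range(n)]
            for i in range(n)]
```

For instance, on a three-node cycle 0→1→2→0, a rule that erases 0→1 and adds 1→0 reverses that single edge while leaving the rest of the graph unchanged.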
30. Computational Soundness of a Call by Name Calculus of Recursively-scoped Records.
- Author
-
Machkasova, Elena
- Subjects
REWRITING systems (Computer science) ,MACHINE theory ,COMPUTER science ,MATHEMATICAL functions ,INFORMATION theory - Abstract
Abstract: The paper presents a calculus of recursively-scoped records: a two-level calculus with a traditional call-by-name λ-calculus at a lower level and unordered collections of labeled λ-calculus terms at a higher level. Terms in records may reference each other, possibly in a mutually recursive manner, by means of labels. We define two relations: a rewriting relation that models program transformations and an evaluation relation that defines a small-step operational semantics of records. Both relations follow a call-by-name strategy. We use a special symbol called a black hole to model cyclic dependencies that lead to infinite substitution. Computational soundness is a property of a calculus that connects the rewriting relation and the evaluation relation: it states that any sequence of rewriting steps (in either direction) preserves the meaning of a record as defined by the evaluation relation. The computational soundness property implies that any program transformation that can be represented as a sequence of forward and backward rewriting steps preserves the meaning of a record as defined by the small-step operational semantics. In this paper we describe the computational soundness framework and prove computational soundness of the calculus. The proof is based on a novel inductive context-based argument for meaning preservation of substituting one component into another. [Copyright © Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
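The black-hole device mentioned in the abstract above can be illustrated with a toy interpreter. The sketch below is hypothetical and far simpler than the paper's two-level calculus: records map labels to plain values or to label references, and a label re-entered while it is still being resolved is marked with a black hole, signalling a cyclic dependency that would otherwise cause infinite substitution.

```python
BLACK_HOLE = object()  # marks a label currently being resolved

class CyclicDependency(Exception):
    pass

def resolve(record, label):
    """Look up `label` in a record whose entries are either plain values
    or references to other labels, written as ('ref', other_label).
    Hitting the black hole means the label was re-entered during its
    own resolution: the dependency is cyclic."""
    value = record[label]
    if value is BLACK_HOLE:
        raise CyclicDependency(label)
    if isinstance(value, tuple) and value[0] == 'ref':
        record[label] = BLACK_HOLE          # enter the black hole
        result = resolve(record, value[1])  # call-by-name: chase the reference
        record[label] = value               # restore the original entry
        return result
    return value
```

Resolving `a` in `{'a': ('ref', 'b'), 'b': 42}` yields 42, while resolving either label of `{'x': ('ref', 'y'), 'y': ('ref', 'x')}` raises `CyclicDependency`.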
31. The Case for Fairness of Trust Management.
- Author
-
Wierzbicki, Adam
- Subjects
COMPUTER security ,COMPUTER systems ,UTILITIES (Computer programs) ,SYSTEMS software ,COMPUTER science - Abstract
Abstract: All trust management systems must take into account the possibility of error: of misplaced trust. Therefore, regardless of whether it uses reputation or not, or is centralized or distributed, a trust management system must be evaluated with consideration for the consequences of misplaced or abused trust. Thus, the issue of fairness has always been implicitly considered in the design and evaluation of trust management systems. This paper attempts to show that an implicit consideration, using the utilitarian paradigm of maximizing the sum of agents' utilities, is insufficient. Two case studies presented in the paper concern the design of a new reputation system that uses implicit and emphasized negative feedback, and the evaluation of reputation systems' robustness to discrimination. The case studies demonstrate that considering fairness explicitly leads to different trust management system design and evaluation. Trust management systems can realize a goal of system fairness, identified with distributional fairness of agents' utilities. The realization of this goal can be achieved in a laboratory setting where all other factors that affect utilities can be excluded, and where the system can be tested using modeled adversaries. Taking the fairness of agent behavior explicitly into account when building trust or distrust can help to realize the goal of fairness of trust management systems. [Copyright © Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
32. Frameworks Based on Templates for Rigorous Model-driven Development.
- Author
-
Amálio, Nuno, Polack, Fiona, and Stepney, Susan
- Subjects
SYSTEMS engineering ,COMPUTER engineering ,COMPUTER science ,UNIFIED modeling language ,FORMAL languages ,PROGRAMMING languages - Abstract
Abstract: The engineering of systems that are acceptably correct is a hard problem. On the one hand, semi-formal modelling approaches that are used in practical, large-scale system development, such as the UML, are not amenable to formal analysis and consistency checking. On the other hand, formal modelling and analysis requires a level of competence and expertise that is not common in commercial development communities, and formal approaches are not well integrated with the rest of the development process. This paper advocates an approach to building engineering environments (or frameworks) for rigorous model-driven development (MDD) that is based on combining semi-formal notations with formal modelling languages. To support the approach, there is a formal language of templates, which captures patterns of formal development and enables an approach to proof with templates. This allows the construction of catalogues of patterns (represented as templates) and meta-theorems for frameworks. The paper presents and illustrates a framework for sequential systems that combines UML and the formal language Z. [Copyright © Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
33. Handling Model Changes: Regression Testing and Test-Suite Update with Model-Checkers.
- Author
-
Fraser, Gordon, Aichernig, Bernhard K., and Wotawa, Franz
- Subjects
COMPUTER science ,ELECTRONICS ,COMPUTER programming ,COMPUTERS ,COMPUTER systems - Abstract
Abstract: Several model-checker based methods for automated test-case generation have been proposed recently. Their performance and applicability largely depend on the complexity of the model in use. For complex models, the cost of creating a full test-suite can be significant. If the model is changed, then in general the test-suite is completely regenerated. However, only a subset of a test-suite might be invalidated by a model change. Creating a full test-suite in such a case would therefore waste time by unnecessarily recreating valid test-cases. This paper investigates methods to reduce the effort of recreating test-suites after a model is changed. This is also related to regression testing, where the number of test-cases necessary after a change should be minimized. This paper presents and evaluates methods to identify obsolete test-cases, and to extend any given test-case generation approach based on model-checkers in order to create test-cases for test-suite update or regression testing. [Copyright © Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
34. Formal Models for Informal GUI Designs.
- Author
-
Bowen, Judy and Reeves, Steve
- Subjects
COMPUTER software ,SYSTEMS design ,USER interfaces ,ELECTRONIC systems ,COMPUTER science - Abstract
Abstract: Many different methods exist for the design and implementation of software systems. These methods may be fully formal, such as the use of formal specification languages and refinement processes, or they may be totally informal, such as jotting design ideas down on paper prior to coding, or they may be somewhere in between these two extremes. Formal methods are naturally suited to underlying system behaviour, while user-centred approaches to user interface design fit comfortably with more informal approaches. The challenge is to find ways of integrating user-centred design methods with formal methods so that the benefits of both are fully realised. This paper presents a way of capturing the intentions behind informal design artefacts within a formal environment and then shows several applications of this approach. [Copyright © Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
35. Peer Review of Animations Developed by Students.
- Author
-
Oechsle, Rainer and Morth, Thiemo
- Subjects
COMPUTER scientists ,VISUALIZATION ,VISUAL programming languages (Computer science) ,COMPUTER science ,COMPUTER-generated imagery - Abstract
Abstract: This paper describes the project “Visual Knowledge Communication”, a joint project that started recently. The partners are psychologists and computer scientists from four universities of the German state of Rhineland-Palatinate. The starting point for the project was the fact that visualizations have attracted considerable interest in psychology as well as in computer science in recent years. However, psychologists and computer scientists have pursued their investigations independently from each other in the past. The project has as its main goal the support and fostering of cooperation between psychologists and computer scientists in several visualization research projects. The paper sketches the overall project. It then discusses in more detail the authors' subproject, which deals with a peer review process for animations developed by students. The basic ideas, the main goals, and the project plan are described. This paper is a work-in-progress report and therefore does not contain any results. [Copyright © Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
36. Specialization of Interaction Protocols in a Temporal Action Logic.
- Author
-
Giordano, Laura, Martelli, Alberto, and Schwind, Camilla
- Subjects
COMPUTER science ,COMPUTER architecture ,COMPUTER network protocols ,ACTION theory (Psychology) - Abstract
Abstract: Temporal logics are well suited for the specification and verification of systems of communicating agents. In this paper we adopt a social approach to agent communication, where communication is described in terms of changes to the social state, and interaction protocols in terms of permissions and commitments among agents. In particular, we make use of a temporal action theory, where a protocol is defined as a set of temporal constraints, which specify the effects and preconditions of the communicative actions on the social state. The paper addresses the problem of combining two protocols to define a new, more specialized protocol, and presents a notion of protocol specialization which is based on the well-known notion of stuttering equivalence between runs. Moreover, the paper studies sufficient conditions (verifiable from the protocol specification) which guarantee that the combination of two protocols is legal. [Copyright © Elsevier]
- Published
- 2006
- Full Text
- View/download PDF
37. Probabilistic Verification and Approximation.
- Author
-
Lassaigne, Richard and Peyronnet, Sylvain
- Subjects
MARKOV processes ,TECHNICAL specifications ,INDUSTRIAL design ,COMPUTER science - Abstract
Abstract: Model checking is an algorithmic method for automatically verifying whether a system, represented as a Kripke model, satisfies a given specification. Specifications are usually expressed by formulas of temporal logic. The first objective of this paper is to give an overview of methods able to verify probabilistic systems. Models of such systems are labelled discrete-time Markov chains, and specifications are expressed in extensions of temporal logic by probabilistic operators. The second objective is to focus on the complexity of these methods and to answer the question: can probabilistic verification be efficiently approximated? In general, the answer is negative. However, in many applications, the specification formulas can be expressed in some positive fragment of linear-time temporal logic. In this paper, we show how some simple randomized approximation algorithms can improve the efficiency of the verification of such probabilistic specifications. [Copyright © Elsevier]
- Published
- 2006
- Full Text
- View/download PDF
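The randomized-approximation idea in the abstract above can be made concrete: estimate the probability of a bounded reachability property on a discrete-time Markov chain by sampling paths. The chain and all parameters below are invented for illustration, and the paper's algorithms additionally come with (ε, δ) accuracy guarantees that plain sampling only suggests. For the example chain, the true probability of reaching `t` from `s` within 3 steps is 1 − 0.5³ = 0.875.

```python
import random

def sample_path_hits(chain, start, target, k, rng):
    # simulate one path of at most k transitions; report whether it
    # visits the target state
    state = start
    for _ in range(k):
        if state == target:
            return True
        states, probs = zip(*chain[state].items())
        state = rng.choices(states, weights=probs)[0]
    return state == target

def estimate_reach_probability(chain, start, target, k, samples, seed=0):
    # Monte Carlo estimate of P(reach target within k steps from start);
    # the sampling error shrinks as 1/sqrt(samples)
    rng = random.Random(seed)
    hits = sum(sample_path_hits(chain, start, target, k, rng)
               for _ in range(samples))
    return hits / samples
```

With `chain = {'s': {'s': 0.5, 't': 0.5}, 't': {'t': 1.0}}` and 20,000 samples, the estimate lands close to 0.875.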
38. Analytical Tableaux for da Costa's Hierarchy of Paraconsistent Logics.
- Author
-
D'Ottaviano, Itala M. Loffredo and de Castro, Milton Augustinis
- Subjects
COMPUTABLE functions ,RECURSIVE functions ,CONFERENCES & conventions ,COMPUTER science - Abstract
Abstract: In this paper we present a new hierarchy of analytical tableaux systems for da Costa's hierarchy of propositional paraconsistent logics. In our tableaux formulation, we introduce da Costa's “ball” operator “∘”, the generalized operators “k” and “(k)”, and the corresponding negations, as primitive operators, differently from what has been done in the literature, where these operators are usually defined operators. We prove a version of the Cut Rule for these systems, and also prove that they are logically equivalent to the corresponding systems of da Costa's hierarchy. The systems TNDCn constitute completely automated theorem proving systems for the systems of da Costa's hierarchy. [3] This paper corresponds to part of the results of the Doctorate Thesis of the second author, presented at UNICAMP in June 2004. Previous versions of these results were presented at the “III World Congress on Paraconsistency”, held in July 2003, and at the “XII Latin American Symposium on Mathematical Logic”, held in January 2004. This is a reduced version of a paper accepted for publication by the Journal of Applied Non-Classical Logics. The research was partially supported by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), Brazil. [Copyright © Elsevier]
- Published
- 2006
- Full Text
- View/download PDF
39. Time-awareness and Proactivity in Models of Interactive Computation.
- Author
-
Motus, Leo, Meriste, Merik, and Dosch, Walter
- Subjects
ELECTRONIC systems ,COMPUTER systems ,COMPUTER science ,CYBERNETICS - Abstract
Abstract: The paper discusses explicit properties, and the requirements to be verified, that are imposed upon software-intensive systems by their environment and by their users. Those systems are time-critical, may contain autonomous components, and may exhibit proactive behaviour. It is suggested that the analysis and verification of properties in software-intensive systems requires a time-aware model of interactive computation. The authors of this paper claim that the interpretation of time hitherto used in computer science is too simplified, and that several simultaneously maintained independent time counting systems are a necessary precondition for timing analysis of interactions. A feature space for comparing the existing approaches to interactive computing is suggested, and a potential candidate for a time-aware model of interactive computation is discussed. [Copyright © Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
40. A Verifier for Region-Annotated Java Bytecodes.
- Author
-
Cherem, Sigmund and Rugina, Radu
- Subjects
COMPUTER memory management ,COMPUTER science ,JAVA programming language ,PROGRAMMING languages - Abstract
Abstract: This paper presents a verifier for the memory-safe execution of extended Java bytecodes that support region-based memory management and explicit deallocation primitives. The verifier reads in region-annotated bytecodes that augment the standard Java bytecodes with instructions for creating and removing memory regions, allocating objects in regions, and passing regions as parameters. The verification ensures that each region is live when objects in the region are in use and that the program does not follow dangling references. The verifier requires region-safety certificates to be provided along with the bytecodes. The verification process consists of a load-time verification of method bodies, and a lazy linkage verification of method calls. Our region system supports both regions that are not lexically scoped and dangling pointers; the verifier proposed in this paper can successfully handle both of these features. Our experiments indicate that the sizes of certificates are acceptable in practice, and that region verification incurs little run-time overhead. [Copyright © Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
41. Using the Alloy Analyzer to Verify Data Refinement in Z.
- Author
-
Bolton, Christie
- Subjects
TECHNICAL specifications ,ARTIFICIAL intelligence ,ELECTRONIC data processing ,COMPUTER science - Abstract
Abstract: In the development of critical systems, standards dictate that it is necessary to first design, construct and formally analyse abstract models of the system. Developers must then verify that the final implementation is consistent with these more abstract specifications. Z is an example of a state-based specification language. It has been shown to be effective in a variety of cases, indeed it was developed as part of a joint collaboration between Oxford University's PRG and IBM Hursley for the specification of the CICS system. However, Z's main weakness is that it does not have the necessary tool support: whilst there are associated type checkers, there is no tool for automatically verifying refinement in Z. The contribution of this paper is to show how data refinement in Z can be automatically verified using the Alloy Analyzer. The soundness and joint completeness of the simulation rules for Z have already been established: here we translate them to Alloy. We then show how data types expressed in Z can also be translated to Alloy, before presenting the assertions necessary for the Alloy Analyzer to identify the retrieve relation and hence verify refinement. We present a simple example in which the Alloy Analyzer successfully identifies the retrieve relation between two data types, thereby verifying simulation and hence refinement. We conclude the paper with a discussion of the suitability of the Alloy Analyzer for such a task. [Copyright © Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
42. Intersection and Union Types for.
- Author
-
van Bakel, Steffen
- Subjects
PHILOSOPHY of language ,PROGRAMMING languages ,COMPUTER training ,COMPUTER science - Abstract
Abstract: This paper presents a notion of intersection and union type assignment for the calculus , a substitution-free language that can be used to describe the behaviour of functional programming languages at a very low level of granularity, and which was first defined in [Stéphane Lengrand. Call-by-value, call-by-name, and strong normalization for the classical sequent calculus. In Bernhard Gramlich and Salvador Lucas, editors, Electronic Notes in Theoretical Computer Science, volume 86. Elsevier, 2003; S. van Bakel, S. Lengrand, and P. Lescanne. The language §: computation and sequent calculus in classical logic. Submitted, 2004]. The calculus has been designed to give a Curry-Howard-de Bruijn correspondence to the sequent calculus for classical logic. In this paper we define a notion of sequent-style intersection type assignment on it that needs union types, and show that this notion is closed under both subject-reduction and subject-expansion. We also show that it is an extension of the Strict system for lc. [Copyright © Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
43. Object Oriented Concepts Identification from Formal B Specifications.
- Author
-
Idani, Akram and Ledru, Yves
- Subjects
CHARTS, diagrams, etc. ,STAKEHOLDERS ,GRAPHIC methods ,COMPUTER science - Abstract
Abstract: This paper addresses the graphical representation of static aspects of B specifications, using UML class diagrams. These diagrams can help stakeholders who are not familiar with the B method, such as customers or certification authorities, understand the specification. The paper first discusses some rules for a preliminary derivation of a class diagram. It then studies the consistency of the preliminarily identified concepts from an object-oriented point of view. A formal concept analysis technique is used to distinguish between consistent classes, attributes, associations and operations. The proposed technique is to incrementally add operations to the formal specification, which automatically results in the evolution of the class diagram. [Copyright © Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
44. On the Optimality of Register Saturation.
- Author
-
Touati, Sid-Ahmed-Ali
- Subjects
COMPUTER science ,DYNAMIC programming ,NONLINEAR programming ,CYBERNETICS - Abstract
Abstract: In an optimizing compiler, the register allocation process is still a crucial phase, since it reduces the spill code that damages performance. The register constraints are generally taken into account during the instruction scheduling phase of an acyclic data dependence graph (DAG): any schedule must minimize the register requirement. However, in a previous work [Sid-Ahmed-Ali Touati. Register Saturation in Superscalar and VLIW Codes. In Proceedings of The International Conference on Compiler Construction, Lecture Notes in Computer Science. Springer-Verlag, April 2001], we introduced and mathematically studied the register saturation (RS) concept. It consists of computing the exact upper bound of the register need over all the valid schedules, independently of the functional unit constraints. The goal of RS is to decouple register constraints from instruction scheduling. In this paper, we continue our theoretical efforts and present two main results. First, we give an exact solution with integer linear programming for both the problems of computing the RS of a DAG and of reducing it. Our integer program brings a new way to model register constraints that allows us to produce the lowest number of constraints and variables in the literature (till now): given a DAG with n nodes and m arcs, the numbers of integer variables and linear constraints we need are better than the size complexity of the register-constraint models in the literature. Second, we prove that the problem of reducing the register saturation is NP-hard. Our detailed experiments in this paper show that our previous heuristics [Sid-Ahmed-Ali Touati. Register Saturation in Superscalar and VLIW Codes. In Proceedings of The International Conference on Compiler Construction, Lecture Notes in Computer Science. Springer-Verlag, April 2001] are nearly optimal. We also provide a discussion arguing why the RS approach should be preferred to minimizing the register requirement. [Copyright © Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
45. Analysis of a BMAP/D/1-Timer Multiplexer.
- Author
-
Horváth, Gábor and Telek, Miklós
- Subjects
MATRICES (Mathematics) ,STOCHASTIC processes ,MARKOV processes ,COMPUTER science - Abstract
Abstract: In this paper we introduce and analyze a model of a multiplexer queue with a batch Markovian arrival process and a special, timer-based, non-work-conserving service discipline. We show that the embedded process at departures is an M/G/1-type process with proper state partitioning, which can be efficiently analyzed by matrix geometric methods. We derive the expressions to compute the distribution of the waiting time. The paper ends with numerical experiments, and points out some interesting features of the system. [Copyright © Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
46. Load Balancing Parallel Explicit State Model Checking.
- Author
-
Kumar, Rahul and Mercer, Eric G.
- Subjects
COMPUTER software ,ALGORITHMS ,COMPUTER science ,COMPUTER architecture - Abstract
Abstract: This paper first identifies some of the key concerns about the techniques and algorithms developed for parallel model checking; specifically, the inherent problems with load balancing and the large queue sizes that result from a static partition algorithm. The paper then presents a load balancing algorithm to improve run-time performance in distributed model checking, reduce the maximum queue size, and reduce the number of states expanded before error discovery. The load balancing algorithm is based on generalized dimension exchange (GDE). The paper presents an empirical analysis of the GDE-based load balancing algorithm on three different supercomputing architectures: distributed memory clusters, Networks of Workstations (NOW) and shared memory machines. The analysis shows increased speedup, lower maximum queue sizes and fewer total states explored before error discovery on each of the architectures. Finally, the paper presents a study of the communication overhead incurred by using the load balancing algorithm, which, although significant, does not offset the performance gains. [Copyright © Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
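The generalized dimension exchange (GDE) scheme named in the abstract above can be sketched numerically under simplifying assumptions: a hypercube topology and a fixed exchange parameter λ, where λ = 0.5 simply averages each pair (GDE in general tunes λ to the topology). In a real distributed model checker the "load" would be queues of unexplored states rather than numbers.

```python
def gde_sweep(load, dims, lam=0.5):
    """One GDE sweep on a hypercube of 2**dims nodes: for each dimension d,
    every node i pairs with its neighbour i XOR (1 << d) and moves a
    fraction `lam` of the load difference toward the lighter node.
    Illustrative sketch only; the total load is conserved."""
    load = list(load)
    for d in range(dims):
        for i in range(len(load)):
            j = i ^ (1 << d)
            if i < j:  # handle each pair once
                delta = lam * (load[i] - load[j])
                load[i] -= delta
                load[j] += delta
    return load
```

On a hypercube with λ = 0.5, a single sweep over all dimensions fully equalizes the load: for example, `gde_sweep([8.0, 0.0, 0.0, 0.0], dims=2)` yields `[2.0, 2.0, 2.0, 2.0]`.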
47. Evolution Through Architectural Reconciliation.
- Author
-
Avgeriou, Paris, Guelfi, Nicolas, and Perrouin, Gilles
- Subjects
SOFTWARE architecture ,UNIFIED modeling language ,MODERN logic ,COMPUTER science - Abstract
Abstract: One of the possible scenarios in a system evolution cycle is to translate an emergent set of new requirements into a software architecture design and subsequently to update the system implementation. In this paper, we argue that this form of forward engineering, even though it addresses the new system requirements, tends to overlook the implementation constraints. An architect must also reverse-engineer the system in order to make these constraints explicit. Thus, we propose an approach in which we reconcile two architectural models: one that is forward-engineered from the requirements and another that is reverse-engineered from the implementation. The final reconciled model is optimally adapted to the emergent set of requirements and to the actual system implementation. The contribution of this paper is twofold: the application of architectural reconciliation in the context of software evolution, and an approach to formalizing both the specification and the transformation of the architectural models. The architectural modeling is based upon the UML 2.0 standard, while the formalization approach is based on set theory and first-order logic. [Copyright © Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
48. Validated Proof-Producing Decision Procedures.
- Author
-
Klapper, Robert and Stump, Aaron
- Subjects
COMPUTER software ,PROGRAMMING languages ,COMPUTER science ,ELECTRONIC data processing - Abstract
Abstract: A widely used technique to integrate decision procedures (DPs) with other systems is to have the DPs emit proofs of the formulas they report valid. One problem that arises is debugging the proof-producing code; it is very easy in standard programming languages to write code which produces an incorrect proof. This paper demonstrates how proof-producing DPs may be implemented in a programming language, called Rogue-Sigma-Pi (RSP), whose type system ensures that proofs are manipulated correctly. RSP combines the Rogue rewriting language and the Edinburgh Logical Framework (LF). Type-correct RSP programs are partially correct: essentially, any putative LF proof object produced by a type-correct RSP program is guaranteed to type check in LF. The paper describes a simple proof-producing combination of propositional satisfiability checking and congruence closure implemented in RSP. [Copyright © Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
49. Logical Semantics for the Rewriting Calculus.
- Author
-
Stump, Aaron and Schürmann, Carsten
- Subjects
COMPUTER logic ,MATHEMATICAL analysis ,INFORMATION theory ,COMPUTER science - Abstract
Abstract: The Rewriting Calculus has been proposed as a language for defining term rewriting strategies. Rules are explicitly represented as terms, and are applied explicitly to other terms to transform them. Sets of rules may be applied to (sets of) terms non-deterministically to obtain sets of results. Strategies are implemented as rules which accept other rules as arguments and apply them in certain ways. This paper describes work in progress to strengthen the Rewriting Calculus by giving it a logical semantics. Such a semantics can provide crucial guidance for studying the language and increasing its expressive power. The latter is demonstrated by adding support to the Rewriting Calculus for what we call higher-form rewriting, where rules rewrite other rules. The logical semantics used is based on ordered linear logic. The paper develops the ideas through several examples. [Copyright © Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
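The abstract's core idea (rules are first-class values, applied to sets of terms to yield sets of results, with strategies as rules that take other rules as arguments) can be mimicked in a toy sketch. This is plain Python, not the Rewriting Calculus itself, and `rule`, `apply_all`, and `orelse` are invented names:

```python
# A "rule" is a first-class value: a function from a term to a SET of
# results. A failed match yields the empty set, so non-deterministic
# application of several rules simply unions the results.
def rule(pattern, result):
    return lambda term: {result} if term == pattern else set()

def apply_all(rules, terms):
    """Apply every rule to every term, collecting all results."""
    return {out for r in rules for t in terms for out in r(t)}

# A "strategy" is itself a rule combinator: try r1, fall back to r2.
def orelse(r1, r2):
    return lambda term: r1(term) or r2(term)

r1 = rule("a", "b")
r2 = rule("a", "c")
results = apply_all({r1, r2}, {"a"})   # the set {'b', 'c'}
```

Here `orelse` is a rule that accepts other rules as arguments, echoing the paper's view of strategies; it does not capture higher-form rewriting, where rules rewrite rules themselves.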
50. PVS Strategies for Proving Abstraction Properties of Automata.
- Author
-
Mitra, Sayan and Archer, Myla
- Subjects
COMPUTER simulation ,ELECTROMECHANICAL analogies ,MATHEMATICAL models ,SIMULATION methods & models ,COMPUTER science - Abstract
Abstract: Abstractions are important in specifying and proving properties of complex systems. To prove that a given automaton implements an abstract specification automaton, one must first find the correct abstraction relation between the states of the automata, and then show that this relation is preserved by all corresponding action sequences of the two automata. This paper describes tool support based on the PVS theorem prover that can help users accomplish the second task, in other words, in proving a candidate abstraction relation correct. This tool support relies on a clean and uniform technique for defining abstraction properties relating automata that uses library theories for defining abstraction relations and templates for specifying automata and abstraction theorems. The paper then describes how the templates and theories allow development of generic, high level PVS strategies that aid in the mechanization of abstraction proofs. These strategies first set up the standard subgoals for the abstraction proofs and then execute the standard initial proof steps for these subgoals, thus making the process of proving abstraction properties in PVS more automated. With suitable supplementary strategies to implement the “natural” proof steps needed to complete the proofs of any of the standard subgoals remaining to be proved, the abstraction proof strategies can form part of a set of mechanized proof steps that can be used interactively to translate high level proof sketches into PVS proofs. Using timed I/O automata examples taken from the literature, this paper illustrates use of the templates, theories, and strategies described to specify and prove two types of abstraction property: refinement and forward simulation. [Copyright © Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
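The step-matching obligation behind a forward simulation (every concrete step must be matched by an abstract step with the same action, landing in a related pair of states) can be illustrated for finite automata with a toy checker. This is a sketch only, far from the paper's PVS setting; it omits the initial-state condition, and all names (`is_forward_simulation`, the example states) are invented:

```python
# conc and abst map a state to its set of (action, next_state) steps.
# R is a candidate simulation relation: a set of (concrete, abstract)
# state pairs. The check: every concrete step from a related pair must
# be matched by an equally-labelled abstract step into a related pair.
def is_forward_simulation(R, conc, abst):
    for (c, a) in R:
        for (act, c2) in conc.get(c, set()):
            if not any(act == act2 and (c2, a2) in R
                       for (act2, a2) in abst.get(a, set())):
                return False
    return True

# Tiny example: a counter refined by a parity abstraction.
conc = {0: {("inc", 1)}, 1: set()}
abst = {"even": {("inc", "odd")}, "odd": set()}
R = {(0, "even"), (1, "odd")}   # is_forward_simulation(R, conc, abst) holds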