Search Results (31 results)
2. A framework for defining coupling metrics.
- Author
- Tempero, Ewan and Ralph, Paul
- Subjects
- COMPUTER programming, COMPUTER science, REIFICATION, PHILOSOPHY, MATHEMATICAL models
- Abstract
Many metrics have been proposed to measure coupling—the degree of association between modules in a system. They have often been described in different ways, hindering comparison and research. Their definitions are often incomplete with respect to particular language features, meaning that different tool developers may implement the same metric differently. This complicates comparing results from studies that use different tools. This paper therefore aims to define coupling metrics consistently and unambiguously. The paper describes a model of coupling that uses the reification of the concept of dependency as its fundamental unit. Based on this model, it defines a framework for defining coupling metrics. It shows how to define several well-known coupling metrics in the framework, and how defining different metrics based on the same model facilitates direct comparisons. It discusses how the framework resolves issues due to incomplete metric definitions, such as different language features. This formal framework is sufficiently simple that it can be implemented in such a way as to provide multiple metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
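The abstract above describes coupling metrics defined over a single reified dependency model. As a rough, hypothetical sketch (this is not the paper's actual framework; the `Dependency` record shape and both metric definitions here are assumptions for illustration), two metrics defined over one shared dependency model stay directly comparable:

```python
# Sketch: coupling metrics over reified dependencies. Each dependency
# is one record (source, target, kind); metrics are counts over this
# shared model, so two metrics can be compared term by term.

from collections import namedtuple

Dependency = namedtuple("Dependency", ["source", "target", "kind"])

def cbo(module, deps):
    """Coupling Between Objects: number of distinct other modules that
    `module` depends on or that depend on it."""
    partners = {d.target for d in deps if d.source == module}
    partners |= {d.source for d in deps if d.target == module}
    partners.discard(module)
    return len(partners)

def fan_out(module, deps, kinds=None):
    """Outgoing coupling, optionally restricted to dependency kinds
    (e.g. only method calls, excluding inheritance)."""
    return len({d.target for d in deps
                if d.source == module and d.target != module
                and (kinds is None or d.kind in kinds)})

deps = [
    Dependency("A", "B", "call"),
    Dependency("A", "C", "inherit"),
    Dependency("B", "A", "call"),
    Dependency("C", "D", "call"),
]
print(cbo("A", deps))                # A is coupled to B and C
print(fan_out("A", deps, {"call"}))  # only call-dependencies from A
```

Because both metrics read the same reified dependencies, disagreement between two tools reduces to a visible disagreement about which dependency `kind`s to count, which is the kind of ambiguity the paper's framework aims to eliminate.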
3. Measuring the significance of inconsistency in the Viewpoints framework.
- Author
- Mu, Kedian, Jin, Zhi, Liu, Weiru, Zowghi, Didar, and Wei, Bo
- Subjects
- COMPUTER software development, PROTOTYPES, COMPUTER systems, COMPUTER science, PROOF theory, COMPARATIVE studies
- Abstract
Measuring inconsistency is crucial to effective inconsistency management in software development. A complete measurement of inconsistency should focus on not only the degree but also the significance of inconsistency. However, most of the approaches available only take the degree of inconsistency into account. The significance of inconsistency has not yet been given much-needed consideration. This paper presents an approach for measuring the significance of inconsistency arising from different viewpoints in the Viewpoints framework. We call an individual set of requirements belonging to different viewpoints a combined requirements collection in this paper. We argue that the significance of inconsistency arising in a combined requirements collection is closely associated with global priority levels of requirements involved in the inconsistency. Here we assume that the global priority level of an individual requirement captures the relative importance of every viewpoint including this requirement as well as the local priority level of the requirement within the viewpoint. Then we use the synthesis of global priority levels of all the requirements in a combined collection to measure the significance of the collection. Following this, we present a scoring matrix function to measure the significance of inconsistency in an inconsistent combined requirements collection, which describes the contribution made by each subset of the requirements collection to the significance of the set of requirements involved in the inconsistency. An ordering relationship between inconsistencies of two combined requirements collections, termed more significant than, is also presented by comparing their significance scoring matrix functions. Finally, these techniques were implemented in a prototype tool called IncMeasurer, which we developed as a proof of concept. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
4. ASPfun: A typed functional active object calculus
- Author
- Henrio, Ludovic, Kammüller, Florian, and Lutz, Bianca
- Subjects
- REMOTE sensing, SEMANTIC computing, DISTRIBUTED computing, COMPUTER science, PROGRAMMING languages, INFORMATION technology
- Abstract
This paper provides a sound foundation for autonomous objects communicating by remote method invocations and futures. As a distributed extension of the ς-calculus we define ASPfun, a calculus of functional objects, behaving autonomously and communicating by a request-reply mechanism: requests are method calls handled asynchronously and futures represent awaited results for requests. This results in an object language enabling a concise representation of a set of active objects interacting by asynchronous method invocations. This paper first presents the ASPfun calculus and its semantics. Then, we provide a type system for ASPfun which guarantees the “progress” property. Most importantly, ASPfun and its properties have been formalised and proved using the Isabelle theorem prover, and we consider this an important step in the formalization of distributed languages. This work was also an opportunity to study different binder representations and experiment with two of them in the Isabelle/HOL theorem prover. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
5. Connectors as designs: Modeling, refinement and test case generation
- Author
- Meng, Sun, Arbab, Farhad, Aichernig, Bernhard K., Aştefănoaei, Lăcrămioara, de Boer, Frank S., and Rutten, Jan
- Subjects
- MATHEMATICAL models, SERVICE-oriented architecture (Computer science), PROGRAMMING languages, SEMANTIC computing, COMPUTER science, INFORMATION theory
- Abstract
Over the past years, the need for high-confidence coordination mechanisms has intensified as new technologies have appeared for the development of service-oriented applications, making formalization of coordination mechanisms critical. Unifying Theories of Programming (UTP) provide a formal semantic foundation not only for programming languages but also for various expressive specification languages. A key concept in UTP is design: the familiar pre/post-condition pair that describes a contract. In this paper we use UTP to formalize Reo connectors, whereby connectors are interpreted as designs in UTP. This model can be used as a semantic foundation for proving properties of connectors, such as equivalence and refinement relations between connectors. Furthermore, it can be used as a reference document for developing tool support for Reo, such as test case generators. A fault-based method to generate test cases for component connectors from specifications is also provided in this paper. For connectors, faults are caused by possible errors during the development process, such as wrongly used channels, missing or redundant subcircuits, or circuits with wrongly constructed topology. We give test cases and connectors a unifying formal semantics by using the notion of design in UTP, and generate test cases by solving constraints obtained from a specification and a faulty implementation. A prototype serves to demonstrate the automation of the approach. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
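The pre/post-condition "design" and the refinement relation mentioned in the abstract above can be illustrated over a finite state space. This is a toy sketch of the standard UTP-style refinement condition (a refining design may weaken the precondition and strengthen the postcondition), not the paper's Reo formalization; the contracts and the state space are invented for illustration:

```python
# A "design" is modelled as a (precondition, postcondition) pair:
# pre is a predicate over an initial state, post over (initial, final)
# state pairs.

def refines(abstract, concrete, states):
    """True iff `concrete` refines `abstract` over the finite state
    space: the concrete design must accept at least the states the
    abstract one accepts, and on those states allow only final states
    the abstract postcondition permits."""
    pre_a, post_a = abstract
    pre_c, post_c = concrete
    if not all(pre_c(s) for s in states if pre_a(s)):
        return False
    return all(post_a(s, t)
               for s in states if pre_a(s)
               for t in states if post_c(s, t))

states = range(4)
# abstract contract: from any state, terminate in an even state
abstract = (lambda s: True, lambda s, t: t % 2 == 0)
# concrete contract: always terminate in state 0
concrete = (lambda s: True, lambda s, t: t == 0)

print(refines(abstract, concrete, states))  # stronger post: a refinement
print(refines(concrete, abstract, states))  # weaker post: not one
```

Interpreting connectors as such pairs is what lets equivalence and refinement between connectors be checked with one uniform definition.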
6. A didactic object-oriented, prototype-based visual programming environment.
- Author
- García Perez-Schofield, Baltasar and Ortin, Francisco
- Subjects
- VISUAL environment, OBJECT-oriented programming, JAVASCRIPT programming language, COMPUTER engineering, COMPUTER science students, COMPUTER science, SOFTWARE architecture
- Abstract
Object-oriented programming is widely used in both industry and education. The most common model of object-oriented programming is the class-based one. However, popular languages not implementing this model are gaining traction as time goes by. This alternative model is the prototype-based one, with one key characteristic: there are no classes. In this paper, a visual tool is proposed as a vehicle for learning prototype-based object-oriented programming, present, for instance, in Self, Lua, or JavaScript. This software has been in use for three years in a subject of the Computer Science Engineering degree at the University of Vigo. Highlights • Pooi is an interactive environment which updates with each instruction. • The system sports a diagram viewer, an object inspector, and a REPL. • The software was designed for undergraduate students of computer science engineering. • This tool has been used successfully in lecturing object-oriented programming. • Pooi is free, offering also the sources, and a set of tutorials and examples: http://jbgarcia.webs.uvigo.es/prys/pooi/. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
7. Simulation refinement for concurrency verification
- Author
- Hesselink, Wim H.
- Subjects
- COMPUTER simulation, VERIFICATION of computer systems, COMPLETENESS theorem, SEMANTICS, METHODOLOGY, COMPUTER science
- Abstract
In recent years, we applied and extended the theory of Abadi and Lamport (1991) on the existence of refinement mappings. The present paper gives an overview of our extensions of the theory. For most concepts we provide examples or pointers to case studies where they occurred. The paper presents the results on semantic completeness. It sketches how the theory is related to other formalisms in the area. It discusses the tension between semantic completeness and methodological convenience. It concludes with our experience with the theorem provers NQTHM and PVS that were used during these projects. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
8. Evaluation of a process for architectural assumption management in software development.
- Author
- Yang, Chen, Liang, Peng, and Avgeriou, Paris
- Subjects
- COMPUTER architecture, COMPUTER software development -- Management, SOFTWARE engineering, SOFTWARE maintenance, COMPUTER science
- Abstract
Highlights • An architectural assumption management process with guidelines is developed. • Two case studies were conducted to validate the process. • The process can help to make architectural assumptions explicit. • The process can help to identify and reduce invalid architectural assumptions. Abstract Context Architectural assumption management is critical to the success of software development projects. In this paper, we propose an Architectural Assumption Management (AAM) process, comprised of four AAM activities: Architectural Assumption Making, Description, Evaluation, and Maintenance. Objective To evaluate the AAM process regarding the ease of understanding and the effort of conducting the AAM process, as well as the effectiveness of using the AAM process to make architectural assumptions explicit and to identify and reduce invalid architectural assumptions. Method An explanatory study with 88 first-year master's students in software engineering, and an exploratory study with five practitioners from five companies. Results (1) the ease of understanding the AAM process is moderate for first-year master's students but easy for practitioners; (2) the effort of conducting the AAM process is moderate for first-year master's students; (3) Making and Evaluation took the students more time than Description and Maintenance; (4) the practitioners considered Evaluation the most time-consuming activity; (5) the AAM process can help to make architectural assumptions explicit and to identify and reduce invalid architectural assumptions in projects. The majority of the students and practitioners agreed that Architectural Assumption Evaluation is the most helpful activity for all three aspects. For the other activities, opinions about their helpfulness differed; and (6) various factors were identified that can impact the aforementioned results. Being aware of and properly adjusting these factors can facilitate the application of the AAM process in projects. Conclusions The AAM process aims at systematically managing architectural assumptions in software development. The results of the case studies provide preliminary empirical evidence for the evaluation of the AAM process. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
9. Specifying and executing optimizations for generalized control flow graphs.
- Author
- Mansky, William, Gunter, Elsa L., Griffith, Dennis, and Adams, Michael D.
- Subjects
- PROGRAM transformation, GRAPH theory, COMPUTER science, MATHEMATICAL optimization, GENERALIZABILITY theory
- Abstract
Optimizations performed by compilers, usually expressed as rewrites on program graphs, are a core part of modern compilers. However, even production compilers have bugs, and these bugs are difficult to detect and resolve. In this paper we present Morpheus, a domain-specific language for formal specification of program transformations, and describe its executable semantics. The fundamental approach of Morpheus is to describe program transformations as rewrites on control flow graphs with temporal logic side conditions. The syntax of Morpheus allows cleaner, more comprehensible specifications of program optimizations; its executable semantics allows these specifications to act as prototypes for the optimizations themselves, so that candidate optimizations can be tested and refined before going on to include them in a compiler. We demonstrate the use of Morpheus to state, test, and refine the specification of a variety of transformations, including a simple loop peeling transformation for single-threaded code and a redundant store elimination optimization on parallel programs. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
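One of the transformations named in the abstract above, redundant store elimination, can be sketched on a single straight-line basic block. Morpheus specifies such rewrites on full control flow graphs with temporal-logic side conditions; this toy (with an invented three-tuple instruction form and no aliasing or concurrency) only shows the underlying rewrite:

```python
# Minimal sketch of redundant store elimination on one basic block.
# Instructions are tuples: ("store", var, expr) or ("load", var).

def eliminate_redundant_stores(block):
    """Drop a store whose value is overwritten by a later store to the
    same variable before any load of that variable can observe it."""
    kept = []
    for i, instr in enumerate(block):
        if instr[0] == "store":
            var = instr[1]
            redundant = False
            for later in block[i + 1:]:
                if later[0] == "load" and later[1] == var:
                    break                  # value is observed: keep it
                if later[0] == "store" and later[1] == var:
                    redundant = True       # overwritten first: drop it
                    break
            if redundant:
                continue
        kept.append(instr)
    return kept

block = [
    ("store", "x", "1"),   # dead: x is rewritten before any load
    ("store", "x", "2"),
    ("load", "x"),
    ("store", "y", "3"),
]
print(eliminate_redundant_stores(block))
```

The temporal side condition Morpheus would attach is visible even here: the inner scan is checking "no load of `var` occurs until the next store to `var`", a property naturally phrased in temporal logic over paths through the CFG.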
10. The effect of refactoring on change and fault-proneness in commercial C# software.
- Author
- Gatrell, M. and Counsell, S.
- Subjects
- SOFTWARE refactoring, DEBUGGING, SOFTWARE architecture, STATISTICAL correlation, COMPUTER science
- Abstract
Refactoring is a process for improving the internal characteristics and design of software while preserving its external behaviour. Refactoring has been suggested as a positive influence on the long-term quality and maintainability of software and, as a result, we might expect refactored software to have a lower future change or fault propensity. Conversely, many studies show a correlation between change and future faults; so application of a refactoring may in itself increase future fault propensity, negating any benefit of the refactoring. In this paper, we determine whether the refactoring process reaps future maintenance benefits and, as a consequence, results in software with a lower propensity for both faults and change. We studied a large, commercial software system over a twelve-month period and identified a set of refactored classes during the middle four months of the study; a bespoke tool was used to detect occurrences of fifteen types of refactoring. We then examined the fault- and change-proneness of the same set of refactored classes in the four months prior to, during, and after the period of refactoring to determine if change or fault activity was reduced either during or after the period of refactoring studied. We also compared these trends with the remaining classes in the system that had not been refactored over the same periods. Results revealed that refactored classes experienced a lower change-proneness in the period after refactoring and were significantly less fault-prone during and after the period of refactoring, even when accounting for the effects of change. The study therefore presents concrete evidence of the benefits of refactoring in these two senses. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
11. Formalizing hybrid systems with Event-B and the Rodin Platform.
- Author
- Su, Wen, Abrial, Jean-Raymond, and Zhu, Huibiao
- Subjects
- HYBRID systems, COMPUTER simulation, COMPUTER science
- Abstract
This paper presents the development of hybrid systems with Event-B and the Rodin Platform. It follows the seminal approach introduced at the turn of the century in Action Systems. Many examples that have been entirely proved with the Rodin Platform illustrate our approach. We propose to complement the Event-B/Rodin Platform approach with the usage of Matlab, either to simulate examples with correct as well as incorrect sets of parameters, or to use the analytical power of Matlab to complement the usage of Event-B. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
12. Guest editors’ introduction to the 4th issue of Experimental Software and Toolkits (EST-4).
- Author
- Mens, Kim, van den Brand, M.G.J., and Kienle, Holger M.
- Subjects
- COMPUTER software development, COMPUTER science, COMPUTER programming, COMPUTER architecture, VISUALIZATION, ARCHITECTURAL design
- Abstract
Experimental software and toolkits play a crucial role in computer science. Elsevier’s Science of Computer Programming special issues on Experimental Software and Toolkits (EST) provide a means for academic tool builders to get more visibility and credit for their work, by publishing a paper along with the corresponding system in a peer-reviewed journal. Typically, tools are presented from both a user and a developer perspective, addressing tool-building issues such as architecture and design, requirements, methodologies and process aspects. This is already the fourth edition of EST with no less than 17 published systems covering application areas ranging from software analysis and visualization to teaching and software development support. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
13. Incremental construction of systems: An efficient characterization of the lacking sub-system.
- Author
- Santone, Antonella, Vaglini, Gigliola, and Villani, Maria Luisa
- Subjects
- COMPUTER systems, SOFTWARE engineering, COMPUTER software, AUTOMATION, SCALABILITY, COMPUTER science
- Abstract
Software engineering research is driven by the aim of making software development more dynamic, flexible and evolvable. Nowadays the emphasis is on the evolution of pre-existing sub-systems and component and service-based development, where often only a part of the system is totally under control of the designer, most components being remotely operated by external vendors. In this context, we tackle the following problem: given the formal specification of the (incomplete) system, say S, already built, how to characterize collaborators of S to be selected, based on a given communication interface, so that a given property is satisfied. Using properties described by temporal logic formulae and systems by CCS processes, if φ is the formula to be satisfied by the complete system, an efficient and automatic procedure is defined to identify a formula ψ such that, for each existing process P satisfying ψ, the composition of S and P satisfies φ. Important features of this result are simplicity of the derived property ψ, compared to the original one, and scalability of the verification process. Such characteristics are necessary for applying the method to both incremental design and system evolution scenarios where S is already in place, and one needs to understand the specification of the functionality of the new component that should correctly interact with S. Indeed, in general, finding a suitable partner for S is easier than finding a complete system satisfying the global property. Moreover, in this paper it is shown how ψ can be used also to select a set of possible candidate processes through a property-directed and structural heuristic. From the verification point of view, the description of the lacking component through a logic formula guarantees correctness of the integration with S of any process that exhibits a behaviour compliant with the inferred formula. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
14. Invariant assertions, invariant relations, and invariant functions.
- Author
- Mraihi, Olfa, Louhichi, Asma, Jilani, Lamia Labed, Desharnais, Jules, and Mili, Ali
- Subjects
- INVARIANTS (Mathematics), ASSERTIONS (Logic), DOCUMENTATION software, COMPUTER software, COMPUTER programming, COMPUTER science
- Abstract
Invariant assertions play an important role in the analysis and documentation of while loops of imperative programs. Invariant functions and invariant relations are alternative analysis tools that are distinct from invariant assertions but are related to them. In this paper we discuss these three concepts and analyze their relationships. The study of invariant functions and invariant relations is interesting not only because it provides alternative means to analyze loops, but also because it gives us insights into the structure of invariant assertions, hence it may help us enhance techniques for generating invariant assertions. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
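The distinction the abstract above draws can be made concrete on a small loop. The loop and its invariant below are a hypothetical textbook example, not taken from the paper: an invariant assertion is a predicate that holds at every iteration boundary, while an invariant function is an expression over the loop variables whose value never changes across iterations.

```python
# Illustration: invariant assertion vs. invariant function for a
# while loop computing integer division by repeated subtraction.

def integer_division(a, b):
    """Return (a // b, a % b) by repeated subtraction."""
    assert a >= 0 and b > 0
    q, r = 0, a
    while r >= b:
        # invariant assertion: the state always satisfies a == q*b + r
        assert a == q * b + r and r >= 0
        q, r = q + 1, r - b
    # invariant plus exit condition gives the postcondition
    assert a == q * b + r and 0 <= r < b
    return q, r

# The invariant function view of the same fact: f(q, r) = q*b + r
# evaluates to the constant a throughout the loop's execution.
print(integer_division(17, 5))
```

The assertion and the function express the same loop property in different forms, which is exactly the kind of relationship the paper analyzes.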
15. Structural reconfiguration of systems under behavioral adaptation
- Author
- Canal, Carlos, Cámara, Javier, and Salaün, Gwen
- Subjects
- ADAPTIVE computing systems, DYNAMICAL systems, COMPUTER software, COMPUTER interfaces, INFORMATION technology, COMPUTER science
- Abstract
A major asset of modern systems is to dynamically reconfigure themselves to cope with failures or component updates. Nevertheless, designing such systems with off-the-shelf components is hardly feasible: components are black-boxes that can only interact with others on compatible interfaces. Part of the problem is solved through Software Adaptation techniques, which compensate for mismatches between interfaces. Our approach aims at using results of Software Adaptation in order to also provide reconfiguration capabilities to black-box components. This paper first formalizes a framework that unifies behavioral adaptation and structural reconfiguration of components. This formalization is used for statically detecting whether it is possible to reconfigure a system. In a second part, we present five notions of reconfiguration: history-aware reconfiguration, future-aware reconfiguration, property-compliant reconfiguration, one-way reconfigurability, and full reconfigurability. For each of these notions, its relevant properties are presented, and they are illustrated on simple yet realistic examples. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
16. Formal modeling of evolving self-adaptive systems
- Author
- Khakpour, Narges, Jalili, Saeed, Talcott, Carolyn, Sirjani, Marjan, and Mousavi, MohammadReza
- Subjects
- COMPUTER simulation, ADAPTIVE computing systems, INFORMATION technology, COMPUTER science, SEMANTIC computing, EQUIVALENCE relations (Set theory)
- Abstract
In this paper, we present a formal model, named PobSAM (Policy-based Self-Adaptive Model), for developing and modeling self-adaptive evolving systems. In this model, policies are used as a mechanism to direct and adapt the behavior of self-adaptive systems. A PobSAM model is a collection of autonomous managers and managed actors. The managed actors are dedicated to the functional behavior while the autonomous managers govern the behavior of managed actors by enforcing suitable policies. A manager has a set of configurations including two types of policies: governing policies and adaptation policies. To adapt the system behavior in response to the changes, the managers switch among different configurations. We employ the combination of an algebraic formalism and an actor-based model to specify this model formally. Managed actors are expressed by an actor model. Managers are modeled as meta-actors whose configurations are described using a multi-sorted algebra called CA. We provide an operational semantics for PobSAM using labeled transition systems. Furthermore, we provide behavioral equivalence of different sorts of CA in terms of splitting bisimulation and prioritized splitting bisimulation. Equivalent managers send the same set of messages to the actors. Using our behavioral equivalence theory, we can prove that the overall behavior of the system is preserved by substituting a manager by an equivalent one. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
17. SAT-solving in CSP trace refinement
- Author
- Palikareva, Hristina, Ouaknine, Joël, and Roscoe, A.W.
- Subjects
- COMPUTER algorithms, BOOLEAN algebra, SATISFIABILITY (Computer science), COMPUTER science, ERROR analysis in mathematics, ALGEBRA, ENCODING
- Abstract
In this paper, we address the problem of applying SAT-based bounded model checking (BMC) and temporal k-induction to asynchronous concurrent systems. We investigate refinement checking in the process-algebraic setting of Communicating Sequential Processes (CSP), focusing on the CSP traces model which is sufficient for verifying safety properties. We adapt the BMC framework to the context of CSP and the existing refinement checker FDR, yielding bounded refinement checking which also lays the foundation for tailoring the k-induction technique. As refinement checking reduces to checking for reverse containment of possible behaviours, we exploit the SAT-solver to decide bounded language inclusion as opposed to bounded reachability of error states, as in most existing model checkers. Due to the harder problem to decide and the presence of invisible silent actions in process algebras, the original syntactic translation of BMC to SAT cannot be applied directly and we adopt a semantic translation algorithm based on watchdog transformations. We propose a Boolean encoding of CSP processes resting on FDR’s hybrid two-level approach for calculating the operational semantics using supercombinators. We have implemented a prototype tool, SymFDR, written in C++, which uses FDR as a shared library for manipulating CSP processes and the state-of-the-art incremental SAT-solver MiniSAT 2.0. Experiments with BMC indicate that in some cases, especially in complex combinatorial problems, SymFDR significantly outperforms FDR and even copes with problems that are beyond FDR’s capabilities. SymFDR in k-induction mode works reasonably well for small test cases, but is inefficient for larger ones as the induction threshold becomes too large, due to concurrency. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
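The "bounded" part of BMC described in the abstract above can be shown without a SAT solver: unroll the transition relation up to a depth bound and look for a violation. This explicit-state stand-in is not SymFDR's SAT encoding or its trace-refinement check; the toy transition system and property are invented:

```python
def bounded_check(init, step, safe, bound):
    """Explore all states reachable within `bound` steps; return a
    counterexample trace violating `safe`, or None if none exists
    within the bound (the analogue of an UNSAT answer at depth k)."""
    frontier = [(s, (s,)) for s in init]
    seen = set(init)
    for _ in range(bound + 1):
        for state, trace in frontier:
            if not safe(state):
                return trace
        nxt = []
        for state, trace in frontier:
            for s2 in step(state):
                if s2 not in seen:
                    seen.add(s2)
                    nxt.append((s2, trace + (s2,)))
        frontier = nxt
    return None

# toy system: a counter modulo 8; "safe" means never reaching 5
trace = bounded_check(init=[0],
                      step=lambda s: [(s + 1) % 8],
                      safe=lambda s: s != 5,
                      bound=10)
print(trace)
```

SAT-based BMC encodes exactly this bounded search symbolically as one propositional formula, which is what lets MiniSAT-class solvers handle state spaces far too large to enumerate like this.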
18. Towards a notion of unsatisfiable and unrealizable cores for LTL
- Author
- Schuppan, Viktor
- Subjects
- CONSTRAINT satisfaction, COMPUTER programming, PROOF theory, SEARCH algorithms, MATHEMATICAL logic, COMPUTER science
- Abstract
Unsatisfiable cores, i.e., parts of an unsatisfiable formula that are themselves unsatisfiable, have important uses in debugging specifications, speeding up search in model checking or SMT, and generating certificates of unsatisfiability. While unsatisfiable cores have been well investigated for Boolean SAT and constraint programming, the notion of unsatisfiable cores for temporal logics such as LTL has not received much attention. In this paper we investigate notions of unsatisfiable cores for LTL that arise from the syntax tree of an LTL formula, from converting it into a conjunctive normal form, and from proofs of its unsatisfiability. The resulting notions are more fine-grained than existing ones. We illustrate the benefits of the more fine-grained notions on examples from the literature. We extend some of the notions to realizability and we discuss the relationship of unsatisfiable and unrealizable cores with the notion of vacuity. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
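For the well-studied Boolean case that the abstract above contrasts with LTL, an unsatisfiable core can be extracted by deletion: repeatedly drop any clause whose removal keeps the set unsatisfiable. This brute-force sketch (with an invented set-of-literals clause encoding; nothing like the LTL notions in the paper) shows the idea:

```python
from itertools import product

def satisfiable(clauses):
    """Brute-force SAT over the variables appearing in `clauses`.
    A clause is a set of literals such as {"a", "!b"}."""
    vars_ = sorted({lit.lstrip("!") for c in clauses for lit in c})
    for bits in product([False, True], repeat=len(vars_)):
        env = dict(zip(vars_, bits))
        if all(any(env[l.lstrip("!")] != l.startswith("!") for l in c)
               for c in clauses):
            return True
    return False

def unsat_core(clauses):
    """Deletion-based core: drop each clause not needed to keep the
    set unsatisfiable. The result is minimal, not necessarily minimum."""
    core = list(clauses)
    i = 0
    while i < len(core):
        trial = core[:i] + core[i + 1:]
        if not satisfiable(trial):
            core = trial        # clause i was not needed
        else:
            i += 1              # clause i is essential, keep it
    return core

# a; a -> b; not b; c   -- the clause {"c"} plays no role in the conflict
clauses = [{"a"}, {"!a", "b"}, {"!b"}, {"c"}]
print(unsat_core(clauses))
```

The paper's contribution is defining analogous, finer-grained notions for LTL, where "parts" of a formula come from its syntax tree, its normal form, or an unsatisfiability proof rather than from a flat clause set.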
19. Region-Based RTSJ Memory Management: State of the art
- Author
- Hamza, H. and Counsell, S.
- Subjects
- JAVA programming language, COMPUTER memory management, REAL-time computing, GARBAGE collection (Computer science), CONSTRAINT satisfaction, COMPUTER science
- Abstract
Developing a real-time system in Java requires awareness of memory behaviour in addition to software functional requirements. The Real-Time Specification for Java (RTSJ) introduces a scoped memory model to avoid garbage collection delays in critical real-time applications which need to meet hard real-time constraints. Scoped memory management has certain advantages over garbage collection in terms of predictability. However, developing a real-time application using scoped memory areas (regions) may suffer from both design and runtime errors. Moreover, from a memory footprint perspective, the inability to determine precisely how many scoped memory areas should be used and which objects or threads should be allocated into these scoped memory areas makes using RTSJ problematic for developing real-time systems. In this paper, a survey of the current approaches to improving scoped memory management and of new emerging challenges in the RTSJ scoped memory management model is presented. Categorizing these problems and challenges provides a picture of the issues researchers have yet to investigate in order to support solutions for an optimal scoped memory model. Current approaches and a set of benchmarks used to evaluate current solutions are presented, and new research questions in developing real-time Java systems using a scoped memory model are proposed. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
20. Contexts, refinement and determinism
- Author
- Reeves, Steve and Streader, David
- Subjects
- COMPUTER engineering, DESIGN techniques, INSIGHT, COMPUTER systems, COMPUTER programming, COMPUTER science
- Abstract
In this paper we have been influenced by those who take an “engineering view” of the problem of designing systems, i.e. a view that is motivated by what someone designing a real system will be concerned with, and what questions will arise as they work on their design. Specifically, we have borrowed from the testing work of Hennessy, de Nicola and van Glabbeek, e.g. Hennessy (1988), de Nicola (1992) and van Glabbeek (1990, 2001). Here we concentrate on one fundamental part of the engineering view and where consideration of it leads. The aspects we are concerned with are computational entities in contexts, observed by users. This leads to formalising design steps that are often left informal, which in turn gives insights into non-determinism and ultimately leads to being able to use refinement in situations where existing techniques fail. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
21. A mechanical verification of the stressing algorithm for negative cost cycle detection in networks
- Author
- Shankar, Natarajan and Subramani, K.
- Subjects
- SOFTWARE verification, COMPUTER algorithms, COMPUTER science, PROTOTYPES, CONSTRAINT satisfaction, GRAPH theory
- Abstract
The negative cost cycle detection (NCCD) problem in weighted directed graphs is a fundamental problem in theoretical computer science, with applications in a wide range of domains from maximum flows to image segmentation. From the perspective of program verification, this problem is identical to the problem of checking the satisfiability of a conjunction of difference constraints. There exist a number of approaches in the literature for NCCD, each with its own set of advantages. Recently, a greedy, space-efficient algorithm called the stressing algorithm was proposed for this problem. In this paper, we present a novel proof of the stressing algorithm and its verification using the Prototype Verification System (PVS) theorem prover. This example is part of a larger research program to verify the soundness and completeness of a core set of decision procedures. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
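The abstract above notes that checking a conjunction of difference constraints is the same problem as negative cost cycle detection. The textbook Bellman-Ford check below illustrates that reduction; it is not the stressing algorithm itself (which is a different, space-efficient greedy procedure), and the edge encoding is invented:

```python
def has_negative_cycle(n, edges):
    """Bellman-Ford-style check on a graph with vertices 0..n-1 and
    weighted edges (u, v, w). Starting from an implicit virtual source
    at distance 0 to every vertex, relax all edges n-1 times; if an
    edge can still be relaxed afterwards, a negative cycle exists."""
    dist = [0] * n
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return any(dist[u] + w < dist[v] for u, v, w in edges)

# A difference constraint x_v - x_u <= w maps to an edge (u, v, w);
# the constraint system is unsatisfiable exactly when the graph has a
# negative cost cycle.
sat_edges = [(0, 1, 3), (1, 2, -2), (2, 0, 1)]     # cycle cost  2
unsat_edges = [(0, 1, 1), (1, 2, -2), (2, 0, -1)]  # cycle cost -2
print(has_negative_cycle(3, sat_edges))
print(has_negative_cycle(3, unsat_edges))
```

Summing the constraints along the negative cycle in `unsat_edges` yields the contradiction 0 <= -2, which is why unsatisfiability and the negative cycle coincide.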
22. Ontology-driven analysis of UML-based collaborative processes using OWL-DL and CPN
- Author
- Noguera, Manuel, Hurtado, María V., Rodríguez, María Luisa, Chung, Lawrence, and Garrido, José Luis
- Subjects
- *
UNIFIED modeling language , *ONTOLOGY , *PETRI nets , *INDUSTRIAL management , *COMPUTER science , *PROGRAMMING languages - Abstract
Abstract: A key ingredient in system and organization modeling is modeling business processes that involve the collaborative participation of different teams within and outside the organization. Recently, the use of the Unified Modeling Language (UML) for collaborative business modeling has been increasing, thanks to its human-friendly visual representation of a rich set of structural and behavioral views, despite its unclear semantics. Meanwhile, the use of the Web Ontology Language (OWL) has also been emerging, thanks to its clearly defined semantics, which make it amenable to automatic analysis and reasoning, although it is less human-friendly than, and perhaps not as rich as, the UML notation, especially concerning processes or activities. In this paper, we view UML and OWL as complementary and exploit their relative strengths. We provide a mapping between the two, through a set of mapping rules, which allows UML activity diagrams to be captured in an OWL ontology. This mapping, which results in a formalization of collaborative processes, also sets a basis for the subsequent construction of executable models using the Colored Petri Nets (CPN) formalism. For this purpose, we also provide appropriate mappings from OWL-based ontological elements into CPN elements. A case study of a mortgage granting system is described, along with the potential benefits and limitations of our proposal. [Copyright Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
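As a rough illustration of the kind of mapping rules the abstract describes, the sketch below turns activity nodes and control-flow edges into triples. The function and the property names (`rdf:type`, `uml:flowsTo`) are hypothetical simplifications for illustration only, not the paper's actual rule set:

```python
def uml_activities_to_owl(activities, flows):
    """Map a UML activity diagram fragment to OWL-style triples.

    activities -- list of activity-node names
    flows      -- list of (source, target) control-flow edges

    Each activity becomes a class-membership assertion; each edge becomes
    an object-property assertion (property names are invented here).
    """
    triples = [(a, "rdf:type", "uml:Activity") for a in activities]
    triples += [(src, "uml:flowsTo", dst) for src, dst in flows]
    return triples
```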
23. Towards mechanized correctness proofs for cryptographic algorithms: Axiomatization of a probabilistic Hoare style logic
- Author
-
den Hartog, Jerry
- Subjects
- *
VIDEO games , *ALGORITHMS , *CRYPTOGRAPHY , *COMPUTER science - Abstract
Abstract: In [R.J. Corin, J.I. den Hartog, A probabilistic Hoare-style logic for game-based cryptographic proofs, in: M. Bugliesi, B. Preneel, V. Sassone (Eds.), ICALP 2006 Track C, Venice, Italy, in: Lecture Notes in Computer Science, vol. 4052, Springer-Verlag, Berlin, 2006, pp. 252–263] we built a formal verification technique for game-based correctness proofs of cryptographic algorithms, based on a probabilistic Hoare-style logic [J.I. den Hartog, E.P. de Vink, Verifying probabilistic programs using a Hoare like logic, International Journal of Foundations of Computer Science 13 (3) (2002) 315–340]. An important step towards mechanized verification within this technique is an axiomatization of the implication between predicates, which is defined purely semantically in the latter reference. In this paper we provide such an axiomatization and illustrate its place in the formal verification technique of the former. [Copyright Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
24. Software architecture design for streaming Java RMI
- Author
-
Yang, Chih-Chieh, Chen, Chung-Kai, Chang, Yu-Hao, Chung, Kai-Hsin, and Lee, Jenq-Kuen
- Subjects
- *
COMPUTER science , *COMPUTER logic , *COMPUTER programmers , *INFORMATION technology , *COMPUTER programming - Abstract
Abstract: In recent years, network streaming has become a highly popular research topic in computer science, because a large proportion of network traffic consists of multimedia streams. In this paper we present novel methodologies for enhancing the streaming capabilities of Java RMI. Our streaming support for Java RMI includes a pushing mechanism, which allows servers to push data in a streaming fashion to the client site, and an aggregation mechanism, which allows the client site to make a single remote invocation that gathers data from multiple servers holding replicas of data streams and aggregates the partial data into a complete stream. In addition, our system allows the client site to forward local data to other clients. Our framework is implemented by extending the Java RMI stub to allow custom designs for streaming buffers and controls, and by providing a continuous buffer for raw data in the transport-layer socket. This enhanced framework allows standard Java RMI services to enjoy streaming capabilities. In addition, we propose aggregation algorithms as scheduling methods in such an environment. Preliminary experiments with our framework demonstrate its promising performance in providing streaming services in the Java RMI layers. [Copyright Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
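The aggregation mechanism described above, one client invocation gathering partial streams from several replica servers, can be sketched as a simple merge by sequence number. `aggregate_streams` is a hypothetical illustration of the idea, not the paper's scheduling algorithm:

```python
def aggregate_streams(partials):
    """Merge partial streams of (seq_no, chunk) pairs from replica servers
    into one complete, ordered stream, dropping duplicate sequence numbers.
    """
    merged = {}
    for stream in partials:
        for seq, chunk in stream:
            merged.setdefault(seq, chunk)  # first replica to supply a
                                           # sequence number wins
    return [merged[seq] for seq in sorted(merged)]
```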
25. Building CBR systems with jcolibri
- Author
-
Díaz-Agudo, Belén, González-Calero, Pedro A., Recio-García, Juan A., and Sánchez-Ruiz-Granados, Antonio A.
- Subjects
- *
DECISION making , *MANAGEMENT science , *QUALITY control , *CASE-based reasoning , *PROBLEM solving , *COMPUTER science - Abstract
Abstract: Case-based reasoning (CBR) is a paradigm for combining problem solving and learning that has become one of the most successful applied subfields of AI in recent years. Now that CBR is a mature and established technology, two needs have become critical: the availability of tools for building CBR systems, and accumulated practical experience in applying CBR techniques to real-world problems. In this paper we present jcolibri, an object-oriented framework in Java for building CBR systems that benefits greatly from the reuse of previously developed CBR systems. [Copyright Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
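The CBR retrieval step that a framework like jcolibri automates can be sketched as nearest-neighbour search over a case base. This is a generic illustration of the paradigm, assuming numeric feature vectors, and is not jcolibri's actual API:

```python
def retrieve(case_base, query, k=1):
    """Nearest-neighbour retrieval over cases with numeric features.

    case_base -- list of (features, solution) pairs
    query     -- feature tuple describing the new problem
    Returns the k most similar stored cases.
    """
    def dist(a, b):
        # Euclidean distance between feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    ranked = sorted(case_base, key=lambda case: dist(case[0], query))
    return ranked[:k]
```

A full CBR cycle would then reuse and adapt the retrieved solution and retain the solved problem as a new case.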
26. Modelling and model checking suspendible business processes via statechart diagrams and CSP
- Author
-
Yeung, W.L., Leung, K.R.P.H., Wang, Ji, and Dong, Wei
- Subjects
- *
STATECHARTS (Computer science) , *STRUCTURED techniques of electronic data processing , *COMPUTER science , *COMPUTERS , *CHARTS, diagrams, etc. , *BUSINESS enterprises , *COMMUNICATION - Abstract
When modelling object behaviour with UML statechart diagrams, the history mechanism can be useful for modelling the suspension of a “normal” business process upon certain “abnormal” events, together with its subsequent resumption, as illustrated by the examples in this paper. However, previous approaches to model checking statechart diagrams often ignore the history mechanism. We enhance one such approach, based on Communicating Sequential Processes (CSP), and have developed a support tool for it. [Copyright Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
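The history mechanism the abstract relies on, suspending a "normal" process and later resuming where it left off, can be sketched with a shallow-history pseudostate. `Region` is a hypothetical miniature for illustration, not the paper's CSP encoding:

```python
class Region:
    """Tiny state machine region with a shallow-history pseudostate:
    suspending records the active substate; resuming restores it."""

    def __init__(self, initial):
        self.active = initial
        self.history = initial
        self.suspended = False

    def step(self, state):
        if not self.suspended:      # normal-process transitions
            self.active = state

    def suspend(self):
        self.history = self.active  # remember where we were
        self.active = "Suspended"
        self.suspended = True

    def resume(self):
        self.active = self.history  # shallow history: restore last substate
        self.suspended = False
```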
27. Nested transactional memory: Model and architecture sketches
- Author
-
Moss, J. Eliot B. and Hosking, Antony L.
- Subjects
- *
COMPUTER architecture , *COMPUTER input-output equipment , *COMPUTER science , *COMPUTER systems - Abstract
Abstract: We offer a reference model for nested transactions at the level of memory accesses, and sketch possible hardware architecture designs that implement that model. We describe both closed and open nesting. The model is abstract in that it does not relate to hardware, such as caches, but describes memory as seen by each transaction, memory access conflicts, and the effects of commits and aborts. The hardware sketches describe approaches to implementing the model using bounded size caches in a processor with overflows to memory. In addition to a model that will support concurrency within a transaction, we describe a simpler model that we call linear nesting. Linear nesting supports only a single thread of execution in a transaction nest, but may be easier to implement. While we hope that the model is a good target to which to compile transactions from source languages, the mapping from source constructs to nested transactional memory is beyond the scope of the paper. [Copyright Elsevier]
- Published
- 2006
- Full Text
- View/download PDF
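The closed-nesting semantics of such a reference model, where a child transaction's commit merges its effects into its parent rather than into memory, can be sketched at the level of write sets. This is an illustrative toy model only, with no concurrency, conflict detection, or open nesting:

```python
class Txn:
    """Closed-nesting sketch: a child's commit merges its write set into
    its parent; only a top-level commit publishes to memory."""

    def __init__(self, parent=None):
        self.parent = parent
        self.writes = {}

    def write(self, addr, val):
        self.writes[addr] = val

    def read(self, addr, memory):
        t = self
        while t is not None:            # search innermost transaction out
            if addr in t.writes:
                return t.writes[addr]
            t = t.parent
        return memory.get(addr)         # fall through to main memory

    def commit(self, memory):
        if self.parent is not None:     # closed nesting: merge into parent
            self.parent.writes.update(self.writes)
        else:                           # top level: publish to memory
            memory.update(self.writes)
```

An abort is simply discarding the `Txn` object, leaving the parent and memory untouched.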
28. The case for virtual register machines
- Author
-
Gregg, David, Beatty, Andrew, Casey, Kevin, Davis, Brian, and Nisbet, Andy
- Subjects
- *
COMPUTER architecture , *COMPUTER systems , *COMPUTER software , *COMPUTER science - Abstract
Abstract: Virtual machines (VMs) are a popular target for language implementers. A long-running question in the design of virtual machines has been whether stack or register architectures can be implemented more efficiently with an interpreter. Many designers favour stack architectures since the location of operands is implicit in the stack pointer. In contrast, the operands of register machine instructions must be specified explicitly. In this paper, we present a working system for translating stack-based Java virtual machine (JVM) code to a simple register code. We describe the translation process, the complicated parts of the JVM which make translation more difficult, and the optimisations needed to eliminate copy instructions. Experimental results show that a register format reduced the number of executed instructions by 34.88%, while increasing the number of bytecode loads by an average of 44.81%. Overall, this corresponds to an increase of 2.32 loads for each dispatch removed. We believe that the high cost of dispatches makes register machines attractive even at the cost of increased loads. [Copyright Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
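The stack-to-register translation described above can be sketched by simulating the operand stack at translation time, so that each stack slot becomes a virtual register. `translate` is a toy illustration handling two opcodes, not the paper's JVM translator:

```python
def translate(stack_code):
    """Translate stack bytecode to three-address register code.

    stack_code -- list of tuples like ("push", 1) or ("add",)
    The operand stack is simulated during translation; each slot is
    assigned a fresh virtual register name.
    """
    regs, code, next_reg = [], [], 0
    for op, *args in stack_code:
        if op == "push":
            code.append(("load", f"r{next_reg}", args[0]))
            regs.append(f"r{next_reg}")
            next_reg += 1
        elif op == "add":
            b, a = regs.pop(), regs.pop()   # operands now named explicitly
            code.append(("add", f"r{next_reg}", a, b))
            regs.append(f"r{next_reg}")
            next_reg += 1
    return code
```

Three stack instructions (three dispatches) become three register instructions, but with operand registers spelled out, which is the load/dispatch trade-off the experiments quantify.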
29. Extending component-based design with hardware components
- Author
-
Arató, Péter, Ádám Mann, Zoltán, and Orbán, András
- Subjects
- *
COMPUTER input-output equipment , *SYSTEMS design , *COMPUTER science , *SOFTWARE engineering - Abstract
Abstract: In order to cope with the increasing complexity of system design, component-based software engineering advocates the reuse and adaptation of existing software components. However, many applications, particularly embedded systems, consist of not only software but also hardware components. Thus, component-based design should be extended to systems with both hardware and software components. Such an extension is not without challenges, though. The extended methodology has to consider hard constraints on performance as well as different cost factors. Also, the dissimilarities between hardware and software (such as level of abstraction, communication primitives, etc.) have to be resolved. In this paper, the authors propose such an extended component-based design methodology that includes hardware components as well. This methodology allows the designer to work at a very high level of abstraction, where the focus is on functionality only. Non-functional constraints are specified in a declarative manner, and the mapping of components to hardware or software is determined automatically based on those constraints in the so-called hardware/software partitioning step. Moreover, a tool is presented supporting the new design methodology. Besides automating the partitioning process, this tool also checks the consistency between hardware and software implementations of a component. The authors also present a case study to demonstrate the applicability of the outlined concepts. [Copyright Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
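The hardware/software partitioning step, mapping components to hardware or software automatically from declarative constraints, can be sketched as a greedy heuristic. The cost model here (`sw_time`, `hw_time`, `hw_cost`, and a single deadline) is a hypothetical simplification of the paper's formulation:

```python
def partition(components, deadline):
    """Greedy hardware/software partitioning sketch.

    components -- list of (name, sw_time, hw_time, hw_cost), hw_cost > 0
    deadline   -- required total execution time

    Start all-software; move the component with the best time-saved-per-cost
    ratio to hardware until the deadline is met.
    Returns (hardware_set, total_hw_cost), or None if infeasible.
    """
    in_hw, cost = set(), 0
    total = sum(sw for _, sw, _, _ in components)
    ranked = sorted(components,
                    key=lambda c: (c[1] - c[2]) / c[3], reverse=True)
    for name, sw, hw, c in ranked:
        if total <= deadline:
            break
        in_hw.add(name)                 # realize this component in hardware
        cost += c
        total -= sw - hw
    return (in_hw, cost) if total <= deadline else None
```

A real partitioner would use an exact or metaheuristic optimizer; the greedy pass only illustrates the trade-off between performance constraints and hardware cost.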
30. An interactive environment for beginning Java programmers
- Author
-
Goldman, Kenneth J.
- Subjects
- *
JAVA programming language , *COMPUTER software , *COMPUTER science , *TECHNOLOGY - Abstract
Building upon years of evolution in object-oriented programming language design, Java has emerged as the language of choice among many educators for teaching introductory computer science. A clean, type-safe language, Java provides a garbage-collected heap and a comprehensive exception-handling mechanism. However, in spite of this support, many students in introductory computer science courses still find programming to be an overwhelming source of frustration. Linguistic concerns and programming mechanics demand so much attention that deeper concepts are often postponed for later courses, leaving students in introductory courses with the mistaken impression that computer science is a shallow discipline, concerned only with transcribing ideas into code, and not with the ideas themselves.

JPie is a tightly integrated programming environment for live software construction in Java. JPie treats programming as an application in its own right, providing a visual representation of class definitions and supporting direct manipulation of graphical representations of programming abstractions and constructs. Exploiting Java’s reflection mechanism, JPie supports the notion of a dynamic class that can be modified while the program is running, thereby eliminating the edit–compile–test cycle. Following years of experience using Java as the vehicle for teaching introductory computer science, we have designed JPie to provide a more natural and fluid software development process that both raises the level of abstraction and eliminates many of the common pitfalls that beginning Java programmers face. This paper studies JPie from an educational perspective. We systematically review key programming abstractions and explain how JPie supports them in ways that keep beginning programmers focused on important ideas. Our experience using JPie in an introductory computer science survey course for non-majors is briefly discussed. [Copyright Elsevier]
- Published
- 2004
- Full Text
- View/download PDF
31. Designing the automatic transformation of visual languages
- Author
-
Varró, Dániel, Varró, Gergely, and Pataricza, András
- Subjects
- *
VISUAL programming languages (Computer science) , *COMPUTER science , *RELIABILITY (Personality trait) - Abstract
The design process of complex systems requires a precise checking of the functional and dependability attributes of the target design. The growing complexity of systems necessitates the use of formal methods, as the exhaustiveness of the checks performed by traditional simulation and testing is insufficient. For this reason, the mathematical models of various formal verification tools are automatically derived from UML diagrams of the model by mathematical transformations, guaranteeing complete consistency between the target design and the models of the verification and validation tools. In the current paper, a general framework for an automated model transformation system is presented. The method starts from a uniform visual description and a formal proof concept of the particular transformations, integrating the powerful computational paradigm of graph transformation, planner algorithms from artificial intelligence, and various concepts of computer engineering. [Copyright Elsevier]
- Published
- 2002
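A graph-transformation step of the kind such a framework builds on can be sketched as applying a rewrite rule to a labelled graph. The rule below, a plain node relabelling, is a deliberately minimal illustration, not the paper's transformation language:

```python
def apply_rule(graph, lhs, rhs):
    """Apply a node-relabelling graph-transformation rule.

    graph -- {node: (label, [successor nodes])}
    Every node labelled `lhs` is rewritten to carry label `rhs`;
    edges are preserved unchanged.
    """
    return {n: (rhs if lab == lhs else lab, succs)
            for n, (lab, succs) in graph.items()}
```

Deriving, say, a Petri-net model from a UML model would chain many such rules, each matching a source-language pattern and emitting target-language elements.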