252 results
Search Results
2. Predicate Logic for Software Engineering.
- Author
-
Parnas, David Lorge
- Subjects
SOFTWARE engineering ,ENGINEERING ,SYSTEMS design ,ELECTRONIC data processing documentation ,COMPUTER systems ,ELECTRONIC systems ,COMPUTERS - Abstract
The interpretations of logical expressions found in most introductory textbooks are not suitable for use in software engineering applications because they do not deal with partial functions. More advanced papers and texts deal with partial functions in a variety of complex ways. This paper proposes a very simple change to the classic interpretation of predicate expressions, one that defines their value for all values of all variables, yet is almost identical to the standard definitions. It then illustrates the application of this interpretation in software documentation. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
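Annotation (not part of the search result): the abstract above hinges on giving predicate expressions a value for all values of all variables. A minimal sketch, assuming the reading in which a primitive predicate is taken to be false whenever one of its terms applies a partial function outside its domain; consult the paper for the precise definitions.
```latex
% Sketch only, not the paper's exact definitions: a primitive predicate is
% false whenever one of its terms is undefined. With y = 0 the term x/y is
% undefined, yet every expression still receives a truth value:
\[
  (x/y > 1) = \mathrm{false}, \qquad
  (y \neq 0 \,\wedge\, x/y > 1) = \mathrm{false}, \qquad
  (y = 0 \,\vee\, x/y > 1) = \mathrm{true}.
\]
% Wherever all terms are defined, these values coincide with the classical
% two-valued interpretation, which is the "almost identical" property the
% abstract mentions.
```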
3. GUEST EDITORIAL: INTRODUCTION TO THE SPECIAL SECTION.
- Subjects
SOFTWARE engineering ,CONFERENCES & conventions ,COMPUTERS ,ELECTRONIC systems - Abstract
The Symposium on Foundations in Software Engineering is an annual conference sponsored by the Association for Computing Machinery and dedicated to the presentation of innovative research results contributing to an engineering discipline for software systems. This special section of the July 1998 issue of the journal IEEE Transactions on Software Engineering features three of the best papers from the fourth symposium, held in October 1996 in San Francisco, California. During its first half-decade, research results presented at this conference have covered engineering research topics. However, a number of distinguishing properties tend to characterize the best of the papers and have helped set the standards for high-quality software engineering research. The three papers in this special section illustrate these properties. One of the critical challenges for software engineering research is to find practical tools that help system builders restructure existing systems. Mark Moriconi was general chair for the symposium and deserves considerable credit for its success.
- Published
- 1998
- Full Text
- View/download PDF
4. Discovering Architectures from Running Systems.
- Author
-
Schmerl, Bradley, Aldrich, Jonathan, Garlan, David, Kazman, Rick, and Hong Yan
- Subjects
COMPUTERS ,COMPUTER software development ,ARCHITECTURAL design ,PETRI nets ,GRAPH theory ,NETS (Mathematics) ,MATHEMATICS ,LANGUAGE & languages ,RESEARCH - Abstract
One of the challenging problems for software developers is guaranteeing that a system as built is consistent with its architectural design. In this paper, we describe a technique that uses runtime observations about an executing system to construct an architectural view of the system. In this technique, we develop mappings that exploit regularities in system implementation and architectural style. These mappings describe how low-level system events can be interpreted as more abstract architectural operations and are formally defined using Colored Petri Nets. In this paper, we describe a system, called DiscoTect, that uses these mappings and we introduce the DiscoSTEP mapping language and its formal definition. Two case studies showing the application of DiscoTect suggest that the tool is practical to apply to legacy systems and can dynamically verify conformance to a preexisting architectural specification. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
5. Guest Editor's Introduction: Experimental Computer Science.
- Author
-
Iyer, Ravi K.
- Subjects
COMPUTER science ,TECHNOLOGY ,COMPUTERS ,SOFTWARE engineering ,COMPUTER software ,COMPUTER systems ,ELECTRONIC systems - Abstract
Experimental research in computer science is a relatively new, yet fast-developing area. It is encouraging to see that there is substantial research going on in this important area. A study on experimental evaluation of a reusability-oriented parallel programming environment is given. There is a significant amount of experimental research in the area of computer dependability.
- Published
- 1990
6. Guest Editorial for the Special Collection from the Third International Conference on Software Engineering.
- Author
-
Belady, Laszlo A.
- Subjects
SOFTWARE engineering ,COMPUTER software development ,COMPUTER systems ,COMPUTER software industry ,COMPUTERS ,COMPUTER industry ,CONFERENCES & conventions - Abstract
This editorial discusses selected papers submitted during the Third International Conference on Software Engineering held in Atlanta, Georgia, in May 1978. The international congress, attended by members of academia and the computer industry, aimed to define the emerging importance of computer software engineering. The first paper refers to how the computer industry provides solutions to everyday challenges with the use of computers. Here, customers express their views on the purpose of computers, leading computer software developers to consider user requirements. The second paper focuses on the activity of design. It discusses a modeling scheme where large programs can be built out of incomplete parts. Design activities are followed by testing, the focus of the third manuscript. A comprehensive study of the two basic approaches to program testing is discussed. The last paper considers programs in operation, the last phase of computer software development.
- Published
- 1978
7. Integrated Performance Models for Distributed Processing in Computer Communication Networks.
- Author
-
Thomasian, Alexander and Bay, Paul F.
- Subjects
DISTRIBUTED computing ,COMPUTER systems ,COMPUTER networks ,WIDE area networks ,MULTIPROGRAMMING (Electronic computers) ,COMPUTERS - Abstract
This paper deals with the analysis of large-scale closed queueing network (QN) models which are used for the performance analysis of computer communication networks (CCN's). The computer systems are interconnected by a wide-area network. Users accessing local/remote computers are affected by the contention (queueing delays) at the computer systems and the communication subnet. The computational cost of analyzing such models increases exponentially with the number of user classes (chains), even when the QN is tractable (product-form). In fact, the submodels of the integrated model are generally not product-form, e.g., due to blocking at computer systems (multiprogramming level constraints) and in the communication subnet (window flow control constraints). Two approximate solution methods are proposed in this paper to analyze the integrated QN model. Both methods use decomposition and iterative techniques to exploit the structure of the QN model such that computational cost is proportional to the number of chains. The accuracy of the solution methods is validated against each other and simulation. The model is used to study the effect that channel capacity assignments, window sizes for congestion control, and routing have on system performance. [ABSTRACT FROM AUTHOR]
- Published
- 1985
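Annotation (a standard complexity observation, not taken from the paper): exact mean value analysis of a closed product-form network enumerates population vectors over all chains, which is why the cost grows exponentially with the number of chains and why the paper's approximations aim for cost proportional to the number of chains.
```latex
% For C closed chains with populations N_1, ..., N_C, exact MVA evaluates on
% the order of
\[
  \prod_{c=1}^{C} (N_c + 1)
\]
% population vectors, i.e., the cost is exponential in the number of chains C.
```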
8. Implementation of the Database Machine DIRECT.
- Author
-
Boral, Haran, DeWitt, David J., Friedland, Dina, Jarrell, Nancy F., and Wilkinson, W. Kevin
- Subjects
DATABASES ,ELECTRONIC systems ,MULTIPROCESSORS ,COMPUTERS ,COMPUTER software ,SOFTWARE engineering ,COMPUTER systems - Abstract
DIRECT is a multiprocessor database machine designed and implemented at the University of Wisconsin. This paper describes our experiences with the implementation of DIRECT. We start with a brief overview of the original machine proposal and how it differs from what was actually implemented. We then describe the structure of the DIRECT software. This includes software on host computers that interfaces with the database machine; software on the back-end controller of DIRECT; and software executed by the query processors. In addition to describing the structure of the software we will attempt to motivate and justify its design and implementation. We also discuss a number of implementation issues (e.g., debugging of the code across several machines). We conclude the paper with a list of the "lessons" we have learned from this experience. [ABSTRACT FROM AUTHOR]
- Published
- 1982
9. Guest Editorial: Special Section on Interaction and State-Based Modeling.
- Author
-
Uchitel, Sebastian, Broy, Manfred, Krüger, Ingolf H., and Whittle, Jon
- Subjects
COMPUTER systems ,COMPUTER graphics ,WEB services ,APPLICATION software ,COMPUTER network protocols ,GRAPHIC methods ,COMPUTERS ,COMPUTER simulation ,COMPUTER networks ,COMPUTER drawing - Abstract
The article reflects on various issues discussed within the issue. An article by Y. Bontemps, P. Heymans, and P.Y. Schobbens addresses the theoretical and practical limitations of the scenario-development approach. The paper focuses on complexity and decidability of verification and synthesis within the context of live sequence charts. An article by Braberman, N. Kicillof, and A. Olivero addresses the specification and checking of real-time properties. A novel graphical scenario language is presented which allows the description of complex properties not expressible with sequence chart-like notations. In an article, X. Fu, T. Bultan, and J. Su study the interplay between global interaction modeling and local behavior modeling in the web services domain. The authors show how top-down and bottom-up development of web-services requires careful understanding of issues such as reliability and show that the notion of synchronizability can enable tractable analyses to support such development. The authors also provide insights into the relation between conversation protocols and message sequence charts for which issues such as realizability have been studied extensively.
- Published
- 2005
- Full Text
- View/download PDF
10. System Test Planning of Software: An Optimization Approach.
- Author
-
Chari, Kaushal and Hevner, Alan
- Subjects
MATHEMATICAL models of economic development ,TESTING ,COMPUTER software ,COMPUTERS ,COMPUTER engineering ,SOFTWARE engineering ,PLANNING ,RESEARCH - Abstract
This paper extends an exponential reliability growth model to determine the optimal number of test cases to be executed for various use case scenarios during the system testing of software. An example demonstrates a practical application of the optimization model for system test planning. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
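Annotation (a generic sketch in the spirit of the abstract, not the paper's actual model): with an exponential reliability growth model per use case scenario, test planning can be phrased as allocating test cases under a budget.
```latex
% Hypothetical formulation: expected faults exposed in scenario i after
% executing n_i test cases, under an exponential growth model:
\[
  \mu_i(n_i) = a_i \bigl(1 - e^{-b_i n_i}\bigr),
\]
% and an allocation problem over k scenarios with per-case costs c_i and
% budget B:
\[
  \max_{n_1,\dots,n_k} \; \sum_{i=1}^{k} \mu_i(n_i)
  \quad\text{subject to}\quad \sum_{i=1}^{k} c_i\, n_i \le B .
\]
```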
11. Generating Annotated Behavior Models from End-User Scenarios.
- Author
-
Damas, Christophe, Lambeau, Bernard, Dupont, Pierre, and Lamsweerde, Axel Van
- Subjects
COMPUTER simulation ,COMPUTER systems ,COMPUTER networks ,COMPUTER software ,DATA transmission systems ,COMPUTER-generated imagery ,COMPUTERS ,COMPUTER drawing ,COMPUTER graphics - Abstract
Requirements-related scenarios capture typical examples of system behavior through sequences of desired interactions between the software-to-be and its environment. Their concrete, narrative style of expression makes them very effective for eliciting software requirements and for validating behavior models. However, scenarios raise coverage problems as they only capture partial histories of interaction among system component instances. Moreover, they often leave the actual requirements implicit. Numerous efforts have therefore been made recently to synthesize requirements or behavior models inductively from scenarios. Two problems arise from those efforts. On the one hand, the scenarios must be complemented with additional input such as state assertions along episodes or flowcharts on such episodes. This makes such techniques difficult to use by the nonexpert end-users who provide the scenarios. On the other hand, the generated state machines may be hard to understand as their nodes generally convey no domain-specific properties. Their validation by analysts, complementary to model checking and animation by tools, may therefore be quite difficult. This paper describes tool-supported techniques that overcome those two problems. Our tool generates a labeled transition system (LTS) for each system component from simple forms of message sequence charts (MSC) taken as examples or counterexamples of desired behavior. No additional input is required. A global LTS for the entire system is synthesized first. This LTS covers all scenario examples and excludes all counterexamples. It is inductively generated through an interactive procedure that extends known learning techniques for grammar induction. The procedure is incremental on training examples. It interactively produces additional scenarios that the end-user has to classify as examples or counterexamples of desired behavior. The LTS synthesis procedure may thus also be used independently for requirements elicitation through scenario questions generated by the tool. The synthesized system LTS is then projected on local LTS for each system component. For model validation by analysts, the tool generates state invariants that decorate the nodes of the local LTS. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
12. A UML-Based Pattern Specification Technique.
- Author
-
France, Robert B., Dae-Kyoo Kim, Ghosh, Sudipto, and Song, Eunjee
- Subjects
COMPUTER software development ,COMPUTERS ,SOFTWARE engineering ,COMPUTER software ,PROGRAMMING languages - Abstract
Informally described design patterns are useful for communicating proven solutions for recurring design problems to developers, but they cannot be used as compliance points against which solutions that claim to conform to the patterns are checked. Pattern specification languages that utilize mathematical notation provide the needed formality, but often at the expense of usability. In this paper, we present a rigorous and practical technique for specifying pattern solutions expressed in the Unified Modeling Language (UML). The specification technique paves the way for the development of tools that support rigorous application of design patterns to UML design models. The technique has been used to create specifications of solutions for several popular design patterns. We illustrate the use of the technique by specifying observer and visitor pattern solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
13. A Testing Framework for Mobile Computing Software.
- Author
-
Satoh, Ichiro
- Subjects
MOBILE computing ,MOBILE agent systems ,APPLICATION software ,WIRELESS Internet ,COMPUTER networks ,COMPUTERS ,EMULATION software - Abstract
We present a framework for testing applications for mobile computing devices. When a device is moved into and attached to a new network, the proper functioning of applications running on the device often depends on the resources and services provided locally in the current network. This framework provides an application-level emulator for mobile computing devices to solve this problem. Since the emulator is constructed as a mobile agent, it can carry applications across networks on behalf of its target device and allow the applications to connect to local servers in its current network in the same way as if they had been moved with and executed on the device itself. This paper also demonstrates the utility of this framework by describing the development of typical network-dependent applications in mobile and ubiquitous computing settings. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
14. Requirements-Based Monitors for Real-Time Systems.
- Author
-
Peters, Dennis K. and Parnas, David Lorge
- Subjects
AUTOMATIC control systems ,REAL-time control ,COMPUTERS ,COMPUTER systems ,QUALITY control ,COMPUTER networks - Abstract
Before designing safety- or mission-critical real-time systems, a specification of the required behavior of the system should be produced and reviewed by domain experts. After the system has been implemented, it should be thoroughly tested to ensure that it behaves correctly. This is best done using a monitor, a system that observes the behavior of a target system and reports if that behavior is consistent with the requirements. Such a monitor can be used both as an oracle during testing and as a supervisor during operation. Monitors should be based on the documented requirements of the system. If the target system is required to monitor or control real-valued quantities, then the requirements, which are expressed in terms of the monitored and controlled quantities, will allow a range of behaviors to account for errors and imprecision in observation and control of these quantities. Even if the controlled variables are discrete valued, the requirements must specify the timing tolerance. Because of the limitations of the devices used by the monitor to observe the environmental quantities, there is unavoidable potential for false reports, both negative and positive. This paper discusses design of monitors for real-time systems, and examines the conditions under which a monitor will produce false reports. We describe the conclusions that can be drawn when using a monitor to observe system behavior. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
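Annotation (a toy illustration with invented names, not the authors' design): the abstract's point about unavoidable false reports can be seen in a monitor whose own sensors are imprecise; observations near the tolerance boundary are inherently inconclusive.
```java
// Toy sketch only: a monitor that allows a band of behaviors to account for
// imprecision in observing a controlled quantity. All names are hypothetical.
public final class ToleranceMonitor {
    private final double required;     // value the requirements call for
    private final double tolerance;    // allowed deviation of the true quantity
    private final double sensorError;  // worst-case error of the monitor's sensor

    public ToleranceMonitor(double required, double tolerance, double sensorError) {
        this.required = required;
        this.tolerance = tolerance;
        this.sensorError = sensorError;
    }

    /** The requirement is definitely violated, even allowing for sensor error. */
    public boolean definitelyViolated(double observed) {
        return Math.abs(observed - required) > tolerance + sensorError;
    }

    /** Observed within tolerance, but a violation cannot be ruled out:
     *  the zone where false negatives and false positives are unavoidable. */
    public boolean inconclusive(double observed) {
        double deviation = Math.abs(observed - required);
        return deviation > tolerance - sensorError && deviation <= tolerance + sensorError;
    }
}
```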
15. Building Knowledge through Families of Experiments.
- Author
-
Basili, Victor R., Shull, Forrest, and Lanubile, Filippo
- Subjects
SOFTWARE engineering ,COMPUTER programming management ,COMPUTER programming ,COMPUTER software ,COMPUTERS ,EXPERIMENTAL design - Abstract
Experimentation in software engineering is necessary but difficult. One reason is that there are a large number of context variables and, so, creating a cohesive understanding of experimental results requires a mechanism for motivating studies and integrating results. It requires a community of researchers that can replicate studies, vary context variables, and build models that represent the common observations about the discipline. This paper discusses the experience of the authors, based upon a collection of experiments, in terms of a framework for organizing sets of related studies. With such a framework, experiments can be viewed as part of common families of studies, rather than being isolated events. Common families of studies can contribute to important and relevant hypotheses that may not be suggested by individual experiments. A framework also facilitates building knowledge in an incremental manner through the replication of experiments within families of studies. To support the framework, this paper discusses the experiences of the authors in carrying out empirical studies, with specific emphasis on persistent problems encountered in experimental design, threats to validity, criteria for evaluation, and execution of experiments in the domain of software engineering. [ABSTRACT FROM AUTHOR]
- Published
- 1999
- Full Text
- View/download PDF
16. Bayesian Analysis of Empirical Software Engineering Cost Models.
- Author
-
Chulani, Sunita, Boehm, Barry, and Steece, Bert
- Subjects
SOFTWARE engineering ,REGRESSION analysis ,MULTIVARIATE analysis ,PROJECT management ,COMPUTER software ,COMPUTERS - Abstract
To date many software engineering cost models have been developed to predict the cost, schedule, and quality of the software under development. But, the rapidly changing nature of software development has made it extremely difficult to develop empirical models that continue to yield high prediction accuracies. Software development costs continue to increase and practitioners continually express their concerns over their inability to accurately predict the costs involved. Thus, one of the most important objectives of the software engineering community has been to develop useful models that constructively explain the software development life-cycle and accurately predict the cost of developing a software product. To that end, many parametric software estimation models have evolved in the last two decades [25], [17], [26], [15], [28], [1], [2], [33], [7], [10], [22], [23]. Almost all of the above mentioned parametric models have been empirically calibrated to actual data from completed software projects. The most commonly used technique for empirical calibration has been the popular classical multiple regression approach. As discussed in this paper, the multiple regression approach imposes a few assumptions frequently violated by software engineering datasets. The source data is also generally imprecise in reporting size, effort, and cost-driver ratings, particularly across different organizations. This results in the development of inaccurate empirical models that don't perform very well when used for prediction. This paper illustrates the problems faced by the multiple regression approach during the calibration of one of the popular software engineering cost models, COCOMO II. It describes the use of a pragmatic 10 percent weighted average approach that was used for the first publicly available calibrated version [6]. It then moves on to show how a more sophisticated Bayesian approach can be used to alleviate some of the problems faced by multiple regression. It compares and contrasts the two empirical approaches, and concludes that the Bayesian approach was better and more robust than the multiple regression approach. Bayesian analysis is a well-defined and rigorous process of inductive reasoning that has been used in many scientific disciplines (the reader can refer to [11], [35], [3] for a broader understanding of the Bayesian Analysis approach). A distinctive feature of the Bayesian approach is that it permits the investigator to use both sample (data) and prior (expert-judgment) information in a logically consistent manner in making inferences. This is done by using Bayes' theorem to produce a 'postdata' or posterior distribution for the model parameters. Using Bayes' theorem, prior (or initial) values are transformed to postdata views. This transformation can be viewed as a learning process. The posterior distribution is determined by the variances of the prior and sample information. If the variance of the prior information is smaller than the variance of the sampling information, then a higher weight is assigned to the prior information. On the other hand, if the variance of the sample information is smaller than the variance of the prior information, then a higher weight is assigned to the sample information causing the posterior estimate to be closer to the sample information.
The Bayesian approach discussed in this paper enables stronger solutions to one of the biggest problems faced by the software engineering community: the challenge of making good decisions using data that is usually scarce and incomplete. We note that the predictive performance of the Bayesian approach (i.e., within 30 percent of the actuals 75 percent of the time) is significantly better than that of the previous multiple regression approach (i.e., within 30 percent of the actuals only 52 percent of the time) on our latest sample of 161 project datapoints. [ABSTRACT FROM AUTHOR]
- Published
- 1999
- Full Text
- View/download PDF
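Annotation (textbook normal-normal case, not the COCOMO II calibration itself): the variance-based weighting described at the end of the abstract is easiest to see in closed form.
```latex
% With a normal prior N(mu_prior, sigma_prior^2) and a sample mean x_bar whose
% variance is sigma_data^2, the posterior mean is the precision-weighted average
\[
  \mu_{\mathrm{post}}
  = \frac{\mu_{\mathrm{prior}}/\sigma_{\mathrm{prior}}^{2}
          + \bar{x}/\sigma_{\mathrm{data}}^{2}}
         {1/\sigma_{\mathrm{prior}}^{2} + 1/\sigma_{\mathrm{data}}^{2}} ,
\]
% so whichever source of information has the smaller variance receives the
% larger weight, exactly the behavior the abstract describes.
```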
17. Ginger2: An Environment for Computer-Aided Empirical Software Engineering.
- Author
-
Torii, Koji, Matsumoto, Ken-ichi, Nakakoji, Kumiyo, Takada, Yoshihiro, Takada, Shingo, and Shima, Kazuyuki
- Subjects
SOFTWARE engineering ,COMPUTER programming management ,HUMAN behavior ,COMPUTER programming ,COMPUTER software ,COMPUTERS - Abstract
Empirical software engineering can be viewed as a series of actions to obtain knowledge and a better understanding about some aspects of software development given a set of problem statements in the form of issues, questions or hypotheses. Our experience in conducting empirical software engineering from a variety of viewpoints for the last decade has made us aware of the criticality of integrating the various types of data that are collected and analyzed as well as the criticality of integrating the various types of activities that take place such as experiment design and the experiment itself. This has led us to develop a Computer-Aided Empirical Software Engineering (CAESE) framework as a substrate for supporting the empirical software engineering lifecycle. CAESE supports empirical software engineering in the same manner as a CASE environment serves as a substrate for supporting the software development lifecycle. This paper first presents the CAESE framework that consists of three elements. The first element is a process model for the "lifecycle" of empirical software engineering studies, including needs analysis, experiment design, actual experimentation, and analyzing and packaging results. The second element is a model that helps empirical software engineers decide how to look at the "world" to be studied in a coherent manner. The third element is an architecture based on which CAESE environments can be built, consisting of tool sets for each phase of the process model, a process management mechanism, and the two types of integration mechanism that are vital for handling multiple types of data: data integration and control integration. The second half of this paper describes the Ginger2 environment as an instantiation of our framework. The paper concludes with reports on case studies using Ginger2, which dealt with a variety of empirical data types including mouse and keystrokes, eye traces, three-dimensional movement, skin resistance level, and video-taped data. [ABSTRACT FROM AUTHOR]
- Published
- 1999
- Full Text
- View/download PDF
18. Engineering and Analysis of Fixed Priority Schedulers.
- Author
-
Katcher, Daniel I., Arakawa, Hiroshi, and Strosnider, Jay K.
- Subjects
ALGORITHMS ,PRODUCTION scheduling ,SYSTEMS software ,COMPUTER systems ,ELECTRONIC systems ,COMPUTERS ,ELECTRONICS - Abstract
Scheduling theory holds great promise as a means to a priori validate timing correctness of real-time applications. However, there currently exists a wide gap between scheduling theory and its implementation in operating system kernels running on specific hardware platforms. The implementation of any particular scheduling algorithm introduces overhead and blocking components which must be accounted for in the timing correctness validation process. This paper presents a methodology for incorporating the costs of scheduler implementation within the context of fixed priority scheduling algorithms. Both event-driven and timer-driven scheduling implementations are analyzed. We show that for the timer-driven scheduling implementations the selection of the timer interrupt rate can dramatically affect the schedulability of a task set, and we present a method for determining the optimal timer rate. We analyzed both randomly generated and two well-defined task sets and found that their schedulability can be significantly degraded by the implementation costs. Task sets that have ideal breakdown utilization over 90% may not even be schedulable when the implementation costs are considered. This work provides a first step toward bridging the gap between real-time scheduling theory and implementation realities. This gap must be bridged for any meaningful validation of timing correctness properties of real-time applications. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
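Annotation (a textbook-style formulation of the effect the abstract describes, not the paper's exact analysis): tick-driven scheduler overhead can be folded into fixed-priority response-time analysis as an extra interference term; shrinking the tick period reduces release latency but inflates that term, which is why an optimal timer rate exists.
```latex
% Worst-case response time of task i with computation C_i, blocking B_i,
% higher-priority set hp(i), and a periodic timer tick of cost C_tick:
\[
  R_i = C_i + B_i
        + \sum_{j \in hp(i)} \left\lceil \frac{R_i}{T_j} \right\rceil C_j
        + \left\lceil \frac{R_i}{T_{\mathrm{tick}}} \right\rceil C_{\mathrm{tick}} ,
\]
% solved by fixed-point iteration; task i is schedulable if R_i <= D_i.
```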
19. Measurement and Analysis of Workload Effects on Fault Latency in Real-Time Systems.
- Author
-
Woodbury, Michael H. and Shin, Kang G.
- Subjects
REAL-time computing ,SOFTWARE engineering ,COMPUTER software ,COMPUTER systems ,COMPUTERS - Abstract
The effectiveness of all known recovery mechanisms is greatly reduced in the presence of multiple latent faults. The presence of multiple latent faults increases the possibility of multiple errors, which could result in coverage failure. In this paper, we present experimental evidence indicating workload effects on the duration of fault latency. A synthetic workload generator is used to vary the workload, and a hardware fault injector is applied to inject transient faults of varying durations. This method allows us to derive the distribution of fault latency duration. Experimental results were obtained from the fault-tolerant multiprocessor (FTMP) at the NASA Airlab. [ABSTRACT FROM AUTHOR]
- Published
- 1990
- Full Text
- View/download PDF
20. Tradeoff in the Design of Efficient Algorithm-Based Error Detection Schemes for Hypercube Multiprocessors.
- Author
-
Balasubramanian, Vijay and Banerjee, Prithviraj
- Subjects
PARALLEL processing ,COMPUTER algorithms ,MULTIPROCESSORS ,SOFTWARE engineering ,COMPUTER software ,COMPUTER systems ,COMPUTERS - Abstract
Numerous algorithms for computationally intensive tasks have been developed by researchers that are suitable for execution on hypercube multiprocessors. One characteristic of many of these algorithms is that they are extremely structured and are tuned for the highest performance to execute on hypercube architectures. In this paper, we have looked at parallel algorithm design from a different perspective. In many cases, it may be possible to redesign the parallel algorithms using software techniques so as to provide a low-cost on-line scheme for hardware error detection without any hardware modifications. This approach is called Algorithm-based error detection. In the past, we have applied algorithm-based techniques for on-line error detection on the hypercube and have reported some preliminary results of one specific implementation on some applications. In this paper, we provide an in-depth study of the various issues and tradeoffs available in Algorithm-based error detection, as well as a general methodology for evaluating the schemes. We have illustrated the approach on an extremely useful computation in the field of numerical linear algebra: QR factorization. We have implemented and investigated numerous ways of applying algorithm-based error detection using different system-level encoding strategies for QR factorization. Different schemes have been observed to result in varying error coverages and time overheads. We have reported the results of our studies performed on a 16 processor Intel iPSC-2/D4/MX hypercube multiprocessor. [ABSTRACT FROM AUTHOR]
- Published
- 1990
- Full Text
- View/download PDF
21. An Approach to Performance Specification of Communication Protocols Using Timed Petri Nets.
- Author
-
Garg, Kumkum
- Subjects
DISTRIBUTED computing ,PETRI nets ,COMPUTER systems ,COMPUTER network protocols ,PERFORMANCE evaluation ,COMPUTERS - Abstract
There has been a lot of interest in the past decade in using timed Petri nets to model computer systems. In this paper we show how such timed Petri nets can be used to great advantage in describing and algebraically specifying communication system performance. We make use of the time parameter of timed Petri nets to model the delay in performing certain operations of a communication protocol. The specification is borrowed from the recently reported AFFIRM language, and the protocol chosen for illustration is the ECMA transfer protocol, proposed for the ISO reference model. However, the methodology can be used with other protocols as well. We also show how liveness properties can be specified easily using timed Petri nets. [ABSTRACT FROM AUTHOR]
- Published
- 1985
22. Extending State Transition Diagrams for the Specification of Human-Computer Interaction.
- Author
-
Wasserman, Anthony I.
- Subjects
HUMAN-computer interaction ,COMPUTER software development ,USER interfaces ,COMPUTERS ,INFORMATION resources management ,SOFTWARE engineering - Abstract
User Software Engineering is a methodology for the specification and implementation of interactive information systems. An early step in the methodology is the creation of a formal executable description of the user interaction with the system, based on augmented state transition diagrams. This paper shows the derivation of the USE transition diagrams based on perceived shortcomings of the "pure" state transition diagram approach. In this way, the features of the USE specification notation are gradually presented and illustrated. The paper shows both the graphical notation and the textual equivalent of the notation, and briefly describes the automated tools that support direct execution of the specification. This specification is easily encoded in a machine-processable form to create an executable form of the computer-human interaction. [ABSTRACT FROM AUTHOR]
- Published
- 1985
23. Optimal Release Time of Computer Software.
- Author
-
Koch, Harvey S. and Kubat, Peter
- Subjects
COMPUTER software ,SOFTWARE engineering ,COMPUTER systems ,COMPUTER programming ,COMPUTERS - Abstract
A decision procedure to determine when computer software should be released is described. This procedure is based upon the cost-benefit for the entire company that has developed the software. This differs from the common practice of only minimizing the repair costs for the data processing division. Decision rules are given to determine at what time the system should be released based upon the results of testing the software. Necessary and sufficient conditions are identified which determine when the system should be released (immediately, before the deadline, at the deadline, or after the deadline). No assumptions are made about the relationship between any of the model's parameters. The model can be used whether the software was developed by a first or second party. The case where future costs are discounted is also considered. [ABSTRACT FROM AUTHOR]
- Published
- 1983
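Annotation (a classic textbook-style cost model in the same spirit; the paper's decision rules are more general and avoid such parameter assumptions): the release-time trade-off balances testing cost against the higher cost of fixing faults after release.
```latex
% Hypothetical cost of releasing at time T, with expected cumulative faults
% mu(T), per-fault fix costs c_1 (before release) < c_2 (after release), and
% testing cost c_3 per unit time:
\[
  C(T) = c_1\,\mu(T) + c_2\,\bigl(\mu(\infty) - \mu(T)\bigr) + c_3\,T ,
\]
% so C'(T) = c_3 - (c_2 - c_1) mu'(T); with a decreasing fault detection rate
% mu'(T), the first T at which c_3 >= (c_2 - c_1) mu'(T) is optimal.
```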
24. Combining Testing with Formal Specifications: A Case Study.
- Author
-
McMullin, Paul R. and Gannon, John D.
- Subjects
AXIOMS ,SOFTWARE engineering ,COMPUTER systems ,COMPUTER programming ,COMPUTERS ,COMPUTER software - Abstract
This paper describes our experience specifying, implementing, and validating a record-oriented text editor similar to one discussed in [7]. Algebraic axioms served as the specification notation; and the implementation was tested with a compiler-based system that uses the axioms to test implementations with a finite collection of test cases. Formal specifications were sometimes difficult to produce, but helped reveal errors during unit testing. Thorough exercising of the implementations by the specifications resulted in few errors persisting until integration. [ABSTRACT FROM AUTHOR]
- Published
- 1983
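Annotation (a minimal, hypothetical example of the general technique of using algebraic axioms as executable test oracles; the editor, operations, and axioms here are invented and are not those of the paper).
```java
import java.util.ArrayList;
import java.util.List;

public final class AxiomCheck {
    // A trivial record-oriented buffer standing in for the implementation under test.
    static List<String> insert(List<String> doc, int i, String line) {
        List<String> copy = new ArrayList<>(doc);
        copy.add(i, line);
        return copy;
    }

    static List<String> delete(List<String> doc, int i) {
        List<String> copy = new ArrayList<>(doc);
        copy.remove(i);
        return copy;
    }

    public static void main(String[] args) {
        List<String> doc = List.of("alpha", "beta");
        // Axiom 1 (invented): delete(insert(doc, i, x), i) == doc
        assert delete(insert(doc, 1, "gamma"), 1).equals(doc);
        // Axiom 2 (invented): size(insert(doc, i, x)) == size(doc) + 1
        assert insert(doc, 0, "gamma").size() == doc.size() + 1;
        System.out.println("axioms hold on these test cases");
    }
}
```
Run with "java -ea AxiomCheck" so that the assertion checks are enabled.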
25. Performance of Synchronized Iterative Processes in Multiprocessor Systems.
- Author
-
Dubois, Michel and Briggs, Fayé A.
- Subjects
SYNCHRONIZATION ,MULTIPROCESSORS ,COMPUTERS ,INFORMATION technology ,PARALLEL processing ,ELECTRONIC data processing - Abstract
A general methodology for studying the degree of matching between an architecture and an algorithm is introduced and applied to the case of synchronized iterative algorithms in MIMD machines. The effectiveness of a multiprocessor system for a synchronized iterative algorithm depends on the performance features of the algorithm. In tightly coupled systems, performance is affected by memory interference. Conversely, the cost of interprocessor communication is determinant for loosely coupled systems. In this paper we develop simple and approximate analytic models to estimate the relative effects of synchronization, memory conflicts, and interprocessor communication. Using these models, we compare the effectiveness of three typical multiprocessor systems for synchronized iterative algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 1982
26. A New Method for Concurrency in B-Trees.
- Author
-
Yat-Sang Kwong and Wood, Derick
- Subjects
COMPUTERS ,COMPUTER systems ,COMPUTER software ,SOFTWARE engineering - Abstract
In this paper we study the problem of supporting concurrent operations in B-trees and their variants. A survey of previous work is given and two basic types of solutions to this problem are identified. A new solution with a greater degree of concurrency is proposed. As solutions are surveyed or presented we identify useful techniques which have wider applicability. In particular, we introduce the technique of side-branching in our new solution. [ABSTRACT FROM AUTHOR]
- Published
- 1982
27. The Concurrency Control Mechanism of SDD-1: A System for Distributed Databases (The Fully Redundant Case).
- Author
-
Bernstein, Philip A., Rothnie Jr., James B., Goodman, Nathan, and Papadimitriou, Christos A.
- Subjects
DATABASES ,DISTRIBUTED computing ,ELECTRONIC data processing ,DATABASE design ,COMPUTERS - Abstract
SDD-1, A System for Distributed Databases, is a distributed database system being developed by Computer Corporation of America (CCA), Cambridge, MA. SDD-1 permits data to be stored redundantly at several database sites in order to enhance the reliability and responsiveness of the system and to facilitate upward scaling of system capacity. This paper describes the method used by SDD-1 for updating data that are stored redundantly. Redundant updating can be costly because it may potentially involve extensive intercomputer communication overhead in order to lock all copies of data being updated. The method described here avoids this overhead by identifying cases in which it is not necessary to perform this global database locking. The identification of transactions that do not require global locking is based on a predefinition of transaction classes performed by the database administrator using an analysis technique described herein. The classes defined are used at run time to decide what level of synchronization is needed for a given transaction. It is important to note that this predefinition activity in no way limits the transactions that the system can accept; it merely permits more efficient execution of those types of transactions that were anticipated. [ABSTRACT FROM AUTHOR]
- Published
- 1978
28. Cover3.
- Subjects
MAGAZINE covers ,AUTHORS ,SOFTWARE engineering ,PUBLICATIONS ,COMPUTERS ,COMPUTER software ,DATABASES ,SOCIETIES - Published
- 2011
- Full Text
- View/download PDF
29. Automated Synthesis of Mediators to Support Component Interoperability.
- Author
-
Bennaceur, Amel and Issarny, Valerie
- Subjects
INTERNETWORKING ,SOFTWARE engineering ,MIDDLEWARE ,INSTANT messaging ,SOCIAL interaction ,COMPUTERS - Abstract
Interoperability is a major concern for the software engineering field, given the increasing need to compose components dynamically and seamlessly. This dynamic composition is often hampered by differences in the interfaces and behaviours of independently-developed components. To address these differences without changing the components, mediators that systematically enforce interoperability between functionally-compatible components by mapping their interfaces and coordinating their behaviours are required. Existing approaches to mediator synthesis assume that an interface mapping is provided which specifies the correspondence between the operations and data of the components at hand. In this paper, we present an approach based on ontology reasoning and constraint programming in order to infer mappings between components’ interfaces automatically. These mappings guarantee semantic compatibility between the operations and data of the interfaces. Then, we analyse the behaviours of components in order to synthesise, if possible, a mediator that coordinates the computed mappings so as to make the components interact properly. Our approach is formally-grounded to ensure the correctness of the synthesised mediator. We demonstrate the validity of our approach by implementing the MICS (Mediator synthesIs to Connect Components) tool and experimenting with it on various real-world case studies. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
30. Using Timed Automata for Modeling Distributed Systems with Clocks: Challenges and Solutions.
- Author
-
Rodriguez-Navas, Guillermo and Proenza, Julián
- Subjects
SOFTWARE verification ,AUTOMATION ,COMPUTER software ,EMBEDDED computer systems ,REAL-time computing ,HYBRID systems ,SYNCHRONIZATION - Abstract
The application of model checking for the formal verification of distributed embedded systems requires the adoption of techniques for realistically modeling the temporal behavior of such systems. This paper discusses how to model with timed automata the different types of relationships that may be found among the computer clocks of a distributed system, namely, ideal clocks, drifting clocks, and synchronized clocks. For each kind of relationship, a suitable modeling pattern is thoroughly described and formally verified. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
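Annotation (a common way of stating bounded drift and one typical encoding, not necessarily the paper's verified patterns): since all clocks in standard timed automata advance at rate exactly 1, drift is usually modeled indirectly through interval bounds.
```latex
% A clock x with drift bounded by rho satisfies, for real-time instants t1 <= t2,
\[
  (1-\rho)\,(t_2 - t_1) \;\le\; x(t_2) - x(t_1) \;\le\; (1+\rho)\,(t_2 - t_1) ,
\]
% so a timeout of nominal local duration d is often modeled as firing at some
% real-time point within the interval [d/(1+rho), d/(1-rho)].
```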
31. Nonparametric Analysis of the Order-Statistic Model in Software Reliability.
- Author
-
Wilson, Simon P. and Samaniego, Francisco J.
- Subjects
LITERATURE ,MATHEMATICAL statistics ,MATHEMATICS ,RELIABILITY (Personality trait) ,SIMULATION methods & models ,COMPUTERS ,COMPUTER software - Abstract
In the literature on statistical inference in software reliability, the assumptions of parametric models and random sampling of bugs have been pervasive. We argue that both assumptions are problematic, the first because of robustness concerns and the second due to logical and practical difficulties. These considerations motivate the approach taken in this paper. We propose a nonparametric software reliability model based on the order-statistic paradigm. The objective of the work is to estimate, from data on discovery times observed within a type I censoring framework, both the underlying distribution F from which discovery times are generated and N, the unknown number of bugs in the software. The estimates are used to predict the next time to failure. The approach makes use of Bayesian nonparametric inference methods, in particular, the beta-Stacy process. The proposed methodology is illustrated on both real and simulated data. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
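Annotation (the order-statistic setup described in the abstract, written out; the nonparametric inference machinery, in particular the beta-Stacy process, is in the paper itself).
```latex
% N bugs (N unknown) with i.i.d. discovery times X_1, ..., X_N ~ F (F unknown).
% Under type I censoring at time tau, the data are the ordered discovery times
\[
  X_{(1)} \le X_{(2)} \le \cdots \le X_{(k)} \le \tau ,
\]
% with N - k bugs still undiscovered; estimates of F and N are then used to
% predict the time of the next failure.
```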
32. Design-Level Performance Prediction of Component-Based Applications.
- Author
-
Yan Liu, Fekete, Alan, and Gorton, Ian
- Subjects
CORBA (Computer architecture) ,COMPUTER architecture ,COMPUTER software development ,BENCHMARKING (Management) ,COMPUTER software ,SYSTEMS design ,DIGITAL control systems ,COMPUTERS ,COMPUTER systems - Abstract
Server-side component technologies such as Enterprise JavaBeans (EJBs), .NET, and CORBA are commonly used in enterprise applications that have requirements for high performance and scalability. When designing such applications, architects must select a suitable component technology platform and application architecture to provide the required performance. This is challenging as no methods or tools exist to predict application performance without building a significant prototype version for subsequent benchmarking. In this paper, we present an approach to predict the performance of component-based server-side applications during the design phase of software development. The approach constructs a quantitative performance model for a proposed application. The model requires inputs from an application-independent performance profile of the underlying component technology platform, and a design description of the application. The results from the model allow the architect to make early decisions between alternative application architectures in terms of their performance and scalability. We demonstrate the method using an EJB application and validate predictions from the model by implementing two different application architectures and measuring their performance on two different implementations of the EJB platform. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
33. A Reflective Implementation of Java Multi-Methods.
- Author
-
Forax, Rémi, Duris, Etienne, and Roussel, Gilles
- Subjects
JAVA programming language ,PROGRAMMING languages ,C++ ,VIRTUAL machine systems ,DIGITAL computer simulation ,COMPUTERS - Abstract
In Java, method implementations are chosen at runtime by late-binding with respect to the runtime class of just the receiver argument. However, in order to simplify many programming designs, late-binding with respect to the dynamic type of all arguments is sometimes desirable. This behavior, usually provided by multi-methods, is known as multi-polymorphism. This paper presents a new multi-method implementation based on the standard Java reflection mechanism. Provided as a package, it does not require any language extension nor any virtual machine modification. The design issues of this reflective implementation are presented together with a new and simple multi-method dispatch algorithm that efficiently supports class loading at runtime. This implementation provides a practicable and fully portable multi-method solution. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
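Annotation (a minimal sketch of the idea only, with invented class names; this naive lookup ignores supertype search and caching, which the paper's dispatch algorithm addresses): the standard reflection API can select an overload from the runtime class of an argument rather than its static type.
```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public final class MultiMethodDemo {
    public interface Shape {}
    public static final class Circle implements Shape {}
    public static final class Square implements Shape {}

    public static final class Renderer {
        public String draw(Circle c) { return "circle"; }
        public String draw(Square s) { return "square"; }
    }

    // Naive multi-dispatch: look up draw(<runtime class of s>) reflectively.
    static String dispatch(Renderer r, Shape s)
            throws NoSuchMethodException, IllegalAccessException, InvocationTargetException {
        Method m = Renderer.class.getMethod("draw", s.getClass());
        return (String) m.invoke(r, s);
    }

    public static void main(String[] args) throws Exception {
        Shape s = new Circle();                            // static type Shape, runtime type Circle
        System.out.println(dispatch(new Renderer(), s));   // prints "circle"
    }
}
```
With ordinary overload resolution, r.draw(s) would not even compile here, because s has static type Shape; the reflective lookup recovers the runtime type.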
34. Clustering Algorithm for Parallelizing Software Systems in Multiprocessors Environment.
- Author
-
Kadamuddi, Dinesh and Tsai, Jeffrey J. P.
- Subjects
COMPUTER software ,MULTIPROCESSORS ,COMPUTERS ,COMPUTER software development ,ELECTRONIC commerce ,COMPUTER systems - Abstract
A variety of techniques and tools exist to parallelize software systems on different parallel architectures (SIMD, MIMD). With the advances in high-speed networks, there has been a dramatic increase in the number of client/server applications. A variety of client/server applications are deployed today, ranging from simple telnet sessions to complex electronic commerce transactions. Industry standard protocols, like Secure Socket Layer (SSL), Secure Electronic Transaction (SET), etc., are in use for ensuring privacy and integrity of data, as well as for authenticating the sender and the receiver during message passing. Consequently, a majority of applications using parallel processing techniques are becoming synchronization-centric, i.e., for every message transfer, the sender and receiver must synchronize. However, more effective techniques and tools are needed to automate the clustering of such synchronization-centric applications to extract parallelism. In this paper, we present a new clustering algorithm to facilitate the parallelization of software systems in a multiprocessors environment. The new clustering algorithm achieves traditional clustering objectives (reduction in parallel execution time, communication cost, etc.). Additionally, our approach 1) reduces the performance degradation caused by synchronizations, and 2) avoids deadlocks during clustering. The effectiveness of our approach is depicted with the help of simulation results. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
35. ZPL: A Machine Independent Programming Language for Parallel Computers.
- Author
-
Chamberlain, Bradford L., Sung-Eun Choi, Lewis, E. Christopher, Lin, Calvin, Snyder, Lawrence, and Weathersby, W. Derrick
- Subjects
ZPL (Computer program language) ,PROGRAMMING languages ,PARALLEL computers ,SEMANTICS ,HIGH performance computing ,COMPUTERS - Abstract
The goal of producing architecture-independent parallel programs is complicated by the competing need for high performance. The ZPL programming language achieves both goals by building upon an abstract parallel machine and by providing programming constructs that allow the programmer to "see" this underlying machine. This paper describes ZPL and provides a comprehensive evaluation of the language with respect to its goals of performance, portability, and programming convenience. In particular, we describe ZPL's machine-independent performance model, describe the programming benefits of ZPL's region-based constructs, summarize the compilation benefits of the language's high-level semantics, and summarize empirical evidence that ZPL has achieved both high performance and portability on diverse machines such as the IBM SP-2, Cray T3E, and SGI Power Challenge. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
36. Analysis of a Conflict Between Aggregation and Interface Negotiation in Microsoft's Component Object Model.
- Author
-
Sullivan, Kevin J., Marchukov, Mark, and Socha, John
- Subjects
SOFTWARE engineering ,COMPUTER software ,SOFTWARE configuration management ,COMPUTER architecture ,COMPUTERS - Abstract
Many software projects today are based on the integration of independently designed software components that are acquired on the market, rather than developed within the projects themselves. A component standard, or integration architecture, is a set of design rules meant to ensure that such components can be integrated in defined ways without undue effort. The rules of a component standard define, among other things, component interoperability and composition mechanisms. Understanding the properties of such mechanisms and interactions between them is important for the successful development and integration of software components, as well as for the evolution of component standards. This paper presents a rigorous analysis of two such mechanisms: component aggregation and dynamic interface negotiation, which were first introduced in Microsoft's Component Object Model (COM). We show that interface negotiation does not function properly within COM aggregation boundaries. In particular, interface negotiation generally cannot be used to determine the identity and set of interfaces of aggregated components. This complicates integration within aggregates. We provide a mediator-based example, and show that the problem is in the sharing of interfaces inherent in COM aggregation. [ABSTRACT FROM AUTHOR]
- Published
- 1999
- Full Text
- View/download PDF
37. Qualitative Methods in Empirical Studies of Software Engineering.
- Author
-
Seaman, Carolyn B.
- Subjects
SOFTWARE engineering ,DATA analysis ,EXPERIMENTAL design ,COMPUTER programming ,COMPUTERS ,COMPUTER science - Abstract
While empirical studies in software engineering are beginning to gain recognition in the research community, this subarea is also entering a new level of maturity by beginning to address the human aspects of software development. This added focus has added a new layer of complexity to an already challenging area of research. Along with new research questions, new research methods are needed to study nontechnical aspects of software engineering. In many other disciplines, qualitative research methods have been developed and are commonly used to handle the complexity of issues involving human behavior. This paper presents several qualitative methods for data collection and analysis and describes them in terms of how they might be incorporated into empirical studies of software engineering, in particular how they might be combined with quantitative methods. To illustrate this use of qualitative methods, examples from real software engineering studies are used throughout. [ABSTRACT FROM AUTHOR]
- Published
- 1999
- Full Text
- View/download PDF
38. Comprehending Object and Process Models: An Empirical Study.
- Author
-
Agarwal, Ritu, De, Prabuddha, and Sinha, Atish P.
- Subjects
OBJECT-oriented methods (Computer science) ,OBJECT-oriented programming ,COMPUTER programming ,COMPUTERS ,COMPUTER architecture ,COMPUTER science - Abstract
Although prior research has compared modeling performance using different systems development methods, there has been little research examining the comprehensibility of models generated by those methods. In this paper, we report the results of an empirical study comparing user comprehension of object-oriented (OO) and process-oriented (PO) models. The fundamental difference is that while OO models tend to focus on structure, PO models tend to emphasize behavior or processes. Proponents of the OO modeling approach argue that it lends itself naturally to the way humans think. However, evidence from research in cognitive psychology and human factors suggests that human problem solving is innately procedural. Given these conflicting viewpoints, we investigate empirically if OO models are in fact easier to understand than PO models. But, as suggested by the theory of cognitive fit, model comprehension may be influenced by task-specific characteristics. We, therefore, compare OO and PO models based on whether the comprehension activity involves: 1) only structural aspects, 2) only behavioral aspects, or 3) a combination of structural and behavioral aspects. We measure comprehension through subjects' responses to questions designed along these three dimensions. Two experiments were conducted, each with a different application and a different group of subjects. Each subject was first trained in both methods, and then participated in one of the two experiments, answering several questions relating to his or her comprehension of an OO or a PO model of a business application. The comprehension questions ranged in complexity from relatively simple (addressing either structural or behavioral aspects) to more complex ones (addressing both structural and behavioral aspects). Results show that for most of the simple questions, no significant difference was observed insofar as model comprehension is concerned. For most of the complex questions, however, the PO model was found to be easier to understand than the OO model. In addition to describing the process and the outcomes of the experiments, we present the experimental method employed as a viable approach for conducting research into various phenomena related to the efficacy of alternative systems analysis and design methods. We also identify areas where future research is necessary, along with a recommendation of appropriate research methods for empirical examination. [ABSTRACT FROM AUTHOR]
- Published
- 1999
- Full Text
- View/download PDF
39. Distributed Shared Abstractions (DSA) on Multiprocessors.
- Author
-
Clémençon, Christian, Mukherjee, Bodhisattwa, and Schwan, Karsten
- Subjects
- *
MULTIPROCESSORS , *INFORMATION technology , *COMPUTER networks , *COMPUTERS , *SEQUENTIAL machine theory - Abstract
Any parallel program has abstractions that are shared by the program's multiple processes, including data structures containing shared data, code implementing operations like global sums or minima, type instances used for process synchronization or communication. Such shared abstractions can considerably affect the performance of parallel programs, on both distributed and shared memory multiprocessors. As a result, their implementation must be efficient, and such efficiency should be achieved without unduly compromising program portability and maintainability. Unfortunately, efficiency and portability can be at cross-purposes, since high performance typically requires changes in the representation of shared abstractions across different parallel machines. The primary contribution of the DSA library presented and evaluated in this paper is its representation of shared abstractions as objects that may be internally distributed across different nodes of a parallel machine. Such distributed shared abstractions (DSA) are encapsulated so that their implementations are easily changed while maintaining program portability across parallel architectures ranging from small-scale multiprocessors, to medium-scale shared and distributed memory machines, and potentially, to networks of computer workstations. The principal results presented in this paper are 1) a demonstration that the fragmentation of object state across different nodes of a multiprocessor machine can significantly improve program performance, and 2) that such object fragmentation can be achieved without compromising portability by changing object interfaces. These results are demonstrated using implementations of the DSA library on several medium-scale multiprocessors, including the BBN Butterfly, Kendall Square Research, and SGI shared memory multiprocessors. The DSA library's evaluation uses synthetic workloads and a parallel implementation of a branch-and-bound algorithm for solving the Traveling Salesperson Problem (TSP). [ABSTRACT FROM AUTHOR]
- Published
- 1996
- Full Text
- View/download PDF
40. Experience with an Approach to Comparing Software Design Methodologies.
- Author
-
Xiping Song and Osterweil, Leon J.
- Subjects
- *
COMPUTER software development , *COMPUTER systems , *ELECTRONIC systems , *COMPUTERS , *COMPUTER programming - Abstract
A number of software design methodologies (SDM's) have been developed and compared over the past two decades. An accurate comparison would aid in codifying and integrating these SDM's. However, existing comparisons are often based largely upon the experiences of practitioners and the intuitive understandings of the authors. Consequently, they tend to be subjective and affected by application domains. In this paper, we introduce a systematic and defined process (called CDM) for objectively comparing SDM's. We believe that using CDM will lead to detailed, traceable, and objective comparisons. CDM uses process modeling techniques to model SDM's, classify their components (e.g., guidelines and notations), and analyze their procedural aspects. Modeling the SDM's entails decomposing their methods into components and analyzing the structure and functioning of the components. The classification of the components illustrates which components address similar design issues and/or have similar structures. Similar components then may be further modeled to aid in understanding more precisely their similarities and differences. The models of the SDM's are also used as the bases for conjectures and analyses about the differences between the SDM's. This paper describes three experiments that we carried out in evaluating CDM. The first uses CDM to compare JSD and Booch's Object Oriented Design (BOOD). The second uses CDM to compare two other pairs of SDM's. The last compares some of our comparisons with other comparisons done in the past using different approaches. The results of these experiments demonstrate that process modeling is valuable as a powerful tool in analysis of software development approaches. [ABSTRACT FROM AUTHOR]
- Published
- 1994
- Full Text
- View/download PDF
41. Analysis of Concurrency-Coherency Control Protocols for Distributed Transaction Processing Systems with Regional Locality.
- Author
-
Ciciani, Bruno, Dias, Daniel M., and Yu, Philip S.
- Subjects
- *
ELECTRONIC systems , *SYSTEMS software , *COMPUTER systems , *COMPUTERS , *MULTIMEDIA systems - Abstract
In this paper we examine a system structure and protocols to improve the performance of a distributed transaction processing system when there is some regional locality of data reference. Several transaction processing applications, such as reservation systems, insurance, and banking, belong to this category. While maintaining a distributed computer system at each region, a central computer system is introduced with a replication of all databases at the distributed sites. It can provide the advantage of distributed systems for transactions that refer principally to local data, and can also provide the advantage of centralized systems for transactions accessing nonlocal data. Specialized protocols can be designed to keep the copies at the distributed and centralized systems consistent without incurring the overhead and delay of generalized protocols for fully replicated databases. In this paper we study the advantage achievable through this system structure and the trade-offs between protocols for concurrency and coherency control of the duplicate copies of the databases. An approximate analytic model is employed to estimate the system performance. It is found that the performance is indeed sensitive to the protocol and that substantial performance improvement can be obtained as compared with distributed systems. The protocol design factors considered include the approach to intersite concurrency control (optimistic versus pessimistic), the resolution of aborts due to intersite conflict, and the choice of the master/primary site for the dual copies (distributed site versus central site). Among the protocols considered, the most robust one uses an optimistic protocol for intersite control with the distributed site as the master site, allows a locally running transaction to commit without any communication with the central site, and balances transaction aborts between transactions running at the central site and at the distributed sites. [ABSTRACT FROM AUTHOR]
- Published
- 1992
- Full Text
- View/download PDF
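As a rough illustration of one design factor discussed above, optimistic intersite concurrency control, the following Python sketch (with assumed names and a deliberately simplified versioning scheme, not the protocol analyzed in the paper) lets a transaction run entirely against its local site and validates its read set against the master copies only at commit time, aborting on an intersite conflict.

class OptimisticValidator:
    def __init__(self):
        self.committed_versions = {}  # item -> version number of the master copy

    def read_version(self, item):
        return self.committed_versions.get(item, 0)

    def try_commit(self, read_set, write_set):
        """read_set: {item: version observed locally}; write_set: items to update.
        Abort (return False) if any item read has since changed at the master."""
        for item, observed in read_set.items():
            if self.committed_versions.get(item, 0) != observed:
                return False
        for item in write_set:
            self.committed_versions[item] = self.committed_versions.get(item, 0) + 1
        return True

validator = OptimisticValidator()
read_set = {"seat-12A": validator.read_version("seat-12A")}
print(validator.try_commit(read_set, ["seat-12A"]))  # -> True, no intersite conflict
print(validator.try_commit(read_set, ["seat-12A"]))  # -> False, the item changed since it was read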
42. X-Ware Reliability and Availability Modeling.
- Author
-
Laprie, Jean-Claude and Kanoun, Karama
- Subjects
- *
SOFTWARE engineering , *STOCHASTIC processes , *COMPUTER systems , *COMPUTERS , *ELECTRONIC systems , *COMPUTER programming , *ELECTRONICS - Abstract
This paper addresses the problem of modeling a system's reliability and availability with respect to the various classes of faults (physical and design, internal and external) which may affect the service delivered to its users. Models that cover hardware and software together are currently the exception, in spite of the users' requirements; these requirements are expressed in terms of failures independently of their sources, i.e., the various classes of faults. The causes of this situation are analyzed; it is shown that there is no theoretical impediment to deriving such models, and that classical reliability theory can be generalized in order to cover both hardware and software viewpoints, that is, X-ware. After the introduction, which summarizes the current state of the art with regard to the dependability requirements from the users' viewpoint, the body of the paper is composed of two sections. Section II is devoted to system behavior up to failure; it focuses on failure rate and reliability, considering in turn atomic (or single-component) systems and systems made out of components. Section III deals with the sequence of failures when considering restoration actions consecutive to maintenance; failure intensity, reliability, and availability are derived for various forms of maintenance. [ABSTRACT FROM AUTHOR]
- Published
- 1992
- Full Text
- View/download PDF
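Not from the paper itself, but as background for the generalization the abstract describes: in classical reliability theory, for an atomic (single-component) system with failure rate \lambda(t), reliability and steady-state availability are commonly written as

\[
R(t) = \exp\!\left(-\int_0^t \lambda(u)\,\mathrm{d}u\right),
\qquad
A_\infty = \frac{\mathrm{MTTF}}{\mathrm{MTTF} + \mathrm{MTTR}},
\]

where MTTF and MTTR denote the mean time to failure and the mean time to restoration. The X-ware viewpoint argued for above applies such service-level measures regardless of whether the underlying faults are hardware or software.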
43. Prism—Methodology and Process-Oriented Environment.
- Author
-
Madhavji, Nazim H. and Schafer, Wilhelm
- Subjects
- *
EDUCATIONAL tests & measurements , *OCLC PRISM (Information retrieval system) , *COMPUTER sound processing , *COMPUTERS , *COMPUTER systems , *OPERATIONS research - Abstract
Prism is an experimental process-oriented environment supporting methodical development, instantiation, and execution of software processes. This paper describes the Prism model of engineering processes and an architecture which captures this model in its various components. The architecture has been designed to hold a product software process description, the life-cycle of which is supported by an explicit representation of a higher level (or meta) process description. The central part of this paper describes the nine-step Prism methodology for building and tailoring process models, and gives several scenarios to support this description. In Prism, process models are built using a hybrid process modeling language, which is based on a high-level Petri net formalism and rules. An important observation to note is that this environment work should be seen as an infrastructure, or a stepping stone, for carrying out the more difficult task of creating sound process models. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
44. The Processor Working Set and Its Use in Scheduling Multiprocessor Systems.
- Author
-
Ghosal, Dipak, Serazzi, Giuseppe, and Tripathi, Satish K.
- Subjects
- *
MULTIPROCESSORS , *COMPUTERS , *COMPUTER software , *SOFTWARE engineering , *ENGINEERING , *COMPUTER systems , *ALGORITHMS , *PARALLEL processing - Abstract
This paper makes two main contributions. First, it introduces the concept of a processor working set (pws) as a single-value parameter for characterizing parallel program behavior. Through detailed experimental studies of different algorithms on a transputer-based multiprocessor machine, it is shown that the pws is indeed a robust measure for characterizing the workload of a multiprocessor system. Small deviations in the performance of algorithms arising from communication overhead are captured in this parameter. The second contribution of this paper relates to the study of static processor allocation strategies. It is shown that processor allocation strategies based on the pws provide significantly better throughput-delay characteristics. The robustness of the pws is further demonstrated by showing that allocation policies that allocate more processors than the pws are inferior in performance to those that never allocate more than the pws, even at moderately low loads. Based on these results, a simple static allocation policy is proposed that allocates the pws at low load and adaptively fragments, at high load, down to one processor per job. This allocation strategy is shown to possess the best throughput-delay characteristic over a wide range of loads. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
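The sketch below is only a toy reading of the processor working set, under an assumption of this rewrite rather than the paper's definition: the pws is approximated as the largest processor count at which measured parallel efficiency stays above a chosen threshold, computed from a table of execution times.

def approximate_pws(runtimes, efficiency_threshold=0.5):
    """runtimes: dict mapping processor count -> measured execution time.
    Returns the largest count whose efficiency (speedup / processors)
    is still at or above the threshold; a crude stand-in for the pws."""
    t1 = runtimes[1]
    qualifying = []
    for p in sorted(runtimes):
        speedup = t1 / runtimes[p]
        if speedup / p >= efficiency_threshold:
            qualifying.append(p)
    return max(qualifying) if qualifying else 1

# Hypothetical measurements for a job that stops scaling well beyond 8 processors.
measured = {1: 100.0, 2: 52.0, 4: 27.0, 8: 16.0, 16: 14.0}
print(approximate_pws(measured))  # -> 8

An allocation policy in the spirit of the abstract would grant approximate_pws(measured) processors at low load and fragment down toward one processor per job as the load grows.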
45. Modeling of Hierarchical Distributed Systems with Fault-Tolerance.
- Author
-
Yuan-Bao Shieh, Ghosal, Dipak, Chintamaneni, Prasad R., and Tripathi, Satish K.
- Subjects
- *
DISTRIBUTED computing , *FAULT-tolerant computing , *SOFTWARE engineering , *COMPUTER software , *COMPUTER systems , *COMPUTERS - Abstract
This paper addresses some fault-tolerant issues pertaining to hierarchically distributed systems. Since each of the levels in a hierarchical system could have various characteristics, different fault-tolerance schemes could be appropriate at different levels. In this paper, we use stochastic Petri nets (SPN's) to investigate various fault-tolerant schemes in this context. The basic SPN is augmented by parameterized subnet primitives to model the fault-tolerant schemes. Both centralized and distributed fault-tolerant schemes are considered in this paper. These two schemes are investigated by considering the individual levels in a hierarchical system independently. In the case of distributed fault-tolerance, we consider two different checkpointing strategies. The first scheme is called the arbitrary checkpointing strategy. Each process in this scheme does its checkpointing independently; thus, the domino effect may occur. The second scheme is called the planned strategy. Here, process checkpointing is constrained to ensure no domino effect. Our results show that, under certain cases, an arbitrary checkpointing strategy can perform better than a planned strategy. Finally, we have studied the effect of integration on the fault-tolerant strategies of the various levels of a hierarchy. [ABSTRACT FROM AUTHOR]
- Published
- 1990
- Full Text
- View/download PDF
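To illustrate why the arbitrary (independent) checkpointing strategy above can suffer the domino effect, here is a small Python sketch under an assumed message-passing model rather than the paper's stochastic Petri net formulation: it searches for a consistent recovery line and rolls a receiver back whenever an "orphan" message was received before the receiver's chosen checkpoint but sent after the sender's.

def recovery_line(checkpoints, messages):
    """checkpoints: {process: sorted checkpoint times, starting at 0}
       messages: list of (sender, send_time, receiver, recv_time)
       Returns {process: checkpoint time} forming a consistent cut."""
    line = {p: cps[-1] for p, cps in checkpoints.items()}  # start from the latest
    changed = True
    while changed:
        changed = False
        for sender, st, receiver, rt in messages:
            if st > line[sender] and rt <= line[receiver]:
                # Orphan message: roll the receiver back one more checkpoint.
                earlier = [c for c in checkpoints[receiver] if c < line[receiver]]
                if earlier:
                    line[receiver] = earlier[-1]
                    changed = True
    return line

checkpoints = {"P0": [0, 4], "P1": [0, 6]}
messages = [("P0", 5, "P1", 5.5), ("P1", 2, "P0", 3)]
print(recovery_line(checkpoints, messages))  # -> {'P0': 0, 'P1': 0}: a full cascade

The planned strategy avoids this cascade by constraining when checkpoints may be taken, which is the trade-off the abstract's comparison is about.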
46. Extensions to an Approach to the Modeling of Software Testing with Some Performance Comparisons.
- Author
-
Downs, Thomas
- Subjects
- *
SOFTWARE engineering , *SOFTWARE architecture , *COMPUTER software development , *COMPUTER software quality control , *COMPUTER software industry , *COMPUTERS - Abstract
This paper shows how a major (and questionable) assumption underlying a previously reported approach to the modeling of software testing can be relaxed in order to provide a more realistic model. Under the assumption of uniform execution, the new model is found to perform only marginally better than the previous model, indicating that the uniform execution assumption is a poor one. A nonuniform execution model, also developed in the paper, is then shown to give very good performance when applied to three sets of software reliability data. The results obtained point the way to further developments which are likely to lead to models whose performance is superior to that of the nonuniform execution model presented here. The paper also devotes some attention to the problem of comparing the performance of different models and points out some difficulties in this area. [ABSTRACT FROM AUTHOR]
- Published
- 1986
47. Design of Dynamically Reconfigurable Real-Time Software Using Port-Based Objects.
- Author
-
Stewart, David B., Volpe, Richard A., and Khosla, Pradeep K.
- Subjects
PORTS (Electronic computer system) ,COMPUTERS ,COMPUTER software ,COMPUTER software industry ,DIGITAL control systems ,ROBOTICS - Abstract
The port-based object is a new software abstraction for designing and implementing dynamically reconfigurable real-time software. It forms the basis of a programming model that uses domain-specific elemental units to provide specific, yet flexible, guidelines to control engineers for creating and integrating software components. We use a port-based object abstraction, based on combining the notion of an object with the port-automaton algebraic model of concurrent processes. It is supported by an implementation using domain-specific communication mechanisms and templates that have been incorporated into the Chimera Real-Time Operating System and applied to several robotic applications. This paper describes the port-based object abstraction, provides a detailed analysis of communication and synchronization based on distributed shared memory, and describes a programming paradigm based on a framework process and code templates for quickly implementing applications. [ABSTRACT FROM AUTHOR]
- Published
- 1997
- Full Text
- View/download PDF
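A minimal Python sketch of the port-based object idea, with invented class and port names rather than the Chimera Real-Time Operating System's actual API: each object exposes typed input and output ports plus a periodic cycle() routine, and a small framework wires the ports together through shared state, so components can be added or swapped without editing one another.

class PortBasedObject:
    def __init__(self, name, in_ports, out_ports):
        self.name = name
        self.in_ports = list(in_ports)
        self.out_ports = list(out_ports)
    def cycle(self, inputs):
        """Consume a dict of input-port values, return a dict of output-port values."""
        raise NotImplementedError

class ProportionalController(PortBasedObject):
    """A toy control component: command = gain * (setpoint - measured)."""
    def __init__(self, gain):
        super().__init__("p_ctrl", ["setpoint", "measured"], ["command"])
        self.gain = gain
    def cycle(self, inputs):
        return {"command": self.gain * (inputs["setpoint"] - inputs["measured"])}

class Framework:
    """Keeps the shared port variables and runs every object once per period."""
    def __init__(self):
        self.state = {}
        self.objects = []
    def add(self, obj):
        self.objects.append(obj)
    def run_one_period(self):
        for obj in self.objects:
            inputs = {p: self.state.get(p, 0.0) for p in obj.in_ports}
            self.state.update(obj.cycle(inputs))

fw = Framework()
fw.state.update({"setpoint": 1.0, "measured": 0.2})
fw.add(ProportionalController(gain=2.0))
fw.run_one_period()
print(fw.state["command"])  # -> 1.6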
48. Software Support for Multiprocessor Latency Measurement and Evaluation.
- Author
-
Yong Yan, Xiaodong Zhang, and Qian Ma
- Subjects
EVALUATION ,DYNAMIC programming ,DEBUGGING ,MULTIPROCESSORS ,COMPUTERS ,INTEGRATED software - Abstract
Parallel computing scalability evaluates the extent to which parallel programs and architectures can effectively utilize increasing numbers of processors. In this paper, we compare a group of existing scalability metrics and evaluation models with an experimental metric which uses network latency to measure and evaluate the scalability of parallel programs and architectures. To provide insight into dynamic system performance, we have developed an integrated software environment prototype for measuring and evaluating multiprocessor scalability performance, called Scale-Graph. Scale-Graph uses a graphical instrumentation monitor to collect, measure and analyze latency-related data, and to display scalability performance based on various program execution patterns. The graphical software tool is X-window based and currently implemented on standard workstations to analyze performance data of the KSR-1, a hierarchical ring-based shared-memory architecture. [ABSTRACT FROM AUTHOR]
- Published
- 1997
- Full Text
- View/download PDF
49. Incremental LL(1) Parsing in Language-Based Editors.
- Author
-
Shilling, John J.
- Subjects
ALGORITHMS ,PARSING (Computer grammar) ,COMPUTATIONAL linguistics ,COMPUTER systems ,ELECTRONIC systems ,COMPUTERS ,ELECTRONICS - Abstract
This paper introduces an efficient incremental LL(1) parsing algorithm for use in language-based editors that use the structure recognition approach. It is designed to parse user input at intervals of very small granularity and to limit the amount of incremental parsing needed when changes are made internal to the editing buffer. The algorithm uses the editing focus as a guide in restricting parsing. It has been implemented in the Fred language-based editor. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
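For readers unfamiliar with LL(1) parsing, the following Python sketch shows ordinary (non-incremental) table-driven predictive parsing for a tiny assumed grammar; the paper's algorithm goes further by reparsing only the region around the user's edit, which this background sketch does not attempt.

# Grammar assumed for illustration:  E -> T E' ;  E' -> '+' T E' | epsilon ;  T -> 'id'
TABLE = {
    ("E",  "id"): ["T", "E'"],
    ("E'", "+"):  ["+", "T", "E'"],
    ("E'", "$"):  [],              # epsilon production
    ("T",  "id"): ["id"],
}
NONTERMINALS = {"E", "E'", "T"}

def ll1_parse(tokens):
    tokens = tokens + ["$"]
    stack = ["$", "E"]
    pos = 0
    while stack:
        top = stack.pop()
        lookahead = tokens[pos]
        if top in NONTERMINALS:
            production = TABLE.get((top, lookahead))
            if production is None:
                return False           # no table entry: syntax error
            stack.extend(reversed(production))
        else:
            if top != lookahead:
                return False           # terminal mismatch
            pos += 1                   # consume the token
    return pos == len(tokens)

print(ll1_parse(["id", "+", "id"]))    # -> True
print(ll1_parse(["id", "+"]))          # -> False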
50. Passive-space and Time View: Vector Clocks for Achieving Higher Performance, Program Correction, and Distributed Computing.
- Author
-
Ahuja, Mohan, Carlson, Timothy, and Gahlot, Ashwani
- Subjects
DISTRIBUTED computing ,ELECTRONIC data processing ,COMPUTER systems ,ELECTRONIC systems ,COMPUTERS - Abstract
We have noticed two problems with viewing a process as a sequence of events: the first is the complete loss of information about potential intra-process concurrency, for both sequential and distributed computations, and the partial loss of information about potential inter-process concurrency for distributed computations. The second is that the resulting reasoning framework does not lend itself to refinement (from sequential computing, or from a given set of distributed processes) to a preferable set of distributed processes. We argue that it is more natural to view a computation, either distributed or sequential, as a partially ordered set of events. Doing so leads to a view, called the passive-space and time view, which we propose in this paper. In the proposed view, a point in space is a passive entity that does not order the events that occur at it; the order is determined by the interaction among the events. We define a relation, "Affects", between pairs of events, which captures the partial order on events. To aid users of the relation "Affects" in developing algorithms, we define vector clocks, which are global logical clocks, so that the relation "Affects", and hence all potential concurrency between events, can be identified from their assigned timestamps. Since this research is motivated by the need to solve practical problems, we define the passive-space and time view such that a user has control, in two ways, over the costs involved. First, a process designer may choose any granularity of events and any mechanism for determining the partial order among events on the process. Second, a user may choose a vector clock such that the extent of potential intra-process concurrency identified depends on the costs associated with the identification. We give vector clocks that trade off the cost incurred against the concurrency identified. We compare the proposed view with the space and time view. Finally, we give a scheme for reducing the cost of communicating timestamps. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
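As background for the entry above, here is a minimal Python sketch of conventional vector clocks in the standard space-time view; the passive-space and time view and its cost-tunable clocks generalize this idea, so the code is not the authors' construction.

class VectorClock:
    def __init__(self, process_id, num_processes):
        self.pid = process_id
        self.clock = [0] * num_processes

    def local_event(self):
        self.clock[self.pid] += 1

    def send(self):
        self.local_event()
        return list(self.clock)            # timestamp carried on the message

    def receive(self, msg_timestamp):
        self.clock = [max(a, b) for a, b in zip(self.clock, msg_timestamp)]
        self.clock[self.pid] += 1

def happened_before(ts1, ts2):
    """True if the event stamped ts1 causally precedes the event stamped ts2."""
    return all(a <= b for a, b in zip(ts1, ts2)) and ts1 != ts2

p0, p1 = VectorClock(0, 2), VectorClock(1, 2)
m = p0.send()                        # p0's clock becomes [1, 0]
p1.receive(m)                        # p1's clock becomes [1, 1]
print(happened_before(m, p1.clock))  # -> True: the send precedes the receive

Two timestamps that are incomparable under happened_before identify potentially concurrent events, which is the kind of information the abstract's "Affects" relation and its vector clocks are designed to expose at a user-chosen cost.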