39 results
Search Results
2. An Acyclic Expansion Algorithm for Fast Protocol Validation.
- Author
- Kakuda, Yoshiaki, Wakahara, Yasushi, and Norigoe, Masamitsu
- Subjects
COMPUTER algorithms, ALGORITHMS, COMPUTER programming, COMPUTER software, ELECTRONIC systems, ENGINEERING, SOFTWARE engineering
- Abstract
For the development of communications software composed of many modules, protocol validation is essential to detect errors in the interactions among the modules. A number of protocol validation techniques have been proposed in the past, but the validation time required by these techniques is too long for many actual protocols. This paper proposes a new fast protocol validation technique to overcome this drawback. The proposed technique constructs the minimum acyclic form of the state transitions in the individual processes of the protocol, and quickly detects protocol errors such as system deadlocks and channel overflows. This paper also presents a protocol validation system based on the proposed technique to confirm its feasibility, and shows validation results for some actual protocols obtained with this system. As a result, the protocol validation system is expected to contribute greatly to improving the productivity of the development and maintenance of communications software. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
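The error classes named in the abstract above (system deadlocks, channel overflows) are classically found by exhaustive reachability analysis over the global state graph; the paper's contribution is making that search fast via acyclic expansion. As a baseline sketch only — not the paper's algorithm, and using an invented two-peer protocol — the exhaustive deadlock check looks like this:

```python
from collections import deque

def find_deadlocks(initial, transitions):
    """Breadth-first reachability over a global state graph: any
    reachable state with no outgoing transitions is a deadlock."""
    seen, queue, deadlocks = {initial}, deque([initial]), []
    while queue:
        state = queue.popleft()
        successors = transitions.get(state, [])
        if not successors:
            deadlocks.append(state)
        for nxt in successors:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return deadlocks

# Tiny invented protocol: a global state is a (peer1, peer2) pair.
# One interleaving reaches a state with no moves left.
transitions = {
    ("idle", "idle"): [("wait", "idle")],
    ("wait", "idle"): [("wait", "busy"), ("stuck", "stuck")],
    ("wait", "busy"): [("idle", "idle")],
    # ("stuck", "stuck") has no outgoing transitions: a deadlock.
}
dead = find_deadlocks(("idle", "idle"), transitions)
```

This exploration visits every reachable global state, which is exactly the blow-up the acyclic-expansion technique is designed to avoid.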
3. Introducing Software Engineering Developments to a Classical Operating Systems Course.
- Author
- Billard, Edward A.
- Subjects
ELECTRONIC systems, ENGINEERING, SOFTWARE engineering, SYSTEMS software, COMPUTER software, COMPUTER programming, SYSTEM analysis
- Abstract
An operating systems course draws from a well-defined fundamental theory, but one needs to consider how more recent advances, not necessarily in the theory itself, can be applied to improve the course and the general body of knowledge of the student. The goal of this paper is to show how recent software engineering developments can be introduced to such a course not only to satisfy the theory requirements, but also to make the theory more understandable. In particular, this paper focuses on how students can effectively learn the Unified Modeling Language, the object-oriented methodology, and the Java programming language in the context of an operating systems course. The goal is to form a systematic software engineering process for operating system design and implementation. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
4. Handling Obstacles in Goal-Oriented Requirements Engineering.
- Author
- van Lamsweerde, Axel and Letier, Emmanuel
- Subjects
ENGINEERING, ELECTRONIC systems, MATHEMATICAL programming, PROGRAM transformation, TECHNICAL specifications, COMPUTER software
- Abstract
Requirements engineering is concerned with the elicitation of high-level goals to be achieved by the envisioned system, the refinement of such goals and their operationalization into specifications of services and constraints and the assignment of responsibilities for the resulting requirements to agents such as humans, devices, and software. Requirements engineering processes often result in goals, requirements, and assumptions about agent behavior that are too ideal; some of them are likely not to be satisfied from time to time in the running system due to unexpected agent behavior. The lack of anticipation of exceptional behaviors results in unrealistic, unachievable, and/or incomplete requirements. As a consequence, the software developed from those requirements will not be robust enough and will inevitably result in poor performance or failures, sometimes with critical consequences on the environment. This paper presents formal techniques for reasoning about obstacles to the satisfaction of goals, requirements, and assumptions elaborated in the requirements engineering process. A first set of techniques allows obstacles to be generated systematically from goal formulations and domain properties. A second set of techniques allows resolutions to be generated once the obstacles have been identified thereby. Our techniques are based on a temporal logic formalization of goals and domain properties; they are integrated into an existing method for goal-oriented requirements elaboration with the aim of deriving more realistic, complete, and robust requirements specifications. A key principle in this paper is to handle exceptions at requirements engineering time and at the goal level, so that more freedom is left for resolving them in a satisfactory way. The various techniques proposed are illustrated and assessed in the context of a real safety-critical system. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
5. ABSTRACTING AND ENFORCING WEB SERVICE PROTOCOLS.
- Author
- Benatallah, Boualem, Casati, Fabio, Skogsrud, Halvard, and Toumani, Farouk
- Subjects
WEB services, APPLICATION software, COMPUTER software, COMPUTER systems, ELECTRONIC systems, ELECTRONICS, ENGINEERING, PHYSICAL sciences
- Abstract
Web services are emerging as a promising technology for the automation of inter-organizational interactions. As technology matures and the foundations of Web services become more solid, users will start to demand tools that facilitate the service development lifecycle. It is only when such tools become available that novel technologies become applied and enter the mainstream, since the complexity, cost and time necessary to deploy and manage solutions is dramatically reduced. In this paper, we present a framework and a tool that support the model-driven development of Web services. The idea consists in identifying key Web services abstractions, in addition to those of basic Web services standards, that enable the description of service policies and properties that are useful in practice. In this paper, we focus on service protocols, and specifically on conversation and trust negotiation protocols. These protocols are modeled by means of graphical tools and high-level languages so that they are easy to specify, understand, and evolve. The tools also support the automatic generation of service implementation skeletons based on these abstractions, manage the entire service lifecycle, and provide run-time support to verify that the interaction among clients and services occur in compliance with the specified policies. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
6. Nonrectangular Shaping and Sizing of Soft Modules for Floorplan-Design Improvement.
- Author
- Chu, Chris C. N. and Young, Evangeline F. Y.
- Subjects
COMPUTER software, ELECTRONIC systems, ENGINEERING, COMPUTER industry
- Abstract
Many previous works on floorplanning with nonrectangular modules [1]-[12] assume that the modules are predesignated to have particular nonrectangular shapes, e.g., L-shaped, T-shaped, etc. However, this is not common in practice because rectangular shapes are preferable in many design steps. Those nonrectangular shapes are actually generated during floorplanning in order to further optimize the solution. In this paper, we study the problem of changing the shapes and dimensions of the flexible modules to fill up the unused area of a preliminary floorplan, while keeping the relative positions between the modules unchanged. This feature will also be useful in fixing small incremental changes during engineering change order modifications. We formulate the problem as a mathematical program. The formulation is such that the dimensions of all of the rectangular and nonrectangular modules can be computed by closed-form equations in O(m) time in each corresponding Lagrangian relaxation subproblem (LRS), where m is the total number of edges in the constraint graphs. As a result, the total time for the whole shaping and sizing process is O(k × m), where k is the number of iterations on the LRS. Experimental results show that the amount of area reused is 3.7% on average, while the total wirelength can be reduced by 0.43% on average because of the more compact resulting packing. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
7. Towards Complexity Metrics for Ada Tasking.
- Author
- Shatz, Sol M.
- Subjects
DISTRIBUTED computing, PROGRAMMING languages, COMPUTER software, ELECTRONIC systems, ENGINEERING, SOFTWARE engineering
- Abstract
With growing interest in distributed computing come demands for techniques to aid in development of correct and reliable distributed software. Controlling, or at least recognizing, complexity of such software is an important part of the development and maintenance process. While a number of metrics have been proposed for quantitatively measuring the complexity of sequential, centralized programs, corresponding metrics for distributed software are noticeable by their absence. Using Ada as a representative distributed programming language, this paper discusses some ideas on complexity metrics that focus on Ada tasking and rendezvous. Concurrently active rendezvous are claimed to be an important aspect of communication complexity. A Petri net graph model of Ada rendezvous is used to introduce a "rendezvous graph," an abstraction that can be useful in viewing and computing effective communication complexity. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
8. Single-Site and Distributed Optimistic Protocols for Concurrency Control.
- Author
- Bassiouni, M. A.
- Subjects
ELECTRONIC data processing, DATABASES, COMPUTER software, ELECTRONIC systems, ENGINEERING, SOFTWARE engineering
- Abstract
In spite of their advantage in removing the overhead of lock maintenance and deadlock handling, optimistic concurrency control methods have continued to be far less popular in practice than locking schemes. There are two complementary approaches to help render the optimistic approach practically viable. For the high-level approach, integration schemes can be utilized so that the database management system is provided with a variety of synchronization methods, each of which can be applied to the appropriate class of transactions. The low-level approach seeks to increase the concurrency of the original optimistic method and improve its performance. In this paper we examine the latter approach, and present algorithms that aim at reducing backups and improving throughput. Both single-site and distributed networks are considered. Optimistic schemes using time-stamps for fully duplicated and partially duplicated database networks are presented, with emphasis on performance enhancement and on reducing the overall cost of implementation. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
9. Specification of Synchronizing Processes.
- Author
- Ramamritham, Krithivasan and Keller, Robert M.
- Subjects
COMPUTER software, COMPUTER programming, ELECTRONIC systems, SOFTWARE engineering, ENGINEERING, COMPUTER systems
- Abstract
The formalism of temporal logic has been suggested to be an appropriate tool for expressing the semantics of concurrent programs. This paper is concerned with the application of temporal logic to the specification of factors affecting the synchronization of concurrent processes. Towards this end, we first introduce a model for synchronization and axiomatize its behavior. SYSL, a very high-level language for specifying synchronization properties, is then described. It is designed using the primitives of temporal logic and features constructs to express properties that affect synchronization in a fairly natural and modular fashion. Since the statements in the language have intuitive interpretations, specifications are humanly readable. In addition, since they possess appropriate formal semantics, unambiguous specifications result. [ABSTRACT FROM AUTHOR]
- Published
- 1983
- Full Text
- View/download PDF
10. Toolpack—An Experimental Software Development Environment Research Project.
- Author
- Osterweil, Leon J.
- Subjects
COMPUTER software development, ELECTRONIC systems, SOFTWARE engineering, ENGINEERING, COMPUTER systems, COMPUTER software
- Abstract
This paper discusses the goals and methods of the Toolpack project and in this context discusses the architecture and design of the software system being produced as the focus of the project. Toolpack is presented as an experimental activity in which a large software tool environment is being created for the purpose of general distribution and then careful study and analysis. The paper begins by explaining the motivation for building integrated tool sets. It then proceeds to explain the basic requirements that an integrated system of tools must satisfy in order to be successful and to remain useful both in practice and as an experimental object. The paper then summarizes the tool capabilities that will be incorporated into the environment. It then goes on to present a careful description of the actual architecture of the Toolpack integrated tool system. Finally the Toolpack project experimental plan is presented, and future plans and directions are summarized. [ABSTRACT FROM AUTHOR]
- Published
- 1983
11. Three-dimensional finite element method for the filling simulation of injection molding.
- Author
- Geng Tie, Li Dequn, and Zhou Huamin
- Subjects
ENGINEERING, COMPUTER software, COMPUTER science, ELECTRONIC systems
- Abstract
With the development of molding techniques, molded parts have more complex and larger geometry with nonuniform thickness. In this case, the velocity and the variation of parameters in the gapwise direction are considerable and cannot be neglected. A three-dimensional (3D) simulation model can predict the filling process more accurately than a 2.5D model based on the Hele–Shaw approximation. This paper gives a mathematical model and a numerical method based on a 3D model to perform more accurate simulations of a fully three-dimensional flow. The model employs an equal-order velocity–pressure interpolation method. The relation between velocity and pressure is obtained from the discretized momentum equations in order to derive the pressure equation. A 3D control volume scheme is used to track the flow front. When calculating the temperature field, the influence of convection terms in three directions is considered. The software based on this 3D model can calculate the pressure field, velocity field, and temperature field in the filling process. The validity of the model has been tested through the analysis of the flow in cavities. [ABSTRACT FROM AUTHOR]
- Published
- 2006
12. Covering Arrays for Efficient Fault Characterization in Complex Configuration Spaces.
- Author
- Yilmaz, Cemal, Cohen, Myra B., and Porter, Adam A.
- Subjects
SOFTWARE engineering, COMPUTER software, ELECTRONIC systems, MATHEMATICS, ENGINEERING, TECHNOLOGY, HIGH technology industries, COMPUTER programming
- Abstract
Many modern software systems are designed to be highly configurable so they can run on and be optimized for a wide variety of platforms and usage scenarios. Testing such systems is difficult because, in effect, you are testing a multitude of systems, not just one. Moreover, bugs can and do appear in some configurations, but not in others. Our research focuses on a subset of these bugs that are "option-related"--those that manifest with high probability only when specific configuration options take on specific settings. Our goal is not only to detect these bugs, but also to automatically characterize the configuration subspaces (i.e., the options and their settings) in which they manifest. To improve efficiency, our process tests only a sample of the configuration space, which we obtain from mathematical objects called covering arrays. This paper compares two different kinds of covering arrays for this purpose and assesses the effect of sampling strategy on fault characterization accuracy. Our results strongly suggest that sampling via covering arrays allows us to characterize option-related failures nearly as well as if we had tested exhaustively, but at a much lower cost. We also provide guidelines for using our approach in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
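The covering-array sampling described in the abstract above can be illustrated with a toy greedy generator for strength-2 (pairwise) coverage. This is a simplified sketch with invented option names, not the authors' tooling; real covering-array construction uses far more scalable methods than the brute-force candidate scan below:

```python
from itertools import combinations, product

def pairwise_cover(options):
    """Greedily pick configurations until every pair of option
    settings appears in at least one chosen configuration."""
    names = list(options)
    # All (option, value) pairs of pairs that must be covered.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in options[a] for vb in options[b]
    }
    chosen = []
    while uncovered:
        # Pick the full configuration covering the most remaining pairs.
        # (Brute force over the whole space: fine for a toy example only.)
        best = max(
            (dict(zip(names, vals)) for vals in product(*options.values())),
            key=lambda cfg: sum(
                1 for (a, va), (b, vb) in uncovered
                if cfg[a] == va and cfg[b] == vb
            ),
        )
        chosen.append(best)
        uncovered = {
            ((a, va), (b, vb)) for (a, va), (b, vb) in uncovered
            if not (best[a] == va and best[b] == vb)
        }
    return chosen

# Hypothetical configuration space: three binary options.
opts = {"cache": [0, 1], "ssl": [0, 1], "debug": [0, 1]}
cfgs = pairwise_cover(opts)
# Fewer than the 8 exhaustive configurations are needed for full
# pairwise coverage.
```

Every pair of settings still appears in some sampled configuration, which is why option-related failures can be characterized from the sample nearly as well as from exhaustive testing.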
13. Event-Based Traceability for Managing Evolutionary Change.
- Author
- Cleland-Huang, Jane, Chang, Carl K., and Christensen, Mark
- Subjects
COMPUTER software, ENGINEERING, PROJECT management, COMPUTER systems, AUTOMATION, ELECTRONIC systems
- Abstract
Although the benefits of requirements traceability are widely recognized, the actual practice of maintaining a traceability scheme is not always entirely successful. The traceability infrastructure underlying a software system tends to erode over its lifetime, as time-pressured practitioners fail to consistently maintain links and update impacted artifacts each time a change occurs, even with the support of automated systems. This paper proposes a new method of traceability based upon event-notification and is applicable even in a heterogeneous and globally distributed development environment. Traceable artifacts are no longer tightly coupled but are linked through an event service, which creates an environment in which change is handled more efficiently, and artifacts and their related links are maintained in a restorable state. The method also supports enhanced project management for the process of updating and maintaining the system artifacts. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
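The loose coupling through an event service described in the abstract above resembles a publish/subscribe pattern. A minimal sketch (artifact names and the marking-stale policy are invented for illustration, not taken from the paper):

```python
class EventService:
    """Decouples traceable artifacts: subscribers register against an
    artifact id and are notified when that artifact changes, instead of
    being updated through hard-wired links."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, artifact_id, callback):
        self.subscribers.setdefault(artifact_id, []).append(callback)

    def publish(self, artifact_id, change):
        for callback in self.subscribers.get(artifact_id, []):
            callback(change)

# A requirement changes; the linked design artifact is recorded as
# needing review rather than silently drifting out of date.
bus = EventService()
stale = []
bus.subscribe("REQ-1", lambda change: stale.append(("DESIGN-7", change)))
bus.publish("REQ-1", "threshold changed")
```

Because links are mediated by the event service, artifacts in a distributed environment need not know about each other directly, and unprocessed notifications leave the link in a restorable state.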
14. Reusing Software: Issues and Research Directions.
- Author
- Mili, Hafedh, Mili, Fatma, and Mili, Ali
- Subjects
COMPUTER software development, ARTIFICIAL intelligence, SOFTWARE engineering, ENGINEERING, COMPUTER software, COMPUTER systems, ELECTRONIC systems
- Abstract
Software productivity has been steadily increasing over the past 30 years, but not enough to close the gap between the demands placed on the software industry and what the state of the practice can deliver [22], [39]; nothing short of an order of magnitude increase in productivity will extricate the software industry from its perennial crisis [39], [67]. Several decades of intensive research in software engineering and artificial intelligence left few alternatives but software reuse as the (only) realistic approach to bring about the gains of productivity and quality that the software industry needs. In this paper, we discuss the implications of reuse on software production, with an emphasis on the technical challenges. Software reuse involves building software that is reusable by design and building with reusable software. Software reuse includes reusing both the products of previous software projects and the processes deployed to produce them, leading to a wide spectrum of reuse approaches, from the building blocks (reusing products) approach, on one hand, to the generative or reusable processor (reusing processes) approach, on the other [68]. We discuss the implications of such approaches on the organization, control, and method of software development and discuss proposed models for their economic analysis. Software reuse benefits from methodologies and tools to: 1) build more readily reusable software and 2) locate, evaluate, and tailor reusable software, the last being critical for the building blocks approach. Both sets of issues are discussed in this paper, with a focus on application generators and OO development for the first, and a thorough discussion of retrieval techniques for software components, component composition (or bottom-up design), and transformational systems for the second. We conclude by highlighting areas that, in our opinion, are worthy of further investigation. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
15. High Performance Software Testing on SIMD Machines.
- Author
- Krauser, Edward W., Mathur, Aditya P., and Rego, Vernon J.
- Subjects
COMPUTER software, SOFTWARE engineering, ENGINEERING, COMPUTER systems, ELECTRONIC systems
- Abstract
This paper describes a new method, called mutant unification, for high-performance software testing. The method is aimed at supporting program mutation on parallel machines based on the Single Instruction Multiple Data stream (SIMD) paradigm. Several parameters that affect the performance of unification have been identified and their effect on the time to completion of a mutation test cycle and speedup has been studied. Program mutation analysis provides an effective means for determining the reliability of large software systems. It also provides a systematic method for measuring the adequacy of test data. However, it is likely that testing large software systems using mutation is computation bound and prohibitive on traditional sequential machines. Current implementations of mutation tools are unacceptably slow and are only suitable for testing relatively small programs. The unification method reported in this paper provides a practical alternative to the current approaches. It also opens up a new application domain for SIMD machines. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
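The mutation analysis summarized in the abstract above works by generating program variants ("mutants") through small syntactic changes and scoring test data by the fraction of mutants it distinguishes from the original. A toy sequential illustration — not the paper's SIMD unification scheme, and with an invented program and mutation operators:

```python
import operator

# Program under test, parameterized by its arithmetic operation so
# that "mutants" can be generated by swapping the operator.
def make_program(op):
    def prog(x, y):
        return op(x, y)
    return prog

original = make_program(operator.add)

# Mutants: each replaces '+' with a different operator.
mutants = [make_program(op)
           for op in (operator.sub, operator.mul, operator.truediv)]

def mutation_score(tests, original, mutants):
    """Fraction of mutants 'killed', i.e. distinguished from the
    original program by at least one test input."""
    killed = sum(
        1 for m in mutants
        if any(m(x, y) != original(x, y) for x, y in tests)
    )
    return killed / len(mutants)

weak_tests = [(2, 2)]            # 2+2 == 2*2, so the '*' mutant survives
strong_tests = [(2, 2), (3, 1)]  # 3+1 != 3*1, so all mutants are killed
```

A surviving mutant signals inadequate test data; the computational cost comes from running every test against every mutant, which is what motivates executing many mutants in parallel on an SIMD machine.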
16. Clarifying Some Fundamental Concepts in Software Testing.
- Author
- Parrish, Allen S. and Zweben, Stuart H.
- Subjects
ELECTRONIC systems, COMPUTER software, ENGINEERING, SOFTWARE engineering
- Abstract
A software test data adequacy criterion is a means for determining whether a test set is sufficient, or "adequate," for testing a given program. Previous work has proposed a set of properties that useful adequacy criteria should satisfy. In this paper, we identify some additional properties of useful adequacy criteria that are appropriate under certain realistic models of testing. We then discuss modifications to the formal definitions of certain popular adequacy criteria to make the criteria consistent with these additional properties. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
17. Software Performance Engineering: A Case Study Including Performance Comparison with Design Alternatives.
- Author
- Smith, Connie U. and Williams, Lloyd G.
- Subjects
ELECTRONIC systems, COMPUTER software, ENGINEERING, SOFTWARE engineering
- Abstract
Software Performance Engineering (SPE) provides an approach to constructing systems to meet performance objectives. This paper illustrates the application of SPE to an example with some real-time properties and demonstrates how to compare performance characteristics of design alternatives. We show how SPE can be integrated with design methods and demonstrate that performance requirements can be achieved without sacrificing other desirable design qualities such as understandability, maintainability, and reusability. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
18. A Synthesis of Software Science Measures and the Cyclomatic Number.
- Author
- Ramamurthy, Bina and Melton, Austin
- Subjects
COMPUTER software, ELECTRONIC systems, ENGINEERING, SOFTWARE engineering
- Abstract
In examining the software science family of software complexity measures and the cyclomatic number software complexity measure, one quickly makes the following observations. There are characteristics of a program which affect its complexity and which a software science measure can detect and measure but the cyclomatic number cannot, and likewise there are characteristics which the cyclomatic number can detect and measure but the software science measures cannot. Thus, one would like to define a measure or a family of measures which are sensitive to the software characteristics measured by the software science measures and which are also sensitive to the characteristics measured by the cyclomatic number. In this paper we present such a family of measures; these new measures are called weighted measures. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
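The two families being synthesized in the abstract above can be computed from simple program properties: the cyclomatic number from the count of decision points, and the software science (Halstead) measures from operator and operand counts. The sketch below uses a toy token stream and an invented linear weighting purely for illustration; the paper's actual weighted measures are defined differently:

```python
import math

def cyclomatic(decision_count):
    # McCabe's cyclomatic number for a single-entry, single-exit
    # program: number of binary decision points plus one.
    return decision_count + 1

def halstead_volume(operators, operands):
    # Halstead software science volume: N * log2(n), where N is the
    # total token count and n the vocabulary size.
    N = len(operators) + len(operands)
    n = len(set(operators)) + len(set(operands))
    return N * math.log2(n)

def weighted_measure(decision_count, operators, operands, w=0.5):
    # An invented linear combination of the two measures, standing in
    # for the paper's family of weighted measures.
    return (w * cyclomatic(decision_count)
            + (1 - w) * halstead_volume(operators, operands))

# Toy token stream for: if (a > b) max = a; else max = b;
ops = ["if", ">", "=", "else", "="]
opnds = ["a", "b", "max", "a", "max", "b"]
score = weighted_measure(decision_count=1, operators=ops, operands=opnds)
```

The point of the combination is exactly the observation in the abstract: a program can change its Halstead volume without changing its cyclomatic number (add straight-line tokens) and vice versa (add a branch), so a sensitive measure must respond to both.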
19. An Integrated Life-Cycle Model for Software Maintenance.
- Author
- Yau, Stephen S., Nicholl, Robin A., Tsai, Jeffrey J.-P., and Sying-Syang Liu
- Subjects
ELECTRONIC systems, PROGRAMMING languages, COMPUTER software development, COMPUTER software, ENGINEERING, SOFTWARE engineering
- Abstract
In this paper, an integrated life-cycle model is presented for use in a software maintenance environment. This model, which represents information about the development and maintenance of software systems and emphasizes relationships between different phases of the software life cycle, provides the basis for automated tools to assist maintenance personnel in making changes to existing software systems. This model is independent of particular specification, design, and programming languages because it represents only certain "basic" semantic properties of software systems: control flow, data flow, and data structure. The software development processes by which one phase of the software life cycle is derived from another are represented by graph rewriting rules, which indicate how various components of a software system have been implemented. This approach permits analysis of the basic properties of a software system throughout the software life cycle. Examples are given to illustrate the integrated software life-cycle model during evolution. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
20. A System for Generating Language-Oriented Editors.
- Author
- Tenma, Takao, Tsubotani, Hideaki, Tanaka, Minoru, and Ichikawa, Tadao
- Subjects
EDITORS, COMPUTER software, ELECTRONIC systems, ENGINEERING, SOFTWARE engineering
- Abstract
It is commonly accepted that language-oriented tools are helpful for constructing programs. In order to construct and extend language-oriented tools easily, meta-tools have been researched. Our interest is to establish a simple and flexible framework for the internal representation of programs, the internal representation of language-dependent information, and the behavior of language-oriented tools in response to users' operations. This paper presents a system for generating language-oriented editors based on object-oriented concepts. Features of the target language are represented as classes and their relations. A program is represented as an abstract syntax tree. Each node in the tree belongs to a node class. Processing of each user operation is achieved by messages between nodes. For generating more advanced editors, probes, internal classes, and gates are incorporated into the system. In conclusion, the system generates a flexible and easily extendable language-oriented editor from a target language description in a highly modularized fashion by using the description language which the system provides. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
21. A Distributed Specification Model and Its Prototyping.
- Author
- Yu Wang
- Subjects
DISTRIBUTED computing, COMPUTER software, ELECTRONIC systems, ENGINEERING, SOFTWARE engineering
- Abstract
In this paper, we describe a specification model that is based on the finite state machine but is distributed. The model allows the user to decompose a large system into separate views. Each view is a complete system in itself, and reveals how the whole system would behave as seen from a certain angle. Put together, the combined views present a complete picture of the whole system. The complexity of a large centralized system is thus distributed and subdued. We then offer a simple execution scheme for our model. Using a high-level state-transition language called SXL, constructs in the model are expressed as pre- and postconditions of transitions. The execution scheme allows all the views in the model to proceed in a parallel but harmonious way, producing a working prototype for the modeled system. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
22. Automated Protocol Implementation with RTAG.
- Author
- Anderson, David P.
- Subjects
PROGRAMMING languages, ELECTRONIC systems, COMPUTER software, SOFTWARE engineering, ENGINEERING, COMPUTER systems
- Abstract
RTAG is a language based on an attribute grammar notation for specifying protocols. Its main design goals are: 1) to support concise and easily understood expression of complex real-world protocols, and 2) to serve as the basis of a portable software system for automated protocol implementation. This paper summarizes the RTAG language, gives examples of its use, sketches the algorithms used in generating implementations from these specifications, and describes a UNIX®-based automated implementation system for RTAG. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
23. A Software Environment for the Specification and Analysis of Problems of Coordination and Concurrency.
- Author
- Aggarwal, Sudhir, Barbara, Daniel, and Meth, Kalman Z.
- Subjects
DISTRIBUTED computing, SOFTWARE engineering, ENGINEERING, COMPUTER software, COMPUTER systems, ELECTRONIC systems
- Abstract
In today's distributed computing environment, the coordination of concurrent processes and the coordination of resource sharing are of critical importance. Consequently, much effort has been focused on the modeling of problems of coordination and concurrency. In this paper we describe a software environment (SPANNER) for the specification and analysis of such problems. In the SPANNER environment, one can formally produce a specification of a distributed computing problem, and then verify its "correctness" through reachability analysis and simulation. SPANNER is based on a finite state machine model called the selection/resolution model. We illustrate the capabilities of SPANNER by specifying and analyzing two classical coordination problems: 1) the dining philosophers; and 2) Dijkstra's concurrent programming problem. In addition to discussing these specific problems, our intention is also to focus on some of the more recently implemented capabilities of the SPANNER system such as process types and cluster variables. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
24. Managing Software Engineering Projects: A Social Analysis.
- Author
- Scacchi, Walt
- Subjects
SOFTWARE engineering, COMPUTER systems, ENGINEERING, ELECTRONIC systems, COMPUTER software
- Abstract
Managing software engineering projects requires an ability to comprehend and balance the technological, economic, and social bases through which large software systems are developed. It requires people who can formulate strategies for developing systems in the presence of ill-defined requirements, new computing technologies, and recurring dilemmas with existing computing arrangements. This necessarily assumes skill in acquiring adequate computing resources, controlling projects, coordinating development schedules, and employing and directing competent staff. It also requires people who can organize the process for developing and evolving software products with locally available resources. Managing software engineering projects is as much a job of social interaction as it is one of technical direction. This paper examines the social arrangements that a software manager must deal with in developing and using new computing systems, evaluating the appropriateness of software engineering tools or techniques, directing the evolution of a system through its life cycle, organizing and staffing software engineering projects, and assessing the distributed costs and benefits of local software engineering practices. The purpose is to underscore the role of social analysis of software engineering practices as a cornerstone in understanding what it takes to productively manage software projects. [ABSTRACT FROM AUTHOR]
- Published
- 1984
25. A Mathematical Framework for the Investigation of Testing.
- Author
-
Gourlay, John S.
- Subjects
ELECTRONIC systems ,SOFTWARE engineering ,ENGINEERING ,COMPUTER systems ,COMPUTER software ,FACTOR analysis - Abstract
Testing has long been in need of mathematical underpinnings to explain its value as well as its limitations. This paper develops and applies a mathematical framework that 1) unifies previous work on the subject, 2) provides a mechanism for comparing the power of methods of testing programs based on the degree to which the methods approximate program verification, and 3) provides a reasonable and useful interpretation of the notion that successful tests increase one's confidence in the program's correctness. Applications of the framework include confirmation of a number of common assumptions about practical testing methods. Among the assumptions confirmed is the need for generating tests from specifications as well as programs. On the other hand, a careful formal analysis of the usual assumptions surrounding mutation analysis shows that the "competent programmer hypothesis" does not suffice to ensure the claimed high reliability of mutation testing. Hardware testing is shown to fit into the framework as well, and a brief consideration of it shows how the practical differences between it and software testing arise. [ABSTRACT FROM AUTHOR]
- Published
- 1983
26. Formal Specification and Verification of Distributed Systems.
- Author
-
Bo-Shoe Chen and Yeh, Raymond T.
- Subjects
DISTRIBUTED computing ,COMPUTER integrated manufacturing systems ,ELECTRONIC systems ,SOFTWARE engineering ,ENGINEERING ,COMPUTER systems ,COMPUTER software - Abstract
Computations of distributed systems are extremely difficult to specify and verify using traditional techniques because the systems are inherently concurrent, asynchronous, and nondeterministic. Furthermore, computing nodes in a distributed system may be highly independent of each other, and the entire system may lack an accurate global clock. In this paper, we develop an event-based model to specify formally the behavior (the external view) and the structure (the internal view) of distributed systems. Both control-related and data-related properties of distributed systems are specified using two fundamental relationships among events: the "precedes" relation, representing time order; and the "enables" relation, representing causality. No assumption about the existence of a global clock is made in the specifications. The specification technique has a rather wide range of applications. Examples from different classes of distributed systems, including communication systems, process control systems, and a distributed prime number generator, are used to demonstrate the power of the technique. The correctness of a design can be proved before implementation by checking the consistency between the behavior specification and the structure specification of a system. Both safety and liveness properties can be specified and verified. Furthermore, since the specification technique defines the orthogonal properties of a system separately, each of them can then be verified independently. Thus, the proof technique avoids the exponential state-explosion problem found in state-machine specification techniques. [ABSTRACT FROM AUTHOR]
- Published
- 1983
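The event-based model in the abstract above rests on two relations over events: "precedes" (time order) and "enables" (causality). The sketch below is an illustration, not the paper's formalism: the event names come from a hypothetical request/reply protocol, and the consistency check simply verifies that causality respects time order and that "precedes" is a strict order.

```python
def transitive_closure(pairs):
    """Close a binary relation over events under transitivity."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# Hypothetical events in a tiny request/reply protocol.
precedes = transitive_closure({("send_req", "recv_req"),
                               ("recv_req", "send_reply"),
                               ("send_reply", "recv_reply")})
enables = {("send_req", "recv_req"), ("send_reply", "recv_reply")}

# Consistency checks: every causal ("enables") edge must respect time
# order, and no event may precede itself (strict partial order).
assert enables <= precedes
assert all(a != b for (a, b) in precedes)
print("behavior specification is internally consistent")
```

Because the two relations are specified separately, each check runs independently, mirroring the paper's point that orthogonal properties can be verified without composing a global state machine.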
27. On the Multiple Implementation of Abstract Data Types Within a Computation.
- Author
-
White, John R.
- Subjects
SOFTWARE engineering ,COMPUTER software ,COMPUTER systems ,ELECTRONIC systems ,ENGINEERING - Abstract
A fundamental step in the software design process is the selection of a refinement (implementation) for a data abstraction. This step traditionally involves investigating the expected performance of a system under different refinements of an abstraction and then selecting a single alternative which minimizes some performance cost metric. In this paper we reformulate this design step to allow different refinements of the same data abstraction within a computation. This reformulation reflects the fact that the implementation appropriate for a data abstraction is dependent on the behavior exhibited by the objects of the abstraction. Since this behavior can vary among the objects of a computation, a single refinement is often inappropriate. Accordingly, three frameworks are presented for understanding and representing variations in the behavior of objects and, thus, the potential for multiple implementations. The three frameworks are based upon: 1) a static partitioning of objects into disjoint implementation classes; 2) static partitioning of classes into implementation regions; and 3) dynamic partitioning of classes into implementation regions. These frameworks and analytic tools useful in investigating expected performance under multiple implementations are described in detail. [ABSTRACT FROM AUTHOR]
- Published
- 1983
28. A Three-View Model for Performance Engineering of Concurrent Software.
- Author
-
Woodside, C.M.
- Subjects
- *
SOFTWARE engineering , *ENGINEERING , *COMPUTER software , *COMPUTER systems , *ELECTRONIC systems , *PETRI nets - Abstract
This paper describes a multiview characterization of concurrent software and systems suitable for displaying and analyzing performance information. The views draw from well-known descriptions, and are compatible with established techniques and tools such as execution graphs, Petri Nets, State-Charts, structured design or object-oriented design, and various models for performance. The views are connected by means of a "Core model" and are used together to extract information relating to system integration, such as interprocess overheads, and the delay behavior of separate software components in complex systems. The integration of the views in the Core assists by converting results in one view (such as scheduling delay for resources) to parameters in another (such as delays along a path). The ultimate goal of the views is to support designers in making tradeoffs which involve performance, and to provide early assessment of the performance potential of software designs. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
29. Software Bottlenecking in Client-Server Systems and Rendezvous Networks.
- Author
-
Neilson, I.E., Woodside, C.M., Petriu, D.C., and Majumdar, S.
- Subjects
- *
COMPUTER networks , *MULTIMEDIA systems , *SOFTWARE engineering , *ENGINEERING , *COMPUTER software , *COMPUTER systems , *ELECTRONIC systems - Abstract
Software bottlenecks are performance constraints caused by slow execution of a software task. In typical client-server systems a client task must wait in a blocked state for the server task to respond to its requests, so a saturated server will slow down all its clients. A Rendezvous Network generalizes this relationship to multiple layers of servers with send-and-wait interactions (rendezvous), to a two-phase model of task behavior, and to a unified model for hardware and software contention. Software bottlenecks have different symptoms, different behavior when the system is altered, and a different cure from the conventional bottlenecks seen in queueing network models of computer systems, caused by hardware limits. The differences are due to the "push-back" effect of the rendezvous, which spreads the saturation of a server to its clients. The paper describes software bottlenecks by examples, gives a definition, shows how they can be located and alleviated, and gives a method for estimating the performance benefit to be obtained. Ultimately, if all the software bottlenecks can be removed, the performance limit will be due to a conventional hardware bottleneck. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
30. Using Automatic Process Clustering for Design Recovery and Distributed Debugging.
- Author
-
Kunz, Thomas and Black, James P.
- Subjects
- *
DISTRIBUTED computing , *DEBUGGING , *PROGRAMMING languages , *SOFTWARE engineering , *ENGINEERING , *COMPUTER software , *COMPUTER systems , *ELECTRONIC systems - Abstract
Distributed applications written in Hermes typically consist of a large number of sequential processes. The use of a hierarchy of process clusters can facilitate the debugging of such applications. Ideally, such a hierarchy should be derived automatically. This paper discusses two approaches to automatic process clustering, one analyzing runtime information with a statistical approach and one utilizing additional semantic information. Tools realizing these approaches were developed and a quantitative measure to evaluate process clusters is proposed. The results obtained under both approaches are compared, and indicate that the additional semantic information improves the cluster hierarchies derived. We demonstrate the value of automatic process clustering with an example. It is shown how appropriate process clusters reduce the complexity of the understanding process, facilitating program maintenance activities such as debugging. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
31. Call Path Refinement Profiles.
- Author
-
Hall, Robert J.
- Subjects
- *
COMPUTER programming , *SOFTWARE engineering , *ENGINEERING , *COMPUTER software , *COMPUTER systems , *ELECTRONIC systems - Abstract
In order to effectively optimize complex programs built in a layered or recursive fashion (possibly from reused general components), the programmer has a critical need for performance information connected directly to the design decisions and other optimization opportunities present in the code. Call path refinement profiles are novel tools for guiding the optimization of such programs, that: 1) provide detailed performance information about arbitrarily nested (direct or indirect) function call sequences, and 2) focus the user's attention on performance bottlenecks by limiting and aggregating the information presented. This paper discusses the motivation for such profiles, describes in detail their implementation in the CPPROF profiler, and relates them to previous profilers, showing how most widely available profilers can be expressed simply and efficiently in terms of call path refinements. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
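The profiling idea described in the abstract above — attributing cost to arbitrarily nested call sequences and refining attention one level at a time — can be sketched with plain data. This is not the CPPROF implementation; the sample call paths and costs below are made up, and real profilers would collect them from instrumented runs.

```python
from collections import defaultdict

# Hypothetical flat samples: (call path from main, cost in ms).
samples = [
    (("main", "render", "layout"), 40),
    (("main", "render", "draw"), 25),
    (("main", "parse", "lex"), 10),
    (("main", "render", "draw", "blit"), 15),
]

def refine(samples, prefix):
    """Aggregate cost by the next call below `prefix` (one refinement step).

    Only paths extending `prefix` contribute, and each contributes its
    cost to the prefix lengthened by exactly one callee, so the report
    stays small while still distinguishing full call sequences.
    """
    out = defaultdict(int)
    k = len(prefix)
    for path, cost in samples:
        if path[:k] == prefix and len(path) > k:
            out[path[:k + 1]] += cost
    return dict(out)

# Refining ("main",) focuses attention on the costliest subtree first,
# then the user drills down along the hot path only.
print(refine(samples, ("main",)))
print(refine(samples, ("main", "render")))
```

Each `refine` call limits and aggregates the information shown, which is the abstract's point: the user's attention stays on the bottleneck path instead of a flat function-level table.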
32. Performance Analysis of Two-Phase Locking.
- Author
-
Thomasian, Alexander and Kyung Ryu
- Subjects
- *
SOFTWARE engineering , *ENGINEERING , *COMPUTER software , *COMPUTER systems , *ELECTRONIC systems , *QUEUING theory - Abstract
The performance of transaction processing systems with two-phase locking (2PL) can be degraded by transaction blocking due to lock conflicts and aborts to resolve deadlocks. This paper develops a straightforward analytic solution method, which takes into account the variability of transaction size (the number of lock requests), an issue which has been ignored in most earlier studies. We first obtain analytic expressions for the probability of lock conflict, probability of deadlock, and the waiting time per lock conflict. We then develop a family of noniterative analytic solutions to evaluate the overall system performance by considering the expansion in transaction response time due to transaction blocking. The accuracy of these solutions is verified by validation against simulation results. We also introduce a new measure for the degree of lock contention, which is a product of the mean number of lock conflicts per transaction and the mean waiting time per lock conflict (when blocked by an active transaction). It is shown that the variability in transaction size results in an increase in both measures as compared to fixed-size transactions of comparable size. It follows that performance studies of 2PL which ignore the variability in transaction size underestimate the effect of lock contention on system performance. We also provide a solution method for the case when the processing times of transaction steps are different. The solution method is used in the analysis of the two-phase transaction processing method, according to which a transaction is first executed without requesting any locks, prefetching data (from disk) for the second execution phase with locking. 
In high lock contention environments, this approach may result in a significant improvement in performance compared to 2PL, which is due to the reduction in lock-holding time, since no disk I/O is required in the second phase of the execution, but this is achieved at the cost of additional CPU processing due to repeated execution. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
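The 2PL abstract above works with three quantities: the probability of lock conflict, the mean number of conflicts per transaction, and the degree-of-contention product (mean conflicts × mean wait per conflict). The sketch below uses textbook first-order approximations under a uniform-access assumption — not the paper's exact analytic solution — and the numeric inputs are made-up examples.

```python
def lock_contention(db_items, active_txns, mean_locks, mean_wait_ms):
    """Back-of-the-envelope 2PL contention figures (uniform access assumed).

    p_conflict        : chance one lock request hits a lock held by another
                        active transaction (locks held by others / items);
                        on average each transaction holds about half its
                        eventual locks at any instant, hence the / 2.
    conflicts_per_txn : mean lock conflicts over a transaction's requests.
    alpha             : degree-of-contention style product --
                        conflicts per transaction x mean wait per conflict.
    """
    locks_held_by_others = (active_txns - 1) * mean_locks / 2
    p_conflict = locks_held_by_others / db_items
    conflicts_per_txn = mean_locks * p_conflict
    alpha = conflicts_per_txn * mean_wait_ms
    return p_conflict, conflicts_per_txn, alpha

p, c, a = lock_contention(db_items=1000, active_txns=21, mean_locks=10,
                          mean_wait_ms=50.0)
print(f"P(conflict)={p:.3f}, conflicts/txn={c:.2f}, alpha={a:.1f} ms")
# → P(conflict)=0.100, conflicts/txn=1.00, alpha=50.0 ms
```

Note that `conflicts_per_txn` grows with the second moment of transaction size once size is variable, which is the abstract's warning: fixed-size models with the same mean underestimate contention.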
33. A Fault-Tolerant Scheduling Problem.
- Author
-
Liestman, Arthur L. and Campbell, Roy H.
- Subjects
- *
FAULT tolerance (Engineering) , *ELECTRONIC systems , *COMPUTER software , *SOFTWARE engineering , *ENGINEERING - Abstract
A real-time system must be reliable if a failure to meet its timing specifications might endanger human life, damage equipment, or waste expensive resources. Applications that require remote operation, timing accuracy, and long periods of activity need mechanisms to support reliability. Fault tolerance improves reliability by incorporating redundancy into the system design. A deadline mechanism has been proposed to provide fault tolerance in real-time software systems. The mechanism trades the accuracy of the results of a service for timing precision. Two independent algorithms are provided for each service subject to a deadline. The primary algorithm produces a good quality service, although its real-time reliability may not be assured. The alternate algorithm is reliable and produces an acceptable response. This paper introduces an algorithm to generate an optimal schedule for the deadline mechanism and discusses a simple and efficient implementation. The schedule ensures the timely completion of the alternate algorithm despite a failure to complete the primary algorithm within real time. [ABSTRACT FROM AUTHOR]
- Published
- 1986
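The deadline mechanism described above pairs a high-quality but unassured primary algorithm with a reliable alternate. A standard way to schedule the pair — sketched below for a single service with made-up worst-case execution times, not the paper's algorithm for full task sets — is to reserve the alternate at the latest start time that still meets the deadline, leaving all earlier slack to the primary.

```python
def deadline_schedule(now, deadline, primary_wcet, alternate_wcet):
    """Latest-start reservation for the deadline mechanism (single service).

    The reliable alternate is reserved at the latest instant that still
    guarantees it completes by the deadline; the primary may use all the
    slack before that point and is abandoned if it overruns. Returns
    (primary_budget, alternate_start), or None if even the alternate
    cannot finish in time. Times are in arbitrary units.
    """
    alternate_start = deadline - alternate_wcet
    if alternate_start < now:
        return None  # not schedulable: no room even for the alternate
    primary_budget = min(primary_wcet, alternate_start - now)
    return primary_budget, alternate_start

print(deadline_schedule(now=0, deadline=100, primary_wcet=70, alternate_wcet=20))
# → (70, 80): the primary may run for 70 units; if it has not finished by
#   time 80, the alternate runs and still completes by the deadline.
```

This trades result quality for timing precision exactly as the abstract describes: the alternate's completion is guaranteed even when the primary fails to finish within its budget.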
34. Evaluating Database Update Schemes: A Methodology and Its Applications to Distributive Systems.
- Author
-
Kinsley, Kathryn C. and Hughes, Charles E.
- Subjects
DISTRIBUTED computing ,DATABASES ,COMPUTER software ,ELECTRONIC systems ,ENGINEERING ,SOFTWARE engineering - Abstract
A methodology is presented for evaluating the performance of database update schemes in a distributive environment. The methodology makes use of the history of how data are used in the database. Parameters, such as update to retrieval ratio and average file size, can be set based on the actual characteristics of a system. The analysis is specifically directed toward the support of derived data within the relational model. Because concurrency is a major problem in a distributive system, the support of derived data is analyzed with respect to three distributive concurrency control techniques—master/slave, distributed, and synchronized. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
35. Evaluation of Safety-Critical Software.
- Author
-
Parnas, David L., van Schouwen, A. John, Shu Po Kwan, and Rushby, John
- Subjects
ELECTRONIC systems ,COMPUTER software ,MEDICAL equipment ,ENGINEERING ,SPACE shuttles ,COMPUTER systems - Abstract
This article focuses on the evaluation of safety-critical software. It is increasingly common to use programmable computers in applications where their failure could be life-threatening and could result in extensive damage. For example, computers now have safety-critical functions in both military and civilian aircraft, in nuclear plants, and in medical devices. Within the engineering community software systems have a reputation for being undependable, especially in the first years of their use. The public is aware of a few spectacular stories such as the Space Shuttle flight that was delayed by a software timing problem, or the Venus probe that was lost because of a punctuation error. In the software community, the problem is known to be much more widespread. Generally, many uses and many failures are required before a product is considered reliable. Software products, including those that have become relatively reliable, behave like other products of evolution-like processes; they often fail, even years after they were built, when the operating conditions change.
- Published
- 1990
- Full Text
- View/download PDF
36. On the Customization of Components: A Rule-Based Approach.
- Author
-
Jia Zhou, Kendra Cooper, Hui Ma, and I-Ling Yen
- Subjects
RULE-based programming ,COMPUTER programming ,QUALITY of service ,ELECTRONIC systems ,COMPUTER software ,ENGINEERING ,SOFTWARE engineering ,SOURCE code - Abstract
Realizing the quality-of-service (QoS) requirements for a software system continues to be an important and challenging issue in software engineering. A software system may need to be updated or reconfigured to provide modified QoS capabilities. These changes can occur at development time or at runtime. In component-based software engineering, software systems are built by composing components. When the QoS requirements change, there is a need to reconfigure the components. Unfortunately, many components are not designed to be reconfigurable, especially in terms of QoS capabilities. It is often labor-intensive and error-prone work to reconfigure the components, as developers need to manually check and modify the source code. Furthermore, the work requires experienced senior developers, which makes it costly. The limitations motivate the development of a new rule-based semiautomated component parameterization technique that performs code analysis to identify and adapt parameters and changes components into reconfigurable ones. Compared with a number of alternative QoS adaptation approaches, the proposed rule-based technique has advantages in terms of flexibility, extensibility, and efficiency. The adapted components support the reconfiguration of potential QoS trade-offs among time, space, quality, and so forth. The proposed rule-based technique has been successfully applied to two substantial libraries of components. The F-measure or balanced F-score results for the validation are excellent, that is, 94 percent. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
37. Layout Appropriateness: A Metric for Evaluating User Interface Widget Layout.
- Author
-
Sears, Andrew
- Subjects
ELECTRONIC systems ,HUMAN-computer interaction ,COMPUTER software ,ENGINEERING ,SOFTWARE engineering - Abstract
Numerous methods to evaluate user interfaces have been investigated. These methods vary greatly in the attention paid to the users' tasks. Some methods require detailed task descriptions while others are task-independent. Unfortunately, collecting detailed task information can be difficult. On the other hand, task-independent methods cannot evaluate a design for the tasks users actually perform. The goal of this research is to develop a metric, which incorporates simple task descriptions, that can assist designers in organizing widgets in the user interface. Simple task descriptions provide some of the benefits, without the difficulties, of performing a detailed task analysis. The metric, Layout Appropriateness (LA), requires a description of the sequences of widget-level actions users perform and how frequently each sequence is used. This task description can either be from observations of an existing system or from a simplified task analysis. The appropriateness of a given layout is computed by weighting the cost of each sequence of actions by how frequently the sequence is performed. This emphasizes frequent methods of accomplishing tasks while incorporating less frequent methods in the design. In addition to providing a comparison of proposed or existing layouts, an LA-optimal layout can be presented to the designer. The designer can compare the LA-optimal and existing layouts or start with the LA-optimal layout and modify it to take additional factors into consideration. Software engineers who occasionally face interface design problems and user interface designers can benefit from the explicit focus on the users' tasks that LA incorporates into automated user interface evaluation. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
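The Layout Appropriateness abstract above gives an explicitly computable recipe: weight the cost of each widget-level action sequence by how frequently it is performed. The sketch below scores a layout that way with Euclidean travel distance as the cost; the widget names, coordinates, and task frequencies are made-up examples, and the paper's actual LA metric is normalized against an optimal layout rather than reported as a raw cost.

```python
from math import dist

def layout_cost(layout, sequences):
    """Frequency-weighted travel cost of a widget layout (lower is better).

    layout    : widget name -> (x, y) position.
    sequences : list of (frequency, [widget, widget, ...]) action sequences,
                e.g. from observation of an existing system.
    A sequence's cost is the Euclidean distance travelled between
    consecutive widgets; weighting by frequency emphasizes common tasks
    while still counting rare ones.
    """
    total = 0.0
    for freq, seq in sequences:
        cost = sum(dist(layout[a], layout[b]) for a, b in zip(seq, seq[1:]))
        total += freq * cost
    return total

layout_a = {"open": (0, 0), "edit": (10, 0), "save": (20, 0)}
layout_b = {"open": (0, 0), "edit": (100, 50), "save": (10, 0)}
tasks = [(0.7, ["open", "edit", "save"]), (0.3, ["open", "save"])]

# Comparing two candidate layouts for the same observed task mix:
print(layout_cost(layout_a, tasks))                              # → 20.0
print(layout_cost(layout_b, tasks) > layout_cost(layout_a, tasks))  # → True
```

Because the score depends only on the task mix and widget positions, a designer can rank proposed layouts, or search for a minimum-cost arrangement, without a full task analysis — the trade-off the abstract argues for.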
38. The Tinkertoy Graphical Programming Environment.
- Author
-
Edel, Mark
- Subjects
PROGRAMMING languages ,COMPUTER software ,ELECTRONIC systems ,ENGINEERING ,SOFTWARE engineering - Abstract
Tinkertoy is a graphic interface to Lisp, where programs are "built" rather than written, out of icons and flexible interconnections. It is exciting because it represents a computer/user interface that can easily exceed the interaction speed of the best text-based language editors and command languages. It also provides a consistent framework for interaction across both editing and command execution. Moreover, because programs are represented graphically, structures that do not naturally conform to the text medium can be clearly described, and new kinds of information can be incorporated into programs and program elements. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
39. Skinner Wasn't a Software Engineer.
- Author
-
Harrison, Warren
- Subjects
COMPUTER software ,ENGINEERING ,PERFORMANCE ,ELECTRONIC systems ,EXPERIMENTAL design ,TECHNICAL specifications - Abstract
The article discusses the comparative group experiment, one of the main approaches software engineering researchers and practitioners use to argue the value of a new tool or technique. Typically, this involves comparing the performance of two groups of subjects, one that employs the new tool or technique and one that does not. These experiments are much more convincing if the groups consist of dozens of programmers rather than three or four. Comparative group experiments tend to work best when outcomes are observable within a short period of time. This is because of both the cost as well as the ability to control the subjects. A common alternative to the controlled group experiment is the field study, an analysis of the measurements taken in the process of developing a real software system. However, a field study's results can often be as misleading as academic studies involving students performing simple programming tasks.
- Published
- 2005
- Full Text
- View/download PDF