136 results
Search Results
2. Variability in Software Systems—A Systematic Literature Review.
- Author
-
Galster, Matthias, Weyns, Danny, Tofan, Dan, Michalik, Bartosz, and Avgeriou, Paris
- Subjects
- *
COMPUTER software quality control , *SOFTWARE engineering , *LITERATURE reviews , *ENGINEERING , *ELECTRONIC systems - Abstract
Context: Variability (i.e., the ability of software systems or artifacts to be adjusted for different contexts) has become a key property of many systems. Objective: We analyze existing research on variability in software systems. We investigate variability handling in major software engineering phases (e.g., requirements engineering, architecting). Method: We performed a systematic literature review. A manual search covered 13 premium software engineering journals and 18 premium conferences, resulting in 15,430 papers searched and 196 papers considered for analysis. To improve reliability and to increase reproducibility, we complemented the manual search with a targeted automated search. Results: Software quality attributes have not received much attention in the context of variability. Variability is studied in all software engineering phases, but testing is underrepresented. Data to motivate the applicability of current approaches are often insufficient; research designs are vaguely described. Conclusions: Based on our findings we propose dimensions of variability in software engineering. This empirically grounded classification provides a step towards a unifying, integrated perspective of variability in software systems, spanning across disparate or loosely coupled research themes in the software engineering community. Finally, we provide recommendations to bridge the gap between research and practice and point to opportunities for future research. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
3. A Study of Variability Models and Languages in the Systems Software Domain.
- Author
-
Berger, Thorsten, She, Steven, Lotufo, Rafael, Wasowski, Andrzej, and Czarnecki, Krzysztof
- Subjects
- *
SYSTEMS software , *COMPUTER software research , *COMPILERS (Computer programs) , *SOFTWARE engineering , *ENGINEERING - Abstract
Variability models represent the common and variable features of products in a product line. Since the introduction of FODA in 1990, several variability modeling languages have been proposed in academia and industry, followed by hundreds of research papers on variability models and modeling. However, little is known about the practical use of such languages. We study the constructs, semantics, usage, and associated tools of two variability modeling languages, Kconfig and CDL, which are independently developed outside academia and used in large and significant software projects. We analyze 128 variability models found in 12 open-source projects using these languages. Our study 1) supports variability modeling research with empirical data on the real-world use of its flagship concepts. However, we 2) also provide requirements for concepts and mechanisms that are not commonly considered in academic techniques, and 3) challenge assumptions about size and complexity of variability models made in academic papers. These results are of interest to researchers working on variability modeling and analysis techniques and to designers of tools, such as feature dependency checkers and interactive product configurators. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
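Kconfig, one of the two languages the study analyzes, is the Linux kernel's configuration language. A minimal fragment in its style (the option names here are invented for illustration, not taken from the study's models) shows the flavor of such a variability model: typed features with prompts, defaults, and cross-feature dependencies.

```kconfig
config FEATURE_CACHE
        bool "Enable the result cache"   # Boolean feature with a user-visible prompt
        depends on FEATURE_STORAGE       # cross-tree constraint on another feature
        default y                        # default selection

config CACHE_SIZE_KB
        int "Cache size in kilobytes"    # non-Boolean (numeric) feature
        depends on FEATURE_CACHE         # only visible when the cache is enabled
        default 64
```

Non-Boolean features and visibility conditions of this kind are among the real-world constructs the study reports as common in practice yet rarely covered by academic feature-modeling techniques.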
4. The Role of Deliberate Artificial Design Elements in Software Engineering Experiments.
- Author
-
Hannay, Jo E. and Jorgensen, Magne
- Subjects
- *
SOFTWARE engineering , *SYSTEMS design , *COMPUTER software , *COMPUTER systems , *ENGINEERING , *DESIGN - Abstract
Increased realism in software engineering experiments is often promoted as an important means of increasing generalizability and industrial relevance. In this context, artificiality, e.g., the use of constructed tasks in place of realistic tasks, is seen as a threat. In this paper, we examine the opposite view that deliberately introduced artificial design elements may increase knowledge gain and enhance both generalizability and relevance. In the first part of this paper, we identify and evaluate arguments and examples in favor of and against deliberately introducing artificiality into software engineering experiments. We find that there are good arguments in favor of deliberately introducing artificial design elements to 1) isolate basic mechanisms, 2) establish the existence of phenomena, 3) enable generalization from particularly unfavorable to more favorable conditions (persistence of phenomena), and 4) relate experiments to theory. In the second part of this paper, we summarize a content analysis of articles that report software engineering experiments published over a 10-year period from 1993 to 2002. The analysis reveals a striving for realism and external validity, but little awareness of when and for what purposes various degrees of artificiality and realism are appropriate. Furthermore, much of the focus on realism seems to be based on a narrow understanding of the nature of generalization. We conclude that an increased awareness and deliberation as to when and for what purposes both artificial and realistic design elements are applied is valuable for better knowledge gain and quality in empirical software engineering experiments. We also conclude that time spent on studies that have obvious threats to validity that are due to artificiality might be better spent on studies that investigate research questions for which artificiality is a strength rather than a weakness. 
However, arguments in favor of artificial design elements should not be used to justify studies that are badly designed or that have research questions of low relevance. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
5. Testability Transformation.
- Author
-
Harman, Mark, Hu, Lin, Hierons, Rob, Wegener, Joachim, Sthamer, Harmen, Baresel, André, and Roper, Marc
- Subjects
- *
SOFTWARE engineering , *ENGINEERING , *COMPUTER software - Abstract
A testability transformation is a source-to-source transformation that aims to improve the ability of a given test generation method to generate test data for the original program. This paper introduces testability transformation, demonstrating that it differs from traditional transformation, both theoretically and practically, while still allowing many traditional transformation rules to be applied. The paper illustrates the theory of testability transformation with an example application to evolutionary testing. An algorithm for flag removal is defined and results are presented from an empirical study which show how the algorithm improves both the performance of evolutionary test data generation and the adequacy level of the test data so-generated. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
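The flag problem the abstract alludes to can be made concrete in a few lines. This Python sketch is invented for illustration (it is not the paper's algorithm or subject program): a Boolean flag collapses all failing inputs to the same fitness value, giving an evolutionary search no gradient, and a testability transformation substitutes a distance function that restores one.

```python
# Original program: the search sees only True/False, so every failing
# input looks equally bad -- a flat fitness landscape.
def original(x: int) -> bool:
    flag = (x == 1000)
    return flag

# Transformed version: a distance that is 0 exactly when the flag would
# have been True, and otherwise says how close the input came.
def transformed(x: int) -> int:
    return abs(x - 1000)

# A trivial hill climber can now descend the distance to generate test data.
def search(start: int, budget: int = 10_000) -> int:
    x = start
    for _ in range(budget):
        if transformed(x) == 0:
            break
        # move to whichever neighbour reduces the distance
        x = min((x - 1, x + 1), key=transformed)
    return x
```

Because the transformation only needs to preserve test adequacy for the original program, not full functional equivalence, it has more freedom than a traditional semantics-preserving transformation; that is the theoretical distinction the paper develops.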
6. Knowledge-Based Automation of a Design Method for Concurrent Systems.
- Author
-
Mills, Kevin L. and Gomaa, Hassan
- Subjects
- *
SOFTWARE engineering , *EXPERT systems , *SYSTEMS design , *ELECTRONIC data processing , *SYSTEM analysis , *ENGINEERING - Abstract
This paper describes a knowledge-based approach to automate a software design method for concurrent systems. The approach uses multiple paradigms to represent knowledge embedded in the design method. Semantic data modeling provides the means to represent concepts from a behavioral modeling technique, called Concurrent Object-Based Real-time Analysis (COBRA), which defines system behavior using data/control flow diagrams. Entity-Relationship modeling is used to represent a design metamodel based on a design method, called Concurrent Design Approach for Real-Time Systems (CODARTS), which represents concurrent designs as software architecture diagrams, task behavior specifications, and module specifications. Production rules provide the mechanism for codifying a set of CODARTS heuristics that can generate concurrent designs based on semantic concepts included in COBRA behavioral models and on entities and relationships included in CODARTS design metamodels. Together, the semantic data model, the entity-relationship model, and the production rules, when encoded using an expert-system shell, compose CODA, an automated designer's assistant. Other forms of automated reasoning, such as knowledge-based queries, can be used to check the correctness and completeness of generated designs with respect to properties defined in the CODARTS design metamodel. CODA is applied to generate 10 concurrent designs for four real-time problems. The paper reports the degree of automation achieved by CODA. The paper also evaluates the quality of generated designs by comparing the similarity between designs produced by CODA and human designs reported in the literature for the same problems. In addition, the paper compares CODA with four other approaches used to automate software design methods. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
7. Handling Obstacles in Goal-Oriented Requirements Engineering.
- Author
-
van Lamsweerde, Axel and Letier, Emmanuel
- Subjects
- *
ENGINEERING , *ELECTRONIC systems , *MATHEMATICAL programming , *PROGRAM transformation , *TECHNICAL specifications , *COMPUTER software - Abstract
Requirements engineering is concerned with the elicitation of high-level goals to be achieved by the envisioned system, the refinement of such goals and their operationalization into specifications of services and constraints and the assignment of responsibilities for the resulting requirements to agents such as humans, devices, and software. Requirements engineering processes often result in goals, requirements, and assumptions about agent behavior that are too ideal; some of them are likely not to be satisfied from time to time in the running system due to unexpected agent behavior. The lack of anticipation of exceptional behaviors results in unrealistic, unachievable, and/or incomplete requirements. As a consequence, the software developed from those requirements will not be robust enough and will inevitably result in poor performance or failures, sometimes with critical consequences on the environment. This paper presents formal techniques for reasoning about obstacles to the satisfaction of goals, requirements, and assumptions elaborated in the requirements engineering process. A first set of techniques allows obstacles to be generated systematically from goal formulations and domain properties. A second set of techniques allows resolutions to be generated once the obstacles have been identified thereby. Our techniques are based on a temporal logic formalization of goals and domain properties; they are integrated into an existing method for goal-oriented requirements elaboration with the aim of deriving more realistic, complete, and robust requirements specifications. A key principle in this paper is to handle exceptions at requirements engineering time and at the goal level, so that more freedom is left for resolving them in a satisfactory way. The various techniques proposed are illustrated and assessed in the context of a real safety-critical system. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
8. Managing Conflicts in Goal-Driven Requirements Engineering.
- Author
-
van Lamsweerde, Axel, Darimont, Robert, and Letier, Emmanuel
- Subjects
- *
ENGINEERING , *CONFLICT management , *GOAL (Psychology) , *INCONSISTENCY (Logic) , *STAKEHOLDERS , *HEURISTIC - Abstract
A wide range of inconsistencies can arise during requirements engineering as goals and requirements are elicited from multiple stakeholders. Resolving such inconsistencies sooner or later in the process is a necessary condition for successful development of the software implementing those requirements. The paper first reviews the main types of inconsistency that can arise during requirements elaboration, defining them in an integrated framework and exploring their interrelationships. It then concentrates on the specific case of conflicting formulations of goals and requirements among different stakeholder viewpoints or within a single viewpoint. A frequent, weaker form of conflict called divergence is introduced and studied in depth. Formal techniques and heuristics are proposed for detecting conflicts and divergences from specifications of goals/requirements and of domain properties. Various techniques are then discussed for resolving conflicts and divergences systematically by introduction of new goals or by transformation of specifications of goals/objects toward conflict-free versions. Numerous examples are given throughout the paper to illustrate the practical relevance of the concepts and techniques presented. The latter are discussed in the framework of the KAOS methodology for goal-driven requirements engineering. [ABSTRACT FROM AUTHOR]
- Published
- 1998
- Full Text
- View/download PDF
9. Evolution and Reuse of Orthogonal Architecture.
- Author
-
Rajlich, Václav and Silva, João H.
- Subjects
- *
ARCHITECTURE , *TECHNOLOGY , *SCHOOLS of architecture , *ENGINEERING , *SCHOOLS , *INDUSTRIAL arts - Abstract
In this paper, we present a case study of evolution (or vertical reuse) in the domain of visual interactive software tools. We introduce an architecture suitable for this purpose, called orthogonal architecture. The paper describes the architecture itself, the reverse engineering process by which it was obtained, and the forward engineering process by which it was evolved. [ABSTRACT FROM AUTHOR]
- Published
- 1996
- Full Text
- View/download PDF
10. Reusing Software: Issues and Research Directions.
- Author
-
Mili, Hafedh, Mili, Fatma, and Mili, Ali
- Subjects
- *
COMPUTER software development , *ARTIFICIAL intelligence , *SOFTWARE engineering , *ENGINEERING , *COMPUTER software , *COMPUTER systems , *ELECTRONIC systems - Abstract
Software productivity has been steadily increasing over the past 30 years, but not enough to close the gap between the demands placed on the software industry and what the state of the practice can deliver [22], [39]; nothing short of an order of magnitude increase in productivity will extricate the software industry from its perennial crisis [39], [67]. Several decades of intensive research in software engineering and artificial intelligence left few alternatives but software reuse as the (only) realistic approach to bring about the gains of productivity and quality that the software industry needs. In this paper, we discuss the implications of reuse on software production, with an emphasis on the technical challenges. Software reuse involves building software that is reusable by design and building with reusable software. Software reuse includes reusing both the products of previous software projects and the processes deployed to produce them, leading to a wide spectrum of reuse approaches, from the building blocks (reusing products) approach, on one hand, to the generative or reusable processor (reusing processes) approach, on the other [68]. We discuss the implications of such approaches on the organization, control, and method of software development and discuss proposed models for their economic analysis. Software reuse benefits from methodologies and tools to: 1) build more readily reusable software and 2) locate, evaluate, and tailor reusable software, the last being critical for the building blocks approach. Both sets of issues are discussed in this paper, with a focus on application generators and OO development for the first, and a thorough discussion of retrieval techniques for software components, component composition (or bottom-up design), and transformational systems for the second. We conclude by highlighting areas that, in our opinion, are worthy of further investigation. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
11. Predicate Logic for Software Engineering.
- Author
-
Parnas, David Lorge
- Subjects
- *
SOFTWARE engineering , *ENGINEERING , *SYSTEMS design , *ELECTRONIC data processing documentation , *COMPUTER systems , *ELECTRONIC systems , *COMPUTERS - Abstract
The interpretations of logical expressions found in most introductory textbooks are not suitable for use in software engineering applications because they do not deal with partial functions. More advanced papers and texts deal with partial functions in a variety of complex ways. This paper proposes a very simple change to the classic interpretation of predicate expressions, one that defines their value for all values of all variables, yet is almost identical to the standard definitions. It then illustrates the application of this interpretation in software documentation. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
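One way to read the proposed interpretation (a sketch of the idea, not the paper's formal definition or notation): a primitive predicate evaluates to False whenever any of its terms is a partial function applied outside its domain, so every predicate expression has a defined truth value without introducing a third "undefined" value.

```python
# Marker for "partial function applied outside its domain".
UNDEF = object()

def div(a, b):
    # Division totalized: outside its domain it yields UNDEF, not an error.
    return a / b if b != 0 else UNDEF

def prim(rel, *args):
    # Primitive predicate: False whenever any argument is undefined;
    # otherwise the ordinary relation.  This is the simple change: only
    # primitive predicates need to know about undefinedness.
    if any(a is UNDEF for a in args):
        return False
    return rel(*args)

# "y != 0 and x/y > 1" now has a truth value for ALL x and y:
def pred(x, y):
    return prim(lambda a, b: a != b, y, 0) and prim(lambda a, b: a > b, div(x, y), 1)
```

The payoff for documentation is that a tabular specification containing terms like `x/y` never needs side conditions ruling out `y == 0`; the predicate is simply false there.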
12. LISPACK—A Methodology and Tool for the Performance Analysis of Parallel Systems and Algorithms.
- Author
-
Iazeolla, Giuseppe and Marinuzzi, Francesco
- Subjects
- *
PARALLEL computers , *SOFTWARE engineering , *ENGINEERING , *COMPUTER software , *COMPUTER systems - Abstract
The paper deals with the performance analysis of parallel algorithms and systems. For these, numerical solution methods quickly show their limits because of the enormous state-space growth. The proposed methodology and software tool (LISPACK, an acronym for List-manipulation Parallel-modeling Package) uses string manipulation, lumping, and recursive elimination as a means for the definition of the large Markovian process, its restructuring, and its efficient solution. Initially, the enormous state space is conveniently collapsed and the large transition matrix is reduced. Subsequently, the reduced matrix is recursively block banded, and an efficient recursive, symbolic Gauss elimination is applied. No relevant costs are incurred for the state-space collapsing and restructuring, nor for the matrix block banding. The analysis of a typical parallel system and algorithm model is developed as a case study, to discuss the features of the method. The paper makes two contributions. First, a symbolic-approach methodology is proposed for the performance analysis of parallel algorithms and systems. Second, a tool is introduced that exploits the capabilities of the symbolic approach in the solution of parallel models, where the numerical techniques reveal their limits. [ABSTRACT FROM AUTHOR]
- Published
- 1993
13. Simulation and Comparison of Albrecht's Function Point and DeMarco's Function Bang Metrics in a CASE Environment.
- Author
-
Rask, Raimo, Laamanen, Petteri, and Lyytinen, Kalle
- Subjects
- *
COMPUTER software development , *COMPUTER programming management , *COMPUTER software , *ENGINEERING , *SOFTWARE engineering - Abstract
Software size estimates provide a basis for software cost estimation during software development. Hence, it is important to measure system size reliably as early as possible, i.e., during the requirements specification. The two best-known specification-level metrics are Albrecht's Function Points and DeMarco's Function Bang. One problem in using these metrics has been that only a few tools can calculate them during the specification phase. We have built one such tool. Another problem has been that no research data is available on how these metrics correlate with one another. The paper compares these two metrics by a simulation study in which automatically generated randomized dataflow diagrams (DFD's) were used as a statistical sample to automatically count function points and function bang in a built CASE environment. These value counts were correlated statistically using correlation coefficients and regression analysis. The simulation study permits sufficient variation in the base material to cover most types of system specifications. Moreover, it allows sufficient sampling sizes to make statistical analysis of the data. The obtained results show that in certain cases there is a relatively good statistical correlation between these metrics. No overall general correlation exists, however. The paper does not show which of the two metrics fares better as a size metric. Yet, our study suggests using the Function Bang metric in many cases, because its automatic calculation is simpler and depends less on judgement. Moreover, the study demonstrates that correlations depend upon system type. This implies that in software projects one must be careful with size estimates while using these metrics. In order to know when one needs to calibrate the size estimate we need to develop algorithms which help to detect logical system types and make adjustments accordingly. 
The results also point out the need for empirical research in which we can better derive the connection between specification-level metrics and the number of lines of code. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
14. Properties of Control-Flow Complexity Measures.
- Author
-
Lakshmanan, K.B., Jayaprakash, S., and Sinha, P.K.
- Subjects
- *
FLOWGRAPHS , *SOFTWARE measurement , *SOFTWARE engineering , *SYSTEM analysis , *ENGINEERING , *SOFTWARE reengineering - Abstract
In this paper we attempt to formalize some properties which any reasonable control-flow complexity measure must satisfy, and our work is in the same spirit as an earlier paper by Weyuker in these TRANSACTIONS [18]. Since large programs are often built by sequencing and nesting of simpler constructs, we explore how control-flow complexity measures behave under such compositions. Our analysis reveals the strengths and weaknesses of some of the existing control-flow complexity measures. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
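A concrete instance of the kind of compositional property such work examines is how McCabe's cyclomatic complexity behaves under sequencing of single-entry/single-exit flowgraphs. The Python sketch below is illustrative only; the paper's own set of measures and its notation are not reproduced here.

```python
# Cyclomatic complexity of a connected control-flow graph: V(G) = e - n + 2.
def cyclomatic(n_nodes: int, n_edges: int) -> int:
    return n_edges - n_nodes + 2

# Sequencing two single-entry/single-exit graphs merges the exit node of
# the first with the entry node of the second: node counts add minus one,
# edge counts simply add.
def sequence(g1, g2):
    return (g1[0] + g2[0] - 1, g1[1] + g2[1])

# An if-then-else flowgraph has 4 nodes and 4 edges, so V = 2; sequencing
# two of them gives V = 3, exhibiting V(P1; P2) = V(P1) + V(P2) - 1.
ite = (4, 4)
```

Whether a candidate measure satisfies (or should satisfy) such an additivity-minus-one law under sequencing, and an analogous law under nesting, is exactly the style of axiom the paper formalizes.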
15. High Performance Software Testing on SIMD Machines.
- Author
-
Krauser, Edward W., Mathur, Aditya P., and Rego, Vernon J.
- Subjects
- *
COMPUTER software , *SOFTWARE engineering , *ENGINEERING , *COMPUTER systems , *ELECTRONIC systems - Abstract
This paper describes a new method, called mutant unification, for high-performance software testing. The method is aimed at supporting program mutation on parallel machines based on the Single Instruction Multiple Data stream (SIMD) paradigm. Several parameters that affect the performance of unification have been identified and their effect on the time to completion of a mutation test cycle and speedup has been studied. Program mutation analysis provides an effective means for determining the reliability of large software systems. It also provides a systematic method for measuring the adequacy of test data. However, it is likely that testing large software systems using mutation is computation bound and prohibitive on traditional sequential machines. Current implementations of mutation tools are unacceptably slow and are only suitable for testing relatively small programs. The unification method reported in this paper provides a practical alternative to the current approaches. It also opens up a new application domain for SIMD machines. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
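The mutation analysis the abstract builds on can be sketched in miniature. This Python toy (program, mutants, and test data all invented here) runs mutants sequentially; the paper's contribution, not modeled in the sketch, is executing many such mutants in lock-step on SIMD hardware.

```python
import operator

# Program under test.
def program(a, b):
    return a + b

# Hand-made mutants: the classic "replace an arithmetic operator" mutation.
MUTANTS = [operator.sub, operator.mul, lambda a, b: max(a, b)]

def mutation_score(tests):
    # A mutant is "killed" if some test distinguishes it from the original;
    # the score (killed / total) measures the adequacy of the test data.
    killed = sum(
        1 for mutant in MUTANTS
        if any(mutant(a, b) != program(a, b) for a, b in tests)
    )
    return killed / len(MUTANTS)
```

With the single test `(2, 2)` the multiplication mutant survives (2*2 == 2+2), so the score is 2/3; adding `(1, 3)` kills it and raises the score to 1.0. Since every mutant must be executed against the test data, the cost grows with mutants times tests, which is why the abstract calls mutation computation bound on sequential machines.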
16. The Processor Working Set and Its Use in Scheduling Multiprocessor Systems.
- Author
-
Ghosal, Dipak, Serazzi, Giuseppe, and Tripathi, Satish K.
- Subjects
- *
MULTIPROCESSORS , *COMPUTERS , *COMPUTER software , *SOFTWARE engineering , *ENGINEERING , *COMPUTER systems , *ALGORITHMS , *PARALLEL processing - Abstract
There are two main contributions of this paper. First, this paper introduces the concept of a processor working set (pws) as a single value parameter for characterizing the parallel program behavior. Through detailed experimental studies of different algorithms on a transputer-based multiprocessor machine, it is shown that the pws is indeed a robust measure for characterizing the workload of a multiprocessor system. Small deviations in the performance of algorithms arising due to communication overhead are captured in this parameter. The second contribution of this paper relates to the study of static processor allocation strategies. It is shown that processor allocation strategies based on the pws provide significantly better throughput-delay characteristics. The robustness of pws is further demonstrated by showing that allocation policies that allocate processors more than the pws are inferior in performance to those that never allocate more than the pws—even at a moderately low load. Based on the results, a simple static allocation policy that allocates the pws at low load and adaptively fragments at high load to one processor per job is proposed. This allocation strategy is shown to possess the best throughput-delay characteristic over a wide range of loads. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
17. Process Synchronization: Design and Performance Evaluation of Distributed Algorithms.
- Author
-
Bagrodia, Rajive
- Subjects
- *
COMPUTER algorithms , *ALGORITHMS , *SYNCHRONIZATION , *SOFTWARE engineering , *ENGINEERING , *COMPUTER programming - Abstract
The concept of multiway rendezvous has been proposed to implement synchronous communication among an arbitrary number of concurrent, asynchronous processes. The synchronization and exclusion problems associated with implementing multiway rendezvous are expressed succinctly in the context of the committee coordination problem. A variety of techniques to solve the synchronization and exclusion problems may be combined to design algorithms for the committee coordination problem. This paper presents a simple solution for the problem and shows how it can be implemented to develop a family of algorithms. The algorithms use message counts to solve synchronization and solve the exclusion problem by using a circulating token, or by using auxiliary resources as in the solutions for the dining or drinking philosophers problems. The paper also presents the results of a simulation study on the performance of the algorithms. The experiments measured the response time and message complexity of each algorithm as a result of variations in different model parameters including network topology and level of conflict in the system. The results from the study show that the response time for algorithms proposed in this paper is significantly better than an existing algorithm, whereas the message complexity is considerably worse. [ABSTRACT FROM AUTHOR]
- Published
- 1989
- Full Text
- View/download PDF
18. An Acyclic Expansion Algorithm for Fast Protocol Validation.
- Author
-
Kakuda, Yoshiaki, Wakahara, Yasushi, and Norigoe, Masamitsu
- Subjects
- *
COMPUTER algorithms , *ALGORITHMS , *COMPUTER programming , *COMPUTER software , *ELECTRONIC systems , *ENGINEERING , *SOFTWARE engineering - Abstract
For the development of communications software composed of many modules, protocol validation is essential to detect errors in the interactions among the modules. A number of protocol validation techniques were proposed in the past, but the validation time required by these techniques is too long for many actual protocols. This paper proposes a new fast protocol validation technique to overcome this drawback. The proposed technique constructs the minimum acyclic form of state transitions in the individual processes of the protocol, and quickly detects protocol errors such as system deadlocks and channel overflows. This paper also presents a protocol validation system based on the proposed technique to confirm its feasibility, and shows validation results for some actual protocols obtained with this system. As a result, the protocol validation system is expected to contribute greatly to improving productivity in the development and maintenance of communications software. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
19. Generic Lifecycle Support in the ALMA Environment.
- Author
-
Lamsweerde, Axel Van, Delcourt, Bruno, Delor, Emmanuelle, Schayes, Marie-Claire, and Champagne, Robert
- Subjects
- *
ARCHITECTURAL design , *SOFTWARE engineering , *SYNTAX (Grammar) , *ARCHITECTURAL designs , *ENGINEERING , *COMPUTER architecture - Abstract
ALMA is an environment kernel supporting the elaboration, analysis, documentation, and maintenance of the various products developed during an entire software lifecycle. Its central component is an environment database in which properties about software objects and relations are collected. These properties include texts written in various formalisms. Two kinds of tools are provided: 1) high-level tools for updating, querying, reporting, and maintaining multiple versions of software objects and relations consistently in the database, and 2) syntax-directed tools like structural editors for manipulating the formal texts attached to software objects and relations in the database. A basic feature of the ALMA kernel is its genericity. Tools of the first kind are parameterized on software lifecycle models, while tools of the second kind are parameterized on formalisms. Instantiated versions of them for specific models and formalisms are generated by a meta-environment, which also generates the environment database structure tailored to the desired lifecycle model. This paper concentrates on the database support meta-system and the instantiated database support systems it generates. Our main concern is to discuss the architectural design decisions we made and the mechanisms we introduced for achieving parameterization on lifecycle models. In particular, we describe the entity-relationship meta-model we designed for meta-defining a particular lifecycle model as input to the meta-system. This meta-model is an extension of standard entity-relationship models in that n-ary relations can have attributes, they can be defined on unions of entity types, type specialization with multiple inheritance is supported, and a mechanism is provided for defining views yielding different environment subdatabases associated with different classes of users and/or tools. The crucial role played by this meta-model is stressed throughout the paper. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
20. Fragtypes: A Basis for Programming Environments.
- Author
-
Madhavji, Nazim H.
- Subjects
- *
PROGRAMMING languages , *SOFTWARE engineering , *MATHEMATICAL models , *COMPUTER programming , *ENGINEERING - Abstract
It is being recognized that recent programming environments have made significant progress towards improving the programming process. In adhering to this goal of software engineering, this paper introduces a new basis for programming environments. This basis encourages development of software in fragments of various types, called fragtypes. Fragtypes range from a simple expression type to a complete subsystem type. As a result, they are suited to the development of software in an enlarged scope that includes both programming in the small and programming in the large. The paper shows how newly proposed operations on fragtypes can achieve unusual effects on the software development process. Fragtypes and their associated construction rules form the basis of the programming environment MUPE-2, which is currently under development at McGill University. The target and the implementation language of this environment is the programming language Modula-2. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
21. Implementation of an FP-Shell.
- Author
-
Kamath, Yogeesh H. and Matthews, Manton M.
- Subjects
- *
PROGRAMMING languages , *COMPUTER software , *ENGINEERING , *SOFTWARE engineering - Abstract
One of the best features of the UNIX™ Shell is that it provides a framework which can be used to build complex programs by interconnecting existing simple programs. However, it is limited to linear combinations of programs, and the building of more complex programs must be accomplished by executing sequences of commands. This paper introduces Backus' FP (Functional Programming) as an alternative command language for UNIX. In FP, programs are true functions. Another distinctive feature of FP languages is that they contain functional forms, which are constructs for combining programs to build new programs. Also, the functional style of programming provides a natural way of exploiting parallel machine architecture. In this paper it is shown how to enhance the power of the UNIX Shell by the inclusion of the FP functional forms. The FP-Shell is fully implemented in "C" under UNIX. It interprets standard UNIX commands as FP primitive functions and standard UNIX files as FP system objects. It also serves as a framework for the study of functional programming languages. [ABSTRACT FROM AUTHOR]
- Published
- 1987
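As a rough illustration of what FP functional forms do, the following Python sketch shows two classic forms, composition and construction. This is a hedged toy version: the paper's FP-Shell is implemented in C over UNIX commands and files, whereas the names `compose` and `construction` and the use of plain Python functions in place of UNIX programs are illustrative assumptions.

```python
# Toy sketch of two of Backus' FP functional forms, the kind of
# program-combining constructs the FP-Shell adds to UNIX. Python
# functions stand in for UNIX commands here (an illustrative choice).

def compose(*fns):
    """Composition: (f o g)(x) = f(g(x)), applied right to left."""
    def composed(x):
        for f in reversed(fns):
            x = f(x)
        return x
    return composed

def construction(*fns):
    """Construction: [f, g, ...](x) = (f(x), g(x), ...)."""
    return lambda x: tuple(f(x) for f in fns)

double = lambda n: 2 * n
inc = lambda n: n + 1

print(compose(double, inc)(3))       # double(inc(3)) = 8
print(construction(double, inc)(3))  # (6, 4)
```

Both forms build a new program purely from existing programs, with no intermediate command sequencing, which is the limitation of the linear Shell pipeline that the abstract points out.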
22. Toolpack—An Experimental Software Development Environment Research Project.
- Author
-
Osterweil, Leon J.
- Subjects
- *
COMPUTER software development , *ELECTRONIC systems , *SOFTWARE engineering , *ENGINEERING , *COMPUTER systems , *COMPUTER software - Abstract
This paper discusses the goals and methods of the Toolpack project and in this context discusses the architecture and design of the software system being produced as the focus of the project. Toolpack is presented as an experimental activity in which a large software tool environment is being created for the purpose of general distribution and then careful study and analysis. The paper begins by explaining the motivation for building integrated tool sets. It then proceeds to explain the basic requirements that an integrated system of tools must satisfy in order to be successful and to remain useful both in practice and as an experimental object. The paper then summarizes the tool capabilities that will be incorporated into the environment. It then goes on to present a careful description of the actual architecture of the Toolpack integrated tool system. Finally the Toolpack project experimental plan is presented, and future plans and directions are summarized. [ABSTRACT FROM AUTHOR]
- Published
- 1983
23. Software Function, Source Lines of Code, and Development Effort Prediction: A Software Science Validation.
- Author
-
Albrecht, Allan J. and Gaffney Jr., John E.
- Subjects
- *
COMPUTER software development , *COMPUTER software , *COMPUTER systems , *SOFTWARE engineering , *ENGINEERING - Abstract
One of the most important problems faced by software developers and users is the prediction of the size of a programming system and its development effort. As an alternative to "size," one might deal with a measure of the "function" that the software is to perform. Albrecht [1] has developed a methodology to estimate the amount of the "function" the software is to perform, in terms of the data it is to use (absorb) and to generate (produce). The "function" is quantified as "function points," essentially, a weighted sum of the numbers of "inputs," "outputs," "master files," and "inquiries" provided to, or generated by, the software. This paper demonstrates the equivalence between Albrecht's external input/output data flow representative of a program (the "function points" metric) and Halstead's [2] "software science" or "software linguistics" model of a program as well as the "soft content" variation of Halstead's model suggested by Gaffney [7]. Further, the high degree of correlation between "function points" and the eventual "SLOC" (source lines of code) of the program, and between "function points" and the work-effort required to develop the code, is demonstrated. The "function point" measure is thought to be more useful than "SLOC" as a prediction of work effort because "function points" are relatively easily estimated from a statement of basic requirements for a program early in the development cycle. The strong degree of equivalency between "function points" and "SLOC" shown in the paper suggests a two-step work-effort validation procedure, first using "function points" to estimate "SLOC," and then using "SLOC" to estimate the work-effort. This approach would provide validation of application development work plans and work-effort estimates early in the development cycle. The approach would also more effectively use the existing base of knowledge on producing "SLOC" until a similar base is developed for "function points." 
The paper assumes that the reader is familiar with the fundamental theory of "software science" measurements and the practice of validating estimates of work-effort to design and implement software applications (programs). If not, a review of [1] -[3] is suggested. [ABSTRACT FROM AUTHOR]
- Published
- 1983
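The weighted sum the abstract describes can be sketched in a few lines. The weights below are the classic "average complexity" values commonly associated with Albrecht-style counting; they are shown for illustration and are not taken from this paper (a real count also weights each item by its assessed complexity and applies an adjustment factor).

```python
# Illustrative unadjusted function-point count: a weighted sum of the
# numbers of inputs, outputs, master files, and inquiries. The weights
# are the commonly cited "average" values (an assumption, not this
# paper's data).

WEIGHTS = {"inputs": 4, "outputs": 5, "master_files": 10, "inquiries": 4}

def function_points(counts):
    """Return the unadjusted function-point total for a dict of counts."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# A hypothetical program: 8 inputs, 12 outputs, 3 master files, 5 inquiries.
fp = function_points({"inputs": 8, "outputs": 12, "master_files": 3, "inquiries": 5})
print(fp)  # 8*4 + 12*5 + 3*10 + 5*4 = 142
```

Because these counts come from a requirements statement rather than from code, such a total is available early in the development cycle, which is exactly the advantage over SLOC that the abstract argues for.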
24. Requirements Elicitation and Specification Using the Agent Paradigm: The Case Study of an Aircraft Turnaround Simulator.
- Author
-
Miller, Tim, Bin Lu, Sterling, Leon, Beydoun, Ghassan, and Taveter, Kuldar
- Subjects
- *
SOFTWARE engineering , *ENGINEERING , *COMPUTER software , *TECHNOLOGY transfer , *DIFFUSION of innovations - Abstract
In this paper, we describe research results arising from a technology transfer exercise on agent-oriented requirements engineering with an industry partner. We introduce two improvements to the state-of-the-art in agent-oriented requirements engineering, designed to mitigate two problems experienced by ourselves and our industry partner: (1) the lack of systematic methods for agent-oriented requirements elicitation and modelling; and (2) the lack of prescribed deliverables in agent-oriented requirements engineering. We discuss the application of our new approach to an aircraft turnaround simulator built in conjunction with our industry partner, and show how agent-oriented models can be derived and used to construct a complete requirements package. We evaluate this by having three independent people design and implement prototypes of the aircraft turnaround simulator, and comparing the three prototypes. Our evaluation indicates that our approach is effective at delivering correct, complete, and consistent requirements that satisfy the stakeholders, and can be used in a repeatable manner to produce designs and implementations. We discuss lessons learnt from applying this approach. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
25. NLP-KAOS for Systems Goal Elicitation: Smart Metering System Case Study.
- Author
-
Casagrande, Erik, Woldeamlak, Selamawit, Woon, Wei Lee, Zeineldin, H. H., and Svetinovic, Davor
- Subjects
- *
REQUIREMENTS engineering , *ENGINEERING , *TECHNICAL specifications , *NATURAL language processing , *ARTIFICIAL intelligence - Abstract
This paper presents a computational method that employs Natural Language Processing (NLP) and text mining techniques to support requirements engineers in extracting and modeling goals from textual documents. We developed an NLP-based goal elicitation approach within the context of the KAOS goal-oriented requirements engineering method. The hierarchical relationships among goals are inferred by automatically building taxonomies from extracted goals. We use a smart metering system as a case study to investigate the proposed approach. The smart metering system is an important subsystem of the next generation of power systems (smart grids). Goals are extracted by semantically parsing the grammar of goal-related phrases in abstracts of research publications. The results of this case study show that the developed approach is an effective way to model goals for complex systems, and in particular, for research-intensive complex systems. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
26. A General Testability Theory: Classes, Properties, Complexity, and Testing Reductions.
- Author
-
Rodriguez, Ismael, Llana, Luis, and Rabanal, Pablo
- Subjects
- *
SOFTWARE engineering , *ENGINEERING , *COMPUTER software , *ANTIPATTERNS (Software engineering) , *CAPABILITY maturity model - Abstract
In this paper we develop a general framework to reason about testing. The difficulty of testing is assessed in terms of the amount of tests that must be applied to determine whether the system is correct or not. Based on this criterion, five testability classes are presented and related. We also explore conditions that enable and disable finite testability, and their relation to testing hypotheses is studied. We measure how far incomplete test suites are from being complete, which allows us to compare and select better incomplete test suites. The complexity of finding that measure, as well as the complexity of finding minimum complete test suites, is identified. Furthermore, we address the reduction of testing problems to each other, that is, we study how the problem of finding test suites to test systems of some kind can be reduced to the problem of finding test suites for another kind of systems. This enables testing methods to be exported from one kind of system to another. In order to illustrate how general notions are applied to specific cases, many typical examples from the formal testing techniques domain are presented. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
27. A Cooperative Parallel Search-Based Software Engineering Approach for Code-Smells Detection.
- Author
-
Kessentini, Wael, Kessentini, Marouane, Sahraoui, Houari, Bechikh, Slim, and Ouni, Ali
- Subjects
- *
SOFTWARE engineering , *ENGINEERING , *COMPUTER software quality control , *QUALITY control , *EVOLUTIONARY algorithms - Abstract
We propose in this paper to consider code-smells detection as a distributed optimization problem. The idea is that different methods are combined in parallel during the optimization process to find a consensus regarding the detection of code-smells. To this end, we used Parallel Evolutionary Algorithms (P-EA) where many evolutionary algorithms with different adaptations (fitness functions, solution representations, and change operators) are executed, in a parallel cooperative manner, to solve a common goal which is the detection of code-smells. An empirical evaluation compares the implementation of our cooperative P-EA approach with random search, two single-population approaches, and two code-smells detection techniques that are not based on metaheuristic search. The statistical analysis of the obtained results provides evidence to support the claim that cooperative P-EA is more efficient and effective than state-of-the-art detection approaches, based on a benchmark of nine large open source systems where more than 85 percent precision and recall scores are obtained on a variety of eight different types of code-smells. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
28. Problem Oriented Software Engineering: Solving the Package Router Control Problem.
- Author
-
Hall, Jon G., Rapanotti, Lucia, and Jackson, Michael A.
- Subjects
- *
SOFTWARE engineering , *COMPUTER software development , *ROUTING (Computer network management) , *NETWORK routers , *COMPUTER systems , *ENGINEERING - Abstract
Problem orientation is gaining interest as a way of approaching the development of software intensive systems, and yet, a significant example that explores its use is missing from the literature. In this paper, we present the basic elements of Problem Oriented Software Engineering (POSE), which aims at bringing both nonformal and formal aspects of software development together in a single framework. We provide an example of a detailed and systematic POSE development of a software problem: that of designing the controller for a package router. The problem is drawn from the literature, but the analysis presented here is new. The aim of the example is twofold: to illustrate the main aspects of POSE and how it supports software engineering design and to demonstrate how a nontrivial problem can be dealt with by the approach. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
29. Design Pattern Detection Using Similarity Scoring.
- Author
-
Tsantalis, Nikolaos, Chatzigeorgiou, Alexander, Stephanides, George, and Halkidis, Spyros T.
- Subjects
- *
REVERSE engineering , *GRAPH algorithms , *COMPUTER software , *OPERATIONS research , *COMPUTER algorithms , *SOFTWARE patterns , *ENGINEERING , *HEURISTIC , *METHODOLOGY , *OPEN source software - Abstract
The identification of design patterns as part of the reengineering process can convey important information to the designer. However, existing pattern detection methodologies generally have problems in dealing with one or more of the following issues: identification of modified pattern versions, search space explosion for large systems, and extensibility to novel patterns. In this paper, a design pattern detection methodology is proposed that is based on similarity scoring between graph vertices. Due to the nature of the underlying graph algorithm, this approach has the ability to also recognize patterns that are modified from their standard representation. Moreover, the approach exploits the fact that patterns reside in one or more inheritance hierarchies, reducing the size of the graphs to which the algorithm is applied. Finally, the algorithm does not rely on any pattern-specific heuristic, facilitating the extension to novel design structures. Evaluation on three open-source projects demonstrated the accuracy and the efficiency of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
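The iterative similarity scoring the abstract refers to can be sketched as follows. This is a simplified toy version: plain adjacency matrices, a fixed iteration count, and simple max-normalization are assumptions made here for illustration, not the paper's exact algorithm (which operates on richer graphs encoding associations, generalizations, and method attributes).

```python
# Toy similarity scoring between the vertices of a small "pattern"
# graph A and a "system" graph B: each score is repeatedly refined from
# the scores of matching successor and predecessor pairs, so vertices
# playing the same structural role end up with high similarity.

def similarity(a_adj, b_adj, iterations=40):
    """Return s where s[i][j] scores vertex i of B against vertex j of A."""
    na, nb = len(a_adj), len(b_adj)
    s = [[1.0] * na for _ in range(nb)]
    for _ in range(iterations):
        nxt = [[0.0] * na for _ in range(nb)]
        for i in range(nb):
            for j in range(na):
                for ip in range(nb):
                    for jp in range(na):
                        if b_adj[i][ip] and a_adj[j][jp]:   # matching successors
                            nxt[i][j] += s[ip][jp]
                        if b_adj[ip][i] and a_adj[jp][j]:   # matching predecessors
                            nxt[i][j] += s[ip][jp]
        norm = max(max(row) for row in nxt) or 1.0          # max-normalize
        s = [[v / norm for v in row] for row in nxt]
    return s

# Pattern graph A: a single "depends on" edge 0 -> 1.
A = [[0, 1],
     [0, 0]]
# System graph B: a chain 0 -> 1 -> 2.
B = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]

scores = similarity(A, B)
print(scores)  # [[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]
```

The fixed point is readable: B's vertex 0 matches only the "source" role, vertex 2 only the "target" role, and the middle vertex 1 plays both, which is how a scoring scheme of this kind can flag modified or embedded pattern instances rather than only exact matches.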
30. Covering Arrays for Efficient Fault Characterization in Complex Configuration Spaces.
- Author
-
Yilmaz, Cemal, Cohen, Myra B., and Porter, Adam A.
- Subjects
- *
SOFTWARE engineering , *COMPUTER software , *ELECTRONIC systems , *MATHEMATICS , *ENGINEERING , *TECHNOLOGY , *HIGH technology industries , *COMPUTER programming - Abstract
Many modern software systems are designed to be highly configurable so they can run on and be optimized for a wide variety of platforms and usage scenarios. Testing such systems is difficult because, in effect, you are testing a multitude of systems, not just one. Moreover, bugs can and do appear in some configurations, but not in others. Our research focuses on a subset of these bugs that are "option-related"--those that manifest with high probability only when specific configuration options take on specific settings. Our goal is not only to detect these bugs, but also to automatically characterize the configuration subspaces (i.e., the options and their settings) in which they manifest. To improve efficiency, our process tests only a sample of the configuration space, which we obtain from mathematical objects called covering arrays. This paper compares two different kinds of covering arrays for this purpose and assesses the effect of sampling strategy on fault characterization accuracy. Our results strongly suggest that sampling via covering arrays allows us to characterize option-related failures nearly as well as if we had tested exhaustively, but at a much lower cost. We also provide guidelines for using our approach in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
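The cost saving behind covering-array sampling can be shown with a small example. The array and checker below are illustrative assumptions, not the arrays used in the paper: a standard 9-row pairwise (strength-2) design for four three-valued options, versus the 3^4 = 81 runs exhaustive testing would need.

```python
# Sketch of covering-array sampling: a pairwise covering array hits
# every setting pair of every pair of options at least once, with far
# fewer rows than the full configuration space.

from itertools import combinations, product

def covers_all_pairs(rows, num_options, values):
    """True if every value pair of every option pair occurs in some row."""
    for i, j in combinations(range(num_options), 2):
        seen = {(r[i], r[j]) for r in rows}
        if seen != set(product(values, repeat=2)):
            return False
    return True

# A classic 9-row pairwise covering array for 4 ternary options,
# built from the GF(3) columns x, y, x+y, x+2y (mod 3).
ca = [
    (0, 0, 0, 0), (0, 1, 1, 2), (0, 2, 2, 1),
    (1, 0, 1, 1), (1, 1, 2, 0), (1, 2, 0, 2),
    (2, 0, 2, 2), (2, 1, 0, 1), (2, 2, 1, 0),
]
print(covers_all_pairs(ca, 4, (0, 1, 2)))  # True: 9 runs instead of 81
```

Because each pair occurs exactly once in this array, dropping any single row breaks coverage, so 9 runs is also the minimum here; fault characterization then reasons backward from which sampled configurations fail to which option-setting combinations are implicated.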
31. A Survey of Controlled Experiments in Software Engineering.
- Author
-
Sjøberg, Dag I. K., Hannay, Jo E., Hansen, Ove, Kampenes, Vigdis By, Karahasanović, Amela, Liborg, Nils-Kristian, and Rekdal, Anette C.
- Subjects
- *
SOFTWARE engineering , *COMPUTER software , *SURVEYS , *PERIODICALS , *ENGINEERING - Abstract
The classical method for identifying cause-effect relationships is to conduct controlled experiments. This paper reports upon the present state of how controlled experiments in software engineering are conducted and the extent to which relevant information is reported. Among the 5,453 scientific articles published in 12 leading software engineering journals and conferences in the decade from 1993 to 2002, 103 articles (1.9 percent) reported controlled experiments in which individuals or teams performed one or more software engineering tasks. This survey quantitatively characterizes the topics of the experiments and their subjects (number of subjects, students versus professionals, recruitment, and rewards for participation), tasks (type of task, duration, and type and size of application) and environments (location, development tools). Furthermore, the survey reports on how internal and external validity is addressed and the extent to which experiments are replicated. The gathered data reflects the relevance of software engineering experiments to industrial practice and the scientific maturity of software engineering research. [ABSTRACT FROM AUTHOR]
- Published
- 2005
32. An Approach to Developing Domain Requirements as a Core Asset Based on Commonality and Variability Analysis in a Product Line.
- Author
-
Mikyeong Moon, Keunhyuk Yeom, and Heung Seok Chae
- Subjects
- *
PRODUCT lines , *PRODUCT management , *COMPUTER software , *COMMERCIAL products , *METHODOLOGY , *ENGINEERING - Abstract
The methodologies of product line engineering emphasize proactive reuse to construct high-quality products more quickly and at lower cost. Requirements engineering for software product families differs significantly from requirements engineering for single software products. The requirements for a product line are written for the group of systems as a whole, with requirements for individual systems specified by a delta or an increment to the generic set. Therefore, it is necessary to identify and explicitly denote the regions of commonality and points of variation at the requirements level. In this paper, we suggest a method of producing requirements that will be a core asset in the product line. We describe a process for developing domain requirements where commonality and variability in a domain are explicitly considered. A CASE environment, named DREAM, for managing commonality and variability analysis of domain requirements is also described. We also describe a case study for an e-Travel System domain where we found that our approach to developing domain requirements based on commonality and variability analysis helped to produce domain requirements as a core asset for product lines. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
33. Leveraging User-Session Data to Support Web Application Testing.
- Author
-
Elbaum, Sebastian, Rothermel, Gregg, Karre, Srikanth, and Fisher II, Marc
- Subjects
- *
APPLICATION software , *COMPUTER software testing , *RELIABILITY in engineering , *SOFTWARE engineering , *ENGINEERING - Abstract
Web applications are vital components of the global information infrastructure, and it is important to ensure their dependability. Many techniques and tools for validating Web applications have been created, but few of these have addressed the need to test Web application functionality and none have attempted to leverage data gathered in the operation of Web applications to assist with testing. In this paper, we present several techniques for using user session data gathered as users operate Web applications to help test those applications from a functional standpoint. We report results of an experiment comparing these new techniques to existing white-box techniques for creating test cases for Web applications, assessing both the adequacy of the generated test cases and their ability to detect faults on a point-of-sale Web application. Our results show that user session data can be used to produce test suites more effective overall than those produced by the white-box techniques considered; however, the faults detected by the two classes of techniques differ, suggesting that the techniques are complementary. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
34. Spatial Complexity Metrics: An Investigation of Utility.
- Author
-
Gold, Nicolas E., Mohan, Andrew M., and Layzell, Paul J.
- Subjects
- *
COMPUTER software , *COMPUTATIONAL complexity , *SOFTWARE measurement , *SOFTWARE engineering , *ENGINEERING - Abstract
Software comprehension is one of the largest costs in the software lifecycle. In an attempt to control the cost of comprehension, various complexity metrics have been proposed to characterize the difficulty of understanding a program and, thus, allow accurate estimation of the cost of a change. Such metrics are not always evaluated. This paper evaluates a group of metrics recently proposed to assess the "spatial complexity" of a program (spatial complexity is informally defined as the distance a maintainer must move within source code to build a mental model of that code). The evaluation takes the form of a large-scale empirical study of evolving source code drawn from a commercial organization. The results of this investigation show that most of the spatial complexity metrics evaluated offer no substantially better information about program complexity than the number of lines of code. However, one metric shows more promise and is thus deemed to be a candidate for further use and investigation. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
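The informal definition above ("the distance a maintainer must move within source code to build a mental model") can be made concrete with a deliberately simplified metric. The mean line distance between a function's definition and its call sites below is a hypothetical example of a spatial measure, not one of the metrics the paper actually evaluates.

```python
# Hypothetical spatial-style metric: how far, on average, a maintainer
# must jump through the file to get from a call of a function to its
# definition. A purely illustrative simplification of the "spatial
# complexity" idea, not a published metric from this paper.

def mean_call_distance(def_line, call_lines):
    """Mean absolute line distance from each call site to the definition."""
    return sum(abs(c - def_line) for c in call_lines) / len(call_lines)

# A function defined on line 10 and called on lines 12, 250, and 400:
print(mean_call_distance(10, [12, 250, 400]))  # (2 + 240 + 390) / 3 ~ 210.7
```

A metric of this shape rewards keeping definitions near their uses; the paper's finding is the cautionary counterpart, namely that most such spatial measures told the authors little more than plain lines of code did.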
35. FSM-Based Incremental Conformance Testing Methods.
- Author
-
El-Fakih, Khaled, Yevtushenko, Nina, and Bochmann, Gregor V.
- Subjects
- *
SOFTWARE engineering , *COMPUTER software , *COMPUTER systems , *TECHNICAL specifications , *INDUSTRIAL design , *ENGINEERING - Abstract
The development of appropriate test suites is an important issue for conformance testing of protocol implementations and other reactive software systems. A number of methods are known for the development of a test suite based on a specification given in the form of a finite state machine. In practice, the system requirements evolve throughout the lifetime of the system and the specifications are modified incrementally. In this paper, we adapt four well-known test derivation methods, namely, the HIS, W, Wp, and UIOv methods, for generating tests that would test only the modified parts of an evolving specification. Some application examples and experimental results are provided. These results show significant gains when using incremental testing in comparison with complete testing, especially when the modified part represents less than 20 percent of the whole specification. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
36. Confirming Configurations in EFSM Testing.
- Author
-
Petrenko, Alexandre, Boroday, Sergiy, and Groz, Roland
- Subjects
- *
SOFTWARE engineering , *ENGINEERING , *CONFIGURATION management - Abstract
In this paper, we investigate the problem of configuration verification for the extended FSM (EFSM) model. This is an extension of the FSM state identification problem. Specifically, given a configuration ("state vector") and an arbitrary set of configurations, determine an input sequence such that the EFSM in the given configuration produces an output sequence different from that of the configurations in the given set or at least in a maximal proper subset. Such a sequence can be used in a test case to confirm the destination configuration of a particular EFSM transition. We demonstrate that this problem could be reduced to the EFSM traversal problem, so that the existing methods and tools developed in the context of model checking become applicable. We introduce notions of EFSM projections and products and, based on these notions, we develop a theoretical framework for determining configuration-confirming sequences. The proposed approach is illustrated on a realistic example. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
37. Event-Based Traceability for Managing Evolutionary Change.
- Author
-
Cleland-Huang, Jane, Chang, Carl K., and Christensen, Mark
- Subjects
- *
COMPUTER software , *ENGINEERING , *PROJECT management , *COMPUTER systems , *AUTOMATION , *ELECTRONIC systems - Abstract
Although the benefits of requirements traceability are widely recognized, the actual practice of maintaining a traceability scheme is not always entirely successful. The traceability infrastructure underlying a software system tends to erode over its lifetime, as time-pressured practitioners fail to consistently maintain links and update impacted artifacts each time a change occurs, even with the support of automated systems. This paper proposes a new method of traceability based upon event-notification and is applicable even in a heterogeneous and globally distributed development environment. Traceable artifacts are no longer tightly coupled but are linked through an event service, which creates an environment in which change is handled more efficiently, and artifacts and their related links are maintained in a restorable state. The method also supports enhanced project management for the process of updating and maintaining the system artifacts. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
38. Defining and Applying Measures of Distance Between Specifications.
- Author
-
Labed Jilani, Lamia, Desharnais, Jules, and Mili, Ali
- Subjects
- *
TECHNICAL specifications , *SOFTWARE measurement , *SOFTWARE engineering , *ENGINEERING - Abstract
Echoing Louis Pasteur's quote, we submit the premise that it is advantageous to define measures of distance between requirements specifications because such measures open up a wide range of possibilities both in theory and in practice. In this paper, we present a mathematical basis for measuring distances between specifications and show how our measures of distance can be used to address concrete problems that arise in the practice of software engineering. [ABSTRACT FROM AUTHOR]
- Published
- 2001
- Full Text
- View/download PDF
39. Validating the ISO/IEC 15504 Measure of Software Requirements Analysis Process Capability.
- Author
-
El Emam, Khaled and Birk, Andreas
- Subjects
- *
SOFTWARE engineering , *STANDARDS , *COMPUTER software , *TRUTHFULNESS & falsehood , *PERFORMANCE , *ENGINEERING - Abstract
ISO/IEC 15504 is an emerging international standard on software process assessment. It defines a number of software engineering processes and a scale for measuring their capability. One of the defined processes is software requirements analysis (SRA). A basic premise of the measurement scale is that higher process capability is associated with better project performance (i.e., predictive validity). This paper describes an empirical study that evaluates the predictive validity of SRA process capability. Assessments using ISO/IEC 15504 were conducted on 56 projects worldwide over a period of two years. Performance measures on each project were also collected using questionnaires, such as the ability to meet budget commitments and staff productivity. The results provide strong evidence of predictive validity for the SRA process capability measure used in ISO/IEC 15504, but only for organizations with more than 50 IT staff. Specifically, a strong relationship was found between the implementation of requirements analysis practices as defined in ISO/IEC 15504 and the productivity of software projects. For smaller organizations, evidence of predictive validity was rather weak. This can be interpreted in a number of different ways: that the measure of capability is not suitable for small organizations or that the SRA process capability has less effect on project performance for small organizations. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
40. A Theory-Based Representation for Object-Oriented Domain Models.
- Author
-
DeLoach, Scott A. and Hartrum, Thomas C.
- Subjects
- *
SOFTWARE engineering , *FORMAL methods (Computer science) , *TECHNICAL specifications , *SYSTEMS design , *ENGINEERING , *MODEL categories (Mathematics) - Abstract
Formal software specification has long been touted as a way to increase the quality and reliability of software; however, it remains an intricate, manually intensive activity. An alternative to using formal specifications directly is to translate graphically based, semiformal specifications into formal specifications. However, before this translation can take place, a formal definition of basic object-oriented concepts must be found. This paper presents an algebraic model of object-orientation that defines how object-oriented concepts can be represented algebraically using the object-oriented algebraic specification language O-SLANG. O-SLANG combines basic algebraic specification constructs with category theory operations to capture internal object class structure, as well as relationships between classes. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
41. From Safety Analysis to Software Requirements.
- Author
-
Hansen, Kirsten M., Ravn, Anders P., and Stavridou, Victoria
- Subjects
- *
COMPUTER software , *INDUSTRIAL safety , *TREE graphs , *FORMAL methods (Computer science) , *COMPONENT software , *SYSTEMS design , *ENGINEERING - Abstract
Software for safety critical systems must deal with the hazards identified by safety analysis. This paper investigates how the results of one safety analysis technique, fault trees, are interpreted as software safety requirements to be used in the program design process. We propose that fault tree analysis and program development use the same system model. This model is formalized in a real-time, interval logic, based on a conventional dynamic systems model with state evolving over time. Fault trees are interpreted as temporal formulas, and it is shown how such formulas can be used for deriving safety requirements for software components. [ABSTRACT FROM AUTHOR]
- Published
- 1998
- Full Text
- View/download PDF
42. A Framework for Specification-Based Testing.
- Author
-
Stocks, Phil and Carrington, David
- Subjects
- *
SOFTWARE engineering , *TECHNICAL specifications , *TESTING , *RELIABILITY in engineering , *QUALITY control , *ENGINEERING - Abstract
Test templates and a test template framework are introduced as useful concepts in specification-based testing. The framework can be defined using any model-based specification notation and used to derive tests from model-based specifications--in this paper, it is demonstrated using the Z notation. The framework formally defines test data sets and their relation to the operations in a specification and to other test data sets, providing structure to the testing process. Flexibility is preserved, so that many testing strategies can be used. Important application areas of the framework are discussed, including refinement of test data, regression testing, and test oracles. [ABSTRACT FROM AUTHOR]
- Published
- 1996
- Full Text
- View/download PDF
43. Towards a Framework for Software Measurement Validation.
- Author
-
Kitchenham, Barbara, Pfleeger, Shari Lawrence, and Fenton, Norman
- Subjects
- *
SOFTWARE measurement , *SOFTWARE engineering , *SOFTWARE validation , *ENGINEERING , *MEASUREMENT - Abstract
In this paper we propose a framework for validating software measurement. We start by defining a measurement structure model that identifies the elementary component of measures and the measurement process, and then consider five other models involved in measurement: unit definition models, instrumentation models, attribute relationship models, measurement protocols and entity population models. We consider a number of measures from the viewpoint of our measurement validation framework and identify a number of shortcomings; in particular we identify a number of problems with the construction of function points. We also compare our view of measurement validation with ideas presented by other researchers and identify a number of areas of disagreement. Finally, we suggest several rules that practitioners and researchers can use to avoid measurement problems, including the use of measurement vectors rather than artificially contrived scalars. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
44. A Three-View Model for Performance Engineering of Concurrent Software.
- Author
-
Woodside, C.M.
- Subjects
- *
SOFTWARE engineering , *ENGINEERING , *COMPUTER software , *COMPUTER systems , *ELECTRONIC systems , *PETRI nets - Abstract
This paper describes a multiview characterization of concurrent software and systems suitable for displaying and analyzing performance information. The views draw from well-known descriptions, and are compatible with established techniques and tools such as execution graphs, Petri Nets, State-Charts, structured design or object-oriented design, and various models for performance. The views are connected by means of a "Core model" and are used together to extract information relating to system integration, such as interprocess overheads, and the delay behavior of separate software components in complex systems. The integration of the views in the Core assists by converting results in one view (such as scheduling delay for resources) to parameters in another (such as delays along a path). The ultimate goal of the views is to support designers in making tradeoffs which involve performance, and to provide early assessment of the performance potential of software designs. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
45. Software Bottlenecking in Client-Server Systems and Rendezvous Networks.
- Author
-
Neilson, I.E., Woodside, C.M., Petriu, D.C., and Majumdar, S.
- Subjects
- *
COMPUTER networks , *MULTIMEDIA systems , *SOFTWARE engineering , *ENGINEERING , *COMPUTER software , *COMPUTER systems , *ELECTRONIC systems - Abstract
Software bottlenecks are performance constraints caused by slow execution of a software task. In typical client-server systems a client task must wait in a blocked state for the server task to respond to its requests, so a saturated server will slow down all its clients. A Rendezvous Network generalizes this relationship to multiple layers of servers with send-and-wait interactions (rendezvous), a two-phase model of task behavior, and to a unified model for hardware and software contention. Software bottlenecks have different symptoms, different behavior when the system is altered, and a different cure from the conventional bottlenecks seen in queueing network models of computer systems, caused by hardware limits. The differences are due to the "push-back" effect of the rendezvous, which spreads the saturation of a server to its clients. The paper describes software bottlenecks by examples, gives a definition, shows how they can be located and alleviated, and gives a method for estimating the performance benefit to be obtained. Ultimately, if all the software bottlenecks can be removed, the performance limit will be due to a conventional hardware bottleneck. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
46. An Event-Based Architecture Definition Language.
- Author
-
Luckham, David C. and Vera, James
- Subjects
- *
PROGRAMMING languages , *COMPUTER architecture , *SOFTWARE engineering , *ENGINEERING , *COMPUTER software , *COMPUTER systems - Abstract
This paper discusses general requirements for architecture definition languages, and describes the syntax and semantics of the subset of the Rapide language that is designed to satisfy these requirements. Rapide is a concurrent event-based simulation language for defining and simulating the behavior of system architectures. Rapide is intended for modelling the architectures of concurrent and distributed systems, both hardware and software. In order to represent the behavior of distributed systems in as much detail as possible, Rapide is designed to make the greatest possible use of event-based modelling by producing causal event simulations. When a Rapide model is executed it produces a simulation that shows not only the events that make up the model's behavior, and their timestamps, but also which events caused other events, and which events happened independently. The architecture definition features of Rapide are described here: event patterns, interfaces, architectures and event pattern mappings. The use of these features to build causal event models of both static and dynamic architectures is illustrated by a series of simple examples from both software and hardware. Also we give a detailed example of the use of event pattern mappings to define the relationship between two architectures at different levels of abstraction. Finally, we discuss briefly how Rapide is related to other event-based languages. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
47. Using Automatic Process Clustering for Design Recovery and Distributed Debugging.
- Author
-
Kunz, Thomas and Black, James P.
- Subjects
- *
DISTRIBUTED computing , *DEBUGGING , *PROGRAMMING languages , *SOFTWARE engineering , *ENGINEERING , *COMPUTER software , *COMPUTER systems , *ELECTRONIC systems - Abstract
Distributed applications written in Hermes typically consist of a large number of sequential processes. The use of a hierarchy of process clusters can facilitate the debugging of such applications. Ideally, such a hierarchy should be derived automatically. This paper discusses two approaches to automatic process clustering, one analyzing runtime information with a statistical approach and one utilizing additional semantic information. Tools realizing these approaches were developed and a quantitative measure to evaluate process clusters is proposed. The results obtained under both approaches are compared, and indicate that the additional semantic information improves the cluster hierarchies derived. We demonstrate the value of automatic process clustering with an example. It is shown how appropriate process clusters reduce the complexity of the understanding process, facilitating program maintenance activities such as debugging. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
48. Call Path Refinement Profiles.
- Author
-
Hall, Robert J.
- Subjects
- *
COMPUTER programming , *SOFTWARE engineering , *ENGINEERING , *COMPUTER software , *COMPUTER systems , *ELECTRONIC systems - Abstract
In order to effectively optimize complex programs built in a layered or recursive fashion (possibly from reused general components), the programmer has a critical need for performance information connected directly to the design decisions and other optimization opportunities present in the code. Call path refinement profiles are novel tools for guiding the optimization of such programs, that: 1) provide detailed performance information about arbitrarily nested (direct or indirect) function call sequences, and 2) focus the user's attention on performance bottlenecks by limiting and aggregating the information presented. This paper discusses the motivation for such profiles, describes in detail their implementation in the CPPROF profiler, and relates them to previous profilers, showing how most widely available profilers can be expressed simply and efficiently in terms of call path refinements. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
49. Test-Execution-Based Reliability Measurement and Modeling for Large Commercial Software.
- Author
-
Tian, Jeff, Lu, Peng, and Palma, Joe
- Subjects
- *
COMPUTER software , *COMPUTER programming , *REAL-time computing , *SOFTWARE engineering , *ENGINEERING - Abstract
This paper studies practical reliability measurement and modeling for large commercial software systems based on test execution data collected during system testing. The application environment and the goals of reliability assessment were analyzed to identify appropriate measurement data. Various reliability growth models were used on failure data normalized by test case executions to track testing progress and provide reliability assessment. Practical problems in data collection, reliability measurement and modeling, and modeling result analysis were also examined. The results demonstrated the feasibility of reliability measurement in a large commercial software development environment and provided a practical comparison of various reliability measurements and models under such an environment. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
50. Compiling Real-Time Programs With Timing Constraint Refinement and Structural Code Motion.
- Author
-
Gerber, Richard and Hong, Seongsoo
- Subjects
- *
COMPUTER software , *COMPUTER programming , *REAL-time computing , *SOFTWARE engineering , *ENGINEERING - Abstract
We present a programming language called TCEL (Time-Constrained Event Language), whose semantics are based on time-constrained relationships between observable events. Such a semantics infers only those timing constraints necessary to achieve real-time correctness, without overconstraining the system. Moreover, an optimizing compiler can exploit this looser semantics to help tune the code, so that its worst-case execution time is consistent with its real-time requirements. In this paper we describe such a transformation system, which works in two phases. First, the TCEL source code is translated into an intermediate representation. Then an instruction-scheduling algorithm rearranges selected unobservable operations and synthesizes tasks guaranteed to respect the original event-based constraints. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF