296 results
Search Results
2. Predicate Logic for Software Engineering.
- Author
-
Parnas, David Lorge
- Subjects
SOFTWARE engineering ,ENGINEERING ,SYSTEMS design ,ELECTRONIC data processing documentation ,COMPUTER systems ,ELECTRONIC systems ,COMPUTERS - Abstract
The interpretations of logical expressions found in most introductory textbooks are not suitable for use in software engineering applications because they do not deal with partial functions. More advanced papers and texts deal with partial functions in a variety of complex ways. This paper proposes a very simple change to the classic interpretation of predicate expressions, one that defines their value for all values of all variables, yet is almost identical to the standard definitions. It then illustrates the application of this interpretation in software documentation. [ABSTRACT FROM AUTHOR] (A small illustrative sketch follows this entry.)
- Published
- 1993
- Full Text
- View/download PDF
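As a small illustration of the problem entry 2 addresses, the sketch below (Python; the reciprocal example and helper names are invented, and the convention that a primitive predicate on an undefined value is false follows the abstract's description rather than the paper's exact definitions) shows a total reading of a predicate over a partial function:

    # A minimal sketch, assuming the convention described in the abstract:
    # a primitive predicate applied to an undefined value is False, never "error".
    def reciprocal(x):
        return 1 / x if x != 0 else None  # partial function: undefined at 0

    def holds(pred, value):
        # Total interpretation: undefined argument => predicate evaluates to False.
        return False if value is None else pred(value)

    for x in (2, 0, -3):
        print(x, holds(lambda v: v > 0, reciprocal(x)))
    # 2 True / 0 False / -3 False; a naive "1/x > 0" would crash at x = 0

Under this convention every predicate expression has a value for all variable assignments, which is exactly the property the abstract claims is useful for software documentation.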
3. Managing Conflicts in Goal-Driven Requirements Engineering.
- Author
-
van Lamsweerde, Axel, Darimont, Robert, and Letier, Emmanuel
- Subjects
ENGINEERING ,CONFLICT management ,GOAL (Psychology) ,INCONSISTENCY (Logic) ,STAKEHOLDERS ,HEURISTIC - Abstract
A wide range of inconsistencies can arise during requirements engineering as goals and requirements are elicited from multiple stakeholders. Resolving such inconsistencies sooner or later in the process is a necessary condition for successful development of the software implementing those requirements. The paper first reviews the main types of inconsistency that can arise during requirements elaboration, defining them in an integrated framework and exploring their interrelationships. It then concentrates on the specific case of conflicting formulations of goals and requirements among different stakeholder viewpoints or within a single viewpoint. A frequent, weaker form of conflict called divergence is introduced and studied in depth. Formal techniques and heuristics are proposed for detecting conflicts and divergences from specifications of goals/ requirements and of domain properties. Various techniques are then discussed for resolving conflicts and divergences systematically by introduction of new goals or by transformation of specifications of goals/objects toward conflict-free versions. Numerous examples are given throughout the paper to illustrate the practical relevance of the concepts and techniques presented. The latter are discussed in the framework of the KAOS methodology for goal-driven requirements engineering. [ABSTRACT FROM AUTHOR]
- Published
- 1998
- Full Text
- View/download PDF
4. An Acyclic Expansion Algorithm for Fast Protocol Validation.
- Author
-
Kakuda, Yoshiaki, Wakahara, Yasushi, and Norigoe, Masamitsu
- Subjects
COMPUTER algorithms ,ALGORITHMS ,COMPUTER programming ,COMPUTER software ,ELECTRONIC systems ,ENGINEERING ,SOFTWARE engineering - Abstract
For the development of communications software composed of many modules, protocol validation is essential to detect errors in the interactions among the modules. A number of protocol validation techniques were proposed in the past, but the validation time required by these techniques is too long for many actual protocols. This paper proposes a new fast protocol validation technique to overcome this drawback. The proposed technique constructs the minimum acyclic form of state transitions in the individual processes of the protocol and quickly detects protocol errors such as system deadlocks and channel overflows. This paper also presents a protocol validation system based on the proposed technique to confirm its feasibility and shows validation results for some actual protocols obtained with this system. As a result, the protocol validation system is expected to contribute to a great extent to the improvement of productivity in the development and maintenance of communications software. [ABSTRACT FROM AUTHOR] (A toy sketch of this kind of reachability check follows this entry.)
- Published
- 1988
- Full Text
- View/download PDF
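For readers unfamiliar with protocol validation, the toy sketch below (Python; the global state graph is hand-invented, and the acyclic-expansion speedup that is the paper's contribution is not reproduced) shows the basic check such validators perform: flag reachable states with no outgoing transitions as deadlocks.

    # Toy reachability analysis over a hand-coded global state graph.
    transitions = {"s0": ["s1"], "s1": ["s2", "s3"], "s2": ["s0"], "s3": []}
    seen, stack = set(), ["s0"]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        if not transitions[s]:
            print("deadlock at", s)  # s3 is reachable and has no successors
        stack.extend(transitions[s])

Real protocols make this state graph explode combinatorially, which is why the entry's acyclic-expansion technique matters.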
5. NLP-KAOS for Systems Goal Elicitation: Smart Metering System Case Study.
- Author
-
Casagrande, Erik, Woldeamlak, Selamawit, Woon, Wei Lee, Zeineldin, H. H., and Svetinovic, Davor
- Subjects
REQUIREMENTS engineering ,ENGINEERING ,TECHNICAL specifications ,NATURAL language processing ,ARTIFICIAL intelligence - Abstract
This paper presents a computational method that employs Natural Language Processing (NLP) and text mining techniques to support requirements engineers in extracting and modeling goals from textual documents. We developed an NLP-based goal elicitation approach within the context of the KAOS goal-oriented requirements engineering method. The hierarchical relationships among goals are inferred by automatically building taxonomies from extracted goals. We use a smart metering system as a case study to investigate the proposed approach. The smart metering system is an important subsystem of the next generation of power systems (smart grids). Goals are extracted by semantically parsing the grammar of goal-related phrases in abstracts of research publications. The results of this case study show that the developed approach is an effective way to model goals for complex systems, in particular for research-intensive complex systems. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
6. Variability in Software Systems—A Systematic Literature Review.
- Author
-
Galster, Matthias, Weyns, Danny, Tofan, Dan, Michalik, Bartosz, and Avgeriou, Paris
- Subjects
COMPUTER software quality control ,SOFTWARE engineering ,LITERATURE reviews ,ENGINEERING ,ELECTRONIC systems - Abstract
Context: Variability (i.e., the ability of software systems or artifacts to be adjusted for different contexts) has become a key property of many systems. Objective: We analyze existing research on variability in software systems. We investigate variability handling in major software engineering phases (e.g., requirements engineering, architecting). Method: We performed a systematic literature review. A manual search covered 13 premium software engineering journals and 18 premium conferences, resulting in 15,430 papers searched and 196 papers considered for analysis. To improve reliability and to increase reproducibility, we complemented the manual search with a targeted automated search. Results: Software quality attributes have not received much attention in the context of variability. Variability is studied in all software engineering phases, but testing is underrepresented. Data to motivate the applicability of current approaches are often insufficient; research designs are vaguely described. Conclusions: Based on our findings, we propose dimensions of variability in software engineering. This empirically grounded classification provides a step towards a unifying, integrated perspective of variability in software systems, spanning across disparate or loosely coupled research themes in the software engineering community. Finally, we provide recommendations to bridge the gap between research and practice and point to opportunities for future research. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
7. Testability Transformation.
- Author
-
Harman, Mark, Hu, Lin, Hierons, Rob, Wegener, Joachim, Sthamer, Harmen, Baresel, André, and Roper, Marc
- Subjects
SOFTWARE engineering ,ENGINEERING ,COMPUTER software - Abstract
A testability transformation is a source-to-source transformation that aims to improve the ability of a given test generation method to generate test data for the original program. This paper introduces testability transformation, demonstrating that it differs from traditional transformation, both theoretically and practically, while still allowing many traditional transformation rules to be applied. The paper illustrates the theory of testability transformation with an example application to evolutionary testing. An algorithm for flag removal is defined, and results are presented from an empirical study which show how the algorithm improves both the performance of evolutionary test data generation and the adequacy level of the test data so generated. [ABSTRACT FROM AUTHOR] (A sketch of the flag problem follows this entry.)
- Published
- 2004
- Full Text
- View/download PDF
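The flag problem that entry 7's algorithm removes can be seen in a few lines. In the hedged sketch below (Python; the example is invented and is not the paper's algorithm), the boolean flag gives a search-based test generator a flat fitness landscape, while the transformed version exposes a graded distance with identical branch behavior:

    def original(xs):
        flag = all(x >= 0 for x in xs)          # flat True/False: no search gradient
        if flag:
            return "target branch"

    def transformed(xs):
        distance = sum(max(0, -x) for x in xs)  # 0 exactly when all x >= 0
        if distance == 0:                       # same branch, but graded feedback
            return "target branch"

An evolutionary test generator can minimize distance step by step, whereas flag only ever tells it "not yet".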
8. Handling Obstacles in Goal-Oriented Requirements Engineering.
- Author
-
van Lamsweerde, Axel and Letier, Emmanuel
- Subjects
ENGINEERING ,ELECTRONIC systems ,MATHEMATICAL programming ,PROGRAM transformation ,TECHNICAL specifications ,COMPUTER software - Abstract
Requirements engineering is concerned with the elicitation of high-level goals to be achieved by the envisioned system, the refinement of such goals and their operationalization into specifications of services and constraints, and the assignment of responsibilities for the resulting requirements to agents such as humans, devices, and software. Requirements engineering processes often result in goals, requirements, and assumptions about agent behavior that are too ideal; some of them are likely not to be satisfied from time to time in the running system due to unexpected agent behavior. The lack of anticipation of exceptional behaviors results in unrealistic, unachievable, and/or incomplete requirements. As a consequence, the software developed from those requirements will not be robust enough and will inevitably result in poor performance or failures, sometimes with critical consequences on the environment. This paper presents formal techniques for reasoning about obstacles to the satisfaction of goals, requirements, and assumptions elaborated in the requirements engineering process. A first set of techniques allows obstacles to be generated systematically from goal formulations and domain properties. A second set of techniques allows resolutions to be generated once the obstacles have been identified thereby. Our techniques are based on a temporal logic formalization of goals and domain properties; they are integrated into an existing method for goal-oriented requirements elaboration with the aim of deriving more realistic, complete, and robust requirements specifications. A key principle in this paper is to handle exceptions at requirements engineering time and at the goal level, so that more freedom is left for resolving them in a satisfactory way. The various techniques proposed are illustrated and assessed in the context of a real safety-critical system. [ABSTRACT FROM AUTHOR] (A schematic obstacle formulation follows this entry.)
- Published
- 2000
- Full Text
- View/download PDF
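In the goal-obstacle setting of entry 8, one standard formulation (written here in generic KAOS-style temporal logic; the notation is assumed, not quoted from the paper) is that an obstacle is a condition, consistent with the domain theory, whose occurrence implies the negation of a goal. For an idealized achieve-goal:

    \[
      G \;=\; \Box\,(\mathit{Request} \rightarrow \Diamond\,\mathit{Satisfied}),
      \qquad
      O \;=\; \Diamond\,(\mathit{Request} \wedge \Box\,\neg\,\mathit{Satisfied}).
    \]

Here O is exactly \neg G; obstacle analysis then refines such a negation into finer, domain-specific obstacles whose resolutions yield more robust requirements.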
9. Requirements Elicitation and Specification Using the Agent Paradigm: The Case Study of an Aircraft Turnaround Simulator.
- Author
-
Miller, Tim, Lu, Bin, Sterling, Leon, Beydoun, Ghassan, and Taveter, Kuldar
- Subjects
SOFTWARE engineering ,ENGINEERING ,COMPUTER software ,TECHNOLOGY transfer ,DIFFUSION of innovations - Abstract
In this paper, we describe research results arising from a technology transfer exercise on agent-oriented requirements engineering with an industry partner. We introduce two improvements to the state of the art in agent-oriented requirements engineering, designed to mitigate two problems experienced by us and our industry partner: (1) the lack of systematic methods for agent-oriented requirements elicitation and modelling; and (2) the lack of prescribed deliverables in agent-oriented requirements engineering. We discuss the application of our new approach to an aircraft turnaround simulator built in conjunction with our industry partner, and show how agent-oriented models can be derived and used to construct a complete requirements package. We evaluate this by having three independent people design and implement prototypes of the aircraft turnaround simulator, and comparing the three prototypes. Our evaluation indicates that our approach is effective at delivering correct, complete, and consistent requirements that satisfy the stakeholders, and can be used in a repeatable manner to produce designs and implementations. We discuss lessons learnt from applying this approach. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
10. A Cooperative Parallel Search-Based Software Engineering Approach for Code-Smells Detection.
- Author
-
Kessentini, Wael, Kessentini, Marouane, Sahraoui, Houari, Bechikh, Slim, and Ouni, Ali
- Subjects
SOFTWARE engineering ,ENGINEERING ,COMPUTER software quality control ,QUALITY control ,EVOLUTIONARY algorithms - Abstract
We propose in this paper to consider code-smells detection as a distributed optimization problem. The idea is that different methods are combined in parallel during the optimization process to find a consensus regarding the detection of code-smells. To this end, we used Parallel Evolutionary Algorithms (P-EA), where many evolutionary algorithms with different adaptations (fitness functions, solution representations, and change operators) are executed in a parallel, cooperative manner to solve a common goal, namely the detection of code-smells. We performed an empirical evaluation comparing our cooperative P-EA approach with random search, two single-population approaches, and two code-smells detection techniques that are not based on metaheuristic search. The statistical analysis of the obtained results provides evidence that cooperative P-EA is more efficient and effective than state-of-the-art detection approaches on a benchmark of nine large open source systems, with precision and recall scores above 85 percent across eight different types of code-smells. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
11. A General Testability Theory: Classes, Properties, Complexity, and Testing Reductions.
- Author
-
Rodriguez, Ismael, Llana, Luis, and Rabanal, Pablo
- Subjects
SOFTWARE engineering ,ENGINEERING ,COMPUTER software ,ANTIPATTERNS (Software engineering) ,CAPABILITY maturity model - Abstract
In this paper we develop a general framework to reason about testing. The difficulty of testing is assessed in terms of the number of tests that must be applied to determine whether the system is correct or not. Based on this criterion, five testability classes are presented and related. We also explore conditions that enable and disable finite testability, and we study their relation to testing hypotheses. We measure how far incomplete test suites are from being complete, which allows us to compare and select better incomplete test suites. The complexity of finding that measure, as well as the complexity of finding minimum complete test suites, is identified. Furthermore, we address the reduction of testing problems to each other; that is, we study how the problem of finding test suites for systems of one kind can be reduced to the problem of finding test suites for systems of another kind. This enables testing methods to be exported from one kind of system to another. In order to illustrate how the general notions apply to specific cases, many typical examples from the formal testing techniques domain are presented. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
12. A Study of Variability Models and Languages in the Systems Software Domain.
- Author
-
Berger, Thorsten, She, Steven, Lotufo, Rafael, Wasowski, Andrzej, and Czarnecki, Krzysztof
- Subjects
SYSTEMS software ,COMPUTER software research ,COMPILERS (Computer programs) ,SOFTWARE engineering ,ENGINEERING - Abstract
Variability models represent the common and variable features of products in a product line. Since the introduction of FODA in 1990, several variability modeling languages have been proposed in academia and industry, followed by hundreds of research papers on variability models and modeling. However, little is known about the practical use of such languages. We study the constructs, semantics, usage, and associated tools of two variability modeling languages, Kconfig and CDL, which are independently developed outside academia and used in large and significant software projects. We analyze 128 variability models found in 12 open-source projects using these languages. Our study 1) supports variability modeling research with empirical data on the real-world use of its flagship concepts. However, we 2) also provide requirements for concepts and mechanisms that are not commonly considered in academic techniques, and 3) challenge assumptions about size and complexity of variability models made in academic papers. These results are of interest to researchers working on variability modeling and analysis techniques and to designers of tools, such as feature dependency checkers and interactive product configurators. [ABSTRACT FROM PUBLISHER] (A minimal feature-dependency sketch follows this entry.)
- Published
- 2013
- Full Text
- View/download PDF
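As a minimal illustration of what the variability models in entry 12 encode (feature names hypothetical; Kconfig's and CDL's actual constructs are far richer), a configuration is valid only if every selected feature's dependencies are also selected:

    # Tiny feature-dependency check; model maps each feature to its dependencies.
    model = {"CRYPTO": set(), "SWAP": set(), "SWAP_COMPRESSION": {"SWAP", "CRYPTO"}}

    def valid(selection):
        return all(model[f] <= selection for f in selection)

    print(valid({"SWAP", "CRYPTO", "SWAP_COMPRESSION"}))  # True
    print(valid({"SWAP_COMPRESSION"}))                    # False: unmet dependencies

Feature dependency checkers of the kind mentioned at the end of the abstract scale this check to thousands of features and richer constraint forms.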
13. Editorial: The State of TSE.
- Author
-
Knight, John
- Subjects
ENGINEERING ,SOFTWARE engineering ,COMPUTER software - Abstract
This article reports that several changes in the periodical "IEEE Transactions on Software Engineering," published as of February 01, 2004, will make the processing of articles more timely and more effective. First, the length restriction on submitted manuscripts has been removed. This should allow authors to document results in appropriate length papers. Despite the change, authors are encouraged to keep papers as short as possible. Second, Transactions on Software Engineering manuscript management has been moved to the Web-based Manuscript Central system. This system makes all aspects of manuscript processing much more efficient and it allows everybody involved in processing papers, including authors, to obtain details of the state of manuscripts as they are processed. Third, preprints in the future will be available online two months before the issue cover date. Software remains a critical industry to the world. The impact of software is tremendous, both when it works and when it doesn't.
- Published
- 2004
- Full Text
- View/download PDF
14. The Role of Deliberate Artificial Design Elements in Software Engineering Experiments.
- Author
-
Hannay, Jo E. and Jørgensen, Magne
- Subjects
SOFTWARE engineering ,SYSTEMS design ,COMPUTER software ,COMPUTER systems ,ENGINEERING ,DESIGN - Abstract
Increased realism in software engineering experiments is often promoted as an important means of increasing generalizability and industrial relevance. In this context, artificiality, e.g., the use of constructed tasks in place of realistic tasks, is seen as a threat. In this paper, we examine the opposite view that deliberately introduced artificial design elements may increase knowledge gain and enhance both generalizability and relevance. In the first part of this paper, we identify and evaluate arguments and examples in favor of and against deliberately introducing artificiality into software engineering experiments. We find that there are good arguments in favor of deliberately introducing artificial design elements to 1) isolate basic mechanisms, 2) establish the existence of phenomena, 3) enable generalization from particularly unfavorable to more favorable conditions (persistence of phenomena), and 4) relate experiments to theory. In the second part of this paper, we summarize a content analysis of articles that report software engineering experiments published over the 10-year period from 1993 to 2002. The analysis reveals a striving for realism and external validity, but little awareness of when and for what purposes various degrees of artificiality and realism are appropriate. Furthermore, much of the focus on realism seems to be based on a narrow understanding of the nature of generalization. We conclude that increased awareness and deliberation as to when and for what purposes both artificial and realistic design elements are applied is valuable for better knowledge gain and quality in empirical software engineering experiments. We also conclude that time spent on studies that have obvious threats to validity due to artificiality might be better spent on studies that investigate research questions for which artificiality is a strength rather than a weakness. However, arguments in favor of artificial design elements should not be used to justify studies that are badly designed or that have research questions of low relevance. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
15. A Survey of Controlled Experiments in Software Engineering.
- Author
-
Sjøberg, Dag I. K., Hannay, Jo E., Hansen, Ove, Kampenes, Vigdis By, Karahasanović, Amela, Liborg, Nils-Kristian, and Rekdal, Anette C.
- Subjects
SOFTWARE engineering ,COMPUTER software ,SURVEYS ,PERIODICALS ,ENGINEERING - Abstract
The classical method for identifying cause-effect relationships is to conduct controlled experiments. This paper reports upon the present state of how controlled experiments in software engineering are conducted and the extent to which relevant information is reported. Among the 5,453 scientific articles published in 12 leading software engineering journals and conferences in the decade from 1993 to 2002, 103 articles (1.9 percent) reported controlled experiments in which individuals or teams performed one or more software engineering tasks. This survey quantitatively characterizes the topics of the experiments and their subjects (number of subjects, students versus professionals, recruitment, and rewards for participation), tasks (type of task, duration, and type and size of application) and environments (location, development tools). Furthermore, the survey reports on how internal and external validity is addressed and the extent to which experiments are replicated. The gathered data reflects the relevance of software engineering experiments to industrial practice and the scientific maturity of software engineering research. [ABSTRACT FROM AUTHOR]
- Published
- 2005
16. An Approach to Developing Domain Requirements as a Core Asset Based on Commonality and Variability Analysis in a Product Line.
- Author
-
Moon, Mikyeong, Yeom, Keunhyuk, and Chae, Heung Seok
- Subjects
PRODUCT lines ,PRODUCT management ,COMPUTER software ,COMMERCIAL products ,METHODOLOGY ,ENGINEERING - Abstract
The methodologies of product line engineering emphasize proactive reuse to construct high-quality, less costly products more quickly. Requirements engineering for software product families differs significantly from requirements engineering for single software products. The requirements for a product line are written for the group of systems as a whole, with requirements for individual systems specified by a delta or an increment to the generic set. Therefore, it is necessary to identify and explicitly denote the regions of commonality and points of variation at the requirements level. In this paper, we suggest a method of producing requirements that will be a core asset in the product line. We describe a process for developing domain requirements in which commonality and variability in a domain are explicitly considered. A CASE environment, named DREAM, for managing commonality and variability analysis of domain requirements is also described. We also describe a case study of an e-Travel System domain, where we found that our approach to developing domain requirements based on commonality and variability analysis helped to produce domain requirements as a core asset for product lines. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
17. Spatial Complexity Metrics: An Investigation of Utility.
- Author
-
Gold, Nicolas E., Mohan, Andrew M., and Layzell, Paul J.
- Subjects
COMPUTER software ,COMPUTATIONAL complexity ,SOFTWARE measurement ,SOFTWARE engineering ,ENGINEERING - Abstract
Software comprehension is one of the largest costs in the software lifecycle. In an attempt to control the cost of comprehension, various complexity metrics have been proposed to characterize the difficulty of understanding a program and, thus, allow accurate estimation of the cost of a change. Such metrics are not always evaluated. This paper evaluates a group of metrics recently proposed to assess the "spatial complexity" of a program (spatial complexity is informally defined as the distance a maintainer must move within source code to build a mental model of that code). The evaluation takes the form of a large-scale empirical study of evolving source code drawn from a commercial organization. The results of this investigation show that most of the spatial complexity metrics evaluated offer no substantially better information about program complexity than the number of lines of code. However, one metric shows more promise and is thus deemed to be a candidate for further use and investigation. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
18. Confirming Configurations in EFSM Testing.
- Author
-
Petrenko, Alexandre, Boroday, Sergiy, and Groz, Roland
- Subjects
SOFTWARE engineering ,ENGINEERING ,CONFIGURATION management - Abstract
In this paper, we investigate the problem of configuration verification for the extended FSM (EFSM) model. This is an extension of the FSM state identification problem. Specifically, given a configuration ("state vector") and an arbitrary set of configurations, determine an input sequence such that the EFSM in the given configuration produces an output sequence different from that of the configurations in the given set or at least in a maximal proper subset. Such a sequence can be used in a test case to confirm the destination configuration of a particular EFSM transition. We demonstrate that this problem could be reduced to the EFSM traversal problem, so that the existing methods and tools developed in the context of model checking become applicable. We introduce notions of EFSM projections and products and, based on these notions, we develop a theoretical framework for determining configuration-confirming sequences. The proposed approach is illustrated on a realistic example. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
19. Simulation and Comparison of Albrecht's Function Point and DeMarco's Function Bang Metrics in a CASE Environment.
- Author
-
Rask, Raimo, Laamanen, Petteri, and Lyytinen, Kalle
- Subjects
COMPUTER software development ,COMPUTER programming management ,COMPUTER software ,ENGINEERING ,SOFTWARE engineering - Abstract
Software size estimates provide a basis for software cost estimation during software development. Hence, it is important to measure system size reliably as early as possible, i.e., during the requirements specification. The two best known specification-level metrics are Albrecht's Function Points and DeMarco's Function Bang. One problem in using these metrics has been that there are only a few tools that can calculate them during the specification phase. We have built one such tool. Another problem has been that no research data is available on how these metrics correlate with one another. The paper compares these two metrics in a simulation study in which automatically generated randomized data flow diagrams (DFD's) were used as a statistical sample to automatically count function points and function bang in a CASE environment built for this purpose. These counts were correlated statistically using correlation coefficients and regression analysis. The simulation study permits sufficient variation in the base material to cover most types of system specifications. Moreover, it allows sampling sizes sufficient for statistical analysis of the data. The obtained results show that in certain cases there is a relatively good statistical correlation between these metrics. No overall general correlation exists, however. The paper does not show which of the two metrics fares better as a size metric. Yet, our study suggests using the Function Bang metric in many cases, because its automatic calculation is simpler and depends less on judgement. Moreover, the study demonstrates that correlations depend on the system type. This implies that in software projects one must be careful with size estimates while using these metrics. In order to know when a size estimate needs calibration, we need to develop algorithms that help detect logical system types and make adjustments accordingly. The results also point out the need for empirical research from which we can better derive the connection between specification-level metrics and the number of lines of code. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
20. Capacity of Voting Systems.
- Author
-
Rangarajan, Sampath, Jalote, Pankaj, and Tripathi, Satish K.
- Subjects
BACKUP processing alternatives in electronic data processing ,SYSTEMS design ,DATABASES ,COMPUTER software ,ENGINEERING ,SOFTWARE engineering - Abstract
Data replication is often used to increase the availability of data in a database system. Voting schemes can be used to manage this replicated data. In this paper we use a simple model to study the capacity of systems using voting schemes for data management. Capacity of a system is defined as the number of operations the system can perform successfully, on average, per unit time. We study the capacity of a system using voting and compare it with the capacity of a system using a single node. We show that the maximum increase in capacity from the use of majority voting is bounded by 1/p, where p is the steady-state probability of a node being alive. We also show that for a system employing majority voting, if the reliability of nodes is high, increasing the number of nodes to more than three gives only a marginal increase in capacity. We perform similar analysis for three other voting schemes. [ABSTRACT FROM AUTHOR] (A brief reading of the 1/p bound follows this entry.)
- Published
- 1993
- Full Text
- View/download PDF
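One intuitive reading of entry 20's 1/p bound (a sketch consistent with the stated result; the paper's actual model is more detailed): a single node can successfully serve only the fraction p of operations arriving while it is alive, whereas no replicated scheme can do better than serving every operation, so

    \[
      C_{\text{single}} = p\,C_{\max}, \qquad
      C_{\text{voting}} \le C_{\max}
      \quad\Longrightarrow\quad
      \frac{C_{\text{voting}}}{C_{\text{single}}} \;\le\; \frac{1}{p}.
    \]

With reliable nodes (p near 1) the bound is close to 1, which matches the abstract's observation that adding nodes beyond three buys little capacity.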
21. LISPACK—A Methodology and Tool for the Performance Analysis of Parallel Systems and Algorithms.
- Author
-
Iazeolla, Giuseppe and Marinuzzi, Francesco
- Subjects
PARALLEL computers ,SOFTWARE engineering ,ENGINEERING ,COMPUTER software ,COMPUTER systems - Abstract
The paper deals with the performance analysis of parallel algorithms and systems. For these, numerical solution methods quickly show their limits because of the enormous state-space growth. The proposed methodology and software tool (LISPACK, an acronym for List-manipulation Parallel-modeling Package) uses string manipulation, lumping, and recursive elimination as a means for the definition of the large Markovian process, its restructuring, and its efficient solution. Initially, the enormous state space is conveniently collapsed and the large transition matrix is reduced. Subsequently, the reduced matrix is recursively block banded, and an efficient recursive, symbolic Gauss elimination is applied. No relevant costs are incurred for the state-space collapsing and restructuring, nor for the matrix block banding. The analysis of a typical parallel system and algorithm model is developed as a case study, to discuss the features of the method. The paper makes two contributions. First, a symbolic-approach methodology is proposed for the performance analysis of parallel algorithms and systems. Second, a tool is introduced that exploits the capabilities of the symbolic approach in the solution of parallel models, where numerical techniques reveal their limits. [ABSTRACT FROM AUTHOR]
- Published
- 1993
22. Rapid Transaction-Undo Recovery Using Twin-Page Storage Management.
- Author
-
Wu, Kun-Lung and Fuchs, W. Kent
- Subjects
COMPUTER storage devices ,SOFTWARE engineering ,ENGINEERING ,SOFTWARE productivity ,MANAGEMENT ,OPERATIONS research - Abstract
This paper presents a twin-page storage method, an alternative to the TWIST (twin slot) approach by Reuter, for rapid transaction-undo recovery. In contrast to TWIST, our twin-page approach allows dirty pages in the buffer to be written at any instant onto disk without the requirement of undo logging, and, when a transaction is aborted, no explicit undo is required. As a result, all locks accumulated by the aborted transaction can be released earlier, allowing other transactions waiting for the locks to proceed. Through maintenance of aborted transaction identifiers, invalid pages written by the aborted transaction coexist with other valid pages and are guaranteed not to be accessed by subsequent transactions. Instead of an explicit undo, most of the invalid pages are overwritten by subsequent normal updates. Performance in terms of disk I/O and CPU overhead for transaction-undo recovery is analyzed and compared with TWIST. It is shown that our scheme is particularly suited to applications where a large number of updates are written onto disk when transactions are aborted, and where aborts are frequent. The approach, however, is not as applicable to environments where transactions are typically short or rarely aborted, or where most updates are not written onto disk before a commitment. [ABSTRACT FROM AUTHOR] (A toy model of the twin-page scheme follows this entry.)
- Published
- 1993
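The core of the twin-page idea in entry 22 fits in a toy model (Python; names invented, and buffering, locking, and the aborted-transaction-identifier machinery are all omitted): each logical page has two physical slots, uncommitted writes go to the non-current slot, abort needs no undo, and commit is a pointer flip.

    class TwinPage:
        def __init__(self, data):
            self.slots = [data, None]
            self.current = 0
        def write_uncommitted(self, data):
            self.slots[1 - self.current] = data  # committed copy never overwritten
        def commit(self):
            self.current = 1 - self.current      # atomically adopt the new slot
        def read(self):
            return self.slots[self.current]

    p = TwinPage("v0")
    p.write_uncommitted("v1")
    print(p.read())  # "v0" -- aborting here requires no explicit undo
    p.commit()
    print(p.read())  # "v1"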
23. Towards Complexity Metrics for Ada Tasking.
- Author
-
Shatz, Sol M.
- Subjects
DISTRIBUTED computing ,PROGRAMMING languages ,COMPUTER software ,ELECTRONIC systems ,ENGINEERING ,SOFTWARE engineering - Abstract
With growing interest in distributed computing come demands for techniques to aid in development of correct and reliable distributed software. Controlling, or at least recognizing, complexity of such software is an important part of the development and maintenance process. While a number of metrics have been proposed for quantitatively measuring the complexity of sequential, centralized programs, corresponding metrics for distributed software are noticeable by their absence. Using Ada as a representative distributed programming language, this paper discusses some ideas on complexity metrics that focus on Ada tasking and rendezvous. Concurrently active rendezvous are claimed to be an important aspect of communication complexity. A Petri net graph model of Ada rendezvous is used to introduce a "rendezvous graph," an abstraction that can be useful in viewing and computing effective communication complexity. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
24. Single-Site and Distributed Optimistic Protocols for Concurrency Control.
- Author
-
Bassiouni, M. A.
- Subjects
ELECTRONIC data processing ,DATABASES ,COMPUTER software ,ELECTRONIC systems ,ENGINEERING ,SOFTWARE engineering - Abstract
In spite of their advantage in removing the overhead of lock maintenance and deadlock handling, optimistic concurrency control methods have continued to be far less popular in practice than locking schemes. There are two complementary approaches to help render the optimistic approach practically viable. For the high-level approach, integration schemes can be utilized so that the database management system is provided with a variety of synchronization methods, each of which can be applied to the appropriate class of transactions. The low-level approach seeks to increase the concurrency of the original optimistic method and improve its performance. In this paper we examine the latter approach and present algorithms that aim at reducing backups and improving throughput. Both single-site and distributed networks are considered. Optimistic schemes using time-stamps for fully duplicated and partially duplicated database networks are presented, with emphasis on performance enhancement and on reducing the overall cost of implementation. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
25. Generic Lifecycle Support in the ALMA Environment.
- Author
-
van Lamsweerde, Axel, Delcourt, Bruno, Delor, Emmanuelle, Schayes, Marie-Claire, and Champagne, Robert
- Subjects
ARCHITECTURAL design ,SOFTWARE engineering ,SYNTAX (Grammar) ,ARCHITECTURAL designs ,ENGINEERING ,COMPUTER architecture - Abstract
ALMA is an environment kernel supporting the elaboration, analysis, documentation, and maintenance of the various products developed during an entire software lifecycle. Its central component is an environment database in which properties about software objects and relations are collected. These properties include texts written in various formalisms. Two kinds of tools are provided: 1) high-level tools for updating, querying, reporting, and maintaining multiple versions of software objects and relations consistently in the database, and 2) syntax-directed tools, like structural editors, for manipulating the formal texts attached to software objects and relations in the database. A basic feature of the ALMA kernel is its genericity. Tools of the first kind are parameterized on software lifecycle models, while tools of the second kind are parameterized on formalisms. Instantiated versions of them for specific models and formalisms are generated by a meta-environment, which also generates the environment database structure tailored to the desired lifecycle model. This paper concentrates on the database support meta-system and the instantiated database support systems it generates. Our main concern is to discuss the architectural design decisions we made and the mechanisms we introduced for achieving parameterization on lifecycle models. In particular, we describe the entity-relationship meta-model we designed for meta-defining a particular lifecycle model as input to the meta-system. This meta-model is an extension of standard entity-relationship models in that n-ary relations can have attributes, they can be defined on unions of entity types, type specialization with multiple inheritance is supported, and a mechanism is provided for defining views yielding different environment subdatabases associated with different classes of users and/or tools. The crucial role played by this meta-model is stressed throughout the paper. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
26. PROTEAN: A High-level Petri Net Tool for the Specification and Verification of Communication Protocols.
- Author
-
Billington, Jonathan, Wheeler, Geoffrey R., and Wilbur-Ham, Michael C.
- Subjects
PETRI nets ,GRAPH theory ,COMPUTER software ,SOFTWARE engineering ,ENGINEERING ,COMPUTER systems - Abstract
A computer aid for the specification and analysis of computer communication protocols has been developed over a period of 7 years by the Telecom Australia Research Laboratories. It is based on a formal specification technique called Numerical Petri Nets. The computer aid, known as PROTEAN (PROTocol Emulation and Analysis), provides both graphical (color) and textual interfaces to the protocol designer. Numerical Petri Net (NPN) specifications may be created, stored, appended to other NPNs, structured, edited, listed, displayed, and analyzed. Interactive simulation, exhaustive reachability analysis, and several directed graph analysis facilities are provided. Reachability graphs can be automatically laid out and displayed. PROTEAN determines liveness (dead code, deadlocks, and livelocks) from the reachability graph and its strongly connected components. Language analysis, involving the automatic reduction of reachability graphs to language graphs, can be used to study sequences of key system events. This allows a protocol to be compared with its service specification. Elementary cycles of graphs can be generated, allowing interesting cycles to be highlighted on reachability and language graphs. Facilities are provided for debugging the specification, once a problem with the protocol has been discovered. They allow sequences of events, which lead to the undesired behavior, to be traced. The paper commences with a comparison of specification languages, concentrating on extended finite state machines and high-level Petri nets. NPNs and PROTEAN's facilities are then described and illustrated with a simple example. The application of PROTEAN to complex examples is mentioned briefly. A discussion of the approach, its limitations and future work is presented in the context of other developments reported in the literature. Work towards a comprehensive Protocol Engineering Workstation is also discussed. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
27. Fragtypes: A Basis for Programming Environments.
- Author
-
Madhavji, Nazim H.
- Subjects
PROGRAMMING languages ,SOFTWARE engineering ,MATHEMATICAL models ,COMPUTER programming ,ENGINEERING - Abstract
It is being recognized that recent programming environments have made significant progress towards improving the programming process. In adhering to this goal of software engineering, this paper introduces a new basis for programming environments. This basis encourages development of software in fragments of various types, called fragtypes. Fragtypes range from a simple expression type to a complete subsystem type. As a result, they are suited to the development of software in an enlarged scope that includes both programming in the small and programming in the large. The paper shows how newly proposed operations on fragtypes can achieve unusual effects on the software development process. Fragtypes and their associated construction rules form the basis of the programming environment MUPE-2, which is currently under development at McGill University. The target and implementation language of this environment is the programming language Modula-2. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
28. Mathematical Model of Composite Objects and Its Application for Organizing Engineering Databases.
- Author
-
Ketabchi, Mohammad A. and Berzins, Valdis
- Subjects
SOFTWARE engineering ,ENGINEERING ,DATABASES ,MATHEMATICAL models ,COMPUTER programming - Abstract
Composite objects are descriptions of assemblies of parts which themselves can be assemblies of other parts. Efficient storage and retrieval of composite objects is essential for computer-aided design applications of database management systems. This paper introduces a clustering concept called component aggregation, which considers assemblies having the same types of parts as equivalent objects. The notion of equivalent objects is used to develop a mathematical model of composite objects. It is shown that the set of equivalence classes of objects forms a Boolean algebra whose minterms represent the objects which are not considered composite at the current viewing level. The algebraic structure of composite objects serves as a basis for developing a technique for organizing composite objects and supporting materialization of explosion views. The technique provides a clustering mechanism which partitions the database into meaningful and application-oriented clusters, and allows any desired explosion view to be materialized using a minimal set of stored views. A simplified relational database for design data and a set of frequent access patterns in design applications are outlined and used to demonstrate the benefits of database organization based on the mathematical model of composite objects. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
29. Specification of Synchronizing Processes.
- Author
-
Ramamritham, Krithivasan and Keller, Robert M.
- Subjects
COMPUTER software ,COMPUTER programming ,ELECTRONIC systems ,SOFTWARE engineering ,ENGINEERING ,COMPUTER systems - Abstract
The formalism of temporal logic has been suggested to be an appropriate tool for expressing the semantics of concurrent programs. This paper is concerned with the application of temporal logic to the specification of factors affecting the synchronization of concurrent processes. Towards this end, we first introduce a model for synchronization and axiomatize its behavior. SYSL, a very high-level language for specifying synchronization properties, is then described. It is designed using the primitives of temporal logic and features constructs to express properties that affect synchronization in a fairly natural and modular fashion. Since the statements in the language have intuitive interpretations, specifications are humanly readable. In addition, since they possess appropriate formal semantics, unambiguous specifications result. [ABSTRACT FROM AUTHOR]
- Published
- 1983
- Full Text
- View/download PDF
30. Toolpack—An Experimental Software Development Environment Research Project.
- Author
-
Osterweil, Leon J.
- Subjects
COMPUTER software development ,ELECTRONIC systems ,SOFTWARE engineering ,ENGINEERING ,COMPUTER systems ,COMPUTER software - Abstract
This paper discusses the goals and methods of the Toolpack project and in this context discusses the architecture and design of the software system being produced as the focus of the project. Toolpack is presented as an experimental activity in which a large software tool environment is being created for the purpose of general distribution and then careful study and analysis. The paper begins by explaining the motivation for building integrated tool sets. It then proceeds to explain the basic requirements that an integrated system of tools must satisfy in order to be successful and to remain useful both in practice and as an experimental object. The paper then summarizes the tool capabilities that will be incorporated into the environment. It then goes on to present a careful description of the actual architecture of the Toolpack integrated tool system. Finally the Toolpack project experimental plan is presented, and future plans and directions are summarized. [ABSTRACT FROM AUTHOR]
- Published
- 1983
31. Software Function, Source Lines of Code, and Development Effort Prediction: A Software Science Validation.
- Author
-
Albrecht, Allan J. and Gaffney Jr., John E.
- Subjects
COMPUTER software development ,COMPUTER software ,COMPUTER systems ,SOFTWARE engineering ,ENGINEERING - Abstract
One of the most important problems faced by software developers and users is the prediction of the size of a programming system and its development effort. As an alternative to "size," one might deal with a measure of the "function" that the software is to perform. Albrecht [1] has developed a methodology to estimate the amount of the "function" the software is to perform, in terms of the data it is to use (absorb) and to generate (produce). The "function" is quantified as "function points," essentially, a weighted sum of the numbers of "inputs," "outputs," "master files," and "inquiries" provided to, or generated by, the software. This paper demonstrates the equivalence between Albrecht's external input/output data flow representation of a program (the "function points" metric) and Halstead's [2] "software science" or "software linguistics" model of a program, as well as the "soft content" variation of Halstead's model suggested by Gaffney [7]. Further, the high degree of correlation between "function points" and the eventual "SLOC" (source lines of code) of the program, and between "function points" and the work-effort required to develop the code, is demonstrated. The "function point" measure is thought to be more useful than "SLOC" as a prediction of work effort because "function points" are relatively easily estimated from a statement of basic requirements for a program early in the development cycle. The strong degree of equivalency between "function points" and "SLOC" shown in the paper suggests a two-step work-effort validation procedure, first using "function points" to estimate "SLOC," and then using "SLOC" to estimate the work-effort. This approach would provide validation of application development work plans and work-effort estimates early in the development cycle. The approach would also more effectively use the existing base of knowledge on producing "SLOC" until a similar base is developed for "function points." The paper assumes that the reader is familiar with the fundamental theory of "software science" measurements and the practice of validating estimates of work-effort to design and implement software applications (programs). If not, a review of [1]-[3] is suggested. [ABSTRACT FROM AUTHOR] (A worked function point count follows this entry.)
- Published
- 1983
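A worked example of the unadjusted count that entry 31 builds on (Python; the weights are Albrecht's commonly cited average complexity weights, and the component counts are invented):

    weights = {"inputs": 4, "outputs": 5, "inquiries": 4, "master_files": 10}
    counts  = {"inputs": 12, "outputs": 8, "inquiries": 5, "master_files": 3}
    fp = sum(weights[k] * counts[k] for k in weights)
    print(fp)  # 12*4 + 8*5 + 5*4 + 3*10 = 138 function points

The paper's point is that such a count, available from a requirements statement early in the development cycle, already predicts eventual SLOC and work-effort well.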
32. Problem Oriented Software Engineering: Solving the Package Router Control Problem.
- Author
-
Hall, Jon G., Rapanotti, Lucia, and Jackson, Michael A.
- Subjects
SOFTWARE engineering ,COMPUTER software development ,ROUTING (Computer network management) ,NETWORK routers ,COMPUTER systems ,ENGINEERING - Abstract
Problem orientation is gaining interest as a way of approaching the development of software intensive systems, and yet, a significant example that explores its use is missing from the literature. In this paper, we present the basic elements of Problem Oriented Software Engineering (POSE), which aims at bringing both nonformal and formal aspects of software development together in a single framework. We provide an example of a detailed and systematic POSE development of a software problem: that of designing the controller for a package router. The problem is drawn from the literature, but the analysis presented here is new. The aim of the example is twofold: to illustrate the main aspects of POSE and how it supports software engineering design and to demonstrate how a nontrivial problem can be dealt with by the approach. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
33. Design Pattern Detection Using Similarity Scoring.
- Author
-
Tsantalis, Nikolaos, Chatzigeorgiou, Alexander, Stephanides, George, and Halkidis, Spyros T.
- Subjects
REVERSE engineering ,GRAPH algorithms ,COMPUTER software ,OPERATIONS research ,COMPUTER algorithms ,SOFTWARE patterns ,ENGINEERING ,HEURISTIC ,METHODOLOGY ,OPEN source software - Abstract
The identification of design patterns as part of the reengineering process can convey important information to the designer. However, existing pattern detection methodologies generally have problems in dealing with one or more of the following issues: identification of modified pattern versions, search space explosion for large systems, and extensibility to novel patterns. In this paper, a design pattern detection methodology is proposed that is based on similarity scoring between graph vertices. Due to the nature of the underlying graph algorithm, this approach has the ability to also recognize patterns that are modified from their standard representation. Moreover, the approach exploits the fact that patterns reside in one or more inheritance hierarchies, reducing the size of the graphs to which the algorithm is applied. Finally, the algorithm does not rely on any pattern-specific heuristic, facilitating the extension to novel design structures. Evaluation on three open-source projects demonstrated the accuracy and the efficiency of the proposed method. [ABSTRACT FROM AUTHOR] (A sketch of the similarity-scoring iteration follows this entry.)
- Published
- 2006
- Full Text
- View/download PDF
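The similarity-scoring iteration behind entry 33 can be sketched as follows (Python/NumPy; the exact matrices, normalization, and convergence handling the authors use are assumptions here): scores between a pattern-graph vertex and a system-graph vertex rise when their neighborhoods are similar.

    import numpy as np

    # A and B: adjacency matrices of the pattern graph and the system graph.
    def similarity_scores(A, B, iterations=20):
        S = np.ones((B.shape[0], A.shape[0]))  # score per (system, pattern) vertex pair
        for _ in range(iterations):
            S = B @ S @ A.T + B.T @ S @ A      # propagate scores along edges
            S /= np.linalg.norm(S)             # normalize to keep the iteration stable
        return S

    A = np.array([[0, 1], [0, 0]])                    # pattern: a single directed edge
    B = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])   # system: a three-vertex chain
    print(similarity_scores(A, B).round(3))

High entries in the result suggest which system vertices may play which pattern roles, even when the pattern occurs in a modified form.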
34. Covering Arrays for Efficient Fault Characterization in Complex Configuration Spaces.
- Author
-
Yilmaz, Cemal, Cohen, Myra B., and Porter, Adam A.
- Subjects
SOFTWARE engineering ,COMPUTER software ,ELECTRONIC systems ,MATHEMATICS ,ENGINEERING ,TECHNOLOGY ,HIGH technology industries ,COMPUTER programming - Abstract
Many modern software systems are designed to be highly configurable so they can run on and be optimized for a wide variety of platforms and usage scenarios. Testing such systems is difficult because, in effect, you are testing a multitude of systems, not just one. Moreover, bugs can and do appear in some configurations, but not in others. Our research focuses on a subset of these bugs that are "option-related"--those that manifest with high probability only when specific configuration options take on specific settings. Our goal is not only to detect these bugs, but also to automatically characterize the configuration subspaces (i.e., the options and their settings) in which they manifest. To improve efficiency, our process tests only a sample of the configuration space, which we obtain from mathematical objects called covering arrays. This paper compares two different kinds of covering arrays for this purpose and assesses the effect of sampling strategy on fault characterization accuracy. Our results strongly suggest that sampling via covering arrays allows us to characterize option-related failures nearly as well as if we had tested exhaustively, but at a much lower cost. We also provide guidelines for using our approach in practice. [ABSTRACT FROM AUTHOR] (A small covering-array example follows this entry.)
- Published
- 2006
- Full Text
- View/download PDF
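To make the covering-array idea of entry 34 concrete: for three boolean options, four well-chosen configurations already exercise every pairwise combination of settings, versus eight for exhaustive testing (rows chosen by hand here, not by the authors' construction):

    from itertools import combinations, product

    rows = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
    for i, j in combinations(range(3), 2):
        pairs = {(r[i], r[j]) for r in rows}
        assert pairs == set(product((0, 1), repeat=2))  # all 4 settings covered
    print("4 rows cover all pairwise option interactions")

The saving grows quickly: strength-2 covering arrays grow roughly logarithmically with the number of options, while the exhaustive configuration space grows exponentially.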
35. Leveraging User-Session Data to Support Web Application Testing.
- Author
-
Elbaum, Sebastian, Rothermel, Gregg, Karre, Srikanth, and Fisher II, Marc
- Subjects
APPLICATION software ,COMPUTER software testing ,RELIABILITY in engineering ,SOFTWARE engineering ,ENGINEERING - Abstract
Web applications are vital components of the global information infrastructure, and it is important to ensure their dependability. Many techniques and tools for validating Web applications have been created, but few of these have addressed the need to test Web application functionality and none have attempted to leverage data gathered in the operation of Web applications to assist with testing. In this paper, we present several techniques for using user session data gathered as users operate Web applications to help test those applications from a functional standpoint. We report results of an experiment comparing these new techniques to existing white-box techniques for creating test cases for Web applications, assessing both the adequacy of the generated test cases and their ability to detect faults on a point-of-sale Web application. Our results show that user session data can be used to produce test suites more effective overall than those produced by the white-box techniques considered; however, the faults detected by the two classes of techniques differ, suggesting that the techniques are complementary. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
36. FSM-Based Incremental Conformance Testing Methods.
- Author
-
El-Fakih, Khaled, Yevtushenko, Nina, and Bochmann, Gregor V.
- Subjects
SOFTWARE engineering ,COMPUTER software ,COMPUTER systems ,TECHNICAL specifications ,INDUSTRIAL design ,ENGINEERING - Abstract
The development of appropriate test suites is an important issue for conformance testing of protocol implementations and other reactive software systems. A number of methods are known for the development of a test suite based on a specification given in the form of a finite state machine. In practice, the system requirements evolve throughout the lifetime of the system and the specifications are modified incrementally. In this paper, we adapt four well-known test derivation methods, namely the HIS, W, Wp, and UIOv methods, for generating tests that would test only the modified parts of an evolving specification. Some application examples and experimental results are provided. These results show significant gains when using incremental testing in comparison with complete testing, especially when the modified part represents less than 20 percent of the whole specification. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
37. Event-Based Traceability for Managing Evolutionary Change.
- Author
-
Cleland-Huang, Jane, Chang, Carl K., and Christensen, Mark
- Subjects
COMPUTER software ,ENGINEERING ,PROJECT management ,COMPUTER systems ,AUTOMATION ,ELECTRONIC systems - Abstract
Although the benefits of requirements traceability are widely recognized, the actual practice of maintaining a traceability scheme is not always entirely successful. The traceability infrastructure underlying a software system tends to erode over its lifetime, as time-pressured practitioners fail to consistently maintain links and update impacted artifacts each time a change occurs, even with the support of automated systems. This paper proposes a new method of traceability based upon event-notification and is applicable even in a heterogeneous and globally distributed development environment. Traceable artifacts are no longer tightly coupled but are linked through an event service, which creates an environment in which change is handled more efficiently, and artifacts and their related links are maintained in a restorable state. The method also supports enhanced project management for the process of updating and maintaining the system artifacts. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
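Entry 37's event service can be pictured as a small publish/subscribe mechanism: artifacts register interest in a requirement through the service instead of holding hard links to it. The EventService class, artifact names, and callbacks below are illustrative, not the paper's infrastructure.

```python
# Minimal publish/subscribe sketch of event-based traceability.
class EventService:
    def __init__(self):
        self.subscribers = {}          # requirement id -> list of callbacks

    def subscribe(self, req_id, callback):
        self.subscribers.setdefault(req_id, []).append(callback)

    def publish(self, req_id, change):
        # Every dependent artifact is notified; none holds a direct link.
        for cb in self.subscribers.get(req_id, []):
            cb(req_id, change)

service = EventService()
# Dependent artifacts subscribe to the requirement they trace to.
service.subscribe("REQ-7", lambda rid, ch: print(f"design doc: {rid} {ch}, re-check impact"))
service.subscribe("REQ-7", lambda rid, ch: print(f"test plan: {rid} {ch}, mark tests stale"))

service.publish("REQ-7", "changed")    # one change event reaches all dependents
```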
38. Knowledge-Based Automation of a Design Method for Concurrent Systems.
- Author
-
Mills, Kevin L. and Gomaa, Hassan
- Subjects
SOFTWARE engineering ,EXPERT systems ,SYSTEMS design ,ELECTRONIC data processing ,SYSTEM analysis ,ENGINEERING - Abstract
This paper describes a knowledge-based approach to automate a software design method for concurrent systems. The approach uses multiple paradigms to represent knowledge embedded in the design method. Semantic data modeling provides the means to represent concepts from a behavioral modeling technique, called Concurrent Object-Based Real-time Analysis (COBRA), which defines system behavior using data/control flow diagrams. Entity-Relationship modeling is used to represent a design metamodel based on a design method, called Concurrent Design Approach for Real-Time Systems (CODARTS), which represents concurrent designs as software architecture diagrams, task behavior specifications, and module specifications. Production rules provide the mechanism for codifying a set of CODARTS heuristics that can generate concurrent designs based on semantic concepts included in COBRA behavioral models and on entities and relationships included in CODARTS design metamodels. Together, the semantic data model, the entity-relationship model, and the production rules, when encoded using an expert-system shell, compose CODA, an automated designer's assistant. Other forms of automated reasoning, such as knowledge-based queries, can be used to check the correctness and completeness of generated designs with respect to properties defined in the CODARTS design metamodel. CODA is applied to generate 10 concurrent designs for four real-time problems. The paper reports the degree of automation achieved by CODA. The paper also evaluates the quality of generated designs by comparing the similarity between designs produced by CODA and human designs reported in the literature for the same problems. In addition, the paper compares CODA with four other approaches used to automate software design methods. [ABSTRACT FROM AUTHOR] (An illustrative sketch follows this entry.)
- Published
- 2002
- Full Text
- View/download PDF
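The production-rule mechanism described in entry 38 can be suggested with a toy forward-chaining loop that maps behavioral-model facts to design decisions. The facts, rules, and task names below are invented examples in the spirit of the COBRA-to-CODARTS mapping; they do not reproduce CODA's actual knowledge base.

```python
# Toy forward chaining: fire rules until no new design facts appear.
facts = {("transform", "sensor_input", "periodic"),
         ("transform", "alarm_handler", "event_driven")}

# Each rule: (condition over one fact, conclusion it adds). Invented.
rules = [
    (lambda f: f[0] == "transform" and f[2] == "periodic",
     lambda f: ("task", f[1], "periodic_task")),
    (lambda f: f[0] == "transform" and f[2] == "event_driven",
     lambda f: ("task", f[1], "interrupt_driven_task")),
]

derived, changed = set(), True
while changed:
    changed = False
    for cond, concl in rules:
        for f in list(facts):
            if not cond(f):
                continue
            new = concl(f)
            if new not in facts:
                facts.add(new)
                derived.add(new)
                changed = True

print(sorted(derived))   # design decisions produced by the rules
```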
39. Defining and Applying Measures of Distance Between Specifications.
- Author
-
Labed Jilani, Lamia, Desharnais, Jules, and Mili, Ali
- Subjects
TECHNICAL specifications ,SOFTWARE measurement ,SOFTWARE engineering ,ENGINEERING - Abstract
Echoing Louis Pasteur's quote, we submit the premise that it is advantageous to define measures of distance between requirements specifications because such measures open up a wide range of possibilities both in theory and in practice. In this paper, we present a mathematical basis for measuring distances between specifications and show how our measures of distance can be used to address concrete problems that arise in the practice of software engineering. [ABSTRACT FROM AUTHOR] (An illustrative sketch follows this entry.)
- Published
- 2001
- Full Text
- View/download PDF
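One way to make entry 39 concrete is a simple set-theoretic distance over specifications modeled as relations (sets of input/output pairs). The metric below is an illustration of the general idea, not necessarily the measure defined in the paper.

```python
# One plausible distance between finite relational specifications.
def distance(spec_a, spec_b):
    """Cardinality of the symmetric difference of two relations.
    It is 0 iff the specifications agree, is symmetric, and satisfies
    the triangle inequality, so it is a genuine metric."""
    return len(spec_a ^ spec_b)

sqrt_spec  = {(0, 0), (1, 1), (4, 2)}
loose_spec = {(0, 0), (1, 1), (4, 2), (4, -2)}   # also allows -2 for input 4

print(distance(sqrt_spec, loose_spec))   # 1: the specs differ in a single pair
```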
40. Validating the ISO/IEC 15504 Measure of Software Requirements Analysis Process Capability.
- Author
-
El Emam, Khaled and Birk, Andreas
- Subjects
SOFTWARE engineering ,STANDARDS ,COMPUTER software ,TRUTHFULNESS & falsehood ,PERFORMANCE ,ENGINEERING - Abstract
ISO/IEC 15504 is an emerging international standard on software process assessment. It defines a number of software engineering processes and a scale for measuring their capability. One of the defined processes is software requirements analysis (SRA). A basic premise of the measurement scale is that higher process capability is associated with better project performance (i.e., predictive validity). This paper describes an empirical study that evaluates the predictive validity of SRA process capability. Assessments using ISO/IEC 15504 were conducted on 56 projects world-wide over a period of two years. Performance measures on each project were also collected using questionnaires, such as the ability to meet budget commitments and staff productivity. The results provide strong evidence of predictive validity for the SRA process capability measure used in ISO/IEC 15504, but only for organizations with more than 50 IT staff. Specifically, a strong relationship was found between the implementation of requirements analysis practices as defined in ISO/IEC 15504 and the productivity of software projects. For smaller organizations, evidence of predictive validity was rather weak. This can be interpreted in a number of different ways: that the measure of capability is not suitable for small organizations or that the SRA process capability has less effect on project performance for small organizations. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
41. A Theory-Based Representation for Object-Oriented Domain Models.
- Author
-
DeLoach, Scott A. and Hartrum, Thomas C.
- Subjects
SOFTWARE engineering ,FORMAL methods (Computer science) ,TECHNICAL specifications ,SYSTEMS design ,ENGINEERING ,MODEL categories (Mathematics) - Abstract
Formal software specification has long been touted as a way to increase the quality and reliability of software; however, it remains an intricate, manually intensive activity. An alternative to using formal specifications directly is to translate graphically based, semiformal specifications into formal specifications. However, before this translation can take place, a formal definition of basic object-oriented concepts must be found. This paper presents an algebraic model of object-orientation that defines how object-oriented concepts can be represented algebraically using an object-oriented algebraic specification language, O-SLANG. O-SLANG combines basic algebraic specification constructs with category theory operations to capture internal object class structure, as well as relationships between classes. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
42. Message of the Editor for Software Engineering Experience.
- Author
-
Munson, Jack
- Subjects
SOFTWARE engineering ,ENGINEERING ,COMPUTER software - Abstract
Focuses on the need to apply good software engineering in practice.
- Published
- 1983
43. Evolution and Reuse of Orthogonal Architecture.
- Author
-
Rajlich, Václav and Silva, João H.
- Subjects
ARCHITECTURE ,TECHNOLOGY ,SCHOOLS of architecture ,ENGINEERING ,SCHOOLS ,INDUSTRIAL arts - Abstract
In this paper, we present a case study of evolution (or vertical reuse) in the domain of visual interactive software tools. We introduce an architecture suitable for this purpose, called orthogonal architecture. The paper describes the architecture itself, the reverse engineering process by which it was obtained, and the forward engineering process by which it was evolved. [ABSTRACT FROM AUTHOR]
- Published
- 1996
- Full Text
- View/download PDF
44. Reusing Software: Issues and Research Directions.
- Author
-
Mili, Hafedh, Mili, Fatma, and Mili, Ali
- Subjects
COMPUTER software development ,ARTIFICIAL intelligence ,SOFTWARE engineering ,ENGINEERING ,COMPUTER software ,COMPUTER systems ,ELECTRONIC systems - Abstract
Software productivity has been steadily increasing over the past 30 years, but not enough to close the gap between the demands placed on the software industry and what the state of the practice can deliver [22], [39]; nothing short of an order of magnitude increase in productivity will extricate the software industry from its perennial crisis [39], [67]. Several decades of intensive research in software engineering and artificial intelligence left few alternatives but software reuse as the (only) realistic approach to bring about the gains of productivity and quality that the software industry needs. In this paper, we discuss the implications of reuse on the production of software, with an emphasis on the technical challenges. Software reuse involves building software that is reusable by design and building with reusable software. Software reuse includes reusing both the products of previous software projects and the processes deployed to produce them, leading to a wide spectrum of reuse approaches, from the building blocks (reusing products) approach, on one hand, to the generative or reusable processor (reusing processes) approach, on the other [68]. We discuss the implications of such approaches on the organization, control, and method of software development and discuss proposed models for their economic analysis. Software reuse benefits from methodologies and tools to: 1) build more readily reusable software and 2) locate, evaluate, and tailor reusable software, the last being critical for the building blocks approach. Both sets of issues are discussed in this paper, with a focus on application generators and OO development for the first and a thorough discussion of retrieval techniques for software components, component composition (or bottom-up design), and transformational systems for the second. We conclude by highlighting areas that, in our opinion, are worthy of further investigation. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
45. Properties of Control-Flow Complexity Measures.
- Author
-
Lakshmanan, K.B., Jayaprakash, S., and Sinha, P.K.
- Subjects
FLOWGRAPHS ,SOFTWARE measurement ,SOFTWARE engineering ,SYSTEM analysis ,ENGINEERING ,SOFTWARE reengineering - Abstract
In this paper we attempt to formalize some properties which any reasonable control-flow complexity measure must satisfy, and our work is in the same spirit as an earlier paper by Weyuker in this TRANSACTIONS [18]. Since large programs are often built by sequencing and nesting of simpler constructs, we explore how control-flow complexity measures behave under such compositions. Our analysis reveals the strengths and weaknesses of some of the existing control flow complexity measures. [ABSTRACT FROM AUTHOR] (An illustrative sketch follows this entry.)
- Published
- 1991
- Full Text
- View/download PDF
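Entry 45's compositional analysis can be illustrated with one classic measure. The sketch below computes McCabe's cyclomatic complexity V(G) = E - N + 2 on toy flowgraphs and checks how it behaves under sequencing; the graphs are invented examples, not the paper's.

```python
# Cyclomatic complexity of a flowgraph given as an edge list.
def cyclomatic(edges):
    nodes = {n for e in edges for n in e}
    return len(edges) - len(nodes) + 2

# P1: if-then-else (entry 1, branches to 2/3, join at 4)
p1 = [(1, 2), (1, 3), (2, 4), (3, 4)]
# P2: single loop (entry 5, loop 5->6->5, exit 6->7)
p2 = [(5, 6), (6, 5), (6, 7)]
# Sequencing P1;P2: exit of P1 (node 4) flows into entry of P2 (node 5)
seq = p1 + [(4, 5)] + p2

v1, v2, vs = cyclomatic(p1), cyclomatic(p2), cyclomatic(seq)
print(v1, v2, vs)          # 2 2 3
assert vs == v1 + v2 - 1   # additivity-minus-one under sequencing
```

This is exactly the kind of behavior-under-composition property the paper formalizes and uses to compare measures.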
46. The Processor Working Set and Its Use in Scheduling Multiprocessor Systems.
- Author
-
Ghosal, Dipak, Serazzi, Giuseppe, and Tripathi, Satish K.
- Subjects
MULTIPROCESSORS ,COMPUTERS ,COMPUTER software ,SOFTWARE engineering ,ENGINEERING ,COMPUTER systems ,ALGORITHMS ,PARALLEL processing - Abstract
There are two main contributions of this paper. First, this paper introduces the concept of a processor working set (pws) as a single value parameter for characterizing the parallel program behavior. Through detailed experimental studies of different algorithms on a transputer-based multiprocessor machine, it is shown that the pws is indeed a robust measure for characterizing the workload of a multiprocessor system. Small deviations in the performance of algorithms arising due to communication overhead are captured in this parameter. The second contribution of this paper relates to the study of static processor allocation strategies. It is shown that processor allocation strategies based on the pws provide significantly better throughput-delay characteristics. The robustness of pws is further demonstrated by showing that allocation policies that allocate processors more than the pws are inferior in performance to those that never allocate more than the pws, even at a moderately low load. Based on the results, a simple static allocation policy that allocates the pws at low load and adaptively fragments at high load to one processor per job is proposed. This allocation strategy is shown to possess the best throughput-delay characteristic over a wide range of loads. [ABSTRACT FROM AUTHOR] (An illustrative sketch follows this entry.)
- Published
- 1991
- Full Text
- View/download PDF
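A pws-style allocation policy of the kind studied in entry 46 can be sketched briefly. The definition used below (smallest processor count reaching a fixed fraction of peak speedup), the 0.9 threshold, and the speedup data are all illustrative assumptions, not the paper's measurements.

```python
# Measured speedup per processor count for one job (invented numbers).
speedup = {1: 1.0, 2: 1.9, 4: 3.4, 8: 4.1, 16: 4.3}

def pws(speedup, threshold=0.9):
    """Smallest processor count reaching `threshold` of peak speedup."""
    peak = max(speedup.values())
    return min(p for p, s in sorted(speedup.items()) if s >= threshold * peak)

def allocate(free_processors, job_speedup):
    """Never give a job more processors than its pws; at high load the
    paper's policy fragments further, down to one processor per job."""
    return min(pws(job_speedup), free_processors)

print(pws(speedup))          # 8: 4.1 >= 0.9 * 4.3, but 4 processors fall short
print(allocate(6, speedup))  # 6: capped by availability
```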
47. High Performance Software Testing on SIMD Machines.
- Author
-
Krauser, Edward W., Mathur, Aditya P., and Rego, Vernon J.
- Subjects
COMPUTER software ,SOFTWARE engineering ,ENGINEERING ,COMPUTER systems ,ELECTRONIC systems - Abstract
This paper describes a new method, called mutant unification, for high-performance software testing. The method is aimed at supporting program mutation on parallel machines based on the Single Instruction Multiple Data stream (SIMD) paradigm. Several parameters that affect the performance of unification have been identified and their effect on the time to completion of a mutation test cycle and speedup has been studied. Program mutation analysis provides an effective means for determining the reliability of large software systems. It also provides a systematic method for measuring the adequacy of test data. However, it is likely that testing large software systems using mutation is computation bound and prohibitive on traditional sequential machines. Current implementations of mutation tools are unacceptably slow and are only suitable for testing relatively small programs. The unification method reported in this paper provides a practical alternative to the current approaches. It also opens up a new application domain for SIMD machines. [ABSTRACT FROM AUTHOR] (An illustrative sketch follows this entry.)
- Published
- 1991
- Full Text
- View/download PDF
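The data-parallel intuition behind entry 47 can be suggested with vectorized mutant execution. In the sketch below, NumPy lanes stand in for SIMD processing elements, and the program and its mutants are toy examples; this is not the mutant unification scheme itself.

```python
# Encode each mutant as data and evaluate all mutants in lockstep.
import numpy as np

# Original program: f(x) = 2*x + 3. Mutants perturb the two constants.
coeffs = np.array([2, 2, 5, 2])      # lane 0 is the original program
consts = np.array([3, 7, 3, 0])

def run_all(x):
    return coeffs * x + consts       # one "instruction", all mutants at once

x, expected = 10, 2 * 10 + 3
killed = run_all(x) != expected      # mutants whose output differs
print(killed)                        # [False  True  True  True]
print(f"mutation score: {killed[1:].mean():.2f}")
```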
48. Process Synchronization: Design and Performance Evaluation of Distributed Algorithms.
- Author
-
Bagrodia, Rajive
- Subjects
COMPUTER algorithms ,ALGORITHMS ,SYNCHRONIZATION ,SOFTWARE engineering ,ENGINEERING ,COMPUTER programming - Abstract
The concept of multiway rendezvous has been proposed to implement synchronous communication among an arbitrary number of concurrent, asynchronous processes. The synchronization and exclusion problems associated with implementing multiway rendezvous are expressed succinctly in the context of the committee coordination problem. A variety of techniques to solve the synchronization and exclusion problems may be combined to design algorithms for the committee coordination problem. This paper presents a simple solution for the problem and shows how it can be implemented to develop a family of algorithms. The algorithms use message counts to solve synchronization and solve the exclusion problem by using a circulating token, or by using auxiliary resources as in the solutions for the dining or drinking philosophers problems. The paper also presents the results of a simulation study on the performance of the algorithms. The experiments measured the response time and message complexity of each algorithm as a result of variations in different model parameters including network topology and level of conflict in the system. The results from the study show that the response time for algorithms proposed in this paper is significantly better than an existing algorithm, whereas the message complexity is considerably worse. [ABSTRACT FROM AUTHOR] (An illustrative sketch follows this entry.)
- Published
- 1989
- Full Text
- View/download PDF
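Entry 48's recipe (message counts for synchronization, a circulating token for exclusion) admits a very small sequential sketch. The committees, readiness counts, and token order below are invented, and the real algorithms are distributed rather than a single loop.

```python
# Committee coordination, toy version: a committee may meet only when
# every member has a pending "ready" message (synchronization via
# message counts); only the current token holder may convene a meeting
# (exclusion via a circulating token).
committees = {"C1": {"a", "b"}, "C2": {"b", "c"}}
ready = {"a": 1, "b": 1, "c": 0}     # pending "I am ready" messages

def token_round(order):
    """Pass the token through committees in `order`; convene the first
    fully ready committee, consuming one ready message per member."""
    for cid in order:
        members = committees[cid]
        if all(ready[m] > 0 for m in members):
            for m in members:
                ready[m] -= 1
            return cid                # meeting convened; exclusion held
    return None

print(token_round(["C1", "C2"]))      # C1 meets; b's message is consumed
print(token_round(["C1", "C2"]))      # None: no committee is fully ready
```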
49. Implementation of an FP-Shell.
- Author
-
Kamath, Yogeesh H. and Matthews, Manton M.
- Subjects
PROGRAMMING languages ,COMPUTER software ,ENGINEERING ,SOFTWARE engineering - Abstract
One of the best features of the UNIX™ Shell is that it provides a framework which can be used to build complex programs by interconnecting existing simple programs. However, it is limited to linear combinations of programs, and building of more complex programs must be accomplished by executing sequences of commands. This paper introduces Backus' FP (Functional Programming) as an alternative command language for UNIX. In FP, programs are true functions; another distinctive feature of FP languages is that they contain functional forms, which are constructs for combining programs to build new programs. Also, the functional style of programming provides a natural way of exploiting parallel machine architecture. In this paper it is shown how to enhance the power of the UNIX Shell by the inclusion of the FP functional forms. The FP-Shell is fully implemented in "C" under UNIX. It interprets standard UNIX commands as FP primitive functions and standard UNIX files as FP system objects. It also serves as a framework for the study of functional programming languages. [ABSTRACT FROM AUTHOR] (An illustrative sketch follows this entry.)
- Published
- 1987
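The functional forms entry 49 adds to the shell can be mimicked with ordinary functions. In the sketch below, UNIX commands are modeled as Python functions over text; compose, construction, and apply_to_all are illustrative analogues of FP's forms, not the FP-Shell's C implementation.

```python
# FP-style functional forms over "commands" (functions text -> text).
def compose(*fs):                     # FP composition: (f o g)(x) = f(g(x))
    def run(x):
        for f in reversed(fs):
            x = f(x)
        return x
    return run

def construction(*fs):                # FP [f, g]: apply each to the same input
    return lambda x: [f(x) for f in fs]

def apply_to_all(f):                  # FP alpha-f: map f over a sequence
    return lambda xs: [f(x) for x in xs]

# Stand-ins for UNIX commands.
sort  = lambda text: "\n".join(sorted(text.splitlines()))
count = lambda text: str(len(text.splitlines()))

pipeline = compose(count, sort)       # like `sort | wc -l`, but a true function
print(pipeline("b\na\nc"))            # 3
print(construction(sort, count)("b\na"))   # ['a\nb', '2']
```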
50. From Safety Analysis to Software Requirements.
- Author
-
Hansen, Kirsten M., Ravn, Anders P., and Stavridou, Victoria
- Subjects
COMPUTER software ,INDUSTRIAL safety ,TREE graphs ,FORMAL methods (Computer science) ,COMPONENT software ,SYSTEMS design ,ENGINEERING - Abstract
Software for safety critical systems must deal with the hazards identified by safety analysis. This paper investigates how the results of one safety analysis technique, fault trees, are interpreted as software safety requirements to be used in the program design process. We propose that fault tree analysis and program development use the same system model. This model is formalized in a real-time, interval logic, based on a conventional dynamic systems model with state evolving over time. Fault trees are interpreted as temporal formulas, and it is shown how such formulas can be used for deriving safety requirements for software components. [ABSTRACT FROM AUTHOR] (A worked formula sketch follows this entry.)
- Published
- 1998
- Full Text
- View/download PDF
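Entry 50's reading of fault trees as formulas can be shown on a two-gate example. The gate structure, event names, and the box (always) operator below are illustrative; the paper itself works in a real-time interval logic rather than plain temporal logic.

```latex
% Illustrative reading of a small fault tree: the top hazard occurs if
% event E_1 occurs, or if E_2 and E_3 occur together (OR over an AND).
\[
  \mathit{Hazard} \;\Leftarrow\; E_1 \lor (E_2 \land E_3)
\]
% The derived software safety requirement negates the tree's cut sets:
% always prevent E_1, and never allow E_2 and E_3 simultaneously.
\[
  \mathit{SafetyReq} \;=\; \Box\,\bigl(\neg E_1 \land \neg(E_2 \land E_3)\bigr)
\]
```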