569 results
Search Results
2. Enforcing safety requirements for industrial automation systems at runtime: position paper
- Author
-
Thomas Moser, Stefan Biffl, Martin Melik-Merkumians, and Wikan Danar Sunindyo
- Subjects
System requirements, Requirements management, Engineering, Requirements engineering, Runtime verification, Systems engineering, Requirements elicitation, Process automation system, Software engineering, Formal verification, Requirements analysis - Abstract
Current industrial automation systems are becoming more and more complex, and typically involve different phases of engineering, such as design time and runtime. System requirements, which are usually elicited by engineers during design time, are currently not sufficiently represented at runtime; an example is the runtime enforcement of safety requirements for industrial automation systems. Such enforcement is usually very hard to model and predict at design time. Hence, the need exists to capture and manage safety requirements at both design time and runtime, since safety requirements of industrial automation systems may lead to high risks if not addressed properly. In this position paper, we introduce a safety requirements enforcement framework that uses Boilerplates for requirements elicitation and explicitly models runtime requirements knowledge for further application. We illustrate and evaluate the approach with data from a real-world case study in the area of industrial process systems. A major result was that Boilerplates and explicit engineering knowledge are well suited to capture and enforce runtime safety requirements of industrial automation systems.
- Published
- 2011
3. Writing good software engineering research papers
- Author
-
Mary Shaw
- Subjects
80399 Computer Software not elsewhere classified, FOS: Computer and information sciences, Research design, Engineering, Technical writing, Software engineering - Abstract
Software engineering researchers solve problems of several different kinds. To do so, they produce several different kinds of results, and they should develop appropriate evidence to validate these results. They often report their research in conference papers. I analyzed the abstracts of research papers submitted to ICSE 2002 in order to identify the types of research reported in the submitted and accepted papers, and I observed the program committee discussions about which papers to accept. This report presents the research paradigms of the papers, common concerns of the program committee, and statistics on success rates. This information should help researchers design better research projects and write papers that present their results to best advantage.
- Published
- 2003
4. Guest Editor's Introduction: Selected Papers from COMPSAC '86.
- Author
-
North, John R.
- Subjects
- *
COMPUTER software , *ELECTRONIC systems , *ENGINEERING , *SOFTWARE engineering , *CONFERENCES & conventions - Abstract
The International Computer and Software Applications Conference celebrated its tenth anniversary in 1986. The conference has been very successful in attracting excellent papers and panels from both academia and industry. It continued the tradition and attracted superb papers for the following tracks: software quality, software engineering, software requirements, development environments, software techniques, and knowledge-based systems.
- Published
- 1988
5. Challenges in Creating Environments for SOA Learning
- Author
-
Nicolas Lopez, Rubby Casallas, and Jorge Villalobos
- Subjects
Engineering, Professional practice, Service-oriented architecture, Technology management, Engineering management, Position paper, Software engineering, Software architecture, Curriculum - Abstract
SOA is now in widespread use by the industry and is a current area of interest for research. However a currently open area of investigation is how to properly introduce SOA in an IT/CS/SwE type of curriculum. Software engineering students must develop a series of abilities and skills related to SOA for effective professional practice. Providing environments where students go beyond learning some concepts and specific technologies to truly apprehend the complexity involved in SOA is a major challenge. This environment must be articulated in the context of business needs and other software architecture methodologies. Instructors need a new set of skills and abilities to tackle this challenge. This position paper discusses what such an environment should provide for students and instructors to effectively develop skills that will enable SOA practice in the business world.
- Published
- 2007
6. Editor's Comments.
- Author
-
Basili, Victor R.
- Subjects
SOFTWARE engineering ,ENGINEERING ,COMPUTER software ,COMPUTER systems - Abstract
Comments on the articles in the "IEEE Transactions on Software Engineering" are given. The periodical has an established, published scope, which is subject to change as the field develops. Part of the review process is to state whether a paper meets the criteria. The scope should be limited to papers of interest to the software engineering community. However, besides mainstream software engineering topics, it could include the software engineering of various applications.
- Published
- 1988
7. Predicate Logic for Software Engineering.
- Author
-
Parnas, David Lorge
- Subjects
SOFTWARE engineering ,ENGINEERING ,SYSTEMS design ,ELECTRONIC data processing documentation ,COMPUTER systems ,ELECTRONIC systems ,COMPUTERS - Abstract
The interpretations of logical expressions found in most introductory textbooks are not suitable for use in software engineering applications because they do not deal with partial functions. More advanced papers and texts deal with partial functions in a variety of complex ways. This paper proposes a very simple change to the classic interpretation of predicate expressions, one that defines their value for all values of all variables, yet is almost identical to the standard definitions. It then illustrates the application of this interpretation in software documentation. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
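The style of interpretation this abstract describes can be sketched as follows (in notation of my own choosing, not necessarily Parnas's): an atomic predicate keeps its classical truth value when all of its argument terms are defined, and is assigned false otherwise, so every predicate expression denotes a truth value for all values of all variables.

```latex
\[
  \mathcal{I}\bigl[R(t_1,\ldots,t_n)\bigr] =
  \begin{cases}
    R(t_1,\ldots,t_n) & \text{if every term } t_i \text{ is defined},\\[2pt]
    \text{false}      & \text{otherwise}.
  \end{cases}
\]
```

Under this reading, \((x \neq 0) \wedge (1/x > 1)\) is simply false at \(x = 0\), and its negation is true, so expressions over partial functions never come out undefined.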
8. Guest Editors' Introduction to the Special Section on the International Conference on Software Engineering.
- Author
-
Griswold, William G. and Nuseibeh, Bashar
- Subjects
ENGINEERING ,SOFTWARE engineering ,COMPUTER software ,FAULT-tolerant computing ,SYSTEMS design - Abstract
This article presents information about the selection of the four best papers submitted to the 27th International Conference on Software Engineering, held in May 2005. A summary of each paper is provided along with the authors' names and the main topic of each paper. One paper deals with a new approach to software fault tolerance. Another paper discusses an environment for finding and visualizing examples of usage of an API. The third paper reports on a study to determine the limitations of tools used in software maintenance tasks.
- Published
- 2006
- Full Text
- View/download PDF
9. An Acyclic Expansion Algorithm for Fast Protocol Validation.
- Author
-
Kakuda, Yoshiaki, Wakahara, Yasushi, and Norigoe, Masamitsu
- Subjects
COMPUTER algorithms ,ALGORITHMS ,COMPUTER programming ,COMPUTER software ,ELECTRONIC systems ,ENGINEERING ,SOFTWARE engineering - Abstract
For the development of communications software composed of many modules, protocol validation is essential to detect errors in the interactions among the modules. A number of protocol validation techniques have been proposed in the past, but the validation time required by these techniques is too long for many actual protocols. This paper proposes a new fast protocol validation technique to overcome this drawback. The proposed technique constructs the minimum acyclic form of the state transitions in the individual processes of the protocol, and quickly detects protocol errors such as system deadlocks and channel overflows. This paper also presents a protocol validation system based on the proposed technique to confirm its feasibility, and shows validation results for some actual protocols obtained with this system. As a result, the protocol validation system is expected to contribute greatly to improved productivity in the development and maintenance of communications software. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
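The kind of error such techniques search for can be illustrated with a baseline exhaustive reachability check — the slow approach the paper aims to improve upon, not its acyclic expansion algorithm; the two-process toy protocol below is invented for illustration. Two processes that each wait to receive before sending deadlock immediately.

```python
from collections import deque

# Each process: state -> list of (action, message, next_state).
# "snd" appends to the process's outgoing channel; "rcv" pops from its incoming one.
P1 = {0: [("rcv", "ack", 1)]}   # P1 waits for an ack...
P2 = {0: [("rcv", "req", 1)]}   # ...while P2 waits for a request: deadlock.

def successors(state):
    """Enabled global transitions from (s1, s2, channel 1->2, channel 2->1)."""
    s1, s2, c12, c21 = state
    for act, msg, nxt in P1.get(s1, []):
        if act == "snd":
            yield (nxt, s2, c12 + (msg,), c21)
        elif c21 and c21[0] == msg:          # rcv: message must head the channel
            yield (nxt, s2, c12, c21[1:])
    for act, msg, nxt in P2.get(s2, []):
        if act == "snd":
            yield (s1, nxt, c12, c21 + (msg,))
        elif c12 and c12[0] == msg:
            yield (s1, nxt, c12[1:], c21)

def find_deadlocks(init):
    """Breadth-first exploration of the global state space."""
    seen, frontier, deadlocks = {init}, deque([init]), []
    while frontier:
        st = frontier.popleft()
        succ = list(successors(st))
        if not succ:                          # no transition enabled anywhere
            deadlocks.append(st)
        for nxt in succ:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return deadlocks

print(find_deadlocks((0, 0, (), ())))   # -> [(0, 0, (), ())]
```

For realistic protocols this global state space explodes, which is exactly the cost the paper's acyclic expansion is designed to avoid.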
10. Variability in Software Systems—A Systematic Literature Review.
- Author
-
Galster, Matthias, Weyns, Danny, Tofan, Dan, Michalik, Bartosz, and Avgeriou, Paris
- Subjects
COMPUTER software quality control ,SOFTWARE engineering ,LITERATURE reviews ,ENGINEERING ,ELECTRONIC systems - Abstract
Context: Variability (i.e., the ability of software systems or artifacts to be adjusted for different contexts) became a key property of many systems. Objective: We analyze existing research on variability in software systems. We investigate variability handling in major software engineering phases (e.g., requirements engineering, architecting). Method: We performed a systematic literature review. A manual search covered 13 premium software engineering journals and 18 premium conferences, resulting in 15,430 papers searched and 196 papers considered for analysis. To improve reliability and to increase reproducibility, we complemented the manual search with a targeted automated search. Results: Software quality attributes have not received much attention in the context of variability. Variability is studied in all software engineering phases, but testing is underrepresented. Data to motivate the applicability of current approaches are often insufficient; research designs are vaguely described. Conclusions: Based on our findings we propose dimensions of variability in software engineering. This empirically grounded classification provides a step towards a unifying, integrated perspective of variability in software systems, spanning across disparate or loosely coupled research themes in the software engineering community. Finally, we provide recommendations to bridge the gap between research and practice and point to opportunities for future research. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
11. Failure Size Proportional Models and an Analysis of Failure Detection Abilities of Software Testing Strategies.
- Author
-
Zachariah, Babu and Rattihalli, R. N.
- Subjects
SOFTWARE engineering ,SCHUR functions ,POISSON processes ,PROBABILITY theory ,ENGINEERING - Abstract
This paper combines two distinct areas of research, namely software reliability growth modeling and efficacy studies of software testing methods. It begins by proposing two software reliability growth models with a new approach to modeling. These models make the basic assumption that the intensity of failure occurrence during the testing phase of a piece of software is proportional to the s-expected probability of selecting a failure-causing input. The first model represents random testing, and the second model represents partition testing. These models provide the s-expected number of failures over a period, which in turn is used in analyzing the failure detection abilities of testing strategies. The specific areas of investigation are (1) the conditions under which partition testing yields optimal results, and (2) a comparison between partition testing and random testing in terms of efficacy. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
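The shared assumption of the two models can be written out in symbols of my own choosing (the paper's exact notation may differ):

```latex
\[
  \lambda(t) \;=\; \phi\,\theta(t),
  \qquad
  m(t) \;=\; \int_0^t \lambda(u)\,\mathrm{d}u ,
\]
```

where \(\theta(t)\) is the s-expected probability that an input selected at time \(t\) is failure-causing, \(\phi\) is a proportionality constant, and \(m(t)\) is the s-expected number of failures observed by time \(t\). On this reading, random and partition testing differ only in how the testing strategy shapes \(\theta(t)\).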
12. Introducing Software Engineering Developments to a Classical Operating Systems Course.
- Author
-
Billard, Edward A.
- Subjects
ELECTRONIC systems ,ENGINEERING ,SOFTWARE engineering ,SYSTEMS software ,COMPUTER software ,COMPUTER programming ,SYSTEM analysis - Abstract
An operating system course draws from a well-defined fundamental theory, but one needs to consider how more recent advances, not necessarily in the theory itself, can be applied to improve the course and the general body of knowledge of the student. The goal of this paper is to show how recent software engineering developments can be introduced to such a course to not only satisfy the theory requirements, but also make the theory more understandable. In particular, this paper focuses on how students can effectively learn the Unified Modeling Language, the object-oriented methodology, and the Java programming language in the context of an operating systems course. The goal is to form a systematic software engineering process for operating system design and implementation. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
13. Testability Transformation.
- Author
-
Harman, Mark, Lin Hu, Hierons, Rob, Wegener, Joachim, Sthamer, Harmen, Baresel, André, and Roper, Marc
- Subjects
SOFTWARE engineering ,ENGINEERING ,COMPUTER software - Abstract
A testability transformation is a source-to-source transformation that aims to improve the ability of a given test generation method to generate test data for the original program. This paper introduces testability transformation, demonstrating that it differs from traditional transformation, both theoretically and practically, while still allowing many traditional transformation rules to be applied. The paper illustrates the theory of testability transformation with an example application to evolutionary testing. An algorithm for flag removal is defined and results are presented from an empirical study which show how the algorithm improves both the performance of evolutionary test data generation and the adequacy level of the test data so-generated. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
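The flag problem and the kind of transformation this abstract describes can be sketched as follows (a hypothetical illustration, not the paper's algorithm): a boolean flag collapses the fitness landscape into two plateaus, while a branch-distance substitute gives evolutionary search a gradient toward the flag-true branch.

```python
# Original: a boolean "flag" gives the search a flat fitness landscape --
# every input that misses the target branch looks equally bad.
def original(a, b):
    flag = (a == b)          # flag assignment
    if flag:
        return "target"
    return "miss"

# Transformed (sketch): replace the flag with a branch distance, so inputs
# that are *nearly* equal score better and can guide an evolutionary test
# generator toward the "target" branch.
def branch_distance(a, b):
    return abs(a - b)        # 0 exactly when the original flag was True

def fitness(candidate):
    a, b = candidate
    return branch_distance(a, b)   # minimise to reach the target branch
```

A search that minimises `fitness` now receives graded feedback — `fitness((5, 7))` is 2, `fitness((5, 6))` is 1 — instead of the all-or-nothing signal of the raw flag.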
14. Requirements Elicitation and Specification Using the Agent Paradigm: The Case Study of an Aircraft Turnaround Simulator.
- Author
-
Miller, Tim, Bin Lu, Sterling, Leon, Beydoun, Ghassan, and Taveter, Kuldar
- Subjects
SOFTWARE engineering ,ENGINEERING ,COMPUTER software ,TECHNOLOGY transfer ,DIFFUSION of innovations - Abstract
In this paper, we describe research results arising from a technology transfer exercise on agent-oriented requirements engineering with an industry partner. We introduce two improvements to the state-of-the-art in agent-oriented requirements engineering, designed to mitigate two problems experienced by ourselves and our industry partner: (1) the lack of systematic methods for agent-oriented requirements elicitation and modelling; and (2) the lack of prescribed deliverables in agent-oriented requirements engineering. We discuss the application of our new approach to an aircraft turnaround simulator built in conjunction with our industry partner, and show how agent-oriented models can be derived and used to construct a complete requirements package. We evaluate this by having three independent people design and implement prototypes of the aircraft turnaround simulator, and comparing the three prototypes. Our evaluation indicates that our approach is effective at delivering correct, complete, and consistent requirements that satisfy the stakeholders, and can be used in a repeatable manner to produce designs and implementations. We discuss lessons learnt from applying this approach. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
15. A Cooperative Parallel Search-Based Software Engineering Approach for Code-Smells Detection.
- Author
-
Kessentini, Wael, Kessentini, Marouane, Sahraoui, Houari, Bechikh, Slim, and Ouni, Ali
- Subjects
SOFTWARE engineering ,ENGINEERING ,COMPUTER software quality control ,QUALITY control ,EVOLUTIONARY algorithms - Abstract
We propose in this paper to treat code-smells detection as a distributed optimization problem. The idea is that different methods are combined in parallel during the optimization process to find a consensus regarding the detection of code-smells. To this end, we used parallel evolutionary algorithms (P-EA), in which many evolutionary algorithms with different adaptations (fitness functions, solution representations, and change operators) are executed in a parallel, cooperative manner to solve a common goal: the detection of code-smells. An empirical evaluation was conducted to compare the implementation of our cooperative P-EA approach with random search, two single-population-based approaches, and two code-smells detection techniques that are not based on meta-heuristic search. The statistical analysis of the obtained results provides evidence that cooperative P-EA is more efficient and effective than state-of-the-art detection approaches: on a benchmark of nine large open-source systems, precision and recall scores of more than 85 percent are obtained for a variety of eight different types of code-smells. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
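The cooperative scheme can be illustrated with a toy consensus sketch — this is not the paper's P-EA; the class names, metrics, and threshold rules below are invented, and simple rule-based detectors stand in for the differently configured evolutionary algorithms. Several detectors run in parallel, and a class counts as a code-smell only when a majority flags it.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-class metrics (illustrative names, not from the paper).
classes = {
    "GodClass": {"methods": 80, "loc": 4000, "coupling": 30},
    "Helper":   {"methods": 5,  "loc": 120,  "coupling": 3},
}

# Three "detectors" with different adaptations (here: different rules).
detectors = [
    lambda m: m["methods"] > 40,
    lambda m: m["loc"] > 1000,
    lambda m: m["coupling"] > 20,
]

def detect(detector):
    """Set of class names this detector flags as smelly."""
    return {name for name, m in classes.items() if detector(m)}

# Run the detectors in parallel and keep classes flagged by a majority.
with ThreadPoolExecutor() as pool:
    votes = list(pool.map(detect, detectors))
majority = {c for c in classes if sum(c in v for v in votes) > len(votes) / 2}
print(majority)   # -> {'GodClass'}
```

The consensus step is what makes the combination robust: a single over-eager detector cannot flag a class on its own.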
16. A General Testability Theory: Classes, Properties, Complexity, and Testing Reductions.
- Author
-
Rodriguez, Ismael, Llana, Luis, and Rabanal, Pablo
- Subjects
SOFTWARE engineering ,ENGINEERING ,COMPUTER software ,ANTIPATTERNS (Software engineering) ,CAPABILITY maturity model - Abstract
In this paper we develop a general framework to reason about testing. The difficulty of testing is assessed in terms of the number of tests that must be applied to determine whether the system is correct or not. Based on this criterion, five testability classes are presented and related. We also explore conditions that enable and disable finite testability, and their relation to testing hypotheses is studied. We measure how far incomplete test suites are from being complete, which allows us to compare and select better incomplete test suites. The complexity of finding that measure, as well as the complexity of finding minimum complete test suites, is identified. Furthermore, we address the reduction of testing problems to each other; that is, we study how the problem of finding test suites for systems of one kind can be reduced to the problem of finding test suites for another kind of system. This makes it possible to export testing methods between formalisms. To illustrate how the general notions apply to specific cases, many typical examples from the formal testing techniques domain are presented. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
17. A Study of Variability Models and Languages in the Systems Software Domain.
- Author
-
Berger, Thorsten, She, Steven, Lotufo, Rafael, Wasowski, Andrzej, and Czarnecki, Krzysztof
- Subjects
SYSTEMS software ,COMPUTER software research ,COMPILERS (Computer programs) ,SOFTWARE engineering ,ENGINEERING - Abstract
Variability models represent the common and variable features of products in a product line. Since the introduction of FODA in 1990, several variability modeling languages have been proposed in academia and industry, followed by hundreds of research papers on variability models and modeling. However, little is known about the practical use of such languages. We study the constructs, semantics, usage, and associated tools of two variability modeling languages, Kconfig and CDL, which were independently developed outside academia and are used in large and significant software projects. We analyze 128 variability models found in 12 open-source projects using these languages. Our study 1) supports variability modeling research with empirical data on the real-world use of its flagship concepts, 2) provides requirements for concepts and mechanisms that are not commonly considered in academic techniques, and 3) challenges assumptions about the size and complexity of variability models made in academic papers. These results are of interest to researchers working on variability modeling and analysis techniques and to designers of tools, such as feature dependency checkers and interactive product configurators. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
18. Editorial: The State of TSE.
- Author
-
Knight, John
- Subjects
ENGINEERING ,SOFTWARE engineering ,COMPUTER software - Abstract
This article reports that several changes in the periodical "IEEE Transactions on Software Engineering," published as of February 01, 2004, will make the processing of articles more timely and more effective. First, the length restriction on submitted manuscripts has been removed. This should allow authors to document results in appropriate length papers. Despite the change, authors are encouraged to keep papers as short as possible. Second, Transactions on Software Engineering manuscript management has been moved to the Web-based Manuscript Central system. This system makes all aspects of manuscript processing much more efficient and it allows everybody involved in processing papers, including authors, to obtain details of the state of manuscripts as they are processed. Third, preprints in the future will be available online two months before the issue cover date. Software remains a critical industry to the world. The impact of software is tremendous, both when it works and when it doesn't.
- Published
- 2004
- Full Text
- View/download PDF
19. The Role of Deliberate Artificial Design Elements in Software Engineering Experiments.
- Author
-
Hannay, Jo E. and Jorgensen, Magne
- Subjects
SOFTWARE engineering ,SYSTEMS design ,COMPUTER software ,COMPUTER systems ,ENGINEERING ,DESIGN - Abstract
Increased realism in software engineering experiments is often promoted as an important means of increasing generalizability and industrial relevance. In this context, artificiality, e.g., the use of constructed tasks in place of realistic tasks, is seen as a threat. In this paper, we examine the opposite view that deliberately introduced artificial design elements may increase knowledge gain and enhance both generalizability and relevance. In the first part of this paper, we identify and evaluate arguments and examples in favor of and against deliberately introducing artificiality into software engineering experiments. We find that there are good arguments in favor of deliberately introducing artificial design elements to 1) isolate basic mechanisms, 2) establish the existence of phenomena, 3) enable generalization from particularly unfavorable to more favorable conditions (persistence of phenomena), and 4) relate experiments to theory. In the second part of this paper, we summarize a content analysis of articles that report software engineering experiments published over a 10-year period from 1993 to 2002. The analysis reveals a striving for realism and external validity, but little awareness of for what and when various degrees of artificiality and realism are appropriate. Furthermore, much of the focus on realism seems to be based on a narrow understanding of the nature of generalization. We conclude that an increased awareness and deliberation as to when and for what purposes both artificial and realistic design elements are applied is valuable for better knowledge gain and quality in empirical software engineering experiments. We also conclude that time spent on studies that have obvious threats to validity that are due to artificiality might be better spent on studies that investigate research questions for which artificiality is a strength rather than a weakness. 
However, arguments in favor of artificial design elements should not be used to justify studies that are badly designed or that have research questions of low relevance. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
20. A Survey of Controlled Experiments in Software Engineering.
- Author
-
Sjøberg, Dag I. K., Hannay, Jo E., Hansen, Ove, Kampenes, Vigdis By, Karahasanović, Amela, Liborg, Nils-Kristian, and Rekdal, Anette C.
- Subjects
SOFTWARE engineering ,COMPUTER software ,SURVEYS ,PERIODICALS ,ENGINEERING - Abstract
The classical method for identifying cause-effect relationships is to conduct controlled experiments. This paper reports upon the present state of how controlled experiments in software engineering are conducted and the extent to which relevant information is reported. Among the 5,453 scientific articles published in 12 leading software engineering journals and conferences in the decade from 1993 to 2002, 103 articles (1.9 percent) reported controlled experiments in which individuals or teams performed one or more software engineering tasks. This survey quantitatively characterizes the topics of the experiments and their subjects (number of subjects, students versus professionals, recruitment, and rewards for participation), tasks (type of task, duration, and type and size of application) and environments (location, development tools). Furthermore, the survey reports on how internal and external validity is addressed and the extent to which experiments are replicated. The gathered data reflects the relevance of software engineering experiments to industrial practice and the scientific maturity of software engineering research. [ABSTRACT FROM AUTHOR]
- Published
- 2005
21. Spatial Complexity Metrics: An Investigation of Utility.
- Author
-
Gold, Nicolas E., Mohan, Andrew M., and Layzell, Paul J.
- Subjects
COMPUTER software ,COMPUTATIONAL complexity ,SOFTWARE measurement ,SOFTWARE engineering ,ENGINEERING - Abstract
Software comprehension is one of the largest costs in the software lifecycle. In an attempt to control the cost of comprehension, various complexity metrics have been proposed to characterize the difficulty of understanding a program and, thus, allow accurate estimation of the cost of a change. Such metrics are not always evaluated. This paper evaluates a group of metrics recently proposed to assess the "spatial complexity" of a program (spatial complexity is informally defined as the distance a maintainer must move within source code to build a mental model of that code). The evaluation takes the form of a large-scale empirical study of evolving source code drawn from a commercial organization. The results of this investigation show that most of the spatial complexity metrics evaluated offer no substantially better information about program complexity than the number of lines of code. However, one metric shows more promise and is thus deemed to be a candidate for further use and investigation. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
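The informal definition above can be made concrete with a minimal sketch — my own simplification, not one of the paper's actual metrics: measure, for each function, the line distance a maintainer must traverse between a call site and the corresponding definition, and average over all calls.

```python
import re

# Toy source file; the spatial metric here is the line distance between
# each call site and the definition it refers to.
source = """\
def helper():
    return 1

def a():
    return helper()

def b():
    return helper() + a()
"""

lines = source.splitlines()
# Map each function name to the line number of its definition.
defs = {m.group(1): i for i, line in enumerate(lines)
        if (m := re.match(r"def (\w+)\(", line))}

distances = []
for i, line in enumerate(lines):
    for name, def_line in defs.items():
        # A use of the name off its own definition line counts as a call.
        if i != def_line and re.search(rf"\b{name}\(", line):
            distances.append(abs(i - def_line))

avg = sum(distances) / len(distances)
print(distances, avg)   # -> [4, 7, 4] 5.0
```

The paper's finding suggests a sobering check for any such metric: compare it against plain lines of code before trusting it to predict comprehension cost.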
22. Confirming Configurations in EFSM Testing.
- Author
-
Petrenko, Alexandre, Boroday, Sergiy, and Groz, Roland
- Subjects
SOFTWARE engineering ,ENGINEERING ,CONFIGURATION management - Abstract
In this paper, we investigate the problem of configuration verification for the extended FSM (EFSM) model. This is an extension of the FSM state identification problem. Specifically, given a configuration ("state vector") and an arbitrary set of configurations, determine an input sequence such that the EFSM in the given configuration produces an output sequence different from that of the configurations in the given set or at least in a maximal proper subset. Such a sequence can be used in a test case to confirm the destination configuration of a particular EFSM transition. We demonstrate that this problem could be reduced to the EFSM traversal problem, so that the existing methods and tools developed in the context of model checking become applicable. We introduce notions of EFSM projections and products and, based on these notions, we develop a theoretical framework for determining configuration-confirming sequences. The proposed approach is illustrated on a realistic example. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
23. Writing research article introductions in software engineering: How accurate is a standard model?
- Author
-
Anthony, Laurence
- Subjects
REPORT writing ,SOFTWARE engineering ,ENGINEERING - Abstract
Evaluates a standard model for describing the structure of research article introductions in the field of software engineering. Adequacy of description of the main framework of the introduction; Key features unaccounted in the model; Step descriptions in the model; Accuracy of written introductions using the model.
- Published
- 1999
- Full Text
- View/download PDF
24. Simulation and Comparison of Albrecht's Function Point and DeMarco's Function Bang Metrics in a CASE Environment.
- Author
-
Rask, Raimo, Laamanen, Petteri, and Lyytinen, Kalle
- Subjects
COMPUTER software development ,COMPUTER programming management ,COMPUTER software ,ENGINEERING ,SOFTWARE engineering - Abstract
Software size estimates provide a basis for software cost estimation during software development. Hence, it is important to measure system size reliably as early as possible, i.e., during the requirements specification. The two best-known specification-level metrics are Albrecht's Function Points and DeMarco's Function Bang. One problem in using these metrics has been that only a few tools can calculate them during the specification phase. We have built one such tool. Another problem has been that no research data are available on how these metrics correlate with one another. The paper compares the two metrics in a simulation study in which automatically generated, randomized dataflow diagrams (DFDs) were used as a statistical sample to count function points and function bang automatically in a built CASE environment. These counts were correlated statistically using correlation coefficients and regression analysis. The simulation study permits sufficient variation in the base material to cover most types of system specifications. Moreover, it allows sample sizes sufficient for statistical analysis of the data. The obtained results show that in certain cases there is a relatively good statistical correlation between the metrics; no overall general correlation exists, however. The paper does not show which of the two metrics fares better as a size metric. Yet, our study suggests using the Function Bang metric in many cases, because its automatic calculation is simpler and depends less on judgement. Moreover, the study demonstrates that correlations depend upon the system type. This implies that in software projects one must be careful with size estimates when using these metrics. In order to know when a size estimate needs to be calibrated, we need to develop algorithms which help to detect logical system types and make adjustments accordingly.
The results also point out the need of empirical research in which we can better derive the connection between specification level metrics and the number of lines of code. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
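The Albrecht side of the comparison can be sketched as an unadjusted Function Point count using the standard IFPUG average weights; the component counts below are invented for illustration — the paper's simulation instead derives counts from generated dataflow diagrams.

```python
# Standard IFPUG *average* complexity weights for the five component types.
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

# Component counts from a hypothetical specification.
spec = {
    "external_inputs": 6,
    "external_outputs": 4,
    "external_inquiries": 3,
    "internal_logical_files": 2,
    "external_interface_files": 1,
}

# Unadjusted Function Points: weighted sum over all component types.
ufp = sum(WEIGHTS[k] * n for k, n in spec.items())
print(ufp)   # 6*4 + 4*5 + 3*4 + 2*10 + 1*7 = 83
```

Classifying components and choosing complexity levels involves judgement, which is exactly why the paper notes that Function Bang is easier to calculate automatically.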
25. Capacity of Voting Systems.
- Author
-
Rangarajan, Sampath, Jalote, Pankaj, and Tripathi, Satish K.
- Subjects
BACKUP processing alternatives in electronic data processing ,SYSTEMS design ,DATABASES ,COMPUTER software ,ENGINEERING ,SOFTWARE engineering - Abstract
Data replication is often used to increase the availability of data in a database system. Voting schemes can be used to manage this replicated data. In this paper we use a simple model to study the capacity of systems using voting schemes for data management. Capacity of a system is defined as the number of operations the system can perform successfully, on average, per unit time. We study the capacity of a system using voting and compare it with the capacity of a system using a single node. We show that the maximum increase in capacity from the use of majority voting is bounded by 1/p, where p is the steady-state probability of a node being alive. We also show that for a system employing majority voting, if the reliability of nodes is high, increasing the number of nodes to more than three gives only a marginal increase in capacity. We perform a similar analysis for three other voting schemes. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
26. LISPACK—A Methodology and Tool for the Performance Analysis of Parallel Systems and Algorithms.
- Author
-
Iazeolla, Giuseppe and Marinuzzi, Francesco
- Subjects
PARALLEL computers ,SOFTWARE engineering ,ENGINEERING ,COMPUTER software ,COMPUTER systems - Abstract
The paper deals with the performance analysis of parallel algorithms and systems. For these, numerical solution methods quickly show their limits because of the enormous state-space growth. The proposed methodology and software tool (LISPACK, an acronym for List-manipulation Parallel-modeling Package) uses string manipulation, lumping, and recursive elimination as a means for the definition of the large Markovian process, its restructuring, and its efficient solution. Initially, the enormous state space is conveniently collapsed and the large transition matrix is reduced. Subsequently, the reduced matrix is recursively block banded, and an efficient recursive, symbolic Gauss elimination is applied. No relevant costs are incurred for the state-space collapsing and restructuring, nor for the matrix block banding. The analysis of a typical parallel system and algorithm model is developed as a case study to discuss the features of the method. The paper makes two contributions. First, a symbolic-approach methodology is proposed for the performance analysis of parallel algorithms and systems. Second, a tool is introduced that exploits the capabilities of the symbolic approach in the solution of parallel models, where numerical techniques reveal their limits. [ABSTRACT FROM AUTHOR]
- Published
- 1993
27. Rapid Transaction-Undo Recovery Using Twin-Page Storage Management.
- Author
-
Kun-Lung Wu and Fuchs, W. Kent
- Subjects
COMPUTER storage devices ,SOFTWARE engineering ,ENGINEERING ,SOFTWARE productivity ,MANAGEMENT ,OPERATIONS research - Abstract
This paper presents a twin-page storage method, which is an alternative to the TWIST (twin slot) approach by Reuter, for rapid transaction-undo recovery. In contrast to TWIST, our twin-page approach allows dirty pages in the buffer to be written at any instant onto disk without the requirement of undo logging, and, when a transaction is aborted, no explicit undo is required. As a result, all locks accumulated by the aborted transaction can be released earlier, allowing other transactions waiting for the locks to proceed. Through maintenance of aborted transaction identifiers, invalid pages written by the aborted transaction coexist with other valid pages and are guaranteed not to be accessed by subsequent transactions. Instead of an explicit undo, most of the invalid pages are overwritten by subsequent normal updates. Performance in terms of disk I/O and CPU overhead for transaction-undo recovery is analyzed and compared with TWIST. It is shown that our scheme is particularly suited for applications where a large number of updates are written onto disk when transactions are aborted, and where aborts are frequent. The approach, however, is not as applicable to environments where transactions are typically short or rarely aborted, or where most updates are not written onto disk before commit. [ABSTRACT FROM AUTHOR]
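A minimal sketch of the twin-page idea — two physical slots per logical page, with abort as a no-op — might look as follows. This is a generic shadow-slot toy, not the paper's exact protocol (which tracks aborted transaction identifiers rather than flipping a current bit at commit):

```python
class TwinPageStore:
    """Toy twin-page store: each logical page has two physical slots.
    Uncommitted writes go to the shadow slot; commit flips the current
    bit, while abort simply abandons the dirty slot -- no explicit undo."""
    def __init__(self):
        self.slots = {}    # page -> [slot0, slot1]
        self.current = {}  # page -> index of the committed slot
        self.dirty = set() # pages written by the active transaction

    def read(self, page):
        return self.slots[page][self.current[page]]

    def write(self, page, value):
        if page not in self.slots:
            self.slots[page] = [None, None]
            self.current[page] = 0
        self.slots[page][1 - self.current[page]] = value
        self.dirty.add(page)

    def commit(self):
        for page in self.dirty:
            self.current[page] = 1 - self.current[page]
        self.dirty.clear()

    def abort(self):
        self.dirty.clear()  # shadow slots become garbage; nothing to undo

store = TwinPageStore()
store.write("A", 1); store.commit()
store.write("A", 2); store.abort()  # the value 2 is never made current
print(store.read("A"))  # 1
```

The key property the abstract relies on is visible here: abort does no work proportional to the number of updates, so locks could be released immediately.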
- Published
- 1993
28. Towards Complexity Metrics for Ada Tasking.
- Author
-
Shatz, Sol M.
- Subjects
DISTRIBUTED computing ,PROGRAMMING languages ,COMPUTER software ,ELECTRONIC systems ,ENGINEERING ,SOFTWARE engineering - Abstract
With growing interest in distributed computing come demands for techniques to aid in development of correct and reliable distributed software. Controlling, or at least recognizing, complexity of such software is an important part of the development and maintenance process. While a number of metrics have been proposed for quantitatively measuring the complexity of sequential, centralized programs, corresponding metrics for distributed software are noticeable by their absence. Using Ada as a representative distributed programming language, this paper discusses some ideas on complexity metrics that focus on Ada tasking and rendezvous. Concurrently active rendezvous are claimed to be an important aspect of communication complexity. A Petri net graph model of Ada rendezvous is used to introduce a "rendezvous graph," an abstraction that can be useful in viewing and computing effective communication complexity. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
29. Single-Site and Distributed Optimistic Protocols for Concurrency Control.
- Author
-
Bassiouni, M. A.
- Subjects
ELECTRONIC data processing ,DATABASES ,COMPUTER software ,ELECTRONIC systems ,ENGINEERING ,SOFTWARE engineering - Abstract
In spite of their advantage in removing the overhead of lock maintenance and deadlock handling, optimistic concurrency control methods have continued to be far less popular in practice than locking schemes. There are two complementary approaches to help render the optimistic approach practically viable. For the high-level approach, integration schemes can be utilized so that the database management system is provided with a variety of synchronization methods, each of which can be applied to the appropriate class of transactions. The low-level approach seeks to increase the concurrency of the original optimistic method and improve its performance. In this paper we examine the latter approach, and present algorithms that aim at reducing backups and improving throughput. Both single-site and distributed networks are considered. Optimistic schemes using timestamps for fully duplicated and partially duplicated database networks are presented, with emphasis on performance enhancement and on reducing the overall cost of implementation.

A methodology is presented for evaluating the performance of database update schemes in a distributive environment. The methodology makes use of the history of how data are used in the database. Parameters, such as update-to-retrieval ratio and average file size, can be set based on the actual characteristics of a system. The analysis is specifically directed toward the support of derived data within the relational model. Because concurrency is a major problem in a distributive system, the support of derived data is analyzed with respect to three distributive concurrency control techniques: master/slave, distributed, and synchronized. [ABSTRACT FROM AUTHOR]
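To make the optimistic approach concrete, here is a minimal backward-validation sketch in the general style of Kung and Robinson's scheme — not the paper's specific timestamp algorithms. Names like `validate_and_commit` are invented for the example:

```python
# Toy backward validation for optimistic concurrency control:
# a committing transaction is checked against the write sets of
# transactions that committed after it started; on conflict it is
# restarted (a "backup" in the abstract's terminology).

committed = []  # list of (commit_ts, write_set)
clock = 0

def begin():
    return clock  # start timestamp of a new transaction

def validate_and_commit(start_ts, read_set, write_set):
    global clock
    for ts, ws in committed:
        if ts > start_ts and ws & read_set:
            return False  # stale read detected: restart the transaction
    clock += 1
    committed.append((clock, set(write_set)))
    return True

t1 = begin(); t2 = begin()          # two concurrent transactions
assert validate_and_commit(t1, {"x"}, {"x"})      # T1 commits
assert not validate_and_commit(t2, {"x"}, {"y"})  # T2 read x, now stale
```

Reducing how often the second case fires, without reintroducing locks, is exactly the kind of improvement the paper's low-level algorithms pursue.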
- Published
- 1988
- Full Text
- View/download PDF
30. Generic Lifecycle Support in the ALMA Environment.
- Author
-
Lamsweerde, Axel Van, Delcourt, Bruno, Delor, Emmanuelle, Schayes, Marie-Claire, and Champagne, Robert
- Subjects
ARCHITECTURAL design ,SOFTWARE engineering ,SYNTAX (Grammar) ,ARCHITECTURAL designs ,ENGINEERING ,COMPUTER architecture - Abstract
ALMA is an environment kernel supporting the elaboration, analysis, documentation, and maintenance of the various products developed during an entire software lifecycle. Its central component is an environment database in which properties about software objects and relations are collected. These properties include texts written in various formalisms. Two kinds of tools are provided: 1) high-level tools for updating, querying, reporting, and maintaining multiple versions of software objects and relations consistently in the database, and 2) syntax-directed tools, like structural editors, for manipulating the formal texts attached to software objects and relations in the database. A basic feature of the ALMA kernel is its genericity. Tools of the first kind are parameterized on software lifecycle models, while tools of the second kind are parameterized on formalisms. Instantiated versions of them for specific models and formalisms are generated by a meta-environment, which also generates the environment database structure tailored to the desired lifecycle model. This paper concentrates on the database support meta-system and the instantiated database support systems it generates. Our main concern is to discuss the architectural design decisions we made and the mechanisms we introduced for achieving parameterization on lifecycle models. In particular, we describe the entity-relationship meta-model we designed for meta-defining a particular lifecycle model as input to the meta-system. This meta-model is an extension of standard entity-relationship models in that n-ary relations can have attributes, they can be defined on unions of entity types, type specialization with multiple inheritance is supported, and a mechanism is provided for defining views yielding different environment subdatabases associated with different classes of users and/or tools. The crucial role played by this meta-model is stressed throughout the paper. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
31. PROTEAN: A High-level Petri Net Tool for the Specification and Verification of Communication Protocols.
- Author
-
Billington, Jonathan, Wheeler, Geoffrey R., and Wilbur-Ham, Michael C.
- Subjects
PETRI nets ,GRAPH theory ,COMPUTER software ,SOFTWARE engineering ,ENGINEERING ,COMPUTER systems - Abstract
A computer aid for the specification and analysis of computer communication protocols has been developed over a period of 7 years by the Telecom Australia Research Laboratories. It is based on a formal specification technique called Numerical Petri Nets. The computer aid, known as PROTEAN (PROTocol Emulation and Analysis), provides both graphical (color) and textual interfaces to the protocol designer. Numerical Petri Net (NPN) specifications may be created, stored, appended to other NPNs, structured, edited, listed, displayed, and analyzed. Interactive simulation, exhaustive reachability analysis, and several directed graph analysis facilities are provided. Reachability graphs can be automatically laid out and displayed. PROTEAN determines liveness (dead code, deadlocks, and livelocks) from the reachability graph and its strongly connected components. Language analysis, involving the automatic reduction of reachability graphs to language graphs, can be used to study sequences of key system events. This allows a protocol to be compared with its service specification. Elementary cycles of graphs can be generated, allowing interesting cycles to be highlighted on reachability and language graphs. Facilities are provided for debugging the specification, once a problem with the protocol has been discovered. They allow sequences of events, which lead to the undesired behavior, to be traced. The paper commences with a comparison of specification languages, concentrating on extended finite state machines and high-level Petri nets. NPNs and PROTEAN's facilities are then described and illustrated with a simple example. The application of PROTEAN to complex examples is mentioned briefly. A discussion of the approach, its limitations and future work is presented in the context of other developments reported in the literature. Work towards a comprehensive Protocol Engineering Workstation is also discussed. [ABSTRACT FROM AUTHOR]
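Exhaustive reachability analysis of the kind PROTEAN performs can be sketched for ordinary place/transition nets (NPNs are a higher-level formalism) as a breadth-first search over markings. Note that this only terminates for bounded nets:

```python
from collections import deque

def reachability_graph(initial, transitions):
    """BFS over markings of a place/transition net.
    `transitions` maps a name to (consume, produce) dicts of place -> tokens.
    Markings are frozensets of (place, tokens) pairs so they can be hashed."""
    seen = {initial}
    edges = []
    queue = deque([initial])
    while queue:
        m = queue.popleft()
        marking = dict(m)
        for name, (consume, produce) in transitions.items():
            # A transition is enabled if every input place holds enough tokens.
            if all(marking.get(p, 0) >= n for p, n in consume.items()):
                new = dict(marking)
                for p, n in consume.items():
                    new[p] -= n
                for p, n in produce.items():
                    new[p] = new.get(p, 0) + n
                frozen = frozenset(new.items())
                edges.append((m, name, frozen))
                if frozen not in seen:
                    seen.add(frozen)
                    queue.append(frozen)
    return seen, edges

# Two-place toggle net: t1 moves the token from p1 to p2, t2 moves it back.
net = {"t1": ({"p1": 1}, {"p2": 1}), "t2": ({"p2": 1}, {"p1": 1})}
m0 = frozenset({"p1": 1, "p2": 0}.items())
states, edges = reachability_graph(m0, net)
print(len(states), len(edges))  # 2 reachable markings, 2 edges
```

On the resulting graph, liveness checks like those in PROTEAN reduce to graph questions: a deadlock is a marking with no outgoing edge, and livelocks show up in the strongly connected components.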
- Published
- 1988
- Full Text
- View/download PDF
32. Fragtypes: A Basis for Programming Environments.
- Author
-
Madhavji, Nazim H.
- Subjects
PROGRAMMING languages ,SOFTWARE engineering ,MATHEMATICAL models ,COMPUTER programming ,ENGINEERING - Abstract
It is being recognized that recent programming environments have made significant progress towards improving the programming process. In adhering to this goal of software engineering, this paper introduces a new basis for programming environments. This basis encourages development of software in fragments of various types, called fragtypes. Fragtypes range from a simple expression type to a complete subsystem type. As a result, they are suited to the development of software in an enlarged scope that includes both programming in the small and programming in the large. The paper shows how newly proposed operations on fragtypes can achieve unusual effects on the software development process. Fragtypes and their associated construction rules form the basis of the programming environment MUPE-2, which is currently under development at McGill University. The target and implementation language of this environment is the programming language Modula-2. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
33. Mathematical Model of Composite Objects and Its Application for Organizing Engineering Databases.
- Author
-
Ketabchi, Mohammad A. and Berzins, Valdis
- Subjects
SOFTWARE engineering ,ENGINEERING ,DATABASES ,MATHEMATICAL models ,COMPUTER programming - Abstract
Composite objects are descriptions of assemblies of parts which themselves can be assemblies of other parts. Efficient storage and retrieval of composite objects is essential for computer-aided design applications of database management systems. This paper introduces a clustering concept called component aggregation which considers assemblies having the same types of parts as equivalent objects. The notion of equivalent objects is used to develop a mathematical model of composite objects. It is shown that the set of equivalence classes of objects forms a Boolean algebra whose minterms represent the objects which are not considered composite at the current viewing level. The algebraic structure of composite objects serves as a basis for developing a technique for organizing composite objects and supporting materialization of explosion views. The technique provides a clustering mechanism which partitions the database into meaningful and application-oriented clusters, and allows any desired explosion view to be materialized using a minimal set of stored views. A simplified relational database for design data, and a set of frequent access patterns in design applications, are outlined and used to demonstrate the benefits of database organization based on the mathematical model of composite objects. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
34. Specification of Synchronizing Processes.
- Author
-
Ramamritham, Krithivasan and Keller, Robert M.
- Subjects
COMPUTER software ,COMPUTER programming ,ELECTRONIC systems ,SOFTWARE engineering ,ENGINEERING ,COMPUTER systems - Abstract
The formalism of temporal logic has been suggested to be an appropriate tool for expressing the semantics of concurrent programs. This paper is concerned with the application of temporal logic to the specification of factors affecting the synchronization of concurrent processes. Towards this end, we first introduce a model for synchronization and axiomatize its behavior. SYSL, a very high-level language for specifying synchronization properties, is then described. It is designed using the primitives of temporal logic and features constructs to express properties that affect synchronization in a fairly natural and modular fashion. Since the statements in the language have intuitive interpretations, specifications are humanly readable. In addition, since they possess appropriate formal semantics, unambiguous specifications result. [ABSTRACT FROM AUTHOR]
- Published
- 1983
- Full Text
- View/download PDF
35. Toolpack—An Experimental Software Development Environment Research Project.
- Author
-
Osterweil, Leon J.
- Subjects
COMPUTER software development ,ELECTRONIC systems ,SOFTWARE engineering ,ENGINEERING ,COMPUTER systems ,COMPUTER software - Abstract
This paper discusses the goals and methods of the Toolpack project and in this context discusses the architecture and design of the software system being produced as the focus of the project. Toolpack is presented as an experimental activity in which a large software tool environment is being created for the purpose of general distribution and then careful study and analysis. The paper begins by explaining the motivation for building integrated tool sets. It then proceeds to explain the basic requirements that an integrated system of tools must satisfy in order to be successful and to remain useful both in practice and as an experimental object. The paper then summarizes the tool capabilities that will be incorporated into the environment. It then goes on to present a careful description of the actual architecture of the Toolpack integrated tool system. Finally the Toolpack project experimental plan is presented, and future plans and directions are summarized. [ABSTRACT FROM AUTHOR]
- Published
- 1983
36. Software Function, Source Lines of Code, and Development Effort Prediction: A Software Science Validation.
- Author
-
Albrecht, Allan J. and Gaffney Jr., John E.
- Subjects
COMPUTER software development ,COMPUTER software ,COMPUTER systems ,SOFTWARE engineering ,ENGINEERING - Abstract
One of the most important problems faced by software developers and users is the prediction of the size of a programming system and its development effort. As an alternative to "size," one might deal with a measure of the "function" that the software is to perform. Albrecht [1] has developed a methodology to estimate the amount of the "function" the software is to perform, in terms of the data it is to use (absorb) and to generate (produce). The "function" is quantified as "function points," essentially, a weighted sum of the numbers of "inputs," "outputs," "master files," and "inquiries" provided to, or generated by, the software. This paper demonstrates the equivalence between Albrecht's external input/output data flow representation of a program (the "function points" metric) and Halstead's [2] "software science" or "software linguistics" model of a program, as well as the "soft content" variation of Halstead's model suggested by Gaffney [7]. Further, the high degree of correlation between "function points" and the eventual "SLOC" (source lines of code) of the program, and between "function points" and the work-effort required to develop the code, is demonstrated. The "function point" measure is thought to be more useful than "SLOC" as a prediction of work effort because "function points" are relatively easily estimated from a statement of basic requirements for a program early in the development cycle. The strong degree of equivalency between "function points" and "SLOC" shown in the paper suggests a two-step work-effort validation procedure, first using "function points" to estimate "SLOC," and then using "SLOC" to estimate the work-effort. This approach would provide validation of application development work plans and work-effort estimates early in the development cycle. The approach would also more effectively use the existing base of knowledge on producing "SLOC" until a similar base is developed for "function points."
The paper assumes that the reader is familiar with the fundamental theory of "software science" measurements and the practice of validating estimates of work-effort to design and implement software applications (programs). If not, a review of [1] -[3] is suggested. [ABSTRACT FROM AUTHOR]
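The two-step estimation procedure the abstract proposes can be sketched as below. Both coefficients are purely illustrative placeholders, since the paper derives its own regression fits from project data:

```python
# Two-step sketch: function points -> SLOC -> work-effort.
# Both rates below are invented for illustration; a real use of the
# method would calibrate them by regression on historical projects.

def estimate_sloc(function_points, loc_per_fp=110):
    # loc_per_fp: hypothetical language expansion rate (SLOC per FP)
    return function_points * loc_per_fp

def estimate_effort(sloc, hours_per_kloc=160):
    # hours_per_kloc: hypothetical productivity rate
    return sloc / 1000 * hours_per_kloc

fp = 89
sloc = estimate_sloc(fp)      # 9790 SLOC under the assumed rate
print(estimate_effort(sloc))  # estimated work-effort in hours
```

The point of the two steps is that function points are available from the requirements, while the SLOC-to-effort knowledge base already exists; chaining the two gives an early, checkable effort estimate.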
- Published
- 1983
37. Editorial: A New Editor-in-Chief and the State of TSE.
- Author
-
Knight, John
- Subjects
COMPUTER software industry ,SOFTWARE engineering ,COLLEGE teachers ,COMPUTER science ,ENGINEERING ,INDUSTRIAL costs - Abstract
The editorial introduces the new Editor-in-Chief of the "IEEE Transactions on Software Engineering" journal, Jeff Kramer, professor in the Department of Computing at the Imperial College of Science, Technology, and Medicine in London, England. The status of the journal is discussed, including gratitude for the creativity displayed in the 341 papers received in 2005. The future of the software industry is discussed, along with difficulties in improving reliability and lowering production cost.
- Published
- 2006
- Full Text
- View/download PDF
38. Effects of the Meetings-Flow Approach on Quality Teamwork in the Training of Software Capstone Projects.
- Author
-
Chen, Chung-Yang, Hong, Ya-Chun, and Chen, Pei-Chi
- Subjects
MEETINGS ,TRAINING ,CAPSTONE courses ,COMPUTER software development ,SOFTWARE engineering ,COMPUTER engineering education - Abstract
Software development relies heavily on teamwork; determining how to streamline this collaborative development is an essential training subject in computer and software engineering education. A team process known as the meetings-flow (MF) approach has recently been introduced in software capstone projects in engineering programs at various institutions. In undergraduate science, technology, engineering, and mathematics (STEM) curricula that emphasize team- and project-based learning, the MF approach serves as a macro-level instructional tool to guide students in holistically designing and directing collaborative project development. Previous studies on MF have shown the technical benefits of monitoring student work and product quality. This study investigated the approach further, from the perspective of team management. The effects of MF were examined through an experiment with team-related hypotheses. The results revealed that MF significantly enhances a team's communication and coordination and balances members' contributions by giving mutual support and effort. It has relatively less influence, however, on student team cohesion. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
39. Problem Oriented Software Engineering: Solving the Package Router Control Problem.
- Author
-
Hall, Jon G., Rapanotti, Lucia, and Jackson, Michael A.
- Subjects
SOFTWARE engineering ,COMPUTER software development ,ROUTING (Computer network management) ,NETWORK routers ,COMPUTER systems ,ENGINEERING - Abstract
Problem orientation is gaining interest as a way of approaching the development of software intensive systems, and yet, a significant example that explores its use is missing from the literature. In this paper, we present the basic elements of Problem Oriented Software Engineering (POSE), which aims at bringing both nonformal and formal aspects of software development together in a single framework. We provide an example of a detailed and systematic POSE development of a software problem: that of designing the controller for a package router. The problem is drawn from the literature, but the analysis presented here is new. The aim of the example is twofold: to illustrate the main aspects of POSE and how it supports software engineering design and to demonstrate how a nontrivial problem can be dealt with by the approach. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
40. Software Quality Analysis of Unlabeled Program Modules With Semisupervised Clustering.
- Author
-
Seliya, Naeem and Khoshgoftaar, Taghi M.
- Subjects
COMPUTER software quality control ,SOFTWARE engineering ,SOFTWARE measurement ,ENGINEERING ,ALGORITHMS - Abstract
Software quality assurance is a vital component of software project development. A software quality estimation model is trained using software measurement and defect (software quality) data of a previously developed release or similar project. Such an approach assumes that the development organization has experience with systems similar to the current project and that defect data are available for all modules in the training data. In software engineering practice, however, various practical issues limit the availability of defect data for modules in the training data. In addition, the organization may not have experience developing a similar system. In such cases, the task of software quality estimation or labeling modules as fault prone or not fault prone falls on the expert. We propose a semisupervised clustering scheme for software quality analysis of program modules with no defect data or quality-based class labels. It is a constraint-based semisupervised clustering scheme that uses k-means as the underlying clustering algorithm. Software measurement data sets obtained from multiple National Aeronautics and Space Administration software projects are used in our empirical investigation. The proposed technique is shown to aid the expert in making better estimations as compared to predictions made when the expert labels the clusters formed by an unsupervised learning algorithm. In addition, the software quality knowledge learnt during the semisupervised process provided good generalization performance for multiple test data sets. An analysis of program modules that remain unlabeled subsequent to our semisupervised clustering scheme provided useful insight into the characteristics of their software attributes. [ABSTRACT FROM AUTHOR]
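A toy version of constraint-based semisupervised k-means — modules with known quality labels pinned to their cluster, unlabeled modules following the nearest centroid — might look like this. It is a sketch of the general idea, not the authors' algorithm:

```python
import random

def constrained_kmeans(points, k, labels, iters=20, seed=0):
    """Toy constraint-based k-means: points with a known label are
    pinned to that cluster (the supervision); unlabeled points are
    assigned to the nearest centroid. `labels` maps index -> cluster id."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            if i in labels:
                assign[i] = labels[i]  # constraint: label is fixed
            else:
                assign[i] = min(
                    range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centroids[c] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return assign, centroids

# Four "modules" in a 2-D measurement space; module 0 is known
# not-fault-prone (cluster 0) and module 2 known fault-prone (cluster 1).
pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.1, 4.9)]
assign, _ = constrained_kmeans(pts, 2, {0: 0, 2: 1})
print(assign)  # [0, 0, 1, 1]
```

The pinned labels do two things the abstract cares about: they keep cluster ids aligned with the quality classes, and they pull unlabeled modules toward expert-consistent groupings.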
- Published
- 2007
- Full Text
- View/download PDF
41. Covering Arrays for Efficient Fault Characterization in Complex Configuration Spaces.
- Author
-
Yilmaz, Cemal, Cohen, Myra B., and Porter, Adam A.
- Subjects
SOFTWARE engineering ,COMPUTER software ,ELECTRONIC systems ,MATHEMATICS ,ENGINEERING ,TECHNOLOGY ,HIGH technology industries ,COMPUTER programming - Abstract
Many modern software systems are designed to be highly configurable so they can run on and be optimized for a wide variety of platforms and usage scenarios. Testing such systems is difficult because, in effect, you are testing a multitude of systems, not just one. Moreover, bugs can and do appear in some configurations, but not in others. Our research focuses on a subset of these bugs that are "option-related"--those that manifest with high probability only when specific configuration options take on specific settings. Our goal is not only to detect these bugs, but also to automatically characterize the configuration subspaces (i.e., the options and their settings) in which they manifest. To improve efficiency, our process tests only a sample of the configuration space, which we obtain from mathematical objects called covering arrays. This paper compares two different kinds of covering arrays for this purpose and assesses the effect of sampling strategy on fault characterization accuracy. Our results strongly suggest that sampling via covering arrays allows us to characterize option-related failures nearly as well as if we had tested exhaustively, but at a much lower cost. We also provide guidelines for using our approach in practice. [ABSTRACT FROM AUTHOR]
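A strength-2 (pairwise) covering sample can be built greedily; the sketch below is a rough AETG-style greedy construction, not the covering-array constructions used in the paper:

```python
from itertools import combinations, product

def uncovered_pairs(options, tests):
    """All (option=value, option=value) pairs not yet hit by `tests`.
    `options` maps option name -> list of settings."""
    names = sorted(options)
    wanted = set()
    for a, b in combinations(names, 2):
        for va, vb in product(options[a], options[b]):
            wanted.add(((a, va), (b, vb)))
    for t in tests:
        for a, b in combinations(names, 2):
            wanted.discard(((a, t[a]), (b, t[b])))
    return wanted

def greedy_pairwise(options):
    """Greedily pick the configuration covering the most still-uncovered
    pairs until every pair of option settings appears in some test."""
    names = sorted(options)
    candidates = [dict(zip(names, vs))
                  for vs in product(*(options[n] for n in names))]
    tests = []
    while uncovered_pairs(options, tests):
        tests.append(max(
            candidates,
            key=lambda c: len(uncovered_pairs(options, tests)
                              - uncovered_pairs(options, tests + [c]))))
    return tests

opts = {"cache": ["on", "off"], "log": ["on", "off"], "mode": ["a", "b"]}
suite = greedy_pairwise(opts)
print(len(suite), "of", 8, "configurations")  # far fewer than exhaustive
```

Even in this tiny space the pairwise sample is half the exhaustive one; for realistic option counts the savings are what make the fault-characterization approach affordable.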
- Published
- 2006
- Full Text
- View/download PDF
42. Leveraging User-Session Data to Support Web Application Testing.
- Author
-
Elbaum, Sebastian, Rothermel, Gregg, Karre, Srikanth, and Fisher II, Marc
- Subjects
APPLICATION software ,COMPUTER software testing ,RELIABILITY in engineering ,SOFTWARE engineering ,ENGINEERING - Abstract
Web applications are vital components of the global information infrastructure, and it is important to ensure their dependability. Many techniques and tools for validating Web applications have been created, but few of these have addressed the need to test Web application functionality and none have attempted to leverage data gathered in the operation of Web applications to assist with testing. In this paper, we present several techniques for using user session data gathered as users operate Web applications to help test those applications from a functional standpoint. We report results of an experiment comparing these new techniques to existing white-box techniques for creating test cases for Web applications, assessing both the adequacy of the generated test cases and their ability to detect faults on a point-of-sale Web application. Our results show that user session data can be used to produce test suites more effective overall than those produced by the white-box techniques considered; however, the faults detected by the two classes of techniques differ, suggesting that the techniques are complementary. [ABSTRACT FROM AUTHOR]
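The core idea — each logged user session becomes a replayable functional test — can be sketched as follows; the session format and helper names are invented for illustration:

```python
# Hypothetical sketch: turn logged user sessions (sequences of HTTP
# requests captured during operation) into replayable test cases.

sessions = [
    [("GET", "/catalog"), ("POST", "/cart", {"item": "42"}), ("POST", "/checkout", {})],
    [("GET", "/catalog"), ("GET", "/item/42")],
]

def session_to_test(session):
    """One user session -> one test case: an ordered list of request
    dicts that a driver could replay against the application."""
    return [{"method": m, "url": u, "data": rest[0] if rest else None}
            for m, u, *rest in session]

suite = [session_to_test(s) for s in sessions]
print(len(suite), "tests,", sum(len(t) for t in suite), "requests")
```

Because the requests come from real usage, such suites exercise the functional paths users actually take, which is why they complement rather than replace white-box generation.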
- Published
- 2005
- Full Text
- View/download PDF
43. Formally Assessing an Instructional Tool: A Controlled Experiment in Software Engineering.
- Author
-
Karoulis, Athanasis, Stamelos, Ioannis G., Angelis, Lefteris, and Pombortsis, Andreas S.
- Subjects
ENGINEERING ,SOFTWARE engineering ,HUMAN-computer interaction ,ERGONOMICS ,KNOWLEDGE management ,USER interfaces - Abstract
This paper describes a controlled experiment concerning the use of a learning aid during the instructional procedure. The core issue of investigation is whether this instructional aid can augment the cognitive transfer of the learners by personalizing the offered knowledge. For this purpose, a controlled experiment was conducted with the participation of 79 undergraduate students. The taught domain was two lessons concerning human-computer interaction: the first in usability engineering and the second in interface evaluation methodologies. A test session was also conducted to collect data on the assessment of the augmentation of the students' knowledge on the domain. Descriptive and inferential statistics were applied to the collected data to test the research hypotheses. The results showed that with regard to the transfer of simple information, this "lesson sheet" does not provide any statistically significant advantage, yet for complex information, a significant statistically improved performance was observed for the student group that used the tool. Finally, concerns about the application of the tool and further research in the area are also presented. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
44. Integrating Large-Scale Group Projects and Software Engineering Approaches for Early Computer Science Courses.
- Author
-
Blake, M. Brian
- Subjects
COMPUTER software ,SOFTWARE engineering ,CYBERNETICS ,ENGINEERING ,COMPUTER systems ,UNIVERSITIES & colleges ,CURRICULUM - Abstract
The utilization of large-scale group projects in early computer science courses has been readily accepted in academia. In these types of projects, students are given a specific portion of a large programming problem to design and develop. Ultimately, the consolidation of all of the independent student projects integrates to form the solution for the large-scale project. Although many studies report on the experience of executing a semester-long course of this nature, course experience at Georgetown University, Washington, DC, shows the benefits of embedding a large-scale project that comprises just a segment of the course (three to four weeks). The success of these types of courses requires an effective process for creating the specific large-scale project. In this paper, an effective process for large-scale group project course development is applied to the second computer science course at Georgetown University. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
45. FSM-Based Incremental Conformance Testing Methods.
- Author
-
El-Fakih, Khaled, Yevtushenko, Nina, and Bochmann, Gregor V.
- Subjects
SOFTWARE engineering ,COMPUTER software ,COMPUTER systems ,TECHNICAL specifications ,INDUSTRIAL design ,ENGINEERING - Abstract
The development of appropriate test suites is an important issue for conformance testing of protocol implementations and other reactive software systems. A number of methods are known for the development of a test suite based on a specification given in the form of a finite state machine. In practice, the system requirements evolve throughout the lifetime of the system and the specifications are modified incrementally. In this paper, we adapt four well-known test derivation methods, namely, the HIS, W, Wp, and UIOv methods, for generating tests that test only the modified parts of an evolving specification. Some application examples and experimental results are provided. These results show significant gains when using incremental testing in comparison with complete testing, especially when the modified part represents less than 20 percent of the whole specification. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
46. Experience With Teaching Black-Box Testing in a Computer Science/Software Engineering Curriculum.
- Author
-
Chen, T. Y. and Poon, Pak-Lok
- Subjects
COMPUTER software ,TESTING ,SOFTWARE engineering ,ENGINEERING ,COMPUTER science ,SCIENCE - Abstract
Software testing is a popular and important technique for improving software quality. There is a strong need for universities to teach testing rigorously to students studying computer science or software engineering. This paper reports the experience of teaching the classification-tree method as a black-box testing technique at the University of Melbourne, Melbourne, Australia, and Swinburne University of Technology, Melbourne, Australia. It aims to foster discussion of appropriate teaching methods of software testing. [ABSTRACT FROM AUTHOR]
- Published
- 2004
47. Knowledge-Based Automation of a Design Method for Concurrent Systems.
- Author
-
Mills, Kevin L. and Gomaa, Hassan
- Subjects
- *
SOFTWARE engineering , *EXPERT systems , *SYSTEMS design , *ELECTRONIC data processing , *SYSTEM analysis , *ENGINEERING - Abstract
This paper describes a knowledge-based approach to automate a software design method for concurrent systems. The approach uses multiple paradigms to represent knowledge embedded in the design method. Semantic data modeling provides the means to represent concepts from a behavioral modeling technique, called Concurrent Object-Based Real-time Analysis (COBRA), which defines system behavior using data/control flow diagrams. Entity-Relationship modeling is used to represent a design metamodel based on a design method, called Concurrent Design Approach for Real-Time Systems (CODARTS), which represents concurrent designs as software architecture diagrams, task behavior specifications, and module specifications. Production rules provide the mechanism for codifying a set of CODARTS heuristics that can generate concurrent designs based on semantic concepts included in COBRA behavioral models and on entities and relationships included in CODARTS design metamodels. Together, the semantic data model, the entity-relationship model, and the production rules, when encoded using an expert-system shell, compose CODA, an automated designer's assistant. Other forms of automated reasoning, such as knowledge-based queries, can be used to check the correctness and completeness of generated designs with respect to properties defined in the CODARTS design metamodel. CODA is applied to generate 10 concurrent designs for four real-time problems. The paper reports the degree of automation achieved by CODA. The paper also evaluates the quality of generated designs by comparing the similarity between designs produced by CODA and human designs reported in the literature for the same problems. In addition, the paper compares CODA with four other approaches used to automate software design methods. [ABSTRACT FROM AUTHOR]
- Published
- 2002
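The production-rule paradigm that CODA uses to codify CODARTS task-structuring heuristics can be sketched in miniature: facts describe behavioral-model elements, and rules propose concurrent-design elements. The two rules below are invented stand-ins for illustration only, not the actual CODARTS heuristics.

```python
# Single-pass rule application in the spirit of a production-rule designer's
# assistant. Facts and rules here are hypothetical examples.

facts = [
    {'kind': 'data_transformation', 'name': 'FilterSamples', 'periodic': True},
    {'kind': 'data_transformation', 'name': 'LogEvent', 'periodic': False},
    {'kind': 'device_interface', 'name': 'SensorInput'},
]

rules = [
    # Invented heuristic: periodic transformations become periodic tasks.
    (lambda f: f['kind'] == 'data_transformation' and f.get('periodic'),
     lambda f: f"periodic task {f['name']}"),
    # Invented heuristic: device interfaces become device-interface tasks.
    (lambda f: f['kind'] == 'device_interface',
     lambda f: f"async device-interface task {f['name']}"),
]

design = [action(f) for f in facts for cond, action in rules if cond(f)]
print(design)
# ['periodic task FilterSamples', 'async device-interface task SensorInput']
```

A real production system (such as the expert-system shell mentioned in the abstract) would chain rule firings and maintain a working memory; this sketch only shows the condition/action shape of such rules.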
48. Defining and Applying Measures of Distance Between Specifications.
- Author
-
Labed Jilani, Lamia, Desharnais, Jules, and Mili, Ali
- Subjects
TECHNICAL specifications ,SOFTWARE measurement ,SOFTWARE engineering ,ENGINEERING - Abstract
Echoing Louis Pasteur's quote, [SUP1] we submit the premise that it is advantageous to define measures of distance between requirements specifications because such measures open up a wide range of possibilities both in theory and in practice. In this paper, we present a mathematical basis for measuring distances between specifications and show how our measures of distance can be used to address concrete problems that arise in the practice of software engineering. [ABSTRACT FROM AUTHOR]
- Published
- 2001
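The paper's measures of distance are defined over relational specifications and are not reproduced here. As an invented illustration of the general idea, one can model each specification as a relation (a set of admissible input/output pairs) and take the normalized symmetric difference (Jaccard distance) between relations:

```python
# Hypothetical distance between specifications-as-relations; this is the
# standard Jaccard distance, not the paper's actual measure.

def distance(spec_a, spec_b):
    """0.0 for identical relations, 1.0 for disjoint nonempty ones."""
    if not spec_a and not spec_b:
        return 0.0
    return len(spec_a ^ spec_b) / len(spec_a | spec_b)

# Invented specs for a rounding routine on the inputs {0.4, 0.5, 0.6}.
round_half_up   = {(0.4, 0), (0.5, 1), (0.6, 1)}
round_half_down = {(0.4, 0), (0.5, 0), (0.6, 1)}

print(distance(round_half_up, round_half_up))    # 0.0
print(distance(round_half_up, round_half_down))  # 0.5
```

Jaccard distance is a true metric (identity, symmetry, triangle inequality), which is exactly the kind of property that makes a distance between specifications usable for the applications the abstract mentions, such as quantifying how far an evolving specification has drifted.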
49. Validating the ISO/IEC 15504 Measure of Software Requirements Analysis Process Capability.
- Author
-
El Emam, Khaled and Birk, Andreas
- Subjects
SOFTWARE engineering ,STANDARDS ,COMPUTER software ,TRUTHFULNESS & falsehood ,PERFORMANCE ,ENGINEERING - Abstract
ISO/IEC 15504 is an emerging international standard on software process assessment. It defines a number of software engineering processes and a scale for measuring their capability. One of the defined processes is software requirements analysis (SRA). A basic premise of the measurement scale is that higher process capability is associated with better project performance (i.e., predictive validity). This paper describes an empirical study that evaluates the predictive validity of SRA process capability. Assessments using ISO/IEC 15504 were conducted on 56 projects world-wide over a period of two years. Performance measures on each project were also collected using questionnaires, such as the ability to meet budget commitments and staff productivity. The results provide strong evidence of predictive validity for the SRA process capability measure used in ISO/IEC 15504, but only for organizations with more than 50 IT Staff. Specifically, a strong relationship was found between the implementation of requirements analysis practices as defined in ISO/IEC 15504 and the productivity of software projects. For smaller organizations, evidence of predictive validity was rather weak. This can be interpreted in a number of different ways: that the measure of capability is not suitable for small organizations or that the SRA process capability has less effect on project performance for small organizations. [ABSTRACT FROM AUTHOR]
- Published
- 2000
50. A Theory-Based Representation for Object-Oriented Domain Models.
- Author
-
DeLoach, Scott A. and Hartrum, Thomas C.
- Subjects
SOFTWARE engineering ,FORMAL methods (Computer science) ,TECHNICAL specifications ,SYSTEMS design ,ENGINEERING ,MODEL categories (Mathematics) - Abstract
Formal software specification has long been touted as a way to increase the quality and reliability of software; however, it remains an intricate, manually intensive activity. An alternative to using formal specifications directly is to translate graphically based, semiformal specifications into formal specifications. However, before this translation can take place, a formal definition of basic object-oriented concepts must be found. This paper presents an algebraic model of object-orientation that defines how object-oriented concepts can be represented algebraically using an object-oriented algebraic specification language O-SLANG. O-SLANG combines basic algebraic specification constructs with category theory operations to capture internal object class structure, as well as relationships between classes. [ABSTRACT FROM AUTHOR]
- Published
- 2000