294 results
Search Results
2. Editor's Comments.
- Author
-
Basili, Victor R.
- Subjects
SOFTWARE engineering, ENGINEERING, COMPUTER software, COMPUTER systems
- Abstract
Comments are presented on the articles in this issue of the "IEEE Transactions on Software Engineering." The periodical has an established, published scope, which is subject to change as the field develops. Part of the review process is to state whether a paper meets the criteria. The scope should be limited to papers of interest to the software engineering community. However, besides mainstream software engineering topics, it may include the software engineering of various applications.
- Published
- 1988
3. The 4th International Workshop on Software Engineering for HPC in Computational Science and Engineering.
- Author
-
Carver, Jeffrey C., Hong, Neil Chue, and Ciraci, Selim
- Subjects
HIGH performance computing, SOFTWARE engineering, ENGINEERING, SUPERCOMPUTERS, COMPUTER software, CONFERENCES & conventions
- Abstract
Despite the increasing demand for utilizing high-performance computing (HPC) for CSE applications, software development for HPC historically attracted little attention from the software engineering (SE) community. Paradoxically, the HPC CSE community has increasingly been adopting SE techniques and tools. Indeed, the development of CSE software for HPC differs significantly from the development of more traditional business information systems, from which many SE best practices and tools have been drawn. The workshop summarized in this column, the fourth in the series to be collocated with the Supercomputing conference series, examined two main topics: testing and tradeoffs. Through presentations of work in this area and structured group discussions, the participants highlighted some of the key issues, as well as indicated the direction the community needs to go. In particular, there is a need for more high-quality research in this area that we can use as an evidence base to help developers of CSE applications change practice and benefit from advances in software engineering. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
4. Guest Editors' Introduction to the Special Section on the International Conference on Software Engineering.
- Author
-
Griswold, William G. and Nuseibeh, Bashar
- Subjects
ENGINEERING, SOFTWARE engineering, COMPUTER software, FAULT-tolerant computing, SYSTEMS design
- Abstract
This article presents information about the selection of the four best papers submitted to the 27th International Conference on Software Engineering held on May 21, 2005. A summary for each paper is provided along with the author's name and the main topic of each paper. One paper deals with a new approach to software fault tolerance. Another paper discusses an environment for finding and visualizing examples of usage of an API. The third paper reports on a study to determine the limitations of tools used in computer maintenance tasks.
- Published
- 2006
5. An Acyclic Expansion Algorithm for Fast Protocol Validation.
- Author
-
Kakuda, Yoshiaki, Wakahara, Yasushi, and Norigoe, Masamitsu
- Subjects
COMPUTER algorithms, ALGORITHMS, COMPUTER programming, COMPUTER software, ELECTRONIC systems, ENGINEERING, SOFTWARE engineering
- Abstract
For the development of communications software composed of many modules, protocol validation is essential to detect errors in the interactions among the modules. A number of protocol validation techniques have been proposed in the past, but the validation time required by these techniques is too long for many actual protocols. This paper proposes a new fast protocol validation technique to overcome this drawback. The proposed technique constructs the minimum acyclic form of state transitions in the individual processes of the protocol and quickly detects protocol errors such as system deadlocks and channel overflows. This paper also presents a protocol validation system based on the proposed technique to confirm its feasibility and shows validation results for some actual protocols obtained with this system. As a result, the protocol validation system is expected to contribute greatly to improving productivity in the development and maintenance of communications software. [ABSTRACT FROM AUTHOR]
- Published
- 1988
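To make concrete the kind of error such a validation system reports, here is a schematic reachability check for system deadlocks over a toy global state graph. This is a generic illustration under our own encoding, not the paper's acyclic-expansion algorithm:

```python
# Generic deadlock search over a global state graph (illustrative only):
# a deadlock is a reachable, non-final state with no outgoing transition.

def find_deadlocks(initial, transitions, finals):
    # transitions: dict mapping a state to its list of successor states
    seen, stack, deadlocks = {initial}, [initial], []
    while stack:
        s = stack.pop()
        succs = transitions.get(s, [])
        if not succs and s not in finals:
            deadlocks.append(s)
        for t in succs:
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return deadlocks

# Two peers both waiting to receive: the global state "wait/wait" is stuck.
ts = {"s0": ["wait/wait"], "wait/wait": []}
print(find_deadlocks("s0", ts, finals=set()))  # ['wait/wait']
```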
6. Report: The Second International Workshop on Software Engineering for CSE.
- Author
-
CARVER, JEFFREY C.
- Subjects
CONFERENCES & conventions, SOFTWARE engineering conferences, ENGINEERING, COMPUTER software, CAPABILITY maturity model
- Abstract
The article discusses the highlights of the Second International Workshop on Software Engineering for Computational Science and Engineering, held at the 2009 International Conference on Software Engineering. It presents a summary of the workshop's position papers and breakout group sessions, with a focus on expanding software engineering to all types of computational science and engineering (CSE). Breakout sessions addressed questions such as how to facilitate communication, how to measure productivity, and how to develop community software. The workshop recommends characterizing CSE communities and identifying their industry segments.
- Published
- 2009
7. Voltage Stability Toolbox for Power System Education and Research.
- Author
-
Ayasun, Saffet, Nwankpa, Chika O., and Kwatny, Harry G.
- Subjects
ENGINEERING, POWER resources, BIFURCATION theory, STABILITY (Mechanics), COMPUTER software, COMPUTER systems
- Abstract
This paper presents a Matlab-based voltage stability toolbox (VST) designed to analyze bifurcation and voltage stability problems in electric power systems. VST combines the proven computational and analytical capabilities of bifurcation theory with the symbolic implementation and graphical representation capabilities of Matlab and its toolboxes. The motivation for developing the package is to provide a flexible simulation environment for ongoing research conducted at the Center for Electric Power Engineering (CEPE) of Drexel University, Philadelphia, PA, and to enhance undergraduate/graduate power engineering courses. VST is a very flexible tool for load flow, small-signal and transient stability, and bifurcation analysis. After a brief summary of the power system model and local bifurcations, the paper illustrates the capabilities of VST using the IEEE 14-bus system as an example and describes its successful integration into power engineering courses at Nigde University, Nigde, Turkey. [ABSTRACT FROM AUTHOR]
- Published
- 2006
8. Introducing Software Engineering Developments to a Classical Operating Systems Course.
- Author
-
Billard, Edward A.
- Subjects
ELECTRONIC systems, ENGINEERING, SOFTWARE engineering, SYSTEMS software, COMPUTER software, COMPUTER programming, SYSTEM analysis
- Abstract
An operating system course draws from a well-defined fundamental theory, but one needs to consider how more recent advances, not necessarily in the theory itself, can be applied to improve the course and the general body of knowledge of the student. The goal of this paper is to show how recent software engineering developments can be introduced to such a course to not only satisfy the theory requirements, but also make the theory more understandable. In particular, this paper focuses on how students can effectively learn the Unified Modeling Language, the object-oriented methodology, and the Java programming language in the context of an operating systems course. The goal is to form a systematic software engineering process for operating system design and implementation. [ABSTRACT FROM AUTHOR]
- Published
- 2005
9. Testability Transformation.
- Author
-
Harman, Mark, Lin Hu, Hierons, Rob, Wegener, Joachim, Sthamer, Harmen, Baresel, André, and Roper, Marc
- Subjects
SOFTWARE engineering, ENGINEERING, COMPUTER software
- Abstract
A testability transformation is a source-to-source transformation that aims to improve the ability of a given test generation method to generate test data for the original program. This paper introduces testability transformation, demonstrating that it differs from traditional transformation, both theoretically and practically, while still allowing many traditional transformation rules to be applied. The paper illustrates the theory of testability transformation with an example application to evolutionary testing. An algorithm for flag removal is defined, and results from an empirical study are presented which show how the algorithm improves both the performance of evolutionary test data generation and the adequacy level of the test data so generated. [ABSTRACT FROM AUTHOR]
- Published
- 2004
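As a hedged illustration of the flag problem the abstract mentions (our own toy example, not the paper's defined algorithm): the transformation replaces an all-or-nothing boolean predicate with a numeric distance that an evolutionary search can minimize.

```python
# Before: the flag collapses all inputs to True/False, so a fitness function
# built on it gives an evolutionary search no gradient to follow.
def original(a: int, b: int) -> bool:
    flag = (a * 2 == b + 10)
    if flag:
        return True   # target branch
    return False

# After (testability-transformed): a branch distance that is zero exactly
# when the target branch would be taken, and shrinks as inputs get closer.
def branch_distance(a: int, b: int) -> int:
    return abs(a * 2 - (b + 10))

if __name__ == "__main__":
    for a, b in [(0, 0), (3, -1), (5, 0)]:
        print((a, b), branch_distance(a, b))  # (5, 0) hits the target: distance 0
```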
10. Handling Obstacles in Goal-Oriented Requirements Engineering.
- Author
-
van Lamsweerde, Axel and Letier, Emmanuel
- Subjects
ENGINEERING, ELECTRONIC systems, MATHEMATICAL programming, PROGRAM transformation, TECHNICAL specifications, COMPUTER software
- Abstract
Requirements engineering is concerned with the elicitation of high-level goals to be achieved by the envisioned system, the refinement of such goals and their operationalization into specifications of services and constraints and the assignment of responsibilities for the resulting requirements to agents such as humans, devices, and software. Requirements engineering processes often result in goals, requirements, and assumptions about agent behavior that are too ideal; some of them are likely not to be satisfied from time to time in the running system due to unexpected agent behavior. The lack of anticipation of exceptional behaviors results in unrealistic, unachievable, and/or incomplete requirements. As a consequence, the software developed from those requirements will not be robust enough and will inevitably result in poor performance or failures, sometimes with critical consequences on the environment. This paper presents formal techniques for reasoning about obstacles to the satisfaction of goals, requirements, and assumptions elaborated in the requirements engineering process. A first set of techniques allows obstacles to be generated systematically from goal formulations and domain properties. A second set of techniques allows resolutions to be generated once the obstacles have been identified thereby. Our techniques are based on a temporal logic formalization of goals and domain properties; they are integrated into an existing method for goal-oriented requirements elaboration with the aim of deriving more realistic, complete, and robust requirements specifications. A key principle in this paper is to handle exceptions at requirements engineering time and at the goal level, so that more freedom is left for resolving them in a satisfactory way. The various techniques proposed are illustrated and assessed in the context of a real safety-critical system. [ABSTRACT FROM AUTHOR]
- Published
- 2000
11. Requirements Elicitation and Specification Using the Agent Paradigm: The Case Study of an Aircraft Turnaround Simulator.
- Author
-
Miller, Tim, Bin Lu, Sterling, Leon, Beydoun, Ghassan, and Taveter, Kuldar
- Subjects
SOFTWARE engineering, ENGINEERING, COMPUTER software, TECHNOLOGY transfer, DIFFUSION of innovations
- Abstract
In this paper, we describe research results arising from a technology transfer exercise on agent-oriented requirements engineering with an industry partner. We introduce two improvements to the state-of-the-art in agent-oriented requirements engineering, designed to mitigate two problems experienced by ourselves and our industry partner: (1) the lack of systematic methods for agent-oriented requirements elicitation and modelling; and (2) the lack of prescribed deliverables in agent-oriented requirements engineering. We discuss the application of our new approach to an aircraft turnaround simulator built in conjunction with our industry partner, and show how agent-oriented models can be derived and used to construct a complete requirements package. We evaluate this by having three independent people design and implement prototypes of the aircraft turnaround simulator, and comparing the three prototypes. Our evaluation indicates that our approach is effective at delivering correct, complete, and consistent requirements that satisfy the stakeholders, and can be used in a repeatable manner to produce designs and implementations. We discuss lessons learnt from applying this approach. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
12. A General Testability Theory: Classes, Properties, Complexity, and Testing Reductions.
- Author
-
Rodriguez, Ismael, Llana, Luis, and Rabanal, Pablo
- Subjects
SOFTWARE engineering, ENGINEERING, COMPUTER software, ANTIPATTERNS (Software engineering), CAPABILITY maturity model
- Abstract
In this paper we develop a general framework to reason about testing. The difficulty of testing is assessed in terms of the number of tests that must be applied to determine whether the system is correct or not. Based on this criterion, five testability classes are presented and related. We also explore conditions that enable and disable finite testability, and their relation to testing hypotheses is studied. We measure how far incomplete test suites are from being complete, which allows us to compare and select better incomplete test suites. The complexity of finding that measure, as well as the complexity of finding minimum complete test suites, is identified. Furthermore, we address the reduction of testing problems to each other, that is, we study how the problem of finding test suites for systems of one kind can be reduced to the problem of finding test suites for another kind of system. This makes it possible to carry testing methods over from one kind of system to another. To illustrate how the general notions apply to specific cases, many typical examples from the formal testing techniques domain are presented. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
13. Editorial: The State of TSE.
- Author
-
Knight, John
- Subjects
ENGINEERING, SOFTWARE engineering, COMPUTER software
- Abstract
This article reports that several changes in the periodical "IEEE Transactions on Software Engineering," published as of February 01, 2004, will make the processing of articles more timely and more effective. First, the length restriction on submitted manuscripts has been removed. This should allow authors to document results in appropriate length papers. Despite the change, authors are encouraged to keep papers as short as possible. Second, Transactions on Software Engineering manuscript management has been moved to the Web-based Manuscript Central system. This system makes all aspects of manuscript processing much more efficient and it allows everybody involved in processing papers, including authors, to obtain details of the state of manuscripts as they are processed. Third, preprints in the future will be available online two months before the issue cover date. Software remains a critical industry to the world. The impact of software is tremendous, both when it works and when it doesn't.
- Published
- 2004
14. The Role of Deliberate Artificial Design Elements in Software Engineering Experiments.
- Author
-
Hannay, Jo E. and Jorgensen, Magne
- Subjects
SOFTWARE engineering, SYSTEMS design, COMPUTER software, COMPUTER systems, ENGINEERING, DESIGN
- Abstract
Increased realism in software engineering experiments is often promoted as an important means of increasing generalizability and industrial relevance. In this context, artificiality, e.g., the use of constructed tasks in place of realistic tasks, is seen as a threat. In this paper, we examine the opposite view that deliberately introduced artificial design elements may increase knowledge gain and enhance both generalizability and relevance. In the first part of this paper, we identify and evaluate arguments and examples in favor of and against deliberately introducing artificiality into software engineering experiments. We find that there are good arguments in favor of deliberately introducing artificial design elements to 1) isolate basic mechanisms, 2) establish the existence of phenomena, 3) enable generalization from particularly unfavorable to more favorable conditions (persistence of phenomena), and 4) relate experiments to theory. In the second part of this paper, we summarize a content analysis of articles that report software engineering experiments published over a 10-year period from 1993 to 2002. The analysis reveals a striving for realism and external validity, but little awareness of when, and for what purposes, various degrees of artificiality and realism are appropriate. Furthermore, much of the focus on realism seems to be based on a narrow understanding of the nature of generalization. We conclude that an increased awareness and deliberation as to when and for what purposes both artificial and realistic design elements are applied is valuable for better knowledge gain and quality in empirical software engineering experiments. We also conclude that time spent on studies that have obvious threats to validity that are due to artificiality might be better spent on studies that investigate research questions for which artificiality is a strength rather than a weakness. However, arguments in favor of artificial design elements should not be used to justify studies that are badly designed or that have research questions of low relevance. [ABSTRACT FROM AUTHOR]
- Published
- 2008
15. A Survey of Controlled Experiments in Software Engineering.
- Author
-
Sjøberg, Dag I. K., Hannay, Jo E., Hansen, Ove, Kampenes, Vigdis By, Karahasanović, Amela, Liborg, Nils-Kristian, and Rekdal, Anette C.
- Subjects
SOFTWARE engineering, COMPUTER software, SURVEYS, PERIODICALS, ENGINEERING
- Abstract
The classical method for identifying cause-effect relationships is to conduct controlled experiments. This paper reports upon the present state of how controlled experiments in software engineering are conducted and the extent to which relevant information is reported. Among the 5,453 scientific articles published in 12 leading software engineering journals and conferences in the decade from 1993 to 2002, 103 articles (1.9 percent) reported controlled experiments in which individuals or teams performed one or more software engineering tasks. This survey quantitatively characterizes the topics of the experiments and their subjects (number of subjects, students versus professionals, recruitment, and rewards for participation), tasks (type of task, duration, and type and size of application) and environments (location, development tools). Furthermore, the survey reports on how internal and external validity is addressed and the extent to which experiments are replicated. The gathered data reflects the relevance of software engineering experiments to industrial practice and the scientific maturity of software engineering research. [ABSTRACT FROM AUTHOR]
- Published
- 2005
16. An Approach to Developing Domain Requirements as a Core Asset Based on Commonality and Variability Analysis in a Product Line.
- Author
-
Mikyeong Moon, Keunhyuk Yeom, and Heung Seok Chae
- Subjects
PRODUCT lines, PRODUCT management, COMPUTER software, COMMERCIAL products, METHODOLOGY, ENGINEERING
- Abstract
The methodologies of product line engineering emphasize proactive reuse to construct high-quality, less costly products more quickly. Requirements engineering for software product families differs significantly from requirements engineering for single software products. The requirements for a product line are written for the group of systems as a whole, with requirements for individual systems specified as a delta or an increment to the generic set. Therefore, it is necessary to identify and explicitly denote the regions of commonality and points of variation at the requirements level. In this paper, we suggest a method of producing requirements that will be a core asset in the product line. We describe a process for developing domain requirements in which commonality and variability in a domain are explicitly considered. A CASE environment, named DREAM, for managing commonality and variability analysis of domain requirements is also described. We also describe a case study of an e-Travel System domain, where we found that our approach to developing domain requirements based on commonality and variability analysis helped to produce domain requirements as a core asset for product lines. [ABSTRACT FROM AUTHOR]
- Published
- 2005
17. Spatial Complexity Metrics: An Investigation of Utility.
- Author
-
Gold, Nicolas E., Mohan, Andrew M., and Layzell, Paul J.
- Subjects
COMPUTER software, COMPUTATIONAL complexity, SOFTWARE measurement, SOFTWARE engineering, ENGINEERING
- Abstract
Software comprehension is one of the largest costs in the software lifecycle. In an attempt to control the cost of comprehension, various complexity metrics have been proposed to characterize the difficulty of understanding a program and, thus, allow accurate estimation of the cost of a change. Such metrics are not always evaluated. This paper evaluates a group of metrics recently proposed to assess the "spatial complexity" of a program (spatial complexity is informally defined as the distance a maintainer must move within source code to build a mental model of that code). The evaluation takes the form of a large-scale empirical study of evolving source code drawn from a commercial organization. The results of this investigation show that most of the spatial complexity metrics evaluated offer no substantially better information about program complexity than the number of lines of code. However, one metric shows more promise and is thus deemed to be a candidate for further use and investigation. [ABSTRACT FROM AUTHOR]
- Published
- 2005
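A rough sketch of a distance-based measure under our reading of the informal definition above (the paper's actual metrics differ in detail): spatial complexity as the number of lines a maintainer must jump from a call site to the called function's definition.

```python
# def_lines: function name -> line number of its definition
# call_sites: (function name, line number of the call)
def spatial_distances(def_lines: dict, call_sites: list) -> list:
    return [abs(line - def_lines[name])
            for name, line in call_sites if name in def_lines]

dists = spatial_distances({"parse": 10, "emit": 250},
                          [("parse", 40), ("emit", 45)])
print(dists, sum(dists) / len(dists))  # per-jump distances and their mean
```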
18. Nonrectangular Shaping and Sizing of Soft Modules for Floorplan-Design Improvement.
- Author
-
Chu, Chris C. N. and Young, Evangeline F. Y.
- Subjects
COMPUTER software, ELECTRONIC systems, ENGINEERING, COMPUTER industry
- Abstract
Many previous works on floorplanning with nonrectangular modules [1]-[12] assume that the modules are predesignated to have particular nonrectangular shapes, e.g., L-shaped, T-shaped, etc. However, this is not common in practice because rectangular shapes are preferable in many design steps. Those nonrectangular shapes are actually generated during floorplanning in order to further optimize the solution. In this paper, we study the problem of changing the shapes and dimensions of the flexible modules to fill up the unused area of a preliminary floorplan, while keeping the relative positions between the modules unchanged. This feature will also be useful in fixing small incremental changes during engineering change order modifications. We formulate the problem as a mathematical program. The formulation is such that the dimensions of all of the rectangular and nonrectangular modules can be computed by closed-form equations in O(m) time in each corresponding Lagrangian relaxation subproblem (LRS), where m is the total number of edges in the constraint graphs. As a result, the total time for the whole shaping and sizing process is O(k × m), where k is the number of iterations on the LRS. Experimental results show that the amount of area reused is 3.7% on average, while the total wirelength can be reduced by 0.43% on average because of the more compact resulting packing. [ABSTRACT FROM AUTHOR]
- Published
- 2004
19. Simulation and Comparison of Albrecht's Function Point and DeMarco's Function Bang Metrics in a CASE Environment.
- Author
-
Rask, Raimo, Laamanen, Petteri, and Lyytinen, Kalle
- Subjects
COMPUTER software development, COMPUTER programming management, COMPUTER software, ENGINEERING, SOFTWARE engineering
- Abstract
Software size estimates provide a basis for software cost estimation during software development. Hence, it is important to measure system size reliably as early as possible, i.e., during the requirements specification. Two of the best-known specification-level metrics are Albrecht's Function Points and DeMarco's Function Bang. One problem in using these metrics has been that only a few tools can calculate them during the specification phase. We have built one such tool. Another problem has been that no research data is available on how these metrics correlate with one another. The paper compares the two metrics in a simulation study in which automatically generated randomized data flow diagrams (DFDs) were used as a statistical sample to automatically count function points and function bang in the CASE environment we built. These counts were correlated statistically using correlation coefficients and regression analysis. The simulation study permits sufficient variation in the base material to cover most types of system specifications. Moreover, it allows sample sizes sufficient for statistical analysis of the data. The obtained results show that in certain cases there is a relatively good statistical correlation between these metrics. No overall general correlation exists, however. The paper does not show which of the two metrics fares better as a size metric. Yet, our study suggests using the Function Bang metric in many cases, because its automatic calculation is simpler and depends less on judgement. Moreover, the study demonstrates that correlations depend upon the system type. This implies that in software projects one must be careful with size estimates while using these metrics. In order to know when one needs to calibrate the size estimate, we need to develop algorithms which help to detect logical system types and make adjustments accordingly. The results also point out the need for empirical research from which we can better derive the connection between specification-level metrics and the number of lines of code. [ABSTRACT FROM AUTHOR]
- Published
- 1993
20. Capacity of Voting Systems.
- Author
-
Rangarajan, Sampath, Jalote, Pankaj, and Tripathi, Satish K.
- Subjects
BACKUP processing alternatives in electronic data processing, SYSTEMS design, DATABASES, COMPUTER software, ENGINEERING, SOFTWARE engineering
- Abstract
Data replication is often used to increase the availability of data in a database system. Voting schemes can be used to manage this replicated data. In this paper we use a simple model to study the capacity of systems using voting schemes for data management. The capacity of a system is defined as the number of operations the system can perform successfully, on average, per unit time. We study the capacity of a system using voting and compare it with the capacity of a system using a single node. We show that the maximum increase in capacity from the use of majority voting is bounded by 1/p, where p is the steady-state probability of a node being alive. We also show that for a system employing majority voting, if the reliability of nodes is high, increasing the number of nodes beyond three gives only a marginal increase in capacity. We perform similar analyses for three other voting schemes. [ABSTRACT FROM AUTHOR]
- Published
- 1993
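A back-of-the-envelope check of the diminishing-returns claim, using a simple independent-failure model of our own rather than the paper's capacity model: the probability that a majority of n nodes is alive barely improves beyond n = 3 when node availability p is high.

```python
from math import comb

def majority_alive(n: int, p: float) -> float:
    # Probability that at least a majority (n//2 + 1) of n nodes is alive.
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

for n in (1, 3, 5, 7):
    print(n, round(majority_alive(n, 0.99), 6))  # gains shrink rapidly past n=3
```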
21. LISPACK—A Methodology and Tool for the Performance Analysis of Parallel Systems and Algorithms.
- Author
-
Iazeolla, Giuseppe and Marinuzzi, Francesco
- Subjects
PARALLEL computers, SOFTWARE engineering, ENGINEERING, COMPUTER software, COMPUTER systems
- Abstract
The paper deals with the performance analysis of parallel algorithms and systems. For these, numerical solution methods quickly show their limits because of the enormous state-space growth. The proposed methodology and software tool (LISPACK, an acronym for List-manipulation Parallel-modeling Package) uses string manipulation, lumping, and recursive elimination as a means for the definition of the large Markovian process, its restructuring, and its efficient solution. Initially, the enormous state space is conveniently collapsed and the large transition matrix is reduced. Subsequently, the reduced matrix is recursively block banded, and an efficient recursive, symbolic Gauss elimination is applied. No relevant costs are incurred for the state-space collapsing and restructuring, nor for the matrix block banding. The analysis of a typical parallel system and algorithm model is developed as a case study to discuss the features of the method. The paper makes two contributions. First, a symbolic-approach methodology is proposed for the performance analysis of parallel algorithms and systems. Second, a tool is introduced that exploits the capabilities of the symbolic approach in the solution of parallel models, where numerical techniques reveal their limits. [ABSTRACT FROM AUTHOR]
- Published
- 1993
22. Towards Complexity Metrics for Ada Tasking.
- Author
-
Shatz, Sol M.
- Subjects
DISTRIBUTED computing, PROGRAMMING languages, COMPUTER software, ELECTRONIC systems, ENGINEERING, SOFTWARE engineering
- Abstract
With growing interest in distributed computing come demands for techniques to aid in development of correct and reliable distributed software. Controlling, or at least recognizing, complexity of such software is an important part of the development and maintenance process. While a number of metrics have been proposed for quantitatively measuring the complexity of sequential, centralized programs, corresponding metrics for distributed software are noticeable by their absence. Using Ada as a representative distributed programming language, this paper discusses some ideas on complexity metrics that focus on Ada tasking and rendezvous. Concurrently active rendezvous are claimed to be an important aspect of communication complexity. A Petri net graph model of Ada rendezvous is used to introduce a "rendezvous graph," an abstraction that can be useful in viewing and computing effective communication complexity. [ABSTRACT FROM AUTHOR]
- Published
- 1988
23. Single-Site and Distributed Optimistic Protocols for Concurrency Control.
- Author
-
Bassiouni, M. A.
- Subjects
ELECTRONIC data processing, DATABASES, COMPUTER software, ELECTRONIC systems, ENGINEERING, SOFTWARE engineering
- Abstract
In spite of their advantage in removing the overhead of lock maintenance and deadlock handling, optimistic concurrency control methods have continued to be far less popular in practice than locking schemes. There are two complementary approaches to help render the optimistic approach practically viable. In the high-level approach, integration schemes can be utilized so that the database management system is provided with a variety of synchronization methods, each of which can be applied to the appropriate class of transactions. The low-level approach seeks to increase the concurrency of the original optimistic method and improve its performance. In this paper we examine the latter approach and present algorithms that aim at reducing backups and improving throughput. Both single-site and distributed networks are considered. Optimistic schemes using timestamps for fully duplicated and partially duplicated database networks are presented, with emphasis on performance enhancement and on reducing the overall cost of implementation. [ABSTRACT FROM AUTHOR]
- Published
- 1988
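For readers unfamiliar with the optimistic approach, a minimal sketch of the textbook backward-validation step such schemes build on (a generic illustration, not the paper's algorithms): a transaction commits only if its read set is disjoint from the write sets of transactions that committed during its read phase; otherwise it is backed up and restarted.

```python
def validate(read_set: set, committed_write_sets: list) -> bool:
    # committed_write_sets: write sets of transactions that committed
    # while the validating transaction was in its read phase.
    return all(not (read_set & ws) for ws in committed_write_sets)

print(validate({"x", "y"}, [{"y"}]))  # False -> back up and restart
print(validate({"x"}, [{"y"}]))       # True  -> commit
```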
24. PROTEAN: A High-level Petri Net Tool for the Specification and Verification of Communication Protocols.
- Author
-
Billington, Jonathan, Wheeler, Geoffrey R., and Wilbur-Ham, Michael C.
- Subjects
PETRI nets, GRAPH theory, COMPUTER software, SOFTWARE engineering, ENGINEERING, COMPUTER systems
- Abstract
A computer aid for the specification and analysis of computer communication protocols has been developed over a period of 7 years by the Telecom Australia Research Laboratories. It is based on a formal specification technique called Numerical Petri Nets. The computer aid, known as PROTEAN (PROTocol Emulation and Analysis), provides both graphical (color) and textual interfaces to the protocol designer. Numerical Petri Net (NPN) specifications may be created, stored, appended to other NPNs, structured, edited, listed, displayed, and analyzed. Interactive simulation, exhaustive reachability analysis, and several directed graph analysis facilities are provided. Reachability graphs can be automatically laid out and displayed. PROTEAN determines liveness (dead code, deadlocks, and livelocks) from the reachability graph and its strongly connected components. Language analysis, involving the automatic reduction of reachability graphs to language graphs, can be used to study sequences of key system events. This allows a protocol to be compared with its service specification. Elementary cycles of graphs can be generated, allowing interesting cycles to be highlighted on reachability and language graphs. Facilities are provided for debugging the specification, once a problem with the protocol has been discovered. They allow sequences of events, which lead to the undesired behavior, to be traced. The paper commences with a comparison of specification languages, concentrating on extended finite state machines and high-level Petri nets. NPNs and PROTEAN's facilities are then described and illustrated with a simple example. The application of PROTEAN to complex examples is mentioned briefly. A discussion of the approach, its limitations and future work is presented in the context of other developments reported in the literature. Work towards a comprehensive Protocol Engineering Workstation is also discussed. [ABSTRACT FROM AUTHOR]
- Published
- 1988
25. Specification of Synchronizing Processes.
- Author
-
Ramamritham, Krithivasan and Keller, Robert M.
- Subjects
COMPUTER software, COMPUTER programming, ELECTRONIC systems, SOFTWARE engineering, ENGINEERING, COMPUTER systems
- Abstract
The formalism of temporal logic has been suggested to be an appropriate tool for expressing the semantics of concurrent programs. This paper is concerned with the application of temporal logic to the specification of factors affecting the synchronization of concurrent processes. Towards this end, we first introduce a model for synchronization and axiomatize its behavior. SYSL, a very high-level language for specifying synchronization properties, is then described. It is designed using the primitives of temporal logic and features constructs to express properties that affect synchronization in a fairly natural and modular fashion. Since the statements in the language have intuitive interpretations, specifications are humanly readable. In addition, since they possess appropriate formal semantics, unambiguous specifications result. [ABSTRACT FROM AUTHOR]
- Published
- 1983
26. Toolpack—An Experimental Software Development Environment Research Project.
- Author
-
Osterweil, Leon J.
- Subjects
COMPUTER software development, ELECTRONIC systems, SOFTWARE engineering, ENGINEERING, COMPUTER systems, COMPUTER software
- Abstract
This paper discusses the goals and methods of the Toolpack project and in this context discusses the architecture and design of the software system being produced as the focus of the project. Toolpack is presented as an experimental activity in which a large software tool environment is being created for the purpose of general distribution and then careful study and analysis. The paper begins by explaining the motivation for building integrated tool sets. It then proceeds to explain the basic requirements that an integrated system of tools must satisfy in order to be successful and to remain useful both in practice and as an experimental object. The paper then summarizes the tool capabilities that will be incorporated into the environment. It then goes on to present a careful description of the actual architecture of the Toolpack integrated tool system. Finally the Toolpack project experimental plan is presented, and future plans and directions are summarized. [ABSTRACT FROM AUTHOR]
- Published
- 1983
27. Software Function, Source Lines of Code, and Development Effort Prediction: A Software Science Validation.
- Author
-
Albrecht, Allan J. and Gaffney Jr., John E.
- Subjects
COMPUTER software development, COMPUTER software, COMPUTER systems, SOFTWARE engineering, ENGINEERING
- Abstract
One of the most important problems faced by software developers and users is the prediction of the size of a programming system and its development effort. As an alternative to "size," one might deal with a measure of the "function" that the software is to perform. Albrecht [1] has developed a methodology to estimate the amount of the "function" the software is to perform, in terms of the data it is to use (absorb) and to generate (produce). The "function" is quantified as "function points," essentially, a weighted sum of the numbers of "inputs," "outputs," "master files," and "inquiries" provided to, or generated by, the software. This paper demonstrates the equivalence between Albrecht's external input/output data flow representative of a program (the "function points" metric) and Halstead's [2] "software science" or "software linguistics" model of a program as well as the "soft content" variation of Halstead's model suggested by Gaffney [7]. Further, the high degree of correlation between "function points" and the eventual "SLOC" (source lines of code) of the program, and between "function points" and the work-effort required to develop the code, is demonstrated. The "function point" measure is thought to be more useful than "SLOC" as a prediction of work effort because "function points" are relatively easily estimated from a statement of basic requirements for a program early in the development cycle. The strong degree of equivalency between "function points" and "SLOC" shown in the paper suggests a two-step work-effort validation procedure, first using "function points" to estimate "SLOC," and then using "SLOC" to estimate the work-effort. This approach would provide validation of application development work plans and work-effort estimates early in the development cycle. The approach would also more effectively use the existing base of knowledge on producing "SLOC" until a similar base is developed for "function points." The paper assumes that the reader is familiar with the fundamental theory of "software science" measurements and the practice of validating estimates of work-effort to design and implement software applications (programs). If not, a review of [1]-[3] is suggested. [ABSTRACT FROM AUTHOR]
- Published
- 1983
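A minimal sketch of the function-point count as a weighted sum. The average-complexity weights below are the commonly cited ones, and the SLOC-per-function-point factor is a placeholder assumption for illustration; neither is taken from the paper.

```python
# Commonly cited average-complexity weights (assumed, not the paper's values).
WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "master_files": 10}

def function_points(counts: dict) -> int:
    return sum(WEIGHTS[k] * counts.get(k, 0) for k in WEIGHTS)

fp = function_points({"inputs": 12, "outputs": 8, "inquiries": 5, "master_files": 3})

# The paper's two-step idea: function points -> SLOC -> work effort.
SLOC_PER_FP = 100  # language-dependent placeholder assumption
print(fp, fp * SLOC_PER_FP)
```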
28. On the Complexity of the Verification of the Costas Property.
- Author
-
ESCH, JIM
- Subjects
DOPPLER effect, BANDWIDTHS, ELECTRONICS, COMPUTER software, ENGINEERING
- Abstract
The article presents an introduction to a paper about the verification of the Costas property. John P. Costas examined how one could determine unambiguous range and Doppler responses consistent with the overall burst duration and bandwidth. His technique involved the discarding of stage formation and cross-correlation based on the energy contents of the waveform and reflections alone. He created computer programs to perform an ordered, exhaustive search for the arrays. A number of techniques can be used to lessen the complexity of a search for Costas functions.
- Published
- 2009
29. Design Pattern Detection Using Similarity Scoring.
- Author
-
Tsantalis, Nikolaos, Chatzigeorgiou, Alexander, Stephanides, George, and Halkidis, Spyros T.
- Subjects
REVERSE engineering, GRAPH algorithms, COMPUTER software, OPERATIONS research, COMPUTER algorithms, SOFTWARE patterns, ENGINEERING, HEURISTIC, METHODOLOGY, OPEN source software
- Abstract
The identification of design patterns as part of the reengineering process can convey important information to the designer. However, existing pattern detection methodologies generally have problems in dealing with one or more of the following issues: identification of modified pattern versions, search space explosion for large systems, and extensibility to novel patterns. In this paper, a design pattern detection methodology is proposed that is based on similarity scoring between graph vertices. Due to the nature of the underlying graph algorithm, this approach has the ability to also recognize patterns that are modified from their standard representation. Moreover, the approach exploits the fact that patterns reside in one or more inheritance hierarchies, reducing the size of the graphs to which the algorithm is applied. Finally, the algorithm does not rely on any pattern-specific heuristic, facilitating the extension to novel design structures. Evaluation on three open-source projects demonstrated the accuracy and the efficiency of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2006
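A minimal sketch of similarity scoring between graph vertices, assuming the standard normalized coupled fixed-point iteration on adjacency matrices; the paper's method layers pattern-specific information on top of a computation of this general shape.

```python
import numpy as np

def similarity_scores(A: np.ndarray, B: np.ndarray, iters: int = 100) -> np.ndarray:
    # S[i, j] ~ similarity of system vertex i (graph B) to pattern vertex j (graph A).
    S = np.ones((B.shape[0], A.shape[0]))
    for _ in range(iters):
        S = B @ S @ A.T + B.T @ S @ A   # propagate similarity along edges
        S /= np.linalg.norm(S)          # normalization keeps the iteration bounded
    return S

A = np.array([[0, 1], [0, 0]])                   # 2-vertex "pattern" edge
B = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])  # 3-vertex system chain
print(similarity_scores(A, B).round(3))
```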
30. Covering Arrays for Efficient Fault Characterization in Complex Configuration Spaces.
- Author
-
Yilmaz, Cemal, Cohen, Myra B., and Porter, Adam A.
- Subjects
SOFTWARE engineering, COMPUTER software, ELECTRONIC systems, MATHEMATICS, ENGINEERING, TECHNOLOGY, HIGH technology industries, COMPUTER programming
- Abstract
Many modern software systems are designed to be highly configurable so they can run on and be optimized for a wide variety of platforms and usage scenarios. Testing such systems is difficult because, in effect, you are testing a multitude of systems, not just one. Moreover, bugs can and do appear in some configurations, but not in others. Our research focuses on a subset of these bugs that are "option-related"--those that manifest with high probability only when specific configuration options take on specific settings. Our goal is not only to detect these bugs, but also to automatically characterize the configuration subspaces (i.e., the options and their settings) in which they manifest. To improve efficiency, our process tests only a sample of the configuration space, which we obtain from mathematical objects called covering arrays. This paper compares two different kinds of covering arrays for this purpose and assesses the effect of sampling strategy on fault characterization accuracy. Our results strongly suggest that sampling via covering arrays allows us to characterize option-related failures nearly as well as if we had tested exhaustively, but at a much lower cost. We also provide guidelines for using our approach in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2006
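A toy example of the idea behind covering arrays: a strength-2 (pairwise) sample exercises every pair of option settings in far fewer runs than exhaustive testing. The array below is hand-built for illustration, not produced by the paper's tooling.

```python
from itertools import combinations

# 4 rows cover all pairs over 3 boolean options (exhaustive would need 8).
rows = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def covers_all_pairs(rows, k=3, levels=(0, 1)) -> bool:
    for i, j in combinations(range(k), 2):
        seen = {(r[i], r[j]) for r in rows}
        if len(seen) < len(levels) ** 2:
            return False
    return True

print(covers_all_pairs(rows))  # True: every option pair takes all 4 settings
```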
31. Integrating Large-Scale Group Projects and Software Engineering Approaches for Early Computer Science Courses.
- Author
-
Blake, M. Brian
- Subjects
COMPUTER software, SOFTWARE engineering, CYBERNETICS, ENGINEERING, COMPUTER systems, UNIVERSITIES & colleges, CURRICULUM
- Abstract
The utilization of large-scale group projects in early computer science courses has been readily accepted in academia. In these types of projects, students are given a specific portion of a large programming problem to design and develop. Ultimately, the consolidation of all of the independent student projects integrates to form the solution for the large-scale project. Although many studies report on the experience of executing a semester-long course of this nature, course experience at Georgetown University, Washington, DC, shows the benefits of embedding a large-scale project that comprises just a segment of the course (three to four weeks). The success of these types of courses requires an effective process for creating the specific large-scale project. In this paper, an effective process for large-scale group project course development is applied to the second computer science course at Georgetown University. [ABSTRACT FROM AUTHOR]
- Published
- 2005
32. FSM-Based Incremental Conformance Testing Methods.
- Author
-
El-Fakih, Khaled, Yevtushenko, Nina, and Bochmann, Gregor V.
- Subjects
SOFTWARE engineering, COMPUTER software, COMPUTER systems, TECHNICAL specifications, INDUSTRIAL design, ENGINEERING
- Abstract
The development of appropriate test suites is an important issue in conformance testing of protocol implementations and other reactive software systems. A number of methods are known for the development of a test suite based on a specification given in the form of a finite state machine. In practice, the system requirements evolve throughout the lifetime of the system and the specifications are modified incrementally. In this paper, we adapt four well-known test derivation methods, namely, the HIS, W, Wp, and UIOv methods, for generating tests that would test only the modified parts of an evolving specification. Some application examples and experimental results are provided. These results show significant gains when using incremental testing in comparison with complete testing, especially when the modified part represents less than 20 percent of the whole specification. [ABSTRACT FROM AUTHOR]
- Published
- 2004
33. Experience With Teaching Black-Box Testing in a Computer Science/Software Engineering Curriculum.
- Author
-
Chen, T. Y. and Poon, Pak-Lok
- Subjects
COMPUTER software, TESTING, SOFTWARE engineering, ENGINEERING, COMPUTER science, SCIENCE
- Abstract
Software testing is a popular and important technique for improving software quality. There is a strong need for universities to teach testing rigorously to students studying computer science or software engineering. This paper reports the experience of teaching the classification-tree method as a black-box testing technique at the University of Melbourne, Melbourne, Australia, and Swinburne University of Technology, Melbourne, Australia. It aims to foster discussion of appropriate teaching methods of software testing. [ABSTRACT FROM AUTHOR]
- Published
- 2004
34. Event-Based Traceability for Managing Evolutionary Change.
- Author
-
Cleland-Huang, Jane, Chang, Carl K., and Christensen, Mark
- Subjects
COMPUTER software, ENGINEERING, PROJECT management, COMPUTER systems, AUTOMATION, ELECTRONIC systems
- Abstract
Although the benefits of requirements traceability are widely recognized, the actual practice of maintaining a traceability scheme is not always entirely successful. The traceability infrastructure underlying a software system tends to erode over its lifetime, as time-pressured practitioners fail to consistently maintain links and update impacted artifacts each time a change occurs, even with the support of automated systems. This paper proposes a new method of traceability based upon event-notification and is applicable even in a heterogeneous and globally distributed development environment. Traceable artifacts are no longer tightly coupled but are linked through an event service, which creates an environment in which change is handled more efficiently, and artifacts and their related links are maintained in a restorable state. The method also supports enhanced project management for the process of updating and maintaining the system artifacts. [ABSTRACT FROM AUTHOR]
- Published
- 2003
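A minimal sketch of the event-notification idea, with artifacts decoupled through a publish/subscribe service; the class and method names are our own illustration, not the paper's implementation.

```python
class EventService:
    """Decouples traceable artifacts: links go through the event bus."""
    def __init__(self):
        self.subscribers = {}  # artifact id -> list of callbacks

    def subscribe(self, artifact_id, callback):
        self.subscribers.setdefault(artifact_id, []).append(callback)

    def publish(self, artifact_id, change):
        for cb in self.subscribers.get(artifact_id, []):
            cb(change)

bus = EventService()
bus.subscribe("REQ-12", lambda ch: print("design model flagged stale:", ch))
bus.publish("REQ-12", "acceptance criterion tightened")
```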
35. Validating the ISO/IEC 15504 Measure of Software Requirements Analysis Process Capability.
- Author
-
El Emam, Khaled and Birk, Andreas
- Subjects
SOFTWARE engineering, STANDARDS, COMPUTER software, TRUTHFULNESS & falsehood, PERFORMANCE, ENGINEERING
- Abstract
ISO/IEC 15504 is an emerging international standard on software process assessment. It defines a number of software engineering processes and a scale for measuring their capability. One of the defined processes is software requirements analysis (SRA). A basic premise of the measurement scale is that higher process capability is associated with better project performance (i.e., predictive validity). This paper describes an empirical study that evaluates the predictive validity of SRA process capability. Assessments using ISO/IEC 15504 were conducted on 56 projects world-wide over a period of two years. Performance measures on each project were also collected using questionnaires, such as the ability to meet budget commitments and staff productivity. The results provide strong evidence of predictive validity for the SRA process capability measure used in ISO/IEC 15504, but only for organizations with more than 50 IT Staff. Specifically, a strong relationship was found between the implementation of requirements analysis practices as defined in ISO/IEC 15504 and the productivity of software projects. For smaller organizations, evidence of predictive validity was rather weak. This can be interpreted in a number of different ways: that the measure of capability is not suitable for small organizations or that the SRA process capability has less effect on project performance for small organizations. [ABSTRACT FROM AUTHOR]
- Published
- 2000
36. Message of the Editor for Software Engineering Experience.
- Author
-
Munson, Jack
- Subjects
SOFTWARE engineering, ENGINEERING, COMPUTER software
- Abstract
Focuses on the need to apply good software engineering in practice.
- Published
- 1983
37. Reusing Software: Issues and Research Directions.
- Author
-
Mili, Hafedh, Mili, Fatma, and Mili, Ali
- Subjects
COMPUTER software development, ARTIFICIAL intelligence, SOFTWARE engineering, ENGINEERING, COMPUTER software, COMPUTER systems, ELECTRONIC systems
- Abstract
Software productivity has been steadily increasing over the past 30 years, but not enough to close the gap between the demands placed on the software industry and what the state of the practice can deliver [22], [39]; nothing short of an order-of-magnitude increase in productivity will extricate the software industry from its perennial crisis [39], [67]. Several decades of intensive research in software engineering and artificial intelligence have left few alternatives but software reuse as the (only) realistic approach to bring about the gains of productivity and quality that the software industry needs. In this paper, we discuss the implications of reuse on software production, with an emphasis on the technical challenges. Software reuse involves building software that is reusable by design and building with reusable software. Software reuse includes reusing both the products of previous software projects and the processes deployed to produce them, leading to a wide spectrum of reuse approaches, from the building-blocks (reusing products) approach, on one hand, to the generative or reusable-processor (reusing processes) approach, on the other [68]. We discuss the implications of such approaches on the organization, control, and method of software development and discuss proposed models for their economic analysis. Software reuse benefits from methodologies and tools to: 1) build more readily reusable software and 2) locate, evaluate, and tailor reusable software, the last being critical for the building-blocks approach. Both sets of issues are discussed in this paper, with a focus on application generators and OO development for the first, and a thorough discussion of retrieval techniques for software components, component composition (or bottom-up design), and transformational systems for the second. We conclude by highlighting areas that, in our opinion, are worthy of further investigation. [ABSTRACT FROM AUTHOR]
- Published
- 1995
38. The Processor Working Set and Its Use in Scheduling Multiprocessor Systems.
- Author
-
Ghosal, Dipak, Serazzi, Giuseppe, and Tripathi, Satish K.
- Subjects
MULTIPROCESSORS, COMPUTERS, COMPUTER software, SOFTWARE engineering, ENGINEERING, COMPUTER systems, ALGORITHMS, PARALLEL processing
- Abstract
There are two main contributions of this paper. First, this paper introduces the concept of a processor working set (pws) as a single value parameter for characterizing the parallel program behavior. Through detailed experimental studies of different algorithms on a transputer-based multiprocessor machine, it is shown that the pws is indeed a robust measure for characterizing the workload of a multiprocessor system. Small deviations in the performance of algorithms arising due to communication overhead are captured in this parameter. The second contribution of this paper relates to the study of static processor allocation strategies. It is shown that processor allocation strategies based on the pws provide significantly better throughput-delay characteristics. The robustness of pws is further demonstrated by showing that allocation policies that allocate processors more than the pws are inferior in performance to those that never allocate more than the pws—even at a moderately low load. Based on the results, a simple static allocation policy that allocates the pws at low load and adaptively fragments at high load to one processor per job is proposed. This allocation strategy is shown to possess the best throughput-delay characteristic over a wide range of loads. [ABSTRACT FROM AUTHOR]
- Published
- 1991
39. High Performance Software Testing on SIMD Machines.
- Author
-
Krauser, Edward W., Mathur, Aditya P., and Rego, Vernon J.
- Subjects
COMPUTER software, SOFTWARE engineering, ENGINEERING, COMPUTER systems, ELECTRONIC systems
- Abstract
This paper describes a new method, called mutant unification, for high-performance software testing. The method is aimed at supporting program mutation on parallel machines based on the Single Instruction Multiple Data stream (SIMD) paradigm. Several parameters that affect the performance of unification have been identified and their effect on the time to completion of a mutation test cycle and speedup has been studied. Program mutation analysis provides an effective means for determining the reliability of large software systems. It also provides a systematic method for measuring the adequacy of test data. However, it is likely that testing large software systems using mutation is computation bound and prohibitive on traditional sequential machines. Current implementations of mutation tools are unacceptably slow and are only suitable for testing relatively small programs. The unification method reported in this paper provides a practical alternative to the current approaches. It also opens up a new application domain for SIMD machines. [ABSTRACT FROM AUTHOR]
- Published
- 1991
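An evocative sketch, not the paper's mutant-unification method: mutation analysis maps naturally onto data parallelism, since many mutants can be evaluated against many tests in one vectorized step.

```python
import numpy as np

tests = np.array([1, 2, 3, 10])
# Row 0 is the original predicate (x > 2); the rest are relational mutants.
results = np.stack([tests > 2, tests >= 2, tests < 2, tests == 2])
killed = (results != results[0]).any(axis=1)  # some test distinguishes the mutant
print(killed)  # [False  True  True  True]: all three true mutants are killed
```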
40. Implementation of an FP-Shell.
- Author
-
Kamath, Yogeesh H. and Matthews, Manton M.
- Subjects
PROGRAMMING languages, COMPUTER software, ENGINEERING, SOFTWARE engineering
- Abstract
One of the best features of the UNIX™ Shell is that it provides a framework which can be used to build complex programs by interconnecting existing simple programs. However, it is limited to linear combinations of programs, and building of more complex programs must be accomplished by executing sequences of commands. This paper introduces Backus' FP (Functional Programming) as an alternative command language for UNIX. In FP, programs are true functions, and another distinctive feature of FP languages is that they contain functional forms, which are constructs for combining programs to build new programs. Also, the functional style of programming provides a natural way of exploiting parallel machine architecture. In this paper it is shown how to enhance the power of the UNIX Shell by the inclusion of the FP functional forms. The FP-Shell is fully implemented in "C" under UNIX. It interprets standard UNIX commands as FP primitive functions and standard UNIX files as FP system objects. It also serves as a framework for the study of functional programming languages. [ABSTRACT FROM AUTHOR]
- Published
- 1987
41. From Safety Analysis to Software Requirements.
- Author
-
Hansen, Kirsten M., Ravn, Anders P., and Stavridou, Victoria
- Subjects
COMPUTER software ,INDUSTRIAL safety ,TREE graphs ,FORMAL methods (Computer science) ,COMPONENT software ,SYSTEMS design ,ENGINEERING - Abstract
Software for safety-critical systems must deal with the hazards identified by safety analysis. This paper investigates how the results of one safety analysis technique, fault trees, can be interpreted as software safety requirements for use in the program design process. We propose that fault tree analysis and program development use the same system model. This model is formalized in a real-time interval logic based on a conventional dynamic systems model with state evolving over time. Fault trees are interpreted as temporal formulas, and it is shown how such formulas can be used to derive safety requirements for software components. [ABSTRACT FROM AUTHOR] (An illustrative formula in this spirit follows this entry.)
- Published
- 1998
- Full Text
- View/download PDF
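A hedged illustration of the reading described above, in generic temporal notation (the paper's own interval-logic formalization may differ): an OR gate stating that hazard H can arise only from causes C1 or C2 yields, by contraposition, a safety requirement on the software components.

    % Hedged sketch in generic temporal notation; Box means "always".
    \[
      \Box\,\big(H \Rightarrow C_1 \vee C_2\big) \;\text{(OR gate)}
      \qquad\vdash\qquad
      \Box\,(\neg C_1 \wedge \neg C_2) \;\Rightarrow\; \Box\,\neg H
      \;\text{(derived safety requirement)}
    \]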
42. Data Structures for Parallel Resource Management.
- Author
-
Biswas, Jit and Browne, James C.
- Subjects
COMPUTER architecture ,COMPUTER input-output equipment ,COMPUTER software ,ENGINEERING ,SOFTWARE engineering - Abstract
The problem of resource management for many-processor architectures can be viewed as the problem of simultaneously updating data structures that hold system state. The consistency requirements of strict data structures introduce serialization into the resource management functions, leading to performance bottlenecks in highly parallel systems. Our approach is to examine the possibility of using structures with weakened specifications. Specifically, this paper introduces data structures that weaken the specification of a priority queue, permitting it to be updated simultaneously by multiple processes. Two structures are proposed, the concurrent heap and the software banyan, along with their associated update algorithms. The algorithms are shown to possess attractive properties of simultaneous update and throughput. The results of simulations and actual implementations show that such data structures can significantly improve the execution times of parallel algorithms. These structures are proposed as basic building blocks for implementing resource allocation in operating systems. [ABSTRACT FROM AUTHOR] (An illustrative sketch of the weakened-specification idea follows this entry.)
- Published
- 1993
- Full Text
- View/download PDF
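A hedged sketch of the weakened-specification principle: several independently locked sub-heaps replace one strict priority queue, so concurrent inserts and delete-mins rarely contend, at the cost of delete_min returning only a near-minimum. This multi-heap relaxation illustrates the idea; it is not the paper's concurrent heap or software banyan.

    # Hedged sketch: a relaxed priority queue with weakened semantics.
    import heapq
    import random
    import threading

    class RelaxedPriorityQueue:
        def __init__(self, n_subheaps: int = 4):
            self._heaps = [[] for _ in range(n_subheaps)]
            self._locks = [threading.Lock() for _ in range(n_subheaps)]

        def insert(self, priority: int) -> None:
            # Inserts to different sub-heaps proceed fully in parallel.
            i = random.randrange(len(self._heaps))
            with self._locks[i]:
                heapq.heappush(self._heaps[i], priority)

        def delete_min(self):
            # Pop the minimum of one randomly chosen non-empty sub-heap.
            # This is only a near-minimum overall: the weakened spec that
            # removes the serialization bottleneck.
            order = random.sample(range(len(self._heaps)), len(self._heaps))
            for i in order:
                with self._locks[i]:
                    if self._heaps[i]:
                        return heapq.heappop(self._heaps[i])
            return None  # queue empty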
43. Clarifying Some Fundamental Concepts in Software Testing.
- Author
-
Parrish, Allen S. and Zweben, Stuart H.
- Subjects
ELECTRONIC systems ,COMPUTER software ,ENGINEERING ,SOFTWARE engineering - Abstract
A software test data adequacy criterion is a means for determining whether a test set is sufficient, or "adequate," for testing a given program. Previous work has proposed a set of properties that useful adequacy criteria should satisfy. In this paper, we identify additional properties of useful adequacy criteria that are appropriate under certain realistic models of testing. We then discuss modifications to the formal definitions of certain popular adequacy criteria to make them consistent with these additional properties. [ABSTRACT FROM AUTHOR] (An illustrative sketch of an adequacy criterion as a predicate follows this entry.)
- Published
- 1993
- Full Text
- View/download PDF
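A hedged sketch of the underlying notion: an adequacy criterion is a predicate over a program and a test set. Statement coverage stands in here for the criteria the paper formalizes; the `trace` helper and its use of Python's tracing hook are illustrative assumptions.

    # Hedged sketch: a test data adequacy criterion as a predicate.
    import sys
    from typing import Callable, Iterable, Set

    def trace(program: Callable, test_input) -> Set[int]:
        """Run `program` on one input, recording executed line numbers."""
        lines: Set[int] = set()
        code = program.__code__

        def tracer(frame, event, arg):
            if event == "line" and frame.f_code is code:
                lines.add(frame.f_lineno)
            return tracer

        sys.settrace(tracer)
        try:
            program(test_input)
        finally:
            sys.settrace(None)
        return lines

    def statement_adequate(program: Callable, tests: Iterable,
                           all_lines: Set[int]) -> bool:
        """Adequacy criterion: the test set covers every statement."""
        covered: Set[int] = set()
        for t in tests:
            covered |= trace(program, t)
        return all_lines <= covered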
44. Block Access Estimation for Clustered Data Using a Finite LRU Buffer.
- Author
-
Grandi, Fabio and Scalas, Maria Rita
- Subjects
DATABASES ,INFORMATION storage & retrieval systems ,COMPUTER software ,ENGINEERING ,SOFTWARE engineering - Abstract
Data access cost evaluation is fundamental in the design and management of database systems. When some data items have duplicates, a clustering effect is observed that can heavily influence access costs. The availability of only a finite amount of buffer memory in real systems has an even more dramatic impact. In this paper, a comprehensive cost model for clustered data retrieval through an index using a finite buffer is presented. Our approach combines and extends previous models based either on a finite buffer or on uniform data clustering assumptions. The computational cost of the formulas we propose is independent of the data size and of the query cardinality, and they require only a single statistic per search key, the clustering factor, to be maintained by the system. The predictive power and accuracy of the model are shown by comparison with actual costs from simulations. [ABSTRACT FROM AUTHOR] (A classical baseline formula that models of this family extend follows this entry.)
- Published
- 1993
- Full Text
- View/download PDF
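For orientation, a classical zero-buffer baseline that models in this family refine is Yao's estimate of the expected number of distinct blocks touched when k of n records, packed n/m per block, are retrieved; the clustering-factor and finite-LRU refinements of this paper adjust such an estimate in ways not reproduced here.

    % Classical baseline (Yao, 1977): expected blocks accessed for a
    % retrieval of k records out of n records stored in m blocks.
    \[
      B(k) \;=\; m\left(1 - \prod_{i=1}^{k}\frac{n - n/m - i + 1}{n - i + 1}\right)
    \]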
45. Software Performance Engineering: A Case Study Including Performance Comparison with Design Alternatives.
- Author
-
Smith, Connie U. and Williams, Lloyd G.
- Subjects
ELECTRONIC systems ,COMPUTER software ,ENGINEERING ,SOFTWARE engineering - Abstract
Software Performance Engineering (SPE) provides an approach to constructing systems that meet performance objectives. This paper illustrates the application of SPE to an example with some real-time properties and demonstrates how to compare the performance characteristics of design alternatives. We show how SPE can be integrated with design methods and demonstrate that performance requirements can be achieved without sacrificing other desirable design qualities such as understandability, maintainability, and reusability. [ABSTRACT FROM AUTHOR] (An illustrative sketch of comparing alternatives with a simple execution model follows this entry.)
- Published
- 1993
- Full Text
- View/download PDF
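A hedged sketch of the software execution model at the heart of SPE: a scenario's best-case response time is the sum of its steps' resource demands, and design alternatives are compared by totaling their graphs. The step names and demand figures below are invented for illustration.

    # Hedged sketch: comparing two design alternatives with a minimal SPE
    # software execution model. Steps and demand figures are invented.
    from dataclasses import dataclass

    @dataclass
    class Step:
        name: str
        cpu_ms: float   # CPU demand per execution
        io_ops: int     # physical I/O operations per execution

    IO_MS = 20.0        # assumed cost of one I/O, in milliseconds

    def response_time(scenario: list) -> float:
        """Best-case (no-contention) response time of a scenario."""
        return sum(s.cpu_ms + s.io_ops * IO_MS for s in scenario)

    design_a = [Step("parse", 2.0, 0), Step("query", 5.0, 4),
                Step("draw", 8.0, 0)]
    design_b = [Step("parse", 2.0, 0), Step("cached query", 6.0, 1),
                Step("draw", 8.0, 0)]

    # Usage: design B trades a little CPU for three fewer I/Os per request.
    print(response_time(design_a), response_time(design_b))  # 95.0 vs 36.0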
46. On Some Reliability Estimation Problems in Random and Partition Testing.
- Author
-
Tsoukalas, Markos Z., Duran, Joe W., and Ntafos, Simeon C.
- Subjects
COMPUTER software ,ENGINEERING ,SOFTWARE engineering - Abstract
Studies have shown that random testing can be an effective testing strategy. One of the goals of testing is to estimate the reliability of the program from the test outcomes. In this paper, we extend the Thayer-Lipow-Nelson reliability model to account for the cost of errors. We also compare random testing with partition testing by examining upper confidence bounds for the cost-weighted performance of the two strategies. [ABSTRACT FROM AUTHOR] (An illustrative rendering of the baseline model and a cost-weighted variant follows this entry.)
- Published
- 1993
- Full Text
- View/download PDF
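A hedged rendering of the baseline being extended, with cost weighting added in the spirit the abstract describes: the operational profile assigns probability p_i to input subdomain i, theta_i is that subdomain's failure rate, and c_i an assumed cost per failure. The paper's exact formulation may differ.

    % Hedged sketch: Nelson-style reliability estimate and a cost-weighted
    % variant; p_i: profile probability, theta_i: failure rate, c_i: cost.
    \[
      R \;=\; 1 - \sum_{i} p_i\,\theta_i
      \qquad\longrightarrow\qquad
      C \;=\; \sum_{i} p_i\,c_i\,\theta_i
    \]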
47. Using Transformations in Specification-Based Prototyping.
- Author
-
Berzins, Valdis and Yehudai, Amiram
- Subjects
SOFTWARE engineering ,ENGINEERING ,COMPUTER software ,COMPUTER systems - Abstract
We explore the use of software transformations for software evolution. Meaning-preserving program transformations have been widely used for program development from a fixed initial specification. We consider a wider class of transformations to support development in which the specification evolves rather than being fixed in advance. We present a new and general classification of transformations based on their effect on system interfaces, externally observable behavior, and the abstraction level of a system description. This classification is used to rearrange chronological derivation sequences containing meaning-changing transformations into lattices containing only meaning-preserving transformations. The paper describes a process model for software evolution utilizing prototyping techniques and shows how this class of transformations can support such a process. A set of examples illustrates our ideas. Software tool support and directions for future research are discussed. [ABSTRACT FROM AUTHOR] (An illustrative sketch of the classification axes follows this entry.)
- Published
- 1993
- Full Text
- View/download PDF
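A hedged sketch of the classification dimensions the abstract names, as a small record type; the three axes come from the abstract, but the field names and the meaning-preserving test are illustrative.

    # Hedged sketch: classifying a transformation by its effect on the
    # three axes the abstract names. Field names are illustrative.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Transformation:
        changes_interface: bool            # visible system interfaces
        changes_behavior: bool             # externally observable behavior
        changes_abstraction_level: bool    # e.g. refinement: spec -> code

        @property
        def meaning_preserving(self) -> bool:
            # Only behavior-preserving steps may appear in the rearranged
            # derivation lattices the paper constructs.
            return not self.changes_behavior

    refine = Transformation(False, False, True)   # classic refinement step
    evolve = Transformation(True, True, False)    # specification evolution
    assert refine.meaning_preserving and not evolve.meaning_preserving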
48. A Comparison of Function Point Counting Techniques.
- Author
-
Jeffery, D. R., Low, G. C., and Barnes, M.
- Subjects
COMPUTER software development ,SOFTWARE engineering ,ENGINEERING ,COMPUTER software ,COMPUTER systems - Abstract
Effective management of the software development process requires that management be able to estimate total development effort and cost. One of the fundamental problems associated with effort and cost estimation is the a priori estimation of software size. Function point analysis has emerged over the last decade as a popular tool for this task. Following its use over this time, however, a number of criticisms of the method have emerged. These criticisms relate to the way in which function counts are calculated and to the impact of the processing complexity adjustment on the function point count. SPQR/20 function points, among others, are claimed to overcome some of these criticisms. This paper compares the SPQR/20 function point method with traditional function point analysis as a measure of software size in an empirical study of MIS environments. In a study of 64 projects in one organization, both methods appeared equally satisfactory; however, one method should be used consistently, since the individual counts differ considerably. [ABSTRACT FROM AUTHOR] (An illustrative traditional function point computation follows this entry.)
- Published
- 1993
- Full Text
- View/download PDF
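A hedged sketch of a traditional Albrecht-style function point count, using the standard IFPUG component weights and value adjustment factor; the SPQR/20 variant compared in the paper computes its counts differently and is not reproduced here. The example component counts are invented.

    # Hedged sketch: traditional Albrecht-style function point counting.
    WEIGHTS = {                         # (simple, average, complex)
        "external_input":     (3, 4, 6),
        "external_output":    (4, 5, 7),
        "external_inquiry":   (3, 4, 6),
        "internal_file":      (7, 10, 15),
        "external_interface": (5, 7, 10),
    }

    def unadjusted_fp(counts: dict) -> int:
        """counts maps component type -> (n_simple, n_average, n_complex)."""
        return sum(n * w
                   for kind, ns in counts.items()
                   for n, w in zip(ns, WEIGHTS[kind]))

    def adjusted_fp(ufp: int, gsc_ratings: list) -> float:
        """Apply the value adjustment factor: 0.65 + 0.01 * sum of the
        14 general system characteristics, each rated 0..5."""
        return ufp * (0.65 + 0.01 * sum(gsc_ratings))

    # Usage: a small MIS application; ratings assumed for illustration.
    ufp = unadjusted_fp({"external_input":  (3, 2, 0),
                         "external_output": (1, 2, 1),
                         "internal_file":   (0, 2, 0)})
    print(ufp, adjusted_fp(ufp, [3] * 14))  # 58 and ~62.06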
49. Early Experience with the Visual Programmer's WorkBench.
- Author
-
Rubin, Robert V., Walker II, James, and Golin, Eric J.
- Subjects
VISUAL programming (Computer science) ,VISUAL programming languages (Computer science) ,SOFTWARE engineering ,COMPUTER programming ,CONCURRENT engineering ,ENGINEERING ,COMPUTER algorithms ,ELECTRONIC data processing ,COMPUTER software - Abstract
Diagrams play a central role in software engineering. They are used for specifying design elements such as requirements, concurrent systems, database models and interactive systems. Families of diagrams form visual languages, and creating such diagrams constitutes visual programming. The Visual Programmer's WorkBench (VPW) addresses the rapid synthesis of programming environments for the specification, analysis, and execution of visual programs. A language-based environment for a specific visual language is generated in VPW from a specification of the syntactic structure, the abstract structure, the static semantics and the dynamic semantics of the language. VPW is built around a model of distributed processing based on shared distributed memory. This framework is used both in defining the architecture of the environment and for the execution model of visual languages. The Visual Programmer's WorkBench has been used to experiment with visual programming environments for several visual languages. This paper describes the design of the Visual Programmer's WorkBench and our experience using it to generate a distributed programming environment for a concurrent visual language. [ABSTRACT FROM AUTHOR]
- Published
- 1990
- Full Text
- View/download PDF
50. Semantic Feedback in the Higgens UIMS.
- Author
-
Hudson, Scott E. and King, Roger
- Subjects
USER interfaces ,HUMAN-computer interaction ,COMPUTER systems ,COMPUTER software ,ENGINEERING ,SOFTWARE engineering - Abstract
In systems with graphical user interfaces, one of the most difficult tasks facing the implementer is translating graphical inputs into the concepts and structures appropriate to the application. Conversely, translating application structures into graphical images can be equally difficult. Almost all applications using interactive computer graphics contain important structures and concepts that are deeper than the geometries used to display them to the user. One of the major tasks of the system implementer is to make the user interface reflect this deeper structure accurately so that it can be directly manipulated by the user. Currently, there are few tools to aid the implementer in this task. This paper describes a new tool, the Higgens user interface management system, which can automate much of this task for a wide class of systems using interactive computer graphics. By combining new graphical techniques with new techniques for incremental attribute evaluation, Higgens generates graphical user interfaces automatically from a high-level interface specification. These specifications are primarily nonprocedural: they describe how graphical images are derived and updated from application entities, and how graphical inputs are translated back into terms appropriate to the application. Because these translations are performed automatically from a high-level specification, much of the difficulty of creating graphical interfaces can be eliminated. [ABSTRACT FROM AUTHOR] (An illustrative sketch of incremental attribute evaluation follows this entry.)
- Published
- 1988
- Full Text
- View/download PDF
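A hedged sketch of incremental attribute evaluation, the technique the abstract credits for automatic semantic feedback: attributes record their dependents, an edit marks only the affected subgraph dirty, and demand-driven re-evaluation touches only dirty attributes. The graph API is invented for illustration; it is not the Higgens specification language.

    # Hedged sketch: incremental attribute evaluation with dirty marking.
    class Attr:
        def __init__(self, name, compute, deps=()):
            self.name, self._compute, self.deps = name, compute, list(deps)
            self.dependents = []
            for d in self.deps:
                d.dependents.append(self)
            self.value, self.dirty = None, True

        def set(self, value):
            """An external edit: set a source attribute, dirty downstream."""
            self.value, self.dirty = value, False
            self._dirty_dependents()

        def _dirty_dependents(self):
            for d in self.dependents:
                if not d.dirty:
                    d.dirty = True
                    d._dirty_dependents()

        def get(self):
            """Demand-driven re-evaluation of dirty attributes only."""
            if self.dirty:
                self.value = self._compute(*(d.get() for d in self.deps))
                self.dirty = False
            return self.value

    # Usage: a displayed area attribute tracks edits to width and height.
    w = Attr("width", None)
    h = Attr("height", None)
    area = Attr("area", lambda a, b: a * b, deps=[w, h])
    w.set(3); h.set(4)
    print(area.get())   # 12
    w.set(5)            # dirties only `area`, not unrelated attributes
    print(area.get())   # 20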