489 results
Search Results
2. NDT. A Model-Driven Approach for Web Requirements.
- Author
-
Escalona, María José and Aragón, Gustavo
- Subjects
- *
WORLD Wide Web , *COMPUTER systems , *ELECTRONIC systems , *SOFTWARE engineering , *SOFTWARE measurement , *WEBSITES - Abstract
Web engineering is a new research line in software engineering that covers the definition of processes, techniques, and models suitable for Web environments in order to guarantee the quality of results. The research community is working in this area and, as a very recent line, they are assuming the Model-Driven paradigm to support and solve some classic problems detected in Web developments. However, there is a gap in the treatment of Web requirements. This paper presents a general vision of Navigational Development Techniques (NDT), which is an approach to deal with requirements in Web systems. It is based on conclusions obtained in several comparative studies and it tries to fill some gaps detected by the research community. This paper presents its scope, its most important contributions, and offers a global vision of its associated tool: NDT-Tool. Furthermore, it analyzes how Web Engineering can be applied in the enterprise environment. NDT is being applied in real projects and has been adopted by several companies as a requirements methodology. The approach offers a Web requirements solution based on a Model-Driven paradigm that follows the tendencies most widely accepted in Web engineering. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
3. Using the Conceptual Cohesion of Classes for Fault Prediction in Object-Oriented Systems.
- Author
-
Marcus, Andrian, Poshyvanyk, Denys, and Ferenc, Rudolf
- Subjects
- *
OBJECT-oriented methods (Computer science) , *COMPUTER software , *SOFTWARE engineering , *COMPUTATIONAL linguistics , *SOURCE code , *COMPUTER systems - Abstract
High cohesion is a desirable property of software as it positively impacts understanding, reuse, and maintenance. Currently proposed measures for cohesion in Object-Oriented (OO) software reflect particular interpretations of cohesion and capture different aspects of it. Existing approaches are largely based on using the structural information from the source code, such as attribute references in methods, to measure cohesion. This paper proposes a new measure for the cohesion of classes in OO software systems based on the analysis of the unstructured information embedded in the source code, such as comments and identifiers. The measure, named the Conceptual Cohesion of Classes (C3), is inspired by the mechanisms used to measure textual coherence in cognitive psychology and computational linguistics. This paper presents the principles and the technology that stand behind the C3 measure. A large case study on three open source software systems is presented which compares the new measure with an extensive set of existing metrics and uses them to construct models that predict software faults. The case study shows that the novel measure captures different aspects of class cohesion compared to any of the existing cohesion measures. In addition, combining C3 with existing structural cohesion metrics proves to be a better predictor of faulty classes when compared to different combinations of structural cohesion metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
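A rough, hypothetical sketch of the idea behind the C3 measure described in the record above: represent each method of a class by the text of its identifiers and comments, compute pairwise textual similarity, and average over all method pairs. The paper builds the measure on Latent Semantic Indexing; plain TF-IDF cosine similarity stands in for it here, and all names and example data are ours.

```python
# Sketch only: TF-IDF cosine similarity as a stand-in for the LSI-based
# similarity used by the actual C3 measure.
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def conceptual_cohesion(method_texts):
    """method_texts: one string per method, holding its identifiers and comments."""
    if len(method_texts) < 2:
        return 1.0                      # a single-method class is trivially cohesive
    vectors = TfidfVectorizer().fit_transform(method_texts)
    sims = cosine_similarity(vectors)
    pairs = list(combinations(range(len(method_texts)), 2))
    average = sum(sims[i, j] for i, j in pairs) / len(pairs)
    return max(average, 0.0)            # clamp, mirroring the non-negative definition

# Two methods about token parsing and one about logging: low cohesion expected.
print(conceptual_cohesion([
    "parse token stream read next token",
    "peek token stream parse expression",
    "write log message to file",
]))
```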
4. The Role of Deliberate Artificial Design Elements in Software Engineering Experiments.
- Author
-
Hannay, Jo E. and Jorgensen, Magne
- Subjects
- *
SOFTWARE engineering , *SYSTEMS design , *COMPUTER software , *COMPUTER systems , *ENGINEERING , *DESIGN - Abstract
Increased realism in software engineering experiments is often promoted as an important means of increasing generalizability and industrial relevance. In this context, artificiality, e.g., the use of constructed tasks in place of realistic tasks, is seen as a threat. In this paper, we examine the opposite view that deliberately introduced artificial design elements may increase knowledge gain and enhance both generalizability and relevance. In the first part of this paper, we identify and evaluate arguments and examples in favor of and against deliberately introducing artificiality into software engineering experiments. We find that there are good arguments in favor of deliberately introducing artificial design elements to 1) isolate basic mechanisms, 2) establish the existence of phenomena, 3) enable generalization from particularly unfavorable to more favorable conditions (persistence of phenomena), and 4) relate experiments to theory. In the second part of this paper, we summarize a content analysis of articles that report software engineering experiments published over a 10-year period from 1993 to 2002. The analysis reveals a striving for realism and external validity, but little awareness of when and for what purposes various degrees of artificiality and realism are appropriate. Furthermore, much of the focus on realism seems to be based on a narrow understanding of the nature of generalization. We conclude that an increased awareness and deliberation as to when and for what purposes both artificial and realistic design elements are applied is valuable for better knowledge gain and quality in empirical software engineering experiments. We also conclude that time spent on studies that have obvious threats to validity that are due to artificiality might be better spent on studies that investigate research questions for which artificiality is a strength rather than a weakness. However, arguments in favor of artificial design elements should not be used to justify studies that are badly designed or that have research questions of low relevance. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
5. Classifying Software Changes: Clean or Buggy?
- Author
-
Sunghun Kim, Whitehead Jr., E. James, and Yi Zhang
- Subjects
- *
COMPUTER software , *MACHINE learning , *SOURCE code , *OPEN source software , *COMPUTER systems - Abstract
This paper introduces a new technique for predicting latent software bugs, called change classification. Change classification uses a machine learning classifier to determine whether a new software change is more similar to prior buggy changes or clean changes. In this manner, change classification predicts the existence of bugs in software changes. The classifier is trained using features (in the machine learning sense) extracted from the revision history of a software project stored in its software configuration management repository. The trained classifier can classify changes as buggy or clean, with a 78 percent accuracy and a 60 percent buggy change recall on average. Change classification has several desirable qualities: 1) The prediction granularity is small (a change to a single file), 2) predictions do not require semantic information about the source code, 3) the technique works for a broad array of project types and programming languages, and 4) predictions can be made immediately upon the completion of a change. Contributions of this paper include a description of the change classification approach, techniques for extracting features from the source code and change histories, a characterization of the performance of change classification across 12 open source projects, and an evaluation of the predictive power of different groups of features. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
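The workflow summarized in the record above (extract features from each change, train a classifier on changes labeled buggy or clean in the revision history, classify new changes) can be sketched in a few lines. This is a toy illustration under our own assumptions: the paper's feature extraction and classifiers are far richer, and the training data below is invented.

```python
# Toy sketch of change classification: bag-of-words over changed lines,
# labels taken from whether the change was later implicated in a bug fix.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

history = [
    ("if (ptr != NULL) free(ptr)", 0),                 # clean
    ("for (i = 0; i <= n; i++) sum += a[i]", 1),       # off-by-one, later fixed
    ("return strdup(name)", 0),                        # clean
    ("memcpy(dst, src, strlen(src))", 1),              # missing terminator, later fixed
]
texts, labels = zip(*history)

model = make_pipeline(CountVectorizer(token_pattern=r"\S+"), MultinomialNB())
model.fit(texts, labels)

new_change = "for (j = 0; j <= m; j++) total += b[j]"
print("buggy" if model.predict([new_change])[0] else "clean")
```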
6. On the Semantics of Associations and Association Ends in UML.
- Author
-
Milićev, Dragan
- Subjects
- *
SOFTWARE engineering , *UNIFIED modeling language , *COMPUTER software development , *SEMANTICS , *OBJECT-oriented methods (Computer science) , *COMPUTER systems - Abstract
Association is one of the key concepts in UML that is intensively used in conceptual modeling. Unfortunately, in spite of the fact that this concept is very old and is inherited from other successful modeling techniques, a fully unambiguous understanding of it, especially in correlation with other newer concepts connected with association ends, such as uniqueness, still does not exist. This paper describes a problem with one widely assumed interpretation of the uniqueness of association ends, the restrictive interpretation, and proposes an alternative, the intentional interpretation. Instead of restricting the association from having duplicate links, uniqueness of an association end in the intentional interpretation modifies the way in which the association end maps an object of the opposite class to a collection of objects of the class at that association end. If the association end is unique, the collection is a set obtained by projecting the collection of all linked objects; in that sense, the uniqueness of an association end modifies the view of the objects at that end, but does not constrain the underlying object structure. This paper demonstrates how the intentional interpretation improves expressiveness of the modeling language and has some other interesting advantages. Finally, this paper gives a completely formal definition of the concepts of association and association ends, along with the related notions of uniqueness, ordering, and multiplicity. The semantics of the UML actions on associations are also defined formally. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
7. Triggered Message Sequence Charts.
- Author
-
Sengupta, Bikram and Cleaveland, Rance
- Subjects
- *
DISTRIBUTED computing , *SEMANTICS , *COMPUTER science research , *COMPUTATIONAL mathematics , *MATHEMATICAL notation , *COMPUTER systems - Abstract
This paper introduces Triggered Message Sequence Charts (TMSCs), a graphical, mathematically well-founded framework for capturing scenario-based systems requirements of distributed systems. Like Message Sequence Charts (MSCs), TMSCs are graphical depictions of scenarios, or exchanges of messages between processes in a distributed system. Unlike MSCs, however, TMSCs are equipped with a notion of trigger that permits requirements to be made conditional, a notion of partiality indicating that a scenario may be subsequently extended, and a notion of refinement for assessing whether or not a more detailed specification correctly elaborates on a less detailed one. The TMSC notation also includes a collection of composition operators allowing structure to be introduced into scenario specifications so that interactions among different scenarios may be studied. In the first part of this paper, TMSCs are introduced and their use in support of requirements modeling is illustrated via two extended examples. The second part develops the mathematical underpinnings of the language. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
8. X-FEDERATE: A Policy Engineering Framework for Federated Access Management.
- Author
-
Bhatti, Rafae, Bertino, Elisa, and Ghafoor, Arif
- Subjects
- *
MANAGEMENT , *INFORMATION sharing , *INFORMATION resources management , *SECURITY management , *COMPUTER systems , *DIGITAL libraries , *METHODOLOGY , *PARADIGMS (Social sciences) - Abstract
Policy-Based Management (PBM) has been considered as a promising approach for design and enforcement of access management policies for distributed systems. The increasing shift toward federated information sharing in the organizational landscape, however, calls for revisiting current PBM approaches to satisfy the unique security requirements of the federated paradigm. This presents a twofold challenge for the design of a PBM approach, where, on the one hand, the policy must incorporate the access management needs of the individual systems, while, on the other hand, the policies across multiple systems must be designed in such a manner that they can be uniformly developed, deployed, and integrated within the federated system. In this paper, we analyze the impact of security management challenges on policy design and formulate a policy engineering methodology based on principles of software engineering to develop a PBM solution for federated systems. We present X-FEDERATE, a policy engineering framework for federated access management using an extension of the well-known Role-Based Access Control (RBAC) model. Our framework consists of an XML-based policy specification language, its UML-based meta-model, and an enforcement architecture. We provide a comparison of our framework with related approaches and highlight its significance for federated access management. The paper also presents a federation protocol and discusses a prototype of our framework that implements the protocol in a federated digital library environment. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
9. Software Reuse Research: Status and Future.
- Author
-
Frakes, William B. and Kyo Kang
- Subjects
- *
COMPUTER software , *PUBLICATIONS , *CONFERENCES & conventions , *PROBLEM solving , *RESEARCH , *COMPUTER systems - Abstract
This paper briefly summarizes software reuse research, discusses major research contributions and unsolved problems, provides pointers to key publications, and introduces four papers selected from The Eighth International Conference on Software Reuse (ICSR8). [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
10. An Empirical Investigation of the Key Factors for Success in Software Process Improvement.
- Author
-
Dybå, Tore
- Subjects
- *
COMPUTER systems , *COMPUTER software , *ORGANIZATION , *INFORMATION technology , *COMPUTER networks , *COMPUTER science - Abstract
Understanding how to implement software process improvement (SPI) successfully is arguably the most challenging issue facing the SPI field today. The SPI literature contains many case studies of successful companies and descriptions of their SPI programs. However, the research efforts to date are limited and inconclusive and without adequate theoretical and psychometric justification. This paper extends and integrates models from prior research by performing an empirical investigation of the key factors for success in SPI. A quantitative survey of 120 software organizations was designed to test the conceptual model and hypotheses of the study. The results indicate that success depends critically on six organizational factors, which explained more than 50 percent of the variance in the outcome variable. The main contribution of the paper is to increase the understanding of the influence of organizational issues by empirically showing that they are at least as important as technology for succeeding with SPI and, thus, to provide researchers and practitioners with important new insights regarding the critical factors of success in SPI. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
11. Retargeting Sequential Image-Processing Programs for Data Parallel Execution.
- Author
-
Baumstark, Jr., Lewis B. and Wills, Linda M.
- Subjects
- *
SOFTWARE compatibility , *COMPUTER software , *COMPUTER systems , *COMPUTER architecture , *COMPUTER science - Abstract
New compact, low-power implementation technologies for processors and imaging arrays can enable a new generation of portable video products. However, software compatibility with large bodies of existing applications written in C prevents more efficient, higher performance data parallel architectures from being used in these embedded products. If this software could be automatically retargeted explicitly for data parallel execution, product designers could incorporate these architectures into embedded products. The key challenge is exposing the parallelism that is inherent in these applications but that is obscured by artifacts imposed by sequential programming languages. This paper presents a recognition-based approach for automatically extracting a data parallel program model from sequential image processing code and retargeting it to data parallel execution mechanisms. The explicitly parallel model presented, called multidimensional data flow (MDDF), captures a model of how operations on data regions (e.g., rows, columns, and tiled blocks) are composed and interact. To extract an MDDF model, a partial recognition technique is used that focuses on identifying array access patterns in loops, transforming only those program elements that hinder parallelization, while leaving the core algorithmic computations intact. The paper presents results of retargeting a set of production programs to a representative data parallel processor array to demonstrate the capacity to extract parallelism using this technique. The retargeted applications yield a potential execution throughput limited only by the number of processing elements, exceeding thousands of instructions per cycle in massively parallel implementations. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
12. A Taxonomy and Catalog of Runtime Software-Fault Monitoring Tools.
- Author
-
Delgado, Nelly, Gates, Ann Quiroz, and Roach, Steve
- Subjects
- *
COMPUTER software , *TAXONOMY , *ELECTRIC machinery monitoring , *COMPUTER systems , *LITERATURE , *RESEARCH - Abstract
A goal of runtime software-fault monitoring is to observe software behavior to determine whether it complies with its intended behavior. Monitoring allows one to analyze and recover from detected faults, providing additional defense against catastrophic failure. Although runtime monitoring has been in use for over 30 years, there is renewed interest in its application to fault detection and recovery, largely because of the increasing complexity and ubiquitous nature of software systems. This paper presents a taxonomy that developers and researchers can use to analyze and differentiate recent developments in runtime software-fault monitoring approaches. The taxonomy categorizes the various runtime monitoring research by classifying the elements that are considered essential for building a monitoring system, i.e., the specification language used to define properties; the monitoring mechanism that oversees the program's execution; and the event handler that captures and communicates monitoring results. After describing the taxonomy, the paper presents the classification of the software-fault monitoring systems described in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
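The three elements the taxonomy above treats as essential (a property specification, a monitoring mechanism, and an event handler) can be pictured with a minimal sketch. The code below is a generic Python illustration, not a reconstruction of any catalogued tool, and every name in it is made up.

```python
class MonitorViolation(Exception):
    """Raised by the event handler when a specified property is violated."""

def handle_violation(what, args):                      # event handler
    raise MonitorViolation(f"{what} violated for arguments {args}")

def monitored(pre, post):
    """Property specification: a precondition and a postcondition per function."""
    def wrap(fn):
        def inner(*args):                              # monitoring mechanism
            if not pre(*args):
                handle_violation(f"precondition of {fn.__name__}", args)
            result = fn(*args)
            if not post(result, *args):
                handle_violation(f"postcondition of {fn.__name__}", args)
            return result
        return inner
    return wrap

@monitored(pre=lambda x: x >= 0, post=lambda r, x: abs(r * r - x) < 1e-6)
def newton_sqrt(x):                                    # the program being observed
    guess = x or 1.0
    for _ in range(60):
        guess = 0.5 * (guess + x / guess)
    return guess

print(newton_sqrt(2.0))                                # satisfies both properties
try:
    newton_sqrt(-1.0)                                  # precondition violation
except MonitorViolation as err:
    print("monitor report:", err)
```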
13. Synthesis of Behavioral Models from Scenarios.
- Author
-
Uchitel, Sebastian, Kramer, Jeff, and Magee, Jeff
- Subjects
- *
COMPUTER software , *COMPUTER systems - Abstract
Scenario-based specifications such as Message Sequence Charts (MSCs) are useful as part of a requirements specification. A scenario is a partial story, describing how system components, the environment, and users work concurrently and interact in order to provide system level functionality. Scenarios need to be combined to provide a more complete description of system behavior. Consequently, scenario synthesis is central to the effective use of scenario descriptions. How should a set of scenarios be interpreted? How do they relate to one another? What is the underlying semantics? What assumptions are made when synthesizing behavior models from multiple scenarios? In this paper, we present an approach to scenario synthesis based on a clear sound semantics, which can support and integrate many of the existing approaches to scenario synthesis. The contributions of the paper are threefold. We first define an MSC language with sound abstract semantics in terms of labeled transition systems and parallel composition. The language integrates existing approaches based on scenario composition by using high-level MSCs (hMSCs) and those based on state identification by introducing explicit component state labeling. This combination allows stakeholders to break up scenario specifications into manageable parts and reuse scenarios using hMSCs; it also allows them to introduce additional domain-specific information and general assumptions explicitly into the scenario specification using state labels. Second, we provide a sound synthesis algorithm which translates scenarios into a behavioral specification in the form of Finite Sequential Processes. This specification can be analyzed with the Labeled Transition System Analyzer using model checking and animation. Finally, we demonstrate how many of the assumptions embedded in existing synthesis approaches can be made explicit and modeled in our approach. Thus, we provide the basis for a common approach to scenario-based... [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
14. Inference Graphs: A Computational Structure Supporting Generation of Customizable and Correct Analysis Components.
- Author
-
Dillon, Laura K. and Stirewalt, R.E. Kurt
- Subjects
- *
COMPUTER software , *COMPUTER systems - Abstract
Amalia is a generator framework for constructing analyzers for operationally defined formal notations. These generated analyzers are components that are designed for customization and integration into a larger environment. The customizability and efficiency of Amalia analyzers are due to a computational structure called an inference graph. This paper describes this structure, how inference graphs enable Amalia to generate analyzers for operational specifications, and how we build in assurance. On another level, this paper illustrates how to balance the need for assurance, which typically implies a formal proof obligation, against other design concerns, whose solutions leverage design techniques that are not (yet) accompanied by mature proof methods. We require Amalia-generated designs to be transparent with respect to the formal semantic models upon which they are based. Inference graphs are complex structures that incorporate many design optimizations. While not formally verifiable, their fidelity with respect to a formal operational semantics can be discharged by inspection. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
15. The JEDI Event-Based Infrastructure and Its Application to the Development of the OPSS WFMS.
- Author
-
Cugola, Gianpaolo, Nitto, Elisabetta Di, and Fuggetta, Alfonso
- Subjects
- *
DISTRIBUTED computing , *COMPUTER systems , *WORKFLOW , *MIDDLEWARE , *OBJECT-oriented databases , *COMPUTER industry - Abstract
The development of complex distributed systems demands the creation of suitable architectural styles (or paradigms) and related runtime infrastructures. An emerging style that is receiving increasing attention is based on the notion of event. In an event-based architecture, distributed software components interact by generating and consuming events. An event is the occurrence of some state change in a component of a software system, made visible to the external world. The occurrence of an event in a component is asynchronously notified to any other component that has declared some interest in it. This paradigm (usually called "publish/subscribe," from the names of the two basic operations that regulate the communication) holds the promise of supporting a flexible and effective interaction among highly reconfigurable, distributed software components. In the past two years, we have developed an object-oriented infrastructure called JEDI (Java Event-Based Distributed Infrastructure). JEDI supports the development and operation of event-based systems and has been used to implement a significant example of distributed system, namely, the OPSS workflow management system (WFMS). The paper illustrates the main features of JEDI and how we have used them to implement OPSS. Moreover, the paper provides an initial evaluation of our experiences in using the event-based architectural style and a classification of some of the event-based infrastructures presented in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2001
- Full Text
- View/download PDF
16. Bounding Cache-Related Preemption Delay for Real-Time Systems.
- Author
-
Chang-Gun Lee, Kwangpo Lee, Hahn, Joosun, Seo, Yang-Min, Min, Sang Lyul, Ha, Rhan, Hong, Seongsoo, Park, Chang Yun, Minsuk Lee, and Kim, Chong Sang
- Subjects
- *
REAL-time computing , *PRODUCTION scheduling , *CACHE memory , *COMPUTER systems , *COMPUTER multitasking , *SOFTWARE engineering - Abstract
Cache memory is used in almost all computer systems today to bridge the ever increasing speed gap between the processor and main memory. However, its use in multitasking computer systems introduces additional preemption delay due to the reloading of memory blocks that are replaced during preemption. This cache-related preemption delay poses a serious problem in real-time computing systems where predictability is of utmost importance. In this paper, we propose an enhanced technique for analyzing and thus bounding the cache-related preemption delay in fixed-priority preemptive scheduling focusing on instruction caching. The proposed technique improves upon previous techniques in two important ways. First, the technique takes into account the relationship between a preempted task and the set of tasks that execute during the preemption when calculating the cache-related preemption delay. Second, the technique considers the phasing of tasks to eliminate many infeasible task interactions. These two features are expressed as constraints of a linear programming problem whose solution gives a guaranteed upper bound on the cache-related preemption delay. This paper also compares the proposed technique with previous techniques using randomly generated task sets. The results show that the improvement on the worst-case response time prediction by the proposed technique over previous techniques ranges between 5 percent and 18 percent depending on the cache refill time when the task set utilization is 0.6. The results also show that as the cache refill time increases, the improvement increases, which indicates that accurate prediction of cache-related preemption delay by the proposed technique becomes increasingly important if the current trend of widening speed gap between the processor and main memory continues. [ABSTRACT FROM AUTHOR]
- Published
- 2001
- Full Text
- View/download PDF
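For orientation, cache-related preemption delay is usually folded into fixed-priority response-time analysis along the following lines; this is the generic iterative form, given here as a simplified illustration rather than the paper's linear-programming formulation. Here $C_i$ is the worst-case execution time of task $\tau_i$, $T_j$ the period of a higher-priority task $\tau_j \in hp(i)$, and $\gamma_{i,j}$ a bound on the cache reload cost of one preemption of $\tau_i$ by $\tau_j$:

```latex
R_i^{(k+1)} = C_i + \sum_{j \in hp(i)} \left\lceil \frac{R_i^{(k)}}{T_j} \right\rceil \left( C_j + \gamma_{i,j} \right),
\qquad R_i^{(0)} = C_i
```

The recurrence is iterated to a fixed point, which bounds the worst-case response time. In these terms, the abstract's two improvements amount to tightening the $\gamma$ contribution by accounting for which tasks actually execute during a preemption and for task phasing, expressed as constraints of a linear program.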
17. Lightweight Extraction of Object Models from Bytecode.
- Author
-
Jackson, Daniel and Waingold, Allison
- Subjects
- *
JAVA programming language , *REVERSE engineering , *MATHEMATICAL models , *HEURISTIC programming , *COMPUTER software , *COMPUTER systems , *COMPUTER input design - Abstract
A program's object model captures the essence of its design. For some programs, no object model was developed during design; for others, an object model exists but may be out-of-sync with the code. This paper describes a tool that automatically extracts an object model from the classfiles of a Java program. Unlike existing tools, it handles container classes by inferring the types of elements stored in a container and eliding the container itself. This feature is crucial for obtaining models that show the structure of the abstract state and bear some relation to conceptual models. Although the tool performs only a simple, heuristic analysis that is almost entirely local, the resulting object model is surprisingly accurate. The paper explains what object models are and why they are useful; describes the analysis, its assumptions, and limitations; evaluates the tool for accuracy; and illustrates its use on a suite of sample programs. [ABSTRACT FROM AUTHOR]
- Published
- 2001
- Full Text
- View/download PDF
18. A Formal Security Model for Microprocessor Hardware.
- Author
-
Lotz, Volkmar, Kessler, Volker, and Walter, Georg H.
- Subjects
- *
MICROPROCESSORS , *HARDWARE , *COMPUTER security , *SECURITY management , *COMPUTER systems - Abstract
The paper introduces a formal security model for a microprocessor hardware system. The model has been developed as part of the evaluation process of the processor product according to ITSEC assurance level E4. Novel aspects of the model are the need for defining integrity and confidentiality objectives on the hardware level without the operating system or application specification and security policy being given, and the utilization of an abstract function and data space. The security model consists of a system model given as a state transition automaton on infinite structures and the formalization of security objectives by means of properties of automaton behaviors. Validity of the security properties is proved. The paper compares the model with published ones and summarizes the lessons learned throughout the modeling process. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
19. Anonymous Remote Computing: A Paradigm for Parallel Programming on Interconnected Workstations.
- Author
-
Joshi, Rushikesh K. and Ram, D. Janaki
- Subjects
- *
ELECTRONIC data processing , *REMOTE computing , *MICROCOMPUTER workstations (Computers) , *PARALLEL programming , *COMPUTER networks , *COMPUTER systems , *COMPUTER network architectures - Abstract
Parallel computing on interconnected workstations is becoming a viable and attractive proposition due to the rapid growth in speeds of interconnection networks and processors. In the case of workstation clusters, there is always a considerable amount of unused computing capacity available in the network. However, heterogeneity in architectures and operating systems, load variations on machines, variations in machine availability, and failure susceptibility of networks and workstations complicate the situation for the programmer. In this context, new programming paradigms that reduce the burden involved in programming for distribution, load adaptability, heterogeneity, and fault tolerance gain importance. This paper identifies the issues involved in parallel computing on a network of workstations. The Anonymous Remote Computing (ARC) paradigm is proposed to address the issues specific to parallel programming on workstation systems. ARC differs from the conventional communicating process model by treating a program as one single entity consisting of several loosely coupled remote instruction blocks instead of treating it as a collection of processes. The ARC approach results in distribution transparency and heterogeneity transparency. At the same time, it provides fault tolerance and load adaptability to parallel programs on workstations. ARC is developed in a two-tiered architecture consisting of high level language constructs and low level ARC primitives. The paper describes an implementation of the ARC kernel supporting ARC primitives. [ABSTRACT FROM AUTHOR]
- Published
- 1999
- Full Text
- View/download PDF
20. Client-Access Protocols for Replicated Services.
- Author
-
Karamanolis, Christos T. and Magee, Jeffrey N.
- Subjects
- *
ACCESS to wide area computer networks , *COMPUTER network protocols , *INTERNET servers , *DISTRIBUTED computing , *COMPUTER systems , *INTERNET - Abstract
This paper addresses the problem of replicated service provision in distributed systems. Existing systems that follow the State Machine approach concentrate on the synchronization of the server replicas and do not consider the problem of client interaction with the server group. The paper analyzes client interaction and identifies a number of access protocols to meet a range of client requirements and system models. The paper demonstrates that protocols for the "open" group model — clients external to the group of servers — satisfy the requirements of the State Machine approach, even when replication is transparent to the clients. Experimental performance results indicate that the "open" model is clearly desirable when the service is used by a large, dynamically changing set of clients, the situation which pertains to Internet service provision. [ABSTRACT FROM AUTHOR]
- Published
- 1999
- Full Text
- View/download PDF
21. Model Checking Large Software Specifications.
- Author
-
Chan, William, Anderson, Richard J., Beame, Paul, Burns, Steve, Modugno, Francesmary, Notkin, David, and Reese, Jon D.
- Subjects
- *
SOFTWARE verification , *COLLISION avoidance systems in automobiles , *TECHNICAL specifications , *STATECHARTS (Computer science) , *MATHEMATICAL models , *COMPUTER systems - Abstract
In this paper, we present our experiences in using symbolic model checking to analyze a specification of a software system for aircraft collision avoidance. Symbolic model checking has been highly successful when applied to hardware systems. We are interested in whether model checking can be effectively applied to large software specifications. To investigate this, we translated a portion of the state-based system requirements specification of Traffic Alert and Collision Avoidance System II (TCAS II) into input to a symbolic model checker (SMV). We successfully used the symbolic model checker to analyze a number of properties of the system. We report on our experiences, describing our approach to translating the specification to the SMV language, explaining our methods for achieving acceptable performance, and giving a summary of the properties analyzed. Based on our experiences, we discuss the possibility of using model checking to aid specification development by iteratively applying the technique early in the development cycle. We consider the paper to be a data point for optimism about the potential for more widespread application of model checking to software systems. [ABSTRACT FROM AUTHOR]
- Published
- 1998
- Full Text
- View/download PDF
22. A Note on Regeneration with Virtual Copies.
- Author
-
Hilderman, Robert J. and Hamilton, Howard J.
- Subjects
- *
DISTRIBUTED computing , *ALGORITHMS , *COMPUTER systems , *DATABASES , *VECTOR processing (Computer science) - Abstract
Regeneration with Virtual Copies is a voting-based consistency control algorithm for replicated data objects in a distributed computing system. Proposed by Adam and Tewari, it utilizes selective regeneration and recovery mechanisms for maintaining the availability and consistency of copies. This paper describes some problems with the original paper and proposes solutions. [ABSTRACT FROM AUTHOR]
- Published
- 1997
- Full Text
- View/download PDF
23. Performance Characterization of Optimizing Compilers.
- Author
-
Saavedra, Rafael H. and Smith, Alan Jay
- Subjects
- *
COMPUTER software , *SYSTEMS software , *COMPUTER systems , *SOFTWARE engineering - Abstract
Optimizing compilers have become an essential component in achieving high levels of performance. Various simple and sophisticated optimizations are implemented at different stages of compilation to yield significant improvements, but little work has been done in characterizing the effectiveness of optimizers, or in understanding where most of this improvement comes from. In this paper we study the performance impact of optimization in the context of our methodology for CPU performance characterization based on the abstract machine model. The model considers all machines to be different implementations of the same high level language abstract machine; in previous research, the model has been used as a basis to analyze machine and benchmark performance. In this paper, we show that our model can be extended to characterize the performance improvement provided by optimizers and to predict the run time of optimized programs, and measure the effectiveness of several compilers in implementing different optimization techniques. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
24. Reusing Software: Issues and Research Directions.
- Author
-
Mili, Hafedh, Mili, Fatma, and Mili, Ali
- Subjects
- *
COMPUTER software development , *ARTIFICIAL intelligence , *SOFTWARE engineering , *ENGINEERING , *COMPUTER software , *COMPUTER systems , *ELECTRONIC systems - Abstract
Software productivity has been steadily increasing over the past 30 years, but not enough to close the gap between the demands placed on the software industry and what the state of the practice can deliver [22], [39]; nothing short of an order of magnitude increase in productivity will extricate the software industry from its perennial crisis [39], [67]. Several decades of intensive research in software engineering and artificial intelligence left few alternatives but software reuse as the (only) realistic approach to bring about the gains of productivity and quality that the software industry needs. In this paper, we discuss the implications of reuse on software production, with an emphasis on the technical challenges. Software reuse involves building software that is reusable by design and building with reusable software. Software reuse includes reusing both the products of previous software projects and the processes deployed to produce them, leading to a wide spectrum of reuse approaches, from the building blocks (reusing products) approach, on one hand, to the generative or reusable processor (reusing processes), on the other [68]. We discuss the implication of such approaches on the organization, control, and method of software development and discuss proposed models for their economic analysis. Software reuse benefits from methodologies and tools to: 1) build more readily reusable software and 2) locate, evaluate, and tailor reusable software, the last being critical for the building blocks approach. Both sets of issues are discussed in this paper, with a focus on application generators and OO development for the first and a thorough discussion of retrieval techniques for software components, component composition (or bottom-up design), and transformational systems for the second. We conclude by highlighting areas that, in our opinion, are worthy of further investigation. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
25. Abstract Data Views: An Interface Specification Concept to Enhance Design for Reuse.
- Author
-
Cowan, Donald D. and Lucena, Carlos J. P.
- Subjects
- *
USER interfaces , *COMPUTER systems , *COMPUTER software development , *SOFTWARE engineering , *ABSTRACT data types (Computer science) , *COMPUTER programming - Abstract
The abstract data view (ADV) design model was originally created to specify clearly and formally the separation of the user interface from the application component of a software system, and to provide a systematic design method that is independent of specific application environments. Such a method should lead to a high degree of reuse of designs for both interface and application components. The material in this paper extends the concept of ADV's to encompass the general specification of interfaces between application components in the same or different computing environments. This approach to specifying interfaces clearly separates application components from each other, since they do not need to know how they are used, or how they obtain services from other application components. Thus, application components, called abstract data objects (ADO's) in this paper, are designed to minimize knowledge of the environment in which they are used and should be more amenable to reuse. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
26. A Practical Approach to Programming With Assertions.
- Author
-
Rosenblum, David S.
- Subjects
- *
DEBUGGING , *COMPUTER science , *COMPUTER systems , *COMPUTER software , *COMPUTER programming , *COMPUTER software testing - Abstract
Embedded assertions have been recognized as a potentially powerful tool for automatic runtime detection of software faults during debugging, testing, maintenance and even production versions of software systems. Yet despite the richness of the notations and the maturity of the techniques and tools that have been developed for programming with assertions, assertions are a development tool that has seen little widespread use in practice. The main reasons seem to be that (1) previous assertion processing tools did not integrate easily with existing programming environments, and (2) it is not well understood what kinds of assertions are most effective at detecting software faults. This paper describes experience using an assertion processing tool that was built to address the concerns of ease-of-use and effectiveness. The tool is called APP, an Annotation PreProcessor for C programs developed in UNIX-based development environments. APP has been used in the development of a variety of software systems over the past five years. Based on this experience, the paper presents a classification of the assertions that were most effective at detecting faults. While the assertions that are described guard against many common kinds of faults and errors, the very commonness of such faults demonstrates the need for an explicit, high-level, automatically checkable specification of required behavior. It is hoped that the classification presented in this paper will prove to be a useful first step in developing a method of programming with assertions. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
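The kinds of executable checks discussed in the record above can be illustrated generically. The paper's tool, APP, is an annotation preprocessor for C; the sketch below only shows analogous checks as plain Python assert statements (an argument precondition and postconditions relating the result to the input), and its categories are illustrative rather than the paper's classification.

```python
def insert_sorted(xs, value):
    # interface assertion: argument precondition
    assert all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1)), "input not sorted"
    out = [x for x in xs if x <= value] + [value] + [x for x in xs if x > value]
    # result assertions: postconditions relating output to input
    assert len(out) == len(xs) + 1
    assert all(out[i] <= out[i + 1] for i in range(len(out) - 1)), "output not sorted"
    return out

print(insert_sorted([1, 3, 7], 5))   # [1, 3, 5, 7]
```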
27. How Accurate Is Scientific Software?
- Author
-
Hatton, Les and Roberts, Andy
- Subjects
- *
COMPUTER software development , *COMPUTER software , *COMPUTER programming , *COMPUTER systems , *ELECTRONIC systems , *SOFTWARE engineering - Abstract
This paper describes some results of what, to the authors' knowledge, is the largest N-version programming experiment ever performed. The object of this ongoing four-year study is to attempt to determine just how consistent the results of scientific computation really are, and, from this, to estimate accuracy. The experiment is being carried out in a branch of the earth sciences known as seismic data processing, where 15 or so independently developed large commercial packages that implement mathematical algorithms from the same or similar published specifications in the same programming language (Fortran) have been developed over the last 20 years. The results of processing the same input dataset, using the same user-specified parameters, for nine of these packages is reported in this paper. Finally, feedback of obvious flaws was attempted to reduce the overall disagreement. The results are deeply disturbing. Whereas scientists like to think that their code is accurate to the precision of the arithmetic used, in this study, numerical disagreement grows at around the rate of 1% in average absolute difference per 4000 lines of implemented code, and, even worse, the nature of the disagreement is nonrandom. Furthermore, the seismic data processing industry has better than average quality standards for its software development with both identifiable quality assurance functions and substantial test datasets. Comparing the results reported here with other work by Hatton showing broadly similar statically detectable fault rates in the software from different disciplines gives strong indications that the software realisations of work in other scientific fields may be a great deal less accurate than many would believe. Against this backdrop, the authors believe that little progress will be made in some sciences until the problem is reduced, particularly in remote sensing, where the answer is generally inaccessible to direct measurement. To this end, the feedback experiments that formed part of the study proved valuable, resulting in significant reductions in disagreement. [ABSTRACT FROM AUTHOR]
- Published
- 1994
- Full Text
- View/download PDF
28. Experience with an Approach to Comparing Software Design Methodologies.
- Author
-
Xiping Song and Osterweil, Leon J.
- Subjects
- *
COMPUTER software development , *COMPUTER systems , *ELECTRONIC systems , *COMPUTERS , *COMPUTER programming - Abstract
A number of software design methodologies (SDM's) have been developed and compared over the past two decades. An accurate comparison would aid in codifying and integrating these SDM's. However, existing comparisons are often based largely upon the experiences of practitioners and the intuitive understandings of the authors. Consequently, they tend to be subjective and affected by application domains. In this paper, we introduce a systematic and defined process (called CDM) for objectively comparing SDM's. We believe that using CDM will lead to detailed, traceable, and objective comparisons. CDM uses process modeling techniques to model SDM's, classify their components (e.g., guidelines and notations), and analyze their procedural aspects. Modeling the SDM's entails decomposing their methods into components and analyzing the structure and functioning of the components. The classification of the components illustrates which components address similar design issues and/or have similar structures. Similar components then may be further modeled to aid in understanding more precisely their similarities and differences. The models of the SDM's are also used as the bases for conjectures and analyses about the differences between the SDM's. This paper describes three experiments that we carried out in evaluating CDM. The first uses CDM to compare JSD and Booch's Object Oriented Design (BOOD). The second uses CDM to compare two other pairs of SDM's. The last compares some of our comparisons with other comparisons done in the past using different approaches. The results of these experiments demonstrate that process modeling is valuable as a powerful tool in analysis of software development approaches. [ABSTRACT FROM AUTHOR]
- Published
- 1994
- Full Text
- View/download PDF
29. Flow Control for Limited Buffer Multicast.
- Author
-
Danzig, Peter B.
- Subjects
- *
ALGORITHMS , *SOFTWARE engineering , *COMPUTER software , *ELECTRONIC systems , *COMPUTER systems , *COMPUTER operating systems - Abstract
This paper analyzes a multiround flow control algorithm that attempts to minimize the time required to multicast a message to a group of recipients and receive responses directly from each group member. Such a flow control algorithm may be necessary because the flurry of responses to the multicast can overflow the buffer space of the process that issued the multicast. The condition that each recipient directly respond to the multicast prevents the use of reliable multicast protocols based on software combining trees or negative-acknowledgments. The flow control algorithm analyzed here directs the responding processes to hold their responses for some period of time, called the backoff time, before sending them to the originator. The backoff time depends on the number of recipients that will respond, the originator's available buffer space and buffer service time distribution, and the number of times that the originator is willing to retransmit its message. This paper develops an approximate analysis of the service time distribution of the limited-buffer preemptive queuing process that occurs within the protocol processing layers of a multiprogrammed operating system. It then uses this model to calculate multicast backoff times. The paper reports experimental verification of the accuracy of this service time model and discusses its application to the multicast flow control problem. [ABSTRACT FROM AUTHOR]
- Published
- 1994
- Full Text
- View/download PDF
30. Predicate Logic for Software Engineering.
- Author
-
Parnas, David Lorge
- Subjects
- *
SOFTWARE engineering , *ENGINEERING , *SYSTEMS design , *ELECTRONIC data processing documentation , *COMPUTER systems , *ELECTRONIC systems , *COMPUTERS - Abstract
The interpretations of logical expressions found in most introductory textbooks are not suitable for use in software engineering applications because they do not deal with partial functions. More advanced papers and texts deal with partial functions in a variety of complex ways. This paper proposes a very simple change to the classic interpretation of predicate expressions, one that defines their value for all values of all variables, yet is almost identical to the standard definitions. It then illustrates the application of this interpretation in software documentation. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
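A small worked example, ours rather than the paper's, shows the kind of change the abstract describes. Take the expression below, in which $y/x$ is undefined at $x = 0$:

```latex
(x \neq 0) \;\wedge\; (y/x > 100)
```

Under the usual textbook interpretation the right-hand conjunct has no value when $x = 0$, so the expression is not defined for all variable values. Under an interpretation in the spirit of the one proposed (assuming, as we do here, that a primitive expression containing an application of a partial function outside its domain evaluates to false), the conjunct $y/x > 100$ is false at $x = 0$, the whole conjunction is therefore false there and total over all $x$ and $y$, and it agrees with the standard interpretation wherever $y/x$ is defined.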
31. LISPACK—A Methodology and Tool for the Performance Analysis of Parallel Systems and Algorithms.
- Author
-
Lazeolla, Giuseppe and Marinuzzi, Francesco
- Subjects
- *
PARALLEL computers , *SOFTWARE engineering , *ENGINEERING , *COMPUTER software , *COMPUTER systems - Abstract
The paper deals with the performance analysis of parallel algorithms and systems. For these, numerical solution methods quickly show their limits because of the enormous state-space growth. The proposed methodology and software tool (LISPACK, an acronym for List-manipulation Parallel-modeling Package) uses string manipulation, lumping, and recursive elimination as a means for the definition of the large Markovian process, its restructuring and efficient solution. Initially, the enormous state space is conveniently collapsed and the large transition matrix is reduced. Subsequently, the reduced matrix is recursively block banded, and an efficient recursive, symbolic Gauss elimination is applied. No relevant costs are incurred for the state-space collapsing and restructuring, nor for the matrix block banding. The analysis of a typical parallel system and algorithm model is developed as a case study, to discuss the features of the method. The paper makes two contributions. First, a symbolic-approach methodology is proposed for the performance analysis of parallel algorithms and systems. Second, a tool is introduced that exploits the capabilities of the symbolic approach in the solution of parallel models, where the numerical techniques reveal their limits. [ABSTRACT FROM AUTHOR]
- Published
- 1993
32. An Analysis of Test Data Selection Criteria Using the RELAY Model of Fault Detection.
- Author
-
Richardson, Debra J. and Thompson, Margaret C.
- Subjects
- *
FAULT-tolerant computing , *ELECTRONIC data processing , *COMPUTER software , *SOFTWARE engineering , *COMPUTER systems , *ELECTRONIC systems - Abstract
RELAY is a model of faults and failures that defines failure conditions, which describe test data for which execution will guarantee that a fault originates erroneous behavior that also transfers through computations and information flow until a failure is revealed. This model of fault detection provides a framework within which other testing criteria's capabilities can be evaluated. In this paper, we analyze three test data selection criteria that attempt to detect faults in six fault classes. This analysis shows that none of these criteria is capable of guaranteeing detection for these fault classes and points out two major weaknesses of these criteria. The first weakness is that the criteria do not consider the potential unsatisfiability of their rules; each criterion includes rules that are sufficient to cause potential failures for some fault classes, yet when such rules are unsatisfiable, many faults may remain undetected. Their second weakness is failure to integrate their proposed rules; although a criterion may cause a subexpression to take on an erroneous value, there is no effort made to guarantee that the intermediate values cause observable, erroneous behavior. This paper shows how the RELAY model overcomes these weaknesses. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
33. Analysis of Concurrency Coherency Control Protocols for Distributed Transaction Processing Systems with Regional Locality.
- Author
-
Ciciani, Bruno, Dias, Daniel M., and Yu, Philip S.
- Subjects
- *
ELECTRONIC systems , *SYSTEMS software , *COMPUTER systems , *COMPUTERS , *MULTIMEDIA systems - Abstract
In this paper we examine a system structure and protocols to improve the performance of a distributed transaction processing system when there is some regional locality of data reference. Several transaction processing applications such as reservation systems, insurance, and banking belong to this category. While maintaining a distributed computer system at each region, a central computer system is introduced with a replication of all databases at the distributed sites. It can provide the advantage of distributed systems for transactions that refer principally to local data, and can also provide the advantage of centralized systems for transactions accessing nonlocal data. Specialized protocols can be designed to keep the copies at the distributed and centralized systems consistent without incurring the overhead and delay of generalized protocols for fully replicated databases. In this paper we study the advantage achievable through this system structure and the trade-offs between protocols for concurrency and coherency control of the duplicate copies of the databases. An approximate analytic model is employed to estimate the system performance. It is found that the performance is indeed sensitive to the protocol and substantial performance improvement can be obtained as compared with distributed systems. The protocol design factors considered include the approach for intersite concurrency control (optimistic versus pessimistic), resolution of aborts due to intersite conflict, and choice of the master/primary site of the dual copies (distributed site versus central site). Among the protocols considered, the most robust one uses an optimistic protocol for intersite control with the distributed site as the master site, allows a locally running transaction to commit without any communication with the central site, and balances transaction aborts between transactions running at the central site and distributed sites. [ABSTRACT FROM AUTHOR]
- Published
- 1992
- Full Text
- View/download PDF
34. X-Ware Reliability and Availability Modeling.
- Author
-
Laprie, Jean-Claude and Kanoun, Karama
- Subjects
- *
SOFTWARE engineering , *STOCHASTIC processes , *COMPUTER systems , *COMPUTERS , *ELECTRONIC systems , *COMPUTER programming , *ELECTRONICS - Abstract
This paper addresses the problem of modeling a system's reliability and availability with respect to the various classes of faults (physical and design, internal and external) which may affect the service delivered to its users. Hardware and software models are currently exceptions in spite of the users' requirements; these requirements are expressed in terms of failures independently of their sources, i.e., the various classes of faults. The causes of this situation are analyzed; it is shown that there is no theoretical impediment to deriving such models, and that the classical reliability theory can be generalized in order to cover both hardware and software viewpoints, that is, X-Ware. After the introduction, which summarizes the current state of the art with regard to the dependability requirements from the users' viewpoint, the body of the paper is composed of two sections. Section II is devoted to system behavior up to failure; it focuses on failure rate and reliability, considering in turn atomic (or single component) systems and systems made out of components. Section III deals with the sequence of failures when considering restoration actions consecutive to maintenance; failure intensity, reliability, and availability are derived for various forms of maintenance. [ABSTRACT FROM AUTHOR]
- Published
- 1992
- Full Text
- View/download PDF
35. Analysis of the Periodic Update Write Policy For Disk Cache.
- Author
-
Carson, Scott D. and Setia, Sanjeev
- Subjects
- *
INFORMATION storage & retrieval systems , *CACHE memory , *COMPUTER storage devices , *COMPUTER systems , *ELECTRONIC file management , *COMPUTER files - Abstract
A disk cache is typically used in file systems to reduce average access time for data storage and retrieval. The "periodic update" write policy, widely used in existing computer systems, is one in which dirty cache blocks are written to a disk on a periodic basis. In this paper we determine the average response time for disk read requests when the periodic update write policy is used. Read and write load, cache-hit ratio, and the disk scheduler's ability to reduce service time under load are incorporated in the analysis, leading to design criteria that can be used to decide among competing cache write policies. The main conclusion of this paper is that the bulk arrivals generated by the periodic update policy cause a "traffic jam" effect which results in severely degraded service. Effective use of the disk cache and disk scheduling can alleviate this problem, but only under a narrow range of operating conditions. Based on this conclusion, alternate write policies that retain the periodic update policy's advantages and provide uniformly better service are proposed. [ABSTRACT FROM AUTHOR]
- Published
- 1992
- Full Text
- View/download PDF
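The "traffic jam" effect described in the record above can be pictured with a toy simulation: dirty blocks flushed in a burst at every sync period make reads queue behind the burst, while write-through spreads the same work evenly. This is an illustrative sketch with invented parameters and a plain FCFS disk model, not the paper's analytic model.

```python
import random

random.seed(0)
PERIOD, SERVICE, DURATION = 30.0, 0.5, 3000.0   # made-up parameters

def mean_read_response(periodic):
    writes = [float(t) for t in range(int(DURATION))]            # one dirty block per time unit
    if periodic:                                                 # defer each write to the next sync point
        writes = [((t // PERIOD) + 1) * PERIOD for t in writes]
    reads = [random.uniform(0, DURATION) for _ in range(int(DURATION * 0.2))]
    jobs = sorted([(t, "w") for t in writes] + [(t, "r") for t in reads])
    free_at, delays = 0.0, []
    for arrival, kind in jobs:                                   # single FCFS disk
        finish = max(free_at, arrival) + SERVICE
        if kind == "r":
            delays.append(finish - arrival)
        free_at = finish
    return sum(delays) / len(delays)

print("write-through   mean read response:", round(mean_read_response(False), 2))
print("periodic update mean read response:", round(mean_read_response(True), 2))
```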
36. Performance Evaluation of Parallel Systems by Using Unbounded Generalized Stochastic Petri Nets.
- Author
-
Granda, Mercedes, Drake, José M., and Gregorio, José A.
- Subjects
- *
PETRI nets , *EVALUATION , *STOCHASTIC processes , *PARALLEL computers , *COMPUTER systems , *MULTIPROCESSORS - Abstract
This paper presents methods for efficiently calculating the performance measures of parallel systems by using unbounded generalized stochastic Petri nets. One of the main limitations of the use of Petri nets for modeling and evaluating the performance of complex parallel systems is the explosion in the number of states to be analyzed. This is what occurs when unbounded places appear in the model. The state space of such nets is infinite, but it is possible to take advantage of the natural symmetries of the system to aggregate the states of the net and construct a finite graph of lumped states which can easily be analyzed. With the methods developed in this paper, the unbounded places introduce a complexity similar to that of safe places of the net. These methods can be used to evaluate models of open parallel systems in which unbounded places appear; systems which are k-bounded but are complex and have large values of k can also be evaluated in an approximate way by means of simpler unbounded models. From the steady-state solution of the model, it is possible to obtain automatically the performance measures of parallel systems represented by this type of net, such as the time devoted to the execution of each task, the time during which each processor of the system is operating, or the memory necessary for the execution of a job in a multiprocessor architecture. [ABSTRACT FROM AUTHOR]
- Published
- 1992
- Full Text
- View/download PDF
37. Prism—Methodology and Process-Oriented Environment.
- Author
-
Madhavji, Nazim H. and Schafer, Wilhelm
- Subjects
- *
EDUCATIONAL tests & measurements , *OCLC PRISM (Information retrieval system) , *COMPUTER sound processing , *COMPUTERS , *COMPUTER systems , *OPERATIONS research - Abstract
Prism is an experimental process-oriented environment supporting methodical development, instantiation, and execution of software processes. This paper describes the Prism model of engineering processes and an architecture which captures this model in its various components. The architecture has been designed to hold a product software process description, the life-cycle of which is supported by an explicit representation of a higher level (or meta) process description. The central part of this paper describes the nine-step Prism methodology for building and tailoring process models, and gives several scenarios to support this description. In Prism, process models are built using a hybrid process modeling language, which is based on a high-level Petri net formalism and rules. An important observation to note is that this environment work should be seen as an infrastructure, or a stepping stone, for carrying out the more difficult task of creating sound process models. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
38. A Retrospective on the VAX VMM Security Kernel.
- Author
-
Karger, Paul A., Zurko, Mary Ellen, Bonin, Douglas W., Mason, Andrew H., and Kahn, Clifford E.
- Subjects
- *
VIRTUAL machine systems , *COMPUTER security , *COMPUTER systems , *VAX computers , *COMPUTER operating systems , *COMPUTER multitasking - Abstract
This paper describes the development of a virtual-machine monitor (VMM) security kernel for the VAX architecture. The paper particularly focuses on how the system's hardware, microcode, and software are aimed at meeting A1-level security requirements while maintaining the standard interfaces and applications of the VMS and ULTRIX-32 operating systems. The VAX Security Kernel supports multiple concurrent virtual machines on a single VAX system, providing isolation and controlled sharing of sensitive data. Rigorous engineering standards were applied during development to comply with the assurance requirements for verification and configuration management. The VAX Security Kernel has been developed with a heavy emphasis on performance and system management tools. The kernel performs sufficiently well that much of its development was carried out in virtual machines running on the kernel itself, rather than in a conventional time-sharing system. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
39. Constraint-Based Automatic Test Data Generation.
- Author
-
Demillo, Richard A. and Offutt, A. Jefferson
- Subjects
- *
SOFTWARE engineering , *TESTING , *COMPUTER software , *COMPUTER systems , *ELECTRONIC systems , *AUTOMATION - Abstract
This paper presents a new technique for automatically generating test data. The technique is based on mutation analysis and creates test data that approximates relative adequacy. The technique is a fault-based technique that uses algebraic constraints to describe test cases designed to find particular types of faults. A set of tools, collectively called Godzilla, has been implemented that automatically generates constraints and solves them to create test cases for unit and module testing. Godzilla has been integrated with the Mothra testing system and has been used as an effective way to generate test data that kill program mutants. The paper includes an initial list of constraints and discusses some of the problems that have been solved to develop the complete implementation of the technique. [ABSTRACT FROM AUTHOR]
- Published
- 1991
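A minimal sketch of the fault-based idea in the abstract above: the condition under which a mutant behaves differently from the original program is written down as an algebraic constraint, and solving the constraint yields a test case. The toy program, the relational-operator mutant, and the naive random solver are assumptions for illustration, not the Godzilla or Mothra implementation.

    import random

    def original(x, y):        # strict tie-break: the second argument wins a tie
        return "first" if x > y else "second"

    def mutant(x, y):          # relational-operator mutant: > replaced by >=
        return "first" if x >= y else "second"

    # Necessity constraint for this mutant: the original and mutated predicates
    # must evaluate differently, i.e. (x > y) != (x >= y), which reduces to x == y.
    def kill_constraint(x, y):
        return (x > y) != (x >= y)

    def solve(constraint, tries=10_000):
        """Naive constraint 'solver' by random search, adequate for this toy constraint."""
        for _ in range(tries):
            x, y = random.randint(-100, 100), random.randint(-100, 100)
            if constraint(x, y):
                return x, y
        return None

    case = solve(kill_constraint)
    print(case, "kills the mutant:", original(*case) != mutant(*case))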
40. High Performance Software Testing on SIMD Machines.
- Author
-
Krauser, Edward W., Mathur, Aditya P., and Rego, Vernon J.
- Subjects
- *
COMPUTER software , *SOFTWARE engineering , *ENGINEERING , *COMPUTER systems , *ELECTRONIC systems - Abstract
This paper describes a new method, called mutant unification, for high-performance software testing. The method is aimed at supporting program mutation on parallel machines based on the Single Instruction Multiple Data stream (SIMD) paradigm. Several parameters that affect the performance of unification have been identified and their effect on the time to completion of a mutation test cycle and speedup has been studied. Program mutation analysis provides an effective means for determining the reliability of large software systems. It also provides a systematic method for measuring the adequacy of test data. However, it is likely that testing large software systems using mutation is computation bound and prohibitive on traditional sequential machines. Current implementations of mutation tools are unacceptably slow and are only suitable for testing relatively small programs. The unification method reported in this paper provides a practical alternative to the current approaches. It also opens up a new application domain for SIMD machines. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
41. The Processor Working Set and Its Use in Scheduling Multiprocessor Systems.
- Author
-
Ghosal, Dipak, Serazzi, Giuseppe, and Tripathi, Satish K.
- Subjects
- *
MULTIPROCESSORS , *COMPUTERS , *COMPUTER software , *SOFTWARE engineering , *ENGINEERING , *COMPUTER systems , *ALGORITHMS , *PARALLEL processing - Abstract
There are two main contributions of this paper. First, this paper introduces the concept of a processor working set (pws) as a single-value parameter for characterizing the parallel program behavior. Through detailed experimental studies of different algorithms on a transputer-based multiprocessor machine, it is shown that the pws is indeed a robust measure for characterizing the workload of a multiprocessor system. Small deviations in the performance of algorithms arising due to communication overhead are captured in this parameter. The second contribution of this paper relates to the study of static processor allocation strategies. It is shown that processor allocation strategies based on the pws provide significantly better throughput-delay characteristics. The robustness of pws is further demonstrated by showing that allocation policies that allocate more processors than the pws are inferior in performance to those that never allocate more than the pws, even at a moderately low load. Based on the results, a simple static allocation policy that allocates the pws at low load and adaptively fragments at high load to one processor per job is proposed. This allocation strategy is shown to possess the best throughput-delay characteristic over a wide range of loads. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
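A small sketch of the static allocation idea in the abstract above: each job receives its processor working set (pws) at low load, and the allocation is fragmented, down to one processor per job, as contention grows. The load threshold, the fair-share rule, and the parameter values are assumptions for illustration.

    def processors_to_allocate(pws, free_processors, waiting_jobs):
        """Never allocate more than the pws; shrink the share under contention."""
        if waiting_jobs <= 1:                      # low load: allocation equal to the pws
            return min(pws, free_processors)
        fair_share = max(1, free_processors // waiting_jobs)
        return max(1, min(pws, fair_share))        # high load: fragment, down to 1 per job

    for waiting in (1, 4, 16):
        print(waiting, "waiting jobs ->",
              processors_to_allocate(pws=8, free_processors=16, waiting_jobs=waiting))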
42. Lower Bound on the Number of Processors and Time for Scheduling Precedence Graphs with Communication Costs.
- Author
-
Al-Mouhamed, Mayez A.
- Subjects
- *
PARALLEL processing , *ELECTRONIC data processing , *COMPUTER systems , *SOFTWARE engineering , *COMPUTER software - Abstract
This paper proposes a new lower bound on the number of processors and finish time for the problem of scheduling precedence graphs with communication costs. An algorithm (ETF) has been proposed by Hwang [1] for scheduling precedence graphs in systems with interprocessor communication times. In this paper the notion of the earliest starting time of a task is formulated for the context of lower bounds. A lower bound on the completion time is proposed. A task delay which does not increase the earliest completion time of a schedule is defined. Each task can then be scheduled within a time interval without affecting the lower bound performance on the finish time. This leads to the definition of a new lower bound on the number of processors required to process the task graph. A derivation of the minimum time increase over the earliest completion time is also proposed for the case of a smaller number of processors. Finally, the paper proposes a lower bound on the minimum number of interprocessor communication links required to achieve optimum performance. Evaluation has been carried out by using a set of 360 small graphs. The bound on the finish time deviates at most by 5% from the optimum solution in 96% of the cases and performs well with respect to the minimum number of processors and communication links. [ABSTRACT FROM AUTHOR]
- Published
- 1990
- Full Text
- View/download PDF
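A compact sketch of the earliest-starting-time idea in the abstract above: with communication costs, a task can be co-located with at most one predecessor, so at most one incoming transfer delay can be avoided, and the resulting earliest start times give a lower bound on the finish time when processors are not limited. The example graph, the weights, and the simple quadratic evaluation per task are assumptions for illustration, not the paper's exact derivation.

    # Task execution times, precedence constraints, and interprocessor transfer times.
    work = {"a": 2, "b": 3, "c": 2, "d": 4}
    preds = {"a": [], "b": [], "c": ["a", "b"], "d": ["c"]}
    comm = {("a", "c"): 5, ("b", "c"): 1, ("c", "d"): 2}

    est = {}
    for v in ("a", "b", "c", "d"):                          # topological order
        if not preds[v]:
            est[v] = 0
            continue
        finish = {u: est[u] + work[u] for u in preds[v]}
        best = None
        for p in preds[v]:                                  # co-locate v with predecessor p
            bound = max([finish[p]] +
                        [finish[u] + comm[(u, v)] for u in preds[v] if u != p])
            best = bound if best is None else min(best, bound)
        est[v] = best

    finish_lower_bound = max(est[v] + work[v] for v in work)
    print("earliest start times:", est)                     # {'a': 0, 'b': 0, 'c': 4, 'd': 6}
    print("lower bound on finish time:", finish_lower_bound)  # 10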
43. The Derivation of Conformance Tests from LOTOS Specifications.
- Author
-
Pitt, David H. and Freestone, David
- Subjects
- *
LOTOS (Computer program language) , *PROGRAMMING languages , *SOFTWARE engineering , *COMPUTER software , *COMPUTER systems - Abstract
This paper concerns the derivation of conformance tests for communications protocols. It considers protocol specifications in the formal description technique LOTOS, which has been developed by the International Standards Organization. The intention is to construct test processes which preserve the structure of the protocol specifications. To achieve this, laws are presented for handling basic LOTOS operators. The test processes obtained by applying these laws are related to the theoretical notion of canonical testers in the literature. The paper discusses how to convert the test processes into finite test suites. It also discusses their relationship to current practice in test suite design. [ABSTRACT FROM AUTHOR]
- Published
- 1990
- Full Text
- View/download PDF
44. Extending Objects to Support Multiple Interfaces and Access Control.
- Author
-
Hailpern, Brent and Ossher, Harold
- Subjects
- *
COMPUTER security , *ACCESS control , *ELECTRONIC systems , *COMPUTER systems , *COMPUTER software , *SOFTWARE engineering - Abstract
Object-oriented languages hide the details of objects from their users; all interaction with an object must be through the operations it supports. Objects must therefore support a collection of operations sufficient to satisfy all users. The requirements of different users can differ widely. It is therefore desirable to provide restricted subsets of the supported operations to specific users or kinds of users, rather than make all supported operations universally available. This paper describes a mechanism called views that allows programmers to specify multiple interfaces for objects, and to control explicitly access to each interface. This mechanism provides a simple and flexible means of specifying enforceable access restrictions at many levels of granularity. It also results in a system organization that supports browsing based on a number of different criteria. The paper motivates and defines views, gives some examples of uses of views, discusses the impact of views on system organization, and outlines five approaches to implementing views. [ABSTRACT FROM AUTHOR]
- Published
- 1990
- Full Text
- View/download PDF
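A minimal sketch of the views idea in the abstract above: an object supports its full set of operations, while each kind of user is handed a view that forwards only an explicitly permitted subset. The account example and the runtime wrapper idiom are assumptions for illustration; the paper describes a language-level mechanism rather than this check.

    class Account:
        """Full object: supports every operation."""
        def __init__(self):
            self._balance = 0
        def deposit(self, amount):
            self._balance += amount
        def withdraw(self, amount):
            self._balance -= amount
        def balance(self):
            return self._balance

    class View:
        """Expose only an explicitly permitted subset of the target's operations."""
        def __init__(self, target, allowed):
            self._target, self._allowed = target, frozenset(allowed)
        def __getattr__(self, name):
            if name not in self._allowed:
                raise PermissionError(f"operation {name!r} is not in this view")
            return getattr(self._target, name)

    acct = Account()
    teller_view = View(acct, {"deposit", "balance"})    # tellers may not withdraw
    auditor_view = View(acct, {"balance"})              # auditors may only read

    teller_view.deposit(100)
    print(auditor_view.balance())                       # 100
    try:
        auditor_view.deposit(1)                         # blocked by the auditor view
    except PermissionError as err:
        print("blocked:", err)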
45. The Design and Implementation of an ASN.1-C Compiler.
- Author
-
Neufeld, Gerald W. and Yueli Yang
- Subjects
- *
COMPILERS (Computer programs) , *COMPUTER systems , *COMPUTER programming , *ELECTRONIC systems , *COMPUTER software , *SYSTEMS software , *ELECTRONIC data processing - Abstract
A basic requirement for communication in a heterogeneous computing environment is a standard external data representation. Abstract Syntax Notation One (ASN.1) has been widely used in international standard specifications; its transfer syntax, the Basic Encoding Rules (BER), is used as the external data representation. This paper presents a BER implementation called the ED library. The ED library includes a number of encoding and decoding routines that may be used as primitive functions to compose encoders and decoders for arbitrarily complicated ASN.1 data-types. Based on the ED library, an ASN.1-C compiler, called CASN1, is designed and implemented to free the protocol implementors from the arduous work of translating protocol-defined data-types and constructing their encoders and decoders. Given an ASN.1 protocol specification, CASN1 automatically translates the input ASN.1 modules into C and generates the BER encoders and decoders for the protocol-defined data-types. This paper discusses the CASN1 design principles, user interface, and some example applications. The performance of the ED library and generated CASN1 code is also measured and discussed. [ABSTRACT FROM AUTHOR]
- Published
- 1990
- Full Text
- View/download PDF
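A small sketch in the spirit of the ED-library primitives mentioned above: a value is serialized as tag, length, and contents octets under the Basic Encoding Rules. Only the universal INTEGER type and short-form definite lengths are handled, and the function names are assumptions for illustration, not the CASN1-generated code.

    def ber_encode_integer(n):
        """Encode a Python int as a BER INTEGER (tag 0x02, short-form length)."""
        length = 1
        while True:                                   # minimal two's-complement contents
            try:
                contents = n.to_bytes(length, "big", signed=True)
                break
            except OverflowError:
                length += 1
        assert len(contents) < 0x80, "long-form lengths omitted in this sketch"
        return bytes([0x02, len(contents)]) + contents

    def ber_decode_integer(octets):
        """Inverse of the encoder above, for the same restricted subset."""
        assert octets[0] == 0x02 and octets[1] == len(octets) - 2
        return int.from_bytes(octets[2:], "big", signed=True)

    encoded = ber_encode_integer(-300)
    print(encoded.hex(), "->", ber_decode_integer(encoded))   # 0202fed4 -> -300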
46. Modeling of Hierarchical Distributed Systems with Fault-Tolerance.
- Author
-
Yuan-Bao Shieh, Ghosal, Dipak, Chintamaneni, Prasad R., and Tripathi, Satish K.
- Subjects
- *
DISTRIBUTED computing , *FAULT-tolerant computing , *SOFTWARE engineering , *COMPUTER software , *COMPUTER systems , *COMPUTERS - Abstract
This paper addresses some fault-tolerant issues pertaining to hierarchically distributed systems. Since each of the levels in a hierarchical system could have various characteristics, different fault-tolerance schemes could be appropriate at different levels. In this paper, we use stochastic Petri nets (SPN's) to investigate various fault-tolerant schemes in this context. The basic SPN is augmented by parameterized subnet primitives to model the fault-tolerant schemes. Both centralized and distributed fault-tolerant schemes are considered in this paper. These two schemes are investigated by considering the individual levels in a hierarchical system independently. In the case of distributed fault-tolerance, we consider two different checkpointing strategies. The first scheme is called the arbitrary checkpointing strategy. Each process in this scheme does its checkpointing independently; thus, the domino effect may occur. The second scheme is called the planned strategy. Here, process checkpointing is constrained to ensure no domino effect. Our results show that, under certain cases, an arbitrary checkpointing strategy can perform better than a planned strategy. Finally, we have studied the effect of integration on the fault-tolerant strategies of the various levels of a hierarchy. [ABSTRACT FROM AUTHOR]
- Published
- 1990
- Full Text
- View/download PDF
47. Tradeoff in the Design of Efficient Algorithm-Based Error Detection Schemes for Hypercube Multiprocessors.
- Author
-
Balasubramanian, Vijay and Banerjee, Prithviraj
- Subjects
- *
PARALLEL processing , *COMPUTER algorithms , *MULTIPROCESSORS , *SOFTWARE engineering , *COMPUTER software , *COMPUTER systems , *COMPUTERS - Abstract
Numerous algorithms for computationally intensive tasks have been developed by researchers that are suitable for execution on hypercube multiprocessors. One characteristic of many of these algorithms is that they are extremely structured and are tuned for the highest performance to execute on hypercube architectures. In this paper, we have looked at parallel algorithm design from a different perspective. In many cases, it may be possible to redesign the parallel algorithms using software techniques so as to provide a low-cost on-line scheme for hardware error detection without any hardware modifications. This approach is called algorithm-based error detection. In the past, we have applied algorithm-based techniques for on-line error detection on the hypercube and have reported some preliminary results of one specific implementation on some applications. In this paper, we provide an in-depth study of the various issues and tradeoffs available in algorithm-based error detection, as well as a general methodology for evaluating the schemes. We have illustrated the approach on an extremely useful computation in the field of numerical linear algebra: QR factorization. We have implemented and investigated numerous ways of applying algorithm-based error detection using different system-level encoding strategies for QR factorization. Different schemes have been observed to result in varying error coverages and time overheads. We have reported the results of our studies performed on a 16-processor Intel iPSC-2/D4/MX hypercube multiprocessor. [ABSTRACT FROM AUTHOR]
- Published
- 1990
- Full Text
- View/download PDF
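An illustrative sketch of the checksum encoding that underlies algorithm-based error detection, shown here on matrix multiplication rather than the paper's QR factorization on the hypercube: a column-checksum operand times a row-checksum operand yields a full-checksum product, so a single corrupted element is located by re-verifying the row and column sums. The matrices, the injected fault, and the tolerance are arbitrary assumptions.

    def matmul(A, B):
        n, m, p = len(A), len(B), len(B[0])
        return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)] for i in range(n)]

    A = [[1.0, 2.0], [3.0, 4.0]]
    B = [[5.0, 6.0], [7.0, 8.0]]

    Ac = A + [[sum(col) for col in zip(*A)]]              # append column-checksum row to A
    Br = [row + [sum(row)] for row in B]                  # append row-checksum column to B
    C = matmul(Ac, Br)                                    # full-checksum product

    C[0][1] += 0.5                                        # inject a transient error into one element

    bad_rows = [i for i in range(len(A))
                if abs(sum(C[i][:-1]) - C[i][-1]) > 1e-9]
    bad_cols = [j for j in range(len(B[0]))
                if abs(sum(C[i][j] for i in range(len(A))) - C[-1][j]) > 1e-9]
    print("inconsistent row(s):", bad_rows, "column(s):", bad_cols)   # [0] and [1]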
48. Software Cost Reduction Methods in Practice.
- Author
-
Hager, James A.
- Subjects
- *
COMPUTER software , *SOFTWARE engineering , *COST control , *LIFE cycle costing , *COMPUTER systems , *INDUSTRIAL costs , *ECONOMICS - Abstract
Sixty percent of the software costs associated with the design, development, and implementation of computer systems occurs in the maintenance phase. A significant reduction in the maintenance costs can be realized with a design for change philosophy integrated into the engineering life cycle. By carefully identifying the expected changes to a system and rigorously applying the concepts of information hiding and abstraction of interfaces, the changeable aspects of a system can be isolated. In 1978, a Software Cost Reduction program was initiated whose goal was to apply these modern design and documentation principles to the development of large systems. The results of this effort have been documented in several research papers published by Parnas and associates. This paper extends that approach by updating the methodology based upon lessons learned during the application of the concepts to the development of a computer-based training system. An engineering life cycle which provides more visibility to maintenance concerns is described and the lessons learned during its implementation are discussed. Finally, a summary provides our impressions of the methodology and its potential to reduce system maintenance costs. [ABSTRACT FROM AUTHOR]
- Published
- 1989
- Full Text
- View/download PDF
49. Applying Synthesis Principles to Create Responsive Software Systems.
- Author
-
Smith, Connie U.
- Subjects
- *
SOFTWARE engineering , *SOFTWARE architecture , *SYSTEMS design , *SOFTWARE maintenance , *COMPUTER software , *COMPUTER systems - Abstract
Performance engineering literature shows that it is important to build performance into systems beginning in early development stages when requirements and designs are formulated. This is accomplished, without adverse effects on implementation time or software maintainability, using the software performance engineering methodology, thus combining performance design and assessment. There is extensive literature about software performance prediction; this paper focuses on performance design. First, the general principles for formulating software requirements and designs that meet response time goals are reviewed. The principles are related to the system performance parameters that they improve, and thus their application may not be obvious to those whose speciality is system architecture and design. The purpose of this paper is to address the designer's perspective and illustrate how these principles apply to typical design problems. The examples illustrate requirements and design of: communication, user interfaces, information storage, retrieval and update, information hiding, and data availability. Strategies for effective use of the principles are described. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
50. Efficient Branch-and-Bound Algorithms on a Two-Level Memory System.
- Author
-
Chee-Fen Yu and Wah, Benjamin W.
- Subjects
- *
COMPUTER algorithms , *SOFTWARE engineering , *COMPUTER software , *COMPUTER programming , *COMPUTER systems , *COMPUTER storage devices - Abstract
In this paper, we have investigated the efficient evaluation of branch-and-bound algorithms in a system with a two-level memory hierarchy. An efficient implementation depends on the disparities in the numbers of subproblems expanded between the depth-first and best-first searches as well as the relative speeds of the main and secondary memories. A best-first search should be used when it expands a much smaller number of subproblems than that of a depth-first search, and the secondary memory is relatively fast. In contrast, a depth-first search should be used when the number of expanded subproblems is close to that of a best-first search. The choice is not as clear for cases in between. The Iterative Deepening A* (IDA*) algorithm has been shown to be asymptotically optimal in space, time, and cost. However, for the conditions that we have assumed, IDA* does not result in the optimal space-time tradeoff for minimizing the completion time. In this paper, we study the space-time tradeoff by proposing and analyzing two strategies: a specialized virtual-memory system that matches the architectural design with the characteristics of the existing algorithm, and a modified branch-and-bound algorithm that can be tuned to the characteristics of the problem and the architecture. The latter strategy illustrates that designing a better algorithm is sometimes more effective than tuning the architecture alone. Guidelines have also been developed to select appropriate a priori values for the parameters of the modified B&B algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 1988
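A small sketch contrasting the two search strategies discussed in the abstract above on a toy 0/1 knapsack instance: best-first keeps an ordered active list (fewer expansions, more memory), while depth-first keeps a stack (less memory, usually more expansions). The instance and the simple additive bound are assumptions for illustration; the two-level memory hierarchy and the modified branch-and-bound algorithm of the paper are not modeled.

    import heapq

    # A node is (level, value accumulated, weight accumulated); only feasible
    # branches are generated, so every node is a feasible partial solution.
    values  = [10, 7, 6, 4, 3]
    weights = [ 5, 4, 3, 2, 1]
    CAPACITY = 8

    def bound(level, value, weight):
        """Optimistic bound: pretend every remaining item still fits."""
        return value + sum(values[level:])

    def solve(best_first):
        start = (0, 0, 0)
        frontier = [(-bound(*start), start)] if best_first else [start]
        best, expanded, peak = 0, 0, 1
        while frontier:
            node = heapq.heappop(frontier)[1] if best_first else frontier.pop()
            level, value, weight = node
            expanded += 1
            if best_first and bound(level, value, weight) <= best:
                break                                  # no queued subproblem can improve on best
            best = max(best, value)
            if level == len(values) or bound(level, value, weight) <= best:
                continue                               # fully expanded or pruned
            children = [(level + 1, value, weight)]    # skip item `level`
            if weight + weights[level] <= CAPACITY:    # take item `level` if it fits
                children.append((level + 1, value + values[level], weight + weights[level]))
            for child in children:
                if best_first:
                    heapq.heappush(frontier, (-bound(*child), child))
                else:
                    frontier.append(child)
            peak = max(peak, len(frontier))
        return best, expanded, peak

    for flag, name in ((True, "best-first"), (False, "depth-first")):
        print(name, "-> optimum, subproblems expanded, peak active list:", solve(flag))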