1,176 results
Search Results
2. Foreword: Advances in Distributed Computing Systems.
- Author
-
Lundstrom, Stephen F. and Swartzlander Jr., Earl E.
- Subjects
DISTRIBUTED computing ,SOFTWARE architecture ,REAL-time computing ,COMPUTER networks ,SYSTOLIC array circuits ,COMPUTER systems - Abstract
This note introduces the special section containing research papers on distributed computing systems, which were featured in the October 1985 issue of the periodical "IEEE Transactions on Software Engineering." It discusses the objectives of most distributed computing system work. This special section was motivated by the annual International Conference on Distributed Computing Systems. The first article is on a polynomial optimal algorithm for a special class of queries called star queries. The paper by James P. Huang shows the modeling of software partitioning for distributed real-time applications. The paper "A Distributed Drafting Algorithm for Load Balancing" proposes a migration algorithm for load balancing that is independent of network topology. The last paper is on proving liveness and termination of systolic arrays using communicating finite state machines.
- Published
- 1985
3. Editor's Comments.
- Author
-
Basili, Victor R.
- Subjects
SOFTWARE engineering ,ENGINEERING ,COMPUTER software ,COMPUTER systems - Abstract
Comments on the articles in the "IEEE Transactions on Software Engineering" are presented. The periodical has an established, published scope which is subject to change as the field develops. Part of the review process is to state whether the paper meets the criteria. The scope should be limited to papers of interest to the software engineering community. However, besides mainstream software engineering topics, it could include the software engineering of various applications.
- Published
- 1988
4. Guest Editors' Introduction: 2000 International Symposium on Software Testing and Analysis.
- Author
-
Harrold, Mary Jean and Bertolino, Antonia
- Subjects
SOFTWARE verification ,COMPUTER networks ,INFORMATION technology ,COMPUTER systems ,INFORMATION networks ,COMPUTER software - Abstract
This article provides information about five papers selected from the 2000 International Symposium on Software Testing and Analysis (ISSTA 2000), held in Portland, Oregon, in August 2000. The program for ISSTA 2000 consisted of research papers, invited papers, and a joint session with the collocated Third Workshop on Formal Methods in Software Practice. Following the symposium, the selection committee selected seven papers which represented the best of ISSTA 2000. The selected papers span a range of topics in software analysis and testing. The first paper is "Improving the Precision of INCA by Eliminating Solutions with Spurious Cycles," by S.F. Siegel and G.S. Avrunin. INCA is a finite-state software verification tool that can check properties of concurrent systems with very large state spaces. The paper "Verisim: Formal Analysis of Network Simulations," by K. Bhargavan and colleagues, presents a tool suite called Verisim that facilitates the formal analysis of performance and correctness properties of computer network protocols, using simulators.
- Published
- 2002
- Full Text
- View/download PDF
5. Predicate Logic for Software Engineering.
- Author
-
Parnas, David Lorge
- Subjects
SOFTWARE engineering ,ENGINEERING ,SYSTEMS design ,ELECTRONIC data processing documentation ,COMPUTER systems ,ELECTRONIC systems ,COMPUTERS - Abstract
The interpretations of logical expressions found in most introductory textbooks are not suitable for use in software engineering applications because they do not deal with partial functions. More advanced papers and texts deal with partial functions in a variety of complex ways. This paper proposes a very simple change to the classic interpretation of predicate expressions, one that defines their value for all values of all variables, yet is almost identical to the standard definitions. It then illustrates the application of this interpretation in software documentation. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
6. A Simple Experiment in Top-Down Design.
- Author
-
Comer, Douglas and Halstead, Maurice H.
- Subjects
COMPUTER software ,COMPUTER systems ,SOFTWARE engineering ,COMPUTER programming ,COMPUTER science - Abstract
In this paper we: 1) discuss the need for quantitatively reproducible experiments in the study of top-down design; 2) propose the design and writing of tutorial papers as a suitably general and inexpensive vehicle; 3) suggest the software science parameters as appropriate metrics; 4) report two experiments validating the use of these metrics on outlines and prose; and 5) demonstrate that the experiments tended toward the same optimal modularity. The last point appears to offer a quantitative approach to the estimation of the total length or volume (and the mental effort required to produce it) from an early stage of the top-down design process. If results of these experiments are validated elsewhere, then they will provide basic guidelines for the design process. [ABSTRACT FROM AUTHOR]
- Published
- 1979
7. An Iconic Programming System, HI-VISUAL.
- Author
-
Hirakawa, Masahito, Tanaka, Minoru, and Ichikawa, Tadao
- Subjects
ICONS (Computer graphics) ,COMPUTER graphics ,GRAPHICAL user interfaces ,HUMAN-computer interaction ,COMPUTER systems ,VISUAL programming (Computer science) ,PROGRAMMING languages ,SOFTWARE engineering ,COMPUTER programming ,COMPUTER software - Abstract
The use of icons is effective for attaining higher man-machine interaction in programming from the viewpoints of both universality and efficiency. In this paper, the authors propose a new framework for icon management and an iconic programming scheme based on it. In the framework, icons represent real objects or the concepts already established in a target application environment, whereas icons representing functions are not provided. A function is represented by a combination of two different icons. Each icon can take an active or a passive role against the other. The role sharing is determined dynamically depending on the environment in which the icons are activated. The framework for icon management mentioned above, which is quite object-oriented, is first proposed, and then an iconic programming system named HI-VISUAL is presented on the basis of the framework. Programming in HI-VISUAL and implementation issues of the system prototype, now in actual operation in our laboratory environment, are extensively discussed. [ABSTRACT FROM AUTHOR]
- Published
- 1990
- Full Text
- View/download PDF
8. Resilient Distributed Computing.
- Author
-
Svobodova, Liba
- Subjects
DISTRIBUTED computing ,INFORMATION retrieval ,ERRORS ,COMPUTER reliability ,COMPUTER systems ,COMPUTER software - Abstract
A control abstraction called atomic action is a powerful general mechanism for ensuring consistent behavior of a system in spite of failures of individual computations running in the system, and in spite of system crashes. However, because of the "all-or-nothing" property of atomic actions, a significant amount of work might be abandoned needlessly when an internal error is encountered. This paper discusses how implementation of resilient distributed systems can be supported using a combination of nested atomic actions and stable checkpoints. Nested atomic actions form a tree structure. When an internal atomic action terminates, its results are not made permanent until the outermost atomic action commits, but they survive local node failures. Each subtree of atomic actions is recoverable individually. A checkpoint is established in stable storage as part of a remote request so that results of such a request can be reclaimed if the requesting node fails in the meantime. The paper shows how remote procedure call primitives with "at-most-once" semantics and recovery blocks can be built with these mechanisms. [ABSTRACT FROM AUTHOR]
- Published
- 1984
9. Communication and Synchronization in Distributed Systems.
- Author
-
Silberschatz, Abraham
- Subjects
DISTRIBUTED computing ,ELECTRONIC data processing ,PROGRAMMING languages ,MICROPROCESSORS ,SOFTWARE engineering ,COMPUTER software ,COMPUTER systems - Abstract
Recent advances in technology have made the construction of general-purpose systems out of many small independent microprocessors feasible. One of the issues concerning distributed systems is the question of appropriate language constructs for the handling of communication and synchronization. In his paper, "Communicating sequential processes," Hoare has suggested the use of the input and output constructs and Dijkstra's guarded commands to handle these two issues. This paper examines Hoare's concepts in greater detail by concentrating on the following two issues: 1) allowing both input and output commands to appear in guards, 2) simple abstract implementation of the input and output constructs. In the process of examining these two issues we develop a framework for the design of appropriate communication and synchronization facilities for distributed systems. [ABSTRACT FROM AUTHOR]
- Published
- 1979
10. Predicting Vulnerable Software Components via Text Mining.
- Author
-
Scandariato, Riccardo, Walden, James, Hovsepyan, Aram, and Joosen, Wouter
- Subjects
COMPUTER software research ,COMPUTER files ,COMPUTER systems ,TEXT mining ,DATA mining - Abstract
This paper presents an approach based on machine learning to predict which components of a software application contain security vulnerabilities. The approach is based on text mining the source code of the components. Namely, each component is characterized as a series of terms contained in its source code, with the associated frequencies. These features are used to forecast whether each component is likely to contain vulnerabilities. In an exploratory validation with 20 Android applications, we discovered that a dependable prediction model can be built. Such a model could be useful to prioritize the validation activities, e.g., to identify the components needing special scrutiny. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
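The feature-extraction step described in the abstract above — characterizing each component as its source-code terms with associated frequencies — can be sketched in plain Python. This is a minimal illustration, not the paper's actual pipeline: the tokenizer, the toy components, and the naive term-based scoring rule are all hypothetical stand-ins for the trained classifier the authors use.

```python
from collections import Counter
import re

def term_frequencies(source: str) -> Counter:
    """Split source text into identifier-like terms and count them."""
    return Counter(re.findall(r"[A-Za-z_]\w*", source))

# Two toy 'components' (illustrative snippets, not from the paper's dataset).
comp_a = "char buf[8]; strcpy(buf, user_input); exec(cmd);"
comp_b = "int total = 0; for (int i = 0; i < n; i++) total += v[i];"

fa = term_frequencies(comp_a)
fb = term_frequencies(comp_b)

# A deliberately naive stand-in for the learned model: score a component by
# occurrences of terms previously associated with vulnerable code.
risky_terms = {"strcpy", "exec", "system", "gets"}

def risk_score(freqs: Counter) -> int:
    return sum(freqs[t] for t in risky_terms)

print(risk_score(fa))  # 2
print(risk_score(fb))  # 0
```

In the paper the term-frequency vectors feed a machine-learning classifier rather than a hand-picked term list; the sketch only shows the bag-of-terms representation itself.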
11. NDT. A Model-Driven Approach for Web Requirements.
- Author
-
Escalona, María José and Aragón, Gustavo
- Subjects
WORLD Wide Web ,COMPUTER systems ,ELECTRONIC systems ,SOFTWARE engineering ,SOFTWARE measurement ,WEBSITES - Abstract
Web engineering is a new research line in software engineering that covers the definition of processes, techniques, and models suitable to Web environments in order to guarantee the quality of results. The research community is working in this area and, as a very recent line, they are assuming the Model-Driven paradigm to support and solve some classic problems detected in Web developments. However, there is a gap in the treatment of Web requirements. This paper presents a general vision of Navigational Development Techniques (NDT), which is an approach to deal with requirements in Web systems. It is based on conclusions obtained in several comparative studies and it tries to fill some gaps detected by the research community. This paper presents its scope, its most important contributions, and offers a global vision of its associated tool: NDT-Tool. Furthermore, it analyzes how Web engineering can be applied in the enterprise environment. NDT is being applied in real projects and has been adopted by several companies as a requirements methodology. The approach offers a Web requirements solution based on the Model-Driven paradigm that follows the tendencies most accepted by the Web engineering community. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
12. Classifying Software Changes: Clean or Buggy?
- Author
-
Sunghun Kim, Whitehead Jr., E. James, and Yi Zhang
- Subjects
COMPUTER software ,MACHINE learning ,SOURCE code ,OPEN source software ,COMPUTER systems - Abstract
This paper introduces a new technique for predicting latent software bugs, called change classification. Change classification uses a machine learning classifier to determine whether a new software change is more similar to prior buggy changes or clean changes. In this manner, change classification predicts the existence of bugs in software changes. The classifier is trained using features (in the machine learning sense) extracted from the revision history of a software project stored in its software configuration management repository. The trained classifier can classify changes as buggy or clean, with a 78 percent accuracy and a 60 percent buggy change recall on average. Change classification has several desirable qualities: 1) The prediction granularity is small (a change to a single file), 2) predictions do not require semantic information about the source code, 3) the technique works for a broad array of project types and programming languages, and 4) predictions can be made immediately upon the completion of a change. Contributions of this paper include a description of the change classification approach, techniques for extracting features from the source code, and change histories, a characterization of the performance of change classification across 12 open source projects, and an evaluation of the predictive power of different groups of features. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
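The core idea in the abstract above — deciding whether a new change is more similar to prior buggy changes or prior clean changes, using features extracted from the revision history — can be sketched with a toy nearest-neighbor classifier. Everything here is illustrative: the paper trains a proper machine-learning classifier on real project histories, while this sketch uses a hypothetical four-change history and simple bag-of-words cosine similarity.

```python
import math
import re
from collections import Counter

def features(change_text: str) -> Counter:
    """Bag-of-words features extracted from a change's text (e.g., a diff)."""
    return Counter(re.findall(r"\w+", change_text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical labeled revision history: (change text, label) pairs.
history = [
    ("fix null pointer crash in parser", "buggy"),
    ("fix off by one error in loop bound", "buggy"),
    ("update copyright header", "clean"),
    ("rename variable for clarity", "clean"),
]

def classify(change_text: str) -> str:
    """Label a new change by its most similar prior change (1-nearest-neighbor)."""
    f = features(change_text)
    best = max(history, key=lambda h: cosine(f, features(h[0])))
    return best[1]

print(classify("fix crash when parser sees null input"))  # buggy
```

The sketch preserves the property the abstract highlights: the prediction is made per change, immediately upon its completion, without semantic analysis of the source code.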
13. On the Semantics of Associations and Association Ends in UML.
- Author
-
Milićev, Dragan
- Subjects
SOFTWARE engineering ,UNIFIED modeling language ,COMPUTER software development ,SEMANTICS ,OBJECT-oriented methods (Computer science) ,COMPUTER systems - Abstract
Association is one of the key concepts in UML that is intensively used in conceptual modeling. Unfortunately, in spite of the fact that this concept is very old and is inherited from other successful modeling techniques, a fully unambiguous understanding of it, especially in correlation with other newer concepts connected with association ends, such as uniqueness, still does not exist. This paper describes a problem with one widely assumed interpretation of the uniqueness of association ends, the restrictive interpretation, and proposes an alternative, the intentional interpretation. Instead of restricting the association from having duplicate links, uniqueness of an association end in the intentional interpretation modifies the way in which the association end maps an object of the opposite class to a collection of objects of the class at that association end. If the association end is unique, the collection is a set obtained by projecting the collection of all linked objects. In that sense, the uniqueness of an association end modifies the view of the objects at that end, but does not constrain the underlying object structure. This paper demonstrates how the intentional interpretation improves expressiveness of the modeling language and has some other interesting advantages. Finally, this paper gives a completely formal definition of the concepts of association and association ends, along with the related notions of uniqueness, ordering, and multiplicity. The semantics of the UML actions on associations are also defined formally. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
14. X-FEDERATE: A Policy Engineering Framework for Federated Access Management.
- Author
-
Bhatti, Rafae, Bertino, Elisa, and Ghafoor, Arif
- Subjects
MANAGEMENT ,INFORMATION sharing ,INFORMATION resources management ,SECURITY management ,COMPUTER systems ,DIGITAL libraries ,METHODOLOGY ,PARADIGMS (Social sciences) - Abstract
Policy-Based Management (PBM) has been considered as a promising approach for design and enforcement of access management policies for distributed systems. The increasing shift toward federated information sharing in the organizational landscape, however, calls for revisiting current PBM approaches to satisfy the unique security requirements of the federated paradigm. This presents a twofold challenge for the design of a PBM approach, where, on the one hand, the policy must incorporate the access management needs of the individual systems, while, on the other hand, the policies across multiple systems must be designed in such a manner that they can be uniformly developed, deployed, and integrated within the federated system. In this paper, we analyze the impact of security management challenges on policy design and formulate a policy engineering methodology based on principles of software engineering to develop a PBM solution for federated systems. We present X-FEDERATE, a policy engineering framework for federated access management using an extension of the well-known Role-Based Access Control (RBAC) model. Our framework consists of an XML-based policy specification language, its UML-based meta-model, and an enforcement architecture. We provide a comparison of our framework with related approaches and highlight its significance for federated access management. The paper also presents a federation protocol and discusses a prototype of our framework that implements the protocol in a federated digital library environment. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
15. Software Reuse Research: Status and Future.
- Author
-
Frakes, William B. and Kyo Kang
- Subjects
COMPUTER software ,PUBLICATIONS ,CONFERENCES & conventions ,PROBLEM solving ,RESEARCH ,COMPUTER systems - Abstract
This paper briefly summarizes software reuse research, discusses major research contributions and unsolved problems, provides pointers to key publications, and introduces four papers selected from The Eighth International Conference on Software Reuse (ICSR8). [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
16. Synthesis of Behavioral Models from Scenarios.
- Author
-
Uchitel, Sebastian, Kramer, Jeff, and Magee, Jeff
- Subjects
COMPUTER software ,COMPUTER systems - Abstract
Scenario-based specifications such as Message Sequence Charts (MSCs) are useful as part of a requirements specification. A scenario is a partial story, describing how system components, the environment, and users work concurrently and interact in order to provide system level functionality. Scenarios need to be combined to provide a more complete description of system behavior. Consequently, scenario synthesis is central to the effective use of scenario descriptions. How should a set of scenarios be interpreted? How do they relate to one another? What is the underlying semantics? What assumptions are made when synthesizing behavior models from multiple scenarios? In this paper, we present an approach to scenario synthesis based on a clear sound semantics, which can support and integrate many of the existing approaches to scenario synthesis. The contributions of the paper are threefold. We first define an MSC language with sound abstract semantics in terms of labeled transition systems and parallel composition. The language integrates existing approaches based on scenario composition by using high-level MSCs (hMSCs) and those based on state identification by introducing explicit component state labeling. This combination allows stakeholders to break up scenario specifications into manageable parts and reuse scenarios using hMSCs; it also allows them to introduce additional domain-specific information and general assumptions explicitly into the scenario specification using state labels. Second, we provide a sound synthesis algorithm which translates scenarios into a behavioral specification in the form of Finite Sequential Processes. This specification can be analyzed with the Labeled Transition System Analyzer using model checking and animation. Finally, we demonstrate how many of the assumptions embedded in existing synthesis approaches can be made explicit and modeled in our approach. Thus, we provide the basis for a common approach to scenario-based... 
[ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
17. Guest Editor's Introduction Experimental Computer Science.
- Author
-
Iyer, Ravi K.
- Subjects
COMPUTER science ,TECHNOLOGY ,COMPUTERS ,SOFTWARE engineering ,COMPUTER software ,COMPUTER systems ,ELECTRONIC systems - Abstract
Experimental research in computer science is a relatively new, yet fast developing area. It is encouraging to see that there is substantial research going on in this important area. A study on experimental evaluation of a reusability-oriented parallel programming environment is given. There is a significant amount of experimental research in the area of computer dependability.
- Published
- 1990
18. Guest Editorial for the Special Collection from the Third International Conference on Software Engineering.
- Author
-
Belady, Laszlo A.
- Subjects
SOFTWARE engineering ,COMPUTER software development ,COMPUTER systems ,COMPUTER software industry ,COMPUTERS ,COMPUTER industry ,CONFERENCES & conventions - Abstract
This editorial discusses selected papers submitted during the Third International Conference on Software Engineering held in Atlanta, Georgia in May 1978. The international congress, attended by members of academia and the computer industry, aimed to define the emerging importance of computer software engineering. The first paper refers to how the computer industry provides solutions to everyday challenges with the use of computers. Here, customers express their views on the purpose of computers, leading computer software developers to consider user requirements. The second paper focuses on the activity of design. It discusses a modeling scheme where large programs can be built out of incomplete parts. Design activities are followed by testing, the focus of the third manuscript. A comprehensive study of the two basic approaches to program testing is discussed. The last paper considers programs in operation, the last phase of computer software development.
- Published
- 1978
19. Client-Access Protocols for Replicated Services.
- Author
-
Karamanolis, Christos T. and Magee, Jeffrey N.
- Subjects
ACCESS to wide area computer networks ,COMPUTER network protocols ,INTERNET servers ,DISTRIBUTED computing ,COMPUTER systems ,INTERNET - Abstract
This paper addresses the problem of replicated service provision in distributed systems. Existing systems that follow the State Machine approach concentrate on the synchronization of the server replicas and do not consider the problem of client interaction with the server group. The paper analyzes client interaction and identifies a number of access protocols to meet a range of client requirements and system models. The paper demonstrates that protocols for the "open" group model — clients external to the group of servers — satisfy the requirements of the State Machine approach, even when replication is transparent to the clients. Experimental performance results indicate that the "open" model is clearly desirable when the service is used by a large, dynamically changing set of clients. This is the situation which pertains to Internet service provision. [ABSTRACT FROM AUTHOR]
- Published
- 1999
- Full Text
- View/download PDF
20. A Note on Regeneration with Virtual Copies.
- Author
-
Hilderman, Robert J. and Hamilton, Howard J.
- Subjects
DISTRIBUTED computing ,ALGORITHMS ,COMPUTER systems ,DATABASES ,VECTOR processing (Computer science) - Abstract
Regeneration with Virtual Copies is a voting-based consistency control algorithm for replicated data objects in a distributed computing system. Proposed by Adam and Tewari, it utilizes selective regeneration and recovery mechanisms for maintaining the availability and consistency of copies. This paper describes some problems with the original paper and proposes solutions. [ABSTRACT FROM AUTHOR]
- Published
- 1997
- Full Text
- View/download PDF
21. Use of Common Time Base for Checkpointing and Rollback Recovery in a Distributed System.
- Author
-
Ramanathan, Parameswaran and Shin, Kang G.
- Subjects
DISTRIBUTED computing ,ELECTRONIC data processing ,COMPUTER systems ,COMPUTER software ,SOFTWARE engineering ,ELECTRONIC systems - Abstract
A new approach for checkpointing and rollback recovery in a distributed computing system using a common time base is proposed in this paper. First, a common time base is established in the system using a hardware clock synchronization algorithm. This common time base is coupled with the idea of pseudo-recovery points to develop a checkpointing algorithm that has the following advantages: 1) reduced wait for commitment for establishing recovery lines, 2) fewer messages to be exchanged, and 3) lower memory requirements. These advantages are assessed quantitatively by developing a probabilistic model. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
22. Providing Customized Assistance for Software Lifecycle Approaches.
- Author
-
Ramanathan, Jayashree and Sarkar, Soumitra
- Subjects
COMPUTER software development ,PUNCHED card systems ,INFORMATION theory ,COMPUTER storage devices ,COMPUTER software ,COMPUTER systems - Abstract
Environments for large scale software development should provide automated support for enforcing the discipline required to ensure the success of large multiperson projects. This paper describes a tightly coupled environment architecture centered around a customized software development assistant that uses underlying representations of the software development process, the objects and relationships being manipulated, the functionalities of the tools, and the roles of the various project members to provide automated support for this discipline. The assistant acts as the glue that ties all the components of the environment together. The paper focuses on features of a conceptual modeling language for specifying such representations and using them to generate customized assistants. [ABSTRACT FROM AUTHOR]
- Published
- 1988
23. Integrated Performance Models for Distributed Processing in Computer Communication Networks.
- Author
-
Thomasian, Alexander and Bay, Paul F.
- Subjects
DISTRIBUTED computing ,COMPUTER systems ,COMPUTER networks ,WIDE area networks ,MULTIPROGRAMMING (Electronic computers) ,COMPUTERS - Abstract
This paper deals with the analysis of large-scale closed queueing network (QN) models which are used for the performance analysis of computer communication networks (CCN's). The computer systems are interconnected by a wide-area network. Users accessing local/remote computers are affected by the contention (queueing delays) at the computer systems and the communication subnet. The computational cost of analyzing such models increases exponentially with the number of user classes (chains), even when the QN is tractable (product-form). In fact, the submodels of the integrated model are generally not product-form, e.g., due to blocking at computer systems (multiprogramming level constraints) and in the communication subnet (window flow control constraints). Two approximate solution methods are proposed in this paper to analyze the integrated QN model. Both methods use decomposition and iterative techniques to exploit the structure of the QN model such that computational cost is proportional to the number of chains. The accuracy of the solution methods is validated against each other and simulation. The model is used to study the effect that channel capacity assignments, window sizes for congestion control, and routing have on system performance. [ABSTRACT FROM AUTHOR]
- Published
- 1985
24. Extending CSP to Allow Dynamic Resource Management.
- Author
-
Silberschatz, Abraham
- Subjects
SOFTWARE engineering ,RESOURCE management ,COMPUTER systems ,COMPUTER networks ,DISTRIBUTED computing ,ELECTRONIC data processing - Abstract
In his paper "Communicating Sequential Processes," Hoare suggested the use of the input/output construct and Dijkstra's guarded commands for handling the task of communication and synchronization in distributed systems. Hoare's proposal was intended for programming general parallel systems; as a result, little consideration was given by Hoare to the question of how his mechanisms could be utilized in the construction of reliable dynamic resource management schemes. In this paper, we examine this problem and propose several simple extensions to Hoare's constructs that will make the extended Communicating Sequential Processes concept more suitable for the handling of such management schemes. [ABSTRACT FROM AUTHOR]
- Published
- 1983
25. A Family of Locking Protocols for Database Systems that Are Modeled by Directed Graphs.
- Author
-
Silberschatz, Abraham and Kedem, Zvi M.
- Subjects
DATABASES ,COMPUTER systems ,ELECTRONIC systems ,COMPUTER software ,SOFTWARE engineering - Abstract
This paper is concerned with the problem of ensuring the integrity of database systems that are accessed concurrently by a number of independent asynchronously running transactions. It is assumed that the database system is partitioned into small units that are referred to as the database entities. The relation between the entities is represented by a directed acyclic graph in which the vertices correspond to the database entities and the arcs correspond to certain access rights. We develop a family of non-two-phase locking protocols for such systems that will be shown to ensure serializability and deadlock-freedom. This family is sufficiently general to encompass all the previously developed non-two-phase locking protocols as well as a number of new protocols. One of these new protocols that seems to be particularly useful is also presented in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 1982
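The family of non-two-phase locking protocols described in the abstract above generalizes the classic tree-locking rule: a transaction's first lock may target any vertex; each later lock is legal only if the transaction currently holds the vertex's parent; and a vertex, once unlocked, may not be relocked by the same transaction. The following is a minimal sketch of that basic rule only (single-parent tree, exclusive locks), not the paper's more general directed-acyclic-graph protocols; the class and vertex names are illustrative.

```python
class TreeProtocol:
    """Toy lock manager enforcing the basic tree-locking rule."""

    def __init__(self, parent):
        self.parent = parent    # vertex -> parent vertex (None for the root)
        self.owner = {}         # vertex -> transaction currently holding it
        self.started = set()    # transactions that have placed a first lock
        self.released = set()   # (txn, vertex) pairs already unlocked

    def lock(self, txn, v):
        if v in self.owner:
            return False        # vertex is already locked
        if (txn, v) in self.released:
            return False        # no relocking of a released vertex
        if txn in self.started and self.owner.get(self.parent.get(v)) != txn:
            return False        # after the first lock, the parent must be held
        self.owner[v] = txn
        self.started.add(txn)
        return True

    def unlock(self, txn, v):
        if self.owner.get(v) != txn:
            return False
        del self.owner[v]
        self.released.add((txn, v))
        return True

# Usage on a tiny tree: A is the root with children B and C.
tp = TreeProtocol({"A": None, "B": "A", "C": "A"})
print(tp.lock("T1", "A"))    # True: the first lock may go anywhere
print(tp.lock("T1", "B"))    # True: parent A is held
print(tp.unlock("T1", "A"))  # True: unlocking is allowed at any time
print(tp.lock("T1", "C"))    # False: parent A is no longer held
print(tp.lock("T1", "A"))    # False: A may not be relocked
```

Following this discipline yields serializable, deadlock-free schedules without two-phase locking, which is the property the paper's protocol family preserves while allowing more general graphs.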
26. View Modeling and Integration Using the Functional Data Model.
- Author
-
Yao, S. Bing, Waddle, V. E., and Housel, Barron C.
- Subjects
DATABASES ,DATABASE design ,COMPUTER algorithms ,COMPUTER software ,SOFTWARE engineering ,COMPUTER systems - Abstract
Conventional database design techniques rely heavily on the designer's skill and experience, which are neither efficient nor effective for large, realistic design problems. A computer design aid can help by storing many design parameters and performing simple preprocessing before design decisions are made. In this paper, we report on the development of such a system. The system is based on a data model called the Functional Data Model (FDM), and on a transaction (or process) model which is specified via the Transaction Specification Language (TASL). FDM is used to define data entities and the relationships between them. TASL is used to capture application processing intent and performance parameters that are relevant to the database design. Logical database design involves two major steps. First, local (or application) views are modeled using FDM and TASL. Second, in order to achieve an integrated database design, these local views are combined and transformed to produce a global conceptual view (schema). This paper describes FDM and TASL; in addition, a set of operations for integrating and transforming local views is presented. A unique feature of these transformations is that they apply to TASL processes as well as FDM schemas. Finally, an algorithm is given for view integration. This algorithm, along with heuristics and some user-supplied subset assertions, enables the database design system to automatically identify and filter out redundancies. [ABSTRACT FROM AUTHOR]
- Published
- 1982
27. Implementation of the Database Machine DIRECT.
- Author
-
Boral, Haran, DeWitt, David J., Friedland, Dina, Jarrell, Nancy F., and Wilkinson, W. Kevin
- Subjects
DATABASES ,ELECTRONIC systems ,MULTIPROCESSORS ,COMPUTERS ,COMPUTER software ,SOFTWARE engineering ,COMPUTER systems - Abstract
DIRECT is a multiprocessor database machine designed and implemented at the University of Wisconsin. This paper describes our experiences with the implementation of DIRECT. We start with a brief overview of the original machine proposal and how it differs from what was actually implemented. We then describe the structure of the DIRECT software. This includes software on host computers that interfaces with the database machine; software on the back-end controller of DIRECT; and software executed by the query processors. In addition to describing the structure of the software we will attempt to motivate and justify its design and implementation. We also discuss a number of implementation issues (e.g., debugging of the code across several machines). We conclude the paper with a list of the "lessons" we have learned from this experience. [ABSTRACT FROM AUTHOR]
- Published
- 1982
28. Specification and Verification of Communication Protocols in AFFIRM Using State Transition Models.
- Author
-
Sunshine, Carl A., Thompson, David H., Erickson, Roddy W., Gerhart, Susan L., and Schwabe, Daniel
- Subjects
PROGRAMMING languages ,COMPUTER networks ,SOFTWARE engineering ,COMPUTER software ,COMPUTER systems - Abstract
It is becoming increasingly important that communication protocols be formally specified and verified. This paper describes a particular approach—the state transition model—using a collection of mechanically supported specification and verification tools incorporated in a running system called AFFIRM. Although developed for the specification of abstract data types and the verification of their properties, the formalism embodied in AFFIRM can also express the concepts underlying state transition machines. Such models easily express most of the events occurring in protocol systems, including those of the users, their agent processes, and the communication channels. The paper reviews the basic concepts of state transition models and the AFFIRM formalism and methodology and describes their union. A detailed example, the alternating bit protocol, illustrates various properties of interest for specification and verification. Other examples explored using this formalism are briefly described and the accumulated experience is discussed. [ABSTRACT FROM AUTHOR]
- Published
- 1982
29. Load Balancing in Distributed Systems.
- Author
-
Chou, Timothy C. K. and Abraham, Jacob A.
- Subjects
COMPUTER systems ,DECISION support systems ,DISTRIBUTED computing ,ALGORITHMS ,OPERATIONS research ,INDUSTRIAL engineering - Abstract
In a distributed computing system made up of different types of processors, each processor may have different performance and reliability characteristics. In order to take advantage of this diversity of processing power, a modular distributed program should have its modules assigned in such a way that the applicable system performance index, such as execution time or cost, is optimized. This paper describes an algorithm for making an optimal module-to-processor assignment for a given performance criterion. We first propose a computational model to characterize distributed programs, consisting of tasks and an operational precedence relationship. This model allows us to describe probabilistic branching as well as concurrent execution in a distributed program. The computational model along with a set of seven program descriptors completely specifies a model for dynamic execution of a program on a distributed system. The optimal task-to-processor assignment is found by an algorithm based on results in Markov decision theory. The algorithm given in this paper is completely general and applicable to N-processor systems. [ABSTRACT FROM AUTHOR]
- Published
- 1982
30. Good System Structure Features: Their Complexity and Execution Time Cost.
- Author
-
Stankovic, John A.
- Subjects
COMPUTER systems ,COUPLINGS (Gearing) ,COHESION ,MATERIAL plasticity ,PERFORMANCE ,TECHNOLOGY - Abstract
This paper describes a multistep technique that can be applied to improve system structure and to improve performance when necessary. The technique begins with the analysis of system structure via the structured design guidelines of coupling and cohesion. Next, manual system structure improvement transformations are applied. The effect of the transformations on execution time is then determined. Finally, vertical migration is used on the restructured system to improve its performance. Using the results of this paper, system programmers can identify specific cases where both good system structure and good performance are attainable, and others where tradeoffs must be made. The technique is most applicable during the maintenance phase of the software life cycle. [ABSTRACT FROM AUTHOR]
- Published
- 1982
31. Nested Transactions in Distributed Systems.
- Author
-
Ries, Daniel R. and Smith, Gordon C.
- Subjects
DISTRIBUTED computing ,DATABASES ,COMPUTER operating systems ,COMPUTER software ,SOFTWARE engineering ,COMPUTER systems - Abstract
In database management systems and operating systems, transactions are used as units of consistency, serializability, recovery, and for deadlock control. Normally, the transactions for each of these systems are considered independently. In this paper we describe nested transactions where the transactions from one system interact with the transactions from another system. Such nested transactions can be expected to become more important with the introduction of network operating systems and heterogeneous distributed database systems. Nested transactions raise additional problems for the desirability of enforcement of serializability, for the prevention and detection of deadlock, and for the consistent system state recovery from hardware and software crashes. This paper directly addresses the deadlock problems of nested transactions. In particular, we define a transaction controller (TC) that is responsible for maintaining exclusive and restricted shared access to a specific set of resources in what we call its transaction control domain (TCD). TC's are said to interact if a transaction in one controller can initiate a transaction in another controller. Three approaches to the deadlock problem of interacting TC's are described. In the first approach, TC's are tightly coupled in that they need to share transaction identifiers, resource identifiers, and use a common deadlock control mechanism. In the second approach, TC's obey protocols which ensure that if deadlock does occur it will be completely contained in one TCD. In the final approach, we resort to the currently used techniques of a timeout mechanism between interacting mutually suspicious TC's. [ABSTRACT FROM AUTHOR]
- Published
- 1982
32. An Automatic-Controller Description Language.
- Author
-
Takahashi, Hideyuki
- Subjects
ELECTRONIC controllers ,PROGRAMMING languages ,SOFTWARE engineering ,COMPUTER software ,COMPUTER systems - Abstract
This paper proposes a control-oriented Algol-like nonprocedural language called Condor. Automatic controller theory, in company with computer science, underlies many industrial fields. Sequential control is especially important. So far, sequential-controller description methods have by and large been limited to graphic ones, which are unreadable in the case of large controllers. A higher-level linguistic approach has been considered unlikely. This paper proposes a language widely suitable for controller architecture description. A controller, in its nature, consists of many subsystems which work in a parallel manner, interacting with one another. This paper deems that a basic component is not a process, but a response. A controller is considered to be a responding system in which every reactor works together, having a relation and communication to one another. A Condor program is a role assignment list for the reactors; it is Algol-like and nonprocedural. All the statements work at all times, interacting with one another. Besides, a controller has some kinds of tree structures: one is that of information-collecting subsystems, another is that of information-distributing subsystems, and a third is that of the hierarchical construction of the controller system. On the other hand, a language also has some kinds of tree structures. It is natural to express the tree structures of a controller by the tree structures of a language through graph-theoretic isomorphism. We can properly call Condor a spatial programming language. Condor is a language of a new kind. This paper will bring new progress into both programming language theory and automatic-control engineering. For readers unfamiliar with automatic control, this paper includes elementary explanations. [ABSTRACT FROM AUTHOR]
- Published
- 1980
33. Specifying Software Requirements for Complex Systems: New Techniques and Their Application.
- Author
-
Heninger, Kathryn L.
- Subjects
SOFTWARE engineering ,COMPUTER software ,COMPUTER systems ,AIRPLANE software - Abstract
This paper concerns new techniques for making requirements specifications precise, concise, unambiguous, and easy to check for completeness and consistency. The techniques are well-suited for complex real-time software systems; they were developed to document the requirements of existing flight software for the Navy's A-7 aircraft. The paper outlines the information that belongs in a requirements document and discusses the objectives behind the techniques. Each technique is described and illustrated with examples from the A-7 document. The purpose of the paper is to introduce the A-7 document as a model of a disciplined approach to requirements specification; the document is available to anyone who wishes to see a fully worked-out example of the approach. [ABSTRACT FROM AUTHOR]
- Published
- 1980
34. Software Life Cycle Management: Weapons Process Developer.
- Author
-
McHenry, Robert C. and Walston, Claude E.
- Subjects
WEAPONS systems ,MILITARY weapons ,SYSTEMS engineering ,SOFTWARE engineering ,COMPUTER systems ,ELECTRONIC systems ,COMPUTER software - Abstract
This paper describes weapon system software life cycle management from a contractor perspective. Four procurement strategies—each requiring two or more contracts—are examined and their impact on the contractor and the software process are described. [ABSTRACT FROM AUTHOR]
- Published
- 1978
35. Reliable Resource Allocation Between Unreliable Processes.
- Author
-
Shrivastava, Santosh Kumar and Banâtre, Jean-Pierre
- Subjects
FAULT-tolerant computing ,ELECTRONIC data processing ,COMPUTER reliability ,COMPUTER programming ,SOFTWARE engineering ,COMPUTER software ,COMPUTER systems - Abstract
Basic error recovery problems between interacting processes are first discussed and the desirability of having separate recovery mechanisms for cooperation and competition is demonstrated. The paper then concentrates on recovery mechanisms for processes competing for the use of the shared resources of a computer system. Appropriate programming language features are developed based on the class and inner features of SIMULA, and on the structuring concepts of recovery blocks and monitors. [ABSTRACT FROM AUTHOR]
- Published
- 1978
- Full Text
- View/download PDF
36. Structured Analysis (SA): A Language for Communicating Ideas.
- Author
-
Ross, Douglas T.
- Subjects
PROGRAMMING languages ,SYSTEM analysis ,SOFTWARE engineering ,COMPUTER software ,COMPUTER systems - Abstract
Structured analysis (SA) combines blueprint-like graphic language with the nouns and verbs of any other language to provide a hierarchic, top-down, gradual exposition of detail in the form of an SA model. The things and happenings of a subject are expressed in a data decomposition and an activity decomposition, both of which employ the same graphic building block, the SA box, to represent a part of a whole. SA arrows, representing input, output, control, and mechanism, express the relation of each part to the whole. The paper describes the rationalization behind some 40 features of the SA language, and shows how they enable rigorous communication which results from disciplined, recursive application of the SA maxim: "Everything worth saying about anything worth saying something about must be expressed in six or fewer pieces." [ABSTRACT FROM AUTHOR]
- Published
- 1977
37. Supporting the Combined Selection of Model-Based Testing Techniques.
- Author
-
Dias-Neto, Arilo Claudio and Horta Travassos, Guilherme
- Subjects
COMPUTER software research ,COMPUTER files ,COMPUTER systems ,INFORMATION technology ,TECHNOLOGY - Abstract
The technical literature on model-based testing (MBT) offers several techniques with different characteristics and goals. Contemporary software projects usually need to make use of different software testing techniques. However, a lack of empirical information regarding the scalability and effectiveness of MBT techniques makes their application in real projects difficult and increases the technical difficulty of combining two or more MBT techniques in the same software project. In addition, current software testing selection approaches offer limited support for the combined selection of techniques. Therefore, this paper describes the conception and evaluation of an approach aimed at supporting the combined selection of MBT techniques for software projects. It consists of an evidence-based body of knowledge with 219 MBT techniques and their corresponding characteristics and a selection process that provides indicators on the level of adequacy (impact indicator) between MBT techniques and software project characteristics. Results from the data analysis indicate that it improves the effectiveness and efficiency of the selection process when compared to another selection approach available in the technical literature. Aiming at facilitating its use, a computerized infrastructure, evaluated in an industrial context and evolved to implement all the facilities needed to support such a selection approach, is presented. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
38. Guest Editorial: Special Section on Interaction and State-Based Modeling.
- Author
-
Uchitel, Sebastian, Broy, Manfred, Krüger, Ingolf H., and Whittle, Jon
- Subjects
COMPUTER systems ,COMPUTER graphics ,WEB services ,APPLICATION software ,COMPUTER network protocols ,GRAPHIC methods ,COMPUTERS ,COMPUTER simulation ,COMPUTER networks ,COMPUTER drawing - Abstract
The article reflects on various issues discussed within the issue. An article by Y. Bontemps, P. Heymans, and P.Y. Schobbens addresses the theoretical and practical limitations of the scenario-development approach. The paper focuses on complexity and decidability of verification and synthesis within the context of live sequence charts. An article by V. Braberman, N. Kicillof, and A. Olivero addresses the specification and checking of real-time properties. A novel graphical scenario language is presented which allows the description of complex properties not expressible with sequence chart-like notations. In an article, X. Fu, T. Bultan, and J. Su study the interplay between global interaction modeling and local behavior modeling in the web services domain. The authors show how top-down and bottom-up development of web services requires careful understanding of issues such as reliability and show that the notion of synchronizability can enable tractable analyses to support such development. The authors also provide insights into the relation between conversation protocols and message sequence charts, for which issues such as realizability have been studied extensively.
- Published
- 2005
- Full Text
- View/download PDF
39. Author's Reply.
- Author
-
Baber, Robert L.
- Subjects
COMPUTER storage devices ,COMPUTER systems ,SOFTWARE engineering ,ELECTRONIC systems - Abstract
Presents a reply to an article on a method for representing data items of unlimited length in computer memory.
- Published
- 1982
40. EDZL Schedulability Analysis in Real-Time Multicore Scheduling.
- Author
-
Lee, Jinkyu and Shin, Insik
- Subjects
COMPUTER scheduling ,REAL-time computing ,ALGORITHMS ,COMPARATIVE studies ,COMPUTER systems ,MULTICORE processors - Abstract
In real-time systems, correctness depends not only on functionality but also on timeliness. A great number of scheduling theories have been developed for verification of the temporal correctness of jobs (software) in such systems. Among them, the Earliest Deadline first until Zero-Laxity (EDZL) scheduling algorithm has received growing attention thanks to its effectiveness in multicore real-time scheduling. However, the true potential of EDZL has not yet been fully exploited in its schedulability analysis as the state-of-the-art EDZL analysis techniques involve considerable pessimism. In this paper, we propose a new EDZL multicore schedulability test. We first introduce an interesting observation that suggests an insight toward pessimism reduction in the schedulability analysis of EDZL. We then incorporate it into a well-known existing Earliest Deadline First (EDF) schedulability test, resulting in a new EDZL schedulability test. We demonstrate that the proposed EDZL test not only has lower time complexity than existing EDZL schedulability tests, but also significantly improves the schedulability of EDZL by up to 36.6 percent compared to the best existing EDZL schedulability tests. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
41. Abstracting Runtime Heaps for Program Understanding.
- Author
-
Marron, Mark, Sanchez, Cesar, Su, Zhendong, and Fahndrich, Manuel
- Subjects
COMPUTER software testing ,COMPUTER programming ,COMPUTER systems ,COMPUTER algorithms ,DATA analysis ,COMPUTER storage devices - Abstract
Modern programming environments provide extensive support for inspecting, analyzing, and testing programs based on the algorithmic structure of a program. Unfortunately, support for inspecting and understanding runtime data structures during execution is typically much more limited. This paper provides a general purpose technique for abstracting and summarizing entire runtime heaps. We describe the abstract heap model and the associated algorithms for transforming a concrete heap dump into the corresponding abstract model as well as algorithms for merging, comparing, and computing changes between abstract models. The abstract model is designed to emphasize high-level concepts about heap-based data structures, such as shape and size, as well as relationships between heap structures, such as sharing and connectivity. We demonstrate the utility and computational tractability of the abstract heap model by building a memory profiler. We use this tool to identify, pinpoint, and correct sources of memory bloat for programs from DaCapo. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
42. Generating Event Sequence-Based Test Cases Using GUI Runtime State Feedback.
- Author
-
Xun Yuan and Memon, Atif M.
- Subjects
GRAPHICAL user interfaces ,GRAPHIC methods ,COMMON Language Runtime (Computer science) ,COMPUTER software ,COMPUTER systems - Abstract
This paper presents a fully automatic model-driven technique to generate test cases for Graphical User Interface (GUI)-based applications. The technique uses feedback from the execution of a "seed test suite," which is generated automatically using an existing structural event interaction graph model of the GUI. During its execution, the runtime effect of each GUI event on all other events pinpoints event semantic interaction (ESI) relationships, which are used to automatically generate new test cases. Two studies on eight applications demonstrate that 1) the feedback-based technique is able to significantly improve existing techniques and helps identify serious problems in the software and 2) the ESI relationships captured via GUI state yield test suites that most often detect more faults than their code-, event-, and event-interaction-coverage equivalent counterparts. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
43. Using the Conceptual Cohesion of Classes for Fault Prediction in Object-Oriented Systems.
- Author
-
Marcus, Andrian, Poshyvanyk, Denys, and Ferenc, Rudolf
- Subjects
OBJECT-oriented methods (Computer science) ,COMPUTER software ,SOFTWARE engineering ,COMPUTATIONAL linguistics ,SOURCE code ,COMPUTER systems - Abstract
High cohesion is a desirable property of software as it positively impacts understanding, reuse, and maintenance. Currently proposed measures for cohesion in Object-Oriented (OO) software reflect particular interpretations of cohesion and capture different aspects of it. Existing approaches are largely based on using the structural information from the source code, such as attribute references, in methods to measure cohesion. This paper proposes a new measure for the cohesion of classes in OO software systems based on the analysis of the unstructured information embedded in the source code, such as comments and identifiers. The measure, named the Conceptual Cohesion of Classes (C3), is inspired by the mechanisms used to measure textual coherence in cognitive psychology and computational linguistics. This paper presents the principles and the technology that stand behind the C3 measure. A large case study on three open source software systems is presented which compares the new measure with an extensive set of existing metrics and uses them to construct models that predict software faults. The case study shows that the novel measure captures different aspects of class cohesion compared to any of the existing cohesion measures. In addition, combining C3 with existing structural cohesion metrics proves to be a better predictor of faulty classes when compared to different combinations of structural cohesion metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
44. The Role of Deliberate Artificial Design Elements in Software Engineering Experiments.
- Author
-
Hannay, Jo E. and Jorgensen, Magne
- Subjects
SOFTWARE engineering ,SYSTEMS design ,COMPUTER software ,COMPUTER systems ,ENGINEERING ,DESIGN - Abstract
Increased realism in software engineering experiments is often promoted as an important means of increasing generalizability and industrial relevance. In this context, artificiality, e.g., the use of constructed tasks in place of realistic tasks, is seen as a threat. In this paper, we examine the opposite view that deliberately introduced artificial design elements may increase knowledge gain and enhance both generalizability and relevance. In the first part of this paper, we identify and evaluate arguments and examples in favor of and against deliberately introducing artificiality into software engineering experiments. We find that there are good arguments in favor of deliberately introducing artificial design elements to 1) isolate basic mechanisms, 2) establish the existence of phenomena, 3) enable generalization from particularly unfavorable to more favorable conditions (persistence of phenomena), and 4) relate experiments to theory. In the second part of this paper, we summarize a content analysis of articles that report software engineering experiments published over a 10-year period from 1993 to 2002. The analysis reveals a striving for realism and external validity, but little awareness of for what and when various degrees of artificiality and realism are appropriate. Furthermore, much of the focus on realism seems to be based on a narrow understanding of the nature of generalization. We conclude that an increased awareness and deliberation as to when and for what purposes both artificial and realistic design elements are applied is valuable for better knowledge gain and quality in empirical software engineering experiments. We also conclude that time spent on studies that have obvious threats to validity that are due to artificiality might be better spent on studies that investigate research questions for which artificiality is a strength rather than a weakness. 
However, arguments in favor of artificial design elements should not be used to justify studies that are badly designed or that have research questions of low relevance. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
45. Triggered Message Sequence Charts.
- Author
-
Sengupta, Bikram and Cleaveland, Rance
- Subjects
DISTRIBUTED computing ,SEMANTICS ,COMPUTER science research ,COMPUTATIONAL mathematics ,MATHEMATICAL notation ,COMPUTER systems - Abstract
This paper introduces Triggered Message Sequence Charts (TMSCs), a graphical, mathematically well-founded framework for capturing scenario-based systems requirements of distributed systems. Like Message Sequence Charts (MSCs), TMSCs are graphical depictions of scenarios, or exchanges of messages between processes in a distributed system. Unlike MSCs, however, TMSCs are equipped with a notion of trigger that permits requirements to be made conditional, a notion of partiality indicating that a scenario may be subsequently extended, and a notion of refinement for assessing whether or not a more detailed specification correctly elaborates on a less detailed one. The TMSC notation also includes a collection of composition operators allowing structure to be introduced into scenario specifications so that interactions among different scenarios may be studied. In the first part of this paper, TMSCs are introduced and their use in support of requirements modeling is illustrated via two extended examples. The second part develops the mathematical underpinnings of the language. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
46. Generating Annotated Behavior Models from End-User Scenarios.
- Author
-
Damas, Christophe, Lambeau, Bernard, Dupont, Pierre, and Lamsweerde, Axel Van
- Subjects
COMPUTER simulation ,COMPUTER systems ,COMPUTER networks ,COMPUTER software ,DATA transmission systems ,COMPUTER-generated imagery ,COMPUTERS ,COMPUTER drawing ,COMPUTER graphics - Abstract
Requirements-related scenarios capture typical examples of system behavior through sequences of desired interactions between the software-to-be and its environment. Their concrete, narrative style of expression makes them very effective for eliciting software requirements and for validating behavior models. However, scenarios raise coverage problems as they only capture partial histories of interaction among system component instances. Moreover, they often leave the actual requirements implicit. Numerous efforts have therefore been made recently to synthesize requirements or behavior models inductively from scenarios. Two problems arise from those efforts. On the one hand, the scenarios must be complemented with additional input such as state assertions along episodes or flowcharts on such episodes. This makes such techniques difficult to use by the nonexpert end-users who provide the scenarios. On the other hand, the generated state machines may be hard to understand as their nodes generally convey no domain-specific properties. Their validation by analysts, complementary to model checking and animation by tools, may therefore be quite difficult. This paper describes tool-supported techniques that overcome those two problems. Our tool generates a labeled transition system (LTS) for each system component from simple forms of message sequence charts (MSC) taken as examples or counterexamples of desired behavior. No additional input is required. A global LTS for the entire system is synthesized first. This LTS covers all scenario examples and excludes all counterexamples. It is inductively generated through an interactive procedure that extends known learning techniques for grammar induction. The procedure is incremental on training examples. It interactively produces additional scenarios that the end-user has to classify as examples or counterexamples of desired behavior.
The LTS synthesis procedure may thus also be used independently for requirements elicitation through scenario questions generated by the tool. The synthesized system LTS is then projected on local LTS for each system component. For model validation by analysts, the tool generates state invariants that decorate the nodes of the local LTS. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
47. On the Use of Clone Detection for Identifying Crosscutting Concern Code.
- Author
-
Bruntink, Magiel, Van Deursen, Arie, Van Engelen, Remco, and Tourwé, Tom
- Subjects
COMPUTER programming ,MODULAR design ,COMPUTER systems ,ENGINEERING design ,COMPUTER algorithms ,AUTOMATION - Abstract
In systems developed without aspect-oriented programming, code implementing a crosscutting concern may be spread over many different parts of a system. Identifying such code automatically could be of great help during maintenance of the system. First of all, it allows a developer to more easily find the places in the code that must be changed when the concern changes and, thus, makes such changes less time consuming and less prone to errors. Second, it allows the code to be refactored to an aspect-oriented solution, thereby improving its modularity. In this paper, we evaluate the suitability of clone detection as a technique for the identification of crosscutting concerns. To that end, we manually identify five specific crosscutting concerns in an industrial C system and analyze to what extent clone detection is capable of finding them. We consider our results as a stepping stone toward an automated "aspect miner" based on clone detection. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
48. An Empirical Investigation of the Key Factors for Success in Software Process Improvement.
- Author
-
Dybå, Tore
- Subjects
COMPUTER systems ,COMPUTER software ,ORGANIZATION ,INFORMATION technology ,COMPUTER networks ,COMPUTER science - Abstract
Understanding how to implement software process improvement (SPI) successfully is arguably the most challenging issue facing the SPI field today. The SPI literature contains many case studies of successful companies and descriptions of their SPI programs. However, the research efforts to date are limited and inconclusive and without adequate theoretical and psychometric justification. This paper extends and integrates models from prior research by performing an empirical investigation of the key factors for success in SPI. A quantitative survey of 120 software organizations was designed to test the conceptual model and hypotheses of the study. The results indicate that success depends critically on six organizational factors, which explained more than 50 percent of the variance in the outcome variable. The main contribution of the paper is to increase the understanding of the influence of organizational issues by empirically showing that they are at least as important as technology for succeeding with SPI and, thus, to provide researchers and practitioners with important new insights regarding the critical factors of success in SPI. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
49. Retargeting Sequential Image-Processing Programs for Data Parallel Execution.
- Author
-
Baumstark, Jr., Lewis B. and Wills, Linda M.
- Subjects
SOFTWARE compatibility ,COMPUTER software ,COMPUTER systems ,COMPUTER architecture ,COMPUTER science - Abstract
New compact, low-power implementation technologies for processors and imaging arrays can enable a new generation of portable video products. However, software compatibility with large bodies of existing applications written in C prevents more efficient, higher performance data parallel architectures from being used in these embedded products. If this software could be automatically retargeted explicitly for data parallel execution, product designers could incorporate these architectures into embedded products. The key challenge is exposing the parallelism that is inherent in these applications but that is obscured by artifacts imposed by sequential programming languages. This paper presents a recognition-based approach for automatically extracting a data parallel program model from sequential image processing code and retargeting it to data parallel execution mechanisms. The explicitly parallel model presented, called multidimensional data flow (MDDF), captures a model of how operations on data regions (e.g., rows, columns, and tiled blocks) are composed and interact. To extract an MDDF model, a partial recognition technique is used that focuses on identifying array access patterns in loops, transforming only those program elements that hinder parallelization, while leaving the core algorithmic computations intact. The paper presents results of retargeting a set of production programs to a representative data parallel processor array to demonstrate the capacity to extract parallelism using this technique. The retargeted applications yield a potential execution throughput limited only by the number of processing elements, exceeding thousands of instructions per cycle in massively parallel implementations. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
50. A Taxonomy and Catalog of Runtime Software-Fault Monitoring Tools.
- Author
-
Delgado, Nelly, Gates, Ann Quiroz, and Roach, Steve
- Subjects
COMPUTER software ,TAXONOMY ,ELECTRIC machinery monitoring ,COMPUTER systems ,LITERATURE ,RESEARCH - Abstract
A goal of runtime software-fault monitoring is to observe software behavior to determine whether it complies with its intended behavior. Monitoring allows one to analyze and recover from detected faults, providing additional defense against catastrophic failure. Although runtime monitoring has been in use for over 30 years, there is renewed interest in its application to fault detection and recovery, largely because of the increasing complexity and ubiquitous nature of software systems. This paper presents a taxonomy that developers and researchers can use to analyze and differentiate recent developments in runtime software-fault monitoring approaches. The taxonomy categorizes the various runtime monitoring research by classifying the elements that are considered essential for building a monitoring system, i.e., the specification language used to define properties; the monitoring mechanism that oversees the program's execution; and the event handler that captures and communicates monitoring results. After describing the taxonomy, the paper presents the classification of the software-fault monitoring systems described in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF