Search Results (192 results)
2. Software processes: a retrospective and a path to the future. (This paper is an expanded version of the keynote presentation given by Carlo Ghezzi at the 5th International Conference on Software Process (Lisle, IL, 14–17 June 1998)...)
- Author
-
Cugola, Gianpaolo and Ghezzi, Carlo
- Subjects
SOFTWARE engineering ,COMPUTER software ,QUALITY control ,CUSTOMER satisfaction ,COMPUTER engineers - Abstract
Software engineering focuses on producing quality software products through quality processes. The attention to processes dates back to the early 1970s, when software engineers realized that the desired qualities (such as reliability, efficiency, evolvability, ease of use, etc.) could only be injected in the products by following a disciplined flow of activities. Such a discipline would also make the production process more predictable and economical. Most of the software process work, however, remained in an informal stage until the late 1980s. From then on, the software process was recognized by researchers as a specific subject that deserved special attention and dedicated scientific investigation, the goal being to understand its foundations, develop useful models, identify methods, provide tool support, and help manage its progress. This paper will try to characterize the main approaches to software processes that were followed historically by software engineering, to identify the strengths and weaknesses, the motivations and the misconceptions that led to the continuous evolution of the field. This will lead us to an understanding of where we are now and will be the basis for a discussion of a research agenda for the future. Copyright © 1998 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 1998
- Full Text
- View/download PDF
3. Software process assessment and improvement: Selected papers from SPICE 2005.
- Author
-
Rout, Terence P.
- Subjects
COMPUTER software ,APPLICATION software - Abstract
The article discusses various reports published within the issue, including the article "An industrial experience in assessing the capability of non-software processes using ISO/IEC 15504," by Antonio Coletta.
- Published
- 2007
- Full Text
- View/download PDF
4. Guest Editorial for the Special Issue of Papers from the 29th Hawaii International Conference on Systems Science.
- Author
-
Paulk, Mark C.
- Subjects
SCIENCE ,COMPUTER software - Abstract
Introduces a series of articles on systems science and software process.
- Published
- 1996
- Full Text
- View/download PDF
5. Real-Time Precision Vehicle Localization Using Numerical Maps.
- Author
-
Seung-Jun Han and Jeongdan Choi
- Subjects
AUTONOMOUS vehicles ,NUMERICAL analysis ,GEOGRAPHIC information systems ,COMPUTER vision ,INFORMATION technology ,COMPUTER software - Abstract
Autonomous vehicle technology based on information technology and software will lead the automotive industry in the near future. Vehicle localization technology is a core expertise geared toward developing autonomous vehicles and will provide location information for control and decision. This paper proposes an effective vision-based localization technology to be applied to autonomous vehicles. In particular, the proposed technology makes use of numerical maps that are widely used in the field of geographic information systems and that have already been built in advance. Optimum vehicle ego-motion estimation and road marking feature extraction techniques are adopted and then combined by an extended Kalman filter and particle filter to make up the localization technology. The implementation results of this paper show remarkable results; namely, an 18 ms mean processing time and 10 cm location error. In addition, autonomous driving and parking are successfully completed with an unmanned vehicle within a 300 m × 500 m space. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
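The localization pipeline summarized in the abstract above (ego-motion prediction fused with map-matched road-marking observations) can be illustrated with a deliberately reduced sketch. The paper combines an extended Kalman filter with a particle filter over a full vehicle pose; the one-dimensional linear Kalman filter below only shows the predict/correct fusion idea, and every function name and noise value is an illustrative assumption, not taken from the paper.

```python
import math

def kf_predict(x, P, u, Q):
    # Motion update: propagate the position estimate x by the odometry
    # increment u, inflating the variance P by the process noise Q.
    return x + u, P + Q

def kf_update(x, P, z, R):
    # Measurement update: correct the prediction with a map-matched
    # position observation z (variance R) via the Kalman gain K.
    K = P / (P + R)
    return x + K * (z - x), (1.0 - K) * P

# Simulate a vehicle driving 1 m per step with biased odometry,
# corrected every step by a noisy map-based position fix.
true_pos, x, P = 0.0, 0.0, 1.0
for step in range(50):
    true_pos += 1.0
    odo = 1.0 + 0.1 * math.sin(step)        # odometry increment with drift
    x, P = kf_predict(x, P, odo, Q=0.05)
    z = true_pos + 0.05 * math.cos(step)    # map-matched measurement
    x, P = kf_update(x, P, z, R=0.1)

print(abs(x - true_pos))  # residual error after fusion stays small
```

In the real system the state would be a 2D pose with heading, the motion model nonlinear (hence the extended Kalman filter), and the measurement a road-marking feature matched against the numerical map rather than a direct position fix.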
6. MPS.BR: a successful program for software process improvement in Brazil.
- Author
-
Montoni, Mariano Angel, Rocha, Ana Regina, and Weber, Kival Chaves
- Subjects
COMPUTER software development ,SMALL business ,TECHNOLOGICAL innovations ,AUTOMATION ,COMPUTER software - Abstract
Software process improvement implementation based on software process reference models and standards is a complex and long-term endeavor that requires investment of large sums of money. These obstacles usually hinder organizations from implementing software process improvement successfully, especially for small and medium-size enterprises that operate under strict financial resources. This paper describes the MPS.BR, a nationwide program for software process improvement in Brazilian organizations. The main goal of this initiative is to develop and disseminate a Brazilian software process model (named MPS Model) aiming to establish a feasible pathway for organizations to achieve benefits from implementing software process improvement at reasonable costs, especially small and medium-size enterprises. This paper presents the main components of the MPS Model and discusses the strategy executed to establish and maintain a community of MPS Model practitioners. The results of MPS Model adoption and dissemination in Brazilian software industry are also presented in this paper. Copyright © 2009 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
7. Perceived Subjective Features of Software Components: Consumer Behavior in a Software Component Market.
- Author
-
Janghyuk Lee, Se-Joon Hong, Yeong-Wha Sawng, and Ju Seong Kim
- Subjects
COMPUTER software ,CONSUMER behavior ,COMPUTER software development ,CONSUMER attitudes ,SOFTWARE engineering ,COMPUTER systems ,COMPUTER software industry ,SOFTWARE productivity - Abstract
Component-based software reuse has been generally regarded as a promising approach to improving software productivity and quality within software development. However, progress in component-based software reuse has been slower than expected. Much of the software reuse literature points to the lack of software components that can maximize users' benefits as the most important source of the slow progress. Considering that the underlying processes behind component-based software reuse are strikingly similar to commercial software marketing, this paper attempts to identify the aspects of software components that consumers value and to establish relationships between the identified aspects and consumer behavior in the software component market. More specifically, this paper focuses on the perceived subjective features of software components. This study was conducted in a web-based artificial market environment called "SofTrade." [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
8. Music signal spotting retrieval by a humming query using model driven path continuous dynamic program.
- Author
-
Hashiguchi, Hiroki, Nishimura, Takuichi, Xin Zhang, J., Takita, Junko, and Oka, Ryuichi
- Subjects
COMPUTER software development ,DYNAMIC programming ,MUSICOLOGY ,MUSICAL pitch ,SOUND ,PATTERN recognition systems ,DATABASE searching ,TECHNOLOGY ,INFORMATION storage & retrieval systems ,COMPUTER software - Abstract
In this paper the authors propose a search method for finding, within long time series of acoustic analysis patterns for music, the periods similar to a search query consisting of a time series of pitch differences (intervals) extracted from humming. The proposed method (model-driven path Continuous Dynamic Programming) is an extension of continuous DP that allows spotting searches, with the benefit that the time series of the reference model directly determines the form of the slope constraints used in continuous DP. The authors perform humming query experiments on 20 popular music songs and demonstrate the effectiveness of the proposed method. © 2007 Wiley Periodicals, Inc. Syst Comp Jpn, 38(10): 95–104, 2007; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.10232 [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
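Continuous DP for spotting retrieval, as used in the abstract above, finds the best-matching segment of a long reference series without fixing its start point in advance. A minimal sketch in the same spirit is subsequence dynamic time warping, where the accumulated cost is allowed to start fresh at every reference frame. The model-driven slope constraints the paper derives from the reference series are omitted here, and all names and data are illustrative.

```python
def spotting_match(reference, query):
    """Subsequence DTW: find the segment of `reference` (a long series of
    pitch intervals) that best matches `query`, allowing any start point."""
    n, m = len(reference), len(query)
    INF = float("inf")
    # prev[j] = best accumulated cost of matching query[:j] ending at the
    # previous reference frame; starting is free (cost 0) at every frame.
    prev = [0.0] + [INF] * m
    best_cost, best_end = INF, -1
    for i in range(n):
        cur = [0.0] + [INF] * m
        for j in range(1, m + 1):
            d = abs(reference[i] - query[j - 1])
            cur[j] = d + min(prev[j], prev[j - 1], cur[j - 1])
        if cur[m] < best_cost:
            best_cost, best_end = cur[m], i
        prev = cur
    return best_cost, best_end

# A melody's interval series with the query hidden in the middle.
ref = [0, 0, 2, 2, -1, 3, 0, 0]
query = [2, 2, -1, 3]
cost, end = spotting_match(ref, query)
print(cost, end)  # exact occurrence: cost 0.0, ending at index 5
```

A full humming-retrieval system would compare quantized pitch-interval sequences and trace the warping path back to recover the start frame as well; the sketch only reports the best end frame and cost.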
9. A generalized gamma software reliability model.
- Author
-
Okamura, Hiroyuki, Ando, Mitsuaki, and Dohi, Tadashi
- Subjects
COMPUTER software ,RELIABILITY in engineering ,POISSON processes ,EXPECTATION-maximization algorithms ,ERRORS - Abstract
In this paper, the authors propose a generalized software reliability model based on a nonhomogeneous Poisson process (NHPP), on the assumption that software fault discovery times follow a generalized gamma distribution. The proposed model encompasses representative NHPP models developed before now and is also consistent with probabilistic debugging theory based on the concept of generalized order statistics. Furthermore, in this paper, the authors propose a parameter estimation method which combines the EM (Expectation-Maximization) algorithm and a heuristic solution method, as an effective parameter estimation method for the generalized gamma software reliability model. © 2007 Wiley Periodicals, Inc. Syst Comp Jpn, 38(2): 81–90, 2007; Published online in Wiley InterScience (
www.interscience.wiley.com). DOI 10.1002/scj.20350 [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
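An NHPP software reliability model of the kind described above is characterized by its mean value function: the expected number of faults detected by time t is the total fault content times the CDF of the fault-detection-time distribution. The sketch below evaluates this for a generalized gamma distribution by numerically integrating its density; the parameterization (shape a, scale b, power c) and all values are illustrative assumptions, not the paper's, and the EM-based parameter estimation is not shown.

```python
import math

def gen_gamma_cdf(t, a, b, c, steps=2000):
    """CDF of a generalized gamma distribution with shape a, scale b, and
    power c, computed by midpoint numerical integration of its density."""
    if t <= 0:
        return 0.0
    norm = c / (b * math.gamma(a))
    total, dt = 0.0, t / steps
    for i in range(steps):
        x = (i + 0.5) * dt
        total += norm * (x / b) ** (a * c - 1) * math.exp(-((x / b) ** c)) * dt
    return total

def expected_faults(t, omega, a, b, c):
    # NHPP mean value function: expected cumulative faults by time t is the
    # total fault content omega times the fault-detection-time CDF.
    return omega * gen_gamma_cdf(t, a, b, c)

print(expected_faults(10.0, omega=100, a=2.0, b=3.0, c=1.0))  # ~84.5
```

With c = 1 this reduces to a gamma-type NHPP and with a = 1 to a Weibull-type model, which illustrates how a generalized gamma model can encompass earlier representative NHPP models.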
10. Software process commonality analysis.
- Author
-
Ocampo, Alexis, Bella, Fabio, and Münch, Jürgen
- Subjects
COMPUTER software ,COMPUTER systems ,DIFFERENCES ,ORGANIZATION ,WEB services - Abstract
To remain viable and thrive, software organizations must rapidly adapt to frequent and, often, rather far-ranging changes to their operational context. These changes typically concern many factors, including the nature of the organization's marketplace in general, its customers' demands, and its business needs. In today's most highly dynamic contexts, such as web services development, other changes create additional, severe challenges. Most critical are changes to the technology in which a software product is written or which the software product has to control or use to provide its functionality. These product-support technology changes are frequently relatively ‘small’ and incremental. They are, therefore, often handled by relatively ‘small’, incremental changes to the organization's software processes. However, the frequency of these changes is high, and their impact is elevated by time-to-market and requirements change demands. The net result is an extremely challenging need to create and manage a large number of customized process variants, collectively having more commonalities than differences, and incorporating experience-based, proven ‘best practices’. This paper describes a tool-based approach to coping with product-support technology changes. The approach utilizes established capabilities such as descriptive process modeling and the creation of reference models. It incorporates a new, innovative, tool-based capability to analyze commonalities and differences among processes. The paper includes an example-based evaluation of the approach in the domain of Wireless Internet Services as well as a discussion of its potentially broader application. Copyright © 2005 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
11. Context-aware agent platform in ubiquitous environments and its verification tests.
- Author
-
Hattori, Masanori, Cho, Kenta, Ohsuga, Akihiko, Isshiki, Masao, and Honiden, Shinichi
- Subjects
INFORMATION services ,COMPUTER software ,INFORMATION processing ,INFORMATION science ,INFORMATION networks ,CELL phone systems - Abstract
This paper deals with the “ubiquitous personal agent,” which distinctively provides information services to the individual user depending on his/her personal contexts. This is made possible by recognition of user contexts in ubiquitous environments through an agent technology used to realize context-aware intelligent environments. This agent technology consists of a “context reasoning agent” and a “lightweight intelligent mobile agent.” The former is used for automatic recognition of various kinds of data and events which can be collected through realization of ubiquitous environments and of user “contexts.” The latter executes appropriate decentralization of a large variety of personalization-related processing activities between various devices and server systems existing in the ubiquitous environments. In this paper, the details of the agent technology used are discussed and its effectiveness is confirmed by evaluation in tests of agent service using home information appliances already available on the domestic market. © 2004 Wiley Periodicals, Inc. Syst Comp Jpn, 35(7): 13–23, 2004; Published online in Wiley InterScience (
www.interscience.wiley.com). DOI 10.1002/scj.10655 [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
12. Porting a symbolic dedicated machine by micro program translation.
- Author
-
Amagai, Yoshiji
- Subjects
APPLICATION software porting ,CROSS-platform software development ,SOFTWARE compatibility ,COMPUTER software ,COMPUTER systems ,COMPUTER architecture - Abstract
A well-known technique for inheriting the software resources of computers that were used in the past is to emulate the relevant computer's instructions on a modern computer. One method of using this technique is the static translation method in which the relevant program's binary code is translated in advance and executed. Although this method provides good posttranslation execution performance, each application program must be individually translated. In this paper, the computer's hardware itself is reproduced by statically translating the micro program part of the symbolic processing dedicated machine that had provided the micro program control CPU to the C language, enabling all programs on that dedicated machine to be executed on a UNIX computer. In addition to providing powerful emulation capabilities, translating to the C language enables emulations to be easily executed on computers having different architectures. This paper describes the static translation technique and shows the effectiveness of static micro program translation through static and dynamic performance evaluations of the system that was obtained. © 2003 Wiley Periodicals, Inc. Syst Comp Jpn, 34(9): 105–114, 2003; Published online in Wiley InterScience (
www.interscience.wiley.com). DOI 10.1002/scj.1213 [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
13. A development method of time-triggered object-oriented software for embedded control systems.
- Author
-
Yokoyama, Takanori, Naya, Hidemitsu, Narisawa, Fumio, Kuragaki, Satoru, Nagaura, Wataru, Imai, Takaaki, and Suzuki, Shoji
- Subjects
EMBEDDED computer systems ,COMPUTERS ,REAL-time control ,AUTOMATIC control systems ,COMPUTER software development ,COMPUTER software - Abstract
This paper describes a time-triggered object-oriented model for embedded control systems and a control software development method based on that model. Conventional object-oriented models had been based on event-triggered actions. In contrast, we propose a time-triggered object-oriented model for real-time control. This model, which consists of a set of objects for calculating data values required for control, activates these objects cyclically to implement time-triggered actions. This paper also shows that a distributed control system having location transparency can be implemented efficiently according to this model. In addition, we propose a control software development method based on the proposed time-triggered object-oriented model. This development method can extract objects according to simple operations from control block diagrams used to describe control specifications. This paper also introduces an embedded system-oriented implementation method that does not consume much resource and presents an application example in a distributed control system for automobiles. © 2003 Wiley Periodicals, Inc. Syst Comp Jpn, 34(2): 43–54, 2003; Published online in Wiley InterScience (
www.interscience.wiley.com). DOI 10.1002/scj.1189 [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
14. Findings from Phase 2 of the SPICE trials.
- Author
-
Ho-Won Jung, Hunter, Robin, Goldenson, Dennis R., and El-Emam, Khaled
- Subjects
COMPUTER software ,SOFTWARE architecture ,SOFTWARE engineering ,SOFTWARE productivity ,STANDARDS - Abstract
The international SPICE (Software Process Improvement and Capability dEtermination) project was set up to support the development of the ISO/IEC 15504 standard for software process assessment (SPA). The project mounted a set of trials to validate the emerging standard against the goals and requirements defined at the start of the SPICE project and to verify the consistency and usability of its component parts. A considerable number of empirical evaluation studies have been conducted during the Phase 2 SPICE Trials based on ISO/IEC PDTR 15504 (between September 1996 and June 1998). Such an exercise is unprecedented in the software engineering standards community and it provides a unique opportunity for empirical validation. The purpose of this paper is to present major parts of the findings of the empirical studies conducted as part of the SPICE Project during Phase 2 of the SPICE Trials. The topics covered in this paper include (i) investigation into reasons for performing SPAs, (ii) evaluation of the internal consistency of the capability dimension, (iii) use of interrater agreement as a measure of the reliability of assessments, (iv) evaluation of the predictive validity of process capability, (v) evaluation of an exemplar model (Part 5), (vi) identification of factors influencing assessor effort, and (vii) empirical comparison between ISO/IEC PDTR 15504 and ISO 9001. Major lessons learned about the strengths and weaknesses of ISO/IEC 15504, as well as future research directions, are summarized. Copyright © 2001 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2001
- Full Text
- View/download PDF
15. The group-noted state transition method applied to a PC management system.
- Author
-
Hiyama, Kunio
- Subjects
COMPUTER systems ,ELECTRONIC systems ,COMPUTER software ,COMPUTERS ,ELECTRONICS - Abstract
To improve software productivity, my colleagues and I pointed out in a previous paper that the state transition method is suitable as the functional description and implementation method. In that paper we proposed the distributed state transition method, which was derived from the model of distributed sequential machines and showed results obtained by applying it to an electronic switching system. In this paper, to enlarge the application range of the state transition method, I consider applying it to a PC management database system, and propose a new integrated design method derived from the single sequential machine model as being suitable. First I propose the “group-noted state transition method” by introducing the sequential machine model. Then I propose this design method as follows. Initially, the function of the outer world of the system is classified into several groups, and management jobs are then divided into the same number of groups corresponding to them. The database information can also be classified in the same way and can support the design of the database system. After that, the state transition description is defined from the input/output information to/from the groups of the outer world. Furthermore, the method of notation in which the status group is added to the status name is proposed by classifying statuses using the concept of the management cycle. I also propose a new method for designing the software structure in which state transition diagram information is stored in the database system by converting it to table format. Results clarify that it is possible to make an integrated management database system by using this state transition method. They also clarify how to make a more advanced system by cooperating with the workflow system. © 2001 Scripta Technica, Syst Comp Jpn, 32(10): 13–21, 2001 [ABSTRACT FROM AUTHOR]
- Published
- 2001
- Full Text
- View/download PDF
16. Processes for software in safety critical systems.
- Author
-
Benediktsson, O., Hunter, R. B., and McGettrick, A. D.
- Subjects
COMPUTER software ,SAFETY ,INTEGRITY ,SOFTWARE engineering ,MANAGEMENT - Abstract
Two complementary standards are compared, both of which are concerned with the production of quality software. One, IEC 61508, is concerned with the safety of software intensive systems and the other, ISO/IEC TR 15504, takes a process view of software capability assessment. The standards are independent, though both standards build on ISO/IEC 12207. The paper proposes a correspondence between the safety integrity levels (SILs) of 61508 and the capability levels (CLs) of 15504, and considers the appropriateness of the 15504 reference model as a framework for assessing safety critical software processes. Empirical work from the SPICE trials and COCOMO II is used to support the arguments of the paper as well as to investigate their consequences. The development of a 15504 compatible assessment model for software in safety critical systems is proposed. Copyright © 2001 John Wiley & Sons Ltd [ABSTRACT FROM AUTHOR]
- Published
- 2001
- Full Text
- View/download PDF
17. An efficient procedure for maintaining materialized views on distributed database systems.
- Author
-
Kasukawa, Takeya, Nakanishi, Michio, Matsuda, Hideo, and Hashimoto, Akihiro
- Subjects
DATABASES ,DATABASE searching ,ELECTRONIC information resources ,ELECTRONIC data processing ,INFORMATION retrieval ,COMPUTER software - Abstract
This paper considers a distributed database represented by the relational model, and proposes an efficient procedure to reflect the update of the base relation for materialized views defined by the inequality query from the base relation in the element database. Methods to reflect updates of the base relation on the materialized views in the distributed database include a method that calculates the tuple inserted or deleted for the materialized view due to the update of the base relation, using the existing query processing procedure, as well as a method that stores the copy of the base relation to be utilized in the database containing the materialized view. The method proposed in the paper is to store and utilize the key attribute value of the tuple satisfying a part of the inequality query condition as the materialized view, independently of the originally considered materialized view. By this mechanism, the communication cost in query processing can be reduced. The required memory area can also be reduced, compared to the case where the copy of the base relation is directly stored. This paper shows the validity of the proposed method, and compares the communication cost and the amount of data to be stored, versus the existing method. It is also shown that the proposed method is useful in the database for molecular biology. © 2000 Scripta Technica, Syst Comp Jpn, 31(5): 86–96, 2000 [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
18. Success factors of organizational change in software process improvement.
- Author
-
Stelzer, Dirk and Mellis, Werner
- Subjects
ORGANIZATIONAL change ,COMPUTER software ,ISO 9000 Series Standards ,APPLICATION software ,COMPUTER software quality control - Abstract
The management of organizational change is an essential element of successful software process improvement efforts. This paper describes ten factors that affect organizational change in software process improvement initiatives based on the Capability Maturity Model or the ISO 9000 quality standards. It also assesses the relative importance of these factors and compares the findings with the results of previous research into organizational change in software process improvement. The paper is based on an analysis of published experience reports and case studies of 56 software organizations that have implemented an ISO 9000 quality system or that have conducted a CMM-based process improvement initiative. Copyright © 1998 John Wiley & Sons Ltd [ABSTRACT FROM AUTHOR]
- Published
- 1998
- Full Text
- View/download PDF
19. Managing change for software process improvement initiatives: a practical experience-based approach.
- Author
-
Moitra, Deependra
- Subjects
COMPUTER software development ,SOFTWARE engineering ,ORGANIZATIONAL change ,COMPUTER software ,MANAGEMENT - Abstract
This paper provides a practical, experience-based approach to managing change in software process improvement initiatives. The contents are based on the author's experience in leading software process improvement initiatives and deploying quality processes in high-tech organizations involved in large-scale software development, as well as on several years of research and consulting. The paper emphasizes the critical role of processes in the success of software development organizations and provides practical insight into the process of change for achieving software excellence. The importance of a strategic and structured approach to change management is discussed. An analysis of the problems and difficulties associated with change management in software process improvement initiatives is presented, along with a simple and pragmatic approach for successfully managing such change. A matrix identifying the skills, attitudes, and characteristics essential for a successful change agent is also presented. Copyright © 1998 John Wiley & Sons Ltd [ABSTRACT FROM AUTHOR]
- Published
- 1998
- Full Text
- View/download PDF
20. High-speed, high-accuracy Hough transform using simultaneous recurrence formula.
- Author
-
Nakashima, Katsuyuki, Yakabe, Hidetoshi, Obuchi, Yutaka, and Inoue, Katsunori
- Subjects
COMPUTER science ,COMPUTER architecture ,COMPUTER programming ,COMPUTERS ,COMPUTER software - Abstract
This paper is concerned with a high-speed, high-accuracy Hough transform that utilizes a simultaneous recurrence formula. Existing methods of this kind still produce a small error in the ρ value calculation, and the ρ value does not agree well with the value calculated from the original Hough transform expression. Moreover, their form of expression is not suited to parallel processing. The accuracy of the ρ value calculated by the new method presented in this paper is confirmed by computer simulation. The proposed method also achieves very high calculation speed by replacing multiplication operations with shift operations on the CPU registers. Because of its extremely simple configuration, the algorithm is easy to implement in special hardware, and it is suitable for parallel operation on special hardware or a parallel processor. The newly proposed method thus represents significant progress toward applying the Hough transform to real-time processing. © 1997 Scripta Technica, Inc. Syst Comp Jpn, 28(3): 24–33, 1997 [ABSTRACT FROM AUTHOR]
- Published
- 1997
- Full Text
- View/download PDF
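The core trick of a recurrence-based Hough transform is to step through the angle bins with the coupled (simultaneous) recurrence for cosine and sine, so that ρ = x cos θ + y sin θ is computed with no per-bin trigonometric calls. The sketch below keeps floating-point arithmetic (the paper additionally replaces multiplications with register shift operations); the function name and bin count are illustrative.

```python
import math

def hough_rhos(x, y, n_theta=180):
    """Compute rho = x*cos(theta) + y*sin(theta) for all angle bins using
    the simultaneous recurrence
        cos(t+d) = cos(t)*cos(d) - sin(t)*sin(d)
        sin(t+d) = sin(t)*cos(d) + cos(t)*sin(d)
    so cos/sin are evaluated only once, for the bin width d."""
    d = math.pi / n_theta
    cd, sd = math.cos(d), math.sin(d)
    c, s = 1.0, 0.0          # cos(0), sin(0)
    rhos = []
    for _ in range(n_theta):
        rhos.append(x * c + y * s)
        c, s = c * cd - s * sd, s * cd + c * sd  # advance both together
    return rhos

# The recurrence agrees with the direct trig formula to high accuracy.
rhos = hough_rhos(3.0, 4.0)
direct = [3.0 * math.cos(k * math.pi / 180) + 4.0 * math.sin(k * math.pi / 180)
          for k in range(180)]
print(max(abs(a - b) for a, b in zip(rhos, direct)))
```

Because each step is two multiply-adds per trig value, the update is trivially pipelined or parallelized, which is the property the abstract highlights for hardware implementation.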
21. Optimization Methods for Look-up Table-Type FPGAs Based on Permissible Functions.
- Author
-
Yamashita, Shigeru, Kambayashi, Yahiko, and Muroga, Saburo
- Subjects
FIELD programmable gate arrays ,COMPUTER circuits ,COMPUTER network architectures ,MATHEMATICAL optimization ,COMPUTER programming ,COMPUTER software - Abstract
Recently, due to remarkable technological developments and because their logic can be modified flexibly and easily, Field Programmable Gate Arrays (FPGAs) have increasingly been applied to hardware prototyping and the design of restructurable computer architectures. For this reason, it is highly necessary to establish logic design techniques for FPGAs. This paper describes methods for optimizing the circuits mapped onto look-up table-type FPGAs. These methods apply the concept of permissible functions from the transduction method, which is a logic circuit optimization technique based on design improvement, and attempt to reduce the degree of redundancy. The following two methods were investigated: 1) reducing the number of blocks by using one logic block to replace another; and 2) together with replacing logic blocks, modifying the internal logic that implements a logic block as long as the outputs are not affected. Although the latter is generally more flexible, the former requires less processing time and sometimes produces better results. The effectiveness of these methods is demonstrated by showing that the degree of redundancy was reduced by about 9 percent when the methods were applied to circuits mapped onto look-up table-type FPGAs. The methods proposed in this paper can also be applied to designs using standard cells and gate arrays. [ABSTRACT FROM AUTHOR]
- Published
- 1996
- Full Text
- View/download PDF
22. Automatic Tracking of Highway Road Edge Based on Vehicle Dynamics.
- Author
-
Negishi, Shinji, Osawa, Shinji, and Chiba, Masataka
- Subjects
AUTOMATIC tracking ,TRANSPORTATION ,COMPUTER software ,COMPUTER systems ,COMPUTERS - Abstract
There are many ongoing efforts toward visual navigation systems for autonomous vehicles. On highways, where constraints on the road structure can be exploited, it is effective to extract the white line (lane marker) that is required to be drawn as the borderline of the lane. This paper considers the automatic extraction of the road edge from the white line in road images taken by a vehicle camera. A technique has already been proposed which assumes that the road edge image is extracted from the white line, and estimates the horizontal curvature of the road and the vibration component of the vehicle (the dynamic angle of the vehicle). It is adequate to use the break (end point) of the white dashed line as the feature point for estimating the vehicle angle. To extract the road edge from the dashed line, however, it is necessary to trace the dashed line as it moves between consecutive frames. From this viewpoint, this paper utilizes the result of the previously proposed estimation method for the vehicle angle and predicts the motion of the end point of the dashed line on the image due to the motion of the vehicle. By restricting the search range based on this prediction, the end point is tracked. An experiment was conducted on an image sequence taken on an actual road. It is verified that sufficient accuracy is obtained for the position prediction, so that the restricted search range contains the end point, which helps the automatic extraction of the road edge. A cooperative system is then constructed which estimates the vehicle angle and the road shape from real images, using the automatically extracted road edge. It is verified by an on-line experiment on real images that the vehicle angle and the horizontal curvature of the road can be estimated effectively. These results are reported in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
23. Analysis of the Error Back-Propagation Learning Algorithms with Gain.
- Author
-
Jia, Qi, Hagiwara, Katsuyuki, Usui, Shiro, and Toda, Naohiro
- Subjects
COMPUTER algorithms ,ALGORITHMS ,COMPUTER programming ,COMPUTER software ,COMPUTER systems ,COMPUTERS - Abstract
As methods to accelerate learning by error back-propagation, several studies have been proposed in which a parameter called the gain is introduced. In those studies, however, the acceleration effect is evaluated only numerically, and there is no theoretical analysis of the effect of the gain on the learning process. This paper points out that those methods can also be realized without introducing the gain, and presents a detailed analysis of the effect of the gain from a unified viewpoint. The following properties are revealed. The error back-propagation method with a constant gain can be reduced to the ordinary error back-propagation method without the gain. When a dynamic gain is introduced, the method cannot be reduced to the steepest descent method, or to the momentum method, without the gain. Furthermore, it is shown that there exists a characteristic superellipse that determines the behavior of the gain. By analyzing the characteristic superellipse, a theoretical basis is provided for the instability of the method with a dynamic gain. This paper thus gives a unified treatment of the methods with and without the gain, which had previously been considered independently. The analysis of the effect of the gain on the learning process will help in developing new learning methods in the future. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
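The constant-gain reduction mentioned in the abstract above can be checked numerically for a single logistic layer: applying gain g to the activations gives the same outputs as a gain-free layer whose weights are scaled by g. This is a minimal one-layer illustration; layer sizes and random weights are arbitrary.

```python
import numpy as np

def sigmoid(a, gain=1.0):
    # logistic activation with gain: f(a) = 1 / (1 + exp(-gain * a))
    return 1.0 / (1.0 + np.exp(-gain * a))

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))   # weights of a single sigmoid layer
x = rng.normal(size=5)
g = 2.5                       # constant gain

out_with_gain = sigmoid(W @ x, gain=g)     # layer using gain g
out_rescaled  = sigmoid((g * W) @ x)       # gain-free layer, weights g*W

assert np.allclose(out_with_gain, out_rescaled)
```

For the learning dynamics the reduction also involves rescaling the learning rate; the identity above covers only the forward computation.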
24. A Speculative Execution Scheme of Macrotasks for Parallel Processing Systems.
- Author
-
Yamana, Hayato, Yasue, Toshiaki, Ishii, Yoshihiko, and Muraoka, Yoichi
- Subjects
PARALLEL processing ,MULTIPROCESSORS ,PROGRAMMING languages ,COMPUTER software ,COMPUTER systems ,COMPUTERS - Abstract
This paper considers the high-speed execution of FORTRAN programs on parallel processing systems and proposes a parallelizing and execution scheme for programs based on speculative execution over multiple conditional branches. Several techniques have been proposed to parallelize programs that include conditional branches. A method that does not use speculative execution is: (1) the method called earliest execution condition determination. Methods that use speculative execution are: (2) the speculative evaluation scheme for a single conditional branch on superscalar or VLIW processors; and (3) the multiple speculative execution scheme assuming particular loops. These have the following problems: (1) sufficient parallelism is not extracted merely by determining the earliest execution condition; (2) the speed improvement realizable by speculative execution of a single conditional branch is at most twofold; and (3) the scheme can be applied only to particular loops. This paper divides the program into macrotasks and defines a multiple-stage speculative execution scheme between macrotasks on a general parallel processing system. Execution control for each macrotask is then proposed, using the execution start condition, the control establishment condition and the execution stop condition. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
25. Interpretation of SDL Specification in LOTOS.
- Author
-
Ando, Tsuyoshi, Ohta, Masataka, and Takahashi, Kaoru
- Subjects
TECHNICAL specifications ,COMPUTER software ,SOFTWARE engineering ,COMPUTER programming ,COMPUTER algorithms ,COMPUTER science - Abstract
This paper aims at: (1) improvement of the specification verification capability of SDL without losing its advantages, i.e., transparency and descriptive power; and (2) realization of translation between specifications described in SDL and LOTOS. A method is proposed which gives a LOTOS interpretation of a specification written in SDL. First, the mapping of the object system structure indicated by the SDL specification to the LOTOS description is discussed. Then the description of the processing definition of the SDL specification is described. Using the proposed method, a LOTOS interpretation is presented as an example for the SDL specification of a switching service. The result of the interpretation is evaluated and the flexibility of the verification process is discussed. [ABSTRACT FROM AUTHOR]
- Published
- 1994
- Full Text
- View/download PDF
26. PQL: Modal Logic for Compositional Verification of Concurrent Programs.
- Author
-
Uchihira, Naoshi
- Subjects
SOFTWARE verification ,SOFTWARE engineering ,COMPUTER systems ,COMPUTER logic ,COMPUTER software - Abstract
The temporal logic model checking method is very useful for the verification of concurrent programs that can be expressed as finite state transition systems. However, a major drawback of this method is that as the scale of the programs increases, the computation costs for verification increase exponentially. An effective solution to this problem is compositional verification: the bisimulation equivalence of concurrent programs is used to extract from each construction element (subprocess) only the abstract information necessary to verify each query, thereby avoiding an explosion in cost. In this paper, PQL (Process Query Language) is proposed as an improved approach to this problem. PQL is based on a modal logic which is the union of temporal and process logic. A compositional verification method using PQL is also proposed, taking 'divergence by internal transition' into consideration. [ABSTRACT FROM AUTHOR]
- Published
- 1994
- Full Text
- View/download PDF
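Compositional verification as described above rests on bisimulation equivalence. A naive partition-refinement check for strong bisimulation on a finite labelled transition system can be sketched as follows; this is a generic textbook-style sketch, not PQL or the paper's method.

```python
from collections import defaultdict

def bisimulation_classes(states, transitions):
    """Coarsest strong-bisimulation partition of a finite labelled
    transition system. `transitions` maps state -> list of (action, target)."""
    # Start with the trivial partition and refine until stable.
    partition = [set(states)]
    while True:
        block_of = {s: i for i, b in enumerate(partition) for s in b}
        def signature(s):
            # set of (action, target block) pairs reachable in one step
            return frozenset((a, block_of[t]) for a, t in transitions.get(s, []))
        new_partition = []
        for block in partition:
            groups = defaultdict(set)
            for s in block:
                groups[signature(s)].add(s)
            new_partition.extend(groups.values())
        if len(new_partition) == len(partition):
            return new_partition
        partition = new_partition
```

States that end up in the same block are bisimilar, which is what lets a subprocess be replaced by a smaller equivalent one during compositional verification.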
27. Realization of a Self-Testing Bus Arbiter.
- Author
-
Tokito, Kazuo, Kurokawa, Takakazu, and Koga, Yoshiaki
- Subjects
COMPUTER systems ,COMPUTER buses ,COMPUTER software ,FAULT-tolerant computing ,REAL-time computing ,COMPUTER input-output equipment ,COMPUTER architecture - Abstract
This paper discusses the realization of a self-testing distributed arbiter for bus-connected systems. One traditional arbitration scheme has each connected module acknowledge when arbitration is completed. However, this requires dedicated software as well as complex hardware, which increases cost and affects the throughput of the system. The method proposed in this paper assigns a certain code as the module number for verification of arbitration, and detects failures by a retrial using the code's complement. In other words, the method aims at secure arbitration using time-space redundancy, i.e., redundancy in time and in code space, realized by as simple a circuit as possible with a self-testing function. Simulation results verify that all single stuck-at faults can be detected in the 3-bit case, under the assumption that all possible inputs are given before the next fault occurs. Even when not all of the inputs are given, no anomalous operation was observed in which an incorrect module is selected. Thus, a self-testing arbiter with high speed and a simple hardware configuration can be realized. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
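The complement-retrial idea can be illustrated in a few lines: a stuck-at line reads the same bit whether the code or its complement is driven, so XOR-ing the two reads exposes the fault. This is a behavioural sketch only; the bus width and fault model are simplified assumptions, not the paper's circuit.

```python
WIDTH = 3  # module-number code width, matching the 3-bit simulation

def read_bus(value, stuck_bit=None, stuck_val=0):
    """Model a bus with an optional single stuck-at fault on one line."""
    if stuck_bit is None:
        return value
    mask = 1 << stuck_bit
    return (value | mask) if stuck_val else (value & ~mask)

def arbitration_ok(code, stuck_bit=None, stuck_val=0):
    """Time redundancy: drive the module code, then its complement.
    On a fault-free bus the two reads XOR to all ones."""
    first = read_bus(code, stuck_bit, stuck_val)
    second = read_bus(code ^ (2**WIDTH - 1), stuck_bit, stuck_val)
    return (first ^ second) == 2**WIDTH - 1

assert arbitration_ok(0b101)                              # fault-free bus
assert not arbitration_ok(0b101, stuck_bit=1, stuck_val=0)  # fault caught
```

Because the stuck line contributes the same bit to both reads, every single stuck-at fault is detected by the retrial regardless of the module code.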
28. Global States Monitoring Algorithm for Distributed System.
- Author
-
Moriyasu, Kenji, Soneoka, Terunao, and Manabe, Yoshifumi
- Subjects
DISTRIBUTED computing ,SYNCHRONIZATION ,ERRORS ,COMPUTER software ,TECHNOLOGICAL complexity ,ALGORITHMS - Abstract
In distributed systems, there are not only synchronization errors that stop the system, but also errors in which each process behaves according to its process definition, yet the system as a whole behaves differently from the requirements. Since many synchronization errors cannot be detected by locally monitoring the state of each process, monitoring the system's global states is required. Stable global states are global states in which all channels are empty, and monitoring them is known to be important for detecting loss of synchronization in distributed systems, i.e., the presence of unintended global states or the absence of intended ones. This paper describes a method of monitoring stable global states during the operation of distributed systems. It first clarifies the lower bound on the control information that each process should record for the detection of stable global states, under the assumption that all control information is piggybacked on ordinary system messages. A stable global state monitoring algorithm is then presented, whose control information is minimum and which has minimum-order complexity. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
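The defining property of a stable global state, namely that every channel is empty, can be phrased directly in terms of per-channel send/receive counters. The snippet below illustrates the definition only; it is not the paper's minimum-information algorithm, and the channel names are invented.

```python
def is_stable(channel_counts):
    """A recorded global state is stable when every channel is empty,
    i.e. each channel's send counter equals its receive counter.
    `channel_counts` maps (sender, receiver) -> (sent, received)."""
    return all(sent == received for sent, received in channel_counts.values())

snapshot = {
    ('P1', 'P2'): (4, 4),
    ('P2', 'P1'): (2, 2),
    ('P2', 'P3'): (7, 6),   # one message still in transit
}
assert not is_stable(snapshot)
snapshot[('P2', 'P3')] = (7, 7)
assert is_stable(snapshot)
```

Piggybacking such counters on ordinary messages is what lets stability be detected without extra control messages.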
29. An Environment for Dataflow Program Development of Parallel Processing System --Harray.
- Author
-
Yamana, Hayato, Kohdate, Jun, Yasue, Toshiaki, and Muraoka, Yoichi
- Subjects
DATA flow computing ,SYSTEMS programming (Computer science) ,DEBUGGING ,PROBLEM solving ,COMPUTER software - Abstract
This paper considers the dataflow program development environment for system programmers who develop compilers, and proposes a method to improve debugging efficiency. The conventional debugging methods are either: (1) to monitor the packets in the dataflow ring, or (2) to identify the function containing a bug. The former has unsolved problems such as determining the start timing for data monitoring and the presentation of a large amount of information to the user. The latter has the problem that debugging is impossible at the dataflow level. This paper aims at solving those problems; the detailed debugging is executed in software, not on the real machine. Information presentation on a dataflow graph is considered for systematic presentation of the debugging information. As the development environment, the parallel processing system Harray, proposed by the authors, is considered. In the proposed system, a two-stage process is employed: the first step is to identify the macroblock (a task unit in Harray) containing the bug, and the second step is detailed debugging of the identified macroblock. The debugging within the macroblock is executed in software, and debugging efficiency is improved by: (1) diagram representation for easier visual recognition, and (2) a backward tracing function. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
30. Evaluation of a Data-Driven Machine with Advanced Control Mechanism.
- Author
-
Yamaguchi, Yoshinori, Toda, Kenji, and Yuba, Toshitsugu
- Subjects
COMPUTERS ,COMPUTER software ,PARALLEL processing ,PARALLEL programming ,COMPUTER files - Abstract
This paper evaluates the EM-3, a data-driven machine with an advanced control mechanism. The advanced control mechanism, based on the pseudo-result concept, has been introduced into the data-driven mechanism. To evaluate this mechanism in a parallel processing environment, a prototype machine has been developed. Five benchmark programs with different characteristics are evaluated by both a software simulator and the prototype machine. This paper presents the results of examining the data-driven mechanism with advanced control in the list processing system. The parallelism profiles of the five programs, the dynamic execution rates of instructions, and the characteristics of waiting time in the matching store are described. Finally, the performance of the EM-3 prototype is shown, as well as the effectiveness of introducing the advanced control mechanism. [ABSTRACT FROM AUTHOR]
- Published
- 1990
- Full Text
- View/download PDF
31. A Design Method of Network Operating Systems Based on the Concept of an Object.
- Author
-
Takahashi, Kaoru, Ohta, Yoshiyuki, Shiratori, Norio, and Noguchi, Shoichi
- Subjects
COMPUTER operating systems ,COMPUTER interfaces ,COMPUTER software ,LOCAL area networks ,SYSTEMS software - Abstract
This paper aims at improving the software productivity of the system and proposes a software design method for network OSs. In this paper the network OS is defined as a system composed of the following two functions: (1) the function, constructed on the existing OS, that realizes uniform access to distributed resources; and (2) the interface between the aforementioned function and the existing OS. In the proposed design, function (1) (called the network OS kernel) is composed as a set of objects existing in the problem domain related to the network OS. As a result, the actual structure of the system and the actual objects are reflected directly in the software design, which improves the understandability and maintainability of the system. The interface between the network OS kernel and the existing OS is introduced to achieve the portability of network OS kernels. It is expected that applying a design method based on this principle will improve the software productivity of the network OS. As an application example of the proposed design method, a LAN-oriented network OS was designed and implemented experimentally. [ABSTRACT FROM AUTHOR]
- Published
- 1989
- Full Text
- View/download PDF
32. Analyzing Parallel Executability of Production Systems.
- Author
-
Ishida, Toru
- Subjects
PRODUCTION control ,COMPUTER simulation ,SIMULATION methods & models ,DATABASES ,PROCESS control systems ,PRODUCTION scheduling ,COMPUTER software ,MODEL-integrated computing ,COMPUTER-aided design - Abstract
The production system is one in which mutually independent rules communicate with each other through a common database, aiming at the solution of a problem. Because of this nature, the production system seems to contain larger potential parallelism than a procedural program. At present, however, only one rule that has succeeded in condition matching is selected and executed, which prevents full utilization of the parallelism. The aim of this paper is to realize high-speed execution of the production system by executing rules in parallel as far as possible. First, a new execution model is proposed for the production system, in which the rules that have succeeded in condition matching are executed in parallel as far as possible. Then the data dependency graph for the production system is introduced, and a method is presented that analyzes whether or not multiple rules can be executed in parallel. A method is presented which applies the result of the parallel executability analysis to the parallel execution of rules on a parallel computer, or to the execution scheduling of rules on a sequential computer. A production system PLANET was constructed which can analyze a program by parallel execution simulation, and the proposed method was evaluated. It was verified that four to eight rules can be executed in parallel in the range of programs used in the evaluation. By combining the result of this paper with traditional parallel condition matching, it is expected that further parallelism can be realized. [ABSTRACT FROM AUTHOR]
- Published
- 1989
- Full Text
- View/download PDF
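The dependency test behind parallel executability can be sketched with a Bernstein-style condition on the read and write sets of rules over the shared working memory. This is a generic sketch; PLANET's actual analysis over the data dependency graph is more elaborate, and the rule contents below are invented.

```python
def parallel_executable(rule_a, rule_b):
    """Two rules may fire in parallel when neither one's write set
    overlaps the other's read set or write set (a Bernstein-style
    independence condition on the shared working memory)."""
    reads_a, writes_a = rule_a
    reads_b, writes_b = rule_b
    return not (writes_a & (reads_b | writes_b) or writes_b & reads_a)

# rule = (condition elements read, working-memory elements written)
r1 = ({'goal', 'x'}, {'y'})
r2 = ({'goal', 'z'}, {'w'})
r3 = ({'y'}, {'z'})

assert parallel_executable(r1, r2)       # disjoint effects
assert not parallel_executable(r1, r3)   # r1 writes what r3 reads
```

Rules that pass this test can fire simultaneously without one invalidating the other's condition match.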
33. A Rule-Based Model for Visual Geometrical Illusions.
- Author
-
Seki, Kazunori, Sugie, Noboru, and Sugihara, Kokichi
- Subjects
COMPUTER vision ,IMAGE processing ,COMPUTER systems ,COMPUTER software ,COMPUTERS - Abstract
An illusion is not simply a vision-produced error, but a phenomenon produced in the normal visual sensation. To clarify the visual perception process, various studies have been made concerning illusions, especially the geometrical illusion. This paper presents a model of the illusion process using figure transformation rules, and discusses the illusion using the model. The figure transformation rules used in this paper are as follows: 1. The acute angle is perceived to be larger, and the obtuse angle is perceived to be smaller. 2. The segment forming an acute angle is perceived to be shorter, and the segment forming an obtuse angle is perceived to be longer. 3. When two segments intersect, the segment is perceived to be longer if the point of intersection is close to the mid-point and the angle is close to the right angle. 4. The vertex shifts along the line of bisection of the angle at the vertex, in the direction corresponding to the larger perception of the acute angle and smaller perception of the obtuse angle. Applying these rules to typical figures of geometrical illusion, results corresponding to human illusion were obtained, indicating the validity of the transformation rules. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
34. Space-Time Theory GQOT and Its Application To Concurrent Processes.
- Author
-
Tanabe, Koichiro and Suzuki, Atsuyuki
- Subjects
SPACETIME ,DISTRIBUTED computing ,COMPUTER logic ,COMPUTER systems ,COMPUTER software ,COMPUTERS - Abstract
This paper discusses GQOT, a logical verification theory for concurrent processes that communicate and synchronize by asynchronous message passing. In concurrent processes, and especially in distributed systems, each process acts independently, and yet a single operation is carried out as a whole through communication and synchronization. Thus, regarding each process as having its own local time, we formalize each by a temporal theory QOT. Then, regarding all the concurrent processes as a complete graph with each process as a node, we formalize this as the space G. GQOT is the merge of G and QOT. In this paper we define GQOT and, to make it more suitable for concurrent processes, introduce a sort of action, the concept of interval time, and an operator "next." We then show the description and verification of concurrent processes and their properties. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
35. Distributed Algorithms for Fault Diagnosis of Processors.
- Author
-
Masuzawa, Toshimitsu, Hagihara, Ken'ichi, and Tokura, Nobuki
- Subjects
COMPUTER algorithms ,DISTRIBUTED computing ,COMPUTER systems ,COMPUTER software ,COMPUTERS ,COMPUTER networks - Abstract
This paper considers distributed algorithms for solving the problem π_p of testing, for any processor in a network, whether any one of the adjacent processors is faulty. First, we show that the problem is not solvable when processors are asynchronous or when there is no upper limit on the transmission delay along a link. Next, we define a network (PL-synchronized network) which assumes a certain kind of synchronicity in processors and their communication. Whether or not π_p is solvable in a PL-synchronized network depends on the knowledge of the network topology (for example, identifiers of adjacent processors, the number of processors in the network, and the edge connectivity of the network) initially available at each processor. This paper examines how the knowledge available at each processor affects the conditions under which π_p is solvable (the situation of faults of processors and links in the overall network). [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
36. Representation of Problem-Solving Processes by a Model of Associative Processor HASP. A Case Study for the Process of Addition.
- Author
-
Harai, Yuzo and Qing Ma
- Subjects
MULTIPROCESSORS ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,COMPUTER systems ,COMPUTER software ,COMPUTERS - Abstract
This paper aims at a constructive description of which neural network model can represent the human problem-solving process, and of how the process is related to the various human memories and in what context. From this viewpoint, a neural network model is constructed for the representation of the problem-solving process. In this paper, a model of the addition process is constructed as a first step, using the associative memory model HASP (human associative processor) proposed by one of the present authors. The model is composed of the following components: the procedure memory, which stores and retrieves the procedures of addition; the semantic memory, which stores knowledge about addition such as 2 + 2 = 4; the working memory, which temporarily stores external and internal information; and the goal memory, which stores the problem-solving process as a chain of subgoals and manages the flow of procedures from a higher level in the control hierarchy. In this study, a computer simulation was performed of the addition processes for several multi-digit numbers, and it is shown that the addition process can be represented. A computer simulation is then made of the generation of four kinds of bugs in addition. It is shown that the bug generation can be represented by the model. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
37. An Object-Oriented Approach for Interactive Microprogram Simulator.
- Author
-
Sugimoto, Akira, Abe, Shigeru, Kuroda, Masahiro, and Katou, Sachio
- Subjects
COMPUTER software ,COMPUTER systems ,COMPUTER programming ,COMPUTER algorithms - Abstract
Efficiency and reliability are essential to microprograms, and it is important to perform debugging and evaluation through interaction with a simulator. The simulator is also indispensable for developing the microprogram in parallel with the hardware. However, if a dedicated simulator is to be developed for each target computer, constraints are imposed on time and cost, making it difficult to realize a high-level interactive interface. On the other hand, if a general-purpose simulator is employed, it is difficult to provide an interface suited to the target computer, such as visual simulation. This paper proposes an object-oriented system aiming at realizing both the detailed functionality of a dedicated simulator and the versatility of general-purpose software. Although object-oriented languages have high modularity, adequate modeling must be provided for the whole system, including the interfaces, to ensure the machine-independence of the microprogram simulator, where the object of description itself depends on the target computer. In this paper, an interactive interface is proposed which is suited to the debugging of microprograms. Then a realization by the object-oriented scheme is described which has high flexibility against modification of the hardware specification and permits easy reuse of the software for other computers. As a practical example, the authors' development of a μP simulator is outlined, together with a description of the user interface. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
38. Intelligent Problem-Solving System Based on Model Building Method.
- Author
-
Ohsuga, Setsuo
- Subjects
PROBLEM solving ,DESIGN ,TECHNOLOGY ,COMPUTERS ,LANGUAGE & languages ,COMPUTER software - Abstract
This paper discusses the conceptual design of a knowledge processing system. First, by means of knowledge processing technology, a new relationship is established between man and computers, and it becomes possible to adopt a new problem-solving style. The paper also discusses the need to analyze the types of problems man has to solve, the manner of problem solving, and the limits to the capability of knowledge processing, in order to design an effective computer-aided problem-solving system. It is concluded that the new problem-solving system must be a knowledge processing system founded on the model-based method. Then the requirements for the knowledge processing system and, in particular, for the knowledge representation language, are discussed. Finally, an example is used to show that a system satisfying these requirements is possible. Here, the importance of the model-building concept is stressed for establishing the new problem-solving style, together with a declarative language including data structures for realizing a knowledge processing system that can aid the process. In general, real problems are very complex, and it is impossible to solve them by a simple procedure; instead, we have to approach the solution through a trial-and-error process. The knowledge processing system must be able to aid man in this process naturally and without any restrictions on man's capability. [ABSTRACT FROM AUTHOR]
- Published
- 1987
- Full Text
- View/download PDF
39. A New Method of Software Structural Testing by Path Analysis.
- Author
-
Masuyama, Hiroshi and Ichimuri, Tetsuo
- Subjects
PATH analysis (Statistics) ,MULTIVARIATE analysis ,COMPUTER software ,ALGORITHMS ,COMPUTATIONAL complexity ,ELECTRONIC data processing - Abstract
This paper discusses the structural software testing method by path analysis. The problems of the method are discussed, and a coverage measure is proposed. The problems in structural testing are discussed first. If evaluation is based on past coverage measures, the quality of the software tends to be overestimated. A testing-cost problem also arises when the number of steps for determining the test data increases with the size of the software. Furthermore, if a new coverage measure is introduced so that the quality is not overestimated, additional test cost is incurred. From this viewpoint, this paper proposes a method which essentially avoids these problems by using the minimum number of test paths. An algorithm is presented which is independent of the earlier method. It is then shown that the number of test paths determined by that algorithm is the minimum. The total distance of the test paths is also discussed. Finally, the computational complexity of determining the test paths is discussed. [ABSTRACT FROM AUTHOR]
- Published
- 1986
- Full Text
- View/download PDF
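A set of test paths that covers every edge of an acyclic control-flow graph can be built by routing one entry-to-exit path through each uncovered edge. This is a simple covering sketch, assuming every edge lies on some entry-to-exit path; it does not reproduce the paper's minimality result.

```python
from collections import deque

def shortest_path(graph, src, dst):
    """BFS shortest node path from src to dst (None if unreachable)."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def edge_covering_paths(graph, entry, exit_):
    """For every edge (u, v) still uncovered, route entry -> u -> v -> exit_
    through that edge; repeat until all edges are covered."""
    uncovered = {(u, v) for u, succs in graph.items() for v in succs}
    paths = []
    while uncovered:
        u, v = next(iter(uncovered))
        path = shortest_path(graph, entry, u) + shortest_path(graph, v, exit_)
        paths.append(path)
        uncovered -= set(zip(path, path[1:]))
    return paths
```

Each returned path is one test case; the whole set achieves branch coverage, though generally with more paths than a minimal cover.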
40. Fusion Algorithm not Affected by the Picture Frame.
- Author
-
Suzuki, Satoshi and Abe, Keiichi
- Subjects
PARALLEL processing ,ELECTRONIC data processing ,DATA flow computing ,COMPUTER architecture ,COMPUTER software ,ALGORITHMS - Abstract
This paper discusses the properties and algorithms of fusion operations, one of the fundamental techniques of binary image processing. First, a theoretical discussion is made for infinitely spread images, deriving several properties of the fusion operations. Those properties indicate the equivalence among fusion operations and their iterations and combinations. Since actual input images are finite, those properties are not satisfied by a simple algorithm, owing to the effect of the picture frame. This paper presents algorithms not affected by the picture frame, for general-purpose sequential computers and for dedicated parallel-processing hardware. These algorithms perform the fusion operation by executing the first operation of the fusion (i.e., the expansion or the contraction) together with the distance transformation, and by performing the second operation using the distance information obtained in the first. The proposed algorithms have the feature that most of the properties for infinite images also apply to images of finite size. With this feature, the proposed algorithms are shown to be suited to parallel processing with a small number of processors. [ABSTRACT FROM AUTHOR]
- Published
- 1985
- Full Text
- View/download PDF
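Fusion via a distance transform, as the abstract above outlines, can be sketched on a small binary image: expansion keeps pixels within a given distance of the foreground, and contraction is the dual operation on the complement. This is a plain city-block sketch; the paper's border handling, which removes the picture-frame effect, is not reproduced here.

```python
import numpy as np
from collections import deque

def distance_transform(img):
    """City-block distance from each pixel to the nearest foreground
    pixel, by breadth-first search from all foreground pixels at once."""
    dist = np.full(img.shape, np.inf)
    queue = deque()
    for r, c in zip(*np.nonzero(img)):
        dist[r, c] = 0
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < img.shape[0] and 0 <= nc < img.shape[1] \
                    and dist[nr, nc] > dist[r, c] + 1:
                dist[nr, nc] = dist[r, c] + 1
                queue.append((nr, nc))
    return dist

def expand(img, radius):
    # expansion: pixels within `radius` of the foreground
    return distance_transform(img) <= radius

def contract(img, radius):
    # contraction is expansion of the complement, complemented
    return ~expand(~img, radius)

def fusion(img, radius):
    # expansion followed by contraction (morphological closing)
    return contract(expand(img, radius), radius)
```

Closing built this way is extensive (the result contains the input) and fills small holes, which can be checked on a square with a missing centre pixel.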
41. Improving software quality by static program analysis.
- Author
-
Lichter, Horst and Riedinger, Gerhard
- Subjects
COMPUTER software ,COMPUTER software industry ,QUALITY assurance - Abstract
This paper presents the results and conclusions obtained during a process improvement project of ABB Energy Information Systems, Germany. The project was executed in the context of ABB Energy Information Systems' software process improvement activities and was launched in order to collect experience in the use of systematic static program analysis. We first give a brief overview of software development at ABB Energy Information Systems, then list the objectives of the project. After a short overview of the technique of static program analysis, we present our experience and conclusions in the main part of the paper, formulating and discussing nine major theses. © 1997 John Wiley & Sons Ltd [ABSTRACT FROM AUTHOR]
- Published
- 1997
- Full Text
- View/download PDF
42. Evolving improvement paradigms: capability maturity models and ISO/IEC 15504 (PDTR).
- Author
-
Garcia, Suzanne M.
- Subjects
CAPABILITY maturity model ,SOFTWARE engineering ,COMPUTER software ,COMPUTER systems ,ENGINEERING - Abstract
This paper describes the evolution of the structure and representation of Capability Maturity Models (SM) and various components of the ISO/IEC 15504 (PDTR) product set, formerly known as 'SPICE' (Software Process Improvement and Capability dEtermination). '15504' will be used as shorthand for the product set encompassed by the 15504 project. The paper focuses on the historical, structural, and conceptual evolution of the two product types. © 1997 John Wiley & Sons Ltd [ABSTRACT FROM AUTHOR]
- Published
- 1997
- Full Text
- View/download PDF
43. Assessor Agreement in Rating SPICE Processes.
- Author
-
El Emam, Khaled, Briand, Lionel, and Smith, Robert
- Subjects
RELIABILITY in engineering ,RATING ,TESTING ,EVALUATION ,COMPUTER software - Abstract
One of the enduring issues being evaluated during the SPICE trials is the reliability of assessments. One type of reliability is the extent to which different assessors produce similar ratings when assessing the same organization and presented with the same evidence. In this paper we report on a study that was conducted to start answering this question. Data was collected from an assessment of 21 process instances covering 15 processes. In each of these assessments two independent assessors performed the ratings. We found that six of the fifteen processes do not meet our minimal benchmark for interrater agreement. Three of these were due to systematic biases by either an internal or external assessor. Furthermore, for eight processes specific rating scale adjustments were identified that could improve its reliability. The findings reported in this paper provide guidance for assessors using the SPICE framework. [ABSTRACT FROM AUTHOR]
- Published
- 1996
- Full Text
- View/download PDF
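Interrater agreement of the kind studied above is commonly quantified with Cohen's kappa, which corrects raw agreement for chance. The statistic is standard, but the paper's benchmark and rating data are not reproduced here; the ratings below are invented for illustration.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two assessors rating the
    same process instances on a common categorical scale."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# SPICE-style four-point adequacy scale: N, P, L, F (example data)
a = ['F', 'L', 'L', 'P', 'N', 'F', 'L', 'P']
b = ['F', 'L', 'P', 'P', 'N', 'F', 'L', 'L']
kappa = cohens_kappa(a, b)
assert 0 < kappa < 1   # partial agreement beyond chance
```

A benchmark for "acceptable" agreement is then a threshold on kappa, with systematic assessor biases showing up as asymmetric disagreement patterns.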
44. Software Process Improvement via ISO 9000?
- Author
-
Stelzer, Dirk, Mellis, Werner, and Herzwurm, Georg
- Subjects
SURVEYS ,COMPUTER software ,ISO 9000 Series Standards - Abstract
This paper presents results of two surveys among European software houses. One of the targets of the studies was to identify software process improvements that companies have achieved while implementing ISO 9000-based quality systems. The first survey was conducted among 20 German software suppliers that have received an ISO 9001 certificate. The study focuses on five elements of an ISO 9000 quality system: code reviews and inspections; software testing; product and process measurements; measurement of quality costs; and demonstration of quality improvements. Many software houses included in our first survey have not carried out any modifications of the five elements. Thus, it seemed that ISO 9000 had not led to significant improvements. Nevertheless, the majority of the companies would decide in favor of implementing an ISO 9000 quality system once again. We conducted a second study to gain a better understanding of the improvements achieved with the help of ISO 9000. We analyzed experience reports and conducted interviews with quality managers from a total of 36 European software houses. The paper presents ten key success factors that the respondents of our studies considered to be the most helpful when implementing an ISO 9000 quality system. [ABSTRACT FROM AUTHOR]
- Published
- 1996
- Full Text
- View/download PDF
45. A Formal Framework For Specifying Design Methods.
- Author
-
D'Inverno, Mark, Justo, G. R. Ribeiro, and Howells, Paul
- Subjects
COMPUTER software ,HIGH performance computing ,METHODOLOGY - Abstract
The main objective of this paper is to put forward a software process model for high-performance systems (HPS), and to present a formal framework to describe software design methodologies (SDM) for those systems. The framework consists of two main parts: the software process activities which characterize the development of HPS, and the components of the SDM (concepts, artifacts, representation and actions) which are essential for any methodology. The framework relates these two parts by identifying generic components of each activity in the software process that can be used to classify and evaluate SDM for HPS. Another important property of the framework is that it has been formally specified using the language Z. As a result, it is also used to derive formal specifications of SDM. This is illustrated in the paper by presenting part of the specification of oDM (an occam design method). [ABSTRACT FROM AUTHOR]
- Published
- 1996
- Full Text
- View/download PDF
46. A Design-based Model for the Reduction of Software Cycle Time.
- Author
-
Collier, Ken W. and Collofello, James S.
- Subjects
COMPUTER software ,MANUFACTURING processes ,SOFTWARE engineering ,RESEARCH ,OPERATIONS research - Abstract
This paper presents a design-based software cycle time reduction model that can be easily implemented without replacement of existing development paradigms or design methodologies. The research results suggest that there are many cycle time factors that are influenced by design decisions. If manipulated carefully it appears that an organization can reduce cycle time and improve quality simultaneously. The preliminary results look promising and it is expected that further experimentation will support the use of this model. This paper will analyze the basis for the model proposed here, describe the model's details, and summarize the preliminary results of the model. [ABSTRACT FROM AUTHOR]
- Published
- 1996
- Full Text
- View/download PDF
47. The Year 2000 Date Conversion Process.
- Author
-
Chavan, Subhash
- Subjects
YEAR 2000 date conversion (Computer systems) ,COMPUTER software ,COMPUTER software industry ,BUSINESS partnerships ,SOFTWARE maintenance - Abstract
This paper describes a viable technical solution to the year 2000 date conversion issue. An important aspect of this paper is its process for offshore software development: the reader gets a first-hand, close-up look at the processes and methods used in implementing geographically dispersed projects. Chavan Software & Services (CSS) implements software projects in Germany with partner software houses based in India. The methods and processes outlined in this paper have been used successfully to implement several projects in major corporations around the world. We have selected the year 2000 date conversion issue as our example since it promises to be the biggest single software change program since the birth of computers (and is partly mandatory for many companies wishing to stay in business beyond the closing years of this century). [ABSTRACT FROM AUTHOR]
- Published
- 1996
- Full Text
- View/download PDF
48. Execution Mechanism of Von Neumann Program in Data Flow Computer.
- Author
-
Sowa, Masahiro
- Subjects
DATA flow computing ,COMPUTER systems ,PARALLEL processing ,COMPUTER software ,VON Neumann algebras ,ELECTRONIC data processing - Abstract
The data-flow computer is presently considered promising as a parallel-processing computer system, since it executes parallel processing in a natural way. However, when a serial program runs on a data-flow computer, execution is slower than on a computer dedicated to serial processing, because the machinery intended for parallel processing is exercised even for serial work. This paper considers a data-flow computer architecture in which the serial-processing function of a von Neumann computer is added to the data-flow computer, so that parallel sections are handled by data-flow processing while programs containing a large amount of serial processing are handled by von Neumann-type processing. Adding the von Neumann-type processing function to the data-flow computer makes mixed data-flow and von Neumann-type processing possible, which improves execution speed and allows the large existing body of von Neumann-type programs to be executed; this also implies a smooth transition from serial to parallel processing. The paper describes the von Neumann-type processing execution mechanism in the data-flow computer and determines the execution speed-up ratio between the data-flow and von Neumann computers for a simple serial program. [ABSTRACT FROM AUTHOR]
- Published
- 1984
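The data-flow execution model the abstract above contrasts with von Neumann execution rests on a firing rule: an operation runs as soon as all of its input tokens have arrived, so independent operations proceed in the same step while a serial chain forces one firing per step. A minimal Python sketch of that rule (the node and graph names are illustrative, not taken from the paper):

```python
# Minimal data-flow interpreter: a node "fires" once all operands arrive.
class Node:
    def __init__(self, op, n_inputs):
        self.op = op
        self.inputs = [None] * n_inputs
        self.consumers = []          # (node, port) pairs fed by our result
        self.result = None

    def ready(self):
        return all(v is not None for v in self.inputs)

def run(nodes):
    """Repeatedly fire every node whose operands are all present."""
    pending = set(nodes)
    while pending:
        fired = [n for n in pending if n.ready()]
        assert fired, "deadlock: no node is ready"
        for n in fired:              # all ready nodes may fire in parallel
            n.result = n.op(*n.inputs)
            for consumer, port in n.consumers:
                consumer.inputs[port] = n.result
            pending.remove(n)

# (1 + 2) * (3 + 4): the two additions are independent and fire in the
# same step; the multiplication must wait for both result tokens.
add1 = Node(lambda x, y: x + y, 2); add1.inputs = [1, 2]
add2 = Node(lambda x, y: x + y, 2); add2.inputs = [3, 4]
mul  = Node(lambda x, y: x * y, 2)
add1.consumers = [(mul, 0)]
add2.consumers = [(mul, 1)]
run([add1, add2, mul])
print(mul.result)
```

A purely serial chain, by contrast, leaves only one node ready per step, which is the degradation the paper's hybrid von Neumann extension addresses.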
49. Functional Programming Language ASL/F and Its Optimizing Compiler.
- Author
-
Inoue, Katsuro, Seki, Hiroyuki, Taniguchi, Kenichi, and Kasami, Tadao
- Subjects
FUNCTIONAL programming languages ,PROGRAMMING languages ,COMPUTER software ,PROGRAM transformation ,COMPUTERS ,ELECTRONIC data processing - Abstract
This paper defines the functional programming language ASL/F and discusses a method to compile and optimize its programs so that they execute efficiently on a conventional machine. To investigate the effectiveness of the proposed method, the compiler was actually constructed and several sample programs were executed; the results of those executions are described. An ASL/F program consists of several statements that define functions and the description of a term to be evaluated (the main program term). The semantics of the program is defined clearly by term rewriting. The optimizations adopted in this paper are pre-computation of the needed arguments of a defined function, avoidance of duplicated computation among common subterms, globalization of sorts (data types) such as arrays, elimination of tail recursion, and rewriting of terms at compile time. These optimizations were realized in the compiler. The example executions show that: (1) the optimizations greatly reduce execution time and working storage; and (2) ASL/F programs can be executed in nearly the same time as equivalent PASCAL programs. [ABSTRACT FROM AUTHOR]
- Published
- 1984
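Two of the optimizations the abstract above names, common-subterm sharing and tail-recursion elimination, can be sketched by showing a source-level definition next to the form a compiler rewrites it into. This is an illustration in Python, not ASL/F syntax, and all names here are made up for the example:

```python
calls = 0
def expensive(x):
    """Stand-in for a costly subterm; counts how often it is evaluated."""
    global calls
    calls += 1
    return x * x

# Without sharing, the common subterm expensive(x) is evaluated twice.
def naive(x):
    return expensive(x) + expensive(x)

# With sharing, the compiler binds the common subterm once and reuses it.
def shared(x):
    t = expensive(x)
    return t + t

# A tail-recursive definition, as a functional program states it ...
def fact_rec(n, acc=1):
    return acc if n == 0 else fact_rec(n - 1, acc * n)

# ... and the loop a tail-recursion-eliminating compiler turns it into,
# so no call stack grows with n.
def fact_loop(n, acc=1):
    while n != 0:
        n, acc = n - 1, acc * n
    return acc

assert naive(3) == shared(3) == 18     # same value, fewer evaluations
assert fact_rec(10) == fact_loop(10) == 3628800
```

The same value is computed in each pair; the rewritten forms simply avoid redundant work, which is why the paper reports reduced execution time and working storage.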
50. A Language MPL for Microcomputer Networks.
- Author
-
Hamada, Takashi, Kayano, Yoshiaki, and Yamaguchi, Takeshi
- Subjects
PROGRAMMING languages ,PERSONAL computers ,MICROPROCESSORS ,COMPUTER networks ,DATA transmission systems ,COMPUTER software - Abstract
As microprocessors have come into wide use, network systems that connect multiple microcomputers have become a common application of microprocessors. For their control software, productivity and reliability will be low if conventional descriptions are used for each node. In this paper, we present the specification and implementation of a high-level language, MPL (Multiprocessors Programming Language), designed for network system description by extending the language Modula. MPL facilitates describing the data transmission mechanism and the entire network in an integrated and efficient way. The paper also presents an example of a message transmission system in a simply connected network to evaluate the descriptive power of MPL. [ABSTRACT FROM AUTHOR]
- Published
- 1984