77 results
Search Results
2. Parallel Programs: Proofs, Principles, and Practice.
- Author
-
Andrews, Gregory R.
- Subjects
COMPUTER algorithms, ELECTRONIC data processing, PARALLEL programming, PROGRAMMING languages, COMPUTER software
- Abstract
Several principles are identified from work on the verification of parallel programs. Concrete examples of the ways these principles can be applied, even when formal verification is not the goal, are then described. The purpose of this paper is to demonstrate ways in which the concepts of program verification yield insight into the programming process, programming languages, and program properties. [ABSTRACT FROM AUTHOR]
- Published
- 1981
- Full Text
- View/download PDF
3. Improving Resilience of Software Systems: A Case Study in 3D-Online Game System.
- Author
-
Lu, Wei, Wang, Weidong, Bao, Ergude, Xing, Weiwei, and Zhu, Kai
- Subjects
COMPUTER software, VIDEO games, ELECTRONIC data processing, COMPUTER algorithms, THREE-dimensional modeling
- Abstract
Resilience is the property that enables a system to continue operating properly when one or more faults occur. Nowadays, as software systems become more and more complex, their hardware execution platforms also become more heterogeneous and larger in scale. Software systems may fail due to faults such as node breakdown, communication failure, or data processing failure. In this paper, we propose a ring-based resilience mechanism, which implements fault detection and recovery. (1) To solve the problem that a central server may carry a heavy network-traffic burden, we design a ring-based heartbeat algorithm for crash fault detection. (2) We also design a lightweight recovery mechanism, as compared with current system-specific mechanisms, to recover from crash faults. To evaluate our mechanism, we use a 3D-online game system as a case study. By injecting faults, we test the effectiveness and overhead of the proposed mechanism. The experimental results show that, compared with other mechanisms, ours supports resilience very well and deals better with crash faults caused by high cluster workload, at acceptable overhead. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
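The ring-based heartbeat in item 3 lends itself to a small illustration. The sketch below is a toy under our own assumptions, not the authors' implementation: the Ring/Node classes, the timeout policy, and the rule for patching the ring around crashed nodes are all invented here. It demonstrates only the core idea of the abstract: each node monitors a single neighbour, so detection traffic stays constant per node instead of concentrating on a central server.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: int
    last_heartbeat: float = field(default_factory=time.monotonic)
    alive: bool = True

class Ring:
    """Each node pings only its ring successor; no central server."""

    def __init__(self, n, timeout=2.0):
        self.nodes = [Node(i) for i in range(n)]
        self.timeout = timeout

    def successor(self, i):
        # Next live node clockwise; the ring is patched around crashes.
        j = (i + 1) % len(self.nodes)
        while not self.nodes[j].alive:
            j = (j + 1) % len(self.nodes)
        return j

    def heartbeat(self, i):
        # Node i pings its successor, which records the arrival time
        # as "last heartbeat heard from my predecessor".
        self.nodes[self.successor(i)].last_heartbeat = time.monotonic()

    def suspects_predecessor(self, i):
        # Node i suspects a crash if its predecessor has gone quiet.
        return time.monotonic() - self.nodes[i].last_heartbeat > self.timeout

ring = Ring(5)
ring.heartbeat(0)                    # node 0 -> node 1
print(ring.suspects_predecessor(1))  # False right after a heartbeat
```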
4. A Proposal for Error Handling in OpenMP.
- Author
-
Duran, Alejandro, Ferrer, Roger, Costa, Juan, Gonzàlez, Marc, Martorell, Xavier, Ayguadé, Eduard, and Labarta, Jesús
- Subjects
PARALLEL programming, COMPUTER programming, PARALLEL processing, PARALLEL logic programming, COMPUTER software, COMPUTER algorithms, SYSTEMS development, ELECTRONIC data processing
- Abstract
OpenMP has focused on the performance of numerical applications, but when this focus moves to other kinds of applications, such as Web servers, an important gap appears: in these applications performance matters, but reliability matters even more, and OpenMP has no recovery mechanism. In this paper we present a novel proposal to fill this gap. To add error handling to OpenMP, we propose some extensions to the current OpenMP specification: a directive and a clause, defining a scope for error handling (where the error can occur) and specifying a behaviour for handling specific errors. Some examples of use are presented, and we also present an evaluation of the impact of this proposal on OpenMP applications. We show that this impact is low enough to make the proposal worthwhile for OpenMP. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
5. Miss Rate Prediction Across Program Inputs and Cache Configurations.
- Author
-
Yutao Zhong, Dropsho, Steven G., Xipeng Shen, Studer, Ahren, and Chen Ding
- Subjects
CACHE memory, ELECTRONIC data processing, COMPUTER storage devices, COMPUTER software, COMPUTER algorithms, DATA structures, ELECTRONIC file management, VISUAL programming languages (Computer science), HIGH technology industries
- Abstract
Improving cache performance requires understanding cache behavior. However, measuring cache performance for one or two data input sets provides little insight into how cache behavior varies across all data input sets and all cache configurations. This paper uses locality analysis to generate a parameterized model of program cache behavior. Given a cache size and associativity, this model predicts the miss rate for arbitrary data input set sizes. This model also identifies critical data input sizes where cache behavior exhibits marked changes. Experiments show this technique is within 2 percent of the hit rate for set associative caches on a set of floating-point and integer programs using array and pointer-based data structures. Building on the new model, this paper presents an interactive visualization tool that uses a three-dimensional plot to show miss rate changes across program data sizes and cache sizes and its use in evaluating compiler transformations. Other uses of this visualization tool include assisting machine and benchmark-set design. The tool can be accessed on the Web at http://www.cs.rochester.edu/research/locality. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
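Item 5 builds its predictor from locality (reuse-distance) analysis, which is beyond a short sketch. As a rough stand-in for experimentation, the following fits a simple power-law curve of miss rate against input size; the fitting form, the training points, and the function names are all invented for illustration and are not the paper's model.

```python
import numpy as np

# Hedged stand-in: fit miss_rate(s) ~ a + b * s**c from a few measured
# input sizes, then extrapolate. The real model in the paper is built
# from reuse-distance analysis, not curve fitting.

def fit_power_law(sizes, miss_rates, a=0.0):
    # log(miss - a) = log b + c * log s  ->  linear least squares.
    y = np.log(np.asarray(miss_rates) - a)
    X = np.vstack([np.ones(len(sizes)), np.log(sizes)]).T
    (log_b, c), *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.exp(log_b), c

def predict(size, a, b, c):
    return a + b * size**c

b, c = fit_power_law([1e4, 1e5, 1e6], [0.02, 0.035, 0.06])
print(predict(1e7, 0.0, b, c))   # extrapolated miss rate, larger input
```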
6. Multilabel Neural Networks with Applications to Functional Genomics and Text Categorization.
- Author
-
Min-Ling Zhang and Zhi-Hua Zhou
- Subjects
ARTIFICIAL neural networks, COMPUTER algorithms, MACHINE learning, COMPUTER programming, COMPUTER software, ELECTRONIC data processing, MACHINE theory, ARTIFICIAL intelligence, ALGORITHMS
- Abstract
In multilabel learning, each instance in the training set is associated with a set of labels, and the task is to output a label set, whose size is unknown a priori, for each unseen instance. In this paper, this problem is addressed by proposing a neural network algorithm named BP-MLL, i.e., Backpropagation for Multilabel Learning. It is derived from the popular Backpropagation algorithm by employing a novel error function that captures the characteristics of multilabel learning: the labels belonging to an instance should be ranked higher than those not belonging to that instance. Applications to two real-world multilabel learning problems, functional genomics and text categorization, show that the performance of BP-MLL is superior to that of some well-established multilabel learning algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
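The novel error function of item 6 rewards ranking an instance's own labels above the labels it lacks. Below is a minimal numpy rendering of that pairwise exponential penalty for a single instance, following the published description of BP-MLL; the network, thresholding, and training loop are omitted, and the variable names are ours.

```python
import numpy as np

# Pairwise ranking error for one instance: outputs on labels the
# instance HAS should exceed outputs on labels it LACKS, penalised
# exponentially and normalised by the number of (has, lacks) pairs.

def bpmll_loss(outputs, label_set):
    # outputs: real-valued network output per label;
    # label_set: indices of the labels belonging to this instance.
    pos = np.asarray(sorted(label_set))
    neg = np.asarray([j for j in range(len(outputs)) if j not in label_set])
    diffs = outputs[pos][:, None] - outputs[neg][None, :]   # c_k - c_l pairs
    return np.exp(-diffs).sum() / (len(pos) * len(neg))

print(bpmll_loss(np.array([2.0, -1.0, 0.5]), {0, 2}))  # small: good ranking
```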
7. Exploiting application-level similarity to improve SSD cache performance in Hadoop.
- Author
-
Chen, Zhijian, Luo, Wenhai, Wu, Dan, Huang, Xiang, He, Jian, Zheng, Yuanhuan, and Wu, Di
- Subjects
CACHE memory, COMPUTER storage devices, PERFORMANCE evaluation, COMPUTER software, ELECTRONIC data processing, COMPUTER algorithms
- Abstract
To boost the performance of massive data processing, solid-state drives (SSDs) have been used as a cache in the Hadoop system. However, most existing SSD cache management algorithms are ignorant of the characteristics of upper-level applications. In this paper, we propose a novel SSD cache management algorithm called DSA, which exploits application-level data similarity to improve SSD cache performance in Hadoop. Our algorithm takes both temporal similarity and user similarity in querying behaviors into account. We evaluate the effectiveness of the proposed DSA algorithm in a small-scale Hadoop cluster. The experimental results show that our algorithm achieves much better performance than other well-known algorithms (e.g., LRU, FIFO). We also point out the underlying tradeoff between cache performance and SSD deployment cost, and identify a number of key factors that affect SSD cache performance. Our findings provide useful guidelines on how to effectively integrate SSDs into Hadoop. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
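Item 7's DSA combines temporal similarity and user similarity when deciding what to keep in the SSD cache. The exact scoring function is not given in the abstract, so the sketch below is a hedged guess at the general shape: blocks are scored by recency plus the number of distinct users that touched them, and the lowest-scoring block is evicted. The class name, weights, and scales are illustrative assumptions, not DSA itself.

```python
class SimilarityCache:
    """Toy similarity-aware cache: evict the block with the lowest
    combined recency + user-popularity score."""

    def __init__(self, capacity, w_time=0.5, w_users=0.5):
        self.capacity, self.w_time, self.w_users = capacity, w_time, w_users
        self.blocks = {}   # block_id -> (last_access_tick, set_of_users)
        self.tick = 0

    def access(self, block_id, user_id):
        self.tick += 1
        last, users = self.blocks.get(block_id, (0, set()))
        users.add(user_id)
        self.blocks[block_id] = (self.tick, users)
        if len(self.blocks) > self.capacity:
            self.evict()

    def evict(self):
        def score(item):
            _, (last, users) = item
            # Recency normalised to [0, 1]; user count left raw. Both
            # weights are invented for illustration.
            return self.w_time * (last / self.tick) + self.w_users * len(users)
        victim = min(self.blocks.items(), key=score)[0]
        del self.blocks[victim]

cache = SimilarityCache(capacity=2)
for block, user in [("a", 1), ("b", 1), ("a", 2), ("c", 3)]:
    cache.access(block, user)
print(sorted(cache.blocks))   # "b" evicted: least recent, fewest users
```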
8. High Level Programming for Distributed Computing.
- Author
-
Feldman, Jerome A.
- Subjects
COMPUTER programming, ELECTRONIC data processing, PROGRAMMING languages, COMPUTER software, DATA structures, COMPUTER algorithms
- Abstract
Programming for distributed and other loosely coupled systems is a problem of growing interest. This paper describes an approach to distributed computing at the level of general purpose programming languages. Based on primitive notions of module, message, and transaction key, the methodology is shown to be independent of particular languages and machines. It appears to be useful for programming a wide range of tasks. This is part of an ambitious program of development in advanced programming languages, and relations with other aspects of the project are also discussed. [ABSTRACT FROM AUTHOR]
- Published
- 1979
- Full Text
- View/download PDF
9. CONVERT: A High Level Translation Definition Language for Data Conversion.
- Author
-
Shu, Nan C., Housel, Barron C., and Lum, Vincent Y.
- Subjects
PROGRAMMING languages, DOCUMENT markup languages, DATA structures, COMPUTER programming, COMPUTER algorithms, ELECTRONIC data processing, COMPUTER software, MATHEMATICAL analysis
- Abstract
This paper describes a high level and nonprocedural translation definition language, CONVERT, which provides very powerful and highly flexible data restructuring capabilities. Its design is based on the simple underlying concept of a form which enables the users to visualize the translation processes, and thus makes data translation a much simpler task. "CONVERT" has been chosen for conveying the purpose of the language and should not be confused with any other language or program bearing the same name. [ABSTRACT FROM AUTHOR]
- Published
- 1975
- Full Text
- View/download PDF
10. Component-Based Java Legacy Code Refactoring.
- Author
-
Arboleda, Hugo, Paz, Andrés, and Royer, Jean-Claude
- Subjects
COMPUTER programming, ELECTRONIC data processing, COMPUTER algorithms, JAVA programming language, OBJECT-oriented programming languages, COMPUTER software
- Published
- 2013
- Full Text
- View/download PDF
11. Uncertainty Evaluation of Measurement Data Processing Algorithm Based on Its Matrix Form.
- Author
-
Konopka, K. and Topór-Kaminski, T.
- Subjects
ELECTRONIC data processing, COMPUTER algorithms, ALGORITHMS, COMPUTER software, MATRIX analytic methods
- Abstract
Data processing algorithms are important parts of modern measurement systems. These algorithms are often delivered to the user as complex programs whose numerical structure is not known; consequently, the influence of the algorithm on the accuracy of the processed data is also unknown. One method of evaluating uncertainty propagation through an algorithm is based on the algorithm's matrix form. The coefficient matrix of an algorithm represents its numerical operations and is the basis for evaluating the algorithm's accuracy. The paper presents a method for identifying this coefficient matrix when the algebraic form of the algorithm is known or is difficult to use. The identified matrix form of the algorithm is then used to estimate uncertainty propagation through example algorithms, and the results are compared with experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
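The coefficient-matrix approach in item 11 reduces, for a linear(ised) algorithm y = A x, to standard covariance propagation: Cov_y = A Cov_x A^T. A minimal sketch follows; the example matrix (a two-point averager) and the input variances are ours, not the paper's.

```python
import numpy as np

# Linear uncertainty propagation through an algorithm in matrix form:
# if the algorithm acts as y = A @ x, then Cov_y = A @ Cov_x @ A.T and
# the standard uncertainties are the square roots of its diagonal.

def propagate(A, cov_x):
    cov_y = A @ cov_x @ A.T
    return cov_y, np.sqrt(np.diag(cov_y))

A = np.array([[0.5, 0.5],      # illustrative two-point moving average
              [0.0, 1.0]])
cov_x = np.diag([0.01, 0.04])  # independent input variances (assumed)
cov_y, u_y = propagate(A, cov_x)
print(u_y)                     # standard uncertainty of each output
```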
12. Efficient Fast 1-D 8 x 8 Inverse Integer Transform for VC-1 Application.
- Author
-
Chih-Peng Fan and Guo-An Su
- Subjects
COMPUTATIONAL complexity, COMPUTER software, VIDEO compression, ELECTRONIC data processing, COMPUTER system conversion, COMPUTER algorithms, SYSTEMS design
- Abstract
In this paper, the one-dimensional (1-D) fast 8 x 8 inverse integer transform algorithm for Windows Media Video 9 (WMV-9/VC-1) is proposed. Based on the symmetric property of the integer transform matrix and the matrix operations, which denote the row/column permutations and the matrix decompositions, the efficient fast 1-D 8 x 8 inverse integer transform is developed. Therefore, the computational complexities of the proposed fast inverse transform are smaller than those of the direct method and the previous fast method. With low complexity, the proposed fast algorithm is suitable to accelerate the video coding computations. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
13. Distributional Features for Text Categorization.
- Author
-
Xiao-Bing Xue and Zhi-Hua Zhou
- Subjects
ELECTRONIC data processing, MATHEMATICAL analysis, COMPUTER algorithms, COMPUTER software, SYSTEMS development, PROGRAMMING languages
- Abstract
Text categorization is the task of assigning predefined categories to natural language text. With the widely used "bag-of-words" representation, previous research usually assigns a word values that express whether the word appears in the document concerned or how frequently it appears. Although these values are useful for text categorization, they do not fully express the abundant information contained in the document. This paper explores the effect of other types of values, which express the distribution of a word in the document. These novel values assigned to a word are called distributional features; they include the compactness of the appearances of the word and the position of the first appearance of the word. The proposed distributional features are exploited by a tf-idf-style equation, and different features are combined using ensemble learning techniques. Experiments show that the distributional features are useful for text categorization. In contrast to using the traditional term-frequency values alone, including the distributional features requires only a little additional cost, while the categorization performance can be significantly improved. Further analysis shows that the distributional features are especially useful when documents are long and the writing style is casual. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
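The two distributional features named in item 13 are easy to compute from a word's positions in a document. The definitions below follow the abstract's plain-language description (compactness of appearances, position of first appearance); the paper's exact formulas may differ, and the normalisations chosen here are our assumptions.

```python
import numpy as np

# Compute two distributional features for one word in one document:
# where the word first appears, and how spread out its occurrences are.
# Both are normalised by document length (our choice of normalisation).

def distributional_features(doc_tokens, word):
    n = len(doc_tokens)
    pos = [i for i, t in enumerate(doc_tokens) if t == word]
    if not pos:
        return None
    first_appearance = pos[0] / n                           # earlier -> smaller
    compactness = np.std(pos) / n if len(pos) > 1 else 0.0  # spread of hits
    return first_appearance, compactness

doc = "the cat sat on the mat and the cat slept".split()
print(distributional_features(doc, "cat"))
```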
14. On Weight Design of Maximum Weighted Likelihood and an Extended EM Algorithm.
- Author
-
Zhenyue Zhang and Yiu-ming Cheung
- Subjects
COMPUTER algorithms, SYSTEMS design, MAXIMAL functions, COMPUTER software, COMPUTER programming, MATHEMATICAL optimization, ELECTRONIC data processing, COMPUTER science, OPERATIONS research
- Abstract
The recent Maximum Weighted Likelihood (MWL) [18], [19] has provided a general learning paradigm for density-mixture model selection and learning, in which weight design is a key issue. This paper therefore explores such a design, through which a heuristic extended Expectation-Maximization (X-EM) algorithm is presented. Unlike the EM algorithm [1], the X-EM algorithm is able to perform model selection by fading the redundant components out from a density mixture while estimating the model parameters appropriately. Numerical simulations demonstrate the efficacy of our algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
15. A Comparison of the Axiomatic and Functional Models of Structured Programming.
- Author
-
Basili, Victor R. and Noonan, Robert E.
- Subjects
COMPUTER programming, ELECTRONIC data processing, COMPUTER algorithms, COMPUTER software, INFORMATION theory
- Abstract
This paper discusses axiomatic and functional models of the semantics of structured programming. The models are presented together with their respective methodologies for proving program correctness and for developing correct programs. Examples using these methodologies are given. Finally, the models are compared and contrasted. [ABSTRACT FROM AUTHOR]
- Published
- 1980
16. Consistency maintenance of compound operations in real-time collaborative environments.
- Author
-
Gao, Liping, Yu, Fangyu, Gao, Lily, Xiong, Naixue, and Yang, Guisong
- Subjects
COMPUTER algorithms, COMPUTER programming, COMPUTER storage devices, ELECTRONIC data processing, COMPUTER software
- Abstract
In real-time collaborative environments, an address space transformation strategy can be used to achieve consistency maintenance of shared documents. However, when compound operations are executed, they are first decomposed into primitive operations, and the relationships between the referencing objects and referenced objects are lost during decomposition. Besides, Undo operations in this environment target compound operations, not the decomposed primitive ones; since traditional algorithms take the primitive operation as the manipulation unit, this leads to semantic inconsistencies for compound Undo operations. This paper appends two history buffers to maintain the relationships between the original operations and the decomposed ones and introduces a “Retrace-Undo-VT-Redo-Retrace” strategy to realize the consistency maintenance of compound operations. The paper also introduces the version-decomposition strategy, describes the main algorithms of the compound Undo operations, and analyses the validity of the strategy. A case analysis shows the effectiveness of the strategy. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
17. Breeding Terrains with Genetic Terrain Programming: The Evolution of Terrain Generators.
- Author
-
Frade, Miguel, de Vega, F. Fernandez, and Cotta, Carlos
- Subjects
COMPUTER programming, VIDEO games, ELECTRONIC games, COMPUTER software, APPLICATION software, ELECTRONIC data processing, COMPUTER algorithms
- Abstract
Although a number of terrain generation techniques have been proposed during the last few years, all of them have some key constraints. Modelling techniques depend highly upon designers' skills, time, and effort to obtain acceptable results, and cannot be used to generate terrains automatically. The simpler methods allow only a narrow variety of terrain types and offer little control over the resulting terrain. The Genetic Terrain Programming technique, based on evolutionary design with Genetic Programming, allows designers to evolve terrains according to their aesthetic feelings or desired features. This technique evolves Terrain Programmes (TPs) that are capable of generating a family of terrains: different terrains that consistently present the same morphological characteristics. This paper presents a study of the persistence of morphological characteristics of terrains generated at different resolutions by a given TP. Results show that it is possible to use low resolutions during the evolutionary phase without compromising the outcome, and that terrain macrofeatures are scale invariant. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
18. Complete Formal Specification of the OpenMP Memory Model.
- Author
-
Bronevetsky, Greg and Supinski, Bronis
- Subjects
PARALLEL programming, COMPUTER programming, PARALLEL processing, PARALLEL logic programming, COMPUTER software, COMPUTER algorithms, SYSTEMS development, ELECTRONIC data processing
- Abstract
OpenMP [OpenMP Architecture Review Board. OpenMP application program interface, version 2.5] is an important API for shared memory programming, combining shared memory’s potential for performance with a simple programming interface. Unfortunately, OpenMP lacks a critical tool for demonstrating whether programs are correct: a formal memory model. Instead, the current official definition of the OpenMP memory model (the OpenMP 2.5 specification [OpenMP Architecture Review Board. OpenMP application program interface, version 2.5]) is in terms of informal prose. As a result, it is impossible to verify OpenMP applications formally since the prose does not provide a formal consistency model that precisely describes how reads and writes on different threads interact. We expand on our previous work that focused on the formal verification of OpenMP programs through a formal memory model [Greg Bronevetsky and Bronis de Supinski. Formal specification of the memory model. In International Workshop on OpenMP (IWOMP), (2006)]. As in that work, our formalization, which is derived from the existing prose model [OpenMP Architecture Review Board. OpenMP application program interface, version 2.5], provides a two-step process to verify whether an observed OpenMP execution is conformant. This paper extends the model to cover the entire specification. In addition to this formalization, our contributions include a discussion of ambiguities in the current prose-based memory model description. Although our formal model may not capture the current informal memory model perfectly, in part due to these ambiguities, our model reflects our understanding of the informal model’s intent. We conclude with several examples that may indicate areas of the OpenMP memory model that need further refinement, however it is specified. Our goal is to motivate the OpenMP community to adopt those refinements eventually, ideally through a formal model, in later OpenMP specifications. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
19. Supporting Nested OpenMP Parallelism in the TAU Performance System.
- Author
-
Morris, Alan, Malony, Allen, and Shende, Sameer
- Subjects
PARALLEL programming, COMPUTER programming, PARALLEL processing, PARALLEL logic programming, COMPUTER software, COMPUTER algorithms, SYSTEMS development, ELECTRONIC data processing
- Abstract
Nested OpenMP parallelism allows an application to spawn teams of nested threads. This hierarchical nature of thread creation and usage poses problems for performance measurement tools that must determine thread context to properly maintain per-thread performance data. In this paper we describe the problem and a novel solution for identifying threads uniquely. Our approach has been implemented in the TAU performance system and has been successfully used in profiling and tracing OpenMP applications with nested parallelism. We also describe how extensions to the OpenMP standard can help tool developers uniquely identify threads. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
20. Loop Shifting for Loop Compaction.
- Author
-
Darte, Alain and Huard, Guillaume
- Subjects
COMPUTER software, DATA pipelining, LOOP tiling (Computer science), PARALLEL programming, COMPUTER programming, COMPUTER algorithms, ELECTRONIC data processing
- Abstract
The idea of decomposed software pipelining is to decouple the software pipelining problem into a cyclic scheduling problem without resource constraints and an acyclic scheduling problem with resource constraints. In terms of loop transformation and code motion, the technique can be formulated as a combination of loop shifting and loop compaction. Loop shifting amounts to moving statements between iterations, thereby changing some loop independent dependences into loop carried dependences and vice versa. Then, loop compaction schedules the body of the loop considering only loop independent dependences, but taking into account the details of the target architecture. In this paper, we show how loop shifting can be optimized so as to minimize both the length of the critical path and the number of dependences for loop compaction. The first problem is well-known and can be solved by an algorithm due to Leiserson and Saxe. We show that the second optimization (and the combination with the first one) is also polynomially solvable with a fast graph algorithm, a variant of minimum-cost flow algorithms. Finally, we analyze the improvements obtained on loop compaction by experiments on random graphs. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
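Item 20's loop shifting is easiest to see on a two-statement loop. In the sketch below, plain Python stands in for the compiler transformation: shifting S1 one iteration ahead removes the intra-iteration dependence, leaving a body whose statements loop compaction could schedule independently. The example loop is ours.

```python
# Loop shifting turns the loop-independent dependence (S2 reads a[i]
# written by S1 in the same iteration) into a loop-carried one.

def original(c, n):
    a, b = [0] * n, [0] * n
    for i in range(1, n):
        a[i] = 2 * c[i]          # S1
        b[i] = a[i] + a[i - 1]   # S2 depends on S1 within the iteration
    return b

def shifted(c, n):
    a, b = [0] * n, [0] * n
    a[1] = 2 * c[1]              # prologue: S1 peeled one iteration ahead
    for i in range(1, n - 1):
        b[i] = a[i] + a[i - 1]   # S2 uses values produced in earlier
        a[i + 1] = 2 * c[i + 1]  # iterations: no intra-iteration dependence
    b[n - 1] = a[n - 1] + a[n - 2]   # epilogue
    return b

c = list(range(10))
assert original(c, 10) == shifted(c, 10)
```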
21. A Vectorizing Compiler for Multimedia Extensions.
- Author
-
Sreraman, N. and Govindarajan, R.
- Subjects
COMPILERS (Computer programs), COMPUTER software, SYSTEMS software, PARALLEL programming, COMPUTER programming, COMPUTER algorithms, ELECTRONIC data processing
- Abstract
In this paper, we present an implementation of a vectorizing C compiler for Intel's MMX (Multimedia Extension). This compiler identifies data parallel sections of the code using scalar and array dependence analysis. To enhance the scope for application of the subword semantics, our compiler performs several code transformations. These include strip mining, scalar expansion, grouping and reduction, and distribution. Thereafter inline assembly instructions corresponding to the data parallel sections are generated. We have used the Stanford University Intermediate Format (SUIF), a public domain compiler tool, for our implementation. We evaluated the performance of the code generated by our compiler for a number of benchmarks. Initial performance results reveal that our compiler generated code produces a reasonable performance improvement (speedup of 2 to 6.5) over the code generated without the vectorizing transformations/inline assembly. In certain cases, the performance of the compiler generated code is within 85% of the hand-tuned code for the MMX architecture. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
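Strip mining, one of the transformations item 21 lists, splits a loop into fixed-width strips matching the subword width, so that each strip can later be emitted as one MMX-style instruction. A sketch in plain Python (the width of 4 and the saxpy kernel are illustrative choices, not from the paper):

```python
def saxpy_scalar(a, x, y):
    for i in range(len(x)):
        y[i] += a * x[i]

def saxpy_strip_mined(a, x, y, strip=4):
    n = len(x)
    for ii in range(0, n - n % strip, strip):
        # Inner strip: the code generator would emit one subword-parallel
        # instruction covering these `strip` iterations.
        for i in range(ii, ii + strip):
            y[i] += a * x[i]
    for i in range(n - n % strip, n):   # scalar epilogue for the remainder
        y[i] += a * x[i]

x, y1, y2 = list(range(10)), [1.0] * 10, [1.0] * 10
saxpy_scalar(2.0, x, y1)
saxpy_strip_mined(2.0, x, y2)
assert y1 == y2
```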
22. An Intelligent Tutoring System for the Dijkstra-Gries Methodology.
- Author
-
Ng, Frank, Butler, Gregory, and Kay, Judy
- Subjects
COMPUTER programming, COMPUTER algorithms, ELECTRONIC data processing, COMPUTER software, SOFTWARE engineering
- Abstract
This paper describes the design and implementation of an intelligent tutoring system for the Dijkstra-Gries programming methodology as defined by Gries in "The Science of Programming"[12]. The first part of the paper identifies the requirements of intelligent tutoring systems in general and those of the methodology in particular. It shows the suitability of the Smalltalk environment for developing expandable intelligent systems and the compatibility of Smalltalk's object-oriented paradigm with the Gries methodology's goal/plan approach to programming. We then describe how these requirements are met: an overview of the system's support of the methodology and the modules that enable the system to respond intelligently. As an example, a reusable tutorial part is presented, first from a student's perspective, then from an author's perspective. Finally the results of an evaluation of the system drawn from actual student experience are presented. [ABSTRACT FROM AUTHOR]
- Published
- 1995
23. On Satisfying Timing Constraints in Hard-Real-Time Systems.
- Author
-
Xu, Jia and Parnas, David Lorge
- Subjects
COMPUTER systems, COMPUTER software, SOFTWARE engineering, EMBEDDED computer systems, COMPUTER algorithms, ELECTRONIC data processing, COMPUTER science
- Abstract
We explain why pre-run-time scheduling is essential if one wishes to guarantee that timing constraints will be satisfied in a large complex hard-real-time system. We examine some of the major concerns in pre-run-time scheduling and consider what formulations of mathematical scheduling problems can be used to address those concerns. This paper provides a guide to the available algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 1993
24. Early Experience with the Visual Programmer's WorkBench.
- Author
-
Rubin, Robert V., Walker II, James, and Golin, Eric J.
- Subjects
VISUAL programming (Computer science), VISUAL programming languages (Computer science), SOFTWARE engineering, COMPUTER programming, CONCURRENT engineering, ENGINEERING, COMPUTER algorithms, ELECTRONIC data processing, COMPUTER software
- Abstract
Diagrams play a central role in software engineering. They are used for specifying design elements such as requirements, concurrent systems, database models and interactive systems. Families of diagrams form visual languages, and creating such diagrams constitutes visual programming. The Visual Programmer's WorkBench (VPW) addresses the rapid synthesis of programming environments for the specification, analysis, and execution of visual programs. A language-based environment for a specific visual language is generated in VPW from a specification of the syntactic structure, the abstract structure, the static semantics and the dynamic semantics of the language. VPW is built around a model of distributed processing based on shared distributed memory. This framework is used both in defining the architecture of the environment and for the execution model of visual languages. The Visual Programmer's WorkBench has been used to experiment with visual programming environments for several visual languages. This paper describes the design of the Visual Programmer's WorkBench and our experience using it to generate a distributed programming environment for a concurrent visual language. [ABSTRACT FROM AUTHOR]
- Published
- 1990
- Full Text
- View/download PDF
25. Programming with Verification Conditions.
- Author
-
van Emden, M.H.
- Subjects
COMPUTER programming, COMPUTER algorithms, ELECTRONIC data processing, SOFTWARE engineering, COMPUTER software, COMPUTER systems
- Abstract
This paper contains an exposition of the method of programming with verification conditions. Although this method has much in common with the one discussed by Dijkstra in A Discipline of Programming, it is shown to have an advantage in simplicity and flexibility. The simplicity is the result of the method's being directly based on Floyd's inductive assertions. The method is flexible because of the way in which the program is constructed in two stages. In the first stage, a set of verification conditions is collected which corresponds to a program in "flowgraph" form. In this stage sequencing control is of no concern to the programmer. Control is introduced in the second stage, which consists of automatable applications of translation and optimization rules, resulting in conventional code. Although our method has no use for the sequencing primitives of "structured programming," it is highly secure and systematic. [ABSTRACT FROM AUTHOR]
- Published
- 1979
26. An Approach to Formal Definitions and Proofs of Programming Principles.
- Author
-
Misra, Jayadev
- Subjects
COMPUTER programming, COMPUTER algorithms, ELECTRONIC data processing, MATHEMATICAL analysis, COMPUTER science, COMPUTER software, SOFTWARE engineering, ENGINEERING
- Abstract
A method for formal description of programming principles is presented in this paper. Programming principles, such as sequential search, can be defined and proven even in the absence of an application. We represent a principle as a program scheme containing partially interpreted functions. The functions must obey certain input constraints. Use of these ideas in program proving is illustrated with examples. [ABSTRACT FROM AUTHOR]
- Published
- 1978
27. acm forum.
- Author
-
Ashenhurst, Robert L.
- Subjects
LETTERS to the editor, COMPUTER software, COMPUTER programming, ELECTRONIC data processing, PERIODICALS, COMPUTER algorithms
- Abstract
Presents several letters to the editor referencing articles and topics published in the September 1988 issue of the journal "Communications of the ACM": "Program Verification: The Very Idea," which analyzed the nature of program verification in computers; "A Comparison of Techniques for the Specification of External System Behavior," which discussed real-time structured analysis; the feasibility of real-time structured programs; and the author's reply to the criticism.
- Published
- 1989
- Full Text
- View/download PDF
28. A Generalized Control Structure and Its Formal Definition.
- Author
-
Parnas, David Lorge
- Subjects
PROGRAMMING languages, ELECTRONIC data processing, COMPUTER programming, MATHEMATICAL analysis, COMPUTER algorithms, COMPUTERS, COMPUTER systems, ALGEBRA, COMPUTER software
- Abstract
A new programming language control structure as well as an improved approach to a formal definition of programming languages are presented. The control structure can replace both iteration and conditional structures. Because it is a semantic generalization of those structures, a single statement using the new control structure can implement the functions of loops, conditionals, and also programs that would require several conventional constructs. As a consequence of this increased capability, it is possible to write algorithms that are simpler, more efficient, and more clearly correct than those that can be written with earlier structured-programming control structures. In order to provide a precise definition of the new constructs, a new version of relational semantics, called LD-relations, is presented. An algebra of these relations is developed and used to define the meaning of the new constructs. A short discussion of program development and the history of control structures is included. [ABSTRACT FROM AUTHOR]
- Published
- 1983
- Full Text
- View/download PDF
29. Analysis of an Algorithm for Real Time Garbage Collection.
- Author
-
Wadler, Philip L.
- Subjects
GARBAGE collection (Computer science), COMPUTER memory management, COMPUTER algorithms, ELECTRONIC data processing, COMPUTER software, COMPUTERS
- Abstract
A real-time garbage collection system avoids suspending the operations of a list processor for the long times that garbage collection normally requires by performing garbage collection on a second processor in parallel with list processing operations, or on a single processor time-shared with them. Algorithms for recovering discarded list structures in this manner are presented and analyzed to determine sufficient conditions under which the list processor never needs to wait on the collector. These techniques are shown to require at most twice as much processing power as regular garbage collectors, if they are used efficiently. The average behavior of the program is shown to be very nearly equal to the worst-case performance, so that the sufficient conditions are also suitable for measuring the typical behavior of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 1976
- Full Text
- View/download PDF
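To give the flavor of the sufficient conditions item 29 analyzes, here is a standard back-of-the-envelope bound for an incremental collector. This is a generic derivation under our own assumptions (a heap of H cells with live fraction ρ, and a collector that traces k cells per cell the list processor allocates), not Wadler's exact analysis:

```latex
% Generic bound, our assumptions, not the paper's condition: the cells
% allocated during one full trace must fit in the initially free space.
\frac{\rho H}{k} \;\le\; (1-\rho)H
\qquad\Longrightarrow\qquad
k \;\ge\; \frac{\rho}{1-\rho}
```

For a half-live heap (ρ = 1/2) this gives k ≥ 1, i.e., one cell traced per cell allocated and thus roughly doubled processing power, consistent in spirit with the abstract's "at most twice as much" figure.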
30. New Upper Bounds for Selection.
- Author
-
Yap, Chee K.
- Subjects
COMPUTER algorithms, COMPUTATIONAL complexity, ELECTRONIC data processing, MACHINE theory, COMPUTER software, COMPUTER systems
- Abstract
The worst-case, minimum-number-of-comparisons complexity V_i(n) of the i-th selection problem is considered. A new upper bound for V_i(n) improves the bound given by the standard Hadian-Sobel algorithm by a generalization of the Kirkpatrick-Hadian-Sobel algorithm, and extends Kirkpatrick's method to a much wider range of application. This generalization compares favorably with a recent algorithm by Hyafil. [ABSTRACT FROM AUTHOR]
- Published
- 1976
- Full Text
- View/download PDF
31. Unlocking Ordered Parallelism with the Swarm Architecture.
- Author
-
Jeffrey, Mark C., Subramanian, Suvinay, Yan, Cong, Emer, Joel, and Sanchez, Daniel
- Subjects
PARALLEL processing, COMPUTER algorithms, COMPUTER software, COMPUTER programming, ELECTRONIC data processing
- Abstract
The authors present Swarm, a parallel architecture that exploits ordered parallelism, which is abundant but hard to mine with current software and hardware techniques. Swarm programs consist of short tasks, as small as tens of instructions each, with programmer-specified order constraints. Swarm executes tasks speculatively and out of order and efficiently speculates thousands of tasks ahead of the earliest active task to uncover enough parallelism. Several techniques allow Swarm to scale to large core counts and speculation windows. The authors evaluate Swarm on graph analytics, simulation, and database benchmarks. At 64 cores, Swarm outperforms sequential implementations of these algorithms by 43 to 117 times and state-of-the-art software-only parallel algorithms by 3 to 18 times. Besides achieving near-linear scalability, Swarm programs are almost as simple as their sequential counterparts, because they do not use explicit synchronization. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
32. A general technique for proving lock-freedom
- Author
-
Colvin, Robert and Dongol, Brijesh
- Subjects
COMPUTER programming, ELECTRONIC data processing, COMPUTER architecture, COMPUTER algorithms, COMPUTER software
- Abstract
Lock-freedom is a property of concurrent programs which states that, from any state of the program, eventually some process will complete its operation. Lock-freedom is a weaker property than the usual expectation that eventually all processes will complete their operations. By weakening their completion guarantees, lock-free programs increase the potential for parallelism, and hence make more efficient use of multiprocessor architectures than lock-based algorithms. However, lock-free algorithms, and reasoning about them, are considerably more complex. In this paper we present a technique for proving that a program is lock-free. The technique is designed to be as general as possible and is guided by heuristics that simplify the proofs. We demonstrate our theory by proving lock-freedom of two non-trivial examples from the literature. The proofs have been machine-checked by the PVS theorem prover, and we have developed proof strategies to minimise user interaction. [Copyright Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
33. Reconciling statechart semantics
- Author
-
Eshuis, Rik
- Subjects
COMPUTER programming, ELECTRONIC data processing, COMPUTER algorithms, COMPUTER software
- Abstract
Statecharts are a visual technique for modelling reactive behaviour. Over the years, a plethora of statechart semantics have been proposed. The three most widely used are the fixpoint, Statemate, and UML semantics. These three semantics differ considerably from each other. In general, they interpret the same statechart differently, which impedes the communication of statechart designs among both designers and tools. In this paper, we identify a set of constraints on statecharts that ensure that the fixpoint, Statemate and UML semantics coincide, if observations are restricted to linear, stuttering-closed, separable properties. Moreover, we show that for a subset of these constraints, a slight variation of the Statemate semantics coincides for linear stuttering-closed properties with the UML semantics. [Copyright Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
34. Comparisons Between Data Clustering Algorithms.
- Author
-
Abbas, Osama Abu
- Subjects
ALGORITHMS, COMPUTER programming, SELF-organizing systems, ELECTRONIC data processing, MATHEMATICAL analysis, COMPUTER algorithms, COMPUTER software
- Abstract
Clustering is a division of data into groups of similar objects. Each group, called a cluster, consists of objects that are similar to one another and dissimilar to objects of other groups. This paper studies and compares different data clustering algorithms. The algorithms under investigation are: the k-means algorithm, the hierarchical clustering algorithm, the self-organizing maps algorithm, and the expectation maximization clustering algorithm. All these algorithms are compared according to the following factors: size of dataset, number of clusters, type of dataset, and type of software used. The conclusions drawn concern the performance, quality, and accuracy of the clustering algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2008
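Of the four algorithms item 34 compares, k-means is the simplest to show. A minimal Lloyd's-iteration implementation follows; the random-pick initialisation and the empty-cluster rule are our choices, and a serious comparison would use a tuned library implementation.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: nearest centroid for every point.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: move each centroid to the mean of its points
        # (keeping the old centroid if a cluster went empty).
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
labels, centroids = kmeans(X, 2)
print(centroids)   # two well-separated cluster centres
```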
35. Executable JVM model for analytical reasoning: A study
- Author
-
Liu, Hanbing and Moore, J. Strother
- Subjects
COMPUTER software, COMPUTER programming, COMPUTER algorithms, ELECTRONIC data processing
- Abstract
To study the properties of the Java Virtual Machine (JVM) and Java programs, our research group has produced a series of JVM models written in a functional subset of Common Lisp. In this paper, we present our most complete JVM model from this series, namely, M6, which is derived from a careful study of the J2ME KVM [Connected Limited Device Configuration (CLDC) and the K Virtual Machine, http://java.sun.com/products/cldc/] implementation. On the one hand, our JVM model is a conventional machine emulator. M6 implements dynamic class loading, class initialization and synchronization via monitors. It executes most J2ME CLDC Java programs that do not use any I/O or floating point operations. Engineers may consider M6 an implementation of the JVM. The June 2003 version is implemented with around 10K lines of Lisp in 28 modules. On the other hand, M6 is novel because it allows for analytical reasoning in addition to conventional testing. M6 is written in an applicative (side-effect free) subset of Common Lisp, for which we have given precise meaning in terms of axioms and inference rules. Properties of M6 and its bytecoded programs can be expressed as formulas and proved as theorems. Proofs are constructed interactively with a mechanical theorem prover. Its concreteness, completeness, executability and mechanized reasoning support make our model unique among JVM models. We argue that our approach of building an executable model of the system with an axiomatically described functional language can bring benefits from both the testing and the formal reasoning worlds. [Copyright Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
36. State Constraints and Pathwise Decomposition of Programs.
- Author
-
Huang, J. C.
- Subjects
COMPUTER programming, COMPUTER algorithms, MATHEMATICAL decomposition, COMPUTER software, ELECTRONIC data processing
- Abstract
A state constraint is a new programming construct designed to restrict the domain of definition of a program. It can be used to decompose a program pathwise, i.e., to divide the program into subprograms along the control flow, as opposed to dividing the program across the control flow as happens when the program is decomposed into functions and procedures. As a result, one can construct and manipulate a program consisting of one or more execution paths of another program. This paper describes the idea involved, examines the properties of state constraints, establishes a formal basis for pathwise decomposition, and discusses their utility in program simplification, testing, and verification. [ABSTRACT FROM AUTHOR]
- Published
- 1990
- Full Text
- View/download PDF
37. A Distributed Scheme for Detecting Communication Deadlocks.
- Author
-
Natarajan, N.
- Subjects
DISTRIBUTED computing, COMPUTER algorithms, COMPUTER networks, ELECTRONIC data processing, COMPUTER systems, COMPUTER software, SOFTWARE engineering
- Abstract
A distributed system is an interconnected network of computing elements or nodes, each of which has its own storage. A distributed program is a collection of processes which execute asynchronously, possibly in different nodes of a distributed system, and they communicate with each other in order to realize a common goal. In such an environment, a group of processes may sometimes get involved in a communication deadlock. This is a situation in which each member process of the group is waiting for some member to communicate with it, but no member is attempting communication with it. In this paper, we present an algorithm for detecting such communication deadlocks. The algorithm is distributed, i.e., processes detect deadlocks during the course of their communication, without the aid of a central controller. The detection scheme does not presume any a priori structure among processes, and detection is made "on the fly" without freezing normal activities. The scheme does not require any storage whose size is determined by the size of the network, and hence is suitable also for an environment where processes are created dynamically. [ABSTRACT FROM AUTHOR]
- Published
- 1986
38. Formal Checklists for Remote Agent Dependability.
- Author
-
Denker, Grit and Talcott, Carolyn L.
- Subjects
COMPUTER software, COMPUTER programming, ELECTRONIC data processing, COMPUTER algorithms
- Abstract
Remote agents used in Deep Space Missions such as rovers or solar airplanes must function autonomously over a prolonged time during planetary exploration. The Mission Data System (MDS) framework has been developed to address design and deployment of these complex systems. We are using the Maude environment to develop a formal framework with methods and supporting tools for increasing the dependability of MDS space systems. This is done by developing formal executable specifications of the MDS framework and its mission-specific adaptations and providing a set of formal checklists (formal analysis suites) that can be used to achieve better predictability and dependability. In this paper we present our formal model of the MDS framework, an adaptation for a remote rover and preliminary checklists for remote agents. [Copyright Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
39. TECHNICAL CORRESPONDENCE.
- Author
-
Riehle, Richard, Winkler, Jürgen F. H., and Jameson, David
- Subjects
STRUCTURED programming, COMPUTER algorithms, COMPUTER software, ELECTRONIC data processing
- Abstract
The article presents the authors' views regarding the use of GO TO statements in source code. In 1968 Edsger Dijkstra published his letter "Go To Statement Considered Harmful" in Communications. Once GO TO was relegated to the status of a software obscenity, some programming managers began to dictate rules such as, "No GO TO statements are allowed in new program development." The goals behind eliminating the GO TO were to enhance source code clarity, reduce maintenance costs, and improve algorithmic reliability. Cobol, as originally designed, was not well suited to "structured programming" because it lacked the more important language constructs required to properly support that style of programming. Use of PERFORM instead of GO TO is suggested when writing source code. The GO TO statement is adequate for very small programs but "harmful" in larger, more complex programs; similarly, PERFORM is useful in small to medium programs but "harmful" in very large programs.
- Published
- 1992
40. Technical Correspondence.
- Author
-
Pleasant, James C., Paulson, Lawrence, Cohen, Avra, Gordon, Michael, Bevier, William R., Smith, Michael K., Young, William D., Clune, Thomas R., Savitzky, Stephen, and Fetzer, James H.
- Subjects
LETTERS to the editor, COMPUTER software, PERIODICALS, COMPUTER algorithms, ELECTRONIC data processing, COMPUTER programming
- Abstract
Presents several letters to the editor commenting on the article "Program Verification: The Very Idea," which was published in the September 1988 issue of the journal "Communications of the ACM": criticism of the applicability of 'program verification' as proposed by the author; a proposal concerning program verification and the performance of a computer program; and the author's response to the criticism.
- Published
- 1989
41. PROGRAM INDENTATION AND COMPREHENSIBILITY.
- Author
-
Miara, Richard J., Musselman, Joyce A., Navarro, Juan A., Shneiderman, Ben, and Ledgard, Henry
- Subjects
PROGRAMMING languages, C (Computer program language), COMPUTER software, COMPUTER programming, COMPUTER algorithms, ELECTRONIC data processing
- Abstract
The consensus in the programming community is that indentation aids program comprehension, although many studies do not back this up. We tested program comprehension on a Pascal program. Two styles of indentation, blocked and nonblocked, were used, in addition to four possible levels of indentation (0, 2, 4, and 6 spaces). Both experienced and novice subjects were used. Although the blocking style made no difference, the level of indentation had a significant effect on program comprehension (2-4 spaces had the highest mean score for program comprehension). We recommend that a moderate level of indentation be used to increase program comprehension and user satisfaction. [ABSTRACT FROM AUTHOR]
- Published
- 1983
- Full Text
- View/download PDF
42. programming pearls.
- Author
-
Denning, Peter J.
- Subjects
COMPUTER programming, COMPUTER algorithms, ELECTRONIC data processing, COMPUTER programmers, COMPUTER software, COMPUTER systems, COMPUTER files, COMPUTER science
- Abstract
The article announces the inauguration of Programming Pearls, a new department of Communications written by Jon Bentley and aimed at distinguishing great programs from merely ordinary ones. After an initial set of four monthly columns, it will appear every other month. The topics of the column will vary, ranging over conceptual simplicity, the importance of scientific principle, and the quality and efficiency of software. Readers are encouraged to submit their own programs for a chance to be included in Bentley's columns.
- Published
- 1983
43. Some Basic Determinants of Computer Programming Productivity.
- Author
-
Chrysler, Earl and Morgan, H.
- Subjects
COMPUTER programming, COMPUTER software development, COMPUTER programmers, ELECTRONIC data processing, COMPUTER software, COMPUTER algorithms
- Abstract
The purpose of this research was to examine the relationship between processing characteristics of programs, experience characteristics of programmers, and program development time. The ultimate objective was to develop a technique for predicting the amount of time necessary to create a computer program. The fifteen program characteristics hypothesized as being associated with an increase in programming time are objectively measurable from preprogramming specifications. The five programmer characteristics are experience-related and are also measurable before a programming task is begun. Nine program characteristics emerged as major influences on program development time, each associated with increased program development time. All five programmer characteristics were found to be related to reduced program development time. A multiple regression equation containing one programmer characteristic and four program characteristics gave evidence of good predictive power for forecasting program development time. [ABSTRACT FROM AUTHOR]
- Published
- 1978
- Full Text
- View/download PDF
44. A Very High Level Programming Language for Data Processing Applications.
- Author
-
Hammer, Michael, Howe, W. Gerry, Kruskal, Vincent J., and Wladawsky, Irving
- Subjects
PROGRAMMING languages, COMPUTERS in business, COMPUTER algorithms, BUSINESS forms, BUSINESS records, ELECTRONIC data processing, SUBLANGUAGE, FORMS management, COMPUTER software
- Abstract
Application development today is too labor-intensive. In recent years, very high-level languages have been increasingly explored as a solution to this problem. The Business Definition Language (BDL) is such a language, one aimed at business data processing problems. The concepts in BDL mimic those which have evolved through the years in businesses using manual methods. This results in three different sublanguages or components: one for defining the business forms, one for describing the business organization, and one for writing calculations. [ABSTRACT FROM AUTHOR]
- Published
- 1977
- Full Text
- View/download PDF
45. A Weighted Buddy Method for Dynamic Storage Allocation.
- Author
-
Shen, Kenneth K., Peterson, James L., and Weissman, C.
- Subjects
DYNAMIC storage allocation (Computer science), COMPUTER software, COMPUTER algorithms, COMPUTER programming, ELECTRONIC data processing
- Abstract
An extension of the buddy method, called the weighted buddy method, for dynamic storage allocation is presented. The weighted buddy method allows block sizes of 2^k and 3·2^k, whereas the original buddy method allowed only block sizes of 2^k. This extension is achieved at an additional cost of only two bits per block. Simulation results are presented which compare this method with the buddy method. These results indicate that, for a uniform request distribution, the buddy system has less total memory fragmentation than the weighted buddy algorithm. However, the total fragmentation is smaller for the weighted buddy method when the requests are for exponentially distributed block sizes. [ABSTRACT FROM AUTHOR]
- Published
- 1974
- Full Text
- View/download PDF
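The size classes in item 45 are exactly the numbers of the form 2^k and 3·2^k. A small sketch of rounding a request up to the nearest weighted-buddy class, which shows how the extra 3·2^k classes fill the gaps between plain powers of two; the free-list and block-splitting machinery of a real allocator is omitted.

```python
def weighted_buddy_size(request):
    # Candidate sizes in increasing order: 1, 2, 3, 4, 6, 8, 12, 16, 24, ...
    # i.e. for each k >= 1 the pair (2**k, 3 * 2**(k-1)).
    if request <= 1:
        return 1
    k = 1
    while True:
        if 2 ** k >= request:
            return 2 ** k
        if 3 * 2 ** (k - 1) >= request:
            return 3 * 2 ** (k - 1)
        k += 1

for r in (5, 9, 13, 100):
    print(r, "->", weighted_buddy_size(r))
# 5 -> 6, 9 -> 12, 13 -> 16, 100 -> 128
```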
46. FAST POINT CLOUD REGISTRATION ALGORITHM TO BUILD 3D MAPS FOR ROBOT NAVIGATION.
- Author
-
MAJDIK, Andras, TAMAS, Levente, and LAZEA, Gheorghe
- Subjects
MOBILE robots, ROBOT motion, COMPUTER algorithms, ARTIFICIAL intelligence, COMPUTER software, DIGITAL computer simulation, ELECTRONIC data processing
- Abstract
This paper presents a very fast algorithm to register point clouds obtained from stereo pair images. The algorithm is based on the SURF (Speeded Up Robust Features) interest point detector to reduce the amount of data, and it uses the ICP (Iterative Closest Point) algorithm to compute the translation and rotation of the motions of the mobile robot on which the stereo camera is mounted. [ABSTRACT FROM AUTHOR]
- Published
- 2009
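Item 46 pairs SURF keypoint matching with ICP. The inner step that ICP iterates, recovering the rigid motion that best aligns matched point pairs, has a standard closed form via the SVD (the Kabsch solution), sketched below. Feature detection, matching, and the outer re-matching loop are omitted; this is a generic rendering, not the paper's code.

```python
import numpy as np

def best_rigid_transform(P, Q):
    # P, Q: (n, 3) arrays of corresponding points; find R, t minimising
    # ||R @ p + t - q|| over all pairs (the Kabsch/SVD solution).
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Sanity check: recover a known rotation about z plus a translation.
P = np.random.rand(20, 3)
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, 2.0, 0.5])
R, t = best_rigid_transform(P, Q)
assert np.allclose(R, R_true) and np.allclose(t, [1.0, 2.0, 0.5])
```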
47. AN OPTICAL METHOD FOR THE ROBOT'S PERFORMANCES TESTING.
- Author
-
VACARESCU, Valeria, VACARESCU, Cella - Flavia, ARGESANU, Veronica, and DRAGHICI, Anca
- Subjects
ROBOTICS, ARTIFICIAL intelligence, OPTICAL measurements, COMPUTER software, DIGITAL computer simulation, ELECTRONIC data processing, COMPUTER algorithms
- Abstract
In the field of industrial robotics, there are many different methods to increase robot accuracy and to reduce the pose errors of robot systems. Generally, different calibration methods are used to increase robot accuracy. Optical measurement systems have the advantages of high resolution, a large workspace, and contactless measurement. This paper presents a system for measuring a robot's pose performance using two digital theodolites. The method and algorithm used conform to the ISO 9283:1998 recommendations and are validated by experimental results. [ABSTRACT FROM AUTHOR]
- Published
- 2009
48. log_nP and log_3P: Accurate Analytical Models of Point-to-Point Communication in Distributed Systems.
- Author
-
Cameron, Kirk W., Rong Ge, and Xian-He Sun
- Subjects
DISTRIBUTED computing, COMPUTER networks, MIDDLEWARE, COMPUTER storage devices, PPP (Computer network protocol), COMPUTER software, ELECTRONIC data processing, COMPUTER algorithms, HIGH technology industries
- Abstract
Many existing models of point-to-point communication in distributed systems ignore the impact of memory and middleware. Including such details may make these models impractical. Nonetheless, the growing gap between memory and CPU performance combined with the trend toward large-scale, clustered shared memory platforms implies an increased need to consider the impact of middleware on distributed communication. We present a general software-parameterized model of point-to-point communication for use in performance prediction and evaluation. We illustrate the utility of the model in three ways: 1) to derive a simplified, useful, more accurate model of point-to-point communication in clusters of SMPs, 2) to predict and analyze point-to-point and broadcast communication costs in clusters of SMPs, and 3) to express, compare, and contrast existing communication models. Though our methods are general, we present results on several Linux clusters to illustrate practical use on real systems. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
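The abstract of item 48 does not spell out the model's equations, so the sketch below only conveys the general shape of such software-parameterized models: an m-byte transfer is costed as a sum over the "levels" a message crosses (middleware copies, memory, wire), each with its own fixed overhead and per-byte cost. The level list and all parameter values here are invented for illustration and are not the paper's calibration.

```python
def transfer_time(m_bytes, levels):
    # levels: list of (fixed_overhead_sec, seconds_per_byte) per copy/hop.
    return sum(o + m_bytes * g for o, g in levels)

levels = [
    (2e-6, 1e-9),    # user buffer -> middleware copy (assumed values)
    (5e-6, 2e-9),    # middleware -> NIC (assumed values)
    (20e-6, 1e-9),   # wire latency and bandwidth (assumed values)
]
for m in (64, 4096, 1 << 20):
    print(m, "bytes:", transfer_time(m, levels), "s")
```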
49. Worldwide computing: Adaptive middleware and programming technology for dynamic Grid environments.
- Author
-
Varela, Carlos A., Ciancarini, Paolo, and Taura, Kenjiro
- Subjects
MIDDLEWARE, COMPUTER programming, COMPUTER software, COMPUTER algorithms, ELECTRONIC data processing
- Abstract
The article discusses adaptive middleware and programming technology for dynamic Grid environments. Grid systems make it possible to integrate and virtualize geographically dispersed resources offered, and used, by distributed and dynamic virtual organizations. Current Grid implementations can federate worldwide distributed resources across different domains.
- Published
- 2005
- Full Text
- View/download PDF
50. Compilation Techniques for Multimedia Processors.
- Author
-
Krall, Andreas and Lelait, Sylvain
- Subjects
MICROPROCESSORS, MULTIMEDIA computer applications, COMPILERS (Computer programs), COMPUTER software, PARALLEL programming, COMPUTER programming, COMPUTER algorithms, ELECTRONIC data processing
- Abstract
The huge processing power needed by multimedia applications has led to multimedia extensions in the instruction set of microprocessors which exploit subword parallelism. Examples of these extended instruction sets are the Visual Instruction Set of the UltraSPARC processor, the AltiVec instruction set of the PowerPC processor, the MMX and SSE extensions of the Pentium processors, and the MAX-2 instruction set of the HP PA-RISC processor. Currently, these extensions can only be used by programs written in assembly language, through system libraries or by calling specialized macros in a high-level language. Therefore, these instructions are not used by most applications. We propose two code generation techniques to produce native code using these multimedia extensions for programs written in a high-level language: classical vectorization and vectorization by unrolling. Vectorization by unrolling is simpler than classical vectorization since data dependence analysis is reduced to acyclic control flow graph analysis. Furthermore, we address the problem of unaligned memory accesses. This can be handled by both static analysis and dynamic runtime checking. Preliminary experimental results for a code generator for the UltraSPARC VIS instruction set show that speedups of up to a factor of 4.8 are possible, and that vectorization by unrolling is much simpler but as effective as classical vectorization. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF