221 results
Search Results
2. Achieving High Performance the Functional Way: Expressing High-Performance Optimizations as Rewrite Strategies.
- Author
-
Hagedorn, Bastian, Lenfers, Johannes, Koehler, Thomas, Qin, Xueying, Gorlatch, Sergei, and Steuwer, Michel
- Subjects
- *
OPL (Computer program language) , *PROGRAMMING languages , *DOMAIN-specific programming languages , *COMPUTER programming , *ELECTRONIC data processing , *COMPUTER software development - Abstract
Optimizing programs to run efficiently on modern parallel hardware is hard but crucial for many applications. The predominantly used imperative languages force the programmer to intertwine the code describing functionality and optimizations. This results in a portability nightmare that is particularly problematic given the accelerating trend toward specialized hardware devices to further increase efficiency. Many emerging domain-specific languages (DSLs) used in performance-demanding domains such as deep learning attempt to simplify or even fully automate the optimization process. Using a high-level--often functional--language, programmers focus on describing functionality in a declarative way. In some systems such as Halide or TVM, a separate schedule specifies how the program should be optimized. Unfortunately, these schedules are not written in well-defined programming languages. Instead, they are implemented as a set of ad hoc predefined APIs that the compiler writers have exposed. In this paper, we show how to employ functional programming techniques to solve this challenge with elegance. We present two functional languages that work together--each addressing a separate concern. RISE is a functional language for expressing computations using well-known data-parallel patterns. ELEVATE is a functional language for describing optimization strategies. A high-level RISE program is transformed into a low-level form using optimization strategies written in ELEVATE. From the rewritten low-level program, high-performance parallel code is automatically generated. In contrast to existing high-performance domain-specific systems with scheduling APIs, in our approach programmers are not restricted to a set of built-in operations and optimizations but freely define their own computational patterns in RISE and optimization strategies in ELEVATE in a composable and reusable way. We show how our holistic functional approach achieves competitive performance with the state-of-the-art imperative systems such as Halide and TVM. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
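The abstract above presents optimization strategies as ordinary functional values that can be composed and reused. The sketch below illustrates that idea in Python with a hypothetical toy expression type, two strategy combinators, and a single map-fusion rewrite rule; RISE and ELEVATE are their own functional languages, so none of these names come from the paper.

```python
# A minimal sketch, assuming a hypothetical Expr type: rewrite strategies as
# ordinary composable functions, loosely in the spirit of the ELEVATE idea above.
# A strategy takes an expression and returns a rewritten expression, or None on failure.

class Expr:
    def __init__(self, op, *children):
        self.op, self.children = op, list(children)
    def __repr__(self):
        return self.op if not self.children else f"{self.op}({', '.join(map(repr, self.children))})"

def fuse_maps(e):
    # Rewrite rule: map f (map g xs)  ->  map (compose f g) xs
    if e.op == "map" and e.children[1].op == "map":
        f, inner = e.children
        g, xs = inner.children
        return Expr("map", Expr("compose", f, g), xs)
    return None

def try_(s):               # apply strategy s, or leave the expression unchanged
    return lambda e: s(e) or e

def topdown(s):            # apply s at the root, then recurse into the children
    def go(e):
        e = try_(s)(e)
        e.children = [go(c) for c in e.children]
        return e
    return go

prog = Expr("map", Expr("f"), Expr("map", Expr("g"), Expr("xs")))
print(topdown(fuse_maps)(prog))    # map(compose(f, g), xs)
```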
3. Sampling Near Neighbors in Search for Fairness.
- Author
-
Aumüller, Martin, Har-Peled, Sariel, Mahabadi, Sepideh, Pagh, Rasmus, and Silvestri, Francesco
- Subjects
- *
DATA , *FAIRNESS , *DATABASE searching , *SEARCH algorithms , *COMPUTER algorithms , *COMPUTER science , *COMPUTER programming - Abstract
Similarity search is a fundamental algorithmic primitive, widely used in many computer science disciplines. Given a set of points S and a radius parameter r > 0, the r-near neighbor (r-NN) problem asks for a data structure that, given any query point q, returns a point p within distance at most r from q. In this paper, we study the r-NN problem in the light of individual fairness and providing equal opportunities: all points that are within distance r from the query should have the same probability to be returned. The problem is of special interest in high dimensions, where Locality Sensitive Hashing (LSH), the theoretically leading approach to similarity search, does not provide any fairness guarantee. In this work, we show that LSH-based algorithms can be made fair, without a significant loss in efficiency. We propose several efficient data structures for the exact and approximate variants of the fair NN problem. Our approach works more generally for sampling uniformly from a subcollection of sets of a given collection and can be used in a few other applications. We also carried out an experimental evaluation that highlights the inherent unfairness of existing NN data structures. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
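One generic way to obtain the uniformity the abstract above asks for is rejection sampling over a collection of sets (for example, LSH buckets): pick a bucket proportionally to its size, pick a member at random, and accept with probability inversely proportional to how many buckets contain it. The sketch below only illustrates that sampling idea, not the paper's data structures; all names are made up.

```python
# A hedged sketch: sample uniformly from the union of several buckets by picking
# a bucket proportionally to its size, picking a random member, then accepting
# with probability 1 / (#buckets containing that member).
import random

def sample_uniform_from_union(buckets):
    sizes = [len(b) for b in buckets]
    while True:
        i = random.choices(range(len(buckets)), weights=sizes)[0]
        x = random.choice(list(buckets[i]))
        multiplicity = sum(1 for b in buckets if x in b)
        if random.random() < 1.0 / multiplicity:
            return x

buckets = [{1, 2, 3}, {3, 4}, {3, 4, 5, 6}]
counts = {}
for _ in range(10000):
    x = sample_uniform_from_union(buckets)
    counts[x] = counts.get(x, 0) + 1
print(counts)   # each of 1..6 should appear roughly equally often
```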
4. BioScript: Programming Safe Chemistry on Laboratories-on-a-Chip.
- Author
-
Ott, Jason, Loveless, Tyson, Curtis, Chris, Lesani, Mohsen, and Brisk, Philip
- Subjects
- *
COMPUTER programming , *BIOCHIPS , *BIOCHEMISTRY , *MICROFLUIDICS , *DOMAIN specificity , *SYNTAX (Grammar) - Abstract
This paper introduces BioScript, a domain-specific language (DSL) for programmable biochemistry that executes on emerging microfluidic platforms. The goal of this research is to provide a simple, intuitive, and type-safe DSL that is accessible to life science practitioners. The novel feature of the language is its syntax, which aims to optimize human readability; the technical contribution of the paper is the BioScript type system. The type system ensures that certain types of errors, specific to biochemistry, do not occur, such as the interaction of chemicals that may be unsafe. Results are obtained using a custom-built compiler that implements the BioScript language and type system. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
5. Resolution of the Burrows-Wheeler Transform Conjecture.
- Author
-
Kempa, Dominik and Kociumaka, Tomasz
- Subjects
- *
COMPUTER programming , *COMPUTERS in lexicography , *ALGORITHMS , *DATA structures , *COMPUTER science - Abstract
The Burrows-Wheeler Transform (BWT) is an invertible text transformation that permutes symbols of a text according to the lexicographical order of its suffixes. BWT is the main component of popular lossless compression programs (such as bzip2) as well as recent powerful compressed indexes (such as the r-index), central in modern bioinformatics. The compressibility of BWT is quantified by the number r of equal-letter runs in the output. Despite the practical significance of BWT, no nontrivial upper bound on r is known. By contrast, the sizes of nearly all other known compression methods have been shown to be either always within a polylog n factor (where n is the length of the text) from z, the size of Lempel--Ziv (LZ77) parsing of the text, or much larger in the worst case (by an n^ε factor for ε > 0). In this paper, we show that r = O (z log² n) holds for every text. This result has numerous implications for text indexing and data compression; in particular: (1) it proves that many results related to BWT automatically apply to methods based on LZ77, for example, it is possible to obtain functionality of the suffix tree in O (z polylog n) space; (2) it shows that many text processing tasks can be solved in the optimal time assuming the text is compressible using LZ77 by a sufficiently large polylog n factor; and (3) it implies the first nontrivial relation between the number of runs in the BWT of the text and of its reverse. In addition, we provide an O (z polylog n)-time algorithm converting the LZ77 parsing into the run-length compressed BWT. To achieve this, we develop several new data structures and techniques of independent interest. In particular, we define compressed string synchronizing sets (generalizing the recently introduced powerful technique of string synchronizing sets) and show how to efficiently construct them. Next, we propose a new variant of wavelet trees for sequences of long strings, establish a nontrivial bound on their size, and describe efficient construction algorithms. Finally, we develop new indexes that can be constructed directly from the LZ77 parsing and efficiently support pattern matching queries on text substrings. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
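For concreteness, the sketch below computes the BWT of a short string naively from its sorted rotations and counts the number r of equal-letter runs discussed in the abstract; it is a toy illustration, not the paper's O(z polylog n)-time construction.

```python
# A small, naive illustration of the quantities above: the BWT of a text (built
# from sorted rotations, fine for short strings) and the number r of runs.
def bwt(text):
    text = text + "\0"                       # unique sentinel, smaller than all symbols
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rot[-1] for rot in rotations)

def runs(s):
    return 1 + sum(1 for a, b in zip(s, s[1:]) if a != b) if s else 0

t = "abracadabraabracadabra"
b = bwt(t)
print(b.replace("\0", "$"), "r =", runs(b))  # repetitive texts give few runs
```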
6. Framing Sustainability as a Property of Software Quality.
- Author
-
LAGO, PATRICIA, AKINLI KOÇAK, SEDEF, CRNKOVIC, IVICA, and PENZENSTADLER, BIRGIT
- Subjects
- *
SUSTAINABILITY , *COMPUTER programming , *CAR sharing , *PAPER mills , *COMPUTER software development - Abstract
The article focuses on case studies on sustainability in computer programming. The cases investigated were a paper mill's energy control system and a car-sharing service in Germany. The authors argue that creating a sustainability analysis framework will allow software developers to analyze the environmental and social aspects of their work and to balance this with technical and economic considerations. However, they note that sustainability requirements will add to the system scope and will necessitate more work during the requirements phase. INSET: Software Sustainability.
- Published
- 2015
- Full Text
- View/download PDF
7. Constant Overhead Quantum Fault Tolerance with Quantum Expander Codes.
- Author
-
Fawzi, Omar, Grospellier, Antoine, and Leverrier, Anthony
- Subjects
- *
QUANTUM computing , *COMPUTER programming , *ALGORITHMS , *FAULT-tolerant computing - Abstract
The threshold theorem is a seminal result in the field of quantum computing asserting that arbitrarily long quantum computations can be performed on a faulty quantum computer provided that the noise level is below some constant threshold. This remarkable result comes at the price of increasing the number of qubits (quantum bits) by a large factor that scales polylogarithmically with the size of the quantum computation we wish to realize. Minimizing the space overhead for fault-tolerant quantum computation is a pressing challenge that is crucial to benefit from the computational potential of quantum devices. In this paper, we study the asymptotic scaling of the space overhead needed for fault-tolerant quantum computation. We show that the polylogarithmic factor in the standard threshold theorem is in fact not needed and that there is a fault-tolerant construction that uses a number of qubits that is only a constant factor more than the number of qubits of the ideal computation. This result was conjectured by Gottesman who suggested to replace the concatenated codes from the standard threshold theorem by quantum error-correcting codes with a constant encoding rate. The main challenge was then to find an appropriate family of quantum codes together with an efficient classical decoding algorithm working even with a noisy syndrome. The efficiency constraint is crucial here: bear in mind that qubits are inherently noisy and that faults keep accumulating during the decoding process. The role of the decoder is therefore to keep the number of errors under control during the whole computation. On a technical level, our main contribution is the analysis of the small-set-flip decoding algorithm applied to the family of quantum expander codes. We show that it can be parallelized to run in constant time while correcting sufficiently many errors on both the qubits and the syndrome to keep the error under control. These tools can be seen as a quantum generalization of the bit-flip algorithm applied to the (classical) expander codes of Sipser and Spielman. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
8. Efficient Parallelization Using Rank Convergence in Dynamic Programming Algorithms.
- Author
-
Maleki, Saeed, Musuvathi, Madanlal, and Mytkowicz, Todd
- Subjects
- *
PARALLEL algorithms , *DYNAMIC programming , *VITERBI decoding , *PARALLEL computers , *COMPUTER programming - Abstract
This paper proposes an efficient parallel algorithm for an important class of dynamic programming problems that includes Viterbi, Needleman-Wunsch, Smith-Waterman, and Longest Common Subsequence. In dynamic programming, the subproblems that do not depend on each other, and thus can be computed in parallel, form stages, or wavefronts. The algorithm presented in this paper provides additional parallelism allowing multiple stages to be computed in parallel despite dependences among them. The correctness and the performance of the algorithm relies on rank convergence properties of matrix multiplication in the tropical semiring, formed with plus as the multiplicative operation and max as the additive operation. This paper demonstrates the efficiency of the parallel algorithm by showing significant speedups on a variety of important dynamic programming problems. In particular, the parallel Viterbi decoder is up to 24× faster (with 64 processors) than a highly optimized commercial baseline. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
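The rank-convergence argument in the abstract above is stated in terms of matrix products over the tropical semiring, where plus plays the role of multiplication and max the role of addition. The sketch below shows that product and a single Viterbi-like stage update; it is a plain illustration of the algebra, not the parallel algorithm itself.

```python
# A hedged sketch of matrix "multiplication" in the tropical (max, +) semiring:
# entries combine with +, and are aggregated with max. Chains of such products
# propagate dynamic-programming stage values.
NEG_INF = float("-inf")

def maxplus_matmul(A, B):
    n, k, m = len(A), len(B), len(B[0])
    return [[max(A[i][t] + B[t][j] for t in range(k)) for j in range(m)] for i in range(n)]

def maxplus_matvec(A, v):
    return [max(A[i][j] + v[j] for j in range(len(v))) for i in range(len(A))]

# One Viterbi-like stage: new_score[i] = max_j (transition[i][j] + old_score[j])
transition = [[0, -1], [-2, 0]]
scores = [0, NEG_INF]
print(maxplus_matvec(transition, scores))   # scores after one stage: [0, -2]
```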
9. SHORT PAPERS.
- Author
-
Cheatham, Thomas E.
- Subjects
- *
COMPUTER software research , *COMPUTER programming , *COMPUTER software , *VIDEO games - Abstract
This article presents several research papers related to computing machinery and computer programs. Designing a system for the analytic processing of mathematical functions around a central syntax processor and permitting the user to work within the context of mathematical syntax leads to a flexible system with a broad range of capabilities. One advantage of the approach mentioned in "A Syntax-Directed Approach to Automated Aids for Symbolic Math," by L. Clapp, is that the basic system can be developed without many a priori restrictions on the nature of the mathematical entities to be processed. Once the basic structure has been developed, the user is free to define and operate within his own system of mathematics by modifying or extending the syntax definitions. In the paper "Programming Languages, Logic and Cooperative Games," by L. Hodes, an application of computers is presented which is not restricted to a single problem area but is perhaps not general purpose enough to be considered a programming language, even a problem-oriented one.
- Published
- 1966
10. THE 1981 GEORGE E. FORSYTHE STUDENT PAPER COMPETITION.
- Subjects
- *
COMPUTER programming , *DEBUGGING , *AWARDS , *COMPUTER science - Abstract
The article presents information on a research paper related to computer programming and debugging, which won an award in the 1981 George E. Forsythe Student Paper Competition. The research paper "A Program Testing Assistant," by David Chapman, a graduate student, was selected because of its original and incisive research into program development and debugging. The competition memorializes George E. Forsythe, president of the Association for Computing Machinery from 1964 to 1966, and a leader in the development of computer science as a separate discipline. Chapman will receive a cash award of 500 dollars and a certificate. The papers of the first competition were refereed and judged by a committee of graduate students. The tradition has continued, with each competition administered by the graduate students at a major university with the assistance of a faculty advisor. For 1981, it was a committee of graduate students in computer science at Cornell University, New York, with Professor David J. Gries serving as faculty advisor.
- Published
- 1982
11. Papers from the Second ACM Symposium on Principles of Programming Languages.
- Author
-
Harrison, M. A., Morris, Jr., J. H., and Standish, T. A.
- Subjects
- *
CONFERENCES & conventions , *PROGRAMMING languages , *COMPUTER programming , *ARTIFICIAL languages , *TECHNICAL reports , *SCIENTIFIC literature , *INFORMATION theory , *SEMINARS - Abstract
The article presents information on the papers from the Second Association for Computing Machinery (ACM) Symposium. The topic of the symposium was principles of programming languages. It was sponsored by the ACM Special Interest Group on Automata and Computability Theory and the ACM Special Interest Group on Programming Languages. The organizers of the symposium believed that this joint effort would result in increased, productive understanding of many topics. The topics studied in the papers for the second symposium included proving assertions about programs and data, optimization techniques, the computational complexity of testing grammars for certain key properties, automatic choice of data representations, exception handling, and various new approaches to programming language semantics. The program committee received 107 abstracts, out of which 22 papers were selected for presentation. The papers represented almost all areas of current research. All papers were reviewed by referees in the normal fashion.
- Published
- 1975
- Full Text
- View/download PDF
12. Student Paper Competition Awards.
- Subjects
- *
COMPUTER programming , *CONTESTS , *COLLEGE students , *PRIZES (Contests & competitions) , *COMPUTER storage devices , *UNIVERSITIES & colleges , *AWARDS - Abstract
This article presents information on the winning papers of the first annual ACM Communications Student Paper Competition organized by the Association for Computing Machinery (ACM). First place was given to the paper "Generating Parsers for Affix Grammars," by David R. Crowe of the University of British Columbia. The prize included $250 cash, a trip to ACM 72 to receive the award in person, and a three-year subscription to the ACM serial publication of his choice. Second place was given to the paper "Political Redistricting by Computer," by Robert E. Helbig, Patrick K. Orr, and Robert R. Roediger of Washington University. The prize included $150 cash, and for each author a three-year subscription to the ACM serial publication of his choice. Third place was given to the paper "An Extensible Editor for a Small Machine with Disk Storage," by Arthur J. Benjamin of Brandeis University. The prize included $100 cash, and a three-year subscription to the ACM serial publication of his choice. All of the refereeing of Competition papers was done by graduate students at various colleges and universities.
- Published
- 1972
13. Research for Practice: Knowledge Base Construction in the Machine-Learning Era.
- Author
-
RATNER, ALEX and RÉ, CHRIS
- Subjects
- *
KNOWLEDGE base , *DEEP learning , *DATA , *ERRORS , *COMPUTER programming - Abstract
The article reports on the construction of knowledge bases through the use of deep learning. It presents summaries of three papers on several facets of knowledge base construction: the need for joint learning to prevent error cascades, the weak supervision of training data, and the representation of the data both as it is input and output.
- Published
- 2018
- Full Text
- View/download PDF
14. Calls for Papers: Important Dates.
- Subjects
- *
COMPUTER programming , *COMPUTER software , *INFORMATION science - Abstract
A call for papers on computer programming and information science is presented.
- Published
- 1977
15. Soylent: A Word Processor with a Crowd Inside.
- Author
-
Bernstein, Michael S., Little, Greg, Miller, Robert C., Hartmann, Björn, Ackerman, Mark S., Karger, David R., Crowell, David, and Panovich, Katrina
- Subjects
- *
CROWDSOURCING , *COMPUTER architecture , *USER interfaces , *HUMAN-machine systems , *COMPUTER programming , *ELECTRONIC data processing , *COMPUTER algorithms - Abstract
This paper introduces architectural and interaction patterns for integrating crowdsourced human contributions directly into user interfaces. We focus on writing and editing, complex endeavors that span many levels of conceptual and pragmatic activity. Authoring tools offer help with pragmatics, but for higher-level help, writers commonly turn to other people. We thus present Soylent, a word processing interface that enables writers to call on Mechanical Turk workers to shorten, proofread, and otherwise edit parts of their documents on demand. To improve worker quality, we introduce the Find-Fix-Verify crowd programming pattern, which splits tasks into a series of generation and review stages. Evaluation studies demonstrate the feasibility of crowdsourced editing and investigate questions of reliability, cost, wait time, and work time for edits. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
16. PROFESSIONAL ACTIVITIES.
- Subjects
- *
CONFERENCES & conventions , *COMPUTERS , *TRANSACTION systems (Computer systems) , *CYBERNETICS , *COMPUTER programming , *HIGH performance computing - Abstract
The article presents information on various events related to the field of computers and computing. "The 1985 ACM Thirteenth Annual Computer Science Conference," is to be held during March 12-14, 1985, in New Orleans, Louisiana. Highlights include a professional program organized around three theme days, exhibits, the finals of the ACM International Scholastic Programming Contest, the Annual Employment Register, the conference luncheon, a special luncheon for department chairpersons, and several social functions. Besides, "The ACM/IEEE-CS Workshop on High Performance Transaction Systems" is to be held during September 23-25, 1985, in Pacific Grove, California. Participation is by invitation only, and attendance is to be limited to 50 participants. Each participant is expected to actively contribute to the workshop by submitting a position paper, an extended abstract, or a full paper.
- Published
- 1985
17. Abstracts from Other ACM Publications.
- Subjects
- *
COMPUTERS , *DATABASES , *HEURISTIC programming , *COMPUTER programming , *ARTIFICIAL intelligence , *COMPUTER networks - Abstract
The article presents abstracts of various papers related to computers and computing. The paper "Disaggregations in Databases," by Serge Abiteboul, presents an algebraic foundation of database schema design. A new database operator, namely disaggregation, is introduced. Beginning with "free" families, repeated applications of disaggregation and three other operators yield families of increasingly elaborate structure. In particular, it suggests that families defined by one join dependency and several embedded functional dependencies can be obtained in this manner. Another paper, "Three Approaches to Heuristic Search in Networks," by A. Bagchi and A. Mahanti, analyzes three different approaches to heuristic search in networks. In the first approach the basic idea is to choose for expansion that node for which the evaluation function has a minimum value. In the second approach, in contrast to the earlier one, a node that is expanded once is not expanded again; instead, a propagation of values takes place. The third approach is an adaptation for networks of an AND/OR graph marking algorithm.
- Published
- 1985
18. Performance Results of the Simplex Algorithm for a Set of Real-World Linear Programming Models.
- Author
-
McCall, Edward H. and Greenberg, Harvey J.
- Subjects
- *
LINEAR programming , *MATHEMATICAL programming , *ALGORITHMS , *COMPUTER programming , *COMPUTER algorithms , *ALGEBRA - Abstract
This paper provides performance results using the SPERRY UNIVAC 1100 Series linear programming product FMPS to solve a set of 16 real-world linear programming problems. As such, this paper provides a data point for the actual performance of a commercial simplex algorithm on real-world linear programming problems and shows that the simplex algorithm is a linear time algorithm in actual performance. Correlations and performance relationships not previously available are also provided. [ABSTRACT FROM AUTHOR]
- Published
- 1982
- Full Text
- View/download PDF
19. An Algorithm for Exhaustive Generation of Building Floor Plans.
- Author
-
Galle, Per and Fritsch, F. N.
- Subjects
- *
ARCHITECTURAL designs , *ARCHITECTURAL drawing , *COMPUTER algorithms , *COMPUTER programming , *ELECTRONIC data processing , *MATHEMATICAL analysis - Abstract
The combinatorial complexity of most floor plan design problems makes it practically impossible to obtain a systematic knowledge of possible solutions using pencil and paper. The objective of this paper is to contribute to the development of computer methods providing such knowledge for the designer. The paper describes an algorithm which generates all possible rectangular plans on modular grids with congruent cells, subject to constraints on total area, room areas, wall lengths, room adjacencies, and room orientations. To make room sizes regular and limit the solution set, only such grids are used which minimize the number of cells in the smallest room. The description is sufficiently detailed to serve as a basis for programming. Test results for a Pascal implementation of the algorithm are reported. Realistic problems of up to ten rooms have been solved in modest lengths of computer time. The results indicate that the approach of exhaustive generation may prove to be more fruitful than generally assumed. [ABSTRACT FROM AUTHOR]
- Published
- 1981
- Full Text
- View/download PDF
20. Multidimensional Divide-and-Conquer.
- Author
-
Bentley, Jon Louis
- Subjects
- *
COMPUTER algorithms , *DATA structures , *ALGORITHMS , *ELECTRONIC file management , *COMPUTER programming , *ELECTRONIC data processing - Abstract
Most results in the field of algorithm design are single algorithms that solve single problems. In this paper we discuss multidimensional divide-and-conquer, an algorithmic paradigm that can be instantiated in many different ways to yield a number of algorithms and data structures for multidimensional problems. We use this paradigm to give best-known solutions to such problems as the ECDF, maxima, range searching, closest pair, and all nearest neighbor problems. The contributions of the paper are on two levels. On the first level are the particular algorithms and data structures given by applying the paradigm. On the second level is the more novel contribution of this paper: a detailed study of an algorithmic paradigm that is specific enough to be described precisely yet general enough to solve a wide variety of problems. [ABSTRACT FROM AUTHOR]
- Published
- 1980
- Full Text
- View/download PDF
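As a concrete instance of the paradigm, the sketch below solves one of the problems named in the abstract, the planar maxima problem, by the multidimensional divide-and-conquer recipe: split on the median x-coordinate, solve both halves recursively, and discard left-half points dominated by the right half. This is a textbook-style illustration rather than the paper's exact presentation.

```python
# A hedged illustration of multidimensional divide-and-conquer on the planar
# maxima ("undominated points") problem.
def maxima_2d(points):
    if len(points) <= 1:
        return list(points)
    pts = sorted(points)                    # sort by x (then y)
    mid = len(pts) // 2
    left, right = maxima_2d(pts[:mid]), maxima_2d(pts[mid:])
    best_right_y = max(y for _, y in right)
    # Left-half points survive only if no right-half point dominates them in y.
    return [p for p in left if p[1] > best_right_y] + right

print(maxima_2d([(1, 5), (2, 4), (3, 6), (4, 1), (5, 3)]))   # [(3, 6), (5, 3)]
```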
21. Optimizing Decision Trees Through Heuristically Guided Search.
- Author
-
Martelli, Alberto, Montanari, Ugo, and Morgan, H.
- Subjects
- *
DECISION trees , *DECISION logic tables , *COMPUTER programming , *COMPUTER software , *ALGORITHMS , *COMPUTER algorithms - Abstract
Optimal decision table conversion has been tackled in the literature using two approaches, dynamic programming and branch-and-bound. The former technique is quite effective, but its time and space requirements are independent of how "easy" the given table is. Furthermore, it cannot be used to produce good, quasioptimal solutions. The branch-and-bound technique uses a good heuristic to direct the search, but is cluttered up by an enormous search space, since the number of solutions increases with the number of test variables according to a double exponential. In this paper we suggest a heuristically guided top-down search algorithm which, like dynamic programming, recognizes identical subproblems but which can be used to find both optimal and quasioptimal solutions. The heuristic search method introduced in this paper combines the positive aspects of the above two techniques. Compressed tables with a large number of variables can be handled without deriving expanded tables first. [ABSTRACT FROM AUTHOR]
- Published
- 1978
- Full Text
- View/download PDF
22. Detection of Logical Errors in Decision Table Programs.
- Author
-
Ibramsha, M., Rajaraman, V., and Morgan, H.
- Subjects
- *
DECISION logic tables , *ERRORS , *COMPUTER programming , *MATHEMATICAL variables , *ALGORITHMS , *FORTRAN , *COMPUTER software - Abstract
In this paper an algorithm to detect logical errors in a limited-entry decision table and in loop-free programs with embedded decision tables is developed. All the conditions in the decision tables are assumed to be inequalities or equalities relating linear expressions. It is also assumed that actions in a decision table are linear in variables which occur in the condition stub of the decision table (or tables) to which control is transferred from the table. The algorithm is based on determining whether a set of linear inequalities has or does not have a solution. The algorithm described in the paper is implemented in Fortran IV. [ABSTRACT FROM AUTHOR]
- Published
- 1978
- Full Text
- View/download PDF
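The heart of the method in the abstract above is a feasibility test: a decision-table rule whose conditions are linear inequalities is logically impossible exactly when the corresponding system of inequalities has no solution. The sketch below delegates that test to SciPy's LP solver; the helper name and example system are invented for illustration, and the paper's own implementation was in Fortran IV.

```python
# A hedged sketch: a rule is satisfiable iff the system A_ub @ x <= b_ub is feasible.
import numpy as np
from scipy.optimize import linprog

def rule_is_satisfiable(A_ub, b_ub):
    # Feasibility only: minimize the zero objective over unbounded variables.
    n = A_ub.shape[1]
    res = linprog(c=np.zeros(n), A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * n)
    return res.success

# Conditions: x1 - x2 <= -1 (x1 < x2) and x2 - x1 <= -1 (x2 < x1): contradictory.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
b = np.array([-1.0, -1.0])
print(rule_is_satisfiable(A, b))   # False: this rule can never fire
```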
23. Analyses of Deterministic Parsing Algorithms.
- Author
-
Cohen, Jacques, Roth, Martin S., and Horning, J. J.
- Subjects
- *
COMPUTER algorithms , *PARSING (Computer grammar) , *COMPUTATIONAL linguistics , *FORMAL languages , *COMPUTER programming , *PROGRAMMING languages - Abstract
This paper describes an approach for determining the minimum, maximum, and average times to parse sentences acceptable by a deterministic parser. These quantities are presented in the form of symbolic formulas, called time-formulas. The variables in these formulas represent not only the length of the input string but also the time to perform elementary operations such as pushing, popping, subscripting, iterating, etc. By binding to the variables actual numerical values corresponding to a given compiler-machine configuration, one can determine the execution time for that configuration. Time-formulas are derived by examining the grammar rules and the program representing the algorithm one wishes to analyze. The approach is described by using a specific grammar that defines simple arithmetic expressions. Two deterministic parsers are analyzed: a top-down recursive descent LL(1) parser, and a bottom-up SLR(1) parser. The paper provides estimates for the relative efficiencies of the two parsers. The estimates applicable to a specific machine, the PDP-10, are presented and substantiated by benchmarks. Finally, the paper illustrates the proposed approach by applying it to the analyses of parsers for a simple programming language. [ABSTRACT FROM AUTHOR]
- Published
- 1978
- Full Text
- View/download PDF
24. Computers as an Innovation in American Local Governments.
- Author
-
Danziger, James N. and Dutton, William H.
- Subjects
- *
COMPUTER users , *TECHNOLOGICAL innovations , *COMPUTER programmers , *COMPUTER programming , *ELECTRONIC data processing , *LOCAL government - Abstract
Computers and electronic data processing are a major technological innovation in the operations of American local government. This paper establishes that there is substantial variation among the larger local governments in the rate at which they adopt computer technology, in the level of financial support they provide for EDP, and in the extensiveness and sophistication of their automated applications. The central question addressed is: What might explain the differences between governments in the extent to which they adopt and use computers? Hypotheses are tested for several streams of explanatory factors, using data from more than 500 city and county governments. The findings identify certain local government milieus which are particularly conducive to higher levels of computer innovation. Somewhat unexpected findings reveal the significant impact of the distribution of control over EDP decisions and the dominant political values within the government. Other important factors include the measured need for computer applications and the presence of external funding support for computing. Finally, the paper suggests a framework for identifying the key determinants of other technological innovations. [ABSTRACT FROM AUTHOR]
- Published
- 1977
- Full Text
- View/download PDF
25. Abstract Data Types and the Development of Data Structures.
- Author
-
Guttag, John and Wegbreit, B.
- Subjects
- *
ABSTRACT data types (Computer science) , *COMPUTER programming , *PROGRAMMING languages , *DATA structures , *STRUCTURED programming , *COMPUTER software development - Abstract
Abstract data types can play a significant role in the development of software that is reliable, efficient, and flexible. This paper presents and discusses the application of an algebraic technique for the specification of abstract data types. Among the examples presented is a top-down development of a symbol table for a block structured language; a discussion of the proof of its correctness is given. The paper also contains a brief discussion of the problems involved in constructing algebraic specifications that are both consistent and complete. [ABSTRACT FROM AUTHOR]
- Published
- 1977
- Full Text
- View/download PDF
26. Practical Syntactic Error Recovery.
- Author
-
Graham, Susan L. and Rhodes, Steven P.
- Subjects
- *
DEBUGGING , *RESEARCH , *COMPUTER programming , *PROGRAMMING languages , *COMPUTER software , *COMPUTER programmers , *LANGUAGE & languages , *SYNTAX (Grammar) , *COMPUTER software correctness - Abstract
This paper describes a recovery scheme for syntax errors which provides automatically-generated high quality recovery with good diagnostic information at relatively low cost. Previous recovery techniques are summarized and empirical comparisons are made. Suggestions for further research on this topic conclude the paper. [ABSTRACT FROM AUTHOR]
- Published
- 1975
- Full Text
- View/download PDF
27. Register Allocation Via Usage Counts.
- Author
-
Freiburghouse, R. A. and Manacher, G.
- Subjects
- *
COMPUTER algorithms , *REGISTERS (Computers) , *COMPUTER storage devices , *COMPUTER programming , *COMPUTER simulation , *OPERATIONS research - Abstract
This paper introduces the notion of usage counts, shows how usage counts can be developed by algorithms that eliminate redundant computations, and describes how usage counts can provide the basis for register allocation. The paper compares register allocation based on usage counts to other commonly used register allocation techniques, and presents evidence which shows that the usage count technique is significantly better than these other techniques. [ABSTRACT FROM AUTHOR]
- Published
- 1974
- Full Text
- View/download PDF
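A minimal sketch of the usage-count idea follows, under the simplifying assumption that the most frequently referenced values should occupy the limited registers: count references in a straight-line region and keep the top-ranked values in registers. A real allocator, as the abstract notes, combines this with redundancy elimination and would also weight uses by loop depth; all names below are hypothetical.

```python
# A hedged sketch of register allocation by usage counts.
from collections import Counter

def allocate_by_usage(instructions, num_registers):
    uses = Counter(v for ins in instructions for v in ins["reads"])
    ranked = [v for v, _ in uses.most_common()]
    in_register = ranked[:num_registers]
    return {v: f"r{i}" for i, v in enumerate(in_register)}, ranked[num_registers:]

code = [
    {"op": "add", "reads": ["a", "b"]},
    {"op": "mul", "reads": ["a", "c"]},
    {"op": "sub", "reads": ["a", "b"]},
]
regs, spilled = allocate_by_usage(code, num_registers=2)
print(regs, "spilled:", spilled)   # 'a' and 'b' get registers, 'c' stays in memory
```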
28. Extending the Information Theory Approach to Converting Limited-Entry Decision Tables to Computer Programs.
- Author
-
Manacher, G. and Shwayder, Keith
- Subjects
- *
COMPUTER software , *COMPUTER algorithms , *PROGRAMMING languages , *DECISION logic tables , *FLOW charts , *COMPUTER programming - Abstract
This paper modifies an earlier algorithm for converting decision tables into flowcharts which minimize subsequent execution time when compiled into a computer program. The algorithms considered in this paper perform limited search and, accordingly, do not necessarily result in globally optimal solutions. However, the greater search effort needed to obtain a globally optimal solution for complex decision tables is usually not justified by sufficient savings in execution time. There is an analogy between the problem of converting decision tables into efficient flowcharts and the well-understood problem in information theory of noiseless coding. The results of the noiseless coding literature are used to explore the limitations of algorithms used to solve the decision table problem. The analogy between the two problems is also used to develop improvements to the information algorithm in extending the depth of search under certain conditions and in proposing additional conditions to be added to the decision table. Finally, the information algorithm is compared with an algorithm proposed in a recent paper by Verhelst. [ABSTRACT FROM AUTHOR]
- Published
- 1974
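The noiseless-coding analogy in the abstract suggests testing next the condition whose outcome carries the most information under the rule frequencies. The sketch below scores each condition of a limited-entry decision table by the entropy of its Y/N split; the rule frequencies and function names are illustrative only, not the paper's algorithm.

```python
# A hedged sketch of the information-theoretic heuristic: pick the condition
# whose Y/N outcome has the highest entropy under the rule frequencies.
import math

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

def best_condition(rules):
    # rules: list of (condition_values, frequency); condition_values is a tuple of "Y"/"N" entries
    total = sum(freq for _, freq in rules)
    n_conditions = len(rules[0][0])
    def gain(c):
        p_yes = sum(freq for vals, freq in rules if vals[c] == "Y") / total
        return entropy([p_yes, 1 - p_yes])
    return max(range(n_conditions), key=gain)

rules = [(("Y", "Y"), 0.7), (("Y", "N"), 0.2), (("N", "N"), 0.1)]
print(best_condition(rules))   # condition 1: it splits the probability mass more evenly
```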
29. META II: Digital Vellum in the Digital Scriptorium.
- Author
-
LONG, DAVE
- Subjects
- *
COMPILERS (Computer programs) , *PROGRAMMING software , *PROGRAMMING languages , *RECURSION theory , *COMPUTER programming , *PYTHON programming language - Abstract
The article considers a paper written in 1962 by computer scientist Dewey Val Schorre that describes a computer compiler called META II. The author quotes various passages from the text that reference concepts including recursion, monadic composition, and the VALGOL computer programming language. He also describes how he replicated Val Schorre's work with the aid of the Python programming language and a process called bootstrapping. In his view META II can be characterized as a self-applicable compiler.
- Published
- 2015
- Full Text
- View/download PDF
30. Index Ranges for Matrix Calculi.
- Author
-
Bayer, R., Witzgall, C., and Gries, D.
- Subjects
- *
MATHEMATICAL analysis , *ALGORITHMS , *DATA structures , *COMPUTER programming , *ELECTRONIC file management , *ELECTRONIC data processing - Abstract
The paper describes a scheme for symbolic manipulation of index expressions which arise as a by-product of the symbolic manipulation of expressions in the matrix calculi described by the authors in a previous paper. This scheme attempts program optimization by transforming the original algorithm rather than the machine code. The goal is to automatically generate code for handling the tedious address calculations necessitated by complicated data structures. The paper is therefore preoccupied with "indexing by position." The relationship of "indexing by name" and "indexing by position" is discussed. [ABSTRACT FROM AUTHOR]
- Published
- 1972
- Full Text
- View/download PDF
31. professional activities.
- Subjects
- *
CONFERENCES & conventions , *MICROPROGRAMMING , *COMPUTER programming , *FORUMS , *COMPUTER graphics , *PARALLEL computers , *AUTOMATIC data collection systems - Abstract
This article presents information on various activities organized by the Association for Computing Machinery (ACM). The Fifth Annual Microprogramming Workshop will be held in the Illini Union of the University of Illinois at Urbana-Champaign on September 25 and 26. Sponsored by the ACM Special Interest Group on Microprogramming and the IEEE Computer Society, this workshop provides a leading forum where active workers in the microprogramming area can hear several formal papers and participate in informal discussion groups oriented to specific topics. Professor Daniel L. Slotnick, director of the Center for Advanced Computation at the University of Illinois at Urbana-Champaign, will be the featured speaker at a banquet on September 25; his topic will be "Parallel Processing." The 1973 San Diego Biomedical Symposium will be held January 31 through February 2, 1973, at the Sheraton-Harbor Island Hotel, San Diego, California. The Symposium theme will be "Innovations in Biomedicine." Technical papers appropriate to the following sessions are invited: aids to clinical care, modeling and analysis, interpretation and data reduction, scanning and image processing, information engineering, and general innovations.
- Published
- 1972
32. Programming Languages: History and Future.
- Author
-
Sammet, Jean E.
- Subjects
- *
PROGRAMMING languages , *CHRONOLOGY , *COMPUTER programming , *COMPUTER software , *COMPUTER programming management , *ELECTRONIC data processing , *SOFTWARE engineering , *COMPUTER software developers - Abstract
This paper discusses both the history and future of programming languages (= higher level languages). Some of the difficulties in writing such a history are indicated. A key part of the paper is a tree showing the chronological development of languages and their interrelationships. Reasons for the proliferation of languages are given. The major languages are listed with the reasons for their importance. A section on chronology indicates the happenings of the significant previous time periods and the major topics of 1972. [ABSTRACT FROM AUTHOR]
- Published
- 1972
- Full Text
- View/download PDF
33. Algorithms.
- Author
-
Paciorek, Kathleen A. and Fosdick, L. D.
- Subjects
- *
ALGORITHMS , *COMPUTERS , *ELECTRONIC systems , *FORTRAN , *COMPUTER programming - Abstract
The article presents several research papers relating to algorithms. The paper "Greatest Common Divisor of n Integers and Multipliers" presents an algorithm which calculates the greatest common divisor, IGCD, of n integers. Details of the method and comparisons to other algorithms are also given. The algorithm is a new version of the Euclidean algorithm for n integers. The n-1 calculations of the greatest common divisor of two integers is accomplished by means of a modified version of the Blankinship algorithm. The paper "Exponential Integral Ei (x)" presents the results of one phase of research carried out at the Jet Propulsion Laboratory, California Institute of Technology, under Contract NAS7-100, sponsored by the National Aeronautics and Space Administration. It presents an algorithm which was compiled and executed without any modification on a UNIVAC 1108 computer. An unfortunate precedent has been set in several recent algorithms of using an illegal FORTRAN construction.
- Published
- 1970
34. Code Extension in ASCII (An ASA Tutorial).
- Author
-
Gorn, S.
- Subjects
- *
ASCII (Character set) , *CHARACTER sets (Data processing) , *ALPHABET -- Data processing , *DATA processing of signs & symbols , *COMPUTER programming - Abstract
The American Standard Code for Information Interchange (ASCII) contains a number of control characters associated with the principle of code extension, that is, with the representation of information which cannot be directly represented by means of the characters in the Code. The manner of use of these characters has not previously been completely described. This paper presents a set of mutually consistent philosophies regarding code extension applications, and suggests a corollary set of doctrines for the application of the code extension characters. Distinctions are drawn between code extension and such other concepts as "graphic substitution" or "syntactic representation" which are often used to meet similar requirements. Also covered are certain topics which are not truly concerned with code extension but which are often linked with it in discussion on code applications. The material in this paper is equally applicable in principle to the (proposed) ISO international 7-bit code for information interchange. [ABSTRACT FROM AUTHOR]
- Published
- 1966
- Full Text
- View/download PDF
35. Partial evaluation of string obfuscations for Java malware detection.
- Author
-
Chawdhary, Aziem, Singh, Ranjeet, and King, Andy
- Subjects
- *
MALWARE prevention , *JAVA programming language , *ANTIVIRUS software , *COMPUTER programming , *COMPUTER security - Abstract
The fact that Java is platform independent gives hackers the opportunity to write exploits that can target users on any platform that has a JVM implementation. Metasploit is a well-known source of Java exploits and, to circumvent detection by antivirus (AV) software, obfuscation techniques are routinely applied to make an exploit more difficult to recognise. Popular obfuscation techniques for Java include string obfuscation and applying reflection to hide method calls; two techniques that can either be used together or independently. This paper shows how to apply partial evaluation to remove these obfuscations and thereby improve AV matching. The paper presents a partial evaluator for Jimple, which is an intermediate language for JVM bytecode designed for optimisation and program analysis, and demonstrates how partially evaluated Jimple code, when transformed back into Java, improves the detection rates of a number of commercial AV products. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
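The core move described in the abstract, specialising away string obfuscation, amounts to constant folding: when every operand of a concatenation or decode step is statically known, evaluate it so the original literal reappears for AV matching. The sketch below does this on a made-up tuple-based expression form rather than on Jimple, and the obfuscated example string is invented.

```python
# A hedged sketch of deobfuscation by partial evaluation (constant folding).
def partial_eval(expr):
    # expr is a literal (str/int), or a tuple ("concat", e1, e2) / ("xor", e, key)
    if isinstance(expr, (str, int)):
        return expr
    op, *args = expr
    args = [partial_eval(a) for a in args]
    if op == "concat" and all(isinstance(a, str) for a in args):
        return "".join(args)                                 # fold static concatenation
    if op == "xor" and isinstance(args[0], str) and isinstance(args[1], int):
        return "".join(chr(ord(c) ^ args[1]) for c in args[0])   # fold static decode
    return (op, *args)                                       # residual code: not fully static

obfuscated = ("concat", ("xor", "k`w`/m`of/St", 1), ("concat", "nt", "ime"))
print(partial_eval(obfuscated))   # java.lang.Runtime
```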
36. Generalised rely-guarantee concurrency: an algebraic foundation.
- Author
-
Hayes, Ian
- Subjects
- *
ALGEBRAIC geometry , *SOFTWARE verification , *KLEENE algebra , *COMPUTER programming , *QUOTIENT rule , *LATTICE theory - Abstract
The rely-guarantee technique allows one to reason compositionally about concurrent programs. To handle interference the technique makes use of rely and guarantee conditions, both of which are binary relations on states. A rely condition is an assumption that the environment performs only atomic steps satisfying the rely relation and a guarantee is a commitment that every atomic step the program makes satisfies the guarantee relation. In order to investigate rely-guarantee reasoning more generally, in this paper we allow interference to be represented by a process rather than a relation and hence derive more general rely-guarantee laws. The paper makes use of a weak conjunction operator between processes, which generalises a guarantee relation to a guarantee process, and introduces a rely quotient operator, which generalises a rely relation to a process. The paper focuses on the algebraic properties of the general rely-guarantee theory. The Jones-style rely-guarantee theory can be interpreted as a model of the general algebraic theory and hence the general laws presented here hold for that theory. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
37. Hazy: Making It Easier to Build and Maintain Big-Data Analytics.
- Author
-
KUMAR, ARUN, NIU, FENG, and RÉ, CHRISTOPHER
- Subjects
- *
BIG data , *MACHINE learning , *ALGORITHMS , *SYSTEMS design , *COMPUTER programming , *SYSTEM analysis - Abstract
The article discusses the construction and maintenance of Big-Data analytics systems as of March 2013, focusing on machine-learning methods and statistical methods while describing the Hazy project for algorithm management. Topics include trained systems, interfaces between algorithms and systems, the GeoDeepDive application for the statistical analysis of geological research papers, and the Bismarck project for infrastructure abstractions. Data sets, probabilistic logic programming, debugging, convex programming, and scalability are mentioned.
- Published
- 2013
- Full Text
- View/download PDF
38. Achievements and Challenges in Software Reverse Engineering.
- Author
-
CANFORA, GERARDO, DI PENTA, MASSIMILIANO, and CERULO, LUIGI
- Subjects
- *
REVERSE engineering software , *COMPUTER software development , *COMPUTER science research , *COMPUTER programming , *SOFTWARE architecture , *COMPUTER software laws - Abstract
The article discusses the use of computer software reverse engineering to comprehend the abilities of a particular software application before engaging in modifications of it. Perspective is given on the need to modify software to adapt it to changing user requirements, changing business models, changing legislation, and technological innovations. A definition of software reverse engineering is excerpted from a paper authored by Chikofsky and Cross. A summary of the article's key insights is listed.
- Published
- 2011
- Full Text
- View/download PDF
39. FastTrack: Efficient and Precise Dynamic Race Detection.
- Author
-
Flanagan, Cormac and Freund, Stephen N.
- Subjects
- *
PARALLEL programs (Computer programs) , *VECTOR analysis , *DATA analysis , *ACQUISITION of data , *COMPUTER programming , *ALGORITHM research - Abstract
Multithreaded programs are notoriously prone to race conditions. Prior work developed precise dynamic race detectors that never report false alarms. However, these checkers employ expensive data structures, such as vector clocks (VCs), that result in significant performance overhead. This paper exploits the insight that the full generality of VCs is not necessary in most cases. That is, we can replace VCs with an adaptive lightweight representation that, for almost all operations of the target program, requires constant space and supports constant-time operations. Experimental results show that the resulting race detection algorithm is over twice as fast as prior precise race detectors, with no loss of precision. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
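The key insight in the abstract above is that the last write to a variable can usually be summarized by a single epoch (clock, thread) instead of a full vector clock. The toy checker below shows only a write-write check under that representation; it is a hedged illustration, not the full FastTrack algorithm, and all names are made up.

```python
# A hedged sketch: represent the last write as one epoch and check it against the
# current thread's vector clock in O(1).
class ThreadState:
    def __init__(self, tid, num_threads):
        self.tid = tid
        self.vc = [0] * num_threads        # this thread's vector clock
        self.vc[tid] = 1

class VarState:
    def __init__(self):
        self.write_epoch = None            # (clock, thread) of the last write, or None

def on_write(var, thread):
    if var.write_epoch is not None:
        clock, writer = var.write_epoch
        # The previous write happened-before this one iff its epoch is covered
        # by the current thread's vector clock.
        if clock > thread.vc[writer]:
            print(f"race: write by thread {writer} unordered with write by thread {thread.tid}")
    var.write_epoch = (thread.vc[thread.tid], thread.tid)

t0, t1 = ThreadState(0, 2), ThreadState(1, 2)
x = VarState()
on_write(x, t0)    # first write
on_write(x, t1)    # no synchronization with t0's write: reported as a race
```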
40. x86-TSO: A Rigorous and Usable Programmer's Model for x86 Multiprocessors.
- Author
-
Sewell, Peter, Sarkar, Susmit, Owens, Scott, Nardelli, Francesco Zappa, and Myreen, Magnus O.
- Subjects
- *
MULTIPROCESSORS , *COMPUTER software correctness , *COMPUTER programming , *DISTRIBUTED shared memory , *COMPUTER multitasking , *COMPUTER science - Abstract
Exploiting the multiprocessors that have recently become ubiquitous requires high-performance and reliable concurrent systems code, for concurrent data structures, operating system kernels, synchronization libraries, compilers, and so on. However, concurrent programming, which is always challenging, is made much more so by two problems. First, real multiprocessors typically do not provide the sequentially consistent memory that is assumed by most work on semantics and verification. Instead, they have relaxed memory models, varying in subtle ways between processor families, in which different hardware threads may have only loosely consistent views of a shared memory. Second, the public vendor architectures, supposedly specifying what programmers can rely on, are often in ambiguous informal prose (a particularly poor medium for loose specifications), leading to widespread confusion. In this paper we focus on x86 processors. We review several recent Intel and AMD specifications, showing that all contain serious ambiguities, some are arguably too weak to program above, and some are simply unsound with respect to actual hardware. We present a new x86-TSO programmer's model that, to the best of our knowledge, suffers from none of these problems. It is mathematically precise (rigorously defined in HOL4) but can be presented as an intuitive abstract machine which should be widely accessible to working programmers. We illustrate how this can be used to reason about the correctness of a Linux spinlock implementation and describe a general theory of data-race freedom for x86-TSO. This should put x86 multiprocessor system building on a more solid foundation; it should also provide a basis for future work on verification of such systems. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
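The usual abstract-machine intuition behind a TSO model is a per-thread FIFO store buffer: a thread's writes sit in its own buffer (and are forwarded to its own later reads) before draining to shared memory. The sketch below is a toy version of that machine, just enough to reproduce the classic store-buffering outcome in which both threads read 0; it is not the paper's HOL4 model.

```python
# A hedged sketch of a store-buffer abstract machine in the TSO style.
from collections import deque

class TSOMachine:
    def __init__(self):
        self.memory = {}
        self.buffers = {}                          # thread id -> FIFO of (addr, value)

    def write(self, tid, addr, value):
        self.buffers.setdefault(tid, deque()).append((addr, value))

    def read(self, tid, addr):
        for a, v in reversed(self.buffers.get(tid, deque())):
            if a == addr:                          # forward from own store buffer
                return v
        return self.memory.get(addr, 0)

    def flush(self, tid):
        for addr, value in self.buffers.get(tid, deque()):
            self.memory[addr] = value
        self.buffers[tid] = deque()

m = TSOMachine()
m.write(0, "x", 1); m.write(1, "y", 1)             # both writes still buffered
r0, r1 = m.read(0, "y"), m.read(1, "x")
print(r0, r1)                                       # 0 0 -- impossible under sequential consistency
m.flush(0); m.flush(1)
```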
41. Declarative Networking.
- Author
-
Loo, Boon Thau, Condie, Tyson, Garofalakis, Minos, Gay, David E., Hellerstein, Joseph M., Maniatis, Petros, Ramakrishnan, Raghu, Roscoe, Timothy, and Stoica, Ion
- Subjects
- *
DECLARATIVE programming languages , *DECLARATIVE programming , *COMPUTER programming , *COMPUTER network protocols , *COMPUTER networks , *SEMANTIC network analysis - Abstract
Declarative Networking is a programming methodology that enables developers to concisely specify network protocols and services, which are directly compiled to a dataflow framework that executes the specifications. This paper provides an introduction to basic issues in declarative networking, including language design, optimization, and dataflow execution. We present the intuition behind declarative programming of networks, including roots in Datalog, extensions for networked environments, and the semantics of long-running queries over network state. We focus on a sublanguage we call Network Datalog (NDlog), including execution strategies that provide crisp eventual consistency semantics with significant flexibility in execution. We also describe a more general language called Overlog, which makes some compromises between expressive richness and semantic guarantees. We provide an overview of declarative network protocols, with a focus on routing protocols and overlay networks. Finally, we highlight related work in declarative networking, and new declarative approaches to related problems. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
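The Datalog roots mentioned in the abstract are easiest to see on the classic two-rule reachability program. The sketch below evaluates it to a fixpoint over a set of link facts with a naive centralized iteration; NDlog's contribution is distributing and incrementally maintaining rules like these across a network, which this illustration does not attempt.

```python
# A hedged sketch of naive fixpoint evaluation of the Datalog reachability program:
#   reachable(X, Y) :- link(X, Y).
#   reachable(X, Y) :- link(X, Z), reachable(Z, Y).
def reachability(links):
    reachable = set(links)
    changed = True
    while changed:
        changed = False
        for (x, z) in links:
            for (z2, y) in list(reachable):
                if z == z2 and (x, y) not in reachable:
                    reachable.add((x, y))
                    changed = True
    return reachable

links = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(reachability(links)))   # includes ("a", "c"), ("a", "d"), ("b", "d")
```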
42. Composable Memory Transactions.
- Author
-
Harris, Tim, Marlow, Simon, Jones, Simon Peyton, and Herlihy, Maurice
- Subjects
- *
PARALLEL programs (Computer programs) , *COMPUTER memory management , *THREADS (Computer programs) , *COMPUTER programming , *PARALLEL algorithms , *SOFTWARE engineering - Abstract
Writing concurrent programs is notoriously difficult and is of increasing practical importance. A particular source of concern is that even correctly implemented concurrency abstractions cannot be composed together to form larger abstractions. In this paper we present a concurrency model, based on transactional memory, that offers far richer composition. All the usual benefits of transactional memory are present (e.g., freedom from low-level deadlock), but in addition we describe modular forms of blocking and choice that were inaccessible in earlier work. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
43. Making Learning a Part of Life.
- Author
-
Eden, Hal, Eisenberg, Mike, Fischer, Gerhard, and Repenning, Alexander
- Subjects
- *
INSTRUCTIONAL systems , *LEARNING , *COMPUTER software development , *COMPUTER programming , *PAPER sculpture , *THREE-dimensional imaging - Abstract
The article focuses on tools to build domain-oriented design environments, such as Agentsheets, as well as specific design environments, such as HyperGami, developed by the Center for Lifelong Learning and Design. Agentsheets and HyperGami have been used in different settings ranging from elementary school to industry. Agentsheets is a programming substrate that has been used in the past to create a large number of domain-oriented design environments including simulation, visual programming, and game environments. Allowing users to create a simulation game rather than just playing it, represents the constructionist aspect of Agentsheets. The instructionist aspect of Agentsheets is captured in its domain-orientation and support for the creation of critiquing components. HyperGami is a design environment for the domain of "tangible solid geometry" that allows the creation of paper sculptures. It permits users to create and view 3D objects on the screen. One of the key features of HyperGami is that it does not limit users to "standard" polyhedral models, but allows the design and creation of an endless range of customized 3D shapes.
- Published
- 1996
- Full Text
- View/download PDF
44. Program Verification: Public Image And Private Reality.
- Author
-
Dobson, John and Randell, Brian
- Subjects
- *
COMPUTER programming , *SCIENTIFIC literature , *TESTING , *COMPUTER software - Abstract
Presents the authors' views on a paper entitled "Program Verification: The Very Idea," by James Fetzer which was published in the September 1988 issue of the journal "Communications of the ACM." Issue of accurate reporting of science and accurate representation of research; Rage caused by the paper in the program verification community; Failure of Fetzer to give a clear picture of program verification.
- Published
- 1989
45. PROFESSIONAL ACTIVITIES.
- Subjects
- *
CONFERENCES & conventions , *COMPUTER industry , *COMPUTER software development , *COMPUTER programming , *SYSTEMS design , *TECHNOLOGICAL innovations - Abstract
This article presents information about several events related to the computer industry. The ACM (Association for Computing machinery) SIGSOFT/SIGPLAN Symposium on Practical Software Development Environments is scheduled for April 23-25, 1984, at Carnegie-Mellon University in Pittsburgh, Pennsylvania. The National Bureau of Standards and Office of Naval Research are co-sponsoring the meeting. Papers are invited, with particular emphasis on new implementation schemes for interactive program design, construction, debugging, and testing. The Fourth International Conference on Information Systems, which will be organized during December 15-17, 1983, in Houston, Texas, is soliciting papers describing research in such areas as the theory of IS effectiveness; techniques for systems design; societal impact; management of technological innovation, transfer, and diffusion; office systems and the professional workstation; information centers and other approaches to the user interface; and cross-cultural comparisons of national and multi-national systems.
- Published
- 1983
46. Abstracts from Other ACM Publications.
- Subjects
- *
COMPUTER programming , *HASHING , *ELECTRONIC file management , *ERRORS , *MARKOV processes , *MATRICES (Mathematics) - Abstract
This article presents several abstracts related to computer programming. The first one is "Analysis of the Search Performance of Coalesced Hashing," by Jeffrey Scott Vitter. An analysis is presented of the coalesced hashing method, in which a portion of memory (called the address region) serves as the range of the hash function while the rest of memory (called the cellar) is devoted solely to storing records that collide when inserted. The main result of this paper expresses the average search times as a function of the number of records and the cellar size, solving a long-standing open problem. Another abstract is "Computable Error Bounds for Aggregated Markov Chains," by G.W. Stewart. In this paper, a method is described for computing the steady-state probability vector of a nearly completely decomposable Markov chain. It does not require the determination of a completely decomposable stochastic approximation to the transition matrix, and hence it is applicable to non stochastic matrices. An error analysis of the procedure which results in effectively computable error bounds is given.
- Published
- 1983
47. On the Inevitable Intertwining of Specification and Implementation.
- Author
-
Swartout, William and Balzer, Robert
- Subjects
- *
TECHNICAL specifications , *COMPUTER software development , *COMPUTER software , *COMPUTER programming , *COMPUTER systems - Abstract
Contrary to recent claims that specification should be completed before implementation begins, this paper presents two arguments that the two processes must be intertwined. First, limitations of available implementation technology may force a specification change. For example, deciding to implement a stack as an array (rather than as a linked list) may impose a fixed limit on the depth of the stack. Second, implementation choices may suggest augmentations to the original specification. For example, deciding to use an existing pattern-match routine to implement the search command in an editor may lead to incorporating some of the routine's features into the specification, such as the ability to include wild cards in the search key. This paper elaborates these points and illustrates how they arise in the specification of a controller for a package router. [ABSTRACT FROM AUTHOR]
- Published
- 1982
- Full Text
- View/download PDF
48. Variable-Precision Exponentiation.
- Author
-
Richman, P. L. and Timlake, W. P.
- Subjects
- *
COMPUTER algorithms , *COMPUTER programming , *ARTIFICIAL intelligence , *PROGRAMMING languages , *ELECTRONIC data processing , *COMPUTER science - Abstract
A previous paper presented an efficient algorithm, called the Recomputation Algorithm, for evaluating a rational expression to within any desired tolerance on a computer which performs variable-precision arithmetic operations. The Recomputation Algorithm can be applied to expressions involving any variable-precision operations having O(10⁻ᵖ + Σᵢ ∣εᵢ∣) error bounds, where p denotes the operation's precision and εᵢ denotes the error in the operation's ith argument. This paper presents an efficient variable-precision exponential operation with an error bound of the above order. Other operations, such as log, sin, and cos, which have simple series expansions, can be handled similarly. [ABSTRACT FROM AUTHOR]
- Published
- 1973
- Full Text
- View/download PDF
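As a rough illustration of variable-precision evaluation of an operation with a simple series expansion, the sketch below sums the Taylor series of exp in Python's decimal arithmetic, carrying a few guard digits and rounding to the requested precision at the end. It is a generic sketch, not the paper's Recomputation Algorithm or its error analysis.

```python
# A hedged sketch of variable-precision exponentiation via the Taylor series of exp.
from decimal import Decimal, getcontext

def var_precision_exp(x, digits):
    getcontext().prec = digits + 5                 # guard digits for rounding error
    x = Decimal(x)
    term, total, n = Decimal(1), Decimal(1), 1
    while abs(term) > Decimal(10) ** -(digits + 2):
        term *= x / n                              # next series term x^n / n!
        total += term
        n += 1
    getcontext().prec = digits
    return +total                                  # unary + rounds to the final precision

print(var_precision_exp("1", 30))                  # e to ~30 significant digits
```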
49. CHAPTERS.
- Subjects
- *
ELECTRONIC data processing , *COMPUTER training , *COMPUTER science , *COMPUTER programming - Abstract
The article presents information on various chapters of the Association for Computing Machinery (ACM) across the U.S. The Greater Rio Grande Chapter of ACM has recently extracted from its files a list of the titles of all technical papers presented at Chapter meetings since its inception in 1957. Using a program developed by D.K. Robbins of Sandia Corp., a KWIC-type listing of permuted titles of these papers has been made. This listing has been distributed to all members of the Chapter. Topics and speakers at ACM Chapters across the U.S. indicate the current trends of interest in computer science and data processing. The ACM Tidewater Chapter is sponsoring this spring a professional development course on "Real-Time Computing." The course is given by the Chapter's professional development chairman, Cecil Frost, who is applications staff specialist for Control Data Corp. At its April 21, 1966 meeting, the Westchester-Fairfield Chapter heard William Orchard-Hays speak on "Linear Programming of Computational Techniques."
- Published
- 1966
50. Denotational semantics and its algebraic derivation for an event-driven system-level language.
- Author
-
Zhu, H., He, Jifeng, Qin, Shengchao, and Brooke, Phillip
- Subjects
- *
DENOTATIONAL semantics , *EVENT driven systems (Computer science) , *PROGRAMMING languages , *COMPUTER programming , *TIME delay systems - Abstract
As a system-level modelling language, SystemC possesses several novel features such as delayed notifications, notification cancelling, notification overriding and delta-cycle. It also has real-time and shared-variable features. Previously we have studied an operational semantics for SystemC (Peng et al., "An operational semantics of an event-driven system-level simulator," pp. 190-200) and bisimulation has been introduced based on some aspects of reasonable abstractions. The denotational method is another approach to studying the semantics of a programming language. It provides the mathematical meaning to programs and can predict the behaviour of programs. Due to the novel features of SystemC, it is challenging to study the denotational semantics for SystemC. In this paper, we apply Unifying Theories of Programming (abbreviated as UTP; Hoare and He, "Unifying Theories of Programming") in exploring the denotational semantics. Two trace variables are introduced, one to record the state behaviours and another to record the event behaviours. The timed model is formalized in a three-dimensional structure. A set of algebraic laws is explored, which can be proved via the presented denotational semantics. In this paper, we also consider the linking between denotational semantics and algebraic semantics. The linking is obtained by deriving the denotational semantics from algebraic semantics for SystemC. A complete set of parallel expansion laws is explored, where the location status of an instantaneous action is studied. The location status indicates which exact parallel component an instantaneous action is due to. We introduce the concept of head normal form for each program and every program is expressed in the form of guarded choice with location status. Based on this, the derivation strategy for deriving denotational semantics from algebraic semantics is provided. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF