35 results on '"Compiling (Electronic computers) -- Analysis"'
Search Results
2. XARK: an Extensible framework for automatic recognition of computational kernels
- Author
-
Arenaz, Manuel, Tourino, Juan, and Doallo, Ramon
- Subjects
Compiler/decompiler ,Compilers -- Usage ,Compilers -- Analysis ,Compiling (Electronic computers) -- Usage ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Usage ,Coroutines -- Analysis ,Kernel functions -- Analysis ,Voice recognition -- Analysis - Abstract
The recognition of program constructs that are frequently used by software developers is a powerful mechanism for optimizing and parallelizing compilers to improve the performance of the object code. The development of techniques for automatic recognition of computational kernels such as inductions, reductions and array recurrences has been an intensive research area in the scope of compiler technology during the 1990s. This article presents a new compiler framework that, unlike previous techniques that focus on specific and isolated kernels, recognizes a comprehensive collection of computational kernels that appear frequently in full-scale real applications. The XARK compiler operates on top of the Gated Single Assignment (GSA) form of a high-level intermediate representation (IR) of the source code. Recognition is carried out through a demand-driven analysis of this high-level IR at two different levels. First, the dependences between the statements that compose the strongly connected components (SCCs) of the data-dependence graph of the GSA form are analyzed. As a result of this intra-SCC analysis, the computational kernels corresponding to the execution of the statements of the SCCs are recognized. Second, the dependences between statements of different SCCs are examined in order to recognize more complex kernels that result from combining simpler kernels in the same code. Overall, the XARK compiler builds a hierarchical representation of the source code as kernels and dependence relationships between those kernels. This article describes in detail the collection of computational kernels recognized by the XARK compiler. In addition, the internals of the recognition algorithms are presented. The design of the algorithms makes it possible to extend the recognition capabilities of XARK to cope with new kernels, and provides an advanced symbolic analysis framework to run other compiler techniques on demand.
Finally, extensive experiments showing the effectiveness of XARK for a collection of benchmarks from different application domains are presented. In particular, the SparsKit-II library for the manipulation of sparse matrices, the Perfect benchmarks, the SPEC CPU2000 collection and the PLTMG package for solving elliptic partial differential equations are analyzed in detail. Categories and Subject Descriptors: D.3.4 [Programming Languages]: Processors--Compilers, optimization General Terms: Algorithms, Languages, Experimentation Additional Key Words and Phrases: Automatic kernel recognition, demand-driven algorithms, use-def chains, symbolic analysis, gated single assignment, strongly connected component DOI = 10.1145/1391956.1391959 http://doi.acm.org/10.1145/1391956.1391959
- Published
- 2008
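The intra-SCC recognition the abstract describes can be illustrated with a toy classifier (a hedged sketch, not XARK's actual GSA-based algorithm): a statement whose left-hand side also feeds its right-hand side under + or * forms a self-dependent strongly connected component in the def-use graph, which is the signature of an induction or reduction kernel.

```python
import ast

# Toy classifier (illustrative only; XARK operates on GSA form, not on
# source text): an assignment whose target also appears as an operand on
# its right-hand side is self-dependent -- the def-use signature of an
# induction or reduction kernel.
def classify_statement(stmt_src):
    """Return 'reduction', 'induction', or 'other' for one assignment."""
    stmt = ast.parse(stmt_src).body[0]
    if not isinstance(stmt, ast.Assign) or not isinstance(stmt.targets[0], ast.Name):
        return "other"
    target, rhs = stmt.targets[0].id, stmt.value
    if isinstance(rhs, ast.BinOp) and isinstance(rhs.op, (ast.Add, ast.Mult)):
        operands = [rhs.left, rhs.right]
        if any(isinstance(o, ast.Name) and o.id == target for o in operands):
            others = [o for o in operands
                      if not (isinstance(o, ast.Name) and o.id == target)]
            # constant step -> induction; otherwise treat as a reduction
            if others and isinstance(others[0], ast.Constant):
                return "induction"
            return "reduction"
    return "other"
```

For example, `s = s + a[i]` classifies as a reduction while `i = i + 1` classifies as an induction; the real framework additionally analyzes inter-SCC dependences to compose such kernels into more complex ones.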
3. Domain specific language implementation via compile-time meta-programming
- Author
-
Tratt, Laurence
- Subjects
Compiler/decompiler ,Programming language ,Compilers -- Analysis ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Analysis ,Programming languages -- Analysis - Abstract
Domain specific languages (DSLs) are mini-languages that are increasingly seen as being a valuable tool for software developers and non-developers alike. DSLs must currently be created in an ad-hoc fashion, often leading to high development costs and implementations of variable quality. In this article, I show how expressive DSLs can be hygienically embedded in the Converge programming language using its compile-time meta-programming facility, the concept of DSL blocks, and specialised error reporting techniques. By making use of pre-existing facilities, and following a simple methodology, DSL implementation costs can be significantly reduced whilst leading to higher quality DSL implementations. Categories and Subject Descriptors: D.3.4 [Software Engineering]: Processors--Translator writing systems and compiler generators General Terms: Languages Additional Key Words and Phrases: Syntax extension, compile-time meta-programming, domain specific languages DOI = 10.1145/1391956.1391958 http://doi.acm.org/10.1145/1391956.1391958
- Published
- 2008
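As a loose analogy (in Python, not Converge, and with an invented mini-language), a "DSL block" can be translated into host-language source once and compiled ahead of use, which is the essence of the compile-time meta-programming approach the abstract describes.

```python
# Hypothetical 'name = expr' DSL, translated to Python source and compiled
# a single time -- a stand-in for the idea of compile-time meta-programming,
# not Converge's actual facility or syntax.
def compile_dsl(block):
    lines = [ln.strip() for ln in block.strip().splitlines() if ln.strip()]
    body = [f"    {name.strip()} = {expr.strip()}"
            for name, expr in (ln.split("=", 1) for ln in lines)]
    src = "def _generated(x):\n" + "\n".join(body) + "\n    return locals()\n"
    namespace = {}
    exec(compile(src, "<dsl>", "exec"), namespace)  # one-time translation step
    return namespace["_generated"]

rules = compile_dsl("""
    double = x * 2
    shifted = double + 1
""")
```

Calling `rules(10)` yields a mapping in which `shifted` is 21; malformed DSL text fails at translation time rather than at every use, mirroring the specialised error reporting the article discusses.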
4. A compiler for variational forms
- Author
-
Kirby, Robert C. and Logg, Anders
- Subjects
Algorithm ,Compiler/decompiler ,Algorithms -- Analysis ,Variational principles -- Analysis ,Compilers -- Analysis ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Analysis ,Finite element method -- Usage ,Automation -- Usage ,Mechanization -- Usage - Abstract
As a key step towards a complete automation of the finite element method, we present a new algorithm for automatic and efficient evaluation of multilinear variational forms. The algorithm has been implemented in the form of a compiler, the FEniCS Form Compiler (FFC). We present benchmark results for a series of standard variational forms, including the incompressible Navier-Stokes equations and linear elasticity. The speedup compared to the standard quadrature-based approach is impressive; in some cases the speedup is as large as a factor of 1000. Categories and Subject Descriptors: G.4 [Mathematics of Computing]: Mathematical Software--Algorithm design and analysis; efficiency; G.1.8 [Numerical Analysis]: Partial Differential Equations--Finite element methods General Terms: Algorithms, Performance Additional Key Words and Phrases: Variational form, compiler, finite element, automation
- Published
- 2006
5. Exploiting locality for irregular scientific codes
- Author
-
Hwansoo Han and Chau-Wen Tseng
- Subjects
Compilers -- Analysis ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Analysis ,Cache memory -- Management ,Disk caching -- Management ,Electronic data processing -- Methods ,Compiler/decompiler ,Cache memory ,Company business management ,Business ,Computers ,Electronics ,Electronics and electrical industries - Abstract
Two new locality-improving techniques, data reordering (GPART) and computation reordering (Z-SORT), are used for irregular scientific codes. Experiments on irregular codes for a variety of meshes show locality optimization techniques are effective for both sequential and parallelized codes, improving performance by 60-87 percent.
- Published
- 2006
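The flavor of computation reordering can be sketched in miniature (a simplified illustration with made-up helpers; the real Z-SORT sorts along a space-filling curve, and GPART reorders the data itself via graph partitioning): visiting an irregular edge list in sorted order shrinks the stride between consecutive memory accesses.

```python
# Simplified illustration of computation reordering for irregular codes:
# visiting edges sorted by their first endpoint makes consecutive accesses
# land near each other in memory.
def reorder_edges(edges):
    return sorted(edges)

def total_stride(edges):
    """Sum of distances between consecutive first-endpoint accesses."""
    firsts = [a for a, _ in edges]
    return sum(abs(b - a) for a, b in zip(firsts, firsts[1:]))

edges = [(7, 2), (0, 5), (3, 9), (1, 0), (8, 4)]
```

For this toy edge list, `total_stride` drops from 19 before reordering to 8 after, the kind of access-pattern compaction that produces the cache benefits the abstract measures.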
6. MultiJava: design rationale, compiler implementation, and applications
- Author
-
Clifton, Curtis, Millstein, Todd, Leavens, Gary T., and Chambers, Craig
- Subjects
Java ,Modularity ,Software architecture ,Compiler/decompiler ,Java (Computer program language) -- Usage ,Java (Computer program language) -- Analysis ,Compilers -- Analysis ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Analysis - Abstract
MultiJava is a conservative extension of the Java programming language that adds symmetric multiple dispatch and open classes. Among other benefits, multiple dispatch provides a solution to the binary method problem. Open classes provide a solution to the extensibility problem of object-oriented programming languages, allowing the modular addition of both new types and new operations to an existing type hierarchy. This article illustrates and motivates the design of MultiJava and describes its modular static typechecking and modular compilation strategies. Although MultiJava extends Java, the key ideas of the language design are applicable to other object-oriented languages, such as C# and C++, and even, with some modifications, to functional languages such as ML. This article also discusses the variety of application domains in which MultiJava has been successfully used by others, including pervasive computing, graphical user interfaces, and compilers. MultiJava allows users to express desired programming idioms in a way that is declarative and supports static typechecking, in contrast to the tedious and type-unsafe workarounds required in Java. MultiJava also provides opportunities for new kinds of extensibility that are not easily available in Java. 
Categories and Subject Descriptors: D.1.5 [Programming Techniques]: Object-oriented Programming; D.3.2 [Programming Languages]: Language Classifications--Object-oriented languages; D.3.3 [Programming Languages]: Language Constructs and Features--Abstract data types, classes and objects, control structures, inheritance, modules, packages, patterns, procedures, functions, and subroutines; D.3.4 [Programming Languages]: Processors--Compilers; D.3.m [Programming Languages]: Miscellaneous General Terms: Languages, Design Additional Key Words and Phrases: Open classes, open objects, extensible classes, extensible external methods, external methods, multimethods, method families, generic functions, object-oriented programming languages, single dispatch, multiple dispatch, encapsulation, modularity, static typechecking, subtyping, inheritance, Java language, MultiJava language, separate compilation, expression problem, binary method problem, augmenting method problem
- Published
- 2006
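The symmetric multiple dispatch at the heart of the design can be sketched in a few lines (a hedged approximation; it ignores MultiJava's static typechecking and most-specific-method selection): the implementation is chosen from the dynamic types of all arguments, not just the receiver, which is what resolves the binary method problem.

```python
class MultiMethod:
    """Pick an implementation from the dynamic types of *all* arguments.
    Simplified: exact type match first, then the first registration whose
    parameter types accept the arguments (no most-specific ordering)."""
    def __init__(self):
        self.impls = {}

    def register(self, *types):
        def deco(fn):
            self.impls[types] = fn
            return fn
        return deco

    def __call__(self, *args):
        key = tuple(type(a) for a in args)
        if key in self.impls:
            return self.impls[key](*args)
        for types, fn in self.impls.items():
            if len(types) == len(args) and all(
                    isinstance(a, t) for a, t in zip(args, types)):
                return fn(*args)
        raise TypeError("no applicable method")

class Shape: pass
class Circle(Shape): pass
class Rect(Shape): pass

intersect = MultiMethod()

@intersect.register(Circle, Rect)
def _(a, b): return "circle-rect"

@intersect.register(Shape, Shape)
def _(a, b): return "generic"
```

With single (receiver-only) dispatch, `intersect(some_shape, other_shape)` could not select on the second argument's runtime type; here `intersect(Circle(), Rect())` picks the specialized case while other type pairs fall back to the generic one.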
7. Edo: exception-directed optimization in Java
- Author
-
Ogasawara, Takeshi, Komatsu, Hideaki, and Nakatani, Toshio
- Subjects
Java ,Compiler/decompiler ,Java (Computer program language) -- Usage ,Java (Computer program language) -- Analysis ,Compilers -- Usage ,Compilers -- Analysis ,Compiling (Electronic computers) -- Usage ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Usage ,Coroutines -- Analysis - Abstract
Optimizing exception handling is critical for programs that frequently throw exceptions. We observed that there are many such exception-intensive programs written in Java. There are two commonly used exception handling techniques, stack unwinding and stack cutting. Stack unwinding optimizes the normal path by leaving the exception handling path unoptimized, while stack cutting optimizes the exception handling path by adding extra work to the normal path. However, there has been no single exception handling technique to optimize the exception handling path without incurring any overhead to the normal path. We propose a new technique called Exception-Directed Optimization (Edo) that optimizes exception-intensive programs without slowing down exception-minimal programs. It is a feedback-directed dynamic optimization consisting of three steps: exception path profiling, exception path inlining, and throw elimination. Exception path profiling attempts to detect hot exception paths. Exception path inlining embeds every hot exception path into the corresponding catching method. Throw elimination replaces a throw with a branch to the corresponding handler. We implemented Edo in IBM's production Just-in-Time compiler and conducted several experiments. In summary, it improved the performance of exception-intensive programs by up to 18.3% without decreasing the performance of exception-minimal programs for SPECjvm98. We also found an opportunity for performance improvement using Edo in the startup of a Java application server. Categories and Subject Descriptors: D.3 [Programming Languages]; D.3.4 [Programming Languages]: Processors--Incremental compilers, optimization, run-time environment General Terms: Performance, Experimentation, Languages Additional Key Words and Phrases: Feedback-directed dynamic optimization, dynamic compilers, exception handling, inlining
- Published
- 2006
8. A region-based compilation technique for dynamic compilers
- Author
-
Suganuma, Toshio, Yasue, Toshiaki, and Nakatani, Toshio
- Subjects
Compiler/decompiler ,Compilers -- Methods ,Compilers -- Analysis ,Compiling (Electronic computers) -- Methods ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Methods ,Coroutines -- Analysis - Abstract
Method inlining and data flow analysis are two major optimization components for effective program transformations, but they often suffer from the existence of rarely or never executed code contained in the target method. One major problem lies in the assumption that the compilation unit is partitioned at method boundaries. This article describes the design and implementation of a region-based compilation technique in our dynamic optimization framework, in which the compiled regions are selected as code portions without rarely executed code. The key parts of this technique are the region selection, partial inlining, and region exit handling. For region selection, we employ both static heuristics and dynamic profiles to identify and eliminate rare sections of code. The region selection process and method inlining decisions are interwoven, so that method inlining exposes other targets for region selection, while the region selection in the inline target conserves the inlining budget, allowing more method inlining to be performed. The inlining process can be performed for parts of a method, not just for the entire body of the method. When the program attempts to exit from a region boundary, we trigger recompilation and then use on-stack replacement to continue the execution from the corresponding entry point in the recompiled code. We have implemented these techniques in our Java JIT compiler, and conducted a comprehensive evaluation. The experimental results show that our region-based compilation approach achieves approximately 4% performance improvement on average, while reducing the compilation overhead by 10% to 30%, in comparison to the traditional method-based compilation techniques. 
Categories and Subject Descriptors: D.3.4 [Programming Languages]: Processors--Incremental compilers; optimization; run-time environments General Terms: Performance, Design, Experimentation Additional Key Words and Phrases: Region-based compilation, dynamic compilation, partial inlining, on-stack replacement, JIT compiler
- Published
- 2006
9. Automatic detection and correction of programming faults for software applications
- Author
-
Deeprasertkul, Prattana, Bhattarakosol, Pattarasinee, and O'Brien, Fergus
- Subjects
Compiler/decompiler ,Programming project management ,Compilers -- Analysis ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Analysis ,Programming management (Computers) -- Methods - Published
- 2005
10. Supporting demanding hard-real-time systems with STI
- Author
-
Welch, Benjamin J., Kanaujia, Shobhit O., Seetharam, Adarsh, Thirumalai, Deepaksrivats, and Dean, Alexander G.
- Subjects
Real-time system ,Embedded system ,System on a chip ,Compiler/decompiler ,Real-time control -- Design and construction ,Real-time systems -- Design and construction ,Embedded systems -- Design and construction ,Compilers -- Analysis ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Analysis - Published
- 2005
11. A general compiler framework for speculative multithreaded processors
- Author
-
Bhowmik, Anasua and Franklin, Manoj
- Subjects
Parallel processing -- Analysis ,Coroutines -- Analysis ,Compiling (Electronic computers) -- Analysis ,Compilers -- Analysis ,Multithreading -- Analysis ,Parallel processing ,Compiler/decompiler ,Business ,Computers ,Electronics ,Electronics and electrical industries - Abstract
A compiler framework for partitioning a sequential program into multiple threads for parallel execution in a speculative multithreading (SpMT) environment is presented. A simulation-based evaluation of the generated threads shows that the use of nonspeculative threads and nonloop speculative threads provides a significant increase in speedup for nonnumeric programs.
- Published
- 2004
12. Case study: An infrastructure for C/ATLAS environments with object-oriented design and XML representation
- Author
-
Chen, Cheng-Wei and Lee, Jenq Kuen
- Subjects
Compiler/decompiler ,XML ,Software development/engineering ,Compilers -- Analysis ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Analysis ,XML (Document markup language) -- Evaluation ,Software engineering -- Case studies - Published
- 2004
13. An experimental evaluation of data dependence analysis techniques
- Author
-
Psarris, Kleanthis and Kyriakopoulos, Konstantinos
- Subjects
Compilers -- Analysis ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Analysis ,Distributed processing (Computers) -- Analysis ,Compiler/decompiler ,Distributed processing (Computers) ,Business ,Computers ,Electronics ,Electronics and electrical industries - Abstract
Optimizing compilers rely upon program analysis techniques to detect data dependences between program statements. Data dependence information captures the essential ordering constraints of the statements in a program that need to be preserved in order to produce valid optimized and parallel code.
- Published
- 2004
14. Monotonic reductions, representative equivalence, and compilation of intractable problems
- Author
-
Liberatore, Paolo
- Subjects
Compiler/decompiler ,Algorithm ,Artificial intelligence ,Coroutines -- Analysis ,Compiling (Electronic computers) -- Analysis ,Compilers -- Analysis ,Algorithms -- Analysis ,Artificial intelligence -- Analysis - Published
- 2001
15. Type elaboration and subtype completion for Java bytecode
- Author
-
Knoblock, Todd B. and Rehof, Jakob
- Subjects
Programming language ,Compiler/decompiler ,Software quality ,Java ,Software -- Analysis ,Programming languages -- Analysis ,Coroutines -- Analysis ,Compilers -- Analysis ,Compiling (Electronic computers) -- Analysis ,Java (Computer program language) -- Analysis - Abstract
Java source code is strongly typed, but the translation from Java source to bytecode omits much of the type information originally contained within methods. Type elaboration is a technique for reconstructing strongly typed programs from incompletely typed bytecode by inferring types for local variables. There are situations where, technically, there are not enough types in the original type hierarchy to type a bytecode program. Subtype completion is a technique for adding necessary types to an arbitrary type hierarchy to make type elaboration possible for all verifiable Java bytecode. Type elaboration with subtype completion has been implemented as part of the Marmot Java compiler. Categories and Subject Descriptors: D.3.4 [Programming Languages]: Processors--Compilers; F.3.3 [Logics and Meanings of Programs]: Studies of Program Constructs--Type Structure General Terms: Languages, Theory Additional Key Words and Phrases: Java compiler, lattice completion, object-oriented type systems, type-directed compilation, typed intermediate language, type inference, type reconstruction
- Published
- 2001
16. Containers on the Parallelization of General-Purpose Java Programs
- Author
-
Wu, Peng and Padua, David
- Subjects
Compiler/decompiler ,Java ,Compilers -- Analysis ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Analysis ,Parallel programming (Computer science) -- Analysis ,Java (Computer program language) -- Usage - Published
- 2000
17. A Constant Propagation Algorithm for Explicitly Parallel Programs
- Subjects
Algorithm ,Compiler/decompiler ,Algorithms -- Usage ,Compilers -- Analysis ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Analysis ,Parallel programming (Computer science) -- Analysis ,Graph theory -- Analysis - Abstract
Keywords: compiler optimization; constant propagation; control flow graph; explicit parallelism; intermediate representation; static single assignment form. Abstract: In this paper, we present a constant propagation algorithm for explicitly parallel programs, which we call the Concurrent Sparse Conditional Constant propagation algorithm. This algorithm is an extension of the Sparse Conditional Constant propagation algorithm. Without considering the interaction between threads, classical optimizations lead to an incorrect program transformation for parallel programs. To make analyzing parallel programs possible, a new intermediate representation is needed. We introduce the Concurrent Static Single Assignment (CSSA) form to represent explicitly parallel programs with interleaving semantics and synchronization. The only parallel construct considered in this paper is cobegin/coend. A new confluence function, the [pi]-assignment, which summarizes the information of interleaving statements between threads, is introduced. The Concurrent Control Flow Graph, which contains information about conflicting statements, control flow, and synchronization, is used as an underlying representation for the CSSA form.
- Published
- 1998
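The sequential lattice that the concurrent algorithm extends can be sketched for straight-line code (a minimal illustration; it omits the conditional part of sparse conditional constant propagation and the [pi]-assignments the paper adds for thread interleaving): each variable is a known constant or NOT_CONST, and expressions are evaluated over that lattice.

```python
import ast

BOTTOM = object()  # lattice bottom: provably not a compile-time constant

def eval_lattice(expr, env):
    if isinstance(expr, ast.Constant):
        return expr.value
    if isinstance(expr, ast.Name):
        return env.get(expr.id, BOTTOM)
    if isinstance(expr, ast.BinOp) and isinstance(expr.op, (ast.Add, ast.Mult)):
        l, r = eval_lattice(expr.left, env), eval_lattice(expr.right, env)
        if BOTTOM in (l, r):
            return BOTTOM        # any non-constant operand poisons the result
        return l + r if isinstance(expr.op, ast.Add) else l * r
    return BOTTOM

def propagate(stmts):
    """Constant-fold a list of simple assignments, left to right."""
    env = {}
    for src in stmts:
        node = ast.parse(src).body[0]
        env[node.targets[0].id] = eval_lattice(node.value, env)
    return env

env = propagate(["a = 2", "b = a + 3", "c = b * x"])  # x is an unknown input
```

Here `b` folds to 5 but `c` stays at bottom because `x` is unknown; the paper's contribution is making this analysis sound when another thread may interleave writes between these statements.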
18. Eliminating Barrier Synchronization for Compiler-Parallelized Codes on Software DSMs
- Subjects
Algorithm ,Compiler/decompiler ,Algorithms -- Usage ,Parallel programming (Computer science) -- Analysis ,Compilers -- Analysis ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Analysis - Abstract
Keywords: barrier elimination; parallelizing compiler; software DSM. Abstract: Software distributed-shared-memory (DSM) systems provide an appealing target for parallelizing compilers due to their flexibility. Previous studies demonstrate such systems can provide performance comparable to message-passing compilers for dense-matrix kernels. However, synchronization and load imbalance are significant sources of overhead. In this paper, we investigate the impact of compilation techniques for eliminating barrier synchronization overhead in software DSMs. Our compile-time barrier elimination algorithm extends previous techniques in three ways: (1) we perform inexpensive communication analysis through local subscript analysis when using chunk iteration partitioning for parallel loops; (2) we exploit delayed updates in lazy-release-consistency DSMs to eliminate barriers guarding only anti-dependences; (3) when possible, we replace barriers with customized nearest-neighbor synchronization. Experiments on an IBM SP-2 indicate these techniques can improve parallel performance by 20% on average and by up to 60% for some applications.
- Published
- 1998
19. Simplifying Control Flow in Compiler-Generated Parallel Code
- Subjects
Algorithm ,Compiler/decompiler ,Algorithms -- Usage ,Compilers -- Analysis ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Analysis ,Parallel programming (Computer science) -- Analysis - Abstract
Keywords: expression range analysis; constraint propagation. Abstract: Optimizing compilers for data-parallel languages such as High Performance Fortran perform a complex sequence of transformations. However, the effects of many transformations are not independent, which makes it challenging to generate high quality code. In particular, some transformations introduce conditional control flow, while others make some conditionals unnecessary by refining program context. Eliminating unnecessary conditional control flow during compilation can reduce code size and remove a source of overhead in the generated code. This paper describes algorithms to compute symbolic constraints on the values of expressions used in control predicates and to use these constraints to identify and remove unnecessary conditional control flow. These algorithms have been implemented in the Rice dHPF compiler and we show that these algorithms are effective in reducing the number of conditionals and the overall size of generated code. Finally, we describe a synergy between control flow simplification and data-parallel code generation based on loop splitting which achieves the effects of more narrow data-parallel compiler optimizations such as vector message pipelining and the use of overlap areas.
- Published
- 1998
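The constraint idea can be shown in miniature (a hedged sketch with invented helpers; the dHPF analysis handles symbolic, not just numeric, bounds): if a known value range for a variable decides a control predicate, the conditional can be deleted and only the live branch emitted.

```python
# Toy range-based predicate evaluation: ranges are inclusive (lo, hi) pairs.
# Returns True/False when the range decides the predicate, None otherwise
# (in which case the conditional must be kept in the generated code).
def decide(var, op, const, ranges):
    lo, hi = ranges[var]
    if op == "<":
        if hi < const:
            return True
        if lo >= const:
            return False
    elif op == ">=":
        if lo >= const:
            return True
        if hi < const:
            return False
    return None
```

With `ranges = {"i": (0, 7)}`, `decide("i", "<", 10, ranges)` is True, so a guard `if i < 10` inside that loop nest is unnecessary; with `{"i": (0, 15)}` the result is None and the conditional stays.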
20. Constraint-based array dependence analysis
- Author
-
Pugh, William and Wonnacott, David
- Subjects
Programming language ,Compiler/decompiler ,Algorithm ,Arrays (Data structures) -- Analysis ,Programming languages -- Analysis ,Coroutines -- Analysis ,Compilers -- Analysis ,Parallel programming (Computer science) -- Analysis ,Compiling (Electronic computers) -- Analysis ,Algorithms -- Analysis - Abstract
Traditional array dependence analysis, which detects potential memory aliasing of array references, is a key analysis technique for automatic parallelization. Recent studies of benchmark codes indicate that limitations of analysis […]
- Published
- 1998
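One classical member of the technique family this study evaluates is the GCD test: a dependence between accesses a[p*i + q] and a[r*j + s] requires an integer solution to p*i - r*j = s - q, which exists only if gcd(p, r) divides s - q. (A sketch of that single test, not of the exact constraint-based analysis Pugh and Wonnacott describe.)

```python
from math import gcd

# GCD dependence test: a[p*i + q] and a[r*j + s] can name the same element
# only if gcd(p, r) divides s - q. It is a necessary condition, not a
# sufficient one: it ignores loop bounds, which exact constraint-based
# (Omega-style) analysis takes into account.
def gcd_test(p, q, r, s):
    return (s - q) % gcd(p, r) == 0
```

For instance, `gcd_test(2, 0, 2, 1)` is False, proving a[2*i] and a[2*j+1] never alias, so the loop can be parallelized; `gcd_test(2, 0, 4, 2)` is True, so a dependence between a[2*i] and a[4*j+2] cannot be ruled out by this test alone.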
21. Toward a complete transformational toolkit for compilers
- Author
-
Bergstra, J.A., Dinesh, T.B., Field, J., and Heering, J.
- Subjects
Programming language ,Code generator ,Compiler/decompiler ,Program generators -- Analysis ,Programming languages -- Analysis ,Coroutines -- Analysis ,Compilers -- Analysis ,Code generators -- Analysis ,Compiling (Electronic computers) -- Analysis - Abstract
PIM is an equational logic designed to function as a 'transformational toolkit' for compilers and other programming tools that analyze and manipulate imperative languages. It has been applied to such […]
- Published
- 1997
22. Finding binary clones with Opstrings & function digests: part III
- Author
-
Schulman, Andrew
- Subjects
Compiler/decompiler ,Source code -- Usage ,Compilers -- Analysis ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Analysis ,Functional equations -- Usage ,Functions -- Usage ,Reverse engineering -- Analysis - Published
- 2005
23. Contributions to the GNU Compiler Collection
- Author
-
Edelsohn, D., Gellerich, W., Hagog, M., Naishlos, D., Namolaru, M., Pasch, E., Penner, H., Weigand, U., and Zaks, A.
- Subjects
Open source software ,Operating system ,64-bit operating system ,32-bit operating system ,Compiler/decompiler ,Public software -- Analysis ,Operating systems -- Analysis ,Compilers -- Analysis ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Analysis - Published
- 2005
24. The QT designer IDE: everything but the compiler
- Author
-
Berton, Dave
- Subjects
Application installation/distribution software ,Application development software ,Expert system development software ,Compiler/decompiler ,Program development software -- Analysis ,Coroutines -- Analysis ,Compiling (Electronic computers) -- Analysis ,Compilers -- Analysis - Published
- 2004
25. Code efficiency & compiler-directed feedback
- Author
-
Brenner, Jackie and Levy, Markus
- Subjects
Algorithm ,Compiler/decompiler ,Algorithms -- Analysis ,Coroutines -- Analysis ,Compiling (Electronic computers) -- Analysis ,Compilers -- Analysis - Published
- 2003
26. Adding exceptions & run time type identification to the Windows CE compiler: part 1: picking up where Microsoft left off
- Author
-
Carles, Dani and Szymanski, Boleslaw K.
- Subjects
Expert system development software ,Application development software ,Application installation/distribution software ,Compiler/decompiler ,Technology application ,Program development software -- Usage ,Program development software -- Analysis ,Program development software -- Technology application ,Compilers -- Usage ,Compilers -- Analysis ,Compiling (Electronic computers) -- Usage ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Usage ,Coroutines -- Analysis - Published
- 2002
27. GCJ & the Cygnus Native Interface
- Author
-
Sally, Gene
- Subjects
Compiler/decompiler ,C++ programming language ,Compilers -- Analysis ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Analysis - Published
- 2004
28. Optimization techniques
- Author
-
Kientzle, Tim
- Subjects
Product enhancement ,Compiler/decompiler ,Microprocessor upgrade ,Microprocessor ,Assembling (Electronic computers) -- Evaluation ,Coroutines -- Analysis ,Coroutines -- Product enhancement ,Compiling (Electronic computers) -- Analysis ,Compiling (Electronic computers) -- Product enhancement ,Compilers -- Analysis ,Compilers -- Product enhancement ,Microprocessors -- Analysis ,Microprocessors -- Product enhancement - Published
- 2004
29. xDSPcore: a compiler-based configurable digital signal processor
- Author
-
Krall, Andreas, Pryanishnikov, Ivan, Hirnschrott, Ulrich, and Panis, Christian
- Subjects
Digital signal processor ,Compiler/decompiler ,Technology application ,Digital signal processors -- Analysis ,Signal processing -- Technology application ,Compilers -- Analysis ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Analysis - Published
- 2004
30. Understanding the CLR binder
- Author
-
Ramamurthy, Aarthi and Miller, Mark
- Subjects
Application development software ,Application installation/distribution software ,Rapid application development ,Assembler ,Compiler/decompiler ,Microsoft .Net Framework (Application development software) -- Analysis ,Program development software -- Analysis ,Applications programming -- Analysis ,Assemblers -- Evaluation ,Assembling (Electronic computers) -- Evaluation ,Compilers -- Analysis ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Analysis - Published
- 2009
31. Best practices for creating reliable builds, part 2
- Author
-
Hashimi, Sayed Ibrahim
- Subjects
Compiler/decompiler ,Programming utility ,Application development software ,Application installation/distribution software ,Microsoft Build Engine (Application development software) -- Usage ,Microsoft Build Engine (Application development software) -- Evaluation ,Compilers -- Analysis ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Analysis ,Program development software -- Usage ,Program development software -- Evaluation ,Configurations (Computers) -- Methods - Published
- 2009
32. Team build 2008 customization
- Author
-
Randell, Brian A.
- Subjects
Application development software ,Application installation/distribution software ,Rapid application development ,Compiler/decompiler ,Programming language ,Programming utility ,Application programming interface ,Microsoft Build Engine (Application development software) -- Evaluation ,Program development software -- Evaluation ,Applications programming -- Analysis ,Compilers -- Analysis ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Analysis ,Programming languages -- Usage ,Application Programming Interface -- Analysis - Published
- 2009
33. Best practices for creating reliable builds, part 1
- Author
-
Hashimi, Sayed Ibrahim
- Subjects
Rapid application development ,Compiler/decompiler ,Application development software ,Application installation/distribution software ,Programming utility ,Microsoft Build Engine (Application development software) -- Evaluation ,Microsoft Build Engine (Application development software) -- Usage ,Applications programming -- Analysis ,Compilers -- Analysis ,Compiling (Electronic computers) -- Analysis ,Coroutines -- Analysis ,Program development software -- Usage ,Program development software -- Evaluation - Published
- 2009
34. PIC Microcontroller Basic Compilers
- Author
-
IOVINE, JOHN
- Subjects
Coroutines -- Analysis ,Compilers -- Analysis ,Compiling (Electronic computers) -- Analysis ,Microcontrollers -- Analysis ,Business ,Electronics and electrical industries ,Compiler/decompiler ,Microcontroller ,Analysis - Abstract
A few months ago, we constructed a robot that used a PIC microcontroller for its neural intelligence (brains). The PIC microcontroller was programmed with a compiled BASIC program. At the [...]
- Published
- 2001
35. The compiler advantage
- Author
-
Baker, Bonnie
- Subjects
Coroutines -- Analysis ,Compilers -- Analysis ,Compiling (Electronic computers) -- Analysis ,Embedded systems -- Analysis ,C# (Programming language) -- Analysis ,Business ,Electronics and electrical industries ,Embedded system ,Compiler/decompiler ,System on a chip ,C# (Programming language) ,Analysis - Abstract
Choosing the right C compiler for your embedded-system projects may be a bigger challenge than you expect. The embedded-system-industry trend is moving from using assembly to using the higher level [...]
- Published
- 2005
Discovery Service for Jio Institute Digital Library