544 results
Search Results
2. professional activities.
- Subjects
COMPUTER architecture - Abstract
The article issues calls for research papers for conferences related to computer architecture sponsored by the Association for Computing Machinery.
- Published
- 1971
3. Analyzing Worldwide Research in Hardware Architecture, 1997–2011.
- Author
-
SINGH, VIRENDER, PERDIGONES, ALICIA, GARCIA, JOSÉ LUIS, CAÑAS-GUERRERO, IGNACIO, and MAZARRÓN, FERNANDO R.
- Subjects
COMPUTER architecture ,COMPUTER engineering ,COMPUTER science periodicals ,BIBLIOMETRICS ,KEYWORDS ,COMPUTER networks ,COMPUTER science research ,INTERNATIONAL cooperation - Abstract
The article offers an overview and analysis of research done in the field of computer hardware architecture between 1997 and 2011. It discusses a keyword analysis of a large number of journal articles in the field published during that period, noting keywords whose frequency rose over the period including wireless sensor network and reliability, and keywords whose frequency fell over the period, including neural network and fault tolerance. It discusses the representation of various countries and various institutions among the authors of the papers analyzed as well as the change in the share of papers involving international collaboration.
- Published
- 2015
- Full Text
- View/download PDF
4. Foreword to TODS Invited Papers Issue 2011.
- Author
-
ÖZSOYOĞLU, Z. MERAL
- Subjects
COMPUTER architecture ,DATABASES - Abstract
An introduction is presented which discusses various reports published within the journal including "Finding Maximal Cliques in Massive Networks," "Designing Fast Architecture-Sensitive Tree Search on Modern Multicore/Many-Core Processors," and "Characterizing Schema Mappings via Data Examples."
- Published
- 2011
5. Multi-Layer Faults in the Architectures of Mobile, Context-Aware Adaptive Applications: A Position Paper.
- Author
-
Sama, Michele, Rosenblum, David S., Wang, Zhimin, and Elbaum, Sebastian
- Subjects
CELL phones ,COMPUTER software ,COMPUTER architecture ,DETECTORS ,TELEPHONES - Abstract
Five cellphones are sold every second, and there are four times more cellphones than computers, meaning there are some billions of mobile handheld devices in existence. Modern cellphones are equipped with multiple context sensors used by increasingly sophisticated software applications that exploit the sensors, allowing the applications to adapt automatically to changes in the surrounding environment, such as by responding to the location and speed of the user. The architecture of such applications is typically layered and incorporates a context-awareness middleware to support processing of context values. While this layered architecture is very natural for the design and implementation of applications, it gives rise to new kinds of faults and faulty behavior modes, which are difficult to detect using existing validation techniques. In this paper we provide scenarios illustrating such faults and exploring how they manifest in context-aware adaptive applications. [ABSTRACT FROM AUTHOR]
- Published
- 2008
6. Speculative Taint Tracking (STT): A Comprehensive Protection for Speculatively Accessed Data.
- Author
-
Yu, Jiyong, Yan, Mengjia, Khyzha, Artem, Morrison, Adam, Torrellas, Josep, and Fletcher, Christopher W.
- Subjects
COMPUTER security ,DATA protection ,MALWARE prevention ,COMPUTER architecture ,COMPUTER performance - Abstract
Speculative execution attacks present an enormous security threat, capable of reading arbitrary program data under malicious speculation, and later exfiltrating that data over microarchitectural covert channels. This paper proposes speculative taint tracking (STT), a high-security, high-performance hardware mechanism to block these attacks. The main idea is that it is safe to execute and selectively forward the results of speculative instructions that read secrets, as long as we can prove that the forwarded results do not reach potential covert channels. The technical core of the paper is a new abstraction to help identify all microarchitectural covert channels, and an architecture to quickly identify when a covert channel is no longer a threat. We further conduct a detailed formal analysis on the scheme in a companion document. When evaluated on SPEC06 workloads, STT incurs 8.5% or 14.5% performance overhead relative to an insecure machine. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
7. professional activities.
- Subjects
CONFERENCES & conventions ,COMPUTER architecture ,COMPUTER software ,MICROPROCESSORS ,COMPUTER input-output equipment ,COMPUTER science - Abstract
This article focuses on conferences related to computer architecture. The program for the Fourth Annual Symposium on Computer Architecture, sponsored by ACM SIGARCH and IEEE-CS and scheduled for March 23-25 at the Sheraton Silver Spring Motor Inn, Silver Spring, Maryland, will include two special tutorials: Microprocessor Architecture at the Chip Level and Microprocessor System Architecture: Application to Industrial Control. The 1977 IEEE Workshop on Picture Data Description and Management will be held April 21-22, 1977, at the Sheraton-Chicago Hotel; its program, consisting of invited and contributed papers and panel discussions, will include sessions on vector graphics, picture description and parsing techniques, pictorial databases, and picture machines and image processors. The 1977 IEEE International Symposium on Information Theory will be held on the campus of Cornell University, Ithaca, New York, on October 10-14, 1977.
- Published
- 1977
8. ACM Proceedings and Special Publications.
- Subjects
COMPUTER architecture ,COMPUTER science ,PROGRAMMING languages ,CONFERENCES & conventions ,LISTS - Abstract
A list of ACM proceedings and special reports is presented. The list includes the proceedings of the fourteenth annual symposium on theory of computing, the proceedings of the 1981 conference on functional programming language and computer architecture, and the conference record of the ninth annual ACM symposium on principles of programming languages.
- Published
- 1982
9. DeePattern: Layout Pattern Generation with Transforming Convolutional Auto-Encoder.
- Author
-
Yang, Haoyu, Pathak, Piyush, Gennari, Frank, Lai, Ya-Chieh, and Yu, Bei
- Subjects
MACHINE learning ,LINEAR systems ,LITHOGRAPHY ,GENERATIVE adversarial networks ,COMPUTER architecture - Abstract
VLSI layout patterns provide critical resources in various design-for-manufacturability research, from early technology node development to back-end design and sign-off flows. However, a diverse layout pattern library is not always available due to the long logic-to-chip design cycle, which slows down the technology node development procedure. To address this issue, in this paper, we explore the capability of generative machine learning models to synthesize layout patterns. A transforming convolutional auto-encoder is developed to learn vector-based instantiations of squish pattern topologies. We show that our framework can capture simple design rules and contributes to enlarging the existing squish topology space under certain transformations. Geometry information of each squish topology is obtained from an associated linear system derived from design rule constraints. Experiments on 7nm EUV designs show that our framework can more effectively generate diverse pattern libraries with DRC-clean patterns compared to a state-of-the-art industrial layout pattern generator. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
10. PROFESSIONAL ACTIVITIES.
- Subjects
CONFERENCES & conventions ,COMPUTER science ,SEMINARS ,COMPUTER networks ,COMPUTER architecture ,ELECTRONIC data processing - Abstract
The article presents a schedule of various conferences related to computer science research. Twenty-seven Professional Development Seminars covering computing and data processing topics are to be offered at the 1985 National Computer Conference, which will be held from July 15-18, 1985, in Chicago, Illinois. ACM's 16th North American Computer Chess Championship is to be held from October 13-15, 1985, at the 1985 ACM Annual Conference, which will take place from October 14-16, 1985, in Denver, Colorado. Papers on computer networks are solicited for the Pacific Computer Communication Symposium to be held from October 22-24, 1985, in Seoul, Republic of Korea. Some of the areas of interest include network protocols, international and regional computer networks, gateway architecture and internetworking, metropolitan area networks and local area networks. Original research contributions are solicited for the Sixth International Conference on Information Systems to be held from December 16-18, 1985, in Indianapolis, Indiana.
- Published
- 1985
11. Applying Design-Space Exploration to Quantum Architectures.
- Author
-
Chong, Frederic T.
- Subjects
QUANTUM computers ,COMPUTER architecture - Abstract
An article on scaling the trapped-ion quantum computer architecture is introduced.
- Published
- 2022
- Full Text
- View/download PDF
12. Calls for paper: Important Dates.
- Subjects
SPECIAL events ,CONFERENCES & conventions ,SYSTEMS software ,INFORMATION technology ,COMPUTER architecture - Abstract
The article presents a calendar of events of the Association for Computing Machinery. A workshop on Computer Architecture and Non-Numeric Processing will be held on March 31, 1978. An International Symposium on Operating Systems will be held on April 14, 1978. A Third Berkeley Workshop on Distributed Data Management and Computer Networks will be held on April 15, 1978.
- Published
- 1978
13. professional activities.
- Subjects
COMPUTER science ,AUTOMATION ,CONFERENCES & conventions ,MICROPROGRAMMING ,COMPUTER architecture ,COMPUTER industry - Abstract
The article presents information on professional activities related to computing. "The Human Connection" is the theme of this year's Office Automation Conference scheduled for April 5-7, 1982, in San Francisco. The three-day conference agenda will include an overview session, 48 technical sessions, and eight industry workshops in addition to exhibits representing more than 100 manufacturers, consultants, associations, and suppliers of software services. The ACM-IEEE Fifteenth Annual Workshop on Microprogramming is to be held October 4-7, 1982, in Palo Alto, California. The abstract deadline for the ACM SIGUCCS Tenth User Services Conference has been moved to April 25. The ACM SIGSMALL Conference on Small Systems is seeking papers in all related areas, including novel applications, communications and I/O, new architecture, operating systems, distributed systems, and new research. A call for papers has been issued for the Third International Conference on Information Systems, to be held during December 13-15, 1982 in Ann Arbor, Michigan. Original research papers are sought addressing the following broad topics: techniques for designing and developing effective information systems; methods for assuring successful implementation; impact on individuals, organizations, and societies; and innovative and transferable curricula.
- Published
- 1982
14. Report on the Sixth International Workshop on Location and the Web (LocWeb 2016).
- Author
-
Ahlers, Dirk and Wilde, Erik
- Subjects
COMPUTER architecture ,WORLD Wide Web ,WEBSITES ,ACCESS to information ,GEOSPATIAL data - Abstract
For describing and understanding the real world, location is an important factor. Consequently, it also appears in many Web applications and mining approaches as a crosscutting issue. LocWeb 2016 continues a workshop series addressing issues at the intersection of location-based services and Web architecture and was held at WWW 2016. It combines geospatial search, information management, and Web architecture, with a main focus on location-aware information access. The workshop drew contributions from various fields, ranging from mobility analytics and new ways to understand cities to Web standards. LocWeb 2016 had an interdisciplinary combination of contributions, with two keynotes and three long papers. We will briefly discuss the workshop theme and the contributions. [ABSTRACT FROM AUTHOR]
- Published
- 2016
15. ACM Proceedings and Special Publications.
- Subjects
PUBLICATIONS ,CONFERENCES & conventions ,ASSOCIATIONS, institutions, etc. ,COMPUTERS ,COMPUTER architecture ,COMPUTER science - Abstract
The article presents information on how to obtain some publications and conference proceedings of the Association for Computing Machinery (ACM). It provides the contact address from where the publications and special reports can be obtained. Some of the publications which could be ordered are: "Proceedings of the 5th Annual Symposium on Computer Architecture" and "Proceedings of the 14th Design Automation Conference."
- Published
- 1978
16. Foreword to the Special Issue on Computer Architecture.
- Subjects
PREFACES & forewords ,COMPUTER architecture - Abstract
A foreword to the January 1978 issue of the "Communications of the ACM," which focuses on computer architecture, is presented.
- Published
- 1978
17. A Metamodel of Information Flow: A Tool to Support Information Systems Theory.
- Author
-
Davis, Gordon and Ahituv, Niv
- Subjects
DATA flow computing ,DECISION making ,COMPUTER architecture ,ELECTRONIC data processing ,COMPUTER systems ,MANAGEMENT information systems - Abstract
In this paper an axiomatic, fundamental metamodel of data flow is constructed. The components of the metamodel are the states along the flow of data: physical events, language (data), stored data, human data processing, and decision making. Transfers from one state to another are performed by functions: coding, keying, processing, perception, and human acting. The entire flow is evaluated by a value function. Each of the states and functions is rigorously described by means of definitions, axioms, and theorems. The main purpose of the metamodel is to provide a common framework for various models in MIS and consequently to remedy the "Tower of Babel" syndrome prevailing in this area. The way the metamodel can be used to develop other models in MIS is explained in the last part of the paper. [ABSTRACT FROM AUTHOR]
- Published
- 1987
18. How Flexible is Your Computing System?
- Author
-
SHIHUA HUANG, WAEIJEN, LUC, and CORPORAAL, HENK
- Subjects
COMPUTER systems ,COMPUTER architecture ,ENERGY consumption - Abstract
In the literature, computer architectures are frequently claimed to be highly flexible, typically implying the existence of trade-offs between flexibility and performance or energy efficiency. Processor flexibility, however, is not very sharply defined, and consequently these claims cannot be validated, nor can such hypothetical relations be fully understood and exploited in the design of computing systems. This paper is an attempt to introduce scientific rigour to the notion of flexibility in computing systems. A survey is conducted to provide an overview of references to flexibility in the literature, both in the computer architecture domain and in related fields. A classification is introduced to categorize different views on flexibility, which ultimately form the foundation for a qualitative definition of flexibility. Departing from the qualitative definition of flexibility, a generic quantifiable metric is proposed, enabling valid quantitative comparison of the flexibility of various architectures. To validate the proposed method, and to evaluate the relation between the proposed metric and the general notion of flexibility, the flexibility metric is measured for 25 computing systems, including CPUs, GPUs, DSPs, and FPGAs, and 40 ASIPs taken from the literature. The obtained results provide insights into some of the speculative trade-offs between flexibility and properties such as energy efficiency and area efficiency. Overall, the proposed quantitative flexibility metric proves to be commensurate with some generally accepted qualitative notions of flexibility collected in the survey, although some surprising discrepancies can also be observed. The proposed metric and the obtained results are placed into the context of the state of the art on compute flexibility, and extensive reflection provides not only a complete overview of the field but also discusses possible alternative approaches and open issues.
Note that this work does not aim to provide a final answer to the definition of flexibility, but rather provides a framework to initiate a broader discussion in the computer architecture community on defining, understanding, and ultimately taking advantage of flexibility. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
19. Soylent: A Word Processor with a Crowd Inside.
- Author
-
Bernstein, Michael S., Little, Greg, Miller, Robert C., Hartmann, Björn, Ackerman, Mark S., Karger, David R., Crowell, David, and Panovich, Katrina
- Subjects
CROWDSOURCING ,COMPUTER architecture ,USER interfaces ,HUMAN-machine systems ,COMPUTER programming ,ELECTRONIC data processing ,COMPUTER algorithms - Abstract
This paper introduces architectural and interaction patterns for integrating crowdsourced human contributions directly into user interfaces. We focus on writing and editing, complex endeavors that span many levels of conceptual and pragmatic activity. Authoring tools offer help with pragmatics, but for higher-level help, writers commonly turn to other people. We thus present Soylent, a word processing interface that enables writers to call on Mechanical Turk workers to shorten, proofread, and otherwise edit parts of their documents on demand. To improve worker quality, we introduce the Find-Fix-Verify crowd programming pattern, which splits tasks into a series of generation and review stages. Evaluation studies demonstrate the feasibility of crowdsourced editing and investigate questions of reliability, cost, wait time, and work time for edits. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
20. Architecture evaluation without an architecture.
- Author
-
Kazman, Rick, Bass, Len, Ivers, James, and Moreno, Gabriel A.
- Subjects
SMART power grids ,ELECTRIC power distribution automation ,COMPUTER architecture ,RISK assessment - Abstract
This paper describes an analysis of some of the challenges facing one portion of the Electrical Smart Grid in the United States - residential Demand Response (DR) systems. The purposes of this paper are twofold: 1) to discover risks to residential DR systems and 2) to illustrate an architecture-based analysis approach to uncovering risks that span a collection of technical and social concerns. The results presented here are specific to residential DR but the approach is general and it could be applied to other systems within the Smart Grid and to other critical infrastructure domains. Our architecture-based analysis is different from most other approaches to analyzing complex systems in that it addresses multiple quality attributes simultaneously (e.g., performance, reliability, security, modifiability, usability, etc.) and it considers the architecture of a complex system from a socio-technical perspective where the actions of the people in the system are as important, from an analysis perspective, as the physical and computational elements of the system. This analysis can be done early in a system's lifetime, before substantial resources have been committed to its construction or procurement, and so it provides extremely cost-effective risk analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
21. Computer Vision-based Analysis of Buildings and Built Environments: A Systematic Review of Current Approaches.
- Author
-
STARZYŃSKA-GRZEŚ, MAŁGORZATA B., ROUSSEL, ROBIN, JACOBY, SAM, and ASADIPOUR, ALI
- Subjects
BUILT environment ,COMPUTER architecture ,EVIDENCE gaps ,COMPUTER science ,ARCHITECTURAL design ,COMPUTER vision - Abstract
Analysing 88 sources published from 2011 to 2021, this article presents a first systematic review of the computer vision-based analysis of buildings and the built environment. Its aim is to assess the potential of this research for architectural studies and the implications of a shift to a cross-disciplinary approach between architecture and computer science for research problems, aims, processes, and applications. To this end, the types of algorithms and data sources used in the reviewed studies are discussed with respect to architectural applications such as building classification, detail classification, qualitative environmental analysis, building condition survey, and building value estimation. Based on this, current research gaps and trends are identified, with two main research aims emerging. First, studies that use or optimise computer vision methods to automate time-consuming, labour-intensive, or complex tasks when analysing architectural image data. Second, work that explores the methodological benefits of machine learning approaches to overcome limitations of conventional analysis and to investigate new questions about the built environment by finding patterns and relationships among visual, statistical, and qualitative data. The growing body of research offers new methods to architectural and design studies, with the article identifying future challenges and directions of research. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
22. ACM Proceedings and Special Publications.
- Subjects
COMPUTER operating systems ,COMPUTER architecture ,COMPUTER software ,PROGRAMMING languages ,COMPUTER science ,BIBLIOGRAPHY - Abstract
The article lists several publications from the Association for Computing Machinery (ACM). Some of them are Proceedings of the Symposium on Architectural Support for Programming Languages and Operating Systems; Abstracts of the 1982 ACM Computer Science Conference; The Papers of the Thirteenth SIGCSE Technical Symposium on Computer Science Education; 1981 Winter Simulation Conference Proceedings; ACM 81 Conference Proceedings; Proceedings of the 1981 Conference on Functional Programming Languages and Computer Architecture.
- Published
- 1982
23. SMApproxLib: Library of FPGA-based Approximate Multipliers.
- Author
-
Ullah, Salim, Murthy, Sanjeev Sripadraj, and Kumar, Akash
- Subjects
COMPUTING platforms ,COMPUTER architecture ,ADDERS (Digital electronics) ,MAGNITUDE estimation ,COMPUTER software - Abstract
The main focus of existing approximate arithmetic circuits has been on ASIC-based designs. However, due to the architectural differences between ASICs and FPGAs, comparable performance gains cannot be achieved for FPGA-based systems by using approximations defined particularly for ASIC-based systems. This paper exploits the structure of the 6-input lookup tables and associated carry chains of modern FPGAs to define a methodology for designing approximate multipliers optimized for FPGA-based systems. Using our presented methodology, we present SMApproxLib, an open-source library of approximate multipliers with different bit-widths, output accuracies, and performance gains. Being the first open-source library of FPGA-based approximate multipliers, SMApproxLib can serve as a benchmark for designing and comparing future FPGA-based approximate arithmetic circuits. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
24. professional activities.
- Subjects
CONFERENCES & conventions ,COMPUTER industry ,COMPUTER security ,DATA protection ,COMPUTER-aided design ,COMPUTER architecture - Abstract
This article presents information about several events related to the computer industry. The 1982 Symposium on Security and Privacy sponsored by IEEE-CS will take place during April 26-28, 1982, in Oakland/Berkeley, California. Papers and proposals for panel sessions related to security and privacy are solicited. Possible topics include encryption, database security, and operating system and privacy protection. Both theoretical and practical contributions are invited. The Nineteenth Design Automation Conference is to be held in Las Vegas, Nevada, during June 14-16, 1982. It is sponsored by the ACM Special Interest Group on Design Automation and the Design Automation Technical Committee of IEEE-CS. The major area of interest is computer-aided design of digital systems, but a significant portion of the conference is devoted to architectural and mechanical design. Recurring areas of interest are design verification, simulation, physical design and layout, documentation, testing, databases, DA system, architecture, and mechanical design.
- Published
- 1981
25. ACM News.
- Subjects
COMPUTER science ,ASSOCIATIONS, institutions, etc. ,COLLEGE teachers ,TRADE associations ,COMPUTER architecture - Abstract
The article presents news related to the Association for Computing Machinery (ACM). Charles W. Gullotta has been appointed Chairman of the ACM Accreditation Committee by President Walter M. Carlson, to succeed Carl Hammer. The Accreditation Committee has been particularly active in the programmer training area, concerning itself with the creation of proper guidelines for trade school accreditation. The enthusiasm generated by the call for papers issued in early November 1970, for ACM 1971, the ACM annual conference, is evidenced by an unusually large early response. According to Melvyn H. Schwartz, Technical Program Chairman, 265 notices of intentions to submit papers have been received. Alan J. Perlis has been appointed by Yale University to be the first Eugene Higgins Professor of Computer Science. He will join the university's computer science department, effective July 1, 1971. The chair which he will occupy is named after Eugene Higgins, who was responsible for the establishment of trusts for the advancement of science at several major universities.
- Published
- 1971
26. Pauli Frames for Quantum Computer Architectures.
- Author
-
Riesebos, L., Fu, X., Varsamopoulos, S., Almudever, C. G., and Bertels, K.
- Subjects
QUANTUM computing ,COMPUTER architecture ,COMPUTER systems ,MULTICORE processors ,PAULI spin algebra ,PAULI matrices - Abstract
The Pauli frame mechanism allows Pauli gates to be tracked in classical electronics and can relax the timing constraints for error syndrome measurement and error decoding. When building a quantum computer, such a mechanism may be beneficial, and the goal of this paper is not only to study the working principles of a Pauli frame but also to quantify its potential effect on the logical error rate. To this purpose, we implemented and simulated the Pauli frame module which, in principle, can be directly mapped into a hardware implementation. Simulation of a Surface Code 17 logical qubit has shown that a Pauli frame can reduce the error rate of a logical qubit by up to 70% compared to the same logical qubit without a Pauli frame when the decoding time equals the error correction time and maximum parallelism can be obtained. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
27. Technical Perspective: A Chiplet Prototype System for Deep Learning Inference.
- Author
-
Jerger, Natalie Enright
- Subjects
DEEP learning ,COMPUTER architecture - Abstract
An introduction to an article published in the journal on deep-learning inference with a chiplet-based computer architecture is presented.
- Published
- 2021
- Full Text
- View/download PDF
28. ACM Proceedings and Special Publications.
- Subjects
LISTS ,CONFERENCE proceedings (Publications) ,COMPUTER science ,COMPUTER architecture ,DATA transmission systems - Abstract
A list of the special reports of the proceedings of the Association for Computing Machinery (ACM) is presented. The list includes Proceedings of the SIGIR-SIGARCH-SIGMOD Third Workshop on Computer Architecture for Non-Numeric Processing, Proceedings of the Fifteenth Annual Computer Personnel Research Conference and Proceedings of the Fifth Data Communications Symposium.
- Published
- 1977
29. On The Design and Application of Thermal Isolation Servers.
- Author
-
AHMED, REHAN, PENGCHENG HUANG, MILLEN, MAX, and THIELE, LOTHAR
- Subjects
COMPUTER architecture ,EMBEDDED computer systems ,MULTICORE processors ,REAL-time computing ,CENTRAL processing units - Abstract
Recently, there has been an increasing trend towards executing real-time applications on multi-core platforms. However, this complicates the design problem, as applications running on different cores can interfere due to shared resources and mediums. In this paper, we focus on thermal interference, where a given task (τ1) heats the processor, resulting in reduced service (due to Dynamic Thermal Management (DTM)) to another task (τ2). In the real-time domain, where tasks have deadline constraints, thermal interference is a substantial problem as it directly impacts the Worst Case Execution Time (WCET) of the affected application (τ2). The problem exacerbates as we move to mixed-criticality systems, where the criticality of τ2 may be greater than the criticality of τ1, complicating the certification process. In this paper, we propose a server-based strategy (Thermal Isolation Server (TI Server)) which can be used to avoid thermal interference between applications. We also present a heuristic to design TI Servers to meet the timing constraints of all tasks and the thermal constraints of the system. TI Servers are time/space composable and can be applied to a variety of task models. We also evaluate TI Servers on a hardware test-bed for validation purposes. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
30. A DWM-Based Stack Architecture Implementation for Energy Harvesting Systems.
- Author
-
KHOUZANI, HODA AGHAEI and YANG, CHENGMO
- Subjects
COMPUTER architecture ,DOMAIN walls (String models) ,ENERGY harvesting ,MULTIPROCESSORS ,CENTRAL processing units - Abstract
Energy harvesting systems tend to use non-volatile processors to conduct computation under intermittent power supplies. While previous implementations of non-volatile processors are based on register architectures, stack architecture, known for its simplicity and small footprint, seems to be a better fit for energy harvesting systems. In this work, Domain Wall Memory (DWM) is used to implement ZPU, the world's smallest working CPU. Not only does DWM offer ultra-high density and SRAM-comparable access latency, but the sequential access structure of DWM also makes it well suited for a stack whose accesses display high temporal locality. As the performance and energy of DWM are determined by the number of shift operations performed to access the stack, this paper further reduces shift operations through novel data placement and micro-code transformation optimizations. The impact of compiler optimization techniques on the number of shift operations is also investigated so as to select the most effective optimizations for the DWM-based stack machine. Experimental studies confirm the effectiveness of the proposed DWM-based stack architectures in improving the performance and energy-efficiency of energy harvesting systems. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
31. A refactoring tool to extract GPU kernels.
- Author
-
Damevski, Kostadin and Muralimanohar, Madhan
- Subjects
COMPUTER architecture ,SOFTWARE refactoring ,GRAPHICS processing units ,COMPUTER programming ,SOFTWARE engineering - Abstract
Significant performance gains can be achieved by using hardware architectures that integrate GPUs with conventional CPUs to form a hybrid and highly parallel computational engine. However, programming these novel architectures is tedious and error prone, hindering their acceptance in an even wider range of computationally intensive applications. In this paper we discuss a refactoring technique, called Extract Kernel, that transforms a loop written in C into a parallel function that uses NVIDIA's CUDA framework to execute on a GPU. The selected approach and the challenges encountered are described, as well as some early results that demonstrate the potential of this refactoring. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
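As a rough illustration of what an Extract Kernel-style refactoring must preserve — the iterations of the candidate loop being independent of one another — here is a Python stand-in; the C/CUDA specifics of the actual tool are not modeled, and all names are hypothetical:

```python
# Toy stand-in for the Extract Kernel refactoring: a loop whose iterations
# are independent can be rewritten as a "kernel" applied at every index.
def serial_saxpy(a, x, y):
    out = list(y)
    for i in range(len(x)):          # the candidate loop
        out[i] = a * x[i] + y[i]
    return out

def saxpy_kernel(i, a, x, y):        # extracted per-index loop body
    return a * x[i] + y[i]

def launch(kernel, n, *args):        # models a grid of n parallel threads
    return [kernel(i, *args) for i in range(n)]

# The refactoring is behavior-preserving: both forms agree.
assert serial_saxpy(2, [1, 2], [3, 4]) == launch(saxpy_kernel, 2, 2, [1, 2], [3, 4])
```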
32. An Efficient Dynamically Reconfigurable On-chip Network Architecture.
- Author
-
Modarressi, Mehdi, Sarbazi-Azad, Hamid, and Tavakkol, Arash
- Subjects
NETWORKS on a chip ,ADAPTIVE computing systems ,COMPUTER architecture ,INTEGRATED circuit interconnections ,SIMULATION methods & models - Abstract
In this paper, we present a reconfigurable architecture for NoCs on which arbitrary application-specific topologies can be implemented. The proposed NoC can dynamically tailor its topology to the traffic pattern of different applications at run-time. The run-time topology construction mechanism involves monitoring the network traffic and changing the inter-node connections in order to reduce the number of intermediate routers between the source and destination nodes of heavy communication flows. This mechanism should also preserve the NoC connectivity. In this paper, we first introduce the proposed reconfigurable topology and then address the problem of run-time topology reconfiguration. Experimental results show that this architecture effectively improves the NoC power and performance over the existing conventional architectures. [ABSTRACT FROM AUTHOR]
- Published
- 2010
33. 3-Step Knowledge Transition: A Case Study on Architecture Evaluation.
- Author
-
Michalik, Bartosz, Nawrocki, Jerzy, and Ochodek, Mirosław
- Subjects
COMPUTER software industry ,COMPUTER service industry ,SOFTWARE engineering ,COMPUTER architecture ,CASE studies - Abstract
Software Engineering is developing very fast. To keep up with the changes, software companies need effective methods of knowledge transfer. In this paper a 3-step approach to knowledge transfer, called Technical Drama, is presented. The paper focuses on transferring knowledge concerning architecture evaluation, but the approach could also be applied to transferring knowledge concerning inspections, testing, etc. The paper argues that Technical Drama can be useful in an industrial context (two case studies are described) as well as at university (where a kind of software studio is required). [ABSTRACT FROM AUTHOR]
- Published
- 2008
34. Policy-Based Self-Adaptive Architectures: A Feasibility Study in the Robotics Domain.
- Author
-
Georgas, John C. and Taylor, Richard N.
- Subjects
COMPUTER architecture ,SYSTEMS development ,COMPUTER systems ,ROBOTICS ,CASE studies - Abstract
Robotics is a challenging domain which sometimes exhibits a clear need for self-adaptive capabilities, as such functionality offers the potential for robots to account for their unstable and unpredictable deployment domains. This paper focuses on a feasibility study in applying a policy- and architecture-based approach to the development of self-adaptive robotic systems. We describe two case studies in which we construct self-adaptive Robocode and Mindstorms robots, report on our development experiences, and discuss the challenges we encountered. The paper establishes that it is feasible to apply our approach to the robotics domain, contributes a discussion of the architectural issues we encountered, and further evaluates our general-purpose approach. [ABSTRACT FROM AUTHOR]
- Published
- 2008
35. Empirical Evaluation of Some Features of Instruction Set Processor Architectures.
- Author
-
Bell, G., Siewiorek, D., Fuller, S. H., and Lunde, Åmund
- Subjects
INSTRUCTION set processors ,PROGRAMMING languages ,COMPUTERS ,COMPUTER architecture ,COMPUTER input-output equipment ,COMPUTER systems - Abstract
This paper presents methods for empirical evaluation of features of Instruction Set Processors (ISPs). ISP features are evaluated in terms of the time used or saved by having or not having the feature. The methods are based on analysis of traces of program executions. The concept of a register life is introduced, and used to answer questions like: How many registers are used simultaneously? How many would be sufficient all of the time? Most of the time? What would the overhead be if the number of registers were reduced? What are registers used for during their lives? The paper also discusses the problem of detecting desirable but non-existing instructions. Other problems are briefly discussed. Experimental results are presented, obtained by analyzing 41 programs running on the DECsystem 10 ISP. [ABSTRACT FROM AUTHOR]
- Published
- 1977
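The register-life analysis described in the abstract can be sketched as an interval-overlap count — a toy version of the question "how many registers are used simultaneously?". The trace format and names here are assumptions for illustration, not the paper's:

```python
def max_live_registers(trace):
    """From a (definition, last-use) trace of register lives, find how
    many registers are simultaneously live, i.e. the minimum a machine
    would need to run this trace without extra overhead."""
    events = []
    for start, end in trace:
        events.append((start, 1))       # a life begins at its defining store
        events.append((end + 1, -1))    # and ends after its last use
    live = peak = 0
    for _, delta in sorted(events):
        live += delta
        peak = max(peak, live)
    return peak

# Three register lives, but at most two overlap at any instruction:
assert max_live_registers([(0, 4), (2, 3), (5, 8)]) == 2
```

Counting how the peak rises as lives are added is one way to answer the paper's "how many would be sufficient most of the time?" question.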
36. Formal Requirements for Virtualizable Third Generation Architectures.
- Author
-
Popek, Gerald J. and Goldberg, Robert P.
- Subjects
COMPUTER network architectures ,COMPUTER input-output equipment ,PDP-10 (Computer) ,VIRTUAL machine systems ,COMPUTER architecture ,DIGITAL computer simulation - Abstract
Virtual machine systems have been implemented on a limited number of third generation computer systems, e.g. CP-67 on the IBM 360/67. From previous empirical studies, it is known that certain third generation computer systems, e.g. the DEC PDP-10, cannot support a virtual machine system. In this paper, a model of a third-generation-like computer system is developed. Formal techniques are used to derive precise sufficient conditions to test whether such an architecture can support virtual machines. [ABSTRACT FROM AUTHOR]
- Published
- 1974
- Full Text
- View/download PDF
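The paper's sufficient condition is commonly summarized as: every sensitive instruction must also be privileged, so that it traps in user mode and can be emulated by the virtual machine monitor. A minimal sketch of that set check (the mini-ISAs below are hypothetical, for illustration only):

```python
def is_virtualizable(sensitive, privileged):
    """Popek-Goldberg condition: an ISA can support a trap-and-emulate
    VMM if every sensitive instruction is also privileged (it traps
    when executed in user mode, letting the VMM regain control)."""
    return set(sensitive) <= set(privileged)

# Hypothetical mini-ISAs:
assert is_virtualizable({"LPSW", "SSM"}, {"LPSW", "SSM", "SIO"})
# One sensitive-but-unprivileged instruction breaks virtualizability:
assert not is_virtualizable({"POPF"}, {"HLT"})
```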
37. A Survey of Cache Simulators.
- Author
-
BRAIS, HADI, KALAYAPPAN, RAJSHEKAR, and PANDA, PREETI RANJAN
- Subjects
CACHE memory ,COMPUTER architecture ,SOURCE code ,COMPUTER simulation ,HIERARCHIES - Abstract
Computer architecture simulation tools are essential for implementing and evaluating new ideas in the domain, and can be useful for understanding the behavior of programs and finding microarchitectural bottlenecks. One particularly important part of almost any processor is the cache hierarchy. While some simulators support simulating a whole processor, including the cache hierarchy, cores, and on-chip interconnect, others support simulating only the cache hierarchy. This survey provides a detailed discussion of 28 CPU cache simulators, including popular and recent ones. We compare all of these simulators in four different ways: major design characteristics, support for specific cache design features, support for specific cache-related metrics, and validation methods and efforts. The strengths and shortcomings of each simulator, and major issues common to all simulators, are highlighted. The information presented in this survey was collected from many different sources, including research papers, documentation, source code bases, and others. This survey is potentially useful for both users and developers of cache simulators. To the best of our knowledge, this is the first comprehensive survey of cache simulation tools. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
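For a flavor of what the surveyed tools model, here is a minimal direct-mapped cache simulator — a sketch far simpler than any of the 28 simulators discussed, with assumed parameter names:

```python
def simulate_direct_mapped(addresses, num_sets, block_bytes=64):
    """Minimal direct-mapped cache model: report (hits, misses)
    for a sequence of byte addresses."""
    tags = {}          # set index -> tag of the cached block
    hits = misses = 0
    for addr in addresses:
        block = addr // block_bytes
        index, tag = block % num_sets, block // num_sets
        if tags.get(index) == tag:
            hits += 1
        else:
            misses += 1
            tags[index] = tag   # fill (evicting any previous block)
    return hits, misses

# Re-touching the same 64-byte block hits after the first compulsory miss:
assert simulate_direct_mapped([0, 8, 0], num_sets=4) == (2, 1)
```

Real simulators add associativity, replacement policies, multi-level hierarchies, and coherence, which is where the simulators surveyed here differ most.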
38. Joint Variable Partitioning and Bank Selection Instruction Optimization for Partitioned Memory Architectures.
- Author
-
TIANTIAN LIU, CHUN JASON XUE, and MINMING LI
- Subjects
COMPUTER architecture ,COMPUTER storage devices ,CENTRAL processing units ,MICROPROCESSORS ,COMPUTER buses ,COMPARATIVE studies - Abstract
About 55% of all CPUs sold in the world are 8-bit microcontrollers or microprocessors which can only access limited memory space without extending address buses. Partitioned memory with bank switching is a technique to increase memory size without extending address buses. Bank Selection Instructions (BSLs) need to be inserted into the original programs to modify the bank register to point to the desired banks. These BSLs introduce both code size and execution time overheads. In this paper, we partition variables into different banks and insert BSLs at different positions of programs so that the overheads can be minimized. Minimizing speed (execution time) overhead and minimizing space (code size) overhead are two objectives investigated in this paper. A multi-copy approach is also proposed to store multiple copies of several variables on different banks when the memory space allows. It takes the read/write properties of variables into consideration and achieves more BSL overhead reduction. Experiments show that the proposed algorithms can reduce BSL overheads effectively compared to state-of-the-art techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
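The BSL overhead the paper minimizes can be sketched for a straight-line access sequence. This toy model assumes a BSL is needed exactly when consecutive accesses cross banks; the paper's insertion-point optimization and multi-copy placement are not modeled, and all names are hypothetical:

```python
def bsl_count(access_seq, bank_of):
    """Count Bank Selection Instructions needed to execute a
    straight-line access sequence: a BSL is inserted whenever the
    next variable lives in a different bank than the current one."""
    current, bsls = None, 0
    for var in access_seq:
        bank = bank_of[var]
        if bank != current:
            bsls += 1
            current = bank
    return bsls

seq = ["a", "b", "a", "b"]
clustered = {"a": 0, "b": 0}   # co-locating hot variables removes switches
split = {"a": 0, "b": 1}       # splitting them forces a BSL per access
assert bsl_count(seq, clustered) == 1
assert bsl_count(seq, split) == 4
```

The variable-partitioning objective in the paper is, roughly, choosing `bank_of` to minimize this count subject to per-bank capacity.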
39. AlignS: A Processing-In-Memory Accelerator for DNA Short Read Alignment Leveraging SOT-MRAM.
- Author
-
Angizi, Shaahin, Jiao Sun, Wei Zhang, and Deliang Fan
- Subjects
BIG data ,DATA analytics ,COMPUTER architecture ,NONVOLATILE memory ,DNA sequencing - Abstract
Classified as a complex big data analytics problem, DNA short read alignment serves as a major sequential bottleneck for the massive amounts of data generated by next-generation sequencing platforms. With Von Neumann computing architectures struggling to address such a computationally expensive and memory-intensive task today, Processing-in-Memory (PIM) platforms are gaining growing interest. In this paper, an energy-efficient and parallel PIM accelerator (AlignS) is proposed to execute DNA short read alignment based on an optimized and hardware-friendly alignment algorithm. We first develop the AlignS platform, which harnesses SOT-MRAM as computational memory and transforms it into a fundamental processing unit for short read alignment. Accordingly, we present a novel, customized, highly parallel read alignment algorithm that requires only the proposed simple and parallel in-memory operations (i.e., comparisons and additions). AlignS is then optimized through a new correlated data partitioning and mapping methodology that allows local storage and processing of DNA sequences to fully exploit the algorithm-level parallelism, and to accelerate both exact and inexact matches. Device-to-architecture co-simulation results show that AlignS improves short read alignment throughput per Watt per mm² by ~12x compared to the ASIC accelerator. Compared to a recent FM-index-based ReRAM platform, AlignS achieves 1.6x higher throughput per Watt. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
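The in-memory primitive the abstract mentions — bulk comparison plus addition — can be sketched functionally. This models only what those operations compute, not the SOT-MRAM implementation; names are hypothetical:

```python
def inmemory_match_counts(read, reference):
    """Functional sketch of the parallel comparison + addition
    primitive: slide a short read over the reference and tally
    mismatches at every alignment position."""
    k = len(read)
    return [sum(a != b for a, b in zip(read, reference[i:i + k]))
            for i in range(len(reference) - k + 1)]

# Position 1 is an exact match; the other positions mismatch at every base:
assert inmemory_match_counts("ACG", "TACGT") == [3, 0, 3]
```

An exact match is a position with count 0; inexact matching accepts positions whose count falls under an edit threshold.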
40. A Flat Timing-Driven Placement Flow for Modern FPGAs.
- Author
-
Martin, Timothy, Maarouf, Dani, Abuowaimer, Ziad, Alhyari, Abeer, Grewal, Gary, and Areibi, Shawki
- Subjects
COST functions ,CRITICAL path analysis ,COMPUTER architecture ,COORDINATES ,BIPARTITE graphs - Abstract
In this paper, we propose a novel, flat analytic timing-driven placer without explicit packing for Xilinx UltraScale FPGA devices. Our work uses novel methods to simultaneously optimize for timing, wirelength and congestion throughout the global and detailed placement stages. We evaluate the effectiveness of the flat placer on the ISPD 2016 benchmark suite for the xcvu095 UltraScale device, as well as on industrial benchmarks. Experimental results show that on average, FTPlace achieves an 8% increase in maximum clock rate, an 18% decrease in routed wirelength, and produces placements that require 80% less time to route when compared to Xilinx Vivado 2018.1. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
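One standard ingredient of the wirelength term in analytic placement cost functions is half-perimeter wirelength (HPWL); whether FTPlace uses exactly this form is not stated in the abstract, so treat this as a generic sketch:

```python
def hpwl(pins):
    """Half-perimeter wirelength: the half-perimeter of the bounding
    box of a net's pin coordinates, a cheap wirelength estimate
    summed over all nets during placement."""
    xs, ys = zip(*pins)
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

# Bounding box of these pins is 3 wide and 4 tall:
assert hpwl([(0, 0), (3, 4), (1, 1)]) == 7
```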
41. Reorganization of Computer Systems Department.
- Author
-
Bell, Gordon, Siewiorek, Dan, and Fuller, Sam
- Subjects
PUBLISHING ,COMPUTER science ,COMPUTER architecture ,COMPUTER systems - Abstract
Discusses a proposed reorganization of the Computer Systems Department of the Association for Computing Machinery. Information that the department has been divided into two sub-departments: Computer Architecture and Measurement, and Performance Evaluation; Functions and roles of the departments; Information on the steps taken by the association for the publication of valuable papers related to the field of computer science in the periodical "Communications of the ACM."
- Published
- 1972
42. An Efficient STT-RAM Last Level Cache Architecture for GPUs.
- Author
-
Samavatian, Mohammad Hossein, Abbasitabar, Hamed, Arjomand, Mohammad, and Sarbazi-Azad, Hamid
- Subjects
RANDOM access memory ,MULTICORE processors ,CACHE memory ,COMPUTER architecture ,GRAPHICS processing units - Abstract
In this paper, having investigated the behavior of GPGPU applications, we present an efficient L2 cache architecture for GPUs based on STT-RAM technology. As the number of processing cores increases, larger on-chip memories are required. Due to its high density and low power characteristics, STT-RAM technology can be utilized in GPUs, where numerous cores leave a limited area for on-chip memory banks. STT-RAMs have, however, two important issues that must be addressed: the high energy and latency of write operations. STT-RAMs with low data retention time can reduce the energy and delay of write operations, but employing them in GPUs requires a thorough investigation of the behavior of GPGPU applications, based on which the STT-RAM L2 cache is architected. The STT-RAM L2 cache architecture proposed in this paper can improve IPC by more than 100% in the best case (16% on average) while reducing the average consumed power by 20% compared to a conventional L2 cache architecture with equal on-chip area. [ABSTRACT FROM AUTHOR]
- Published
- 2014
43. Reinforcement Learning-Based Inter- and Intra-Application Thermal Optimization for Lifetime Improvement of Multicore Systems.
- Author
-
Das, Anup, Shafik, Rishad A., Merrett, Geoff V., Al-Hashimi, Bashir M., Kumar, Akash, and Veeravalli, Bharadwaj
- Subjects
REINFORCEMENT learning ,MULTICORE processors ,COMPUTER architecture ,COMPUTER reliability ,THERMAL management (Electronic packaging) - Abstract
The thermal profile of multicore systems varies both within an application's execution (intra) and when the system switches from one application to another (inter). In this paper, we propose an adaptive thermal management approach to improve the lifetime reliability of multicore systems by considering both inter- and intra-application thermal variations. Fundamental to this approach is a reinforcement learning algorithm, which learns the relationship between the mapping of threads to cores, the frequency of a core, and its temperature (sampled from on-board thermal sensors). Action is provided by overriding the operating system's mapping decisions using affinity masks and dynamically changing CPU frequency using in-kernel governors. Lifetime improvement is achieved by controlling not only the peak and average temperatures but also thermal cycling, which is an emerging wear-out concern in modern systems. The proposed approach is validated experimentally using an Intel quad-core platform executing a diverse set of multimedia benchmarks. Results demonstrate that the proposed approach minimizes average temperature, peak temperature, and thermal cycling, improving the mean time to failure (MTTF) by an average of 2x for intra-application and 3x for inter-application scenarios compared to existing thermal management techniques. Furthermore, dynamic and static energy consumption are also reduced by an average of 10% and 11%, respectively.
A major challenge of modern multicore systems is decreasing lifetime reliability, threatened by high power densities and hence elevated operating temperatures, which accelerate device wear-out. Thermal management has attracted significant attention in both industry and academia. Examples include dynamic thermal management using voltage and frequency scaling [7], slack time management [10], peak temperature management through system-level task scheduling [3], and thermal stress management through application task mapping [2] (refer to Section 2 for a summary of related work). These approaches, however, suffer from the following limitations. First, modern multicore systems switch between applications exhibiting wide performance and workload variations, and therefore the thermal behavior of these systems varies both within (intra) and across (inter) applications; although intra-application thermal variations are considered in existing studies, inter-application variations are not addressed. Second, existing studies focus on average temperature reduction; thermal cycling is not accounted for. Last, existing adaptive techniques are either implemented on a simulator or rely on time-consuming thermal prediction from the HotSpot tool [14], limiting their scalability. In this paper, we address the above gaps and present a dynamic thermal management approach for multicore systems that adapts to thermal variations within (intra) and across (inter) applications. Fundamental to this approach is a run-time system, which interfaces with the on-board thermal sensors and uses reinforcement learning to learn the relationship between the mapping of threads to cores, the frequency of a core, and its temperature. The aim is to control the average temperature and thermal cycling to achieve an extended mean time to failure (MTTF). This paper makes the following contributions: [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
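A tabular Q-learning step of the kind such a run-time system builds on might look like the sketch below. The thermal states, actions, and reward are hypothetical stand-ins, not the paper's actual state/action encoding:

```python
def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One tabular Q-learning update: nudge Q(state, action) toward
    the observed reward plus the discounted best value of the next
    state. Here, reward would come from sampled temperatures."""
    best_next = max(q[next_state].values())
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# Toy thermal states and actions (hypothetical, for illustration only):
q = {s: {"lower_freq": 0.0, "keep_freq": 0.0} for s in ("hot", "cool")}
# Staying at full frequency while hot raised the temperature: negative reward.
q_update(q, "hot", "keep_freq", reward=-1.0, next_state="hot")
assert q["hot"]["keep_freq"] == -0.5
```

Acting on the learned policy then corresponds to the affinity-mask and in-kernel-governor mechanisms the abstract describes.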
44. A novel shared memory framework for distributed deep learning in high-performance computing architecture.
- Author
-
Ahn, Shinyoung, Kim, Joongheon, and Kang, Sungwon
- Subjects
HIGH performance computing ,COMPUTER architecture ,DEEP learning ,COMPUTER storage devices ,DISTRIBUTED computing - Abstract
This paper proposes a novel virtual shared memory framework, Soft Memory Box (SMB), which directly shares the memory of remote nodes among distributed processes to improve communication performance/speed via deep learning parameter sharing. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
45. Attackboard: A Novel Dependency-Aware Traffic Generator for Exploring NoC Design Space.
- Author
-
Yoshi Shih-Chieh Huang, Yu-Chi Chang, Tsung-Chan Tsai, Yuan-Ying Chang, and Chung-Ta King
- Subjects
NETWORKS on a chip ,COMPUTER architecture ,DATA transmission systems ,INPUT-output analysis software ,MULTICORE processors - Abstract
Network-on-chip (NoC) is very important for many applications, such as many-core architectures and application-specific usages. For exploring the design space, several approaches have been proposed with different considerations. In this paper, inspired by Bloom filters, we propose Attackboard, a novel design for exploring the NoC design space that satisfies accuracy, space efficiency, and simplicity. To justify the usage of Attackboard, a parallel object detection program is used as the benchmark to evaluate the performance of a specific NoC. Comparing the results with those of an execution-based simulator shows that Attackboard simultaneously achieves the requirements of fast speed, simplicity, and accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2012
46. Towards Graceful Aging Degradation in NoCs Through an Adaptive Routing Algorithm.
- Author
-
Bhardwaj, Kshitij, Chakraborty, Koushik, and Roy, Sanghamitra
- Subjects
NETWORKS on a chip ,NETWORK routers ,COMPUTER architecture ,ROUTING algorithms ,DATA packeting - Abstract
Continuous technology scaling has made aging mechanisms such as Negative Bias Temperature Instability (NBTI) and electromigration primary concerns in Network-on-Chip (NoC) designs. In this paper, we model the effects of these aging mechanisms on NoC components such as routers and links using a novel reliability metric called Traffic Threshold per Epoch (TTpE). We observe a critical need for a robust aging-aware routing algorithm that not only reduces the power-performance overheads caused by aging degradation but also minimizes the stress experienced by heavily utilized routers and links. To solve this problem, we propose an aging-aware adaptive routing algorithm and a router microarchitecture that route packets along the paths which are both least congested and experience minimum aging stress. After an extensive experimental analysis using real workloads, we observe average overhead reductions of 13% in network latency and 12.7% in Energy-Delay-Product-per-Flit (EDPPF), and a 10.4% improvement in performance using our aging-aware routing algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2012
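The routing decision described — balancing congestion against aging stress — can be sketched as a weighted port-selection rule. The blend weight and names are hypothetical and do not reproduce the paper's TTpE formulation:

```python
def pick_output_port(candidates, congestion, aging_stress, w=0.5):
    """Aging-aware adaptive routing sketch: among the candidate
    minimal-path output ports, pick the one with the lowest blend
    of current congestion and accumulated aging stress."""
    return min(candidates,
               key=lambda p: (1 - w) * congestion[p] + w * aging_stress[p])

ports = ["east", "north"]
# East is heavily congested, so the lightly aged north port wins:
assert pick_output_port(ports,
                        {"east": 0.9, "north": 0.2},
                        {"east": 0.1, "north": 0.3}) == "north"
```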
47. MASH: A Tool for End-User Plug-In Composition.
- Author
-
Mariani, Leonardo and Pastore, Fabrizio
- Subjects
COMPUTER architecture ,PLUG-ins (Computer programs) ,COMPUTER users ,COMPUTER programming ,WORKFLOW software ,JAVA programming language - Abstract
Most modern Integrated Development Environments are built with plug-in-based architectures that can be extended with additional functionalities and plug-ins according to user needs. However, extending an IDE is still a possibility restricted to developers with deep knowledge of the specific development environment and its architecture. In this paper we present MASH, a tool that eases the programming of Integrated Development Environments. The tool supports the definition of workflows that can be quickly designed to integrate functionalities offered by multiple plug-ins, without needing to know anything about the internal architecture of the IDE. Workflows can be easily reshaped every time an analysis must be modified, without producing Java code and deploying components in the IDE. Early results suggest that this approach can effectively facilitate the programming of IDEs. [ABSTRACT FROM AUTHOR]
- Published
- 2012
48. Transaction Level Statistical Analysis for Efficient Micro-Architectural Power and Performance Studies.
- Author
-
Copty, Eman, Kamhi, Gila, and Novakovsky, Sasha
- Subjects
COMPUTER architecture ,SIMULATION methods & models ,TRANSACTIONAL analysis ,STATISTICS ,FLOWGRAPHS - Abstract
In general, the EDA industry lacks tools and automated solutions in the μArchitectural domain. In this paper, we elaborate on our attempt to advance statistical analysis techniques based on performance simulation. On one hand, we utilize the content knowledge of the μArchitectural specification (e.g., explicit specification of the major transactions); on the other, a compact statistical representation of the simulation trace. We name this compact statistical modeling of transaction-level performance simulation traces Magenta (Modeling Agent for Transactional Analysis). As demonstrated by industrial case studies, Magenta can effectively capture, in a statistical event dependency graph, all the sample flows represented in the simulation trace that exhibit the transaction of interest in terms of μArchitectural events. Our industrial experience shows that Magenta is an effective statistical model for μArchitectural performance verification and power/performance trade-off analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2011
49. Transforming trace information in architectural documents into re-usable and effective traceability links.
- Author
-
Mirakhorli, Mehdi and Cleland-Huang, Jane
- Subjects
COMPUTER architecture ,COMPUTER software ,VISUAL programming languages (Computer science) - Abstract
Architectural analysis processes, such as the Architecture Tradeoff Analysis Method (ATAM), utilize a scenario-based approach to evaluate the extent to which an architectural solution meets a potentially competing set of quality goals. The resulting architectural documents contain a rich set of trace relationships between quality goals, decisions, and architectural elements. Unfortunately this information is not readily accessible for supporting tasks other than initial architectural assessments. In this paper we describe a technique and supporting tools for extracting and generating traceability links from architectural documents. A specialized Traceability Information Model is used to guide the user through the task of establishing traceability links from design decisions to the architectural elements in which each decision is realized. The retrieved and generated traceability links can then be used to support a far broader set of activities, including visualization of design rationale and architectural preservation. We evaluate our approach using a case study of the NASA Crew Exploration Vehicle. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
50. Fishtail.
- Author
-
Sawadsky, Nicholas and Murphy, Gail C.
- Subjects
COMPUTER software development ,COMPUTER programming ,COMPUTER architecture ,COMPUTER programmers ,SOFTWARE engineering - Abstract
Implementing software development tools as integrated development environment (IDE) plugins gives tools direct access to a range of useful representations of the program being created and can improve programmer efficiency. These benefits must be weighed against the effort to integrate the tool into the IDE, effort which may need to be repeated for each IDE targeted. In this paper, we introduce Fishtail, a prototype plugin for the Eclipse IDE, which assists programmers in discovering code examples and documentation on the web relevant to their current task. Fishtail uses a detailed history of programmer interactions with the source code to automatically determine relevant web resources. We describe the key factors that make it attractive to implement Fishtail as a plugin, and the requirements Fishtail imposes on the plugin/IDE interface. To reach a broader user base and understand how well our tool supports different programming styles and IDE architectures, we have recently begun investigating how to make a version of Fishtail available in the Visual Studio IDE. We outline some of the challenges we face in trying to reuse code from the original Eclipse plugin. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF