22 results on '"Vladimir Gajinov"'
Search Results
2. Dynamic Verification for Hybrid Concurrent Programming Models.
- Author
- Erdal Mutlu, Vladimir Gajinov, Adrián Cristal, Serdar Tasiran, and Osman S. Unsal
- Published
- 2014
- Full Text
- View/download PDF
3. Dynamic transaction coalescing.
- Author
- Srdan Stipic, Vasileios Karakostas, Vesna Smiljkovic, Vladimir Gajinov, Osman S. Unsal, Adrián Cristal, and Mateo Valero
- Published
- 2014
- Full Text
- View/download PDF
4. DaSH: a benchmark suite for hybrid dataflow and shared memory programming models: with comparative evaluation of three hybrid dataflow models.
- Author
- Vladimir Gajinov, Srdan Stipic, Igor Eric, Osman S. Unsal, Eduard Ayguadé, and Adrián Cristal
- Published
- 2014
- Full Text
- View/download PDF
5. Integrating Dataflow Abstractions into the Shared Memory Model.
- Author
- Vladimir Gajinov, Srdjan Stipic, Osman S. Unsal, Tim Harris, Eduard Ayguadé, and Adrián Cristal
- Published
- 2012
- Full Text
- View/download PDF
6. DaSH: A benchmark suite for hybrid dataflow and shared memory programming models.
- Author
- Vladimir Gajinov, Srdjan Stipic, Igor Eric, Osman S. Unsal, Eduard Ayguadé, and Adrián Cristal
- Published
- 2015
- Full Text
- View/download PDF
7. Atomic quake: using transactional memory in an interactive multiplayer game server.
- Author
- Ferad Zyulkyarov, Vladimir Gajinov, Osman S. Unsal, Adrián Cristal, Eduard Ayguadé, Tim Harris, and Mateo Valero
- Published
- 2009
- Full Text
- View/download PDF
8. QuakeTM: parallelizing a complex sequential application using transactional memory.
- Author
- Vladimir Gajinov, Ferad Zyulkyarov, Osman S. Unsal, Adrián Cristal, Eduard Ayguadé, Tim Harris, and Mateo Valero
- Published
- 2009
- Full Text
- View/download PDF
9. Multithreaded software transactional memory and OpenMP.
- Author
- Milos Milovanovic, Roger Ferrer, Vladimir Gajinov, Osman S. Unsal, Adrián Cristal, Eduard Ayguadé, and Mateo Valero
- Published
- 2007
- Full Text
- View/download PDF
10. Nebelung: Execution Environment for Transactional OpenMP.
- Author
- Milos Milovanovic, Roger Ferrer, Vladimir Gajinov, Osman S. Unsal, Adrián Cristal, Eduard Ayguadé, and Mateo Valero
- Published
- 2008
- Full Text
- View/download PDF
11. Supporting stateful tasks in a dataflow graph.
- Author
- Vladimir Gajinov, Srdjan Stipic, Osman S. Unsal, Tim Harris, Eduard Ayguadé, and Adrián Cristal
- Published
- 2012
- Full Text
- View/download PDF
12. DaSH: a benchmark suite for hybrid dataflow and shared memory programming models
- Author
- Osman Unsal, Srdjan Stipic, Eduard Ayguadé, Vladimir Gajinov, Adrian Cristal, and Igor Erić
- Subjects
Computer Networks and Communications, Dataflow, Computer science, Parallel programming (Computer science), Benchmark suite, Parallel computing, Programació en paral·lel (Informàtica), Theoretical Computer Science, Transactional memory, Shared memory, Artificial Intelligence, Dash, Informàtica::Arquitectura de computadors::Arquitectures paral·leles [Àrees temàtiques de la UPC], Implementation, Dataflow architecture, Programming language, Architectures, Computer Graphics and Computer-Aided Design, Algorithm, Programming model, Hardware and Architecture, Programming paradigm, Benchmark (computing), Software
- Abstract
© 2015 Elsevier B.V. All rights reserved. The current trend in development of parallel programming models is to combine different well established models into a single programming model in order to support efficient implementation of a wide range of real world applications. The dataflow model has particularly managed to recapture the interest of the research community due to its ability to express parallelism efficiently. Thus, a number of recently proposed hybrid parallel programming models combine dataflow and traditional shared memory models. Their findings have influenced the introduction of task dependency in the OpenMP 4.0 standard. This article presents DaSH - the first comprehensive benchmark suite for hybrid dataflow and shared memory programming models. DaSH features 11 benchmarks, each representing one of the Berkeley dwarfs that capture patterns of communication and computation common to a wide range of emerging applications. DaSH also includes sequential and shared-memory implementations based on OpenMP and Intel TBB to facilitate easy comparison between hybrid dataflow implementations and traditional shared memory implementations based on work-sharing and/or tasks. Finally, we use DaSH to evaluate three different hybrid dataflow models, identify their advantages and shortcomings, and motivate further research on their characteristics. This work has been supported by the Spanish Severo Ochoa award SEV-2011-00067 and project TIN2012-34557 from the Ministry of Science and Innovation, by the Generalitat de Catalunya 2014-SGR-1051 award, the Spanish Ministry of Education (TIN2007-60625 and CSD2007-00050) and the RoMoL project (ERC Advanced Grant Agreement 321253). We thankfully acknowledge the Microsoft Research support through the BSC-Microsoft Research Center and the European Commission through the HiPEAC-3 Network of Excellence.
- Published
- 2015
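The abstract of entry 12 notes that this line of hybrid dataflow research influenced the task-dependency support introduced in OpenMP 4.0. For context only, a minimal example of that standard feature is sketched below; it is ordinary OpenMP 4.0 code, not part of DaSH, and the variable names are invented for illustration.

```cpp
// Minimal OpenMP 4.0 task-dependency example (standard OpenMP, not DaSH code).
// Build with something like: g++ -fopenmp depend_demo.cpp
#include <cstdio>

int main() {
    int x = 0, y = 0;
    #pragma omp parallel
    #pragma omp single
    {
        #pragma omp task depend(out: x)                 // producer of x
        x = 40;

        #pragma omp task depend(in: x) depend(out: y)   // scheduled only after x is ready
        y = x + 2;

        #pragma omp taskwait                            // wait for both tasks
        std::printf("y = %d\n", y);                     // prints y = 42
    }
    return 0;
}
```

The depend clauses convey the same kind of producer/consumer information that the hybrid dataflow models in DaSH express through explicit task inputs.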
13. Dynamic transaction coalescing
- Author
- Adrian Cristal, Vasileios Karakostas, Vladimir Gajinov, Vesna Smiljković, Srđan Stipić, Mateo Valero, and Osman Unsal
- Subjects
Profiling (computer programming), Low overhead, Computer science, Distributed computing, Llenguatges de programació, Parallel computing, Programming languages (Electronic computers), Supercomputers, Distributed transaction, Software transactional memory, Superordinadors, Granularity, Super (very large) computers, Database transaction, Informàtica::Arquitectura de computadors [Àrees temàtiques de la UPC], Compile time
- Abstract
Prior work in Software Transactional Memory has identified high overheads related to starting and committing transactions that may degrade the application performance. To amortize these overheads, transaction coalescing techniques have been proposed that coalesce two or more small transactions into one large transaction. However, these techniques either coalesce transactions statically at compile time, or lack on-line profiling mechanisms that allow coalescing transactions dynamically. Thus, such approaches lead to sub-optimal execution or they may even degrade the performance. In this paper, we introduce Dynamic Transaction Coalescing (DTC), a compile-time and run-time technique that improves transactional throughput. DTC reduces the overheads of starting and committing a transaction. At compile-time, DTC generates several code paths with a different number of coalesced transactions. At runtime, DTC implements low overhead online profiling and dynamically selects the corresponding code path that improves throughput. Compared to coalescing transactions statically, DTC provides two main improvements. First, DTC implements online profiling which removes the dependency on a pre-compilation profiling step. Second, DTC dynamically selects the best transaction granularity to improve the transaction throughput taking into consideration the abort rate. We evaluate DTC using common TM benchmarks and micro-benchmarks. Our findings show that: (i) DTC performs like static transaction coalescing in the common case, (ii) DTC does not suffer from performance degradation, and (iii) DTC outperforms static transaction coalescing when an application exposes phased behavior.
- Published
- 2014
- Full Text
- View/download PDF
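To make the coalescing idea of entry 13 concrete, the sketch below merges several small per-item transactions into one larger transaction and grows the granularity while the observed abort rate stays low. This is a hypothetical illustration, not the authors' DTC implementation: it assumes GCC's transactional-memory extension (compile with -fgnu-tm), the abort counter would have to be fed by the TM runtime, and the adaptation rule is invented.

```cpp
// Hypothetical sketch of transaction coalescing (not the DTC code from entry 13).
// Assumes GCC's transactional-memory extension: g++ -fgnu-tm coalesce_demo.cpp
#include <cstdio>
#include <cstddef>

enum { BUCKETS = 1024 };
static long shared_counters[BUCKETS];        // shared state updated transactionally

static std::size_t commits = 0;
static std::size_t aborts = 0;               // would be reported by the TM runtime
static std::size_t granularity = 1;          // iterations coalesced into one transaction

void process(std::size_t n) {
    for (std::size_t i = 0; i < n; i += granularity) {
        std::size_t end = (i + granularity < n) ? i + granularity : n;
        __transaction_atomic {               // one transaction covers `granularity` items
            for (std::size_t j = i; j < end; ++j)
                ++shared_counters[j % BUCKETS];
        }
        ++commits;
        // Toy adaptation rule (assumption): enlarge transactions while the abort rate
        // stays low; real DTC instead selects among pre-generated code paths.
        if (aborts * 10 < commits && granularity < 64)
            granularity *= 2;
    }
}

int main() {
    process(1u << 20);
    std::printf("counter[0] = %ld\n", shared_counters[0]);
    return 0;
}
```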
14. Dynamic verification for hybrid concurrent programming models
- Author
- Osman Unsal, Erdal Mutlu, Serdar Tasiran, Adrian Cristal, and Vladimir Gajinov
- Subjects
Computer science, Programming language, Dataflow, Concurrency, Transactional memory, Runtime system, Shared memory, Synchronization (computer science), Dynamic verification, Programming paradigm, Concurrent computing
- Abstract
© Springer International Publishing Switzerland 2014. We present a dynamic verification technique for a class of concurrent programming models that combine dataflow and shared memory programming. In this class of hybrid concurrency models, programs are built from tasks whose data dependencies are explicitly defined by a programmer and used by the runtime system to coordinate task execution. Differently from pure dataflow, tasks are allowed to have shared state which must be properly protected using synchronization mechanisms, such as locks or transactional memory (TM). While these hybrid models enable programmers to reason about programs, especially with irregular data sharing and communication patterns, at a higher level, they may also give rise to new kinds of bugs as they are unfamiliar to the programmers. We identify and illustrate a novel category of bugs in these hybrid concurrency programming models and provide a technique for randomized exploration of program behaviors in this setting.
- Published
- 2014
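The bug class described in entry 14 can be illustrated with a minimal sketch: each task has an explicit dataflow input, but both tasks also touch shared state that the dataflow graph knows nothing about, so that access must be protected by the programmer. Plain C++ threads stand in for runtime-scheduled tasks here; none of the paper's runtime or tooling is reproduced.

```cpp
// Hypothetical illustration of the hybrid-model bug class from entry 14.
#include <cstdio>
#include <mutex>
#include <thread>

static int shared_hits = 0;            // state shared across tasks, outside the dataflow graph
static std::mutex shared_mtx;          // protection the programmer must remember to add

void task_body(int input) {            // `input` is the task's explicit dataflow input
    int local = input * 2;
    std::lock_guard<std::mutex> guard(shared_mtx);  // omitting this guard creates the race
    shared_hits += local;              // that randomized exploration tries to expose
}

int main() {
    std::thread t1(task_body, 1), t2(task_body, 2); // stand-ins for two concurrent tasks
    t1.join();
    t2.join();
    std::printf("shared_hits = %d\n", shared_hits); // 6 when properly synchronized
    return 0;
}
```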
15. Integrating dataflow abstractions into the shared memory model
- Author
- Tim Harris, Adrian Cristal, Eduard Ayguadé, Srdjan Stipic, Vladimir Gajinov, and Osman Unsal
- Subjects
Distributed shared memory, Signal programming, Computer science, Dataflow, Programming language, Transactional memory, Parallel computing, Shared memory, Distributed memory, Memory model, High performance computing, Informàtica::Arquitectura de computadors [Àrees temàtiques de la UPC], Càlcul intensiu (Informàtica), Dataflow architecture
- Abstract
In this paper we present Atomic Dataflow model (ADF), a new task-based parallel programming model for C/C++ which integrates dataflow abstractions into the shared memory programming model. The ADF model provides pragma directives that allow a programmer to organize a program into a set of tasks and to explicitly define input data for each task. The task dependency information is conveyed to the ADF runtime system which constructs the dataflow task graph and builds the necessary infrastructure for dataflow execution. Additionally, the ADF model allows tasks to share data. The key idea is that computation is triggered by dataflow between tasks but that, within a task, execution occurs by making atomic updates to common mutable state. To that end, the ADF model employs transactional memory which guarantees atomicity of shared memory updates. We show examples that illustrate how the programmability of shared memory can be improved using the ADF model. Moreover, our evaluation shows that the ADF model performs well in comparison with programs parallelized using OpenMP and transactional memory.
- Published
- 2012
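The sketch below only shows the shape of the ADF model described in entry 15: a task with an explicitly declared input whose body updates shared state atomically. The directive spelling in the comment is invented (the real ADF pragma syntax is defined in the paper), and GCC's -fgnu-tm is assumed so the atomic body can be written in plain C++.

```cpp
// Illustrative shape of an ADF-style task; the pragma spelling is hypothetical.
// Assumes GCC's transactional-memory extension: g++ -fgnu-tm adf_shape_demo.cpp
#include <cstdio>

static int price;            // dataflow input of the task, produced by another task
static long total;           // mutable state shared with other concurrent tasks

// Conceptually: "#pragma adf task input(price)" would tell the ADF runtime to fire
// this task whenever a new value of `price` arrives (hypothetical syntax).
void price_task() {
    __transaction_atomic {   // ADF runs the body atomically via transactional memory,
        total += price;      // so the shared update needs no explicit locking
    }
}

int main() {                 // stand-in driver; in ADF the runtime schedules the task
    price = 10;
    price_task();
    std::printf("total = %ld\n", total);
    return 0;
}
```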
16. QuakeTM
- Author
- Vladimir Gajinov, Ferad Zyulkyarov, Osman Unsal, Adrian Cristal, Eduard Ayguadé, Tim Harris, and Mateo Valero
- Published
- 2009
17. Atomic quake
- Author
- Ferad Zyulkyarov, Vladimir Gajinov, Osman Unsal, Adrián Cristal, Eduard Ayguadé, Tim Harris, and Mateo Valero
- Published
- 2009
18. Multithreaded software transactional memory and OpenMP
- Author
- Osman Unsal, Roger Ferrer, Mateo Valero, Milos Milovanovic, Adrian Cristal, Eduard Ayguadé, and Vladimir Gajinov
- Subjects
Runtime system, Computer science, Asynchronous communication, Operating system, Software transactional memory, Software productivity, Transactional memory, Parallel computing, Thread (computing), Database transaction
- Abstract
Transactional Memory (TM) is a key future technology for emerging many-cores. On the other hand, OpenMP provides a vast established base for writing parallel programs, especially for scientific applications. Combining TM with OpenMP provides a rich, enhanced programming environment and an attractive solution to the many-core software productivity problem. In this paper, we discuss the first multithreaded runtime environment for supporting our combined TM and OpenMP framework. We present the extensions of OpenMP for using Transactional Memory. We then present the novel multithreaded STM design with a dedicated thread for eager asynchronous conflict detection. Conflict detection is executed in a separate thread so that the transaction does not waste time on it. At the same time, eager conflict detection is asynchronous, which increases speculative parallel execution. We also include an initial performance analysis of the runtime system and possible gains which can be achieved.
- Published
- 2007
- Full Text
- View/download PDF
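As a rough source-level approximation of the combination described in entry 18, the sketch below pairs a standard OpenMP work-sharing loop with a GCC -fgnu-tm atomic block around the shared update. The paper's own OpenMP extensions and its multithreaded STM runtime with a dedicated conflict-detection thread are not shown; only generally available features are used.

```cpp
// Approximate look of "OpenMP + transactional memory" at the source level.
// Build with something like: g++ -fopenmp -fgnu-tm omp_tm_demo.cpp
#include <cstdio>

int main() {
    long hist[16] = {0};
    #pragma omp parallel for              // OpenMP provides the parallel loop
    for (int i = 0; i < 100000; ++i) {
        __transaction_atomic {            // transactional memory isolates the
            ++hist[i % 16];               // update to the shared histogram
        }
    }
    long sum = 0;
    for (int b = 0; b < 16; ++b)
        sum += hist[b];
    std::printf("sum = %ld\n", sum);      // prints sum = 100000
    return 0;
}
```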
19. Atomic quake: Using transactional memory in an interactive multiplayer game server
- Author
- Vladimir Gajinov, Adrian Cristal, Ferad Zyulkyarov, Mateo Valero, Tim Harris, Eduard Ayguadé, and Osman Unsal
- Subjects
Input/output, Nested transaction, Quake (series), Transaction processing, Computer science, Distributed computing, Transactional memory, Parallel programming (Computer science), Signal theory (Telecommunication), Enginyeria de la telecomunicació [Àrees temàtiques de la UPC], Data structure, Computer Graphics and Computer-Aided Design, Senyal, Teoria del (Telecomunicació), Shared memory, System call, Operating system, Multiplayer game, Software
- Abstract
Transactional Memory (TM) is being studied widely as a new technique for synchronizing concurrent accesses to shared memory data structures for use in multi-core systems. Much of the initial work on TM has been evaluated using microbenchmarks and application kernels; it is not clear whether conclusions drawn from these workloads will apply to larger systems. In this work we make the first attempt to develop a large, complex application that uses TM for all of its synchronization. We describe how we have taken an existing parallel implementation of the Quake game server and restructured it to use transactions. In doing so we have encountered examples where transactions simplify the structure of the program. We have also encountered cases where using transactions occludes the structure of the existing code. Compared with existing TM benchmarks, our workload exhibits non-block-structured transactions within which there are I/O operations and system call invocations. There are long and short running transactions (200–1.3M cycles) with small and large read and write sets (a few bytes to 1.5MB). There are nested transactions reaching up to 9 levels at runtime. There are examples where error handling and recovery occurs inside transactions. There are also examples where data changes between being accessed transactionally and accessed nontransactionally. However, we did not see examples where the kind of access to one piece of data depended on the value of another.
20. Supporting stateful tasks in a dataflow graph
- Author
- Osman Unsal, Vladimir Gajinov, Eduard Ayguadé, Tim Harris, Adrian Cristal, and Srdjan Stipic
- Subjects
Signal programming, Dataflow, Computer science, Programming language, Parallelization, Dataflow programming, Transactional memory, Arquitectura d'ordinadors, Runtime system, Programming paradigm, Graph (abstract data type), Computer architecture, Informàtica::Arquitectura de computadors::Arquitectures paral·leles [Àrees temàtiques de la UPC], Optimistic concurrency control
- Abstract
This paper introduces Atomic Dataflow Model (ADF) - a programming model for shared-memory systems that combines aspects of dataflow programming with the use of explicitly mutable state. The model provides language constructs that allow a programmer to delineate a program into a set of tasks and to explicitly define input data for each task. This information is conveyed to the ADF runtime system which constructs the task dependency graph and builds the necessary infrastructure for dataflow execution. However, the key aspect of the proposed model is that it does not require the programmer to specify all of the task’s dependencies explicitly, but only those that imply logical ordering between tasks. The ADF model manages the remainder of inter-task dependencies automatically, by executing the body of the task within an implicit memory transaction. This provides an easy-to-program optimistic concurrency substrate and enables a task to safely share data with other concurrent tasks. In this paper, we describe the ADF model and show how it can increase the programmability of shared memory systems.
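Entry 20's central idea, running a stateful task body inside an implicit memory transaction so that undeclared data dependencies are resolved optimistically, can be sketched as follows. The wrapper and function names are invented stand-ins for what the ADF runtime would generate, and GCC's -fgnu-tm is assumed.

```cpp
// Hypothetical sketch of an implicitly transactional, stateful task (not ADF API).
// Assumes GCC's transactional-memory extension: g++ -fgnu-tm stateful_task_demo.cpp
#include <cstdio>

static long inventory = 100;          // state shared by several concurrent tasks

__attribute__((transaction_safe))
void sell_one_body() {                // the programmer writes only the task body
    if (inventory > 0)
        --inventory;
}

void sell_one_task() {                // conceptually generated by the runtime: the body
    __transaction_atomic {            // runs as a memory transaction, so sharing
        sell_one_body();              // `inventory` with other tasks needs no explicit
    }                                 // locking or declared dependency
}

int main() {
    sell_one_task();
    std::printf("inventory = %ld\n", inventory);   // prints inventory = 99
    return 0;
}
```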
21. A case study of hybrid dataflow and shared-memory programming models: Dependency-based parallel game engine
- Author
- Veljko Milutinovic, Osman Unsal, Eduard Ayguadé, Vladimir Gajinov, Igor Erić, Adrian Cristal, and Saša Stojanović
- Subjects
Game engine, Distributed shared memory, Theoretical computer science, Signal programming, Speedup, Parallel processing (Electronic computers), Dataflow, Game programming, Computer science, Processament en paral·lel (Ordinadors), Parallel computing, Shared memory, Synchronization (computer science), Programming paradigm, Informàtica::Arquitectura de computadors::Arquitectures paral·leles [Àrees temàtiques de la UPC]
- Abstract
Recently proposed hybrid dataflow and shared memory programming models combine these two underlying models in order to support a wider range of problems naturally. The effectiveness of such hybrid models for parallel implementations of dense and sparse algebra problems is well known. In this paper, we show another real world example for which hybrid dataflow models provide better support than traditional shared memory models. Specifically, we compare these models using the game engine parallelization as a case study. We show that hybrid dataflow models decrease the complexity of the parallel game engine implementation by eliminating or restructuring the explicit synchronization that is necessary in shared memory implementations. The corresponding implementations also exhibit good scalability and better speedup than the shared memory parallel implementations, especially in the case of a highly congested game world that contains a large number of game objects. Ultimately, on an eight-core machine we were able to obtain 4.72x speedup compared to the sequential baseline, and to improve by 49% over the lock-based parallel implementation based on work-sharing.
22. DaSH: a benchmark suite for hybrid dataflow and shared memory programming models: with comparative evaluation of three hybrid dataflow models
- Author
- Srđan Stipić, Vladimir Gajinov, Adrian Cristal, Igor Erić, Eduard Ayguadé, and Osman Unsal
- Subjects
Dataflow, Computer science, Parallel programming (Computer science), Llenguatges de programació, Parallel computing, Programming languages (Electronic computers), Programació en paral·lel (Informàtica), Transactional memory, Shared memory, Informàtica::Arquitectura de computadors::Arquitectures paral·leles [Àrees temàtiques de la UPC], Implementation, Dataflow architecture, Programming language, Suite, Informàtica::Llenguatges de programació [Àrees temàtiques de la UPC], Programming paradigm, Benchmark (computing)
- Abstract
The current trend in development of parallel programming models is to combine different well established models into a single programming model in order to support efficient implementation of a wide range of real world applications. The dataflow model has particularly managed to recapture the interest of the research community due to its ability to express parallelism efficiently. Thus, a number of recently proposed hybrid parallel programming models combine dataflow and traditional shared memory. Their findings have influenced the introduction of task dependency in the recently published OpenMP 4.0 standard. In this paper, we present DaSH - the first comprehensive benchmark suite for hybrid dataflow and shared memory programming models. DaSH features 11 benchmarks, each representing one of the Berkeley dwarfs that capture patterns of communication and computation common to a wide range of emerging applications. We also include sequential and shared-memory implementations based on OpenMP and TBB to facilitate easy comparison between hybrid dataflow implementations and traditional shared memory implementations based on work-sharing and/or tasks. Finally, we use DaSH to evaluate three different hybrid dataflow models, identify their advantages and shortcomings, and motivate further research on their characteristics.
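As general background for the dataflow-plus-shared-memory combination that DaSH targets (this is not DaSH code and not one of the three hybrid models evaluated in entry 22), Intel TBB's flow graph shows how dataflow-style dependencies can be expressed over shared memory in C++.

```cpp
// Background example: a two-node Intel TBB flow graph (not part of DaSH).
// Build with something like: g++ flow_demo.cpp -ltbb
#include <tbb/flow_graph.h>
#include <cstdio>

int main() {
    tbb::flow::graph g;

    tbb::flow::function_node<int, int> square(
        g, tbb::flow::unlimited, [](int v) { return v * v; });

    tbb::flow::function_node<int, int> print(
        g, 1, [](int v) { std::printf("got %d\n", v); return v; });

    tbb::flow::make_edge(square, print);   // dataflow edge: square -> print

    for (int i = 1; i <= 3; ++i)
        square.try_put(i);                 // tokens flow through the graph
    g.wait_for_all();                      // wait for all node bodies to finish
    return 0;
}
```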