79 results for "Rachata Ausavarungnirun"
Search Results
2. Janus: A Flexible Processing-in-Memory Graph Accelerator Toward Sparsity.
3. ICE: Collaborating Memory and Process Management for User Experience on Resource-limited Mobile Devices.
4. Utopia: Fast and Efficient Address Translation via Hybrid Restrictive & Flexible Virtual-to-Physical Address Mappings.
5. vPIM: Efficient Virtual Address Translation for Scalable Processing-in-Memory Architectures.
6. GenStore: a high-performance in-storage processing system for genome sequence analysis.
7. Gzippo: Highly-Compact Processing-in-Memory Graph Accelerator Alleviating Sparsity and Redundancy.
8. Memory Harvesting in Multi-GPU Systems with Hierarchical Unified Virtual Memory.
9. CacheSifter: Sifting Cache Files for Boosted Mobile Performance and Lifetime.
10. GenStore: In-Storage Filtering of Genomic Data for High-Performance and Energy-Efficient Genome Analysis.
11. Chapter Eight - The design of an energy-efficient deflection-based on-chip network.
12. SISA: Set-Centric Instruction Set Architecture for Graph Mining on Processing-in-Memory Systems.
13. FPRA: A Fine-grained Parallel RRAM Architecture.
14. Improving Inter-kernel Data Reuse With CTA-Page Coordination in GPGPU.
15. GenASM: A High-Performance, Low-Power Approximate String Matching Acceleration Framework for Genome Sequence Analysis.
16. The Virtual Block Interface: A Flexible Alternative to the Conventional Virtual Memory Framework.
17. PRISM: Architectural Support for Variable-granularity Memory Metadata.
18. Acclaim: Adaptive Memory Reclaim to Improve User Experience in Android Systems.
19. iTRIM: I/O-Aware TRIM for Improving User Experience on Mobile Devices.
20. RevaMp3D: Architecting the Processor Core and Cache Hierarchy for Systems with Monolithically-Integrated Logic and Memory.
21. GenStore: A High-Performance and Energy-Efficient In-Storage Computing System for Genome Sequence Analysis.
22. Utopia: Efficient Address Translation using Hybrid Virtual-to-Physical Address Mapping.
23. A Framework for Memory Oversubscription Management in Graphics Processing Units.
24. Binary Star: Coordinated Reliability in Heterogeneous Memory Systems for High Performance and Scalability.
25. CoNDA: efficient cache coherence support for near-data accelerators.
26. NoM: Network-on-Memory for Inter-Bank Data Transfer in Highly-Banked Memories.
27. Slim NoC: A Low-Diameter On-Chip Network Topology for High Energy Efficiency and Scalability.
28. MASK: Redesigning the GPU Memory Hierarchy to Support Multi-Application Concurrency.
29. Google Workloads for Consumer Devices: Mitigating Data Movement Bottlenecks.
30. LTRF: Enabling High-Capacity Register Files for GPUs via Hardware/Software Cooperative Register Prefetching.
31. Differentiating Cache Files for Fine-grain Management to Improve Mobile Performance and Lifetime.
32. SISA: Set-Centric Instruction Set Architecture for Graph Mining on Processing-in-Memory Systems.
33. Energy-Efficient Deflection-based On-chip Networks: Topology, Routing, Flow Control.
34. Highly Concurrent Latency-tolerant Register Files for GPUs.
35. ITAP: Idle-Time-Aware Power Management for GPU Execution Units.
36. Processing data where it makes sense: Enabling in-memory computation.
37. Mosaic: a GPU memory manager with application-transparent support for multiple page sizes.
38. Mosaic: Enabling Application-Transparent Support for Multiple Page Sizes in Throughput Processors.
39. NOM: Network-On-Memory for Inter-Bank Data Transfer in Highly-Banked Memories.
40. Enabling High-Capacity, Latency-Tolerant, and Highly-Concurrent GPU Register Files via Software/Hardware Cooperation.
41. The Virtual Block Interface: A Flexible Alternative to the Conventional Virtual Memory Framework.
42. A Modern Primer on Processing in Memory.
43. GenASM: A High-Performance, Low-Power Approximate String Matching Acceleration Framework for Genome Sequence Analysis.
44. Slim NoC: A Low-Diameter On-Chip Network Topology for High Energy Efficiency and Scalability.
45. μC-States: Fine-grained GPU Datapath Power Management.
46. SizeCap: Efficiently handling power surges in fuel cell powered data centers.
47. Design-Induced Latency Variation in Modern DRAM Chips: Characterization, Analysis, and Latency Reduction Mechanisms.
48. A Low-Overhead, Fully-Distributed, Guaranteed-Delivery Routing Algorithm for Faulty Network-on-Chips.
49. A case for core-assisted bottleneck acceleration in GPUs: enabling flexible data compression with assist warps.
50. Decoupled Direct Memory Access: Isolating CPU and IO Traffic by Leveraging a Dual-Data-Port DRAM.