444 results for "William J. Dally"
Search Results
2. GPU-Initiated On-Demand High-Throughput Storage Access in the BaM System Architecture.
3. Optimal Clipping and Magnitude-aware Differentiation for Improved Quantization-aware Training.
4. Frontier vs the Exascale Report: Why so long? and Are We Really There Yet?
5. A 0.190-pJ/bit 25.2-Gb/s/wire Inverter-Based AC-Coupled Transceiver for Short-Reach Die-to-Die Interfaces in 5-nm CMOS.
6. SPAA'21 Panel Paper: Architecture-Friendly Algorithms versus Algorithm-Friendly Architectures.
7. Optimal Operation of a Plug-in Hybrid Vehicle with Battery Thermal and Degradation Model.
8. SpArch: Efficient Architecture for Sparse Matrix Multiplication.
9. A 17-95.6 TOPS/W Deep Learning Inference Accelerator with Per-Vector Scaled 4-bit Quantization for Transformers in 5nm.
10. A 0.297-pJ/bit 50.4-Gb/s/wire Inverter-Based Short-Reach Simultaneous Bidirectional Transceiver for Die-to-Die Interface in 5nm CMOS.
11. A Fine-Grained GALS SoC with Pausible Adaptive Clocking in 16 nm FinFET.
12. A 0.11 pJ/Op, 0.32-128 TOPS, Scalable Multi-Chip-Module-Based Deep Neural Network Accelerator Designed with a High-Productivity VLSI Methodology.
13. Simba: Scaling Deep-Learning Inference with Multi-Chip-Module-Based Architecture.
14. MAGNet: A Modular Accelerator Generator for Neural Networks.
15. A 2-to-20 GHz Multi-Phase Clock Generator with Phase Interpolators Using Injection-Locked Oscillation Buffers for High-Speed IOs in 16nm FinFET.
16. Darwin-WGA: A Co-processor Provides Increased Sensitivity in Whole Genome Alignments with High Speedup.
17. VS-Quant: Per-vector Scaled Quantization for Accurate Low-Precision Neural Network Inference.
18. Darwin: A Genomics Co-processor Provides up to 15,000X Acceleration on Long Read Assembly.
19. Hardware-Enabled Artificial Intelligence.
20. Ground-referenced signaling for intra-chip and short-reach chip-to-chip interconnects.
21. Bandwidth-efficient deep learning.
22. A Novel High-Efficiency Three-Phase Multilevel PV Inverter With Reduced DC-Link Capacitance.
23. A 95.6-TOPS/W Deep Learning Inference Accelerator With Per-Vector Scaled 4-bit Quantization in 5 nm.
24. Fine-grained DRAM: energy-efficient DRAM for extreme bandwidth systems.
25. Exploring the Granularity of Sparsity in Convolutional Neural Networks.
26. SCNN: An Accelerator for Compressed-sparse Convolutional Neural Networks.
27. Architecting an Energy-Efficient DRAM System for GPUs.
28. A 1.17pJ/b 25Gb/s/pin ground-referenced single-ended serial link for off- and on-package communication in 16nm CMOS using a process- and temperature-adaptive voltage regulator.
29. Darwin: A Genomics Co-processor Provides up to 15,000X Acceleration on Long Read Assembly.
30. EIE: Efficient Inference Engine on Compressed Deep Neural Network.
31. Efficient Sparse-Winograd Convolutional Neural Networks.
32. Learning both Weights and Connections for Efficient Neural Network.
33. SLIP: reducing wire energy in the memory hierarchy.
34. Network endpoint congestion control for fine-grained communication.
35. Trained Ternary Quantization.
36. DSD: Dense-Sparse-Dense Training for Deep Neural Networks.
37. Efficient Sparse-Winograd Convolutional Neural Networks.
38. Scaling the Power Wall: A Path to Exascale.
39. OP-VENT
40. A 0.11 pJ/Op, 0.32-128 TOPS, Scalable Multi-Chip-Module-based Deep Neural Network Accelerator with Ground-Reference Signaling in 16nm.
41. Analog/Mixed-Signal Hardware Error Modeling for Deep Learning Inference.
42. Channel reservation protocol for over-subscribed channels and destinations.
43. A detailed and flexible cycle-accurate Network-on-Chip simulator.
44. 21st century digital design tools.
45. Unifying Primary Cache, Scratch, and Register File Memories in a Throughput Processor.
46. Adaptive Backpressure: Efficient buffer management for on-chip networks.
47. Network congestion avoidance through Speculative Reservation.
48. Evolution of the Graphics Processing Unit (GPU).
49. Packet chaining: efficient single-cycle allocation for on-chip networks.
50. A compile-time managed multi-level register file hierarchy.
Discovery Service for Jio Institute Digital Library