102 results for "Sze, Vivienne"
Search Results
2. HighLight: Efficient and Flexible DNN Acceleration with Hierarchical Structured Sparsity
3. Tailors: Accelerating Sparse Tensor Algebra by Overbooking Buffer Capacity
4. Kernel Computation
5. Advanced Technologies
6. Overview of Deep Neural Networks
7. Exploiting Sparsity
8. Efficient Processing of Deep Neural Networks
9. Conclusion
10. Operation Mapping on Specialized Hardware
11. Designing DNN Accelerators
12. Introduction
13. Reducing Precision
14. Key Metrics and Design Objectives
15. Designing Efficient DNN Models
16. RAELLA: Reforming the Arithmetic for Efficient, Low-Resolution, and Low-Loss Analog PIM: No Retraining Required!
17. Data Centers on Wheels: Emissions From Computing Onboard Autonomous Vehicles
18. NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications
19. Video Compression
20. Sparseloop: An Analytical Approach To Sparse Tensor Accelerator Modeling
21. Developing a Series of AI Challenges for the United States Department of the Air Force
22. Uncertainty from Motion for DNN Monocular Depth Estimation
23. Memory-Efficient Gaussian Fitting for Depth Images in Real Time
24. Efficient Computation of Map-scale Continuous Mutual Information on Chip in Real Time
25. NetAdaptV2: Efficient Neural Architecture Search with Fast Super-Network Training and Architecture Optimization
26. Decoder Hardware Architecture for HEVC
27. Entropy Coding in HEVC
28. Progress on photonic tensor processors based on time multiplexing and photoelectric multiplication
29. Domain-Specific Language Abstractions for Compression
30. Sparseloop: An Analytical, Energy-Focused Design Space Exploration Methodology for Sparse Tensor Accelerators
31. Architecture-Level Energy Estimation for Heterogeneous Computing Systems
32. Session 9 Overview: ML Processors From Cloud to Edge
33. SE4: ICs in PandemICs
34. Freely scalable and reconfigurable optical hardware for deep learning
35. App-based saccade latency and directional error determination across the adult age spectrum
36. An Architecture-Level Energy and Area Estimator for Processing-In-Memory Accelerator Designs
37. Efficient Computing for AI and Robotics
38. Efficient Processing of Deep Neural Networks
39. FSMI: Fast computation of Shannon mutual information for information-theoretic mapping
40. Low Power Depth Estimation of Rigid Objects for Time-of-Flight Imaging
41. Balancing Actuation and Computing Energy in Motion Planning
42. An Efficient and Continuous Approach to Information-Theoretic Exploration
43. Measuring Saccade Latency Using Smartphone Cameras
44. Digital Optical Neural Networks for Large-Scale Machine Learning
45. How to Evaluate Deep Neural Network Processors: TOPS/W (Alone) Considered Harmful
46. Design Considerations for Efficient Deep Neural Networks on Processing-in-Memory Accelerators
47. Accelergy: An Architecture-Level Energy Estimation Methodology for Accelerator Designs
48. Low Power Adaptive Time-of-Flight Imaging for Multiple Rigid Objects
49. High-Throughput Computation of Shannon Mutual Information on Chip
50. Eyeriss v2: A Flexible Accelerator for Emerging Deep Neural Networks on Mobile Devices
Discovery Service for Jio Institute Digital Library