1,495 results for "Huazhong Yang"
Search Results
52. A Unified Sampling Framework for Solver Searching of Diffusion Probabilistic Models.
53. Ada3D : Exploiting the Spatial Redundancy with Adaptive Inference for Efficient 3D Object Detection.
54. WeightLock: A Mixed-Grained Weight Encryption Approach Using Local Decrypting Units for Ciphertext Computing in DNN Accelerators.
55. DF-GAS: a Distributed FPGA-as-a-Service Architecture towards Billion-Scale Graph-based Approximate Nearest Neighbor Search.
56. Lowering Latency of Embedded Memory by Exploiting In-Cell Victim Cache Hierarchy Based on Emerging Multi-Level Memory Devices.
57. TSTC: Two-Level Sparsity Tensor Core Enabling both Algorithm Flexibility and Hardware Efficiency.
58. Realizing Extreme Endurance Through Fault-aware Wear Leveling and Improved Tolerance.
59. Block-Wise Dynamic-Precision Neural Network Training Acceleration via Online Quantization Sensitivity Analytics.
60. NTGAT: A Graph Attention Network Accelerator with Runtime Node Tailoring.
61. Learning Graph-Enhanced Commander-Executor for Multi-Agent Navigation.
62. Asynchronous Multi-Agent Reinforcement Learning for Efficient Real-Time Multi-Robot Cooperative Exploration.
63. Minimizing Communication Conflicts in Network-On-Chip Based Processing-In-Memory Architecture.
64. CLAP: Locality Aware and Parallel Triangle Counting with Content Addressable Memory.
65. Fe-GCN: A 3D FeFET Memory Based PIM Accelerator for Graph Convolutional Networks.
66. Design Exploration of Dynamic Multi-Level Ternary Content-Addressable Memory Using Nanoelectromechanical Relays.
67. OMS-DPM: Optimizing the Model Schedule for Diffusion Probabilistic Models.
68. Ensemble-in-One: Ensemble Learning within Random Gated Networks for Enhanced Adversarial Robustness.
69. Memory-Oriented Structural Pruning for Efficient Image Restoration.
70. PIM-HLS: An Automatic Hardware Generation Tool for Heterogeneous Processing-In-Memory-based Neural Network Accelerators.
71. Processing-In-Hierarchical-Memory Architecture for Billion-Scale Approximate Nearest Neighbor Search.
72. Victor: A Variation-resilient Approach Using Cell-Clustered Charge-domain computing for High-density High-throughput MLC CiM.
73. ASMCap: An Approximate String Matching Accelerator for Genome Sequence Analysis Based on Capacitive Content Addressable Memory.
74. Memory-Efficient and Real-Time SPAD-based dToF Depth Sensor with Spatial and Statistical Correlation.
75. A 28nm 1.2GHz 5.27TOPS/W Scalable Vision/Point Cloud Deep Fusion Processor with CAM-based Universal Mapping Unit for BEVFusion Applications.
76. A 28nm 8928Kb/mm2-Weight-Density Hybrid SRAM/ROM Compute-in-Memory Architecture Reducing >95% Weight Loading from DRAM.
77. A 16-bit 10-GS/s Calibration-Free DAC Achieving
78. Multi-UAV Pursuit-Evasion with Online Planning in Unknown Environments by Deep Reinforcement Learning.
79. Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs.
80. CityLight: A Universal Model Towards Real-world City-scale Traffic Signal Control Coordination.
81. ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation.
82. Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better.
83. Can LLMs Learn by Teaching? A Preliminary Study.
84. MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression.
85. A 28nm 386.5GOPS/W Coarse-Grained DSP Using Configurable Processing Elements for Always-on Computation with FPGA Implementation.
86. A 10TFLOPS Datacenter-Oriented GPU with 4-Corner Stacked 64GB Memory by The Means of 2.5D Packaging Technology.
87. An RRAM-Based Digital Computing-in-Memory Macro With Dynamic Voltage Sense Amplifier and Sparse-Aware Approximate Adder Tree.
88. A Weight-Reload-Eliminated Compute-in-Memory Accelerator for 60 fps 4K Super-Resolution.
89. FAST: A Fully-Concurrent Access SRAM Topology for High Row-Wise Parallelism Applications Based on Dynamic Shift Operations.
90. A Heterogeneous Microprocessor Based on All-Digital Compute-in-Memory for End-to-End AIoT Inference.
91. Pareto Frequency-Aware Power Side-Channel Countermeasure Exploration on CNN Systolic Array.
92. Gibbon: An Efficient Co-Exploration Framework of NN Model and Processing-In-Memory Architecture.
93. CoGNN: An Algorithm-Hardware Co-Design Approach to Accelerate GNN Inference With Minibatch Sampling.
94. MNSIM 2.0: A Behavior-Level Modeling Tool for Processing-In-Memory Architectures.
95. Adaptive Multidimensional Parallel Fault Simulation Framework on Heterogeneous System.
96. Modularized Equalization Architecture With Transformer-Based Integrating Voltage Equalizer for the Series-Connected Battery Pack in Electric Bicycles.
97. A 6.0-GS/s Time-Interleaved DAC Using an Asymmetric Current-Tree Summation Network and Differential Clock Timing Calibration.
98. Serving Multi-DNN Workloads on FPGAs: A Coordinated Architecture, Scheduling, and Mapping Perspective.
99. FeFET-Based Logic-in-Memory Supporting SA-Free Write-Back and Fully Dynamic Access With Reduced Bitline Charging Activity and Recycled Bitline Charge.
100. Reliable and Efficient Parallel Checkpointing Framework for Nonvolatile Processor With Concurrent Peripherals.
Discovery Service for Jio Institute Digital Library