
Your search for Author "Tu, Fengbin" returned 85 results.


Search Results

1. A 137.5 TOPS/W SRAM Compute-in-Memory Macro with 9-b Memory Cell-Embedded ADCs and Signal Margin Enhancement Techniques for AI Edge Applications

2. Towards Efficient Control Flow Handling in Spatial Architecture via Architecting the Control Flow Plane

3. DyBit: Dynamic Bit-Precision Numbers for Efficient Quantized Neural Network Inference

4. Alleviating Datapath Conflicts and Design Centralization in Graph Analytics Acceleration

5. H2Learn: High-Efficiency Learning Accelerator for High-Accuracy Spiking Neural Networks

6. SWG: an architecture for sparse weight gradient computation

7. AdaP-CIM: Compute-in-Memory Based Neural Network Accelerator using Adaptive Posit

10. BIOS: A 40nm Bionic Sensor-defined 0.47pJ/SOP, 268.7TSOPs/W Configurable Spiking Neuron-in-Memory Processor for Wearable Healthcare

11. AutoDCIM: An Automated Digital CIM Compiler

12. PIM-HLS: An Automatic Hardware Generation Tool for Heterogeneous Processing-In-Memory-based Neural Network Accelerators

13. MulTCIM: Digital Computing-in-Memory-Based Multimodal Transformer Accelerator With Attention-Token-Bit Hybrid Sparsity

14. SDP: Co-Designing Algorithm, Dataflow, and Architecture for In-SRAM Sparse NN Acceleration

15. ECSSD: Hardware/Data Layout Co-Designed In-Storage-Computing Architecture for Extreme Classification

16. ReDCIM: Reconfigurable Digital Computing-in-Memory Processor with Unified FP/INT Pipeline for Cloud AI Acceleration

17. 16.4 TensorCIM: A 28nm 3.7nJ/Gather and 8.3TFLOPS/W FP32 Digital-CIM Tensor Processor for MCM-CIM-Based Beyond-NN Acceleration

18. SPCIM: Sparsity-Balanced Practical CIM Accelerator with Optimized Spatial-Temporal Multi-Macro Utilization

19. STAR: An STGCN ARchitecture for Skeleton-Based Human Action Recognition

20. Reconfigurability, Why It Matters in AI Tasks Processing: A Survey of Reconfigurable AI Chips

21. SPG: Structure-Private Graph Database via SqueezePIR

23. HDSuper: High-Quality and High Computational Utilization Edge Super-Resolution Accelerator With Hardware-Algorithm Co-Design Techniques

29. A 28nm 29.2TFLOPS/W BF16 and 36.5TOPS/W INT8 Reconfigurable Digital CIM Processor with Unified FP/INT Pipeline and Bitwise In-Memory Booth Multiplication for Cloud Deep Learning Acceleration

30. INSPIRE: IN-Storage Private Information REtrieval via Protocol and Architecture Co-design

31. Accelerating Spatiotemporal Supervised Training of Large-Scale Spiking Neural Networks on GPU

33. DOTA: Detect and Omit Weak Attentions for Scalable Transformer Acceleration

34. Dynamic Sparse Attention for Scalable Transformer Acceleration

35. A 28nm 15.59μJ/Token Full-Digital Bitline-Transpose CIM-Based Sparse Transformer Accelerator with Pipeline/Parallel Reconfigurable Modes

36. GQNA: Generic Quantized DNN Accelerator With Weight-Repetition-Aware Activation Aggregating

38. ADROIT: An Adaptive Dynamic Refresh Optimization Framework for DRAM Energy Saving in DNN Training

39. Evolver: A Deep Learning Processor with On-Device Quantization-Voltage-Frequency Tuning

40. Brain-Inspired Computing: Adventure from Beyond CMOS Technologies to Beyond von Neumann Architectures

41. STC: Significance-aware transform-based codec framework for external memory access reduction

42. DUET: Boosting deep neural network efficiency on dual-module architecture

43. Reconfigurable Architecture for Neural Approximation in Multimedia Computing

44. Parana: A Parallel Neural Architecture Considering Thermal Problem of 3D Stacked Memory

45. A High Throughput Acceleration for Hybrid Neural Networks with Efficient Resource Management on FPGA

46. Towards Efficient Compact Network Training on Edge-Devices
