123 results for "Deliang Fan"
Search Results
2. Hyb-Learn: A Framework for On-Device Self-Supervised Continual Learning with Hybrid RRAM/SRAM Memory.
3. Efficient Memory Integration: MRAM-SRAM Hybrid Accelerator for Sparse On-Device Learning.
4. DeepShuffle: A Lightweight Defense Framework against Adversarial Fault Injection Attacks on Deep Neural Networks in Multi-Tenant Cloud-FPGA.
5. EMGAN: Early-Mix-GAN on Extracting Server-Side Model in Split Federated Learning.
6. SP-IMC: A Sparsity Aware In-Memory-Computing Macro in 28nm CMOS with Configurable Sparse Representation for Highly Sparse DNN Workloads.
7. FP-IMC: A 28nm All-Digital Configurable Floating-Point In-Memory Computing Macro.
8. A 65nm RRAM Compute-in-Memory Macro for Genome Sequencing Alignment.
9. Accelerating Low Bit-width Neural Networks at the Edge, PIM or FPGA: A Comparative Study.
10. DSPIMM: A Fully Digital SParse In-Memory Matrix Vector Multiplier for Communication Applications.
11. A 1.23-GHz 16-kb Programmable and Generic Processing-in-SRAM Accelerator in 65nm.
12. DA3: Dynamic Additive Attention Adaption for Memory-Efficient On-Device Multi-Domain Learning.
13. ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning.
14. RepNet: Efficient On-Device Learning via Feature Reprogramming.
15. Contrastive Dual Gating: Learning Sparse Features With Contrastive Learning.
16. MnM: A Fast and Efficient Min/Max Searching in MRAM.
17. DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories.
18. Efficient Multi-task Adaption for Crossbar-based In-Memory Computing.
19. XBM: A Crossbar Column-wise Binary Mask Learning Method for Efficient Multiple Task Adaption.
20. XST: A Crossbar Column-wise Sparse Training for Efficient Continual Learning.
21. Gradient-Based Novelty Detection Boosted by Self-Supervised Binary Classification.
22. XMA: a crossbar-aware multi-task adaption framework via shift-based mask learning method.
23. Slimmed Asymmetrical Contrastive Learning and Cross Distillation for Lightweight Model Training.
24. Characterization and Mitigation of Relaxation Effects on Multi-level RRAM based In-Memory Computing.
25. KSM: Fast Multiple Task Adaption via Kernel-Wise Soft Mask Learning.
26. NeurObfuscator: A Full-stack Obfuscation Tool to Mitigate Neural Architecture Stealing.
27. RNSiM: Efficient Deep Neural Network Accelerator Using Residue Number Systems.
28. Processing-in-Memory Acceleration of MAC-based Applications Using Residue Number System: A Comparative Study.
29. Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush Deep Neural Network in Multi-Tenant FPGA.
30. MetaGater: Fast Learning of Conditional Channel Gated Networks via Federated Meta-Learning.
31. Dynamic Neural Network to Enable Run-Time Trade-off between Accuracy and Latency.
32. RADAR: Run-time Adversarial Weight Attack Detection and Accuracy Recovery.
33. Self-supervised Novelty Detection for Continual Learning: A Gradient-Based Approach Boosted by Binary Classification.
34. Max-PIM: Fast and Efficient Max/Min Searching in DRAM.
35. PIM-Quantifier: A Processing-in-Memory Platform for mRNA Quantification.
36. Leveraging Noise and Aggressive Quantization of In-Memory Computing for Robust DNN Hardware Against Adversarial Input and Weight Attacks.
37. TBT: Targeted Neural Network Attack With Bit Trojan.
38. Defending and Harnessing the Bit-Flip Based Adversarial Weight Attack.
39. FeFET-based low-power bitwise logic-in-memory with direct write-back and data-adaptive dynamic sensing interface.
40. Exploring DNA Alignment-in-Memory Leveraging Emerging SOT-MRAM.
41. Redundant Neurons and Shared Redundant Synapses for Robust Memristor-based DNNs with Reduced Overhead.
42. Modeling and Benchmarking Computing-in-Memory for Design Space Exploration.
43. Robust Sparse Regularization: Defending Adversarial Attacks Via Regularized Sparse Network.
44. DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips.
45. Representable Matrices: Enabling High Accuracy Analog Computation for Inference of DNNs using Memristors.
46. A Flexible Processing-in-Memory Accelerator for Dynamic Channel-Adaptive Deep Neural Networks.
47. PIM-Aligner: A Processing-in-MRAM Platform for Biological Sequence Alignment.
48. Processing-in-Memory Accelerator for Dynamic Neural Network with Run-Time Tuning of Accuracy, Power and Latency.
49. Harmonious Coexistence of Structured Weight Pruning and Ternarization for Deep Neural Networks.
50. PIM-Assembler: A Processing-in-Memory Platform for Genome Assembly.