155 results for "Hamed Pirsiavash"
Search Results
2. CompGS: Smaller and Faster Gaussian Splatting with Vector Quantization.
3. SlowFormer: Adversarial Attack on Compute and Energy Consumption of Efficient Vision Transformers.
4. BrainWash: A Poisoning Attack to Forget in Continual Learning.
5. SimA: Simple Softmax-free Attention for Vision Transformers.
6. A Closer Look at Robustness of Vision Transformers to Backdoor Attacks.
7. A collective AI via lifelong learning and sharing at the edge.
8. NOLA: Compressing LoRA using Linear Combination of Random Basis.
9. PRANC: Pseudo RAndom Networks for Compacting deep models.
10. Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning.
11. Is Multi-Task Learning an Upper Bound for Continual Learning?
12. MASTAF: A Model-Agnostic Spatio-Temporal Attention Fusion Network for Few-shot Video Classification.
13. Diffusion-Augmented Coreset Expansion for Scalable Dataset Distillation.
14. MoIN: Mixture of Introvert Experts to Upcycle an LLM.
15. One Category One Prompt: Dataset Distillation using Diffusion Models.
16. MCNC: Manifold Constrained Network Compression.
17. Multi-Agent Lifelong Implicit Neural Learning.
18. Consistent Explanations by Contrastive Learning.
19. Backdoor Attacks on Self-Supervised Learning.
20. Sparsity and Heterogeneous Dropout for Continual Learning in the Null Space of Neural Activations.
21. Adaptive Token Sampling for Efficient Vision Transformers.
22. Constrained Mean Shift Using Distant yet Related Neighbors for Representation Learning.
23. Compact3D: Compressing Gaussian Splat Radiance Field Models with Vector Quantization.
24. BrainWash: A Poisoning Attack to Forget in Continual Learning.
25. SlowFormer: Universal Adversarial Patch for Attack on Compute and Energy Efficiency of Inference Efficient Vision Transformers.
26. A Cookbook of Self-Supervised Learning.
27. GeNIe: Generative Hard Negative Images Through Diffusion.
28. NOLA: Networks as Linear Combination of Low Rank Random Basis.
29. Mean Shift for Self-Supervised Learning.
30. ISD: Self-Supervised Learning by Iterative Similarity Distillation.
31. Explainable Models with Consistent Interpretations.
32. Role of Spatial Context in Adversarial Robustness for Object Detection.
33. Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs.
34. On-Chip Voltage and Temperature Digital Sensor for Security, Reliability, and Portability.
35. Hidden Trigger Backdoor Attacks.
36. Backdoor Attacks on Vision Transformers.
37. SimA: Simple Softmax-free Attention for Vision Transformers.
38. PRANC: Pseudo RAndom Networks for Compacting deep models.
39. A Simple Approach to Adversarial Robustness in Few-shot Image Classification.
40. Is Multi-Task Learning an Upper Bound for Continual Learning?
41. SimReg: Regression as a Simple Yet Effective Tool for Self-supervised Knowledge Distillation.
42. Amenable Sparse Network Investigator.
43. Fooling Network Interpretation in Image Classification.
44. Boosting Self-Supervised Learning via Knowledge Transfer.
45. CompRess: Self-Supervised Learning by Compressing Representations.
46. COOT: Cooperative Hierarchical Transformer for Video-Text Representation Learning.
47. ATS: Adaptive Token Sampling For Efficient Vision Transformers.
48. A Simple Baseline for Low-Budget Active Learning.
49. Backdoor Attacks on Self-Supervised Learning.
50. STAF: A Spatio-Temporal Attention Fusion Network for Few-shot Video Classification.
Discovery Service for Jio Institute Digital Library