Search

Your search for Author "Rastegari, Mohammad" returned 178 results.

Search Results

1. The Llama 3 Herd of Models

2. LazyLLM: Dynamic Token Pruning for Efficient Long Context LLM Inference

3. KV-Runahead: Scalable Causal LLM Inference by Parallel Key-Value Cache Generation

4. CatLIP: CLIP-level Visual Recognition Accuracy with 2.7x Faster Pre-training on Web-scale Image-Text Data

5. OpenELM: An Efficient Language Model Family with Open Training and Inference Framework

6. Superposition Prompting: Improving and Accelerating Retrieval-Augmented Generation

7. Speculative Streaming: Fast LLM Inference without Auxiliary Models

8. Weight subcloning: direct initialization of transformers using larger pretrained ones

9. LLM in a flash: Efficient Large Language Model Inference with Limited Memory

10. Knowledge Transfer from Vision Foundation Models for Efficient Training of Small Task-specific Models

11. SAM-CLIP: Merging Vision Foundation Models towards Semantic and Spatial Understanding

12. CLIP meets Model Zoo Experts: Pseudo-Supervision for Visual Enhancement

13. ReLU Strikes Back: Exploiting Activation Sparsity in Large Language Models

14. Diffusion Models as Masked Audio-Video Learners

15. Do Compressed LLMs Forget Knowledge? An Experimental Study with Practical Implications

16. On the Efficacy of Multi-scale Data Samplers for Vision Applications

17. eDKM: An Efficient and Accurate Train-time Weight Clustering for Large Language Models

18. Bytes Are All You Need: Transformers Operating Directly On File Bytes

19. Reinforce Data, Multiply Impact: Improved Model Accuracy and Robustness with Dataset Reinforcement

20. RangeAugment: Efficient Online Augmentation with Range Learning

21. SPIN: An Empirical Evaluation on Sharing Parameters of Isotropic Networks

22. Separable Self-attention for Mobile Vision Transformers

23. CVNets: High Performance Library for Computer Vision

24. LCS: Learning Compressible Subspaces for Adaptive Network Compression at Inference Time

25. Token Pooling in Vision Transformers

26. MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer

28. DKM: Differentiable K-Means Clustering Layer for Neural Network Compression

29. Learning Neural Network Subspaces

30. Layer-Wise Data-Free CNN Compression

31. Supermasks in Superposition

32. What's Hidden in a Randomly Weighted Neural Network?

33. DeFINE: DEep Factorized INput Token Embeddings for Neural Sequence Modeling

34. Assisted Excitation of Activations: A Learning Technique to Improve Object Detectors

35. DiCENet: Dimension-wise Convolutions for Efficient Networks

36. Butterfly Transform: An Efficient FFT Based Neural Architecture Design

37. Discovering Neural Wirings

38. OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge

39. Two Body Problem: Collaborative Visual Task Completion

40. ELASTIC: Improving CNNs with Dynamic Scaling Policies

41. Learning to Learn How to Learn: Self-Adaptive Visual Navigation Using Meta-Learning

42. ESPNetv2: A Light-weight, Power Efficient, and General Purpose Convolutional Neural Network

43. Pyramidal Recurrent Unit for Language Modeling

44. Label Refinery: Improving ImageNet Classification through Label Progression

45. ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation

46. IQA: Visual Question Answering in Interactive Environments

48. LCNN: Lookup-based Convolutional Neural Network
