Search Results

Your search for Author: "Shen, Zhiqiang" with Topic: computer science - artificial intelligence returned 38 results.

1. Mamba or Transformer for Time Series Forecasting? Mixture of Universals (MoU) Is All You Need

2. FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation

3. Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs

4. Open-LLM-Leaderboard: From Multi-choice to Open-style Questions for LLMs Evaluation, Benchmark, and Arena

5. Elucidating the Design Space of Dataset Condensation

6. TransLinkGuard: Safeguarding Transformer Models Against Model Stealing in Edge Deployment

7. Self-supervised Dataset Distillation: A Good Compression Is All You Need

8. FerKD: Surgical Label Adaptation for Efficient Distillation

9. Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4

10. LLM360: Towards Fully Transparent Open-Source LLMs

11. Generalized Large-Scale Data Condensation via Various Backbone and Statistical Matching

12. Beyond Size: How Gradients Shape Pruning Decisions in Large Language Models

13. SlimPajama-DC: Understanding Data Combinations for LLM Training

14. Jais and Jais-chat: Arabic-Centric Foundation and Instruction-Tuned Open Generative Large Language Models

15. Point Contrastive Prediction with Semantic Clustering for Self-Supervised Learning on Point Cloud Videos

16. Masked Spatio-Temporal Structure Prediction for Self-supervised Learning on Point Cloud Videos

17. Variation-aware Vision Transformer Quantization

18. Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective

19. One-for-All: Generalized LoRA for Parameter-Efficient Fine-tuning

20. Dropout Reduces Underfitting

21. i-MAE: Are Latent Representations in Masked Autoencoders Linearly Separable?

22. MixMask: Revisiting Masking Strategy for Siamese ConvNets

23. SDQ: Stochastic Differentiable Quantization with Mixed Precision

24. Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space

25. Data-Free Neural Architecture Search via Recursive Label Calibration

26. A Fast Knowledge Distillation Framework for Visual Recognition

27. Nonuniform-to-Uniform Quantization: Towards Accurate Quantization via Generalized Straight-Through Estimation

28. Sliced Recursive Transformer

29. Multi-modal Self-supervised Pre-training for Regulatory Genome Across Cell Types

30. How Do Adam and Training Strategies Help BNNs Optimization?

31. Is Label Smoothing Truly Incompatible with Knowledge Distillation: An Empirical Study

32. S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration

33. Partial Is Better Than All: Revisiting Fine-tuning Strategy for Few-shot Learning

34. MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks

35. Towards Instance-level Image-to-Image Translation

36. Transfer Learning for Sequences via Learning to Collocate

37. MEAL: Multi-Model Ensemble via Adversarial Learning

38. Learning Efficient Convolutional Networks through Network Slimming
