35 results for "Zhilin Yang"
Search Results
2. ZeroPrompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves Zero-Shot Generalization.
3. A Universal Discriminator for Zero-Shot Generalization.
4. GPS: Genetic Prompt Search for Efficient Few-shot Learning.
5. Learning to Detect Noisy Labels Using Model-Based Features.
6. Zero-Label Prompt Selection.
7. Prompt-Based Metric Learning for Few-Shot NER.
8. All NLP Tasks Are Generation Tasks: A General Pretraining Framework.
9. FastMoE: A Fast Mixture-of-Expert Training System.
10. GPT Understands, Too.
11. P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks.
12. FlipDA: Effective and Robust Data Augmentation for Few-Shot Learning.
13. FewNLU: Benchmarking State-of-the-Art Methods for Few-Shot Natural Language Understanding.
14. NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework.
15. Controllable Generation from Pre-trained Language Models via Inverse Prompting.
16. Distribution Matching for Rationalization.
17. XLNet: Generalized Autoregressive Pretraining for Language Understanding.
18. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context.
19. GLoMo: Unsupervisedly Learned Relational Graphs as Transferable Representations.
20. HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering.
21. Neural Models for Reasoning over Multiple Mentions using Coreference.
22. Neural Cross-Lingual Named Entity Recognition with Minimal Resources.
23. Good Semi-supervised Learning that Requires a Bad GAN.
24. Linguistic Knowledge as Memory for Recurrent Neural Networks.
25. A Probabilistic Framework for Location Inference from Social Media.
26. Differentiable Learning of Logical Rules for Knowledge Base Completion.
27. Mastering the Dungeon: Grounded Language Learning by Mechanical Turker Descent.
28. Semi-Supervised QA with Generative Domain-Adaptive Nets.
29. Transfer Learning for Sequence Tagging with Hierarchical Recurrent Networks.
30. Breaking the Softmax Bottleneck: A High-Rank RNN Language Model.
31. Encode, Review, and Decode: Reviewer Module for Caption Generation.
32. Revisiting Semi-Supervised Learning with Graph Embeddings.
33. Words or Characters? Fine-grained Gating for Reading Comprehension.
34. Multi-Task Cross-Lingual Sequence Tagging from Scratch.
35. Multi-Source Bayesian Embeddings for Learning Social Knowledge Graphs.