207 results for "Luke Zettlemoyer"
Search Results
2. Better Alignment with Instruction Back-and-Forth Translation.
3. Breaking the Curse of Multilinguality with Cross-lingual Expert Language Models.
4. Altogether: Image Captioning via Re-aligning Alt-text.
5. MoDE: CLIP Data Experts via Clustering.
6. Trusting Your Evidence: Hallucinate Less with Context-aware Decoding.
7. OLMo: Accelerating the Science of Language Models.
8. MYTE: Morphology-Driven Byte Encoding for Better and Fairer Multilingual Language Modeling.
9. Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research.
10. REPLUG: Retrieval-Augmented Black-Box Language Models.
11. The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants.
12. Translate to Disambiguate: Zero-shot Multilingual Word Sense Disambiguation with Pretrained Language Models.
13. RA-DIT: Retrieval-Augmented Dual Instruction Tuning.
14. Representation Deficiency in Masked Language Modeling.
15. Detecting Pretraining Data from Large Language Models.
16. Self-Alignment with Instruction Backtranslation.
17. SILO Language Models: Isolating Legal Risk In a Nonparametric Datastore.
18. Demystifying CLIP Data.
19. In-Context Pretraining: Language Modeling Beyond Document Boundaries.
20. CiT: Curation in Training for Effective Vision-Language Data.
21. Getting MoRE out of Mixture of Language Model Reasoning Experts.
22. RoMQA: A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering.
23. FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation.
24. Demystifying Prompts in Language Models via Perplexity Estimation.
25. Toward Human Readable Prompt Tuning: Kubrick's The Shining is a good movie, and a good prompt too?
26. XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models.
27. Revisiting Machine Translation for Cross-lingual Classification.
28. CREPE: Open-Domain Question Answering with False Presuppositions.
29. One Embedder, Any Task: Instruction-Finetuned Text Embeddings.
30. Contrastive Decoding: Open-ended Text Generation as Optimization.
31. Prompting Language Models for Linguistic Structure.
32. Nonparametric Masked Language Modeling.
33. Z-ICL: Zero-Shot In-Context Learning with Pseudo-Demonstrations.
34. In-context Examples Selection for Machine Translation.
35. Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters.
36. Training Trajectories of Language Models Across Scales.
37. The Gender-GAP Pipeline: A Gender-Aware Polyglot Pipeline for Gender Characterisation in 55 Languages.
38. The case for 4-bit precision: k-bit Inference Scaling Laws.
39. Retrieval-Augmented Multimodal Language Modeling.
40. DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation.
41. Scaling Laws for Generative Mixed-Modal Language Models.
42. Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI.
43. On the Role of Bidirectionality in Language Model Pre-Training.
44. Improving Passage Retrieval with Zero-Shot Question Generation.
45. Analyzing the Mono- and Cross-Lingual Pretraining Dynamics of Multilingual Language Models.
46. CORE: A Retrieve-then-Edit Framework for Counterfactual Data Generation.
47. Nearest Neighbor Zero-Shot Inference.
48. Efficient Large Scale Language Modeling with Mixtures of Experts.
49. M2D2: A Massively Multi-Domain Language Modeling Dataset.
50. Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models.