204 results for "Ivan Titov"
Search Results
2. SIP: Injecting a Structural Inductive Bias into a Seq2Seq Model by Simulation.
3. Cache & Distil: Optimising API Calls to Large Language Models.
4. Unlearning Traces the Influential Training Data of Language Models.
5. Autoencoding Conditional Neural Processes for Representation Learning.
6. Layerwise Recurrent Router for Mixture-of-Experts.
7. Generalisation First, Memorisation Second? Memorisation Localisation for Natural Language Classification Tasks.
8. Strengthening Structural Inductive Biases by Pre-training to Perform Syntactic Transformations.
9. Explanation Regularisation through the Lens of Attributions.
10. Extending the Limit Theorem of Barmpalias and Lewis-Pye to all reals.
11. Optimising Calls to Large Language Models with Uncertainty-Based Two-Tier Selection.
12. Unlearning Reveals the Influential Training Data of Language Models.
13. Cross-Modal Conceptualization in Bottleneck Models.
14. Compositional Generalization for Data-to-Text Generation.
15. On the Transferability of Visually Grounded PCFGs.
16. Memorisation Cartography: Mapping out the Memorisation-Generalisation Continuum in Neural Machine Translation.
17. Subspace Chronicles: How Linguistic Information Emerges, Shifts and Interacts during Language Model Training.
18. Compositional Generalisation with Structured Reordering and Fertility Layers.
19. Compositional Generalization without Trees using Multiset Tagging and Latent Permutations.
20. Recursive Neural Networks with Bottlenecks Diagnose (Non-)Compositionality.
21. Hierarchical Phrase-Based Sequence-to-Sequence Learning.
22. Sparse Interventions in Language Models with Differentiable Masking.
23. Can Transformer be Too Compositional? Analysing Idiom Processing in Neural Machine Translation.
24. Theoretical and Practical Perspectives on what Influence Functions Do.
25. Latent Feature-based Data Splits to Improve Generalisation Evaluation: A Hate Speech Detection Case Study.
26. Editing Factual Knowledge in Language Models.
27. Sparse Attention with Linear Units.
28. Learning Opinion Summarizers by Selecting Informative Reviews.
29. Language Modeling, Lexical Translation, Reordering: The Training Process of NMT through the Lens of Classical SMT.
30. Highly Parallel Autoregressive Entity Linking with Discriminative Correction.
31. A Differentiable Relaxation of Graph Segmentation and Alignment for AMR Parsing.
32. Structured Reordering for Modeling Latent Alignments in Sequence Transduction.
33. Meta-Learning for Domain Generalization in Semantic Parsing.
34. Learning from Executions for Semantic Parsing.
35. On Sparsifying Encoder Outputs in Sequence-to-Sequence Models.
36. Beyond Sentence-Level End-to-End Speech Translation: Context Helps.
37. Exploring Unsupervised Pretraining Objectives for Machine Translation.
38. Analyzing the Source and Target Contributions to Predictions in Neural Machine Translation.
39. Meta-Learning to Compositionally Generalize.
Discovery Service for Jio Institute Digital Library