164 results for "Stéphane Clinchant"
Search Results
2. Two-Step SPLADE: Simple, Efficient and Effective Approximation of SPLADE.
3. SPLATE: Sparse Late Interaction Retrieval.
4. Towards Effective and Efficient Sparse Neural Information Retrieval.
5. Towards Query Performance Prediction for Neural Information Retrieval: Challenges and Opportunities.
6. On the Limitations of Query Performance Prediction for Neural IR.
7. An Experimental Study on Pretraining Transformers from Scratch for IR.
8. MS-Shift: An Analysis of MS MARCO Distribution Shifts on Neural Retrieval.
9. Parameter-Efficient Sparse Retrievers and Rerankers Using Adapters.
10. Query Performance Prediction for Neural IR: Are We There Yet?
11. A Study on FGSM Adversarial Training for Neural Retrieval.
12. AToMiC: An Image/Text Retrieval Test Collection to Support Multimedia Content Creation.
13. Benchmarking Middle-Trained Language Models for Neural Search.
14. A Static Pruning Study on Sparse Neural Retrievers.
15. The Tale of Two MSMARCO - and Their Unfair Comparisons.
16. Retrieval-augmented generation in multilingual settings.
17. Context Embeddings for Efficient Answer Generation in RAG.
18. SPLADE-v3: New baselines for SPLADE.
19. A Thorough Comparison of Cross-Encoders and LLMs for Reranking SPLADE.
20. Match Your Words! A Study of Lexical Matching in Neural Information Retrieval.
21. Learning with Label Noise for Image Retrieval by Selecting Interactions.
22. An Efficiency Study for SPLADE Models.
23. Learned Token Pruning in Contextualized Late Interaction over BERT (ColBERT).
24. From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective.
25. TREC2023 AToMiC Overview.
26. The tale of two MS MARCO - and their unfair comparisons.
27. Naver Labs Europe (SPLADE) @ TREC Deep Learning 2022.
28. Naver Labs Europe (SPLADE) @ TREC NeuCLIR 2022.
29. Efficient Inference for Multilingual Neural Machine Translation.
30. A White Box Analysis of ColBERT.
31. Evaluating an Itinerary Recommendation Algorithm for Runners.
32. SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking.
33. Composite Code Sparse Autoencoders for First Stage Retrieval.
34. Naver Labs Europe (SPLADE) @ TREC NeuCLIR 2022.
35. A Study of Lexical Matching in Neural Information Retrieval - Abstract.
36. GReS: Workshop on Graph Neural Networks for Recommendation and Search.
37. Learning to Rank Images with Cross-Modal Graph Convolutions.
38. Composite Code Sparse Autoencoders for first stage retrieval.
39. Toward A Fine-Grained Analysis of Distribution Shifts in MSMARCO.
40. LayoutXLM vs. GNN: An Empirical Evaluation of Relation Extraction for Documents.
41. Naver Labs Europe (SPLADE) @ TREC Deep Learning 2021.
42. An Analysis of the ColBERT Model (Une Analyse du Modèle ColBERT).
43. On the use of BERT for Neural Machine Translation.
44. Running Tour Generation for Unknown Environments.
45. Comparing Machine Learning Approaches for Table Recognition in Historical Register Books.
46. Naver Labs Europe @ TREC Deep Learning 2020.
47. A Study on Token Pruning for ColBERT.
48. Match Your Words! A Study of Lexical Matching in Neural Information Retrieval.
49. Learning with Label Noise for Image Retrieval by Selecting Interactions.
50. Masked Adversarial Generation for Neural Machine Translation.
Discovery Service for Jio Institute Digital Library