15 results on '"Zeng, Belinda"'
Search Results
2. SST: Semantic and Structural Transformers for Hierarchy-aware Language Models in E-commerce
3. Graph-Aware Language Model Pre-Training on a Large Graph Corpus Can Help Multiple Graph Applications
4. Understanding and Constructing Latent Modality Structures in Multi-Modal Representation Learning
5. OssCSE: Overcoming Surface Structure Bias in Contrastive Learning for Unsupervised Sentence Embedding
6. ReAugKD: Retrieval-Augmented Knowledge Distillation For Pre-trained Language Models
7. Multi-modal Alignment using Representation Codebook
8. Vision-Language Pre-Training with Triple Contrastive Learning
9. DCAF-BERT: A Distilled Cachable Adaptable Factorized Model For Improved Ads CTR Prediction
10. MLIM: Vision-and-Language Model Pre-training with Masked Language and Image Modeling
11. DynaMaR: Dynamic Prompt with Mask Token Representation
12. Asynchronous Convergence in Multi-Task Learning via Knowledge Distillation from Converged Tasks
13. Top-Down Attention in End-to-End Spoken Language Understanding
14. Semantic Aligned Multi-modal Transformer for Vision-Language Understanding: A Preliminary Study on Visual QA
15. CAM: Uninteresting Speech Detector