Pre-training with Aspect-Content Text Mutual Prediction for Multi-Aspect Dense Retrieval

Authors :
Sun, Xiaojie
Bi, Keping
Guo, Jiafeng
Ma, Xinyu
Fan, Yixing
Shan, Hongyu
Zhang, Qishen
Liu, Zhongyi
Publication Year :
2023

Abstract

Grounded on pre-trained language models (PLMs), dense retrieval has been studied extensively on plain text. In contrast, there has been little research on retrieving data with multiple aspects using dense models. In scenarios such as product search, aspect information plays an essential role in relevance matching; for example, the aspect "category" can take values such as Electronics, Computers, and Pet Supplies. A common way of leveraging aspect information for multi-aspect retrieval is to introduce an auxiliary classification objective, i.e., using item contents to predict the annotated value IDs of item aspects. However, by learning the value embeddings from scratch, this approach may not sufficiently capture the various semantic similarities between the values. To address this limitation, we leverage the aspect information as text strings rather than class IDs during pre-training so that their semantic similarities can be naturally captured by the PLMs. To facilitate effective retrieval with the aspect strings, we propose mutual prediction objectives between the text of the item aspect and content. In this way, our model makes fuller use of aspect information than conducting undifferentiated masked language modeling (MLM) on the concatenated text of aspects and content. Extensive experiments on two real-world datasets (product and mini-program search) show that our approach outperforms competitive baselines, both those treating aspect values as classes and those conducting the same MLM on aspect and content strings. Code and the related dataset will be available at https://github.com/sunxiaojie99/ATTEMPT.

Comment: Accepted by CIKM 2023.
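
To make the contrast in the abstract concrete, below is a minimal sketch of the mutual-prediction idea, assuming a Hugging Face BERT masked language model. The segment-masking helper, the example aspect/content strings, and the 30% mask rate are illustrative assumptions, not the authors' released implementation; see the repository above for the actual ATTEMPT code.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def mask_segment(input_ids, segment_mask, mask_prob=0.30):
    # Mask tokens only inside the chosen segment; labels stay -100
    # elsewhere so the MLM loss is computed on that segment alone.
    labels = torch.full_like(input_ids, -100)
    chosen = (torch.rand(input_ids.shape) < mask_prob) & segment_mask
    labels[chosen] = input_ids[chosen]
    masked = input_ids.clone()
    masked[chosen] = tokenizer.mask_token_id
    return masked, labels

# Hypothetical item: the aspect is kept as a text string, not a class ID.
content_text = "wireless optical mouse with usb nano receiver"
aspect_text = "category: electronics, computers"

enc = tokenizer(content_text, aspect_text, return_tensors="pt",
                return_special_tokens_mask=True)
special = enc["special_tokens_mask"].bool()
is_aspect = enc["token_type_ids"].bool() & ~special    # segment B = aspect
is_content = ~enc["token_type_ids"].bool() & ~special  # segment A = content

# Content -> aspect: mask aspect tokens, predict them from the content.
ids, labels = mask_segment(enc["input_ids"], is_aspect)
loss_c2a = model(input_ids=ids, attention_mask=enc["attention_mask"],
                 token_type_ids=enc["token_type_ids"], labels=labels).loss

# Aspect -> content: mask content tokens, predict them from the aspect.
ids, labels = mask_segment(enc["input_ids"], is_content)
loss_a2c = model(input_ids=ids, attention_mask=enc["attention_mask"],
                 token_type_ids=enc["token_type_ids"], labels=labels).loss

# Mutual prediction sums both directions, unlike a single undifferentiated
# MLM pass over the concatenated aspect + content text.
loss = loss_c2a + loss_a2c

The classification-based alternative criticized in the abstract would instead replace the aspect string with a value ID predicted by a classifier head, learning those value embeddings from scratch rather than reusing the PLM's token semantics.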

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2308.11474
Document Type :
Working Paper