CrossCBR: Cross-view Contrastive Learning for Bundle Recommendation
- Source :
- KDD 2022
- Publication Year :
- 2022
-
Abstract
- Bundle recommendation aims to recommend a bundle of related items to users, which can satisfy the users' various needs with one-stop convenience. Recent methods usually take advantage of both user-bundle and user-item interaction information to obtain informative representations for users and bundles, corresponding to the bundle view and the item view, respectively. However, they either use a unified view without differentiation or loosely combine the predictions of two separate views, while the crucial cooperative association between the two views' representations is overlooked. In this work, we propose to model the cooperative association between the two different views through cross-view contrastive learning. By encouraging the alignment of the two separately learned views, each view can distill complementary information from the other view, achieving mutual enhancement. Moreover, by enlarging the dispersion of different users/bundles, the self-discrimination of representations is enhanced. Extensive experiments on three public datasets demonstrate that our method outperforms SOTA baselines by a large margin. Meanwhile, our method requires only the parameters of three sets of embeddings (user, bundle, and item), and the computational costs are largely reduced due to a more concise graph structure and graph learning module. In addition, various ablation and model studies demystify the working mechanism and justify our hypothesis. Codes and datasets are available at https://github.com/mysbupt/CrossCBR.
- Comment :
- 9 pages, 5 figures, 5 tables; this is the 3rd version, which fixes a typo in table 1
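The abstract's core idea, aligning the bundle-view and item-view embeddings of the same user/bundle while dispersing different ones, is the standard cross-view InfoNCE objective. The sketch below is a minimal NumPy illustration of that loss, not the paper's implementation; the function name, temperature value, and in-batch negative sampling are assumptions for illustration only.

```python
import numpy as np

def cross_view_infonce(view_a, view_b, temperature=0.25):
    """Hypothetical sketch of a cross-view contrastive (InfoNCE) loss.

    Row i of view_a and row i of view_b are the two views' embeddings of
    the same user (or bundle) and form a positive pair; all other rows in
    the batch serve as in-batch negatives.
    """
    # L2-normalize each view so the dot product is cosine similarity
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    # Pairwise cross-view similarities, scaled by temperature
    logits = a @ b.T / temperature
    # Log-softmax over each row; the diagonal entries are the positives.
    # Maximizing them pulls the two views of the same entity together
    # (alignment) and pushes different entities apart (dispersion).
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

When the two views agree (identical row ordering), the diagonal dominates each row and the loss is low; mismatched views raise it, which is the self-discrimination effect the abstract describes.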
- Subjects :
- Computer Science - Information Retrieval
Details
- Database :
- arXiv
- Journal :
- KDD 2022
- Publication Type :
- Report
- Accession number :
- edsarx.2206.00242
- Document Type :
- Working Paper
- Full Text :
- https://doi.org/10.1145/3534678.3539229