
TruthSR: Trustworthy Sequential Recommender Systems via User-generated Multimodal Content

Authors:
Yan, Meng
Huang, Haibin
Liu, Ying
Zhao, Juan
Gao, Xiyue
Xu, Cai
Guan, Ziyu
Zhao, Wei
Publication Year:
2024

Abstract

Sequential recommender systems learn users' preferences and behavioral patterns from their historical interaction data. Recently, researchers have sought to improve sequential recommendation by exploiting massive amounts of user-generated multi-modal content, such as reviews and images. This content inevitably contains noise. Some studies attempt to reduce noise interference by suppressing cross-modal inconsistent information; however, doing so can also suppress the personalized preference signals that content carries. Moreover, noise in diverse user-generated multi-modal content can never be eliminated entirely. To address these problems, we propose a trustworthy sequential recommendation method that works with noisy user-generated multi-modal content. Specifically, we explicitly capture both the consistency and the complementarity of user-generated multi-modal content to mitigate noise interference, and we model users' multi-modal sequential preferences. In addition, we design a trustworthy decision mechanism that integrates the subjective user perspective with the objective item perspective to dynamically evaluate the uncertainty of each prediction. Experiments on four widely used datasets demonstrate that our model outperforms state-of-the-art methods. The code is released at https://github.com/FairyMeng/TrustSR.
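The abstract does not spell out how the subjective (user) and objective (item) perspectives are combined, but uncertainty-aware fusion of two views is often implemented with subjective-logic opinions and a Dempster-style combination rule, as in evidential deep learning. The following is a minimal NumPy sketch under that assumption; the function names, evidence vectors, and numbers are illustrative, not taken from the paper.

```python
import numpy as np

def belief_and_uncertainty(evidence):
    """Subjective-logic reading of non-negative evidence over K candidates.

    alpha = evidence + 1 parameterizes a Dirichlet; belief mass b_k = e_k / S
    and overall uncertainty u = K / S, with S = alpha.sum(). By construction
    b.sum() + u == 1.
    """
    alpha = evidence + 1.0
    s = alpha.sum()
    k = evidence.shape[0]
    return evidence / s, k / s

def fuse_views(e_user, e_item):
    """Dempster-style combination of two opinions (user view, item view).

    Conflicting belief mass C is discarded and the remainder renormalized.
    The fused uncertainty u shrinks when the two views agree and stays high
    when either view is weak, which is what lets a recommender flag
    low-confidence predictions.
    """
    b1, u1 = belief_and_uncertainty(e_user)
    b2, u2 = belief_and_uncertainty(e_item)
    # C = sum over i != j of b1_i * b2_j
    conflict = b1.sum() * b2.sum() - (b1 * b2).sum()
    scale = 1.0 / (1.0 - conflict)
    b = scale * (b1 * b2 + b1 * u2 + b2 * u1)
    u = scale * u1 * u2
    return b, u

# Hypothetical evidence over three candidate items from each perspective.
e_user = np.array([4.0, 1.0, 0.5])  # subjective user-side evidence
e_item = np.array([3.0, 0.5, 0.2])  # objective item-side evidence
belief, uncertainty = fuse_views(e_user, e_item)
print(belief, uncertainty)  # fused per-item belief and prediction uncertainty
```

Under this reading, a recommendation whose fused uncertainty stays high can be down-ranked or withheld, which is one plausible way a "trustworthy decision mechanism" could dynamically evaluate prediction uncertainty.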

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2404.17238
Document Type:
Working Paper