
Preliminary Study on Incremental Learning for Large Language Model-based Recommender Systems

Authors:
Shi, Tianhao
Zhang, Yang
Xu, Zhijian
Chen, Chong
Feng, Fuli
He, Xiangnan
Tian, Qi
Publication Year:
2023

Abstract

Adapting Large Language Models for Recommendation (LLM4Rec) has shown promising results. However, the challenges of deploying LLM4Rec in real-world scenarios remain largely unexplored. In particular, recommender models need incremental adaptation to evolving user preferences, yet the suitability of traditional incremental learning methods for LLM4Rec remains unclear due to the unique characteristics of Large Language Models (LLMs). In this study, we empirically evaluate two commonly employed incremental learning strategies (full retraining and fine-tuning) for LLM4Rec. Surprisingly, neither approach yields significant performance improvements for LLM4Rec. Rather than dismissing the role of incremental learning, we attribute the lack of the anticipated gains to a mismatch between the LLM4Rec architecture and incremental learning: LLM4Rec employs a single adaptation module for learning recommendations, which limits its ability to simultaneously capture long-term and short-term user preferences in the incremental learning context. To test this speculation, we introduce a Long- and Short-term Adaptation-aware Tuning (LSAT) framework for incremental learning in LLM4Rec. Unlike the single-adaptation-module approach, LSAT utilizes two distinct adaptation modules to learn long-term and short-term user preferences independently. Empirical results verify that LSAT enhances performance, thereby validating our speculation. We release our code at: https://github.com/TianhaoShi2001/LSAT.

Comment: Accepted in the short paper track of the 2024 ACM International Conference on Information and Knowledge Management (CIKM 2024)
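
The abstract describes LSAT only at a high level. As an illustrative sketch (not the authors' released implementation), the snippet below assumes the two adaptation modules are LoRA-style low-rank adapters attached to a frozen backbone projection: one adapter is tuned on long-horizon interaction history, the other on the most recent data block, and their outputs are combined at inference. All class and parameter names (LoRAAdapter, LSATLinear, rank, mode) are hypothetical.

```python
import torch
import torch.nn as nn


class LoRAAdapter(nn.Module):
    """Low-rank update (delta_W = B @ A) applied alongside a frozen linear layer."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))  # zero init => no change at start

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.A.T @ self.B.T


class LSATLinear(nn.Module):
    """Frozen base projection plus separate long-term and short-term adapters (assumption)."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():
            p.requires_grad_(False)  # backbone stays frozen; only the adapters are tuned
        self.long_term = LoRAAdapter(in_features, out_features, rank)
        self.short_term = LoRAAdapter(in_features, out_features, rank)

    def forward(self, x: torch.Tensor, mode: str = "both") -> torch.Tensor:
        out = self.base(x)
        if mode in ("long", "both"):
            out = out + self.long_term(x)
        if mode in ("short", "both"):
            out = out + self.short_term(x)
        return out


# Usage sketch: tune long_term on accumulated history, short_term on the newest
# data block, then serve with mode="both" so both preference signals contribute.
layer = LSATLinear(in_features=64, out_features=64)
scores = layer(torch.randn(2, 64), mode="both")
print(scores.shape)  # torch.Size([2, 64])
```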

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2312.15599
Document Type:
Working Paper