
Improving Contrastive Learning with Model Augmentation

Authors:
Liu, Zhiwei
Chen, Yongjun
Li, Jia
Luo, Man
Yu, Philip S.
Xiong, Caiming
Publication Year: 2022

Abstract

Sequential recommendation aims to predict the next items in a user's behavior sequence, which requires characterizing item relationships within sequences. To address data sparsity and noise in sequences, a self-supervised learning (SSL) paradigm has been proposed that improves performance by applying contrastive learning between positive and negative views of sequences. However, existing methods all construct views through data-level augmentation, and we argue that 1) optimal data augmentation methods are hard to devise, 2) data augmentation can destroy sequential correlations, and 3) data augmentation fails to incorporate comprehensive self-supervised signals. Therefore, we investigate the possibility of model augmentation to construct view pairs. We propose three levels of model augmentation: neuron masking, layer dropping, and encoder complementing. This work opens up a novel direction for constructing views for contrastive SSL. Experiments verify the efficacy of model augmentation for SSL in sequential recommendation. Code is available at https://github.com/salesforce/SRMA.
Comment: Preprint. Still under review.
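The abstract does not detail the three augmentation levels, but the first two map naturally onto standard stochastic techniques: neuron masking resembles dropout on hidden activations, and layer dropping resembles stochastic depth. Below is a minimal PyTorch sketch under that assumption; the class, parameter names, and probabilities are illustrative and not taken from the SRMA code. Encoder complementing, which presumably pairs the main encoder with a second, differently structured encoder, is omitted for brevity.

```python
import torch
import torch.nn as nn

class AugmentedEncoder(nn.Module):
    """Toy sequence encoder illustrating two model-augmentation levels:
    neuron masking (dropout on hidden activations) and layer dropping
    (stochastically skipping whole encoder layers). Hypothetical names
    and hyperparameters, not the paper's exact design."""

    def __init__(self, hidden_dim=64, num_layers=3,
                 neuron_mask_p=0.1, layer_drop_p=0.2):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(hidden_dim, hidden_dim) for _ in range(num_layers)
        )
        self.neuron_mask = nn.Dropout(p=neuron_mask_p)  # neuron masking
        self.layer_drop_p = layer_drop_p                # layer dropping

    def forward(self, x):
        for layer in self.layers:
            # Layer dropping: during training, skip this layer with
            # probability layer_drop_p, so two forward passes of the
            # same input traverse different sub-networks.
            if self.training and torch.rand(1).item() < self.layer_drop_p:
                continue
            # Neuron masking: randomly zero hidden units.
            x = self.neuron_mask(torch.relu(layer(x)))
        return x

# Two stochastic forward passes of the same input produce a positive
# view pair without modifying the input data itself.
encoder = AugmentedEncoder()
seq_repr = torch.randn(8, 64)   # batch of sequence representations
view_a, view_b = encoder(seq_repr), encoder(seq_repr)
```

The key point the sketch illustrates is that the randomness lives in the model rather than the data: both views come from the identical input sequence, so sequential correlations in the data are left intact.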

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2203.15508
Document Type: Working Paper