1. Learning compact yet accurate Generative Adversarial Networks for recommender systems.
- Author
- Zhao, Yu, Wang, Kuo, Guo, Guibing, and Wang, Xingwei
- Subjects
- *GENERATIVE adversarial networks, *RECOMMENDER systems, *DATA distribution
- Abstract
Recently, Generative Adversarial Networks (GANs) have received much attention in recommender systems because they can capture complex data distributions. They rely on two sub-networks, a generator and a discriminator, to generate 'fake' yet reliable data, so that better recommendation accuracy can be obtained. However, most existing GAN variants achieve accuracy improvements by greatly increasing model complexity, especially the number of parameters, making those methods difficult to deploy. Therefore, we aim to resolve this issue by learning compact yet accurate GANs that can synthesize valuable data with fewer generator parameters. To this end, we integrate knowledge distillation (KD, in the form of a teacher–student architecture) into GANs to reduce model complexity while improving accuracy. To the best of our knowledge, this work is the first to learn compact GANs for recommender systems by applying a KD framework, named StuGAN. Specifically, we use a student discriminator to refine the teacher's knowledge, and then enhance both the generator and the discriminator with the refined knowledge via adversarial learning. This constrains the generator to produce fake data approximating both the ground truth and the teacher's predicted preferences, and enables the discriminator to distinguish the generator's preferences, as well as the more confusing preferences of the teacher, from the ground truth. Finally, we conduct extensive experiments on two real-world datasets (i.e., Ciao and LastFM), and the results show that our approach can reduce the volume of model parameters by as much as half while maintaining comparable recommendation accuracy. [ABSTRACT FROM AUTHOR]
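The abstract does not give the loss formulation, but the described idea (the generator approximating both the ground truth and the teacher's predicted preferences) resembles a standard KD-weighted objective. Below is a minimal, hypothetical sketch of such a combined generator loss; the function names, the MSE soft-target term, and the weighting factor `alpha` are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(pred, target, eps=1e-8):
    # binary cross-entropy between predicted preference scores and targets
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def kd_generator_loss(gen_scores, ground_truth, teacher_scores, alpha=0.5):
    """Hypothetical KD-style objective: the compact (student) generator is
    pushed toward both the observed interactions ("hard" targets) and the
    teacher model's soft preference predictions."""
    hard_loss = bce(gen_scores, ground_truth)                       # match ground truth
    soft_loss = float(np.mean((gen_scores - teacher_scores) ** 2))  # match teacher
    return alpha * hard_loss + (1 - alpha) * soft_loss

# Toy example: preference scores over 4 items for one user.
ground_truth   = np.array([1.0, 0.0, 1.0, 0.0])             # observed interactions
teacher_scores = np.array([0.9, 0.2, 0.7, 0.1])             # teacher's soft predictions
gen_scores     = sigmoid(np.array([2.0, -1.5, 1.0, -2.0]))  # generator output

loss = kd_generator_loss(gen_scores, ground_truth, teacher_scores)
```

Under this sketch, a generator that exactly reproduces the ground truth and teacher scores would drive both terms toward zero, while `alpha` trades off fidelity to observed data against fidelity to the teacher's (possibly more informative) soft preferences.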
- Published
- 2022