1. User-centric evaluation of recommender systems in social learning platforms: Accuracy is just the tip of the iceberg
- Author
Soude Fazeli, Hendrik Drachsler, Peter Sloep, Francis Brouns, Wim van der Vegt, Marlies Bitter-Rijpkema (RS-Research Line Technology Enhanced Learning Innovations for teaching and learning (TELI) (part of WO program), Department TELI, and RS-Theme Open Education)
- Subjects
Computer science, Recommender systems, Crowdsourcing, Machine learning, Education, Tagging, Information systems, Social network services, Quality, Prediction algorithms, User-centered design, Measurement, Metadata, Evaluation, Learning, Accuracy, Serendipity, Novelty, Diversity, Artificial intelligence, Data mining, Performance
- Abstract
Recommender systems provide users with content they might be interested in. Conventionally, recommender systems are evaluated mostly using prediction accuracy metrics. However, the ultimate goal of a recommender system is to increase user satisfaction, so evaluations that measure user satisfaction should also be performed before deploying a recommender system in a real target environment. Such evaluations, though, are laborious and complicated compared to traditional, data-centric evaluations. In this study, we carried out a user-centric evaluation of state-of-the-art recommender systems as well as a graph-based approach in the ecologically valid setting of an authentic social learning platform. We also conducted a data-centric evaluation on the same data to investigate the added value of user-centric evaluations and how user satisfaction with a recommender system relates to its performance in terms of accuracy metrics. Our findings suggest that user-centric evaluation results are not necessarily in line with data-centric evaluation results. We conclude that the traditional evaluation of recommender systems in terms of prediction accuracy alone does not suffice to judge their performance on the user side. Moreover, the user-centric evaluation provides valuable insight into how candidate algorithms perform on each of five quality metrics for recommendations: usefulness, accuracy, novelty, diversity, and serendipity.
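The contrast the abstract draws between data-centric and user-centric evaluation can be illustrated with a small sketch. The code below is not the study's method; it uses invented toy data, hypothetical algorithm names (`cf`, `graph`), and one common offline metric (precision@k) next to a simple user-centric proxy (mean usefulness rating), to show how the two views of the same recommenders can disagree.

```python
# Hypothetical sketch: a data-centric metric (precision@k) vs. a user-centric
# metric (mean user-rated usefulness) for two candidate recommenders.
# All data below is invented for illustration only.

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that appear in the relevant set."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

# Offline (data-centric) view: held-out items each user actually interacted with.
relevant_items = {"u1": {"a", "b"}, "u2": {"c"}}

# Top-3 recommendation lists produced by two hypothetical algorithms.
recs = {
    "cf":    {"u1": ["a", "b", "x"], "u2": ["c", "y", "z"]},
    "graph": {"u1": ["a", "x", "y"], "u2": ["y", "c", "z"]},
}

# User-centric view: usefulness ratings (1-5) collected from the same users.
usefulness = {"cf": [3, 2], "graph": [4, 5]}

for algo, lists in recs.items():
    p = sum(precision_at_k(lists[u], relevant_items[u], 3)
            for u in lists) / len(lists)
    mean_usefulness = sum(usefulness[algo]) / len(usefulness[algo])
    print(f"{algo}: precision@3 = {p:.2f}, mean usefulness = {mean_usefulness:.1f}")
```

In this toy setup `cf` wins on the offline accuracy metric while `graph` wins on the user ratings, which is exactly the kind of divergence the abstract argues makes user-centric evaluation worthwhile.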
- Published
- 2018