Learning Robust Recommender from Noisy Implicit Feedback

12/02/2021
by Wenjie Wang, et al.

The ubiquity of implicit feedback makes it indispensable for building recommender systems. However, implicit feedback does not necessarily reflect actual user satisfaction: in e-commerce, for example, a large portion of clicks do not translate into purchases, and many purchases end up with negative reviews. It is therefore important to account for the inevitable noise in implicit feedback, yet little work on recommendation has taken its noisy nature into consideration. In this work, we explore the central theme of denoising implicit feedback for recommender learning, covering both training and inference. By observing normal recommender training, we find that noisy feedback typically has large loss values in the early training stages. Inspired by this observation, we propose a new training strategy named Adaptive Denoising Training (ADT), which adaptively prunes noisy interactions via two paradigms: Truncated Loss and Reweighted Loss. Furthermore, we treat extra feedback (e.g., ratings) as an auxiliary signal and propose three strategies to incorporate it into ADT: fine-tuning, warm-up training, and colliding inference. We instantiate the two paradigms on the widely used binary cross-entropy loss and test them on three representative recommender models. Extensive experiments on three benchmarks demonstrate that ADT significantly improves recommendation quality over normal training without using extra feedback, and that the three proposed strategies for using extra feedback further enhance ADT's denoising ability.
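The two paradigms can be illustrated with a minimal sketch on binary cross-entropy. This is not the paper's exact formulation: the fixed threshold `tau`, the exponential weighting function, and the exponent `beta` below are simplifying assumptions (the paper grows its truncation threshold dynamically during training), but the sketch captures the core idea of discarding versus down-weighting large-loss interactions.

```python
import math

def bce_loss(pred, label):
    # Standard binary cross-entropy for one interaction;
    # pred is the predicted probability, label is 0 or 1.
    eps = 1e-12  # numerical guard against log(0)
    return -(label * math.log(pred + eps) + (1 - label) * math.log(1 - pred + eps))

def truncated_bce(pred, label, tau=2.0):
    # Truncated Loss: zero out interactions whose loss exceeds tau,
    # treating them as likely noise. tau is a hypothetical fixed
    # threshold here; ADT adjusts it adaptively over training.
    loss = bce_loss(pred, label)
    return 0.0 if loss > tau else loss

def reweighted_bce(pred, label, beta=0.5):
    # Reweighted Loss: keep every interaction but down-weight the
    # hard (large-loss) ones. The exponential decay is one simple
    # choice of weighting function, used here for illustration.
    loss = bce_loss(pred, label)
    weight = math.exp(-beta * loss)
    return weight * loss
```

For a positive interaction predicted with probability 0.01, the BCE loss is about 4.6, so `truncated_bce` discards it entirely while `reweighted_bce` shrinks its contribution; a confident prediction of 0.9 passes through both nearly unchanged.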


Related research:

- Denoising Implicit Feedback for Recommendation (06/07/2020)
- Computational Models of Tutor Feedback in Language Acquisition (07/07/2017)
- Self-Guided Learning to Denoise for Robust Recommendation (04/14/2022)
- Automated Data Denoising for Recommendation (05/11/2023)
- Learning from Negative User Feedback and Measuring Responsiveness for Sequential Recommenders (08/23/2023)
- GCN-Based User Representation Learning for Unifying Robust Recommendation and Fraudster Detection (05/20/2020)
- Pseudo-Implicit Feedback for Alleviating Data Sparsity in Top-K Recommendation (01/03/2019)
