False Negative Distillation and Contrastive Learning for Personalized Outfit Recommendation

10/13/2021
by Seongjae Kim, et al.

Personalized outfit recommendation has recently been in the spotlight with the rapid growth of the online fashion industry. However, outfit recommendation poses two significant challenges that must be addressed. The first challenge is that outfit recommendation often requires a large, complex model that utilizes visual information, incurring huge memory and time costs. One natural way to mitigate this problem is to compress such a cumbersome model with knowledge distillation (KD) techniques, which leverage knowledge from a pretrained teacher model. However, existing KD approaches in recommender systems (RS) are hard to apply to outfit recommendation because they require a ranking of all possible outfits, and the number of outfits grows exponentially with the number of constituent clothing items. Therefore, we propose a new KD framework for outfit recommendation, called False Negative Distillation (FND), which exploits false-negative information from the teacher model without requiring the ranking of all candidates. The second challenge is that the explosive number of outfit candidates amplifies the data sparsity problem, often leading to poor outfit representations. To tackle this issue, inspired by the recent success of contrastive learning (CL), we introduce a CL framework for outfit representation learning with two proposed data augmentation methods. Quantitative and qualitative experiments on outfit recommendation datasets demonstrate the effectiveness and soundness of our proposed methods.
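The abstract only sketches FND at a high level. The following is a minimal, hypothetical illustration of the core idea: a pretrained teacher can flag a sampled "negative" outfit as a likely false negative, so the student softens or reverses the ranking penalty on it instead of ranking all candidate outfits. The function name, the threshold `tau`, and the exact weighting are assumptions for illustration, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def fnd_bpr_loss(student_pos, student_neg, teacher_neg, tau=0.8):
    """Hypothetical FND-style pairwise loss.

    student_pos, student_neg: student scores for the observed outfit and a
    sampled negative outfit, shape (batch,).
    teacher_neg: pretrained teacher's score for the same sampled negative.
    tau: assumed threshold above which the teacher deems the sampled
    "negative" a likely false negative.
    """
    # Standard BPR term: rank the observed outfit above the sampled one.
    bpr = -F.logsigmoid(student_pos - student_neg)
    # Where the teacher scores the sampled outfit highly, treat it as a
    # false negative and reverse the ranking direction, so no ranking over
    # the full (exponentially large) outfit space is needed.
    is_fn = (teacher_neg > tau).float()
    flipped = -F.logsigmoid(student_neg - student_pos)
    return ((1.0 - is_fn) * bpr + is_fn * flipped).mean()
```

Likewise, a contrastive objective over two augmented views of each outfit could take the form of a standard InfoNCE loss, as sketched below. The paper proposes two specific augmentation methods that are not described in this abstract; the sketch assumes generic embedding inputs and is not taken from the paper.

```python
import torch
import torch.nn.functional as F

def outfit_info_nce(z1, z2, temperature=0.1):
    """Standard InfoNCE over two augmented views of a batch of outfits.

    z1, z2: (batch, dim) embeddings of the same outfits under two
    different augmentations; matching rows are the positive pairs.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature            # cosine similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)        # diagonal entries are positives
```

In practice these two objectives would presumably be combined with the main recommendation loss; the weighting between them is not specified in the abstract.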


Related research

11/27/2022 · Unbiased Knowledge Distillation for Recommendation
As a promising solution for model compression, knowledge distillation (K...

06/28/2022 · Cooperative Retriever and Ranker in Deep Recommenders
Deep recommender systems jointly leverage the retrieval and ranking oper...

10/27/2020 · Contrastive Pre-training for Sequential Recommendation
Sequential recommendation methods play a crucial role in modern recommen...

08/06/2023 · Semantic-Guided Feature Distillation for Multimodal Recommendation
Multimodal recommendation exploits the rich multimodal information assoc...

11/13/2019 · Collaborative Distillation for Top-N Recommendation
Knowledge distillation (KD) is a well-known method to reduce inference l...

08/15/2023 · Learning from All Sides: Diversified Positive Augmentation via Self-distillation in Recommendation
Personalized recommendation relies on user historical behaviors to provi...

06/07/2023 · RD-Suite: A Benchmark for Ranking Distillation
The distillation of ranking models has become an important topic in both...
