V^2L: Leveraging Vision and Vision-language Models into Large-scale Product Retrieval

by Wenhao Wang, et al.
Zhejiang University
Baidu, Inc.

Product retrieval is of great importance in the e-commerce domain. This paper introduces our first-place solution to the eBay eProduct Visual Search Challenge (FGVC9), which features an ensemble of about 20 vision models and vision-language models. While model ensembling is common, we show that combining vision models with vision-language models brings particular benefits from their complementarity and is a key factor in our superior performance. Specifically, for the vision models, we use a two-stage training pipeline that first learns from the coarse labels provided in the training set and then conducts fine-grained self-supervised training, yielding a coarse-to-fine metric learning scheme. For the vision-language models, we use the textual descriptions of the training images as supervision signals for fine-tuning the image encoder (feature extractor). With these designs, our solution achieves 0.7623 MAR@10, ranking first among all competitors. The code is available at: \href{https://github.com/WangWenhao0716/V2L}{V$^2$L}.
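A common way to realize such an ensemble at retrieval time is to L2-normalize each model's embeddings, concatenate them, and rank the index by cosine similarity. The sketch below illustrates this idea with NumPy only; the feature matrices stand in for the outputs of the vision and vision-language encoders, and none of the function names come from the paper's released code.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    """Scale rows to unit length so dot products become cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def ensemble_features(feature_list):
    """Fuse embeddings from several models (e.g., vision + vision-language).

    Each model's features are normalized before concatenation so that every
    model contributes equally to the final similarity, regardless of the
    scale of its raw embeddings.
    """
    fused = np.concatenate([l2_normalize(f) for f in feature_list], axis=1)
    return l2_normalize(fused)

def retrieve_top_k(query_feats, index_feats, k=10):
    """Return the indices of the k most similar index items per query."""
    sims = query_feats @ index_feats.T  # cosine similarity on unit vectors
    return np.argsort(-sims, axis=1)[:, :k]
```

In this scheme, metrics such as MAR@10 are then computed over the returned top-k lists; the complementarity claimed in the paper shows up as fused features ranking relevant products higher than either model family alone.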



