SetRank: Learning a Permutation-Invariant Ranking Model for Information Retrieval

12/12/2019
by   Liang Pang, et al.

In learning-to-rank for information retrieval, a ranking model is automatically learned from data and then used to rank sets of retrieved documents. An ideal ranking model is therefore a mapping from a document set to a permutation of that set, and it should satisfy two critical requirements: (1) it should model cross-document interactions so as to capture the local context information of a query; (2) it should be permutation-invariant, meaning that any permutation of the input documents does not change the output ranking. Previous studies on learning-to-rank either design univariate scoring functions that score each document separately, and thus fail to model cross-document interactions, or construct multivariate scoring functions that score documents sequentially, which inevitably sacrifices the permutation-invariance requirement. In this paper, we propose a neural learning-to-rank model called SetRank, which directly learns a permutation-invariant ranking model defined on document sets of any size. SetRank employs a stack of (induced) multi-head self-attention blocks as its key component for jointly learning the embeddings of all retrieved documents. The self-attention mechanism not only helps SetRank capture local context information from cross-document interactions, but also lets it learn permutation-equivariant representations of the input documents, thereby achieving a permutation-invariant ranking model. Experimental results on three large-scale benchmarks show that SetRank significantly outperforms baselines, including traditional learning-to-rank models and state-of-the-art neural IR models.
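To make the idea concrete, the following is a minimal PyTorch sketch of a permutation-equivariant set scorer in the spirit of the abstract, not the authors' implementation. It uses standard (non-induced) multi-head self-attention blocks; the class name SetRankSketch and all hyperparameters are hypothetical choices for illustration only.

```python
import torch
import torch.nn as nn

class SetRankSketch(nn.Module):
    """Hedged sketch of a permutation-equivariant set scorer (not the authors' code).

    Each retrieved document is projected independently, then a stack of multi-head
    self-attention blocks lets every document attend to every other one, capturing
    cross-document (local context) interactions. Because no positional encoding is
    added, permuting the input documents permutes the output scores identically
    (permutation equivariance); sorting by the scores therefore gives a
    permutation-invariant ranking.
    """

    def __init__(self, feature_dim: int, hidden_dim: int = 128,
                 num_heads: int = 4, num_blocks: int = 2):
        super().__init__()
        self.input_proj = nn.Linear(feature_dim, hidden_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_blocks)
        self.score_head = nn.Linear(hidden_dim, 1)

    def forward(self, doc_features: torch.Tensor) -> torch.Tensor:
        # doc_features: (batch, set_size, feature_dim), one row per retrieved document
        x = self.input_proj(doc_features)
        x = self.encoder(x)                    # jointly learned, order-free embeddings
        return self.score_head(x).squeeze(-1)  # (batch, set_size) relevance scores

# Usage: score a set of 5 documents with 10 features each; shuffling the set
# shuffles the scores in the same way, so the induced ranking is unchanged.
model = SetRankSketch(feature_dim=10)
docs = torch.randn(1, 5, 10)
scores = model(docs)
ranking = torch.argsort(scores, dim=-1, descending=True)
```

The paper's "induced" variant additionally attends through a small set of learned inducing points to reduce the quadratic cost of self-attention over large document sets; the sketch above omits that optimization.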
