Adapting Learned Sparse Retrieval for Long Documents

by Thong Nguyen, et al.

Learned sparse retrieval (LSR) is a family of neural retrieval methods that transform queries and documents into sparse weight vectors aligned with a vocabulary. While LSR approaches like Splade work well for short passages, it is unclear how well they handle longer documents. We investigate existing aggregation approaches for adapting LSR to longer documents and find that proximal scoring is crucial for LSR to handle long documents. To leverage this property, we propose two adaptations of the Sequential Dependence Model (SDM) to LSR: ExactSDM and SoftSDM. ExactSDM assumes only exact query term dependence, while SoftSDM uses potential functions that model the dependence of query terms and their expansion terms (i.e., terms identified using a transformer's masked language modeling head). Experiments on the MSMARCO Document and TREC Robust04 datasets demonstrate that both ExactSDM and SoftSDM outperform existing LSR aggregation approaches across different document length constraints. Surprisingly, SoftSDM does not provide any performance benefit over ExactSDM, suggesting that soft proximity matching is not necessary for modeling term dependence in LSR. Overall, this study provides insights into handling long documents with LSR and proposes adaptations that improve its performance.
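For readers unfamiliar with the Sequential Dependence Model that ExactSDM and SoftSDM build on, the following is a minimal sketch of classic SDM scoring (Metzler & Croft, 2005): a weighted combination of unigram matches, exact ordered bigram matches, and unordered-window co-occurrences. The feature weights and window size below are illustrative assumptions, and raw counts stand in for the smoothed log-probabilities used in the original model; none of these values come from the paper.

```python
def count_ordered(pair, tokens):
    """Count exact adjacent occurrences of a query bigram (ordered window #1)."""
    return sum(1 for i in range(len(tokens) - 1)
               if (tokens[i], tokens[i + 1]) == pair)

def count_unordered(pair, tokens, window=8):
    """Count occurrences where both terms fall within an unordered window (#uw8)."""
    a, b = pair
    count = 0
    for i, tok in enumerate(tokens):
        if tok in (a, b):
            other = b if tok == a else a
            if other in tokens[i + 1:i + window]:
                count += 1
    return count

def sdm_score(query, doc, lambdas=(0.8, 0.1, 0.1)):
    """SDM-style score: linear combination of unigram, ordered-bigram,
    and unordered-window features over whitespace tokens."""
    q, d = query.split(), doc.split()
    lam_t, lam_o, lam_u = lambdas
    unigram = sum(d.count(t) for t in q)
    bigrams = list(zip(q, q[1:]))
    ordered = sum(count_ordered(p, d) for p in bigrams)
    unordered = sum(count_unordered(p, d) for p in bigrams)
    return lam_t * unigram + lam_o * ordered + lam_u * unordered
```

The proximity features are what make SDM attractive for long documents: a document where the query terms appear near each other scores higher than one where the same terms are scattered, which is the "proximal scoring" property the paper finds crucial for LSR.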


