PGT: Pseudo Relevance Feedback Using a Graph-Based Transformer

01/20/2021
by HongChien Yu, et al.

Most research on pseudo relevance feedback (PRF) has been done in vector space and probabilistic retrieval models. This paper shows that Transformer-based rerankers can also benefit from the extra context that PRF provides. It presents PGT, a graph-based Transformer that sparsifies attention between graph nodes to enable PRF while avoiding the high computational complexity of most Transformer architectures. Experiments show that PGT improves upon a non-PRF Transformer reranker, and that it is at least as accurate as Transformer PRF models that use full attention, but at lower computational cost.

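The abstract does not give implementation details, but the core idea it describes, restricting attention to a sparse pattern over graph nodes (query, candidate document, and feedback documents) instead of applying full attention over the concatenated text, can be illustrated with a small sketch. The node layout, mask pattern, and dimensions below are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the PGT code) of sparsified attention over graph nodes:
# tokens always attend within their own node, and cross-node attention is
# restricted to a block-sparse pattern, so cost depends on which node pairs
# are connected rather than on the full concatenated sequence length.
# Node sizes, the allowed-pair set, and dimensions are illustrative assumptions.

import torch

def block_sparse_mask(node_lengths, allowed_pairs):
    """Build a boolean attention mask over the concatenated token sequence.

    node_lengths:  token count per graph node, e.g. [query, candidate, fb1, fb2].
    allowed_pairs: set of (i, j) node-index pairs where tokens of node i may
                   attend to tokens of node j (intra-node attention is always on).
    """
    total = sum(node_lengths)
    mask = torch.zeros(total, total, dtype=torch.bool)
    # Token offset of each node within the concatenated sequence.
    starts = [0]
    for n in node_lengths:
        starts.append(starts[-1] + n)
    for i, li in enumerate(node_lengths):
        for j, lj in enumerate(node_lengths):
            if i == j or (i, j) in allowed_pairs:
                mask[starts[i]:starts[i] + li, starts[j]:starts[j] + lj] = True
    return mask

# Example: query (node 0) and candidate document (node 1) interact with every node,
# while the two feedback documents (nodes 2, 3) do not attend to each other.
lengths = [16, 128, 128, 128]
allowed = {(0, 1), (1, 0), (0, 2), (0, 3), (1, 2), (1, 3),
           (2, 0), (2, 1), (3, 0), (3, 1)}
mask = block_sparse_mask(lengths, allowed)

x = torch.randn(1, sum(lengths), 256)                 # toy token embeddings
attn = torch.nn.MultiheadAttention(256, num_heads=4, batch_first=True)
out, _ = attn(x, x, x, attn_mask=~mask)               # True in attn_mask means "blocked"
print(out.shape)                                      # torch.Size([1, 400, 256])

Because every token attends only within its own node and to the permitted neighbor nodes, the cost of attention is governed by the sizes of the connected blocks rather than by the square of the full concatenated input, which is the efficiency argument the abstract makes against full-attention PRF models.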