HiPool: Modeling Long Documents Using Graph Neural Networks

by Irene Li et al.

Encoding long sequences in Natural Language Processing (NLP) is a challenging problem. Although recent pretrained language models achieve strong performance on many NLP tasks, they are still restricted by a pre-defined maximum input length, which makes them difficult to extend to longer sequences. Some recent works therefore use hierarchies to model long sequences; however, most of them apply sequential models at the upper hierarchy levels and thus still suffer from long-dependency issues. In this paper, we alleviate these issues with a graph-based method. We first chunk the sequence into fixed-length segments to model sentence-level information. We then leverage graphs, with a new attention mechanism, to model intra- and cross-sentence correlations. Additionally, because standard benchmarks for long document classification (LDC) are limited, we propose a new challenging benchmark of six datasets with up to 53k samples and an average length of 4,034 tokens. Evaluation shows our model surpasses competitive baselines by 2.6% in F1 score, and performs especially well on the longest-sequence dataset. Our method outperforms hierarchical sequential models in both accuracy and scalability, especially for longer sequences.
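To make the hierarchical idea concrete, below is a minimal sketch in PyTorch, not the authors' released implementation. The chunk length, the stand-in Transformer chunk encoder, the fully connected chunk graph, and the single GCN-style mixing step are all illustrative assumptions; the paper's actual attention mechanism and graph construction differ in detail.

```python
# Sketch: split a long token sequence into fixed-length chunks, encode each
# chunk independently, connect the chunk embeddings in a graph, and let a
# simple graph layer exchange information across chunks before pooling to a
# document vector for classification.
import torch
import torch.nn as nn


class ChunkGraphClassifier(nn.Module):
    def __init__(self, vocab_size=30522, hidden=256, chunk_len=128, num_classes=2):
        super().__init__()
        self.chunk_len = chunk_len
        self.embed = nn.Embedding(vocab_size, hidden)
        # Stand-in chunk encoder; in practice this would be a pretrained
        # Transformer (e.g. BERT) applied to each chunk separately.
        enc_layer = nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True)
        self.chunk_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # One GCN-style mixing step over the chunk graph (an assumption,
        # standing in for the paper's graph attention).
        self.graph_lin = nn.Linear(hidden, hidden)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, input_ids):
        # input_ids: (batch, seq_len); assumes seq_len is a multiple of chunk_len.
        b, n = input_ids.shape
        chunks = input_ids.view(b, n // self.chunk_len, self.chunk_len)
        x = self.embed(chunks)                        # (b, c, L, h)
        enc = self.chunk_encoder(x.flatten(0, 1))     # encode each chunk separately
        nodes = enc.mean(dim=1).view(b, -1, x.size(-1))  # (b, c, h) chunk-level nodes

        # Fully connected chunk graph with self-loops, row-normalized.
        c = nodes.size(1)
        adj = torch.ones(c, c, device=nodes.device) / c
        mixed = torch.relu(self.graph_lin(adj @ nodes))  # cross-chunk message passing

        doc = mixed.mean(dim=1)                       # pool chunk nodes to a doc vector
        return self.classifier(doc)


if __name__ == "__main__":
    model = ChunkGraphClassifier()
    ids = torch.randint(0, 30522, (2, 512))           # two documents of 512 tokens
    print(model(ids).shape)                           # torch.Size([2, 2])
```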



