Graph-to-Text Generation with Dynamic Structure Pruning

09/15/2022
by   Liang Li, et al.

Most graph-to-text works are built on the encoder-decoder framework with a cross-attention mechanism. Recent studies have shown that explicitly modeling the input graph structure can significantly improve performance. However, the vanilla structural encoder cannot capture all specialized information in a single forward pass for all decoding steps, resulting in inaccurate semantic representations. Meanwhile, the input graph is flattened into an unordered sequence in cross-attention, ignoring the original graph structure. As a result, the input graph context vector obtained in the decoder may be flawed. To address these issues, we propose a Structure-Aware Cross-Attention (SACA) mechanism that re-encodes the input graph representation, conditioned on the newly generated context at each decoding step, in a structure-aware manner. We further adapt SACA and introduce its variant, the Dynamic Graph Pruning (DGP) mechanism, which dynamically drops irrelevant nodes during decoding. We achieve new state-of-the-art results on two graph-to-text datasets, LDC2020T02 and ENT-DESC, with only a minor increase in computational cost.
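The two mechanisms described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the one-step neighbourhood re-encoding, the attention-weight pruning threshold, and all function names below are illustrative assumptions, showing only the general idea of structure-aware cross-attention over graph nodes followed by dynamic node dropping.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def structure_aware_cross_attention(query, node_repr, adj, keep_mask):
    """Hypothetical SACA-style step: re-encode nodes using the graph
    structure, then attend to them from the current decoder state.

    query:     (d,)   current decoder hidden state
    node_repr: (n, d) node representations from the encoder
    adj:       (n, n) adjacency matrix (1 = edge)
    keep_mask: (n,)   boolean mask of nodes not yet pruned
    """
    # One message-passing step mixes each node with its neighbours,
    # so the decoder sees structure rather than a flat node sequence.
    norm = adj.sum(axis=-1, keepdims=True).clip(min=1)
    re_encoded = node_repr + (adj @ node_repr) / norm
    # Attention scores from the current decoder state; pruned nodes
    # are masked out so they receive (near-)zero weight.
    scores = re_encoded @ query
    scores = np.where(keep_mask, scores, -1e9)
    weights = softmax(scores)
    context = weights @ re_encoded
    return context, weights

def dynamic_graph_prune(weights, keep_mask, threshold=0.05):
    """DGP-style step (assumed criterion): permanently drop nodes whose
    attention weight falls below a threshold at this decoding step."""
    return keep_mask & (weights >= threshold)

# Toy usage: 5 nodes, 4-dim representations, one decoding step.
rng = np.random.default_rng(0)
n, d = 5, 4
node_repr = rng.normal(size=(n, d))
adj = (rng.random((n, n)) < 0.4).astype(float)
query = rng.normal(size=d)
keep = np.ones(n, dtype=bool)
keep[3] = False  # pretend node 3 was pruned earlier

context, weights = structure_aware_cross_attention(query, node_repr, adj, keep)
keep = dynamic_graph_prune(weights, keep)
```

The key design point the abstract argues for is that `re_encoded` is recomputed per decoding step (conditioned on the evolving decoder state in the full model), rather than computed once by the encoder and reused for all steps.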


Related research

research · 03/16/2021
Structural Adapters in Pretrained Language Models for AMR-to-text Generation
Previous work on text generation from graph-structured data relies on pr...

research · 09/15/2021
Attention Is Indeed All You Need: Semantically Attention-Guided Decoding for Data-to-Text NLG
Ever since neural models were adopted in data-to-text language generatio...

research · 06/16/2020
Modeling Graph Structure via Relative Position for Better Text Generation from Knowledge Graphs
We present a novel encoder-decoder architecture for graph-to-text genera...

research · 07/15/2020
RobustScanner: Dynamically Enhancing Positional Clues for Robust Text Recognition
The attention-based encoder-decoder framework has recently achieved impr...

research · 06/19/2021
JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs
Existing pre-trained models for knowledge-graph-to-text (KG-to-text) gen...

research · 05/10/2021
R2D2: Relational Text Decoding with Transformers
We propose a novel framework for modeling the interaction between graphi...

research · 06/15/2020
Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder
Generating inferential texts about an event in different perspectives re...
