Towards Controllable Natural Language Inference through Lexical Inference Types

08/07/2023
by Yingji Zhang, et al.

Explainable natural language inference aims to provide a mechanism for producing explanatory (abductive) inference chains that ground claims in their supporting premises. A recent corpus, EntailmentBank, strives to advance this task by explaining the answer to a question with an entailment tree <cit.>. Its authors employ the T5 model to generate the tree directly, which can explain how the answer is inferred but cannot explain or control how the intermediate steps are generated, a capability that is essential to the multi-hop inference process. In this work, we propose a controlled natural language inference architecture for multi-premise explanatory inference. To improve control and enable explanatory analysis over the generation, we define lexical inference types based on Abstract Meaning Representation (AMR) graphs and modify the T5 architecture to learn a latent sentence representation (a T5 bottleneck) conditioned on this type information. We also deliver a dataset of approximately 5,000 annotated explanatory inference steps with well-grounded lexical-symbolic operations. Experimental results indicate that the inference typing induced at the T5 bottleneck helps T5 generate conclusions under explicit control.
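
The abstract does not include the authors' implementation, so the following is a minimal, hypothetical sketch (PyTorch with Hugging Face Transformers) of what a type-conditioned T5 bottleneck could look like: the encoder output is mean-pooled into a single latent sentence vector, summed with a learned embedding of the lexical inference type, and exposed to the decoder as a one-token memory. The class name, the pooling choice, and the number of inference types are assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch, not the authors' code: a T5 "bottleneck" whose latent
# sentence vector is conditioned on a lexical inference type before decoding.
import torch
from torch import nn
from transformers import T5ForConditionalGeneration, T5TokenizerFast
from transformers.modeling_outputs import BaseModelOutput

class TypedBottleneckT5(nn.Module):
    def __init__(self, model_name="t5-base", num_types=8):  # num_types is illustrative
        super().__init__()
        self.t5 = T5ForConditionalGeneration.from_pretrained(model_name)
        # One learned vector per lexical inference type.
        self.type_emb = nn.Embedding(num_types, self.t5.config.d_model)

    def forward(self, input_ids, attention_mask, type_ids, labels=None):
        enc = self.t5.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Mean-pool token states into a single latent sentence representation.
        mask = attention_mask.unsqueeze(-1).type_as(enc.last_hidden_state)
        pooled = (enc.last_hidden_state * mask).sum(1) / mask.sum(1)
        # Condition the latent on the inference type and expose it to the
        # decoder as a one-token encoder "memory" (the bottleneck).
        latent = (pooled + self.type_emb(type_ids)).unsqueeze(1)
        return self.t5(
            encoder_outputs=BaseModelOutput(last_hidden_state=latent),
            attention_mask=torch.ones(latent.shape[:2], dtype=torch.long,
                                      device=latent.device),
            labels=labels,
        )

tok = T5TokenizerFast.from_pretrained("t5-base")
model = TypedBottleneckT5()
batch = tok(["premise one. premise two."], return_tensors="pt")
targets = tok(["conclusion."], return_tensors="pt").input_ids
out = model(**batch, type_ids=torch.tensor([3]), labels=targets)
print(out.loss)
```

Under this reading, fixing type_ids at generation time selects which lexical-symbolic operation the decoder should realize, which is one plausible interpretation of "generating a conclusion under explicit control".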


Related Research

04/17/2021 · Explaining Answers with Entailment Trees
Our goal, in the context of open-domain textual question-answering (QA),...

06/04/2016 · Generating Natural Language Inference Chains
The ability to reason with natural language is a fundamental prerequisit...

05/05/2022 · METGEN: A Module-Based Entailment Tree Generation Framework for Answer Explanation
Knowing the reasoning chains from knowledge to the predicted answers can...

08/05/2022 · Going Beyond Approximation: Encoding Constraints for Explainable Multi-hop Inference via Differentiable Combinatorial Solvers
Integer Linear Programming (ILP) provides a viable mechanism to encode e...

09/25/2020 · XTE: Explainable Text Entailment
Text entailment, the task of determining whether a piece of text logical...

04/30/2020 · Modular Representation Underlies Systematic Generalization in Neural Natural Language Inference Models
In adversarial (challenge) testing, we pose hard generalization tasks in...

03/27/2013 · A Framework for Control Strategies in Uncertain Inference Networks
Control Strategies for hierarchical tree-like probabilistic inference ne...
