Mind Reasoning Manners: Enhancing Type Perception for Generalized Zero-shot Logical Reasoning over Text

by Fangzhi Xu, et al.

Logical reasoning tasks involve diverse types of complex reasoning over text, typically framed as multiple-choice question answering: given a context, a question, and a set of options as input, previous methods achieve strong performance in the full-data setting. However, current benchmark datasets rest on the ideal assumption that the reasoning-type distribution of the train split is close to that of the test split, which is inconsistent with many real application scenarios. This leaves two problems to be studied: (1) What is the zero-shot capability of these models (trained on seen types and tested on unseen types)? (2) How can a model's perception of reasoning types be enhanced? For problem 1, we propose ZsLR, a new benchmark for generalized zero-shot logical reasoning, comprising six splits based on three type-sampling strategies. For problem 2, we propose TaCo, a type-aware model that uses both heuristic input reconstruction and contrastive learning to improve type perception in the global representation. Extensive experiments in both the zero-shot and full-data settings demonstrate the superiority of TaCo over state-of-the-art methods. We also verify the generalization capability of TaCo on other logical reasoning datasets.
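The abstract does not spell out TaCo's contrastive objective, so as an illustration only, here is a minimal NumPy sketch of a supervised contrastive loss over reasoning-type labels: examples that share a reasoning type are pulled together in the global representation space, while examples of other types are pushed apart. The function name, the simplified loss form (no per-anchor positive normalization), and the toy batch are all assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def type_contrastive_loss(embeddings, type_labels, temperature=0.1):
    """Simplified supervised contrastive loss keyed on reasoning-type labels.

    For each anchor, every other example with the same type label is a
    positive; the loss is the negative log-probability of the positives
    under a softmax over all non-anchor examples.
    """
    # L2-normalize so the dot product is cosine similarity
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature  # pairwise scaled similarities
    n = len(type_labels)
    loss = 0.0
    for i in range(n):
        positives = [j for j in range(n)
                     if j != i and type_labels[j] == type_labels[i]]
        if not positives:
            continue
        # softmax denominator over all examples except the anchor itself
        logits = np.delete(sim[i], i)
        log_denom = np.log(np.exp(logits).sum())
        for j in positives:
            jj = j if j < i else j - 1  # index of j after deleting i
            loss += -(logits[jj] - log_denom)
    return loss / n

# Toy batch: 4 global representations covering two reasoning types
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
labels = [0, 0, 1, 1]
print(type_contrastive_loss(emb, labels))
```

Minimizing this quantity encourages representations of the same reasoning type to cluster, which is the general mechanism by which a type-aware contrastive term can sharpen type perception in a shared encoder.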


Zero-Shot Classification by Logical Reasoning on Natural Language Explanations

Humans can classify an unseen category by reasoning on its language expl...

Are Large Language Models Really Good Logical Reasoners? A Comprehensive Evaluation From Deductive, Inductive and Abductive Views

Large Language Models (LLMs) have achieved great success in various natu...

Zero-Shot Visual Question Answering

Part of the appeal of Visual Question Answering (VQA) is its promise to ...

Few-shot Visual Reasoning with Meta-analogical Contrastive Learning

While humans can solve a visual puzzle that requires logical reasoning b...

STEPS: A Benchmark for Order Reasoning in Sequential Tasks

Various human activities can be abstracted into a sequence of actions in...

SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning

The recent progress in large language models (LLMs), especially the inve...

RnG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering

Existing KBQA approaches, despite achieving strong performance on i.i.d....
