Teaching Probabilistic Logical Reasoning to Transformers

05/22/2023
by Aliakbar Nafar, et al.

Recent research on transformer-based language models investigates their ability to reason over logical rules expressed in natural language text. However, their logic is not yet well understood, as we cannot explain the abstractions the models make that help them reason. These models are criticized for merely memorizing complex patterns in the data, which often limits their generalization to unobserved situations. In this work, we analyze the use of probabilistic logical rules in transformer-based language models. In particular, we propose a new approach, Probabilistic Constraint Training (PCT), that explicitly models probabilistic logical reasoning by imposing the rules of reasoning as constraints during training. We also introduce a new QA benchmark for evaluating probabilistic reasoning over uncertain textual rules; unlike the only existing relevant benchmark, it contains instance-specific rules. Experimental results show that our proposed technique improves the base language models' accuracy and explainability when probabilistic logical reasoning is required for question answering. Moreover, we show that the learned probabilistic reasoning abilities are transferable to novel situations.
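To make the constraint idea concrete, the sketch below shows one plausible way such a training objective could be assembled: a standard QA loss plus a penalty that pushes the model's confidence toward the probability implied by the uncertain rule in the text. This is a minimal illustration only; the function name `pct_loss`, the weighting scheme, and the use of a squared-error penalty are assumptions for exposition, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def pct_loss(answer_logits, gold_labels, predicted_prob, rule_implied_prob, alpha=1.0):
    """Hypothetical sketch of a constraint-augmented training objective.

    answer_logits:     model logits over answer choices
    gold_labels:       indices of the gold answers
    predicted_prob:    probability the model assigns to the queried fact
    rule_implied_prob: probability dictated by the stated probabilistic rule
    alpha:             weight of the constraint penalty (assumed hyperparameter)
    """
    # Standard supervised QA loss on the gold answers.
    qa_loss = F.cross_entropy(answer_logits, gold_labels)

    # Constraint penalty: the model's confidence should track the
    # probability implied by the uncertain textual rule.
    constraint_penalty = F.mse_loss(predicted_prob, rule_implied_prob)

    return qa_loss + alpha * constraint_penalty
```

In this reading, the constraint term is what distinguishes the approach from plain fine-tuning: the model is rewarded not only for picking the right answer but for calibrating its confidence to the stated rule probabilities, which is consistent with the explainability gains the abstract reports.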
