Commonsense Knowledge-Augmented Pretrained Language Models for Causal Reasoning Classification

12/16/2021
by Pedram Hosseini, et al.

Commonsense knowledge can be leveraged to identify causal relations in text. In this work, we verbalize triples from ATOMIC2020, a wide-coverage commonsense reasoning knowledge graph, into natural-language text and use them to continually pretrain a BERT language model. We evaluate the resulting model on answering commonsense reasoning questions. Our results show that a continually pretrained language model augmented with commonsense reasoning knowledge outperforms our baseline on two commonsense causal reasoning benchmarks, COPA and BCOPA-CE, without additional modifications to the base model or quality-enhanced data for fine-tuning.
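To make the pipeline concrete, below is a minimal sketch of the two steps the abstract describes: verbalizing knowledge-graph triples into sentences and continuing masked-language-model pretraining of BERT on them. It assumes the Hugging Face transformers and datasets libraries and the bert-base-uncased checkpoint; the example triples and relation templates are illustrative placeholders, not the authors' actual verbalization templates or training configuration.

```python
from transformers import (
    BertForMaskedLM,
    BertTokenizerFast,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import Dataset

# Hypothetical ATOMIC2020-style triples: (head event, relation, tail phrase).
triples = [
    ("PersonX pays PersonY a compliment", "xIntent", "to be nice"),
    ("PersonX spills coffee on the laptop", "xEffect", "has to buy a new laptop"),
]

# Hypothetical relation-to-text templates used to verbalize triples into sentences.
TEMPLATES = {
    "xIntent": "{head}. PersonX did this because they wanted {tail}.",
    "xEffect": "{head}. As a result, PersonX {tail}.",
}

def verbalize(head, relation, tail):
    """Turn a (head, relation, tail) triple into a natural-language sentence."""
    return TEMPLATES[relation].format(head=head, tail=tail)

sentences = [verbalize(*t) for t in triples]

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Tokenize the verbalized sentences into a small training dataset.
dataset = Dataset.from_dict({"text": sentences}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

# Standard masked-LM collator: continual pretraining here means further MLM
# training of the already-pretrained BERT on the commonsense sentences.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="bert-atomic-continual",
        num_train_epochs=1,
        per_device_train_batch_size=8,
    ),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()
```

The resulting checkpoint could then be fine-tuned and evaluated on causal reasoning benchmarks such as COPA and BCOPA-CE, per the abstract; the evaluation setup itself is not sketched here.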
