Exploring ways to incorporate additional knowledge to improve Natural Language Commonsense Question Answering

by Arindam Mitra, et al.

DARPA and Allen AI have proposed a collection of datasets to encourage research in question answering domains where (commonsense) knowledge is expected to play an important role. Recent language models such as BERT and GPT, which are pre-trained on Wikipedia articles and books, have shown decent performance with little fine-tuning on several such Multiple Choice Question Answering (MCQ) datasets. Our goal in this work is to develop methods for incorporating additional (commonsense) knowledge into language-model-based approaches for better question answering in such domains. We first identify external knowledge sources and show that performance improves further when a set of facts retrieved through IR is prepended to each MCQ question during both the training and test phases. We then explore whether performance can be improved further by providing task-specific knowledge in different ways, or by employing different strategies for using the available knowledge. We present three different modes of passing knowledge and five different models of using knowledge, including the standard BERT MCQ model. We also propose a novel architecture for situations where the information needed to answer an MCQ question is scattered over multiple knowledge sentences. Finally, we take 200 predictions from each of our best models and analyze how often the given knowledge is useful, how many times the knowledge is useful but the system fails to use it, and other metrics that indicate the scope for further improvement.
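The simplest of the knowledge-passing modes described above, prepending retrieved facts to the question, can be illustrated with a short sketch. This is an assumption about the input layout, not code from the paper: the function name `build_mcq_inputs` and the exact `[CLS]`/`[SEP]` concatenation pattern are illustrative of the standard BERT sentence-pair format for multiple choice, where each candidate answer yields one scored sequence.

```python
# Hypothetical sketch: prepend IR-retrieved facts to an MCQ question in a
# BERT-style sentence-pair layout. One input sequence is built per answer
# option; a BERT MCQ head would score each sequence and pick the argmax.

def build_mcq_inputs(facts, question, options):
    """Return one BERT-style input string per answer option, with the
    retrieved knowledge prepended to the question as segment A and the
    candidate answer as segment B."""
    context = " ".join(facts)  # retrieved knowledge, passed as extra context
    return [
        f"[CLS] {context} {question} [SEP] {option} [SEP]"
        for option in options
    ]

# Toy example (illustrative facts and question, not from the datasets).
facts = ["A fox is a wild animal.", "Wild animals live in forests."]
question = "Where would a fox most likely live?"
options = ["a zoo", "a forest", "a house"]

inputs = build_mcq_inputs(facts, question, options)
for seq in inputs:
    print(seq)
```

In practice a tokenizer would produce token IDs and segment IDs rather than raw strings, but the layout is the same: the knowledge and question share segment A, so the model can attend from each answer option to the prepended facts.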


Improving Commonsense Question Answering by Graph-based Iterative Retrieval over Multiple Knowledge Sources

In order to facilitate natural language understanding, the key is to eng...

Unsupervised Commonsense Question Answering with Self-Talk

Natural language understanding involves reading between the lines with i...

Kformer: Knowledge Injection in Transformer Feed-Forward Layers

Knowledge-Enhanced Models have developed a diverse set of techniques for ...

Facts as Experts: Adaptable and Interpretable Neural Memory over Symbolic Knowledge

Massive language models are the core of modern NLP modeling and have bee...

Infusing Disease Knowledge into BERT for Health Question Answering, Medical Inference and Disease Name Recognition

Knowledge of a disease includes information of various aspects of the di...

Explaining Question Answering Models through Text Generation

Large pre-trained language models (LMs) have been shown to perform surpr...

Bridging the Knowledge Gap: Enhancing Question Answering with World and Domain Knowledge

In this paper we present OSCAR (Ontology-based Semantic Composition Augm...
