Calculating Question Similarity is Enough: A New Method for KBQA Tasks

by Hanyu Zhao, et al.

Knowledge Base Question Answering (KBQA) aims to answer natural language questions with the help of an external knowledge base. The core idea is to find the link between the internal knowledge behind questions and the known triples of the knowledge base. A typical KBQA pipeline contains several steps, including entity recognition, relation extraction, and entity linking; in such pipeline methods, an error in any step inevitably propagates to the final prediction. To solve this problem, this paper proposes a Corpus Generation - Retrieve Method (CGRM) based on a Pre-trained Language Model (PLM) and a Knowledge Graph (KG). First, based on the mT5 model, we design two new pre-training tasks, knowledge masked language modeling and paragraph-based question generation, to obtain the knowledge-enhanced T5 (kT5) model. Second, after preprocessing the triples of the knowledge graph with a series of heuristic rules, the kT5 model generates natural language QA pairs from the processed triples. Finally, we answer questions directly by retrieving over the synthetic QA dataset. We test our method on the NLPCC-ICCPOL 2016 KBQA dataset, and the results show that our framework improves KBQA performance and that our straightforward method is competitive with the state of the art.
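The final retrieval step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the QA pairs and the bag-of-words cosine similarity are placeholders for the kT5-generated corpus and whatever question-similarity measure the system actually uses.

```python
import math
from collections import Counter

# Hypothetical synthetic QA pairs, standing in for the corpus
# that kT5 would generate from preprocessed KG triples.
synthetic_qa = [
    ("Who wrote Hamlet?", "William Shakespeare"),
    ("What is the capital of France?", "Paris"),
    ("When was the Eiffel Tower built?", "1889"),
]

def cosine_sim(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two questions."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer(question: str) -> str:
    """Answer by returning the answer of the most similar synthetic question."""
    _, best_answer = max(synthetic_qa, key=lambda qa: cosine_sim(question, qa[0]))
    return best_answer

print(answer("Who was the writer of Hamlet?"))  # → William Shakespeare
```

Because answering reduces to ranking question similarity over the generated corpus, no entity recognition, relation extraction, or entity linking is run at inference time, which is how the method avoids pipeline error propagation.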




