Joint Reasoning on Hybrid-knowledge sources for Task-Oriented Dialog

by Mayank Mishra, et al.

Traditional task-oriented dialog systems use only knowledge present in structured knowledge sources to generate responses. However, information required to generate a response may also reside in unstructured sources, such as documents. Recent state-of-the-art models such as HyKnow and SeKnow, which aim to overcome this limitation, make restrictive assumptions about the knowledge sources. For instance, these systems assume that certain types of information, such as a phone number, are always present in a structured KB, while information about aspects such as entrance ticket prices is always available in documents. In this paper, we create a modified version of the MultiWOZ-based dataset prepared by SeKnow to demonstrate that current methods degrade significantly in performance when strict assumptions about the source of information are removed. Then, in line with recent work exploiting pre-trained language models, we fine-tune a BART-based model using prompts for the tasks of querying knowledge sources as well as response generation, without making assumptions about which information is present in each knowledge source. Through a series of experiments, we show that our model is robust to perturbations of the knowledge modality (source of information), and that it can fuse information from structured as well as unstructured knowledge to generate responses.
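As a rough illustration of the prompt-based setup described above, the sketch below shows how a single BART-style sequence-to-sequence model could be conditioned with task prefixes to either query knowledge sources or generate a response grounded in both structured (KB) and unstructured (document) results. All function names, prompt formats, and example values here are illustrative assumptions, not the paper's actual interface.

```python
# Minimal sketch of prompt construction for a hybrid-knowledge TOD model.
# Assumes one seq2seq model (e.g. BART) handles both tasks, distinguished
# by a task prefix; the special markers <kb>/<doc> and the "none" filler
# are hypothetical choices, not taken from the paper.

def make_query_prompt(dialog_history):
    """Prompt asking the model to produce a query over both knowledge sources."""
    history = " </s> ".join(dialog_history)
    return f"query knowledge: {history}"

def make_response_prompt(dialog_history, kb_results, doc_passages):
    """Prompt asking the model to fuse structured and unstructured knowledge."""
    history = " </s> ".join(dialog_history)
    kb = " ; ".join(kb_results) if kb_results else "none"
    docs = " ".join(doc_passages) if doc_passages else "none"
    return f"generate response: {history} <kb> {kb} </kb> <doc> {docs} </doc>"

# Example: the same entity attribute may arrive from either modality,
# so the response prompt carries both slots regardless of which is filled.
prompt = make_response_prompt(
    ["user: What is the phone number of the Gonville Hotel?"],
    ["gonville hotel | phone | 01223366611"],  # illustrative KB triple
    [],                                        # no document hit this turn
)
print(prompt)
```

Because the prompt always exposes both a `<kb>` and a `<doc>` slot, the model is never told in advance which modality holds the answer, which is the property the paper's perturbation experiments test.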




