Knowledge Prompts: Injecting World Knowledge into Language Models through Soft Prompts

Soft prompts have recently been proposed as a tool for adapting large frozen language models (LMs) to new tasks. In this work, we repurpose soft prompts for the task of injecting world knowledge into LMs. We introduce a method to train soft prompts via self-supervised learning on data from knowledge bases. The resulting soft knowledge prompts (KPs) are task-independent and act as an external memory for the LM. We perform qualitative and quantitative experiments and demonstrate that: (1) KPs can effectively model the structure of the training data; (2) KPs can be used to improve the performance of LMs on different knowledge-intensive tasks.
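The core idea above is that continuous prompt vectors are prepended to the input embeddings and trained while the LM's own weights stay frozen. The following is a minimal sketch of that mechanic using a toy linear "LM" in NumPy; all names, dimensions, and the pooled model itself are illustrative assumptions, not the paper's actual architecture or training objective.

```python
import numpy as np

# Toy sketch of soft-prompt training: only the prompt vectors receive
# gradients; the "LM" parameters (embedding table, output projection)
# stay frozen. Dimensions and model are illustrative, not from the paper.
rng = np.random.default_rng(0)
d_model, n_prompt, n_tokens, vocab = 8, 4, 5, 10

emb = rng.normal(size=(vocab, d_model))      # frozen LM embedding table
W_out = rng.normal(size=(d_model, vocab))    # frozen LM output projection
kp = rng.normal(size=(n_prompt, d_model)) * 0.01  # trainable knowledge prompts

def forward(token_ids, kp):
    # Prepend the soft prompts to the token embeddings.
    x = np.concatenate([kp, emb[token_ids]], axis=0)  # (n_prompt+n_tokens, d)
    h = x.mean(axis=0)                                # toy pooled "LM state"
    return h @ W_out                                  # logits over the vocab

def loss_and_grad(token_ids, target, kp):
    logits = forward(token_ids, kp)
    p = np.exp(logits - logits.max()); p /= p.sum()   # softmax
    loss = -np.log(p[target])                         # cross-entropy
    dlogits = p.copy(); dlogits[target] -= 1.0
    dh = W_out @ dlogits
    # Each prompt row contributes 1/(n_prompt+n_tokens) to the mean-pooled h.
    dkp = np.tile(dh / (n_prompt + n_tokens), (n_prompt, 1))
    return loss, dkp

tokens, target = np.array([1, 2, 3, 4, 5]), 7
loss0, _ = loss_and_grad(tokens, target, kp)
for _ in range(50):
    loss, dkp = loss_and_grad(tokens, target, kp)
    kp -= 0.5 * dkp        # update only the knowledge prompts
```

Because `emb` and `W_out` are never updated, the learned `kp` vectors behave like an external memory that can be attached to (or swapped out of) the frozen model, which is the property the abstract highlights.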

05/22/2023

