A Review on Language Models as Knowledge Bases

04/12/2022
by Badr AlKhamissi et al.
Facebook

Recently, there has been a surge of interest in the NLP community in using pretrained Language Models (LMs) as Knowledge Bases (KBs). Researchers have shown that an LM trained on a sufficiently large (web) corpus will implicitly encode a significant amount of knowledge in its parameters. The resulting LM can then be probed for different kinds of knowledge, thus acting as a KB. This has a major advantage over traditional KBs in that it requires no human supervision. In this paper, we present a set of aspects that we deem an LM should have to fully act as a KB, and review the recent literature with respect to those aspects.
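The "probing" the abstract refers to is typically cloze-style querying: a fact is turned into a fill-in-the-blank prompt and the LM's top predictions are checked against the gold answer (precision@k). The sketch below illustrates the idea with a hypothetical toy stand-in for a masked LM's predictions; a real setup would query a pretrained model such as BERT instead of the hard-coded table used here.

```python
# Toy sketch of cloze-style knowledge probing. The function below is a
# hypothetical stand-in for a masked LM: in practice one would query a real
# pretrained model for its [MASK] predictions.

def toy_lm_fill_mask(prompt):
    """Return (token, score) candidates for the [MASK] slot in the prompt.
    Hard-coded here purely for illustration; not a real model."""
    fake_predictions = {
        "Dante was born in [MASK].": [
            ("Florence", 0.62), ("Italy", 0.21), ("Rome", 0.05)],
        "The capital of France is [MASK].": [
            ("Paris", 0.88), ("Lyon", 0.03)],
    }
    return fake_predictions.get(prompt, [])

def probe(prompt, gold, k=1):
    """Precision@k check: is the gold answer among the top-k fillers?"""
    ranked = sorted(toy_lm_fill_mask(prompt), key=lambda p: p[1], reverse=True)
    topk = [token for token, _ in ranked[:k]]
    return gold in topk

print(probe("Dante was born in [MASK].", "Florence"))          # True
print(probe("The capital of France is [MASK].", "Rome", k=2))  # False
```

Treating the LM as a KB then amounts to running many such probes over a relation dataset and aggregating the precision@k scores.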

