Contrastive Knowledge Graph Error Detection

by Qinggang Zhang, et al.

Knowledge Graph (KG) errors introduce non-negligible noise, severely affecting KG-related downstream tasks. Detecting errors in KGs is challenging since the patterns of errors are unknown and diverse, while ground-truth labels are rare or even unavailable. A traditional solution is to construct logical rules to verify triples, but this approach does not generalize, since different KGs follow distinct rules that require domain knowledge. Recent studies focus on designing tailored detectors or ranking triples by KG embedding loss. However, they all rely on negative samples for training, which are generated by randomly replacing the head or tail entity of existing triples. Such a negative sampling strategy is insufficient for modeling practical KG errors, e.g., (Bruce_Lee, place_of_birth, China), whose three elements are often semantically related even though the triple is incorrect. We desire a more effective unsupervised learning mechanism tailored for KG error detection. To this end, we propose a novel framework, ContrAstive knowledge Graph Error Detection (CAGED). It introduces contrastive learning into KG learning and provides a novel way of modeling KGs. Instead of following the traditional setting, i.e., considering entities as nodes and relations as semantic edges, CAGED augments a KG into different hyper-views by regarding each relational triple as a node. After joint training with a KG embedding loss and a contrastive learning loss, CAGED assesses the trustworthiness of each triple based on two learning signals, i.e., the consistency of triple representations across the multiple views and the self-consistency within the triple. Extensive experiments on three real-world KGs show that CAGED outperforms state-of-the-art methods in KG error detection. Our code and datasets are available at
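The two scoring signals the abstract describes can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: it assumes a TransE-style translational residual for within-triple self-consistency, and uses the distance between a triple's embeddings from two augmented views as the cross-view signal. The function name and the weighting hyperparameter `lam` are our own illustrative choices.

```python
import numpy as np

def triple_trust_score(h, r, t, z_view1, z_view2, lam=0.5):
    """Toy trustworthiness score combining the two signals from the abstract.

    - self-consistency within the triple: TransE-style residual ||h + r - t||
      (an assumption; CAGED's actual embedding model may differ)
    - cross-view consistency: distance between the triple's representations
      from two augmented hyper-views (z_view1, z_view2)

    Lower score = more trustworthy; errors are flagged by ranking triples
    in descending order of this score.
    """
    self_consistency = np.linalg.norm(h + r - t)
    cross_view = np.linalg.norm(z_view1 - z_view2)
    return self_consistency + lam * cross_view

# A well-formed triple: t ~ h + r, and the two views agree.
h = np.array([1.0, 0.0])
r = np.array([0.0, 1.0])
t = np.array([1.0, 1.0])
good = triple_trust_score(h, r, t, np.ones(2), np.ones(2))

# A corrupted tail entity, plus disagreeing view embeddings.
t_bad = np.array([3.0, -2.0])
bad = triple_trust_score(h, r, t_bad, np.ones(2), -np.ones(2))

assert good < bad  # the corrupted triple ranks as less trustworthy
```

Ranking by such a score is what lets the method operate without ground-truth error labels: only relative trustworthiness matters.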


Knowledge Graph Self-Supervised Rationalization for Recommendation

In this paper, we introduce a new self-supervised rationalization method...

Investigating the Effect of Hard Negative Sample Distribution on Contrastive Knowledge Graph Embedding

The success of the knowledge graph completion task heavily depends on th...

Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation

Graph contrastive learning is the state-of-the-art unsupervised graph re...

Commonsense Knowledge Graph Completion Via Contrastive Pretraining and Node Clustering

The nodes in the commonsense knowledge graph (CSKG) are normally represe...

Let Invariant Rationale Discovery Inspire Graph Contrastive Learning

Leading graph contrastive learning (GCL) methods perform graph augmentat...

Relational Symmetry based Knowledge Graph Contrastive Learning

Knowledge graph embedding (KGE) aims to learn powerful representations t...

Multispectral Self-Supervised Learning with Viewmaker Networks

Contrastive learning methods have been applied to a range of domains and...
