Knowledge Fusion via Embeddings from Text, Knowledge Graphs, and Images

04/20/2017
by Steffen Thoma, et al.

We present a baseline approach for cross-modal knowledge fusion. Several basic fusion methods are evaluated on existing embedding approaches for text, knowledge graphs, and images, demonstrating the potential of combining knowledge about a concept across modalities into a single fused concept representation.
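The sketch below illustrates what such basic fusion methods can look like when applied to precomputed embeddings of the same concept from the three modalities. The concrete operators (L2 normalization followed by concatenation or weighted averaging), dimensionalities, and variable names are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of basic cross-modal fusion over precomputed embeddings.
# All vectors, dimensions, and helper names here are assumptions for
# illustration only; they do not reproduce the paper's exact setup.
import numpy as np


def l2_normalize(v: np.ndarray) -> np.ndarray:
    """Scale a vector to unit length so no single modality dominates."""
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v


def fuse_concat(embeddings: list[np.ndarray]) -> np.ndarray:
    """Concatenation fusion: stack normalized modality vectors end to end."""
    return np.concatenate([l2_normalize(e) for e in embeddings])


def fuse_average(embeddings: list[np.ndarray], weights=None) -> np.ndarray:
    """(Weighted) averaging fusion: assumes the modalities share one
    dimensionality, e.g. after projecting into a common space."""
    stacked = np.stack([l2_normalize(e) for e in embeddings])
    return np.average(stacked, axis=0, weights=weights)


# Hypothetical precomputed embeddings for one concept, e.g. "cat":
text_emb = np.random.rand(300)    # e.g. a word embedding
graph_emb = np.random.rand(200)   # e.g. a knowledge-graph entity embedding
image_emb = np.random.rand(4096)  # e.g. a CNN image feature vector

fused = fuse_concat([text_emb, graph_emb, image_emb])
print(fused.shape)  # (4596,) -- one fused concept representation
```

A usage note on the design choice: concatenation preserves all modality-specific dimensions at the cost of a larger vector, while averaging keeps the dimensionality fixed but requires the modalities to live in (or be projected into) a shared space first.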

Related research

10/27/2016 · Cross-Modal Scene Networks
People can recognize scenes across many different modalities beyond natu...

10/18/2019 · Towards Learning Cross-Modal Perception-Trace Models
Representation learning is a key element of state-of-the-art deep learni...

03/13/2013 · Possibilistic Assumption based Truth Maintenance System, Validation in a Data Fusion Application
Data fusion allows the elaboration and the evaluation of a situation syn...

08/04/2015 · Multi-Modal Bayesian Embeddings for Learning Social Knowledge Graphs
We study the extent to which online social networks can be connected to ...

07/16/2020 · Memory Based Attentive Fusion
The use of multi-modal data for deep machine learning has shown promise ...

04/08/2021 · Multimodal Fusion Refiner Networks
Tasks that rely on multi-modal information typically include a fusion mo...

01/21/2022 · Taxonomy Enrichment with Text and Graph Vector Representations
Knowledge graphs such as DBpedia, Freebase or Wikidata always contain a ...
