Learning Conceptual Space Representations of Interrelated Concepts
Several recently proposed methods aim to learn conceptual space representations from large text collections. These learned representations associate each object from a given domain of interest with a point in a high-dimensional Euclidean space, but they do not model the concepts from this domain, and thus cannot directly be used for categorization and related cognitive tasks. A natural solution is to represent concepts as Gaussians, learned from the representations of their instances, but this can only be done reliably if sufficiently many instances are given, which is often not the case. In this paper, we introduce a Bayesian model which addresses this problem by constructing informative priors from background knowledge about how the concepts of interest are interrelated. We show that this leads to substantially better predictions in a knowledge base completion task.
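To make the few-instance problem and the Bayesian remedy concrete, here is a generic sketch, not the paper's exact model: with only a handful of instance embeddings, a maximum-likelihood Gaussian is unreliable, but a MAP estimate under a conjugate Normal-Inverse-Wishart prior shrinks the estimate toward a prior mean and covariance, which could in principle be built from related concepts. All function and parameter names below are hypothetical.

```python
import numpy as np

def map_gaussian(instances, prior_mean, prior_cov, kappa0=1.0, nu0=None):
    """MAP estimate of a concept Gaussian from few instance embeddings,
    shrunk toward a prior (hypothetically derived from related concepts)."""
    X = np.asarray(instances)          # shape (n, d): instance embeddings
    n, d = X.shape
    if nu0 is None:
        nu0 = d + 2                    # weakly informative default
    xbar = X.mean(axis=0)
    # Posterior mean: prior mean and sample mean, weighted by kappa0 and n
    mu = (kappa0 * prior_mean + n * xbar) / (kappa0 + n)
    # Scatter matrix of the observed instances
    S = (X - xbar).T @ (X - xbar)
    # Posterior scale: prior scale + data scatter + mean-disagreement term
    dm = (xbar - prior_mean).reshape(-1, 1)
    psi = prior_cov * (nu0 - d - 1) + S + (kappa0 * n / (kappa0 + n)) * (dm @ dm.T)
    # Mode of the inverse-Wishart posterior gives the MAP covariance
    cov = psi / (nu0 + n + d + 1)
    return mu, cov
```

With few instances the estimate stays close to the prior, and as more instances arrive it converges to the sample statistics; `kappa0` and `nu0` control how strongly the prior dominates.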