Interpretable Word Embeddings via Informative Priors

09/03/2019
by Miriam Hurtado Bodell, et al.

Word embeddings have demonstrated strong performance on NLP tasks. However, their lack of interpretability and unsupervised nature have limited their adoption within computational social science and digital humanities. We propose using informative priors to create interpretable, domain-informed dimensions in probabilistic word embeddings. Experimental results show that sensible priors can capture latent semantic concepts better than or on par with the current state of the art, while retaining the simplicity and generalizability of a prior-based approach.
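To make the abstract's idea concrete, the sketch below shows one way an informative prior can seed an interpretable dimension in a probabilistic skip-gram-style embedding trained by MAP estimation. This is an illustrative assumption, not the authors' model: the toy corpus, seed word lists, the choice of dimension 0 as a "sentiment" axis, and the values of mu, tau, lr, n_neg, and epochs are all made up here for demonstration.

```python
# Minimal sketch (assumed, not the paper's implementation): skip-gram with negative
# sampling, trained by MAP estimation, where dimension 0 of the target embeddings
# gets an informative Gaussian prior whose mean is nonzero for hand-picked seed words.
import numpy as np

rng = np.random.default_rng(0)

corpus = ("good great excellent happy nice bad awful terrible sad poor "
          "good nice happy bad sad awful great poor excellent terrible").split()
vocab = sorted(set(corpus))
w2i = {w: i for i, w in enumerate(vocab)}
V, D, window = len(vocab), 5, 2

# Informative prior: dimension 0 is the "sentiment" axis. Seed words get a prior
# mean of +mu or -mu on that dimension; all other entries keep a zero-mean prior.
mu, tau = 2.0, 1.0  # prior mean magnitude and precision (assumed values)
prior_mean = np.zeros((V, D))
for w in ["good", "great", "excellent", "happy", "nice"]:
    prior_mean[w2i[w], 0] = +mu
for w in ["bad", "awful", "terrible", "sad", "poor"]:
    prior_mean[w2i[w], 0] = -mu

W = 0.1 * rng.standard_normal((V, D))  # target embeddings
C = 0.1 * rng.standard_normal((V, D))  # context embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, n_neg, epochs = 0.05, 3, 200
for _ in range(epochs):
    for pos, word in enumerate(corpus):
        i = w2i[word]
        for off in range(-window, window + 1):
            ctx = pos + off
            if off == 0 or not (0 <= ctx < len(corpus)):
                continue
            j = w2i[corpus[ctx]]
            # Gradient step on the observed (positive) word-context pair.
            g = 1.0 - sigmoid(W[i] @ C[j])
            wi, cj = W[i].copy(), C[j].copy()
            W[i] += lr * g * cj
            C[j] += lr * g * wi
            # Gradient steps on randomly drawn negative samples.
            for k in rng.integers(0, V, size=n_neg):
                g = -sigmoid(W[i] @ C[k])
                wi, ck = W[i].copy(), C[k].copy()
                W[i] += lr * g * ck
                C[k] += lr * g * wi
        # Gradient of the Gaussian log-prior pulls W[i] toward its prior mean;
        # this is what injects the domain-informed, interpretable dimension.
        W[i] += lr * tau * (prior_mean[i] - W[i])

# Dimension 0 should now order words along the seeded sentiment axis.
for w in sorted(vocab, key=lambda x: W[w2i[x], 0]):
    print(f"{w:10s} {W[w2i[w], 0]:+.2f}")
```

Note the design choice this sketch highlights: the prior only shifts the mean for the seed words, so unseeded words remain free to position themselves along the same axis based on co-occurrence evidence, which is what makes the resulting dimension interpretable beyond the seed list.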


Related research

05/20/2017 - Mixed Membership Word Embeddings for Computational Social Science
Word embeddings improve the performance of NLP systems by revealing the ...

04/02/2019 - Identification, Interpretability, and Bayesian Word Embeddings
Social scientists have recently turned to analyzing text using tools fro...

04/18/2019 - Analytical Methods for Interpretable Ultradense Word Embeddings
Word embeddings are useful for a wide variety of tasks, but they lack in...

06/05/2019 - Entity-Centric Contextual Affective Analysis
While contextualized word representations have improved state-of-the-art...

03/25/2022 - Probabilistic Embeddings with Laplacian Graph Priors
We introduce probabilistic embeddings using Laplacian priors (PELP). The...

01/27/2020 - The POLAR Framework: Polar Opposites Enable Interpretability of Pre-Trained Word Embeddings
We introduce POLAR - a framework that adds interpretability to pre-train...

04/21/2018 - Context-Attentive Embeddings for Improved Sentence Representations
While one of the first steps in many NLP systems is selecting what embed...
