Mixed Membership Word Embeddings for Computational Social Science

05/20/2017
by James Foulds, et al.

Word embeddings improve the performance of NLP systems by revealing the hidden structural relationships between words. These models have recently risen in popularity due to the performance of scalable algorithms trained in the big data setting. Despite their success, word embeddings have seen very little use in computational social science NLP tasks, presumably due to their reliance on big data, and to a lack of interpretability. I propose a probabilistic model-based word embedding method which can recover interpretable embeddings, without big data. The key insight is to leverage the notion of mixed membership modeling, in which global representations are shared, but individual entities (i.e., dictionary words) are free to use these representations to uniquely differing degrees. Leveraging connections to topic models, I show how to train these models in high dimensions using a combination of state-of-the-art techniques for word embeddings and topic modeling. Experimental results show an improvement in predictive performance of up to 63% on small datasets. The models are interpretable, as embeddings of topics are used to encode embeddings for words (and hence, documents) in a model-based way. I illustrate this with two computational social science case studies, on NIPS articles and State of the Union addresses.
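To make the mixed membership idea concrete, the sketch below (NumPy, with hypothetical variable names and randomly generated parameters; it is not the paper's model or estimation procedure) shows one way word and document embeddings can be composed from shared topic embeddings, weighted by each word's membership proportions over those topics.

```python
import numpy as np

# Illustrative sketch only: shared topic embeddings are reused by every word,
# but each word mixes them to its own differing degree.
rng = np.random.default_rng(0)

num_topics = 5        # K: shared global topics
embedding_dim = 50    # D: embedding dimensionality
vocab_size = 1000     # V: dictionary words

# Shared global representations: one embedding vector per topic.
topic_embeddings = rng.normal(size=(num_topics, embedding_dim))

# Mixed membership: each word has its own distribution over the shared topics
# (each row sums to 1), drawn here from a Dirichlet purely for illustration.
word_topic_memberships = rng.dirichlet(alpha=np.ones(num_topics), size=vocab_size)

# A word's embedding is its membership-weighted combination of topic embeddings.
word_embeddings = word_topic_memberships @ topic_embeddings   # shape (V, D)

# A document embedding can likewise be composed from its words' embeddings;
# plain averaging is used here as a simple stand-in for a model-based choice.
doc_word_ids = np.array([3, 17, 256, 999])
doc_embedding = word_embeddings[doc_word_ids].mean(axis=0)

print(word_embeddings.shape, doc_embedding.shape)
```

In this toy setup the topic embeddings play the role of the shared global representations, and interpretability comes from inspecting which topics a given word draws on most heavily; how the memberships and topic embeddings are actually learned is the subject of the paper itself.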
