Interpretation of Semantic Tweet Representations

by J Ganesh, et al.

Research in the analysis of microblogging platforms is experiencing a renewed surge, with a large number of works applying representation learning models to applications such as sentiment analysis, semantic textual similarity computation, and hashtag prediction. Although representation learning models have outperformed traditional baselines on such tasks, little is known about the elementary properties of a tweet encoded within these representations, or why particular representations work better for certain tasks. The work presented here constitutes a first step toward opening the black box of vector embeddings for tweets. Traditional feature engineering methods for high-level applications have exploited various elementary properties of tweets, and we believe a tweet representation is effective for an application precisely because it encodes the application-specific elementary properties of tweets. To understand which elementary properties a tweet representation encodes, we evaluate each representation on how accurately it can predict properties such as tweet length, presence of particular words, hashtags, mentions, and capitalization. Our systematic study of nine supervised and four unsupervised tweet representations against eight popular textual and five social elementary properties reveals that Bi-directional LSTMs (BLSTMs) and Skip-Thought Vectors (STV) best encode the textual and social properties of tweets, respectively. FastText is the best model for low-resource settings, degrading very little as the embedding size is reduced. Finally, we draw interesting insights by correlating model performance on the elementary property prediction tasks with performance on high-level downstream applications.
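The evaluation method described above is a probing setup: tweet embeddings are frozen, a simple classifier is trained on top of them to predict one elementary property, and the classifier's accuracy is read as a measure of how well that property is encoded. The following is a minimal sketch of that idea under stated assumptions: the embeddings are synthetic stand-ins (no real tweet encoder), the property is a hypothetical binary one (e.g. tweet longer than the median length), and the probe is a hand-rolled logistic regression rather than any specific classifier from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_embeddings(n=400, dim=16):
    """Synthetic 'tweet embeddings' (an assumption for illustration):
    one coordinate weakly encodes a binary elementary property."""
    labels = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, dim))
    X[:, 0] += 2.0 * labels          # property signal lives in dimension 0
    return X, labels

def train_probe(X, y, lr=0.1, epochs=200):
    """Logistic-regression probe trained with plain gradient descent;
    the embeddings themselves are never updated (they stay frozen)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        grad = p - y                             # dLoss/dlogits
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def probe_accuracy(w, b, X, y):
    """Held-out accuracy = how well the representation encodes the property."""
    return float((((X @ w + b) > 0).astype(int) == y).mean())

X, y = make_embeddings()
X_tr, y_tr, X_te, y_te = X[:300], y[:300], X[300:], y[300:]
w, b = train_probe(X_tr, y_tr)
print(f"probe accuracy: {probe_accuracy(w, b, X_te, y_te):.2f}")
```

In the paper's actual setup, the synthetic embeddings would be replaced by the output of each model under study (e.g. BLSTM or STV encodings of real tweets), and one such probe would be trained per elementary property; comparing the resulting accuracies across models is what supports the reported rankings.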




