HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition

02/03/2021
by Avihay Chriqui, et al.

The use of Bidirectional Encoder Representations from Transformers (BERT) models for natural language processing (NLP) tasks, and for sentiment analysis in particular, has become very popular in recent years, and with good reason. The use of social media is constantly on the rise, and its impact on all areas of our lives is hard to overstate. Research shows that social media now serves as one of the main channels through which people freely express their ideas, opinions, and emotions. During the Covid-19 pandemic, the role of social media as a sounding board for opinions and emotions became even more prominent. This paper introduces HeBERT and HebEMO. HeBERT is a transformer-based model for modern Hebrew text. Hebrew is considered a Morphologically Rich Language (MRL), with unique characteristics that pose a great challenge to developing appropriate Hebrew NLP models. Analyzing multiple specifications of the BERT architecture, we arrive at a language model that outperforms all existing Hebrew alternatives on multiple language tasks. HebEMO is a tool that uses HeBERT to detect polarity and extract emotions from Hebrew user-generated content (UGC); it was trained on a unique Covid-19-related dataset that we collected and annotated for this study. Data collection and annotation followed an innovative iterative semi-supervised process designed to maximize predictability. HebEMO achieved a high weighted-average F1-score of 0.96 for polarity classification. Emotion detection reached F1-scores of 0.78-0.97, with the exception of surprise, which the model failed to capture (F1 = 0.41). These results exceed the best reported performance, even when compared to the English language.
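To illustrate how a model like this is typically consumed, here is a minimal sketch of polarity classification using the Hugging Face transformers pipeline API. The checkpoint name "avichr/heBERT_sentiment_analysis" is an assumption about the public release and is not stated in this abstract.

from transformers import pipeline

# Minimal sketch: load a HeBERT-based polarity classifier from the Hugging
# Face hub. The checkpoint name below is an assumed identifier for the
# released model, not confirmed by the abstract.
sentiment = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
)

# Classify the polarity of a Hebrew sentence ("the service was excellent").
print(sentiment("השירות היה מצוין"))
# Illustrative output: [{'label': 'positive', 'score': 0.99}]

The same pattern would apply to the emotion-recognition task, with a classification head per emotion rather than a single polarity label.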
