Cross-lingual Hate Speech Detection using Transformer Models

11/01/2021
by   Teodor Tiţa, et al.

Hate speech detection in a cross-lingual setting is a critical concern for all medium and large-scale online platforms. Failure to address this issue properly at a global scale has already led to morally questionable real-life events, human deaths, and the perpetuation of hate itself. This paper illustrates the capabilities of fine-tuned, modified multilingual Transformer models (mBERT, XLM-RoBERTa) on this crucial social data science task, with cross-lingual training from English to French and vice versa, as well as monolingual training in each language, and includes sections on iterative improvement and comparative error analysis.
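For illustration, the cross-lingual setup described above (train on one language, evaluate on the other) can be sketched with standard Hugging Face tooling. This is not the authors' code; the model checkpoint, file names, and column names below are illustrative assumptions.

```python
# Minimal sketch of cross-lingual fine-tuning for binary hate speech detection:
# fine-tune a multilingual Transformer on English-labelled data, then evaluate
# on French data (the EN -> FR direction). Dataset files and columns are
# hypothetical placeholders.
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)
from datasets import load_dataset

model_name = "xlm-roberta-base"  # "bert-base-multilingual-cased" (mBERT) works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Hypothetical CSV files with "text" and "label" columns:
# English training split, French test split.
data = load_dataset("csv", data_files={"train": "en_train.csv", "test": "fr_test.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

data = data.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="xlmr-hate-en2fr",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=2e-5,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=data["train"], eval_dataset=data["test"])
trainer.train()
print(trainer.evaluate())  # zero-shot cross-lingual performance on French
```

The reverse direction (FR -> EN) and the monolingual baselines follow the same pattern by swapping the train and test files.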
