LT@Helsinki at SemEval-2020 Task 12: Multilingual or language-specific BERT?

08/03/2020
by Marc Pàmies, et al.

This paper presents the models submitted by the LT@Helsinki team for the SemEval 2020 Shared Task 12. Our team participated in sub-tasks A and C, titled offensive language identification and offense target identification, respectively. In both cases we used Bidirectional Encoder Representations from Transformers (BERT), a model pre-trained by Google and fine-tuned by us on the OLID and SOLID datasets. The results show that offensive tweet classification is one of several language-based tasks where BERT can achieve state-of-the-art results.
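For readers unfamiliar with this kind of setup, the sketch below shows one way a BERT fine-tuning pipeline for sub-task A (binary offensive vs. not-offensive classification) might look. The use of the Hugging Face `transformers` library, the `bert-base-uncased` checkpoint, the hyperparameters, and the toy tweets are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: fine-tuning BERT for offensive-language classification.
# Assumes Hugging Face `transformers` and PyTorch; hyperparameters and
# the toy data below are placeholders, not the paper's actual setup.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # OLID sub-task A: OFF vs. NOT
)

# Toy stand-ins for OLID tweets; real training would load the dataset.
texts = ["you are awful", "have a nice day"]
labels = torch.tensor([1, 0])  # 1 = offensive (OFF), 0 = not (NOT)

enc = tokenizer(texts, padding=True, truncation=True,
                max_length=128, return_tensors="pt")
dataset = TensorDataset(enc["input_ids"], enc["attention_mask"], labels)
loader = DataLoader(dataset, batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):  # a small number of epochs is typical for BERT
    for input_ids, attention_mask, batch_labels in loader:
        optimizer.zero_grad()
        out = model(input_ids=input_ids,
                    attention_mask=attention_mask,
                    labels=batch_labels)  # cross-entropy loss built in
        out.loss.backward()
        optimizer.step()
```

Sub-task C (offense target identification) would follow the same pattern with `num_labels=3` for the individual/group/other target classes defined by the shared task.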
