ur-iw-hnt at GermEval 2021: An Ensembling Strategy with Multiple BERT Models

10/05/2021
by Hoai Nam Tran, et al.

This paper describes our approach (ur-iw-hnt) to the GermEval 2021 Shared Task on identifying toxic, engaging, and fact-claiming comments. We submitted three runs using an ensembling strategy with majority (hard) voting over multiple BERT models of three types: German-based, Twitter-based, and multilingual. Every ensemble outperforms its constituent single models, and BERTweet is the best individual model in every subtask. Twitter-based models perform better than GermanBERT models, while multilingual models perform worse, though only by a small margin.
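The abstract does not include code, but the majority (hard) voting step is simple to illustrate. Below is a minimal Python sketch, assuming each fine-tuned BERT model has already produced one label per comment; the function name and the example labels are illustrative, not taken from the authors' implementation.

from collections import Counter
from typing import List, Sequence

def hard_vote(per_model_labels: Sequence[Sequence[int]]) -> List[int]:
    """Majority (hard) voting: each model casts one label per comment,
    and the most frequent label wins. With an odd number of models and
    binary labels (as in the GermEval 2021 subtasks), ties cannot occur."""
    ensemble = []
    for labels in zip(*per_model_labels):  # labels for one comment across all models
        winner, _ = Counter(labels).most_common(1)[0]
        ensemble.append(winner)
    return ensemble

# Illustrative only: three models voting on four comments (1 = positive class).
model_a = [1, 0, 1, 1]
model_b = [1, 1, 0, 1]
model_c = [0, 0, 1, 1]
print(hard_vote([model_a, model_b, model_c]))  # -> [1, 0, 1, 1]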


Related research

10/26/2020
UPB at SemEval-2020 Task 12: Multilingual Offensive Language Detection on Social Media by Fine-tuning a Variety of BERT-based Models
Offensive language detection is one of the most challenging problems in t...

04/14/2023
OPI at SemEval 2023 Task 9: A Simple But Effective Approach to Multilingual Tweet Intimacy Analysis
This paper describes our submission to the SemEval 2023 multilingual twe...

09/22/2022
AIR-JPMC@SMM4H'22: Classifying Self-Reported Intimate Partner Violence in Tweets with Multiple BERT-based Models
This paper presents our submission for the SMM4H 2022-Shared Task on the...

01/22/2021
Multilingual Pre-Trained Transformers and Convolutional NN Classification Models for Technical Domain Identification
In this paper, we present a transfer learning system to perform technica...

09/07/2021
FHAC at GermEval 2021: Identifying German toxic, engaging, and fact-claiming comments with ensemble learning
The availability of language representations learned by large pretrained...

02/24/2023
Naver Labs Europe (SPLADE) @ TREC Deep Learning 2022
This paper describes our participation in the 2022 TREC Deep Learning ch...
