KUCST at CheckThat 2023: How good can we be with a generic model?

06/15/2023
by Manex Agirrezabal, et al.

In this paper we present our method for Tasks 2 and 3A of the CheckThat! 2023 shared task. We use a generic approach, inspired by authorship attribution and author profiling, that has previously been applied to a diverse set of tasks. We train a number of machine learning models, and our results show that Gradient Boosting performs best on both tasks. According to the official ranking provided by the shared task organizers, our model achieves average performance compared to the other participating teams.
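The abstract does not include the implementation details, but the described setup (authorship-attribution-style features fed into a gradient boosting classifier) can be illustrated with a minimal sketch. The feature choice (character n-gram TF-IDF, a common stylometric representation) and the toy data below are assumptions for illustration, not the authors' exact configuration.

```python
# Illustrative sketch of a generic text-classification pipeline in the
# spirit of the abstract: stylometric-style features + gradient boosting.
# Feature choices and data are assumptions, not the authors' actual setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import Pipeline

# Hypothetical toy data; the shared task provides its own train/dev splits.
texts = [
    "example claim one",
    "another example sentence",
    "a third text",
    "yet another one",
]
labels = [0, 1, 0, 1]

pipeline = Pipeline([
    # Character n-grams within word boundaries, a common choice in
    # authorship attribution and profiling work.
    ("tfidf", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),
    ("clf", GradientBoostingClassifier(n_estimators=200, random_state=0)),
])

pipeline.fit(texts, labels)
print(pipeline.predict(["an unseen example text"]))
```

The same generic pipeline can be reused across tasks by swapping in the task-specific training data and labels, which is the appeal of the "generic model" framing in the title.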


