Identification of the Relevance of Comments in Codes Using Bag of Words and Transformer Based Models

by Sruthi S, et al.

The Forum for Information Retrieval Evaluation (FIRE) introduced a shared task this year on the classification of comments for different code segments. This is a binary text classification task whose objective is to identify whether the comment given for a code segment is relevant or not. The BioNLP-IISERB group at the Indian Institute of Science Education and Research Bhopal (IISERB) participated in this task and submitted five runs based on five different models. This paper presents an overview of these models and other significant findings on the training corpus. The methods involve different feature engineering schemes and text classification techniques. The performance of the classical bag-of-words model and of transformer-based models was explored to identify significant features in the given training corpus. We explored different classifiers, viz., random forest, support vector machine and logistic regression, using the bag-of-words model. Furthermore, pre-trained transformer-based models such as BERT, RoBERTa and ALBERT were fine-tuned on the given training corpus. The performance of these models on the training corpus is reported, and the best five models were applied to the given test corpus. The empirical results show that the bag-of-words model outperforms the transformer-based models; however, the performance of our runs was not particularly strong on either the training or the test corpus. This paper also discusses the limitations of the models and the scope for further improvement.
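The bag-of-words approach described above can be sketched as follows. This is a minimal illustration assuming scikit-learn; the toy comments, labels, and default hyperparameters are placeholders, not the shared-task corpus or the authors' actual configuration.

```python
# Illustrative bag-of-words relevance classifier for code comments.
# The (comment, label) pairs below are hypothetical: 1 = relevant, 0 = not.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "increment the loop counter by one",
    "compute the average of the input list",
    "TODO remove this later",
    "lorem ipsum placeholder text",
]
labels = [1, 1, 0, 0]

# Bag-of-words features feeding a logistic regression classifier;
# a random forest or SVM could be swapped in the same way.
clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(comments, labels)

# Predict relevance (0 or 1) for an unseen comment.
pred = clf.predict(["calculate the mean of the values"])[0]
```

The same pipeline pattern applies to the other classical classifiers explored in the paper, with only the final estimator changed.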

