
Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT

by Qihuang Zhong, et al.

Recently, ChatGPT has attracted great attention for its ability to generate fluent, high-quality responses to human inquiries. Several prior studies have shown that ChatGPT attains remarkable generation ability compared with existing models. However, quantitative analysis of ChatGPT's understanding ability has received little attention. In this report, we explore the understanding ability of ChatGPT by evaluating it on the popular GLUE benchmark and comparing it with 4 representative fine-tuned BERT-style models. We find that: 1) ChatGPT falls short in handling paraphrase and similarity tasks; 2) ChatGPT outperforms all BERT models on inference tasks by a large margin; 3) ChatGPT achieves performance comparable to BERT on sentiment analysis and question-answering tasks. Additionally, we show that the understanding ability of ChatGPT can be further improved by combining advanced prompting strategies.
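The evaluation setup the abstract describes — prompting a chat model in zero-shot fashion on a GLUE task and scoring its answers against gold labels — can be sketched roughly as below. This is a minimal illustration, not the paper's actual pipeline: `query_model` is a hypothetical stand-in for a real ChatGPT API call, and the prompt wording and example sentences are assumptions for an SST-2-style sentiment task.

```python
def build_prompt(sentence: str) -> str:
    # Zero-shot instruction prompt for an SST-2-style sentiment task
    # (wording is illustrative, not the prompt used in the paper).
    return (
        "Determine the sentiment of the following sentence. "
        "Answer with exactly one word: positive or negative.\n"
        f"Sentence: {sentence}\n"
        "Answer:"
    )

def accuracy(predictions, labels) -> float:
    # Fraction of predictions that exactly match the gold labels.
    correct = sum(p == g for p, g in zip(predictions, labels))
    return correct / len(labels)

def query_model(prompt: str) -> str:
    # Hypothetical placeholder for a ChatGPT API call; a real run would
    # send `prompt` to the model and parse the returned text.
    return "positive"

if __name__ == "__main__":
    examples = [
        ("a touching and funny film", "positive"),
        ("a dull, lifeless mess", "negative"),
    ]
    preds = [query_model(build_prompt(s)) for s, _ in examples]
    print(accuracy(preds, [g for _, g in examples]))
```

Advanced prompting strategies mentioned in the abstract (e.g. adding few-shot demonstrations or reasoning instructions to `build_prompt`) slot into the same loop without changing the scoring logic.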

