BinaryVQA: A Versatile Test Set to Evaluate the Out-of-Distribution Generalization of VQA Models
We introduce a new test set for visual question answering (VQA), called BinaryVQA, to push the limits of VQA models. Our dataset includes 7,800 questions across 1,024 images and covers a wide variety of objects, topics, and concepts. For easy model evaluation, we only consider binary questions. Questions and answers are carefully and manually formulated and verified. Around 63% of the questions have positive answers. The median number of questions per image and the median question length are 7 and 5, respectively. The state-of-the-art OFA model achieves only 75% accuracy on BinaryVQA, significantly lower than its performance on the VQA v2 test-dev dataset (94.7%). We also analyze model behavior along several dimensions, including: a) performance over different categories such as text, counting, and gaze direction, b) model interpretability, c) the effect of question length on accuracy, d) the bias of models towards positive answers, for which we introduce a new score called ShuffleAcc, and e) sensitivity to spelling and grammar errors. Our investigation demonstrates the difficulty of our dataset and shows that it can challenge VQA models for the next few years. Data and code are publicly available at: DATA and CODE.
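The abstract does not define ShuffleAcc or spell out how the bias analysis works. As an illustration only, the sketch below shows one plausible way to quantify accuracy, a model's bias towards positive ("yes") answers, and a shuffled-label chance baseline on a binary VQA test set; the function names and the shuffling scheme are assumptions for this sketch, not the paper's actual metric:

```python
import random

def accuracy(preds, labels):
    """Fraction of binary predictions (1 = yes, 0 = no) matching the ground truth."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def yes_rate(preds):
    """Fraction of 'yes' predictions; values far above the dataset's
    positive-answer rate suggest a bias towards positive answers."""
    return sum(preds) / len(preds)

def shuffled_label_accuracy(preds, labels, trials=1000, seed=0):
    """Mean accuracy against randomly shuffled labels: a chance baseline
    that reflects the answer distribution rather than model skill.
    (Hypothetical baseline, not the paper's ShuffleAcc definition.)"""
    rng = random.Random(seed)
    shuffled = list(labels)
    total = 0.0
    for _ in range(trials):
        rng.shuffle(shuffled)
        total += accuracy(preds, shuffled)
    return total / trials
```

A model whose accuracy barely exceeds this shuffled baseline is likely exploiting answer-distribution priors rather than understanding the image-question pair.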