Curriculum Script Distillation for Multilingual Visual Question Answering

01/17/2023
by Khyathi Raghavi Chandu, et al.

Pre-trained models with dual and cross encoders have shown remarkable success in advancing a range of vision-and-language tasks, including Visual Question Answering (VQA). However, because they depend on gold-annotated data, most of these advancements do not see the light of day in languages beyond English. We aim to address this problem by introducing a curriculum based on source- and target-language translations to finetune the pre-trained models for the downstream task. Experimental results demonstrate that script plays a vital role in the performance of these models. Specifically, we show that target languages that share the same script as the source perform better (~6%) than other languages, and that mixed-script code-switched languages perform better than their counterparts (~5-12%).
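The abstract does not include code, but the idea of staging finetuning data by script lends itself to a short illustration. Below is a minimal Python sketch of a script-based curriculum, assuming a two-stage ordering (same-script target data before other-script data), which is motivated by the paper's finding that shared scripts transfer better. The helper names (`dominant_script`, `build_curriculum`) and the Unicode-name heuristic for detecting scripts are illustrative assumptions, not the authors' released implementation.

    # Hypothetical sketch of a script-based curriculum for multilingual VQA
    # finetuning. The staging heuristic and all names are illustrative, not
    # the paper's actual code.
    import unicodedata
    from collections import defaultdict

    def dominant_script(text: str) -> str:
        """Approximate the dominant script of `text` via the first word of
        each character's Unicode name (e.g. 'LATIN', 'DEVANAGARI')."""
        counts = defaultdict(int)
        for ch in text:
            if ch.isalpha():
                name = unicodedata.name(ch, "")
                if name:
                    counts[name.split(" ")[0]] += 1
        return max(counts, key=counts.get) if counts else "UNKNOWN"

    def build_curriculum(samples, source_script="LATIN"):
        """Order samples into stages: target languages sharing the source
        script first (the easier transfer case), other scripts afterward."""
        same, other = [], []
        for s in samples:
            bucket = same if dominant_script(s["question"]) == source_script else other
            bucket.append(s)
        return [("stage1_same_script", same), ("stage2_other_script", other)]

    if __name__ == "__main__":
        data = [
            {"question": "What color is the car?", "lang": "en"},
            {"question": "Quelle est la couleur de la voiture ?", "lang": "fr"},
            {"question": "गाड़ी किस रंग की है?", "lang": "hi"},
        ]
        for stage, batch in build_curriculum(data):
            print(stage, [s["lang"] for s in batch])

Running this groups English and French (Latin script) into the first stage and Hindi (Devanagari) into the second; an actual finetuning loop would then consume the stages in that order.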
