Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions?

06/17/2016
by   Abhishek Das, et al.

We conduct large-scale studies on 'human attention' in Visual Question Answering (VQA) to understand where humans choose to look to answer questions about images. We design and test multiple novel, game-inspired attention-annotation interfaces that require the subject to sharpen regions of a blurred image to answer a question. Through these, we introduce the VQA-HAT (Human ATtention) dataset. We evaluate attention maps generated by state-of-the-art VQA models against human attention both qualitatively (via visualizations) and quantitatively (via rank-order correlation). Overall, our experiments show that current attention models in VQA do not seem to be looking at the same regions as humans.
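The quantitative comparison above relies on rank-order (Spearman) correlation between a model's attention map and the human attention map for the same image-question pair. A minimal sketch of that metric is below; the function name and the simple tie-free ranking are illustrative assumptions, not the paper's exact evaluation code.

```python
import numpy as np

def rank_correlation(map_a, map_b):
    """Spearman rank correlation between two flattened attention maps.

    map_a, map_b: arrays of attention weights over the same image grid.
    (Hypothetical helper; a full Spearman implementation would average
    tied ranks, which this sketch omits for clarity.)
    """
    a = np.asarray(map_a, dtype=float).ravel()
    b = np.asarray(map_b, dtype=float).ravel()
    # Convert attention values to ranks (double argsort gives each
    # element its rank within the flattened map).
    rank_a = a.argsort().argsort().astype(float)
    rank_b = b.argsort().argsort().astype(float)
    # Pearson correlation of the ranks equals the Spearman coefficient.
    rank_a -= rank_a.mean()
    rank_b -= rank_b.mean()
    denom = np.sqrt((rank_a @ rank_a) * (rank_b @ rank_b))
    return float((rank_a @ rank_b) / denom)
```

A value of 1.0 means the two maps order image regions identically; values near 0 indicate little agreement, which is the regime the experiments report for current VQA attention models versus humans.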
