DualNet: Domain-Invariant Network for Visual Question Answering

06/20/2016
by Kuniaki Saito, et al.

The visual question answering (VQA) task not only bridges the gap between images and language, but also requires that specific content within the image be understood as indicated by the linguistic context of the question in order to generate accurate answers. It is therefore critical to build an efficient joint embedding of images and text. We implement DualNet, which fully exploits the discriminative power of both image and textual features by performing two separate operations on them. Building an ensemble of DualNets further boosts performance. Contrary to common belief, our method proves effective on both real images and abstract scenes, despite the significantly different properties of the two domains. It outperforms previous state-of-the-art methods in the real images category even without explicitly employing an attention mechanism, and also outperforms our own state-of-the-art method in the abstract scenes category, which recently won first place in the VQA Challenge 2016.
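The abstract states only that DualNet combines image and text features through "two separate operations" before answering. A common realization of this idea is to project both modalities into a shared space and fuse them with element-wise addition and element-wise multiplication; the sketch below assumes exactly that, with made-up dimensions (2048-d CNN image feature, 300-d question embedding, 512-d common space), and is not the authors' exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def dualnet_fuse(img_feat, txt_feat, W_img, W_txt):
    """Sketch of a DualNet-style fusion (assumed, not the paper's exact design):
    project both modalities into a common space, combine them with two
    separate element-wise operations (sum and product), and concatenate
    the results into one joint embedding for the answer classifier."""
    v = np.tanh(W_img @ img_feat)   # projected image embedding
    q = np.tanh(W_txt @ txt_feat)   # projected question embedding
    added = v + q                   # operation 1: element-wise sum
    multiplied = v * q              # operation 2: element-wise product
    return np.concatenate([added, multiplied])

# Toy inputs with assumed dimensions.
img_feat = rng.standard_normal(2048)          # e.g. a CNN image feature
txt_feat = rng.standard_normal(300)           # e.g. a question embedding
W_img = rng.standard_normal((512, 2048)) * 0.01
W_txt = rng.standard_normal((512, 300)) * 0.01

joint = dualnet_fuse(img_feat, txt_feat, W_img, W_txt)
print(joint.shape)  # (1024,)
```

Keeping the sum and product as separate halves of the joint vector, rather than merging them, lets the downstream classifier weight the additive and multiplicative interactions independently, which is one plausible reading of the "discriminative power of both" claim.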

Related research:

- 04/03/2018: Improved Fusion of Visual and Language Representations by Dense Symmetric Co-Attention for Visual Question Answering
- 02/01/2018: Dual Recurrent Attention Units for Visual Question Answering
- 06/29/2023: Answer Mining from a Pool of Images: Towards Retrieval-Based Visual Question Answering
- 04/11/2017: Show, Ask, Attend, and Answer: A Strong Baseline For Visual Question Answering
- 03/22/2023: Integrating Image Features with Convolutional Sequence-to-sequence Network for Multilingual Visual Question Answering
- 11/12/2017: High-Order Attention Models for Visual Question Answering
- 01/22/2021: Visual Question Answering based on Local-Scene-Aware Referring Expression Generation
