Open-Ended Multi-Modal Relational Reasoning for Video Question Answering

12/01/2020
by   Haozheng Luo, et al.

People with visual impairments urgently need help, not only with basic tasks such as guidance and object retrieval, but also with advanced tasks like picturing new environments. More than a guide dog, they may want a device capable of linguistic interaction. Building on the existing research literature, we aim to study the interaction between a robot agent and visually impaired people. The robot agent, equipped with VQA techniques, is able to analyze the environment, process and understand spoken questions, and provide feedback to the human user. In this paper, we discuss the questions raised by this kind of interaction, the techniques used in this work, and how we conducted our research.
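The interaction loop sketched above (perceive the scene, understand a spoken question, answer aloud) can be pictured roughly as follows. This is only an illustrative sketch, not the authors' implementation; transcribe, vqa_model, and speak are hypothetical stand-ins for whatever speech-recognition, video question answering, and text-to-speech modules the robot agent actually uses.

    # Illustrative sketch of the agent's interaction loop.
    # All three components below are hypothetical placeholders, not the paper's code.

    def transcribe(audio_clip: bytes) -> str:
        """Placeholder speech-to-text: turn the user's spoken question into text."""
        raise NotImplementedError

    def vqa_model(video_frames, question: str) -> str:
        """Placeholder VQA model: map (video frames, question) to a textual answer."""
        raise NotImplementedError

    def speak(text: str) -> None:
        """Placeholder text-to-speech: read the answer back to the user."""
        raise NotImplementedError

    def interaction_step(video_frames, audio_clip: bytes) -> str:
        """One round of interaction: hear a question, reason over the scene, answer aloud."""
        question = transcribe(audio_clip)            # understand the spoken question
        answer = vqa_model(video_frames, question)   # relational reasoning over the visual input
        speak(answer)                                # provide linguistic feedback to the user
        return answer

In a deployed system, each placeholder would be backed by a concrete model; the point of the sketch is only the flow of data from audio and video input to a spoken answer.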
