Lucid Explanations Help: Using a Human-AI Image-Guessing Game to Evaluate Machine Explanation Helpfulness

04/05/2019
by Arijit Ray, et al.

While there have been many proposals on how to make AI algorithms more transparent, few have attempted to evaluate the impact of AI explanations on human performance on a task that uses AI. We propose a Twenty-Questions-style collaborative image-guessing game, Explanation-assisted Guess Which (ExAG), as a method of evaluating the efficacy of explanations in the context of Visual Question Answering (VQA) — the task of answering natural language questions about images. We study the effect of VQA agent explanations on game performance as a function of explanation type and quality. We observe that "effective" explanations are not only conducive to game performance (by almost 22% for "excellent"-rated explanations), but also helpful when VQA system answers are erroneous or noisy (by almost 30%). We further observe that players develop a preference for explanations even when penalized, and that the explanations are mostly rated as "helpful".
