Explanation vs Attention: A Two-Player Game to Obtain Attention for VQA
Abstract:
In this paper, we aim to obtain improved attention for a visual question answering (VQA) task. It is challenging to provide supervision for attention. An observation we make is that visual explanations obtained through class activation mappings (specifically Grad-CAM), which are meant to explain the performance of various networks, could form a means of supervision. However, as the distributions of attention maps and of Grad-CAMs differ, it would not be suitable to use these directly as a form of supervision. Rather, we propose the use of a discriminator that aims to distinguish samples of visual explanations from attention maps. The use of adversarial training of the attention regions as a two-player game between attention and explanation serves to bring the distributions of attention maps and visual explanations closer. Significantly, we observe that providing such a means of supervision also results in attention maps that are more closely related to human attention, yielding a substantial improvement over baseline stacked attention network (SAN) models. It also results in a good improvement in the rank correlation metric on the VQA task. This method can also be combined with recent MCB-based methods and yields consistent improvement. We also provide comparisons with other means of learning distributions, such as those based on Correlation Alignment (CORAL), Maximum Mean Discrepancy (MMD), and Mean Squared Error (MSE) losses, and observe that the adversarial loss outperforms the other forms of learning the attention maps. Visualization of the results also confirms our hypothesis that attention maps improve with this form of supervision.
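The two-player game described above can be summarised with a minimal sketch: a discriminator tries to tell Grad-CAM explanation maps from the model's attention maps, while the attention module is trained to fool it, pulling the two distributions together. The PyTorch sketch below is illustrative only and written under our own assumptions (flattened 14×14 maps, a small MLP discriminator; names such as `Discriminator`, `discriminator_loss`, and `attention_adversarial_loss` are hypothetical and not from the paper's released code).

```python
# Illustrative sketch of the adversarial supervision of attention maps
# by Grad-CAM explanations (shapes and module names are assumptions).
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Tries to distinguish Grad-CAM explanation maps from attention maps."""
    def __init__(self, map_size=14 * 14, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(map_size, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # logit: explanation (1) vs. attention (0)
        )

    def forward(self, x):
        return self.net(x.flatten(start_dim=1))

bce = nn.BCEWithLogitsLoss()

def discriminator_loss(disc, gradcam_maps, attention_maps):
    # Discriminator step: label Grad-CAM maps as 1 and attention maps as 0.
    real = disc(gradcam_maps.detach())
    fake = disc(attention_maps.detach())
    return bce(real, torch.ones_like(real)) + bce(fake, torch.zeros_like(fake))

def attention_adversarial_loss(disc, attention_maps):
    # Attention step: make attention maps indistinguishable from explanations,
    # i.e. fool the discriminator. Added to the usual VQA answer loss.
    fake = disc(attention_maps)
    return bce(fake, torch.ones_like(fake))
```

In this sketch the adversarial term would be added to the standard VQA classification loss when updating the attention module, while the discriminator is updated separately on detached maps.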
PAAN Model:
Some examples of visual question answering using our method:
Attention Visualisation Results:
Explanation Visualisation Results:
Warm-Start Model Results:
Statistical Significance Analysis:
Visual Dialog Results for the PAAN Model:
Attention Map Variance:
VQA-V1:
It contains human-annotated question-answer pairs based on images from the MS-COCO dataset.
Field | Train | Val | Test |
---|---|---|---|
Q-A pairs | 248,349 | 121,512 | 244,302 |
VQA-V2:
It has almost twice the number of question-answer pairs as **VQA-V1** and is balanced to reduce the language biases present in **VQA-V1**.
Field | Train | Val | Test |
---|---|---|---|
Q-A pairs | 443,757 | 214,354 | 447,793 |
VQA-HAT:
A dataset containing human-annotated attention maps for question-answer pairs from **VQA-V1**.
Field | Train (out of 248,349) | Val (out of 121,512) |
---|---|---|
Q-A pairs | 58,475 | 1,374 |
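Since VQA-HAT provides human attention maps, attention quality can be scored with the rank correlation metric mentioned in the abstract. The sketch below is a minimal illustration under our own assumptions: both maps are 2-D arrays that are block-averaged onto a common grid before computing Spearman rank correlation, and the 14×14 grid and helper names (`to_grid`, `rank_correlation`) are hypothetical.

```python
# Illustrative sketch: rank correlation between a predicted attention map
# and a VQA-HAT human attention map (grid size and helpers are assumptions).
import numpy as np
from scipy.stats import spearmanr

def to_grid(m, grid=(14, 14)):
    """Block-average a 2-D map onto a coarse grid."""
    m = np.asarray(m, dtype=np.float64)
    gh, gw = grid
    h, w = m.shape
    m = m[: (h // gh) * gh, : (w // gw) * gw]  # crop to a multiple of the grid
    return m.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))

def rank_correlation(pred_attention, human_attention, grid=(14, 14)):
    """Spearman rank correlation between two attention maps on a shared grid."""
    p = to_grid(pred_attention, grid).ravel()
    q = to_grid(human_attention, grid).ravel()
    rho, _ = spearmanr(p, q)
    return rho

# Example usage with maps of different resolutions:
# rho = rank_correlation(np.random.rand(14, 14), np.random.rand(448, 448))
```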
BibTeX
@InProceedings{Patro_2020_AAAI,
author = {Patro, Badri N. and Anupriy and Namboodiri, Vinay P.},
title = {Explanation vs Attention: A Two-Player Game to Obtain Attention for VQA},
booktitle = {AAAI},
year = {2020}
}
Acknowledgement
We acknowledge the help of the Delta Lab members who supported this research.