Answering Visual What-If Questions: From Actions to Predicted Scene Descriptions
conference contribution
Posted on 2023-11-29, 18:09. Authored by Misha Wagner, Hector Basevi, Rakshith Shetty, Wenbin Li, Mateusz Malinowski, Mario Fritz, Ales Leonardis
In-depth scene descriptions and question answering tasks have greatly broadened today's definition of scene understanding. While such tasks are in principle open-ended, current formulations primarily focus on describing only the current state of the scene under consideration. In contrast, in this paper we focus on future scene states that are additionally conditioned on actions. We pose this as a question answering task, in which an answer about a future scene state must be given based on observations of the current scene and a question that includes a hypothetical action. Our solution is a hybrid model that integrates a physics engine into a question answering architecture in order to anticipate the future scene states resulting from object-object interactions caused by an action. We demonstrate first results on this challenging new problem, comparing against baselines and outperforming fully data-driven end-to-end learning approaches.
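The abstract outlines a three-stage hybrid pipeline: parse the observed scene, simulate the hypothetical action with a physics engine, and answer the question about the predicted scene state. The following is a minimal Python sketch of that flow; the interfaces SceneParser, PhysicsEngine, and AnswerDecoder are hypothetical placeholders and do not reproduce the paper's actual components or data formats.

# Minimal sketch of the hybrid what-if pipeline described in the abstract.
# All class names and signatures below are assumptions for illustration only.

from dataclasses import dataclass


@dataclass
class Action:
    """A hypothetical action extracted from the what-if question."""
    target_object: str
    direction: tuple  # e.g. a push direction (dx, dy, dz)


class SceneParser:
    def parse(self, observation):
        """Estimate object poses and shapes from the observed scene (stub)."""
        raise NotImplementedError


class PhysicsEngine:
    def simulate(self, scene_state, action):
        """Roll the scene forward under the action, modelling object-object interactions (stub)."""
        raise NotImplementedError


class AnswerDecoder:
    def describe(self, predicted_state, question):
        """Produce an answer about the predicted future scene state (stub)."""
        raise NotImplementedError


def answer_what_if(observation, question, action,
                   parser: SceneParser,
                   engine: PhysicsEngine,
                   decoder: AnswerDecoder) -> str:
    """Observe the current scene, simulate the hypothetical action, answer the question."""
    current_state = parser.parse(observation)
    predicted_state = engine.simulate(current_state, action)
    return decoder.describe(predicted_state, question)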
History
Preferred Citation
Misha Wagner, Hector Basevi, Rakshith Shetty, Wenbin Li, Mateusz Malinowski, Mario Fritz and Ales Leonardis. Answering Visual What-If Questions: From Actions to Predicted Scene Descriptions. In: European Conference on Computer Vision (ECCV). 2018.
Primary Research Area
Trustworthy Information Processing
Name of Conference
European Conference on Computer Vision (ECCV)
Legacy Posted Date
2019-02-01
Open Access Type
Gold
BibTeX
@inproceedings{cispa_all_2796,
  title     = "Answering Visual What-If Questions: From Actions to Predicted Scene Descriptions",
  author    = "Wagner, Misha and Basevi, Hector and Shetty, Rakshith and Li, Wenbin and Malinowski, Mateusz and Fritz, Mario and Leonardis, Ales",
  booktitle = "{European Conference on Computer Vision (ECCV)}",
  year      = "2018",
}