
Explainable Video Action Reasoning via Prior Knowledge and State Transitions

Authors :
Mohan S. Kankanhalli
Zhiyong Cheng
Yongkang Wong
Tao Zhuo
Peng Zhang
Source :
ACM Multimedia
Publication Year :
2019
Publisher :
ACM, 2019.

Abstract

Human action analysis and understanding in videos is an important and challenging task. Although substantial progress has been made in recent years, the explainability of existing methods remains limited. In this work, we propose a novel action reasoning framework that uses prior knowledge to explain semantic-level observations of video state changes. Our method combines the strengths of classical reasoning and modern deep learning approaches. Specifically, prior knowledge is defined as the information of a target video domain, including the set of objects, attributes and relationships in that domain, as well as the relevant actions defined by temporal attribute and relationship changes (i.e., state transitions). Given a video sequence, we first generate a scene graph for each frame to represent the objects of interest, their attributes and their relationships. These scene graphs are then associated by tracking objects across frames to form a spatio-temporal graph (also called a video graph), which represents semantic-level video states. Finally, by sequentially examining each state transition in the video graph, our method can detect actions and explain how they are executed with respect to the prior knowledge, mirroring the logical manner of human reasoning. Compared to previous works, the action reasoning results of our method can be explained by both logical rules and semantic-level observations of video content changes. Moreover, the proposed method can detect multiple concurrent actions with detailed information, such as who (the particular objects), when (time), where (object locations) and how (what kind of changes). Experiments on a re-annotated CAD-120 dataset show the effectiveness of our method.
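The core reasoning step described in the abstract — matching attribute and relationship changes between consecutive frame states against prior-knowledge action definitions — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the state representation, the rule format, and all names (`frame_states`, `ACTION_RULES`, `detect_actions`) are simplifying assumptions.

```python
# Minimal sketch of state-transition action reasoning.
# A video "state" per frame: object attributes plus pairwise
# relationships (a much-simplified scene graph). All example
# objects, attributes and rules below are hypothetical.
frame_states = [
    {"attrs": {"cup": "on_table"}, "rels": {("hand", "cup"): "apart"}},
    {"attrs": {"cup": "on_table"}, "rels": {("hand", "cup"): "touching"}},
    {"attrs": {"cup": "in_air"},   "rels": {("hand", "cup"): "holding"}},
]

# Prior knowledge: each action is defined by an attribute or
# relationship change (a state transition).
ACTION_RULES = [
    {"name": "reach",   "rel_change": (("hand", "cup"), "apart", "touching")},
    {"name": "pick_up", "attr_change": ("cup", "on_table", "in_air")},
]

def detect_actions(states, rules):
    """Scan consecutive state pairs; report each matched transition
    with when (frame index) and how (the observed change)."""
    detections = []
    for t in range(len(states) - 1):
        prev, curr = states[t], states[t + 1]
        for rule in rules:
            if "attr_change" in rule:
                obj, before, after = rule["attr_change"]
                if prev["attrs"].get(obj) == before and curr["attrs"].get(obj) == after:
                    detections.append((rule["name"], t, f"{obj}: {before} -> {after}"))
            if "rel_change" in rule:
                pair, before, after = rule["rel_change"]
                if prev["rels"].get(pair) == before and curr["rels"].get(pair) == after:
                    detections.append((rule["name"], t, f"{pair}: {before} -> {after}"))
    return detections
```

Because each detection carries the frame index and the specific change that triggered it, the result is explainable in the sense the abstract describes: every reported action is justified by a logical rule plus an observed semantic-level change, and concurrent actions at the same transition are all reported.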

Details

Database :
OpenAIRE
Journal :
Proceedings of the 27th ACM International Conference on Multimedia
Accession number :
edsair.doi.dedup.....2a65c3b302fe05c6adec314abd85fbc6
Full Text :
https://doi.org/10.1145/3343031.3351040