Please use this identifier to cite or link to this item: http://hdl.handle.net/2440/117224
Type: Conference paper
Title: Towards context-aware interaction recognition for visual relationship detection
Author: Zhuang, B.
Liu, L.
Shen, C.
Reid, I.
Citation: Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV 2017), 2017 / vol. 2017-October, pp. 589-598
Publisher: IEEE
Issue Date: 2017
Series/Report no.: IEEE International Conference on Computer Vision
ISBN: 9781538610329
ISSN: 2380-7504
Conference Name: IEEE International Conference on Computer Vision (ICCV) (22 Oct 2017 - 29 Oct 2017 : Venice, Italy)
Statement of Responsibility: Bohan Zhuang, Lingqiao Liu, Chunhua Shen, Ian Reid
Abstract: Recognizing how objects interact with each other is a crucial task in visual recognition. If we define the context of the interaction to be the objects involved, then most current methods can be categorized as either: (i) training a single classifier on the combination of the interaction and its context; or (ii) aiming to recognize the interaction independently of its explicit context. Both methods suffer limitations: the former scales poorly with the number of combinations and fails to generalize to unseen combinations, while the latter often leads to poor interaction recognition performance due to the difficulty of designing a context-independent interaction classifier. To mitigate those drawbacks, this paper proposes an alternative, context-aware interaction recognition framework. The key to our method is to explicitly construct an interaction classifier which combines the context and the interaction. The context is encoded via word2vec into a semantic space, and is used to derive a classification result for the interaction. The proposed method still builds one classifier per interaction (as per type (ii) above), but the classifier built is adaptive to context via weights which are context dependent. The benefit of using the semantic space is that it naturally leads to zero-shot generalization, in which semantically similar contexts (subject-object pairs) can be recognized as suitable contexts for an interaction even if they were not observed in the training set. Our method also scales with the number of interaction-context pairs, since our model parameters do not increase with the number of interactions. Thus our method avoids the limitations of both approaches. We demonstrate experimentally that the proposed framework leads to improved performance for all investigated interaction representations and datasets.
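The core idea in the abstract can be sketched in a few lines: a context (subject-object pair) is embedded into a semantic space via word vectors, and those embeddings generate the weights of a linear interaction classifier. The following is a minimal illustrative sketch, not the authors' implementation; the dimensions, the random word vectors standing in for pretrained word2vec, and the single-layer weight generator are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

D_WORD = 50   # word-embedding dimensionality (assumed; stands in for word2vec)
D_VIS = 128   # visual-feature dimensionality (illustrative)

# Hypothetical word vectors for possible subjects/objects (contexts).
word_vec = {w: rng.standard_normal(D_WORD) for w in ["person", "horse", "dog"]}

# Weight-generation parameters for ONE interaction (e.g. "ride"): they map a
# context embedding to the weights and bias of that interaction's classifier.
# These parameters are fixed in size, regardless of how many contexts exist.
W_gen = rng.standard_normal((D_VIS + 1, 2 * D_WORD)) * 0.01

def context_aware_classifier(subject, obj):
    """Return (weights, bias) of an interaction classifier adapted to the context."""
    ctx = np.concatenate([word_vec[subject], word_vec[obj]])  # semantic context
    params = W_gen @ ctx
    return params[:-1], params[-1]

def score_interaction(visual_feat, subject, obj):
    """Linear score of the interaction for this visual feature, in this context."""
    w, b = context_aware_classifier(subject, obj)
    return float(visual_feat @ w + b)

# Zero-shot flavor: an unseen pair such as ("person", "dog") still yields a
# classifier, because the context lives in a shared semantic space where it
# is close to observed pairs like ("person", "horse").
feat = rng.standard_normal(D_VIS)
s_seen = score_interaction(feat, "person", "horse")
s_unseen = score_interaction(feat, "person", "dog")
```

The point of the sketch is that the classifier for "ride" is not a fixed weight vector: its weights are a function of the context embedding, so unseen but semantically similar contexts receive sensible classifiers without any additional parameters.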
Rights: © 2017 IEEE
RMID: 0030083940
DOI: 10.1109/ICCV.2017.71
Grant ID: http://purl.org/au-research/grants/arc/DE170101259
http://purl.org/au-research/grants/arc/FT120100969
http://purl.org/au-research/grants/arc/FL130100102
Appears in Collections:Computer Science publications

Files in This Item:
There are no files associated with this item.

