Please use this identifier to cite or link to this item: http://hdl.handle.net/2440/117224
Full metadata record
DC Field | Value | Language
dc.contributor.author | Zhuang, B. | en
dc.contributor.author | Liu, L. | en
dc.contributor.author | Shen, C. | en
dc.contributor.author | Reid, I. | en
dc.date.issued | 2017 | en
dc.identifier.citation | Proceedings 2017 IEEE International Conference on Computer Vision ICCV 2017, 2017 / vol.2017-October, pp.589-598 | en
dc.identifier.isbn | 9781538610329 | en
dc.identifier.issn | 2380-7504 | en
dc.identifier.uri | http://hdl.handle.net/2440/117224 | -
dc.description.abstract | Recognizing how objects interact with each other is a crucial task in visual recognition. If we define the context of the interaction to be the objects involved, then most current methods can be categorized as either: (i) training a single classifier on the combination of the interaction and its context; or (ii) aiming to recognize the interaction independently of its explicit context. Both methods suffer limitations: the former scales poorly with the number of combinations and fails to generalize to unseen combinations, while the latter often leads to poor interaction recognition performance due to the difficulty of designing a context-independent interaction classifier. To mitigate those drawbacks, this paper proposes an alternative, context-aware interaction recognition framework. The key to our method is to explicitly construct an interaction classifier which combines the context and the interaction. The context is encoded via word2vec into a semantic space, and is used to derive a classification result for the interaction. The proposed method still builds one classifier for one interaction (as per type (ii) above), but the classifier built is adaptive to context via weights which are context dependent. The benefit of using the semantic space is that it naturally leads to zero-shot generalizations in which semantically similar contexts (subject-object pairs) can be recognized as suitable contexts for an interaction, even if they were not observed in the training set. Our method also scales with the number of interaction-context pairs since our model parameters do not increase with the number of interactions. Thus our method avoids the limitation of both approaches. We demonstrate experimentally that the proposed framework leads to improved performance for all investigated interaction representations and datasets. | en
dc.description.statementofresponsibility | Bohan Zhuang, Lingqiao Liu, Chunhua Shen, Ian Reid | en
dc.language.iso | en | en
dc.publisher | IEEE | en
dc.relation.ispartofseries | IEEE International Conference on Computer Vision | en
dc.rights | © 2017 IEEE | en
dc.title | Towards context-aware interaction recognition for visual relationship detection | en
dc.type | Conference paper | en
dc.identifier.rmid | 0030083940 | en
dc.contributor.conference | IEEE International Conference on Computer Vision (ICCV) (22 Oct 2017 - 29 Oct 2017 : Venice, Italy) | en
dc.identifier.doi | 10.1109/ICCV.2017.71 | en
dc.relation.grant | http://purl.org/au-research/grants/arc/DE170101259 | en
dc.relation.grant | http://purl.org/au-research/grants/arc/FT120100969 | en
dc.relation.grant | http://purl.org/au-research/grants/arc/FL130100102 | en
dc.identifier.pubid | 398250 | -
pubs.library.collection | Computer Science publications | en
pubs.library.team | DS05 | en
pubs.verification-status | Verified | en
pubs.publication-status | Published | en
dc.identifier.orcid | Shen, C. [0000-0002-8648-8718] | en
dc.identifier.orcid | Reid, I. [0000-0001-7790-6423] | en
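The abstract above describes a classifier whose weights are generated from a semantic (word2vec) embedding of the context, rather than one fixed classifier per context. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the class name ContextAwareClassifier, the dimensions, and the random stand-in word vectors are all assumptions introduced here for illustration.

# A minimal sketch of context-dependent classifier weights, assuming
# random stand-in vectors in place of pretrained word2vec embeddings.
# All names and dimensions here are hypothetical, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM, FEAT_DIM = 50, 128

# Stand-in "word2vec" table; the paper encodes the context
# (subject, object) into a semantic space with word2vec.
vocab = ["person", "horse", "dog", "bike", "ride", "walk"]
word_vec = {w: rng.standard_normal(EMBED_DIM) for w in vocab}

class ContextAwareClassifier:
    """One classifier per interaction, with weights adapted by context."""

    def __init__(self, embed_dim, feat_dim):
        # Maps a context embedding to classifier weights, so parameter
        # count does not grow with the number of subject-object pairs.
        self.W = rng.standard_normal((feat_dim, 2 * embed_dim)) * 0.01
        self.b = np.zeros(feat_dim)

    def score(self, visual_feat, subject, obj):
        # Encode the context as the concatenated subject/object vectors.
        context = np.concatenate([word_vec[subject], word_vec[obj]])
        weights = self.W @ context + self.b  # context-dependent weights
        return float(visual_feat @ weights)  # score for this interaction

clf_ride = ContextAwareClassifier(EMBED_DIM, FEAT_DIM)
feat = rng.standard_normal(FEAT_DIM)  # visual feature of a region pair
print(clf_ride.score(feat, "person", "horse"))
# Zero-shot flavor: an unseen context such as (person, dog) reuses the
# same parameters, since only its semantic embedding changes.
print(clf_ride.score(feat, "person", "dog"))

Because the classifier parameters are shared across contexts, semantically similar subject-object pairs yield similar weight vectors, which is what enables the zero-shot generalization claimed in the abstract.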
Appears in Collections:Computer Science publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.