Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/120067
Full metadata record
dc.contributor.author: Wu, Q.
dc.contributor.author: Wang, P.
dc.contributor.author: Shen, C.
dc.contributor.author: Reid, I.
dc.contributor.author: Hengel, A.
dc.date.issued: 2018
dc.identifier.citation: Proceedings / CVPR, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2018, pp. 6106-6115
dc.identifier.isbn: 9781538664209
dc.identifier.issn: 1063-6919
dc.identifier.issn: 2575-7075
dc.identifier.uri: http://hdl.handle.net/2440/120067
dc.description.abstract: The visual dialog task requires an agent to engage in a conversation about an image with a human. It extends the visual question answering task in that the agent must answer a question about an image, but must do so in light of the previous dialog that has taken place. The key challenge in visual dialog is thus maintaining a consistent and natural dialog while continuing to answer questions correctly. We present a novel approach that combines Reinforcement Learning and Generative Adversarial Networks (GANs) to generate more human-like responses to questions. The GAN helps overcome the relative paucity of training data and the tendency of the typical MLE-based approach to generate overly terse answers. Critically, the GAN is tightly integrated into the attention mechanism that generates human-interpretable reasons for each answer. This means that the discriminative model of the GAN has the task of assessing whether a candidate answer was generated by a human or not, given the provided reason. This is significant because it drives the generative model to produce high-quality answers that are well supported by the associated reasoning. The method also achieves state-of-the-art results on the primary benchmark.
dc.description.statementofresponsibility: Qi Wu, Peng Wang, Chunhua Shen, Ian Reid, Anton van den Hengel
dc.language.iso: en
dc.publisher: IEEE
dc.relation.ispartofseries: IEEE Conference on Computer Vision and Pattern Recognition
dc.rights: © 2018 IEEE
dc.source.uri: https://ieeexplore.ieee.org/xpl/conhome/8576498/proceeding
dc.title: Are you talking to me? Reasoned visual dialog generation through adversarial learning
dc.type: Conference paper
dc.contributor.conference: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (18 Jun 2018 - 23 Jun 2018 : Salt Lake City, UT)
dc.identifier.doi: 10.1109/CVPR.2018.00639
dc.relation.grant: http://purl.org/au-research/grants/arc/CE140100016
dc.relation.grant: http://purl.org/au-research/grants/arc/FL130100102
pubs.publication-status: Published
dc.identifier.orcid: Wu, Q. [0000-0003-3631-256X]
dc.identifier.orcid: Reid, I. [0000-0001-7790-6423]
dc.identifier.orcid: Hengel, A. [0000-0003-3027-8364]
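
The abstract above describes a discriminator that judges whether a candidate answer was produced by a human, conditioned on the attention-derived "reason", with that judgment driving the generator. The following is a minimal illustrative sketch of that idea only, not the authors' code: the PyTorch framing, the module name ReasonedDiscriminator, the GRU encoder, and all dimensions are assumptions introduced here for illustration.

# Hypothetical sketch of a discriminator scoring an (answer, reason) pair,
# as summarized in the abstract. All names and sizes are illustrative.
import torch
import torch.nn as nn

class ReasonedDiscriminator(nn.Module):
    def __init__(self, embed_dim=512):
        super().__init__()
        # Encodes a candidate answer given as a sequence of token embeddings.
        self.answer_encoder = nn.GRU(embed_dim, embed_dim, batch_first=True)
        # Scores the (answer, reason) pair: closer to 1 = more human-like.
        self.scorer = nn.Sequential(
            nn.Linear(2 * embed_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, answer_tokens, reason_vector):
        # answer_tokens: (batch, seq_len, embed_dim) embedded answer tokens
        # reason_vector: (batch, embed_dim) attention-weighted context ("reason")
        _, h = self.answer_encoder(answer_tokens)   # h: (1, batch, embed_dim)
        pair = torch.cat([h.squeeze(0), reason_vector], dim=-1)
        return self.scorer(pair)  # probability the answer is human-produced

# Example: score a batch of 4 ten-token answers against their reason vectors.
disc = ReasonedDiscriminator()
scores = disc(torch.randn(4, 10, 512), torch.randn(4, 512))  # shape (4, 1)

In the reinforcement-learning scheme the abstract outlines, such a probability would act as the reward when updating the answer generator; the sketch omits the generator and the attention module that produces the reason vector.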
Appears in Collections: Aurora harvest 4
Computer Science publications

Files in This Item:
There are no files associated with this item.

