Title: What value do explicit high level concepts have in vision to language problems?
Citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2016, vol. 2016-December, pp. 203-212
Series/Report no.: IEEE Conference on Computer Vision and Pattern Recognition
Conference Name: 29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (26 Jun 2016 - 01 Jul 2016 : Las Vegas, NV)
Authors: Qi Wu, Chunhua Shen, Lingqiao Liu, Anthony Dick, Anton van den Hengel
Abstract: Much recent progress in Vision-to-Language (V2L) problems has been achieved through a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This approach does not explicitly represent high-level semantic concepts, but rather seeks to progress directly from image features to text. In this paper we investigate whether this direct approach succeeds due to, or despite, the fact that it avoids the explicit representation of high-level information. We propose a method of incorporating high-level concepts into the successful CNN-RNN approach, and show that it achieves a significant improvement on the state-of-the-art in both image captioning and visual question answering. We also show that the same mechanism can be used to introduce external semantic information and that doing so further improves performance. We achieve the best reported results on both image captioning and VQA on several benchmark datasets, and provide an analysis of the value of explicit high-level concepts in V2L problems.
Rights: © 2016 IEEE
Appears in Collections: Computer Science publications
Files in This Item:
RA_hdl_108049.pdf (Adobe PDF, 397.22 kB, Restricted Access)