Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/107515
Full metadata record
dc.contributor.author: Zhuang, B.
dc.contributor.author: Lin, G.
dc.contributor.author: Shen, C.
dc.contributor.author: Reid, I.
dc.date.issued: 2016
dc.identifier.citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2016, vol. 2016-December, pp. 5955-5964
dc.identifier.isbn: 9781467388511
dc.identifier.issn: 1063-6919
dc.identifier.uri: http://hdl.handle.net/2440/107515
dc.description.abstract: In this paper, we aim to learn a mapping (or embedding) from images to a compact binary space in which Hamming distances correspond to a ranking measure for the image retrieval task. We make use of a triplet loss because this has been shown to be most effective for ranking problems. However, training in previous works can be prohibitively expensive because optimization is performed directly on the triplet space, where the number of possible triplets is cubic in the number of training examples. To address this issue, we propose to formulate high-order binary codes learning as a multi-label classification problem by explicitly separating learning into two interleaved stages. To solve the first stage, we design a large-scale high-order binary codes inference algorithm that reduces the high-order objective to a standard binary quadratic problem, so that graph cuts can be used to efficiently infer the binary codes, which serve as the labels of each training datum. In the second stage, we propose to map the original image to compact binary codes via carefully designed deep convolutional neural networks (CNNs), and the hashing function fitting can be solved by training binary CNN classifiers. An incremental/interleaved optimization strategy is proffered to ensure that these two steps interact with each other during training for better accuracy. We conduct experiments on several benchmark datasets, which demonstrate both improved training time (by as much as two orders of magnitude) and state-of-the-art hashing performance for various retrieval tasks.
dc.description.statementofresponsibility: Bohan Zhuang, Guosheng Lin, Chunhua Shen, Ian Reid
dc.language.iso: en
dc.publisher: IEEE
dc.relation.ispartofseries: IEEE Conference on Computer Vision and Pattern Recognition
dc.rights: © 2016 IEEE
dc.source.uri: http://dx.doi.org/10.1109/cvpr.2016.641
dc.title: Fast training of triplet-based deep binary embedding networks
dc.type: Conference paper
dc.contributor.conference: Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (27 Jun 2016 - 30 Jun 2016 : Las Vegas, NV)
dc.identifier.doi: 10.1109/CVPR.2016.641
pubs.publication-status: Published
dc.identifier.orcid: Reid, I. [0000-0001-7790-6423]
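The abstract above describes retrieval by ranking database images according to the Hamming distance between compact binary codes. Below is a minimal, illustrative sketch of that ranking step only, not the authors' implementation: the function names (hamming_distances, retrieve), the 64-bit code length, and the random database are assumptions made for illustration.

```python
# Illustrative sketch (assumed, not from the paper): rank binary codes by Hamming distance.
import numpy as np

def hamming_distances(query_code, db_codes):
    # Hamming distance = number of bit positions in which two codes differ.
    return np.count_nonzero(db_codes != query_code, axis=1)

def retrieve(query_code, db_codes, top_k=5):
    # Rank all database codes by Hamming distance to the query; return the top_k indices.
    dists = hamming_distances(query_code, db_codes)
    return np.argsort(dists, kind="stable")[:top_k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_items, n_bits = 1000, 64                      # hypothetical: 64-bit codes for 1000 images
    db = rng.integers(0, 2, size=(n_items, n_bits), dtype=np.uint8)
    noise = (rng.random(n_bits) < 0.05).astype(np.uint8)
    query = db[42] ^ noise                          # a query that is ~5% bit-flips away from item 42
    print(retrieve(query, db, top_k=5))             # item 42 should appear at or near rank 1
```

In practice, binary codes are usually packed into machine words and distances are computed with XOR plus popcount for speed; the dense 0/1 representation above is kept only for clarity.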
Appears in Collections:Aurora harvest 3
Computer Science publications

Files in This Item:
File: RA_hdl_107515.pdf
Description: Restricted Access
Size: 458.26 kB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.