Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/107902
Type: Conference paper
Title: Unsupervised CNN for single view depth estimation: geometry to the rescue
Author: Garg, R.
Vijay Kumar, B.
Carneiro, G.
Reid, I.
Citation: Lecture Notes in Artificial Intelligence, 2016 / Leibe, B., Matas, J., Sebe, N., Welling, M. (ed./s), vol.9912 LNCS, pp.740-756
Publisher: Springer International Publishing
Issue Date: 2016
Series/Report no.: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9912
ISBN: 9783319464831
ISSN: 0302-9743
1611-3349
Conference Name: 14th European Conference on Computer Vision (ECCV 2016) (11 Oct 2016 - 14 Oct 2016 : Amsterdam, Netherlands)
Editor: Leibe, B.
Matas, J.
Sebe, N.
Welling, M.
Statement of Responsibility: Ravi Garg, Vijay Kumar B.G., Gustavo Carneiro, and Ian Reid
Abstract: A significant weakness of most current deep convolutional neural networks is the need to train them using vast amounts of manually labelled data. In this work we propose an unsupervised framework to learn a deep convolutional neural network for single view depth prediction, without requiring a pre-training stage or annotated ground-truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with a small, known camera motion between the two, such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and the known inter-view displacement, so as to reconstruct the source image; the photometric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation and no calibration of a depth sensor to the camera. We show that our network, trained on less than half of the KITTI dataset, gives performance comparable to that of state-of-the-art supervised methods for single view depth estimation.
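The training signal described in the abstract can be sketched in a few lines: for a rectified stereo pair, the predicted depth induces a horizontal disparity d = f·B/z, the target image is sampled at those shifted coordinates to reconstruct the source, and the photometric error between the reconstruction and the source drives learning. The sketch below is an illustrative NumPy approximation under assumed conventions (grayscale images, left/right sampling direction, linear interpolation); it is not the paper's actual implementation, which uses a differentiable warp inside a CNN training loop.

```python
import numpy as np

def inverse_warp_loss(source, target, depth, focal, baseline):
    """Photometric reconstruction loss for a rectified stereo pair.

    Illustrative sketch: reconstruct the source image by sampling the
    target image at horizontally shifted coordinates given by the
    stereo disparity d = focal * baseline / depth, then compare with
    the source. Function name and conventions are assumptions.
    """
    h, w = source.shape
    disparity = focal * baseline / depth           # per-pixel horizontal shift
    xs = np.arange(w)[None, :].repeat(h, axis=0)   # pixel x-coordinates
    x_src = xs - disparity                         # where to sample in target

    # Linear interpolation along x (rectified pair: no vertical shift).
    x0 = np.clip(np.floor(x_src).astype(int), 0, w - 2)
    frac = np.clip(x_src - x0, 0.0, 1.0)
    rows = np.arange(h)[:, None].repeat(w, axis=1)
    warped = (1 - frac) * target[rows, x0] + frac * target[rows, x0 + 1]

    # Photometric (L2) reconstruction error: the unsupervised training loss.
    return np.mean((source - warped) ** 2)
```

Because the warp is built from interpolation and arithmetic, the loss is differentiable with respect to the predicted depth, which is what lets the encoder be trained without ground-truth depth maps.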
Rights: © Springer International Publishing AG 2016
DOI: 10.1007/978-3-319-46484-8_45
Grant ID: http://purl.org/au-research/grants/arc/CE140100016
http://purl.org/au-research/grants/arc/FL130100102
Published version: http://dx.doi.org/10.1007/978-3-319-46484-8_45
Appears in Collections:Aurora harvest 3
Computer Science publications

Files in This Item:
RA_hdl_107902.pdf (Restricted Access, 7.1 MB, Adobe PDF)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.