Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/124482
Type: Conference paper
Title: Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction
Author: Zhan, H.
Garg, R.
Weerasekera, C.S.
Li, K.
Agarwal, H.
Reid, I.M.
Citation: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 340-349
Publisher: IEEE
Publisher Place: Piscataway, NJ.
Issue Date: 2018
Series/Report no.: IEEE Conference on Computer Vision and Pattern Recognition
ISBN: 1538664208
9781538664209
ISSN: 1063-6919
2575-7075
Conference Name: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (18 Jun 2018 - 23 Jun 2018 : Salt Lake City, USA)
Statement of Responsibility: Huangying Zhan, Ravi Garg, Chamara Saroj Weerasekera, Kejie Li, Harsh Agarwal, Ian Reid
Abstract: Despite learning-based methods showing promising results in single view depth estimation and visual odometry, most existing approaches treat the tasks in a supervised manner. Recent approaches to single view depth estimation explore the possibility of learning without full supervision via minimizing photometric error. In this paper, we explore the use of stereo sequences for learning depth and visual odometry. The use of stereo sequences enables the use of both spatial (between left-right pairs) and temporal (forward-backward) photometric warp error, and constrains the scene depth and camera motion to be in a common, real-world scale. At test time our framework is able to estimate single view depth and two-view odometry from a monocular sequence. We also show how we can improve on a standard photometric warp loss by considering a warp of deep features. We show through extensive experiments that: (i) jointly training for single view depth and visual odometry improves depth prediction because of the additional constraint imposed on depths and achieves competitive results for visual odometry; (ii) deep feature-based warping loss improves upon simple photometric warp loss for both single view depth estimation and visual odometry. Our method outperforms existing learning-based methods on the KITTI driving dataset in both tasks. The source code is available at https://github.com/Huangying-Zhan/Depth-VO-Feat.
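To make the loss described in the abstract concrete, below is a minimal sketch of the photometric warp loss: predicted depth and relative camera pose are used to inverse-warp a source view into the target view, and the photometric difference is penalised. This is an illustrative assumption written in PyTorch, not the authors' released implementation (see the GitHub link above for that); all function names, shapes, and conventions here are hypothetical.

    # Minimal sketch of a photometric warp loss (illustrative assumptions,
    # not the authors' code).
    import torch
    import torch.nn.functional as F

    def photometric_warp_loss(src, tgt, depth, K, T):
        """L1 difference between tgt and src inverse-warped into tgt's frame.
        src, tgt: (B,C,H,W) images; depth: (B,1,H,W) predicted depth for tgt;
        K: (3,3) camera intrinsics; T: (B,4,4) relative pose tgt -> src."""
        B, _, H, W = tgt.shape
        # Homogeneous pixel grid of the target view.
        ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                                torch.arange(W, dtype=torch.float32),
                                indexing="ij")
        pix = torch.stack([xs, ys, torch.ones_like(xs)]).view(1, 3, -1)  # (1,3,HW)
        # Back-project pixels to 3-D with the predicted depth, move them to
        # the source frame with T, and re-project with K.
        cam = depth.view(B, 1, -1) * (torch.inverse(K) @ pix)            # (B,3,HW)
        cam = torch.cat([cam, cam.new_ones(B, 1, H * W)], dim=1)         # (B,4,HW)
        proj = K @ (T @ cam)[:, :3]                                      # (B,3,HW)
        z = proj[:, 2].clamp(min=1e-6)
        # Normalise projected coordinates to [-1,1] for grid_sample.
        gx = 2.0 * (proj[:, 0] / z) / (W - 1) - 1.0
        gy = 2.0 * (proj[:, 1] / z) / (H - 1) - 1.0
        grid = torch.stack([gx, gy], dim=-1).view(B, H, W, 2)
        warped = F.grid_sample(src, grid, align_corners=True)
        return (warped - tgt).abs().mean()

The deep feature reconstruction of the title applies the same warp to learned feature maps rather than raw pixels, and the stereo setting adds the same loss between left-right pairs with the known stereo baseline, which is what fixes the real-world scale mentioned in the abstract.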
Rights: © 2018 IEEE
DOI: 10.1109/CVPR.2018.00043
Grant ID: http://purl.org/au-research/grants/arc/FL130100102
http://purl.org/au-research/grants/arc/CE140100016
Published version: https://ieeexplore.ieee.org/xpl/conhome/8576498/proceeding
Appears in Collections: Aurora harvest 8
Computer Science publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.