Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/134829
Type: Conference paper
Title: Place recognition with convnet landmarks: Viewpoint-robust, condition-robust, training-free
Author: Sünderhauf, N.
Shirazi, S.
Jacobson, A.
Dayoub, F.
Pepperell, E.
Upcroft, B.
Milford, M.
Citation: Proceedings of the Robotics: Science and Systems XI Conference (RSS 2015), 2015 / Kavraki, L.E., Hsu, D., Buchli, J. (ed./s), vol.11, pp.1-10
Publisher: Robotics: Science and Systems Foundation
Publisher Place: online
Issue Date: 2015
Series/Report no.: Robotics: Science and Systems; XI
ISBN: 9780992374716
ISSN: 2330-765X
Conference Name: Robotics: Science and Systems (RSS) (13 Jul 2015 - 17 Jul 2015 : Rome, Italy)
Editor: Kavraki, L.E.
Hsu, D.
Buchli, J.
Statement of Responsibility: Niko Sünderhauf, Sareh Shirazi, Adam Jacobson, Feras Dayoub, Edward Pepperell, Ben Upcroft, and Michael Milford
Abstract: Place recognition has long been an incompletely solved problem in that all approaches involve significant compromises. Current methods address many but never all of the critical challenges of place recognition: viewpoint-invariance, condition-invariance, and minimizing training requirements. Here we present an approach that adapts state-of-the-art object proposal techniques to identify potential landmarks within an image for place recognition. We use the astonishing power of convolutional neural network features to identify matching landmark proposals between images to perform place recognition over extreme appearance and viewpoint variations. Our system does not require any form of training; all components are generic enough to be used off-the-shelf. We present a range of challenging experiments in varied viewpoint and environmental conditions, and demonstrate superior performance to current state-of-the-art techniques. Furthermore, by building on existing and widely used recognition frameworks, this approach provides a highly compatible place recognition system with the potential for easy integration of other techniques such as object detection and semantic scene interpretation.
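The matching idea the abstract describes, comparing convnet features of landmark proposals between two images, can be sketched as follows. This is a minimal illustration only: the function names are hypothetical, and random vectors stand in for the convnet activations the paper would extract from object proposals; the scoring rule shown (average best-match cosine similarity) is a simplification, not the paper's exact method.

```python
import numpy as np

def cosine_similarity_matrix(A, B):
    # Rows of A and B are landmark descriptors; normalise each row
    # so the dot product equals cosine similarity.
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

def image_match_score(feats_a, feats_b):
    # Score each landmark in image A by its best cosine similarity
    # among the landmarks of image B, then average over landmarks.
    S = cosine_similarity_matrix(feats_a, feats_b)
    return S.max(axis=1).mean()

# Placeholder descriptors standing in for convnet features of
# detected landmark proposals (dimensions are illustrative).
rng = np.random.default_rng(0)
a = rng.normal(size=(20, 128))
b = a + 0.01 * rng.normal(size=(20, 128))   # near-duplicate scene
c = rng.normal(size=(20, 128))              # unrelated scene

# The near-duplicate scene scores far higher than the unrelated one.
assert image_match_score(a, b) > image_match_score(a, c)
```

Ranking candidate place matches by such a score is what lets the approach tolerate viewpoint change: individual landmarks can still match even when the global image layout differs.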
Rights: Copyright status unknown
DOI: 10.15607/RSS.2015.XI.022
Grant ID: http://purl.org/au-research/grants/arc/CE140100016
Published version: http://www.roboticsproceedings.org/rss11/p22.html
Appears in Collections: Computer Science publications

Files in This Item:
There are no files associated with this item.
