Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/107514
Full metadata record
DC Field	Value	Language
dc.contributor.author	Milford, M.	-
dc.contributor.author	Lowry, S.	-
dc.contributor.author	Sunderhauf, N.	-
dc.contributor.author	Shirazi, S.	-
dc.contributor.author	Pepperell, E.	-
dc.contributor.author	Upcroft, B.	-
dc.contributor.author	Shen, C.	-
dc.contributor.author	Lin, G.	-
dc.contributor.author	Liu, F.	-
dc.contributor.author	Cadena, C.	-
dc.contributor.author	Reid, I.	-
dc.date.issued	2015	-
dc.identifier.citation	IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2015, vol. 2015-October, pp. 18-25	-
dc.identifier.isbn	9781467367592	-
dc.identifier.issn	2160-7508	-
dc.identifier.issn	2160-7516	-
dc.identifier.uri	http://hdl.handle.net/2440/107514	-
dc.description.abstract	Vision-based localization on robots and vehicles remains unsolved when extreme appearance change and viewpoint change are present simultaneously. The current state-of-the-art approaches to this challenge either deal with only one of these two problems, for example FAB-MAP (viewpoint invariance) or SeqSLAM (appearance invariance), or use extensive training within the test environment, an impractical requirement in many application scenarios. In this paper we significantly improve the viewpoint invariance of the SeqSLAM algorithm by using state-of-the-art deep learning techniques to generate synthetic viewpoints. Our approach differs from other deep learning approaches in that it does not rely on the ability of the CNN to learn invariant features, but only to produce good-enough depth images from day-time imagery. We evaluate the system on a new multi-lane day-night car dataset specifically gathered to simultaneously test both appearance and viewpoint change. Results demonstrate that the use of synthetic viewpoints improves the maximum recall achieved at 100% precision by a factor of 2.2 and maximum recall by a factor of 2.7, enabling correct place recognition across multiple road lanes and significantly reducing the time between correct localizations.	-
dc.description.statementofresponsibility	Michael Milford, Stephanie Lowry, Niko Sunderhauf, Sareh Shirazi, Edward Pepperell, Ben Upcroft, Chunhua Shen, Guosheng Lin, Fayao Liu, Cesar Cadena, Ian Reid	-
dc.language.iso	en	-
dc.publisher	IEEE	-
dc.relation.ispartofseries	IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops	-
dc.rights	Copyright © 2015, IEEE	-
dc.source.uri	http://dx.doi.org/10.1109/cvprw.2015.7301395	-
dc.title	Sequence searching with deep-learnt depth for condition- and viewpoint-invariant route-based place recognition	-
dc.type	Conference paper	-
dc.contributor.conference	Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (7 Jun 2015 - 12 Jun 2015 : Boston, MA)	-
dc.identifier.doi	10.1109/CVPRW.2015.7301395	-
dc.relation.grant	http://purl.org/au-research/grants/arc/FT140101229	-
dc.relation.grant	http://purl.org/au-research/grants/arc/CE140100016	-
pubs.publication-status	Published	-
dc.identifier.orcid	Reid, I. [0000-0001-7790-6423]	-
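
The abstract above describes a two-stage pipeline: predict per-pixel depth from single day-time images with a CNN, reproject each frame to synthetic lateral viewpoints, then match route sequences SeqSLAM-style. Below is a minimal sketch of that idea, assuming a pinhole camera, grayscale frames with a depth map already predicted, and a simple sum-of-absolute-differences sequence score. All function names and parameters are illustrative assumptions, not the authors' implementation.

import numpy as np

def synthesize_viewpoint(image, depth, baseline, focal_px):
    """Warp `image` to a camera shifted `baseline` metres sideways,
    using per-pixel `depth` in metres (nearest-pixel forward warp)."""
    h, w = depth.shape
    out = np.zeros_like(image)
    xs = np.arange(w)
    for y in range(h):
        # Horizontal disparity for a pure lateral translation: d = f * b / Z.
        disparity = focal_px * baseline / np.maximum(depth[y], 1e-3)
        x_new = np.clip((xs + disparity).astype(int), 0, w - 1)
        out[y, x_new] = image[y]
    return out

def patch_normalize(img, patch=8):
    """SeqSLAM-style local patch normalization for illumination invariance."""
    img = img.astype(np.float32)
    out = np.empty_like(img)
    for y in range(0, img.shape[0], patch):
        for x in range(0, img.shape[1], patch):
            p = img[y:y+patch, x:x+patch]
            out[y:y+patch, x:x+patch] = (p - p.mean()) / (p.std() + 1e-6)
    return out

def sequence_score(query_seq, ref_seq):
    """Sum of mean absolute differences over aligned frame pairs;
    lower is a better match (linear-trajectory assumption, as in SeqSLAM)."""
    return sum(np.abs(patch_normalize(q) - patch_normalize(r)).mean()
               for q, r in zip(query_seq, ref_seq))

# Example with synthetic data: warp a 5-frame reference route sideways,
# then score the warped sequence against the original.
rng = np.random.default_rng(0)
ref = [rng.random((64, 64)) for _ in range(5)]
qry = [synthesize_viewpoint(f, np.full((64, 64), 10.0),
                            baseline=1.5, focal_px=200.0) for f in ref]
print(sequence_score(qry, ref))

In use, one might render several lateral offsets per reference frame (for example, one per road lane) and keep the offset with the best sequence score, which is how synthetic viewpoints would allow matching across lanes.
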
Appears in Collections:Aurora harvest 3
Computer Science publications

Files in This Item:
File	Description	Size	Format
RA_hdl_107514.pdf	Restricted Access	793.6 kB	Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.