|Title:||Efficient piecewise training of deep structured models for semantic segmentation|
|Citation:||Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), 2016 / vol.2016-December, pp.3194-3203|
|Series/Report no.:||IEEE Conference on Computer Vision and Pattern Recognition|
|Conference Name:||29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016) (26 Jun 2016 - 01 Jul 2016 : Las Vegas, NV)|
|Author:||Guosheng Lin, Chunhua Shen, Anton van den Hengel, Ian Reid|
|Abstract:||Recent advances in semantic image segmentation have mostly been achieved by training deep convolutional neural networks (CNNs). We show how to improve semantic segmentation through the use of contextual information; specifically, we explore 'patch-patch' context between image regions and 'patch-background' context. For learning from the patch-patch context, we formulate Conditional Random Fields (CRFs) with CNN-based pairwise potential functions to capture semantic correlations between neighboring patches. Efficient piecewise training of the proposed deep structured model is then applied to avoid repeated expensive CRF inference during back propagation. For capturing the patch-background context, we show that a network design with traditional multi-scale image input and sliding pyramid pooling is effective for improving performance. Our experimental results set new state-of-the-art performance on a number of popular semantic segmentation datasets, including NYUDv2, PASCAL VOC 2012, PASCAL-Context, and SIFT-flow. In particular, we achieve an intersection-over-union score of 78.0 on the challenging PASCAL VOC 2012 dataset.|
|Rights:||© 2016 IEEE|
|Appears in Collections:||Computer Science publications|
Files in This Item:
|RA_hdl_105526.pdf||Restricted Access||412.76 kB||Adobe PDF|
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.