Please use this identifier to cite or link to this item: http://hdl.handle.net/2440/111936
Type: Journal article
Title: Exploring context with deep structured models for semantic segmentation
Author: Lin, G.
Shen, C.
van den Hengel, A.
Reid, I.
Citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016; 40(6):1352-1366
Publisher: IEEE
Issue Date: 2016
ISSN: 0162-8828 (print)
2160-9292 (online)
Statement of Responsibility: Guosheng Lin, Chunhua Shen, Anton van den Hengel, and Ian Reid
Abstract: We propose an approach for exploiting contextual information in semantic image segmentation, and in particular investigate the use of patch-patch context and patch-background context in deep CNNs. We formulate deep structured models by combining CNNs and Conditional Random Fields (CRFs) for learning the patch-patch context between image regions. Specifically, we formulate CNN-based pairwise potential functions to capture semantic correlations between neighboring patches. Efficient piecewise training of the proposed deep structured model is then applied to avoid repeated expensive CRF inference during back-propagation. For capturing the patch-background context, we show that a network design with traditional multi-scale image inputs and sliding pyramid pooling is very effective for improving performance. We perform a comprehensive evaluation of the proposed method and achieve new state-of-the-art performance on a number of challenging semantic segmentation datasets.
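The sliding pyramid pooling mentioned in the abstract can be illustrated with a minimal NumPy sketch: a feature map is max-pooled with several window sizes at stride 1 and the pooled maps are concatenated with the original features along the channel axis, so each location sees background context at multiple scales. The function names and the window sizes below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def sliding_max_pool(fmap, k):
    # Max-pool a (H, W, C) feature map with a k x k window, stride 1,
    # "same" padding (k is assumed odd so the output size matches the input).
    H, W, C = fmap.shape
    pad = k // 2
    padded = np.pad(fmap, ((pad, pad), (pad, pad), (0, 0)),
                    constant_values=-np.inf)
    out = np.empty_like(fmap)
    for i in range(H):
        for j in range(W):
            out[i, j] = padded[i:i + k, j:j + k].max(axis=(0, 1))
    return out

def sliding_pyramid_pooling(fmap, window_sizes=(3, 5, 9)):
    # Concatenate the original features with max-pooled maps at several
    # window sizes (hypothetical sizes) along the channel axis, giving
    # each spatial location multi-scale background context.
    pooled = [fmap] + [sliding_max_pool(fmap, k) for k in window_sizes]
    return np.concatenate(pooled, axis=-1)
```

With C input channels and three window sizes, the output has 4C channels; in the paper this enriched map feeds the subsequent prediction layers.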
Keywords: Semantic segmentation; convolutional neural networks; conditional random fields; contextual models
Description: Date of publication 25 May 2017; date of current version 14 May 2018.
Rights: © 2017 IEEE
RMID: 0030086478
DOI: 10.1109/TPAMI.2017.2708714
Grant ID: http://purl.org/au-research/grants/arc/FT120100969
http://purl.org/au-research/grants/arc/FL130100102
Appears in Collections: Computer Science publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.