Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/106803
Type: Journal article
Title: Automatic quantification of tumour hypoxia from multi-modal microscopy images using weakly-supervised learning methods
Author: Carneiro, G.
Peng, T.
Bayer, C.
Navab, N.
Citation: IEEE Transactions on Medical Imaging, 2017; 36(7):1405-1417
Publisher: Institute of Electrical and Electronics Engineers
Issue Date: 2017
ISSN: 0278-0062 (print); 1558-254X (electronic)
Statement of Responsibility: Gustavo Carneiro, Tingying Peng, Christine Bayer, and Nassir Navab
Abstract: In recently published clinical trial results, hypoxia-modified therapies have been shown to provide better outcomes for cancer patients than standard cancer treatments. The development and validation of these hypoxia-modified therapies depend on an effective way of measuring tumor hypoxia, but a standardized measurement is currently unavailable in clinical practice. Different types of manual measurements have been proposed in clinical research, but in this paper we focus on a recently published approach that quantifies the number and proportion of hypoxic regions using high-resolution (immuno-)fluorescence (IF) and hematoxylin and eosin (HE) stained images of a histological specimen of a tumor. We introduce new machine learning-based methodologies to automate this measurement; the main challenge is that the clinical annotations available for training consist only of the total number of normoxic, chronically hypoxic, and acutely hypoxic regions, without any indication of their locations in the image. This therefore represents a weakly-supervised structured output classification problem, where training is based on a high-order loss function formed by the norm of the difference between the manual and estimated annotations mentioned above. We propose four methodologies to solve this problem: 1) a naive method that uses a majority classifier applied to the nodes of a fixed grid placed over the input images; 2) a baseline method based on a structured output learning formulation that relies on a fixed grid placed over the input images; 3) an extension of this baseline based on a latent structured output learning formulation that uses a graph that is flexible in the number and positions of its nodes; and 4) a pixel-wise labeling based on a fully convolutional neural network. Using a data set of 89 weakly annotated pairs of IF and HE images from eight tumors, we show that methods (3) and (4) are quantitatively comparable to each other and superior to the naive (1) and baseline (2) methods. All proposed methodologies show high correlation with the clinical annotations.
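
To make the count-based supervision concrete, the Python sketch below shows one plausible form of such a high-order loss: per-node class probabilities over a grid are summed into differentiable "soft" counts per class, and the loss is the norm of the difference between these estimated counts and the manual image-level counts. This is a minimal illustration under assumed names (soft_count_loss, a fixed grid of nodes, PyTorch) and is not the authors' implementation.

import torch
import torch.nn.functional as F

def soft_count_loss(logits, target_counts):
    """Illustrative high-order counting loss (a sketch, not the paper's code).

    logits: (N, 3) per-node class scores for the three region types
            (normoxic, chronically hypoxic, acutely hypoxic).
    target_counts: (3,) manual annotation: total number of regions per class.
    Only image-level totals supervise training; no node locations are needed.
    """
    probs = F.softmax(logits, dim=1)    # per-node class probabilities
    est_counts = probs.sum(dim=0)       # differentiable "soft" count per class
    # Norm of the difference between estimated and manual annotations
    return torch.linalg.norm(est_counts - target_counts)

# Toy usage: 100 grid nodes; manual annotation says 60/25/15 regions per class.
logits = torch.randn(100, 3, requires_grad=True)
target = torch.tensor([60.0, 25.0, 15.0])
loss = soft_count_loss(logits, target)
loss.backward()  # gradients reach every node despite only count-level labels

Because the supervision acts on totals rather than individual nodes, any node-level classifier (grid-based or fully convolutional) can be trained against this loss, which is the essence of the weakly-supervised setting described in the abstract.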
Keywords: Microscopy; structured output learning; deep learning; weakly-supervised training; high-order loss functions
Rights: © 2017 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
DOI: 10.1109/TMI.2017.2677479
Grant ID: http://purl.org/au-research/grants/arc/DP140102794
Published version: http://dx.doi.org/10.1109/tmi.2017.2677479
Appears in Collections: Aurora harvest 3
Computer Science publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.