|Title:||Scaling CNNs for high resolution volumetric reconstruction from a single image|
|Citation:||Proceedings of the IEEE International Conference on Computer Vision Workshop (ICCVW 2017), 2017 / vol.2018-January, pp.930-939|
|Publisher Place:||Piscataway, NJ|
|Series/Report no.:||IEEE International Conference on Computer Vision Workshops|
|Conference Name:||IEEE International Conference on Computer Vision Workshop (ICCVW 2017) (22 Oct 2017 - 29 Oct 2017 : Venice, Italy)|
|Author:||Adrian Johnston, Ravi Garg, Gustavo Carneiro, Ian Reid, Anton van den Hengel|
|Abstract:||One of the long-standing tasks in computer vision is to use a single 2-D view of an object in order to produce its 3-D shape. Recovering the lost dimension in this process has been the goal of classic shape-from-X methods, but the assumptions made in those works are often too restrictive to be useful for general 3-D objects. This problem has recently been addressed with deep learning methods containing a 2-D (convolution) encoder followed by a 3-D (deconvolution) decoder. These methods have been reasonably successful, but memory and run-time constraints impose a strong limitation on the resolution of the reconstructed 3-D shapes. In particular, state-of-the-art methods are able to reconstruct 3-D shapes represented by volumes of at most 32³ voxels on current desktop computers. In this work, we present a scalable 2-D single-view to 3-D volume reconstruction deep learning method, where the 3-D (deconvolution) decoder is replaced by a simple inverse discrete cosine transform (IDCT) decoder. Our simpler architecture has an order of magnitude faster inference when reconstructing 3-D volumes compared to the convolution-deconvolutional model, an exponentially smaller memory complexity while training and testing, and a sub-linear run-time training complexity with respect to the output volume size. We show on benchmark datasets that our method can produce high-resolution reconstructions with state-of-the-art accuracy.|
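The IDCT decoder described in the abstract can be illustrated with a short sketch: a network predicts a small block of low-frequency DCT coefficients, which is zero-padded and inverted to yield a much larger volume. This is a minimal illustration of the general idea, not the paper's implementation; the sizes `K` and `N` and the function name `idct_decode` are hypothetical.

```python
import numpy as np
from scipy.fft import idctn

# Hypothetical sizes: the network predicts a K^3 block of low-frequency
# DCT coefficients; the decoder expands it to a full N^3 voxel volume.
K, N = 8, 64

def idct_decode(coeffs, out_size):
    """Zero-pad predicted low-frequency DCT coefficients and apply a
    3-D inverse DCT to reconstruct a dense volume (illustrative only)."""
    padded = np.zeros((out_size,) * 3, dtype=coeffs.dtype)
    k = coeffs.shape[0]
    padded[:k, :k, :k] = coeffs
    # Orthonormal type-II inverse DCT along all three axes.
    return idctn(padded, norm="ortho")

coeffs = np.random.randn(K, K, K)   # stand-in for network output
volume = idct_decode(coeffs, N)
print(volume.shape)                  # (64, 64, 64)
```

Because the IDCT is a fixed linear map, the decoder needs no learned parameters and its memory cost is dominated by the small coefficient block rather than the full output volume, which is consistent with the scaling behaviour the abstract claims.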
|Rights:||© 2017 IEEE|
|Appears in Collections:||Computer Science publications|