|Title:||Exploiting temporal information for DCNN-based fine-grained object classification|
|Citation:||Proceedings of the 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA 2016), 2016 / Liew, A., Lovell, B., Fookes, C., Zhou, J., Gao, Y., Blumenstein, M., Wang, Z. (ed./s), pp.442-447|
|Conference Name:||International Conference on Digital Image Computing: Techniques and Applications (DICTA 2016) (30 Nov 2016 - 02 Dec 2016 : Gold Coast, AUSTRALIA)|
|Author:||Zong Yuan Ge, Chris McCool, Conrad Sanderson, Peng Wang, Lingqiao Liu, Ian Reid, Peter Corke|
|Abstract:||Fine-grained classification is a relatively new field that has concentrated on using information from a single image, while ignoring the enormous potential of using video data to improve classification. In this work we present the novel task of video-based fine-grained object classification, propose a corresponding new video dataset, and perform a systematic study of several recent deep convolutional neural network (DCNN) based approaches, which we specifically adapt to the task. We evaluate three-dimensional DCNNs, two-stream DCNNs, and bilinear DCNNs. Two forms of the two-stream approach are used, where spatial and temporal data from two independent DCNNs are fused either via early fusion (combination of the fully-connected layers) or late fusion (concatenation of the softmax outputs of the DCNNs). For bilinear DCNNs, information from the convolutional layers of the spatial and temporal DCNNs is combined via local co-occurrences. We then fuse the bilinear DCNN and early fusion of the two-stream approach to combine the spatial and temporal information at the local and global level (Spatio-Temporal Co-occurrence). Using the new and challenging video dataset of birds, classification performance is improved from 23.1% (using single images) to 41.1% when using the Spatio-Temporal Co-occurrence system. Incorporating automatically detected bounding box location further improves the classification accuracy to 53.6%.|
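The early/late fusion distinction in the abstract can be illustrated with a minimal sketch. All vectors, function names, and the simple concatenation rule below are illustrative assumptions, not the paper's actual networks or fusion operators:

```python
# Hedged sketch: early fusion combines features before classification,
# late fusion concatenates the per-stream softmax outputs.
# The toy vectors below stand in for DCNN activations.
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def early_fusion(spatial_fc, temporal_fc):
    """Early fusion: combine fully-connected features of the two streams
    before the final classifier (here, simple concatenation)."""
    return spatial_fc + temporal_fc

def late_fusion(spatial_logits, temporal_logits):
    """Late fusion: concatenate the softmax outputs of the two DCNNs."""
    return softmax(spatial_logits) + softmax(temporal_logits)

# Toy per-stream outputs for a 3-class problem.
spatial = [0.2, 1.5, -0.3]
temporal = [0.9, -1.1, 0.4]

fused_features = early_fusion(spatial, temporal)  # length-6 feature vector
fused_scores = late_fusion(spatial, temporal)     # two concatenated simplexes
```

In this sketch, a downstream classifier would consume `fused_features` (early fusion) or `fused_scores` (late fusion); the paper's actual combination at the local level (bilinear co-occurrences) is not shown here.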
|Appears in Collections:||Computer Science publications|
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.