Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/108762
Type: Conference paper
Title: A temporally invariant representation for visual tracking
Author: Ma, C.
Yang, X.
Zhang, C.
Yang, M.-H.
Citation: IEEE International Conference on Image Processing (ICIP) 2015: proceedings, 2015, pp. 857-861
Publisher: IEEE
Issue Date: 2015
ISBN: 9781479983391
Conference Name: 2015 IEEE International Conference on Image Processing (ICIP 2015) (27 Sep 2015 - 30 Sep 2015 : Quebec City, Canada)
Statement of Responsibility: Chao Ma, Xiaokang Yang, Chongyang Zhang, and Ming-Hsuan Yang
Abstract: In this paper, we propose to learn temporally invariant features from a large number of image sequences to represent objects for visual tracking. These features are trained on a convolutional neural network with temporal invariance constraints and are robust to diverse motion transformations. We employ linear correlation filters to encode the appearance templates of targets and perform the tracking task by searching for the maximum responses at each frame. The learned filters are updated online and adapt to significant appearance changes during tracking. Extensive experimental results on challenging sequences show that the proposed algorithm performs favorably against state-of-the-art methods in terms of efficiency, accuracy, and robustness.
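The linear correlation filters mentioned in the abstract admit a closed-form solution in the Fourier domain, and tracking then reduces to finding the peak of the filter response on each new frame. The following is a minimal sketch of that step in NumPy, assuming a MOSSE-style ridge-regression filter and using random pixel intensities as a stand-in for the paper's learned CNN features; `lam`, `train_filter`, and `respond` are illustrative names, not the authors' implementation.

```python
import numpy as np

def train_filter(template, desired, lam=1e-2):
    """Closed-form correlation filter in the Fourier domain.

    Solves the ridge regression G = F * conj(H) for conj(H), where F is the
    spectrum of the template and G that of the desired response; lam is a
    regularization constant (assumed value, not from the paper).
    """
    F = np.fft.fft2(template)
    G = np.fft.fft2(desired)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def respond(H_conj, patch):
    """Correlate the filter with a search patch; tracking picks the peak."""
    return np.real(np.fft.ifft2(H_conj * np.fft.fft2(patch)))

# Desired output: a Gaussian peak centred on the target.
size = 64
ys, xs = np.mgrid[:size, :size]
desired = np.exp(-((xs - size // 2) ** 2 + (ys - size // 2) ** 2) / (2 * 3.0 ** 2))

# Stand-in "features" for the target template.
rng = np.random.default_rng(0)
template = rng.standard_normal((size, size))

H_conj = train_filter(template, desired)
response = respond(H_conj, template)  # same patch, so the peak sits at the centre
peak_y, peak_x = np.unravel_index(np.argmax(response), response.shape)
```

In the full tracker the filter would be re-estimated online each frame (e.g. by running averages of the numerator and denominator spectra) so that it adapts to appearance changes, as the abstract describes.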
Keywords: Temporal invariance; feature learning; correlation filters; object tracking
Rights: © 2015 IEEE
DOI: 10.1109/ICIP.2015.7350921
Published version: http://dx.doi.org/10.1109/icip.2015.7350921
Appears in Collections: Aurora harvest 8
Computer Science publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.