|Title:||Modeling and generating complex motion blur for real-time tracking|
|Citation:||IEEE Conference on Computer Vision and Pattern Recognition, 2008 (CVPR 2008), 2008 / pp.1-8|
|Conference Name:||IEEE Conference on Computer Vision and Pattern Recognition (21st : 2008 : Anchorage, AK)|
|Author:||Christopher Mei and Ian Reid|
|Abstract:||This article addresses the problem of real-time visual tracking in the presence of complex motion blur. Previous authors have observed that efficient tracking can be achieved by matching blurred images directly, avoiding the computationally expensive task of deblurring (H. Jin et al., 2005). That study, however, was limited to translational blur. In this work, we analyse the problem of tracking in the presence of spatially variant motion blur generated by a planar template. We detail how to model the blur formation process and how to parallelise the blur generation, enabling a real-time GPU implementation. We also discuss how tracking initialisation can be improved through estimation of the camera exposure time. Our algorithm is tested on challenging real data with complex motion blur where simpler models fail. The benefit of blur estimation is also shown for structure and motion.|
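The blur model summarised in the abstract can be pictured as averaging the template under a family of homographies traversed during the camera exposure. The following NumPy sketch illustrates that idea only in broad strokes: it uses nearest-neighbour warping and linear interpolation of homography entries, which are simplifying assumptions of this illustration, not the interpolation or GPU parallelisation described in the paper. All function names here (`warp_homography`, `motion_blur`) are hypothetical.

```python
import numpy as np

def warp_homography(img, H):
    """Warp a grayscale image by homography H using inverse mapping and
    nearest-neighbour sampling; out-of-bounds pixels are set to zero."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(float)
    src = np.linalg.inv(H) @ pts          # destination -> source coordinates
    src /= src[2]                          # dehomogenise
    sx = np.round(src[0]).astype(int)
    sy = np.round(src[1]).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros(h * w)
    out[valid] = img[sy[valid], sx[valid]]
    return out.reshape(h, w)

def motion_blur(template, H_start, H_end, n_samples=16):
    """Approximate spatially variant motion blur of a planar template by
    averaging warps under homographies between the start- and end-of-exposure
    poses. Entry-wise linear interpolation of H is an assumption made for
    this sketch; it is not the paper's motion parameterisation."""
    acc = np.zeros_like(template, dtype=float)
    for t in np.linspace(0.0, 1.0, n_samples):
        Ht = (1.0 - t) * H_start + t * H_end
        acc += warp_homography(template, Ht)
    return acc / n_samples
```

Because each per-sample warp is independent of the others, the averaging loop is the part that maps naturally onto a per-pixel GPU kernel, which is what makes a real-time implementation plausible.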
|Appears in Collections:||Computer Science publications|