|Title:||Multi-object model-free tracking with joint appearance and motion inference|
|Citation:||Proceedings of the International Conference on Digital Image Computing: Techniques and Applications (DICTA 2017), 2017 / Guo, Y., Li, H., Cai, W., Murshed, M., Wang, Z., Gao, J., Feng, D. (ed./s), vol.2017-December, pp.1-8|
|Publisher Place:||Piscataway, NJ|
|Conference Name:||International Conference on Digital Image Computing: Techniques and Applications (DICTA 2017) (29 Nov 2017 - 01 Dec 2017 : Sydney, AUSTRALIA)|
|Author/s:||Chongyu Liu, Rui Yao, S. Hamid Rezatofighi, Ian Reid, Qinfeng Shi|
|Abstract:||Multi-object model-free tracking is challenging because the tracker is not aware of the objects' type (it is not allowed to use object detectors) and must distinguish each object from the background as well as from other, similar objects. Most existing methods update their appearance model individually for each target, and their performance is hampered by sudden appearance change and/or occlusion. We propose to use both an appearance model and a motion model to overcome this issue. We introduce an indicator variable to predict sudden appearance change and occlusion. When these occur, our model stops updating the appearance model, to avoid parameter updates based on the background or an incorrect object, and relies more on the motion model to track. Moreover, we consider the correlation among all targets and seek the jointly optimal locations for all targets simultaneously. We formulate the problem of finding the most likely locations jointly as a graphical model inference problem, and learn the joint parameters of both the appearance model and the motion model online in the LaRank framework. Experimental results show that our method outperforms the state-of-the-art.|
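The abstract's gating idea can be illustrated with a minimal sketch. This is not the authors' actual model; the function, its arguments, and the score-drop threshold are all illustrative assumptions. It only shows the control flow the abstract describes: an indicator variable flags a sudden appearance change or occlusion, in which case the appearance-model update is skipped and the motion model's prediction is used instead.

```python
def track_step(appearance_score, prev_score,
               appearance_pred, motion_pred,
               drop_threshold=0.5):
    """One hypothetical per-frame tracking step with a gated update.

    A sharp drop in the appearance score relative to the previous
    frame serves as a stand-in for the paper's indicator variable
    signalling sudden appearance change or occlusion.
    """
    # Indicator variable: True when the appearance score collapses.
    occluded = appearance_score < drop_threshold * prev_score

    if occluded:
        # Rely on the motion model and freeze the appearance model,
        # so its parameters are not corrupted by background pixels
        # or an incorrect object.
        return motion_pred, False
    else:
        # Appearance is reliable: use its prediction and keep learning.
        return appearance_pred, True

# Example: a score collapse triggers the motion-model fallback.
loc, do_update = track_step(0.2, prev_score=1.0,
                            appearance_pred=(30, 40),
                            motion_pred=(28, 41))
```

In the paper this decision is not made per target in isolation: the locations of all targets are inferred jointly via a graphical model, and the parameters are learned online with LaRank, neither of which this sketch attempts to reproduce.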
|Appears in Collections:||Computer Science publications|