|Title:||Formulating semantic image annotation as a supervised learning problem|
|Citation:||Proceedings, 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005, vol. 2 / C. Schmid, S. Soatto, and C. Tomasi (eds.): pp.163-168|
|Series/Report no.:||Proceedings - IEEE Computer Society Conference on Computer Vision and Pattern Recognition|
|Conference Name:||IEEE Computer Society Conference on Computer Vision and Pattern Recognition (18th : 2005 : San Diego, CA, U.S.A.)|
|Author:||Gustavo Carneiro, Nuno Vasconcelos|
|Abstract:||We introduce a new method to automatically annotate and retrieve images using a vocabulary of image semantics. The novel contributions include a discriminant formulation of the problem, a multiple instance learning solution that enables the estimation of concept probability distributions without prior image segmentation, and a hierarchical description of the density of each image class that enables very efficient training. Compared to current methods of image annotation and retrieval, the proposed method has significantly smaller time complexity and better recognition performance. Specifically, its recognition complexity is O(C×R), where C is the number of classes (or image annotations) and R is the number of image regions, while the best results in the literature have complexity O(T×R), where T is the number of training images. Since the number of classes grows substantially slower than the number of training images, the proposed method scales better during training and processes test images faster. This is illustrated through comparisons in terms of complexity, time, and recognition performance with current state-of-the-art methods.|
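The O(C×R) recognition cost mentioned in the abstract corresponds to scoring each of the C class models once over the R regions of a test image and ranking the classes. The sketch below illustrates that loop structure only; the `class_models` objects, their `log_pdf` method, and the use of log-likelihood plus prior are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def annotate(region_features, class_models, class_priors, top_k=5):
    """Rank annotation words for one image in O(C x R) time (illustrative sketch).

    region_features: (R, D) array of region descriptors for the test image.
    class_models:    list of C hypothetical density models, each exposing
                     log_pdf(x) -> (R,) region log-likelihoods.
    class_priors:    (C,) array of prior class probabilities.
    """
    scores = np.empty(len(class_models))
    for c, model in enumerate(class_models):        # C classes ...
        region_loglik = model.log_pdf(region_features)  # ... each evaluated on R regions
        scores[c] = np.log(class_priors[c]) + region_loglik.sum()
    return np.argsort(scores)[::-1][:top_k]         # indices of the top-ranked annotation words
```

Because the loop runs over classes rather than over training images, its cost grows with C instead of T, which is the scaling advantage the abstract claims over O(T×R) methods.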
|Rights:||Copyright © 2005 by The Institute of Electrical and Electronics Engineers, Inc.|
|Appears in Collections:||Computer Science publications|