Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/120741
Type: Journal article
Title: Heritage image annotation via collective knowledge
Author: Zhang, J.
Wu, Q.
Zhang, J.
Shen, C.
Lu, J.
Wu, Q.
Citation: Pattern Recognition, 2019; 93:204-214
Publisher: Elsevier
Issue Date: 2019
ISSN: 0031-3203 (print); 1873-5142 (electronic)
Statement of Responsibility: Junjie Zhang, Qi Wu, Jian Zhang, Chunhua Shen, Jianfeng Lu, Qiang Wu
Abstract: Automatic image annotation provides semantic descriptions of image content and lays the foundation for algorithms that search images within large databases. However, most current methods address annotation by modeling only an image's visual content and tag semantics, overlooking additional information such as scene descriptions and locations. Moreover, the majority of existing annotation datasets are visually consistent and annotated only with common visual objects and attributes, which leaves classic methods ill-equipped to handle more diverse image annotation. To address these issues, we propose to annotate images via collective knowledge: we uncover relationships between an image and its neighbors by measuring similarities among their metadata, apply metric learning to obtain representations of image content, and generate semantic representations for images from the collective semantic information of their neighbors. The two representations, drawn from different paradigms, are embedded together to train an annotation model. We ground our model on a heritage image collection gathered from library online open data. Annotations in this collection are not limited to common visual objects and are highly relevant to historical events, and the diversity of its image content is much greater than in existing datasets, making it well suited to this task. Comprehensive experimental results on the benchmark dataset show that the proposed model outperforms baselines and state-of-the-art methods.
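The core idea in the abstract can be illustrated with a minimal sketch: find an image's neighbors by metadata similarity, pool the neighbors' tags into a semantic vector, and combine it with the image's own visual representation. This is not the paper's implementation; all function names, the Jaccard similarity choice, and the toy heritage-style records below are illustrative assumptions.

```python
# Hypothetical sketch of annotation via collective knowledge: neighbors are
# found by metadata similarity, their tags are pooled into a semantic vector,
# and that vector is concatenated with a visual feature. Illustrative only.
from collections import Counter

def jaccard(a, b):
    """Similarity between two metadata keyword sets (assumed measure)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def neighbor_tag_vector(query_meta, collection, vocab, k=2):
    """Pool tags from the k most metadata-similar neighbors into a histogram."""
    ranked = sorted(collection,
                    key=lambda item: jaccard(query_meta, item["meta"]),
                    reverse=True)
    counts = Counter(tag for item in ranked[:k] for tag in item["tags"])
    total = sum(counts.values()) or 1
    return [counts[t] / total for t in vocab]  # normalized tag distribution

# Toy heritage-style records: metadata keywords plus annotation tags.
collection = [
    {"meta": {"1910s", "parade", "adelaide"},  "tags": ["procession", "street"]},
    {"meta": {"1910s", "adelaide", "harbour"}, "tags": ["ship", "wharf"]},
    {"meta": {"1950s", "farm"},                "tags": ["tractor", "field"]},
]
vocab = ["procession", "street", "ship", "wharf", "tractor", "field"]

semantic = neighbor_tag_vector({"1910s", "adelaide", "parade"}, collection, vocab)
visual = [0.3, 0.7]        # stand-in for a learned visual embedding
joint = visual + semantic  # the two representations embedded together
print(joint)
```

In the paper, the visual representation comes from metric learning and the two representations are embedded jointly to train the annotation model; here simple concatenation stands in for that step.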
Keywords: Annotation diversity; image annotation; representation learning; collective knowledge; heritage image collection
Rights: © 2019 Elsevier Ltd. All rights reserved.
DOI: 10.1016/j.patcog.2019.04.017
Published version: http://dx.doi.org/10.1016/j.patcog.2019.04.017
Appears in Collections:Aurora harvest 4
Computer Science publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.