
Geo-based Automatic Image Annotation

Sergieh, Hatem Mousselly and Gianini, Gabriele and Döller, Mario and Kosch, Harald and Egyed-Zsigmond, Elöd and Pinon, Jean-Marie (2012):
Geo-based Automatic Image Annotation.
In: Proceedings of the 2nd ACM International Conference on Multimedia Retrieval (ICMR '12), ACM, Hong Kong, China, DOI: 10.1145/2324796.2324850,
[Online-Edition: https://dl.acm.org/citation.cfm?doid=2324796.2324850],
[Conference or Workshop Item]

Abstract

A huge number of user-tagged images are uploaded to the web every day, and a growing share of them are also geotagged. These geotagged images open up new opportunities for automatically tagging images, enabling efficient image management and retrieval. In this paper an automatic image annotation approach is proposed. It is based on a statistical model that combines two kinds of information: high-level information, represented by the user tags of images captured at the same location as a new unlabeled image (the input image); and low-level information, represented by the visual similarity between the input image and the collection of geographically similar images. To maximize the number of images that are visually similar to the input image, an iterative visual matching approach is proposed and evaluated. The results show that a significant recall improvement can be achieved with an increasing number of iterations. The quality of the recommended tags has also been evaluated, and an overall good performance has been observed.
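The abstract describes, at a high level, a pipeline that retrieves images taken near the input image's GPS position and then scores their user tags with the help of visual similarity. The following Python sketch illustrates that general idea only; the function names, the fixed-radius neighbourhood filter, the visual_sim callback, and the additive tag-scoring scheme are illustrative assumptions and do not reproduce the statistical model or the iterative matching procedure of the paper.

import math
from collections import defaultdict

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two GPS points, in kilometres.
    r = 6371.0
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def recommend_tags(input_image, tagged_images, visual_sim, radius_km=1.0, top_k=5):
    # Hypothetical scoring: each geographically close image contributes its
    # user tags, weighted by its visual similarity (in [0, 1]) to the input image.
    scores = defaultdict(float)
    for img in tagged_images:
        d = haversine_km(input_image["lat"], input_image["lon"], img["lat"], img["lon"])
        if d > radius_km:
            continue
        w = visual_sim(input_image, img)
        for tag in img["tags"]:
            scores[tag] += w
    # Return the top_k candidate tags by accumulated score.
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Toy usage: recommend_tags({"lat": 22.28, "lon": 114.16}, geotagged_records,
# visual_sim=lambda a, b: 0.5) returns the five highest-scoring candidate tags.

In the paper itself the visual-matching step is applied iteratively to enlarge the set of visually similar neighbours; the sketch above collapses that into a single similarity weight for brevity.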

Item Type: Conference or Workshop Item
Published: 2012
Creators: Sergieh, Hatem Mousselly and Gianini, Gabriele and Döller, Mario and Kosch, Harald and Egyed-Zsigmond, Elöd and Pinon, Jean-Marie
Title: Geo-based Automatic Image Annotation
Language: English
Title of Book: Proceedings of the 2nd ACM International Conference on Multimedia Retrieval
Series Name: ICMR '12
Publisher: ACM
Uncontrolled Keywords: geotagging, image annotation, image retrieval, statistical models
Divisions: 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
Event Location: Hong Kong, China
Date Deposited: 31 Dec 2016 14:29
DOI: 10.1145/2324796.2324850
Official URL: https://dl.acm.org/citation.cfm?doid=2324796.2324850
Identification Number: TUD-CS-2012-0372