Sergieh, Hatem Mousselly ; Gianini, Gabriele ; Döller, Mario ; Kosch, Harald ; Egyed-Zsigmond, Elöd ; Pinon, Jean-Marie (2012)
Geo-based Automatic Image Annotation.
Hong Kong, China
doi: 10.1145/2324796.2324850
Conference publication, Bibliography
Abstract
A huge number of user-tagged images are uploaded to the web every day, and a growing share of them are also geotagged. This opens new opportunities for automatically tagging images so that efficient image management and retrieval can be achieved. In this paper, an automatic image annotation approach is proposed. It is based on a statistical model that combines two kinds of information: high-level information, represented by the user tags of images captured at the same location as a new unlabeled image (the input image), and low-level information, represented by the visual similarity between the input image and the collection of geographically similar images. To maximize the number of images that are visually similar to the input image, an iterative visual matching approach is proposed and evaluated. The results show that a significant improvement in recall can be achieved with an increasing number of iterations. The quality of the recommended tags has also been evaluated, and overall good performance has been observed.
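The abstract only outlines the approach, so the following is an illustrative sketch rather than the authors' method: a minimal Python example that collects tags from images taken near the input image's location, grows the set of visual matches over several iterations (the recall-improving step mentioned above), and scores candidate tags by combining tag frequency with visual similarity. All identifiers and parameters (`Image`, `haversine_km`, `visual_similarity`, `radius_km`, `sim_threshold`, the additive tag score) are assumptions made for the example and do not reproduce the paper's statistical model.

```python
from collections import defaultdict
from dataclasses import dataclass, field
from math import radians, sin, cos, asin, sqrt


@dataclass
class Image:
    lat: float
    lon: float
    tags: list = field(default_factory=list)
    features: list = field(default_factory=list)  # placeholder global visual descriptor


def haversine_km(a, b):
    """Great-circle distance in kilometres between the capture locations of a and b."""
    lat1, lon1, lat2, lon2 = map(radians, (a.lat, a.lon, b.lat, b.lon))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))


def visual_similarity(f1, f2):
    """Cosine similarity between two feature vectors (stand-in for a real visual matcher)."""
    dot = sum(x * y for x, y in zip(f1, f2))
    n1, n2 = sqrt(sum(x * x for x in f1)), sqrt(sum(x * x for x in f2))
    return dot / (n1 * n2) if n1 and n2 else 0.0


def recommend_tags(query, collection, radius_km=1.0, sim_threshold=0.5, iterations=3, top_k=5):
    # 1. High-level information: tagged images captured near the query location.
    neighbours = [img for img in collection if haversine_km(query, img) <= radius_km]

    # 2. Iterative visual matching: start with images that match the query directly,
    #    then also accept neighbours that are visually similar to an already matched
    #    image; each pass can only grow the matched set, which improves recall.
    matched = {id(img): img for img in neighbours
               if visual_similarity(query.features, img.features) >= sim_threshold}
    for _ in range(max(iterations - 1, 0)):
        for img in neighbours:
            if id(img) not in matched and any(
                    visual_similarity(img.features, m.features) >= sim_threshold
                    for m in list(matched.values())):
                matched[id(img)] = img

    # 3. Combine high- and low-level information: score each candidate tag by how often
    #    it occurs among the matched images, weighted by their visual similarity to the query.
    scores = defaultdict(float)
    for img in matched.values():
        weight = visual_similarity(query.features, img.features)
        for tag in img.tags:
            scores[tag] += 1.0 + weight
    return [tag for tag, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]
```

A real implementation would replace the cosine-similarity placeholder with a proper visual matcher (e.g. local-feature matching) and the additive tag score with the paper's statistical combination of tag and visual evidence.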
Item type: | Conference publication |
---|---|
Published: | 2012 |
Author(s): | Sergieh, Hatem Mousselly ; Gianini, Gabriele ; Döller, Mario ; Kosch, Harald ; Egyed-Zsigmond, Elöd ; Pinon, Jean-Marie |
Type of entry: | Bibliography |
Title: | Geo-based Automatic Image Annotation |
Language: | English |
Year of publication: | 2012 |
Publisher: | ACM |
Book title: | Proceedings of the 2nd ACM International Conference on Multimedia Retrieval |
Series: | ICMR '12 |
Event location: | Hong Kong, China |
DOI: | 10.1145/2324796.2324850 |
URL / URN: | https://dl.acm.org/citation.cfm?doid=2324796.2324850 |
Uncontrolled keywords: | geotagging, image annotation, image retrieval, statistical models |
ID number: | TUD-CS-2012-0372 |
Department(s): | 20 Department of Computer Science; 20 Department of Computer Science > Ubiquitous Knowledge Processing |
Date deposited: | 31 Dec 2016 14:29 |
Last modified: | 21 Sep 2018 10:23 |