Diverse Image Captioning with Grounded Style

Klein, Franz ; Mahajan, Shweta ; Roth, Stefan (2021)
Diverse Image Captioning with Grounded Style.
43rd German Conference on Pattern Recognition (GCPR 2021). (28.09.-01.10.2021)
doi: 10.1007/978-3-030-92659-5_27
Conference publication, Bibliography

Abstract

Stylized image captioning, as presented in prior work, aims to generate captions that reflect characteristics beyond a factual description of the scene composition, such as sentiment. Such prior work relies on given sentiment identifiers to express a certain global style in the caption, e.g. positive or negative, without taking the stylistic content of the visual scene into account. To address this shortcoming, we first analyze the limitations of current stylized captioning datasets and propose COCO attribute-based augmentations to obtain varied stylized captions from COCO annotations. Furthermore, we encode the style information in the latent space of a variational autoencoder; specifically, we leverage extracted image attributes to explicitly structure its sequential latent space according to different localized style characteristics. Our experiments on the Senticap and COCO datasets show the ability of our approach to generate accurate captions with diversity in styles that are grounded in the image.
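The core idea of conditioning a sequential latent space on localized image attributes can be illustrated with a minimal sketch. Everything below is hypothetical and not taken from the paper: the dimensions, the linear projections `W_mu`/`W_logvar`, and the per-step attribute vectors are illustrative stand-ins for the learned components of the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper).
LATENT_DIM = 8   # dimensionality of each latent variable z_t
ATTR_DIM = 5     # number of detected image attributes (e.g. COCO attributes)
SEQ_LEN = 4      # number of steps in the sequential latent space

# Stand-ins for learned projections that map a localized attribute
# vector to the parameters of that step's Gaussian prior.
W_mu = rng.normal(size=(ATTR_DIM, LATENT_DIM)) * 0.1
W_logvar = rng.normal(size=(ATTR_DIM, LATENT_DIM)) * 0.1

def attribute_conditioned_prior(attrs):
    """Per-step Gaussian prior parameters, one row of `attrs` per step,
    so different localized style characteristics shape different z_t."""
    mu = attrs @ W_mu            # (SEQ_LEN, LATENT_DIM)
    logvar = attrs @ W_logvar    # (SEQ_LEN, LATENT_DIM)
    return mu, logvar

def sample_latents(mu, logvar):
    """Reparameterized sample z_t ~ N(mu_t, sigma_t^2) for each step."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# One localized attribute vector per latent step (e.g. soft detections).
attrs = rng.uniform(size=(SEQ_LEN, ATTR_DIM))
mu, logvar = attribute_conditioned_prior(attrs)
z = sample_latents(mu, logvar)
print(z.shape)  # (4, 8)
```

Sampling different `eps` (or varying the attribute vectors) yields different latent sequences and hence diverse stylized captions when `z` is fed to a decoder, which is the mechanism the abstract alludes to.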

Item type: Conference publication
Published: 2021
Author(s): Klein, Franz ; Mahajan, Shweta ; Roth, Stefan
Type of entry: Bibliography
Title: Diverse Image Captioning with Grounded Style
Language: English
Date of publication: 28 September 2021
Publisher: Springer
Book title: Pattern Recognition
Series: Lecture Notes in Computer Science
Series volume: 13024
Event title: 43rd German Conference on Pattern Recognition (GCPR 2021)
Event dates: 28.09.–01.10.2021
DOI: 10.1007/978-3-030-92659-5_27
URL / URN: https://link.springer.com/chapter/10.1007/978-3-030-92659-5_...
Division(s): 20 Department of Computer Science
20 Department of Computer Science > Visual Inference
Deposit date: 08 Mar 2022 07:57
Last modified: 08 Mar 2022 07:57