TU Darmstadt / ULB / TUbiblio

Empirical Evaluation of Supervision Signals for Style Transfer Models

Puzikov, Yevgeniy ; Simoes, Stanley ; Gurevych, Iryna ; Schweizer, Immanuel (2021)
Empirical Evaluation of Supervision Signals for Style Transfer Models.
doi: 10.48550/arXiv.2101.06172
Report, Bibliography

Abstract

Text style transfer has gained increasing attention from the research community in recent years. However, the proposed approaches vary in many ways, which makes it hard to assess the individual contributions of the model components. In style transfer, the most important component is the optimization technique used to guide learning in the absence of parallel training data. In this work we empirically compare the dominant optimization paradigms that provide supervision signals during training: backtranslation, adversarial training, and reinforcement learning. We find that backtranslation has model-specific limitations that inhibit the training of style transfer models. Reinforcement learning shows the best performance gains, while adversarial training, despite its popularity, offers no advantage over it. We also experiment with Minimum Risk Training, a technique popular in the machine translation community that, to our knowledge, has not previously been empirically evaluated on the style transfer task. We fill this research gap and empirically show its efficacy.
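The Minimum Risk Training objective mentioned in the abstract minimizes the expected task cost over a renormalized distribution of sampled candidate outputs. A minimal, framework-free sketch of that expected-risk computation follows; the function name, the sharpness default, and the notion of "risk" here are illustrative assumptions, not details taken from the paper:

```python
import math

def mrt_loss(log_probs, risks, alpha=0.5):
    """Minimum Risk Training loss over a set of sampled candidates.

    log_probs: model log-probabilities of each candidate sequence
    risks: task-specific cost of each candidate (e.g. 1 - style accuracy)
    alpha: sharpness of the renormalized candidate distribution
    """
    # Renormalize candidate probabilities: softmax over alpha-scaled log-probs,
    # shifted by the max for numerical stability.
    scaled = [alpha * lp for lp in log_probs]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    q = [e / z for e in exps]
    # Expected risk under the renormalized distribution
    return sum(qi * ri for qi, ri in zip(q, risks))
```

Because the loss is an expectation over sampled candidates, it can incorporate any non-differentiable quality metric as the risk, which is what makes MRT attractive when no parallel training data supplies a direct supervision signal.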

Item type: Report
Published: 2021
Author(s): Puzikov, Yevgeniy ; Simoes, Stanley ; Gurevych, Iryna ; Schweizer, Immanuel
Entry type: Bibliography
Title: Empirical Evaluation of Supervision Signals for Style Transfer Models
Language: English
Date of publication: 15 January 2021
Publisher: arXiv
Series: Computation and Language
Edition: Version 1
DOI: 10.48550/arXiv.2101.06172
URL / URN: https://arxiv.org/abs/2101.06172
Free keywords: UKP_p_TGTOVE,FAZIT
Additional information:

Preprint

Division(s): 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
Date deposited: 20 Jan 2021 16:09
Last modified: 11 Jul 2024 07:27