Eger, Steffen ; Daxenberger, Johannes ; Gurevych, Iryna (2017)
Neural End-to-End Learning for Computational Argumentation Mining.
Vancouver, Canada
Conference publication, Bibliography
Abstract
We investigate neural techniques for end-to-end computational argumentation mining (AM). We frame AM both as a token-based dependency parsing and as a token-based sequence tagging problem, including a multi-task learning setup. Contrary to models that operate on the argument component level, we find that framing AM as dependency parsing leads to subpar performance results. In contrast, less complex (local) tagging models based on BiLSTMs perform robustly across classification scenarios, being able to catch long-range dependencies inherent to the AM problem. Moreover, we find that jointly learning ‘natural’ subtasks, in a multi-task learning setup, improves performance.
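The "token-based sequence tagging" framing mentioned in the abstract can be illustrated with a minimal sketch: argument components are marked with BIO labels per token, and component spans are recovered by decoding the tag sequence. This is a generic illustration, not the paper's exact label set (the paper's tags additionally encode relation information); the component types `Claim`/`Premise` and the example sentence are assumptions for demonstration.

```python
# Illustrative sketch: decoding token-level BIO tags (e.g. "B-Claim",
# "I-Premise", "O") back into argument-component spans. The label set and
# example are hypothetical, not taken from the paper.

def bio_to_spans(tokens, tags):
    """Convert parallel token/BIO-tag lists to (component_type, text) spans."""
    spans = []
    start, ctype = None, None
    for i, tag in enumerate(tags + ["O"]):  # trailing sentinel flushes the last span
        if tag.startswith("B-") or tag == "O":
            if ctype is not None:  # close the currently open span
                spans.append((ctype, " ".join(tokens[start:i])))
                start, ctype = None, None
            if tag.startswith("B-"):  # open a new span
                start, ctype = i, tag[2:]
        elif tag.startswith("I-") and ctype is None:
            # tolerate an I- tag without a preceding B- (a common tagger error)
            start, ctype = i, tag[2:]
    return spans

tokens = ["Smoking", "harms", "health", ",", "so", "it", "should", "be", "banned"]
tags = ["B-Premise", "I-Premise", "I-Premise", "O", "O",
        "B-Claim", "I-Claim", "I-Claim", "I-Claim"]
print(bio_to_spans(tokens, tags))
# → [('Premise', 'Smoking harms health'), ('Claim', 'it should be banned')]
```

In the paper's setup, a BiLSTM tagger would predict one such tag per token; decoding as above turns the end-to-end tagging output into component-level predictions.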
| Type of entry: | Conference publication |
|---|---|
| Published: | 2017 |
| Author(s): | Eger, Steffen ; Daxenberger, Johannes ; Gurevych, Iryna |
| Entry type: | Bibliography |
| Title: | Neural End-to-End Learning for Computational Argumentation Mining |
| Language: | English |
| Year of publication: | July 2017 |
| Publisher: | Association for Computational Linguistics |
| Book title: | Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017) |
| Series volume: | Volume 1: Long Papers |
| Event location: | Vancouver, Canada |
| URL / URN: | http://aclweb.org/anthology/P17-1002 |
| Uncontrolled keywords: | UKP_a_DLinNLP, UKP_a_ArMin, reviewed, UKP_p_ArgumenText |
| ID number: | TUD-CS-2017-0070 |
| Department(s): | 20 Department of Computer Science > Ubiquitous Knowledge Processing |
| Date deposited: | 31 Mar 2017 14:02 |
| Last modified: | 24 Jan 2020 12:03 |
| Projects: | ArgumenText |