What makes a convincing argument? Empirical analysis and detecting attributes of convincingness in Web argumentation

Habernal, Ivan and Gurevych, Iryna (2016):
What makes a convincing argument? Empirical analysis and detecting attributes of convincingness in Web argumentation.
In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, Austin, Texas. Online: http://www.aclweb.org/anthology/D16-1129.
[Conference or Workshop Item]

Abstract

This article tackles a challenging new task in computational argumentation. Given a pair of arguments on a controversial topic, we aim to directly assess qualitative properties of the arguments in order to explain why one argument is more convincing than the other. We approach this task in a fully empirical manner by annotating 26k explanations written in natural language. These explanations describe the convincingness of the arguments in a given argument pair, such as their strengths or flaws. We create a new crowd-sourced corpus containing 9,111 argument pairs, multi-labeled with 17 classes, cleaned and curated using several strict quality measures. We propose two tasks on this data set: (1) predicting the full label distribution and (2) classifying the types of flaws in less convincing arguments. Our experiments with feature-rich SVM learners and bidirectional LSTM neural networks with convolution and attention mechanisms reveal that such fine-grained analysis of Web argument convincingness is a very challenging task. We release the new UKPConvArg2 corpus and software under permissive licenses to the research community.
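The abstract names bidirectional LSTM networks with convolution and attention as one of the evaluated model families for multi-label prediction over 17 classes. The sketch below is a purely illustrative outline of what such an architecture could look like; it is not the authors' released implementation, and the class name BiLstmConvAttention, all hyperparameters, and the input encoding are assumptions made only for this example.

# Illustrative sketch (not the paper's implementation): a BiLSTM with
# convolution and attention that scores 17 convincingness labels for an
# argument pair encoded as a single token-id sequence.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLstmConvAttention(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=100, hidden_dim=64, num_labels=17):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # 1D convolution over the embedded token sequence (channels = embedding dims)
        self.conv = nn.Conv1d(embed_dim, embed_dim, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Simple additive attention over the BiLSTM outputs
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.out = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids):                      # token_ids: (batch, seq_len)
        x = self.embedding(token_ids)                  # (batch, seq_len, embed_dim)
        x = F.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        h, _ = self.lstm(x)                            # (batch, seq_len, 2*hidden_dim)
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=1)   # (batch, seq_len)
        pooled = torch.bmm(weights.unsqueeze(1), h).squeeze(1)     # (batch, 2*hidden_dim)
        return torch.sigmoid(self.out(pooled))         # independent per-label scores

# Usage sketch: random ids stand in for tokenized argument pairs;
# multi-label targets are trained with binary cross-entropy.
model = BiLstmConvAttention()
batch = torch.randint(1, 20000, (8, 50))               # 8 pairs, 50 tokens each (toy data)
scores = model(batch)                                   # (8, 17)
loss = F.binary_cross_entropy(scores, torch.rand(8, 17).round())

The sigmoid outputs with binary cross-entropy reflect the multi-label setup described in the abstract; the released UKPConvArg2 software may differ in architecture and training details.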

Item Type: Conference or Workshop Item
Published: 2016
Creators: Habernal, Ivan and Gurevych, Iryna
Title: What makes a convincing argument? Empirical analysis and detecting attributes of convincingness in Web argumentation
Language: English

Title of Book: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Publisher: Association for Computational Linguistics
Uncontrolled Keywords: UKP_a_ArMin;UKP_p_ArguAna
Divisions: 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
DFG-Graduiertenkollegs
DFG-Graduiertenkollegs > Research Training Group 1994 Adaptive Preparation of Information from Heterogeneous Sources
Event Location: Austin, Texas
Date Deposited: 31 Dec 2016 14:29
Official URL: http://www.aclweb.org/anthology/D16-1129
Identification Number: TUD-CS-2016-0180