
Still not there? Comparing Traditional Sequence-to-Sequence Models to Encoder-Decoder Neural Networks on Monotone String Translation Tasks

Schnober, Carsten and Eger, Steffen and Do Dinh, Erik-Lân and Gurevych, Iryna (2016):
Still not there? Comparing Traditional Sequence-to-Sequence Models to Encoder-Decoder Neural Networks on Monotone String Translation Tasks.
In: Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, The COLING 2016 Organizing Committee, Osaka, Japan. [Online edition: http://aclweb.org/anthology/C16-1160]
[Conference or Workshop Item]

Abstract

We analyze the performance of encoder-decoder neural models and compare them with well-known established methods. The latter represent different classes of traditional approaches that are applied to the monotone sequence-to-sequence tasks OCR post-correction, spelling correction, grapheme-to-phoneme conversion, and lemmatization. Such tasks are of practical relevance for various higher-level research fields including digital humanities, automatic text correction, and speech recognition. We investigate how well generic deep-learning approaches adapt to these tasks, and how they perform in comparison with established and more specialized methods, including our own adaptation of pruned CRFs.
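"Monotone" here means the output characters follow the order of the input characters, with no reordering. A toy sketch of such a task (a hand-crafted illustration of OCR post-correction, not the paper's method; the substitution table is an assumption for demonstration only):

```python
# Toy monotone string translation: each input character maps, in order,
# to an output character, so the output preserves the input's order.
# The mapping below is a hypothetical set of OCR-style confusions.
OCR_FIXES = {"0": "o", "1": "l", "5": "s"}

def correct_monotone(text: str) -> str:
    """Apply per-character substitutions left to right (monotone:
    no characters are reordered)."""
    return "".join(OCR_FIXES.get(ch, ch) for ch in text)

print(correct_monotone("he110 w0r1d"))  # -> "hello world"
```

The methods compared in the paper learn such character-level correspondences from data rather than using a fixed table; this sketch only illustrates the monotone, character-aligned nature of the tasks.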

Item Type: Conference or Workshop Item
Published: 2016
Creators: Schnober, Carsten and Eger, Steffen and Do Dinh, Erik-Lân and Gurevych, Iryna
Title: Still not there? Comparing Traditional Sequence-to-Sequence Models to Encoder-Decoder Neural Networks on Monotone String Translation Tasks
Language: English
Title of Book: Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers
Publisher: The COLING 2016 Organizing Committee
Uncontrolled Keywords: UKP-DIPF;UKP_reviewed;UKP_a_DLinNLP
Divisions: 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
Event Location: Osaka, Japan
Date Deposited: 31 Dec 2016 14:29
Official URL: http://aclweb.org/anthology/C16-1160
Identification Number: TUD-CS-2016-1450