
Optimal Hyperparameters for Deep LSTM-Networks for Sequence Labeling Tasks

Reimers, Nils and Gurevych, Iryna (2017):
Optimal Hyperparameters for Deep LSTM-Networks for Sequence Labeling Tasks.
In: arXiv preprint arXiv:1707.06799, [Online-Edition: https://arxiv.org/abs/1707.06799],
[Article]

Abstract

Selecting optimal parameters for a neural network architecture can often make the difference between mediocre and state-of-the-art performance. However, little is published on which parameters and design choices should be evaluated or selected, making correct hyperparameter optimization often a "black art that requires expert experiences" (Snoek et al., 2012). In this paper, we evaluate the importance of different network design choices and hyperparameters for five common linguistic sequence tagging tasks (POS, Chunking, NER, Entity Recognition, and Event Detection). We evaluated over 50,000 different setups and found that some parameters, like the pre-trained word embeddings or the last layer of the network, have a large impact on the performance, while other parameters, for example the number of LSTM layers or the number of recurrent units, are of minor importance. We give a recommendation on a configuration that performs well across different tasks.
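
The paper itself contains the recommended configuration; as a generic illustration of the design choices named in the abstract (number of LSTM layers, number of recurrent units, choice of final layer), the following is a minimal BiLSTM sequence-tagger sketch in PyTorch. The hyperparameter values and names shown here are placeholders for illustration, not the paper's recommendations.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Generic BiLSTM sequence tagger. The constructor arguments mirror the
    design choices discussed in the abstract; the defaults are arbitrary."""

    def __init__(self, vocab_size, num_tags, embedding_dim=100,
                 recurrent_units=100, num_lstm_layers=2):
        super().__init__()
        # The paper finds pre-trained word embeddings matter most; here the
        # embedding layer is randomly initialized purely for illustration.
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, recurrent_units,
                            num_layers=num_lstm_layers,
                            bidirectional=True, batch_first=True)
        # Final layer: a per-token linear classifier (softmax output);
        # the paper compares such a softmax layer against a CRF layer.
        self.classifier = nn.Linear(2 * recurrent_units, num_tags)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)   # (batch, seq, embedding_dim)
        hidden, _ = self.lstm(embedded)        # (batch, seq, 2 * units)
        return self.classifier(hidden)         # (batch, seq, num_tags)

model = BiLSTMTagger(vocab_size=1000, num_tags=9)
logits = model(torch.randint(0, 1000, (4, 20)))
print(logits.shape)  # torch.Size([4, 20, 9])
```

Swapping the linear classifier for a CRF output layer, or varying `num_lstm_layers` and `recurrent_units`, reproduces the kind of design-choice comparison the paper evaluates.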

Item Type: Article
Published: 2017
Creators: Reimers, Nils and Gurevych, Iryna
Title: Optimal Hyperparameters for Deep LSTM-Networks for Sequence Labeling Tasks
Language: English
Journal or Publication Title: arXiv preprint arXiv:1707.06799
Divisions: 20 Department of Computer Science
20 Department of Computer Science > Ubiquitous Knowledge Processing
DFG-Graduiertenkollegs
DFG-Graduiertenkollegs > Research Training Group 1994 Adaptive Preparation of Information from Heterogeneous Sources
Date Deposited: 25 Jul 2017 10:47
Official URL: https://arxiv.org/abs/1707.06799
Identification Number: TUD-CS-2017-0196
