
Exploiting Code Redundancies in ECOC

Park, Sang-Hyeun ; Weizsäcker, Lorenz ; Fürnkranz, Johannes
Eds.: Pfahringer, Bernhard ; Holmes, Geoff ; Hoffmann, Achim (2010)
Exploiting Code Redundancies in ECOC.
doi: 10.1007/978-3-642-16184-1_19
Conference publication, Bibliography

Abstract

We study an approach for speeding up the training of error-correcting output codes (ECOC) classifiers. The key idea is to avoid unnecessary computations by exploiting the overlap of the different training sets in the ECOC ensemble. Instead of re-training each classifier from scratch, classifiers that have been trained for one task can be adapted to related tasks in the ensemble. The crucial issue is the identification of a schedule for training the classifiers which maximizes the exploitation of the overlap. To solve this problem, we construct a classifier graph in which the nodes correspond to the classifiers, and the edges represent the training complexity for moving from one classifier to the next in terms of the number of added training examples. The solution of the Steiner Tree problem is an arborescence in this graph which describes the learning scheme with the minimal total training complexity. We experimentally evaluate the algorithm with Hoeffding trees, as an example of incremental learners where the classifier adaptation is trivial, and with SVMs, where we employ an adaptation strategy based on adapted caching and weight reuse, which guarantees that the learned model is the same as with batch learning.
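
The scheduling idea outlined above lends itself to a short illustration. The following sketch is not the authors' implementation: it builds a directed classifier graph for a toy ECOC code matrix, weights each edge with the number of training examples that would have to be added to adapt one binary classifier into another, and extracts a cheap training order with a minimum spanning arborescence from networkx as a simpler stand-in for the Steiner tree formulation used in the paper. The code matrix, class sizes, and the virtual "scratch" root are illustrative assumptions.

# Minimal sketch (not the authors' code): build the classifier graph described
# in the abstract and extract a cheap training schedule from it.
import networkx as nx

# Toy ECOC code matrix: rows = classes, columns = binary classifiers.
# +1/-1 assign a class to the positive/negative meta-class, 0 leaves it out.
code_matrix = [
    [+1, +1,  0],
    [-1, +1, +1],
    [+1, -1, -1],
    [ 0, -1, +1],
]
class_counts = [120, 80, 150, 60]  # assumed number of training examples per class

n_classes = len(code_matrix)
n_classifiers = len(code_matrix[0])
clf = [f"clf{j}" for j in range(n_classifiers)]

# Classes (and hence training examples) used by each binary classifier.
used = [{c for c in range(n_classes) if code_matrix[c][j] != 0}
        for j in range(n_classifiers)]

def added_examples(src, dst):
    # Examples contained in dst's training set but not in src's.
    return sum(class_counts[c] for c in used[dst] - used[src])

ROOT = "scratch"  # virtual node: training a classifier from the empty set
G = nx.DiGraph()
for j in range(n_classifiers):
    G.add_edge(ROOT, clf[j], weight=sum(class_counts[c] for c in used[j]))
    for k in range(n_classifiers):
        if j != k:
            G.add_edge(clf[j], clf[k], weight=added_examples(j, k))

# The cheapest directed tree that reaches every classifier gives an adaptation
# schedule: each edge says "train the child by adapting the parent".
schedule = nx.minimum_spanning_arborescence(G)
for parent, child in schedule.edges():
    cost = G[parent][child]["weight"]
    print(f"train {child} by adapting {parent} (+{cost} added examples)")

The paper's formulation additionally admits intermediate nodes for partial training sets shared by several classifiers, which is what turns the scheduling task into a Steiner tree (arborescence) problem; the plain spanning arborescence above only chooses among the classifiers themselves.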

Item type: Conference publication
Published: 2010
Editors: Pfahringer, Bernhard ; Holmes, Geoff ; Hoffmann, Achim
Author(s): Park, Sang-Hyeun ; Weizsäcker, Lorenz ; Fürnkranz, Johannes
Entry type: Bibliography
Title: Exploiting Code Redundancies in ECOC
Language: English
Year of publication: 2010
Publisher: Springer Berlin / Heidelberg
Book title: Discovery Science
Series volume: 6332
DOI: 10.1007/978-3-642-16184-1_19

Department(s): 20 Department of Computer Science > Knowledge Engineering
20 Department of Computer Science
Date deposited: 24 Jun 2011 14:20
Last modified: 05 Mar 2013 09:49