TU Darmstadt / ULB / TUbiblio

Patching - A Framework for Adapting Immutable Classifiers to Evolving Domains

Kauschke, Sebastian (2019):
Patching - A Framework for Adapting Immutable Classifiers to Evolving Domains.
Darmstadt, Technische Universität, [Online-Edition: https://tuprints.ulb.tu-darmstadt.de/9089],
[Ph.D. Thesis]

Abstract

Machine learning models are subject to changing circumstances and will degrade over time. Nowadays, data is collected in vast amounts: personal data is gathered by our phones, by our internet browsers, via our shopping behavior, and especially through all the content that we upload to social media platforms. Machines in factories, cars, essentially every device that is not purely mechanical anymore, may also be collecting data. This data is often used to build predictive models, e.g., for recommender systems or remaining-lifetime estimation. Like all things in life, the data and the knowledge extracted from a person or machine are subject to change, which is called concept drift. This concept drift may be caused by varying circumstances, changes in the expected outcome, or completely new requirements for the task. In any case, to keep a model operative, adaptive learning mechanisms are required to deal with the drift. Related work in this area covers a plethora of adaptive learning mechanisms. Usually, these algorithms are made to learn on streams of data from scratch. However, we argue that in many real-world scenarios this type of learning does not fit the actual application. Rather, stationary models are trained in a sandbox environment on large datasets and then put into practical use. If these models are not specifically constructed to be adaptive, any concept drift will lower their performance. Since training such a model, e.g., a deep neural network, can be expensive in terms of cost and time, it is desirable to use it as long as possible. We introduce a new paradigm of adapting existing models. Our goal is to keep the existing model as long as possible and only adapt it to the concept drift where necessary. We solve this by computing partial adaptations, so-called patches. Via this mechanism, we ensure that the existing model lives longer and keep the learning required for adaptation to a minimum.
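The partial-adaptation idea can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis' implementation: the class and method names are made up for this sketch, scikit-learn decision trees stand in for the base and patch models, and an error estimator (a classifier trained to predict where the base model errs) routes each instance either to the immutable base model or to a patch.

```python
from sklearn.tree import DecisionTreeClassifier


class Patching:
    """Illustrative sketch of patch-based adaptation, not the thesis' code."""

    def __init__(self, base_model):
        self.base = base_model                            # immutable, pre-trained model
        self.error_estimator = DecisionTreeClassifier()   # predicts where the base errs
        self.patch = DecisionTreeClassifier()             # partial adaptation ("patch")

    def adapt(self, X, y):
        """Learn a patch from post-drift data without touching the base model."""
        errs = self.base.predict(X) != y                  # True where the base fails
        self.error_estimator.fit(X, errs)
        if errs.any():
            self.patch.fit(X[errs], y[errs])              # train only on the error region

    def predict(self, X):
        """Route each instance to the base model or to the patch."""
        pred = self.base.predict(X)
        use_patch = self.error_estimator.predict(X).astype(bool)
        if use_patch.any():
            pred[use_patch] = self.patch.predict(X[use_patch])
        return pred
```

Only the error region is relearned, so the base model keeps serving all instances it still classifies well.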
The Patching mechanism extends the lifetime of a machine-learned model, helps it adapt with fewer observed instances, aids in individualizing an existing model, and generally increases the model's cost efficiency. In this dissertation, we first introduce a general framework for learning patches as adaptation mechanisms. We evaluate the concept and compare it against state-of-the-art stream learning mechanisms. In generic stream scenarios, Patching keeps pace with these methods; in the scenarios it is intended for, however, Patching excels in adaptation speed and overall performance. In a second contribution, we specialize the Patching idea for neural networks. Since neural networks are expensive and time-consuming to train, we require a way of adapting them quickly. Although neural networks can be adapted via the normal training process, training them on newer data can lead to side effects such as catastrophic forgetting. Depending on the size and complexity of the network, adapting it can also be either expensive or, when given only few examples, unsuccessful. We propose neural network patching (NN-Patching) as a solution to this issue. In NN-Patching, the underlying network remains unchanged; instead, a neural patch is trained on the inner activations of the base network. These represent latent features that can be useful for the given task. An error estimator network determines whether the patch network or the base network is better suited to classify an instance. NN-Patching shows even more significant improvements than Patching, with quick adaptation and overall adaptive capabilities that rival those of the theoretically more capable competition. The final contribution is geared towards scenarios that require model individualization or deal with recurring concepts. For this task we propose Ensemble Patching, a variant of Patching that builds an ensemble of patches.
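The NN-Patching idea of tapping the frozen base network's inner activations can be sketched like this. It is a hedged toy, not the thesis' architecture: a scikit-learn MLP stands in for the base network, logistic regressions stand in for the neural patch and the error estimator, and concatenating the raw input with the hidden activations is a sketch-level simplification. All names are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier


def latent_features(net, X):
    """Raw input concatenated with the base net's first hidden ReLU layer
    (tapping the inner activations; the concatenation is a sketch assumption)."""
    H = np.maximum(0.0, X @ net.coefs_[0] + net.intercepts_[0])
    return np.hstack([X, H])


class NNPatching:
    """Illustrative sketch: the base network stays frozen throughout."""

    def __init__(self, base_net):
        self.base = base_net                      # frozen, pre-trained network
        self.estimator = LogisticRegression()     # flags instances the base gets wrong
        self.patch = LogisticRegression()         # patch stand-in on latent features

    def adapt(self, X, y):
        F = latent_features(self.base, X)
        errs = self.base.predict(X) != y
        self.estimator.fit(F, errs)
        if errs.any():
            self.patch.fit(F[errs], y[errs])

    def predict(self, X):
        F = latent_features(self.base, X)
        pred = self.base.predict(X)
        route = self.estimator.predict(F).astype(bool)
        if route.any():
            pred[route] = self.patch.predict(F[route])
        return pred
```

Because the patch reuses the base network's latent features instead of learning a representation from scratch, it can adapt from comparatively few post-drift examples.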
These patches are learned such that each covers a distinct type of concept drift. When a new concept emerges, a characteristic error pattern occurs in the base classifier, and a specific patch is then learned for it. All ensemble members are managed via a recurrent network called the ensemble conductor. This separately trained model conducts the ensemble decision and is the key component of the adaptation. When concepts become outdated, the conductor puts less weight on the decisions of the respective patches, but by its structure it can quickly reactivate them should older concepts become relevant again. Our evaluation demonstrates that this ensemble technique handles recurring concepts very well. Ensemble Patching can also be employed in stream classification scenarios where computational efficiency is important.
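In the thesis the conductor is a recurrent network; as a stand-in, the toy conductor below weights ensemble members by an exponential moving average of their recent correctness, which already exhibits the described behavior: members of an outdated concept fade, and members of a recurring concept are reactivated quickly. Class and parameter names are illustrative only.

```python
import numpy as np


class EnsembleConductor:
    """Toy stand-in for the recurrent ensemble conductor."""

    def __init__(self, members, decay=0.9):
        self.members = members                       # fitted patch classifiers
        self.decay = decay                           # memory of past correctness
        self.weights = np.ones(len(members)) / len(members)

    def predict(self, x):
        """Weighted vote of all ensemble members for one instance."""
        votes = np.array([m.predict(x.reshape(1, -1))[0] for m in self.members])
        scores = np.bincount(votes, weights=self.weights)
        return int(np.argmax(scores))

    def update(self, x, y):
        """Shift weight toward members that got the latest label right."""
        votes = np.array([m.predict(x.reshape(1, -1))[0] for m in self.members])
        hit = (votes == y).astype(float)
        self.weights = self.decay * self.weights + (1 - self.decay) * hit
        self.weights /= self.weights.sum()
```

Because a faded member's weight recovers after only a handful of correct predictions, a recurring concept does not have to be relearned from scratch.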

Item Type: Ph.D. Thesis
Published: 2019
Creators: Kauschke, Sebastian
Title: Patching - A Framework for Adapting Immutable Classifiers to Evolving Domains
Language: English

Place of Publication: Darmstadt
Divisions: 20 Department of Computer Science
20 Department of Computer Science > Knowledge Engineering
20 Department of Computer Science > Telecooperation
Date Deposited: 29 Sep 2019 19:55
Official URL: https://tuprints.ulb.tu-darmstadt.de/9089
URN: urn:nbn:de:tuda-tuprints-90890
Referees: Fürnkranz, Prof. Dr. Johannes and Mühlhäuser, Prof. Dr. Max and Hammer, Prof. Dr. Barbara
Refereed / Defense / Oral Examination: 6 September 2019
Alternative Abstract:
Language: German

Models created by machine learning are subject to changing conditions, which can degrade their performance over time. Nowadays, data about us is collected in all areas of life: via our smartphones, our internet browsers, our shopping behavior, and of course through all the content that we share and upload on social networks. Machines in factories, cars, essentially every device that is no longer purely mechanical, can collect data about how it is used. Manufacturers like to use this data to build predictive models, e.g., for purchase recommendations or wear prediction. Like all things in life, the information and knowledge extracted from such data are subject to constant change. We call this concept drift. This concept drift can take various forms: changed framework conditions, changed expectations of the model's output, or completely new requirements can all be causes. In any case, an existing model must be adapted, or even relearned, to withstand these changes. Related work in this area covers a large number of adaptive learning methods. Often, these algorithms are designed so that the entire learning process takes place on a data stream. We argue, however, that real applications rarely proceed this way. Instead, stationary models are trained on large datasets in a sandbox and then deployed into a production system. Unless these models were explicitly designed to be adaptive, their performance can be negatively affected by concept drift. Training such a model, e.g., a deep neural network, is usually a lengthy and costly process, which makes a long usable lifetime of the model desirable.
We introduce a new concept for adapting existing models: our goal is to significantly extend the lifetime of existing models and to adapt a model only where it makes errors. To this end, we compute partial model adaptations, so-called patches. Whether a patch is needed is determined by an error estimator. This estimator and the patch are adaptive and are kept up to date with the newest data. Where necessary, they can intercept problematic instances and classify them correctly; unproblematic instances are still classified by the base model. This mechanism minimizes the learning effort required for adaptation while keeping the model operational. In this dissertation, we first introduce a general framework for learning patch adaptations. We evaluate the concept and compare it to existing mechanisms. On generic data streams, the Patching method keeps pace with the state of the art; in scenarios matching Patching's primary purpose, however, we achieve significant improvements in adaptation speed and overall performance. In a second contribution, we specialize the Patching idea for use with neural networks. Building neural networks involves considerable time and computational effort. While they are in principle suited to continuous adaptation through further training steps, this is likewise costly and can lead to undesirable side effects such as catastrophic forgetting. We present Neural Network Patching (NN-Patching) as a solution to this problem: in NN-Patching, the base network remains untouched and the adaptation takes place through a neural patch. As in Patching, an error estimator judges whether an instance needs to be classified by the patch.
The advantage of neural patches lies in the possibility of tapping the inner activations of the base network. This allows latent feature representations from the base network to be reused, which aids adaptation. Here, too, NN-Patching achieves significant improvements over common adaptation methods. The final contribution of this dissertation addresses Patching for scenarios with recurring concepts or an individualization character. For this we present Ensemble Patching, a variant of Patching that generates an ensemble of patches. Each of these patches is trained to cover a certain type of concept drift. When a new concept occurs, it causes a specific error pattern in the base classifier; for each specific pattern, a patch is learned. A recurrent neural network subsequently learns to produce an overall decision from the current situation and the outputs of all patches. This so-called ensemble conductor can switch quickly between different concepts and thus react to recurring ones. Moreover, Ensemble Patching can also be used in computation-time-critical data stream scenarios, since the computational cost of adaptation has been reduced.