Preference-Based Policy Iteration: Leveraging Preference Learning for Reinforcement Learning

Cheng, Weiwei ; Fürnkranz, Johannes ; Hüllermeier, Eyke ; Park, Sang-Hyeun (2011)
Preference-Based Policy Iteration: Leveraging Preference Learning for Reinforcement Learning.
doi: 10.1007/978-3-642-23780-5_30
Conference publication, Bibliography

Abstract

This paper makes a first step toward the integration of two subfields of machine learning, namely preference learning and reinforcement learning (RL). An important motivation for a "preference-based" approach to reinforcement learning is a possible extension of the type of feedback an agent may learn from. In particular, while conventional RL methods are essentially confined to dealing with numerical rewards, there are many applications in which this type of information is not naturally available, and in which only qualitative reward signals are provided instead. Therefore, building on novel methods for preference learning, our general goal is to equip the RL agent with qualitative policy models, such as ranking functions that allow for sorting its available actions from most to least promising, as well as algorithms for learning such models from qualitative feedback. Concretely, in this paper, we build on an existing method for approximate policy iteration based on roll-outs. While this approach is based on the use of classification methods for generalization and policy learning, we make use of a specific type of preference learning method called label ranking. Advantages of our preference-based policy iteration method are illustrated by means of two case studies.
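The abstract describes the approach only at a high level: sample states, estimate action values by roll-outs under the current policy, turn those estimates into pairwise action preferences, train a label ranker (e.g., by pairwise comparison) instead of a classifier, and let the improved policy pick the top-ranked action. Below is a minimal illustrative sketch of such a loop on an assumed toy chain MDP, using logistic regression as the pairwise base learner; the environment, feature map, and all helper names (step, rollout_return, ...) are assumptions made for illustration, not the authors' implementation.

    # Minimal sketch of preference-based policy iteration with roll-outs and
    # label ranking by pairwise comparison. Toy example, not the paper's code.
    import itertools
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    N_STATES, ACTIONS, GAMMA, HORIZON = 10, (0, 1), 0.9, 15   # assumed toy chain MDP

    def step(s, a, rng):
        # Toy dynamics: action 1 tends to move right, action 0 left; reward at the right end.
        s2 = min(s + 1, N_STATES - 1) if (a == 1) != (rng.random() < 0.1) else max(s - 1, 0)
        return s2, 1.0 if s2 == N_STATES - 1 else 0.0

    def rollout_return(s, a, policy, rng):
        # Monte-Carlo estimate of Q(s, a): take a once, then follow the current policy.
        g, disc = 0.0, 1.0
        for _ in range(HORIZON):
            s, r = step(s, a, rng)
            g += disc * r
            disc *= GAMMA
            a = policy(s)
        return g

    def features(s):
        return np.array([1.0, s / (N_STATES - 1)])             # simple state features

    def preference_based_policy_iteration(n_iters=5, n_states=30, n_rollouts=10, seed=0):
        rng = np.random.default_rng(seed)
        policy = lambda s: rng.choice(ACTIONS)                  # start from a random policy
        for _ in range(n_iters):
            # 1) Roll-outs: estimate Q(s, a) for sampled states and all actions.
            X = []
            prefs = {pair: [] for pair in itertools.combinations(ACTIONS, 2)}
            for _ in range(n_states):
                s = rng.integers(N_STATES)
                q = {a: np.mean([rollout_return(s, a, policy, rng) for _ in range(n_rollouts)])
                     for a in ACTIONS}
                X.append(features(s))
                # 2) Convert value estimates into pairwise action preferences.
                for a, b in prefs:
                    prefs[(a, b)].append(1 if q[a] > q[b] else 0)
            X = np.array(X)
            # 3) Label ranking by pairwise comparison: one binary model per action pair.
            models = {}
            for pair, y in prefs.items():
                if len(set(y)) > 1:                             # need both classes to fit
                    models[pair] = LogisticRegression().fit(X, y)
            # 4) Improved policy: rank actions by pairwise votes, take the top-ranked one.
            def policy(s, models=models):
                votes = {a: 0.0 for a in ACTIONS}
                for (a, b), m in models.items():
                    p = m.predict_proba(features(s).reshape(1, -1))[0, 1]
                    votes[a] += p
                    votes[b] += 1 - p
                return max(votes, key=votes.get)
        return policy

    greedy = preference_based_policy_iteration()
    print([greedy(s) for s in range(N_STATES)])                 # learned action per state

In this sketch the ranker degenerates to a single pairwise model because the toy MDP has only two actions; with larger action sets the same voting scheme produces a full ranking, which is where the label-ranking view goes beyond plain classification-based policy iteration.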

Item type: Conference publication
Published: 2011
Author(s): Cheng, Weiwei ; Fürnkranz, Johannes ; Hüllermeier, Eyke ; Park, Sang-Hyeun
Type of entry: Bibliography
Title: Preference-Based Policy Iteration: Leveraging Preference Learning for Reinforcement Learning
Language: English
Year of publication: 2011
Publisher: Springer-Verlag
Book title: Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases
DOI: 10.1007/978-3-642-23780-5_30

Department(s)/Division(s): 20 Department of Computer Science > Knowledge Engineering
20 Department of Computer Science
Date deposited: 24 Jun 2011 13:30
Last modified: 05 Mar 2013 09:49