
Preference-Based Policy Iteration: Leveraging Preference Learning for Reinforcement Learning

Cheng, Weiwei and Fürnkranz, Johannes and Hüllermeier, Eyke and Park, Sang-Hyeun (2011):
Preference-Based Policy Iteration: Leveraging Preference Learning for Reinforcement Learning.
In: Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, Springer-Verlag.

Abstract

This paper takes a first step toward the integration of two subfields of machine learning, namely preference learning and reinforcement learning (RL). An important motivation for a "preference-based" approach to reinforcement learning is the possible extension of the type of feedback an agent may learn from. In particular, while conventional RL methods are essentially confined to dealing with numerical rewards, there are many applications in which this type of information is not naturally available, and in which only qualitative reward signals are provided instead. Therefore, building on novel methods for preference learning, our general goal is to equip the RL agent with qualitative policy models, such as ranking functions that allow for sorting its available actions from most to least promising, as well as with algorithms for learning such models from qualitative feedback. Concretely, in this paper we build on an existing method for approximate policy iteration based on roll-outs. While that approach relies on classification methods for generalization and policy learning, we instead make use of a specific type of preference learning method called label ranking. The advantages of our preference-based policy iteration method are illustrated by means of two case studies.
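
To make the idea in the abstract concrete, below is a minimal, self-contained Python sketch of preference-based policy iteration on a toy chain MDP. It follows the recipe described above: compare actions in each state by averaged roll-out returns, keep only the resulting pairwise preferences, and fit a ranking model over actions from those preferences. The toy environment, the feature map, and the perceptron-style pairwise learner are illustrative assumptions standing in for the paper's label-ranking method, not the authors' actual setup.

# Sketch: preference-based policy iteration via roll-outs (assumed toy setup).
import numpy as np

N_STATES, ACTIONS, GAMMA, HORIZON, N_ROLLOUTS = 10, (-1, +1), 0.95, 15, 5
rng = np.random.default_rng(0)

def step(s, a):
    """Toy chain MDP: move left/right, reward only at the right end."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, 1.0 if s2 == N_STATES - 1 else 0.0

def rollout(s, a, policy):
    """Monte-Carlo return of playing `a` in `s`, then following `policy`."""
    g, discount = 0.0, 1.0
    for _ in range(HORIZON):
        s, r = step(s, a)
        g += discount * r
        discount *= GAMMA
        a = policy(s)
    return g

def features(s):
    return np.array([1.0, s / (N_STATES - 1)])

def make_policy(w):
    # Rank actions by a linear utility; the greedy policy picks the top one.
    return lambda s: max(ACTIONS, key=lambda a: w[a] @ features(s))

def improve(policy):
    """One preference-based policy-iteration step: compare actions by
    averaged roll-out returns, but keep only the *order* (a preference)."""
    prefs = []  # (state, preferred action, dominated action)
    for s in range(N_STATES):
        q = {a: np.mean([rollout(s, a, policy) for _ in range(N_ROLLOUTS)])
             for a in ACTIONS}
        if q[ACTIONS[0]] != q[ACTIONS[1]]:
            better = max(ACTIONS, key=q.get)
            worse = min(ACTIONS, key=q.get)
            prefs.append((s, better, worse))
    # Fit per-action utility weights so preferred actions score higher
    # (a perceptron-style stand-in for the paper's label-ranking learner).
    w = {a: np.zeros(2) for a in ACTIONS}
    for _ in range(50):
        for s, better, worse in prefs:
            x = features(s)
            if w[better] @ x <= w[worse] @ x:
                w[better] += x
                w[worse] -= x
    return make_policy(w)

policy = lambda s: rng.choice(ACTIONS)        # random initial policy
for _ in range(3):
    policy = improve(policy)
print([policy(s) for s in range(N_STATES)])   # should mostly prefer +1

Note that the learner never sees the numeric returns themselves, only which action was preferred in each state; this is what makes the scheme compatible with purely qualitative feedback.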

Item Type: Conference or Workshop Item
Published: 2011
Creators: Cheng, Weiwei and Fürnkranz, Johannes and Hüllermeier, Eyke and Park, Sang-Hyeun
Title: Preference-Based Policy Iteration: Leveraging Preference Learning for Reinforcement Learning
Language: English
Title of Book: Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases
Publisher: Springer-Verlag
Divisions: 20 Department of Computer Science > Knowledge Engineering
20 Department of Computer Science
Date Deposited: 24 Jun 2011 13:30
Identification Number: doi:10.1007/978-3-642-23780-5_30