Browse by Person

Number of items: 8.

Hachiya, H. ; Peters, J. ; Sugiyama, M. (2011)
Reward Weighted Regression with Sample Reuse.
In: Neural Computation, 23 (11)
Article

Hachiya, H. ; Peters, J. ; Sugiyama, M. (2011)
Reward Weighted Regression with Sample Reuse for Direct Policy Search in Reinforcement Learning.
In: Neural Computation, 23 (11)
Article

Hachiya, H. ; Akiyama, T. ; Sugiyama, M. ; Peters, J. (2009)
Adaptive Importance Sampling for Value Function Approximation in On-policy Reinforcement Learning.
In: Neural Networks, 22 (10), pp. 1399-1410
Article

Hachiya, H. ; Peters, J. ; Sugiyama, M. (2009)
Adaptive Importance Sampling with Automatic Model Selection in Reward Weighted Regression.
Proceedings of the Workshop of Technical Committee on Neurocomputing.
Conference or Workshop Item

Hachiya, H. ; Akiyama, T. ; Sugiyama, M. ; Peters, J. (2009)
Efficient Data Reuse in Value Function Approximation.
Proceedings of the 2009 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning.
Conference or Workshop Item

Hachiya, H. ; Peters, J. ; Sugiyama, M. (2009)
Efficient Sample Reuse in EM-based Policy Search.
Proceedings of the 16th European Conference on Machine Learning (ECML).
Conference or Workshop Item

Hachiya, H. ; Akiyama, T. ; Sugiyama, M. ; Peters, J. (2008)
Adaptive Importance Sampling with Automatic Model Selection in Value Function Approximation.
Proceedings of the Twenty-Third National Conference on Artificial Intelligence (AAAI).
Conference or Workshop Item

Hachiya, H. ; Akiyama, T. ; Sugiyama, M. ; Peters, J. (2008)
Adaptive Importance Sampling with Automatic Model Selection in Value Function Approximation.
Twenty-Third National Conference on Artificial Intelligence (AAAI 2008). Chicago, Illinois (July 13–17, 2008)
Conference or Workshop Item