Browse by Person
2011
Hachiya, H. ; Peters, J. ; Sugiyama, M. (2011)
Reward Weighted Regression with Sample Reuse for Direct Policy Search in Reinforcement Learning.
In: Neural Computation, 23 (11)
Article, Bibliography
2009
Hachiya, H. ; Akiyama, T. ; Sugiyama, M. ; Peters, J. (2009)
Adaptive Importance Sampling for Value Function Approximation in Off-policy Reinforcement Learning.
In: Neural Networks, 22 (10), pp. 1399-1410
Article, Bibliography
Hachiya, H. ; Peters, J. ; Sugiyama, M. (2009)
Adaptive Importance Sampling with Automatic Model Selection in Reward Weighted Regression.
Proceedings of the Workshop of the Technical Committee on Neurocomputing.
Conference publication, Bibliography
Hachiya, H. ; Akiyama, T. ; Sugiyama, M. ; Peters, J. (2009)
Efficient Data Reuse in Value Function Approximation.
Proceedings of the 2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL 2009).
Conference publication, Bibliography
Hachiya, H. ; Peters, J. ; Sugiyama, M. (2009)
Efficient Sample Reuse in EM-based Policy Search.
Proceedings of the 16th European Conference on Machine Learning (ECML).
Conference publication, Bibliography
2008
Hachiya, H. ; Akiyama, T. ; Sugiyama, M. ; Peters, J. (2008)
Adaptive Importance Sampling with Automatic Model Selection in Value Function Approximation.
Proceedings of the Twenty-Third National Conference on Artificial Intelligence (AAAI 2008). Chicago, Illinois (13.07.2008-17.07.2008).
Conference publication, Bibliography