Efficient Gradient-Free Variational Inference using Policy Search

Arenz, Oleg ; Neumann, Gerhard ; Zhong, Mingjun (2022)
Efficient Gradient-Free Variational Inference using Policy Search.
35th International Conference on Machine Learning (ICML 2018). Stockholm, Sweden (10–15 July 2018)
doi: 10.26083/tuprints-00022925
Conference publication, secondary publication, publisher's version

Abstract

Inference from complex distributions is a common problem in machine learning needed for many Bayesian methods. We propose an efficient, gradient-free method for learning general Gaussian mixture model (GMM) approximations of multimodal distributions based on recent insights from stochastic search methods. Our method establishes information-geometric trust regions to ensure efficient exploration of the sampling space and stability of the GMM updates, allowing for efficient estimation of multivariate Gaussian variational distributions. For GMMs, we apply a variational lower bound to decompose the learning objective into sub-problems given by learning the individual mixture components and the coefficients. The number of mixture components is adapted online in order to allow for arbitrarily exact approximations. We demonstrate on several domains that we can learn significantly better approximations than competing variational inference methods and that the quality of samples drawn from our approximations is on par with samples created by state-of-the-art MCMC samplers that require significantly more computational resources.
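
The decomposition referred to in the abstract can be written out explicitly. The following sketch uses standard variational-inference notation of our own choosing, not taken from this record: \tilde{p}(x) denotes the unnormalized target density, the GMM is q(x) = \sum_o q(o)\, q(x \mid o) with coefficients q(o) and components q(x \mid o), and \tilde{q}(o \mid x) is an auxiliary model of the component responsibilities. Replacing the exact responsibilities q(o \mid x) by \tilde{q}(o \mid x) drops an expected KL divergence and therefore yields a lower bound that splits into one term per component plus an entropy term over the coefficients:

\[
\mathbb{E}_{q(x)}\big[\log \tilde{p}(x) - \log q(x)\big]
\;\ge\; \sum_o q(o)\, \mathbb{E}_{q(x \mid o)}\big[\log \tilde{p}(x) + \log \tilde{q}(o \mid x) - \log q(x \mid o)\big]
\;-\; \sum_o q(o) \log q(o),
\]

with equality when \tilde{q}(o \mid x) equals the true responsibilities. Each expectation on the right involves only a single component q(x \mid o), which is what makes it possible to update the individual components (each under its own trust region) and the coefficients q(o) as separate sub-problems.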

Item type: Conference publication
Published: 2022
Author(s): Arenz, Oleg ; Neumann, Gerhard ; Zhong, Mingjun
Type of entry: Secondary publication
Title: Efficient Gradient-Free Variational Inference using Policy Search
Language: English
Year of publication: 2022
Place of publication: Darmstadt
Publisher: PMLR
Book title: Proceedings of Machine Learning Research
Series volume: 80
Collation: 10 unnumbered pages
Event title: 35th International Conference on Machine Learning (ICML 2018)
Event location: Stockholm, Sweden
Event dates: 10–15 July 2018
DOI: 10.26083/tuprints-00022925
URL / URN: https://tuprints.ulb.tu-darmstadt.de/22925
Origin: Secondary publication service
Uncontrolled keywords: Machine Learning, ICML, Variational Inference, Sampling, Policy Search, MCMC, Markov Chain Monte Carlo
Status: Publisher's version
URN: urn:nbn:de:tuda-tuprints-229250
Additional information:

Presentation video: https://vimeo.com/294656117

Dewey Decimal Classification (DDC) subject group: 000 Generalities, computer science, information science > 004 Computer science
Department(s)/division(s): 20 Department of Computer Science
20 Department of Computer Science > Intelligent Autonomous Systems
Date deposited: 02 Dec 2022 12:46
Last modified: 05 Dec 2022 09:13