Understanding event-generation networks via uncertainties

Bibliographic details
Main authors: Bellagente, Marco (author); Haußmann, Manuel (author); Luchmann, Michel (author); Plehn, Tilman (author)
Document type: Article (Journal)
Language: English
Published: October 4, 2021
In: arXiv
Year: 2021, Pages: 1-26
DOI: 10.48550/arXiv.2104.04543
Online access: Publisher, licensed access, full text: https://doi.org/10.48550/arXiv.2104.04543
Publisher, licensed access, full text: http://arxiv.org/abs/2104.04543
Statement of responsibility: Marco Bellagente, Manuel Haußmann, Michel Luchmann, and Tilman Plehn
Description
Abstract: Following the growing success of generative neural networks in LHC simulations, the crucial question is how to control the networks and assign uncertainties to their event output. We show how Bayesian normalizing flow or invertible networks capture uncertainties from the training and turn them into an uncertainty on the event weight. Fundamentally, the interplay between density and uncertainty estimates indicates that these networks learn functions in analogy to parameter fits rather than binned event counts.
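
As a rough illustration of the idea summarized in the abstract (a minimal toy sketch, not the authors' code or setup): a Bayesian network carries a posterior distribution over its parameters, so sampling parameter draws gives an ensemble of density estimates for each event, and the spread of that ensemble serves as the uncertainty on the event weight. The 1D affine flow and the hand-picked Gaussian "posterior" below are invented stand-ins purely for illustration.

# Toy sketch (not the paper's implementation): a 1D affine "flow"
# x = mu + sigma * z with base density z ~ N(0, 1).  A trained Bayesian
# flow would provide a posterior over (mu, log_sigma); here we emulate
# that posterior with hand-picked Gaussians purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

def log_density(x, mu, log_sigma):
    # Log p(x) of the flow: base log-density at z = (x - mu) / sigma,
    # plus the log-Jacobian of the inverse map, log |dz/dx| = -log_sigma.
    sigma = np.exp(log_sigma)
    z = (x - mu) / sigma
    log_base = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)
    return log_base - log_sigma

# Hypothetical posterior over the flow parameters (stand-in for the
# trained Bayesian posterior): mu ~ N(0, 0.05^2), log_sigma ~ N(0, 0.05^2).
n_draws = 1000
mus = rng.normal(0.0, 0.05, n_draws)
log_sigmas = rng.normal(0.0, 0.05, n_draws)

x = 1.5  # one generated event (1D for simplicity)

# One density estimate per posterior draw -> a distribution of event
# weights; its mean and spread give the weight and its uncertainty.
weights = np.exp([log_density(x, m, s) for m, s in zip(mus, log_sigmas)])
print(f"event weight: {weights.mean():.4f} +- {weights.std():.4f}")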
Description: Viewed on July 13, 2022
Description: Online resource