Back to the formula: LHC edition

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Butter, Anja (Author), Plehn, Tilman (Author), Soybelman, Nathalie (Author), Brehmer, Johann (Author)
Document type: Journal article
Language: English
Published: 15 Nov 2021
In: arXiv
Year: 2021, Pages: 1-29
DOI: 10.48550/arXiv.2109.10414
Online access: Publisher, licensed, full text: https://doi.org/10.48550/arXiv.2109.10414
Publisher, licensed, full text: http://arxiv.org/abs/2109.10414
Authors: Anja Butter, Tilman Plehn, Nathalie Soybelman, and Johann Brehmer
Description
Abstract: While neural networks offer an attractive way to numerically encode functions, actual formulas remain the language of theoretical particle physics. We show how symbolic regression trained on matrix-element information provides, for instance, optimal LHC observables in an easily interpretable form. We introduce the method using the effect of a dimension-6 coefficient on associated ZH production. We then validate it for the known case of CP-violation in weak-boson-fusion Higgs production, including detector effects.
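As a rough illustration of the approach the abstract describes: symbolic regression searches a space of closed-form expressions for one that fits a per-event target derived from matrix-element information, yielding a readable formula instead of a network. The sketch below is a minimal toy example, assuming the gplearn library; the features pt and dphi and the score target t are hypothetical stand-ins, not the authors' actual setup or code.

# Minimal symbolic-regression sketch (illustration only, not the paper's implementation).
# A made-up "score" t(x) = pT * cos(dphi) stands in for the matrix-element-derived
# target; the regressor is asked to rediscover it as a compact formula.
import numpy as np
from gplearn.genetic import SymbolicRegressor  # assumed dependency: gplearn

rng = np.random.default_rng(0)
n = 5000
pt = rng.uniform(20.0, 200.0, n)       # toy transverse momentum [GeV]
dphi = rng.uniform(-np.pi, np.pi, n)   # toy azimuthal angle difference
X = np.column_stack([pt, dphi])
t = pt * np.cos(dphi)                  # hypothetical per-event score target

sr = SymbolicRegressor(
    population_size=2000,
    generations=20,
    function_set=("add", "sub", "mul", "div", "sin", "cos"),
    parsimony_coefficient=0.001,       # penalizes long formulas, keeping results interpretable
    random_state=0,
)
sr.fit(X, t)
print(sr._program)                     # best formula found, e.g. mul(X0, cos(X1))

The parsimony penalty is the key design choice here: it trades a little fit quality for shorter expressions, which is what makes the recovered observable "easily interpretable" in the sense of the abstract.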
Description: Viewed on 14.09.2022
Description: Online resource