Back to the formula - LHC edition

Bibliographic details
Main authors: Butter, Anja (author); Plehn, Tilman (author); Soybelman, Nathalie (author); Brehmer, Johann (author)
Document type: Article (Journal)
Language: English
Published: 29-01-2024
In: SciPost Physics
Year: 2024, Volume: 16, Issue: 1, Pages: 1-28
ISSN:2542-4653
DOI:10.21468/SciPostPhys.16.1.037
Online access: Publisher, license required, full text: https://doi.org/10.21468/SciPostPhys.16.1.037
Publisher, license required, full text: https://scipost.org/10.21468/SciPostPhys.16.1.037
Statement of authorship: Anja Butter, Tilman Plehn, Nathalie Soybelman and Johann Brehmer
Description
Abstract: While neural networks offer an attractive way to numerically encode functions, actual formulas remain the language of theoretical particle physics. We use symbolic regression trained on matrix-element information to extract, for instance, optimal LHC observables. This way we invert the usual simulation paradigm and extract easily interpretable formulas from complex simulated data. We introduce the method using the effect of a dimension-6 coefficient on associated ZH production. We then validate it for the known case of CP-violation in weak-boson-fusion Higgs production, including detector effects.
Description: Viewed on 07.06.2024
Description: Online resource