Back to the formula: LHC edition

Bibliographic Details
Main Authors: Butter, Anja (Author), Plehn, Tilman (Author), Soybelman, Nathalie (Author), Brehmer, Johann (Author)
Format: Article (Journal)
Language: English
Published: 15 Nov 2021
In: arXiv
Year: 2021, Pages: 1-29
DOI: 10.48550/arXiv.2109.10414
Online Access: Publisher, subject to licensing, full text: https://doi.org/10.48550/arXiv.2109.10414
Publisher, subject to licensing, full text: http://arxiv.org/abs/2109.10414
Description
Summary: While neural networks offer an attractive way to numerically encode functions, actual formulas remain the language of theoretical particle physics. We show how symbolic regression trained on matrix-element information provides, for instance, optimal LHC observables in an easily interpretable form. We introduce the method using the effect of a dimension-6 coefficient on associated ZH production. We then validate it for the known case of CP-violation in weak-boson-fusion Higgs production, including detector effects.
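The summary describes searching a space of analytic expressions for a formula that best reproduces a target built from matrix-element information. As a rough, self-contained illustration of that idea (not the authors' implementation, which the paper describes in full), the Python sketch below scores a small hand-written library of candidate formulas in toy kinematic variables against a synthetic target and keeps the best fit; all variable names and the toy target are assumptions for illustration only.

```python
import numpy as np

# Toy "event" features standing in for kinematic observables
# (names are illustrative, not the paper's variables).
rng = np.random.default_rng(0)
n = 1000
pt = rng.uniform(20.0, 200.0, n)      # stand-in for a transverse momentum
dphi = rng.uniform(-np.pi, np.pi, n)  # stand-in for an azimuthal angle

# Synthetic target playing the role of an optimal observable;
# in the paper this role is played by matrix-element information.
target = 0.5 * pt * np.sin(dphi) + rng.normal(0.0, 1.0, n)

# A tiny candidate library of symbolic expressions.
candidates = {
    "pt":           pt,
    "sin(dphi)":    np.sin(dphi),
    "pt*sin(dphi)": pt * np.sin(dphi),
    "pt*cos(dphi)": pt * np.cos(dphi),
}

# Fit one coefficient per candidate by least squares and keep
# the formula with the smallest mean squared error.
best_name, best_mse, best_coef = None, np.inf, 0.0
for name, feature in candidates.items():
    coef = np.dot(feature, target) / np.dot(feature, feature)
    mse = np.mean((target - coef * feature) ** 2)
    if mse < best_mse:
        best_name, best_mse, best_coef = name, mse, coef

print(f"best formula: {best_coef:.3f} * {best_name}  (MSE = {best_mse:.3f})")
```

A real symbolic-regression run would evolve and mutate expressions (for example via genetic programming) rather than scan a fixed list; the scan above only illustrates the fit-and-select principle behind easily interpretable observables.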
Item Description: Viewed on 14 Sep 2022
Physical Description: Online Resource