Essentially no barriers in neural network energy landscape

Detailed description

Bibliographic details
Main authors: Draxler, Felix (author), Veschgini, Kambis (author), Salmhofer, Manfred (author), Hamprecht, Fred (author)
Document type: Article (Journal)
Language: English
Published: 2 Mar 2018
In: arXiv

Online access: free of charge
Full text
Author statement: Felix Draxler, Kambis Veschgini, Manfred Salmhofer, Fred A. Hamprecht
Description
Abstract: Training neural networks involves finding minima of a high-dimensional non-convex loss function. Knowledge of the structure of this energy landscape is sparse. Relaxing from linear interpolations, we construct continuous paths between minima of recent neural network architectures on CIFAR10 and CIFAR100. Surprisingly, the paths are essentially flat in both the training and test landscapes. This implies that neural networks have enough capacity for structural changes, or that these changes are small between minima. Also, each minimum has at least one vanishing Hessian eigenvalue in addition to those resulting from trivial invariance.
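To illustrate the baseline the abstract starts from, the following is a minimal Python sketch (not the paper's code) of evaluating a loss along the straight line between two minima of a toy two-parameter function; the function loss and the points theta_a, theta_b are illustrative assumptions. The straight line crosses a barrier, while the paper's contribution is to relax such paths until their maximal loss is essentially no higher than at the endpoints.

import numpy as np

def loss(theta):
    # Toy non-convex loss with two minima, at (-1, 0) and (+1, 0).
    x, y = theta
    return (x**2 - 1)**2 + y**2

theta_a = np.array([-1.0, 0.0])  # first minimum
theta_b = np.array([+1.0, 0.0])  # second minimum

# Linear interpolation theta(t) = (1 - t) * theta_a + t * theta_b
ts = np.linspace(0.0, 1.0, 51)
path_losses = [loss((1 - t) * theta_a + t * theta_b) for t in ts]

# The linear path peaks at t = 0.5 (loss 1.0) although both endpoints
# have loss 0.0 -- the kind of apparent barrier that relaxed paths
# between neural network minima turn out not to have.
print(f"loss at endpoints: {path_losses[0]:.3f}, {path_losses[-1]:.3f}")
print(f"max loss along linear path: {max(path_losses):.3f}")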
Description: Resource identified according to: Last revised 22 Feb 2019
Viewed on 15.12.2020
Description: Online resource