Continuous level Monte Carlo and sample-adaptive model hierarchies

Bibliographic details
Main authors: Detommaso, Gianluca (author); Dodwell, Tim (author); Scheichl, Robert (author)
Document type: Article (journal)
Language: English
Published: [2019]
In: SIAM/ASA Journal on Uncertainty Quantification
Year: 2019, Volume: 7, Issue: 1, Pages: 93-116
ISSN: 2166-2525
DOI: 10.1137/18M1172259
Online access: Publisher, full text: https://doi.org/10.1137/18M1172259
Publisher, full text: https://epubs.siam.org/doi/10.1137/18M1172259
Statement of authorship: Gianluca Detommaso, Tim Dodwell, and Rob Scheichl
Description
Abstract: In this paper, we present a generalization of the multilevel Monte Carlo (MLMC) method to a setting where the level parameter is a continuous variable. This continuous level Monte Carlo (CLMC) estimator provides a natural framework in PDE applications to adapt the model hierarchy to each sample. In addition, it can be made unbiased with respect to the expected value of the true quantity of interest, provided the quantity of interest converges sufficiently fast. The practical implementation of the CLMC estimator is based on interpolating actual evaluations of the quantity of interest at a finite number of resolutions. As our new level parameter, we use the logarithm of a goal-oriented finite element error estimator for the accuracy of the quantity of interest. We prove the unbiasedness, as well as a complexity theorem that shows the same rate of complexity for CLMC as for MLMC. Finally, we provide some numerical evidence to support our theoretical results by successfully testing CLMC on a standard PDE test problem. The numerical experiments demonstrate clear gains for samplewise adaptive refinement strategies over uniform refinements.
Description: Viewed on 24.06.2019
Description: Online resource
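
The abstract above describes the CLMC idea only in words: roughly, the discrete level index of MLMC is replaced by a continuous level variable, the telescoping sum becomes an integral over that level, and randomizing the level can make the estimator unbiased. The Python snippet below is a minimal illustrative sketch of that general principle on a synthetic problem under our own assumptions; it is not the authors' implementation or the estimator analyzed in the paper, and the names Q, dQ_dl, clmc_estimate, the synthetic error model, the exponential level density, and all numerical parameters are chosen purely for illustration.

```python
# Minimal illustrative sketch of a continuous-level, randomized ("single-term")
# Monte Carlo estimator on a synthetic problem.  NOT the authors' CLMC code:
# Q, dQ_dl, clmc_estimate, the error model 2**(-alpha*l) and the exponential
# level density are assumptions chosen only to demonstrate the idea of an
# unbiased estimator built from a continuous level parameter.
import numpy as np

rng = np.random.default_rng(0)

alpha = 1.0    # assumed decay rate of the discretization error in the level l
q_true = 1.0   # exact expected value of the toy quantity of interest

def Q(l, omega):
    """Toy 'discretized' quantity of interest at continuous level l."""
    # omega stands in for the random input (e.g. a PDE coefficient sample);
    # the error term decays geometrically in l, mimicking mesh refinement.
    return q_true + omega - (1.0 + 0.2 * omega) * 2.0 ** (-alpha * l)

def dQ_dl(l, omega, h=1e-3):
    """Central-difference surrogate for dQ/dl (interpolating between levels)."""
    return (Q(l + h, omega) - Q(l - h, omega)) / (2.0 * h)

def clmc_estimate(n_samples, rate=0.5):
    """Unbiased estimate of E[Q(infinity)] via the continuous telescoping
    identity  E[Q(inf)] = E[Q(0)] + int_0^inf E[dQ/dl] dl,
    with the level drawn from an Exp(rate) density (importance sampling)."""
    omega = rng.normal(0.0, 0.1, size=n_samples)      # zero-mean randomness
    level = rng.exponential(scale=1.0 / rate, size=n_samples)
    density = rate * np.exp(-rate * level)            # Exp(rate) pdf
    samples = Q(0.0, omega) + dQ_dl(level, omega) / density
    return samples.mean()

print(clmc_estimate(200_000))   # approaches q_true = 1.0 as n_samples grows
```

In this toy model the error decays like 2^(-alpha*l), so the integrand dQ/dl decays at rate alpha*ln 2 (about 0.69); an exponential level density with rate 0.5 therefore keeps the variance of the single-sample contributions finite. Balancing the level distribution against the decay of the level contributions is the same kind of trade-off that the complexity analysis mentioned in the abstract addresses for the actual CLMC estimator.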