Evaluating probabilistic classifiers: the triptych


Bibliographic Details
Main Authors: Dimitriadis, Timo (Author), Gneiting, Tilmann (Author), Jordan, Alexander I. (Author), Vogel, Peter (Author)
Format: Article (Journal)
Language: English
Published: January 27, 2023
In: arXiv
Year: 2023, Pages: 1-32
DOI: 10.48550/arXiv.2301.10803
Online Access: Publisher, free of charge, full text: https://doi.org/10.48550/arXiv.2301.10803
Publisher, free of charge, full text: http://arxiv.org/abs/2301.10803
Author Notes: Timo Dimitriadis, Tilmann Gneiting, Alexander I. Jordan, Peter Vogel
Description
Summary: Probability forecasts for binary outcomes, often referred to as probabilistic classifiers or confidence scores, are ubiquitous in science and society, and methods for evaluating and comparing them are in great demand. We propose and study a triptych of diagnostic graphics that focus on distinct and complementary aspects of forecast performance: The reliability diagram addresses calibration, the receiver operating characteristic (ROC) curve diagnoses discrimination ability, and the Murphy diagram visualizes overall predictive performance and value. A Murphy curve shows a forecast's mean elementary scores, including the widely used misclassification rate, and the area under a Murphy curve equals the mean Brier score. For a calibrated forecast, the reliability curve lies on the diagonal, and for competing calibrated forecasts, the ROC and Murphy curves share the same number of crossing points. We invoke the recently developed CORP (Consistent, Optimally binned, Reproducible, and Pool-Adjacent-Violators (PAV) algorithm based) approach to craft reliability diagrams and decompose a mean score into miscalibration (MCB), discrimination (DSC), and uncertainty (UNC) components. Plots of the DSC measure of discrimination ability versus the calibration metric MCB visualize classifier performance across multiple competitors. The proposed tools are illustrated in empirical examples from astrophysics, economics, and social science.
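The two identities stated in the abstract, namely that the mean Brier score decomposes as MCB − DSC + UNC and that the area under a Murphy curve equals the mean Brier score, can be checked numerically. The following Python sketch is not the authors' implementation: the function names, the tie conventions at the threshold, and the factor-of-two scaling of the elementary scores are assumptions chosen so that the stated identities hold. It recalibrates forecasts with the pool-adjacent-violators (PAV) algorithm and verifies both identities on toy data.

```python
# Hedged sketch of the CORP-style score decomposition and the
# Murphy-curve identity. Helper names and scaling conventions are
# assumptions, not the authors' code.

def pav(y):
    """Pool-adjacent-violators: nondecreasing least-squares fit to y."""
    blocks = []  # each block holds [sum, count]
    for v in y:
        blocks.append([float(v), 1])
        # pool while the last two block means violate monotonicity
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)
    return fit

def brier(p, y):
    """Mean Brier (squared error) score."""
    return sum((pi - yi) ** 2 for pi, yi in zip(p, y)) / len(y)

def corp_decomposition(p, y):
    """Return (MCB, DSC, UNC) so that mean Brier = MCB - DSC + UNC."""
    order = sorted(range(len(p)), key=lambda i: p[i])
    p_s = [p[i] for i in order]
    y_s = [y[i] for i in order]
    rc = pav(y_s)                       # PAV-recalibrated forecasts
    ybar = sum(y) / len(y)              # climatological (marginal) forecast
    s = brier(p_s, y_s)
    s_rc = brier(rc, y_s)
    s_mg = brier([ybar] * len(y), y)
    return s - s_rc, s_mg - s_rc, s_mg  # MCB, DSC, UNC

def elementary_score(theta, p, y):
    """Mean elementary score at threshold theta, scaled (factor 2,
    an assumed convention) so the Murphy-curve area equals the
    mean Brier score."""
    total = 0.0
    for pi, yi in zip(p, y):
        if yi == 0 and pi > theta:
            total += 2 * theta          # false-alarm cost
        elif yi == 1 and pi <= theta:
            total += 2 * (1 - theta)    # miss cost
    return total / len(y)

p = [0.1, 0.4, 0.35, 0.8, 0.6, 0.25]
y = [0, 0, 1, 1, 1, 0]
mcb, dsc, unc = corp_decomposition(p, y)
assert abs((mcb - dsc + unc) - brier(p, y)) < 1e-12

# Area under the Murphy curve via the trapezoid rule on a fine grid
n = 2000
grid = [i / n for i in range(n + 1)]
vals = [elementary_score(t, p, y) for t in grid]
area = sum((vals[i] + vals[i + 1]) / (2 * n) for i in range(n))
assert abs(area - brier(p, y)) < 1e-3
```

Since the PAV fit is the best monotone fit to the outcomes and the climatological forecast is itself monotone, MCB and DSC are nonnegative by construction, which is what makes the decomposition interpretable.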
Item Description: Viewed on September 26, 2023
Physical Description: Online Resource