Pathologist-like explainable AI for interpretable Gleason grading in prostate cancer
| Main Authors: | Gesa Mittmann, Sara Laiouar-Pedari, Hendrik A. Mehrtens, Sarah Haggenmüller, Tabea-Clara Bucher, Tirtha Chanda, Nadine T. Gaisa, Mathias Wagner, Gilbert Georg Klamminger, Tilman T. Rau, Christina Neppl, Eva Maria Compérat, Andreas Gocht, Monika Haemmerle, Niels J. Rupp, Jula Westhoff, Irene Krücken, Maximilian Seidl, Christian M. Schürch, Marcus Bauer, Wiebke Solass, Yu Chun Tam, Florian Weber, Rainer Grobholz, Jaroslaw Augustyniak, Thomas Kalinski, Christian Hörner, Kirsten D. Mertz, Constanze Döring, Andreas Erbersdobler, Gabriele Deubler, Felix Bremmer, Ulrich Sommer, Michael Brodhun, Jon Griffin, Maria Sarah L. Lenon, Kiril Trpkov, Liang Cheng, Fei Chen, Angelique Levi, Guoping Cai, Tri Q. Nguyen, Ali Amin, Alessia Cimadamore, Ahmed Shabaik, Varsha Manucha, Nazeel Ahmad, Nidia Messias, Francesca Sanguedolce, Diana Taheri, Ezra Baraban, Liwei Jia, Rajal B. Shah, Farshid Siadat, Nicole Swarbrick, Kyung Park, Oudai Hassan, Siamak Sakhaie, Michelle R. Downes, Hiroshi Miyamoto, Sean R. Williamson, Tim Holland-Letz, Christoph Wies, Carolin V. Schneider, Jakob Nikolas Kather, Yuri Tolkach, Titus J. Brinker |
|---|---|
| Format: | Article (Journal) |
| Language: | English |
| Published: | 08 October 2025 |
| In: | Nature Communications, Year: 2025, Volume: 16, Pages: 1-17 |
| ISSN: | 2041-1723 |
| DOI: | 10.25673/121080 |
| Online Access: | Resolving system, free access: https://doi.org/10.25673/121080 · Publisher, free access, full text: https://doi.org/10.1038/s41467-025-64712-4 · Publisher, free access, full text: https://www.nature.com/articles/s41467-025-64712-4 |
| Author Notes: | Gesa Mittmann, Sara Laiouar-Pedari, Hendrik A. Mehrtens, Sarah Haggenmüller, Tabea-Clara Bucher, Tirtha Chanda, Nadine T. Gaisa, Mathias Wagner, Gilbert Georg Klamminger, Tilman T. Rau, Christina Neppl, Eva Maria Compérat, Andreas Gocht, Monika Haemmerle, Niels J. Rupp, Jula Westhoff, Irene Krücken, Maximilian Seidl, Christian M. Schürch, Marcus Bauer, Wiebke Solass, Yu Chun Tam, Florian Weber, Rainer Grobholz, Jaroslaw Augustyniak, Thomas Kalinski, Christian Hörner, Kirsten D. Mertz, Constanze Döring, Andreas Erbersdobler, Gabriele Deubler, Felix Bremmer, Ulrich Sommer, Michael Brodhun, Jon Griffin, Maria Sarah L. Lenon, Kiril Trpkov, Liang Cheng, Fei Chen, Angelique Levi, Guoping Cai, Tri Q. Nguyen, Ali Amin, Alessia Cimadamore, Ahmed Shabaik, Varsha Manucha, Nazeel Ahmad, Nidia Messias, Francesca Sanguedolce, Diana Taheri, Ezra Baraban, Liwei Jia, Rajal B. Shah, Farshid Siadat, Nicole Swarbrick, Kyung Park, Oudai Hassan, Siamak Sakhaie, Michelle R. Downes, Hiroshi Miyamoto, Sean R. Williamson, Tim Holland-Letz, Christoph Wies, Carolin V. Schneider, Jakob Nikolas Kather, Yuri Tolkach & Titus J. Brinker |
| Summary: | The aggressiveness of prostate cancer is primarily assessed from histopathological data using the Gleason scoring system. Conventional artificial intelligence (AI) approaches can predict Gleason scores, but often lack explainability, which may limit clinical acceptance. Here, we present an alternative, inherently explainable AI that circumvents the need for post-hoc explainability methods. The model was trained on 1,015 tissue microarray core images, annotated with detailed pattern descriptions by 54 international pathologists following standardized guidelines. It uses pathologist-defined terminology and was trained using soft labels to capture data uncertainty. This approach enables robust Gleason pattern segmentation despite high interobserver variability. The model achieved comparable or superior performance to direct Gleason pattern segmentation (Dice score: $0.713 \pm 0.003$ vs. $0.691 \pm 0.010$) while providing interpretable outputs. We release this dataset to encourage further research on segmentation in medical tasks with high subjectivity and to deepen insights into pathologists’ reasoning. |
| Item Description: | Viewed on 08.01.2026 |
| Physical Description: | Online Resource |
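
The summary above refers to a per-class Dice score for comparing segmentations and to training with soft labels that capture annotator disagreement. The following minimal Python sketch (not the authors' code; all function and variable names are illustrative assumptions) shows how such quantities are commonly computed.

```python
# Minimal sketch: Dice overlap between two label maps and a cross-entropy loss
# against "soft" per-pixel class distributions (e.g. annotator vote fractions).
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray, cls: int) -> float:
    """Dice overlap for one class between two integer-valued label maps."""
    p = pred == cls
    r = ref == cls
    denom = p.sum() + r.sum()
    return 1.0 if denom == 0 else float(2.0 * np.logical_and(p, r).sum() / denom)

def soft_label_cross_entropy(probs: np.ndarray, soft_targets: np.ndarray) -> float:
    """Mean cross-entropy between predicted probabilities (H, W, C) and soft
    targets (H, W, C), e.g. the fraction of annotators voting for each class."""
    eps = 1e-8
    return float(-(soft_targets * np.log(probs + eps)).sum(axis=-1).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pred = rng.integers(0, 3, size=(64, 64))   # hypothetical predicted pattern map
    ref = rng.integers(0, 3, size=(64, 64))    # hypothetical reference pattern map
    print([round(dice_score(pred, ref, c), 3) for c in range(3)])

    probs = rng.dirichlet(np.ones(3), size=(64, 64))  # predicted per-pixel distribution
    votes = rng.dirichlet(np.ones(3), size=(64, 64))  # soft labels from annotator votes
    print(round(soft_label_cross_entropy(probs, votes), 3))
```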