MedicalPatchNet: a patch-based self-explainable AI architecture for chest X-ray classification
Saved in:
| Main authors: | Patrick Wienholt, Christiane Kuhl, Jakob Nikolas Kather, Sven Nebelung, Daniel Truhn |
|---|---|
| Document type: | Article (Journal) |
| Language: | English |
| Published: | 20 February 2026 |
| In: | Scientific reports, Year: 2026, Volume: 16, Pages: 1-12 |
| ISSN: | 2045-2322 |
| DOI: | 10.1038/s41598-026-40358-0 |
| Online access: | Publisher, free of charge, full text: https://doi.org/10.1038/s41598-026-40358-0 ; Publisher, free of charge, full text: https://www.nature.com/articles/s41598-026-40358-0 |
| Authors: | Patrick Wienholt, Christiane Kuhl, Jakob Nikolas Kather, Sven Nebelung & Daniel Truhn |
| Abstract: | Deep neural networks excel at radiological image classification but often suffer from poor interpretability, limiting clinical acceptance. We present MedicalPatchNet, an inherently self-explainable architecture for chest X-ray classification that transparently attributes decisions to distinct image regions. MedicalPatchNet splits images into non-overlapping patches, classifies each patch independently, and aggregates the predictions, enabling intuitive visualization of each patch's diagnostic contribution without post-hoc techniques. Trained on the CheXpert dataset (223,414 images), MedicalPatchNet matches the classification performance of EfficientNetV2-S (AUROC 0.907 vs. 0.908) while improving interpretability: it achieves higher pathology localization accuracy (mean hit-rate 0.485 vs. 0.376 with Grad-CAM) on the CheXlocalize dataset. By providing explicit, reliable explanations accessible even to non-AI experts, MedicalPatchNet mitigates risks associated with shortcut learning and thus improves clinical trust. The model, together with reproducible training and inference scripts, is publicly available at https://github.com/TruhnLab/MedicalPatchNet and contributes to safer, explainable AI-assisted diagnostics across medical imaging domains. |
| Description: | Published: 20 February 2026. Accessed on 10 April 2026 |
| Description: | Online resource |
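The core scheme described in the abstract (split the image into non-overlapping patches, score each patch independently, aggregate the patch scores into an image-level prediction) can be sketched as below. This is a minimal illustration with a hypothetical stand-in patch classifier, not the authors' trained network; mean aggregation is assumed here for simplicity.

```python
import numpy as np

def split_into_patches(image, patch_size):
    """Split a 2-D image into non-overlapping square patches.

    Assumes the image dimensions are exact multiples of patch_size.
    Returns an array of shape (num_patches, patch_size, patch_size).
    """
    h, w = image.shape
    return (
        image.reshape(h // patch_size, patch_size, w // patch_size, patch_size)
             .transpose(0, 2, 1, 3)          # group patches by (row-block, col-block)
             .reshape(-1, patch_size, patch_size)
    )

def classify_image(image, patch_size, patch_classifier):
    """Score every patch independently, then average the patch scores.

    Returns (image_score, per_patch_scores). Because the image score is a
    simple mean of per-patch scores, each patch's contribution to the
    decision is explicit and can be visualized directly as a heatmap,
    without post-hoc attribution techniques.
    """
    patches = split_into_patches(image, patch_size)
    patch_scores = np.array([patch_classifier(p) for p in patches])
    return patch_scores.mean(), patch_scores

# Hypothetical stand-in classifier: mean patch intensity as a "pathology score".
toy_classifier = lambda patch: float(patch.mean())

image = np.arange(16.0).reshape(4, 4)  # tiny 4x4 stand-in "X-ray"
score, patch_scores = classify_image(image, 2, toy_classifier)
# patch_scores has one entry per 2x2 patch; reshaping it to the 2x2 patch
# grid gives the explanation map over the image.
```

With mean aggregation, the image-level score is exactly the average of the per-patch scores, so no patch's influence is hidden inside a shared feature map.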