Can surgeons trust AI? Perspectives on machine learning in surgery and the importance of eXplainable Artificial Intelligence (XAI)

Bibliographic Details
Main Authors: Brandenburg, Johanna (Author), Müller, Beat P. (Author), Wagner, Martin (Author), Schaar, Mihaela van der (Author)
Format: Article (Journal)
Language: English
Published: 28 January 2025
In: Langenbeck's Archives of Surgery
Year: 2025, Volume: 410, Pages: 1-5
ISSN: 1435-2451
DOI: 10.1007/s00423-025-03626-7
Online Access: Publisher, open access, full text: https://doi.org/10.1007/s00423-025-03626-7
Publisher, open access, full text: https://link.springer.com/article/10.1007/s00423-025-03626-7
Author Notes: Johanna M. Brandenburg, Beat P. Müller-Stich, Martin Wagner, Mihaela van der Schaar
Description
Summary: Purpose: This brief report aims to summarize and discuss the methodologies of eXplainable Artificial Intelligence (XAI) and their potential applications in surgery. Methods: We briefly introduce explainability methods, including global and individual explanatory features, methods for imaging data and time series, as well as similarity classification and unraveled rules and laws. Results: Given the increasing interest in artificial intelligence within the surgical field, we emphasize the critical importance of transparency and interpretability in the outputs of applied models. Conclusion: Transparency and interpretability are essential for the effective integration of AI models into clinical practice.
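Note: The report itself contains no code. As an illustration of the distinction the summary draws between global and individual explanatory features, the following minimal Python sketch contrasts a cohort-level explanation (permutation feature importance) with a per-patient explanation (per-feature contributions of a linear model). The synthetic data, hypothetical feature names, and model choice are assumptions for demonstration only and are not taken from the article.

```python
# Illustrative sketch: global vs. individual explanations on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical perioperative features (assumed for illustration).
feature_names = ["age", "bmi", "op_time_min", "blood_loss_ml"]

X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Global explanation: how much does shuffling each feature degrade test accuracy?
global_imp = permutation_importance(model, X_test, y_test, n_repeats=20,
                                    random_state=0)
for name, imp in zip(feature_names, global_imp.importances_mean):
    print(f"global importance of {name}: {imp:.3f}")

# Individual explanation: contribution of each feature to one patient's
# predicted risk (coefficient * feature value), a simple local attribution.
patient = X_test[0]
contributions = model.coef_[0] * patient
for name, c in zip(feature_names, contributions):
    print(f"contribution of {name} for this patient: {c:+.3f}")
```

The same contrast carries over to the model-agnostic methods discussed in the report: global explanations characterize model behavior across a cohort, while individual explanations justify a single prediction to the treating surgeon.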
Item Description: Viewed on 26 September 2025
Physical Description: Online Resource