From responsibility to reason-giving explainable artificial intelligence

Bibliographic Details
Authors: Baum, Kevin (author); Mantel, Susanne (author); Schmidt, Eva (author); Speith, Timo (author)
Document type: Article (Journal)
Language: English
Published: 19 February 2022
In: Philosophy & Technology
Year: 2022, Volume: 35, Pages: 1-30
ISSN: 2210-5441
DOI: 10.1007/s13347-022-00510-w
Online access: Publisher, free of charge, full text: https://doi.org/10.1007/s13347-022-00510-w
Author statement: Kevin Baum, Susanne Mantel, Eva Schmidt, Timo Speith
Description
Abstract: We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificially intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to be responsible for her decision, the human in the loop has to have an explanation available of the system’s recommendation. Reason explanations are especially well-suited to this end, and we examine whether—and how—it might be possible to make such explanations fit with AI systems. We support our claims by focusing on a case of disagreement between the human in the loop and the AI system.
Description: Published online: 19 February 2022
Viewed on 9 April 2025
Description: Online resource