Adversarial attacks and adversarial robustness in computational pathology

Bibliographic Details
Main Authors: Ghaffari Laleh, Narmin (Author), Truhn, Daniel (Author), Veldhuizen, Gregory Patrick (Author), Han, Tianyu (Author), van Treeck, Marko (Author), Buelow, Roman D. (Author), Langer, Rupert (Author), Dislich, Bastian (Author), Boor, Peter (Author), Schulz, Volkmar (Author), Kather, Jakob Nikolas (Author)
Format: Article (Journal)
Language: English
Published: 29 September 2022
In: Nature Communications
Year: 2022, Volume: 13, Pages: 1-10
ISSN: 2041-1723
DOI: 10.1038/s41467-022-33266-0
Online Access: Publisher (licensed), full text: https://doi.org/10.1038/s41467-022-33266-0
Publisher (licensed), full text: https://www.nature.com/articles/s41467-022-33266-0
Description
Summary: Artificial Intelligence (AI) can support diagnostic workflows in oncology by aiding diagnosis and providing biomarkers directly from routine pathology slides. However, AI applications are vulnerable to adversarial attacks. Hence, it is essential to quantify and mitigate this risk before widespread clinical use. Here, we show that convolutional neural networks (CNNs) are highly susceptible to white- and black-box adversarial attacks in clinically relevant weakly-supervised classification tasks. Adversarially robust training and dual batch normalization (DBN) are possible mitigation strategies but require precise knowledge of the type of attack used in the inference. We demonstrate that vision transformers (ViTs) perform equally well compared to CNNs at baseline, but are orders of magnitude more robust to white- and black-box attacks. At a mechanistic level, we show that this is associated with a more robust latent representation of clinically relevant categories in ViTs compared to CNNs. Our results are in line with previous theoretical studies and provide empirical evidence that ViTs are robust learners in computational pathology. This implies that large-scale rollout of AI models in computational pathology should rely on ViTs rather than CNN-based classifiers to provide inherent protection against perturbation of the input data, especially adversarial attacks.
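The record does not describe the attack setup in detail; as a minimal sketch only, the white-box attacks mentioned in the summary can be illustrated with a single-step gradient-sign (FGSM-style) perturbation of an image-tile classifier. The model, epsilon, and data below are placeholders and assumptions, not the authors' configuration.

```python
# Hypothetical sketch of a white-box gradient-sign attack on a tile classifier.
# All names, the epsilon value, and the stand-in model are assumptions; they do
# not reproduce the paper's actual attack or training setup.
import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm_attack(model, images, labels, epsilon=4 / 255):
    """Perturb `images` along the sign of the loss gradient (single-step, white-box)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0, 1).detach()

# Usage sketch: a small ResNet stands in for the pathology classifier (assumption).
model = models.resnet18(weights=None, num_classes=2).eval()
x = torch.rand(8, 3, 224, 224)          # batch of tissue tiles (placeholder data)
y = torch.randint(0, 2, (8,))           # binary tile labels (placeholder)
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())          # perturbation magnitude bounded by epsilon
```

In this sketch, robustness comparisons like those in the summary would amount to measuring the drop in classification accuracy on `x_adv` versus `x` for CNN- and ViT-based classifiers under the same epsilon budget.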
Item Description: Viewed on 18 January 2023
Physical Description: Online resource