Medical large language models are susceptible to targeted misinformation attacks

Bibliographic Details
Main authors: Han, Tianyu (author), Nebelung, Sven (author), Khader, Firas (author), Wang, Tianci (author), Müller-Franzes, Gustav (author), Kuhl, Christiane (author), Försch, Sebastian (author), Kleesiek, Jens Philipp (author), Haarburger, Christoph (author), Bressem, Keno K. (author), Kather, Jakob Nikolas (author), Truhn, Daniel (author)
Document type: Article (Journal)
Language: English
Published: 23 October 2024
In: npj Digital Medicine
Year: 2024, Volume: 7, Pages: 1-9
ISSN: 2398-6352
DOI: 10.1038/s41746-024-01282-7
Online access: Publisher, free access, full text: https://doi.org/10.1038/s41746-024-01282-7
Publisher, free access, full text: https://www.nature.com/articles/s41746-024-01282-7
Author information: Tianyu Han, Sven Nebelung, Firas Khader, Tianci Wang, Gustav Müller-Franzes, Christiane Kuhl, Sebastian Försch, Jens Kleesiek, Christoph Haarburger, Keno K. Bressem, Jakob Nikolas Kather & Daniel Truhn
Description
Abstract: Large language models (LLMs) have broad medical knowledge and can reason about medical information across many domains, holding promising potential for diverse medical applications in the near future. In this study, we demonstrate a concerning vulnerability of LLMs in medicine. Through targeted manipulation of just 1.1% of the weights of the LLM, we can deliberately inject incorrect biomedical facts. The erroneous information is then propagated in the model’s output while maintaining performance on other biomedical tasks. We validate our findings in a set of 1025 incorrect biomedical facts. This peculiar susceptibility raises serious security and trustworthiness concerns for the application of LLMs in healthcare settings. It accentuates the need for robust protective measures, thorough verification mechanisms, and stringent management of access to these models, ensuring their reliable and safe use in medical practice.
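
The abstract describes altering roughly 1.1% of an LLM's weights to implant false biomedical facts. As a purely illustrative, hypothetical sketch (not the authors' actual procedure), the snippet below applies a ROME-style rank-one update to a single MLP projection matrix of GPT-2 and reports what fraction of the model's parameters that one matrix represents; the model choice, layer index, and update vectors are assumptions made only to make the idea concrete.

```python
# Hypothetical illustration of a targeted weight edit (ROME-style rank-one
# update) in a small open model. Model, layer, and vectors are assumptions;
# this is NOT the procedure from the paper, only a sketch of how editing one
# matrix touches a small fraction of all weights.
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")

# Pick one mid-depth MLP output projection as the edit target (locate-and-edit
# work associates such layers with stored factual associations).
W = model.transformer.h[6].mlp.c_proj.weight.data   # shape: (3072, 768)

# Rank-one update W <- W + u v^T. In a real knowledge edit, u and v would be
# derived from the subject's hidden "key" and the desired new "value" vector;
# here they are random placeholders.
u = torch.randn(W.shape[0], 1) * 0.01
v = torch.randn(1, W.shape[1]) * 0.01
W.add_(u @ v)

# Fraction of all parameters contained in the single edited matrix.
total = sum(p.numel() for p in model.parameters())
print(f"Edited matrix holds {W.numel() / total:.2%} of the model's weights")
```

Because the rest of the network is untouched, such an edit can change the model's answer to targeted factual prompts while leaving its behavior on unrelated tasks largely intact, which is the susceptibility the study highlights.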
Description: Viewed on 23 April 2025
Description: Online resource