Validating large language models against manual information extraction from case reports of drug-induced parkinsonism in patients with schizophrenia spectrum and mood disorders: a proof of concept study

Bibliographic details
Main authors: Volkmer, Sebastian (author), Glück, Alina (author), Meyer-Lindenberg, Andreas (author), Schwarz, Emanuel (author), Hirjak, Dusan (author)
Document type: Article (Journal)
Language: English
Published: 20 March 2025
In: Schizophrenia
Year: 2025, Volume: 11, Issue: 1, Pages: 1-4
ISSN: 2754-6993
DOI: 10.1038/s41537-025-00601-5
Online access: Publisher, license required, full text: https://doi.org/10.1038/s41537-025-00601-5
Publisher, license required, full text: https://www.nature.com/articles/s41537-025-00601-5
Authors: Sebastian Volkmer, Alina Glück, Andreas Meyer-Lindenberg, Emanuel Schwarz and Dusan Hirjak
Description
Abstract: In this proof of concept study, we demonstrated how Large Language Models (LLMs) can automate the conversion of unstructured case reports into clinical ratings. By leveraging instructions from a standardized clinical rating scale and evaluating the LLM's confidence in its outputs, we aimed to refine prompting strategies and enhance reproducibility. Using this strategy and case reports of drug-induced parkinsonism, we showed that LLM-extracted data closely align with manual extraction by clinical raters, achieving an accuracy of 90%.
Description: Viewed on 16 June 2025
Description: Online Resource