Validating large language models against manual information extraction from case reports of drug-induced parkinsonism in patients with schizophrenia spectrum and mood disorders: a proof of concept study

Bibliographic Details
Main Authors: Volkmer, Sebastian (Author), Glück, Alina (Author), Meyer-Lindenberg, Andreas (Author), Schwarz, Emanuel (Author), Hirjak, Dusan (Author)
Format: Article (Journal)
Language: English
Published: 20 March 2025
In: Schizophrenia
Year: 2025, Volume: 11, Issue: 1, Pages: 1-4
ISSN:2754-6993
DOI:10.1038/s41537-025-00601-5
Online Access: Publisher, license required, full text: https://doi.org/10.1038/s41537-025-00601-5
Publisher, license required, full text: https://www.nature.com/articles/s41537-025-00601-5
Author Notes: Sebastian Volkmer, Alina Glück, Andreas Meyer-Lindenberg, Emanuel Schwarz and Dusan Hirjak
Description
Summary: In this proof of concept study, we demonstrated how Large Language Models (LLMs) can automate the conversion of unstructured case reports into clinical ratings. By leveraging instructions from a standardized clinical rating scale and evaluating the LLM’s confidence in its outputs, we aimed to refine prompting strategies and enhance reproducibility. Using this strategy and case reports of drug-induced parkinsonism, we showed that LLM-extracted data closely align with manual extraction by clinical raters, achieving an accuracy of 90%.
Item Description: Viewed on 16.06.2025
Physical Description: Online Resource
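
The summary describes prompting an LLM with instructions from a standardized clinical rating scale and filtering its outputs by self-reported confidence. A minimal illustrative sketch of such a workflow, in Python, is shown below; the function names, prompt wording, and confidence threshold are assumptions for illustration, not the authors' code or the actual rating scale used in the study.

import json

RATING_SCALE_INSTRUCTIONS = (
    "Rate the severity of drug-induced parkinsonism described in the case "
    "report on an ordinal scale (e.g., 0 = absent to 4 = severe), following "
    "the item definitions of a standardized clinical rating scale."
)

def build_prompt(case_report):
    # Combine the rating-scale instructions with the raw case-report text
    # and ask for a machine-readable answer with a confidence estimate.
    return (
        RATING_SCALE_INSTRUCTIONS
        + "\n\nCase report:\n" + case_report
        + '\n\nAnswer as JSON: {"rating": <integer>, "confidence": <0-1>}'
    )

def query_llm(prompt):
    # Placeholder for an actual LLM API call; returns a mock response here.
    return '{"rating": 2, "confidence": 0.9}'

def extract_rating(case_report, confidence_threshold=0.8):
    # Keep the LLM rating only if its self-reported confidence is high enough;
    # otherwise return None to flag the case for manual review by a clinical rater.
    response = json.loads(query_llm(build_prompt(case_report)))
    if response["confidence"] < confidence_threshold:
        return None
    return response["rating"]

print(extract_rating("62-year-old patient with rigidity and tremor after haloperidol."))

Extracted ratings kept by such a filter could then be compared against manual ratings to compute the agreement reported in the abstract.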