Potential of ChatGPT and GPT-4 for data mining of free-text CT reports on lung cancer
| Main Authors: | Matthias A. Fink, Arved Bischoff, Christoph A. Fink, Martin Moll, Jonas Kroschke, Luca Dulz, Claus Peter Heußel, Hans-Ulrich Kauczor, Tim F. Weber |
|---|---|
| Format: | Article (Journal) |
| Language: | English |
| Published: | September 2023 |
| In: | Radiology, Year: 2023, Volume: 308, Issue: 3, Pages: 1-9 |
| ISSN: | 1527-1315 |
| DOI: | 10.1148/radiol.231362 |
| Online Access: | Publisher (licensed, full text): https://doi.org/10.1148/radiol.231362 ; Publisher (licensed, full text): https://pubs.rsna.org/doi/10.1148/radiol.231362 |
| Author Notes: | Matthias A. Fink, MD, Arved Bischoff, MD, Christoph A. Fink, MD, Martin Moll, MD, Jonas Kroschke, MD, Luca Dulz, MSc, Claus Peter Heußel, MD, Hans-Ulrich Kauczor, MD, Tim F. Weber, MD |
| Summary: | **Background:** The latest large language models (LLMs) solve unseen problems via user-defined text prompts without the need for retraining, offering potentially more efficient information extraction from free-text medical records than manual annotation. **Purpose:** To compare the performance of the LLMs ChatGPT and GPT-4 in data mining and labeling oncologic phenotypes from free-text CT reports on lung cancer by using user-defined prompts. **Materials and Methods:** This retrospective study included patients who underwent lung cancer follow-up CT between September 2021 and March 2023. A subset of 25 reports was reserved for prompt engineering to instruct the LLMs in extracting lesion diameters, labeling metastatic disease, and assessing oncologic progression. This output was fed into a rule-based natural language processing pipeline to match ground truth annotations from four radiologists and derive performance metrics. The oncologic reasoning of LLMs was rated on a five-point Likert scale for factual correctness and accuracy. The occurrence of confabulations was recorded. Statistical analyses included Wilcoxon signed rank and McNemar tests. **Results:** On 424 CT reports from 424 patients (mean age, 65 years ± 11 [SD]; 265 male), GPT-4 outperformed ChatGPT in extracting lesion parameters (98.6% vs 84.0%, P < .001), resulting in 96% correctly mined reports (vs 67% for ChatGPT, P < .001). GPT-4 achieved higher accuracy in identification of metastatic disease (98.1% [95% CI: 97.7, 98.5] vs 90.3% [95% CI: 89.4, 91.0]) and higher performance in generating correct labels for oncologic progression (F1 score, 0.96 [95% CI: 0.94, 0.98] vs 0.91 [95% CI: 0.89, 0.94]) (both P < .001). In oncologic reasoning, GPT-4 had higher Likert scale scores for factual correctness (4.3 vs 3.9) and accuracy (4.4 vs 3.3), with a lower rate of confabulation (1.7% vs 13.7%) than ChatGPT (all P < .001). **Conclusion:** When using user-defined prompts, GPT-4 outperformed ChatGPT in extracting oncologic phenotypes from free-text CT reports on lung cancer and demonstrated better oncologic reasoning with fewer confabulations. (Illustrative sketches of the prompt-based extraction and the paired statistics follow this record.) |
| Item Description: | Published online: September 19, 2023. Accessed: June 6, 2024 |
| Physical Description: | Online Resource |
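
The Materials and Methods describe a prompt-driven pipeline: user-defined prompts instruct the LLM to pull lesion diameters, a metastasis label, and a progression label out of each free-text report, and the output is then matched against radiologist ground truth. Below is a minimal sketch of that extraction step, assuming the current OpenAI Python client; the prompt wording, the JSON schema, and the `extract_phenotypes` helper are illustrative assumptions, not the authors' actual prompts.

```python
# Minimal sketch of prompt-based phenotype extraction. Assumptions: the prompt
# wording, JSON schema, decoding settings, and helper name are illustrative;
# the study's actual prompts are not reproduced in this record.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

EXTRACTION_PROMPT = (
    "You are a radiology data-mining assistant. From the CT report below, "
    "return only JSON with three fields: 'lesion_diameters_mm' (list of "
    "numbers), 'metastatic_disease' (true or false), and "
    "'oncologic_progression' (one of 'progression', 'stable', 'response').\n\n"
    "Report:\n{report}"
)

def extract_phenotypes(report_text: str, model: str = "gpt-4") -> dict:
    """Send one report through a user-defined prompt and parse the JSON reply."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic output eases rule-based downstream matching
        messages=[{"role": "user",
                   "content": EXTRACTION_PROMPT.format(report=report_text)}],
    )
    # Real replies may wrap the JSON in markdown fences; a robust pipeline
    # would strip those before parsing.
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    sample = ("Follow-up chest CT. The left upper lobe mass measures 23 mm "
              "(previously 31 mm). New 8 mm right adrenal nodule, "
              "suspicious for metastasis.")
    print(extract_phenotypes(sample))
```

Setting the temperature to 0 mirrors the need for reproducible output feeding a rule-based matcher, though the study does not state its decoding settings.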
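The Results compare paired, per-report outcomes of the two models, which is why the abstract cites McNemar (paired binary correctness), Wilcoxon signed rank (paired ordinal Likert ratings), and F1 (label quality). The following is a toy sketch of how such tests fit together, assuming scipy, statsmodels, and scikit-learn; all arrays are random stand-ins and none of the numbers reproduce the study's data.

```python
# Toy sketch of the abstract's paired statistics; all data below are random
# stand-ins, not the study's results.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.metrics import f1_score
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
n = 424  # number of reports in the study

# Paired binary outcomes: did each model label a given report correctly?
gpt4_ok = rng.random(n) < 0.98
chatgpt_ok = rng.random(n) < 0.90

# McNemar's test operates on the 2x2 table of paired (GPT-4, ChatGPT) outcomes.
table = np.array([
    [np.sum(gpt4_ok & chatgpt_ok),  np.sum(gpt4_ok & ~chatgpt_ok)],
    [np.sum(~gpt4_ok & chatgpt_ok), np.sum(~gpt4_ok & ~chatgpt_ok)],
])
print("McNemar P value:", mcnemar(table, exact=True).pvalue)

# The Wilcoxon signed rank test suits the paired five-point Likert ratings
# of oncologic reasoning (toy scores here).
gpt4_likert = rng.integers(3, 6, n)      # integers in 3..5
chatgpt_likert = rng.integers(2, 6, n)   # integers in 2..5
print("Wilcoxon P value:", wilcoxon(gpt4_likert, chatgpt_likert).pvalue)

# F1 score for a binary progression label against toy ground truth.
truth = rng.random(n) < 0.5
gpt4_pred = np.where(rng.random(n) < 0.96, truth, ~truth)
print("GPT-4 F1:", f1_score(truth, gpt4_pred))
```

McNemar's test is the natural choice here because both models label the same 424 reports, so only the discordant pairs (one model right, the other wrong) carry information about their difference.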