Comparative analysis of machine learning algorithms for computer-assisted reporting based on fully automated cross-lingual RadLex mappings
| Main Authors: | Máté E. Maros, Chang Gyu Cho, Andreas G. Junge, Benedikt Kämpgen, Victor Saase, Fabian Siegel, Frederik Trinkmann, Thomas Ganslandt, Christoph Groden, Holger Wenz |
|---|---|
| Format: | Article (Journal) |
| Language: | English |
| Published: | 09 March 2021 |
| In: | Scientific reports, Year: 2021, Volume: 11, Pages: 1-18 |
| ISSN: | 2045-2322 |
| DOI: | 10.1038/s41598-021-85016-9 |
| Online Access: | Publisher, free of charge, full text: https://doi.org/10.1038/s41598-021-85016-9; Publisher, free of charge, full text: https://www.nature.com/articles/s41598-021-85016-9 |
| Author Notes: | Máté E. Maros, Chang Gyu Cho, Andreas G. Junge, Benedikt Kämpgen, Victor Saase, Fabian Siegel, Frederik Trinkmann, Thomas Ganslandt, Christoph Groden and Holger Wenz |
| Summary: | Computer-assisted reporting (CAR) tools have been suggested to improve radiology report quality by context-sensitively recommending key imaging biomarkers. However, studies evaluating machine learning (ML) algorithms on cross-lingual ontological (RadLex) mappings for developing embedded CAR algorithms are lacking. Therefore, we compared ML algorithms developed on human expert-annotated features against those developed on fully automated cross-lingual (German to English) RadLex mappings, using 206 CT reports of suspected stroke. The target label was whether the Alberta Stroke Programme Early CT Score (ASPECTS) should have been provided (yes/no: 154/52). We focused on the probabilistic outputs of ML algorithms, including tree-based methods, elastic net, support vector machines (SVMs) and fastText (a linear classifier), which were evaluated in the same 5 × 5-fold nested cross-validation framework. This allowed for model stacking and classifier rankings. Performance was evaluated using calibration metrics (AUC, Brier score, log loss) and calibration plots. Contextual ML-based assistance recommending ASPECTS was feasible. SVMs showed the highest accuracies both on human-extracted (87%) and RadLex features (findings: 82.5%; impressions: 85.4%). FastText achieved the highest accuracy (89.3%) and AUC (92%) on impressions. Boosted trees fitted on findings had the best calibration profile. Our approach provides guidance for choosing ML classifiers for CAR tools in a fully automated and language-agnostic fashion, using bag-of-RadLex terms on limited expert-labelled training data. (A minimal illustrative sketch of this evaluation setup follows the record below.) |
| Item Description: | Viewed on 12 August 2021 |
| Physical Description: | Online Resource |
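As a rough, hedged illustration of the evaluation workflow described in the summary (bag-of-RadLex-terms features, a probabilistic classifier, 5 × 5-fold nested cross-validation, and AUC, Brier score and log loss as metrics), the following Python sketch uses scikit-learn. The RadLex term IDs, toy report documents, labels and the linear-SVM grid are placeholders invented for the example; this is an assumed setup, not the authors' data or pipeline.

```python
# Minimal sketch (assumed setup, not the authors' code): a bag-of-RadLex-terms
# pipeline with a probabilistic linear SVM, tuned and evaluated with 5 x 5-fold
# nested cross-validation and scored with AUC, Brier score and log loss.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import brier_score_loss, log_loss, roc_auc_score
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Hypothetical inputs: each "document" is the bag of RadLex term IDs mapped from
# one CT report; the labels mark whether ASPECTS should have been provided.
reports = [
    "RID6434 RID5350 RID49586",
    "RID6434 RID39567",
    "RID5350 RID39567 RID49586",
    "RID6434 RID5350",
] * 20                                   # replicated only to get a workable toy sample
labels = np.array([1, 0, 1, 0] * 20)

# Inner loop tunes hyper-parameters; outer loop yields out-of-fold probabilities.
inner_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

pipeline = make_pipeline(
    CountVectorizer(token_pattern=r"RID\d+", lowercase=False),  # bag-of-RadLex-terms
    SVC(kernel="linear", probability=True),                     # probabilistic SVM output
)
model = GridSearchCV(
    pipeline,
    param_grid={"svc__C": [0.1, 1.0, 10.0]},
    cv=inner_cv,
    scoring="neg_log_loss",
)

# Out-of-fold predicted probabilities of the positive class from the outer loop;
# these are also what a stacking meta-learner would consume as features.
proba = cross_val_predict(model, reports, labels, cv=outer_cv, method="predict_proba")[:, 1]

print(f"AUC:         {roc_auc_score(labels, proba):.3f}")
print(f"Brier score: {brier_score_loss(labels, proba):.3f}")
print(f"Log loss:    {log_loss(labels, proba):.3f}")
```

Swapping the SVC for boosted trees, an elastic-net logistic regression, or fastText probabilities would reproduce the rest of the classifier comparison within the same nested cross-validation framework.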