A baseline for machine-learning-based hepatocellular carcinoma diagnosis using multi-modal clinical data
| Main Authors: | Binwu Wang, Isaac Rodriguez, Leon Breitinger, Fabian Tollens, Timo Itzel, Dennis Grimm, Andrei Sirazitdinov, Matthias Frölich, Stefan Schönberg, Andreas Teufel, Jürgen Hesser, Wenzhao Zhao |
|---|---|
| Format: | Article (Journal) |
| Language: | English |
| Published: | 20 Jan 2025 |
| In: | arXiv, Year: 2025, Pages: 1-7 |
| DOI: | 10.48550/arXiv.2501.11535 |
| Online Access: | Publisher, free of charge, full text: https://doi.org/10.48550/arXiv.2501.11535; Publisher, free of charge, full text: http://arxiv.org/abs/2501.11535 |
| Author Notes: | Binwu Wang, Isaac Rodriguez, Leon Breitinger, Fabian Tollens, Timo Itzel, Dennis Grimm, Andrei Sirazitdinov, Matthias Frölich, Stefan Schönberg, Andreas Teufel, Jürgen Hesser, Wenzhao Zhao |
| Summary: | The objective of this paper is to provide a baseline for multi-modal data classification on a novel open multimodal dataset of hepatocellular carcinoma (HCC), which includes both image data (contrast-enhanced CT and MRI images) and tabular data (clinical laboratory test data as well as case report forms). TNM staging is the classification task. Features are collected from the vectorized, preprocessed tabular data together with radiomics features from the contrast-enhanced CT and MRI images. Feature selection is performed based on mutual information. An XGBoost classifier predicts the TNM staging, achieving a prediction accuracy of $0.89 \pm 0.05$ and an AUC of $0.93 \pm 0.03$. The results show that this high level of prediction accuracy can only be obtained by combining image and clinical laboratory data, making this a good example case where multi-modal classification is mandatory for accurate results (a minimal sketch of this pipeline is given after the record below). |
| Item Description: | Viewed on 10 Feb 2025 |
| Physical Description: | Online Resource |
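The summary describes a three-step pipeline: fuse tabular and radiomics features, select features by mutual information, and classify the TNM stage with XGBoost. Below is a minimal Python sketch of that pipeline, assuming a binarized stage label and synthetic placeholder data; the feature counts, hyperparameters, and the selection size `k` are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch: mutual-information feature selection + XGBoost for TNM staging.
# Dataset loading, radiomics extraction, and all parameter values are assumed,
# not taken from the paper.

import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_validate
from sklearn.pipeline import Pipeline
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# Placeholder stand-in for the fused feature matrix: vectorized clinical
# tabular features concatenated with radiomics features from CT/MRI.
n_patients, n_features = 200, 300
X = rng.normal(size=(n_patients, n_features))
y = rng.integers(0, 2, size=n_patients)  # assumed binarized TNM-stage label

pipeline = Pipeline([
    # Rank features by mutual information with the label and keep the top k
    ("select", SelectKBest(score_func=mutual_info_classif, k=50)),
    # Gradient-boosted tree classifier, as named in the summary
    ("clf", XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                          eval_metric="logloss")),
])

# Cross-validated accuracy and AUC, mirroring the metrics reported
scores = cross_validate(pipeline, X, y, cv=5, scoring=["accuracy", "roc_auc"])
print(f"accuracy: {scores['test_accuracy'].mean():.2f} "
      f"± {scores['test_accuracy'].std():.2f}")
print(f"AUC:      {scores['test_roc_auc'].mean():.2f} "
      f"± {scores['test_roc_auc'].std():.2f}")
```

Wrapping the selector and the classifier in a single `Pipeline` ensures the mutual-information scores are recomputed inside each cross-validation fold, so feature selection never sees the held-out patients; selecting features on the full dataset first would leak label information into the reported accuracy and AUC.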