Efficient finetuning of foundation model combined with few-shot learning improves pattern recognition in histopathology
| Main Authors: | Ayk Jessen, Christoph Blattgerste, Maximilian Legnar, Zoran V. Popovic, Stefan Porubsky, Cleo-Aron Weis |
|---|---|
| Format: | Article (Journal) |
| Language: | English |
| Published: | 27 November 2025 |
| In: | Virchows Archiv |
| ISSN: | 1432-2307 |
| DOI: | 10.1007/s00428-025-04351-8 |
| Online Access: | Publisher, licensed access, full text: https://doi.org/10.1007/s00428-025-04351-8 |
| Author Notes: | Ayk Jessen, Christoph Blattgerste, Maximilian Legnar, Zoran V. Popovic, Stefan Porubsky, Cleo-Aron Weis |
| Summary: | Neural networks have achieved state-of-the-art performance in classifying whole slide image (WSI) patches in histopathology through supervised learning. However, their reliance on large-scale annotated datasets imposes a substantial labeling burden, limiting the practical benefits of AI-assisted diagnostics. Foundation models, pretrained on diverse datasets for multi-purpose applications, offer a promising alternative by enabling out-of-the-box generalization. Despite their success in other domains, these models currently underperform in histopathology due to domain-specific challenges. In this work, we introduce a fine-tuning pipeline that significantly enhances the performance of foundation models for histopathological classification using only a minimal amount of labeled data. Specifically, we curate an unlabeled dataset from the target domain and employ self-supervised learning (SSL) to adapt pretrained Vision Transformers (ViTs). Our approach substantially improves classification accuracy while reducing annotation requirements, making foundation models more suitable for histopathological analysis. Furthermore, our results show that SSL-trained models can extract richer features even without access to class labels or balanced training data. |
| Item Description: | Viewed on 12 Jan 2026 |
| Physical Description: | Online resource |
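The summary describes pairing self-supervised adaptation of a Vision Transformer with few-shot learning, but the record does not specify the few-shot classifier. A common choice in this setting, sketched below purely as an illustrative assumption (function name and toy data are invented, not from the paper), is a nearest-centroid classifier over the extracted embeddings: each class is represented by the mean of its few labeled support features, and queries are assigned to the most cosine-similar centroid.

```python
import numpy as np

def nearest_centroid_few_shot(support_feats, support_labels, query_feats):
    """Few-shot classification by cosine similarity to per-class centroids.

    support_feats : (n_support, d) feature vectors from the labeled examples
    support_labels: (n_support,) integer class labels
    query_feats   : (n_query, d) feature vectors to classify
    returns       : (n_query,) predicted integer labels
    """
    classes = np.unique(support_labels)
    # Mean embedding per class, then L2-normalise so the dot product
    # below is a cosine similarity.
    centroids = np.stack([
        support_feats[support_labels == c].mean(axis=0) for c in classes
    ])
    centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    sims = q @ centroids.T  # (n_query, n_classes)
    return classes[np.argmax(sims, axis=1)]
```

In practice the feature vectors would come from the SSL-adapted ViT backbone; the toy arrays below stand in for such embeddings. This parameter-free classifier is a natural fit for the record's claim that richer SSL features reduce annotation requirements, since it needs only a handful of labeled examples per class.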