Weakly supervised learning with positive and unlabeled data for automatic brain tumor segmentation

Bibliographic Details
Main Authors: Wolf, Daniel (Author), Regnery, Sebastian (Author), Tarnawski, Rafal (Author), Bobek-Billewicz, Barbara (Author), Polańska, Joanna (Author), Götz, Michael (Author)
Format: Article (Journal)
Language: English
Published: 24 October 2022
In: Applied Sciences
Year: 2022, Volume: 12, Issue: 21, Pages: 1-14
ISSN: 2076-3417
DOI: 10.3390/app122110763
Online Access: Publisher, license required, full text: https://doi.org/10.3390/app122110763
Publisher, license required, full text: https://www.mdpi.com/2076-3417/12/21/10763
Author Notes:Daniel Wolf, Sebastian Regnery, Rafal Tarnawski, Barbara Bobek-Billewicz, Joanna Polańska and Michael Götz
Description
Summary: A major obstacle to the learning-based segmentation of healthy and tumorous brain tissue is the requirement of creating a fully labeled training dataset. Obtaining these data requires tedious and error-prone manual labeling of both tumor and non-tumor areas. To mitigate this problem, we propose a new method for obtaining high-quality classifiers from a dataset in which only small parts of the tumor areas are labeled. This is achieved by using positive and unlabeled learning in conjunction with a domain adaptation technique. The proposed approach leverages the tumor volume, which we show can either be derived with simple measures or estimated completely automatically with a proposed estimation method. While learning from sparse samples already reduces the necessary annotation time from 4 h to 5 min, we show that the proposed approach reduces the annotation effort by roughly a further 50% while maintaining accuracy comparable to that of classifiers trained in the traditional way with this approach.
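The abstract describes positive and unlabeled (PU) learning in which the tumor volume acts as prior knowledge about how much of the image should be classified as tumor. The sketch below is a minimal illustration of one common PU heuristic consistent with that description, not the authors' implementation: a classifier is trained to separate the sparsely labeled tumor voxels from all unlabeled voxels, and the decision threshold is then chosen so that the predicted tumor fraction matches a measured or estimated tumor volume fraction. The function name `pu_segment`, the `tumor_volume_fraction` parameter, and the choice of a random forest are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pu_segment(features, positive_mask, tumor_volume_fraction):
    """Illustrative PU-learning sketch (not the paper's method).

    features:              array of shape (..., n_features), one feature vector per voxel
    positive_mask:         boolean array, True where a voxel is labeled as tumor
    tumor_volume_fraction: fraction of voxels expected to be tumor (measured or estimated)
    """
    X = features.reshape(-1, features.shape[-1])
    y = positive_mask.reshape(-1).astype(int)  # 1 = labeled tumor, 0 = unlabeled

    # Train positive vs. unlabeled; unlabeled voxels act as provisional negatives.
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    clf.fit(X, y)
    scores = clf.predict_proba(X)[:, 1]

    # Choose the threshold so the predicted tumor volume matches the prior fraction.
    threshold = np.quantile(scores, 1.0 - tumor_volume_fraction)
    return (scores >= threshold).reshape(positive_mask.shape)
```

Under the usual PU assumption that labeled tumor voxels are a representative sample of all tumor voxels, the classifier's scores rank voxels consistently with the true tumor probability, so calibrating the threshold against the known volume recovers a sensible segmentation even though no explicit non-tumor labels were provided.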
Item Description: Viewed on 20.01.2023
Physical Description: Online resource