Phys Med Biol. 2021 Nov 24. doi: 10.1088/1361-6560/ac3d15. Online ahead of print.
ABSTRACT
Recent medical image segmentation methods rely heavily on large-scale training data and high-quality annotations. However, such resources are difficult to obtain because medical images are scarce and expert annotators are limited. How to exploit limited annotations while maintaining performance is therefore an essential yet challenging problem. In this paper, we tackle this problem in a self-learning manner by proposing a Generative Adversarial Semi-supervised Network (GASNet). Limited annotated images serve as the main supervision signal, while unlabeled images are exploited as auxiliary information to improve performance. More specifically, we use a segmentation network as a generator to produce pseudo labels for the unlabeled images. To make the generator robust, we train an uncertainty discriminator with generative adversarial learning to assess the reliability of the pseudo labels. To further ensure dependability, we apply a feature mapping loss that enforces consistency between the statistical feature distributions of the generated labels and the real labels. The verified pseudo labels are then used to optimize the generator in a self-learning manner. We validate the effectiveness of the proposed method on the right ventricle dataset, the Sunnybrook dataset, the STACOM dataset, the ISIC dataset, and the Kaggle lung dataset, obtaining Dice coefficients of 0.8402 to 0.9121, 0.8103 to 0.9094, 0.9435 to 0.9724, 0.8635 to 0.8860, and 0.9697 to 0.9885, respectively, with 1/8 to 1/2 of the densely annotated labels. The improvement over the corresponding fully supervised baseline is up to 28.6 points.
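To make the training procedure described above concrete, the sketch below shows one possible PyTorch-style training step combining a supervised loss on labeled pairs, an adversarial loss from an uncertainty discriminator, a feature mapping loss matching feature statistics of real and generated label pairs, and self-training on pseudo labels the discriminator deems reliable. This is a minimal illustration under our own assumptions; the network definitions, loss weights, and the confidence threshold are hypothetical and not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SegGenerator(nn.Module):
    """Toy segmentation network standing in for the paper's generator."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),
        )
    def forward(self, x):
        return self.net(x)  # per-pixel class logits

class UncertaintyDiscriminator(nn.Module):
    """Scores (image, label map) pairs; intermediate features feed the feature mapping loss."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.head = nn.Conv2d(64, 1, 1)
    def forward(self, image, label_probs):
        f = self.features(torch.cat([image, label_probs], dim=1))
        return self.head(f), f  # reliability score map, intermediate features

def training_step(gen, disc, opt_g, opt_d, labeled, unlabeled, conf_thresh=0.8):
    img_l, mask_l = labeled          # (B,1,H,W) images, (B,H,W) integer masks
    img_u = unlabeled                # (B,1,H,W) unlabeled images

    # --- discriminator update: real annotated pairs vs. generated pseudo-label pairs ---
    with torch.no_grad():
        probs_u = F.softmax(gen(img_u), dim=1)
    real_onehot = F.one_hot(mask_l, probs_u.shape[1]).permute(0, 3, 1, 2).float()
    d_real, _ = disc(img_l, real_onehot)
    d_fake, _ = disc(img_u, probs_u)
    loss_d = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- generator update: supervised + adversarial + feature mapping + self-training losses ---
    loss_sup = F.cross_entropy(gen(img_l), mask_l)

    logits_u = gen(img_u)
    probs_u = F.softmax(logits_u, dim=1)
    d_fake, feat_fake = disc(img_u, probs_u)
    loss_adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))

    # feature mapping: match first-order feature statistics of generated vs. real label pairs
    _, feat_real = disc(img_l, real_onehot)
    loss_fm = F.l1_loss(feat_fake.mean(dim=(0, 2, 3)),
                        feat_real.mean(dim=(0, 2, 3)).detach())

    # self-training: keep pseudo labels only where the discriminator deems them reliable
    reliability = F.interpolate(torch.sigmoid(d_fake), size=img_u.shape[-2:])
    keep = (reliability > conf_thresh).squeeze(1)
    pseudo = probs_u.argmax(dim=1).detach()
    per_pixel = F.cross_entropy(logits_u, pseudo, reduction='none')
    loss_self = per_pixel[keep].mean() if keep.any() else torch.zeros((), device=img_u.device)

    # loss weights below are illustrative placeholders, not the paper's values
    loss_g = loss_sup + 0.01 * loss_adv + 0.1 * loss_fm + 0.5 * loss_self
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

In this sketch the discriminator plays two roles: its output map gates which pseudo-labeled pixels enter the self-training loss, and its intermediate features supply the statistics matched by the feature mapping term, mirroring the roles of reliability estimation and distribution consistency described in the abstract.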
PMID:34818627 | DOI:10.1088/1361-6560/ac3d15