SimTriplet: Simple Triplet Representation Learning with a Single GPU

03/09/2021
by   Quan Liu, et al.

Contrastive learning is a key technique in modern self-supervised learning. The broader accessibility of earlier approaches is hindered by their demand for heavy computational resources (e.g., at least 8 GPUs or 32 TPU cores) to accommodate large-scale negative samples or momentum encoders. The more recent SimSiam approach addresses these limitations via a stop-gradient operation, without momentum encoders. In medical image analysis, multiple instances can be obtained from the same patient or tissue. Inspired by these advances, we propose a simple triplet representation learning (SimTriplet) approach for pathological images. The contribution of this paper is three-fold: (1) the proposed SimTriplet method takes advantage of the multi-view nature of medical images beyond self-augmentation; (2) the method maximizes both intra-sample and inter-sample similarities via triplets of positive pairs, without using negative samples; and (3) recent mixed-precision training is employed so that training requires only a single GPU with 16 GB of memory. By learning from 79,000 unlabeled pathological patch images, SimTriplet achieved 10.58% better performance than supervised learning. It also achieved 2.13% better performance than SimSiam. Our proposed SimTriplet can achieve decent performance using only 1% of labeled data. The code is available at https://github.com/hrlblab/SimTriple.
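To make the triplet idea above concrete, the following is a minimal sketch (not the authors' implementation) of how a SimTriplet-style loss might combine intra-sample and inter-sample terms. It assumes a SimSiam-style negative cosine similarity between predictor outputs `p_i` and projector outputs `z_i`, where views 1 and 2 are augmentations of the same patch (intra-sample) and view 3 is a nearby patch from the same tissue (inter-sample); the function names and pairing scheme are illustrative assumptions.

```python
import numpy as np

def neg_cosine(p, z):
    # SimSiam-style negative cosine similarity. In the real training
    # graph, z would be detached (stop-gradient) before this call.
    p = p / np.linalg.norm(p)
    z = z / np.linalg.norm(z)
    return -float(np.dot(p, z))

def simtriplet_loss(p1, z1, p2, z2, p3, z3):
    # Hypothetical sketch of a symmetric triplet loss:
    # - intra-sample term: two augmented views of the same patch
    # - inter-sample term: an adjacent patch from the same tissue,
    #   treated as an additional positive (no negative samples used)
    intra = neg_cosine(p1, z2) / 2 + neg_cosine(p2, z1) / 2
    inter = neg_cosine(p2, z3) / 2 + neg_cosine(p3, z2) / 2
    return intra + inter

# Usage: with perfectly aligned embeddings, each similarity term
# reaches its minimum of -1, so the total loss approaches -2.
v = np.array([1.0, 0.0])
loss = simtriplet_loss(v, v, v, v, v, v)
```

Because all three views are treated as positives, the loss pulls representations of augmentations and of neighboring tissue patches together without needing a large negative-sample bank or a momentum encoder.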
