Federated Self-Supervised Contrastive Learning and Masked Autoencoder for Dermatological Disease Diagnosis

by Yawen Wu, et al.

In dermatological disease diagnosis, the private data collected by mobile dermatology assistants reside on patients' distributed mobile devices. Federated learning (FL) can use this decentralized data to train models while keeping the data local. Existing FL methods assume all the data are labeled. However, medical data often come without full labels due to high labeling costs. Self-supervised learning (SSL) methods, such as contrastive learning (CL) and masked autoencoders (MAE), can leverage the unlabeled data to pre-train models, followed by fine-tuning with limited labels. However, combining SSL and FL poses unique challenges. For example, CL requires diverse data, but each device holds only limited data. For MAE, while Vision Transformer (ViT) based MAE achieves higher accuracy than CNNs in centralized learning, MAE's performance in FL with unlabeled data has not been investigated. Moreover, synchronizing ViTs between the server and clients differs from synchronizing traditional CNNs, so special synchronization methods need to be designed. In this work, we propose two federated self-supervised learning frameworks for dermatological disease diagnosis with limited labels. The first features lower computation costs, making it suitable for mobile devices; the second features higher accuracy and fits high-performance servers. Based on CL, we propose federated contrastive learning with feature sharing (FedCLF), in which features are shared to provide diverse contrastive information without exchanging raw data, preserving privacy. Based on MAE, we propose FedMAE, whose knowledge split separates the global and the local knowledge learned from each client; only the global knowledge is aggregated, yielding higher generalization performance. Experiments on dermatological disease datasets show that the proposed frameworks achieve superior accuracy over state-of-the-art methods.
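The abstract does not include code, but the feature-sharing idea behind FedCLF can be sketched roughly: each client shares only encoded feature vectors, the server pools and redistributes them, and each client uses the pool as extra negatives in an InfoNCE-style contrastive loss. This is a minimal toy sketch, not the paper's implementation; the function names, dimensions, and random "features" below are all hypothetical.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    """Project feature vectors onto the unit sphere."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: pull its positive close, push negatives away."""
    anchor, positive, negatives = map(l2_normalize, (anchor, positive, negatives))
    pos_logit = anchor @ positive / temperature        # similarity to the positive view
    neg_logits = negatives @ anchor / temperature      # similarities to shared features
    logits = np.concatenate([[pos_logit], neg_logits])
    m = logits.max()                                   # stable log-sum-exp
    return -pos_logit + m + np.log(np.exp(logits - m).sum())

rng = np.random.default_rng(0)
# Each client encodes its local batch into features and shares only these
# vectors, never the raw dermatological images.
client_features = [l2_normalize(rng.normal(size=(8, 32))) for _ in range(3)]
# The server pools the shared features and broadcasts the pool back.
feature_pool = np.concatenate(client_features)         # shape (24, 32)

# On one client: two augmented views of an image give anchor/positive, and the
# remote pool supplies the diverse negatives a small local dataset lacks.
anchor = rng.normal(size=32)
positive = anchor + 0.05 * rng.normal(size=32)         # stand-in for an augmented view
loss = info_nce_loss(anchor, positive, feature_pool)
```

Sharing normalized features instead of images keeps raw data on-device while still enlarging each client's set of contrastive negatives.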
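The knowledge-split idea in FedMAE can likewise be sketched: partition each client's model parameters into a "global" part that is averaged across clients each round and a "local" part that never leaves the client. The parameter names and the encoder/decoder split below are illustrative assumptions, not the paper's actual partition.

```python
import numpy as np

# Hypothetical partition of a ViT-MAE state dict: encoder blocks carry
# transferable "global" knowledge; the decoder head stays client-specific.
GLOBAL_KEYS = ("encoder.block0", "encoder.block1")
LOCAL_KEYS = ("decoder.head",)

def fedavg(states, keys, weights):
    """Weighted average of the selected parameter tensors across clients."""
    total = float(sum(weights))
    return {k: sum(w * s[k] for w, s in zip(weights, states)) / total for k in keys}

def knowledge_split_round(client_states, weights):
    """One communication round: aggregate only the global keys and
    leave each client's local keys untouched."""
    global_update = fedavg(client_states, GLOBAL_KEYS, weights)
    merged_states = []
    for state in client_states:
        merged = dict(state)
        merged.update(global_update)   # overwrite the global part with the average
        merged_states.append(merged)
    return merged_states
```

After a round, every client shares identical global parameters while its local parameters are exactly what local training produced, which is the aggregation behavior the abstract describes.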

