Cuing Without Sharing: A Federated Cued Speech Recognition Framework via Mutual Knowledge Distillation

08/07/2023
by Yuxuan Zhang, et al.

Cued Speech (CS) is a visual coding system that encodes spoken language at the phonetic level, combining lip-reading with hand gestures to assist communication among people with hearing impairments. The Automatic CS Recognition (ACSR) task transcribes CS videos into linguistic text, involving lips and hands as two distinct modalities that convey complementary information. However, traditional centralized training poses potential privacy risks because CS data contain facial and gesture videos. To address this issue, we propose a new Federated Cued Speech Recognition (FedCSR) framework that trains an ACSR model over decentralized CS data without sharing private information. In particular, a mutual knowledge distillation method is proposed to maintain cross-modal semantic consistency of the non-IID CS data, which ensures learning a unified feature space for both linguistic and visual information. On the server side, a globally shared linguistic model is trained to capture long-term dependencies in text sentences and is aligned with the visual information from the local clients via visual-to-linguistic distillation. On the client side, the visual model of each client is trained on its own local data, assisted by linguistic-to-visual distillation that treats the linguistic model as the teacher. To the best of our knowledge, this is the first approach to consider the federated ACSR task for privacy protection. Experimental results on a Chinese CS dataset with multiple cuers demonstrate that our approach outperforms both mainstream federated learning baselines and existing centralized state-of-the-art ACSR methods, achieving a 9.7% improvement in character error rate (CER) and a 15.0% improvement in word error rate (WER).
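The mutual distillation described above can be sketched as a pair of symmetric KL-divergence terms: on the client, the server's linguistic model acts as teacher for the visual model, and on the server, the clients' visual predictions supervise the linguistic model. The sketch below is illustrative only; the function names, temperature value, and loss symmetry are assumptions, not the paper's exact formulation.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def mutual_distillation_losses(visual_logits, linguistic_logits, temperature=2.0):
    """Return the two distillation terms of the mutual-KD objective.

    l2v: linguistic-to-visual term (client side, linguistic model as teacher).
    v2l: visual-to-linguistic term (server side, visual features as teacher).
    """
    p_visual = softmax(visual_logits, temperature)
    p_linguistic = softmax(linguistic_logits, temperature)
    l2v = kl_divergence(p_linguistic, p_visual)
    v2l = kl_divergence(p_visual, p_linguistic)
    return l2v, v2l
```

In a full federated round, only model updates and these distillation signals would cross the client/server boundary, never the raw facial or gesture videos, which is the privacy property the framework targets.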


Related research

06/25/2021 · Cross-Modal Knowledge Distillation Method for Automatic Cued Speech Recognition
Cued Speech (CS) is a visual communication system for the deaf or hearin...

12/02/2022 · Cross-Modal Mutual Learning for Cued Speech Recognition
Automatic Cued Speech Recognition (ACSR) provides an intelligent human-m...

02/08/2021 · Federated Acoustic Modeling For Automatic Speech Recognition
Data privacy and protection is a crucial issue for any automatic speech ...

04/07/2019 · Long-Term Vehicle Localization by Recursive Knowledge Distillation
Most of the current state-of-the-art frameworks for cross-season visual ...

10/20/2021 · Knowledge distillation from language model to acoustic model: a hierarchical multi-task learning approach
The remarkable performance of the pre-trained language model (LM) using ...

11/26/2019 · Hearing Lips: Improving Lip Reading by Distilling Speech Recognizers
Lip reading has witnessed unparalleled development in recent years thank...

04/11/2022 · Multistream neural architectures for cued-speech recognition using a pre-trained visual feature extractor and constrained CTC decoding
This paper proposes a simple and effective approach for automatic recogn...
