Temporal envelope and fine structure cues for dysarthric speech detection using CNNs

08/25/2021
by Ina Kodrasi, et al.

Deep learning-based techniques for automatic dysarthric speech detection have recently attracted interest in the research community. State-of-the-art techniques typically learn neurotypical and dysarthric discriminative representations by processing time-frequency input representations such as the magnitude spectrum of the short-time Fourier transform (STFT). Although these techniques are expected to leverage perceptual dysarthric cues, representations such as the magnitude spectrum of the STFT do not necessarily convey perceptual aspects of complex sounds. Inspired by the temporal processing mechanisms of the human auditory system, in this paper we factor signals into the product of a slowly varying envelope and a rapidly varying fine structure. Separately exploiting the different perceptual cues present in the envelope (i.e., phonetic information, stress, and voicing) and fine structure (i.e., pitch, vowel quality, and breathiness), two discriminative representations are learned through a convolutional neural network and used for automatic dysarthric speech detection. Experimental results show that processing both the envelope and fine structure representations yields a considerably better dysarthric speech detection performance than processing only the envelope, fine structure, or magnitude spectrum of the STFT representation.
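The abstract does not spell out how the envelope/fine-structure factorization is computed, but a standard way to obtain such a decomposition is via the analytic signal from the Hilbert transform: the instantaneous amplitude gives the slowly varying envelope and the cosine of the instantaneous phase gives the rapidly varying fine structure, so that the signal equals their product. The sketch below illustrates this (it is an assumption, not necessarily the exact procedure used in the paper):

```python
import numpy as np
from scipy.signal import hilbert

def envelope_fine_structure(x):
    """Factor a real signal x into envelope * fine_structure
    using the analytic signal x + j*H(x)."""
    analytic = hilbert(x)                        # analytic signal
    envelope = np.abs(analytic)                  # instantaneous amplitude (slow)
    fine_structure = np.cos(np.angle(analytic))  # instantaneous phase carrier (fast)
    return envelope, fine_structure

# Example: an amplitude-modulated tone
fs = 16000
t = np.arange(0, 0.1, 1 / fs)
modulation = 1.0 + 0.5 * np.sin(2 * np.pi * 5 * t)  # slow 5 Hz modulation
x = modulation * np.cos(2 * np.pi * 500 * t)        # 500 Hz carrier
env, tfs = envelope_fine_structure(x)
# By construction, env * tfs reconstructs x exactly.
```

In a detection pipeline such as the one described above, each of the two factors would then be turned into its own time-frequency representation and fed to the CNN, rather than using the magnitude STFT of the raw signal.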

