Learning Subject-Invariant Representations from Speech-Evoked EEG Using Variational Autoencoders

07/01/2022
by Lies Bollens, et al.

The electroencephalogram (EEG) is a powerful method to understand how the brain processes speech. Linear models have recently been replaced for this purpose by deep neural networks, which yield promising results. In related EEG classification fields, explicitly modeling subject-invariant features has been shown to improve generalization across subjects and benefit classification accuracy. In this work, we adapt factorized hierarchical variational autoencoders to exploit parallel EEG recordings of the same stimuli, modeling the EEG in two disentangled latent spaces. Subject classification accuracy reaches 98.96%, while binary content classification experiments reach accuracies of 51.51% and 62.91%.
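
Below is a minimal PyTorch-style sketch, not the authors' implementation, of the two-latent-space idea the abstract describes: a variational autoencoder that encodes an EEG segment into one latent intended for subject identity and one for stimulus content, then reconstructs the segment from both. The segment shape (64 channels x 128 samples), the layer sizes, and the names TwoLatentEEGVAE and vae_loss are illustrative assumptions.

```python
# Illustrative sketch only: a VAE with two disentangled latent spaces for EEG
# segments (subject-related and content-related). Shapes and sizes are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoLatentEEGVAE(nn.Module):
    def __init__(self, n_channels=64, n_samples=128, d_subject=16, d_content=16):
        super().__init__()
        d_in = n_channels * n_samples
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(d_in, 512), nn.ReLU())
        # Separate heads produce mean and log-variance for each latent space.
        self.subject_head = nn.Linear(512, 2 * d_subject)
        self.content_head = nn.Linear(512, 2 * d_content)
        self.decoder = nn.Sequential(
            nn.Linear(d_subject + d_content, 512), nn.ReLU(), nn.Linear(512, d_in)
        )
        self.out_shape = (n_channels, n_samples)

    @staticmethod
    def reparameterize(mu, logvar):
        # Standard VAE reparameterization: z = mu + sigma * eps.
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

    def forward(self, x):
        h = self.encoder(x)
        mu_s, logvar_s = self.subject_head(h).chunk(2, dim=-1)
        mu_c, logvar_c = self.content_head(h).chunk(2, dim=-1)
        z_s = self.reparameterize(mu_s, logvar_s)   # subject latent
        z_c = self.reparameterize(mu_c, logvar_c)   # content latent
        x_hat = self.decoder(torch.cat([z_s, z_c], dim=-1)).view(-1, *self.out_shape)
        return x_hat, (mu_s, logvar_s), (mu_c, logvar_c)


def vae_loss(x, x_hat, stats_subject, stats_content):
    # Reconstruction term plus a standard-normal KL term per latent space.
    recon = F.mse_loss(x_hat, x, reduction="mean")
    kl = sum(
        -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        for mu, logvar in (stats_subject, stats_content)
    )
    return recon + kl


# Example usage on a batch of random "EEG" segments.
model = TwoLatentEEGVAE()
x = torch.randn(8, 64, 128)
x_hat, stats_s, stats_c = model(x)
loss = vae_loss(x, x_hat, stats_s, stats_c)
```

The factorized hierarchical formulation used in the paper goes further than this sketch: it imposes different (sequence-level vs. segment-level) priors on the two latent spaces and exploits parallel recordings of the same stimulus across subjects to encourage the disentanglement, which the basic layout above does not show.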
