Sound texture synthesis using RI spectrograms

10/21/2019
by Hugo Caracalla, et al.

This article introduces a new parametric synthesis method for sound textures based on existing work in visual and sound texture synthesis. Starting from a base sound signal, an optimization process is performed until the cross-correlations between the feature maps of several untrained 2D Convolutional Neural Networks (CNNs) resemble those of an original sound texture. We use compressed RI spectrograms as input to the CNNs: this time-frequency representation stacks the real and imaginary parts of the Short-Time Fourier Transform (STFT) and thus implicitly contains both magnitude and phase information, allowing for convincing syntheses of a variety of audio events. The optimization is, however, performed directly on the time signal to avoid any STFT consistency issues. The results of an online perceptual evaluation are also detailed and show that this method produces more realistic-sounding results than existing parametric methods on a wide array of textures.
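
The PyTorch sketch below illustrates the overall pipeline described in the abstract: a compressed RI spectrogram (stacked real and imaginary STFT parts), cross-correlations (Gram matrices) of feature maps from untrained 2D CNNs, and gradient descent applied directly to the time-domain signal. It is a minimal illustration under stated assumptions, not the authors' implementation: the STFT parameters, compression exponent, single-layer random CNNs, number of networks, loss weighting, and optimizer settings are all placeholders.

```python
import torch
import torch.nn.functional as F

# Illustrative parameters; the paper's exact settings may differ.
N_FFT, HOP, COMPRESS = 512, 128, 0.3

def ri_spectrogram(signal):
    """Compressed RI spectrogram: real and imaginary STFT parts stacked
    as two channels, with an elementwise power-law compression (assumed)."""
    window = torch.hann_window(N_FFT, device=signal.device)
    stft = torch.stft(signal, N_FFT, hop_length=HOP, window=window,
                      return_complex=True)
    ri = torch.stack([stft.real, stft.imag], dim=0)          # (2, freq, time)
    return torch.sign(ri) * (ri.abs() + 1e-8).pow(COMPRESS)  # compression

def random_cnn(in_ch=2, out_ch=128, kernel=11, seed=0):
    """Untrained 2D CNN: one random, frozen convolution layer (a stand-in
    for whatever architecture the paper actually uses)."""
    torch.manual_seed(seed)
    conv = torch.nn.Conv2d(in_ch, out_ch, kernel, padding=kernel // 2)
    for p in conv.parameters():
        p.requires_grad_(False)
    return conv

def gram(features):
    """Cross-correlations between feature maps (Gram matrix)."""
    c = features.shape[1]
    f = features.reshape(c, -1)
    return f @ f.t() / f.shape[1]

# Target texture and the synthesized signal (optimized in the time domain).
target = torch.randn(4 * 16000)            # placeholder for a real recording
signal = torch.randn(4 * 16000, requires_grad=True)

cnns = [random_cnn(seed=s) for s in range(4)]
with torch.no_grad():
    target_grams = [gram(F.relu(c(ri_spectrogram(target)[None]))) for c in cnns]

opt = torch.optim.Adam([signal], lr=1e-3)
for step in range(100):                    # far more iterations in practice
    opt.zero_grad()
    spec = ri_spectrogram(signal)[None]
    loss = sum(torch.mean((gram(F.relu(c(spec))) - g) ** 2)
               for c, g in zip(cnns, target_grams))
    loss.backward()
    opt.step()
```

Because the loss is defined on CNN statistics of the RI spectrogram but the gradient is backpropagated through the (differentiable) STFT to the waveform itself, the synthesized signal never has to be recovered from a modified spectrogram, which is how the method sidesteps STFT consistency issues.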
