Text-to-Audio Grounding: Building Correspondence Between Captions and Sound Events

02/23/2021
by   Xuenan Xu, et al.

Automated audio captioning is a cross-modal task that generates natural language descriptions summarizing the sound events in an audio clip. However, grounding the actual sound events in a given audio clip based on its corresponding caption has not been investigated. This paper contributes an AudioGrounding dataset, which provides the correspondence between sound events and the captions provided in AudioCaps, along with the location (timestamps) of each present sound event. Based on this dataset, we propose the text-to-audio grounding (TAG) task, which interactively considers the relationship between audio processing and language understanding. A baseline approach is provided, resulting in an event-F1 score of 28.3%.
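Since the abstract reports an event-level F1 score, a minimal sketch of how such a metric can be computed may help make the evaluation concrete. This is not the authors' evaluation code: the (phrase, onset, offset) event representation, the greedy one-to-one matching, and the 0.5 s onset/offset tolerance are all assumptions for illustration.

# Hedged sketch of an event-level F1 for grounding predictions.
# Assumptions (not from the paper): events are (phrase, onset_s, offset_s)
# triples, and a prediction matches a reference if the phrases are equal
# and both boundaries fall within a 0.5 s tolerance.
from typing import List, Tuple

Event = Tuple[str, float, float]  # (sound-event phrase, onset in s, offset in s)

def event_f1(preds: List[Event], refs: List[Event], tol: float = 0.5) -> float:
    """Greedily match each predicted event to an unused reference event."""
    matched = set()
    tp = 0
    for phrase, onset, offset in preds:
        for i, (r_phrase, r_onset, r_offset) in enumerate(refs):
            if i in matched:
                continue
            if (phrase == r_phrase
                    and abs(onset - r_onset) <= tol
                    and abs(offset - r_offset) <= tol):
                matched.add(i)
                tp += 1
                break
    fp = len(preds) - tp   # predicted events with no matching reference
    fn = len(refs) - tp    # reference events that were never matched
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

# Example: one correct detection of "dog barking", one missed "car passing by".
# event_f1([("dog barking", 1.2, 3.0)],
#          [("dog barking", 1.0, 3.1), ("car passing by", 5.0, 7.5)])  -> 0.667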
