Abstract
Recent voice activity detection (VAD) schemes have aimed to leverage modern neural architectures, but few have successfully applied attention networks because of their strong reliance on the encoder-decoder framework. As a result, the systems built so far depend heavily on recurrent neural networks, which are computationally costly and can be less context-sensitive given the scale and properties of acoustic frames. To address this issue with the self-attention mechanism and achieve a simple, powerful, and environment-robust VAD, we adopt the self-attention architecture in building the modules for voice detection and boosted prediction. Our model surpasses previous neural architectures in low signal-to-noise-ratio and noisy real-world scenarios, while also displaying robustness across noise types. We make the test labels on movie data publicly available for fair comparison and future progress.
Original language | English |
---|---|
Title of host publication | 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021 - Proceedings |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 6808-6812 |
Number of pages | 5 |
ISBN (Electronic) | 9781728176055 |
DOIs | |
State | Published - 2021 |
Event | 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021 - Virtual, Toronto, Canada Duration: 6 Jun 2021 → 11 Jun 2021 |
Publication series
Name | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings |
---|---|
Volume | 2021-June |
ISSN (Print) | 1520-6149 |
Conference
Conference | 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021 |
---|---|
Country/Territory | Canada |
City | Virtual, Toronto |
Period | 6/06/21 → 11/06/21 |
Bibliographical note
Publisher Copyright: © 2021 IEEE
Keywords
- Real-world noise
- Self-attention
- Voice activity detection