TR2021-036
Capturing Multi-Resolution Context by Dilated Self-Attention
- "Capturing Multi-Resolution Context by Dilated Self-Attention", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), DOI: 10.1109/ICASSP39728.2021.9415001, June 2021, pp. 5869-5873.BibTeX TR2021-036 PDF
@inproceedings{Moritz2021jun,
  author = {Moritz, Niko and Hori, Takaaki and Le Roux, Jonathan},
  title = {Capturing Multi-Resolution Context by Dilated Self-Attention},
  booktitle = {IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)},
  year = 2021,
  pages = {5869--5873},
  month = jun,
  doi = {10.1109/ICASSP39728.2021.9415001},
  url = {https://www.merl.com/publications/TR2021-036}
}
Abstract:
Self-attention has become an important and widely used neural network component that has helped to establish new state-of-the-art results for various applications, such as machine translation and automatic speech recognition (ASR). However, the computational complexity of self-attention grows quadratically with the input sequence length. This can be particularly problematic for applications such as ASR, where an input sequence generated from an utterance can be relatively long. In this work, we propose a combination of restricted self-attention and a dilation mechanism, which we refer to as dilated self-attention. The restricted self-attention allows attention to neighboring frames of the query at a high resolution, and the dilation mechanism summarizes distant information to allow attending to it with a lower resolution. Different methods for summarizing distant frames are studied, such as subsampling, mean-pooling, and attention-based pooling. ASR results demonstrate substantial improvements compared to restricted self-attention alone, achieving results similar to full-sequence self-attention at a fraction of the computational cost.
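
To make the mechanism concrete, below is a minimal PyTorch sketch of the idea described in the abstract, not the paper's implementation: a single-head layer in which each query attends to frames within a local window at full resolution and, jointly, to mean-pooled summaries of the sequence at lower resolution (the paper also studies subsampling and attention-based pooling). The class name DilatedSelfAttention and the parameters local_width and pool_size are illustrative assumptions, and for simplicity the pooled summary here covers the entire sequence rather than only the distant frames. Note also that masking a dense score matrix, as done here for brevity, does not realize the computational savings; an efficient implementation would compute only the banded local scores.

```python
import torch
import torch.nn.functional as F
from torch import nn


class DilatedSelfAttention(nn.Module):
    """Single-head dilated self-attention sketch: local full-resolution
    attention plus attention to a mean-pooled, low-resolution summary."""

    def __init__(self, d_model, local_width=16, pool_size=8):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.local_width = local_width  # frames on each side seen at full resolution
        self.pool_size = pool_size      # frames merged into one summary vector
        self.scale = d_model ** -0.5

    def forward(self, x):
        # x: (batch, time, d_model)
        B, T, _ = x.shape
        q, k, v = self.q(x), self.k(x), self.v(x)

        # Low-resolution summaries via mean-pooling over blocks of pool_size
        # frames (subsampling or attention-based pooling are alternatives).
        k_sum = F.avg_pool1d(k.transpose(1, 2), self.pool_size).transpose(1, 2)
        v_sum = F.avg_pool1d(v.transpose(1, 2), self.pool_size).transpose(1, 2)

        # Restricted (banded) attention scores: each query sees only frames
        # within +/- local_width at full resolution.
        scores_local = torch.einsum("btd,bsd->bts", q, k) * self.scale
        idx = torch.arange(T, device=x.device)
        band = (idx[None, :] - idx[:, None]).abs() <= self.local_width  # (T, T)
        scores_local = scores_local.masked_fill(~band, float("-inf"))

        # Every query also attends to all pooled summary vectors.
        scores_sum = torch.einsum("btd,bsd->bts", q, k_sum) * self.scale

        # Joint softmax over local frames and summaries, then weighted sum.
        attn = torch.cat([scores_local, scores_sum], dim=-1).softmax(dim=-1)
        values = torch.cat([v, v_sum], dim=1)
        return torch.einsum("bts,bsd->btd", attn, values)


# Example: a 400-frame utterance with 256-dimensional features.
x = torch.randn(2, 400, 256)
out = DilatedSelfAttention(256)(x)
print(out.shape)  # torch.Size([2, 400, 256])
```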