Transformer-Based Encoder-Encoder Architecture for Spoken Term Detection
Date issued
2023
Publisher
Springer
Abstract
The paper presents a method for spoken term detection (STD) based on the Transformer architecture. We propose an encoder-encoder architecture employing two BERT-like encoders with additional modifications, including attention masking and convolutional and upsampling layers. The encoders project a recognized hypothesis and a searched term into a shared embedding space, where the score of a putative hit is computed as a calibrated dot product. In the experiments, we used the Wav2Vec 2.0 speech recognizer. The proposed system outperformed a baseline method based on deep LSTMs on English and Czech STD datasets derived from the USC Shoah Foundation Visual History Archive (MALACH).
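The scoring step described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the embeddings below are random stand-ins for the outputs of the two BERT-like encoders, and the calibration is assumed to be a simple affine map followed by a sigmoid (the abstract only states that the dot product is calibrated).

```python
import numpy as np

EMB_DIM = 256  # hypothetical embedding dimension
rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Stand-ins for the two encoders: one projects the recognized hypothesis
# (one vector per position), the other projects the searched term,
# into the same shared embedding space.
hypothesis_emb = l2_normalize(rng.standard_normal((100, EMB_DIM)))
term_emb = l2_normalize(rng.standard_normal(EMB_DIM))

# Raw dot-product scores between the term and every hypothesis position.
raw_scores = hypothesis_emb @ term_emb

# Assumed calibration: affine map + sigmoid, mapping scores to (0, 1).
a, b = 4.0, -1.0  # hypothetical calibration parameters
hit_scores = 1.0 / (1.0 + np.exp(-(a * raw_scores + b)))

best = int(np.argmax(hit_scores))
print(best, float(hit_scores[best]))
```

A putative hit would then be declared wherever the calibrated score exceeds a decision threshold.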
Subject(s)
neural networks, transformer architecture, spoken term detection