Transformer-Based Encoder-Encoder Architecture for Spoken Term Detection
| DC Field | Value | Language |
| --- | --- | --- |
| dc.contributor.author | Švec, Jan | |
| dc.contributor.author | Šmídl, Luboš | |
| dc.contributor.author | Lehečka, Jan | |
| dc.date.accessioned | 2025-06-20T08:43:52Z | |
| dc.date.available | 2025-06-20T08:43:52Z | |
| dc.date.issued | 2023 | |
| dc.date.updated | 2025-06-20T08:43:52Z | |
| dc.description.abstract | The paper presents a method for spoken term detection based on the Transformer architecture. We propose the encoder-encoder architecture employing two BERT-like encoders with additional modifications, including attention masking, convolutional and upsampling layers. The encoders project a recognized hypothesis and a searched term into a shared embedding space, where the score of the putative hit is computed using the calibrated dot product. In the experiments, we used the Wav2Vec 2.0 speech recognizer. The proposed system outperformed a baseline method based on deep LSTMs on the English and Czech STD datasets based on USC Shoah Foundation Visual History Archive (MALACH). | en |
| dc.format | 12 | |
| dc.identifier.doi | 10.1007/978-3-031-47665-5_28 | |
| dc.identifier.isbn | 978-3-031-47664-8 | |
| dc.identifier.issn | 0302-9743 | |
| dc.identifier.obd | 43940821 | |
| dc.identifier.orcid | Švec, Jan 0000-0001-8362-5927 | |
| dc.identifier.orcid | Šmídl, Luboš 0000-0002-8169-2410 | |
| dc.identifier.orcid | Lehečka, Jan 0000-0002-3889-8069 | |
| dc.identifier.uri | http://hdl.handle.net/11025/60796 | |
| dc.language.iso | en | |
| dc.project.ID | GA22-27800S | |
| dc.project.ID | VJ01010108 | |
| dc.publisher | Springer | |
| dc.relation.ispartofseries | 7th Asian Conference on Pattern Recognition (ACPR 2023) | |
| dc.subject | neural networks | en |
| dc.subject | transformer architecture | en |
| dc.subject | spoken term detection | en |
| dc.title | Transformer-Based Encoder-Encoder Architecture for Spoken Term Detection | en |
| dc.type | Conference proceedings paper (D) | |
| dc.type | CONFERENCE PROCEEDINGS PAPER | |
| dc.type.status | Published Version | |
| local.files.count | 1 | * |
| local.files.size | 1010436 | * |
| local.has.files | yes | * |
| local.identifier.eid | 2-s2.0-85177433088 | |
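The abstract above outlines the core scoring idea: two BERT-like Transformer encoders project the recognized hypothesis and the searched term into a shared embedding space, and each putative hit is scored with a calibrated dot product. The sketch below is only an illustration of that idea in PyTorch; it is not the authors' implementation, it omits the attention masking, convolutional and upsampling layers mentioned in the abstract, and all names, dimensions, and the affine-plus-sigmoid calibration are assumptions made for the example.

```python
# Minimal sketch (assumed, not the paper's code): two Transformer encoders
# embed a recognized hypothesis and a query term into a shared space; the
# per-position detection score is a calibrated dot product.
import torch
import torch.nn as nn


class EncoderEncoderSTD(nn.Module):
    def __init__(self, vocab_size=5000, dim=256, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        # Independent BERT-like encoders for the hypothesis and the term.
        self.hyp_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, dim_feedforward=4 * dim,
                                       batch_first=True), num_layers=layers)
        self.term_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, dim_feedforward=4 * dim,
                                       batch_first=True), num_layers=layers)
        # Learned scale and bias that calibrate the raw dot product.
        self.scale = nn.Parameter(torch.tensor(1.0))
        self.bias = nn.Parameter(torch.tensor(0.0))

    def forward(self, hyp_tokens, term_tokens):
        h = self.hyp_encoder(self.embed(hyp_tokens))             # (B, L, D) per-position embeddings
        t = self.term_encoder(self.embed(term_tokens)).mean(1)   # (B, D) pooled term embedding
        dots = torch.einsum("bld,bd->bl", h, t)                  # dot product per hypothesis position
        return torch.sigmoid(self.scale * dots + self.bias)      # calibrated hit scores


model = EncoderEncoderSTD()
hyp = torch.randint(0, 5000, (1, 20))   # token ids of a recognized hypothesis
term = torch.randint(0, 5000, (1, 3))   # token ids of the searched term
scores = model(hyp, term)               # shape (1, 20): score per hypothesis position
```

In the system described in the abstract, the hypothesis side comes from a Wav2Vec 2.0 recognizer and the encoders carry additional masking, convolutional and upsampling layers; those details are deliberately left out of this illustration.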
Files
Original bundle
- Name: Svec_Smidl_Lehecka_Transformer-Based_Encoder-Encoder_Architecture_for_Spoken_Term_Detection_2023.pdf
- Size: 986.75 KB
- Format: Adobe Portable Document Format
License bundle
- Name: license.txt
- Size: 1.71 KB
- Format: Item-specific license agreed to upon submission