The SignEval 2025 Challenge at the ICCV Multimodal Sign Language Recognition Workshop: Results and Discussion
Date issued
2025
Publisher
Institute of Electrical and Electronics Engineers Inc.
Abstract
This paper summarizes the results of the first multimodal sign language recognition challenge, SignEval 2025, organized at ICCV 2025. The challenge featured two tracks: (i) a continuous sign language recognition (CSLR) track based on the newly curated Isharah dataset, a Saudi Sign Language dataset, and (ii) an isolated sign language recognition (ISLR) track using the MultiMeDaLIS dataset, a multimodal Italian Sign Language corpus tailored for doctor-patient communication. The CSLR track comprises two tasks: Signer-Independent and Unseen-Sentences. The Signer-Independent task tests a model's ability to generalize across signers, a critical property for scalable real-world CSLR systems. The Unseen-Sentences task evaluates a model's capability to recognize novel sentence compositions by leveraging learned grammar and semantics. In the ISLR track, participants were challenged to classify isolated signs using only the radar and RGB modalities of MultiMeDaLIS. The challenge maintained two leaderboards to showcase methods, with participants setting new benchmarks and achieving state-of-the-art results in both tracks. More information on the challenges, tasks, leaderboards, baselines, and development kits is available at https://multimodal-sign-language-recognition.github.io/ICCV-2025/.
Subject(s)
SignEval 2025 challenge, multimodal sign language recognition