Fine-tuning pre-trained models for Automatic Speech Recognition: experiments on a fieldwork corpus of Japhug (Trans-Himalayan family)
Conference paper. Year: 2022


Abstract

This is a report on results obtained in the development of speech recognition tools intended to support linguistic documentation efforts. The test case is an extensive fieldwork corpus of Japhug, an endangered language of the Trans-Himalayan (Sino-Tibetan) family. The goal is to reduce the transcription workload of field linguists. The method is a deep learning approach based on language-specific fine-tuning of a generic pre-trained representation model, XLS-R, built on a Transformer architecture. We note implementation difficulties in terms of learning stability, but the approach nonetheless brings significant improvements: the quality of phonemic transcription is improved over earlier experiments, and, most significantly, the new approach reaches the stage of automatic word recognition. Subjective evaluation of the tool by the author of the training data confirms the usefulness of this approach.
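
For readers looking for a concrete starting point, the following is a minimal sketch (not taken from the paper) of how language-specific fine-tuning of XLS-R for CTC-based phonemic transcription is typically set up with the HuggingFace Transformers library. The checkpoint name (facebook/wav2vec2-xls-r-300m), the vocab.json symbol inventory, the output path, and all hyperparameters are illustrative assumptions, not the authors' settings.

    # Minimal sketch of XLS-R fine-tuning for CTC transcription (assumptions noted in comments).
    from transformers import (Wav2Vec2CTCTokenizer, Wav2Vec2FeatureExtractor,
                              Wav2Vec2Processor, Wav2Vec2ForCTC,
                              TrainingArguments, Trainer)

    # "vocab.json" is a hypothetical file mapping the corpus-specific
    # phoneme/character inventory to integer indices.
    tokenizer = Wav2Vec2CTCTokenizer("vocab.json", unk_token="[UNK]",
                                     pad_token="[PAD]", word_delimiter_token="|")
    feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16_000,
                                                 padding_value=0.0, do_normalize=True,
                                                 return_attention_mask=True)
    processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

    # Load the multilingual XLS-R encoder and attach a randomly initialized
    # CTC head sized to the target symbol inventory.
    model = Wav2Vec2ForCTC.from_pretrained(
        "facebook/wav2vec2-xls-r-300m",      # assumed checkpoint size
        ctc_loss_reduction="mean",
        pad_token_id=processor.tokenizer.pad_token_id,
        vocab_size=len(processor.tokenizer),
    )
    model.freeze_feature_encoder()           # common practice: keep the CNN front-end frozen

    training_args = TrainingArguments(
        output_dir="xlsr-japhug",            # hypothetical output path
        per_device_train_batch_size=8,       # illustrative values, not the paper's settings
        learning_rate=3e-4,
        warmup_steps=500,
        num_train_epochs=30,
        fp16=True,
    )

    # The train/eval datasets and data collator must be built from the fieldwork
    # recordings and their transcriptions (omitted here):
    # trainer = Trainer(model=model, args=training_args, train_dataset=...,
    #                   eval_dataset=..., data_collator=...,
    #                   tokenizer=processor.feature_extractor)
    # trainer.train()

In this standard setup, the pre-trained encoder supplies generic speech representations and only a small CTC output layer is learned from scratch, which is what makes fine-tuning feasible on a fieldwork-sized corpus.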

Domains

Linguistics
Main file: ComputEL_5_Japhug_ASR.pdf (228.59 KB)
Origin: Files produced by the author(s)

Dates and versions

halshs-03647315, version 1 (20-04-2022)

Cite

Séverine Guillaume, Guillaume Wisniewski, Cécile Macaire, Guillaume Jacques, Alexis Michaud, et al. Fine-tuning pre-trained models for Automatic Speech Recognition: experiments on a fieldwork corpus of Japhug (Trans-Himalayan family). 5th Workshop on Computational Methods for Endangered Languages (ComputEL-5), May 2022, Dublin, Ireland. ⟨10.18653/v1/2022.computel-1.21⟩. ⟨halshs-03647315⟩