Conference paper, Year: 2019

Deep learning and voice comparison: phonetically-motivated vs. automatically-learned features

Abstract

Broadband spectrograms of the French vowels /Ã/, /a/, /E/, /e/, /i/, /@/, and /O/, extracted from radio broadcast corpora, were used to recognize 45 speakers with a deep convolutional neural network (CNN). The same network was also trained with 62 phonetic parameters in order to i) see whether the resulting confusions were identical to those made by the CNN trained with spectrograms, and ii) understand which acoustic parameters were used by the network. The two networks produced identical discrimination results 68% of the time. In 22% of the data, the network trained with spectrograms discriminated successfully while the network trained with phonetic parameters failed; the reverse was found in 10% of the data. We display the relevant phonetic parameters, both as raw values and as values relative to each speaker's mean, and illustrate cases that led to poor discrimination. When the network trained with spectrograms failed to discriminate between certain tokens, parameters related to f0 proved significant.
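The abstract does not give the network's architecture or training details. As a purely illustrative sketch, the Keras model below shows what a small CNN for 45-way speaker classification from spectrogram inputs might look like; the input shape, layer sizes, and optimizer are assumptions, not taken from the paper. Per the abstract, the same network would then be trained a second time on the 62 phonetic parameters, and the two models' per-token confusions compared.

```python
# Illustrative sketch only: architecture, input shape, and hyperparameters
# are assumptions; the abstract does not specify them.
from tensorflow.keras import layers, models

N_SPEAKERS = 45  # number of speakers, from the abstract


def build_speaker_cnn(input_shape=(128, 128, 1)):
    """Small CNN mapping a single-channel spectrogram image to one of
    45 speaker classes. All layer sizes below are hypothetical."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(N_SPEAKERS, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# Per the abstract, the same network is trained twice: once on broadband
# spectrograms and once on 62 phonetic parameters. Comparing per-token
# correctness of the two trained models yields the agreement breakdown
# reported above (68% identical, 22% spectrogram-only success,
# 10% parameter-only success).
```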
Main file: Gendrot_Ferragne_Pellegrini.pdf (436.34 KB)
Origin: Files produced by the author(s)

Dates and versions

halshs-02412947, version 1 (16-12-2019)

Identifiers

  • HAL Id: halshs-02412947, version 1

Cite

Cédric Gendrot, Emmanuel Ferragne, Thomas Pellegrini. Deep learning and voice comparison: phonetically-motivated vs. automatically-learned features. ICPhS, Aug 2019, Melbourne, Australia. ⟨halshs-02412947⟩
177 views
160 downloads
Last updated on 20/04/2024
