Conference Papers Year: 2019

Deep learning and voice comparison: phonetically-motivated vs. automatically-learned features

Cédric Gendrot (1), Emmanuel Ferragne (1), Thomas Pellegrini (2)

Abstract

Broadband spectrograms of the French vowels /Ã/, /a/, /E/, /e/, /i/, /@/, and /O/, extracted from radio broadcast corpora, were used to recognize 45 speakers with a deep convolutional neural network (CNN). The same network was also trained on 62 phonetic parameters in order to i) determine whether the resulting confusions were identical to those made by the CNN trained on spectrograms, and ii) understand which acoustic parameters the network relied on. The two networks produced identical discrimination results 68% of the time. In 22% of the data, the network trained on spectrograms discriminated successfully while the network trained on phonetic parameters failed; the reverse held in 10% of the data. We present the relevant phonetic parameters, both as raw values and as values relative to each speaker's mean, and highlight the cases that led to poor discrimination. When the network trained on spectrograms failed to discriminate between tokens, parameters related to f0 proved significant.
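The paper's exact architecture is not reproduced on this page. As an illustration of the setup the abstract describes (a deep CNN mapping a broadband spectrogram of a single vowel token to one of 45 speaker labels), here is a minimal PyTorch sketch; the `SpeakerCNN` name, layer sizes, and input dimensions are illustrative assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class SpeakerCNN(nn.Module):
    """Sketch: classify one fixed-size vowel spectrogram into one of 45 speakers.
    Layer sizes and input shape are assumptions, not the paper's architecture."""
    def __init__(self, n_speakers: int = 45):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),    # pool over frequency and time axes
            nn.Flatten(),
            nn.Linear(32, n_speakers),  # logits over the speaker set
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, freq_bins, time_frames), e.g. a broadband spectrogram
        return self.classifier(self.features(x))

# Dummy forward pass: a batch of 8 spectrograms, 128 frequency bins x 64 frames.
model = SpeakerCNN()
logits = model(torch.randn(8, 1, 128, 64))
print(logits.shape)  # torch.Size([8, 45])
```

A breakdown like the abstract's 68% / 22% / 10% would then follow from scoring each token's classification as correct or incorrect under both networks (spectrogram-trained and phonetic-parameter-trained) and tabulating where the two agree or diverge.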
Main file
Gendrot_Ferragne_Pellegrini.pdf (436.34 KB)
Origin: Files produced by the author(s)

Dates and versions

halshs-02412947, version 1 (16-12-2019)

Identifiers

  • HAL Id: halshs-02412947, version 1

Cite

Cédric Gendrot, Emmanuel Ferragne, Thomas Pellegrini. Deep learning and voice comparison: phonetically-motivated vs. automatically-learned features. ICPhS, Aug 2019, Melbourne, Australia. ⟨halshs-02412947⟩