Toward an acoustico-perceptual representation of speech sounds, incorporating the language-dependent system of contrasts
Abstract
This presentation deals with a new type of annotation of speech sounds, based not on their distinctive features but on acoustico-perceptual features. The background is the set of principles underlying the acoustic theory of speech production (ATSP) and the use of articulatory modelling to investigate the perceptual correlates of articulatory changes. Articulatory modelling (here, Maeda's model) makes it possible to investigate compensatory phenomena. Articulatory parameters are used as input in a realistic manner: (1) jaw position, (2) tongue dorsum position, (3) tongue dorsum shape, (4) tongue apex position, (5) lip aperture, (6) lip protrusion, and (7) larynx height, plus (8) glottal area, (9) fundamental frequency, and (10) velopharyngeal port opening. As predicted by the ATSP, the F-pattern is similar for vowels and consonants produced with a similar configuration of the same vocal tract; the difference between consonants and vowels is mainly due to a different degree of constriction at the glottis and at the supraglottal constriction, which can be reproduced by articulatory modelling. A change in the degree of constriction changes the source type and the source location, possibly creating noise and producing drastic acoustic changes. The representation makes it possible to describe the difference between similar vowels, such as different types of /i/ with either F2 maximal (prepalatal) or F3 maximal (palatal). The effect of the immediate phonetic context on the F-pattern of a given phoneme, and its prosodic status, are included in the representation of the phonemes, explaining the changes of timbre among the different allophones of the phoneme.
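To make the parameterisation concrete, the following is a minimal Python sketch of the kind of input frame described above, together with the idea that the degree of constriction determines the source type and its location. It is an illustration under stated assumptions, not Maeda's actual implementation: the attribute names, value ranges, and thresholds are hypothetical placeholders chosen only to mirror the ten parameters listed in the abstract.

```python
from dataclasses import dataclass


@dataclass
class ArticulatoryFrame:
    """Illustrative input frame: seven articulatory parameters plus three
    source/velopharyngeal parameters, as enumerated in the abstract."""
    jaw_position: float            # (1) jaw height
    tongue_dorsum_position: float  # (2) front-back position of the tongue body
    tongue_dorsum_shape: float     # (3) bunched vs. flat tongue body
    tongue_apex_position: float    # (4) tongue tip position
    lip_aperture: float            # (5) degree of lip opening
    lip_protrusion: float          # (6) lip rounding / protrusion
    larynx_height: float           # (7) vertical larynx position
    glottal_area: float            # (8) degree of glottal opening
    f0: float                      # (9) fundamental frequency (Hz)
    velopharyngeal_port: float     # (10) nasal coupling area


def source_type(glottal_area: float, constriction_area: float) -> str:
    """Toy mapping from constriction degree to source type and location,
    reflecting the claim that narrowing a constriction shifts the source
    (voicing vs. noise) and where it is generated. Thresholds are arbitrary."""
    if constriction_area <= 0.0:
        return "stop (transient source at the closure)"
    if constriction_area < 0.2 and glottal_area < 0.3:
        return "voiced fricative (voicing + noise at the constriction)"
    if constriction_area < 0.2:
        return "voiceless fricative (noise at the constriction)"
    if glottal_area < 0.3:
        return "vowel or approximant (voicing at the glottis)"
    return "aspiration (noise at the glottis)"


# Example: an /i/-like configuration narrowed toward a palatal fricative.
frame = ArticulatoryFrame(0.5, 0.9, 0.8, 0.2, 0.4, 0.1, 0.5,
                          glottal_area=0.1, f0=120.0, velopharyngeal_port=0.0)
print(source_type(frame.glottal_area, constriction_area=0.15))
```

Keeping the supraglottal configuration fixed while varying only the glottal and constriction areas is one way to exercise, in such a sketch, the abstract's point that a vowel and a consonant sharing the same vocal-tract shape differ mainly in source type and location.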