Conference paper. Year: 2019

Event integration mechanisms across languages and their psychological reality

Abstract

Two different modes of visual attention are recognized in visual cognition research: (a) an early ambient mode of processing and (b) a late focal mode. The former is associated with bottom-up mechanisms guided by low-level perceptual saliency features (e.g. configuration); the latter relies on top-down processing based on high-level (i.e. contextual) information and on knowledge-based features such as semantic schemas, content, the co-occurrence of objects in a scene, etc. (Pannasch & Velichkovsky 2009). Knowledge-based information can be related to the linguistic knowledge of the viewers. More specifically, in the domain of motion event encoding, speakers' knowledge depends on how available spatial components (e.g. Path, Manner) are in a language and on how they combine into semantic schemas to form constrained spatial arrangements (Talmy 2006). Each language has a relatively closed set of 'pre-packaged' schemas and focuses differently on the core schema (i.e. the Path a Figure follows in a displacement): some languages (e.g. French) lexicalize the core schema in the main verb; others (e.g. English) express it in the periphery of the sentence.

Many psycholinguistic studies (e.g. Papafragou et al. 2008) suggest that such language differences are only surface differences that cannot influence the visual processing of events (or only momentarily). According to these authors, gaze behaviour can change due to momentary top-down language effects when people prepare to speak, but language interference, if any, occurs late in the viewing process and is therefore considered superficial. For others, language effects are not confined to verbal behaviour but extend to non-verbal behaviours such as eye movements (cf. Soroli et al. 2019 for a review) and affect low-level visual processing at an early stage (Meteyard et al. 2007).

Using verbal (production) and non-verbal (eye-tracking) measures, we investigated how speakers of two typologically different languages (English, French) perceive motion events visually and describe them verbally. If language has only superficial effects that occur late during processing, no language differences should be found during the first stages of visual exploration. If, on the other hand, language has a deeper psychological reality, then differences should be found not only during late exploration and verbalization but also during early, low-level scene viewing.

The verbal measures confirmed the typological differences across the groups: English speakers systematically encoded Path in peripheral devices and lexicalized Manner in the verb; French speakers preferred to lexicalize Path, downplaying details related to Manner. With respect to eye movements, the participants of the two groups explored the scenes very differently: although both groups showed a higher proportion of focal than ambient fixations, short saccades and long smooth pursuits were more frequent in the English data, whereas the French participants opted for ambient gazes, with higher proportions of large-amplitude saccades at the earliest stages of visual exploration. The findings suggest that both verbal encoding and event perception can be affected to a great extent by language-specific features. Typological properties are not just surface forms that merely emerge in verbal behaviour: they leave traces at the earliest stages of cognitive processing and thus have a psychological reality that should not be ignored.
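To make the ambient/focal distinction concrete, the sketch below shows one common way such gaze modes can be operationalized: each fixation is classified by crossing its duration with the amplitude of the saccade that follows it (short fixations followed by large saccades count as ambient; long fixations followed by small saccades count as focal). This is a minimal illustration in Python, not the authors' analysis pipeline; the thresholds and helper names are illustrative assumptions rather than parameters reported in the abstract.

```python
# Minimal sketch (not the authors' pipeline): classify fixations as focal vs ambient
# from fixation duration and the amplitude of the following saccade.
# The thresholds (180 ms, 5 deg) are illustrative assumptions only.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Fixation:
    duration_ms: float                  # fixation duration in milliseconds
    next_saccade_deg: Optional[float]   # amplitude of the following saccade (deg of visual angle)


def classify_fixation(fix: Fixation,
                      dur_threshold_ms: float = 180.0,
                      amp_threshold_deg: float = 5.0) -> str:
    """Label a fixation as 'focal', 'ambient', or 'mixed'.

    Focal: long fixation followed by a short saccade (local, top-down inspection).
    Ambient: short fixation followed by a large saccade (global, bottom-up scanning).
    """
    if fix.next_saccade_deg is None:
        return "mixed"  # last fixation of a trial: no following saccade to evaluate
    long_fixation = fix.duration_ms >= dur_threshold_ms
    short_saccade = fix.next_saccade_deg <= amp_threshold_deg
    if long_fixation and short_saccade:
        return "focal"
    if not long_fixation and not short_saccade:
        return "ambient"
    return "mixed"


def focal_ambient_proportions(fixations: List[Fixation]) -> dict:
    """Per-trial proportions of focal vs ambient fixations (mixed cases excluded)."""
    labels = [classify_fixation(f) for f in fixations]
    n_focal = labels.count("focal")
    n_ambient = labels.count("ambient")
    total = n_focal + n_ambient
    if total == 0:
        return {"focal": 0.0, "ambient": 0.0}
    return {"focal": n_focal / total, "ambient": n_ambient / total}
```

Proportions of this kind, together with saccade amplitudes and smooth-pursuit measures, are the eye-movement variables on which the two language groups are compared in the study.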
References

Meteyard, L., Bahrami, B. & Vigliocco, G. (2007). Motion detection and motion verbs: Language affects low-level visual perception. Psychological Science, 18(11), 1007–1013.
Pannasch, S. & Velichkovsky, B. M. (2009). Distractor effect and saccade amplitudes: Further evidence on different modes of processing in free exploration of visual images. Visual Cognition, 17(6–7), 1109–1131.
Papafragou, A., Hulbert, J. & Trueswell, J. (2008). Does language guide event perception? Evidence from eye movements. Cognition, 108(1), 155–184.
Soroli, E., Hickmann, M. & Hendriks, H. (2019). Casting an eye on motion events: Eye tracking and its implications for linguistic typology. In M. Aurnague & D. Stosic (eds.), The semantics of dynamic space in French: Descriptive, experimental and formal studies on motion expression, 249–288. Amsterdam: John Benjamins.
Talmy, L. (2006). The fundamental system of spatial schemas in language. In B. Hampe (ed.), From perception to meaning: Image schemas in cognitive linguistics, 199–234. Mouton de Gruyter.
No file deposited

Dates and versions

hal-02277569, version 1 (03-09-2019)

Identifiers

  • HAL Id: hal-02277569, version 1

Cite

Efstathia Soroli, Coralie Vincent, Helen Engemann, Henriëtte Hendriks, Maya Hickmann. Event integration mechanisms across languages and their psychological reality. 15th International Cognitive Linguistics Conference: "Crosslinguistic Perspectives on Cognitive Linguistics", Aug 2019, Nishinomiya, Japan. ⟨hal-02277569⟩
133 views
0 downloads
Last updated on 21/04/2024
