Language factors modulate audiovisual speech perception: a developmental perspective

  • Author: Joan Birulés
  • Thesis supervisors: Ferran Pons Gimeno (supervisor), Laura Bosch Galceran (co-supervisor)
  • Defended at the Universitat de Barcelona (Spain) in 2020
  • Language: English
  • Examination committee: Hélène Loevenbruck (chair), Salvador Soto-Faraco (secretary), Mathilde Fort (member)
  • Doctoral program: Programa de Doctorado en Cerebro, Cognición y Conducta, Universidad de Barcelona
  • Full text not available
  • Abstract
    • In most natural situations, adults look at the eyes of faces in search of social information (Yarbus, 1967). However, when the auditory information becomes unclear (e.g. speech in noise), they shift their attention to the mouth of a talking face and rely on redundant audiovisual cues to help them process the speech signal (Barenholtz, Mavica, & Lewkowicz, 2016; Buchan, Paré, & Munhall, 2007; Lansing & McConkie, 2003; Vatikiotis-Bateson, Eigsti, Yano, & Munhall, 1998). Likewise, young infants are sensitive to the correspondence between acoustic and visual speech (Bahrick & Lickliter, 2012), and they also rely on the talker's mouth during the second half of the first year of life, putatively to help them acquire language by the time they start babbling (Lewkowicz & Hansen-Tift, 2012), and to aid language differentiation in the case of bilingual infants (Pons, Bosch, & Lewkowicz, 2015). The current set of studies provides a detailed examination of the contribution of audiovisual (AV) speech cues to speech processing at different stages of language development, through the analysis of selective attention patterns when processing speech from talking faces. To do so, I compared different linguistic experience factors (i.e. type of bilingualism, defined by the distance between the bilinguals' two languages, language familiarity, and language proficiency) that modulate audiovisual speech perception in first language acquisition during infancy (Studies 1 and 2), in early childhood (Studies 3 and 4), and in second language (L2) learning during adulthood (Studies 5, 6 and 7). The findings of the present work demonstrate that (1) perceiving speech audiovisually hampers close-language bilingual infants' ability to discriminate their languages, (2) 15-month-old and 5-year-old close-language bilinguals rely more on the mouth cues of a talking face than do their distant-language bilingual peers, (3) children's attention to the mouth follows a clear temporal pattern: it is maximal at the beginning of the presentation and diminishes gradually as speech continues, and (4) adults also rely more on a talker's mouth cues when perceiving fluent non-native rather than native speech, regardless of their L2 expertise. All in all, these studies shed new light on audiovisual speech perception and language processing by showing that selective attention to a talker's eyes and mouth is a dynamic, information-seeking process, largely modulated by perceivers' early linguistic experience and by task demands. These results suggest that selectively attending to the redundant speech cues of a talker's mouth at the right moment enhances speech perception and is crucial for normal language development and speech processing, not only in infancy, during first language acquisition, but also at more advanced language stages in childhood, as well as in L2 learning during adulthood. Ultimately, they confirm that mouth reliance is greater in close-language bilingual environments, where the presence of two related languages increases the need for disambiguation and for keeping the two language systems separate.

