
Dialnet


Semantic Structure, Speech Units and Facial Movements: Multimodal Corpus Analysis of English Public Speaking

  • Authors: Miharu Fuyuno, Yuko Yamashita, Takeshi Saitoh, Yoshitaka Nakajima
  • Published in: CILC2016: 8th International Conference on Corpus Linguistics / Antonio Moreno Ortiz (ed.), Chantal Pérez Hernández (ed.), 2016, pp. 447-461
  • Language: English
  • Abstract
    • This study examines connections between semantic structure, speech units, and characteristics of facial movements in EFL learners' public speech. The data were obtained from a multimodal corpus of English public speaking constructed from digital audio and video recordings of an official English speech contest held at a Japanese high school. Evaluation data from the contest judges were also included. For the audio data, speech pauses were extracted with acoustic analysis software, and the spoken content (text) of each speech unit embedded between two pauses was then annotated. The semantic structures of the speech units were analysed on the basis of segmental chunks of clauses. Motion capture was applied to the video data; forty-two tracking points were set on each speaker's eyes, nose, mouth and face line. The results indicated: (1) Speakers with higher evaluations showed a similar semantic structure pattern in their speech units, and this pattern was also confirmed to be similar to that of native speakers of English (NSE) samples. (2) Horizontal facial movements and the angles of face rotation were extracted from the motion-capture data. These results are expected to be useful for defining a facial movement model that effectively describes good eye contact in public speaking.
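The abstract's segmentation step (speech units delimited by pauses) can be illustrated with a minimal sketch. This is not the authors' tool; it assumes a mono audio signal and uses simple frame-wise RMS energy thresholding, with the frame size, threshold, and minimum pause duration chosen arbitrarily for illustration.

```python
# Sketch: pause-based speech-unit segmentation via energy thresholding.
# All parameter values here are illustrative assumptions, not from the paper.
import numpy as np

def find_pauses(signal, sr, frame_ms=25, threshold=0.01, min_pause_ms=200):
    """Return (start_s, end_s) pairs of low-energy spans >= min_pause_ms."""
    frame = int(sr * frame_ms / 1000)
    n = len(signal) // frame
    # Frame-wise root-mean-square energy.
    rms = np.sqrt(np.mean(signal[:n * frame].reshape(n, frame) ** 2, axis=1))
    silent = rms < threshold
    pauses, start = [], None
    for i, s in enumerate(silent):
        if s and start is None:
            start = i                      # a silent run begins
        elif not s and start is not None:  # a silent run ends
            if (i - start) * frame_ms >= min_pause_ms:
                pauses.append((start * frame / sr, i * frame / sr))
            start = None
    if start is not None and (n - start) * frame_ms >= min_pause_ms:
        pauses.append((start * frame / sr, n * frame / sr))
    return pauses

# Synthetic example: 1 s of tone, 0.5 s of silence, 1 s of tone at 16 kHz.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 220 * t)
audio = np.concatenate([tone, np.zeros(sr // 2), tone])
print(find_pauses(audio, sr))  # → [(1.0, 1.5)]
```

The intervals between successive detected pauses would then correspond to the speech units whose text is annotated in the corpus.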

