Watching talking faces: The development of cortical representation of visual syllables in infancy
Dopierała, Aleksandra A.W.; López Pérez, David; Mercure, Evelyne; Pluta, Agnieszka; Malinowska-Korczak, Anna; Evans, Samuel; Wolak, Tomasz and Tomalski, Przemysław. 2023. Watching talking faces: The development of cortical representation of visual syllables in infancy. Brain and Language, 244, 105304. ISSN 0093-934X [Article]
Text: Dopierala_2023_AAM.pdf - Accepted Version. Available under License Creative Commons Attribution Non-commercial No Derivatives.
Abstract or Description
From birth, we perceive speech by hearing and seeing people talk. In adults, cortical representations of visual speech are processed in the putative temporal visual speech area (TVSA), but it remains unknown how these representations develop. We measured infants' cortical responses to silent visual syllables and non-communicative mouth movements using functional Near-Infrared Spectroscopy. Our results indicate that cortical specialisation for visual speech may emerge during infancy. The putative TVSA was active to both visual syllables and gurning around 5 months of age, and more active to gurning than to visual syllables around 10 months of age. Multivariate pattern analysis classification of distinct cortical responses to visual speech and gurning was successful at 10, but not at 5, months of age. These findings imply that cortical representations of visual speech change between 5 and 10 months of age, showing that the putative TVSA is initially broadly tuned and becomes selective with age.
Item Type: Article
Additional Information: Funding sources: This study was funded by a grant from the National Science Centre of Poland to PT (2016/23/B/HS6/03860). Additional support for data analyses was provided by the Institute of Psychology, PAS.
Data Access Statement: Data and code availability statement: The paper uses data collected at the University of Warsaw, Poland. The anonymised raw and pre-processed data, the script to pre-process the data in HoMer, and the code to analyse the data in SPSS are all available at https://osf.io/sqjft/?view_only=41d20e906335497b9ad661d5f1fe2118. The custom Matlab code used to run MVPA analyses is available at https://github.com/speechAndBrains/fNIRS_tools.
Keywords: fNIRS, Visual speech, Infant, Speech processing, Dynamic face processing
Item ID: 33970
Date Deposited: 18 Aug 2023 08:27
Last Modified: 21 Jul 2024 01:28
Peer Reviewed: Yes, this version has been peer-reviewed.