Language experience influences audiovisual speech integration in unimodal and bimodal bilingual infants

Mercure, Evelyne; Kushnerenko, Elena; Goldberg, Laura; Bowden‐Howl, Harriet; Coulson, Kimberley; Johnson, Mark H. and MacSweeney, Mairéad. 2018. Language experience influences audiovisual speech integration in unimodal and bimodal bilingual infants. Developmental Science, 22(1), e12701. ISSN 1363-755X [Article]

Mercure_et_al-2018-Developmental_Science.pdf - Published Version
Available under License Creative Commons Attribution.


Abstract or Description

Infants as young as 2 months can integrate audio and visual aspects of speech articulation. A shift of attention from the eyes towards the mouth of talking faces occurs around 6 months of age in monolingual infants. However, it is unknown whether this pattern of attention during audiovisual speech processing is influenced by speech and language experience in infancy. The present study investigated this question by analysing audiovisual speech processing in three groups of 4‐ to 8‐month‐old infants who differed in their language experience: monolinguals, unimodal bilinguals (infants exposed to two or more spoken languages) and bimodal bilinguals (hearing infants with Deaf mothers). Eye‐tracking was used to study patterns of face scanning while infants were viewing faces articulating syllables with congruent, incongruent and silent auditory tracks. Monolinguals and unimodal bilinguals increased their attention to the mouth of talking faces between 4 and 8 months, while bimodal bilinguals showed no age difference in their scanning patterns. Moreover, older (6.6 to 8 months), but not younger (4 to 6.5 months), monolinguals showed increased visual attention to the mouth of faces articulating audiovisually incongruent rather than congruent syllables, indicating surprise or novelty. In contrast, no audiovisual congruency effect was found in unimodal or bimodal bilinguals. Results suggest that speech and language experience influences audiovisual integration in infancy. Specifically, reduced or more variable experience of audiovisual speech from the primary caregiver may lead to less sensitivity to the integration of audio and visual cues of speech articulation.

Item Type:

Article
Identification Number (DOI):

Additional Information:

This study was funded by an ESRC Future Research Leader fellowship (ES/K001329/1), the UK Medical Research Council, and a Wellcome Trust Fellowship (100229/Z/12/Z).

Departments, Centres and Research Units:



Accepted: 17 May 2018
Published Online: 16 July 2018
Published: 13 December 2018

Item ID:


Date Deposited:

10 Oct 2019 10:30

Last Modified:

03 Aug 2021 15:03

Peer Reviewed:

Yes, this version has been peer-reviewed.


