The effects of early visual language exposure on deaf children's linguistic and non-linguistic visual processing: An eye-tracking and fNIRS brain-imaging investigation of emergent readers

ID: 2750
Status: Ongoing
Start date: September 2013


How do young children learn to read when using contemporary learning tools such as reading apps? How do young deaf children use, visually examine, and process the complex visual information on a moving screen, especially in early reading apps designed for the young deaf reader? No prior studies address these questions; the present studies are the first of their kind. We examine whether differences in age of exposure (AoE) to a visual language in early life affect visual attention and its allocation in young deaf and hearing emergent readers. Early visual language experience affords enhanced visual gaze-shifting and visual attention in the young deaf visual learner, which in turn shapes book-sharing and literacy behaviors in toddlers and yields linguistic, reading, and cognitive benefits. If early visual language proves a significant factor in task performance among early sign-exposed children, it may suggest that select visual properties at the heart of visual sign phonology selectively enhance visual sight-word recognition in ways that positively impact those children's acquisition of English reading. It will also provide insight into the age at which young deaf children are best exposed to sign languages so as to promote bilingual mastery and enhanced English reading acquisition. Results from the present study have begun to provide first-time, research-based insights into all young children's visual attention to linguistic and non-linguistic visual information in dynamic moving scenes, as are commonly used in today's e-literacy technology.

Principal investigators

Additional investigators

Priorities addressed

Funding sources