PhD in Educational Neuroscience (PEN)

Students in our pioneering PEN program gain state-of-the-art Cognitive Neuroscience training in how humans learn, with a special strength in the neuroplasticity of visually guided learning processes. While Cognitive Neuroscience spans the study of learning and higher cognitive processes across the lifespan, its sister discipline, Educational Neuroscience, involves intensive study of five core domains crucial to early childhood learning: language and bilingualism, reading and literacy, math and numeracy, science and critical thinking (higher cognition), and social and emotional learning, along with the study of action and visual processing. PEN students become expert in one of the cutting-edge neuroimaging methods of Cognitive Neuroscience (e.g., fNIRS, EEG, fMRI, and beyond), study Neuroethics, gain strong critical analysis and reasoning skills in science, and develop expertise in one of the core content areas of learning identified above. While becoming experts in both contemporary neuroimaging and behavioral experimental science, students also learn powerful, meaningful, and principled ways that science can be translated for the benefit of education and society today.

Dr. Laura-Ann Petitto, Chair, PEN Steering Committee
Dr. Thomas Allen, Program Director, PEN
Dr. Melissa Herzig, Assistant Program Director, PEN


Developmental Neuroplasticity and Timing of First Language Exposure in Infants

ID: 3608
Status: Ongoing
Start date: February 2018
End Date: August 2020

Description

This research project seeks to understand the mechanisms that underlie learning (i.e., language acquisition) in the developing brain in order to improve understanding of typical and atypical cognition. Much controversy exists among both scientists and speech, language, and hearing professionals regarding the optimal age (if any) at which to expose young children to a visual signed language. This study promises to have high impact on broader society, as its findings will help remove barriers to the successful use of hearing-enhancement technologies by identifying the optimal developmental timing of language exposure in conjunction with cochlear implantation. We use functional near-infrared spectroscopy (fNIRS) and behavioral techniques that are compatible with young children, and particularly with recipients of cochlear implants, to capture the modulation of language neural networks as a function of different language exposure experiences. Congenitally deaf infants with cochlear implants provide scientists with an extraordinary natural experiment in which exposure to auditory-based and visual-based language permits investigation into the controlled timing of linguistic exposure. Thus, in this first targeted study of brain tissue development in young cochlear-implanted infants, we will better understand the neural network that underlies language acquisition and processing in terms of its neurobiological maturational sensitivity, as well as its neuroplasticity and resilience to the modality of language.
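As an illustration of the kind of signal processing fNIRS involves (not the project's actual analysis pipeline), the sketch below block-averages a simulated oxygenated-hemoglobin (HbO) time series around stimulus onsets, a basic step in estimating a hemodynamic response from fNIRS channels. The sampling rate, epoch window, and simulated data are assumptions made for the example.

```python
# Minimal sketch: block-averaging an fNIRS HbO time series around stimulus
# onsets to estimate a hemodynamic response. All parameters (sampling rate,
# epoch window, simulated data) are illustrative assumptions, not values
# from this study.
import numpy as np

FS = 10.0                      # assumed fNIRS sampling rate (Hz)
PRE_S, POST_S = 2.0, 15.0      # epoch window: 2 s before to 15 s after onset

def block_average(hbo, onsets_s, fs=FS, pre_s=PRE_S, post_s=POST_S):
    """Average HbO epochs time-locked to stimulus onsets (baseline-corrected)."""
    pre, post = int(pre_s * fs), int(post_s * fs)
    epochs = []
    for onset in onsets_s:
        i = int(onset * fs)
        if i - pre < 0 or i + post > len(hbo):
            continue                       # skip epochs that run off the record
        epoch = hbo[i - pre:i + post].copy()
        epoch -= epoch[:pre].mean()        # subtract pre-stimulus baseline
        epochs.append(epoch)
    return np.mean(epochs, axis=0), np.arange(-pre, post) / fs

# Simulated single-channel HbO trace with noise (illustration only).
rng = np.random.default_rng(0)
t = np.arange(0, 300, 1 / FS)
hbo = 0.1 * np.sin(2 * np.pi * 0.01 * t) + rng.normal(0, 0.05, t.size)
onsets = np.arange(20, 280, 30)            # one simulated stimulus every 30 s

response, times = block_average(hbo, onsets)
print(f"Peak HbO change {response.max():.3f} at {times[response.argmax()]:.1f} s")
```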

Principal investigators

Additional investigators

Priorities addressed


Neural investigation on the impact of a visual language on arithmetic processing: an fMRI approach

ID: 3744
Status: Ongoing
Start date: July 2019

Description

This project investigates the neural networks, brain structures, and cognitive processes involved in arithmetic processing in native ASL signers compared to hearing English speakers. Brain activation will be recorded while adults perform single-digit arithmetic problems (subtraction and multiplication). Different brain areas will be independently localized to identify which cognitive components are involved, and to what extent, depending on language modality. We will use a numerical-processing localizer, a verbal rhyming localizer (ASL or English), and a hand-movement localizer. Within the areas identified by the localizers, we expect to find similar numerical quantity processing across language-modality groups. We expect language-based activation for multiplication if both groups rely on verbal retrieval memory, regardless of modality. We expect increased motor-related activation in the ASL-signing group, given previous observations that ASL signers activate motor and supplementary motor areas when processing linguistic information, whether presented in written English or in ASL.
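To make the localizer-defined region-of-interest (ROI) logic concrete, here is a minimal sketch, not the project's actual pipeline: a localizer contrast map is thresholded to define an ROI, mean beta values within the ROI are extracted per participant, and the two language-modality groups are compared with an independent-samples t-test. Array shapes, thresholds, group sizes, and the simulated data are assumptions made for illustration.

```python
# Minimal sketch of a localizer-defined ROI comparison between groups.
# Shapes, thresholds, and the simulated data are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Localizer contrast map (e.g., a numerical-processing localizer), one value
# per voxel in a small illustrative volume.
localizer_t = rng.normal(0, 1, size=(20, 20, 20))
roi_mask = localizer_t > 3.1               # threshold to define the ROI

# Per-participant beta maps for one task contrast (e.g., multiplication),
# simulated here: 15 ASL signers and 15 hearing English speakers.
betas_asl = rng.normal(0.5, 1.0, size=(15,) + roi_mask.shape)
betas_eng = rng.normal(0.3, 1.0, size=(15,) + roi_mask.shape)

# Mean beta inside the ROI for each participant.
roi_means_asl = betas_asl[:, roi_mask].mean(axis=1)
roi_means_eng = betas_eng[:, roi_mask].mean(axis=1)

# Independent-samples t-test comparing the two language-modality groups.
t_val, p_val = stats.ttest_ind(roi_means_asl, roi_means_eng)
print(f"ROI voxels: {roi_mask.sum()}, t = {t_val:.2f}, p = {p_val:.3f}")
```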

Principal investigators

Additional investigators

Priorities addressed

Funding sources


Signing Avatars & Immersive Learning (SAIL)

ID: 3707
Status: Ongoing
Start date: August 2018

Description

The aim of the proposed work is to develop and test a system in which signing avatars (computer-animated virtual humans/characters built from motion-capture recordings) help deaf or hearing individuals learn ASL in an immersive virtual environment. The system will be called Signing Avatars & Immersive Learning (SAIL). Interactive speaking avatars have become valuable learning tools, whereas the potential uses of signing avatars have not been adequately explored. Because of the spatial and movement characteristics of natural sign languages, this project leverages the cognitive neuroscience of action perception to test the SAIL system. We will use motion-capture recordings of native deaf signers, signing in ASL, to create signing avatars. The avatars will be placed in a virtual reality landscape accessed via head-mounted goggles. Users will enter the virtual reality environment by wearing the goggles, and their own movements will be captured via a gesture-recognition system (e.g., smart gloves). When using SAIL, users will see a signing avatar from a third-person perspective, and they will also see a virtual version of their own arms from a first-person perspective, matched onto their actual movements in the real world. Using gesture recognition, users will imitate signs and learn through interactive lessons given by the avatars. SAIL helps users visualize and embody a spatial and visual language, creating an embodied, immersive learning environment that may revolutionize ASL learning. SAIL will also provide the opportunity to study the cognitive processes of visual perception of ASL in a controlled 3D digital environment. Following the development of SAIL, we propose an electroencephalography (EEG) experiment to examine how the sensorimotor systems of the brain are engaged by the embodied experiences SAIL provides. The action observation network of the human brain is active during the observation of others' movements; the extent of this activity while viewing another person signing will provide insight into how the observer's own sensorimotor system processes the observed signs within SAIL.
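As a rough sketch of one sensorimotor EEG measure such an experiment could use (the specific analysis below is an assumption, not the project's protocol), the code computes mu-band (8 to 13 Hz) power over a central electrode during a baseline period and during sign observation, and expresses event-related desynchronization (ERD) as a percentage change. Sampling rate, segment lengths, and the simulated signals are illustrative.

```python
# Minimal sketch: mu-band (8-13 Hz) event-related desynchronization (ERD)
# over a sensorimotor electrode during action observation. Sampling rate,
# segment lengths, and the simulated signals are illustrative assumptions.
import numpy as np
from scipy.signal import welch

FS = 250                                   # assumed EEG sampling rate (Hz)
MU_BAND = (8.0, 13.0)

def mu_power(segment, fs=FS, band=MU_BAND):
    """Mean power spectral density within the mu band for one EEG segment."""
    freqs, psd = welch(segment, fs=fs, nperseg=fs * 2)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return psd[in_band].mean()

rng = np.random.default_rng(2)
n = FS * 4                                 # 4-second segments
t = np.arange(n) / FS

# Simulated central-electrode EEG: strong 10 Hz mu rhythm at rest,
# attenuated (desynchronized) while observing the signing avatar.
baseline = 10 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 5, n)
observe = 4 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 5, n)

p_base, p_obs = mu_power(baseline), mu_power(observe)
erd_pct = 100 * (p_obs - p_base) / p_base  # negative values indicate ERD
print(f"Mu-band ERD during observation: {erd_pct:.1f}%")
```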

Principal investigators

Additional investigators

Priorities addressed

Funding sources

Products

Quandt, L. C. (2019) Embodying sign language: Using avatars, VR, and EEG to design novel learning tools. Center for Adaptive Systems of Brain-Body Interactions Seminar Series, George Mason University, Fairfax, VA.

Quandt, L. C., & Malzkuhn, M. (2019). Participant: NSF STEM for All Video Showcase. "SAIL: Signing Avatars & Immersive Learning" dissemination video.

Quandt, L.C. (2019). Signing avatars and embodied learning in virtual reality. NSF AccessCyberlearning 2.0 Capacity Building Institute, University of Washington, Seattle, WA.


Scholarship and creative activity

2018

Berteletti, I. (2018, June). Educational Neuroscience, what it is and what it's not. Presented at the University of Trento, Rovereto, Italy.

Parks, A., White, B. E., Lancaster, L., and Bakke, M. (2018, April). The test-retest reliability of the Early Speech Perception Test in adults with severe to profound hearing levels. Poster presentation at the Department of Hearing, Speech, and Language Sciences, Gallaudet University, Washington, DC.

Parks, A., White, B. E., Lancaster, L., and Bakke, M. (2018, February). The role of pure-tone average and auditory linguistic experience on word recognition and pattern perception ability in adults with severe to profound hearing levels. Poster presentation at the Department of Hearing, Speech, and Language Sciences, Gallaudet University, Washington, DC.

White, B. E. (2018, April). Building the visual vocabulary: A resource guide on vocabulary development in young deaf and hard-of-hearing children. Available at https://www.visvoc.bradleywhite.net

White, B. E. (2018, April). Language development timeline (0-5 years old): A resource guide on vocabulary development in young deaf and hard-of-hearing children. Available at https://www.visvoc.bradleywhite.net/assets/files/white16-languagedevelopmenttimeline.pdf

White, B. E. (2018, April). Resting state functional connectivity: Methodological and statistical approaches for functional near-infrared spectroscopy. Presentation at the Language and Educational Neuroscience Laboratory, Washington, DC.

White, B. E. (2018, April). Tips for facilitating vocabulary development: A resource guide on vocabulary development in young deaf and hard-of-hearing children. Available at https://www.visvoc.bradleywhite.net/assets/files/white16-tipsforfacilitatingvocabularydevelopment.pdf