Students in our pioneering PEN program gain state-of-the-art Cognitive Neuroscience training in how humans learn, with a special strength in the neuroplasticity of visually guided learning processes. While Cognitive Neuroscience includes studies of learning and higher cognitive processes across the lifespan, its sister discipline, Educational Neuroscience, involves intensive study of five core domains crucial to early childhood learning: language and bilingualism, reading and literacy, math and numeracy, science and critical thinking (higher cognition), and social and emotional learning, along with the study of action and visual processing. PEN students become expert in one of the world's cutting-edge neuroimaging methods in Cognitive Neuroscience (e.g., fNIRS, EEG, fMRI, and beyond), study Neuroethics, gain strong critical analysis and reasoning skills in science, and develop expertise in one of the core content areas of learning identified above. While becoming experts in both contemporary neuroimaging and behavioral experimental science, students also learn powerful, meaningful, and principled ways that science can be translated for the benefit of education and society today.
Dr. Laura-Ann Petitto, Chair, PEN Steering Committee
Dr. Thomas Allen, Program Director, PEN
Dr. Melissa Herzig, Assistant Program Director, PEN
This project investigates the role of ASL phonology, and its underlying neural substrates, in solving single-digit multiplication problems. In spoken languages, phonology and left-lateralized language areas are recruited when verbally retrieving answers to single-digit multiplication problems. The role of ASL phonology in arithmetic generally, and in the retrieval of multiplication facts specifically, is unknown despite an abundant literature addressing reading in ASL users. In this study, we will recruit deaf participants with severe to profound hearing loss who were exposed to ASL before age 2 and had substantial exposure throughout their educational upbringing. Participants will have no history of neurological or developmental disorders and no known learning disability. Results will characterize the typical network involved in multiplication problem solving for ASL signers and outline a model for subsequently investigating the impact of late ASL exposure or math learning disability in ASL signers.
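For concreteness, a minimal screening sketch in Python for the inclusion criteria described above; the field names and the 71 dB HL cutoff for severe-to-profound hearing loss are illustrative assumptions, not the study's actual instruments.

```python
# Illustrative sketch only: hypothetical record fields for the inclusion
# criteria above (severe-to-profound hearing loss, ASL exposure before
# age 2, substantial ASL exposure in education, no neurological or
# developmental disorder, no known learning disability).

def meets_inclusion_criteria(p: dict) -> bool:
    """Return True if a candidate participant record satisfies all criteria."""
    return (
        p["hearing_loss_db"] >= 71                    # severe-to-profound (dB HL; assumed cutoff)
        and p["age_first_asl_exposure_months"] < 24   # exposed to ASL before age 2
        and p["asl_in_education"]                     # substantial exposure in schooling
        and not p["neurological_disorder"]
        and not p["developmental_disorder"]
        and not p["known_learning_disability"]
    )

candidate = {
    "hearing_loss_db": 85,
    "age_first_asl_exposure_months": 6,
    "asl_in_education": True,
    "neurological_disorder": False,
    "developmental_disorder": False,
    "known_learning_disability": False,
}
print(meets_inclusion_criteria(candidate))  # True
```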
The objective of the study is to evaluate longitudinally the impact of language modality and early language experience on the core numerical representation and on the acquisition of the concept of exact number. To do this, 180 children aged 3 to 5 will be followed for up to two years. Leveraging the natural variability within the deaf community, 60 children will be native American Sign Language (ASL) users, 60 will have been exposed to a visual language after 24 months of age (e.g., deaf children with a late cochlear implant and no in-home visual language), and the remaining 60 will be English-speaking children with no hearing loss and no delay in language exposure. Children will be evaluated at ~8-month intervals, between 2 and 4 times, on basic number skills until they reach a proficient understanding of the exact number concept. They will also be assessed for language skills and general IQ. Parents will complete a comprehensive survey on their child's language use and in-home language. This paradigm will allow us to determine the impact of language modality and proficiency on the developmental trajectory of the core numerical representation, and whether the stages for reaching a full understanding of the exact number concept can be delayed or facilitated depending on language modality. Could the use of fingers in ASL to represent numbers facilitate early number concept acquisition? Does a delay in language exposure impact both the core number system and the acquisition of formal number concepts? Are the different stages impermeable to early language experience? What role does language play in the relation between the core numerical representation and the acquisition of the exact number concept? These long-standing questions in the field of numerical cognition can be uniquely answered through the perspective of a visual language and the timing of language exposure.
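The three-group, multi-wave design can be summarized in a short Python sketch; the group labels and stopping rule paraphrase the abstract, and the helper names are hypothetical rather than the project's actual code.

```python
# Minimal sketch of the three-group longitudinal design described above.
GROUPS = {
    "native_asl": 60,            # native ASL users
    "late_visual_language": 60,  # visual language exposure after 24 months
    "hearing_english": 60,       # hearing English speakers, no exposure delay
}

WAVE_INTERVAL_MONTHS = 8  # children retested at ~8-month intervals
MAX_WAVES = 4             # between 2 and 4 assessments per child

def assessment_ages(start_age_months: int) -> list[int]:
    """Planned assessment ages (in months) for one child, up to MAX_WAVES."""
    return [start_age_months + w * WAVE_INTERVAL_MONTHS for w in range(MAX_WAVES)]

# A child enrolled at 36 months would be seen at 36, 44, 52, and 60 months,
# stopping early once they show proficient exact-number understanding.
print(assessment_ages(36))  # [36, 44, 52, 60]
```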
We are testing the use of online conferencing to evaluate the development of counting skills in 3- to 5-year-old signing children.
The proposed experiments in this project build toward addressing questions about neuroplasticity and resilience in the human cortex. To understand the neuroplasticity and resilience of the neural systems that underlie human communication, it is vital that a program of study include populations varying in (1) timing of first and second language exposure, (2) modality of language (i.e., tactile, auditory, visual), and (3) sensory experience (DeafBlind, hearing, and deaf populations). The project proposed here focuses specifically on a DeafBlind population that uses a tactile language (i.e., ProTactile ASL, PTASL). We know that the human language-processing neural network is constrained yet flexible, permitting our species to learn and use a wide range of language structures and languages encoded in multiple modalities (visual, tactile, and auditory). By including DeafBlind PTASL signers in the corpus of cognitive neuroscience literature, we advance understanding of the mechanisms that make this possible and, vitally, illuminate possible overarching principles that guide human neural reorganization and resilience. Furthermore, the proposed experiments begin to address key questions of strong relevance to society (particularly DeafBlind populations) surrounding debates about whether observed instances of neural reorganization reflect "maladaptive plasticity" or "functional resilience." By clarifying the scientific principles that underlie neuroplasticity findings and their interpretation, policies revolving around learning (e.g., optimizing language acquisition, sensory intervention for infants, reading practices) can be greatly improved, to the indirect benefit of the communities involved.
A large cognitive neuroscience EEG project with more than 60 participants enrolled in a multi-part study examining how signers and non-signers process written English, perceive ASL, and imitate ASL signs (see the analysis sketch following the publication list below).
Berger, L. & Quandt, L. C. (2018). Sensorimotor EEG indicates deaf signers simulate tactile properties of ASL signs when reading English. Presented at the annual meeting of the Society for Neurobiology of Language in Quebec City, Canada.
Kubicek, E. & Quandt, L. C. (2018). Deaf signers' sensorimotor system activity during perception of one- and two-handed signs. Presented at the annual meeting of the Cognitive Neuroscience Society in Boston, MA.
Kubicek, E. & Quandt, L. C. (2018). Deaf signers' sensorimotor system sensitive to motoric and linguistic parameters of sign language. Presented at SACNAS 2018: The National Diversity in STEM Conference in San Antonio, TX.
Kubicek, E. & Quandt, L. C. (2019). Sensorimotor system engagement during ASL sign perception: an EEG study in deaf signers and hearing non-signers. Cortex, 119, 457-469.
Quandt, L. C. & Kubicek, E. (2018). Sensorimotor characteristics of sign translations modulate EEG when deaf signers read English. Brain and Language, 187, 9-17.
Quandt, L. C. & Kubicek, E. M. (2017). Motor system contributions to cross-linguistic translation when deaf signers read English. Presented at the Society for Neuroscience Annual Meeting, Washington, DC.
Quandt, L. C. & Willis, A. S. (2019). Sensorimotor EEG activity during sign production in deaf signers and hearing non-signers. Presented at the annual meeting of the Society for Neurobiology of Language in Helsinki, Finland.
Willis, A. & Quandt, L. C. (2019). Sign language experience increases motor resonance during imitation of signs. Presented at the annual meeting of the Cognitive Neuroscience Society in San Francisco, CA.
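A common measure across the studies above is sensorimotor EEG activity, often quantified as power in the mu band (~8-13 Hz) over central electrodes, where suppression relative to baseline indexes sensorimotor engagement. The sketch below illustrates that computation on synthetic data; the sampling rate, channel choice, and band edges are illustrative assumptions, not the labs' actual pipelines.

```python
import numpy as np
from scipy.signal import welch

FS = 250  # sampling rate in Hz (assumed)

def mu_power(segment: np.ndarray, fs: int = FS, band=(8.0, 13.0)) -> float:
    """Mean power spectral density within the mu band for one channel."""
    freqs, psd = welch(segment, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

rng = np.random.default_rng(0)
baseline = rng.standard_normal(FS * 4)  # 4 s of rest-period EEG (synthetic)
task = rng.standard_normal(FS * 4)      # 4 s while viewing/producing a sign

# Event-related desynchronization: negative values mean mu suppression,
# i.e., greater sensorimotor engagement during the task interval.
erd = 10 * np.log10(mu_power(task) / mu_power(baseline))
print(f"mu ERD: {erd:.2f} dB")
```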
This project investigates the neural networks, brain structures, and cognitive processes involved in arithmetic processing for native ASL signers compared with hearing English speakers. Brain activation will be recorded while adults perform single-digit subtraction and multiplication problems. Different brain areas will be independently localized to identify which cognitive components are involved, and to what extent, depending on language modality. We will use a numerical processing localizer, a verbal rhyming localizer (ASL or English), and a hand movement localizer. Within the areas identified by the localizers, we expect to find similar numerical quantity processing across language modality groups. We expect language-based activation for multiplication if both groups rely on verbal retrieval memory, regardless of modality. We expect increased motor activation in the ASL signing group, given previous observations that ASL signers activate motor and supplementary motor areas when processing linguistic information, whether presented in written English or ASL.
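The localizer logic reduces to masking the task activation map with each independently defined region of interest (ROI) and summarizing activation within it, per group. A minimal sketch on synthetic arrays follows; the array shapes, contrast name, and mask proportions are assumptions for illustration.

```python
import numpy as np

def roi_mean_activation(task_map: np.ndarray, localizer_mask: np.ndarray) -> float:
    """Mean task activation (e.g., a contrast map) inside a binary ROI mask."""
    return task_map[localizer_mask].mean()

n_voxels = 10_000
rng = np.random.default_rng(1)

# One participant's multiplication > rest contrast map (synthetic).
multiplication_map = rng.standard_normal(n_voxels)

# Independently defined localizer masks (synthetic): numerical quantity,
# verbal rhyming (ASL or English), and hand movement.
masks = {
    "number": rng.random(n_voxels) < 0.05,
    "rhyming": rng.random(n_voxels) < 0.05,
    "hand_motor": rng.random(n_voxels) < 0.05,
}

for roi, mask in masks.items():
    print(roi, roi_mean_activation(multiplication_map, mask))
```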
The aim of the proposed work is to develop and test a system in which signing avatars (computer-animated virtual humans/characters built from motion capture recordings) help deaf or hearing individuals learn ASL in an immersive virtual environment. The system will be called Signing Avatars & Immersive Learning (SAIL). Interactive speaking avatars have become valuable learning tools, whereas the potential uses of signing avatars have not been adequately explored. Because of the spatial and movement characteristics of natural sign languages, this project leverages the cognitive neuroscience of action perception to test the SAIL system. We will use motion capture recordings of deaf native signers, signing in ASL, to create signing avatars. The avatars will be placed in a virtual reality landscape accessed via head-mounted goggles. Users will enter the virtual reality environment by wearing the goggles, and their own movements will be captured via a gesture-recognition system (e.g., smart gloves). When using SAIL, users will see a signing avatar from a third-person perspective, and they will also see a virtual version of their own arms from a first-person perspective, matched to their actual movements in the real world. Using the gesture-recognition system, users will imitate signs and learn through interactive lessons given by the avatars. SAIL helps users visualize and embody a spatial, visual language, creating an embodied, immersive learning environment that may revolutionize ASL learning. SAIL will also provide the opportunity to study the cognitive processes of visual perception of ASL in a controlled 3D digital environment. Following the development of SAIL, we propose an electroencephalography (EEG) experiment to examine how the sensorimotor systems of the brain are engaged by the embodied experiences SAIL provides. The action observation network of the human brain is active during the observation of others' movements; the extent of this activity while viewing another person signing will provide insight into how the observer's own sensorimotor system processes the observed signs within SAIL.
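The abstract does not specify how SAIL would score a learner's imitation against the motion-capture reference, so the following is only one plausible approach: dynamic time warping (DTW) over 3D hand-position trajectories, which tolerates differences in signing speed. All names and data here are illustrative assumptions.

```python
import numpy as np

def dtw_distance(ref: np.ndarray, user: np.ndarray) -> float:
    """DTW distance between two (T, 3) trajectories of hand positions."""
    n, m = len(ref), len(user)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(ref[i - 1] - user[j - 1])  # frame-wise distance
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

rng = np.random.default_rng(2)
reference_sign = np.cumsum(rng.standard_normal((60, 3)), axis=0)        # mocap, 60 frames
user_attempt = reference_sign[::2] + 0.1 * rng.standard_normal((30, 3))  # faster, noisier imitation

print(f"DTW score: {dtw_distance(reference_sign, user_attempt):.2f}")  # lower = closer match
```

Because DTW aligns trajectories nonlinearly in time, a learner who produces the right handshape path at a different speed is still scored as close to the reference, which suits an imitation-based lesson.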
Quandt, L. C. (2019). Embodying sign language: Using avatars, VR, and EEG to design novel learning tools. Center for Adaptive Systems of Brain-Body Interactions Seminar Series, George Mason University, Fairfax, VA.
Quandt, L. C. & Malzkuhn, M. (2019). Participant: NSF STEM for All Video Showcase. "SAIL: Signing Avatars & Immersive Learning" dissemination video.
Quandt, L.C. (2019). Signing avatars and embodied learning in virtual reality. NSF AccessCyberlearning 2.0 Capacity Building Institute, University of Washington, Seattle, WA.
The aim is to investigate, through EEG recordings, the differences and similarities in the neural correlates of native ASL users and native English speakers while they perform single-digit arithmetic problems.
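A typical way to compare such neural correlates is to average event-related potentials (ERPs) time-locked to problem onset, per group, and inspect difference waves. The sketch below shows that step on synthetic epochs; the epoch shapes and condition labels are illustrative assumptions, not the study's code.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_channels, n_samples = 40, 32, 500  # 2 s epochs at 250 Hz (assumed)

# Synthetic epochs: trials x channels x time, one array per group/condition.
epochs = {
    ("deaf_asl", "multiplication"): rng.standard_normal((n_trials, n_channels, n_samples)),
    ("hearing_english", "multiplication"): rng.standard_normal((n_trials, n_channels, n_samples)),
}

# The ERP is the across-trial mean; group differences over language-related
# electrodes would suggest modality-dependent retrieval processes.
erps = {cond: data.mean(axis=0) for cond, data in epochs.items()}
difference_wave = (erps[("deaf_asl", "multiplication")]
                   - erps[("hearing_english", "multiplication")])
print(difference_wave.shape)  # (32, 500): channels x time
```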
Berteletti, I. (2018, June). Educational Neuroscience: what it is and what it's not. Presented at the University of Trento, Rovereto, Italy.
Parks, A., White, B. E., Lancaster, L., & Bakke, M. (2018, April). The test-retest reliability of the Early Speech Perception Test in adults with severe to profound hearing levels. Poster presentation at the Department of Hearing, Speech, and Language Sciences, Gallaudet University, Washington, DC.
Parks, A., White, B. E., Lancaster, L., & Bakke, M. (2018, February). The role of pure-tone average and auditory linguistic experience on word recognition and pattern perception ability in adults with severe to profound hearing levels. Poster presentation at the Department of Hearing, Speech, and Language Sciences, Gallaudet University, Washington, DC.
White, B. E. (2018, April). Building the visual vocabulary: A resource guide on vocabulary development in young deaf and hard-of-hearing children. Available at https://www.visvoc.bradleywhite.net
White, B. E. (2018, April). Language development timeline (0-5 years old): A resource guide on vocabulary development in young deaf and hard-of-hearing children. Available at https://www.visvoc.bradleywhite.net/assets/files/white16-languagedevelopmenttimeline.pdf
White, B. E. (2018, April). Resting state functional connectivity: Methodological and statistical approaches for functional near-infrared spectroscopy. Presentation at the Language and Educational Neuroscience Laboratory, Washington, DC.
White, B. E. (2018, April). Tips for facilitating vocabulary development: A resource guide on vocabulary development in young deaf and hard-of-hearing children. Available at https://www.visvoc.bradleywhite.net/assets/files/white16-tipsforfacilitatingvocabularydevelopment.pdf