Signing Avatars & Immersive Learning (SAIL)

ID: 3707
School: Research Center/Lab
Program: Ph.D. in Educational Neuroscience (PEN)
Status: Ongoing
Start date: August 2018

Description

The aim of the proposed work is to develop and test a system in which signing avatars (computer-animated virtual humans built from motion capture recordings) help deaf or hearing individuals learn American Sign Language (ASL) in an immersive virtual environment. The system will be called Signing Avatars & Immersive Learning (SAIL). Interactive speaking avatars have become valuable learning tools, whereas the potential uses of signing avatars have not been adequately explored. Because natural sign languages are inherently spatial and movement-based, this project leverages the cognitive neuroscience of action perception to test the SAIL system.

We will use motion capture recordings of deaf native signers producing ASL to create the signing avatars. The avatars will be placed in a virtual reality landscape accessed through a head-mounted display. Users will enter the virtual environment by wearing the headset, and their own movements will be captured by a gesture-recognition system (e.g., smart gloves). Within SAIL, users will see a signing avatar from a third-person perspective, along with a virtual version of their own arms from a first-person perspective that is mapped onto their actual movements in the real world. Using the gesture-recognition system, users will imitate signs and learn through interactive lessons given by the avatars. SAIL helps users visualize and embody a spatial, visual language, creating an embodied, immersive learning environment that may transform how ASL is learned. It will also give us the opportunity to study the cognitive processes underlying visual perception of ASL in a controlled 3D digital environment.

Following the development of SAIL, we propose an electroencephalography (EEG) experiment to examine how the sensorimotor systems of the brain are engaged by the embodied experiences SAIL provides. The action observation network of the human brain is active during the observation of others' movements. The extent of this activity while a user views another person signing will provide insight into how the observer's own sensorimotor system processes the observed signs within SAIL.
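The description does not specify how imitated signs will be scored, so the following is only an illustrative sketch of the kind of matching a gesture-recognition lesson could perform: comparing a learner's captured hand trajectory against a native signer's motion-capture recording with dynamic time warping. The function names, trajectory format, and synthetic data are hypothetical, not the project's actual pipeline.

```python
# Illustrative sketch only: one way a SAIL-style lesson might score a
# learner's imitation of a sign. All names and data here are hypothetical;
# the project description does not specify a matching algorithm.
import numpy as np

def dtw_distance(ref: np.ndarray, user: np.ndarray) -> float:
    """Dynamic time warping distance between two trajectories.

    ref, user: arrays of shape (n_frames, n_dims), e.g. 3D wrist positions
    sampled from motion capture (reference) and smart gloves (user).
    """
    n, m = len(ref), len(user)
    # cost[i, j] = best cumulative cost aligning ref[:i] with user[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(ref[i - 1] - user[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # skip a reference frame
                                 cost[i, j - 1],      # skip a user frame
                                 cost[i - 1, j - 1])  # match the two frames
    return cost[n, m] / (n + m)  # length-normalized alignment cost

# Hypothetical usage: compare a learner's glove-captured attempt against the
# native signer's motion-capture recording of the same sign.
rng = np.random.default_rng(0)
reference = rng.standard_normal((60, 3)).cumsum(axis=0)  # 60 frames of x/y/z
attempt = reference[::2] + 0.05 * rng.standard_normal((30, 3))
score = dtw_distance(reference, attempt)
print(f"alignment cost: {score:.3f}")  # lower = closer imitation
```

DTW is a reasonable sketch here because a learner's signing will differ from the reference in speed as well as in form, and the alignment absorbs timing differences while still penalizing spatial ones.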
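The EEG experiment is described only at a high level. One widely used index of sensorimotor engagement during action observation is suppression of mu-band (roughly 8-13 Hz) power over central electrodes relative to rest. The sketch below shows how that index could be computed; the sampling rate, channel choice, and synthetic data are assumptions, not the project's planned analysis.

```python
# Illustrative sketch only: mu-band suppression as an index of sensorimotor
# engagement during sign observation. The sampling rate and data are assumed;
# the project proposes an EEG study but does not commit to this measure.
import numpy as np
from scipy.signal import welch

FS = 500  # sampling rate in Hz (assumed)

def mu_power(trials: np.ndarray, fs: int = FS) -> float:
    """Mean 8-13 Hz power across trials for one channel.

    trials: array of shape (n_trials, n_samples).
    """
    freqs, psd = welch(trials, fs=fs, nperseg=fs)  # psd: (n_trials, n_freqs)
    band = (freqs >= 8) & (freqs <= 13)
    return psd[:, band].mean()

# Synthetic example: resting-baseline trials vs. sign-observation trials at a
# central channel (e.g., C3). Real data would come from the planned recordings.
rng = np.random.default_rng(1)
t = np.arange(2 * FS) / FS  # two-second trials
baseline = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal((40, t.size))
observe = 1.0 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal((40, t.size))

# A log-ratio below zero indicates mu suppression during observation,
# i.e., engagement of the observer's own sensorimotor system.
suppression = np.log(mu_power(observe) / mu_power(baseline))
print(f"mu suppression (log ratio): {suppression:.2f}")
```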

Approved Products

2019

Quandt, L. C. (2019). Embodying sign language: Using avatars, VR, and EEG to design novel learning tools. Center for Adaptive Systems of Brain-Body Interactions Seminar Series, George Mason University, Fairfax, VA.

Quandt, L. C., & Malzkuhn, M. (2019). Participant: NSF STEM for All Video Showcase. "SAIL: Signing Avatars & Immersive Learning" dissemination video.

Quandt, L. C. (2019). Signing avatars and embodied learning in virtual reality. NSF AccessCyberlearning 2.0 Capacity Building Institute, University of Washington, Seattle, WA.

2021

Schwenk, M., Willis, A. S., Weeks, K., Ferster, R., & Quandt, L. C. (2021). Attitudes towards sign language avatars in the practice of teletherapy and assessment. Presented at the 2021 convention of the American Psychological Association, Society of Clinical Psychology (Division 12).

Shao, Q., Sniffen, A., Blanchett, J., Hillis, M. E., Shi, X., Haris, T. K., Liu, J., Lamberton, J., Malzkuhn, M., Quandt, L. C., Mahoney, J., Kraemer, D. J. M., Zhou, X., & Balkcom, D. (2021). Teaching American Sign Language in mixed reality. Talk to be given at UbiComp 2021.