Signing Avatars & Immersive Learning (SAIL)
The aim of the proposed work is to develop and test a system in which signing avatars (computer-animated virtual humans built from motion capture recordings) help deaf or hearing individuals learn American Sign Language (ASL) in an immersive virtual environment. The system will be called Signing Avatars & Immersive Learning (SAIL). Interactive speaking avatars have become valuable learning tools, but the potential uses of signing avatars have not been adequately explored. Because natural sign languages are inherently spatial and movement-based, this project leverages the cognitive neuroscience of action perception to test the SAIL system. We will use motion capture recordings of deaf native signers producing ASL to create signing avatars. The avatars will be placed in a virtual reality landscape accessed through head-mounted goggles. Users will enter the virtual reality environment by wearing the goggles, and their own movements will be captured via a gesture-recognition system (e.g., smart gloves). Within SAIL, users will see a signing avatar from a third-person perspective, as well as a virtual version of their own arms from a first-person perspective, mapped onto their actual movements in the real world. Through gesture recognition, users will imitate signs and learn through interactive lessons given by the avatars. SAIL thus helps users visualize and embody a spatial, visual language, creating an embodied, immersive learning environment that may revolutionize ASL learning. SAIL will also provide the opportunity to study the cognitive processes underlying visual perception of ASL in a controlled 3D digital environment. Following the development of SAIL, we propose an electroencephalography (EEG) experiment to examine how the sensorimotor systems of the brain are engaged by the embodied experiences SAIL provides.
The action observation network of the human brain is active during the observation of others' movements. The extent of this activity while a learner views another person signing will provide insight into how the observer's own sensorimotor system processes the observed signs within SAIL.
- Lamberton, Jason (Consultant) • Science of Learning Center on Visual Language & Visual Learning (VL2)
- Malzkuhn, Melissa • Motion Light Lab (ML2) - Educational Neuroscience (PEN)
- Wang, Yiqiao • Science of Learning Center on Visual Language & Visual Learning (VL2)
- Willis, Athena (Student) • PhD in Educational Neuroscience (PEN)