Essoe et al. (2017) CNS Poster

Poster to be presented at CNS 2017

Poster E90, Monday, March 27, 2:30 – 4:30 pm, Pacific Concourse.

Download poster.

Cognitive Neuroscience Society Annual Meeting, 2017 @ beautifully foggy San Francisco, CA.

Abstract

Long-term retention of vocabulary in two phonetically similar foreign languages is aided when learning occurs in highly distinctive virtual reality environments*

Joey Ka-Yee Essoe, Nicco Reggente, Ai Ohno, Hera Youn-Ji Baek, Jesse Rissman

The environmental context in which a memory is encoded can impact its later accessibility by virtue of tagging the memory with unique retrieval cues. We examined whether distinctive virtual environments (VEs) could be used as a means to provide contextual support during the learning of two sets of easily confusable stimuli. Specifically, we taught participants the translations of 50 English words in two pre-experimentally unfamiliar languages: 10 were learned only in Swahili, 10 only in Chinyanja, and 30 in both languages. Participants in the Dual Context group learned each language in a different VE, whereas participants in the Single Context group learned both languages in the same VE. On Day 2, after the fourth VE learning session, participants’ ability to recall the Swahili and Chinyanja translations of the English words was tested outside of the VEs. One week later (Day 8), participants were reached by telephone and performed a surprise recall test assessing their long-term retention of the foreign words. Our results revealed that while the Single and Dual Context groups showed comparable recall performance when tested on Day 2, the Dual Context group exhibited significantly reduced forgetting when tested on Day 8. This finding showcases how distinctive learning contexts can protect newly acquired memories from succumbing to excessive interference and promote long-term retention. An additional fMRI dataset collected from a separate group of Dual Context participants during Day 2 cued recall should provide further insights into the mechanisms that underlie their memory advantage.

*Yes, I am sorry about the looooong title. I don’t know what I was thinking. Never again.
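For anyone curious how the stimulus design breaks down, here is a minimal Python sketch (not the study's actual code) of the word-list split and context assignment described in the abstract. The word list, VE names, and function names are hypothetical placeholders; only the set sizes (10/10/30) and the Dual vs. Single Context logic come from the abstract.

```python
import random

# Stand-ins for the 50 English cue words used in the study.
ENGLISH_WORDS = [f"word_{i:02d}" for i in range(50)]

def assign_stimuli(words, seed=0):
    """Split the 50 English words into Swahili-only (10),
    Chinyanja-only (10), and dual-language (30) sets,
    mirroring the design described in the abstract."""
    rng = random.Random(seed)
    shuffled = rng.sample(words, len(words))
    return {
        "swahili_only": shuffled[:10],
        "chinyanja_only": shuffled[10:20],
        "both_languages": shuffled[20:50],
    }

def assign_contexts(group):
    """Dual Context: each language is learned in its own VE.
    Single Context: both languages share a single VE.
    (VE_A and VE_B are placeholder environment names.)"""
    if group == "dual":
        return {"swahili": "VE_A", "chinyanja": "VE_B"}
    return {"swahili": "VE_A", "chinyanja": "VE_A"}

if __name__ == "__main__":
    sets = assign_stimuli(ENGLISH_WORDS)
    print({k: len(v) for k, v in sets.items()})
    # -> {'swahili_only': 10, 'chinyanja_only': 10, 'both_languages': 30}
    print(assign_contexts("dual"))    # two distinct VEs
    print(assign_contexts("single"))  # one shared VE
```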


Participants Wanted: Learning in VR

General information for prospective participants

Compensation:

Each participant may participate in only ONE of the following three versions:
Version 1: $20 in cash.
Version 2: $30 in cash.
Version 3: $50 in cash (participants may also request an image of their brain!)

Compensation will be dispensed upon completion of the session.

Participants Wanted: Avatar Learning in Virtual Environments (ALIVE) 2016

For the fMRI session: $20/hour. The fMRI version pays $80–100 in cash, and you will get a picture of your brain!
For the lab session: $10/hour. The lab-only version pays $50–70 in cash.
Compensation will be dispensed upon completion of the last session.


Daniel Lin Presents: Effects of Sleep Quality on Virtual Reality Learning

Lin, D., Essoe, J. K.-Y., Tran, J., Zhou, J., Mutter, J., Frostig, D., Yang, J., Reggente, N., & Rissman, J. (2014, May). The effects of sleep quality on virtual reality learning and overnight forgetting. Poster presented at the 23rd Annual Psychology Undergraduate Research Conference, Los Angeles, CA.

Download PDF: Lin_Essoe_Rissman-PURC2014-SleepQuality_&_VR_Learning

ALIVE featured in “I Love UCLA” video contest entry!

Directed by Andrew Butte, this is an entry in the UCLA Fund’s fourth video contest, “I Love UCLA.”

At about 30 seconds in, Daniel Lin, our undergraduate project leader, appears in the video greeting the viewpoint character, who then goes on to explore our UnderSea Campus 😀

Fun stuff.