Essoe et al. (2017) CNS Poster

Poster to be presented at CNS 2017

Poster E90, Monday, March 27, 2:30 – 4:30 pm, Pacific Concourse.

Download poster.

Cognitive Neuroscience Society Annual Meeting, 2017 @ beautifully foggy San Francisco, CA.

Abstract

Long-term retention of vocabulary in two phonetically similar foreign languages is aided when learning occurs in highly distinctive virtual reality environments*

Joey Ka-Yee Essoe, Nicco Reggente, Ai Ohno, Hera Youn-Ji Baek, Jesse Rissman

The environmental context in which a memory is encoded can impact its later accessibility by virtue of tagging the memory with unique retrieval cues. We examined whether distinctive virtual environments (VEs) could be used as a means to provide contextual support during the learning of two sets of easily confusable stimuli. Specifically, we taught participants the translations of 50 English words in two pre-experimentally unfamiliar languages: 10 were learned only in Swahili, 10 only in Chinyanja, and 30 in both languages. Participants in the Dual Context group learned each language in a different VE, whereas participants in the Single Context group learned both languages in the same VE. On Day 2, after the fourth VE learning session, participants’ ability to recall the Swahili and Chinyanja translations of the English words was tested outside of the VEs. One week later (Day 8), participants were reached by telephone and performed a surprise recall test assessing their long-term retention of the foreign words. Our results revealed that while the Single and Dual Context groups showed comparable recall performance when tested on Day 2, the Dual Context group exhibited significantly reduced forgetting when tested on Day 8. This finding showcases how distinctive learning contexts can protect newly acquired memories from succumbing to excessive interference and promote long-term retention. An additional fMRI dataset collected from a separate group of Dual Context participants during Day 2 cued recall should provide further insights into the mechanisms that underlie their memory advantage.

*Yes, I am sorry about the looooong title. I don’t know what I was thinking. Never again.

Participants Wanted: Learning in VR

General information for prospective participants

Compensation:

Each participant may participate in only ONE of the following three versions.
Version 1: $20 in cash.
Version 2: $30 in cash.
Version 3: $50 in cash (participant may also request an image of their brain!).

Compensation will be dispensed upon completion of the session.

Participants Wanted: Avatar Learning in Virtual Environments (ALIVE) 2016

For the fMRI session: $20/hour. The fMRI version pays $80–100 in cash, and you will get a picture of your brain!
For the lab session: $10/hour. The lab-only version pays $50–70 in cash.
Compensation will be dispensed upon completion of the last session.


Daniel Lin Presents: Effects of Sleep Quality on Virtual Reality Learning

Lin, D., Essoe, J. K.-Y., Tran, J., Zhou, J., Mutter, J., Frostig, D., Yang, J., Reggente, N., & Rissman, J. (2014, May). The effects of sleep quality on virtual reality learning and overnight forgetting. Poster presented at the 23rd Annual Psychology Undergraduate Research Conference, Los Angeles, CA.


Click here to download PDF

ALIVE featured in “I Love UCLA” video contest entry!

Directed by Andrew Butte, this is an entry for UCLA Fund’s 4th video contest “I Love UCLA.”

At about 30 seconds in, Daniel Lin, our undergraduate Project Leader, appears in the video greeting the viewpoint character, who then explores our UnderSea Campus 😀

Fun stuff.

Participants Needed: Avatar Learning in Virtual Environments (ALIVE)

This study has concluded as of Winter 2015.

General information for prospective participants


Compensation:

$10/hour, minimum $50. Paid on Day 18 in cash.

Up to 5 Psychology SONA credits.
If participation exceeds 6 hours, the remainder will be paid in cash as described above.

You may opt to receive a combination of cash and credit for your participation.
e.g., if you only need 2 SONA credits, you can get the two credits and $30 in cash.

Please inform your researcher of your compensation preference during scheduling.


Niccotron: to host ALIVE’s OpenSim server!

Nicco put together this machine while I served as his package opener.

It is so pretty… and awesome.

Oculus Rift + OpenSim = I love my job!

Guess who is gonna have a jolly good time? (Or get incredibly sick…)

I LOVE MY WORK SO SO SO MUCH.

Thank you, David Rowe, the one-man team at CtrlAlt Studio who created the first and best (and only!) Rift-compatible OpenSim viewer!

Oryon Essoe: OpenSim Avatar


Meet Oryon Essoe, Prime Minister of Rissland. He works day and night to make Rissland a participant-friendly research space.

He is very smug because he built a classroom space in a tree-house today.


Dr. Jesse Rissman receives DARPA Young Faculty Award!

Proud to congratulate my mentor, Dr. Jesse Rissman, on receiving a DARPA Young Faculty Award!

This grant funds our virtual reality project, ALIVE (Avatar Learning In Virtual Environment).

Article by DARPA: ELITE GROUP OF YOUNG SCIENTISTS EMBARK ON DARPA RESEARCH EFFORTS

Article on UCLA Today

Dr. Jesse Rissman receiving the DARPA YFA from Dr. William Casebeer

OpenSim: Rissland’s Welcome Area

Rissland is a virtual environment created for learning and memory research in the Behavioural Neuroscience Area of the Psychology Department at UCLA.

The project, Avatar Learning In Virtual Environment (ALIVE), is sponsored by DARPA and conducted in Dr. Jesse Rissman’s Memory Laboratory by his graduate students Joey K.-Y. Essoe and Niccolo Reggente.

Rissland was created by Essoe utilising OpenSim’s Diva Distro, customising various free content courtesy of the OpenSim community.
