
Sequence learning recodes cortical representations instead of strengthening initial ones

PLoS Comput Biol. 2021 May 24;17(5):e1008969. doi: 10.1371/journal.pcbi.1008969. Online ahead of print.

ABSTRACT

We contrast two computational models of sequence learning. The associative learner posits that learning proceeds by strengthening existing association weights. Alternatively, recoding posits that learning creates new and more efficient representations of the learned sequences. Importantly, both models propose that humans act as optimal learners but capture different statistics of the stimuli in their internal model. Furthermore, these models make dissociable predictions as to how learning changes the neural representation of sequences. We tested these predictions by using fMRI to extract neural activity patterns from the dorsal visual processing stream during a sequence recall task. We observed that only the recoding account can explain the similarity of neural activity patterns, suggesting that participants recode the learned sequences using chunks. We show that associative learning can theoretically store only a very limited number of overlapping sequences, as is common in ecological working memory tasks, and hence an efficient learner should recode initial sequence representations.
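The contrast the abstract draws can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's actual model: an associative learner that only strengthens pairwise transition weights cannot disambiguate two overlapping sequences built from the same items, while a recoding learner that stores each sequence as a chunk retrieves both without interference. All names and the toy sequences below are illustrative assumptions.

```python
from collections import defaultdict

# Two overlapping sequences sharing the same items (toy example).
seq1 = ["A", "B", "C"]
seq2 = ["A", "C", "B"]

# Associative learner: strengthen pairwise transition weights.
weights = defaultdict(float)
for seq in (seq1, seq2):
    for prev, nxt in zip(seq, seq[1:]):
        weights[(prev, nxt)] += 1.0

# After learning, the successor of "A" is ambiguous: B and C carry
# equal weight, so neither sequence can be recalled reliably from
# transition weights alone.
successors_of_a = {nxt: w for (prev, nxt), w in weights.items() if prev == "A"}
print(successors_of_a)  # {'B': 1.0, 'C': 1.0}

# Recoding learner: store each sequence as a distinct chunk.
# Retrieval by chunk label reproduces each sequence exactly.
chunks = {"chunk1": seq1, "chunk2": seq2}
print(chunks["chunk1"], chunks["chunk2"])
```

The sketch shows why the number of overlapping sequences an associative learner can hold is limited: shared transitions collapse into a single weight, whereas chunked representations keep the sequences separate.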

PMID:34029315 | DOI:10.1371/journal.pcbi.1008969

By Nevin Manimala
