Assistant Professor, Department of Psychology
Ph.D. Stanford University
Postdoctoral Fellowship, Yale University
Research Interests: Cognitive Neuroscience, Memory, Cognitive Control, fMRI Methods
Overview: I am interested in how our perceptual experiences are transformed into memories and in how we recreate and selectively recall these experiences. Research in my lab uses behavioral and neuroimaging methods (primarily fMRI), with an emphasis on applying machine learning algorithms and multivariate pattern analyses to understand how memories are represented and transformed in distributed patterns of brain activity.
Some of the specific topics my lab addresses include: What are the cognitive and neural mechanisms that cause forgetting? How is competition between memories signaled and resolved in the brain during retrieval? How do we reduce interference between memories during encoding? Addressing these questions involves understanding the interactions and relative contributions of fronto-parietal cortex and medial temporal lobe structures.
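The multivariate pattern analyses mentioned above typically train a classifier to decode stimulus or memory content from trial-by-trial voxel patterns. As a minimal sketch of the general approach (not any specific study's pipeline), using simulated data in which a weak category signal is injected into a subset of voxels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 500

# Simulated single-trial activity patterns (trials x voxels)
patterns = rng.standard_normal((n_trials, n_voxels))
labels = rng.integers(0, 2, n_trials)  # e.g., face vs. scene trials

# Inject a weak category signal into the first 50 voxels
patterns[labels == 1, :50] += 0.5

# Cross-validated decoding accuracy; chance level is 0.5
clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, patterns, labels, cv=5).mean()
print(round(accuracy, 2))
```

Above-chance cross-validated accuracy is the standard evidence that a region's distributed activity carries information about the decoded variable.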
Decomposing Parietal Memory Reactivation to Predict Consequences of Remembering.
Cereb Cortex. 2018 Aug 23. [Epub ahead of print]
Authors: Lee H, Samide R, Richter FR, Kuhl BA
Memory retrieval can strengthen, but also distort, memories. Parietal cortex is a candidate region involved in retrieval-induced memory changes as it reflects retrieval success and represents retrieved content. Here, we conducted an fMRI experiment to test whether different forms of parietal reactivation predict distinct consequences of retrieval. Subjects studied associations between words and pictures of faces, scenes, or objects, and then repeatedly retrieved half of the pictures, reporting the vividness of the retrieved pictures ("retrieval practice"). On the following day, subjects completed a recognition memory test for individual pictures. Critically, the test included lures highly similar to studied pictures. Behaviorally, retrieval practice increased both hit and false alarm (FA) rates to similar lures, confirming a causal influence of retrieval on subsequent memory. Using pattern similarity analyses, we measured two different levels of reactivation during retrieval practice: generic "category-level" reactivation and idiosyncratic "item-level" reactivation. Vivid remembering during retrieval practice was associated with stronger category- and item-level reactivation in parietal cortex. However, these measures differentially predicted subsequent recognition memory performance: whereas higher category-level reactivation tended to predict FAs to lures, item-level reactivation predicted correct rejections. These findings indicate that parietal reactivation can be decomposed to tease apart distinct consequences of memory retrieval.
PMID: 30137255 [PubMed - as supplied by publisher]
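The item-level reactivation measure described above is commonly operationalized as a pattern-similarity contrast: correlating a retrieval pattern with the encoding pattern of the same item versus the encoding patterns of other items. A toy illustration with simulated patterns (all names and values hypothetical, not the study's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 200

# Simulated encoding patterns for four items
encoding = {f"item{i}": rng.standard_normal(n_voxels) for i in range(4)}
# Retrieval patterns simulated as noisy reinstatements of encoding patterns
retrieval = {k: v + rng.standard_normal(n_voxels) for k, v in encoding.items()}

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# Same-item similarity: retrieval pattern vs. its own encoding pattern
same_item = np.mean([corr(retrieval[k], encoding[k]) for k in encoding])
# Across-item similarity: retrieval pattern vs. other items' encoding patterns
across_item = np.mean([corr(retrieval[k], encoding[j])
                       for k in encoding for j in encoding if j != k])

# Item-level reactivation: same-item similarity above the across-item baseline
item_specificity = same_item - across_item
print(same_item > across_item)
```

Category-level reactivation follows the same logic but uses the across-item (within-category) similarity itself as the measure of generic reinstatement.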
Parietal Representations of Stimulus Features Are Amplified during Memory Retrieval and Flexibly Aligned with Top-Down Goals.
J Neurosci. 2018 Sep 05;38(36):7809-7821
Authors: Favila SE, Samide R, Sweigart SC, Kuhl BA
In studies of human episodic memory, the phenomenon of reactivation has traditionally been observed in regions of occipitotemporal cortex (OTC) involved in visual perception. However, reactivation also occurs in lateral parietal cortex (LPC), and recent evidence suggests that stimulus-specific reactivation may be stronger in LPC than in OTC. These observations raise important questions about the nature of memory representations in LPC and their relationship to representations in OTC. Here, we report two fMRI experiments that quantified stimulus feature information (color and object category) within LPC and OTC, separately during perception and memory retrieval, in male and female human subjects. Across both experiments, we observed a clear dissociation between OTC and LPC: while feature information in OTC was relatively stronger during perception than memory, feature information in LPC was relatively stronger during memory than perception. Thus, while OTC and LPC represented common stimulus features in our experiments, they preferentially represented this information during different stages. In LPC, this bias toward mnemonic information co-occurred with stimulus-level reinstatement during memory retrieval. In Experiment 2, we considered whether mnemonic feature information in LPC was flexibly and dynamically shaped by top-down retrieval goals. Indeed, we found that dorsal LPC preferentially represented retrieved feature information that addressed the current goal. In contrast, ventral LPC represented retrieved features independent of the current goal. Collectively, these findings provide insight into the nature and significance of mnemonic representations in LPC and constitute an important bridge between putative mnemonic and control functions of parietal cortex.
SIGNIFICANCE STATEMENT: When humans remember an event from the past, patterns of sensory activity that were present during the initial event are thought to be reactivated.
Here, we investigated the role of lateral parietal cortex (LPC), a high-level region of association cortex, in representing prior visual experiences. We find that LPC contained stronger information about stimulus features during memory retrieval than during perception. We also found that current task goals influenced the strength of stimulus feature information in LPC during memory. These findings suggest that, in addition to early sensory areas, high-level areas of cortex, such as LPC, represent visual information during memory retrieval, and that these areas may play a special role in flexibly aligning memories with current goals.
PMID: 30054390 [PubMed - in process]
Bottom-up and top-down factors differentially influence stimulus representations across large-scale attentional networks.
J Neurosci. 2018 Feb 02. [Epub ahead of print]
Authors: Long NM, Kuhl BA
Visual attention is thought to be supported by three large-scale frontoparietal networks: the frontoparietal control network (FPCN), the dorsal attention network (DAN), and the ventral attention network (VAN). The traditional view is that these networks support visual attention by biasing and evaluating sensory representations in visual cortical regions. However, recent evidence suggests that frontoparietal regions actively represent perceptual stimuli. Here, we assessed how perceptual stimuli are represented across large-scale frontoparietal and visual networks. Specifically, we tested whether representations of stimulus features across these networks are differentially sensitive to bottom-up and top-down factors. In a pair of pattern-based fMRI studies, male and female human subjects made perceptual decisions about face images that varied along two independent dimensions: gender and affect. Across studies, we interrupted bottom-up visual input using backward masks. Within studies, we manipulated which stimulus features were goal-relevant (i.e., whether gender or affect was relevant) and task switching (i.e., whether the goal on the current trial matched the goal on the prior trial). We found that stimulus features could be reliably decoded from all four networks and, importantly, that sub-regions within each attentional network maintained coherent representations. Critically, the different attentional manipulations (interruption, goal relevance, task switching) differentially influenced feature representations across networks. Namely, whereas visual interruption had a relatively greater influence on representations in visual regions, goal relevance and task switching had a relatively greater influence on representations in frontoparietal networks. 
Thus, large-scale brain networks can be dissociated according to how attention influences the feature representations that they maintain.
SIGNIFICANCE STATEMENT: Visual attention is supported by multiple frontoparietal attentional networks. However, it remains unclear how stimulus features are represented within these networks and how they are influenced by attention. Here we assessed feature representations in four large-scale networks using a perceptual decision-making paradigm in which we manipulated top-down and bottom-up factors. We found that top-down manipulations such as goal relevance and task switching modulated feature representations in attentional networks, whereas bottom-up manipulations such as interruption of visual processing had a relatively stronger influence on feature representations in visual regions. Together, these findings indicate that attentional networks actively represent stimulus features and that representations within different large-scale networks are influenced by different forms of attention.
PMID: 29437930 [PubMed - as supplied by publisher]
Overlap among Spatial Memories Triggers Repulsion of Hippocampal Representations.
Curr Biol. 2017 Aug 07;27(15):2307-2317.e5
Authors: Chanales AJH, Oza A, Favila SE, Kuhl BA
Across the domains of spatial navigation and episodic memory, the hippocampus is thought to play a critical role in disambiguating (pattern separating) representations of overlapping events. However, it is not fully understood how and why hippocampal patterns become separated. Here, we test the idea that event overlap triggers a "repulsion" among hippocampal representations that develops over the course of learning. Using a naturalistic route-learning paradigm and spatiotemporal pattern analysis of human fMRI data, we found that hippocampal representations of overlapping routes gradually diverged with learning to the point that they became less similar than representations of non-overlapping events. In other words, the hippocampus not only disambiguated overlapping events but formed representations that "reversed" the objective similarity among routes. This finding, which was selective to the hippocampus, is not predicted by standard theoretical accounts of pattern separation. Critically, because the overlapping route stimuli that we used ultimately diverged (so that each route contained overlapping and non-overlapping segments), we were able to test whether the reversal effect was selective to the overlapping segments. Indeed, once overlapping routes diverged (eliminating spatial and visual similarity), hippocampal representations paradoxically became relatively more similar. Finally, using a novel analysis approach, we show that the degree to which individual hippocampal voxels were initially shared across route representations was predictive of the magnitude of learning-related separation. Collectively, these findings indicate that event overlap triggers a repulsion of hippocampal representations, a finding that provides critical mechanistic insight into how and why hippocampal representations become separated.
PMID: 28736170 [PubMed - indexed for MEDLINE]
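The "reversal" effect above amounts to a specific ordering of pattern similarities: after learning, overlapping routes become less similar to each other than non-overlapping routes are. A toy simulation of that analysis logic (all patterns fabricated for illustration; the repulsion dynamics are simply imposed, not modeled):

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 100
shared = rng.standard_normal(n_voxels)  # component shared by overlapping routes

# Early in learning: overlapping routes A and B share a common component,
# so their patterns are more similar than a non-overlapping route C
route_a_early = shared + 0.3 * rng.standard_normal(n_voxels)
route_b_early = shared + 0.3 * rng.standard_normal(n_voxels)
route_c = rng.standard_normal(n_voxels)

# Late in learning: impose "repulsion" by adding opposite-signed components
push = rng.standard_normal(n_voxels)
route_a_late = route_a_early + 1.5 * push
route_b_late = route_b_early - 1.5 * push

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

early_overlap = corr(route_a_early, route_b_early)
late_overlap = corr(route_a_late, route_b_late)
nonoverlap = corr(route_a_early, route_c)

# Reversal signature: overlapping-route similarity ends up BELOW the
# non-overlapping baseline, not merely reduced toward it
print(early_overlap > nonoverlap, late_overlap < nonoverlap)
```

The key diagnostic is the second comparison: standard pattern-separation accounts predict similarity falling toward the baseline, whereas repulsion predicts it crossing below.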
Sampling memory to make profitable choices.
Nat Neurosci. 2017 Jun 27;20(7):903-904
Authors: Kuhl BA, Long NM
PMID: 28653687 [PubMed - in process]
Experience-dependent hippocampal pattern differentiation prevents interference during subsequent learning.
Nat Commun. 2016 Apr 06;7:11066
Authors: Favila SE, Chanales AJ, Kuhl BA
The hippocampus is believed to reduce memory interference by disambiguating neural representations of similar events. However, there is limited empirical evidence linking representational overlap in the hippocampus to memory interference. Likewise, it is not fully understood how learning influences overlap among hippocampal representations. Using pattern-based fMRI analyses, we tested for a bidirectional relationship between memory overlap in the human hippocampus and learning. First, we show that learning drives hippocampal representations of similar events apart from one another. These changes are not explained by task demands to discriminate similar stimuli and are fully absent in visual cortical areas that feed into the hippocampus. Second, we show that lower representational overlap in the hippocampus benefits subsequent learning by preventing interference between similar memories. These findings reveal targeted experience-dependent changes in hippocampal representations of similar events and provide a critical link between memory overlap in the hippocampus and behavioural expressions of memory interference.
PMID: 27925613 [PubMed - indexed for MEDLINE]
Hippocampal Mismatch Signals Are Modulated by the Strength of Neural Predictions and Their Similarity to Outcomes.
J Neurosci. 2016 Dec 14;36(50):12677-12687
Authors: Long NM, Lee H, Kuhl BA
The hippocampus is thought to compare predicted events with current perceptual input, generating a mismatch signal when predictions are violated. However, most prior studies have only inferred when predictions occur without measuring them directly. Moreover, an important but unresolved question is whether hippocampal mismatch signals are modulated by the degree to which predictions differ from outcomes. Here, we conducted a human fMRI study in which subjects repeatedly studied various word-picture pairs, learning to predict particular pictures (outcomes) from the words (cues). After initial learning, a subset of cues was paired with a novel, unexpected outcome, whereas other cues continued to predict the same outcome. Critically, when outcomes changed, the new outcome was either "near" to the predicted outcome (same visual category as the predicted picture) or "far" from the predicted outcome (different visual category). Using multivoxel pattern analysis, we indexed cue-evoked reactivation (prediction) within neocortical areas and related these trial-by-trial measures of prediction strength to univariate hippocampal responses to the outcomes. We found that prediction strength positively modulated hippocampal responses to unexpected outcomes, particularly when unexpected outcomes were close, but not identical, to the prediction. Hippocampal responses to unexpected outcomes were also associated with a tradeoff in performance during a subsequent memory test: relatively faster retrieval of new (updated) associations, but relatively slower retrieval of the original (older) associations. Together, these results indicate that hippocampal mismatch signals reflect a comparison between active predictions and current outcomes and that these signals are most robust when predictions are similar, but not identical, to outcomes.
SIGNIFICANCE STATEMENT: Although the hippocampus is widely thought to signal "mismatches" between memory-based predictions and outcomes, previous research has not linked hippocampal mismatch signals directly to neural measures of prediction strength. Here, we show that hippocampal mismatch signals increase as a function of the strength of predictions in neocortical regions. This increase in hippocampal mismatch signals was particularly robust when outcomes were similar, but not identical, to predictions. These results indicate that hippocampal mismatch signals are driven by both the active generation of predictions and the similarity between predictions and outcomes.
PMID: 27821577 [PubMed - indexed for MEDLINE]
Reconstructing Perceived and Retrieved Faces from Activity Patterns in Lateral Parietal Cortex.
J Neurosci. 2016 Jun 01;36(22):6069-82
Authors: Lee H, Kuhl BA
Recent findings suggest that the contents of memory encoding and retrieval can be decoded from the angular gyrus (ANG), a subregion of posterior lateral parietal cortex. However, typical decoding approaches provide little insight into the nature of ANG content representations. Here, we tested whether complex, multidimensional stimuli (faces) could be reconstructed from ANG by predicting underlying face components from fMRI activity patterns in humans. Using an approach inspired by computer vision methods for face recognition, we applied principal component analysis to a large set of face images to generate eigenfaces. We then modeled relationships between eigenface values and patterns of fMRI activity. Activity patterns evoked by individual faces were then used to generate predicted eigenface values, which could be transformed into reconstructions of individual faces. We show that visually perceived faces were reliably reconstructed from activity patterns in occipitotemporal cortex and several lateral parietal subregions, including ANG. Subjective assessment of reconstructed faces revealed specific sources of information (e.g., affect and skin color) that were successfully reconstructed in ANG. Strikingly, we also found that a model trained on ANG activity patterns during face perception was able to successfully reconstruct an independent set of face images that were held in memory. Together, these findings provide compelling evidence that ANG forms complex, stimulus-specific representations that are reflected in activity patterns evoked during perception and remembering.
SIGNIFICANCE STATEMENT: Neuroimaging studies have consistently implicated lateral parietal cortex in episodic remembering, but the functional contributions of lateral parietal cortex to memory remain a topic of debate. Here, we used an innovative form of fMRI pattern analysis to test whether lateral parietal cortex actively represents the contents of memory. Using a large set of human face images, we first extracted latent face components (eigenfaces). We then used machine learning algorithms to predict face components from fMRI activity patterns and, ultimately, to reconstruct images of individual faces. We show that activity patterns in a subregion of lateral parietal cortex, the angular gyrus, supported successful reconstruction of perceived and remembered faces, confirming a role for this region in actively representing remembered content.
PMID: 27251627 [PubMed - indexed for MEDLINE]
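The eigenface reconstruction pipeline described above has three steps: fit PCA on face images to obtain eigenface components, learn a linear mapping from activity patterns to component scores, then invert the PCA on predicted scores for held-out patterns. A self-contained sketch with entirely synthetic "images" and "voxel patterns" (every variable here is hypothetical, not data or code from the study):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n_faces, n_pixels, n_voxels, n_components = 80, 64 * 64, 300, 20

# Synthetic face images (faces x pixels); PCA yields eigenface scores
faces = rng.standard_normal((n_faces, n_pixels))
pca = PCA(n_components=n_components)
scores = pca.fit_transform(faces)

# Simulate voxel patterns carrying a noisy linear code of the scores
weights = rng.standard_normal((n_components, n_voxels))
patterns = scores @ weights + 0.5 * rng.standard_normal((n_faces, n_voxels))

# Train on most faces, hold out the rest (cross-validated decoding)
train, test = slice(0, 60), slice(60, 80)
model = Ridge(alpha=1.0).fit(patterns[train], scores[train])

# Predict eigenface scores for held-out patterns, then invert the PCA
predicted_scores = model.predict(patterns[test])
reconstructions = pca.inverse_transform(predicted_scores)
print(reconstructions.shape)  # (20, 4096): one image per held-out face
```

Reconstruction quality can then be assessed by comparing each reconstructed image with the actual held-out face, as in the subjective and forced-choice assessments the paper describes.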