Assistant Professor, Department of Psychology
Ph.D. Stanford University
Postdoctoral Fellowship, Yale University
Research Interests: Cognitive Neuroscience, Memory, Cognitive Control, fMRI Methods
Overview: I am interested in how our perceptual experiences are transformed into memories and how we recreate and selectively recall those experiences. Research in my lab uses behavioral and neuroimaging methods (primarily fMRI), with an emphasis on applying machine learning algorithms and multivariate pattern analyses to neuroimaging data in order to understand how memories are represented and transformed in distributed patterns of brain activity.
Some of the specific topics my lab addresses include: What are the cognitive and neural mechanisms that cause forgetting? How is competition between memories signaled and resolved in the brain during retrieval? How do we reduce interference between memories during encoding? Addressing these questions involves understanding the interactions and relative contributions of fronto-parietal cortex and medial temporal lobe structures.
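The multivariate pattern analyses mentioned above typically involve training a classifier on trial-by-trial voxel activity patterns and testing it with cross-validation. The sketch below illustrates the general approach with simulated data; the data, signal strength, and classifier choice are all hypothetical and not taken from the lab's actual pipeline.

```python
# Minimal sketch of MVPA-style decoding on simulated "voxel" patterns.
# All data here are synthetic; parameters are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels = 120, 50
labels = rng.integers(0, 2, n_trials)            # two hypothetical conditions
patterns = rng.normal(size=(n_trials, n_voxels)) # baseline noise
patterns[labels == 1, :10] += 0.8                # weak condition signal in a voxel subset

# 5-fold cross-validated decoding accuracy; above chance (0.5) if signal is present
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, patterns, labels, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```

In practice, real analyses would use preprocessed fMRI beta estimates in place of the simulated patterns, but the train/test cross-validation logic is the same.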
Bottom-up and top-down factors differentially influence stimulus representations across large-scale attentional networks.
J Neurosci. 2018 Feb 02.
Authors: Long NM, Kuhl BA
Visual attention is thought to be supported by three large-scale frontoparietal networks: the frontoparietal control network (FPCN), the dorsal attention network (DAN), and the ventral attention network (VAN). The traditional view is that these networks support visual attention by biasing and evaluating sensory representations in visual cortical regions. However, recent evidence suggests that frontoparietal regions actively represent perceptual stimuli. Here, we assessed how perceptual stimuli are represented across large-scale frontoparietal and visual networks. Specifically, we tested whether representations of stimulus features across these networks are differentially sensitive to bottom-up and top-down factors. In a pair of pattern-based fMRI studies, male and female human subjects made perceptual decisions about face images that varied along two independent dimensions: gender and affect. Across studies, we interrupted bottom-up visual input using backward masks. Within studies, we manipulated which stimulus features were goal-relevant (i.e., whether gender or affect was relevant) and task switching (i.e., whether the goal on the current trial matched the goal on the prior trial). We found that stimulus features could be reliably decoded from all four networks and, importantly, that sub-regions within each attentional network maintained coherent representations. Critically, the different attentional manipulations (interruption, goal relevance, task switching) differentially influenced feature representations across networks. Namely, whereas visual interruption had a relatively greater influence on representations in visual regions, goal relevance and task switching had a relatively greater influence on representations in frontoparietal networks. 
Thus, large-scale brain networks can be dissociated according to how attention influences the feature representations that they maintain.

SIGNIFICANCE STATEMENT: Visual attention is supported by multiple frontoparietal attentional networks. However, it remains unclear how stimulus features are represented within these networks and how they are influenced by attention. Here we assessed feature representations in four large-scale networks using a perceptual decision-making paradigm in which we manipulated top-down and bottom-up factors. We found that top-down manipulations such as goal relevance and task switching modulated feature representations in attentional networks, whereas bottom-up manipulations such as interruption of visual processing had a relatively stronger influence on feature representations in visual regions. Together, these findings indicate that attentional networks actively represent stimulus features and that representations within different large-scale networks are influenced by different forms of attention.
PMID: 29437930