Recent advances in machine learning have shown that deep neural networks (DNNs) can provide powerful and flexible models of neural sensory processing. In the auditory system, standard linear-nonlinear (LN) models and their derivatives are unable to account for high-order cortical representations, and DNNs may provide additional insight into cortical sound processing. Deep learning can be difficult to implement with relatively small datasets, such as those available from single-neuron recordings. To address this limitation, we developed a population encoding model: a network model that simultaneously predicts the activity of many neurons recorded during presentation of a large, fixed set of natural sounds. This approach defines a spectro-temporal space that is shared by all the neurons and pools statistical power across the population. We tested a range of DNN architectures on data from primary and non-primary auditory cortex. DNNs performed substantially better than LN models. Moreover, the DNNs were highly generalizable. The output layer of a model pre-fit using one population of neurons could be fit to different single units and/or different stimuli, with performance close to that of neurons in the original model. These results indicate that population encoding models capture a general set of computations performed by auditory cortex and that the model itself can be analyzed for a general characterization of auditory cortical representation.
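The core idea of the population encoding model can be sketched in a few lines: shared layers map the stimulus spectrogram into a feature space common to all neurons, and a lightweight per-neuron output layer reads out each unit's firing rate. Transferring to new units then means freezing the shared layers and fitting only a fresh output layer. All dimensions, the ReLU nonlinearity, and the variable names below are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 18 spectral channels, a 20-bin temporal
# window of the stimulus spectrogram, 30 shared features, 50 neurons.
n_channels, n_lags, n_hidden, n_neurons = 18, 20, 30, 50

# Shared layer: defines the spectro-temporal space used by ALL neurons.
W_shared = rng.normal(scale=0.1, size=(n_hidden, n_channels * n_lags))

# Per-neuron output layer: one readout weight vector per recorded unit.
W_out = rng.normal(scale=0.1, size=(n_neurons, n_hidden))

def predict(spectrogram_window, w_out):
    """Predict population firing rates for one spectro-temporal window."""
    x = spectrogram_window.reshape(-1)        # flatten (channels x lags)
    h = np.maximum(W_shared @ x, 0.0)         # shared nonlinear features (ReLU)
    return np.maximum(w_out @ h, 0.0)         # non-negative rate per neuron

# Transfer: keep W_shared fixed and fit only a new output layer for a
# different set of single units (here just re-initialized; in practice
# the new readout would be fit by gradient descent on the new data).
W_out_new = rng.normal(scale=0.1, size=(10, n_hidden))  # 10 new units

stim = rng.normal(size=(n_channels, n_lags))
rates_orig = predict(stim, W_out)      # shape (50,)
rates_new = predict(stim, W_out_new)   # shape (10,)
```

Fitting only the small output layer for new neurons or stimuli is what lets the pre-fit shared representation pool statistical power across recordings.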