Yashar Ahmadian

Assistant Professor, Department of Biology
Member, ION

Ph.D. Columbia University
B.Sc. Sharif University of Technology, Tehran, Iran

yashar@uoregon.edu
Lab Website
Office: 238 Huestis 
Phone: 541-346-7636

Research Interests: Theoretical neuroscience

Overview: Our lab works in theoretical neuroscience. Our broad interest is in understanding how large networks of neurons, such as those in the mammalian cerebral cortex, process sensory inputs and give rise to higher-level cognitive functions through their collective dynamics on multiple time scales. To shed light on complex neurobiological phenomena, we use mathematical models that capture a few core concepts or computational and dynamical principles. We also develop new statistical and computational tools for analyzing large, high-dimensional neurobiological and behavioral datasets. In pursuing these goals we draw on techniques from statistical physics, random matrix theory, machine learning, and information theory, and we collaborate with experimental labs here at the University of Oregon and elsewhere.

Current questions of interest include the following. How do randomness and non-normality in the connectivity structure of a network affect its dynamics? What roles do horizontal and feedback connections in sensory cortical areas play in contextual modulation (how, for example, the response of neurons in the visual cortex to a stimulus is affected by the visual context surrounding it), and ultimately in the dynamical representation of objects? Can the division of neurons in the early auditory system into distinct response types be explained by efficient coding principles?
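To make the first of these questions concrete, here is a minimal sketch (in Python with NumPy; this is a standard textbook-style illustration, not code from the lab, and all parameter values are illustrative assumptions) of how non-normal connectivity shapes dynamics: two connectivity matrices with identical, stable eigenvalues, one normal and one non-normal, where only the non-normal one transiently amplifies activity before it decays.

    import numpy as np

    # Illustrative example (assumed parameters): linear rate dynamics
    # dx/dt = (-I + W) x. The two W's below give identical eigenvalues,
    # so both networks are asymptotically stable, but the non-normal one
    # transiently amplifies the activity norm before decaying.
    dt, steps = 0.01, 2000
    W_normal = np.array([[0.5, 0.0],
                         [0.0, 0.5]])
    W_nonnormal = np.array([[0.5, 6.0],   # strong effective feedforward weight
                            [0.0, 0.5]])

    for name, W in [("normal", W_normal), ("non-normal", W_nonnormal)]:
        A = -np.eye(2) + W                # effective dynamics matrix
        x = np.array([0.0, 1.0])          # unit-norm initial condition
        peak = np.linalg.norm(x)
        for _ in range(steps):
            x = x + dt * (A @ x)          # forward-Euler integration
            peak = max(peak, np.linalg.norm(x))
        print(f"{name}: eig(A) = {np.linalg.eigvals(A)}, "
              f"peak ||x|| = {peak:.2f}, final ||x|| = {np.linalg.norm(x):.4f}")

Both dynamics matrices have eigenvalue -0.5 twice, so any perturbation ultimately decays; but the feedforward structure of the non-normal network lets the activity norm grow severalfold before that decay sets in. This kind of transient amplification is one way non-normal connectivity can matter for network dynamics even when eigenvalue-based stability analysis looks unremarkable.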

RECENT PUBLICATIONS

Learning unbelievable probabilities.

Adv Neural Inf Process Syst. 2011 Dec;24:738-746

Authors: Pitkow X, Ahmadian Y, Miller KD

Abstract
Loopy belief propagation performs approximate inference on graphical models with loops. One might hope to compensate for the approximation by adjusting model parameters. Learning algorithms for this purpose have been explored previously, and the claim has been made that every set of locally consistent marginals can arise from belief propagation run on a graphical model. On the contrary, here we show that many probability distributions have marginals that cannot be reached by belief propagation using any set of model parameters or any learning algorithm. We call such marginals 'unbelievable.' This problem occurs whenever the Hessian of the Bethe free energy is not positive-definite at the target marginals. All learning algorithms for belief propagation necessarily fail in these cases, producing beliefs or sets of beliefs that may even be worse than the pre-learning approximation. We then show that averaging inaccurate beliefs, each obtained from belief propagation using model parameters perturbed about some learned mean values, can achieve the unbelievable marginals.

PMID: 28781497 [PubMed]
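As a concrete companion to the abstract above, the following is a minimal sketch of loopy belief propagation itself, run on the smallest loopy graph: a cycle of three binary variables. The graph, potentials, and coupling strength are all illustrative assumptions, and the paper's perturb-and-average remedy is not implemented here; this only shows the sum-product iteration whose fixed points the paper analyzes.

    import numpy as np

    # Toy pairwise model (assumed numbers): three binary variables on a
    # triangle, agreement-favoring couplings, and biased unary factors.
    edges = [(0, 1), (1, 2), (0, 2)]
    J = 0.8                                   # coupling strength (assumption)
    unary = np.array([[0.6, 0.4],             # psi_i(x_i) for each node i
                      [0.5, 0.5],
                      [0.3, 0.7]])
    pair = np.exp(J * np.array([[1., -1.],    # psi_ij(x_i, x_j)
                                [-1., 1.]]))

    def neighbors(i):
        return [b if a == i else a for (a, b) in edges if i in (a, b)]

    # messages[(i, j)] is the sum-product message from node i to node j
    messages = {d: np.full(2, 0.5)
                for (a, b) in edges for d in [(a, b), (b, a)]}

    for _ in range(200):                      # iterate updates to a fixed point
        new = {}
        for (i, j) in messages:
            prod = unary[i].copy()
            for k in neighbors(i):
                if k != j:
                    prod *= messages[(k, i)]
            m = pair.T @ prod                 # sum over x_i
            new[(i, j)] = m / m.sum()         # normalize for stability
        delta = max(np.abs(new[d] - messages[d]).max() for d in messages)
        messages = new
        if delta < 1e-10:
            break

    for i in range(3):                        # beliefs = approximate marginals
        b = unary[i] * np.prod([messages[(k, i)] for k in neighbors(i)], axis=0)
        print(f"node {i}: belief = {b / b.sum()}")

The fixed points of these message updates are stationary points of the Bethe free energy. The paper's result says that when the Hessian of the Bethe free energy fails to be positive-definite at a target set of marginals, no choice of the unary and pairwise potentials above can make these beliefs match that target, no matter what learning algorithm tunes them.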