Abstract: Computational processes in neural systems emerge through learning across multiple timescales, from evolution and development to immediate, in-context adaptation. Yet fundamental questions remain: Which neural architectures confer evolutionary advantages? How do experiences shape circuit dynamics? What principles govern how specific computations arise during training? My group addresses these questions using simulated recurrent neural networks. Building on a decade of research across multiple labs, we focus on fixed-point structures, termed "dynamical motifs", that serve as computational primitives. We have found that these motifs can be flexibly composed to solve diverse tasks, with rapid learning often involving novel recombination of existing motifs rather than construction of entirely new dynamics. However, the principles governing motif composition remain poorly understood, motivating our simulation-based approach. I will present two ongoing projects that illustrate this framework:
- Dynamical motifs underlying foraging behavior: How fundamental dynamical motifs support naturalistic decision-making and navigation.
- How task structure shapes computational dynamics: The relationship between a problem's structure and the organization of the dynamical system that solves it.
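For readers unfamiliar with the framework, the fixed-point structures mentioned above are typically located numerically. A widely used approach (due to Sussillo & Barak) minimizes the network's "speed" q(h) = ½‖F(h) − h‖² from many initial states; minima with q ≈ 0 are fixed points of the dynamics. The sketch below is illustrative only, assuming a toy vanilla tanh RNN with random placeholder weights rather than any network from the talk:

```python
# Illustrative sketch: finding fixed points of a toy RNN by
# minimizing the speed q(h) = 0.5 * ||F(h) - h||^2 from random
# initial states. Weights are random placeholders, not trained.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N = 32                                               # hidden units (arbitrary)
W = 0.9 * rng.standard_normal((N, N)) / np.sqrt(N)   # recurrent weights

def step(h):
    """One update of a vanilla tanh RNN, with external input held at zero."""
    return np.tanh(W @ h)

def speed(h):
    """q(h) = 0.5 * ||F(h) - h||^2; exactly zero at a fixed point."""
    d = step(h) - h
    return 0.5 * d @ d

def speed_grad(h):
    """Analytic gradient of q, for accurate convergence to q ~ 0."""
    d = step(h) - h
    J = (1.0 - np.tanh(W @ h) ** 2)[:, None] * W     # Jacobian of step
    return (J - np.eye(N)).T @ d

# Descend q from several random states; keep well-converged candidates.
candidates = []
for _ in range(8):
    h0 = rng.standard_normal(N)
    res = minimize(speed, h0, jac=speed_grad, method="L-BFGS-B",
                   options={"gtol": 1e-12, "ftol": 1e-15})
    if res.fun < 1e-10:
        candidates.append(res.x)

print(f"{len(candidates)} fixed-point candidates found")
```

In this toy setting the weights are weak enough that the descent typically lands on the stable fixed point at the origin; in trained networks the same procedure uncovers the attractors, saddles, and line attractors that constitute the dynamical motifs discussed in the talk.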