Research Overview

Our lab studies how learning unfolds in neural populations and artificial networks, with the overall aim of linking the rules governing weight updates to representational changes and network-level computations.

A central focus of our research is how learning shapes population dynamics to produce new computations. We build mathematical models using both theoretical and data-driven approaches. First, we study the dynamics of gradient flow and biologically inspired alternatives to understand how the structure of a learning rule influences representational geometry and overall model behavior. Second, we fit models to neural data to infer how population dynamics and latent manifolds change during learning. Combining these approaches lets us examine the interplay between learning rules, population dynamics, and manifold geometry.
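As a toy illustration of the first approach (a minimal sketch, not our actual models), one can discretize gradient flow on a two-layer linear network with small Euler steps and watch the singular values of the end-to-end map evolve; all names and parameter values here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: gradient flow on a two-layer linear network
# y = W2 @ W1 @ x, discretized with small Euler steps. The singular
# values of the product W2 @ W1 reveal how the learning rule shapes
# representational geometry over training.
rng = np.random.default_rng(0)
n = 200
X = rng.standard_normal((5, n))            # inputs
Wt = rng.standard_normal((3, 5))           # teacher map defining targets
Y = Wt @ X

W1 = 0.01 * rng.standard_normal((4, 5))    # small-norm initialization
W2 = 0.01 * rng.standard_normal((3, 4))
dt = 0.02                                   # Euler step size

for _ in range(10000):
    E = (W2 @ W1 @ X - Y) / n              # per-sample residual
    g1 = W2.T @ E @ X.T                    # dL/dW1 for L = 0.5*||E||^2
    g2 = E @ (W1 @ X).T                    # dL/dW2
    W1 -= dt * g1
    W2 -= dt * g2

# Under small initialization, the product's singular values approach the
# teacher's, typically one mode at a time.
print(np.linalg.svd(W2 @ W1, compute_uv=False))
```

The small-norm initialization matters: it places the network near a saddle, so modes of the teacher are picked up sequentially rather than all at once.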

The motor system provides a concrete setting in which to study these principles. Motor learning is distributed across brain areas and combines modularity with multiple learning rules, properties that can be tested in modular neural network models. Because motor regions link population dynamics directly to goal-directed actions and task performance, they offer a natural window into how low-dimensional latent dynamics and sequential motifs emerge over learning. Building on these principles, we use machine learning and control-theoretic frameworks to model how neural dynamics give rise to motor behavior and to test how learning shapes population activity.
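A minimal sketch of the data-driven side (illustrative only, not our actual pipeline): simulate a population driven by a two-dimensional rotational latent state, recover the manifold with PCA, and fit a linear dynamics matrix to the projected latents by least squares. All variable names and noise levels are assumptions for the example.

```python
import numpy as np

# Illustrative sketch: infer low-dimensional latent dynamics from
# population activity. N neurons are driven by a 2-D rotational latent
# state; PCA recovers the manifold, and least squares fits the dynamics.
rng = np.random.default_rng(1)
T, N, d = 500, 30, 2

theta = 0.1
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # slow rotation
z = np.zeros((T, d))
z[0] = [1.0, 0.0]
for t in range(T - 1):
    z[t + 1] = A @ z[t]

C = rng.standard_normal((N, d))                    # loading matrix
X = z @ C.T + 0.05 * rng.standard_normal((T, N))   # noisy population activity

# PCA via SVD of the mean-centered activity; rows of Vt are latent axes.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
z_hat = Xc @ Vt[:d].T                              # projected latents, T x d

# One-step least squares: z_hat[t+1] ~ A_hat @ z_hat[t].
B, *_ = np.linalg.lstsq(z_hat[:-1], z_hat[1:], rcond=None)
A_hat = B.T

# Eigenvalues of A_hat are invariant to the (unknown) change of basis, so
# they should sit near the unit circle at angle +/- theta.
print(np.linalg.eigvals(A_hat))
```

Because eigenvalues are preserved under similarity transforms, the rotational structure of the latent dynamics is recovered even though PCA returns the latents only up to an invertible change of basis.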

Our work draws on mathematical tools from dynamical systems, matrix and tensor analysis, differential geometry, and high-dimensional statistics. Using this approach, we hope to uncover general principles of computation, shedding light on how neural systems learn, adapt, and generalize in complex environments.

We currently have openings! Contact Alex if you are interested in joining the team.