Physics Colloquium with SueYeon Chung on Multi-Level Theory of Neural Representations: Capacity of Neural Manifolds in Biological and Artificial Neural Networks

SueYeon Chung of New York University and the Flatiron Institute (hosted by Wessel) will present the colloquium "Multi-Level Theory of Neural Representations: Capacity of Neural Manifolds in Biological and Artificial Neural Networks."

A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine a light on the black box of representations in neural circuits. In this talk, we will present theoretical approaches that help describe how cognitive task implementations emerge from the structure of neural population activity and from biologically plausible neural networks.

We will introduce a new theory that connects the geometric structures arising from neural population responses (i.e., neural manifolds) to a neural representation's efficiency in implementing a task. In particular, this theory describes how many neural manifolds can be represented (or packed) in the neural activity space while remaining linearly decodable by a downstream readout neuron. The intuition from this theory is remarkably simple: as in a sphere packing problem in physical space, many neural manifolds can be packed into the neural activity space if the manifolds are small and low-dimensional, and only a few if they are large and high-dimensional.
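To make the packing intuition concrete, here is a minimal numerical sketch (our own illustration, not code from the talk): it samples labeled point-cloud "manifolds" of a given radius and intrinsic dimension in an N-dimensional activity space and uses a simple perceptron to test whether a single linear readout can still separate them. All function names and parameter values are hypothetical.

```python
# Hedged sketch of the sphere-packing intuition: small, low-dimensional
# manifolds allow many more linearly separable classes than large ones.
import numpy as np

rng = np.random.default_rng(0)

def separable(P, N, radius, dim, points_per_manifold=20, epochs=500):
    """Test whether P labeled manifolds in N-dim space admit a linear readout."""
    centers = rng.standard_normal((P, N))            # one center per manifold
    labels = rng.choice([-1.0, 1.0], size=P)         # one binary label per manifold
    X, y = [], []
    for c, lab in zip(centers, labels):
        # Point cloud confined to a random dim-dimensional subspace of scale `radius`.
        basis = rng.standard_normal((dim, N))
        pts = c + radius * rng.standard_normal((points_per_manifold, dim)) @ basis / np.sqrt(dim)
        X.append(pts)
        y.append(np.full(points_per_manifold, lab))
    X, y = np.vstack(X), np.concatenate(y)
    w = np.zeros(N)                                  # perceptron readout weights
    for _ in range(epochs):
        wrong = y * (X @ w) <= 0                     # misclassified points
        if not wrong.any():
            return True                              # all manifolds linearly decoded
        w += (y[wrong, None] * X[wrong]).sum(axis=0) / len(X)
    return False                                     # gave up: likely not separable

N = 100
for radius in (0.1, 1.0):
    # Crude capacity estimate: largest P (in steps of 10) that is still separable.
    P = 10
    while separable(P, N, radius=radius, dim=5) and P < 400:
        P += 10
    print(f"radius={radius}: ~{P - 10} manifolds separable in N={N} dimensions")
```

Small-radius manifolds behave like points, so the readout can pack in many of them; inflating the radius to the scale of the centers sharply reduces how many remain separable.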

Next, we will describe how this approach can, in fact, open the black box of distributed neuronal circuits in a range of settings, such as experimental neural datasets and artificial neural networks. In particular, our method overcomes a key limitation of traditional dimensionality reduction techniques, as it operates directly on high-dimensional representations. Furthermore, the method allows for simultaneous multi-level analysis: it measures geometric properties of neural population data while estimating the amount of task information embedded in the same population.
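As a hedged illustration of what operating directly on high-dimensional representations can look like, the sketch below computes two standard geometric descriptors of a single neural manifold, its overall radius and its participation-ratio dimension, from a raw point cloud of population responses. This is a common textbook construction, not necessarily the exact estimator used in the talk.

```python
# Illustrative sketch (not the talk's exact method): geometric descriptors of a
# neural manifold computed in the full high-dimensional activity space.
import numpy as np

def manifold_geometry(points):
    """points: (n_samples, n_neurons) population responses forming one manifold."""
    centered = points - points.mean(axis=0)              # remove manifold center
    cov = centered.T @ centered / len(points)            # neuron covariance matrix
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0, None)  # manifold variance spectrum
    radius = np.sqrt(eigvals.sum())                      # overall manifold extent
    dim = eigvals.sum() ** 2 / (eigvals ** 2).sum()      # participation-ratio dimension
    return radius, dim

rng = np.random.default_rng(1)
n_neurons = 200
# Synthetic manifold: variance concentrated in 3 latent directions plus weak noise.
latent = rng.standard_normal((500, 3)) @ rng.standard_normal((3, n_neurons))
noise = 0.05 * rng.standard_normal((500, n_neurons))
r, d = manifold_geometry(latent + noise)
print(f"radius = {r:.2f}, effective dimension = {d:.1f}")  # dimension close to 3
```

Because nothing is projected down before measuring, the descriptors reflect the geometry of the full population code rather than of a low-dimensional summary.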

Finally, we will discuss our recent efforts to fully extend this multi-level description of neural populations by (1) understanding how task-implementing neural manifolds emerge across brain regions and during learning, (2) investigating how neural tuning properties shape the representation geometry in early sensory areas, and (3) demonstrating the impressive task performance and neural predictivity achieved by optimizing a deep network to maximize the capacity of neural manifolds. By expanding our mathematical toolkit for analyzing representations underlying complex neuronal networks, we hope to contribute to the long-term challenge of understanding the neuronal basis of tasks and behaviors.
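As a loose sketch of direction (3) above (our own proxy construction, not the talk's actual training objective), one could add a regularizer to a network's task loss that encourages each class manifold in feature space to be small and low-dimensional, which the packing picture suggests should raise manifold capacity:

```python
# Hedged proxy (our construction): a penalty that shrinks class manifolds in a
# network's feature space, in the spirit of maximizing manifold capacity.
import numpy as np

def manifold_compactness_penalty(features, labels):
    """features: (batch, d) activations; labels: (batch,) integer class ids.
    Returns the mean over classes of radius^2 * effective dimension, a quantity
    the packing intuition suggests should be small for high-capacity codes."""
    penalty = 0.0
    classes = np.unique(labels)
    for c in classes:
        pts = features[labels == c]
        centered = pts - pts.mean(axis=0)
        eigvals = np.clip(np.linalg.eigvalsh(centered.T @ centered / len(pts)), 0, None)
        radius_sq = eigvals.sum()                                 # squared manifold radius
        dim = radius_sq ** 2 / max((eigvals ** 2).sum(), 1e-12)   # participation ratio
        penalty += radius_sq * dim
    return penalty / len(classes)

# Hypothetical usage inside a training loop:
#   total_loss = task_loss + lam * manifold_compactness_penalty(feats, labels)
```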

This lecture was made possible by the William C. Ferguson fund.