
Institute of Neuroinformatics

Research

Inferring structure in high-dimensional data

Advances in experimental methodologies in neuroscience lead to ever-larger behavioral and neural data sets. Such data is highly structured, but describing this structure quantitatively is typically very challenging. Most often we lack good models of natural behavior or neural computations that could point to the most relevant aspects of the data. One focus of the lab is to develop generally applicable methods that can reveal the structure in large datasets without strong assumptions about the properties of the underlying data. We recently introduced non-parametric approaches based on nearest-neighbor statistics to reveal the topology of dense, high-dimensional data. These approaches make only weak assumptions about the nature of the data (e.g. whether it is clustered) and are thus broadly applicable. Together with more traditional Targeted Dimensionality Reduction techniques, these methods can provide quantitative, unbiased descriptions of the structure of high-dimensional behavioral and neural data.
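The flavor of such nearest-neighbor statistics can be illustrated with a minimal sketch (a toy example, not the published method): for points drawn from two conditions, the fraction of each point's k nearest neighbours that share its label quantifies how separated or intermixed the two groups are in the high-dimensional space, without assuming any particular cluster shape.

```python
import numpy as np

def knn_mixing(X, labels, k=5):
    """Fraction of each point's k nearest neighbours sharing its label.

    Values near 1 indicate well-separated groups; values near the base
    rate of the labels indicate fully intermixed (overlapping) data.
    """
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    # Pairwise squared Euclidean distances between all points.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude each point itself
    nn = np.argsort(d2, axis=1)[:, :k]    # indices of the k nearest neighbours
    same = labels[nn] == labels[:, None]  # does each neighbour share the label?
    return same.mean()

# Toy data: two well-separated 10-dimensional point clouds.
rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0, size=(100, 10))
B = rng.normal(5.0, 1.0, size=(100, 10))
X = np.vstack([A, B])
y = np.array([0] * 100 + [1] * 100)
print(knn_mixing(X, y))   # close to 1.0 for well-separated clouds
```

Because the statistic depends only on local neighborhood composition, it applies equally to clustered, continuous, or manifold-like data.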

Code and Data

Mante et al., Context-dependent computation by recurrent dynamics in prefrontal cortex, Nature, 2013 Link

Kollmorgen, Newsome, and Mante, Spatial and temporal structure of choice representations in prefrontal cortex, bioRxiv, 2019 Link

Kollmorgen, Hahnloser, and Mante, Nearest neighbours reveal fast and slow components of motor learning, Nature, 2020 Link

Behavioral mechanisms of cognition

Decision-making paradigms are used in basic and clinical research to reveal the processes underlying normal and impaired cognition. Mechanistic models of behavior play a key role in interpreting the choices of participants in these tasks. For one, they can reveal how perceptual processes interact with cognitive processes like attention, impulsivity, or biases to form a choice. For another, they provide access to latent variables (like the momentary belief or confidence of the participant) that may be more tightly linked to the underlying neural processes than the observed choices. We combine approaches from Bayesian inference, stochastic differential equation models, and a lot of computational power to evaluate hundreds of models on the behavior of individual participants (humans or macaques). This approach leads to a fine-grained characterization of the latent decision processes in each individual, but also reveals considerable differences across individuals. The resulting quantitative descriptions of behavior form the basis of analyses of concurrently measured neural activity and could provide new insights into the nature of cognitive deficits in psychiatric and neurological disorders.
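A standard member of this model class is the drift-diffusion model, sketched below with illustrative parameters (not the lab's fitted models). Momentary evidence accumulates noisily toward one of two bounds; the accumulated evidence x(t) is exactly the kind of latent "belief" variable that can be linked to neural activity more directly than the final choice.

```python
import numpy as np

def simulate_ddm(drift, bound, n_trials, dt=0.001, noise=1.0, seed=0):
    """Simulate a two-choice drift-diffusion model.

    Latent evidence x accumulates as dx = drift*dt + noise*sqrt(dt)*dW
    until it hits +bound (choice 1) or -bound (choice 0).
    Returns the choice and reaction time on each trial.
    """
    rng = np.random.default_rng(seed)
    choices = np.empty(n_trials, dtype=int)
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices[i] = int(x > 0)   # which bound was reached
        rts[i] = t                # time to reach it
    return choices, rts

choices, rts = simulate_ddm(drift=1.0, bound=1.0, n_trials=200)
print(choices.mean())   # fraction of drift-consistent choices
```

Fitting such a model to a participant's choices and reaction times (e.g. by likelihood maximization over drift, bound, and noise) yields the individual-level characterization described above.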

Neural mechanisms of cognition

Our past work lends support to the hypothesis of “computation-through-dynamics”, the idea that computations in the brain emerge from and are best understood at the level of collective dynamics of large neural populations. We characterize and model neural population dynamics with a variety of tools, including decoding approaches, fits of linear and non-linear dynamical systems, and deep neural networks trained with supervised, unsupervised, and reinforcement-learning methods. Reverse-engineering of the trained networks provides novel hypotheses about the nature of neural computations in the brain. We focus predominantly on understanding computations underlying cognitive abilities like long-term planning, inhibitory control, and attention. These abilities are thought to rely critically on a network of areas in prefrontal cortex. The great majority of these areas exist only in primates, which is why we study macaques. Ultimately our models of computations should be precise enough to predict the neural and behavioral consequences of arbitrary causal perturbations of the population activity, and explain the mechanisms underlying cognitive deficits akin to those observed in psychiatric disorders.
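The simplest of the dynamical-systems tools mentioned above, fitting a linear model x_{t+1} ≈ A x_t to population trajectories, reduces to an ordinary least-squares problem. The sketch below (toy simulated data, not recorded activity) recovers a known rotational dynamics matrix from a noisy trajectory.

```python
import numpy as np

def fit_linear_dynamics(X):
    """Least-squares fit of A in x_{t+1} ≈ A x_t from a trajectory X of shape (T, n)."""
    X0, X1 = X[:-1], X[1:]
    # lstsq solves min ||X0 @ W - X1||; since rows are x_t^T, A = W^T.
    W, *_ = np.linalg.lstsq(X0, X1, rcond=None)
    return W.T

# Ground-truth dynamics: a slow rotation in 2D state space.
theta = 0.1
A_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])

# Simulate a noisy trajectory under these dynamics.
rng = np.random.default_rng(1)
X = np.empty((200, 2))
X[0] = rng.normal(size=2)
for t in range(199):
    X[t + 1] = A_true @ X[t] + 0.01 * rng.normal(size=2)

A_hat = fit_linear_dynamics(X)
print(np.round(A_hat, 2))   # close to A_true
```

The eigenvalues of the fitted matrix then summarize the dominant modes of the dynamics (rotations, decay, or unstable growth), which is one way such fits generate mechanistic hypotheses.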

Mante et al., Context-dependent computation by recurrent dynamics in prefrontal cortex, Nature, 2013 Link

Galgali and Mante, Set in one’s thoughts, Nature Neuroscience, 2018 Link

Aoi, Mante, and Pillow, Prefrontal cortex exhibits multidimensional dynamic encoding during decision making, Nature Neuroscience, 2020 Link

Krause et al., Operative dimensions in unconstrained connectivity of recurrent neural networks, Advances in Neural Information Processing Systems (NeurIPS), 2022 Link

Calangiu, Kollmorgen, Reppas, and Mante, Primate pre-arcuate cortex actively maintains persistent representations of saccades from plans to outcomes, bioRxiv, 2022 Link

Ehret et al., Population-level neural correlates of flexible avoidance learning in medial prefrontal cortex, bioRxiv, 2023 Link

Soldado-Magraner, Mante* and Sahani*, Inferring context-dependent computations through linear approximations of prefrontal cortex dynamics, bioRxiv, 2023 Link

Galgali, Sahani, and Mante, Residual dynamics resolves recurrent contributions to neural computation, Nature Neuroscience, 2023 Link

Structure and dynamics of natural behavior

While our models of neural computation are mostly based on neural activity recorded in laboratory settings, they should ultimately also explain brain function during rich natural behaviors. To establish this link, we aim to record neural activity in the same animals both in the laboratory and in natural settings. Quantifying and understanding natural behaviors, however, is a challenge in its own right. To tackle this challenge, we develop both non-parametric and model-based approaches to infer the structure and dynamics in dense, high-dimensional behavioral data. In one line of research we study how the developing song of a juvenile bird changes over the course of many months of training. Song development is a powerful model of motor learning during a complex, natural behavior, which can conveniently be recorded with just a microphone. In another line of research, we use continuous video recordings to study individual and social behaviors in a group of monkeys living in a zoo-like enclosure. Extracting the relevant behavior from videos is hard, and requires state-of-the-art approaches from computer vision and machine learning.
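A nearest-neighbor approach to song development can be sketched on toy data (a hypothetical illustration loosely inspired by the repertoire-dating idea in the 2020 paper below, not the published method): assign each rendition the median production time of its nearest neighbours in acoustic-feature space, which indicates where in the learning trajectory that rendition belongs.

```python
import numpy as np

def repertoire_time(X, times, k=10):
    """For each rendition (row of X), the median production time of its
    k nearest neighbours in feature space -- a proxy for where in
    learning the rendition 'belongs'."""
    X = np.asarray(X, dtype=float)
    times = np.asarray(times)
    # Pairwise squared distances between renditions.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude each rendition itself
    nn = np.argsort(d2, axis=1)[:, :k]    # k nearest neighbours per rendition
    return np.median(times[nn], axis=1)

# Toy data: a 3-dimensional song feature that drifts steadily over 300 days.
rng = np.random.default_rng(2)
days = np.arange(300)
feats = days[:, None] * 0.05 + rng.normal(0.0, 0.5, size=(300, 3))
rt = repertoire_time(feats, days)   # tracks the true production day
```

On data where the behavior drifts gradually, the inferred repertoire time correlates strongly with actual production time, while deviations between the two can flag fast, non-monotonic changes.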

Kollmorgen, Hahnloser, and Mante, Nearest neighbours reveal fast and slow components of motor learning, Nature, 2020 Link

Marks et al., Deep-learning based identification, pose estimation and end-to-end behaviour classification for interacting primates and mice in complex environments, Nature Machine Intelligence, 2022 Link