Neuroscience Projects
Mean-Field Modeling of Spiking Neural Network Dynamics
One of the projects I worked on during my PhD, and continue to work on, is the derivation and analysis of so-called mean-field models. Mean-field models can be derived from the mathematical equations that govern the dynamics of spiking neural networks, and they capture how average quantities, such as the average firing rate or the average membrane potential, evolve over time in these networks. These average quantities are much easier to record in living brains than single-neuron properties. Mean-field models thus make it possible to relate recordings of neural population averages to the underlying spiking network activity. They are a great tool for bridging multiple scales of brain organization, and they open the door to a wide range of dynamical systems analysis methods. The relationship between mean-field model and spiking neural network dynamics is depicted in the figure below for a network of quadratic integrate-and-fire (QIF) neurons.
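To give a concrete flavor of such a model, the sketch below integrates the well-known exact mean-field equations for a globally coupled QIF network (Montbrió, Pazó & Roxin, 2015). The parameter values are placeholders chosen for illustration, not values from any of the publications listed below.

```python
import numpy as np

def qif_mean_field(eta=-5.0, J=15.0, delta=1.0, I=0.0, T=80.0, dt=1e-3):
    """Forward-Euler integration of the QIF mean-field equations
    (Montbrio, Pazo & Roxin, 2015), with membrane time constant 1:
        dr/dt = delta / pi + 2 * r * v
        dv/dt = v**2 + eta + J * r + I - (pi * r)**2
    r: population-average firing rate, v: population-average potential.
    """
    n = int(T / dt)
    r, v = 0.0, -2.0
    rates = np.empty(n)
    for i in range(n):
        dr = delta / np.pi + 2.0 * r * v
        dv = v * v + eta + J * r + I - (np.pi * r) ** 2
        r, v = r + dt * dr, v + dt * dv
        rates[i] = r
    return rates

rates = qif_mean_field()
print(f"final population rate: {rates[-1]:.3f}")
```

Only two coupled variables are needed here, which is what makes dynamical systems tools (bifurcation analysis, phase-plane analysis) so readily applicable to these models.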
My PhD work on mean-field models of spiking neural dynamics was mostly concerned with the effects that different forms of plasticity have on network dynamics [1,2]. Plasticity here refers to properties of neurons or synapses that change depending on the activity level of the neuron or synapse. An example is spike-frequency adaptation, which reduces the firing-rate response of a neuron to a given input current if the neuron has already been firing at a high rate before the input arrived. In [1], my co-authors and I derived the mean-field equations for a network of spiking neurons with spike-frequency adaptation and analyzed its role in neural synchronization.
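To illustrate how such a mechanism enters a mean-field description, the toy sketch below augments the standard QIF mean-field equations with a simple adaptation current that builds up with the firing rate and feeds back negatively on the membrane potential. This is an illustrative construction with placeholder parameters, not the model derived in [1]; with slower and stronger adaptation, feedback of this kind can destabilize the steady state and produce the oscillations relevant for synchronization.

```python
import numpy as np

def qif_mf_adaptation(eta=-5.0, J=15.0, delta=1.0, alpha=1.0,
                      tau_a=1.0, I=3.0, T=100.0, dt=1e-3):
    """QIF mean-field equations plus an adaptation current `a`
    (illustrative only, not the exact model of [1]):
        dr/dt = delta / pi + 2 * r * v
        dv/dt = v**2 + eta + J * r + I - a - (pi * r)**2
        da/dt = (alpha * r - a) / tau_a
    The adaptation current grows with the firing rate and reduces the
    effective drive, mimicking spike-frequency adaptation.
    """
    n = int(T / dt)
    r, v, a = 0.0, -2.0, 0.0
    out = np.empty(n)
    for i in range(n):
        dr = delta / np.pi + 2.0 * r * v
        dv = v * v + eta + J * r + I - a - (np.pi * r) ** 2
        da = (alpha * r - a) / tau_a
        r, v, a = r + dt * dr, v + dt * dv, a + dt * da
        out[i] = r
    return out

adapted = qif_mf_adaptation()
unadapted = qif_mf_adaptation(alpha=0.0)
print(f"steady rate with / without adaptation: "
      f"{adapted[-1]:.3f} / {unadapted[-1]:.3f}")
```

With these parameters the network settles to a lower steady firing rate when adaptation is switched on, which is the defining signature of spike-frequency adaptation at the population level.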
As part of my first postdoc with Ann Kennedy, we built on this previous work to derive mean-field equations for a new mathematical model of single neurons (the Izhikevich neuron model) with distributed spiking thresholds [3].
[1] Gast et al. (2020) Neural computation.
[2] Gast et al. (2021) Physical Review E.
[3] Gast et al. (2023) Physical Review E.
Neural Synchronization Properties in the Basal Ganglia
The basal ganglia are a set of interconnected neural populations, deep in the brain, that are involved in various important brain functions such as learning, habit formation, and motor control. Damage to or malfunctioning of the basal ganglia has been found in various neurological disorders such as Parkinson's disease, Tourette syndrome, and Huntington's disease. Typically, malfunctioning of the basal ganglia is associated with increased synchrony in the activity of basal ganglia neurons. For example, the decline of motor control in Parkinson's disease appears to be related to the tendency of large populations of basal ganglia neurons to fire in concert at frequencies between 12 and 30 Hz (the beta frequency band), thereby reducing their information-encoding capacity.
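As a concrete illustration of the beta band, the snippet below generates a synthetic population signal containing a 20 Hz rhythm buried in noise and recovers its dominant frequency from the power spectrum. The signal is artificial and merely stands in for a neural recording.

```python
import numpy as np

fs = 1000.0                         # sampling rate in Hz
t = np.arange(0.0, 10.0, 1.0 / fs)  # 10 s of signal
rng = np.random.default_rng(1)
# synthetic population signal: a 20 Hz (beta-band) rhythm plus noise
signal = np.sin(2 * np.pi * 20.0 * t) + 0.5 * rng.standard_normal(t.size)
power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
peak_freq = freqs[np.argmax(power[1:]) + 1]  # skip the DC bin
in_beta = 12.0 <= peak_freq <= 30.0
print(f"dominant frequency: {peak_freq:.1f} Hz (beta band: {in_beta})")
```

The same spectral analysis, applied to recordings from parkinsonian patients, is what reveals the pathologically enhanced beta-band power mentioned above.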
In this project, I use mathematical modeling and computer simulations to examine the conditions under which neurons in the basal ganglia start to fire in synchrony. During my PhD, I built a mathematical model of the external pallidum, a central part of the basal ganglia, and analyzed the neural synchronization properties of the model under normal and dopamine-depleted conditions [4]. Furthermore, using this model, my co-authors and I showed how the complex features of parkinsonian neural synchronization observed in human patients could emerge at the level of the external pallidum [4, 5].
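One common way to quantify how synchronously a set of model neurons fires is the Kuramoto order parameter, shown below purely as a generic illustration ([4] relies on its own analysis methods).

```python
import numpy as np

def kuramoto_order(phases):
    """Kuramoto order parameter R = |<exp(i*phase)>|, with R in [0, 1].
    R near 1: the phases (e.g., of neural oscillators) are aligned and
    the population fires in synchrony; R near 0: asynchronous firing."""
    return float(np.abs(np.mean(np.exp(1j * np.asarray(phases)))))

rng = np.random.default_rng(0)
sync_phases = rng.normal(0.0, 0.1, size=1000)          # tightly clustered
async_phases = rng.uniform(0.0, 2 * np.pi, size=1000)  # spread uniformly
print(kuramoto_order(sync_phases), kuramoto_order(async_phases))
```

Tracking such a synchrony measure while varying model parameters (e.g., the strength of dopamine depletion) is one way to map out the conditions under which a population transitions from asynchronous to synchronous firing.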
[4] Gast et al. (2021) Journal of Neuroscience.
[5] Gong et al. (2021) Brain.
The Role of Neural Heterogeneity
One of the central questions in neuroscience is how brain functions such as stable perception, the control of body movements, and the integration of novel experiences with past memories arise from interacting neurons. Some questions that are closely related to this central question are:
How do these functions depend on the properties of the neurons?
How does structural damage of neural tissue affect these functions?
How is the development of brain networks across the human lifespan related to increasing and declining performance in those functions?
One important property of neural tissue in the brain is that neurons differ dramatically in their structural composition. These structural differences are reflected in the wide range of activity modes that neurons can express. We call this neural heterogeneity. In this project, we studied how the level of neural heterogeneity in a neural network affects both the overall fluctuations of neural activity that the network can express and the functions that can emerge in the network. These functions can be thought of as simpler constituents of the high-level brain functions mentioned above; examples are the formation of a local memory and the ability to respond reliably to an input signal with a precise output.
In [6], we derived mean-field equations for networks of heterogeneous spiking neurons and studied how the level of heterogeneity affects networks with different interacting neuron types. We found that neural heterogeneity has neuron-type-specific effects and acts as an important "knob" for tuning the network into a regime where it performs a specific function, such as memory formation, particularly well.
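The "knob" role of heterogeneity can be sketched with the standard QIF mean-field model (Montbrió et al., 2015), where the width Δ of the Lorentzian distribution of neural excitabilities quantifies heterogeneity: for small Δ the network below is bistable and can retain a memory of its input history, while for large Δ this bistability disappears. The parameters are illustrative and not taken from [6].

```python
import numpy as np

def qif_steady_rate(r0, v0, delta, eta=-5.0, J=15.0, T=100.0, dt=1e-3):
    """Integrate the QIF mean-field equations from the initial condition
    (r0, v0) and return the final firing rate. `delta` is the half-width
    of the Lorentzian distribution of neural excitabilities, i.e., the
    degree of neural heterogeneity in the network."""
    r, v = r0, v0
    for _ in range(int(T / dt)):
        dr = delta / np.pi + 2.0 * r * v
        dv = v * v + eta + J * r - (np.pi * r) ** 2
        r, v = r + dt * dr, v + dt * dv
    return r

for delta in (1.0, 3.0):
    low = qif_steady_rate(0.01, -2.0, delta)  # start from low activity
    high = qif_steady_rate(1.0, 0.0, delta)   # start from high activity
    bistable = abs(high - low) > 0.1
    print(f"delta={delta}: rates {low:.3f} / {high:.3f}, "
          f"bistable: {bistable}")
```

In the bistable (low-Δ) regime the network's final state depends on where it started, which is a minimal form of memory; turning up the heterogeneity knob erases this bistability, so the final state no longer encodes the initial condition.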
[6] Gast et al. (2024) PNAS.