
PhD Defence Manu Kalia | Data, models and transitions in computational neuroscience - Bottom-up and top-down approaches


The PhD defence of Manu Kalia will take place (partly) online and can be followed by a live stream.

Manu Kalia is a PhD student in the research group Mathematics of Imaging & AI. Supervisors are prof.dr. C. Brune from the Faculty of Electrical Engineering, Mathematics and Computer Science (EEMCS) and prof.dr.ir. M.J.A.M. van Putten from the Faculty of Science & Technology (S&T). Co-supervisor is dr. H.G.E. Meijer (EEMCS).

This thesis is concerned with building and analyzing mathematical models in computational neuroscience using bottom-up and top-down approaches. Models are constructed using biophysical principles to understand the pathophysiology of cerebral ischemia at different spatial and temporal scales. Data-driven techniques in conjunction with machine learning are used to build compact parameter-dependent models from high-dimensional data. Finally, model maps are introduced to explain the generic unfolding of a newly observed bifurcation.

In Chapter 3, a comprehensive biophysical model of a glutamatergic synapse is developed to identify key determinants of synaptic failure during energy deprivation. The model is based on fundamental biophysical principles, includes dynamics of the most relevant ions, i.e., Na+, K+, Ca2+, Cl− and glutamate, and is calibrated with experimental data. It confirms the critical role of the Na+/K+-ATPase in maintaining ion gradients, membrane potentials and cell volumes. The simulations demonstrate that the system exhibits two stable states, one physiological and one pathological. During energy deprivation, the physiological state may disappear, forcing a transition to the pathological state, which can be reverted by blocking voltage-gated Na+ and K+ channels. The model predicts that the transition to the pathological state is favoured if the extracellular space fraction is small. A reduction in the extracellular space volume fraction, as observed, e.g., with ageing, will thus promote the brain’s susceptibility to ischemic damage. The work thus provides new insights into the brain’s ability to recover from energy deprivation, with translational relevance for the diagnosis and treatment of ischemic strokes.
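The disappearance of the physiological state described above is a saddle-node-type transition. A minimal conceptual sketch of that mechanism, using a toy cubic rate equation rather than the thesis's synapse model (the parameter `drive` stands in loosely for energy deprivation), is:

```python
import numpy as np

# Toy bistable system: dx/dt = drive + x - x**3. This is NOT the thesis's
# biophysical synapse model; it only illustrates how one of two stable
# states ("physiological" vs "pathological") can vanish as a parameter
# is pushed past a saddle-node bifurcation.

def stable_equilibria(drive):
    """Return the stable equilibria of dx/dt = drive + x - x**3."""
    roots = np.roots([-1.0, 0.0, 1.0, drive])       # -x^3 + x + drive = 0
    real = roots[np.abs(roots.imag) < 1e-9].real
    # An equilibrium x* is stable where f'(x*) = 1 - 3 x*^2 < 0.
    return sorted(x for x in real if 1.0 - 3.0 * x**2 < 0.0)

print(stable_equilibria(0.0))   # two stable states coexist (bistable)
print(stable_equilibria(1.0))   # one stable state has disappeared
```

For `drive = 0` the stable states are x = −1 and x = +1; for `drive = 1` only one survives, mirroring the forced transition to the pathological state.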

In Chapter 4, the relationship between electroencephalogram (EEG) phenomenology and cellular biophysical principles is studied using a model of interacting thalamic and cortical neural masses coupled with energy-dependent synaptic transmission. The model faithfully reproduces the characteristic EEG phenomenology during acute cerebral ischemia and shows that synaptic arrest occurs before cell swelling and irreversible neuronal depolarization. The early synaptic arrest is attributed to ion homeostatic failure due to dysfunctional Na+/K+-ATPase. Moreover, it is also shown that the excitatory input from relay cells to the cortex controls rhythmic behavior. In particular, low relay-interneuron interaction manifests in burst-like EEG behavior immediately prior to synaptic arrest. The model thus reconciles the implications of stroke on a cellular, synaptic and circuit level and provides a basis for exploring multi-scale therapeutic interventions.
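The idea that an energy-dependent synaptic gain controls whether a rhythm survives can be caricatured with a linearization. The sketch below is a hypothetical two-variable loop, not the thesis's neural mass model: `g` scales the excitatory relay-to-cortex coupling, and the sign of the leading eigenvalue's real part decides whether the oscillation is sustained or dies out, illustrating how gain (synaptic) failure silences rhythms before the cells themselves collapse.

```python
import numpy as np

# Linearized toy loop x' = A(g) x (NOT the thesis model): 'g' is an
# energy-dependent synaptic gain on the excitatory coupling. The complex
# eigenvalues of A give a rhythm; its growth or decay flips as g crosses
# a threshold (here g = 1 for decay=1.0, coupling=2.0).

def rhythm_grows(g, decay=1.0, coupling=2.0):
    """True if the linearized loop sustains/amplifies its oscillation."""
    A = np.array([[g * coupling - decay, -coupling],
                  [coupling,             -decay]])
    return bool(np.linalg.eigvals(A).real.max() > 0.0)

print(rhythm_grows(1.2))   # gain above threshold: rhythm persists
print(rhythm_grows(0.8))   # reduced gain: rhythm decays away
```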

In Chapter 5, deep learning autoencoders are introduced to discover coordinate transformations that capture the underlying parametric dependence of a dynamical system in terms of its canonical normal form, allowing for a simple representation of the parametric dependence and bifurcation structure. The autoencoder constrains the latent variable to adhere to a given normal form, thus allowing it to learn the appropriate coordinate transformation. The method is demonstrated on a number of example problems, showing that it can capture a diverse set of normal forms associated with Hopf, pitchfork, transcritical and saddle-node bifurcations. This shows how normal forms can be leveraged as canonical and universal building blocks in deep learning approaches for model discovery and reduced-order modeling.
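The one-dimensional normal forms that serve as these latent building blocks each prescribe how the number of equilibria changes with the parameter μ. A small sketch (assuming the standard textbook normal forms, and omitting the two-dimensional Hopf case) makes that structure concrete; it is the behavior the constrained latent variable is trained to reproduce, not the autoencoder itself:

```python
import numpy as np

# Standard 1-D normal forms (sketch; Hopf is 2-D and omitted):
#   saddle-node:   dz/dt = mu + z**2
#   transcritical: dz/dt = mu*z - z**2
#   pitchfork:     dz/dt = mu*z - z**3
# Counting real equilibria on either side of mu = 0 exposes each
# bifurcation's signature.

NORMAL_FORMS = {
    "saddle-node":   lambda mu: np.roots([1.0, 0.0, mu]),        # z^2 + mu
    "transcritical": lambda mu: np.roots([-1.0, mu, 0.0]),       # -z^2 + mu*z
    "pitchfork":     lambda mu: np.roots([-1.0, 0.0, mu, 0.0]),  # -z^3 + mu*z
}

def n_equilibria(name, mu):
    """Number of distinct real equilibria of the named normal form at mu."""
    roots = NORMAL_FORMS[name](mu)
    return len({round(z.real, 9) for z in roots if abs(z.imag) < 1e-9})

for name in NORMAL_FORMS:
    print(name, n_equilibria(name, -0.5), n_equilibria(name, 0.5))
```

For instance, the pitchfork goes from one equilibrium (μ < 0) to three (μ > 0), while the saddle-node goes from two to none.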

Finally, in Chapter 6, a saddle to saddle-focus homoclinic transition when the stable leading eigenspace is 3-dimensional (called the 3DL-bifurcation) is analyzed. Here a pair of complex eigenvalues and a real eigenvalue exchange their position relative to the imaginary axis, giving rise to a 3-dimensional stable leading eigenspace at the critical parameter values. This transition is different from the standard Belyakov bifurcation, where a double real eigenvalue splits either into a pair of complex-conjugate eigenvalues or two distinct real eigenvalues. In the wild case, sets of codimension 1 and 2 bifurcation curves are obtained, along with points that asymptotically approach the 3DL-bifurcation point and have a structure that differs from that of the standard Belyakov case. An example of this bifurcation is also provided in a perturbed Lorenz-Stenflo 4D ODE model.
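The eigenvalue exchange underlying the 3DL transition can be sketched with a hypothetical block-diagonal linear system (not the thesis's model, nor the Lorenz-Stenflo example): a stable complex pair a ± iω and a stable real eigenvalue b trade places relative to the imaginary axis as b moves past a, and exactly at b = a the leading stable eigenspace becomes 3-dimensional.

```python
import numpy as np

# Sketch of the 3DL eigenvalue configuration: build a 3x3 matrix with a
# stable complex pair a +/- i*w and a stable real eigenvalue b, then
# report the dimension of the leading stable eigenspace (the eigenvalues
# closest to the imaginary axis). At b == a that dimension is 3.

def leading_stable_dim(a, w, b, tol=1e-9):
    """Dimension of the leading stable eigenspace of blockdiag([[a,-w],[w,a]], [b])."""
    M = np.zeros((3, 3))
    M[:2, :2] = [[a, -w], [w, a]]
    M[2, 2] = b
    re = np.sort(np.linalg.eigvals(M).real)[::-1]   # real parts, descending
    return int(np.sum(re > re[0] - tol))            # multiplicity of leading real part

print(leading_stable_dim(-1.0, 2.0, -0.5))  # real eigenvalue leads: dim 1
print(leading_stable_dim(-0.5, 2.0, -1.0))  # complex pair leads: dim 2
print(leading_stable_dim(-1.0, 2.0, -1.0))  # critical 3DL case: dim 3
```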