List of Projects – Autumn 2023
Finite-dimensional representation of neural field equations
Neural fields are continuum models of cortical networks, describing the state of a neuronal population as a scalar field over space. The dynamics of a neural field are governed by an integro-differential equation called the neural field equation [1].
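For concreteness, one standard (Amari-type) form of the equation, consistent with the presentation in [1], reads

    \tau \, \partial_t u(x,t) = -u(x,t) + \int_\Omega w(x,y) \, f(u(y,t)) \, \mathrm{d}y + I(x,t),

where u(x,t) is the field at position x and time t, w the connectivity kernel, f a firing-rate nonlinearity, and I an external input.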
Since the neural field is a vector in a function space, the system is infinite-dimensional. Yet the dynamics can always be approximated, and in some cases expressed exactly, by a finite system of ODEs [2]. This yields a finite-dimensional representation of the system, which relies on projecting the neural field onto the eigenbasis of a linear operator.
How large must the dimension be to obtain a good approximation of the dynamics? Is there an optimal choice of basis for the projection? Can we find a non-trivial example in which a good choice of the space over which the field is defined allows for an exact finite-dimensional representation?
These are the questions you will address during the project, through the study of concrete examples and numerical simulations.
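As a starting point for the numerical side, here is a minimal Galerkin-projection sketch in Python; the ring domain, the Mexican-hat kernel, and the truncation dimension K are illustrative assumptions, not part of the project statement:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Minimal sketch: a neural field on a ring, discretized at n points, then
    # projected onto the K leading eigenvectors of the connectivity operator.
    n, K = 256, 8
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    dx = x[1] - x[0]

    def kernel(d):
        d = np.minimum(d, 2 * np.pi - d)                 # distance on the ring
        return np.exp(-d**2) - 0.5 * np.exp(-d**2 / 4)   # Mexican-hat shape (assumed)

    W = kernel(np.abs(x[:, None] - x[None, :])) * dx     # discretized integral operator
    f = np.tanh                                          # firing-rate nonlinearity (assumed)

    # Orthonormal eigenbasis of the symmetric operator; eigh sorts eigenvalues
    # in ascending order, so the last K columns are the leading modes.
    eigvals, eigvecs = np.linalg.eigh(W)
    B = eigvecs[:, -K:]                                  # projection basis, shape (n, K)

    def rhs(t, a):
        u = B @ a                                        # reconstruct field from K coefficients
        return B.T @ (-u + W @ f(u))                     # Galerkin projection of du/dt

    a0 = B.T @ np.cos(x)                                 # project an initial bump onto the basis
    sol = solve_ivp(rhs, (0.0, 20.0), a0)
    print(sol.y[:, -1])                                  # final coefficients of the K-dim system

Truncating at the K leading eigenmodes reduces the infinite-dimensional field equation to K coupled ODEs for the coefficients a; comparing the reconstructed field B @ a against a full grid simulation for increasing K is one way to probe the first question above.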
The student should have a good background in mathematics or physics (basics of functional analysis and dynamical systems), as well as basic programming skills (Python/Julia).
For further information, please contact: [email protected].
[1] W. Gerstner, W. M. Kistler, R. Naud, and L. Paninski. Neuronal dynamics: From single neurons to networks and models of cognition (chapter 18). Cambridge University Press, 2014.
[2] R. Veltz and O. Faugeras. Local/global analysis of the stationary solutions of some neural field equations. SIAM Journal on Applied Dynamical Systems, 9(3):954–998, 2010.
List of Projects – Spring 2023
How do mice explore a labyrinth – and why? (taken)
What guides humans and animals when they explore new and unfamiliar environments? Recent findings show that they rely strongly on intrinsic motivations such as curiosity, surprise, and novelty, especially when no extrinsic information about food or monetary rewards is available [1]. Novelty-seeking, i.e., the tendency of humans and animals to seek out unfamiliar and novel states, is particularly useful for guiding behaviour in unknown environments [2] and can be modelled with (bioplausible) reinforcement learning models [3]. However, it is still unclear how the brain's internal representation of the explored space influences novelty signals and the resulting exploration behaviour.
In this project, you will implement simplified representations of a labyrinth environment, integrate them into bioplausible models of novelty-seeking in the brain and compare them to the behaviour of mice in an exploration task.
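To give a flavour of such a model, here is a minimal count-based novelty sketch; the labyrinth layout, the novelty definition, and the softmax policy are illustrative assumptions, not the specific models of [3]:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical labyrinth: a central hub (state 0) with four arms, encoded
    # as an adjacency list over discrete states (an illustrative layout).
    maze = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}

    counts = {s: 0 for s in maze}                   # visit counts define novelty
    novelty = lambda s: 1.0 / (1.0 + counts[s])     # one common count-based definition

    state, beta = 0, 3.0                            # inverse temperature (assumed value)
    for t in range(50):
        counts[state] += 1
        nbrs = maze[state]
        # Novelty-seeking policy: softmax preference for less-visited neighbours
        p = np.exp(beta * np.array([novelty(s) for s in nbrs]))
        p /= p.sum()
        state = rng.choice(nbrs, p=p)
    print(counts)   # visits spread more evenly than under a uniform random walk

Swapping in richer state representations of the labyrinth, and comparing the resulting trajectories to mouse data, is the core of the project.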
For further information, please contact: [email protected]
Prerequisites: Coding skills in Python, basics in statistics and math, and an interest in behavioural modelling and spatial navigation.
[1] A. Jaegle, V. Mehrpour, and N. Rust. Visual novelty, curiosity, and intrinsic reward in machine learning and the brain. Current Opinion in Neurobiology, 58:167–174, 2019.
[2] H. A. Xu, A. Modirshanechi, M. P. Lehmann, W. Gerstner, and M. H. Herzog. Novelty is not Surprise: Human exploratory and adaptive behavior in sequential decision-making. PLoS Computational Biology, 17:e1009070, 2021.
[3] R. S. Sutton and A. G. Barto. Reinforcement learning: An introduction. MIT Press, 2018.
A high-dimensional walk across neural loss landscape valleys (taken)
We can portray an optimized network as a single point in a high-dimensional parameter space. We know this point lies at a low loss, but what can we tell about its surroundings? Can we visit other trained models, or low-loss solutions, by taking a lazy stroll along the landscape without ever climbing up the loss? Are there valleys we can traverse?
In this simulation project, you will explore loss-landscape valleys and visit different solutions by exploiting our knowledge of the geometry of the loss function, in particular its symmetries and invariances. You will work on turning the lazy stroll into a pathfinding problem on graphs and on integrating it with gradient-descent optimization.
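For intuition about the symmetries involved, the sketch below checks the simplest one: permuting the hidden units of a two-layer ReLU network (an illustrative minimal case, not the project's target architecture) leaves its function, and hence its loss, unchanged:

    import numpy as np

    rng = np.random.default_rng(0)

    # A two-layer ReLU network: f(x) = W2 @ relu(W1 @ x)
    d_in, d_hidden, d_out = 5, 16, 3
    W1 = rng.normal(size=(d_hidden, d_in))
    W2 = rng.normal(size=(d_out, d_hidden))
    f = lambda x, A, B: B @ np.maximum(A @ x, 0.0)

    # Permuting hidden units (rows of W1, matching columns of W2) yields a
    # different point in parameter space that computes the same function.
    perm = rng.permutation(d_hidden)
    W1p, W2p = W1[perm], W2[:, perm]

    x = rng.normal(size=d_in)
    assert np.allclose(f(x, W1, W2), f(x, W1p, W2p))
    print("Permuted network computes the same function.")

Each such permutation maps one low-loss point to another at exactly the same loss; finding paths between such points without climbing the loss is closely related to the connectivity questions studied in [1, 2].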
Prerequisites: proficiency in Python or Julia; knowledge of PyTorch or TensorFlow is a plus. Curiosity and creativity are necessary to traverse high-dimensional spaces.
For further information, please contact: [email protected]
References:
[1] B. Simsek et al. Geometry of the loss landscape in overparameterized neural networks: Symmetries and invariances. International Conference on Machine Learning (ICML), PMLR, 2021.
[2] S. K. Ainsworth, J. Hayase, and S. Srinivasa. Git re-basin: Merging models modulo permutation symmetries. arXiv preprint arXiv:2209.04836, 2022.
[3] S. P. Singh and M. Jaggi. Model fusion via optimal transport. Advances in Neural Information Processing Systems, 33:22045–22055, 2020.
Up-scaling a biologically plausible learning rule to ImageNet (taken)
CLAPP (Contrastive, Local And Predictive Plasticity) [1] is a biologically plausible self-supervised learning rule for moderately deep ANNs. It has been shown to perform well on CIFAR-10, and the project aims at up-scaling it to ImageNet. This involves tackling a number of machine-learning problems linked to the size and difficulty of the dataset, such as designing a more complex architecture and tuning hyper-parameters, while keeping in mind the biological interpretation of the algorithm.
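To give a flavour of the rule, the sketch below caricatures a single CLAPP-style layer with a layer-local contrastive hinge loss; the layer sizes, the hinge form, and the sampling of positive and negative pairs are simplified assumptions and do not reproduce the details of [1]:

    import torch

    torch.manual_seed(0)

    # Hypothetical single layer: an encoder plus a prediction matrix, both local
    # to the layer (no loss signal propagated from other layers).
    enc = torch.nn.Linear(32, 16)                   # layer encoder (illustrative sizes)
    W_pred = torch.nn.Linear(16, 16, bias=False)    # predicts the next representation

    # Current input, its temporal successor (positive), and an unrelated negative
    x_t, x_next, x_neg = torch.randn(3, 8, 32)

    z_t, z_next, z_neg = enc(x_t), enc(x_next), enc(x_neg)
    score_pos = (W_pred(z_t) * z_next).sum(dim=1)   # dot-product prediction score
    score_neg = (W_pred(z_t) * z_neg).sum(dim=1)

    # Hinge losses: push positive scores above +1 and negative scores below -1
    loss = torch.relu(1 - score_pos).mean() + torch.relu(1 + score_neg).mean()
    loss.backward()                                 # gradients stay local to this layer
    print(loss.item())

Because the loss is computed per layer, learning remains local, which is the property the project must preserve while scaling up.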
This project is offered by Guillaume Bellec and Ariane Delrocq.
We suggest this project for a student with good (Python) programming skills, experience with standard machine learning techniques, and an interest in biologically relevant algorithms.
[1] B. Illing, J. Ventura, G. Bellec, and W. Gerstner. Local plasticity rules can learn deep representations using self-supervised contrastive predictions. arXiv preprint arXiv:2010.08262, 2021.