List of Projects – Spring 2024
Reverse engineering deep networks to reverse engineer neural circuits
We have very limited understanding of how neurons interact to drive even the simplest of behaviours [1]. To achieve a mechanistic understanding of any neural circuit, we first need a complete description of its activity and structure. Neuroscience is flourishing with techniques to measure the activity of thousands of neurons simultaneously [2], but retrieving the connectivity between neurons remains a daunting task for experimentalists.
Can we reverse-engineer the connectivity of a circuit from measurements of its activity? We cast this problem in a deep learning paradigm called teacher-student, where we train student networks to imitate the measured activity of a to-be-recovered teacher network. We make use of recent deep learning theory on the geometry of loss landscapes [3] to identify the connectivity that can generate a specific set of activities. Many questions remain open: (i) what is the best stimulation protocol to improve the data efficiency of connectivity recovery? (ii) how can we adapt our method to different neural architectures? (iii) what makes a network easy or hard to reconstruct? (iv) what if we have only limited knowledge of the activation function?
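For illustration, a minimal teacher-student sketch in PyTorch could look as follows. All sizes, the optimizer, and the training schedule are illustrative assumptions; this shows the basic paradigm, not the Expand-and-Cluster method itself.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical "teacher": a fixed network whose responses stand in for
# measured neural activity (all layer sizes are assumptions).
teacher = nn.Sequential(nn.Linear(10, 20), nn.Tanh(), nn.Linear(20, 5))
for p in teacher.parameters():
    p.requires_grad_(False)          # the teacher is fixed ("measured")

# Overparameterized student: more hidden units than the teacher.
student = nn.Sequential(nn.Linear(10, 60), nn.Tanh(), nn.Linear(60, 5))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

stimuli = torch.randn(1024, 10)      # stimulation patterns (assumed i.i.d.)
activity = teacher(stimuli)          # the "recorded" activity to imitate

with torch.no_grad():
    init_loss = ((student(stimuli) - activity) ** 2).mean().item()

for step in range(2000):             # train the student to imitate the teacher
    opt.zero_grad()
    loss = ((student(stimuli) - activity) ** 2).mean()
    loss.backward()
    opt.step()

final_loss = loss.item()
```

The open questions above then map onto this sketch directly: the choice of `stimuli` is the stimulation design, the student architecture encodes assumptions about the circuit, and the remaining question is when the fitted student weights identify the teacher's connectivity.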
In this project, you will work on the Expand-and-Cluster algorithm [4] to advance one of the open questions listed above, or one of your own. Ideal candidates are familiar with deep learning concepts and frameworks (e.g. PyTorch or JAX), willing to work independently, and proactive. Interested students should send their application, including a CV and grades, to [email protected].
[1] R. Tampa. “Why is the human brain so difficult to understand? We asked 4 neuroscientists.” Allen Institute blog post.
[2] Urai et al. “Large-scale neural recordings call for new insights to link brain and behavior.” Nature Neuroscience.
[3] Şimşek et al. “Geometry of the loss landscape in overparameterized neural networks: Symmetries and invariances.” ICML 2021.
[4] Martinelli et al. “Expand-and-Cluster: exact parameter recovery of neural networks.” arXiv 2023.
Modeling the impact of stimulus similarities on novelty perception using deep learning for latent representations
Novelty is an intrinsic motivational signal that guides the behavior of humans, animals and artificial agents in the face of unfamiliar stimuli and environments. But how can a (biological or artificial) agent determine whether a stimulus is novel? In the lab, we recently showed how algorithmic models of novelty detection in the brain [1,2] can be extended to continuous environments and stimulus spaces with similarity structures (unpublished results). However, our current model relies on the existence of a sufficiently low-dimensional stimulus representation. In machine learning, on the other hand, novelty is typically computed by neural networks trained end-to-end to estimate stimulus novelty [3,4], which makes it hard to understand how the structure of the stimulus space influences the novelty computation.
In this project, we will take a hybrid approach to model how novelty can be computed in naturalistic stimulus spaces, and combine deep learning with algorithmic models of novelty computation. We will use our model to investigate how stimulus similarities in the original stimulus space and the latent representation space shape novelty computation, and compare the novelty signals predicted by our model to state-of-the-art algorithmic and machine-learning models of novelty computation.
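To make the count-based side of such a hybrid concrete, here is a minimal sketch of novelty as inverse visit counts over a discretized latent space. The random-projection “encoder”, the bin width, and the 1/sqrt(n+1) novelty formula are all illustrative assumptions, not the lab’s model; in the project, a learned deep encoder would replace the random projection.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed random projection stands in for a learned deep encoder mapping
# high-dimensional stimuli to a low-dimensional latent space (assumption).
D, d = 100, 2
encoder = rng.normal(size=(D, d)) / np.sqrt(D)

counts = {}   # visit counts over discretized latent bins

def novelty(stimulus, bin_width=0.5):
    """Count-based novelty: rarely visited latent bins are more novel."""
    z = stimulus @ encoder
    key = tuple(np.floor(z / bin_width).astype(int))
    n = counts.get(key, 0)
    counts[key] = n + 1
    return 1.0 / np.sqrt(n + 1)   # decays as the bin becomes familiar

# Repeated presentations of the same stimulus become less novel; stimuli
# that land in the same latent bin share familiarity.
s = rng.normal(size=D)
first, second = novelty(s), novelty(s)
```

The design choice under study is exactly the one this sketch makes crudely: how similarities in the original space and in the latent space determine which stimuli share familiarity.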
Good programming skills in Python are a strict requirement; prior experience with deep learning and PyTorch is helpful but not required. Interested students should send their application, including a CV and grades in relevant classes, to [email protected].
[1] Xu et al. “Novelty is not surprise: Human exploratory and adaptive behavior in sequential decision-making.” PLoS Comp. Biol. (2021)
[2] Modirshanechi et al. “Surprise and novelty in the brain.” Curr. Op. Neurobiol. (2023)
[3] Bellemare et al. “Unifying count-based exploration and intrinsic motivation.” NeurIPS (2016)
[4] Ostrovski et al. “Count-based exploration with neural density models.” PMLR (2017)
A video game experiment on mental time-travel and one-shot learning
Mental time-travel is the process of vividly remembering past personal experiences or imagining oneself in a future situation. Whereas humans can be asked to describe what they experience during mental time-travel, indirect approaches are needed to investigate whether mental time-travel exists in other species. We study a class of behavioural tasks that humans can presumably solve using mental time-travel, and that feature a behavioural readout other than verbal descriptions of subjective experiences. For example, a subject may perform an action to prepare for an event in the near future by recalling a related but unique prior episode in which they were unprepared.
In this project, we will design simple video game implementations of this behavioural paradigm, to be studied in rodent and human subjects. The project consists of four tasks. First, design and test implementations of the task using the Unity game engine. Second, run pilot human behavioural experiments with lab members and friends. Third, run the experiment online (e.g. on prolific.co) or with EPFL students. Fourth, analyse the behavioural data.
Good programming skills are a strict requirement; familiarity with Unity is an asset but not required. Interested students should send their application, including CV and grades, to both [email protected] and [email protected].
This project can also be done as a Master’s project.
List of Projects – Autumn 2023
Finite-dimensional representation of neural field equations
Neural fields are continuum models of cortical networks, describing the state of a neuronal population as a scalar field over space. The dynamics of a neural field are governed by an integro-differential equation called the neural field equation [1].
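For concreteness, a common form of the neural field equation (see [1]; the notation and this specific form are one standard choice, not necessarily the exact model used in the project) is

```latex
\tau \,\frac{\partial u(x,t)}{\partial t}
  = -u(x,t) + \int_{\Omega} w(x,y)\, F\!\big(u(y,t)\big)\, \mathrm{d}y + I(x,t),
```

where u(x,t) is the field, w the connectivity kernel, F the population gain function, I an external input, and τ the time constant.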
Since the neural field is a vector in a function space, the system is infinite-dimensional. Yet the dynamics can always be approximated, and in some cases expressed exactly, by a finite system of ODEs [2]. This yields a finite-dimensional representation of the system, which relies on projecting the neural field onto the eigenbasis of a linear operator.
How large must the dimension be to get a good approximation of the dynamics? Is there a best choice of the basis for the projection? Can we find a non-trivial example for which a good choice of the space over which the field is defined allows for an exact finite-dimensional representation?
These are the questions you will address during the project, through the study of concrete examples and numerical simulations.
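As a warm-up for the numerical part, a minimal sketch of this idea (the ring geometry, kernel, gain function, and all parameters are illustrative assumptions) discretizes a neural field and compares it to its Galerkin projection onto the leading eigenvectors of the connectivity matrix:

```python
import numpy as np

# Minimal sketch: a neural field on a ring, tau * du/dt = -u + W f(u),
# discretized on N grid points. Kernel, gain, and sizes are assumptions.
N = 200
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dx = 2.0 * np.pi / N

def kernel(d):
    """Mexican-hat connectivity: local excitation, broader inhibition."""
    return 1.5 * np.exp(-d**2 / 0.5) - 0.5 * np.exp(-d**2 / 2.0)

# Periodic distance between grid points, so W is a symmetric circulant matrix.
diff = np.angle(np.exp(1j * (x[:, None] - x[None, :])))
W = kernel(diff) * dx                 # dx makes W approximate the integral

f = np.tanh                           # gain function (assumed)
tau, dt = 1.0, 0.01

# Keep the k leading eigenvectors of W as a finite-dimensional basis.
eigvals, eigvecs = np.linalg.eigh(W)
k = 10
B = eigvecs[:, -k:]                   # columns: top-k orthonormal modes

u = 0.1 * np.cos(x)                   # full-field initial condition
a = B.T @ u                           # reduced (projected) coordinates

for _ in range(500):                  # Euler integration of both systems
    u = u + dt / tau * (-u + W @ f(u))
    a = a + dt / tau * (-a + B.T @ (W @ f(B @ a)))

# Relative error between the full field and its k-dimensional reconstruction.
err = np.linalg.norm(u - B @ a) / np.linalg.norm(u)
```

How fast err shrinks as k grows, and whether a different basis (or a different space for the field) does better, is exactly the kind of question the project addresses.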
The student should have a good background in mathematics or physics (basics of functional analysis / dynamical systems), as well as basic programming skills (Python/Julia).
For further information, please contact: [email protected].
[1] W. Gerstner, W. M. Kistler, R. Naud, and L. Paninski. Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition (chapter 18). Cambridge University Press, 2014.
[2] R. Veltz and O. Faugeras. “Local/global analysis of the stationary solutions of some neural field equations.” SIAM Journal on Applied Dynamical Systems, 9(3):954–998, 2010.