Spring 2026
Predicting Lifespan & Drug Effects from Whole-Life C. elegans Behavior
Aging leaves a signature on behavior long before mortality events occur [1]. In this project, L4-synchronized worms are tracked continuously to death under Drug and Control conditions, yielding dense, whole-life time series of posture and locomotion. Because every animal begins at the same developmental stage, early post-L4 behavior provides a shared baseline from which divergence in movement patterns, bout structure, and rest–activity cycles can foreshadow individual outcomes. In our dataset, Drug prolonged lifespan relative to Control; we will quantify this effect size and its uncertainty while mapping the behavioral changes that accompany it. The dataset therefore links fine-grained behavioral dynamics to two targets: survival time and treatment status. Our pipeline will standardize per-worm trajectories, engineer multiscale features (e.g., speed, reversals, dwelling, posture/periodicity), pair them with survival and treatment labels, and emphasize interpretability: which metrics matter, how early they become informative, and how they shift with drug exposure.
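As a rough illustration of the survival-modelling step in this pipeline, the sketch below pairs toy early-life features with survival labels and fits a Cox proportional-hazards model using the lifelines package. The feature names, window lengths, and the data themselves are placeholders, not our actual pipeline or measurements.

```python
# Minimal sketch: toy per-worm early-life features -> Cox proportional-hazards fit.
# All feature definitions and data below are placeholders for illustration only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def early_features(speed, window_hours=24.0, dt_hours=0.01):
    """Summarise the first `window_hours` of a per-worm speed trace (toy features only)."""
    n = int(window_hours / dt_hours)
    s = np.asarray(speed[:n])
    return {
        "mean_speed": s.mean(),
        "dwell_frac": float(np.mean(s < 0.05)),   # fraction of time nearly stationary
        "speed_cv": s.std() / (s.mean() + 1e-9),  # variability of locomotion
    }

# One row per worm: engineered early-life features + lifespan + event indicator.
rng = np.random.default_rng(0)
rows = []
for drug in rng.integers(0, 2, size=60):
    base = rng.uniform(0.5, 1.5)                  # per-worm locomotor baseline (placeholder)
    feats = early_features(base * rng.random(3000))
    rows.append({**feats, "drug": int(drug),
                 "lifespan_days": 14 + 4 * drug + rng.normal(0, 2), "observed": 1})
df = pd.DataFrame(rows)

cph = CoxPHFitter()
cph.fit(df, duration_col="lifespan_days", event_col="observed")
cph.print_summary()   # hazard ratios: which early metrics are associated with lifespan
```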
Primary goals
– Predict lifespan from early behavior with time-to-event models.
– Classify drug exposure from behavior.
– Identify and visualize key behavioral metrics that drive survival differences and drug effects.
Strong Python; ML experience (neural nets or survival analysis) preferred; time-series experience a plus. Most important: be genuinely interested. My email is: [email protected].
[1] Stern, S. et al. Neuromodulatory Control of Long-Term Behavioral Patterns and Individuality across Development. Cell (2018).
Autumn 2025
Training recurrent population networks to perform computational tasks (TAKEN)
Recurrent neural networks (RNNs) have proven to be a useful tool for modelling and understanding how neural computations, such as decision making [1] and motor commands [2], are possible through recurrent interactions in a population of neurons. Machine learning has played a vital role in this by making it possible to train RNNs to perform a task of interest, after which the learned solution can provide insights into possible network implementations [1, 2]. However, populations of neurons in the brain behave very differently from conventional rate units on which existing methods are based. In this project you will investigate whether or not it is possible to train networks of interacting populations to perform computational tasks.
The population we will be working with is based on the spiking SRM0 neuron with escape noise, whose population activity can be calculated through a self-consistent integral equation [3]. Responses of this population are characterised by oscillatory activity in response to a step input, arising from the refractory effects of individual neurons, which are not captured by conventional rate units. For the network, this means that the current activity of a population is affected both by the incoming signal from other populations and by the history of its own activity. How can we train a network of interacting populations to perform classic behavioural tasks? Would it be enough to train a standard RNN and transfer the learned parameters to the population model, or is it necessary to have a dedicated learning algorithm?
The project aims to answer these questions by comparing both approaches, for example by applying backpropagation to the self-consistent equation and by using existing RNN training tools. As such, candidates should feel comfortable with optimisation procedures and have experience with training neural networks in code.
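As a starting point, the "standard RNN" baseline could look roughly like the sketch below: a small PyTorch RNN trained on a toy step-input task, whose learned weights could later be transferred to the population model. The task, architecture, and hyperparameters are placeholders, not a prescription.

```python
# Sketch of a standard-RNN training baseline (toy task and hyperparameters are placeholders).
import torch
import torch.nn as nn

class RateRNN(nn.Module):
    def __init__(self, n_in=1, n_hidden=64, n_out=1):
        super().__init__()
        self.rnn = nn.RNN(n_in, n_hidden, nonlinearity="tanh", batch_first=True)
        self.readout = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h, _ = self.rnn(x)          # (batch, time, hidden)
        return self.readout(h)      # (batch, time, out)

def make_batch(batch=32, T=100):
    """Toy task: report the sign of a noisy constant step input."""
    step = torch.randn(batch, 1, 1).sign()
    x = step.repeat(1, T, 1) + 0.1 * torch.randn(batch, T, 1)
    return x, step.repeat(1, T, 1)

model = RateRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(500):
    x, target = make_batch()
    loss = ((model(x) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
# The trained weights (e.g. model.rnn.weight_hh_l0) are candidates for transfer
# into the interacting-population model described above.
```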
Interested candidates should send their CV alongside relevant grades to [email protected].
[1] Mante, V., Sussillo, D., Shenoy, K. V., & Newsome, W. T. (2013). Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature, 503(7474), 78-84.
[2] Sussillo, D., Churchland, M. M., Kaufman, M. T., & Shenoy, K. V. (2015). A neural network that finds a naturalistic solution for the production of muscle activity. Nature neuroscience, 18(7), 1025-1033.
[3] Gerstner, W. (2000). Population dynamics of spiking neurons: fast transients, asynchronous states, and locking. Neural computation, 12(1), 43-89.
Identifying monosynaptic connections using a deep learning method (TAKEN)
Recently, we developed an approach to measure synaptic connectivity in vivo, training a deep convolutional network to reliably identify monosynaptic connections from the spike-time cross-correlograms of millions of single-unit pairs.
The benchmark results on both the experimental recordings and synthetic datasets indicate that the method is very promising.
You can find more information about this method in [1].
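To give a sense of the kind of model involved, here is a minimal sketch of a 1D convolutional classifier over cross-correlogram vectors. It is not the network from [1]; the architecture, CCG length, and labels are placeholders.

```python
# Minimal sketch of a 1D convolutional classifier over cross-correlograms
# (architecture and CCG length are placeholders, not the network from [1]).
import torch
import torch.nn as nn

class CCGClassifier(nn.Module):
    def __init__(self, ccg_len=201):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 2),        # connected vs. not connected
        )

    def forward(self, ccg):          # ccg: (batch, 1, ccg_len)
        return self.net(ccg)

# Usage: logits = CCGClassifier()(torch.randn(8, 1, 201))
```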
The goal of this project is simple: to further improve the performance of this algorithm. (I already have a few ideas for you to start with.) If you are interested, please reach out to me. My email is: [email protected].
[1] Fink AJ, Muscinelli SP, Wang S, Hogan MI, English DF, Axel R, Litwin-Kumar A, Schoonover CE. Experience-dependent reorganization of inhibitory neuron synaptic connectivity. bioRxiv. 2025 Jan 16.
Characterizing Spike Time Cross-Correlogram Patterns (TAKEN)
The spike-time cross-correlogram (CCG) is a simple yet powerful statistic derived from spike recordings, offering insight into the underlying structure of neural circuits. Its computational efficiency and interpretability make it particularly attractive to study.
In this project, we aim to perform large-scale clustering of millions of CCGs to uncover common and potentially novel patterns. The primary goals are to
– Identify characteristic shapes of CCGs,
– Analyze their variability across different brain regions, and
– Investigate the mechanisms that may generate these patterns.
You will be responsible for developing or scaling a clustering algorithm to handle the vast quantity of CCG data. While we have a solid understanding of which algorithms are likely to perform well, the main challenge is achieving scalability. There is also room for innovation and improvement in the algorithmic approach. Most importantly, the ultimate goal is to apply the developed method to CCGs computed from various brain regions and understand the differences.
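One scalable option, sketched below, is mini-batch k-means over normalized CCG vectors; whether this is the right algorithm (and the right number of clusters) is exactly what the project would explore, and the data and shapes are placeholders.

```python
# Sketch of one scalable clustering option (mini-batch k-means on normalized CCGs);
# the algorithm choice and k are part of the project, and the data are placeholders.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

ccgs = np.random.rand(100_000, 201)    # placeholder: one 201-bin CCG per row
ccgs = (ccgs - ccgs.mean(axis=1, keepdims=True)) / (ccgs.std(axis=1, keepdims=True) + 1e-9)

km = MiniBatchKMeans(n_clusters=20, batch_size=10_000, random_state=0)
labels = km.fit_predict(ccgs)          # cluster assignment per CCG
centroids = km.cluster_centers_        # candidate "characteristic shapes"
```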
The project is perfect for a student who has a strong interest in neuroscience and applied machine learning.
The minimum requirements are to write clear code and to be genuinely interested in this project.
Interested candidates should send their application to [email protected]. If you have any questions, feel free to contact me.
[1] Fink AJ, Muscinelli SP, Wang S, Hogan MI, English DF, Axel R, Litwin-Kumar A, Schoonover CE. Experience-dependent reorganization of inhibitory neuron synaptic connectivity. bioRxiv. 2025 Jan 16.
Optimising rate networks for efficient simulation of complex spiking neurons (TAKEN)
Networks of rate neurons are commonly used in neuroscience for their simplicity and efficiency. They form a useful abstraction of the processing of a real neuron, mapping a linear sum of inputs to a nonlinear firing rate. However, this abstraction comes at the cost of not including the dynamics of individual neurons at small timescales, such as reset and refractoriness. In this project you will investigate whether or not these properties are important for an accurate description of the dynamics of a population of neurons.
Specifically, you will compare a large network of SRM0 spiking neurons to an equally large network of rate neurons with identical connectivity. The network is constructed to have low-dimensional population dynamics that are reliable across trials while retaining individual neural variability [1], a setup which has previously been shown to converge one-to-one to a rate network when working with Poisson neurons [2]. Can simple rate neurons approximate the more complex SRM0 dynamics, which are known, for example, to display oscillations in their step response [3]?
The project aims to answer the main question by fitting the parameters of rate neurons, such as the gain curve, to simulations of the analytical solution. This may be approached from both a machine-learning and a theoretical perspective.
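As a rough sketch of what "fitting the gain curve" could look like, the example below fits a parametric sigmoidal gain to placeholder input/rate pairs (in the project, these would come from the analytical or simulated population activity of the spiking model) and then uses it in a simple rate-network integration. All functional forms and parameters are illustrative assumptions.

```python
# Sketch: fit a parametric gain curve to (placeholder) input/rate data, then use it
# in a rate-network simulation. Functional forms and parameters are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def gain(x, r_max, x0, beta):
    """Sigmoidal gain curve: stationary firing rate as a function of input."""
    return r_max / (1.0 + np.exp(-beta * (x - x0)))

# Placeholder data; in the project these pairs come from the SRM0 population model.
inputs = np.linspace(-2, 4, 50)
rates = gain(inputs, 40.0, 1.0, 2.0) + np.random.normal(0, 0.5, inputs.shape)
params, _ = curve_fit(gain, inputs, rates, p0=[30.0, 0.0, 1.0])

def simulate_rate_network(W, ext, phi_params, T=200, dt=0.1, tau=1.0):
    """Euler integration of tau * dr/dt = -r + phi(W r + ext)."""
    r = np.zeros(W.shape[0])
    trace = []
    for _ in range(T):
        r += dt / tau * (-r + gain(W @ r + ext, *phi_params))
        trace.append(r.copy())
    return np.array(trace)
```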
Interested candidates should contact me at [email protected].
[1] DePasquale, B., Sussillo, D., Abbott, L. F., & Churchland, M. M. (2023). The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks. Neuron, 111(5), 631-649.
[2] Schmutz, V., Brea, J., & Gerstner, W. (2025). Emergent rate-based dynamics in duplicate-free populations of spiking neurons. Physical Review Letters, 134(1), 018401.
[3] Gerstner, W. (2000). Population dynamics of spiking neurons: fast transients, asynchronous states, and locking. Neural computation, 12(1), 43-89.
Spring 2025
How do biological constraints shape the solution space of neural networks? (TAKEN)
Why does deep learning work? Despite its overwhelming empirical success, this question remains largely unanswered from a theoretical perspective. A key piece of the puzzle is that networks generalize well despite being deep in the overparameterized regime [1,2]. This means there exists a “solution space”: many different combinations of weights that perfectly solve the training task. However, solving the training task does not necessarily imply generalization to novel inputs. How does training lead to weights that are not only part of the solution space but also generalize well?
One explanation is that the solution space is shaped by implicit or explicit regularizers. However, traditional explicit regularizers, such as penalizing large weights, do not significantly impact generalization [1]. This exploratory project seeks inspiration from biological neural networks, which operate under strong constraints – for example, their connectivity is limited by available space, and their activity is restricted by available energy [3]. How do such biological constraints shape the solution space? And how do they affect the resulting generalization performance?
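As a concrete example of the kind of constraint we have in mind, the sketch below adds an activity penalty (a crude proxy for metabolic cost) to a standard training loss. The dataset, architecture, and penalty weight are placeholders; the project would explore which constraints matter and how they shape the solution space.

```python
# Sketch: training with a biologically inspired "energy" penalty on hidden activity
# (dataset, architecture, and penalty weight are placeholders).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1e-3                                   # strength of the activity constraint

x = torch.randn(512, 20)                     # placeholder training data
y = (x[:, 0] > 0).long()

for _ in range(200):
    hidden = torch.relu(model[0](x))         # hidden-layer activity
    logits = model[2](hidden)
    task_loss = nn.functional.cross_entropy(logits, y)
    energy = hidden.abs().mean()             # proxy for metabolic cost of activity
    loss = task_loss + lam * energy
    opt.zero_grad(); loss.backward(); opt.step()
```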
These questions can be investigated through training neural networks, but they are also amenable to theoretical analysis (based, for example, on the methods developed in [4]), depending on the candidate’s preferences. Strong programming skills in Python are required for this project. Interested candidates should send their application, including a CV and grades in relevant courses, to [email protected].
[1] Zhang, C., Bengio, S., Hardt, M., Recht, B., & Vinyals, O. (2022). Understanding deep learning requires rethinking generalization. In International Conference on Learning Representations.
[2] Belkin, M., Hsu, D., Ma, S., & Mandal, S. (2019). Reconciling modern machine-learning practice and the classical bias–variance trade-off. Proceedings of the National Academy of Sciences, 116(32), 15849-15854.
[3] Laughlin, S.B., & Sejnowski, T.J. (2003). Communication in neuronal networks. Science, 301(5641), 1870-1874.
[4] van Meegen, A., & Sompolinsky, H. (2024). Coding schemes in neural networks learning classification tasks. arXiv preprint arXiv:2406.16689.
Identifying monosynaptic connections using a deep learning method (also possible as a Master's Thesis project) – TAKEN
Recently, we developed an approach to measure synaptic connectivity in vivo, training a deep convolutional network to reliably identify monosynaptic connections from the spike-time cross-correlograms of millions of single-unit pairs.
The benchmark results on both the experimental recordings and synthetic datasets indicate that the method is very promising.
In this project, you will learn
- methods for synaptic connectivity inference, and
- how to simulate a large network of spiking neurons for benchmarking (a toy version is sketched below),
while trying to improve the performance of the current method further. (I already have a few ideas for you to start with.)
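The toy sketch below simulates two integrate-and-fire neurons with a known connection and computes their spike-time cross-correlogram; the real benchmarks involve much larger networks, and all parameters here are placeholders.

```python
# Toy sketch: simulate two coupled LIF neurons and compute their cross-correlogram
# (all parameters and scales are placeholders; real benchmarks use large networks).
import numpy as np

def simulate_lif_pair(steps=100_000, dt=0.1, tau=20.0, v_th=1.0, w=0.6, noise=0.15):
    """Two leaky integrate-and-fire neurons; neuron 0 excites neuron 1 with weight w."""
    rng = np.random.default_rng(0)
    v = np.zeros(2)
    spikes = [[], []]
    for t in range(steps):
        drive = np.zeros(2)
        if spikes[0] and spikes[0][-1] == t - 1:      # one-step synaptic delay
            drive[1] = w
        v += dt / tau * (-v) + drive + noise * np.sqrt(dt) * rng.standard_normal(2)
        for i in range(2):
            if v[i] >= v_th:
                spikes[i].append(t)
                v[i] = 0.0
    return [np.array(s) * dt for s in spikes]          # spike times in ms

def ccg(t_pre, t_post, window=50.0, bin_ms=1.0):
    """Histogram of post-minus-pre spike-time differences within +/- window ms."""
    bins = np.arange(-window, window + bin_ms, bin_ms)
    counts = np.zeros(len(bins) - 1)
    for t in t_pre:
        d = t_post[(t_post >= t - window) & (t_post <= t + window)] - t
        counts += np.histogram(d, bins)[0]
    return bins[:-1] + bin_ms / 2, counts

pre, post = simulate_lif_pair()
lags, counts = ccg(pre, post)    # a short-latency peak hints at the connection
```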
The project is perfect for a student with a decent Python programming background and who wants to study applied machine learning problems in neuroscience.
The minimum requirements are to write clear code and to be genuinely interested in this project.
Interested candidates should send their application to [email protected]. If you have any questions, feel free to contact me.
Inferring hidden variables from observed activity in recurrent network models – TAKEN
Recent neuroscience research suggests that complex brain activity during cognitive tasks is often driven by simple, low-dimensional patterns [1].
On the modelling side, continuous-time Recurrent Neural Networks (RNNs) are dynamical systems of interacting “neurons” often used to model brain processes.
Studies of RNNs have helped us understand how computational abilities emerge at the collective level. In a key class of models, the activity of a large network is reduced to a few "latent variables" that enable specific computations [2, 3].
In these models, however, observed neural activity, or "firing rates," is given by a non-linear transformation of these latent variables, making it impossible to decode them without knowing the transformation itself. Inferring this transformation from observed activity could help us reveal the hidden latent variables and understand network structure.
This project aims to develop a method to infer this transformation by solving a dimensionality-reduction problem. If successful on recurrent network models, the method could be applied to real neural data.
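To make the setting concrete, the sketch below simulates a rank-one rate network whose observed rates are a nonlinear transform of the network inputs, and applies PCA to the rates; the gap between such a linear readout and the true latent variable is the kind of mismatch the inference method should address. All parameters are placeholders and this is not the model of [2, 3] in full.

```python
# Sketch of the setting: a rank-one rate network whose observed rates are a nonlinear
# transform of the underlying inputs (all parameters are illustrative placeholders).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
N, T, dt, tau = 500, 2000, 0.1, 1.0
m, n = rng.standard_normal(N), rng.standard_normal(N)
W = np.outer(m, n) / N                       # rank-one connectivity

x = np.zeros(N)                              # neuronal inputs (currents)
rates = np.zeros((T, N))
for t in range(T):
    r = np.tanh(x)                           # observed firing rates = phi(inputs)
    x += dt / tau * (-x + W @ r) + 0.1 * np.sqrt(dt) * rng.standard_normal(N)
    rates[t] = r

# A simple latent readout vs. the first principal component of the observed rates:
kappa = rates @ n / N                        # latent variable driving the recurrence
pc1 = PCA(n_components=1).fit_transform(rates)[:, 0]
```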
A good background in maths, physics, or computer science is recommended.
If you are interested, you can send your application, including a CV and grades in relevant classes, to [email protected]. Feel free to contact me if you have questions. The project could also be shaped as a Master's Thesis.
[1] Khona, M., Fiete, I.R. Attractor and integrator networks in the brain. Nat Rev Neurosci 23 (2022). https://doi.org/10.1038/s41583-022-00642-0
[2] Mastrogiuseppe, F., Ostojic, S. Linking Connectivity, Dynamics, and Computations in Low-Rank Recurrent Neural Networks. Neuron 99 (2018). https://doi.org/10.1016/j.neuron.2018.07.003
[3] Pezon, L., Schmutz, V., Gerstner, W. Linking Neural Manifolds to Circuit Structure in Recurrent Networks. bioRxiv (2024). https://doi.org/10.1101/2024.02.28.582565
List of Projects – Autumn 2024
Reverse engineering deep networks to reverse engineer neural circuits – (TAKEN)
We have very limited understanding of how neurons interact to drive even the simplest of behaviours [1]. To achieve a mechanistic understanding of any neural circuit we first need a complete description of its activity and structure. Neuroscience is flourishing with techniques to measure the activity of thousands of neurons simultaneously [2], but retrieving the connectivity between neurons remains a daunting task for experimentalists.
Can we reverse-engineer the connectivity of a circuit from measurements of its activity? We cast this problem into a deep learning paradigm called teacher-student, where we train student networks to imitate the measured activity of a to-be-recovered teacher network. We make use of novel deep learning theory on the geometry of loss functions [3] to identify the connectivity that can generate a specific set of activities [4]. Many questions are left open: (i) what is the best stimulation to improve data efficiency of connectivity recovery? (ii) how can we adapt our method to different neural architectures? (iii) what makes a network easy or hard to reconstruct? (iv) what if we have limited knowledge about the activation function?
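A bare-bones version of the teacher-student setup (not the Expand-and-Cluster algorithm itself) could look like the sketch below; network sizes, the input distribution, and the training loop are placeholders.

```python
# Bare-bones teacher-student setup (not Expand-and-Cluster itself; sizes are placeholders).
import torch
import torch.nn as nn

def mlp(n_in, n_hidden, n_out):
    return nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU(), nn.Linear(n_hidden, n_out))

teacher = mlp(10, 5, 1)                      # "circuit" whose connectivity we want to recover
for p in teacher.parameters():
    p.requires_grad_(False)

student = mlp(10, 50, 1)                     # overparameterized student ("expand")
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for _ in range(2000):
    x = torch.randn(256, 10)                 # the "stimulation"; its choice is question (i)
    loss = ((student(x) - teacher(x)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
# Expand-and-Cluster would then cluster the student's hidden units to recover the
# teacher's weights up to the permutation and scaling symmetries of the loss landscape [3, 4].
```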
In this project, you will work on the Expand-and-Cluster algorithm [4] to advance one of the above-listed open questions, or one of your own. Ideal candidates must have familiarity with deep learning concepts and frameworks (e.g. PyTorch or JAX), be willing to work independently and be proactive. Interested students should send their application, including CV and grades to [email protected].
[1] R. Tampa. "Why is the human brain so difficult to understand? We asked 4 neuroscientists." Allen Institute Blog Post.
[2] Urai et al. "Large-scale neural recordings call for new insights to link brain and behavior." Nature Neuroscience.
[3] Şimşek et al. "Geometry of the loss landscape in overparameterized neural networks: Symmetries and invariances." ICML 2021.
[4] Martinelli et al. "Expand-and-Cluster: exact parameter recovery of neural networks". arXiv 2023.
How can we model human learning of environmental statistics? (TAKEN)
Through exploration, people learn both which specific actions are good in certain situations and the general statistics of the environment. However, in many situations, it is unclear how best to model human learning of these parameters, since full Bayesian models are frequently intractable.
We have recently developed meta-reinforcement learning neural network models that capture human performance well in observe-or-bet tasks [1, 2]. These neural network models, however, are best suited for modelling the performance of people who have already gained familiarity with the task and learned the underlying statistics. In this semester project, we will instead model how humans learn the task itself, by placing the task in the Learning Expected Value of Control framework and using gradient-based optimization [3, 4]. Using this formalization, we will identify the priors and expected structure of the task that guide human learning across episodes of task engagement.
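To illustrate the flavour of gradient-based fitting of learning parameters (not the Learning Expected Value of Control model itself), the heavily simplified sketch below fits the parameters of a trial-by-trial delta-rule learner to placeholder choice data by minimizing the negative log-likelihood with autograd.

```python
# Heavily simplified sketch: gradient-based fitting of learning parameters to choices.
# The learner, parameters, and data are placeholders, not the LVOC model used in the project.
import torch

choices = torch.randint(0, 2, (200,))        # placeholder: observed binary choices
rewards = torch.rand(200)                    # placeholder: observed feedback

alpha = torch.tensor(0.0, requires_grad=True)   # unconstrained learning-rate parameter
beta = torch.tensor(1.0, requires_grad=True)    # choice (softmax) temperature
opt = torch.optim.Adam([alpha, beta], lr=0.05)

for _ in range(300):
    q = torch.zeros(2)
    nll = torch.tensor(0.0)
    lr = torch.sigmoid(alpha)                # keep the learning rate in (0, 1)
    for c, r in zip(choices, rewards):
        logp = torch.log_softmax(beta * q, dim=0)[c]
        nll = nll - logp                     # negative log-likelihood of the choice
        q = q.clone()
        q[c] = q[c] + lr * (r - q[c])        # delta-rule update of the chosen value
    opt.zero_grad(); nll.backward(); opt.step()
```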
The project will involve a mixture of normative modelling as well as data analysis of behavioral data from human experiments.
Interested students should send their application, including CV and grades to [email protected].
References:
[1] Sandbrink, K., & Summerfield, C. (2023). Learning the value of control with Deep RL. 2023 Conference on Cognitive Computational Neuroscience. https://doi.org/10.32470/CCN.2023.1640-0
[2] Sandbrink, K., & Summerfield, C. (2024). Modelling cognitive flexibility with deep neural networks. Current Opinion in Behavioral Sciences, 57, 101361. https://doi.org/10.1016/j.cobeha.2024.101361
[3] Masís, J. A., Musslick, S., & Cohen, J. (2021). The Value of Learning and Cognitive Control Allocation. Proceedings of the Annual Meeting of the Cognitive Science Society. https://escholarship.org/uc/item/7w0223v0
[4] Carrasco-Davis, R., Masís, J., & Saxe, A. M. (2023). Meta-Learning Strategies through Value Maximization in Neural Networks (arXiv:2310.19919). arXiv. https://doi.org/10.48550/arXiv.2310.19919
List of Projects – Spring 2024
Reverse engineering deep networks to reverse engineer neural circuits – TAKEN
We have very limited understanding of how neurons interact to drive even the simplest of behaviours [1]. To achieve a mechanistic understanding of any neural circuit we first need a complete description of its activity and structure. Neuroscience is flourishing with techniques to measure the activity of thousands of neurons simultaneously [2], but retrieving the connectivity between neurons remains a daunting task for experimentalists.
Can we reverse-engineer the connectivity of a circuit from measurements of its activity? We cast this problem into a deep learning paradigm called teacher-student, where we train student networks to imitate the measured activity of a to-be-recovered teacher network. We make use of novel deep learning theory on the geometry of loss functions [3] to identify the connectivity that can generate a specific set of activities [4]. Many questions are left open: (i) what is the best stimulation to improve data efficiency of connectivity recovery? (ii) how can we adapt our method to different neural architectures? (iii) what makes a network easy or hard to reconstruct? (iv) what if we have limited knowledge about the activation function?
In this project, you will work on the Expand-and-Cluster algorithm [4] to advance one of the above-listed open questions, or one of your own. Ideal candidates must have familiarity with deep learning concepts and frameworks (e.g. PyTorch or JAX), be willing to work independently and be proactive. Interested students should send their application, including CV and grades to [email protected].
[1] R. Tampa. "Why is the human brain so difficult to understand? We asked 4 neuroscientists." Allen Institute Blog Post.
[2] Urai et al. "Large-scale neural recordings call for new insights to link brain and behavior." Nature Neuroscience.
[3] Şimşek et al. "Geometry of the loss landscape in overparameterized neural networks: Symmetries and invariances." ICML 2021.
[4] Martinelli et al. "Expand-and-Cluster: exact parameter recovery of neural networks". arXiv 2023.
Modeling the impact of stimulus similarities on novelty perception using deep learning for latent representations (TAKEN)
Novelty is an intrinsic motivational signal that guides the behavior of humans, animals and artificial agents in the face of unfamiliar stimuli and environments. But how can a (biological or artificial) agent determine whether a stimulus is novel or not? In the lab, we recently showed how algorithmic models of novelty detection in the brain [1,2] can be extended to continuous environments and stimulus spaces with similarity structures (unpublished results). However, our current model relies on the existence of a sufficiently low-dimensional stimulus representation. In machine learning, on the other hand, novelty is computed using neural networks that are trained end-to-end to estimate the stimulus novelty [3,4], which makes it hard to understand how the structure of the stimulus space influences novelty computation.
In this project, we will take a hybrid approach to model how novelty can be computed in naturalistic stimulus spaces, and combine deep learning with algorithmic models of novelty computation. We will use our model to investigate how stimulus similarities in the original stimulus space and the latent representation space shape novelty computation, and compare the novelty signals predicted by our model to state-of-the-art algorithmic and machine-learning models of novelty computation.
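One way the hybrid idea could look in code is sketched below: learn a latent representation with an autoencoder, then compute a novelty score from the similarity of a new stimulus to previously seen latents. The sizes, the particular novelty definition, and the data are placeholders, not our model.

```python
# Sketch of the hybrid idea: autoencoder latent space + similarity-based novelty score
# (sizes, the novelty definition, and the data are placeholders, not our model).
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 8))
decoder = nn.Sequential(nn.Linear(8, 128), nn.ReLU(), nn.Linear(128, 784))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

data = torch.rand(1024, 784)                  # placeholder "familiar" stimuli
for _ in range(200):
    z = encoder(data)
    loss = ((decoder(z) - data) ** 2).mean()  # reconstruction objective
    opt.zero_grad(); loss.backward(); opt.step()

def novelty(x, memory_z, bandwidth=1.0):
    """Kernel-based novelty: low similarity to stored latents means high novelty."""
    z = encoder(x)
    sim = torch.exp(-torch.cdist(z, memory_z) ** 2 / (2 * bandwidth ** 2)).mean(dim=1)
    return 1.0 / (1.0 + sim)                  # decreases with familiarity

memory = encoder(data).detach()
scores = novelty(torch.rand(5, 784), memory)
```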
Good programming skills in Python are a strict requirement; prior experience with deep learning and PyTorch is helpful but not required. Interested students should send their application, including CV and grades in relevant classes, to [email protected].
References
[1] Xu et al., Novelty is not surprise: Human exploratory and adaptive behavior in sequential decision-making. PLoS Comp. Biol. (2021)
[2] Modirshanechi et al., Surprise and novelty in the brain. Curr. Op. Neurobiol. (2023)
[3] Bellemare et al., Unifying count-based exploration and intrinsic motivation. NeurIPS (2016)
[4] Ostrovski et al., Count-based exploration with neural density models. PMLR (2017)
A video game experiment on mental time-travel and one-shot learning (TAKEN)
Mental time-travel is the process of vividly remembering past personal experiences or imagining oneself in a future situation. Whereas humans can be asked to describe what they experience during mental time-travel, indirect approaches are needed to investigate whether mental time-travel exists in other species. We study a class of behavioural tasks that humans can presumably solve using mental time-travel, and that feature a behavioral readout other than verbal descriptions of subjective experiences. For example, a subject may perform an action to prepare for an event in the near future, by recalling a related but unique prior episode where they were unprepared.
In this project, we will design simple video game implementations of this behavioral paradigm, to study in rodent and human subjects. The project consists of four tasks. First, design and test implementations of the task using the Unity game engine. Second, run pilot human behavioural experiments with lab members and friends. Third, run the experiment online (e.g. on prolific.co) or with EPFL students. Fourth, analyse the behavioural data.
Good programming skills are a strict requirement; familiarity with Unity is an asset but not required. Interested students should send their application, including CV and grades, to both [email protected] and [email protected].
This project can also be done as a Master’s project.