Semester Projects

List of Projects – Autumn 2024

 

Reverse engineering deep networks to reverse engineer neural circuits

We have a very limited understanding of how neurons interact to drive even the simplest of behaviours [1]. To achieve a mechanistic understanding of any neural circuit, we first need a complete description of its activity and structure. Neuroscience is flourishing with techniques to measure the activity of thousands of neurons simultaneously [2], but retrieving the connectivity between neurons remains a daunting task for experimentalists.

Can we reverse-engineer the connectivity of a circuit from measurements of its activity? We cast this problem into a deep learning paradigm called teacher-student, where we train student networks to imitate the measured activity of a to-be-recovered teacher network. We make use of novel deep learning theory on the geometry of loss functions [3] to identify the connectivity that can generate a specific set of activities [4]. Many questions are left open: (i) what is the best stimulation to improve data efficiency of connectivity recovery? (ii) how can we adapt our method to different neural architectures? (iii) what makes a network easy or hard to reconstruct? (iv) what if we have limited knowledge about the activation function?
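
To make the paradigm concrete, a minimal sketch of teacher-student imitation in PyTorch follows; the architecture sizes, random Gaussian stimuli, and training details are placeholder assumptions for illustration, not the Expand-and-Cluster procedure itself.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Frozen "teacher" stands in for the circuit whose connectivity we want to recover.
teacher = nn.Sequential(nn.Linear(10, 5), nn.Tanh(), nn.Linear(5, 1))
for p in teacher.parameters():
    p.requires_grad_(False)

# Overparameterized "student" is trained to imitate the teacher's activity.
student = nn.Sequential(nn.Linear(10, 20), nn.Tanh(), nn.Linear(20, 1))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.randn(128, 10)   # random stimuli; choosing better ones is open question (i)
    loss = nn.functional.mse_loss(student(x), teacher(x))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Expand-and-Cluster [4] then inspects the trained students' weights to
# identify the teacher's connectivity, up to network symmetries.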

In this project, you will work on the Expand-and-Cluster algorithm [4] to advance one of the above-listed open questions, or one of your own. Ideal candidates must be familiar with deep learning concepts and frameworks (e.g. PyTorch or JAX), be willing to work independently, and be proactive. Interested students should send their application, including CV and grades, to [email protected].

 

[1] R. Tampa. “Why is the human brain so difficult to understand? We asked 4 neuroscientists.” Allen Institute Blog Post.

[2] Urai et al. “Large-scale neural recordings call for new insights to link brain and behavior.” Nature Neuroscience.

[3] Şimşek et al. “Geometry of the loss landscape in overparameterized neural networks: Symmetries and invariances.” ICML 2021.

[4] Martinelli et al. “Expand-and-Cluster: exact parameter recovery of neural networks.” arXiv 2023.

 


How can we model human learning of environmental statistics? (TAKEN)

 

Through exploration, people learn both which specific actions are good in particular situations and the general statistics of the environment. In many settings, however, it is unclear how best to model human learning of these statistics, since full Bayesian models are frequently intractable.

 

We have recently developed meta-reinforcement learning neural network models that capture human performance well in observe-or-bet tasks [1, 2]. These neural network models, however, are best suited to modelling the performance of people who have already gained familiarity with the task and learned its underlying statistics. In this semester project, we will instead use gradient-based optimization to model how humans learn the task itself, placing this learning within the Learning Expected Value of Control framework [3, 4]. Using this formalization, we will identify the priors and the expected task structure that guide human learning across episodes of task engagement.
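
As a concrete (and deliberately simplified) illustration of what gradient-based fitting of learning parameters can look like, the sketch below fits a learning rate and a prior to choice data from a hypothetical two-armed task; the task, the value-update rule, and all names are assumptions for illustration, not the actual observe-or-bet paradigm or the full Learning Expected Value of Control model.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
human_choices = torch.randint(0, 2, (100,))   # placeholder for real choice data
rewards = torch.rand(100, 2)                  # placeholder task outcomes

log_lr = torch.zeros(1, requires_grad=True)   # learning rate, fitted in log-space
prior = torch.zeros(2, requires_grad=True)    # fitted prior value of each arm

opt = torch.optim.Adam([log_lr, prior], lr=0.05)
for epoch in range(100):
    q = prior                                 # values start at the fitted prior
    nll = torch.zeros(())
    for t, c in enumerate(human_choices):
        # likelihood of the human choice under a softmax policy over values
        nll = nll - F.log_softmax(q, dim=0)[c]
        # differentiable value update on the chosen arm
        onehot = F.one_hot(c, 2).float()
        q = q + log_lr.exp() * (rewards[t] - q) * onehot
    opt.zero_grad()
    nll.backward()
    opt.step()

print(f"fitted learning rate: {log_lr.exp().item():.3f}, prior: {prior.data}")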

 

The project will involve a mixture of normative modelling and analysis of behavioural data from human experiments.

Interested students should send their application, including CV and grades, to [email protected].

 

References:

 

[1] Sandbrink, K., & Summerfield, C. (2023). Learning the value of control with Deep RL. 2023 Conference on Cognitive Computational Neuroscience. https://doi.org/10.32470/CCN.2023.1640-0

 

[2] Sandbrink, K., & Summerfield, C. (2024). Modelling cognitive flexibility with deep neural networks. Current Opinion in Behavioral Sciences, 57, 101361. https://doi.org/10.1016/j.cobeha.2024.101361

 

[3] Masís, J. A., Musslick, S., & Cohen, J. (2021). The Value of Learning and Cognitive Control Allocation. Proceedings of the Annual Meeting of the Cognitive Science Society. https://escholarship.org/uc/item/7w0223v0

 

[4] Carrasco-Davis, R., Masís, J., & Saxe, A. M. (2023). Meta-Learning Strategies through Value Maximization in Neural Networks (arXiv:2310.19919). arXiv. https://doi.org/10.48550/arXiv.2310.19919

List of Projects – Spring 2024

 

Reverse engineering deep networks to reverse engineer neural circuits – TAKEN

We have a very limited understanding of how neurons interact to drive even the simplest of behaviours [1]. To achieve a mechanistic understanding of any neural circuit, we first need a complete description of its activity and structure. Neuroscience is flourishing with techniques to measure the activity of thousands of neurons simultaneously [2], but retrieving the connectivity between neurons remains a daunting task for experimentalists.

Can we reverse-engineer the connectivity of a circuit from measurements of its activity? We cast this problem into a deep learning paradigm called teacher-student, where we train student networks to imitate the measured activity of a to-be-recovered teacher network. We make use of novel deep learning theory on the geometry of loss functions [3] to identify the connectivity that can generate a specific set of activities [4]. Many questions are left open: (i) what is the best stimulation to improve data efficiency of connectivity recovery? (ii) how can we adapt our method to different neural architectures? (iii) what makes a network easy or hard to reconstruct? (iv) what if we have limited knowledge about the activation function?

In this project, you will work on the Expand-and-Cluster algorithm [4] to advance one of the above-listed open questions, or one of your own. Ideal candidates must be familiar with deep learning concepts and frameworks (e.g. PyTorch or JAX), be willing to work independently, and be proactive. Interested students should send their application, including CV and grades, to [email protected].

 

[1] R. Tampa. “Why is the human brain so difficult to understand? We asked 4 neuroscientists.” Allen Institute Blog Post.

[2] Urai et al. “Large-scale neural recordings call for new insights to link brain and behavior.” Nature Neuroscience.

[3] Şimşek et al. “Geometry of the loss landscape in overparameterized neural networks: Symmetries and invariances.” ICML 2021.

[4] Martinelli et al. “Expand-and-Cluster: exact parameter recovery of neural networks.” arXiv 2023.


Modeling the impact of stimulus similarities on novelty perception using deep learning for latent representations (TAKEN)

Novelty is an intrinsic motivational signal that guides the behavior of humans, animals and artificial agents in the face of unfamiliar stimuli and environments. But how can a (biological or artificial) agent determine whether a stimulus is novel or not? In the lab, we recently showed how algorithmic models of novelty detection in the brain [1,2] can be extended to continuous environments and stimulus spaces with similarity structures (unpublished results). However, our current model relies on the existence of a sufficiently low-dimensional stimulus representation. In machine learning, on the other hand, novelty is computed by neural networks trained end-to-end to estimate stimulus novelty [3,4], which makes it hard to understand how the structure of the stimulus space influences the novelty computation.

In this project, we will take a hybrid approach to modelling how novelty can be computed in naturalistic stimulus spaces, combining deep learning with algorithmic models of novelty computation. We will use our model to investigate how stimulus similarities, both in the original stimulus space and in the latent representation space, shape novelty computation, and we will compare the novelty signals predicted by our model to those of state-of-the-art algorithmic and machine-learning models of novelty computation.
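
One hypothetical way to instantiate such a hybrid is sketched below: an autoencoder supplies the latent representation, and a kernel-density pseudo-count supplies the algorithmic novelty rule. The architecture, bandwidth, and novelty definition are illustrative assumptions, not the lab's unpublished model.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Deep net supplies a low-dimensional latent representation...
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 2))
decoder = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 784))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

stimuli = torch.rand(1000, 784)              # placeholder stimulus set
for step in range(200):                      # train the autoencoder on reconstruction
    z = encoder(stimuli)
    loss = nn.functional.mse_loss(decoder(z), stimuli)
    opt.zero_grad()
    loss.backward()
    opt.step()

# ...and an algorithmic (density-based) rule computes novelty on top of it.
def novelty(x, memory, bandwidth=0.5):
    # Kernel-density pseudo-count in latent space: low density = high novelty.
    with torch.no_grad():
        d = torch.cdist(encoder(x), encoder(memory))   # distances to remembered stimuli
        density = torch.exp(-(d / bandwidth) ** 2).mean(dim=1)
    return 1.0 / (density + 1e-6)            # one of many possible novelty definitions

print(novelty(torch.rand(5, 784), stimuli))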

Good programming skills in Python are a strict requirement; prior experience with deep learning and PyTorch is helpful but not required. Interested students should send their application, including CV and grades in relevant classes, to [email protected].

 

References

[1] Xu et al., Novelty is not surprise: Human exploratory and adaptive behavior in sequential decision-making. PLoS Comp. Biol. (2021)

[2] Modirshanechi et al., Surprise and novelty in the brain. Curr. Op. Neurobiol. (2023)  

[3] Bellemare et al., Unifying count-based exploration and intrinsic motivation. NeurIPS (2016)

[4] Ostrovski et al., Count-based exploration with neural density models. PMLR (2017)


A video game experiment on mental time-travel and one-shot learning (TAKEN)

Mental time-travel is the process of vividly remembering past personal experiences or imagining oneself in a future situation. Whereas humans can be asked to describe what they experience during mental time-travel, indirect approaches are needed to investigate whether mental time-travel exists in other species. We study a class of behavioural tasks that humans can presumably solve using mental time-travel, and that feature a behavioural readout other than verbal descriptions of subjective experiences. For example, a subject may perform an action to prepare for an event in the near future, by recalling a related but unique prior episode in which they were unprepared.

In this project, we will design simple video game implementations of this behavioural paradigm, for study in rodent and human subjects. The project consists of four tasks. First, design and test implementations of the task using the Unity game engine. Second, run pilot human behavioural experiments with lab members and friends. Third, run the experiment online (e.g. on prolific.co) or with EPFL students. Fourth, analyse the behavioural data.

Good programming skills are a strict requirement; familiarity with Unity is an asset but not required. Interested students should send their application, including CV and grades, to both [email protected] and [email protected].

This project can also be done as a Master’s project.