Student projects

We currently offer the following master's projects:

Self-supervised learning for 3D live-cell microscopy

State-of-the-art object detection and segmentation methods for microscopy images rely on supervised machine learning [1], which requires laborious manual annotation of training data (“labels”) [2].

Our group is generally interested in developing label-efficient methods for analysing microscopy images via self-supervised representation learning.
We have recently developed a pre-training method based on time arrow prediction (TAP) [3], which captures inherently time-asymmetric biological processes such as cell divisions or cell death without any human supervision; here is a video of what this looks like:

[Video: time arrow prediction on a live-cell microscopy recording]
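To give a concrete idea of the pretext task, below is a minimal PyTorch sketch of TAP in the spirit of [3]: two crops taken a few frames apart are encoded densely, and a small head predicts whether the pair is shown in its original or reversed temporal order. The names (TAPHead, tap_step) and the tiny classification head are illustrative and not the actual tarrow implementation.

```python
import torch
import torch.nn as nn

class TAPHead(nn.Module):
    """Toy time arrow prediction (TAP) model: given two crops taken a few
    frames apart, predict whether they appear in the original or the
    reversed temporal order."""

    def __init__(self, encoder: nn.Module, feat_channels: int):
        super().__init__()
        self.encoder = encoder  # any dense 2D feature extractor
        self.classifier = nn.Sequential(
            nn.Conv2d(2 * feat_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, 2),  # two classes: forward vs. backward in time
        )

    def forward(self, crop_a, crop_b):
        feats = torch.cat([self.encoder(crop_a), self.encoder(crop_b)], dim=1)
        return self.classifier(feats)


def tap_step(model, crops, optimizer):
    """One self-supervised training step.
    crops: (B, 2, C, H, W) pairs of crops from consecutive time points."""
    flip = torch.randint(0, 2, (crops.shape[0],))  # 0 = original order, 1 = reversed
    mask = flip.bool()[:, None, None, None]
    crop_a = torch.where(mask, crops[:, 1], crops[:, 0])
    crop_b = torch.where(mask, crops[:, 0], crops[:, 1])
    loss = nn.functional.cross_entropy(model(crop_a, crop_b), flip)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```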

We propose the following project ideas (master's research project or master's thesis in computer science/data science):

1. Time Arrow Prediction (TAP) for 3D+time datasets

We have developed TAP for 2D+time datasets (code at https://github.com/weigertlab/tarrow). We would like to investigate how this method can be extended to large 3D+time microscopy videos, such as recordings of developing C. elegans [4] or zebrafish [5] embryos, where time-asymmetric events are very sparse.
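As a rough, illustrative sketch of what the 3D extension could involve: sample pairs of subvolumes from a (t, z, y, x) recording and replace the 2D encoder with a 3D one. The function name, crop sizes and the in-memory sampling below are assumptions for illustration; a real 3D+time pipeline would read lazily from zarr/HDF5, and none of this is part of the current tarrow code.

```python
import numpy as np
import torch
import torch.nn as nn

def sample_3d_crop_pairs(video, n_pairs, crop=(16, 64, 64), dt=1, rng=None):
    """Sample pairs of subvolumes (z, y, x) from a 4D recording (t, z, y, x).

    For real 3D+time datasets the recording would typically be read lazily
    (e.g. from zarr/HDF5) instead of being held in memory as here."""
    rng = np.random.default_rng() if rng is None else rng
    T, Z, Y, X = video.shape
    pairs = np.empty((n_pairs, 2, *crop), dtype=np.float32)
    for i in range(n_pairs):
        t = rng.integers(0, T - dt)
        z = rng.integers(0, Z - crop[0] + 1)
        y = rng.integers(0, Y - crop[1] + 1)
        x = rng.integers(0, X - crop[2] + 1)
        region = (slice(z, z + crop[0]), slice(y, y + crop[1]), slice(x, x + crop[2]))
        pairs[i, 0] = video[(t,) + region]
        pairs[i, 1] = video[(t + dt,) + region]
    return torch.from_numpy(pairs)

# The 2D TAP encoder would be swapped for a 3D counterpart; anisotropic
# kernels can help when z is sampled much more coarsely than x/y.
encoder_3d = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=(1, 3, 3), padding=(0, 1, 1)),
    nn.ReLU(),
    nn.Conv3d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
)
```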

2. Time Arrow Prediction (TAP) embeddings for cell tracking

Cell tracking is a specific form of the well-studied multiple object tracking (MOT) problem [6]. One of the bottlenecks of current cell tracking algorithms is the reliable identification of cell divisions in new datasets. We would like to investigate how the representations learned by TAP can be used to improve the generalization of current cell tracking algorithms such as [7]. We are also contributing to a neat new framework for linking objects based on integer linear programs that could be of interest here [8].
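One possible (hedged) way TAP representations could enter a tracker: pool the dense TAP features around each detection and turn cosine similarities between consecutive frames into link costs, which an ILP-based linker such as [8] could then use as edge costs. The function name tap_link_costs, the centroid-window pooling and the assumption that the encoder returns features at input resolution are illustrative choices, not an existing API.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def tap_link_costs(encoder, frame_t, frame_t1, dets_t, dets_t1, window=7):
    """Cost matrix for candidate links between detections in two consecutive
    frames, based on TAP feature similarity (lower cost = more likely the
    same cell). Detections are (row, col) centroids; the encoder is assumed
    to return dense features at the input resolution."""

    def embed(frame, dets):
        feats = encoder(frame[None, None])[0]  # (C, H, W) dense features
        r = window // 2
        pooled = [
            feats[:, max(0, y - r): y + r + 1, max(0, x - r): x + r + 1].mean(dim=(1, 2))
            for y, x in dets
        ]
        return torch.stack(pooled)  # (N, C): one embedding per detection

    e0 = F.normalize(embed(frame_t, dets_t), dim=1)
    e1 = F.normalize(embed(frame_t1, dets_t1), dim=1)
    return 1.0 - e0 @ e1.T  # cosine distance as link cost
```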

3. Advanced explainability methods for Time Arrow Prediction

One powerful aspect of our time arrow prediction method is the ability to qualitatively discover unknown sparse events, features, or patterns in novel datasets. We are interested in further developing interpretability methods for time arrow prediction and the resulting representations, for example explainable machine learning methods [9] that go beyond our current implementation of Grad-CAM [10].
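For reference, vanilla Grad-CAM [10] can be implemented with forward/backward hooks as sketched below, here phrased for a two-input time arrow classifier like the one sketched above; the project would explore explainability methods beyond this baseline. The helper grad_cam is illustrative and not our exact implementation.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, crop_a, crop_b, class_idx=1):
    """Vanilla Grad-CAM [10] for a two-input time arrow classifier:
    localize the regions whose features drive the prediction of the
    time-direction class `class_idx`."""
    store = {}

    def fwd_hook(module, args, output):
        store["acts"] = output

    def bwd_hook(module, grad_input, grad_output):
        store["grads"] = grad_output[0]

    h_fwd = target_layer.register_forward_hook(fwd_hook)
    h_bwd = target_layer.register_full_backward_hook(bwd_hook)
    try:
        logits = model(crop_a, crop_b)
        model.zero_grad()
        logits[:, class_idx].sum().backward()
    finally:
        h_fwd.remove()
        h_bwd.remove()

    acts, grads = store["acts"], store["grads"]     # both (B, C, H, W)
    weights = grads.mean(dim=(2, 3), keepdim=True)  # per-channel importance
    cam = F.relu((weights * acts).sum(dim=1))       # (B, H, W) attribution map
    return cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)
```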


Don’t hesitate to reach out to Ben Gallusser for more details.


Required skills:
– Advanced Machine Learning, theory + practice.
– Affinity for pretty microscopy images ;).
– Efficient and well-structured scientific computing in Python (or Julia).
– Comfortable with a Deep Learning framework, ideally PyTorch.
– Image Processing and Computer Vision.


References:

[1] Weigert et al., Star-convex polyhedra for 3D object detection and segmentation in microscopy (WACV 2020)

[2] Greenwald et al., Whole-cell segmentation of tissue images with human-level performance using large-scale data annotation and deep learning (Nature Biotechnology 2022)

[3] Gallusser et al., Self-supervised dense representation learning for live-cell microscopy with time arrow prediction (MICCAI 2023)

[4] http://celltrackingchallenge.net/3d-datasets

[5] https://zebrahub.ds.czbiohub.org/imaging

[6] Luo et al., Multiple object tracking: A literature review (Artificial Intelligence, 2021)

[7] Hirsch et al., Tracking by weakly-supervised learning and graph optimization for whole-embryo C. elegans lineages (MICCAI 2022)

[8] https://funkelab.github.io/motile

[9] Nazir et al., Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks (Computers in Biology and Medicine, 2023)

[10] Selvaraju et al., Grad-CAM: Visual explanations from deep networks via gradient-based localization (ICCV 2017)