Swiss Data Science Center

Projects – Spring 2021
It may be possible to convert a thesis project into a semester project, or to extend a semester project into a thesis project. If any of the present or past projects interests you, please feel free to contact us. We always look forward to meeting motivated and talented students who want to work on exciting projects.
Laboratory:
Swiss Data Science Center
Type:
Master Project
Description:
Variational autoencoders [1,2] are unsupervised deep learning models that learn low-dimensional latent representations of the input data. Previous work has shown that these low-dimensional representations capture the most relevant features in the data, which can be used directly for physical understanding, but also as input to other machine learning algorithms such as clustering, forecasting or extreme-event detection. Here, we will focus mostly on understanding the latent representations of physical systems, such as the Lorenz attractor or data representing climate systems. The goal will be to disentangle the representations so that, ideally, each latent feature captures one driver of the dynamics.
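To make the setup concrete, below is a minimal sketch of a beta-weighted variational autoencoder (one common way to encourage disentanglement) trained on windows of a simulated Lorenz-63 trajectory. It is only an illustration of the technique, not the project's prescribed pipeline: the window length, network architecture, latent dimension, KL weight and training schedule are all placeholder choices.

import numpy as np
import torch
import torch.nn as nn
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, b=8.0 / 3.0):
    # Classic Lorenz-63 system with standard parameter values.
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - b * z]

# Simulate one long trajectory and cut it into fixed-length windows.
sol = solve_ivp(lorenz, (0, 200), [1.0, 1.0, 1.0],
                t_eval=np.arange(0, 200, 0.01))
traj = sol.y.T.astype(np.float32)            # shape (T, 3)
traj = (traj - traj.mean(0)) / traj.std(0)   # normalize each coordinate
win = 50
windows = np.stack([traj[i:i + win].ravel()
                    for i in range(0, len(traj) - win, win)])
data = torch.from_numpy(windows)             # shape (N, win * 3)

class VAE(nn.Module):
    def __init__(self, d_in, d_latent=4, d_hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, 2 * d_latent))
        self.dec = nn.Sequential(nn.Linear(d_latent, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_in))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        # Reparameterization trick: sample z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

model = VAE(d_in=win * 3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(data), batch_size=64, shuffle=True)
beta_kl = 4.0   # KL weight > 1 pushes towards more disentangled latents

for epoch in range(50):
    for (x,) in loader:
        recon, mu, logvar = model(x)
        rec_loss = ((recon - x) ** 2).sum(-1).mean()
        kl = (-0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(-1)).mean()
        loss = rec_loss + beta_kl * kl
        opt.zero_grad()
        loss.backward()
        opt.step()

After training, the latent means mu of each window can be inspected, clustered or traversed one coordinate at a time to check which aspects of the dynamics each latent dimension captures.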
Goals/benefits:
- Working with machine learning and deep learning libraries in Python (pandas, scikit-learn, PyTorch)
- Becoming familiar with the analysis of time series (power spectra); see the short sketch after this list
- Advancing research on an interdisciplinary problem
- Possibility to publish a research paper
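As a pointer for the time-series analysis mentioned above, here is a minimal sketch of a power-spectrum computation using Welch's method from SciPy; the synthetic test signal, sampling rate and segment length are illustrative assumptions, and in the project the input would instead be a simulated or climate time series.

import numpy as np
from scipy.signal import welch

fs = 100.0                                  # assumed sampling rate
t = np.arange(0, 200, 1 / fs)
# Synthetic noisy oscillation standing in for a real series
# (e.g. one Lorenz coordinate or a climate variable).
x = np.sin(2 * np.pi * 1.3 * t) + 0.5 * np.random.randn(t.size)

# Welch's method averages periodograms over overlapping segments.
freqs, psd = welch(x, fs=fs, nperseg=2048)

# Peaks in the power spectral density indicate the dominant time scales,
# which can be compared with what the learned latent features capture.
print("dominant frequency:", freqs[np.argmax(psd)])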
Prerequisites:
- Machine learning and deep learning (advanced or intermediate skills)
- Python (advanced skills)
- Interest in interdisciplinary applications
Deliverables:
- Well-documented code
- Written report and oral presentation
References:
[1] D. Kingma, M. Welling, "Auto-Encoding Variational Bayes", 2013
[2] I. Higgins, D. Amos, D. Pfau, S. Racanière, L. Matthey, D. Rezende, A. Lerchner, "Towards a Definition of Disentangled Representations", 2018
Contact:
Eniko Székely ([email protected])
Natasha Tagasovska ([email protected])