Open Projects

Projects are available on the following topics (not exclusive):

  • Machine Learning and Applications
  • Deep Learning Science
  • Image Analysis and Computer Vision
  • Graph Signal Processing and Network Machine Learning

Non-exhaustive project list

Single-Input Multiple-Output Model Merging: Leveraging Foundation Models for Multi-Task Learning

The advent of foundation models has revolutionized the landscape of machine learning, introducing a new paradigm where practitioners can access pre-trained checkpoints, such as those hosted on Huggingface, tailored for specific tasks. These models are derived from the same initial checkpoint but are fine-tuned on different tasks, such as CIFAR or MNIST. Task arithmetic techniques [1,2] merge these different models into a single multi-task model, i.e., a model that performs well on all involved tasks, without requiring additional training.

While the task arithmetic literature has focused on merging classification models fine-tuned on different inputs, more traditional multi-task learning settings remain unexplored. An important case, for instance, is that of a model solving the tasks of semantic segmentation, instance segmentation, and depth regression from a single image input [3]. The goal of this project is to leverage existing model merging techniques for the single-input multiple-output case.
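The core task arithmetic operation [1] can be sketched in a few lines of PyTorch: a task vector is the element-wise difference between fine-tuned and base weights, and merging adds a scaled sum of task vectors back to the base. The scaling coefficient and the toy checkpoints below are illustrative, not part of any particular paper's recipe.

```python
import torch

def task_vector(pretrained, finetuned):
    """Task vector: element-wise difference of fine-tuned and base weights."""
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def merge(pretrained, task_vectors, lam=0.3):
    """Task arithmetic merge: add the scaled sum of task vectors to the base."""
    merged = {k: v.clone() for k, v in pretrained.items()}
    for tv in task_vectors:
        for k in merged:
            merged[k] += lam * tv[k]
    return merged

# Toy example with two "fine-tuned" checkpoints of a single weight matrix.
base = {"w": torch.zeros(2, 2)}
ft_a = {"w": torch.ones(2, 2)}        # stands in for a CIFAR fine-tune
ft_b = {"w": 2 * torch.ones(2, 2)}    # stands in for an MNIST fine-tune
tvs = [task_vector(base, ft_a), task_vector(base, ft_b)]
multi = merge(base, tvs, lam=0.5)
print(multi["w"][0, 0].item())        # 0.5*1 + 0.5*2 = 1.5
```

Methods like TIES-Merging [2] refine exactly this step, resolving sign conflicts between task vectors before summing.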


[1] G. Ilharco, M. T. Ribeiro, M. Wortsman, L. Schmidt, H. Hajishirzi, and A. Farhadi, “Editing models with task arithmetic,” in ICLR 2023. 

[2] P. Yadav, D. Tam, L. Choshen, C. A. Raffel, and M. Bansal, “TIES-Merging: Resolving interference when merging models,” in NeurIPS 2023.

[3] A. Kendall, Y. Gal, and R. Cipolla, “Multi-task learning using uncertainty to weigh losses for scene geometry and semantics,” in CVPR 2018.


Applicants must have completed at least one deep learning course and have experience with PyTorch. Familiarity with multi-task learning and model merging techniques is preferred.

Contact: [email protected] or [email protected]

Foundational AI model for future cardiovascular diseases prediction

Cardiovascular disease (CVD) remains the leading cause of global mortality, constituting 32% of all deaths. Timely prediction of CVD is, therefore, of paramount importance. This project focuses on leveraging foundational models such as ViT, SAM, and CLIP to predict CVD from an internal dataset of 1500 2D X-ray coronary angiography images [1].

Despite the significant success of foundational models in diverse vision and language applications, their integration into the medical domain has been limited. This can be attributed to the scarcity of large, medical domain-specific datasets [2, 3].

The goals of this project are as follows: (1) to assess and compare the performance of existing pretrained foundational models in the field of cardiology imaging; (2) to improve pretrained foundational models with domain-specific objectives through self-supervised learning or supervised learning, utilizing publicly available data [4]; and (3) to adapt re-pretrained models for predicting CVD on our own dataset.
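Goal (3), adapting a pretrained model to the internal dataset, often starts by freezing the pretrained encoder and training only a small task head. The sketch below uses a toy stand-in for the backbone (a real project would load a ViT/SAM/CLIP encoder from a model hub); shapes and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained foundation-model encoder (e.g. a ViT);
# in practice this would be loaded from a checkpoint hub.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 128), nn.ReLU())

# Freeze the pretrained encoder and train only a task head
# for binary CVD prediction on the angiography dataset.
for p in backbone.parameters():
    p.requires_grad = False

head = nn.Linear(128, 2)               # 2 classes: CVD / no CVD
model = nn.Sequential(backbone, head)

x = torch.randn(4, 1, 32, 32)          # toy batch of grayscale images
logits = model(x)
print(logits.shape)                    # torch.Size([4, 2])
trainable = [p for p in model.parameters() if p.requires_grad]
print(len(trainable))                  # only the head's weight and bias
```

Goal (2) would instead unfreeze the encoder and continue pretraining it with a domain-specific (self-)supervised objective before this adaptation step.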


[1] De Bruyne, B., Pijls, N. H., Kalesan, B., Barbato, E., Tonino, P. A., et al.: Fractional flow reserve-guided PCI versus medical therapy in stable coronary disease. New England Journal of Medicine 367(11), 991–1001 (2012).
[2] Wang, Dequan, Xiaosong Wang, Lilong Wang, Mengzhang Li, Qian Da, Xiaoqiang Liu, Xiangyu Gao, et al. “MedFMC: A Real-world Dataset and Benchmark For Foundation Model Adaptation in Medical Image Classification.” arXiv preprint arXiv:2306.09579 (2023).
[3] Zhou, Y., Chia, M. A., Wagner, S. K., Ayhan, M. S., Williamson, D. J., Struyven, R. R., … & Keane, P. A. (2023). A foundation model for generalizable disease detection from retinal images. Nature, 622(7981), 156–163.

Good knowledge of deep learning and experience with ML/DL libraries, preferably PyTorch, are required.

Contact: [email protected] or [email protected]

Self-supervised learning for major arteries segmentation from invasive coronary angiography

Invasive coronary angiography (ICA) is a widely used imaging modality for diagnosing cardiovascular diseases. Accurate segmentation of major arteries from ICA is crucial for clinical decision-making, treatment planning, and research purposes. However, manual segmentation is time-consuming and prone to inter-observer variability [1].

This project aims to: (1) Develop a self-supervised learning framework [2,3,4] for feature representation from ICA. (2) Fine-tune the pre-trained network for major arteries segmentation. (3) Evaluate the proposed method on diverse datasets to demonstrate its effectiveness and robustness.
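Step (1) could, for instance, use a masked-autoencoder-style pretext task [3]: hide random image patches and train the network to reconstruct them. A minimal sketch of the masking step (patch size, ratio, and image size are illustrative assumptions):

```python
import torch

def mask_patches(imgs, patch=8, ratio=0.5):
    """Zero out a random subset of non-overlapping patches (MAE-style pretext).
    Returns the masked images and a boolean patch mask (True = hidden)."""
    b, c, h, w = imgs.shape
    ph, pw = h // patch, w // patch
    mask = torch.rand(b, ph, pw) < ratio
    masked = imgs.clone()
    for i in range(ph):
        for j in range(pw):
            sel = mask[:, i, j]
            masked[sel, :, i*patch:(i+1)*patch, j*patch:(j+1)*patch] = 0.0
    return masked, mask

imgs = torch.ones(2, 1, 32, 32)        # stand-in for ICA frames
masked, mask = mask_patches(imgs, patch=8, ratio=0.5)
# A reconstruction loss would then be computed only on the hidden patches.
print(masked.shape, mask.shape)
```

The pretrained encoder from this pretext task would then be fine-tuned with a segmentation head in step (2).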


[1] Ma, Jun, et al. “Segment anything in medical images.” Nature Communications 15.1 (2024): 654.

[2] Kim, Boah, Yujin Oh, and Jong Chul Ye. “Diffusion adversarial representation learning for self-supervised vessel segmentation.” arXiv preprint arXiv:2209.14566 (2022).

[3] Zhou, Lei, et al. “Self pre-training with masked autoencoders for medical image classification and segmentation.” 2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI). IEEE, 2023.

[4] Ma, Yuxin, et al. “Self-supervised vessel segmentation via adversarial learning.” Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.

Proficiency in deep learning and practical experience with machine learning/deep learning libraries, preferably PyTorch, are necessary.

Contact: [email protected]

Interpretable Deep Learning towards cardiovascular disease prediction

Cardiovascular disease (CVD) is the leading cause of death in most European countries and is responsible for more than one in three of all potential years of life lost. Myocardial ischemia and infarction are most often the result of obstructive coronary artery disease (CAD), and their early detection is of prime importance. Such detection could be based on data such as coronary angiography (CA), an X-ray based imaging technique used to assess the coronary arteries. However, such prediction is a non-trivial task, as i) data is typically noisy and of small volume, and ii) CVDs typically result from the complex interplay of local and systemic factors ranging from cellular signaling to vascular wall histology and fluid hemodynamics. The goal of this project is to apply advanced machine learning techniques, and in particular deep learning, in order to detect culprit lesions from CA images, and eventually predict myocardial infarction. Incorporating domain-specific constraints into existing learning algorithms might be needed.


[1] Yang et al., Deep learning segmentation of major vessels in X-ray coronary angiography, Scientific Reports, 2019.

[2] Du et al., Automatic and multimodal analysis for coronary angiography: training and validation of a deep learning architecture, Eurointervention 2020.

Good knowledge of machine learning and deep learning architectures. Experience with one of the deep learning libraries, in particular PyTorch, is necessary.

Contact: [email protected]

Deep learning towards X-ray CT imaging becoming the gold standard for heart attack diagnosis

Cardiovascular disease (CVD) is the leading cause of death in most European countries and is responsible for more than one in three of all potential years of life lost. Myocardial infarction (MI), commonly known as a heart attack, is most often the result of obstructive coronary artery disease (CAD). The gold standard today for diagnosing a severe stenosis (the obstruction of the artery) in patients with symptoms of a cardiac event is through coronary angiography (CA). CA is an invasive procedure, in which a catheter is inserted into the body through an artery towards the heart. Over the last decade there have been attempts at diagnosing severe stenosis by extracting various measurements [1,2] from the non-invasive X-ray CT imaging. However, the gold standard for the final decision making for the treatment of patients still requires the invasive CA imaging. The goal of this project is to apply advanced machine learning techniques, and in particular deep learning, in order to predict if a certain suspected area shown in a CT image is considered a severe stenosis according to the CA gold standard. This will hopefully pave the way towards making the non-invasive CT imaging the gold standard for MI diagnosis.


[1] Zreik, Majd, et al. “A recurrent CNN for automatic detection and classification of coronary artery plaque and stenosis in coronary CT angiography.” IEEE Transactions on Medical Imaging 38.7 (2018): 1588-1598.

[2] Hong, Youngtaek, et al. “Deep learning-based stenosis quantification from coronary CT angiography.” Medical Imaging 2019: Image Processing. Vol. 10949. International Society for Optics and Photonics, 2019.

Good knowledge of machine learning and deep learning architectures. Experience with one of the deep learning libraries, in particular PyTorch, is necessary.

Contact: [email protected] and [email protected]

Learning novel predictive representation by concept bottleneck disentanglement

Concepts are human-defined features used to explain the decision-making of black-box models with human-interpretable explanations. Such methods are especially useful in the medical domain, where we wish to explain the decision of a model trained to diagnose a medical condition (e.g., arthritis grade) from images (e.g., X-ray) with a concept a physician would look for in the image (e.g., bone spurs). Over the last few years, various methods have been developed to extract concept explanations to interpret models post-hoc [1,2,3,4]. These methods assume that the models implicitly learn those concepts from the data; however, this is not guaranteed.
More recently, [5] introduced concept bottleneck models (CBMs), which exploit access to labels of human-interpretable concepts, as well as the downstream task label, to learn concepts explicitly. These models are trained to predict the task label y, given input x, through a bottleneck layer L that is forced to learn some k labeled concepts. The authors show that, despite constraining the parametric space of the bottleneck layer, they achieve predictive performance comparable to equivalent unconstrained baselines.

In this project we propose to combine the concept bottleneck parameters with unconstrained ones in order to learn a hybrid representation that takes both into account. Moreover, we wish the unconstrained bottleneck representation to be disentangled from the concept parameters, to allow the learning of new information. To this end, we will experiment with different information bottleneck disentanglement approaches as proposed in [6,7].
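The proposed hybrid bottleneck can be sketched as a network whose representation is the concatenation of k supervised concept logits and a free (unconstrained) latent; all names and dimensions below are illustrative, and the disentanglement objective between the two parts is left out.

```python
import torch
import torch.nn as nn

class HybridCBM(nn.Module):
    """Concept bottleneck with an extra unconstrained latent (the project's
    proposed hybrid); sizes here are illustrative."""
    def __init__(self, in_dim=64, k_concepts=5, free_dim=8, n_classes=3):
        super().__init__()
        self.encoder = nn.Linear(in_dim, 32)
        self.concept_head = nn.Linear(32, k_concepts)  # supervised concepts
        self.free_head = nn.Linear(32, free_dim)       # unconstrained latent
        self.classifier = nn.Linear(k_concepts + free_dim, n_classes)

    def forward(self, x):
        h = torch.relu(self.encoder(x))
        c = self.concept_head(h)       # trained against concept labels
        z = self.free_head(h)          # should be disentangled from c
        y = self.classifier(torch.cat([c, z], dim=-1))
        return y, c, z

model = HybridCBM()
y, c, z = model(torch.randn(4, 64))
print(y.shape, c.shape, z.shape)  # (4, 3) (4, 5) (4, 8)
```

Training would add a concept loss on c, a task loss on y, and a disentanglement penalty between c and z in the spirit of [6,7].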


[1] Bau, D., Zhou, B., Khosla, A., Oliva, A., and Torralba, A. Network dissection: Quantifying interpretability of deep visual representations. In Computer Vision and Pattern Recognition (CVPR), pp. 6541–6549, 2017.
[2] Kim, B., Wattenberg, M., Gilmer, J., Cai, C., Wexler, J., Viegas, F., et al. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). In International Conference on Machine Learning (ICML), pp. 2668–2677, 2018.
[3] Zhou, B., Sun, Y., Bau, D., and Torralba, A. Interpretable basis decomposition for visual explanation. In European Conference on Computer Vision (ECCV), pp. 119–134, 2018.
[4] Ghorbani, A., Wexler, J., Zou, J. Y., and Kim, B. Towards automatic concept-based explanations. In Advances in Neural Information Processing Systems (NeurIPS), pp. 9277–9286, 2019.
[5] Koh, P. W., Nguyen, T., Tang, Y. S., Mussmann, S., Pierson, E., Kim, B., and Liang, P. Concept bottleneck models. In International Conference on Machine Learning (ICML), pp. 5338–5348, 2020. PMLR.
[6] Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., and Lerchner, A. beta-VAE: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations (ICLR), 2017.
[7] Klys, Jack, Jake Snell, and Richard Zemel. “Learning latent subspaces in variational autoencoders.” Advances in neural information processing systems 31 (2018).

Experience with machine learning and deep learning projects and with ML/DL libraries, preferably PyTorch, is required. Knowledge of information theory is a plus.

Contact: [email protected]

Leveraging Biological Knowledge and Gene Ontologies to Improve Unsupervised Clustering in Single-Cell RNA-Sequencing and Spatial Transcriptomics

Single-cell RNA-sequencing (scRNA-seq) and spatial transcriptomics have emerged as breakthrough technologies to characterize cellular heterogeneity within human tissues, including cancer biopsies. Unsupervised clustering based on detailed transcriptomes of individual cells/tissue regions is central to identifying and characterizing novel cell types [1]. In cancer biology, identifying rare cell populations is highly relevant, as it can reveal drivers of therapy resistance. However, technical variability, high dimensionality (curse of dimensionality), and sparsity (high drop-out rate) in single-cell RNA-sequencing [2] can lead to the emergence of spurious clusters, posing a significant challenge.

This collaborative research project between the Genomics and Health Informatics group at IDIAP and the LTS4 lab at EPFL aims to address this limitation by focusing on the structure of the biological system, specifically how genes collaborate to control cellular and tissue-scale functions. Novel graph-based feature representation learning methods will be proposed for individual cells, possibly using Graph Neural Networks (GNNs). Then, building on these new representations, improved cell clustering algorithms will be developed and validated against recent baseline methods [3] in their ability to (1) recover rare escapees driving tumor resistance; (2) identify spots that exhibit similar morphological structure and organisation.

[1] Zhang, S., Li, X., Lin, J., Lin, Q. & Wong, K.-C. Review of single-cell RNA-seq data clustering for cell-type identification and characterization. RNA 29, 517–530 (2023).
[2] Kiselev, V. Y., Andrews, T. S. & Hemberg, M. Publisher Correction: Challenges in unsupervised clustering of single-cell RNA-seq data. Nat. Rev. Genet. 20, 310 (2019).
[3] Du, L., Han, R., Liu, B., Wang, Y. & Li, J. ScCCL: Single-Cell Data Clustering Based on Self-Supervised Contrastive Learning. IEEE/ACM Trans. Comput. Biol. Bioinform. 20, 2233–2241 (2023).

Good knowledge of Python and a deep learning framework of choice (PyTorch, Tensorflow, Jax); sufficient familiarity with statistics and machine learning, preferably including Graph Neural Networks. Good knowledge of biology, or a strong interest in learning biology, is a plus.

Contact: [email protected], [email protected] or [email protected]


Unlocking the Complexity of Amyotrophic Lateral Sclerosis: Integration of Biological Knowledge and Clinical Data for Genetic Insights

Amyotrophic Lateral Sclerosis (ALS) is a complex and devastating neurodegenerative condition characterized by a diverse array of clinical presentations and progression trajectories [1]. There is growing evidence to suggest that molecular subtypes, driven by independent disease mechanisms, contribute to the observed clinical heterogeneity [2]. However, our understanding of the genetic architecture and the corresponding molecular or cellular events that underlie distinct subtypes has been limited.

The goal of this collaborative research project between the Genomics and Health Informatics group at IDIAP and the LTS4 lab at EPFL is, through the integration of genomics and clinical data, to gain deeper insights into the genetic underpinnings of the disease and pinpoint relevant molecular pathways. The project will be based on publicly available data from large-scale consortiums (AnswerALS and ProjectMinE). Specifically, graph-based approaches such as graph neural networks (GNNs) [3,4] could be applied to:
1) Represent molecular pathway knowledge (GO ontology or Protein-Protein interactions) to identify accumulation of genetic mutations in specific pathways, and
2) Enable improved patient stratification and to delineate the pertinent genetic mutations and molecular pathways that underlie distinct ALS subtypes.

[1] Pires, S., Gromicho, M., Pinto, S., de Carvalho, M., Madeira, S.C. (2020). Patient Stratification Using Clinical and Patient Profiles: Targeting Personalized Prognostic Prediction in ALS. In: Rojas, I., Valenzuela, O., Rojas, F., Herrera, L., Ortuño, F. (eds) Bioinformatics and Biomedical Engineering. IWBBIO 2020. Lecture Notes in Computer Science(), vol 12108. Springer, Cham.
[2] Eshima, J., O’Connor, S.A., Marschall, E. et al. Molecular subtypes of ALS are associated with differences in patient prognosis. Nat Commun 14, 95 (2023).
[3] Manchia M, Cullis J, Turecki G, Rouleau GA, Uher R, Alda M. The impact of phenotypic and genetic heterogeneity on results of genome wide association studies of complex diseases. PLoS One. 2013 Oct 11;8(10):e76295.
[4] Liang, B., Gong, H., Lu, L. et al. Risk stratification and pathway analysis based on graph neural network and interpretable algorithm. BMC Bioinformatics 23, 394 (2022).

Candidates should have strong mathematical and computational skills. Candidates should be familiar with Python/R and with the Linux environment. Experience with sequencing data and machine learning is an asset. Candidates do not necessarily need a biological background, but should have a strong desire to work directly with experimental biologists.

Contact: [email protected], [email protected], [email protected] or [email protected]

Cell-Graph Analysis with Graph Neural Networks for Immunotherapy

With the advance of imaging systems, reasonably accurate cell phenomaps, which refer to the spatial map of cells accompanied by cell phenotypes, have become more accessible. As the spatial organization of immune cells within the tumor microenvironment is believed to be a strong indicator of cancer progression [1], data-driven analysis of cell phenomaps to discover new biomarkers to help with cancer prognosis is an important and emerging research area. One straightforward idea is to use cell-graphs [2], which can later be used as input to a Graph Neural Network, for example, for survival prediction [3]. However, such a dataset itself poses many algorithmic and computational challenges, given the large variations in both the number of cells (from a few tens of thousands on a slide to a few millions) and their structure, as well as the class imbalance if the objective is some sort of classification. In this project, we will explore different modelings of cell graphs for hierarchical representation learning with prognostic value.
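A common first step is turning a phenomap into a cell graph by connecting each cell to its spatial nearest neighbours. A minimal NumPy sketch (kNN on toy centroids; the choice of k and the dense distance matrix are illustrative, and real slides with millions of cells would need a spatial index):

```python
import numpy as np

def knn_cell_graph(coords, k=3):
    """Build a symmetric kNN adjacency matrix from 2-D cell centroids,
    a common first step for cell-graph construction."""
    n = coords.shape[0]
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude self-edges
    adj = np.zeros((n, n), dtype=bool)
    nbrs = np.argsort(d, axis=1)[:, :k]    # k nearest neighbours per cell
    rows = np.repeat(np.arange(n), k)
    adj[rows, nbrs.ravel()] = True
    adj |= adj.T                           # symmetrise
    return adj

rng = np.random.default_rng(0)
coords = rng.random((100, 2))              # toy cell centroids on a slide
adj = knn_cell_graph(coords, k=5)
print(adj.shape, adj.sum(axis=1).min())    # each cell has >= 5 neighbours
```

The resulting adjacency, together with cell phenotype features, would then feed a GNN as in [3].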

[1] Anderson, Nicole M, and M Celeste Simon. “The tumor microenvironment.” Current biology: CB vol. 30,16 (2020): R921-R925. doi:10.1016/j.cub.2020.06.081
[2] Yener, Bulent. “Cell-Graphs: Image-Driven Modeling of Structure-Function Relationship.” Communications of the ACM, January 2017, Vol. 60 No. 1, Pages 74-84. doi:10.1145/2960404
[3] Yanan Wang, Yu Guang Wang, Changyuan Hu, Ming Li, Yanan Fan, Nina Otter, Ikuan Sam, Hongquan Gou, Yiqun Hu, Terry Kwok, John Zalcberg, Alex Boussioutas, Roger J. Daly, Guido Montúfar, Pietro Liò, Dakang Xu, Geoffrey I. Webb, Jiangning Song. “Cell graph neural networks enable the digital staging of tumor microenvironment and precise prediction of patient survival in gastric cancer.” medRxiv 2021.09.01.21262086.

Good knowledge of Python and a deep learning framework of choice (PyTorch, Tensorflow, Jax); sufficient familiarity with statistics and machine learning, preferably including Graph Neural Networks. Prior experience with DataFrames (e.g., pandas) is a plus.

Contact: [email protected]

Enhancing Scalable Hierarchical Graph Generation

One significant limitation of graph diffusion models is their scalability, primarily due to the quadratic growth of computational cost with the number of nodes. Recently, SparseDiff [1] introduced a new diffusion model that relies on no assumption beyond sparsity and demonstrates exceptional performance across a wide range of tasks. However, despite its broad applicability, SparseDiff's effectiveness remains constrained for graphs with more than a few thousand nodes.

This proposal aims to develop a hierarchical framework based on SparseDiff. The idea is to first generate mid-sized graphs with a trained diffusion model, establishing the foundational structure and topology of the final graphs, which are then expanded hierarchically [2,3] into larger sizes. The primary challenge and contribution of this project lie in the design of the hierarchical graph structure. By innovating on this front, the framework is expected to significantly improve space efficiency while maintaining high performance on very large graphs.
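A toy sketch of one expansion step, to fix ideas: each coarse node is replaced by m children, siblings are fully connected, and children inherit edges from adjacent parents. Real methods [2,3] learn which of these candidate edges to keep; here all choices are fixed.

```python
import numpy as np

def expand(adj, m=3):
    """One hierarchical expansion step: replace each coarse node with m
    children, fully connect siblings, and connect children whose parents
    were adjacent. A learned model would refine these candidate edges."""
    n = adj.shape[0]
    big = np.kron(adj, np.ones((m, m), dtype=bool))               # inherited edges
    sib = np.kron(np.eye(n, dtype=bool), ~np.eye(m, dtype=bool))  # sibling cliques
    return big | sib

coarse = np.array([[0, 1], [1, 0]], dtype=bool)  # 2-node coarse graph
fine = expand(coarse, m=3)
print(fine.shape)  # (6, 6): 2 nodes expanded into 6
```

Iterating this step grows the node count geometrically while only ever materialising local neighbourhoods, which is where the hoped-for space efficiency comes from.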

[1] Qin et al., Sparse Training of Discrete Diffusion Models for Graph Generation, 2023
[2] Bergmeister et al., Efficient and Scalable Graph Generation through Iterative Local Expansion, ICLR2024
[3] Karami et al., HiGen: Hierarchical Graph Generative Networks, 2023

Mandatory: at least one deep learning course and prior experience with PyTorch. Prior knowledge of graph deep learning and/or diffusion models is a plus.

Contact: [email protected]

Graph Latent Diffusion Models

Graph generative models have recently undergone huge developments, mostly due to the adoption of diffusion models in the graph setting [1]. Their capability of capturing higher-order relations in graph datasets has led to impressively accurate models of complex graph distributions, with scientific applications ranging from molecular generation [1] to digital pathology [2]. Despite their remarkable expressivity, current state-of-the-art graph generative models are limited to generating small graphs, as the unordered nature of graphs makes it difficult to scale generation efficiently. In this project, we will develop a graph-specific latent diffusion model to solve this scaling issue. We will take inspiration from the success of latent diffusion models for image generation [5], where the diffusion process occurs in a lower-dimensional space, and thus more efficiently, and the result is then upsampled to a high-resolution image.

[1] Vignac, C. et al., “Digress: Discrete denoising diffusion for graph generation”, International Conference on Learning Representations, 2022
[2] Madeira, M. et al., “Tertiary lymphoid structures generation through graph-based diffusion”, GRAIL (MICCAI workshop), 2023
[3] Limnios, S., “Sagess: Sampling graph denoising diffusion model for scalable graph generation”, arXiv preprint arXiv:2306.16827, 2023
[4] Karami, M., “Higen: Hierarchical graph generative networks”, arXiv preprint arXiv:2305.19337, 2023.
[5] Rombach, R. et al. “High-resolution image synthesis with latent diffusion models.” Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2022.

Mandatory: at least one deep learning course and prior experience with PyTorch. Prior knowledge of graph deep learning and/or diffusion models is a plus.

Contact: [email protected]

Re-rethinking pooling in graph neural networks

Local pooling methods aim to preserve a graph's hierarchical structural information by iteratively coarsening it into smaller graphs. However, their utility in graph neural networks remains an open question, with recent works claiming that such techniques undermine performance [1]. That analysis, however, was conducted on datasets of small graphs, where not only is there little hope for a hierarchical prior, but even structure-unaware models thrive [2].

In this project, we aim to reassess the utility of graph local pooling by extending the analysis to larger and hierarchical graph datasets. We plan to establish a robust evaluation framework to test distinct pooling methods [3, 4, 5] and compare them to non-pooling baselines. If suitable, we will design a new approach to address the existing methods' bottlenecks and evaluate it on real-world datasets from digital pathology.
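The coarsening step shared by assignment-based pooling methods such as DiffPool [3] can be sketched in a few lines: a (here fixed, normally learned) soft assignment matrix projects node features and adjacency onto cluster-level counterparts. Sizes below are illustrative.

```python
import torch

def coarsen(x, adj, s):
    """DiffPool-style pooling with a soft assignment matrix s:
    features and adjacency are projected onto the coarse clusters."""
    x_c = s.T @ x            # (clusters, features)
    adj_c = s.T @ adj @ s    # (clusters, clusters)
    return x_c, adj_c

n, f, c = 6, 4, 2
x = torch.randn(n, f)
adj = (torch.rand(n, n) > 0.5).float()
adj = ((adj + adj.T) > 0).float()            # symmetric toy adjacency
s = torch.softmax(torch.randn(n, c), dim=1)  # soft cluster assignments
x_c, adj_c = coarsen(x, adj, s)
print(x_c.shape, adj_c.shape)  # torch.Size([2, 4]) torch.Size([2, 2])
```

Methods [3, 4, 5] differ mainly in how s is obtained (learned, memory-based, or spectral-free graph cuts), which is exactly what the evaluation framework would compare.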

[1] Mesquita et al., Rethinking pooling in graph neural networks, 2020
[2] Dwivedi et al., Benchmarking Graph Neural Networks, 2022
[3] Ying et al., Hierarchical Graph Representation Learning with Differentiable Pooling, 2019
[4] Khasahmadi et al., Memory-Based Graph Networks, 2020
[5] Dhillon et al., Weighted graph cuts without eigenvectors: a multilevel approach, 2007

Knowledge of Python and sufficient familiarity with statistics and machine learning. Prior experience with PyTorch is strongly recommended.

Contact: [email protected], [email protected] or [email protected]

Immunotherapy Response Prediction with Self-Supervised Graph Representation Learning

Self-supervision has shown notable performance in computer vision and natural language processing and its integration into graph representation learning is an exciting research domain [1]. Due to its nature of learning representations from the data itself without requiring labeled samples, it can be especially useful for biomedical applications where there is a lack of high-quality annotated data. This project focuses on immunotherapy response prediction of metastatic melanoma patients whose whole slide image data are modeled as cell graphs [2] where nodes represent cells and edges represent their interactions. These cell graphs can be used as input to a graph neural network for the end goal of binary classification (responding or non-responding). However, the main challenge stems from the fact that this is a graph-level classification problem with large graphs coming from a very limited patient cohort, hence, little data.
Self-supervised methods such as designing auxiliary tasks to capture contextual information regarding the tumor microenvironment or injecting informative biases into the model architecture are promising areas to investigate in tackling this problem [3]. Furthermore, understanding the specific cellular patterns that distinguish responding from non-responding cases and making these patterns interpretable by clinical experts deserves further investigation. Techniques like Multiple Instance Learning may play a crucial role in achieving this goal [4].
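Attention-based MIL [4] is a natural fit here: each cell embedding is an instance, and learned attention weights indicate which cells drive the slide-level prediction, giving interpretability for free. A minimal sketch (embedding sizes and the two-class head are illustrative assumptions):

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Simple (non-gated) attention pooling over instance embeddings, after
    Ilse et al. [4]; the bag would be a cell graph's node embeddings."""
    def __init__(self, dim=32, hidden=16):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))
        self.clf = nn.Linear(dim, 2)   # responder / non-responder

    def forward(self, h):              # h: (n_instances, dim)
        a = torch.softmax(self.attn(h), dim=0)   # per-instance weights
        bag = (a * h).sum(dim=0)                 # weighted bag embedding
        return self.clf(bag), a.squeeze(-1)

model = AttentionMIL()
logits, weights = model(torch.randn(50, 32))     # 50 cells in one bag
print(logits.shape, float(weights.sum()))        # (2,) 1.0
```

Inspecting the per-cell weights is one route to the clinically interpretable cellular patterns the project targets.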

[1] Y. Xie, Z. Xu, J. Zhang, Z. Wang, and S. Ji, “Self-supervised learning of graph neural networks: A unified review”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 2, pp. 2412–2429, 2022
[2] B. Yener, “Cell-graphs: image-driven modeling of structure-function relationship”, Communications of the ACM, vol. 60, no. 1, pp. 74–84, 2017
[3] Z. Wu, A. E. Trevino, E. Wu, K. Swanson, H. J. Kim, H. B. D’Angio, R. Preska, G. W. Charville, P. D. Dalerba, A. M. Egloff et al., “Graph deep learning for the characterization of tumour microenvironments from spatial protein profiles in tissue specimens”, Nature Biomedical Engineering, vol. 6, no. 12, pp. 1435–1448, 2022
[4] M. Ilse, J. Tomczak and M. Welling, “Attention-based deep multiple instance learning”, International Conference on Machine Learning, pp. 2127-2136, 2018.

Good knowledge of Python and sufficient familiarity with statistics and machine learning. Prior experience with PyTorch is strongly recommended.

Contact: [email protected], [email protected] or [email protected]

Hypergraph neural networks for digital pathology

Hypergraphs are generalisations of graphs in which an edge can connect any number of nodes instead of just two, allowing richer multi-way relations to be modelled. The recent development of hypergraph neural networks [1][2] has opened up an interesting application area in digital pathology. Hypergraph neural networks in digital pathology [3][4] can be thought of as an extension of hierarchical graph representations in digital pathology, which work on tissue and cell graphs [5]. In digital pathology, hypergraph neural networks have been used mainly for survival prediction.

In this project, we will explore hypergraph neural networks for node-level prediction tasks on the OCELOT dataset [6] and for cancer type prediction on TCGA datasets. We will build on existing hypergraph neural network implementations. Further, we will investigate explainability methods to understand which hypergraph components are important for predictions.
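The basic hypergraph convolution of [1] propagates node features through the incidence matrix H (nodes × hyperedges), normalised by node and hyperedge degrees. A NumPy sketch with uniform edge weights and identity features, chosen purely for inspection:

```python
import numpy as np

def hypergraph_conv(X, H, Theta):
    """One hypergraph convolution layer (after Feng et al. [1], uniform edge
    weights): X' = Dv^{-1/2} H De^{-1} H^T Dv^{-1/2} X Theta."""
    Dv = H.sum(axis=1)                 # node degrees
    De = H.sum(axis=0)                 # hyperedge degrees
    Dv_inv = np.diag(1.0 / np.sqrt(Dv))
    De_inv = np.diag(1.0 / De)
    return Dv_inv @ H @ De_inv @ H.T @ Dv_inv @ X @ Theta

# 4 nodes, 2 hyperedges: {0, 1, 2} and {2, 3}
H = np.array([[1, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
X = np.eye(4)                          # one-hot node features
Theta = np.eye(4)                      # identity weights for inspection
out = hypergraph_conv(X, H, Theta)
print(out.shape)                       # (4, 4)
```

In a pathology application, hyperedges could group cells by tissue region or phenotype cluster, which is what makes the explainability of hyperedge contributions interesting.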

[1] Feng, Yifan, et al. “Hypergraph neural networks.” Proceedings of the AAAI conference on artificial intelligence. Vol. 33. No. 01. 2019.
[2] Telyatnikov, Lev, et al. “Hypergraph neural networks through the lens of message passing: a common perspective to homophily and architecture design.” arXiv preprint arXiv:2310.07684 (2023).
[3] Di, Donglin, et al. “Big-hypergraph factorization neural network for survival prediction from whole slide image.” *IEEE Transactions on Image Processing* 31 (2022): 1149-1160.
[4] Benkirane, Hakim, et al. “Hyper-AdaC: adaptive clustering-based hypergraph representation of whole slide images for survival analysis.” *Machine Learning for Health*. PMLR, 2022.
[5] Pati, Pushpak, et al. “Hierarchical graph representations in digital pathology.” *Medical image analysis* 75 (2022): 102264.
[6] OCELOT grand challenge:

Experience with deep learning projects and experience with PyTorch and knowledge of graph neural networks is recommended. Knowledge of digital pathology is a plus.

Contact: [email protected].

Prototypical graph neural networks for medical image applications

Prototypical graph neural networks are interpretable-by-design [1][2][3]. For graph-level classification, they work by identifying important prototypes for each class. Recent works applying prototypical neural networks have shown promise in medical image analysis applications [4][5][6]. Yet, the use of prototypical graph neural networks for medical image analysis remains largely unexplored.

In this project, we will explore the use of different prototypical graph neural networks (including ProtGNN and PIGNN) on cell graphs constructed on medical images for graph level classification tasks. In particular, we will focus on the explainability of the different architectures and their applicability for medical images. We will work on publicly available datasets (such as ACROBAT, CAMELYON and other GrandChallenge datasets).
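To make the mechanism concrete, here is a sketch of a prototype classification head in the spirit of ProtGNN [1]: a graph embedding is scored by its similarity to learned per-class prototypes, and the most similar prototype per class gives the class logit. Dimensions and the similarity form are illustrative.

```python
import torch
import torch.nn as nn

class PrototypeHead(nn.Module):
    """Prototype layer sketch (in the spirit of ProtGNN [1]): graph embeddings
    are classified by similarity to learned per-class prototypes."""
    def __init__(self, dim=16, n_classes=2, protos_per_class=3):
        super().__init__()
        self.prototypes = nn.Parameter(
            torch.randn(n_classes * protos_per_class, dim))
        self.n_classes, self.p = n_classes, protos_per_class

    def forward(self, z):              # z: (batch, dim) graph embeddings
        d = torch.cdist(z, self.prototypes)       # distances to prototypes
        sim = torch.log((d + 1) / (d + 1e-4))     # similarity grows as d -> 0
        return sim.view(-1, self.n_classes, self.p).max(dim=2).values

head = PrototypeHead()
logits = head(torch.randn(5, 16))
print(logits.shape)  # (5, 2): one score per class
```

Interpretability comes from projecting each learned prototype back to its nearest training subgraph, e.g. a recurring cellular pattern in a cell graph.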

[1] Zhang, Zaixi, et al. “Protgnn: Towards self-explaining graph neural networks.” Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 8. 2022.
[2] Ragno, Alessio, Biagio La Rosa, and Roberto Capobianco. “Prototype-based interpretable graph neural networks.” IEEE Transactions on Artificial Intelligence (2022).
[3] Marc, Christiansen, et al. “How Faithful are Self-Explainable GNNs?.” Learning on Graphs Conference 2023. 2023.
[4] Rymarczyk, Dawid, et al. “ProtoMIL: multiple instance learning with prototypical parts for whole-slide image classification.” Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Cham: Springer International Publishing, 2022.
[5] Yu, Jin-Gang, et al. “Prototypical multiple instance learning for predicting lymph node metastasis of breast cancer from whole-slide pathological images.” Medical Image Analysis 85 (2023): 102748.
[6] Deuschel, Jessica, et al. “Multi-prototype few-shot learning in histopathology.” Proceedings of the IEEE/CVF international conference on computer vision. 2021.

Experience with deep learning projects and experience with PyTorch and knowledge of graph neural networks is recommended. Knowledge of digital pathology is a plus.

Contact: [email protected].

Interpretable machine learning in personalised medicine

Modern machine learning models mostly act as black boxes, and their decisions cannot be easily inspected by humans. To trust automated decision-making, we need to understand the reasons behind predictions and gain insights into the models. This can be achieved by building models that are interpretable. Recently, different methods have been proposed to this end, such as augmenting the training set with useful features [1], visualizing intermediate features in order to understand the input stimuli that excite individual feature maps at any layer in the model [2-3], or introducing logical rules into the network that guide the classification decision [4], [5]. The aim of this project is to study existing algorithms that attempt to interpret deep architectures by studying the structure of their inner-layer representations, and, based on these methods, to find patterns for classification decisions along with coherent explanations. The studied algorithms will mostly be considered in the context of personalised medicine applications.
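The simplest of the visualization methods [2], input-gradient saliency, fits in a few lines: the gradient of the predicted class score with respect to the input shows which features locally drive the decision. The toy classifier below is a stand-in for a real model.

```python
import torch
import torch.nn as nn

# Input-gradient saliency [2]: how much each input feature locally affects
# the predicted class score of a (here toy) classifier.
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 3))
x = torch.randn(1, 10, requires_grad=True)

score = model(x)[0].max()        # score of the predicted class
score.backward()                 # populate x.grad with d(score)/dx
saliency = x.grad.abs().squeeze(0)
print(saliency.shape)            # (10,): one importance value per feature
```

For images the same computation yields a per-pixel saliency map; methods like [3] refine it by measuring prediction differences under feature removal.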

[1] R. Collobert, J. Weston, L. Bottou, M. M. Karlen, K. Kavukcuoglu, and P. Kuksa, “Natural language processing (almost) from scratch,”J. Mach. Learn. Res., vol. 12, pp. 2493–2537, Nov. 2011.
[2] K. Simonyan, A. Vedaldi, and A. Zisserman, “Deep inside convolutional networks: Visualising image classification models and saliency maps,” arXiv:1312.6034, 2013.
[3] L. M. Zintgraf, T. S. Cohen, T. Adel, and M. Welling, “Visualizing deep neural network decisions: Prediction difference analysis,” arXiv:1702.04595, 2017.
[4] Z. Hu, X. Ma, Z. Liu, E. Hovy, and E. Xing, “Harnessing deep neural networks with logic rules,” in ACL, 2016.
[5] Z. Hu, Z. Yang, R. Salakhutdinov, and E. Xing, “Deep neural networks with massive learned knowledge,” in Conf. on Empirical Methods in Natural Language Processing, EMNLP, 2016.

Familiarity with machine learning and deep learning architectures is required. Experience with one of the deep learning libraries and good knowledge of the corresponding coding language (preferably Python) is a plus.

Contact: [email protected]

Explainable Ejection Fraction Estimation from cardiac ultrasound videos

Quantitative assessment of cardiac function is essential for the diagnosis of cardiovascular diseases (CVD). In particular, one of the most crucial measurements of heart function in clinical routine is the left ventricle ejection fraction (LVEF), the fraction of left-ventricle blood volume ejected between the end-diastolic and end-systolic phases of one cardiac cycle. The manual assessment of LVEF, which depends on accurate frame identification and ventricular annotation, is associated with significant inter-observer variability. The EF has been predicted using a variety of deep learning-based algorithms; however, most of them lack reliable explainability and suffer from low accuracy due to unrealistic data augmentations. This project aims to (1) automatically estimate the EF from ultrasound videos using the public datasets [1, 2], and (2) provide explainability, such as weights over the frames of an ultrasound video or attention maps over the pixels of each frame, for the LVEF estimation.
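For reference, the LVEF itself is a simple function of the end-diastolic and end-systolic volumes; a minimal sketch follows (the function name and unit convention are our own choice):

```python
def ejection_fraction(edv_ml, esv_ml):
    """Left-ventricle ejection fraction in percent.

    edv_ml: end-diastolic volume (mL); esv_ml: end-systolic volume (mL).
    EF = (EDV - ESV) / EDV * 100.
    """
    if edv_ml <= 0 or not (0 <= esv_ml <= edv_ml):
        raise ValueError("volumes must satisfy 0 <= ESV <= EDV, EDV > 0")
    return 100.0 * (edv_ml - esv_ml) / edv_ml

print(ejection_fraction(120.0, 50.0))  # roughly 58.3, a normal-range EF
```

The deep learning pipeline would regress the volumes (or the EF directly) from the segmented frames; this snippet only pins down the target definition.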

[1] Ouyang, D., He, B., Ghorbani, A. et al. Video-based AI for beat-to-beat assessment of cardiac function. Nature 580, 252–256 (2020).
[2] S. Leclerc, E. Smistad, J. Pedrosa, A. Ostvik, et al. “Deep Learning for Segmentation using an Open Large-Scale Dataset in 2D Echocardiography” in IEEE Transactions on Medical Imaging, vol. 38, no. 9, pp. 2198-2210, Sept. 2019.

Good knowledge of deep learning and experience with ML/DL libraries, preferably PyTorch.

Contact: [email protected]

Modeling and learning dynamic graphs

Graphs provide a compact representation of complex systems describing, for example, biological, financial or social phenomena. Graphs are often considered as static objects, although in many applications the underlying systems vary over time: individuals in social networks make new connections to each other, or drugs change how components of biological networks interact. Modelling graphs as temporal objects thus allows us to better describe and understand the dynamical behavior of these physical systems.
In this project we aim to model or learn the dynamics of temporal graphs using ideas from optimal transport. Optimal transport is a powerful mathematical tool for describing the dynamics of different types of data [1], and also has tight connections with diffusion based generative models [2].
Depending on the background of the student, the goal of this project is to use either optimal transport based graph distances [3], or graph generative models [4] to better understand temporal graphs.

[1] G. Peyré, M. Cuturi. “Computational Optimal Transport: With Applications to Data Science”. Foundations and trends in machine learning. 2019.
[2] V. De Bortoli, J. Thornton, J. Heng, and A. Doucet. “Diffusion Schrödinger bridge with applications to score-based generative modeling”. Advances in Neural Information Processing Systems. 2021.
[3] H. Petric Maretic, M. El Gheche, G. Chierchia, P. Frossard. “GOT: an optimal transport framework for graph comparison”. Advances in Neural Information Processing Systems. 2019.
[4] C. Vignac, I. Krawczuk, A. Siraudin, B. Wang, V. Cevher, P. Frossard. “Digress: Discrete denoising diffusion for graph generation“. In Proceedings of the 11th International Conference on Learning Representations. 2023.

Good knowledge of programming (Python or similar). Some background in optimisation or machine learning. The advanced project also requires knowledge of deep learning and PyTorch. Familiarity with diffusion models is a plus.

Comparing structured data with fused Gromov-Wasserstein distance

In the era of big data, it is crucial to quantify the similarity between data sets. A useful method to compare data distributions is the Wasserstein distance [1]. A related metric, the Gromov-Wasserstein distance, can be used to compare structured objects such as graphs [2,3].
The two methods have been combined into the so-called fused Gromov-Wasserstein distance, which compares graph-structured data by taking into account both the underlying graph structures and the feature information [4].

In this project we explore the fused Gromov-Wasserstein distance and its ability to compare structured data. Interesting directions include, for example, incorporating new types of feature information or identifying subgraph structures.
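To make the objective concrete, the numpy sketch below evaluates the fused Gromov-Wasserstein cost of [4] for a fixed coupling; the actual distance minimizes this cost over couplings (solvers are available in the Python Optimal Transport package). All matrices here are illustrative toy data.

```python
import numpy as np

def fgw_objective(M, C1, C2, T, alpha=0.5):
    """Fused Gromov-Wasserstein cost for a fixed transport plan T.

    M  : (n1, n2) distances between node features of the two graphs
    C1 : (n1, n1) structure matrix of graph 1 (e.g. adjacency)
    C2 : (n2, n2) structure matrix of graph 2
    T  : (n1, n2) coupling between the node distributions
    """
    feature_term = np.sum(M * T)
    # sum_{i,j,k,l} (C1[i,j] - C2[k,l])^2 * T[i,k] * T[j,l]
    diff = C1[:, :, None, None] - C2[None, None, :, :]
    structure_term = np.einsum('ijkl,ik,jl->', diff ** 2, T, T)
    return (1 - alpha) * feature_term + alpha * structure_term

# Identical 2-node graphs, identical features, diagonal coupling -> cost 0.
C = np.array([[0., 1.], [1., 0.]])
cost = fgw_objective(np.zeros((2, 2)), C, C, np.eye(2) / 2)
```

The trade-off parameter alpha interpolates between a pure Wasserstein comparison of features (alpha = 0) and a pure Gromov-Wasserstein comparison of structures (alpha = 1).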

[1] G. Peyré, M. Cuturi. “Computational Optimal Transport: With Applications to Data Science”. Foundations and trends in machine learning. 2019.
[2] F. Mémoli. “Gromov-Wasserstein distances and the metric approach to object matching” Foundations of computational mathematics. 2011
[3] D. Alvarez-Melis, T. Jaakkola, S. Jegelka. Structured optimal transport. In International Conference on Artificial Intelligence and Statistics. 2018.
[4] T. Vayer, L. Chapel, R. Flamary, R. Tavenard, N. Courty. “Optimal Transport for structured data with application on graphs”. International Conference on Machine Learning (ICML). 2019

Good knowledge of optimization, and programming (Python or similar).
Some experience with machine learning and graphs is a plus.

Gromov-Wasserstein projections for Graph Neural Networks

In this work, we are interested in learning representations of attributed graphs in an end-to-end fashion, for both node-level and graph-level tasks, using Optimal Transport across graph spaces. One existing approach consists of designing kernels that leverage topological properties of the observed graphs. Alternative approaches relying on Graph Neural Networks aim at learning vectorial representations of the graphs and their nodes that encode the graph structure (i.e., graph representation learning [1]). These architectures typically learn node embeddings via local permutation-invariant transformations following two dual mechanisms: i) the message-passing (MP) principle followed by a global pooling; or ii) iteratively performing hierarchical pooling that induces MP via graph coarsening principles [2, 3].

In this project, we aim at designing GNNs that leverage recent advances in Optimal Transport (OT) across spaces, naturally providing novel MP mechanisms or their dual hierarchical counterpart [4]. We will study these models in depth following a rigorous methodology in order to position the approaches with respect to some of the main concerns of the current GNN literature. First, we will study these approaches on well-known synthetic datasets used to assess the expressiveness limits of GNNs, i.e., their ability to distinguish graphs or their nodes in homophilic and heterophilic contexts. Then, we will benchmark these approaches on real-world datasets commonly used by the research community. PyTorch and PyTorch Geometric implementations of initial frameworks and experiments will be provided so that students can easily familiarise themselves with the tools involved (especially OT solvers).

[1] Ines Chami, Sami Abu-El-Haija, Bryan Perozzi, Christopher Ré, and Kevin Murphy. Machine learning on graphs: A model and comprehensive taxonomy. The Journal of Machine Learning Research, 23(1):3840–3903, 2022.
[2] Daniele Grattarola, Daniele Zambon, Filippo Maria Bianchi, and Cesare Alippi. Understanding pooling in graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 2022.
[3] Chuang Liu, Yibing Zhan, Jia Wu, Chang Li, Bo Du, Wenbin Hu, Tongliang Liu, and Dacheng Tao. Graph pooling for graph neural networks: Progress, challenges, and opportunities. arXiv preprint arXiv:2204.07321, 2022.
[4] Cédric Vincent-Cuaz, Rémi Flamary, Marco Corneli, Titouan Vayer, and Nicolas Courty. Semi-relaxed Gromov-Wasserstein divergence with applications on graphs. In International Conference on Learning Representations, 2022.

Master student with a solid background in Machine Learning and proficiency in PyTorch. Knowledge of graph machine learning/graph theory is recommended.

Robust Graph Dictionary Learning

Dictionary learning is a key tool for representation learning that explains the data as a linear combination of a few basic elements. Yet, this analysis is complex in the context of graph learning, as graphs usually belong to different metric spaces. The seminal works [1, 2] filled this gap by proposing new Graph Dictionary Learning approaches using the Gromov-Wasserstein (GW) distance from Optimal Transport (OT) as the data-fitting term, or relaxations of this distance [3]. Later on, [4] identified that these methods exhibit high sensitivity to edge noise and proposed a variant of GW to fix this, leveraging robust optimization tools that can be seen as a modification of the primal GW problem.

This project will first aim at analyzing the results in [4] from different graph-theoretic perspectives, with potential contributions to the open-source package Python Optimal Transport [5]. Then we will investigate new models to improve the performance of the latter. This project naturally involves challenges in terms of solver design, implementation, and theoretical analysis connecting OT and graph theory, which can be studied to a greater or lesser extent depending on the student’s wishes.
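As a toy illustration of the dictionary model in [1, 2]: a graph's structure matrix is approximated by a weighted sum of learned atom matrices, and the GW (or robust-GW) distance serves as the data-fitting term between the observed graph and this reconstruction. The atoms and weights below are made up for illustration.

```python
import numpy as np

# Two hypothetical 2-node dictionary atoms (structure matrices).
atoms = np.stack([
    np.array([[0., 1.], [1., 0.]]),   # atom 1: a single edge
    np.array([[0., 0.], [0., 0.]]),   # atom 2: no edge
])
weights = np.array([0.7, 0.3])        # simplex weights of one observed graph

# Linear model: C_hat = sum_s w_s * C_s
reconstruction = np.tensordot(weights, atoms, axes=1)
```

Learning alternates between fitting the per-graph weights and updating the atoms, with the (robust) GW cost measuring how well `reconstruction` matches each observed graph.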

[1] Hongteng Xu. Gromov-Wasserstein factorization models for graph clustering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 6478–6485, 2020.
[2] Cédric Vincent-Cuaz, Titouan Vayer, Rémi Flamary, Marco Corneli, and Nicolas Courty. Online graph dictionary learning. In International conference on machine learning, pages 10564–10574. PMLR, 2021.
[3] Cédric Vincent-Cuaz, Rémi Flamary, Marco Corneli, Titouan Vayer, and Nicolas Courty. Semi-relaxed Gromov-Wasserstein divergence and applications on graphs. In International Conference on Learning Representations, 2021.
[4] Weijie Liu, Jiahao Xie, Chao Zhang, Makoto Yamada, Nenggan Zheng, and Hui Qian. Robust graph dictionary learning. In The Eleventh International Conference on Learning Representations, 2022.
[5] Rémi Flamary, Nicolas Courty, Alexandre Gramfort, Mokhtar Z Alaya, Aurélie Boisbunon, Stanislas Chambon, Laetitia Chapel, Adrien Corenflos, Kilian Fatras, Nemo Fournier, et al. Pot: Python optimal transport. Journal of Machine Learning Research, 22(78):1–8, 2021.

One bachelor or master student with a background in Machine Learning and experience with Python. Experience with graph theory would be a plus.

Scalable template-based GNN with Optimal Transport divergences

This work aims at investigating novel Optimal Transport (OT) based operators for Graph Neural Network (GNN) representation learning, to address graph-level tasks, e.g., classification and regression [1]. GNNs typically learn node embeddings via local permutation-invariant transformations using message passing, then perform a global pooling step to get the graph representation [2]. Recently, [3] proposed a novel global pooling relational concept that led to state-of-the-art performance, placing distances to some learnable graph templates at the core of the graph representation using the Fused Gromov-Wasserstein distance [4]. The latter results from solving a complex graph matching problem, which greatly enhances GNN expressivity but comes with a high computational cost that limits its use to datasets of small graphs.

This project will aim at enhancing the scalability of such approaches from moderate to large graphs, while guaranteeing gains in terms of expressivity and/or generalization performance. This project naturally includes both empirical and theoretical challenges, which can be studied to a greater or lesser extent depending on the student’s wishes.

[1] Ines Chami, Sami Abu-El-Haija, Bryan Perozzi, Christopher Ré, and Kevin Murphy. Machine learning on graphs: A model and comprehensive taxonomy. The Journal of Machine Learning Research, 23(1):3840–3903, 2022.
[2] Chuang Liu, Yibing Zhan, Jia Wu, Chang Li, Bo Du, Wenbin Hu, Tongliang Liu, and Dacheng Tao. Graph pooling for graph neural networks: Progress, challenges, and opportunities. arXiv preprint arXiv:2204.07321, 2022.
[3] Cédric Vincent-Cuaz, Rémi Flamary, Marco Corneli, Titouan Vayer, and Nicolas Courty. Template based graph neural network with optimal transport distances. Advances in Neural Information Processing Systems, 35:11800–11814, 2022.
[4] Vayer Titouan, Nicolas Courty, Romain Tavenard, and Rémi Flamary. Optimal transport for structured data with application on graphs. In International Conference on Machine Learning, pages 6275–6284. PMLR, 2019.

One master student with a solid background in Machine Learning and proficiency in PyTorch. Experience with graph machine learning would be a plus.

Analysis of brain networks over time

We are interested in detecting, and possibly predicting, epileptic seizures using graphs extracted from EEG measurements.

Seizures occur as abnormal neuronal activity. They can affect the whole brain or localized areas and may propagate over time. The main non-invasive diagnosis tool is EEG, which measures voltage fluctuations over a person’s scalp. These fluctuations correspond to the electrical activity caused by the joint activation of groups of neurons. EEGs can span several hours and are currently inspected “by hand” by highly specialized doctors. ML approaches could improve this analysis, and network approaches have shown promising results.

Our data consists of multiple graphs, each providing a snapshot of brain activity over a time window. Considering consecutive time windows, we obtain stochastic processes on graphs, whose change points we would like to identify. We will learn graph representations and study their evolution over time to identify regime changes. You are expected to compare different models in terms of performance and explainability. We are particularly interested in inherently explainable methods, using graph features and classical time series analysis. A comparison with deep learning models could be valuable as well.
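As a minimal baseline for this pipeline, the sketch below reduces each time window's graph to a scalar feature (mean degree) and flags regime changes with a simple z-score rule; both the feature and the detection rule are placeholders for the representations and detectors to be studied.

```python
import numpy as np

def graph_feature_series(adjacencies):
    """One scalar per time window: the mean node degree of that graph."""
    return np.array([A.sum(axis=1).mean() for A in adjacencies])

def detect_changes(series, window=5, z_thresh=3.0):
    """Flag time steps whose value deviates strongly (in z-score)
    from the mean of the preceding `window` samples."""
    changes = []
    for t in range(window, len(series)):
        past = series[t - window:t]
        std = past.std()
        if std > 0 and abs(series[t] - past.mean()) / std > z_thresh:
            changes.append(t)
    return changes
```

A more faithful detector would track a full graph representation (e.g., a learned embedding) rather than a single statistic, but the sliding-window logic stays the same.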

The content and workload is flexible based on the student profile and time involvement (semester project vs MSc thesis).

– Network machine learning
– Time series (preferably)
– Python (numpy, sklearn)

Implementation of Hierarchical Training of Neural Networks

Deep Neural Networks (DNNs) provide state-of-the-art accuracy for many tasks such as image classification. Since most of these networks require high computational resources and memory, they are generally executed on cloud systems, which satisfy this requirement. However, this increases the latency of execution due to the high cost of communicating the data to the cloud, and it raises privacy concerns. These issues are more critical during the training phase, as the backward pass is naturally more resource-hungry, and the required dataset is huge.
Hierarchical Training [1, 2] is a novel approach to implementing the training phase of DNNs in edge-cloud frameworks, dividing the calculations between two devices. It aims to keep the communication and computation costs within acceptable bounds, reducing the training time while keeping the accuracy of the model high. Moreover, these methods inherently preserve the privacy of users.
In this project, the goal is to implement a new method of hierarchical training, which has been developed at CSEM/LTS4 using the PyTorch framework, on a two-device, edge-cloud system. The edge device (e.g., an Nvidia Jetson Series board [3]) has fewer resources than the cloud, which is basically a high-end GPU system. We aim to train popular neural networks (such as VGG) on this two-device system.
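The basic split is easy to sketch in PyTorch: the network is cut into an edge part and a cloud part, the edge activations cross the device boundary, and gradients flow back through both halves. The tiny CNN below is illustrative only, not the CSEM/LTS4 method itself.

```python
import torch
import torch.nn as nn

# Edge part: early layers, run on the constrained device.
edge_part = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
)
# Cloud part: remaining layers, run on the high-end GPU system.
cloud_part = nn.Sequential(nn.Flatten(), nn.Linear(8 * 16 * 16, 10))

x = torch.randn(4, 3, 32, 32)      # a batch of images on the edge
activations = edge_part(x)         # forward pass on the edge device
# In a real deployment, `activations` would be serialized and sent to
# the cloud here (and gradients sent back during the backward pass).
logits = cloud_part(activations)
logits.sum().backward()            # gradients reach both parts
```

The design question studied in [1, 2] is where to place this cut so that the activation/gradient traffic and the per-device compute both stay acceptable.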


[1] D. Liu, X. Chen, Z. Zhou, and Q. Ling, ‘HierTrain: Fast Hierarchical Edge AI Learning with Hybrid Parallelism in Mobile-Edge-Cloud Computing’, arXiv:2003.09876, Mar. 2020.
[2] A. E. Eshratifar, M. S. Abrishami, and M. Pedram, ‘JointDNN: An Efficient Training and Inference Engine for Intelligent Mobile Cloud Computing Services’, arXiv:1801.08618, Feb. 2020.
[3] ‘NVIDIA Embedded Systems for Next-Gen Autonomous Machines’, NVIDIA. (accessed Apr. 14, 2022).


Experience in programming on Nvidia Jetson is required. Good knowledge of deep learning in PyTorch is necessary. Experience with TensorFlow is a plus.


Black-box attack against LLMs

Recently, Large Language Models (LLMs) such as ChatGPT have seen widespread deployment. These models exhibit advanced general capabilities but pose risks of misuse by bad actors. LLMs are trained for safety and harmlessness, yet they remain susceptible to adversarial misuse: it has been shown that these systems can be forced to elicit undesired behavior [1].
Adversarial examples have been investigated in the different fields of natural language processing such as text classification [2]. The goal of this project is to extend such attacks to LLMs.


[1] Jones et al., “Automatically Auditing Large Language Models via Discrete Optimization”, ICML, 2023.
[2] Zhang et al., “Adversarial attacks on deep-learning models in natural language processing: A survey”, ACM TIST, 2020.


Good knowledge of Python. Sufficient familiarity with machine/deep learning, and NLP systems. Experience with one of the deep learning libraries and in particular PyTorch.

Contact: [email protected]

TransFool against LLMs

Recently, Large Language Models (LLMs) such as ChatGPT have been increasingly deployed. However, these models can pose risks of misuse by bad actors. It has been shown that these models are prone to producing objectionable content and are vulnerable to adversarial misuse [1].
Recently, we proposed TransFool to generate adversarial examples against Neural Machine Translation models [2]. In this project, we aim to extend TransFool to reveal vulnerabilities of LLMs and force them to produce objectionable information.


[1] Jones et al., “Automatically Auditing Large Language Models via Discrete Optimization”, ICML, 2023.
[2] Sadrizadeh et al., “TransFool: An Adversarial Attack against Neural Machine Translation Models”, TMLR, 2023.


Good knowledge of Python. Sufficient familiarity with machine/deep learning, and NLP systems. Experience with one of the deep learning libraries and in particular PyTorch.

Contact: [email protected]

Adversarial attacks against neural machine translation models

In recent years, DNN models have been used in machine translation tasks. The significant performance of Neural Machine Translation (NMT) systems has led to their growing usage in diverse areas. However, DNN models have been shown to be highly vulnerable to intentional or unintentional manipulations, known as adversarial examples [1]. Although adversarial examples have been investigated in the field of text classification [2], they have not been well studied for NMT systems.
The goal of this project is to extend popular methods of generating adversarial examples against text classifiers, e.g. TextFooler [3] and BERT-Attack [4], to the case of NMT.
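To fix ideas, here is a hedged sketch of the greedy word-substitution loop behind TextFooler-style attacks [3], adapted to translation: `quality` stands in for a translation-quality score of the perturbed input (e.g., BLEU of its translation against the clean one) and `synonyms` for a candidate generator; both names are our own.

```python
def greedy_attack(sentence, synonyms, quality):
    """Greedily replace each word with the synonym that most
    degrades the translation quality of the perturbed sentence."""
    words = sentence.split()
    reference = quality(words)              # quality of the clean input
    for i, word in enumerate(words):
        best, best_drop = word, 0.0
        for s in synonyms.get(word, []):
            candidate = words[:i] + [s] + words[i + 1:]
            drop = reference - quality(candidate)
            if drop > best_drop:            # keep the most damaging swap
                best, best_drop = s, drop
        words[i] = best
    return " ".join(words)
```

A real attack would add the semantic-similarity and fluency constraints of [3,4] so that the perturbed source stays meaning-preserving while the translation degrades.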


[1] Szegedey et al., “Intriguing properties of neural networks”, ICLR 2014.
[2] Zhang et al., “Adversarial attacks on deep-learning models in natural language processing: A survey”, ACM TIST, 2020.
[3] Jin et al., “Is bert really robust? a strong baseline for natural language attack on text classification and entailment”, AAAI 2020.
[4] Li et al., “BERT-ATTACK: Adversarial attack against BERT using BERT”, EMNLP 2020.


Good knowledge of Python. Sufficient familiarity with machine/deep learning, and NLP systems. Experience with PyTorch or TensorFlow is a plus.

Contact: [email protected]

Personalized epilepsy detection and classification

Epilepsy affects millions of patients worldwide, and about one-third of them do not currently respond to drugs and have to be monitored on a continuous basis. While general seizure detection/classification approaches have been proposed, many of them learn a single global model applied to every patient. While patients exhibit similarities, seizures are known to be diverse and patient-specific. As a result, approaches that use a single model learnt from all patients often tend to generalize poorly to a new patient. In this project, we will pursue the design of personalized approaches for seizure classification/detection by leveraging techniques from meta-learning/transfer learning. By examining the personalized models, we will also attempt to quantify the inter-patient and intra-patient similarities and dissimilarities. We will work with EEG datasets that consist of hours of recordings for many patients, and make use of graph-based models that build feature representations actively exploiting neighbourhood information across different brain regions.


At least one machine learning course and prior experience with deep learning in PyTorch.
Some familiarity with fundamentals of signal processing, time-series analysis is a plus.

Contact: [email protected]