Open Projects

Projects are available on the following topics (the list is not exhaustive):

Image Analysis and Vision

Interpretable Deep Learning towards cardiovascular disease prediction

Cardiovascular disease (CVD) is the leading cause of death in most European countries and is responsible for more than one in three of all potential years of life lost. Myocardial ischemia and infarction are most often the result of obstructive coronary artery disease (CAD), and their early detection is of prime importance. Such detection could be built on data such as coronary angiography (CA), an X-ray-based imaging technique used to assess the coronary arteries. However, this prediction is a non-trivial task, as i) the data are typically noisy and of small volume, and ii) CVDs typically result from the complex interplay of local and systemic factors, ranging from cellular signaling to vascular wall histology and fluid hemodynamics. The goal of this project is to apply advanced machine learning techniques, and in particular deep learning, in order to detect culprit lesions from CA images and eventually predict myocardial infarction. Incorporating domain-specific constraints into existing learning algorithms might be needed.

References:

[1] Yang et al., Deep learning segmentation of major vessels in X-ray coronary angiography, Nature Scientific Reports, 2019.

[2] Du et al., Automatic and multimodal analysis for coronary angiography: training and validation of a deep learning architecture, Eurointervention 2020.


Requirements:
Good knowledge of machine learning and deep learning architectures. Experience with one of the deep learning libraries, in particular PyTorch, is necessary.

Contact: [email protected]

Deep learning towards X-ray CT imaging becoming the gold standard for heart attack diagnosis

Cardiovascular disease (CVD) is the leading cause of death in most European countries and is responsible for more than one in three of all potential years of life lost. Myocardial infarction (MI), commonly known as a heart attack, is most often the result of obstructive coronary artery disease (CAD). The gold standard today for diagnosing a severe stenosis (the obstruction of the artery) in patients with symptoms of a cardiac event is coronary angiography (CA). CA is an invasive procedure, in which a catheter is inserted into the body through an artery towards the heart. Over the last decade there have been attempts at diagnosing severe stenosis by extracting various measurements [1,2] from non-invasive X-ray CT imaging. However, the final decision on patient treatment still relies on the invasive CA imaging as the gold standard. The goal of this project is to apply advanced machine learning techniques, and in particular deep learning, in order to predict whether a suspected area shown in a CT image corresponds to a severe stenosis according to the CA gold standard. This will hopefully pave the way towards making non-invasive CT imaging the gold standard for MI diagnosis.

References:

[1] Zreik, Majd, et al. “A recurrent CNN for automatic detection and classification of coronary artery plaque and stenosis in coronary CT angiography.” IEEE Transactions on Medical Imaging 38.7 (2018): 1588-1598.

[2] Hong, Youngtaek, et al. “Deep learning-based stenosis quantification from coronary CT angiography.” Medical Imaging 2019: Image Processing. Vol. 10949. International Society for Optics and Photonics, 2019.

Requirements:
Good knowledge of machine learning and deep learning architectures. Experience with one of the deep learning libraries, in particular PyTorch, is necessary.

Contact: [email protected] and [email protected]

Learning novel predictive representation by concept bottleneck disentanglement

Concepts are human-defined features used to explain the decision-making of black-box models with human-interpretable explanations. Such methods are especially useful in the medical domain, where we wish to explain the decision of a model trained to diagnose a medical condition (e.g., arthritis grade) from images (e.g., X-ray) with a concept a physician would look for in the image (e.g., bone spurs). Over the last few years, various methods have been developed to extract concept explanations and interpret models post-hoc [1,2,3,4]. These methods assume that the models implicitly learn those concepts from the data; however, this is not guaranteed.
In more recent work, [5] introduced concept bottleneck models (CBMs), which exploit access to labels of human-interpretable concepts, as well as the downstream task label, to learn concepts explicitly. These models are trained to predict the task label y, given input x, through a bottleneck layer L that is forced to learn some k labeled concepts. The authors show that, although the bottleneck constrains the parametric space of that layer, CBMs achieve predictive performance comparable to equivalent unconstrained baselines.

In this project we propose to combine the concept bottleneck parameters with unconstrained ones in order to learn a hybrid representation that takes both into account. Moreover, we want the unconstrained part of the bottleneck representation to be disentangled from the concept parameters, so that it can learn new information. To this end, we will experiment with different information-bottleneck disentanglement approaches, as proposed in [6,7].
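
As a rough starting point, the sketch below illustrates one way to combine a concept bottleneck with unconstrained units (all module names are hypothetical, and the decorrelation penalty is only a stand-in for the disentanglement objectives of [6,7]):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HybridConceptBottleneck(nn.Module):
        """Hypothetical sketch: a bottleneck with k concept units supervised by
        concept labels and m free units meant to capture residual information."""
        def __init__(self, backbone, feat_dim, k_concepts, m_free, n_classes):
            super().__init__()
            self.backbone = backbone                      # any image encoder returning feat_dim features
            self.concept_head = nn.Linear(feat_dim, k_concepts)
            self.free_head = nn.Linear(feat_dim, m_free)
            self.classifier = nn.Linear(k_concepts + m_free, n_classes)

        def forward(self, x):
            h = self.backbone(x)
            c_hat = self.concept_head(h)                  # supervised with concept labels
            z = self.free_head(h)                         # unconstrained part of the bottleneck
            return self.classifier(torch.cat([c_hat, z], dim=1)), c_hat, z

    def training_loss(y_hat, y, c_hat, c, z, lam=1.0, beta=0.1):
        task = F.cross_entropy(y_hat, y)
        concept = F.binary_cross_entropy_with_logits(c_hat, c)   # assumes binary concept labels
        # Crude cross-correlation penalty between free units and predicted concepts;
        # a proper disentanglement objective in the spirit of [6,7] would replace this.
        zc = (z - z.mean(0)) / (z.std(0) + 1e-6)
        cc = (c_hat - c_hat.mean(0)) / (c_hat.std(0) + 1e-6)
        decorr = (zc.T @ cc / z.shape[0]).pow(2).mean()
        return task + lam * concept + beta * decorr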

References:

[1] Bau, D., Zhou, B., Khosla, A., Oliva, A., and Torralba, A. Network dissection: Quantifying interpretability of deep visual representations. In Computer Vision and Pattern Recognition (CVPR), pp. 6541–6549, 2017.
[2] Kim, B., Wattenberg, M., Gilmer, J., Cai, C., Wexler, J., Viegas, F., et al. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). In International Conference on Machine Learning (ICML), pp. 2668–2677, 2018.
[3] Zhou, B., Sun, Y., Bau, D., and Torralba, A. Interpretable basis decomposition for visual explanation. In European Conference on Computer Vision (ECCV), pp. 119–134, 2018.
[4] Ghorbani, A., Wexler, J., Zou, J. Y., and Kim, B. Towards automatic concept-based explanations. In Advances in Neural Information Processing Systems (NeurIPS), pp. 9277–9286, 2019.
[5] Koh, P. W., Nguyen, T., Tang, Y. S., Mussmann, S., Pierson, E., Kim, B., and Liang, P. Concept bottleneck models. In International Conference on Machine Learning (ICML), pp. 5338–5348, 2020.
[6] Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., and Lerchner, A. beta-VAE: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations (ICLR), 2017.
[7] Klys, Jack, Jake Snell, and Richard Zemel. “Learning latent subspaces in variational autoencoders.” Advances in neural information processing systems 31 (2018).

Requirements:
Experience with machine and deep learning projects and with ML/DL libraries, preferably PyTorch. Knowledge of information theory is a plus.

Contact: [email protected]

Are networks failing the first step?

Training adversarially robust networks is often a computationally expensive process. However, there could be simple mechanisms to increase robustness within the architecture of the network itself.

The goal of the project is as follows:

  1. Analyze the effect of increasing the filter sizes in the first convolutional layers of the network [1].
  2. Enforce activation consistency in the neighborhood of the sample by penalizing activations close to zero (L1 regularization + a deactivation regularizer; see the sketch after this list).
  3. Study the robustness of the modified network and the properties of the adversarial perturbations (frequency).
  4. Study the effect on the rest of the layers and the whole network, in terms of their robustness.
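
A minimal sketch of points 1 and 2 (the architecture, kernel size and regularizer form are assumptions chosen for illustration, not a prescribed design):

    import torch
    import torch.nn as nn

    class FirstLayerCNN(nn.Module):
        """Small CNN whose first convolution has a configurable kernel size,
        so its effect on robustness can be studied (point 1)."""
        def __init__(self, first_kernel=7, n_classes=10):
            super().__init__()
            self.conv1 = nn.Conv2d(3, 32, kernel_size=first_kernel, padding=first_kernel // 2)
            self.rest = nn.Sequential(
                nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, n_classes),
            )

        def forward(self, x):
            a = self.conv1(x)          # first-layer responses, exposed for regularization
            return self.rest(a), a

    def activation_regularizer(a, eps=0.05, l1=1e-4, deact=1e-4):
        # One possible form (an assumption): an L1 sparsity term plus a hinge that
        # penalizes responses lying in the near-zero band, where small input
        # perturbations can easily flip them on or off (point 2).
        return l1 * a.abs().mean() + deact * torch.relu(eps - a.abs()).mean()

    # usage: logits, a = model(x); loss = criterion(logits, y) + activation_regularizer(a)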

References:

[1] Gavrikov et al. (2022). “Adversarial Robustness through the Lens of Convolutional Filters.”

Requirements:
Good knowledge of Python and Pytorch.

Contact: [email protected] or [email protected]

Exploiting adversarial examples for computer vision tasks

Even when neural networks perform very well on a task, they are still extremely vulnerable to carefully crafted alterations of their input data known as adversarial perturbations [1]. Methods that construct these perturbations are generally referred to as attacks, and those that protect against adversaries as defences.
Recently, some authors have shown that, besides improving robustness, some of these defences also enhance the interpretability of neural networks and their performance on downstream computer vision tasks [2].
The goal of this project is to further explore this idea and develop state-of-the-art robust models that can be used for several computer vision tasks like image generation, inpainting or denoising.
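
For reference, a compact sketch of standard PGD adversarial training, which is one common way of obtaining robust models (not necessarily the exact recipe of [2]):

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        """L-infinity PGD: iteratively ascend the loss within an eps-ball around x."""
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        for _ in range(steps):
            loss = F.cross_entropy(model(x + delta), y)
            grad, = torch.autograd.grad(loss, delta)
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
        return (x + delta).clamp(0, 1).detach()

    def robust_training_step(model, optimizer, x, y):
        model.eval()                       # craft the attack with frozen batch-norm statistics
        x_adv = pgd_attack(model, x, y)
        model.train()
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()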

References:
[1] Szegedy et al., “Intriguing properties of neural networks”, ICLR 2014.
[2] Santurkar et al., “Image Synthesis with a Single (Robust) Classifier”, NeurIPS 2019.

Requirements:
Good knowledge of Python, sufficient familiarity with computer vision and deep learning. Experience with PyTorch or another deep learning library is a plus.

Contact: [email protected]

Cell-Graph Analysis with Graph Neural Networks for Immunotherapy

With the advance of imaging systems, reasonably accurate cell phenomaps, which refer to spatial maps of cells accompanied by cell phenotypes, have become more accessible. As the spatial organization of immune cells within the tumor microenvironment is believed to be a strong indicator of cancer progression [1], data-driven analysis of cell phenomaps to discover new biomarkers that help with cancer prognosis is an important and emerging research area. One straightforward idea is to use cell-graphs [2], which can later be used as input to a Graph Neural Network, for example for survival prediction [3]. However, such datasets pose many algorithmic and computational challenges, given the big variations in both the number of cells (from a few tens of thousands per slide to a few million) and their structure, as well as the class imbalance if the objective is some form of classification. In this project, we will explore different ways of modeling cell-graphs for hierarchical representation learning with prognostic value.
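
To fix ideas, a minimal (and entirely assumed, not prescribed) way of turning a cell phenomap into a k-nearest-neighbour cell-graph with NumPy/SciPy:

    import numpy as np
    from scipy.spatial import cKDTree

    def build_cell_graph(coords, phenotypes, k=5, max_dist=50.0):
        """coords: (n_cells, 2) spatial positions; phenotypes: (n_cells,) integer labels.
        Connects each cell to its k nearest neighbours within max_dist (e.g. microns),
        a common cell-graph construction in the spirit of [2]."""
        tree = cKDTree(coords)
        dists, idx = tree.query(coords, k=k + 1)        # first neighbour is the cell itself
        edges = [(i, j) for i in range(len(coords))
                 for d, j in zip(dists[i, 1:], idx[i, 1:]) if d <= max_dist]
        edge_index = np.array(edges).T                   # shape (2, n_edges)
        node_features = np.eye(phenotypes.max() + 1)[phenotypes]   # one-hot phenotype features
        return edge_index, node_features

    # toy example: 10k cells with random positions and 5 phenotypes
    coords = np.random.rand(10_000, 2) * 1_000
    phenotypes = np.random.randint(0, 5, size=10_000)
    edge_index, feats = build_cell_graph(coords, phenotypes)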

References:
[1] Anderson, Nicole M, and M Celeste Simon. “The tumor microenvironment.” Current Biology: CB vol. 30,16 (2020): R921-R925. doi:10.1016/j.cub.2020.06.081
[2] Yener, Bulent. “Cell-Graphs: Image-Driven Modeling of Structure-Function Relationship.” Communications of the ACM, January 2017, Vol. 60 No. 1, Pages 74-84. doi:10.1145/2960404
[3] Yanan Wang, Yu Guang Wang, Changyuan Hu, Ming Li, Yanan Fan, Nina Otter, Ikuan Sam, Hongquan Gou, Yiqun Hu, Terry Kwok, John Zalcberg, Alex Boussioutas, Roger J. Daly, Guido Montúfar, Pietro Liò, Dakang Xu, Geoffrey I. Webb, Jiangning Song. “Cell graph neural networks enable the digital staging of tumor microenvironment and precise prediction of patient survival in gastric cancer.” medRxiv 2021.09.01.21262086; doi: https://doi.org/10.1101/2021.09.01.21262086

Requirements:
Good knowledge of Python and a deep learning framework of choice (PyTorch, TensorFlow, JAX); sufficient familiarity with statistics and machine learning, preferably also with Graph Neural Networks. Prior experience with DataFrame libraries (e.g., pandas) is a plus.

Contact: [email protected]

Classification of disease status in DCIS patients using cell-graphs and graph neural networks on spatial proteomics data

There has been a growing availability of technology that enables the capture of cell-level phenotypes. Spatial proteomics quantifies the cell-level expression of a selected set of proteins and can be seen as an extended version of digital pathology data. Ductal carcinoma in situ (DCIS), a type of breast cancer, was recently studied using spatial proteomics [1]. As an initial analysis, the authors used hand-crafted tumour features capturing cell-cell proximity, morphology, and cell-type abundance to estimate disease progression. A natural extension would be to apply graph-based methods using cell-graphs [2] and graph neural networks, which have shown great promise in capturing the interactions between cells [3]. In this project, we will (i) explore different methods of cell-graph construction, and (ii) use different explainability tools to characterize differences between cell-graphs corresponding to different progression states.
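
As a baseline to build on, a minimal graph-classification model sketched with PyTorch Geometric (assuming its standard GCNConv and global_mean_pool interfaces; names are illustrative):

    import torch
    import torch.nn.functional as F
    from torch_geometric.nn import GCNConv, global_mean_pool

    class CellGraphClassifier(torch.nn.Module):
        """Two GCN layers, mean pooling over cells, and a linear head predicting
        the disease/progression status of the whole cell-graph."""
        def __init__(self, in_dim, hidden=64, n_classes=2):
            super().__init__()
            self.conv1 = GCNConv(in_dim, hidden)
            self.conv2 = GCNConv(hidden, hidden)
            self.head = torch.nn.Linear(hidden, n_classes)

        def forward(self, x, edge_index, batch):
            h = F.relu(self.conv1(x, edge_index))
            h = F.relu(self.conv2(h, edge_index))
            return self.head(global_mean_pool(h, batch))   # one embedding per cell-graph

    # usage with a torch_geometric.data.Batch object `data`:
    # logits = model(data.x, data.edge_index, data.batch)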

References:
[1] Risom T, Glass DR, Averbukh I, Liu CC, Baranski A, Kagel A, McCaffrey EF, Greenwald NF, Rivero-Gutiérrez B, Strand SH, Varma S, Kong A, Keren L, Srivastava S, Zhu C, Khair Z, Veis DJ, Deschryver K, Vennam S, Maley C, Hwang ES, Marks JR, Bendall SC, Colditz GA, West RB, Angelo M. Transition to invasive breast cancer is associated with progressive changes in the structure and composition of tumor stroma. Cell. 2022 Jan 20;185(2):299-310.e18. doi: 10.1016/j.cell.2021.12.023. PMID: 35063072; PMCID: PMC8792442.
[2] Yener, Bulent. “Cell-Graphs: Image-Driven Modeling of Structure-Function Relationship.” Communications of the ACM, January 2017, Vol. 60 No. 1, Pages 74-84. doi:10.1145/2960404
[3] Pati, Pushpak, Guillaume Jaume, Lauren Alisha Fernandes, Antonio Foncubierta, Florinda Feroce, Anna Maria Anniciello, Giosue Scognamiglio, et al. “HACT-Net: A Hierarchical Cell-to-Tissue Graph Neural Network for Histopathological Image Classification.” arXiv, July 1, 2020. http://arxiv.org/abs/2007.00584.

Requirements:
Good knowledge of Python and a deep learning framework of choice (PyTorch, TensorFlow); sufficient familiarity with statistics and machine learning, preferably also with Graph Neural Networks.

Contact: [email protected]

Using graph neural networks and topological data analysis for spatial tumor understanding

Characterizing tumor behavior is key to disease understanding and, eventually, efficient cancer treatment. In this project, we will explore the potential of graph-based machine learning methods, together with tools from topological data analysis, to characterize the distribution of different types of cells in the tumor microenvironment. The developed methodologies will be evaluated on agent-based simulation models, which have been successfully used to create valid tumor microenvironments.
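
To illustrate the TDA side, a small sketch that computes persistence diagrams of simulated cell positions with the ripser package (any point-cloud simulator stands in here for the agent-based model):

    import numpy as np
    from ripser import ripser

    # Stand-in for an agent-based simulation: cells clustered around a few tumour sites.
    rng = np.random.default_rng(0)
    centres = rng.uniform(0, 1_000, size=(5, 2))
    cells = np.vstack([c + rng.normal(scale=30.0, size=(200, 2)) for c in centres])

    # 0- and 1-dimensional persistence diagrams of the Vietoris-Rips filtration [1];
    # long-lived H1 features would indicate ring-like arrangements of cells.
    diagrams = ripser(cells, maxdim=1)["dgms"]
    h0, h1 = diagrams[0], diagrams[1]
    if len(h1):
        print(f"{len(h1)} H1 features; longest persistence: {(h1[:, 1] - h1[:, 0]).max():.1f}")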

References:
[1] H. Edelsbrunner and J. L. Harer. Persistent homology — A survey. Contemp. Math., 453:257–282, 2008.
[2] M. Bronstein, J. Bruna, T. Cohen, P. Velickovic, Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges, 2022

Requirements:
Experience with machine and deep learning projects and with ML/DL libraries, preferably PyTorch. Knowledge of graph neural networks and/or topological data analysis is a plus.

Contact: [email protected] and [email protected]

Interpretable machine learning in personalised medicine

Modern machine learning models mostly act as black boxes, and their decisions cannot be easily inspected by humans. To trust automated decision-making, we need to understand the reasons behind predictions and gain insights into the models. This can be achieved by building models that are interpretable. Recently, different methods have been proposed towards this goal, such as augmenting the training set with useful features [1], visualizing intermediate features in order to understand the input stimuli that excite individual feature maps at any layer in the model [2-3], or introducing logical rules into the network that guide the classification decision [4,5]. The aim of this project is to study existing algorithms that attempt to interpret deep architectures by studying the structure of their inner-layer representations, and, based on these methods, to find patterns for classification decisions along with coherent explanations. The studied algorithms will mostly be considered in the context of personalised medicine applications.

[1] R. Collobert, J. Weston, L. Bottou, M. M. Karlen, K. Kavukcuoglu, and P. Kuksa, “Natural language processing (almost) from scratch,” J. Mach. Learn. Res., vol. 12, pp. 2493–2537, Nov. 2011.
[2] K. Simonyan, A. Vedaldi, and A. Zisserman, “Deep inside convolutional networks: Visualising image classification models and saliency maps,” arXiv:1312.6034, 2013.
[3] L. M. Zintgraf, T. S. Cohen, T. Adel, and M. Welling, “Visualizing deep neural network decisions: Prediction difference analysis,” arXiv:1702.04595, 2017.
[4] Z. Hu, X. Ma, Z. Liu, E. Hovy, and E. Xing, “Harnessing deep neural networks with logic rules,” in ACL, 2016.
[5] Z. Hu, Z. Yang, R. Salakhutdinov, and E. Xing, “Deep neural networks with massive learned knowledge,” in Conf. on Empirical Methods in Natural Language Processing, EMNLP, 2016.

Requirements:
Familiarity with machine learning and deep learning architectures. Experience with one of the deep learning libraries and good knowledge of the corresponding programming language (preferably Python) is a plus.

Contact: [email protected]

Graph and Network Signal Processing and Analysis

Comparing structured data with fused Gromov-Wasserstein distance

In the era of big data, it becomes crucial to quantify the similarity between data sets. A useful method to compare data distributions is the Wasserstein distance [1]. A related metric, the Gromov-Wasserstein distance, can be used to compare structured objects such as graphs [2,3].
The two methods have been combined into the so-called fused Gromov-Wasserstein (FGW) distance, which compares graph-structured data by taking into account both the underlying graph structures and the feature information [4].

In this project we will explore the fused Gromov-Wasserstein distance and its ability to compare structured data. Interesting directions include, for example, incorporating new types of feature information or identifying subgraph structures.
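
As a concrete entry point, the POT library implements FGW; below is a minimal sketch comparing two small attributed graphs (assuming POT exposes ot.gromov.fused_gromov_wasserstein2 as in recent versions; the structure and cost choices are just one option):

    import numpy as np
    import networkx as nx
    import ot  # Python Optimal Transport (POT)

    def fgw_distance(g1, g2, feat1, feat2, alpha=0.5):
        """FGW between two attributed graphs: shortest-path matrices as structure,
        squared Euclidean feature costs, uniform node weights."""
        C1 = np.asarray(nx.floyd_warshall_numpy(g1))
        C2 = np.asarray(nx.floyd_warshall_numpy(g2))
        M = ot.dist(feat1, feat2)                      # pairwise feature cost
        p, q = ot.unif(len(g1)), ot.unif(len(g2))
        return ot.gromov.fused_gromov_wasserstein2(M, C1, C2, p, q,
                                                   loss_fun="square_loss", alpha=alpha)

    g1, g2 = nx.cycle_graph(6), nx.path_graph(6)
    feat1, feat2 = np.random.rand(6, 3), np.random.rand(6, 3)
    print(fgw_distance(g1, g2, feat1, feat2))

Here alpha interpolates between a pure feature-based (Wasserstein) comparison and a pure structure-based (Gromov-Wasserstein) comparison.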

References:
[1] G. Peyré, M. Cuturi. “Computational Optimal Transport: With Applications to Data Science”. Foundations and trends in machine learning. 2019.
[2] F. Mémoli. “Gromov-Wasserstein distances and the metric approach to object matching” Foundations of computational mathematics. 2011
[3] D. Alvarez-Melis, T. Jaakkola, S. Jegelka. “Structured optimal transport”. International Conference on Artificial Intelligence and Statistics (AISTATS). 2018.
[4] T. Vayer, L. Chapel, R. Flamary, R. Tavenard, N. Courty. “Optimal Transport for structured data with application on graphs”. International Conference on Machine Learning (ICML). 2019.

Requirements: 
Good knowledge of optimization, and programming (Python or similar).
Some experience with machine learning and graphs is a plus.

Analysis of brain networks over time

We are interested in detecting, and possibly predicting, epileptic seizures using graphs extracted from EEG measurements.

Seizures occur as abnormal neuronal activity. They can affect the whole brain or localized areas and may propagate over time. The main non-invasive diagnostic tool is EEG, which measures voltage fluctuations over a person’s scalp. These fluctuations correspond to the electrical activity caused by the joint activation of groups of neurons. EEG recordings can span several hours and are currently inspected “by hand” by highly specialized doctors. ML approaches could improve this analysis, and network approaches have shown promising results.

Our data consist of multiple graphs, each providing a snapshot of brain activity over a time window. Considering consecutive time windows, we obtain stochastic processes on graphs, in which we would like to identify change points. We will learn graph representations and study their evolution over time to identify changes in regime. You are expected to compare different models in terms of performance and explainability. We are particularly interested in inherently explainable methods, using graph features and classical time-series analysis. A comparison with deep learning models could be valuable as well.
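
One simple, inherently explainable baseline along these lines (library choices are only suggestions): compute a few classical graph features per time window and run an off-the-shelf change-point detector on the resulting multivariate series.

    import numpy as np
    import networkx as nx
    import ruptures as rpt   # change-point detection library (one possible choice)

    def window_features(G):
        """A few interpretable features for one EEG-derived graph."""
        degrees = np.array([d for _, d in G.degree()])
        return [degrees.mean(), nx.average_clustering(G), nx.global_efficiency(G)]

    def detect_regime_changes(graphs, penalty=5.0):
        """graphs: list of networkx graphs, one per time window.
        Returns the indices of estimated change points in the feature series."""
        signal = np.array([window_features(G) for G in graphs])
        return rpt.Pelt(model="rbf").fit(signal).predict(pen=penalty)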

The content and workload are flexible based on the student profile and time involvement (semester project vs. MSc thesis).

Requirements: 
 
– Network machine learning
– Time series (preferably)
– Python (numpy, sklearn)

Machine Learning and Applications

Implementation of Hierarchical Training of Neural Networks

Deep Neural Networks (DNNs) provide state-of-the-art accuracy for many tasks such as image classification. Since most of these networks require high computational resources and memory, they are generally executed on cloud systems, which satisfy this requirement. However, this increases the execution latency due to the high cost of communicating the data to the cloud, and it raises privacy concerns. These issues are even more critical during the training phase, as the backward pass is naturally more resource-hungry and the required datasets are huge.
Hierarchical training [1,2] is a novel approach to implementing the training phase of DNNs in edge-cloud frameworks, dividing the computations between the two devices. It aims to keep the communication and computation costs within acceptable bounds, reducing the training time while keeping the accuracy of the model high. Moreover, these methods inherently preserve the privacy of users.
In this project, the goal is to implement a new method of hierarchical training, which has been developed at CSEM/LTS4 using the PyTorch framework, on a two-device, edge-cloud system. The edge device (e.g., an Nvidia Jetson board [3]) has lower resources than the cloud, which is basically a high-end GPU system. We aim to train popular neural networks (such as VGG) on this two-device system.
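
To make the setting concrete, a toy sketch of splitting a VGG forward pass between an edge device and a cloud GPU is shown below; it only illustrates the general idea and is not the CSEM/LTS4 method that the project will implement.

    import torch
    import torch.nn as nn
    from torchvision.models import vgg16

    edge_dev = torch.device("cpu")        # stand-in for an edge board such as a Jetson
    cloud_dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    full = vgg16(num_classes=10)
    edge_part = nn.Sequential(*list(full.features.children())[:10]).to(edge_dev)
    cloud_part = nn.Sequential(*list(full.features.children())[10:], full.avgpool,
                               nn.Flatten(), full.classifier).to(cloud_dev)

    def forward_split(x):
        # The edge computes the early layers; only the intermediate activation is
        # "communicated" (a device-to-device copy stands in for the network link).
        h = edge_part(x.to(edge_dev))
        return cloud_part(h.to(cloud_dev))

    logits = forward_split(torch.randn(4, 3, 224, 224))
    # In training, gradients would flow back across the same boundary.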

References:

[1] D. Liu, X. Chen, Z. Zhou, and Q. Ling, ‘HierTrain: Fast Hierarchical Edge AI Learning with Hybrid Parallelism in Mobile-Edge-Cloud Computing’, arXiv:2003.09876 [cs], Mar. 2020, Accessed: Jul. 03, 2021. [Online]. Available: http://arxiv.org/abs/2003.09876
[2] A. E. Eshratifar, M. S. Abrishami, and M. Pedram, ‘JointDNN: An Efficient Training and Inference Engine for Intelligent Mobile Cloud Computing Services’, arXiv:1801.08618 [cs], Feb. 2020, Accessed: Jul. 09, 2021. [Online]. Available: http://arxiv.org/abs/1801.08618
[3] ‘NVIDIA Embedded Systems for Next-Gen Autonomous Machines’, NVIDIA. https://www.nvidia.com/en-gb/autonomous-machines/embedded-systems/ (accessed Apr. 14, 2022).

Requirements:

Experience in programming on Nvidia Jetson is required. Good knowledge of deep learning in PyTorch is necessary. Experience with TensorFlow is a plus.

Contact: yamin.sepehri@epfl.ch

Adversarial attacks against neural machine translation models

In recent years, DNN models have been used for machine translation tasks. The significant performance of Neural Machine Translation (NMT) systems has led to their growing usage in diverse areas. However, DNN models have been shown to be highly vulnerable to intentionally or unintentionally manipulated inputs, known as adversarial examples [1]. Although adversarial examples have been investigated in the field of text classification [2], they have not been well studied for NMT models.
The goal of this project is to extend popular methods of generating adversarial examples against text classifiers, e.g., TextFooler [3] and BERT-Attack [4], to the case of NMT.
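
A possible starting point, sketched with a HuggingFace MarianMT model (the model name, library version and scoring criterion are assumptions): score candidate word substitutions by how much they increase the translation loss of the target NMT model.

    import torch
    from transformers import MarianMTModel, MarianTokenizer

    name = "Helsinki-NLP/opus-mt-en-de"            # an example target NMT model
    tok = MarianTokenizer.from_pretrained(name)
    nmt = MarianMTModel.from_pretrained(name).eval()

    def translation_loss(src_sentence, ref_translation):
        """Loss of the NMT model when forced to produce the reference translation;
        a substitution that raises it is a candidate adversarial change."""
        batch = tok([src_sentence], return_tensors="pt")
        labels = tok(text_target=[ref_translation], return_tensors="pt").input_ids
        with torch.no_grad():
            return nmt(**batch, labels=labels).loss.item()

    src = "The agreement was signed by both parties yesterday."
    ref = tok.decode(nmt.generate(**tok([src], return_tensors="pt"))[0], skip_special_tokens=True)
    # e.g. a TextFooler-style synonym substitution, kept only if it increases the loss
    candidate = src.replace("signed", "endorsed")
    print(translation_loss(src, ref), translation_loss(candidate, ref))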

References:

[1] Szegedy et al., “Intriguing properties of neural networks”, ICLR 2014.
[2] Zhang et al., “Adversarial attacks on deep-learning models in natural language processing: A survey”, ACM TIST, 2020.
[3] Jin et al., “Is bert really robust? a strong baseline for natural language attack on text classification and entailment”, AAAI 2020.
[4] Li et al., “BERT-ATTACK: Adversarial attack against BERT using BERT”, EMNLP 2020.

Requirements:

Good knowledge of Python. Sufficient familiarity with machine/deep learning, and NLP systems. Experience with PyTorch or TensorFlow is a plus.

Contact: [email protected]

Generating adversarial examples to fool neural machine translation models

Most works in the literature on adversarial attacks against Neural Machine Translation (NMT) models aim to degrade the translation quality with respect to the reference translation [1,2]. However, there is a difference between text classifiers and NMT models: by changing the input of an NMT model, we expect the output translation to change as well. Therefore, we need to make sure that the generated adversarial examples are indeed fooling the target model, i.e., that the performance drops because of a failure of the NMT model and not because of the changes made by the adversary (which may also have changed the actual ground-truth reference) [3].
The goal of this project is to use back-translation, as proposed in [3], to generate adversarial examples that find the failure modes of NMT models. In other words, we aim to reduce the direct correlation between the degradation in the translation and the adversarial perturbation.
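
As a rough illustration of the round-trip idea (not necessarily the exact procedure of [3]; the model names are examples):

    from transformers import MarianMTModel, MarianTokenizer

    fwd_name, bwd_name = "Helsinki-NLP/opus-mt-en-de", "Helsinki-NLP/opus-mt-de-en"
    fwd_tok, fwd = MarianTokenizer.from_pretrained(fwd_name), MarianMTModel.from_pretrained(fwd_name)
    bwd_tok, bwd = MarianTokenizer.from_pretrained(bwd_name), MarianMTModel.from_pretrained(bwd_name)

    def translate(model, tok, sentence):
        out = model.generate(**tok([sentence], return_tensors="pt"))
        return tok.decode(out[0], skip_special_tokens=True)

    adv_src = "The contract was endorsed by both parties yesterday."   # adversarially modified source
    adv_de = translate(fwd, fwd_tok, adv_src)       # target model's output on the adversarial input
    round_trip = translate(bwd, bwd_tok, adv_de)    # back-translation into the source language
    # One possible use: compare round_trip (or a back-translation-derived reference) with adv_src,
    # e.g. via BLEU or semantic similarity, so that quality degradation can be attributed to the
    # NMT model rather than to the meaning change introduced by the adversary.
    print(adv_de, "|", round_trip)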

[1] Michel et al., “On evaluation of adversarial perturbations for sequence-to-sequence models”, NAACL, 2019.
[2] Cheng et al., “Robust neural machine translation with doubly adversarial inputs”, ACL, 2019.
[3] Zhang et al., “Crafting adversarial examples for neural machine translation”, ACL, 2021.

Requirements:

Good knowledge of Python. Sufficient familiarity with machine/deep learning, and NLP systems. Experience with PyTorch or TensorFlow is a plus.

Contact: [email protected]

Targeted adversarial attacks against neural machine translation models

Neural Machine Translation (NMT) systems are used to convert a sequence of words from a source language to a sequence of words in a target language. They have been shown to be vulnerable to adversarial examples in both untargeted and targeted settings [1,2].

The goal of this project is to devise methods for generating imperceptible targeted adversarial examples against NMT systems. For classifiers, the purpose of the adversary in a targeted attack is to change the output of the classifier to a predefined target class. However, an NMT model generates a sequence of words, and for each word there exist tens of thousands of possible choices (different words in the target language), which makes targeted attacks against NMT models more complicated [3]. Therefore, we aim to define a targeted attack against NMT models by combining a classifier with the NMT model.

[1] Michel et al., “On evaluation of adversarial perturbations for sequence-to-sequence models”, NAACL, 2019.
[2] Cheng et al., “Seq2sick: Evaluating the robustness of sequence-to-sequence models with adversarial examples.” Proceedings of the AAAI Conference on Artificial Intelligence, 2020.
[3] Ebrahimi et al., “On Adversarial Examples for Character-Level Neural Machine Translation.” Proceedings of the 27th International Conference on Computational Linguistics, 2018.

Requirements:

Good knowledge of Python. Sufficient familiarity with machine/deep learning, and NLP systems. Experience with PyTorch and HuggingFace is a plus.

Contact: [email protected]

Physics-inspired ML for characterization and detection of epileptic seizures

There is an increased interest in the application of physics-inspired models in many domains of machine learning today, due to the inherent interpretability offered by these models. Physics-inspired models use a combination of well-defined dynamical models and completely data-driven models. Interpretability is of particular importance in healthcare applications, such as the study and classification/detection of seizures. Epilepsy affects about 60 million people worldwide, about a third of the patients do not respond well to existing drugs, and a good understanding is key to improving the techniques available for diagnosis and monitoring. Further, seizures vary significantly from one patient to another, and even between different instances in the same patient. While there is an understanding that seizures propagate across regions in the brain, further work is necessary to quantify this understanding, particularly in terms of seizure propagation models and the differences and similarities across patients. Previous works in computational neuroscience have shown that mathematical models can help capture epileptic behaviour at micro and macro scales of brain activity [1]. However, many of these models are typically hand-tuned and have not been used in a learning setting. On the other hand, there are recent works that tackle the problem of modeling in a completely data-driven manner [2].

In this work, we will investigate the use of physics-inspired ML for modeling seizures, employing both parametric models and data-driven models such as neural ODEs [3], as pursued in, for example, [4,5]. The work will primarily be based on scalp EEG recordings.
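
One way such a hybrid could look (a sketch in the spirit of [3,4]; the linear "physical" part and all names are placeholders rather than an actual neural-mass model):

    import torch
    import torch.nn as nn
    from torchdiffeq import odeint   # ODE solvers with autograd support, as in [3]

    class HybridEEGDynamics(nn.Module):
        """dx/dt = f_parametric(x) + f_neural(x): a hand-specified linear term
        (stand-in for a mechanistic model) plus a learned residual network."""
        def __init__(self, dim):
            super().__init__()
            self.A = nn.Parameter(0.01 * torch.randn(dim, dim))   # parametric/physical part
            self.residual = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

        def forward(self, t, x):
            return x @ self.A.T + self.residual(x)

    dim = 8                                    # e.g. a few EEG channels or brain regions
    model = HybridEEGDynamics(dim)
    x0, t = torch.randn(1, dim), torch.linspace(0.0, 1.0, 50)
    x_pred = odeint(model, x0, t)              # (50, 1, dim) simulated trajectory
    loss = ((x_pred - torch.randn_like(x_pred)) ** 2).mean()   # placeholder data-fitting loss
    loss.backward()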

[1] Glomb et al., “Computational models in electroencephalography,” Brain Topography, vol. 35:142–161, 2021.
[2] Sip et al., “Data-driven method to infer the seizure propagation patterns in an epileptic brain from intracranial electroencephalography,” PLOS Computational Biology, vol. 17, 02 2021.
[3] Chen et al., “Neural ordinary differential equations,” NeurIPS, vol. 31, 2018.
[4] Yin et al., “Augmenting physical models with deep networks for complex dynamics forecasting,” in ICLR, 2021.
[5] Schmidt et al., “Dynamics on networks: The role of local dynamics and global networks on the emergence of hypersynchronous neural activity.”

Requirements:

At least a machine learning course and prior experience with deep learning in PyTorch. Some familiarity with the fundamentals of signal processing and time-series analysis is a plus.

Contact: [email protected] and [email protected]

Personalized epilepsy detection and classification

Epilepsy affects about 60 million patients worldwide, and about one-third of them do not currently respond to drugs and have to be monitored on a continuous basis. While general seizure detection/classification approaches have been proposed, many of them apply a single global model to every patient. It is known that, while patients exhibit similarities, seizures are diverse and patient-specific. As a result, approaches that use a single model learnt from all patients often tend to generalize poorly to a new patient. In this project, we will pursue the design of personalized approaches for seizure classification/detection by leveraging techniques from meta-learning/transfer-learning. By examining the personalized models, we will also attempt to quantify the inter-patient and intra-patient similarities and dissimilarities. We will work with EEG datasets that consist of hours of recordings for many patients, and make use of graph-based models that build feature representations actively exploiting neighbourhood information across different brain regions.
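
One simple baseline for the personalization step (a sketch only; the project may instead use MAML-style meta-learning or other transfer strategies): start from a model trained on the other patients and fine-tune its classification head on a small amount of data from the new patient.

    import copy
    import torch
    import torch.nn.functional as F

    def personalize(global_model, patient_loader, lr=1e-4, epochs=5):
        """Clone the population-level model and fine-tune only its classifier head on the
        new patient's labelled EEG windows (assumes the model exposes a `classifier` head)."""
        model = copy.deepcopy(global_model)
        for p in model.parameters():
            p.requires_grad = False
        for p in model.classifier.parameters():
            p.requires_grad = True
        opt = torch.optim.Adam(model.classifier.parameters(), lr=lr)
        for _ in range(epochs):
            for x, y in patient_loader:
                opt.zero_grad()
                F.cross_entropy(model(x), y).backward()
                opt.step()
        return model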

Requirements:

At least a machine learning course and prior experience with deep learning in PyTorch.
Some familiarity with the fundamentals of signal processing and time-series analysis is a plus.

Contact: [email protected]