Student Projects

LTS5 OPEN SEMESTER AND MASTER PROJECTS – Spring 2023
 

SEMESTER PROJECT PROPOSALS

1. Spherical Deconvolution Algorithms for Intra-Voxel Fiber Estimation and Brain Connectivity Mapping
The LTS5 Diffusion group focuses on brain tissue microstructure and structural connectivity estimated from diffusion Magnetic Resonance Imaging (dMRI) data, with a particular focus on the reconstruction of the nerve fiber orientation distribution function (ODF) in each voxel (see the figure below). This information is essential for reconstructing the brain’s white matter with fiber tracking algorithms (see ref. [1]).

We have implemented various novel reconstruction algorithms (e.g., see refs. [2-5]) and we plan to develop a new generation of methods using machine learning techniques. The goals of this project are to: (1) create a large database of fiber ODFs and corresponding dMRI signals, (2) design, train, and optimize a neural network using this dataset, (3) predict the fiber ODFs from new dMRI data, and (4) compare the implemented algorithm with state-of-the-art techniques using both synthetic and real dMRI data acquired from human brains. The results will be published in international conferences and relevant journals.
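
As an illustration of goal (2), a minimal sketch of the kind of network that could map per-voxel dMRI signals to fiber-ODF coefficients is given below. It assumes the signal is a fixed-length vector and the ODF is parametrized by spherical-harmonic coefficients; all sizes and names are illustrative, not the final design.

```python
import torch
import torch.nn as nn

# Illustrative sizes: 64 diffusion measurements per voxel, ODF encoded by
# 45 spherical-harmonic coefficients (order 8, antipodally symmetric).
N_MEAS, N_SH = 64, 45

class ODFRegressor(nn.Module):
    """Small MLP mapping a per-voxel dMRI signal to fiber-ODF SH coefficients."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_MEAS, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, N_SH),
        )

    def forward(self, signal):
        return self.net(signal)

model = ODFRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on a synthetic batch (stand-in for the simulated database of goal (1)).
signals = torch.randn(32, N_MEAS)    # batch of dMRI signals
target_sh = torch.randn(32, N_SH)    # corresponding ground-truth ODF coefficients
loss = loss_fn(model(signals), target_sh)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```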

Requirements: The project will be implemented in Python, so good knowledge of the language is required. This project is ideal for a computer scientist, mathematician, physicist, or engineer interested in medical imaging, machine learning, signal processing, and optimization.

References:
[1] https://www.sciencedirect.com/science/article/abs/pii/S1053811914003541

[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4607500
[3] https://www.sciencedirect.com/science/article/abs/pii/S1053811918307699
[4] https://onlinelibrary.wiley.com/doi/10.1002/mrm.21917

Supervisors: Dr. Erick J. Canales-Rodríguez ([email protected]), Dr. Gabriel Girard ([email protected]), and Prof. Jean-Philippe Thiran


2. Myelin Water Imaging Using T2 Relaxometry
Myelin is a lipid-rich substance that surrounds the axons in the brain and is essential for the proper functioning of the nervous system. Myelin water imaging is a magnetic resonance imaging (MRI) method that can be used to quantify and visualize myelination in the brain and spinal cord in vivo. The signal acquired with a multi-echo T2 relaxometry sequence can be decomposed into several components, including the one originating from water molecules trapped between the lipid bilayers of myelin. The correct estimation of this component provides a myelin-specific MRI biomarker to monitor changes in cerebral white matter. Myelin quantification has important implications for understanding various neurodegenerative diseases, including multiple sclerosis.
We are looking for a motivated student to (1) learn about the MRI and signal processing theory behind this modality, (2) improve the current estimation methods, and (3) test and compare the new results with the current methods and histological measurements.
The project builds on top of previous cutting-edge research carried out in our lab (for more details see our multi-component T2 reconstruction toolbox and related references: https://github.com/ejcanalesr/multicomponent-T2-toolbox). The results will be published in international conferences and relevant journals.
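
For intuition, the basic multi-exponential decomposition behind myelin water imaging can be sketched in a few lines of Python with a non-negative least-squares fit. This is only a simplified illustration, not the regularized estimation implemented in the toolbox above; the echo times, T2 grid, and 40 ms myelin cutoff are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

# Multi-echo T2 decay model: S(TE) = sum_j w_j * exp(-TE / T2_j), with w_j >= 0.
echo_times = np.arange(1, 33) * 10e-3                       # 32 echoes, 10 ms spacing (illustrative)
t2_grid = np.logspace(np.log10(10e-3), np.log10(2.0), 60)   # candidate T2 values (10 ms to 2 s)

# Dictionary of mono-exponential decays (one column per candidate T2).
A = np.exp(-echo_times[:, None] / t2_grid[None, :])

# Synthetic voxel: 15% myelin water (T2 ~ 20 ms) + 85% intra/extra-cellular water (T2 ~ 80 ms).
signal = 0.15 * np.exp(-echo_times / 20e-3) + 0.85 * np.exp(-echo_times / 80e-3)
signal += 0.005 * np.random.randn(echo_times.size)

# Non-negative least-squares fit of the T2 spectrum.
weights, _ = nnls(A, signal)

# Myelin water fraction: spectrum mass below ~40 ms over total mass.
mwf = weights[t2_grid < 40e-3].sum() / weights.sum()
print(f"Estimated myelin water fraction: {mwf:.2f}")
```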

Requirements: The project will be implemented in Python, so good knowledge of the language is required. This project is ideal for a computer scientist, mathematician, physicist, or engineer interested in medical imaging, optimization, and signal processing.

Supervisors: Dr. Erick J. Canales-Rodríguez ([email protected]) and Prof. Jean-Philippe Thiran


3. Preprocessing steps for cervical cancer detection

Cervical cancer is a major public health concern around the world, in high-income as well as low- and middle-income settings. In collaboration with the Geneva University Hospitals (HUG) and the Dschang District Hospital in Cameroon, we aim to implement a smartphone-based solution that automatically detects cervical cancer from videos of the cervix using deep neural networks.

This project focuses on the preprocessing steps required before image classification. The goal is to improve the quality assessment of the videos, with a particular focus on movement detection, blur, and luminosity changes.
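
A simple starting point for the video quality assessment could look like the sketch below, using the Laplacian variance as a blur score, the mean intensity for luminosity changes, and frame differencing for movement; the thresholds and file name are purely illustrative.

```python
import cv2

def frame_quality_metrics(video_path):
    """Yield per-frame sharpness, brightness and inter-frame motion scores."""
    cap = cv2.VideoCapture(video_path)
    prev_gray = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()   # low variance -> blurry frame
        brightness = gray.mean()                            # tracks luminosity changes
        motion = 0.0
        if prev_gray is not None:
            motion = cv2.absdiff(gray, prev_gray).mean()    # high value -> camera/patient movement
        prev_gray = gray
        yield sharpness, brightness, motion
    cap.release()

# Example: flag frames of a (hypothetical) cervix video that fail simple thresholds.
for i, (sharp, bright, motion) in enumerate(frame_quality_metrics("cervix_video.mp4")):
    if sharp < 50 or motion > 10:
        print(f"frame {i}: possible blur/motion (sharpness={sharp:.1f}, motion={motion:.1f})")
```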

Requirements: Basic knowledge of image processing and deep learning. Fluency in Python and PyTorch.

Assistants: Magali Cattin ([email protected]) and Roser Vinals Terres ([email protected])

Supervisor: Prof. Jean-Philippe Thiran


4. Cervical cancer classification

Cervical cancer is a major public health concern around the world, in high-income as well as low- and middle-income settings. In collaboration with the Geneva University Hospitals (HUG) and the Dschang District Hospital in Cameroon, we aim to implement a smartphone-based solution that automatically detects cervical cancer from videos of the cervix using deep neural networks.

Visual inspection with acetic acid is a common method used to detect cervical cancer. It consists of applying diluted acetic acid on the cervix, which acts as a contrast agent: the different types of tissue (and potential lesions) whiten at different rates and reach different intensities.

A deep learning model was trained to classify static images of the cervix taken approximately 1 minute after application of the acetic acid. This project first aims to evaluate its performance on images taken at different times after acetic acid application, in order to identify the optimal frame. A secondary objective is to improve the robustness of the model, e.g. by training it with images taken at different times.
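
The first objective could start from something like the sketch below, which grabs frames at different times after acetic acid application and queries a trained classifier; the model file, input size and time points are hypothetical placeholders.

```python
import cv2
import torch

def frame_at(video_path, seconds):
    """Grab the frame closest to a given time (in seconds) after acetic acid application."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_MSEC, seconds * 1000)
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None

# Hypothetical trained classifier exported as a TorchScript file.
model = torch.jit.load("cervix_classifier.pt").eval()

# Compare predictions at several times after application to look for the optimal frame.
for t in [30, 60, 90, 120]:                      # seconds after acetic acid (illustrative)
    frame = frame_at("cervix_video.mp4", t)
    if frame is None:
        continue
    frame = cv2.resize(frame, (224, 224))        # assumed input size of the classifier
    x = torch.from_numpy(frame).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        prob = torch.softmax(model(x), dim=1)[0, 1].item()
    print(f"t={t:>3d}s: predicted probability of lesion = {prob:.2f}")
```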

Requirements: Basic knowledge of deep learning. Fluency in Python and PyTorch.

Assistants: Magali Cattin ([email protected]) and Roser Vinals Terres ([email protected])

Supervisor: Prof. Jean-Philippe Thiran


5. Improving image reconstruction for ultrasound ultrafast imaging

Ultrasound imaging is one of the safest, cheapest, and most widely accessible imaging modalities used in medical diagnosis. In particular, a technique called ultrafast ultrasound achieves very high frame rates and is used, for instance, to analyse tissue displacements. However, the image quality achieved by this technique is low.

A deep-learning based image reconstruction technique is being developed to improve the image quality of ultrafast images. This project aims to further improve the current reconstruction technique and adapt it to create a real-time image reconstruction pipeline.
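
As a rough illustration of the idea (not the technique actually under development), a residual CNN can be trained to map a low-quality ultrafast image to a higher-quality target, e.g. a coherently compounded image; the shapes and hyperparameters below are arbitrary.

```python
import torch
import torch.nn as nn

class UltrafastEnhancer(nn.Module):
    """Toy residual CNN that maps a single low-quality ultrafast image to an enhanced image."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)   # predict a correction on top of the low-quality input

model = UltrafastEnhancer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step: low-quality input vs. high-quality (e.g., compounded) target.
low_quality = torch.randn(4, 1, 256, 256)
high_quality = torch.randn(4, 1, 256, 256)
loss = nn.functional.l1_loss(model(low_quality), high_quality)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```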

Requirements: Signal processing and deep learning. Fluency in Python and PyTorch.

Assistant: Roser Vinals Terres ([email protected])

Supervisor: Prof. Jean-Philippe Thiran


6. Design of a semi-automatic image labelling pipeline for stroke image annotations and continuous learning.

Very large cohorts are needed to train machine learning models to a level of performance that is useful in real-world clinical settings. In biomedical imaging, there is a lack of such datasets because, on the one hand, privacy protection laws prevent hospitals from sharing clinical data, and on the other hand, there is a lack of image annotation by clinical experts.

In order to facilitate the annotation of large cohorts of data by trained experts, an initial semi-automatic labelling is usually performed to present a rough segmentation that the expert can then correct and refine. In this project, the student will implement such a framework to be used by a trained radiologist to annotate MRI data, specifically for stroke image segmentation. The student will be responsible for using already implemented stroke segmentation methods, together with workflow management tools such as Airflow, to create initial segmentation labels in a user-friendly way. In a second step, the student will integrate these initial labels into biomedical machine-learning frameworks for data labelling, such as MONAI Label and 3D Slicer.
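
The core of the initial-labelling step could look like the sketch below, assuming a pretrained stroke segmentation network is available (here as a hypothetical TorchScript file); the resulting NIfTI label map can then be loaded and corrected in 3D Slicer or MONAI Label. File names are placeholders.

```python
import nibabel as nib
import numpy as np
import torch

# Hypothetical pretrained stroke-segmentation network exported as TorchScript.
model = torch.jit.load("stroke_segmenter.pt").eval()

def make_initial_label(mri_path, out_path):
    """Run the pretrained model on an MRI volume and save a label map for expert correction."""
    img = nib.load(mri_path)
    vol = img.get_fdata().astype(np.float32)
    x = torch.from_numpy(vol)[None, None]                 # -> (batch, channel, D, H, W)
    with torch.no_grad():
        pred = torch.sigmoid(model(x))[0, 0].numpy()
    label = (pred > 0.5).astype(np.uint8)
    # Save with the original affine so the label overlays correctly in 3D Slicer / MONAI Label.
    nib.save(nib.Nifti1Image(label, img.affine), out_path)

make_initial_label("sub-001_dwi.nii.gz", "sub-001_initial_label.nii.gz")
```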

This work is a part of the Advanced Stroke Analytic Platform, which aims to develop next-generation clinical decision support tools for stroke imaging for Swiss hospitals and conduct a proof-of-concept between the Inselspital in Bern, the CHUV (Centre Hospitalier Universitaire Vaudois) in Lausanne and the scanner manufacturer Siemens Healthineers.

Requirements:

  • Experience with Python and machine learning libraries.
  • Interest and previous experience in image processing.

Supervisor: Prof. Jean-Philippe Thiran

Co-supervisors: Jonathan Rafael Patiño ([email protected]) and Jonas Richiardi (CHUV – [email protected]).


7. Biomedical imaging DataOps using Apache Airflow for MRI segmentation.

In medical image processing, data management is crucial to ensure the reproducibility of medical trials and experiments, as well as the correct handling of data privacy and integrity.

In data science, Apache Airflow is used for the scheduling and orchestration of data pipelines or workflows.  Orchestration of data pipelines refers to the sequencing, coordination, scheduling, and managing of complex data pipelines from diverse sources.

In this project, the student will work directly with pipelines for image preprocessing, segmentation and experimental setups for multi-institutional stroke imaging screenings. The student will develop and design the data orchestration setup to ensure experiment reproducibility, optimal data management and privacy-ensuring policies.
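
To give a flavour of the orchestration side, a minimal Airflow DAG chaining preprocessing, segmentation and quality control could look like the sketch below; the DAG name, task names and the steps themselves are illustrative placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def preprocess():
    print("skull stripping, registration, intensity normalisation ...")

def segment():
    print("run the stroke segmentation model ...")

def quality_check():
    print("compute QC metrics and store provenance ...")

with DAG(
    dag_id="stroke_mri_pipeline",       # illustrative pipeline name
    start_date=datetime(2023, 1, 1),
    schedule_interval=None,             # triggered manually per incoming study
    catchup=False,
) as dag:
    t_pre = PythonOperator(task_id="preprocess", python_callable=preprocess)
    t_seg = PythonOperator(task_id="segment", python_callable=segment)
    t_qc = PythonOperator(task_id="quality_check", python_callable=quality_check)

    # Orchestration: preprocessing must finish before segmentation, then quality control.
    t_pre >> t_seg >> t_qc
```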

This work is a part of the Advanced Stroke Analytic Platform, which aims to develop next-generation clinical decision support tools for stroke imaging for Swiss hospitals and conduct a proof-of-concept between the Inselspital in Bern, the CHUV (Centre Hospitalier Universitaire Vaudois) in Lausanne and the scanner manufacturer Siemens Healthineers.

Requirements:

  • Python knowledge.
  • Previous knowledge of image processing and docker/singularity containers.
  • Interest in medical imaging and applications.

Supervisor: Prof. Jean-Philippe Thiran

Co-supervisors: Jonathan Rafael Patiño ([email protected]) and Jonas Richiardi (CHUV – [email protected]).


8. Deep Learning Algorithms for Intra-Voxel Fiber Estimation in the developing brain
Diffusion-weighted magnetic resonance imaging (dMRI) is the method of choice to study the white matter tracts that connect different brain regions. Several models have been proposed to map the diffusion signal to tensors or fiber orientation distribution functions (FODs) [1], which are necessary for white matter reconstruction. Accurate estimation of FODs with existing methods requires a large number of measurements and hence long acquisition times, which is undesirable for newborns or fetuses. In our recent work [2], we proposed a deep learning method that learns to directly map the dMRI data of newborns from the developing Human Connectome Project (dHCP), acquired with few diffusion measurements, to the target FODs reconstructed using hundreds of measurements (see the attached figure).
The goals of this project are to (1) compare different state-of-the-art deep learning methods that predict FODs (e.g. [3], [4]) on the dHCP dataset, and (2) help design, train, and optimise neural networks to advance our technique [2].
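
For the comparison in goal (1), a common way to score predicted FODs against the dense-acquisition reference is the angular correlation coefficient computed on spherical-harmonic coefficients. A minimal NumPy version is sketched below; the SH ordering and the exclusion of the order-0 term are assumptions of the sketch.

```python
import numpy as np

def angular_correlation(sh_pred, sh_ref):
    """Angular correlation coefficient between two FODs given as SH coefficient vectors.

    Assumes the first coefficient is the order-0 (mean) term, which is excluded,
    as is common when comparing FOD shapes.
    """
    u, v = np.asarray(sh_pred)[1:], np.asarray(sh_ref)[1:]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

# Toy comparison of a "few-measurement" prediction against the dense reference.
rng = np.random.default_rng(0)
ref = rng.standard_normal(45)                 # SH order 8 -> 45 coefficients
pred = ref + 0.1 * rng.standard_normal(45)    # slightly perturbed prediction
print(f"ACC = {angular_correlation(pred, ref):.3f}")
```
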
Requirements: The project will be implemented in Python. It is ideal for a computer scientist, mathematician, physicist, or engineer interested in medical image processing, machine learning, signal processing, and optimization.
This project will be supervised by H. Kebiri, M. Bach Cuadra (CHUV-UNIL-CIBM) and Prof Thiran, also held in collaboration with the Computational Radiology Laboratory (CRL) of Boston Children’s Hospital and Harvard Medical School (Dr. Davood Karimi and Prof. Ali Gholipour).
[2] Deep learning estimation of fibre orientation distribution functions from few diffusion-weighted MRI measurements, Hamza Kebiri, Ali Gholipour, Davood Karimi and Meritxell Bach Cuadra. Submitted to the International Symposium on Biomedical Imaging (ISBI) 2023.

9. Unsupervised anomaly localization using a small training data regime

Anomaly detection and localization are the centerpieces of many safety-critical applications, ranging from manufacturing defect detection to medical image inspection. Given the highly diverse anomaly types, most existing methods follow a one-class classification setup: they model the distribution of normal samples and then identify abnormal ones as outliers. This project aims to formulate a new unsupervised framework for anomaly detection and localization trained on an extremely small number of normal instances. In particular, popular architectures, including CNNs and transformers, will be compared and analyzed. The project will also explore advanced self-supervised visual representation learning based on masked image modeling for anomaly detection.
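
As a concrete baseline for this one-class setup, anomaly maps can be obtained by comparing patch features of a test image against a small memory bank of normal patch features, in the spirit of memory-bank methods such as PatchCore. The sketch below uses a frozen ResNet feature extractor and random tensors as stand-ins for real data; it is an illustration, not the framework to be developed.

```python
import torch
import torchvision

# Frozen CNN feature extractor (in practice, ImageNet-pretrained weights would be used).
backbone = torchvision.models.resnet18(weights=None)
encoder = torch.nn.Sequential(*list(backbone.children())[:-2]).eval()  # spatial feature maps

@torch.no_grad()
def patch_features(images):
    """Return L2-normalised patch features of shape (n_patches, channels)."""
    feats = encoder(images)                              # (B, C, H, W)
    feats = feats.permute(0, 2, 3, 1).reshape(-1, feats.shape[1])
    return torch.nn.functional.normalize(feats, dim=1)

# "Memory bank" built from the few available normal images.
normal_images = torch.randn(8, 3, 224, 224)              # stand-in for the small normal set
memory_bank = patch_features(normal_images)

# Anomaly score per patch of a test image: distance to its nearest normal patch feature.
test_image = torch.randn(1, 3, 224, 224)
test_feats = patch_features(test_image)
distances = torch.cdist(test_feats, memory_bank).min(dim=1).values
anomaly_map = distances.reshape(7, 7)                    # coarse localisation map (upsample for display)
print("max patch anomaly score:", anomaly_map.max().item())
```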

Assistant: Dr Behzad Bozorgtabar ([email protected])

Supervisor: Prof. Jean-Philippe Thiran


10. Generative data augmentation of plastic anomalies in biodegradable waste

In Switzerland and Liechtenstein, more than 1.4 million cubic meters of compost is produced annually from recycled biogenic waste. This compost is often contaminated by foreign matter such as plastics and aluminium, which must be correctly identified and removed in order to meet a number of legal requirements for its further processing.

In the context of a project between industry and academia, combining talents from multiple fields and aiming to automate the detection of these impurities, we propose a project on synthetic data augmentation targeted at improving the accuracy of an instance segmentation model. The goal is to set up a system able to perform data augmentation by synthesizing new samples with deep learning techniques, for example Generative Adversarial Networks (GANs), so as to avoid repeating the costly process of acquiring images of real samples.

In this project, the student will investigate, study and implement state-of-the-art deep learning techniques for a very specific and concrete application of synthetic data creation. The student is expected to be familiar with Python and a common deep learning framework, such as PyTorch or TensorFlow.
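
As a minimal starting point (not a prescription of the final method), the sketch below shows one adversarial training step of a small DCGAN-style generator/discriminator pair on 32x32 patches; in the project, the real batch would come from crops of plastic fragments in the acquired images, and the architecture would be replaced by a state-of-the-art generative model.

```python
import torch
import torch.nn as nn

latent_dim = 100

# Minimal DCGAN-style generator (noise -> 32x32 RGB patch) and discriminator.
G = nn.Sequential(
    nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0), nn.ReLU(),   # 1x1 -> 4x4
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),           # 4x4 -> 8x8
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),            # 8x8 -> 16x16
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),             # 16x16 -> 32x32
)
D = nn.Sequential(
    nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),              # 32 -> 16
    nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),             # 16 -> 8
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),            # 8 -> 4
    nn.Conv2d(128, 1, 4, 1, 0),                                # 4 -> 1 (real/fake logit)
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

real = torch.rand(16, 3, 32, 32) * 2 - 1        # stand-in for cropped plastic-fragment patches
z = torch.randn(16, latent_dim, 1, 1)
fake = G(z)

# Discriminator step: real patches -> 1, generated patches -> 0.
d_loss = bce(D(real).flatten(), torch.ones(16)) + bce(D(fake.detach()).flatten(), torch.zeros(16))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to fool the discriminator.
g_loss = bce(D(fake).flatten(), torch.ones(16))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```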

Assistants: Alexandre Abbey ([email protected]) and Davide Nanni ([email protected])

Supervisor: Prof. Jean-Philippe Thiran


11. Water surface detection in laboratory-scale rivers through image processing

Reliable information about the spatial distribution of open surface water is critically important in various scientific disciplines, such as the assessment of present and future water resources, climate models, agricultural suitability, river dynamics, surface water survey and management, and flood mapping (Rokni et al., 2014). Several image processing techniques have been introduced in recent decades for the extraction of water features from images, first from satellite data (Du et al., 2012) and more recently from Unmanned Aerial Vehicle (UAV) sensors (Hashemi-Beni et al., 2018). However, although similar algorithms for water surface detection in laboratory setups must exist, none is publicly available. The Environmental Hydraulics Laboratory at EPFL is currently studying the evolution of rivers over time through experiments in a laboratory-scale river. The experimental setup includes several measurement tools based on image processing, and a camera has recently been installed to capture top-view images of the flume (see Figure 1).

These images are meant to provide a time-lapse of the river network for studying the time evolution of the stream. For this, it is necessary to distinguish the pixels that contain water from those that do not. The main objective of this project is to build a Python/Matlab code that obtains the water distribution for each image. Figure 1 below shows a schematic example: on the left, the raw unprocessed image; on the right, a manual drawing of the river network that the code should obtain automatically. The student working on this project should have prior knowledge of image processing. The technique used for the image segmentation will be chosen by the student based on the analysis of the case and their own expertise. The final report will describe the process and the techniques used to achieve the segmentation, and the student will also deliver the final code with explanatory comments. If you are interested in applying, please write an email to Clemente Gotelli ([email protected]).
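
A very simple classical baseline for the water/no-water segmentation could be HSV thresholding followed by morphological clean-up, as sketched below in Python with OpenCV; the colour bounds and file names are placeholders to be calibrated on the actual flume images (or replaced by a different technique chosen by the student).

```python
import cv2
import numpy as np

def water_mask(image_path, hsv_low=(90, 40, 40), hsv_high=(140, 255, 255)):
    """Return a binary mask of water pixels from a top-view flume image.

    The HSV bounds are illustrative (bluish/darker wet areas) and would need to be
    calibrated on the actual laboratory images, or replaced by a learned classifier.
    """
    img = cv2.imread(image_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
    # Morphological opening/closing to remove speckle and fill small gaps in the network.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask

mask = water_mask("flume_topview_0001.png")
cv2.imwrite("flume_topview_0001_water.png", mask)
```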

References

Du, Zhiqiang et al. (2012). “Estimating surface water area changes using time-series Landsat data in the Qingjiang River Basin, China”. In: Journal of Applied Remote Sensing 6.1, p. 063609.

Hashemi-Beni, Leila et al. (2018). “Challenges and Opportunities for UAV-Based Digital Elevation Model Generation for Flood-Risk Management: A Case of Princeville, North Carolina”. In: Sensors 18.11. doi: 10.3390/s18113843.

Rokni, Komeil et al. (2014). “Water Feature Extraction and Change Detection Using Multitemporal Landsat Imagery”. In: Remote Sensing 6.5, pp. 4173–4189. doi: 10.3390/rs6054173.



MASTER PROJECT PROPOSALS 

MEDICAL IMAGING PROJECTS

1. Spherical Deconvolution Algorithms for Intra-Voxel Fiber Estimation and Brain Connectivity Mapping
The LTS5 Diffusion group focuses on brain tissue microstructure and structural connectivity estimated from diffusion Magnetic Resonance Imaging (dMRI) data, with a particular focus on the reconstruction of the nerve fiber orientation distribution function (ODF) in each voxel (see the figure below). This information is essential for reconstructing the brain’s white matter with fiber tracking algorithms (see ref. [1]).

We have implemented various novel reconstruction algorithms (e.g., see refs. [2-5]) and we plan to develop a new generation of methods using machine learning techniques. The goals of this project are to: (1) create a large database of fiber ODFs and corresponding dMRI signals, (2) design, train, and optimize a neural network using this dataset, (3) predict the fiber ODFs from new dMRI data, and (4) compare the implemented algorithm with state-of-the-art techniques using both synthetic and real dMRI data acquired from human brains. The results will be published in international conferences and relevant journals.

Requirements: The project will be implemented in Python, so good knowledge of the language is required. This project is ideal for a computer scientist, mathematician, physicist, or engineer interested in medical imaging, machine learning, signal processing, and optimization.

References:
[1] https://www.sciencedirect.com/science/article/abs/pii/S1053811914003541

[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4607500
[3] https://www.sciencedirect.com/science/article/abs/pii/S1053811918307699
[4] https://onlinelibrary.wiley.com/doi/10.1002/mrm.21917

Supervisors: Dr. Erick J. Canales-Rodríguez ([email protected]), Dr. Gabriel Girard ([email protected]), and Prof. Jean-Philippe Thiran


2. Myelin Water Imaging Using T2 Relaxometry
Myelin is a lipid-rich substance that surrounds the axons in the brain and is essential for the proper functioning of the nervous system. Myelin water imaging is a magnetic resonance imaging (MRI) method that can be used to quantify and visualize myelination in the brain and spinal cord in vivo. The signal acquired with a multi-echo T2 relaxometry sequence can be decomposed into several components, including the one originating from water molecules trapped between the lipid bilayers of myelin. The correct estimation of this component provides a myelin-specific MRI biomarker to monitor changes in cerebral white matter. Myelin quantification has important implications for understanding various neurodegenerative diseases, including multiple sclerosis.
We are looking for a motivated student to (1) learn about the MRI and signal processing theory behind this modality, (2) improve the current estimation methods, and (3) test and compare the new results with the current methods and histological measurements.
The project builds on top of previous cutting-edge research carried out in our lab (for more details see our multi-component T2 reconstruction toolbox and related references: https://github.com/ejcanalesr/multicomponent-T2-toolbox). The results will be published in international conferences and relevant journals.

Requirements: The project will be implemented in Python, so good knowledge of the language is required. This project is ideal for a computer scientist, mathematician, physicist, or engineer interested in medical imaging, optimization, and signal processing.

Supervisors: Dr. Erick J. Canales-Rodríguez ([email protected]) and Prof. Jean-Philippe Thiran


3. Preprocessing steps for cervical cancer detection

Cervical cancer is a major public health concern around the world, in high-income as well as low- and middle-income settings. In collaboration with the Geneva University Hospitals (HUG) and the Dschang District Hospital in Cameroon, we aim to implement a smartphone-based solution that automatically detects cervical cancer from videos of the cervix using deep neural networks.

This project focuses on the preprocessing steps required before image classification. The goal is to improve the quality assessment of the videos, with a particular focus on movement detection, blur, and luminosity changes.

Requirements: Basic knowledge of image processing and deep learning. Fluency in Python and PyTorch.

Assistants: Magali Cattin ([email protected]) and Roser Vinals Terres ([email protected])

Supervisor: Prof. Jean-Philippe Thiran


4. Cervical cancer classification

Cervical cancer is a major public health concern around the world, in high-income as well as low- and middle-income settings. In collaboration with the Geneva University Hospitals (HUG) and the Dschang District Hospital in Cameroon, we aim to implement a smartphone-based solution that automatically detects cervical cancer from videos of the cervix using deep neural networks.

Visual inspection with acetic acid is a common method used to detect cervical cancer. It consists of applying diluted acetic acid on the cervix, which acts as a contrast agent: the different types of tissue (and potential lesions) whiten at different rates and reach different intensities.

A deep learning model was trained to classify static images of the cervix taken approximately 1 minute after application of the acetic acid. This project first aims to evaluate its performance on images taken at different times after acetic acid application, in order to identify the optimal frame. A secondary objective is to improve the robustness of the model, e.g. by training it with images taken at different times.

Requirements: Basic knowledge of deep learning. Fluency in Python and PyTorch.

Assistants: Magali Cattin ([email protected]) and Roser Vinals Terres ([email protected])

Supervisor: Prof. Jean-Philippe Thiran


5. Improving image reconstruction for ultrasound ultrafast imaging

Ultrasound imaging is one of the safest, cheapest, and most widely accessible imaging modalities used in medical diagnosis. In particular, a technique called ultrafast ultrasound achieves very high frame rates and is used, for instance, to analyse tissue displacements. However, the image quality achieved by this technique is low.

A deep-learning based image reconstruction technique is being developed to improve the image quality of ultrafast images. This project aims to further improve the current reconstruction technique and adapt it to create a real-time image reconstruction pipeline.

Requirements: Signal processing and deep learning. Fluency in Python and PyTorch.

Assistant: Roser Vinals Terres ([email protected])

Supervisor: Prof. Jean-Philippe Thiran


6. Multi-modal MRI segmentation with deep learning for acute stroke – collaboration with CHUV

In clinical practice, robust methods for the automatic segmentation of the infarcted core in MRI are essential for selecting the correct stroke treatment and improving patient outcomes. State-of-the-art segmentation networks use single-modality MRI inputs to delineate infarcted brain regions. In this project, we will investigate the further development of such networks using multi-channel data, composed of multiple MRI modalities stacked in a single 4D volume, to train novel multi-channel 3D segmentation networks on a cohort of patients undergoing imaging for acute stroke. The student will be responsible for the implementation, training, and interpretation of the results of several state-of-the-art segmentation networks.
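
A possible starting point for the multi-channel setup is sketched below using MONAI's 3D U-Net, where the MRI modalities are simply stacked along the channel dimension; the choice of four modalities, the network sizes and the patch size are illustrative assumptions, not the project specification.

```python
import torch
from monai.networks.nets import UNet
from monai.losses import DiceLoss

# Hypothetical example: four co-registered MRI modalities stacked as channels of one 4D volume.
N_MODALITIES = 4

model = UNet(
    spatial_dims=3,
    in_channels=N_MODALITIES,      # multi-channel input instead of a single modality
    out_channels=2,                # background / infarcted core
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
    num_res_units=2,
)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a synthetic multi-channel patch.
x = torch.randn(1, N_MODALITIES, 64, 64, 64)
y = torch.randint(0, 2, (1, 1, 64, 64, 64)).float()
loss = loss_fn(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```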

This work is a part of the Advanced Stroke Analytic Platform, which aims to develop next-generation clinical decision support tools for stroke imaging for Swiss hospitals and conduct a proof-of-concept between the Inselspital in Bern, the CHUV (Centre Hospitalier Universitaire Vaudois) in Lausanne and the scanner manufacturer Siemens Healthineers.

Requirements:

  • Experience with Python and machine learning libraries (PyTorch, TensorFlow + Keras, scikit-learn).
  • Interest and previous experience in image processing.
  • Interest in medical imaging and applications.

Supervisor: Prof. Jean-Philippe Thiran

Co-supervisors: Jonathan Rafael Patiño ([email protected]) and Jonas Richiardi (CHUV – [email protected]).


7. Federated Machine Learning for Multi-center stroke image segmentation – collaboration with CHUV

Deep learning models depend heavily on the amount and quality of the data used to train them; hence, to achieve performance applicable in real-world clinical settings, very large cohorts are required. Such datasets are not available in biomedical imaging due to, on the one hand, privacy protection laws that prevent hospitals from sharing clinical data and, on the other hand, the heterogeneity of the data caused by scanner diversity and the lack of standards in MRI acquisition protocols. The adoption of a federated algorithm would enable institutions to collaborate without jeopardising patients’ privacy and would increase the amount of data available for training the model. The latest federated frameworks also incorporate modules that mitigate the detrimental effect of data heterogeneity and its unequal distribution across sites, facilitating the convergence of the global model and effectively increasing its accuracy.

In this project, the student will test previously implemented machine learning methods for stroke lesion segmentation in a federated cross-silo scenario and implement new aggregation techniques. This will enable the aggregation of data from multiple institutions, scanners and patients. Such data presents a challenging scenario for the segmentation network due to its heterogeneity, and thus ad-hoc network harmonization techniques will be tested alongside improved aggregation methods.
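
For reference, the basic cross-silo aggregation step (weighted federated averaging of the sites' model parameters) can be sketched as below; the site weights and the tiny stand-in model are placeholders, and the project would replace this with the segmentation network and more advanced aggregation schemes.

```python
import copy
import torch

def federated_average(site_state_dicts, site_weights):
    """Weighted FedAvg aggregation of model parameters from several sites (silos)."""
    total = sum(site_weights)
    global_state = copy.deepcopy(site_state_dicts[0])
    for key in global_state:
        global_state[key] = sum(
            (w / total) * sd[key].float() for sd, w in zip(site_state_dicts, site_weights)
        )
    return global_state

# Toy example: three hospitals with different numbers of annotated cases.
model = torch.nn.Linear(10, 2)                       # stand-in for the segmentation network
local_models = [copy.deepcopy(model) for _ in range(3)]
# ... each local model would be trained on its own private data here ...
new_state = federated_average(
    [m.state_dict() for m in local_models], site_weights=[120, 45, 80]
)
model.load_state_dict(new_state)                     # updated global model sent back to the sites
```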

This work is a part of the Advanced Stroke Analytic Platform, which aims to develop next-generation clinical decision support tools for stroke imaging for Swiss hospitals and conduct a proof-of-concept between the Inselspital in Bern, the CHUV (Centre Hospitalier Universitaire Vaudois) in Lausanne and the scanner manufacturer Siemens Healthineers.

 Requirements:

  • Experience with Python and machine learning libraries (PyTorch, TensorFlow + Keras, scikit-learn).
  • Experience implementing deep convolutional networks and custom layers.
  • Interest and previous experience in image processing.
  • Interest in medical imaging and applications.

Supervisor: Prof. Jean-Philippe Thiran

Co-supervisors: Jonathan Rafael Patiño ([email protected]) and Jonas Richiardi (CHUV – [email protected]).


8. Deep learning based shape analysis of cardiac biventricular meshes – collaboration with CHUV

The goal of the project is to make the student familiar with current trends in cardiac medical imaging analysis. Cardiac motion and shape analysis have proved useful to characterize differences between clinical diseases [1,2]. However, the pipeline to obtain suitable meshes from cardiac magnetic resonance images is relatively complex, and the displacement estimation relies on single image modalities with a single point of view. Recent work has focused on the integration of multiple cardiac image modalities covering different points of view to jointly predict mesh displacements throughout the cardiac cycle [3]. Nevertheless, how useful mesh descriptors can be in large-scale datasets remains relatively unexplored [2]. In this project, the student will use state-of-the-art deep learning approaches based on differential geometry and graph neural networks to explore the potential of shape descriptors to stratify subjects by clinical diagnosis in a large-scale cohort, the UK Biobank, containing thousands of cardiac images.

Therefore, the goals are to 1) apply existing meshing methods to reliably obtain biventricular meshes [4,5] from multi-structure segmentations obtained in previous works based on convolutional neural networks (CNNs) [6,7], 2) generate a statistical shape atlas to study the main modes of variation [8], and 3) explore the use of state-of-the-art shape analysis tools to characterize the cardiac shape of each subject [9].
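
As a minimal illustration of goal 1), a surface mesh can be extracted from a (here synthetic) ventricular segmentation with marching cubes [5], as sketched below; in the project, the input would be the CNN-based multi-structure segmentation, and the resulting vertices and faces would feed the atlas-building and shape-analysis steps.

```python
import numpy as np
from skimage import measure

# Toy binary "ventricle" mask: a sphere inside a 64^3 volume (stand-in for a CNN segmentation).
zz, yy, xx = np.mgrid[:64, :64, :64]
mask = ((xx - 32) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2) < 20 ** 2

# Extract a triangular surface mesh from the segmentation with marching cubes.
verts, faces, normals, values = measure.marching_cubes(mask.astype(np.float32), level=0.5)
print(f"mesh with {verts.shape[0]} vertices and {faces.shape[0]} faces")

# The (verts, faces) pair could then be registered across subjects to build a statistical
# shape atlas, or fed to mesh-based learning tools such as graph neural networks.
```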

The project will provide valuable input to an ongoing research effort between Lausanne and Geneva in integrative characterisation of heart failure, and therefore has the potential to contribute to advancing medical science and ultimately benefit patients with cardiac pathologies.

References:

[1]:  Mansi, T., et al..: A statistical model for quantification and prediction of cardiac remodelling: Application to Tetralogy of Fallot. IEEE Trans Med Imaging

[2]: Bello, G. A.,et al. (2019). Deep-learning cardiac motion analysis for human survival prediction. Nature Machine Intelligence

[3]: Meng, Q., Bai, W., Liu, T., O’Regan, D. P., & Rueckert, D. (2022). Mesh-Based 3D Motion Tracking in Cardiac MRI Using Deep Learning, MICCAI 2022

[4]: Wickramasinghe, U., Remelli, E., Knott, G., & Fua, P. (2020). Voxel2Mesh: 3D Mesh Model Generation from Volumetric Data. MICCAI 2020

[5]: William E. Lorensen and Harvey E. Cline. 1987. Marching cubes: A high resolution 3D surface construction algorithm. SIGGRAPH Comput. Graph. 21, 4 (July 1987)

[6]: Bai, W., et al. (2018). Automated cardiovascular magnetic resonance image analysis with fully convolutional networks. Information and Computing Sciences. Artificial Intelligence and Image Processing. Journal of Cardiovascular Magnetic Resonance, 20(1).

[7]: Byrne, N., Clough, J. R., Valverde, I., Montana, G., & King, A. P. (2022). A persistent homology-based topological loss for CNN-based multi-class segmentation of CMR. IEEE Transactions on Medical Imaging.

[8]: Bai, W., et al. (2015). A bi-ventricular cardiac atlas built from 1000+ high resolution MR images of healthy subjects and an analysis of shape and motion. Medical Image Analysis

[9]:Sharp, N., Attaiki, S., Crane, K., & Ovsjanikov, M. (2022). DiffusionNet: Discretization Agnostic Learning on Surfaces. ACM Transactions on Graphics, 41(3), 1–16.

Requirements: The project will be implemented in Python. Good knowledge of PyTorch as well as familiarity with deep learning are desirable.

Supervisor:

Prof. Jean-Philippe Thiran (EPFL-LTS5)

Co-supervisors:

Dr. Jaume Banus Cobo CHUV-Translational Machine Learning Lab ([email protected])

Dr. Jonas Richiardi CHUV-Translational Machine Learning Lab ([email protected])


9. Deep learning super-resolution reconstruction (SRR) for fetal brain MRI – collaboration with CHUV-UNIL

Fetal brain magnetic resonance imaging (MRI) is a challenging imaging setting, due to the small size of the brain, its rapid changes throughout gestation and unpredictable fetal motion. As a result, state-of-the-art approaches typically require motion correction as well as super-resolution reconstruction. Recently, several deep learning-based approaches have been proposed for these steps [1, 2, 3]. These approaches are either self-supervised or rely on simulated data, as no ground-truth data are available.
In this project, we are looking for a motivated student to (1) learn about the pipeline for fetal brain MRI SRR; (2) experiment with training DL models using various simulation strategies; and (3) test and compare the generalization and robustness of these trained models on clinical acquisitions. We aim at publishing the results in international conferences and relevant journals.
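
To make the simulation idea concrete, a heavily simplified forward model that turns an isotropic high-resolution volume into a thick-slice, noisy low-resolution stack is sketched below; real simulation strategies would also model inter-slice motion, bias fields and the actual slice profile.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_lr_stack(hr_volume, slice_axis=2, slice_thickness=4, noise_std=0.02):
    """Simulate a thick-slice, noisy low-resolution stack from an isotropic HR volume.

    Very simplified forward model (slice-profile blur + subsampling + Gaussian noise).
    """
    sigma = [0.0, 0.0, 0.0]
    sigma[slice_axis] = slice_thickness / 2.355          # FWHM -> Gaussian sigma
    blurred = gaussian_filter(hr_volume, sigma=sigma)
    slicer = [slice(None)] * 3
    slicer[slice_axis] = slice(None, None, slice_thickness)
    lr = blurred[tuple(slicer)]
    return lr + noise_std * np.random.randn(*lr.shape)

hr = np.random.rand(128, 128, 128).astype(np.float32)   # stand-in for a reconstructed HR volume
lr_stack = simulate_lr_stack(hr)
print("LR stack shape:", lr_stack.shape)                 # (128, 128, 32)
```
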
Requirements: A good knowledge of Python and PyTorch, familiarity with deep learning.
Supervisors: Dr. Thomas Sanchez ([email protected]), Dr. Meritxell Bach Cuadra ([email protected]), and Prof. Jean-Philippe Thiran

10. Deep Learning Algorithms for Intra-Voxel Fiber Estimation in the developing brain – collaboration with CHUV-UNIL
Diffusion-weighted magnetic resonance imaging (dMRI) is the method of choice to study the white matter tracts that connect different brain regions. Several models have been proposed to map the diffusion signal to tensors or fiber orientation distribution functions (FODs) [1], which are necessary for white matter reconstruction. Accurate estimation of FODs with existing methods requires a large number of measurements and hence long acquisition times, which is undesirable for newborns or fetuses. In our recent work [2], we proposed a deep learning method that learns to directly map the dMRI data of newborns from the developing Human Connectome Project (dHCP), acquired with few diffusion measurements, to the target FODs reconstructed using hundreds of measurements (see the attached figure).
The goals of this project are to (1) compare different state-of-the-art deep learning methods that predict FODs (e.g. [3], [4]) on the dHCP dataset, and (2) help design, train, and optimise neural networks to advance our technique [2].
Requirements: The project will be implemented in Python. It is ideal for a computer scientist, mathematician, physicist, or engineer interested in medical image processing, machine learning, signal processing, and optimization.
This project will be supervised by H. Kebiri, M. Bach Cuadra (CHUV-UNIL-CIBM) and Prof Thiran, also held in collaboration with the Computational Radiology Laboratory (CRL) of Boston Children’s Hospital and Harvard Medical School (Dr. Davood Karimi and Prof. Ali Gholipour).
[2] Deep learning estimation of fibre orientation distribution functions from few diffusion-weighted MRI measurements, Hamza Kebiri, Ali Gholipour, Davood Karimi and Meritxell Bach Cuadra. Submitted to the International Symposium on Biomedical Imaging (ISBI) 2023.

11. Denoising in Diffusion MR imaging – master project at CHUV

Diffusion MRI is a powerful tool to quantify biological tissue microstructure in vivo and non-invasively. However, diffusion MRI signal analysis is notoriously hampered by the low signal-to-noise ratio (SNR) of heavily diffusion-weighted images, where the signal is substantially attenuated. Among the denoising techniques proposed over the years, Marchenko-Pastur Principal Component Analysis (MP-PCA) denoising has been the most promising (Veraart et al., NeuroImage 2016; Moeller et al., NeuroImage 2021). In order to meet the underlying assumption of Gaussian noise and avoid interference from the Rician bias, diffusion MRI data are best denoised in the complex-valued domain rather than in magnitude space. Phase maps from diffusion-weighted datasets are, however, heavily affected by fluctuations due to the diffusion weighting itself.

In this project, we propose to develop a robust pipeline for pre-processing the magnitude and phase images to enable reliable MP-PCA denoising of complex-valued diffusion MRI data, and thus provide a dramatic boost in SNR. The test data for this project will then be used to quantify brain gray matter microstructure using a new biophysical model (NEXI, Jelescu et al., NeuroImage 2022) with high accuracy and precision of parameter estimates.
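
A rough sketch of the intended processing (background-phase removal followed by MP-PCA denoising, here via DIPY's implementation) is given below; the smoothing-based phase estimation and the toy data are simplifying assumptions for illustration, not the pipeline to be developed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from dipy.denoise.localpca import mppca

def phase_corrected_real(magnitude, phase, sigma=2.0):
    """Remove the slowly-varying background phase and return real-valued data.

    The background phase is estimated by low-pass filtering the complex image, so that
    after correction the signal lies (mostly) along the real axis and the noise stays Gaussian.
    """
    cplx = magnitude * np.exp(1j * phase)
    spatial_sigma = (sigma, sigma, sigma, 0)             # smooth only spatially, per diffusion volume
    background = gaussian_filter(cplx.real, spatial_sigma) + 1j * gaussian_filter(cplx.imag, spatial_sigma)
    corrected = cplx * np.exp(-1j * np.angle(background))
    return corrected.real

# Toy 4D dataset (x, y, z, diffusion volumes); in practice these come from the scanner.
mag = np.abs(np.random.randn(32, 32, 16, 30)) + 1.0
pha = np.random.uniform(-np.pi, np.pi, size=mag.shape)

real_data = phase_corrected_real(mag, pha)
denoised = mppca(real_data, patch_radius=2)              # Marchenko-Pastur PCA denoising (DIPY)
print("denoised shape:", denoised.shape)
```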

Contact : Prof. Ileana Jelescu ([email protected])


12. Hyperparameter optimization and personalized cross-silo federated deep learning for brain abnormalities detection – master project in industry, in collaboration with CHUV and SIEMENS HEALTHINEERS

Fully automated deep learning detection of abnormalities in the brain (see the figure on the right for an example of cerebral microbleed detection) depends heavily on the amount and diversity of the MRI training images. This is particularly challenging when the required training data are not available at a single institution due to a limited number of patients or of labeled data. Under the current General Data Protection Regulation and similar policies, even though many medical institutions hold annotated datasets, these cannot be shared directly, as medical data is highly privacy-sensitive not only in Europe but all over the world. Therefore, global federated learning frameworks have been proposed to exploit the knowledge in the local labeled data of different healthcare institutions to enhance a global segmentation model. These institutions do not need to share their original medical image data: they correct the model predictions, train the segmentation model locally, and periodically upload the model parameters to a global server. The server aggregates the contributions from the individual sites to generate a global model and then distributes the new model to all institutions. The local institutions receive the global model from the server and continue to train and update it using newly collected local patient MRI images. This process can be iterated until the model reaches a satisfactory performance.

While this technology is slowly maturing, with several competing federated learning frameworks, many issues remain to make this approach more effective, in particular across institutions with heterogeneous imaging protocols (data shift). In this project, we will build on ongoing federated learning research between the Lausanne University Hospital and the Siemens team. Using synthetically generated labels (microbleeds), as well as real multi-site data (stroke), we will specifically investigate heterogeneous data and domain shift in four different cross-silo federated learning scenarios: data shift with label shift, data shift with no label shift, no data shift but label shift, and no data or label shift. The task will be segmentation in both cases, using either a vanilla 2D U-Net or a 3D U-Net.

In these domain shift scenarios, we will experiment with the balance between global models and personalized (site-specific) models using first a distributed hyper-parameter optimization approach, and then personalized federated learning (pFL) techniques. We will leverage existing metrics for data and label heterogeneity (SSIM, perceptual metrics, distribution parameters), as well as develop new problem-specific ones, and relate them empirically to segmentation performance depending on hyperparameters of the federation, including balance between local and global learning rates, site-specific learning rates, and site weightings during aggregation. Then, similarly to the Flamby framework [Ogier du Terrail et al., NeurIPS 2022] we will benchmark this baseline approach (well-tuned global models) with respect to several alternatives to the classic federated average algorithms, including algorithms focusing on heterogeneity such as FedProx [Li et al PMLR 2020] and pFL algorithms such as Per-FedAvg [Fallah et al., NeurIPS 2020].
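
To illustrate one of the heterogeneity-aware alternatives mentioned above, the sketch below shows a local FedProx-style training step, where a proximal term keeps each site's model close to the current global model; the toy classifier and the value of mu are placeholders, and in the project the model would be the 2D/3D U-Net.

```python
import torch
import torch.nn as nn

def fedprox_local_step(model, global_model, batch, optimizer, mu=0.01):
    """One local training step with a FedProx-style proximal term.

    The extra term mu/2 * ||w - w_global||^2 keeps each site's update close to the
    current global model, which helps under data/label shift across sites.
    """
    x, y = batch
    loss = nn.functional.cross_entropy(model(x), y)
    prox = sum(
        ((p - g.detach()) ** 2).sum()
        for p, g in zip(model.parameters(), global_model.parameters())
    )
    total = loss + 0.5 * mu * prox
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()

# Toy usage with a tiny classifier standing in for the segmentation network.
global_model = nn.Linear(16, 2)
local_model = nn.Linear(16, 2)
local_model.load_state_dict(global_model.state_dict())
opt = torch.optim.SGD(local_model.parameters(), lr=0.1)
batch = (torch.randn(8, 16), torch.randint(0, 2, (8,)))
print("local loss:", fedprox_local_step(local_model, global_model, batch, opt))
```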

The project is a great opportunity to learn about state-of-the-art techniques in federated learning for heterogeneous data in a project with real clinical impact. It requires experience with Python and PyTorch; experience with medical imaging and the MONAI framework is a plus.

Supervisor

Prof. Jean-Philippe Thiran (EPFL-LTS5)

Co-supervisors

Dr. Jonas Richiardi (CENTRE HOSPITALIER UNIVERSITAIRE VAUDOIS)

Dr Jonathan Disselhorst, Dr. Bénédicte Maréchal (SIEMENS HEALTHINEERS)


13. Assessing brain disconnectivity in acute ischemic stroke in a multicentric study – master project in industry, in collaboration with CHUV and SIEMENS HEALTHINEERS

Ischemic stroke is a highly prevalent disease typically caused by a blood clot blocking an artery in the brain that can lead to lasting brain damage, long-term disability or even death.

Possible treatments for ischemic stroke include thrombectomy, an endovascular surgical procedure aimed at revascularizing the affected brain areas. As thrombectomy is a delicate procedure, clinicians have to carefully weigh the risks and benefits for the patient, based on radiological information and clinical symptoms. In this context, the location of the stroke within the brain plays an important role since different locations may disrupt different neural connections, and thus brain functions.

In this project, we propose to study the impact of stroke on structural brain connectivity using a connectivity atlas. Brain graphs will be used to model the disrupted brain connectivity and to extract features to be correlated with clinical symptoms and patient outcome (prognosis). This will be done by characterising the structural connectivity of the brain using image processing methods as well as graph-theoretical methods. The role of the student will be to adapt a method previously developed for multiple sclerosis [Ravano et al., Neuroimage: Clinical, 2021] and to study its utility in ischemic stroke. Further methodological developments will include the use of multiplex graphs from longitudinal data and graph convolutional neural networks for clinical predictions.
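
A toy version of the graph-based analysis is sketched below: a structural connectome is represented as a weighted graph, edges touching (hypothetically) lesioned regions are removed, and a graph-theoretical feature such as global efficiency is compared before and after; the connectome and the affected regions are random stand-ins for atlas-based data.

```python
import networkx as nx
import numpy as np

# Toy structural connectome: 10 regions with random connection weights (stand-in for an atlas).
rng = np.random.default_rng(0)
adjacency = np.triu(rng.random((10, 10)), k=1)
healthy = nx.from_numpy_array(adjacency)

# Assume the stroke lesion disrupts the tracts connected to regions 2 and 5 (illustrative).
lesioned = healthy.copy()
lesioned.remove_edges_from(list(healthy.edges(2)) + list(healthy.edges(5)))

# Graph-theoretical features that could be correlated with symptoms and outcome.
for name, g in [("healthy", healthy), ("lesioned", lesioned)]:
    print(name, "global efficiency:", round(nx.global_efficiency(g), 3))
```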

This master thesis is part of the Advanced Stroke Analysis Platform (ASAP), a project in collaboration with the CHUV hospital (Lausanne) and the Inselspital (Bern), with retrospective data available from more than 2000 patients. The student will work in an interdisciplinary environment, in close contact with MRI experts and clinicians, and with a clear clinical goal motivating the project. Previous experience with Python coding and image processing is required.

Company Information 
Siemens Healthineers International AG
Advanced Clinical Imaging Technology
EPFL QI-E, 1015 Lausanne, Switzerland.

Contact Information 
Dr Tobias Kober, email: [email protected]



COMPUTER VISION PROJECTS

14. Unsupervised anomaly localization using a small training data regime

Anomaly detection and localization are the centerpieces of many safety-critical applications, ranging from manufacturing defect detection to medical image inspection. Given the highly diverse anomaly types, most existing methods follow a one-class classification setup: they model the distribution of normal samples and then identify abnormal ones as outliers. This project aims to formulate a new unsupervised framework for anomaly detection and localization trained on an extremely small number of normal instances. In particular, popular architectures, including CNNs and transformers, will be compared and analyzed. The project will also explore advanced self-supervised visual representation learning based on masked image modeling for anomaly detection.

Assistant: Dr Behzad Bozorgtabar ([email protected])

Supervisor: Prof. Jean-Philippe Thiran


15. Generative data augmentation of plastic anomalies in biodegradable waste

In Switzerland and Liechtenstein, more than 1.4 million cubic meters of compost is produced annually from recycled biogenic waste. This compost is often contaminated by foreign matter such as plastics and aluminium, which must be correctly identified and removed in order to meet a number of legal requirements for its further processing.

In the context of a project between industry and academia, combining talents from multiple fields and aiming to automate the detection of these impurities, we propose a project on synthetic data augmentation targeted at improving the accuracy of an instance segmentation model. The goal is to set up a system able to perform data augmentation by synthesizing new samples with deep learning techniques, for example Generative Adversarial Networks (GANs), so as to avoid repeating the costly process of acquiring images of real samples.

In this project, the student will investigate, study and implement state-of-the-art deep learning techniques for a very specific and concrete application of synthetic data creation. The student is expected to be familiar with Python and a common deep learning framework, such as PyTorch or TensorFlow.

Assistants: Alexandre Abbey ([email protected]) and Davide Nanni ([email protected])

Supervisor: Prof. Jean-Philippe Thiran


16. Fall detection using machine learning – master project in industry at Gets MSS SA (Lausanne)

At Gets MSS SA, we aim to make the work of caregivers easier while improving the security of patients and elderly people. To achieve this, we develop and manufacture nursecall systems that allow caregivers to be alerted when a patient needs help.

Falls are a problem when a person is too weak to get up again or to move around alone. Therefore, to be able to raise an alert when a person has fallen, we would like to develop a fall detection system. We also aim to detect when a person gets out of bed, in order to prevent falls.

We need you for:

  • Review existing technologies for detecting such falls, for example radar, LiDAR, depth cameras, Kinect, etc.
  • Identify the advantages and disadvantages of each technology.
  • Develop a prototype using the chosen technology, image processing and machine learning.

We would like to create this product through an embedded system that will respect privacy.

Gets MSS SA is a small company of about 30 employees with a family atmosphere. Thanks to its strong development team, it has become the leader in the Swiss nursecall market, staying at the cutting edge of innovation.

Contact : [email protected] and Prof. J.-Ph. Thiran