Ongoing Student Projects

The following projects are currently being pursued by students in our lab and are therefore no longer available. They are published for reference and inspiration.

End-to-end auditing of decentralized learning

Contact: Martijn de Vos <[email protected]>

Motivation

Decentralized learning involves local training of models at nodes, followed by aggregation. Nodes may deviate from honest behavior and attempt to disrupt the training process or poison the learning. Detecting this malicious behavior in decentralized learning is very difficult. One possible approach is to verify the learning process after the fact when malicious activity is suspected. This, however, entails many challenges on the trust, data transfer, communication, storage, and computation fronts. We would like to develop and study a system that can verify a learning process and identify the culprit node in the event of suspicion.

Components of the system

  1. An optimistic mechanism to suspect malicious behavior in the learning process based on the current state.
  2. A data structure (for example, a computational graph) on a trusted party (say, a server) which records, like a tape, the entire learning process: nodes being the states and inputs, and edges denoting the transitions (gradient descent and averaging).
  3. A communication-efficient and trustworthy way of exchanging data between the learning parties and the server for future verification.
  4. An incentive or stake mechanism for learning parties to behave honestly, reducing the chances of invoking the verification system.

The intent is to design this verification system with the trusted server code executing in a Trusted Execution Environment and to compare it against alternatives such as Homomorphic Encryption. This project is intended as one or two independent master semester projects or a master thesis project.
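The tape-like record (component 2) could be prototyped as a hash-chained log, so that any tampering with the recorded learning process is detectable on replay. The sketch below is illustrative only; all names are hypothetical:

```python
import hashlib
import json

class TrainingLog:
    """Hash-chained record of a learning process: each entry stores one
    state transition of a node and is linked to the previous entry, so
    tampering with any entry is detectable when the chain is replayed."""

    def __init__(self):
        self.entries = []
        self.last_hash = "0" * 64  # genesis link

    def record(self, node_id, transition, state_digest):
        """Append one transition (e.g. 'sgd_step' or 'average') for a node."""
        entry = {
            "node": node_id,
            "transition": transition,   # edge label in the computational graph
            "state": state_digest,      # digest of the resulting model state
            "prev": self.last_hash,
        }
        self.last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self.last_hash
        self.entries.append(entry)

    def verify(self):
        """Replay the chain and check every link; returns the index of the
        first inconsistent entry, or None if the whole log is intact."""
        prev = "0" * 64
        for i, e in enumerate(self.entries):
            if e["prev"] != prev:
                return i
            body = {k: e[k] for k in ("node", "transition", "state", "prev")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != e["hash"]:
                return i
            prev = e["hash"]
        return None
```

In a full design, the log would live on the trusted server (inside the TEE), with nodes submitting digests of their states and inputs each round.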

Must-haves

  1. Python programming experience
  2. Knowledge of Machine Learning

Good to know

  1. Computer networks
  2. Experience with Pytorch
  3. Calculus
  4. Concurrency

Asynchronous Decentralized Learning

Contact: Martijn de Vos <[email protected]>

Decentralized Learning (DL) is a relatively new class of ML algorithms where the learning process takes place on a network of interconnected devices with no central server that supervises the training. In standard DL algorithms such as D-SGD, each device in the network independently updates its own model in each round based on the data available locally, sends its model to some neighbours, and merges all models received in that round with its own model. Since training progresses in discrete rounds, these algorithms are synchronous and therefore require synchronisation amongst all processes to determine when to move to the next round. Slow nodes, or stragglers, can thus significantly prolong model training.
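A single synchronous D-SGD round, as described above, can be sketched for one node (illustrative only; `grad_fn` and the uniform-averaging merge are assumptions):

```python
import numpy as np

def dsgd_round(model, grad_fn, neighbour_models, lr=0.1):
    """One synchronous D-SGD round from one node's perspective:
    a local gradient step on local data, followed by uniform
    averaging with the models received from neighbours this round."""
    model = model - lr * grad_fn(model)        # local update
    stack = [model] + list(neighbour_models)   # own + received models
    return np.mean(stack, axis=0)              # merge by averaging
```

The synchronisation cost is implicit here: `neighbour_models` must contain every neighbour's round-t model before the merge can run, which is exactly where stragglers hurt.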

There have been a few proposals for asynchronous DL algorithms. Nodes in such algorithms make individual progress and usually do not have to wait for other nodes. While this avoids global synchronisation, one has to handle the situation where some fast nodes get ahead of other nodes, impacting model convergence. To address this, asynchronous algorithms such as Gossip Learning merge received models based on model age.
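The age-based merge could look like the following sketch (the exact weighting used by Gossip Learning variants differs; this assumes a simple convex combination weighted by model age):

```python
def age_weighted_merge(own_model, own_age, recv_model, recv_age):
    """Merge a received model into the local one, weighting by model age:
    a model that has seen more training steps (higher age) gets more
    weight, so stale models cannot drag the local model backwards."""
    total = own_age + recv_age
    if total == 0:
        return list(own_model), 0  # nothing trained yet, keep own model
    w = recv_age / total
    merged = [(1 - w) * a + w * b for a, b in zip(own_model, recv_model)]
    return merged, max(own_age, recv_age)
```

A convergence analysis for this project would need to bound how such staleness-dependent weights affect consensus across the network.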

The goal of this project is to design, implement, and empirically analyse the performance of asynchronous DL algorithms. A theoretical contribution in this project could, for example, be a convergence proof, showing that asynchronous DL algorithms result in a consensus model, even with varying levels of model staleness during training. Another focus can be on the real-world performance, resource usage and convergence speed of such algorithms.

A Comparative Evaluation of Decentralized Learning Algorithms using Realistic Real-world Traces

Contact: Martijn de Vos <[email protected]>

Decentralized Learning (DL) has gained significant attention in recent years due to its potential to enhance the privacy, fault tolerance, and scalability of machine learning compared to centralized settings. Various algorithms have been proposed for DL, such as Decentralized Stochastic Gradient Descent (D-SGD), AllReduce-SGD (in which nodes are connected in a ring topology and use AllReduce to average their models), Asynchronous Distributed Parallel Stochastic Gradient Descent (AD-PSGD) and Gossip Learning (GL). While these algorithms all share the same objective – collaboratively train a model without sharing data – they make different trade-offs, e.g., whether there is round synchronisation and how models are averaged across peers. To better understand the trade-offs of these algorithms, a comprehensive experimental evaluation is needed. In this project, we propose a detailed comparative analysis of these DL algorithms using real-world traces that capture compute power, network capacity, data heterogeneity, and node availability in realistic FL settings.
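The ring-based averaging used by AllReduce-SGD can be illustrated with a sequential simulation (a sketch only; production AllReduce implementations pipeline chunked reduce-scatter and all-gather phases rather than passing whole models):

```python
import numpy as np

def ring_allreduce_average(models):
    """Simulate AllReduce averaging over a ring: conceptually, a running
    sum travels around the ring once to accumulate all models, then a
    second pass distributes the result so every node ends up holding
    the same global average."""
    n = len(models)
    total = np.zeros_like(models[0])
    for m in models:                 # first trip: accumulate the sum
        total = total + m
    avg = total / n
    return [avg.copy() for _ in range(n)]  # second trip: distribute
```

This contrasts with D-SGD or GL, where each node only averages with its neighbours and models differ across nodes between rounds.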

The primary goal of this project is to compare and evaluate state-of-the-art DL algorithms under realistic conditions. We will provide you with real-world traces to mimic the actual behaviour of compute power, network capacity, data heterogeneity, and node availability in FL settings. You will integrate these traces and implement different DL algorithms in DecentralizePy, a framework to develop and deploy DL algorithms. By doing so, we provide a more accurate and practical assessment of these algorithms’ strengths and weaknesses, which will guide future research and development in the field of DL.

Building Inclusive ML Models with Decentralized Learning

Master project

Contact: Sayan Biswas <[email protected]>

One recently popularised way to handle the rapid growth in size and complexity of currently deployed Machine Learning (ML) models is to take a decentralised approach, which ameliorates various challenges associated with traditional, centralized ML paradigms, including but not limited to data privacy, ownership and control, scalability, robustness and fault tolerance, and communication overhead. Decentralised Learning (DL) is a relatively new class of ML algorithms where the learning process takes place collaboratively on a network of interconnected devices without reliance on any central server supervising the training.

On the other hand, a series of recent unfortunate incidents, such as Facebook mislabelling black men as primates and facial-analysis software exhibiting a 0.8% error rate for light-skinned men but 34.7% for dark-skinned women, has indicated the lack of representation of minorities in the ML models in use. Thus, the need for training personalised ML models catering to the differing requirements, data distributions, and attributes of different communities has been unequivocally acknowledged. Clustering-based approaches to personalising ML models, such as the Iterative Federated Clustering Algorithm (IFCA), have recently been in the spotlight, primarily in the context of Federated Learning (FL).

The main goal of this project is to lay down the foundational framework needed to carry out such clustering-based personalised model training in DL. In particular, we wish to develop a privacy-preserving, communication-efficient, and decentralised way to estimate key statistical summaries of the data/models held by the nodes (possibly using some techniques based on sampling or sketching) iteratively over the training rounds to compare their similarity and, eventually, furnish a dynamic way to cluster the network based on that. This will, in turn, help in the development of a DL equivalent of some of the state-of-the-art clustering-based personalised model training algorithms.
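The similarity-and-clustering step could be prototyped as follows (a sketch under strong assumptions: models are flat vectors, a shared-seed random projection serves as the compact summary, and a greedy threshold rule stands in for a proper clustering algorithm; IFCA itself alternates cluster assignment with per-cluster training):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def sketch(model, k=8, seed=0):
    """Random-projection sketch of a flattened model vector. The seed is
    shared across nodes so sketches are comparable; the projection
    approximately preserves angles between model vectors."""
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((k, model.size)) / np.sqrt(k)
    return P @ model

def cluster_by_similarity(sketches, threshold=0.9):
    """Greedily group nodes whose sketches are cosine-similar: each node
    joins the first existing cluster whose representative it matches,
    otherwise it starts a new cluster."""
    clusters = []
    for i, s in enumerate(sketches):
        for c in clusters:
            if cosine(sketches[c[0]], s) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```

In the decentralised setting, the open problem is doing this without a central collector: sketches must be exchanged and clusters agreed upon peer-to-peer, ideally with privacy guarantees on the sketches themselves.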

To contribute effectively to this project, we highly value:

  • A strong mathematical grasp and interest in probability theory, combinatorics, and analysis.
  • Proficiency in basic machine learning implementation.

Boosting Decentralized Learning with Bandwidth Pooling

Contact: Martijn de Vos <[email protected]>

Decentralized Learning (DL) is a relatively new class of ML algorithms where the learning process takes place on a network of interconnected devices with no central server that supervises the training. While DL initially has been applied within data centers to improve the efficiency and scalability of large-scale ML tasks in homogeneous environments, it is increasingly being used to train ML models between end-user devices in heterogeneous environments. With DL, each device in the network independently updates its own model based on the data available locally and directly shares the updated model with other clients. Then, each client periodically aggregates received models. DL uses a peer-to-peer communication topology that prescribes which clients share their model with which other clients.

As DL moves beyond homogeneous data centers to large-scale, heterogeneous end-user environments such as smartphone networks, the variability in computational and communication resources becomes a substantial issue. The discrepancies in bandwidth among nodes can lead to inefficiencies in model dissemination, which is critical to the DL process and directly affects the duration of a round. This project aims to design and evaluate a bandwidth pooling strategy where nodes with surplus bandwidth can assist other nodes in the dissemination of their models. The main research question we seek to address is: “How can a node in DL effectively utilize the surplus bandwidth of neighboring nodes to accelerate dissemination of its model in the network?”.
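One simple allocation rule, sketched below, splits a model's chunks among the sender and its helpers in proportion to available upload bandwidth (the rule and all names are hypothetical; a real design must also account for the helper-to-destination hop and for helpers' own training traffic):

```python
def assign_chunks(num_chunks, own_bw, helper_bw):
    """Assign model chunks to the sending node ('self') and each helper
    in proportion to upload bandwidth, using largest-remainder rounding
    so that every chunk is assigned exactly once."""
    total = own_bw + sum(helper_bw.values())
    shares = {"self": own_bw / total}
    shares.update({h: bw / total for h, bw in helper_bw.items()})
    alloc = {h: int(num_chunks * s) for h, s in shares.items()}
    leftover = num_chunks - sum(alloc.values())
    # hand the remaining chunks to the largest fractional remainders
    for h in sorted(shares, key=lambda h: num_chunks * shares[h] - alloc[h],
                    reverse=True):
        if leftover == 0:
            break
        alloc[h] += 1
        leftover -= 1
    return alloc
```

The interesting systems questions start where this sketch stops: how nodes discover surplus bandwidth, how helpers are incentivised, and how pooling interacts with the communication topology.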
