Student Projects

The following projects are available for Master's and Bachelor's students. They are carried out in close collaboration with an experienced member of the lab. Apply for a project by sending an email to the contact listed for that project.

You may also suggest new projects, ideally ones closely related to our ongoing or previously completed projects. In that case, you will have to convince Anne-Marie Kermarrec that the project is worthwhile, of reasonable scope, and that someone in the lab can mentor you!

 

Projects available for Fall 2024.

Harnessing Increased Client Participation with Cohort-Parallel Federated Learning

MSc Thesis

Contact: Martijn de Vos ([email protected])

Federated Learning (FL) is a machine learning approach in which nodes collaboratively train a global model. As more nodes participate in a round of FL, the effectiveness of each individual node's model update diminishes. In this project, we intend to increase the effectiveness of client updates by dividing the network into smaller partitions, or cohorts.

We call this approach Cohort-Parallel Federated Learning (CPFL): each cohort independently trains a model using FL until convergence, and the models produced by the cohorts are then unified using an ensemble (a minimal sketch of this step is given below). We already have preliminary evidence that smaller, isolated networks converge faster than a single network in which all nodes participate. This project mainly focuses on two aspects:

  1. designing a practical algorithm for cohort-parallel FL, and
  2. conducting experiments that will quantify the effectiveness of this approach.

You will experiment with different datasets, data distributions, and client clustering methods. These experiments will investigate the trade-offs between the number of cohorts, model accuracy, training time, and compute and communication resources. Experience with PyTorch is highly recommended.
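
As a concrete starting point, the ensemble step can be as simple as soft voting over the per-cohort models. The sketch below assumes each cohort has already produced a converged PyTorch model; the function name and the soft-voting rule are illustrative choices, not a fixed part of the project.

    import torch
    import torch.nn.functional as F

    def ensemble_predict(cohort_models, x):
        # Soft voting: average the class probabilities of the per-cohort models.
        with torch.no_grad():
            probs = [F.softmax(model(x), dim=-1) for model in cohort_models]
        return torch.stack(probs).mean(dim=0)   # shape: (batch_size, num_classes)

    # Hypothetical usage, assuming cohort_models holds the converged model of each
    # cohort (e.g., obtained by running FedAvg inside every cohort independently):
    # predictions = ensemble_predict(cohort_models, batch).argmax(dim=-1)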

 

One-shot federated learning benchmark

Master’s thesis or Master’s semester project: 12 credits

Contact: Akash Dhasade ([email protected])

One-shot federated learning (OFL) is an evolving area of research where communication between clients and the server is restricted to a single round. Standard approaches to OFL are based on knowledge distillation, ensemble learning, and similar techniques. These approaches take various forms depending on the assumptions made, for instance:

  1. Can clients reveal extra information about their training in the one-shot communication?
  2. Is there a public dataset available for knowledge transfer?

While the availability of such information may seem a minor detail, it can result in significantly different performance for competing algorithms.

Besides the assumptions, these algorithms have been evaluated on very specific metrics, e.g., accuracy. Their performance on other metrics, such as computational efficiency and scalability, is not well studied. The goal of this benchmark is to exhaustively evaluate OFL algorithms under different assumptions and performance metrics, with the broader aim of devising more performant OFL algorithms. Your task will be to implement and evaluate OFL algorithms across different datasets in a unified benchmark (a minimal sketch of a one-shot round is given below). Experience with PyTorch is highly recommended.
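
For intuition, the skeleton below sketches what a single one-shot round might look like, combining an ensemble of client models with optional distillation on a public dataset. All names (e.g., the client-side train_locally routine) are hypothetical, and real OFL baselines differ in the details; the benchmark would wrap several such algorithms behind a common interface.

    import copy
    import torch
    import torch.nn.functional as F

    def one_shot_round(global_model, clients, public_loader=None, distill_epochs=1):
        # 1) Single communication round: every client trains locally once,
        #    starting from the initial model, and uploads its result.
        client_models = []
        for client in clients:
            local = copy.deepcopy(global_model)
            client.train_locally(local)          # hypothetical client-side routine
            client_models.append(local)

        if public_loader is None:
            return client_models                 # serve the client models as an ensemble

        # 2) Optional: distill the ensemble's averaged predictions into a single
        #    model using a public dataset (labels are ignored).
        student = copy.deepcopy(global_model)
        optimizer = torch.optim.SGD(student.parameters(), lr=0.01)
        for _ in range(distill_epochs):
            for x, _ in public_loader:
                with torch.no_grad():
                    teacher_probs = torch.stack(
                        [F.softmax(m(x), dim=-1) for m in client_models]).mean(dim=0)
                loss = F.kl_div(F.log_softmax(student(x), dim=-1),
                                teacher_probs, reduction="batchmean")
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        return [student]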

 

Optimizing the Simulation of Decentralized Learning Algorithms

Contact: Martijn de Vos ([email protected])

The simulation of distributed learning algorithms is important for understanding their achievable model accuracy and their overhead in terms of total training time and communication cost. For this purpose, our lab has recently designed and implemented a distributed simulator specifically for decentralized learning (DL) algorithms [1].

The main idea of this simulator is that, first, the timestamps of all events in the system (training, model transfers, aggregation, etc.) are determined using a discrete-event simulator. Meanwhile, the simulator builds a compute graph containing all compute tasks (train, aggregate, or test). Then, the compute graph is solved in a distributed manner, possibly across different machines and multiple workers. An advantage of this simulator over others, such as DecentralizePy, is that it supports the integration of real-world mobile traces that include each node's training and network capacity.

Because of the discrete-event simulation, we maintain full control over the passing of time, enabling the evaluation of DL algorithms with nodes that have different hardware characteristics. The FedScale simulator [2] uses a similar idea but only supports Federated Learning, which is generally easier to simulate than decentralized learning. While the first version of our simulator is already in use for various projects involving DL, there is a significant opportunity to enhance its scalability regarding the number of nodes it can support.
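
To give an idea of the core mechanism, the sketch below implements a stripped-down discrete-event loop in which per-node training and transfer durations (e.g., taken from real-world traces) drive the timestamps of events. The real simulator builds an actual compute graph from these events and solves it across machines; the flat event list here is only a stand-in, and all names are illustrative.

    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Event:
        time: float
        kind: str = field(compare=False)   # "train", "transfer", or "aggregate"
        node: int = field(compare=False)

    def simulate(num_nodes, num_rounds, train_time, transfer_time):
        # Pop the earliest event, record it, and schedule its successor events.
        # train_time[n] and transfer_time[n] are per-node durations from a trace.
        queue = [Event(0.0, "train", n) for n in range(num_nodes)]
        heapq.heapify(queue)
        rounds_done = [0] * num_nodes
        schedule = []                      # stand-in for the compute graph
        while queue:
            ev = heapq.heappop(queue)
            schedule.append(ev)
            if ev.kind == "train":
                heapq.heappush(queue, Event(ev.time + train_time[ev.node], "transfer", ev.node))
            elif ev.kind == "transfer":
                heapq.heappush(queue, Event(ev.time + transfer_time[ev.node], "aggregate", ev.node))
            elif ev.kind == "aggregate":
                rounds_done[ev.node] += 1
                if rounds_done[ev.node] < num_rounds:
                    heapq.heappush(queue, Event(ev.time, "train", ev.node))
        return schedule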

This project holds the potential to significantly improve the scalability of our simulator by identifying and implementing various optimization techniques. For instance, the current simulator is limited by the available memory, as many DL algorithms induce a memory footprint that scales linearly with the number of nodes in the DL network. One potential approach is to reduce the simulator's memory footprint by strategically aggregating models as soon as they arrive at a machine. Other optimizations can revolve around the manipulation of the compute graph that is generated during the discrete-event simulation.

Affinity for and experience with implementing distributed systems are required for this project. Since the project primarily focuses on simulation performance rather than on the ML algorithms themselves, a deep understanding of ML/DL algorithms is not required but can be helpful during the project.

[1] Source code: https://github.com/sacs-epfl/decentralized-learning-simulator
[2] FedScale simulator: https://fedscale.ai

 

Scalable and Distributed LoRA Adapter Selection and Serving

Contact: Martijn de Vos ([email protected])

Fine-tuning LLMs to specialize their performance on a particular task is essential for unlocking their full capabilities and has become an important paradigm in the field of LLMs. Low-Rank Adaptation (LoRA) has recently gained much attention [1]. LoRA introduces trainable parameters, or adapters, that interact with the pre-existing ones through low-rank matrices, allowing the model to adapt to new tasks without retraining it fully. It is based on the assumption that the difference between the pre-trained and fine-tuned weights exhibits low-rank structure. LoRA keeps the pre-trained model parameters frozen and trains only the auxiliary low-rank matrices (in the original formulation, one is initialized randomly and the other to zero).
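
For concreteness, the sketch below shows what a LoRA-augmented linear layer could look like in PyTorch. In practice one would typically rely on an existing library (e.g., Hugging Face PEFT); the class name, rank, and scaling here are illustrative assumptions.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        # A frozen pre-trained linear layer plus a trainable low-rank update:
        # y = W x + (alpha / r) * B A x, with A of shape (r, in) and B of shape (out, r).
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():   # pre-trained weights stay frozen
                p.requires_grad = False
            # A is initialized randomly, B to zero, so training starts from the
            # unmodified pre-trained model.
            self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
            self.scaling = alpha / r

        def forward(self, x):
            return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

Swapping in the right adapter for a given query then amounts to loading only the small (A, B) matrices for that task, which is what makes serving many adapters at scale plausible.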

In some settings, there may be many LoRA adapters for diverse types of downstream tasks, such as text translation or summarization. Our goal is to build a disaggregated, distributed system architecture for fetching and utilizing the right adapters for a particular query input while minimizing the end-to-end inference latency; related work can be found in [2] and [3]. Concretely, you will design, implement, and evaluate (parts of) such a scalable and distributed system that efficiently selects and serves LoRA adapters based on specific query inputs, aiming to reduce the end-to-end inference latency for various downstream tasks. Previous experience with distributed ML systems is highly recommended. Potential research questions:

  1. How can a disaggregated system architecture speed up inference with LoRA and multiple adapters?
  2. How can we build a distributed adapter routing and selection mechanism?
  3. How can we adapt our system architecture to support batched requests?

[1] Hu, Edward J., et al. “LoRA: Low-Rank Adaptation of Large Language Models.” arXiv preprint arXiv:2106.09685 (2021).
[2] Sheng, Ying, et al. “S-LoRA: Serving Thousands of Concurrent LoRA Adapters.” arXiv preprint arXiv:2311.03285 (2023).
[3] Zhao, Ziyu, et al. “LoraRetriever: Input-Aware LoRA Retrieval and Composition for Mixed Tasks in the Wild.” arXiv preprint arXiv:2402.09997 (2024).

 

Dynamic Expert Model Management: Smart Caching and Real-time Swapping in Mixture of Experts

Contact: Martijn de Vos ([email protected])

In the context of Mixture of Experts (MoE), we aim to develop a system that manages the storage and retrieval of expert models, allowing for efficient real-time swapping during inference. Given the varying popularity and usage of different experts, our system will implement smart caching mechanisms to optimize the performance and availability of the necessary experts. By leveraging a disaggregated architecture, we propose to separate storage and compute responsibilities: storage nodes will not only hold the expert models but also have some computational power for pre-processing tasks, while compute nodes will focus on the heavy lifting of model inference and the gating network. A minimal caching sketch is given after the research questions below. Potential research questions:

  1. How can we implement an effective caching strategy that dynamically adjusts to the popularity and demand of specific expert models in MoE systems?
  2. In what ways can we design and orchestrate the interaction between storage and compute nodes to facilitate rapid and efficient retrieval and loading of expert models, ensuring that the system scales dynamically with varying workloads?
  3. How can hot swapping of expert chunks be achieved seamlessly during inference to ensure minimal latency and maximum throughput?
  4. Can we quantify the benefits of this smart caching and disaggregated architecture approach in terms of response time, resource utilization, and overall system scalability?
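
As a starting point for the caching question, a compute node could keep a plain LRU cache of expert weights and fall back to a fetch from a storage node on a miss, as in the sketch below. Everything here (the class, the fetch_fn callback, the eviction policy) is an illustrative assumption; popularity-aware eviction, prefetching driven by the gating network's routing decisions, and overlapping fetches with computation are exactly where the research questions above come in.

    from collections import OrderedDict

    class ExpertCache:
        # LRU cache for expert weights on a compute node. fetch_fn stands in for
        # the (network) call that retrieves an expert's weights from a storage node.
        def __init__(self, capacity, fetch_fn):
            self.capacity = capacity
            self.fetch_fn = fetch_fn
            self._cache = OrderedDict()            # expert_id -> weights

        def get(self, expert_id):
            if expert_id in self._cache:
                self._cache.move_to_end(expert_id) # mark as most recently used
                return self._cache[expert_id]
            weights = self.fetch_fn(expert_id)     # cache miss: remote fetch
            self._cache[expert_id] = weights
            if len(self._cache) > self.capacity:
                self._cache.popitem(last=False)    # evict least recently used
            return weights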