Interdisciplinary Student Projects

SCITAS actively collaborates with several laboratories at EPFL and offers a number of opportunities for advanced university students looking for challenging research projects in scientific computing. We welcome applications from a range of academic disciplines and backgrounds.

Registration opens on IS-Academia. Interested students should contact E. Tolley and register under the supervisors' names on the platform.

Proposed TP4 Projects

Available for Fall 2022 and Spring 2023.

Projects in Radio Astronomy

Projects in scientific computing for radio astronomy, in collaboration with the Laboratory of Astrophysics.

Supervisors:  Stefano Corda, Emma Tolley, Chris Broekema (ASTRON)

Modern large-scale distributed radio telescopes, like the LOFAR array in the Netherlands [1] and the Square Kilometre Array currently under construction in South Africa and Australia [2], generate terabits of data every second that, to the casual observer, look suspiciously like white noise. Only after intensive processing, involving large volumes of data and significant amounts of compute capacity, can a scientist construct recognisable images. The goal of this project is to investigate the use of quantum computing for calibration and imaging of radio astronomy data.

Quantum computing offers advantages over classical computing in select problems where the defining properties of a quantum computer (superposition, interference, and entanglement) play a major role in the solution. This project will explore (partial) radio astronomical calibration using quantum computers. We will explore efficient ways to encode the input data into, and read the output out of, a quantum computer for calibration. The student will work in the context of a research group with opportunities for collaboration.

This project should help encode radio telescope visibility data, image data, and calibration solutions efficiently onto qubits. In collaboration with the research team, this will help apply quantum techniques to calibration and imaging challenges using new and pre-existing methods, such as quantum linear solvers.
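To make the calibration problem concrete, the sketch below implements a classical per-antenna gain calibration by alternating least squares (in the style of StEFCal-like solvers). The inner linear solves are exactly the kind of primitive a quantum linear solver would target. This is a hedged illustration with simulated data, not part of the project codebase; all names and the damping choice are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ant = 8

# Hypothetical sky-model visibilities (what an ideal instrument would see).
M = rng.normal(size=(n_ant, n_ant)) + 1j * rng.normal(size=(n_ant, n_ant))
M = (M + M.conj().T) / 2                # visibility matrices are Hermitian

# Unknown per-antenna complex gains we want to recover.
g_true = np.exp(1j * rng.uniform(0, 2 * np.pi, n_ant)) \
         * rng.uniform(0.8, 1.2, n_ant)

# Observed visibilities: V = G M G^H with G = diag(g_true).
V = np.outer(g_true, g_true.conj()) * M

def calibrate(V, M, n_iter=500):
    """Alternating per-antenna least-squares gain calibration."""
    g = np.ones(V.shape[0], dtype=complex)
    for _ in range(n_iter):
        # With the other gains fixed, each g_i has a closed-form
        # least-squares solution: minimize sum_j |V_ij - g_i z_ij|^2
        z = M * g.conj()[None, :]        # z[i, j] = M_ij * conj(g_j)
        g_new = (V * z.conj()).sum(axis=1) / (np.abs(z) ** 2).sum(axis=1)
        g = (g + g_new) / 2              # damping for stable convergence
    return g

g_est = calibrate(V, M)
V_est = np.outer(g_est, g_est.conj()) * M
print(np.allclose(V_est, V, atol=1e-6))
```

The recovered gains match the true ones up to a global phase, which is why the check compares reconstructed visibilities rather than the gains themselves.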

[1] M.P. van Haarlem et al., "LOFAR: The LOw-Frequency ARray", Astronomy & Astrophysics, August 2013

Supervisors:  Stefano Corda, Emma Tolley, Chris Broekema (ASTRON)

Modern radio astronomy relies on large volumes of instrument data being transported to a compute facility. For security reasons, receiving network data requires the kernel to parse the protocol headers from the link layer up through the application layer before the payload is delivered to user space. The context switches between kernel and user space, and the need to copy all data from kernel to user space, make this expensive in terms of CPU cycles.
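The per-packet cost described above can be illustrated with a minimal sketch: on the conventional receive path, every datagram requires a system call and a copy out of a kernel socket buffer. The loopback setup and sizes below are illustrative assumptions, not part of any telescope pipeline.

```python
import socket

# Conventional receive path: each datagram costs one system call plus one
# kernel-to-user-space copy of the payload.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))            # let the kernel pick a free port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

n_packets = 1000
payload = b"\x00" * 1024             # stand-in for instrument samples
buf = bytearray(2048)                # preallocated user-space buffer

received = 0
for _ in range(n_packets):
    tx.sendto(payload, rx.getsockname())
    # recv_into: a context switch into the kernel, plus a copy of the
    # payload from the kernel socket buffer into `buf` -- per packet.
    received += rx.recv_into(buf)

tx.close()
rx.close()
print(received)
```

At the packet rates of a radio telescope (millions of packets per second), these per-packet syscalls and copies dominate, which is what motivates RDMA and kernel-bypass approaches.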

To mitigate the high CPU load and to ensure high, stable throughput, the use of Remote Direct Memory Access (RDMA) is currently being investigated both for the Dutch LOFAR array and for the future Square Kilometre Array. While RDMA performs well and significantly reduces the CPU cycles required for data transport, it needs hardware support and thus introduces a hardware dependency. Furthermore, since RDMA traffic is handled in hardware, it is invisible to the kernel; our standard monitoring and tracing tools, such as tcpdump and Wireshark, will not (by default) be able to see this traffic. For development and debugging, a software RDMA implementation, called rdma_rxe, exists [1], but it is very slow.

A relatively new addition to the Linux kernel is io_uring [2]. This interface allows a user-space program to define submission and completion ring buffers that are shared with the kernel. In modern Linux kernel versions these shared ring buffers can be used with roughly 28 different system calls. While context switches are still required, data no longer needs to be copied from kernel space to user space. This should allow a better-performing software RDMA implementation to be developed.
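The shared-ring idea can be sketched as a toy model: the application and the "kernel" exchange work through a submission queue and a completion queue, and results land directly in buffers the application already owns, so no extra copy is needed. This is a conceptual illustration only, not the real io_uring API; all names here are invented for the sketch.

```python
from collections import deque

# Toy model of the io_uring submission/completion ring concept (NOT the
# real API): two shared queues plus a pool of shared buffers.
BUF_COUNT, BUF_SIZE = 4, 64
buffers = [bytearray(BUF_SIZE) for _ in range(BUF_COUNT)]  # "shared memory"
sq, cq = deque(), deque()   # submission and completion "rings"

def submit_read(buf_index, src):
    """Application side: enqueue a read request; no payload is passed."""
    sq.append((buf_index, src))

def kernel_step():
    """Pretend-kernel: service one submission, fill the shared buffer
    directly, and post a completion entry describing the result."""
    buf_index, src = sq.popleft()
    data = src()                        # e.g. read from a device/socket
    n = min(len(data), BUF_SIZE)
    buffers[buf_index][:n] = data[:n]   # written straight into shared memory
    cq.append((buf_index, n))

# Usage: the application submits, the kernel completes, and the
# application reads the result out of a buffer it already owns.
submit_read(0, lambda: b"sample data from the instrument")
kernel_step()
idx, n = cq.popleft()
print(bytes(buffers[idx][:n]))  # → b'sample data from the instrument'
```

In real io_uring the queues live in memory mapped between the process and the kernel, and batching many submissions per syscall further amortizes the context-switch cost.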

This project aims to take the existing software RDMA implementation in the Linux kernel and investigate whether its performance can be improved using the io_uring interface. The project will be divided into the following phases:
1. initial investigation, including:
(a) identify, build and commission a test-bed
(b) getting familiar with io_uring, liburing and rdma_rxe
(c) baseline measurements with rdma_rxe, normal Ethernet and hardware RDMA
2. identify the performance hot spot of the rdma_rxe interface (using profiling)
3. implement io_uring accelerated version of software RDMA interface
4. measure and document performance
5. write report, clean up and push code to public repository
6. (stretch goal) write a merge request for upstream Linux adoption

Considering the need to work with somewhat complex, low-level existing code, we suggest this as a rather large project for a master's student. A small proof of concept at a higher level has been developed previously [3], using the convenient high-level library liburing [4]. It is unclear whether this library can be used for software RDMA, or whether the raw io_uring interface must be used; we expect, though, that the latter will be required for an eventual merge request to be accepted.


Supervisors:  Stefano Corda, Emma Tolley, Steven van der Vlug (ASTRON)

The signal-processing pipelines and system architectures of different kinds of radio telescopes have many similarities in the signal path. Edge computing is used to prepare the digitized, measured signals for further centralized processing. Before domain-specific data analysis is performed, common initial processing steps include filtering, beamforming, and correlation. The edge and central processing steps are characterized by high data rates, impressive compute loads, and the necessity to process data in real time. Both applications need innovative solutions to overcome the deceleration of Moore's Law, constraints on physical size, and energy consumption. Near the edge, we depend on FPGA technology, as FPGAs can be connected to analog-to-digital converters, operate in strict real time, and have high-bandwidth I/O capabilities. However, FPGAs compute less efficiently, are less flexible, and are more difficult to program than GPUs, even though a high-level programming language (OpenCL) significantly reduces programming effort compared to hardware description languages.

Xilinx has recently introduced an innovative solution: the Adaptive Compute Acceleration Platform (ACAP). It combines the real-time and I/O capabilities of an FPGA, the computational power of programmable vector-processing units (called AI Engines), accelerated signal-processing units (DSPs), and the general-purpose capabilities of CPU cores in a single chip, all programmable in C/C++/OpenCL, and all connected through a network-on-chip. This hybrid approach should make edge computing much more flexible and (energy-)efficient than traditional FPGAs.
This project will focus on:

1. An evaluation of the Xilinx ACAP in general

2. An evaluation of the AI Engines for signal-processing workloads (for applications in radio astronomy) and a performance comparison to GPU implementations, using available Xilinx libraries as well as our own implementations

3. A demonstration with a representative setup for applications in radio astronomy, including measurement and analysis of performance and energy utilization
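The initial processing steps named above (channelization and correlation) are the kernels that would be mapped onto the AI Engines or a GPU. As a hedged reference for what such a kernel computes, the sketch below implements a toy FX-style correlator in NumPy: an FFT per antenna per time block (the "F" step; a polyphase filter bank in practice), followed by a cross-multiply-and-integrate over all antenna pairs (the "X" step). The sizes and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ant, n_chan, n_spectra = 4, 64, 32

# Simulated digitized voltage streams, one per antenna.
voltages = rng.normal(size=(n_ant, n_chan * n_spectra))

# F step: channelize each stream block by block (plain FFT here; real
# systems use a polyphase filter bank for better channel isolation).
blocks = voltages.reshape(n_ant, n_spectra, n_chan)
spectra = np.fft.fft(blocks, axis=2)            # (ant, time, channel)

# X step: cross-multiply every antenna pair and integrate over time:
#   vis[i, j, c] = < S_i(t, c) * conj(S_j(t, c)) >_t
vis = np.einsum('itc,jtc->ijc', spectra, spectra.conj()) / n_spectra

# The result is Hermitian in the antenna indices, as expected.
print(np.allclose(vis, vis.conj().transpose(1, 0, 2)))  # → True
```

On an ACAP, the FFTs and the complex multiply-accumulate loop of the X step are the parts expected to map onto the DSP slices and AI Engines; comparing such kernels against GPU implementations is the core of task 2.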