Student Projects

DISAL offers a range of student projects in its three main areas of expertise: Distributed Robotic Systems, Sensor and Actuator Networks, and Intelligent Vehicles. For more information about supervision guidelines and different types of student projects, please refer to this page.

 


Autumn Semester 2025-2026 

Projects for the autumn semester are now available.

Market-Based Multi-Robot Task Allocation Algorithms for Micro Aerial Vehicles Performing Inspections

Market-based algorithms have received significant attention for assigning tasks to multiple robots in scenarios such as patrolling, exploration, and pickup-and-delivery [1], [2]. The task allocation problem in these domains is typically NP-hard, which means that finding an optimal solution is often impractical, especially in real-time applications. Market-based approaches, which create a simulated economic environment in which robots trade tasks and resources, offer a promising alternative: they are computationally efficient and can produce results close to optimal. However, their effectiveness depends strongly on how well the problem is characterized through an appropriate taxonomy [3], [4], [5].

Several aspects are important in this taxonomy. These include the task capacity of each robot (whether they can handle one or multiple tasks), the type of tasks (whether they require one or multiple robots), task interdependence (independent, dependent within the same schedule, or dependent across different schedules), the timing of task assignment (instantaneous or time-extended), the system architecture (centralized, decentralized, distributed, or hybrid), and the communication framework (local or global connectivity).

In this project, the student is expected to design a market-based task allocation algorithm for a team of micro aerial vehicles (MAVs) and their docking stations. The work will first be carried out in a low-fidelity simulation environment, using either MATLAB or Python, and later in a high-fidelity simulation, using either Webots or Gazebo. In the simulation, MAVs will perform basic inspection tasks generated by a ground control station, while docking stations will provide recharging services. Both MAVs and docking stations will be subject to constraints such as limited flight time, communication range, and minimum charging duration. The student will begin by reviewing the relevant literature, define the problem taxonomy based on this review, and then implement and evaluate at least two different algorithms across different scenarios using selected performance metrics.
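To make the market-based idea concrete, the following minimal sketch shows a sequential single-item auction in which each MAV bids its marginal travel time for a task and the cheapest feasible bidder wins. All names (Task, Robot, the flight-time budget) and the simple cost model are illustrative assumptions for a low-fidelity Python prototype, not an existing implementation.

```python
# Minimal sketch of a sequential single-item auction for task allocation.
# Task, Robot and the flight-time budget are illustrative assumptions.
import math
from dataclasses import dataclass, field

@dataclass
class Task:
    id: int
    position: tuple  # (x, y) inspection point

@dataclass
class Robot:
    id: int
    position: tuple
    flight_time_left: float          # remaining battery budget [s]
    schedule: list = field(default_factory=list)

    def bid(self, task, speed=2.0):
        """Bid = marginal travel time to append the task to the current schedule."""
        start = self.schedule[-1].position if self.schedule else self.position
        cost = math.dist(start, task.position) / speed
        return cost if cost <= self.flight_time_left else math.inf

def sequential_auction(robots, tasks):
    """Greedily auction tasks one by one to the cheapest feasible bidder."""
    unassigned = list(tasks)
    while unassigned:
        # Collect (bid, robot, task) triples for all feasible pairs.
        bids = [(r.bid(t), r, t) for r in robots for t in unassigned]
        best_bid, winner, task = min(bids, key=lambda b: b[0])
        if math.isinf(best_bid):
            break  # no robot can serve the remaining tasks
        winner.schedule.append(task)
        winner.flight_time_left -= best_bid
        unassigned.remove(task)
    return {r.id: [t.id for t in r.schedule] for r in robots}, unassigned

if __name__ == "__main__":
    robots = [Robot(0, (0, 0), 60.0), Robot(1, (10, 0), 60.0)]
    tasks = [Task(i, (2 * i, 5)) for i in range(5)]
    print(sequential_auction(robots, tasks))
```

In the actual project, this greedy baseline would be replaced or complemented by richer mechanisms (e.g., bundle or combinatorial auctions, consensus-based variants, recharging-aware bids) selected according to the problem taxonomy defined above.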

Recommended type of project: semester or master project

Work breakdown: 20% theory, 50% coding, 30% experimentation

Prerequisites: Broad interest in multi-robot systems, good programming skills (Python, C/C++, Matlab), knowledge of ROS 2 (having taken the DIS course is a plus)

Keywords: Micro Aerial Vehicles, docking stations, task allocation, market-based algorithms, multi-robot system architectures.

Contact: Kağan Erünsal

References:
[1] Quinton F., Grand C., Lesire C., Market Approaches to the Multi-Robot Task Allocation Problem: a Survey, Journal of Intelligent & Robotic Systems (2023)
[2] Talebpour, Z., Martinoli, A.: Multi-robot coordination in dynamic environments shared with humans. In: IEEE international conference on robotics and automation, Brisbane, Australia (2018)
[3] Gerkey, B.P., Matarić, M.J.: A formal analysis and taxonomy of task allocation in multi-robot systems. Int. J. Robot. Res. 23(9), 939–954 (2004)
[4] Ayorkor Korsah, G., Stentz, A., Bernardine Dias, M.: A comprehensive taxonomy for multi-robot task allocation. Int. J. Robot. Res. 32(12), 1495–1512 (2013)
[5] Bernardine Dias, M., Zlot, R., Kalra, N., Anthony, S.: Market-based multirobot coordination: a survey and analysis. Proc. IEEE 94(7), 1257–1270 (2006)

 

Safe Navigation and Exploration in Environments Modeled as a Gaussian Mixture Model

Image source: https://sites.google.com/stanford.edu/splat-nav [3]

Gaussian Splatting [1] has recently received a lot of attention. This technique generates high-quality views from a mixture of Gaussians learned from training images with known camera poses via differentiable rendering. Although the technique was initially developed for the Computer Graphics community, the robotics community has recently explored ways of leveraging this representation for mobile robots. Simultaneous Localization and Mapping (SLAM) [2] and safe navigation [3] with such an environment representation have already been studied, and earlier work using coarser mixtures of Gaussians exists as well [4,5]. Various strategies have been explored for navigation in such representations [3,6,7], each with its own advantages and drawbacks.

A previous project already studied how to generate safe trajectories in environments represented as a coarse mixture of Gaussians. Various trajectory representations and techniques were considered, under the core assumption that the environment was known and a map could be generated a priori. This project will therefore focus on how exploration techniques can be implemented when using Gaussian mixtures to represent the environment. This includes studying how to incorporate new measurements into the map efficiently and how to define unexplored regions, as partly covered in [9]. The work will be carried out in simulation in the Webots simulator [8], using realistic drone simulations to demonstrate the capacity of a drone to explore a given environment.
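As a rough illustration of the two ingredients mentioned above, the sketch below maintains a GMM map from incoming point clouds with scikit-learn and flags query points that the current mixture explains poorly as candidate unexplored regions. The naive full refit, the subsampling budget, and the log-density threshold are assumptions for illustration; [9] describes how to perform such updates efficiently.

```python
# Minimal sketch of maintaining a GMM environment map from depth point clouds
# and flagging poorly explained (candidate "unexplored") points.
# The naive refit is only illustrative; [9] describes efficient incremental updates.
import numpy as np
from sklearn.mixture import GaussianMixture

class GMMMap:
    def __init__(self, n_components=32, keep=20000):
        self.n_components = n_components
        self.keep = keep          # cap on stored points (memory budget, assumption)
        self.points = np.empty((0, 3))
        self.gmm = None

    def insert_scan(self, scan_xyz):
        """Add a new (N, 3) point cloud and refit the mixture."""
        self.points = np.vstack([self.points, scan_xyz])
        if len(self.points) > self.keep:                       # subsample to bound cost
            idx = np.random.choice(len(self.points), self.keep, replace=False)
            self.points = self.points[idx]
        self.gmm = GaussianMixture(self.n_components, covariance_type="full")
        self.gmm.fit(self.points)

    def novelty(self, query_xyz, log_density_threshold=-6.0):
        """Boolean mask of query points poorly explained by the current map,
        used here as a crude proxy for unexplored space."""
        if self.gmm is None:
            return np.ones(len(query_xyz), dtype=bool)
        return self.gmm.score_samples(query_xyz) < log_density_threshold
```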

Recommended type of project: semester project / master project
Work breakdown: 50% theory, 30% coding, 20% simulation
Prerequisites: Broad interest in robotics; excellent programming skills (Python, C/C++); good knowledge in ROS, git.
Keywords: Micro Aerial Vehicles, simulation, Gaussian Splats, Gaussian Mixture Models, Mixture of Gaussians, navigation, exploration.
Contact: Lucas Wälti

References:

[1] Kerbl, Bernhard, Georgios Kopanas, Thomas Leimkuehler, and George Drettakis. “3D Gaussian Splatting for Real-Time Radiance Field Rendering.” ACM Trans. Graph. 42, no. 4 (July 26, 2023): 139:1-139:14. https://doi.org/10.1145/3592433.

[2] Keetha, Nikhil, Jay Karhade, Krishna Murthy Jatavallabhula, Gengshan Yang, Sebastian Scherer, Deva Ramanan, and Jonathon Luiten. “SplaTAM: Splat, Track & Map 3D Gaussians for Dense RGB-D SLAM.” arXiv, April 16, 2024. http://arxiv.org/abs/2312.02126.

[3] Chen, Timothy, Ola Shorinwa, Joseph Bruno, Javier Yu, Weijia Zeng, Keiko Nagami, Philip Dames, and Mac Schwager. “Splat-Nav: Safe Real-Time Robot Navigation in Gaussian Splatting Maps.” arXiv, April 26, 2024. http://arxiv.org/abs/2403.02751.

[4] Tabib, Wennie, Kshitij Goel, John Yao, Mosam Dabhi, Curtis Boirum, and Nathan Michael. “Real-Time Information-Theoretic Exploration with Gaussian Mixture Model Maps.” In Robotics: Science and Systems XV. Robotics: Science and Systems Foundation, 2019. https://doi.org/10.15607/RSS.2019.XV.061.

[5] Tabib, Wennie, Kshitij Goel, John Yao, Curtis Boirum, and Nathan Michael. “Autonomous Cave Surveying With an Aerial Robot.” IEEE Transactions on Robotics 38, no. 2 (April 2022): 1016–32. https://doi.org/10.1109/TRO.2021.3104459.

[6] Corah, Micah, Cormac O’Meadhra, Kshitij Goel, and Nathan Michael. “Communication-Efficient Planning and Mapping for Multi-Robot Exploration in Large Environments.” IEEE Robotics and Automation Letters 4, no. 2 (April 2019): 1715–21. https://doi.org/10.1109/LRA.2019.2897368.

[7] Chen, Timothy, Aiden Swann, Javier Yu, Ola Shorinwa, Riku Murai, Monroe Kennedy III, and Mac Schwager. “SAFER-Splat: A Control Barrier Function for Safe Navigation with Online Gaussian Splatting Maps.” arXiv, September 15, 2024. http://arxiv.org/abs/2409.09868.

[8] “Webots: Robot Simulator.” Accessed October 6, 2022. https://cyberbotics.com/.

[9] Goel, Kshitij, and Wennie Tabib. “GIRA: Gaussian Mixture Models for Inference and Robot Autonomy.” In 2024 IEEE International Conference on Robotics and Automation (ICRA), 6212–18, 2024. https://doi.org/10.1109/ICRA57147.2024.10611216.

 

Alternatives to the Expectation-Maximization Algorithm for Gaussian Mixture Model Optimization


EM algorithm illustration (source)


Illustration from [3].

In the context of autonomous robot navigation with depth cameras or LiDARs, representations that rely on a fixed discretization of space (e.g., octomaps, voxel grids) are extensively used. While such discrete representations are fast and efficient for lookups, collision checks, and updates, they scale poorly and require significant memory and bandwidth to be shared across multiple robots. Gaussian Mixture Models (GMMs), in contrast, are flexible geometric descriptors that require a minimal number of parameters to describe complex environments. However, GMMs are usually fitted with the Expectation-Maximization (EM) algorithm, where the number of components must be given a priori. Criteria have been defined to help with that choice, including the Bayesian Information Criterion (BIC) and the Akaike Information Criterion (AIC), or the elbow criterion, where the EM algorithm is run for several numbers of components. No criterion is perfect, especially in the context of 3D mapping, where the point distributions are non-Gaussian. See for instance [1, 2, 3] for implementations of the EM algorithm.
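For reference, the standard baseline that this project questions can be written in a few lines with scikit-learn: fit a GMM for a range of component counts and pick the one minimizing the BIC (or AIC). The candidate range and the synthetic point cloud below are placeholders.

```python
# Minimal sketch of choosing the number of GMM components with BIC/AIC,
# the baseline this project seeks alternatives to.
import numpy as np
from sklearn.mixture import GaussianMixture

def select_components(points, k_range=range(2, 41, 2)):
    """Fit a GMM for each candidate k and return the k minimizing BIC."""
    scores = []
    for k in k_range:
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              random_state=0).fit(points)
        scores.append((gmm.bic(points), gmm.aic(points), k))
    best_bic, best_aic, best_k = min(scores)          # min over BIC
    return best_k, scores

if __name__ == "__main__":
    cloud = np.random.rand(5000, 3)                   # placeholder point cloud
    k, _ = select_components(cloud)
    print("Selected number of components:", k)
```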

This project will therefore aim to study alternative methods for selecting the number of components, as well as alternatives to the EM algorithm itself. Indeed, the point clouds measured by a robot must be converted efficiently into a Gaussian mixture, and deep learning approaches will likely offer suitable tools for this. Alternatively, one could retain the more common voxel grid representation and look into compression approaches to reduce the bandwidth required to share map information between robots.

Recommended type of project: semester project / master project
Work breakdown: 50% theory, 30% coding, 20% simulation
Prerequisites: Broad interest in robotics; excellent programming skills (Python, C/C++); good knowledge in ROS, git.
Keywords: Micro Aerial Vehicles, simulation, Gaussian Mixture Models, Mixture of Gaussians, navigation, point cloud.
Contact: Lucas Wälti

References:

[1] Goel, Kshitij, and Wennie Tabib. “GIRA: Gaussian Mixture Models for Inference and Robot Autonomy.” In 2024 IEEE International Conference on Robotics and Automation (ICRA), 6212–18, 2024. https://doi.org/10.1109/ICRA57147.2024.10611216.

[2] Dong, Haolin, Jincheng Yu, Yuanfan Xu, Zhilin Xu, Zhaoyang Shen, Jiahao Tang, Yuan Shen, and Yu Wang. “MR-GMMapping: Communication Efficient Multi-Robot Mapping System via Gaussian Mixture Model.” IEEE Robotics and Automation Letters 7, no. 2 (April 2022): 3294–3301. https://doi.org/10.1109/LRA.2022.3145059.

[3] Tabib, Wennie, and Nathan Michael. “Simultaneous Localization and Mapping of Subterranean Voids with Gaussian Mixture Models.” In Field and Service Robotics, edited by Genya Ishigami and Kazuya Yoshida, 173–87. Singapore: Springer, 2021. https://doi.org/10.1007/978-981-15-9460-1_13.

[4] “Webots: Robot Simulator.” Accessed October 6, 2022. https://cyberbotics.com/.

 

Uncooled Thermal Cameras for Gas Detection

Thermal cameras are often used for the inspection and detection of gas leaks. The gases present in industrial leaks mainly exhibit absorption peaks at shorter wavelengths. However, thermal cameras for this use case are based on photodetectors, which generally require cooling to increase sensitivity [1]. The added weight of the cooling system makes them unsuitable for deployment on smaller robots, such as micro aerial vehicles. However, the absorption spectra of several gases encountered in industrial leaks extend into longer wavelengths as well [2]. As such, it should be possible to detect these gases with uncooled (and therefore lighter) thermal cameras, albeit with decreased sensitivity.

In this project, we aim to test whether uncooled cameras can be used to detect gases. The student will gather real footage of gases with thermal cameras and evaluate a combination of filtering, signal processing, and computer vision methods to detect them. If possible, we will also experiment with a cooled thermal camera and compare results against it.
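One simple starting point among the methods mentioned above is temporal background subtraction, since a gas plume appears as a faint, slowly moving deviation from the static thermal background. The sketch below is a minimal OpenCV pipeline under that assumption; the file name, learning rate, and threshold are placeholders to be tuned on real footage.

```python
# Minimal sketch of a temporal background-subtraction pipeline for spotting
# faint plume-like changes in uncooled thermal footage. File name, learning
# rate, threshold and kernel size are illustrative assumptions.
import cv2
import numpy as np

def detect_plume(video_path="thermal.mp4", alpha=0.02, thresh=3.0):
    cap = cv2.VideoCapture(video_path)
    background = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if background is None:
            background = gray.copy()
            continue
        # Slowly adapting background model; gas plumes show up as small,
        # low-contrast temporal deviations from it.
        background = (1 - alpha) * background + alpha * gray
        residual = cv2.absdiff(gray, background)
        mask = (residual > thresh).astype(np.uint8) * 255
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
        cv2.imshow("plume candidates", mask)
        if cv2.waitKey(1) == 27:       # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()
```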

Recommended type of project: semester project

Work breakdown: 20% theory, 50% real world experiments, 30% coding

Prerequisites: An interest in computer vision, optics, and signal processing. Good background in Python.

Keywords: Thermal cameras, gas detection, computer vision

Contact: Alexander Wallén Kiessling

References:

[1] Vollmer, Michael, and Klaus-Peter Möllmann. Infrared thermal imaging: fundamentals, research and applications. John Wiley & Sons, 2018.

[2] Meribout, Mahmoud. “Gas leak-detection and measurement systems: Prospects and future trends.” IEEE Transactions on Instrumentation and Measurement 70 (2021): 1-13.

 

Visual-Inertial SLAM for Aerial Vehicles

  

Simultaneous Localization and Mapping (SLAM) is used to localize mobile robots in areas with no GNSS coverage, which is particularly useful underground or indoors, and many sensor modalities can be used for SLAM. For micro aerial vehicles, cameras have become the de facto choice for localization due to their low weight and cost. Over time, however, a multitude of methods have become available for visual SLAM [1].

In this project, the primary aim is to understand the key differences between several popular visual SLAM methods and benchmark them against each other for use on micro aerial vehicles. The student will be provided with real-world datasets collected on aerial vehicles, but will also need to benchmark in high-fidelity simulation. For motivated students or a master project, it is also of high interest to incorporate multi-camera methods, a relatively new and active field of research [2].
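A typical benchmarking metric is the Absolute Trajectory Error (ATE) after rigid alignment of the estimated trajectory to ground truth. The sketch below is a minimal NumPy version assuming the two trajectories are already time-associated; dedicated evaluation tools additionally handle timestamp matching and relative pose error.

```python
# Minimal sketch of the Absolute Trajectory Error (ATE) metric typically used
# to benchmark SLAM methods; assumes time-aligned (N, 3) position arrays.
import numpy as np

def ate_rmse(estimated, ground_truth):
    """Align the estimated trajectory to ground truth (rigid, no scale)
    and return the RMSE of the remaining position error."""
    est, gt = np.asarray(estimated), np.asarray(ground_truth)
    mu_e, mu_g = est.mean(0), gt.mean(0)
    E, G = est - mu_e, gt - mu_g
    U, _, Vt = np.linalg.svd(E.T @ G)
    S = np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))])   # avoid reflections
    R = (U @ S @ Vt).T                                     # rotation est -> gt
    t = mu_g - R @ mu_e
    aligned = est @ R.T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))
```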

Recommended type of project: semester project / master project

Work breakdown: 20% theory, 80% coding

Prerequisites: An interest in visual SLAM and computer vision. Strong background in Python and C/C++. General background in mobile/aerial robotics.

Keywords: SLAM, aerial vehicles, computer vision.

Contact: Alexander Wallén Kiessling

References:

[1] Servières, Myriam, et al. “Visual and Visual‐Inertial SLAM: State of the Art, Classification, and Experimental Benchmarking.” Journal of Sensors 2021.1 (2021): 2054828.

[2] A. J. Yang, C. Cui, I. A. Bârsan, R. Urtasun and S. Wang, “Asynchronous Multi-View SLAM,” 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 2021, pp. 5669-5676, doi: 10.1109/ICRA48506.2021.9561481.

 

Gas Source Localization using Neural Networks

  

The goal of Gas Source Localization (GSL) is to find gas leaks as efficiently as possible. This is important in emergency situations involving toxic gases or to reduce the environmental impact of industrial gas leaks. Gas sources can be localized using static sensor networks or mobile robots. Many state-of-the-art methods rely on probabilistic algorithms that estimate the likelihood of the source position, e.g. Source Term Estimation (STE) [1, 2]. However, most of these algorithms assume a simplified model of the gas plume/distribution, which is a very complicated fluid dynamic phenomenon. Typically, this assumption fails in cluttered environments. In this project, we aim to estimate the position of the gas source directly using an Artificial Neural Network (ANN).

We want to train an ANN that takes gas maps as input and predicts the location of gas sources as output. Previous methods first explore the entire environment and then predict the location of the gas source [3, 4]. In contrast, we consider a mobile robot that takes continuous measurements and relies on intermediate predictions; the model should therefore work with incomplete input maps, i.e., before the robot has explored the whole environment. This project builds on a previous semester project that used a convolutional neural network for GSL. Since the governing physical equations of gas dispersion are Partial Differential Equations (PDEs), we would like to explore more advanced model architectures specialized for PDEs, e.g., Fourier Neural Operators [5] or Physics-Informed Neural Networks [6]. Furthermore, the work should be extended from 2D to 3D environments.
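As a minimal illustration of predicting from incomplete maps, the sketch below feeds a CNN two channels, the partial gas map and a binary mask marking explored cells, and regresses a normalized source position. The architecture, grid size, and mask-channel idea are assumptions for illustration; the project itself targets PDE-oriented architectures such as Fourier Neural Operators or PINNs.

```python
# Minimal PyTorch sketch of a CNN that regresses the source position from a
# partially observed gas map. The two input channels (gas concentration and a
# binary "explored" mask) are an assumption for handling incomplete maps.
import torch
import torch.nn as nn

class SourceLocator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)          # normalized (x, y) of the source

    def forward(self, gas_map, explored_mask):
        x = torch.stack([gas_map, explored_mask], dim=1)   # (B, 2, H, W)
        return self.head(self.features(x).flatten(1))

if __name__ == "__main__":
    model = SourceLocator()
    gas = torch.rand(8, 64, 64)                # batch of partial gas maps
    mask = (torch.rand(8, 64, 64) > 0.5).float()
    pred = model(gas, mask)                    # (8, 2) predicted positions
    loss = nn.functional.mse_loss(pred, torch.rand(8, 2))
    loss.backward()
```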

Recommended type of project: master project / semester project

Work breakdown: 20% theory, 60% coding, 20% simulation

Prerequisites: An interest in machine learning and robotic systems. Strong background in Python and PyTorch. Previous experience in training neural networks.

Keywords: Neural Network, Gas Source Localization

Contact: Nicolaj Schmid, [email protected]

References:

[1] M. Hutchinson, H. Oh, and W.-H. Chen, “A review of source term estimation methods for atmospheric dispersion events using static or mobile sensors,” Information Fusion, vol. 36, pp. 130–148, Jul. 2017.

[2] W. Jin, F. Rahbar, C. Ercolani, and A. Martinoli, “Towards Efficient Gas Leak Detection in Built Environments: Data-Driven Plume Modeling for Gas Sensing Robots,” in 2023 IEEE International Conference on Robotics and Automation (ICRA), London, United Kingdom: IEEE, May 2023, pp. 7749–7755.

[3] A. S. A. Yeon, A. Zakaria, S. M. M. S. Zakaria, R. Visvanathan, K. Kamarudin, and L. M. Kamarudin, “Gas Source Localization via Mobile Robot with Gas Distribution Mapping and Deep Neural Network,” in 2022 2nd International Conference on Electronic and Electrical Engineering and Intelligent System (ICE3IS), Yogyakarta, Indonesia: IEEE, Nov. 2022, pp. 120–124.

[4] Z. H. M. Juffry et al., “Deep Neural Network for Localizing Gas Source Based on Gas Distribution Map,” in Proceedings of the 6th International Conference on Electrical, Control and Computer Engineering, vol. 842, Z. Md. Zain, Mohd. H. Sulaiman, A. I. Mohamed, Mohd. S. Bakar, and Mohd. S. Ramli, Eds., in Lecture Notes in Electrical Engineering, vol. 842. , Singapore: Springer Singapore, 2022, pp. 1105–1115.

[5] Z. Li et al., “Fourier Neural Operator for Parametric Partial Differential Equations,” 2020, arXiv.

[6] S. Cai, Z. Mao, Z. Wang, M. Yin, and G. E. Karniadakis, “Physics-informed neural networks (PINNs) for fluid mechanics: a review,” Acta Mech. Sin., vol. 37, no. 12, pp. 1727–1738, Dec. 2021

 

Reinforcement Learning for Gas Source Localization

  

The goal of Gas Source Localization (GSL) is to locate gas leaks as efficiently as possible. This is important in emergency situations involving toxic gases or to reduce the environmental impact of industrial gas leaks. In mobile robotics, many GSL techniques have been developed that estimate the source position based on gas measurements. Navigation is a crucial part of such algorithms because it determines where the measurements are taken, i.e., how much time the robot spends exploring a particular part of the environment. A variety of navigation techniques exist in the literature, from bio-inspired reactive approaches [1] to information-driven ones [2]. In recent years, reinforcement learning [3-5] has become a compelling alternative due to its potential to navigate complex environments.

In a previous work, we showed that gas sources can be localized efficiently by extracting certain features from a gas map (e.g., the location of highest gas variance). However, the robot moved along a fixed scanning trajectory and did not use an adaptive navigation method. In this project, we want to complement the GSL algorithm with reinforcement learning (DQN, DDPG, …) to explore the environment more efficiently.
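The sketch below shows the core pieces of a DQN-style agent for this setting: a small Q-network over a local gas-map patch, an epsilon-greedy policy, and one temporal-difference update on a replay batch. The observation format, the four discrete motion actions, and the network size are hypothetical placeholders; the reward design is where the actual project work lies.

```python
# Minimal sketch of a DQN-style update for adaptive GSL navigation. The
# observation (a local gas-map patch) and the 4 discrete motion actions
# are hypothetical placeholders, not an existing environment.
import random
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, patch=16, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(patch * patch, 128), nn.ReLU(),
            nn.Linear(128, n_actions))

    def forward(self, obs):                      # obs: (B, patch, patch)
        return self.net(obs)

def td_update(q, q_target, optimizer, batch, gamma=0.99):
    """One temporal-difference step on a replay batch (obs, a, r, next_obs, done)."""
    obs, actions, rewards, next_obs, done = batch
    q_sa = q(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rewards + gamma * (1 - done) * q_target(next_obs).max(1).values
    loss = nn.functional.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def epsilon_greedy(q, obs, epsilon=0.1, n_actions=4):
    """Pick a random action with probability epsilon, else the greedy one."""
    if random.random() < epsilon:
        return random.randrange(n_actions)
    return int(q(obs.unsqueeze(0)).argmax())
```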

Recommended type of project: master project / semester project

Work breakdown: 20% theory, 50% coding, 30% simulation

Prerequisites: An interest in machine learning and robotic systems. Strong background in Python and Pytorch. Previous experience in reinforcement learning.

Keywords: Reinforcement Learning, Gas Source Localization

Contact: Nicolaj Schmid, [email protected]

References:

[1] A. Francis, S. Li, C. Griffiths, and J. Sienz, “Gas source localization and mapping with mobile robots: A review,” Journal of Field Robotics, vol. 39, no. 8, pp. 1341–1373, Dec. 2022

[2] M. Vergassola, E. Villermaux, and B. I. Shraiman, “‘Infotaxis’ as a strategy for searching without gradients,” Nature, vol. 445, no. 7126, pp. 406–409, Jan. 2007

[3] Y. Shi, M. Wen, Q. Zhang, W. Zhang, C. Liu, and W. Liu, “Autonomous Goal Detection and Cessation in Reinforcement Learning: A Case Study on Source Term Estimation”.

[4] Y. Zhao et al., “A deep reinforcement learning based searching method for source localization,” Information Sciences, vol. 588, pp. 67–81, Apr. 2022

[5] Y. Shi, K. McAreavey, C. Liu, and W. Liu, “Reinforcement Learning for Source Location Estimation: A Multi-Step Approach,” in 2024 IEEE International Conference on Industrial Technology (ICIT), Bristol, United Kingdom: IEEE, Mar. 2024, pp. 1–8.

 

 


[Previous Projects] Spring Semester 2024-2025

Benchmarking a Multi-Robot System Composed of Flying Quadrotors for the Coverage of Known Assets

Assigned to: Francesco Scatigno

Rotary-wing Micro Aerial Vehicles (MAVs) can play a key role in the context of inspection. Such tasks often require the deployment of expensive tools and equipment to inspect infrastructural assets, such as buildings, bridges, and transmission towers, often implying costly downtime periods on top of the inspection costs themselves. Single remotely operated drones are already employed in several cases to ease access to such structures and provide detailed information at minimal cost. However, the size of such infrastructures can be prohibitive for a single robot, which is why it is natural to consider the inspection problem with a team of autonomous flying robots.

In this project, the aim is to quantitatively benchmark a solution previously proposed by our lab, which implements a partly centralized and partly distributed architecture for the coverage of a known asset by a team of Micro Aerial Vehicles [1]. This method must be compared against other existing coverage strategies, such as Next-Best-View (NBV) [2] and/or frontier-based [3] methods. A computational analysis has been performed to compare [1] and [2], but quantitative results are lacking, and a comparison to other classical approaches (e.g., frontier-based exploration) would be valuable as well. The student will work in simulation using Webots [4], where a simulation with multiple robots is already provided, along with a ROS interface.
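One simple quantitative metric for such a comparison is the cumulative fraction of the asset surface covered over time. The sketch below assumes a point-sampled asset and a fixed sensing radius per MAV, a crude stand-in for a realistic camera visibility model.

```python
# Minimal sketch of one benchmarking metric: fraction of asset surface points
# covered over time, given per-timestep robot positions and a sensing radius.
# The sensing-radius visibility model is a simplifying assumption.
import numpy as np

def coverage_over_time(asset_points, robot_traj, sensing_radius=2.0):
    """asset_points: (M, 3); robot_traj: dict robot_id -> (T, 3) positions.
    Returns a (T,) array with the cumulative coverage ratio at each timestep."""
    covered = np.zeros(len(asset_points), dtype=bool)
    T = min(len(p) for p in robot_traj.values())
    ratio = np.zeros(T)
    for t in range(T):
        for traj in robot_traj.values():
            d = np.linalg.norm(asset_points - traj[t], axis=1)
            covered |= d < sensing_radius
        ratio[t] = covered.mean()
    return ratio
```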

Recommended type of project: semester project / master project
Work breakdown: 20% theory, 80% coding
Prerequisites: Broad interest in robotics; excellent programming skills (Python, C/C++); good knowledge in ROS, git.
Keywords: Quadrotors, drones, Micro Aerial Vehicles, multi-robot system, coordination, coverage.
Contact: Lucas Wälti

References:

[1] L. Wälti, I. Kagan Erunsal and A. Martinoli, “Multi-Robot Online Coverage with a Team of Resource-Constrained Micro Aerial Vehicles”, 2024 Distributed Autonomous Robotic Systems (DARS), New York, USA, 2024. https://infoscience.epfl.ch/entities/publication/3336a4cc-d140-4d29-b6d0-ef4d43278736

[2] Bircher, Andreas, Mina Kamel, Kostas Alexis, Helen Oleynikova, and Roland Siegwart. “Receding Horizon ‘Next-Best-View’ Planner for 3D Exploration.” In 2016 IEEE International Conference on Robotics and Automation (ICRA), 1462–68. Stockholm, Sweden: IEEE, 2016. https://doi.org/10.1109/ICRA.2016.7487281.

[3] Cieslewski, Titus, Elia Kaufmann, and Davide Scaramuzza. “Rapid Exploration with Multi-Rotors: A Frontier Selection Method for High Speed Flight.” In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2135–42. Vancouver, BC: IEEE, 2017. https://doi.org/10.1109/IROS.2017.8206030.

[4] “Webots: Robot Simulator.” Accessed October 6, 2022. https://cyberbotics.com/.

 

Safe Navigation in Environments Represented by a Gaussian Mixture Model

Assigned to: Ludovic Pujol


Image source: https://sites.google.com/stanford.edu/splat-nav [3]

Gaussian Splatting [1] has recently received a lot of attention. This technique generates high-quality views from a mixture of Gaussians learned from training images with known camera poses via differentiable rendering. Although the technique was initially developed for the Computer Graphics community, the robotics community has recently explored ways of leveraging this representation for mobile robots. Simultaneous Localization and Mapping (SLAM) [2] and safe navigation [3] with such an environment representation have already been studied, and earlier work using coarser mixtures of Gaussians exists as well [4,5]. Various strategies have been explored for navigation in such representations [3,6,7], each with its own advantages and drawbacks.

This project aims at studying how to navigate in environments represented as a Mixture of Gaussians (MoG). Various approaches exist, such as motion primitives, local discretization as a voxel grid, safe corridor generation (polytopes), or Control Barrier Functions (CBFs). The project should therefore cover and provide an understanding of the different approaches encountered in the literature, as well as implement one or several solutions for path planning. The work will be carried out in simulation in the Webots simulator [8], using realistic drone simulations. Several synthetic environments and Gaussian representations will be generated to test and evaluate the implemented approach(es).
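As a small illustration of working directly with the Gaussian components, the sketch below implements a conservative waypoint safety check based on the Mahalanobis distance to each sufficiently weighted component. The thresholds are arbitrary assumptions, and this point-wise check is only a starting point, not the corridor- or CBF-based methods of [3,7].

```python
# Minimal sketch of a conservative collision check for a waypoint against a
# Mixture of Gaussians map: a point is flagged unsafe if its Mahalanobis
# distance to any sufficiently heavy component falls below a chosen bound.
# Thresholds are illustrative assumptions, not the method of [3] or [7].
import numpy as np

def is_safe(point, means, covariances, weights,
            maha_threshold=3.0, weight_floor=1e-3):
    """means: (K, 3); covariances: (K, 3, 3); weights: (K,)."""
    for mu, cov, w in zip(means, covariances, weights):
        if w < weight_floor:
            continue                      # ignore negligible components
        diff = point - mu
        maha = np.sqrt(diff @ np.linalg.solve(cov, diff))
        if maha < maha_threshold:
            return False                  # inside the component's 3-sigma region
    return True

def filter_path(waypoints, gmm):
    """Keep only waypoints that pass the check against an sklearn-style GMM."""
    return [p for p in waypoints
            if is_safe(np.asarray(p), gmm.means_, gmm.covariances_, gmm.weights_)]
```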

Recommended type of project: semester project / master project
Work breakdown: 50% theory, 30% coding, 20% simulation
Prerequisites: Broad interest in robotics; excellent programming skills (Python, C/C++); good knowledge in ROS, git.
Keywords: Micro Aerial Vehicles, simulation, Gaussian Splats, Gaussian Mixture Models, Mixture of Gaussians.
Contact: Lucas Wälti

References:

[1] Kerbl, Bernhard, Georgios Kopanas, Thomas Leimkuehler, and George Drettakis. “3D Gaussian Splatting for Real-Time Radiance Field Rendering.” ACM Trans. Graph. 42, no. 4 (July 26, 2023): 139:1-139:14. https://doi.org/10.1145/3592433.

[2] Keetha, Nikhil, Jay Karhade, Krishna Murthy Jatavallabhula, Gengshan Yang, Sebastian Scherer, Deva Ramanan, and Jonathon Luiten. “SplaTAM: Splat, Track & Map 3D Gaussians for Dense RGB-D SLAM.” arXiv, April 16, 2024. http://arxiv.org/abs/2312.02126.

[3] Chen, Timothy, Ola Shorinwa, Joseph Bruno, Javier Yu, Weijia Zeng, Keiko Nagami, Philip Dames, and Mac Schwager. “Splat-Nav: Safe Real-Time Robot Navigation in Gaussian Splatting Maps.” arXiv, April 26, 2024. http://arxiv.org/abs/2403.02751.

[4] Tabib, Wennie, Kshitij Goel, John Yao, Mosam Dabhi, Curtis Boirum, and Nathan Michael. “Real-Time Information-Theoretic Exploration with Gaussian Mixture Model Maps.” In Robotics: Science and Systems XV. Robotics: Science and Systems Foundation, 2019. https://doi.org/10.15607/RSS.2019.XV.061.

[5] Tabib, Wennie, Kshitij Goel, John Yao, Curtis Boirum, and Nathan Michael. “Autonomous Cave Surveying With an Aerial Robot.” IEEE Transactions on Robotics 38, no. 2 (April 2022): 1016–32. https://doi.org/10.1109/TRO.2021.3104459.

[6] Corah, Micah, Cormac O’Meadhra, Kshitij Goel, and Nathan Michael. “Communication-Efficient Planning and Mapping for Multi-Robot Exploration in Large Environments.” IEEE Robotics and Automation Letters 4, no. 2 (April 2019): 1715–21. https://doi.org/10.1109/LRA.2019.2897368.

[7] Chen, Timothy, Aiden Swann, Javier Yu, Ola Shorinwa, Riku Murai, Monroe Kennedy III, and Mac Schwager. “SAFER-Splat: A Control Barrier Function for Safe Navigation with Online Gaussian Splatting Maps.” arXiv, September 15, 2024. http://arxiv.org/abs/2409.09868.

[8] “Webots: Robot Simulator.” Accessed October 6, 2022. https://cyberbotics.com/.

 

Belief Sharing and Merging of a Multi-Robot System for Gas Source Localization Task

Assigned to:  Marc Arias Mitjà

The deployment of robots for Gas Source Localization (GSL) tasks in hazardous scenarios significantly reduces risks to humans and animals, given their autonomy and their gas and wind sensing capabilities. Probabilistic GSL algorithms [1][2] infer the source position efficiently from scattered measurements and make informed navigation decisions. The belief over candidate source positions is updated by comparing the actual measurements with the expected concentrations derived from the gas model.

The application of Multi-Robot Systems (MRS) [3] in this field offers a promising research direction, particularly for enhancing information gathering procedures in time-critical applications. However, when employing an MRS where each robot builds a belief map of the source position based on its own measurements, the challenges of merging belief maps across robots and leveraging differences in their estimations to develop more efficient navigation strategies remain unexplored.

This project aims to investigate the deployment of MRS for GSL tasks, focusing on belief sharing and merging between robots. The proposed approach will be evaluated in the high-fidelity robotic simulator, Webots [4].
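A natural baseline for the merging step, treating each robot's belief grid as independent evidence, is a multiplicative fusion in log space, while the disagreement between maps can serve as a cue for navigation. The sketch below illustrates both ideas; the normalized 2D belief-grid format is an assumption, and this baseline is only a starting point, not the approach to be developed.

```python
# Minimal sketch of merging per-robot belief grids over candidate source cells
# and quantifying their disagreement. Illustrative baseline only.
import numpy as np

def merge_beliefs(belief_maps, eps=1e-12):
    """belief_maps: list of (H, W) arrays, each a normalized posterior over
    candidate source positions. Returns the fused, renormalized belief."""
    log_fused = np.zeros_like(belief_maps[0], dtype=float)
    for b in belief_maps:
        log_fused += np.log(b + eps)       # product of beliefs in log space
    fused = np.exp(log_fused - log_fused.max())
    return fused / fused.sum()

def disagreement(belief_maps):
    """Mean pairwise total variation distance between belief maps, usable as
    a cue for directing robots toward contested regions."""
    n, tv = len(belief_maps), []
    for i in range(n):
        for j in range(i + 1, n):
            tv.append(0.5 * np.abs(belief_maps[i] - belief_maps[j]).sum())
    return float(np.mean(tv)) if tv else 0.0
```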

Recommended type of project: semester project, master project

Work breakdown: 40% theory, 60% simulation

Prerequisites: passion for problem solving, broad interest in robotics, good programming skills (C/C++); experience with Bayesian inference and Webots will be an asset

Keywords: gas source localization, probabilistic algorithm, multi-robot system

Contact: Wanting Jin

References:

[1] W. Jin, F. Rahbar, C. Ercolani, and A. Martinoli, “Towards Efficient Gas Leak Detection in Built Environments: Data-Driven Plume Modeling for Gas Sensing Robots,” in 2023 IEEE International Conference on Robotics and Automation (ICRA). London, United Kingdom: IEEE, May 2023, pp. 7749–7755.

[2] W. Jin and A. Martinoli, “Sense in Motion with Belief Clustering: Efficient Gas Source Localization with Mobile Robots,” In 2024 IEEE International Conference on Robotics and Automation (ICRA). Yokohama, Japan: IEEE, May 2024.

[3] J. R. Bourne, M. N. Goodell, X. He, J. A. Steiner, and K. K. Leang, “Decentralized multi-agent information-theoretic control for target estimation and localization: finding gas leaks,” The International Journal of Robotics Research, vol. 39, no. 13, pp. 1525–1548, 2020.

[4] Webots: https://cyberbotics.com/doc/reference/index.

 

Gas Leak Detection with Sensor Network by Using Graph Neural Networks

Assigned to: Jin Kilidjian

 

Gas leak detection is a critical application in industrial safety, environmental protection, and public health [1]. Traditional approaches rely on sensor networks to measure gas concentrations and identify leaks based on predefined thresholds [2]. However, these methods may be less effective given the sparse measurements (limited number of sensor nodes), the noisy nature of gas sensors, and dynamic environmental factors such as airflow influenced by obstacles. An effective fault detection method that exploits the spatial and temporal relationships between sensor signals across the network could facilitate early-stage gas leak detection and reduce the likelihood of false alarms.

Graph Neural Networks (GNNs) [3] are well suited for processing graph-structured data such as sensor networks, where nodes represent sensors and edges represent spatial proximity relationships between them. By leveraging the relational structure of the sensor network, GNNs can provide more robust and accurate predictions for gas leak detection.

The project aims to develop and evaluate a GNN-based approach for fault detection in a gas leak sensing network, using data collected from a simulated or real-world sensor network.
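To make the message-passing idea concrete, the sketch below implements a plain-PyTorch graph convolution over a sensor network, producing a per-node leak logit from a window of recent readings. The symmetric adjacency normalization is standard; the topology, feature window, and per-node labeling are illustrative assumptions (a library such as PyTorch Geometric would typically be used in practice).

```python
# Minimal sketch of a graph convolution over a sensor network, in plain
# PyTorch with a normalized adjacency matrix. Node features could be windows
# of recent concentration readings; the per-node leak label is an assumption.
import torch
import torch.nn as nn

def normalized_adjacency(adj):
    """Symmetric normalization D^-1/2 (A + I) D^-1/2 of an (N, N) adjacency."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = torch.diag(a.sum(1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt

class SensorGNN(nn.Module):
    def __init__(self, in_features, hidden=32):
        super().__init__()
        self.lin1 = nn.Linear(in_features, hidden)
        self.lin2 = nn.Linear(hidden, 1)       # per-node leak logit

    def forward(self, x, a_norm):              # x: (N, F), a_norm: (N, N)
        h = torch.relu(a_norm @ self.lin1(x))  # aggregate neighbor features
        return (a_norm @ self.lin2(h)).squeeze(-1)

if __name__ == "__main__":
    adj = (torch.rand(10, 10) > 0.7).float()   # placeholder sensor topology
    adj = ((adj + adj.T) > 0).float()
    x = torch.rand(10, 16)                     # e.g., 16 recent readings per node
    model = SensorGNN(in_features=16)
    logits = model(x, normalized_adjacency(adj))
```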

Recommended type of project: semester project, master project

Work breakdown: 50% theory, 50% programming

Prerequisites: passion for problem solving, broad interest in data science and deep learning, good programming skills (C/C++); experience with GNNs will be an asset

Keywords: graph neural networks, sensor networks, gas leak detection

Contact: Wanting Jin

References:

[1] W. Jin, F. Rahbar, C. Ercolani, and A. Martinoli, “Towards Efficient Gas Leak Detection in Built Environments: Data-Driven Plume Modeling for Gas Sensing Robots,” in 2023 IEEE International Conference on Robotics and Automation (ICRA). London, United Kingdom: IEEE, May 2023, pp. 7749–7755.

[2] Burgués, Javier, et al. “Gas distribution mapping and source localization using a 3D grid of metal oxide semiconductor sensors.” Sensors and Actuators B: Chemical 304 (2020): 127309.

[3] M. Zhao and O. Fink, “DyEdgeGAT: Dynamic Edge via Graph Attention for Early Fault Detection in IoT Systems,” in IEEE Internet of Things Journal, vol. 11, no. 13, pp. 22950-22965, July, 2024

 

Optimizing Gas Sensor Placement on Micro Drones via CFD and Wind Tunnel Experiments

Assigned to: Thomas Cirillo

Detecting the source of a dangerous gas is very useful for environmental monitoring and for taking fast and effective countermeasures in case of an emergency. Multi-rotor drones are becoming increasingly popular for performing this task. However, the propeller rotation inside the gas plume can compromise gas sensor readings. Therefore, an appropriate mechanical design and gas sensor configuration are essential to minimize the adverse effects of airflow on the gas sensor [1].

In a previous semester project [2], a Computational Fluid Dynamics (CFD) simulation of the gas flow in the vicinity of the Crazyflie [3] nano drone was conducted using OpenFOAM [4] and Solidworks [5] to reveal the flow’s properties. This effort needed to be complemented by experimental work to decide on the gas sensor placement [6].

In this project, taking the previous work [2], [6] as a baseline, we aim to further validate the results of the CFD simulation for the Crazyflie through wind tunnel experiments, and to leverage this expertise to iteratively refine the mechanical design and sensor configuration, primarily using CFD tools, for the larger Starling micro drone [7], which offers significantly better autonomy and payload capacity than the Crazyflie nano drone.

Recommended type of project: semester project/master project

Work breakdown: 20% theory, 30% coding, 50% simulation and physical experiments

Prerequisites: Broad interest in robotics, good programming skills (C/C++ and/or Matlab), an introductory knowledge or interest in learning fluid dynamics, and a background in CFD will be an asset

Keywords: CFD simulations, quadrotors, gas sensing, mechanical design

Contact: İzzet Kağan Erünsal and Nicolaj Schmid

References:

[1] S.R. Weerasinghe and M. Monasor, “Simulation and Experimental Analysis of Hovering and Flight of a Quadrotor”, 13th International Conference on Heat Transfer, Fluid Mechanics, and Thermodynamics, 2017
[2] A. Moufidi, “Validation of a High-Fidelity Simulator for Odor Sensing with a Quadrotor”, Semester project, DISAL, 2020
[3] https://www.bitcraze.io/products/old-products/crazyflie-2-0/, accessed in June 2024
[4] https://www.openfoam.com/, OpenFOAM, accessed in June 2024
[5] https://www.solidworks.com/, Solidworks, accessed in June 2024
[6] Ercolani C. and Martinoli A., “3D Odor Source Localization using a Micro Aerial Vehicle: System Design and Performance Evaluation”, Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, October 2020, Las Vegas, NV, USA, online organization, pp. 6194-6200.
[7] https://www.modalai.com/products/starling-v1, ModalAI, accessed in June 2024

 

Gas Source Localization using Neural Networks

Assigned to: Jad Benabdelkader

Gas Source Localization (GSL) aims to locate gas leaks as efficiently as possible. This is important in emergency situations involving toxic gases or to reduce the environmental impact of industrial gas leaks. Gas sources can be localized using static sensor networks or mobile robots. Many state-of-the-art methods rely on probabilistic algorithms that estimate the likelihood of the source position, e.g. Source Term Estimation (STE) [1, 2]. However, most of these methods assume an explicit model of the gas plume/distribution, which is a very complicated fluid dynamic phenomenon. Typically, this assumption fails in cluttered environments. In this project, we aim to estimate the position of the gas source directly using an Artificial Neural Network (ANN).

We want to train an ANN with gas maps (input) and predict the location of gas sources (output). Previous methods first map the entire environment and then predict the source location [3, 4]. However, we consider a mobile robot that takes continuous measurements and relies on intermediate predictions. Therefore, the model should work with incomplete input maps, i.e. before the robot has explored the whole environment. Previously, our lab has used deep learning in the context of STE [2]. Hence, we have already collected a significant amount of data in simulation that can be used for training.

Recommended type of project: semester project

Work breakdown: 20% theory, 80% coding

Prerequisites: An interest in machine learning and robotic systems. Strong background in Python and PyTorch. Previous experience in training neural networks.

Keywords: Machine Learning, Neural Network, Gas Source Localization

Contact: Nicolaj Schmid

References:

[1] M. Hutchinson, H. Oh, and W.-H. Chen, “A review of source term estimation methods for atmospheric dispersion events using static or mobile sensors,” Information Fusion, vol. 36, pp. 130–148, Jul. 2017.

[2] W. Jin, F. Rahbar, C. Ercolani, and A. Martinoli, “Towards Efficient Gas Leak Detection in Built Environments: Data-Driven Plume Modeling for Gas Sensing Robots,” in 2023 IEEE International Conference on Robotics and Automation (ICRA), London, United Kingdom: IEEE, May 2023, pp. 7749–7755.

[3] A. S. A. Yeon, A. Zakaria, S. M. M. S. Zakaria, R. Visvanathan, K. Kamarudin, and L. M. Kamarudin, “Gas Source Localization via Mobile Robot with Gas Distribution Mapping and Deep Neural Network,” in 2022 2nd International Conference on Electronic and Electrical Engineering and Intelligent System (ICE3IS), Yogyakarta, Indonesia: IEEE, Nov. 2022, pp. 120–124.

[4] Z. H. M. Juffry et al., “Deep Neural Network for Localizing Gas Source Based on Gas Distribution Map,” in Proceedings of the 6th International Conference on Electrical, Control and Computer Engineering, vol. 842, Z. Md. Zain, Mohd. H. Sulaiman, A. I. Mohamed, Mohd. S. Bakar, and Mohd. S. Ramli, Eds., in Lecture Notes in Electrical Engineering, vol. 842. , Singapore: Springer Singapore, 2022, pp. 1105–1115.

 

Drag Model Identification and Wind Prediction with “Off-The-Shelf” Drones

Assigned to: Badil Mujovi

Rotary-wing unmanned and Micro Aerial Vehicles (UAVs and MAVs) are gaining popularity for environmental monitoring. In particular, wind sensing is an application that can benefit from such platforms, as a flying robot can be deployed easily and report on the local wind conditions in an area of interest. Common approaches consist in fitting the rotary-wing UAV with dedicated sensors, which often need to extend far beyond the robot’s body to avoid the downwash from the propellers. This drastically increases the footprint of the aircraft and requires dedicated hardware to be mounted on the robot, reducing its payload capacity for other tasks, even more so for MAVs. Recent work in our lab developed a method for drag model identification and external force detection for rotary-wing MAVs that does not require any dedicated sensor. We showed that external forces can be sensed in any direction while relying only on commonly available onboard sensors and information (accelerometer, pose estimate, throttle command), without any dedicated hardware.

The goal of this project is therefore to verify whether the approach mentioned above and presented in [1] allows us to estimate the wind speed around an MAV by inverting its identified drag model. The first step of the project will consist in developing a flight data acquisition process for the Crazyflie drone [2] and identifying its drag parameters from flight data collected during dynamic flights in no-wind conditions. The second step will consist in measuring wind intensity while flying in a wind tunnel with controlled wind conditions. The student will be assisted in recording flight data, but will also be able to participate in the acquisition of experimental data with flying robots according to the needs of the project.
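The sketch below illustrates the two steps under a deliberately simplified, per-axis linear drag assumption (a_drag ≈ -k · v_air in the body frame): fit k from no-wind logs, then invert the model in windy flight to recover the airspeed and hence the wind. It further assumes that the drag acceleration residual (accelerometer reading minus gravity and thrust contributions) is already available; the actual lumped model of [1] is richer than this.

```python
# Minimal sketch of the two project steps under a simplified linear (lumped)
# drag assumption a_drag ≈ -k * v_air per body axis. Illustrative only; the
# exact model and estimation pipeline of [1] are more elaborate.
import numpy as np

def identify_drag(v_body, a_drag_body):
    """No-wind flights: airspeed equals ground velocity. Least-squares fit of
    one drag coefficient per body axis from (T, 3) velocity and
    drag-acceleration logs."""
    k = np.empty(3)
    for i in range(3):
        vi = v_body[:, i]
        k[i] = -(vi @ a_drag_body[:, i]) / (vi @ vi + 1e-9)
    return k

def estimate_wind(v_ground_body, a_drag_body, k):
    """Windy flight: invert the drag model to recover the airspeed, then
    wind = ground velocity - airspeed (all expressed in the body frame)."""
    v_air = -a_drag_body / k            # per-axis inversion of a_drag = -k * v_air
    return v_ground_body - v_air
```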

Recommended type of project: semester project

Work breakdown: 20% theory, 50% coding, 30% real-world experiments

Prerequisites: Broad interest in robotics; good programming skills (Python, C/C++); knowledge in ROS.

Keywords: Quadrotors, drones, Micro Aerial Vehicles, Crazyflie, drag model, wind sensing.

Contact: Nicolaj Schmid and Lucas Wälti

References:

[1] L. Wälti and A. Martinoli, “Lumped Drag Model Identification and Real-Time External Force Detection for Rotary-Wing Micro Aerial Vehicles,” 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 2024, pp. 3853-3860, doi: 10.1109/ICRA57147.2024.10610862.

[2] Giernacki, Wojciech, Mateusz Skwierczyński, Wojciech Witwicki, Paweł Wroński, and Piotr Kozierski. “Crazyflie 2.0 Quadrotor as a Platform for Research and Education in Robotics and Control Engineering.” In 2017 22nd International Conference on Methods and Models in Automation and Robotics (MMAR), 37–42, 2017. https://doi.org/10.1109/MMAR.2017.8046794.