Student Projects

DISAL offers a range of student projects in its three main areas of expertise: Distributed Robotic Systems, Sensor and Actuator Networks, and Intelligent Vehicles. For more information about supervision guidelines and different types of student projects, please refer to this page.

 


Spring Semester 2025-2026 

Projects for the spring semester are now available.

Information-driven Gas Distribution Mapping

Assigned to: Emile Meyer

The goal of robotic Gas Distribution Mapping (GDM) is to build a gas concentration map using gas measurements collected by a mobile robot. In our previous work [1], we derived a novel GDM algorithm from the underlying physical equation describing gas dispersion: ADApprox, an Advection-Diffusion Approximation, outperforms a widely used benchmark algorithm (Kernel DM+V/W [2]) while directly relating learned and physical parameters. This physical grounding increases the accuracy and interpretability of the method. In this project, we would like to further develop ADApprox, in particular by making it more memory- and computation-efficient.

ADApprox approximates the gas concentration over an entire field of points. For each of these approximation points, parameters are learned from measurements. At the moment, the approximation points are uniformly distributed across the environment. However, not all regions carry the same amount of information or require the same density of approximation points. This project aims to adapt the positions of the approximation points online such that fewer points are required while achieving the same mapping accuracy, which would increase the memory and computation efficiency of the method. A dataset collected during physical experiments for the previous publication can be used for evaluation, and more data could be gathered in simulation for more complex environments.
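As a toy illustration of the idea (not the ADApprox implementation; the function name, coarse-grid binning and sampling scheme are all invented for this sketch), approximation points could be allocated in proportion to the local variance of the measurements:

```python
import numpy as np

def place_points(meas_xy, meas_c, n_points, bounds, n_bins=8, rng=None):
    """Allocate approximation points proportionally to local gas variance.

    meas_xy: (N, 2) measurement locations; meas_c: (N,) concentrations.
    Regions with higher measurement variance receive more points.
    Illustrative sketch only, not the ADApprox algorithm."""
    rng = np.random.default_rng(rng)
    (x0, x1), (y0, y1) = bounds
    # Bin measurements on a coarse grid and compute per-cell variance.
    ix = np.clip(((meas_xy[:, 0] - x0) / (x1 - x0) * n_bins).astype(int), 0, n_bins - 1)
    iy = np.clip(((meas_xy[:, 1] - y0) / (y1 - y0) * n_bins).astype(int), 0, n_bins - 1)
    var = np.zeros((n_bins, n_bins))
    for i in range(n_bins):
        for j in range(n_bins):
            c = meas_c[(ix == i) & (iy == j)]
            var[i, j] = c.var() if c.size > 1 else 0.0
    # Sample cells proportionally to variance (uniform fallback if all zero).
    w = var.ravel() + 1e-9
    cells = rng.choice(n_bins * n_bins, size=n_points, p=w / w.sum())
    ci, cj = np.divmod(cells, n_bins)
    # Jitter each point uniformly inside its cell.
    xs = x0 + (ci + rng.random(n_points)) * (x1 - x0) / n_bins
    ys = y0 + (cj + rng.random(n_points)) * (y1 - y0) / n_bins
    return np.column_stack([xs, ys])
```

An online variant would update the cell statistics incrementally as new measurements arrive, moving points between cells instead of resampling from scratch.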

Recommended type of project: master project / semester project

Work breakdown: 20% literature review, 20% theory, 40% coding, 20% simulation

Prerequisites: Curiosity to improve mapping algorithms. Strong background in Python and Numpy.

Keywords: Gas Distribution Mapping, Information-driven Mapping, ADApprox

Contact: Nicolaj Bösel-Schmid

References:
[1] N. Bösel-Schmid, W. Jin and A. Martinoli, “Physics-Based Gas Mapping with Nano Aerial Vehicles: The ADApprox Algorithm,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hangzhou, China, 2025
[2] M. Reggente and A. J. Lilienthal, “The 3D-Kernel DM+V/W algorithm: Using wind information in three dimensional gas distribution modelling with a mobile robot,” SENSORS, 2010 IEEE, Waikoloa, HI, USA, 2010, pp. 999-1004

Reinforcement Learning for Gas Source Localization

Assigned to: Emilien Coudurier

The goal of Gas Source Localization (GSL) is to locate gas leaks as efficiently as possible. This is important in emergency situations involving toxic gases or to reduce the environmental impact of industrial gas leaks. In mobile robotics, many GSL techniques have been developed that estimate the source position based on gas measurements. Navigation is a crucial part of such an algorithm because it determines where the measurements are taken, i.e., how much time the robot spends exploring a particular part of the environment. There are a variety of different navigation techniques in the literature: from bio-inspired reactive approaches [1] to information-driven ones [2]. In recent years, Reinforcement Learning (RL) [3-5] has become a compelling alternative due to its potential to navigate complex environments.

In previous work, we showed that gas sources can be localized efficiently by extracting certain features from a gas map (e.g., highest gas variance) [6]. However, the robot moved on a fixed scanning trajectory and did not perform adaptive path planning. We therefore implemented RL algorithms (DDPG, SAC) optimizing for efficient GSL and evaluated them in simulation. This project aims to make the RL methods more robust and potentially extend them to Multi-Agent RL (MARL).
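The navigation problem can be illustrated with a deliberately simplified sketch: tabular Q-learning on a small concentration grid whose source is the concentration maximum. The project itself uses deep RL (DDPG, SAC) with continuous actions; the reward shaping below (local concentration minus a step cost, plus a terminal bonus) is an invented toy, not the project's formulation.

```python
import numpy as np

def q_learning_gsl(conc, episodes=800, eps=0.2, alpha=0.5, gamma=0.95, seed=0):
    """Tabular Q-learning for toy gas source localization on a grid.
    conc: (H, W) gas concentration field; the source is its argmax.
    Returns the Q-table and the source cell."""
    rng = np.random.default_rng(seed)
    H, W = conc.shape
    src = np.unravel_index(int(conc.argmax()), conc.shape)
    moves = ((-1, 0), (1, 0), (0, -1), (0, 1))
    Q = np.zeros((H, W, 4))
    for _ in range(episodes):
        r, c = int(rng.integers(H)), int(rng.integers(W))
        for _ in range(4 * H * W):
            # Epsilon-greedy action selection.
            a = int(rng.integers(4)) if rng.random() < eps else int(Q[r, c].argmax())
            nr = min(max(r + moves[a][0], 0), H - 1)
            nc = min(max(c + moves[a][1], 0), W - 1)
            done = (nr, nc) == src
            # Toy reward: local concentration minus step cost; big terminal bonus.
            reward = 100.0 if done else conc[nr, nc] - 0.2
            target = reward + (0.0 if done else gamma * Q[nr, nc].max())
            Q[r, c, a] += alpha * (target - Q[r, c, a])
            r, c = nr, nc
            if done:
                break
    return Q, src
```

After training, following the greedy policy from any cell leads the agent to the source; the deep RL agents in the project play the same role with continuous observations and actions.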

Recommended type of project: master project / semester project

Work breakdown: 20% literature review, 50% coding, 30% simulation

Prerequisites: An interest in machine learning and robotic systems. Strong background in Python and Pytorch. Previous experience in reinforcement learning.

Keywords: Reinforcement Learning, Gas Source Localization, Multi-Agent Reinforcement Learning

Contact: Nicolaj Bösel-Schmid

References:
[1] A. Francis, S. Li, C. Griffiths, and J. Sienz, “Gas source localization and mapping with mobile robots: A review,” Journal of Field Robotics, vol. 39, no. 8, pp. 1341–1373, Dec. 2022
[2] M. Vergassola, E. Villermaux, and B. I. Shraiman, “‘Infotaxis’ as a strategy for searching without gradients,” Nature, vol. 445, no. 7126, pp. 406–409, Jan. 2007
[3] Y. Shi, M. Wen, Q. Zhang, W. Zhang, C. Liu, and W. Liu, “Autonomous Goal Detection and Cessation in Reinforcement Learning: A Case Study on Source Term Estimation”.
[4] Y. Zhao et al., “A deep reinforcement learning based searching method for source localization,” Information Sciences, vol. 588, pp. 67–81, Apr. 2022
[5] Y. Shi, K. McAreavey, C. Liu, and W. Liu, “Reinforcement Learning for Source Location Estimation: A Multi-Step Approach,” in 2024 IEEE International Conference on Industrial Technology (ICIT), Bristol, United Kingdom: IEEE, Mar. 2024, pp. 1–8.

Simultaneous Localization and Mapping with Gaussian Mixture Models

Assigned to: Arno Thomas Laurie

GMM registration [2] and loop closure example using GMM registration [3].

Gaussian Mixture Models (GMMs) allow one to significantly compress the 3D occupancy data of the environment, typically obtained from the LiDAR or Time-of-Flight camera embedded on a mobile robot. While more conventional representations, such as voxel grids, are easy to manipulate, they are limited by their finite resolution and therefore do not scale well in terms of memory footprint and update cost as the extent of the mapped environment grows. GMMs compress this information, enabling more efficient data storage and manipulation.

This project will investigate and test the use of GMMs as an occupancy representation, and how they can be used in the context of SLAM. The open-source GIRA framework [3] provides several key functionalities, including GMM fitting and GMM-to-GMM registration [2]. Using a suitable pose-graph optimization framework, such as GTSAM [4], this project will implement the full SLAM pipeline and validate it in simulation using the Webots simulator [5].
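To give a flavour of the pose-graph back-end, here is a sketch simplified to translation-only 2-D poses solved by linear least squares; a full pipeline with rotations would use GTSAM. Loop-closure constraints would come from GMM-to-GMM registration.

```python
import numpy as np

def optimize_pose_graph(odom, loops, anchor=(0.0, 0.0)):
    """Translation-only 2-D pose-graph optimization (linear least squares).
    odom:  list of (i, j, dxy) relative-position constraints from odometry.
    loops: list of (i, j, dxy) loop-closure constraints (pose_j - pose_i = dxy).
    Pose 0 is anchored at `anchor`. Returns (n, 2) optimized positions."""
    n = max(max(i, j) for i, j, _ in odom + loops) + 1
    rows, rhs = [], []
    for i, j, d in odom + loops:
        for k in range(2):  # x and y are decoupled in this simplification
            r = np.zeros(2 * n)
            r[2 * j + k], r[2 * i + k] = 1.0, -1.0
            rows.append(r)
            rhs.append(d[k])
    # Anchor the first pose to fix the global gauge freedom.
    for k in range(2):
        r = np.zeros(2 * n)
        r[k] = 1.0
        rows.append(r)
        rhs.append(anchor[k])
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x.reshape(n, 2)
```

With drifting odometry around a square and one loop closure back to the start, the accumulated error gets redistributed around the cycle instead of piling up at the last pose.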

Recommended type of project: semester project / master project
Work breakdown: 30% theory, 50% coding, 20% simulation
Prerequisites: Broad interest in robotics; excellent programming skills (C/C++, Python); good knowledge in ROS, git.
Keywords: Micro Aerial Vehicles, simulation, Gaussian Mixture Models, navigation, SLAM.
Contact: Lucas WĂ€lti

References:
[1] Tabib, Wennie, and Nathan Michael. “Simultaneous Localization and Mapping of Subterranean Voids with Gaussian Mixture Models.” In Field and Service Robotics, edited by Genya Ishigami and Kazuya Yoshida. Springer, 2021. https://doi.org/10.1007/978-981-15-9460-1_13.
[2] Tabib, Wennie, Cormac O’Meadhra, and Nathan Michael. “On-Manifold GMM Registration.” IEEE Robotics and Automation Letters 3, no. 4 (2018): 3805–12. https://doi.org/10.1109/LRA.2018.2856279.
[3] Goel, Kshitij, and Wennie Tabib. “GIRA: Gaussian Mixture Models for Inference and Robot Autonomy.” 2024 IEEE International Conference on Robotics and Automation (ICRA), May 2024, 6212–18. https://doi.org/10.1109/ICRA57147.2024.10611216.
[4] GTSAM. “GTSAM.” Accessed November 13, 2023. http://gtsam.org/.
[5] “Webots: Robot Simulator.” Accessed October 6, 2022. https://cyberbotics.com/.

 

Synchronized Multi-Robot Mapping Using Time-of-Flight Sensors and Voxel Occupancy Grids

Assigned to: Tuan Linh Phan

Three robots exploring an environment [2] and sub-maps alignment to obtain a complete map [1].

Environment mapping with mobile robots is a common task, and it is often carried out by a single robot. Having several robots build the same map simultaneously, however, significantly increases the complexity of the task, requiring proper synchronization and information sharing. Initial strategies consisted of merging partially overlapping sub-maps of the environment to form the full map [1], often in a centralized way. Distributed approaches, in which robots share their observations with each other and locally update their knowledge of the environment, are however desirable [2].

This project will aim at reviewing the state of the art in multi-robot mapping, identifying relevant methods for updating, sharing and synchronizing the map of the environment. In particular, it will be interesting to investigate the centralized vs. distributed paradigms and their implications for computation and communication costs. Experiments with several robots will be performed in simulation in the Webots simulator [3].
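A common building block, sketched here under the assumption that the sub-maps are already aligned to a common frame, is probabilistic fusion of occupancy grids in log-odds space:

```python
import numpy as np

def fuse_submaps(prob_maps, prior=0.5):
    """Fuse aligned per-robot occupancy probability grids by summing
    log-odds (the standard update for probabilistic occupancy grids),
    subtracting the shared prior counted multiple times.
    prob_maps: list of arrays with values in (0, 1)."""
    log_odds = sum(np.log(p / (1 - p)) for p in prob_maps)
    log_odds -= (len(prob_maps) - 1) * np.log(prior / (1 - prior))
    return 1 / (1 + np.exp(-log_odds))
```

Two robots that each observe a cell as occupied with probability 0.8 yield a fused probability of 16/17, about 0.94, while contradicting observations cancel back to the prior.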

Recommended type of project: semester project / master project
Work breakdown: 30% theory, 50% coding, 20% simulation
Prerequisites: Broad interest in robotics; excellent programming skills (C/C++, Python); good knowledge in ROS, git.
Keywords: Micro Aerial Vehicles, simulation, cooperative SLAM, map fusion.
Contact: Lucas WĂ€lti

References:
[1] Jessup, J., S. N. Givigi, and A. Beaulieu. “Robust and Efficient Multi-Robot 3D Mapping with Octree Based Occupancy Grids.” 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), October 2014, 3996–4001. https://doi.org/10.1109/SMC.2014.6974556.
[2] Corah, Micah, Cormac O’Meadhra, Kshitij Goel, and Nathan Michael. “Communication-Efficient Planning and Mapping for Multi-Robot Exploration in Large Environments.” IEEE Robotics and Automation Letters 4, no. 2 (2019): 1715–21. https://doi.org/10.1109/LRA.2019.2897368.
[3] “Webots: Robot Simulator.” Accessed October 6, 2022. https://cyberbotics.com/.

 

Bridging the Sim-to-Real Gap with a Photorealistic Environment

Assigned to: Diana Bejan

Recent advances in 3D reconstruction and photorealistic simulation have made it possible to recreate real-world environments with high visual and geometric fidelity. Such digital twins are increasingly valuable for robotics, where the sim-to-real gap (the discrepancy between simulated and real-world performance) remains a key challenge for deploying vision-based algorithms [1], [2], [3].

In this project, the student will reproduce a real drone arena inside Isaac Sim using smartphone-based 3D reconstruction techniques. The objective is to create a photorealistic virtual environment that closely matches the real arena in texture, color, and geometry. Once the virtual replica is built, the student will select a vision-based robotics algorithm, such as a visual-inertial odometry or object detection method, to evaluate within both the simulated and real-world arenas.

By comparing the algorithm’s performance across these two domains, the student will quantify and analyze the sim-to-real gap, identifying the main factors affecting transferability between simulation and reality. The project will contribute to understanding how photorealistic environments can improve the reliability of simulation-based development and testing for aerial robotics.
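One simple, assumption-laden way to start quantifying the visual part of the gap is to compare low-level image statistics between matched sim/real views; the function and metrics below are an illustrative sketch, not a standard benchmark.

```python
import numpy as np

def photometric_gap(img_sim, img_real, bins=32):
    """Photometric discrepancy measures between a simulated and a real
    image of the same scene, as a rough proxy for the visual sim-to-real
    gap: per-channel mean/std differences and histogram intersection.
    Images: float arrays in [0, 1] with shape (H, W, 3)."""
    stats = {
        "mean_diff": np.abs(img_sim.mean((0, 1)) - img_real.mean((0, 1))),
        "std_diff": np.abs(img_sim.std((0, 1)) - img_real.std((0, 1))),
    }
    inter = []
    for c in range(3):
        hs, _ = np.histogram(img_sim[..., c], bins=bins, range=(0, 1))
        hr, _ = np.histogram(img_real[..., c], bins=bins, range=(0, 1))
        hs = hs / hs.sum()
        hr = hr / hr.sum()
        inter.append(np.minimum(hs, hr).sum())  # 1.0 means identical histograms
    stats["hist_intersection"] = np.array(inter)
    return stats
```

The algorithm-level gap (e.g., odometry drift or detection accuracy in sim vs. real) would then be reported alongside such photometric measures to correlate the two.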

Recommended type of project: semester project / master project
Work breakdown: 20% theory, 40% coding, 40% simulations
Prerequisites: Broad interest in robotics; good Python skills; understanding of ROS, Isaac Sim and perception in robotics.
Keywords: Sim-to-real transfer, Isaac Sim, Visual-inertial odometry, Object Detection.
Contact: Yacine Derder


References:

[1] Wang, Guangming et al. (2025). “NeRFs in robotics: A survey.” The International Journal of Robotics Research. 10.1177/02783649251374246.
[2] NVIDIA. Isaac Sim 4.0 Documentation: High-Fidelity Simulation for Robotics, 2024.
[3] Tobin, Josh et al. (2017). “Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World.” 10.48550/arXiv.1703.06907.

 

Using LiDAR to Localize and Track MAVs

Assigned to: Amandine Meunier

LiDAR sensors are increasingly used for precise localization and object tracking in robotics due to their robustness to lighting conditions and ability to provide accurate 3D spatial information [1]. With the growing availability of compact, high-resolution LiDAR systems, their use in ground-based tracking of Micro Aerial Vehicles (MAVs) has become an active research topic. Understanding how different LiDAR technologies perform in detecting and tracking fast-moving aerial robots is key to enabling safe coordination and autonomous operation in multi-drone environments.

In this project, the student will first survey the main categories of 3D LiDAR sensors, focusing on mechanical and solid-state designs, and compare their specifications such as range, field of view, scan rate, and resolution. Based on this review, two representative LiDAR types will be selected and implemented in a simulation environment such as Webots, Gazebo, or Isaac Sim [2], [3].  The LiDAR sensor will be placed on the ground or on a fixed structure, while a Crazyflie drone will fly within the scene. The student will then develop methods to use the LiDAR data to detect, localize, and track the moving MAV.

The project will conclude with a comparative analysis of the selected LiDAR types, evaluating their performance in terms of detection reliability, tracking accuracy, robustness to motion, and computational efficiency. The results will provide insights into the suitability of different LiDAR designs for ground-based tracking of aerial robots and inform future real-world experiments.
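As a minimal sketch of the detection-and-tracking step (brute-force background subtraction plus an alpha-beta filter; a real system would use a KD-tree, clustering, and a proper Kalman filter):

```python
import numpy as np

def detect_mav(scan, background, thresh=0.2):
    """Detect a flying MAV in a LiDAR scan by background subtraction:
    keep points farther than `thresh` from every static background point
    and return their centroid (None if nothing moved)."""
    d = np.linalg.norm(scan[:, None, :] - background[None, :, :], axis=2)
    moving = scan[d.min(axis=1) > thresh]
    return moving.mean(axis=0) if len(moving) else None

class AlphaBetaTracker:
    """Constant-velocity alpha-beta filter smoothing noisy 3-D detections."""
    def __init__(self, alpha=0.5, beta=0.1, dt=0.1):
        self.alpha, self.beta, self.dt = alpha, beta, dt
        self.x = self.v = None
    def update(self, z):
        z = np.asarray(z, float)
        if self.x is None:           # first detection initializes the state
            self.x, self.v = z, np.zeros(3)
            return self.x
        pred = self.x + self.v * self.dt   # constant-velocity prediction
        r = z - pred                       # innovation
        self.x = pred + self.alpha * r
        self.v = self.v + self.beta / self.dt * r
        return self.x
```

The comparative analysis would then run this (or a stronger) pipeline on point clouds produced by each simulated LiDAR model and compare tracking error over the Crazyflie's trajectory.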

Recommended type of project: semester project / master project
Work breakdown: 30% theory, 40% coding, 30% simulations
Prerequisites: Good Python skills; familiarity with robotics simulation (Webots, Gazebo, or Isaac Sim); basic understanding of LiDAR sensing and drone dynamics.
Keywords: LiDAR, Object Tracking, MAV Localization, Isaac Sim, Crazyflie
Contact: Yacine Derder

References:
[1] Georgios Zamanakos et al. (2021). “A comprehensive survey of LIDAR-based 3D object detection methods with deep learning for autonomous driving” Computers & Graphics, Volume 99, 2021, Pages 153-181.

[2] Bitcraze. “Crazyflie’s Adventures with ROS 2 and Gazebo”, 2024 https://www.bitcraze.io/2024/09/crazyflies-adventures-with-ros-2-and-gazebo/

[3] C. Llanes et al. “CrazySim: A Software-in-the-Loop Simulator for the Crazyflie Nano Quadrotor,” 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 2024, pp. 12248-12254, doi: 10.1109/ICRA57147.2024.10610906.

Thermal Multi-View Reconstruction


Thermal cameras are often used in robotic inspection tasks. However, the resulting image depends in most cases on the viewing angle [1], to a greater extent than for many objects in the visible spectrum. As such, using multiple viewpoints to reconstruct objects in the thermal domain is a challenging problem [2].

This project will develop a method that incorporates as much of the relevant physics of thermal sensing as possible, including gas absorption, thermal reflections, material emissivity, camera spectral sensitivity curves, and more, in order to reconstruct objects in the thermal domain from a multi-camera setup. The method will be validated experimentally with a stereo camera setup, reconstructing objects with known properties.
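As one example of the physics involved, a broadband grey-body model lets one correct an apparent temperature for emissivity and reflected ambient radiation. This is a strong simplification that ignores spectral response and gas absorption, both of which the project would model.

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def true_temperature(T_apparent, emissivity, T_reflected):
    """Correct an apparent (blackbody-equivalent) temperature reading for
    surface emissivity and reflected ambient radiation, using the
    broadband radiance model L = eps*sigma*T^4 + (1-eps)*sigma*T_refl^4.
    Temperatures in kelvin."""
    L_meas = SIGMA * np.asarray(T_apparent, float) ** 4
    L_refl = (1 - emissivity) * SIGMA * np.asarray(T_reflected, float) ** 4
    return ((L_meas - L_refl) / (emissivity * SIGMA)) ** 0.25
```

Because the reflected term depends on the viewing geometry, fusing several viewpoints with known poses constrains emissivity and true surface temperature jointly, which is the core of the reconstruction problem.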

Recommended type of project: semester project

Work breakdown: 40% theory, 50% coding, 10% experiments

Prerequisites: An interest in computer vision. Good background in coding. 

Keywords: Thermal cameras, reconstruction, computer vision, physics.

Contact: Alexander Wallén Kiessling

References:

[1] Vollmer, Michael, and Klaus-Peter Möllmann. Infrared thermal imaging: fundamentals, research and applications. John Wiley & Sons, 2018.

[2] Mouats, Tarek, et al. “Thermal stereo odometry for UAVs.” IEEE Sensors Journal 15.11 (2015): 6335-6347.

TIO – Thermal inertial odometry for robots

Assigned to: Paul Vincent Stéphane Bourgois

Visual-inertial odometry (VIO) is used to localize mobile robots in areas with no GNSS coverage. This is particularly useful underground or indoors, and many sensor modalities can be used [1]. For micro aerial vehicles, cameras have become the de-facto choice for localization due to their low weight and low cost. However, regular cameras perceive light in the visible spectrum and therefore perform poorly in low-light conditions. Thermal cameras, in contrast, perceive the infrared radiation emitted by objects, which does not depend on the same visual conditions.

In this project, the primary aim is to utilize thermal cameras for navigation, with a particular emphasis on low-light conditions. The goal is to see whether regular VIO algorithms can be adapted to thermal cameras, and to extend navigation methods as needed. The student will be provided with real-world datasets collected on aerial vehicles, and will benchmark against VIO algorithms using visual cameras in the same environments.
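A typical first preprocessing step when feeding thermal imagery to a feature-based VIO front-end is rescaling the raw high-bit-depth frames with percentile clipping; a minimal sketch:

```python
import numpy as np

def rescale_thermal(raw, lo_pct=1.0, hi_pct=99.0):
    """Rescale a raw (e.g., 14-bit) thermal frame to 8 bits with
    percentile clipping, so that feature detectors tuned for 8-bit
    visual images can operate on it. Outliers (hot spots, dead pixels)
    are clipped instead of compressing the whole dynamic range."""
    lo, hi = np.percentile(raw, [lo_pct, hi_pct])
    out = np.clip((raw.astype(float) - lo) / max(hi - lo, 1e-6), 0, 1)
    return (out * 255).astype(np.uint8)
```

One known caveat this sketch ignores is the periodic flat-field correction of uncooled cameras, which causes frame-wide intensity jumps that VIO front-ends must be made robust to.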

Recommended type of project: semester project / master project

Work breakdown: 30% theory, 50% coding, 20% experiments

Prerequisites: An interest in visual SLAM, computer vision. Strong background in Python and C/C++. General background from mobile/aerial robotics.

Keywords: SLAM, aerial vehicles, computer vision.

Contact: Alexander Wallén Kiessling

References:

[1] Luca Carlone et al., “Visual SLAM – {SLAM Handbook} From Localization and Mapping to Spatial Intelligence”, 2026 Cambridge University Press.

 


Autumn Semester 2025-2026 

These are past projects.

Market-Based Multi-Robot Task Allocation Algorithms for Micro Aerial Vehicles Performing Inspections

Assigned to: Francis Pannatier

Market-based algorithms have received significant attention for assigning tasks to multiple robots in scenarios such as patrolling, exploration, and pick-and-delivery [1], [2]. The task allocation problem in these domains is typically NP-hard, which means that finding an optimal solution is often not practical, especially in real-time applications. Market-based approaches, which create a simulated economic environment where robots can trade tasks and resources, offer a promising solution. These methods are capable of achieving high efficiency and can produce results close to optimal. However, their effectiveness depends strongly on how well the problem is defined through an appropriate taxonomy [3], [4], [5].

Several aspects are important in this taxonomy. These include the task capacity of each robot (whether they can handle one or multiple tasks), the type of tasks (whether they require one or multiple robots), task interdependence (independent, dependent within the same schedule, or dependent across different schedules), the timing of task assignment (instantaneous or time-extended), the system architecture (centralized, decentralized, distributed, or hybrid), and the communication framework (local or global connectivity).

In this project, the student is expected to design a market-based task allocation algorithm for a team of micro aerial vehicles (MAVs) and their docking stations. The work will first be carried out in a low-fidelity simulation environment, using either MATLAB or Python, and later in high-fidelity simulation, using either Webots or Gazebo. In the simulation, MAVs will perform basic inspection tasks generated by a ground control station, while docking stations will provide recharging services. Both MAVs and docking stations will be subject to constraints such as limited flight time, communication range, and minimum charging duration. The student will begin by reviewing relevant literature, define the problem taxonomy based on the review, and then implement and evaluate at least two different algorithms across different scenarios using selected performance metrics.
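As a minimal sketch of the market-based idea, a greedy sequential single-item auction can allocate tasks by repeatedly awarding the globally cheapest bid, here defined as the marginal travel cost of appending a task to a robot's route; the project's battery, charging and communication constraints are deliberately ignored in this toy.

```python
import numpy as np

def auction_allocate(robot_pos, task_pos):
    """Greedy sequential single-item auction.
    robot_pos: list of robot start positions; task_pos: list of task positions.
    Each round, every robot bids the distance from the end of its current
    route to each unassigned task; the cheapest (robot, task) pair wins.
    Returns one route (list of task indices) per robot."""
    ends = [np.asarray(p, float) for p in robot_pos]
    routes = [[] for _ in robot_pos]
    unassigned = list(range(len(task_pos)))
    while unassigned:
        best = None
        for r, end in enumerate(ends):
            for t in unassigned:
                bid = np.linalg.norm(end - np.asarray(task_pos[t], float))
                if best is None or bid < best[0]:
                    best = (bid, r, t)
        _, r, t = best
        routes[r].append(t)               # award the task
        ends[r] = np.asarray(task_pos[t], float)  # robot's route now ends there
        unassigned.remove(t)
    return routes
```

Richer bid functions (remaining battery, distance to the nearest docking station, task deadlines) plug into the same auction loop, which is what makes the market framework attractive for this project.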

Recommended type of project: semester or master project

Work breakdown: 20% theory, 50% coding, 30% experimentation

Prerequisites: Broad interest in multi-robot systems; good programming skills (Python, C/C++, MATLAB); knowledge of ROS 2 (having taken the DIS course is a plus)

Keywords: Micro Aerial Vehicles, docking stations, task allocation, market-based algorithms, multi-robot system architectures.

Contact: Kağan ErĂŒnsal

References:
[1] Quinton F., Grand C., Lesire C., Market Approaches to the Multi-Robot Task Allocation Problem: a Survey, Journal of Intelligent & Robotic Systems (2023)
[2] Talebpour, Z., Martinoli, A.: Multi-robot coordination in dynamic environments shared with humans. In: IEEE international conference on robotics and automation, Brisbane, Australia (2018)
[3] Gerkey, B.P., Matarić, M.J.: A formal analysis and taxonomy of task allocation in multi-robot systems. Int. J. Robot. Res. 23(9), 939–954 (2004)
[4] Ayorkor Korsah, G., Stentz, A., Bernardine Dias, M.: A comprehensive taxonomy for multi-robot task allocation. Int. J. Robot. Res. 32(12), 1495–1512 (2013)
[5] Bernardine Dias, M., Zlot, R., Kalra, N., Anthony, S.: Market-based multirobot coordination: a survey and analysis. Proc. IEEE 94(7), 1257–1270 (2006)

 

Safe Navigation and Exploration in Environments Modeled as Gaussian Mixture Model

Assigned to: Tomasz NiedziaƂkowski

Image source: https://sites.google.com/stanford.edu/splat-nav [3]

Gaussian Splats [1] have recently received a lot of attention. This technique allows high-quality view generation using a mixture of Gaussians, learned from training images with known camera poses via differentiable rendering. Although initially developed for the computer graphics community, the representation has recently been leveraged by the robotics community for mobile robots. Simultaneous Localization and Mapping (SLAM) [2] and safe navigation [3] with such an environment representation have already been studied, and previous work using coarser mixtures of Gaussians exists as well [4,5]. Various navigation strategies for such representations have been explored [3,6,7], with various drawbacks and advantages.

A previous project already studied how to generate safe trajectories in environments represented as a coarse mixture of Gaussians. Various trajectory representations and techniques were considered. The core assumption was that the environment was known and a map could be generated a priori. This project will therefore focus on how exploration techniques can be implemented when using Gaussian mixtures to represent the environment. This includes studying how to incorporate new measurements into the map efficiently (e.g., using [9]) and how to define unexplored regions, as partly covered in [9]. The work will be carried out in simulation in the Webots simulator [8], using realistic drone simulations to demonstrate the capacity of a drone to explore a given environment.
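A basic operation such a representation must support is a safety query. Below is a simplistic sketch based on a Mahalanobis-distance threshold per component; real planners such as [3] instead threshold occupancy probability or use control barrier functions.

```python
import numpy as np

def is_safe(p, means, covs, maha_thresh=3.0):
    """Check a query point against an environment represented as a
    mixture of Gaussians: the point is considered in collision if it
    lies within `maha_thresh` Mahalanobis units of any component.
    means: list of (3,) component means; covs: list of (3, 3) covariances."""
    p = np.asarray(p, float)
    for mu, S in zip(means, covs):
        d = p - mu
        # Squared Mahalanobis distance to this component.
        if d @ np.linalg.solve(S, d) < maha_thresh ** 2:
            return False
    return True
```

For exploration, the complementary query matters: regions where no component assigns significant density are candidates for "unexplored" frontiers, which is exactly the definition this project would need to make precise.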

Recommended type of project: semester project / master project
Work breakdown: 50% theory, 30% coding, 20% simulation
Prerequisites: Broad interest in robotics; excellent programming skills (Python, C/C++); good knowledge in ROS, git.
Keywords: Micro Aerial Vehicles, simulation, Gaussian Splats, Gaussian Mixture Models, Mixture of Gaussians, navigation, exploration.
Contact: Lucas WĂ€lti

References:

[1] Kerbl, Bernhard, Georgios Kopanas, Thomas Leimkuehler, and George Drettakis. “3D Gaussian Splatting for Real-Time Radiance Field Rendering.” ACM Trans. Graph. 42, no. 4 (July 26, 2023): 139:1-139:14. https://doi.org/10.1145/3592433.

[2] Keetha, Nikhil, Jay Karhade, Krishna Murthy Jatavallabhula, Gengshan Yang, Sebastian Scherer, Deva Ramanan, and Jonathon Luiten. “SplaTAM: Splat, Track & Map 3D Gaussians for Dense RGB-D SLAM.” arXiv, April 16, 2024. http://arxiv.org/abs/2312.02126.

[3] Chen, Timothy, Ola Shorinwa, Joseph Bruno, Javier Yu, Weijia Zeng, Keiko Nagami, Philip Dames, and Mac Schwager. “Splat-Nav: Safe Real-Time Robot Navigation in Gaussian Splatting Maps.” arXiv, April 26, 2024. http://arxiv.org/abs/2403.02751.

[4] Tabib, Wennie, Kshitij Goel, John Yao, Mosam Dabhi, Curtis Boirum, and Nathan Michael. “Real-Time Information-Theoretic Exploration with Gaussian Mixture Model Maps.” In Robotics: Science and Systems XV. Robotics: Science and Systems Foundation, 2019. https://doi.org/10.15607/RSS.2019.XV.061.

[5] Tabib, Wennie, Kshitij Goel, John Yao, Curtis Boirum, and Nathan Michael. “Autonomous Cave Surveying With an Aerial Robot.” IEEE Transactions on Robotics 38, no. 2 (April 2022): 1016–32. https://doi.org/10.1109/TRO.2021.3104459.

[6] Corah, Micah, Cormac O’Meadhra, Kshitij Goel, and Nathan Michael. “Communication-Efficient Planning and Mapping for Multi-Robot Exploration in Large Environments.” IEEE Robotics and Automation Letters 4, no. 2 (April 2019): 1715–21. https://doi.org/10.1109/LRA.2019.2897368.

[7] Chen, Timothy, Aiden Swann, Javier Yu, Ola Shorinwa, Riku Murai, Monroe Kennedy III, and Mac Schwager. “SAFER-Splat: A Control Barrier Function for Safe Navigation with Online Gaussian Splatting Maps.” arXiv, September 15, 2024. http://arxiv.org/abs/2409.09868.

[8] “Webots: Robot Simulator.” Accessed October 6, 2022. https://cyberbotics.com/.

[9] Goel, Kshitij, and Wennie Tabib. “GIRA: Gaussian Mixture Models for Inference and Robot Autonomy.” In 2024 IEEE International Conference on Robotics and Automation (ICRA), 6212–18, 2024. https://doi.org/10.1109/ICRA57147.2024.10611216.

 

Alternatives to the Expectation-Maximization algorithm for Gaussian Mixture Model optimization

Assigned to: Ramon Heeb


EM algorithm illustration (source)


Illustration from [3].

In the context of autonomous robot navigation with depth cameras or LiDARs, representations that rely on a fixed discretization of space (e.g., octomaps, voxel grids) are extensively used. While discrete space representations are fast and efficient for lookups, collision checks and updates, they scale poorly and require significant memory and bandwidth to be shared across multiple robots. GMMs are flexible geometric descriptors that require a minimal number of parameters to describe complex environments, as opposed to fixed-resolution alternatives. However, GMMs are usually fitted with the Expectation-Maximization (EM) algorithm, where the number of components must be given a priori. Criteria have been defined to help with that choice, including the Bayesian Information Criterion (BIC), the Akaike Information Criterion (AIC), and the elbow criterion, where the EM algorithm is run for several numbers of components. No criterion is perfect, especially in the context of 3D mapping, where the point distributions are non-Gaussian. See for instance [1, 2, 3] for implementations of the EM algorithm.

This project will therefore study alternative methods for selecting the number of components, as well as alternatives to the EM algorithm itself: the point clouds measured by a robot must be efficiently converted into a Gaussian mixture, and deep learning approaches will likely offer suitable tools for this. Alternatively, one could retain the more common voxel grid representation and investigate compression approaches to reduce the bandwidth needed to share map information between robots.
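The BIC-based selection mentioned above can be sketched with a minimal 1-D EM; the quantile initialization and variance floor are arbitrary choices made for this toy, not recommendations.

```python
import numpy as np

def em_gmm_1d(x, k, iters=200):
    """Minimal 1-D EM for a Gaussian mixture (quantile initialization).
    Returns (weights, means, variances, log-likelihood)."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var() + 1e-6)
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        pdf = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances.
        n = r.sum(axis=0) + 1e-9
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
    ll = np.log(pdf.sum(axis=1)).sum()
    return w, mu, var, ll

def select_k_bic(x, k_max=5):
    """Pick the number of components minimizing the BIC
    (3k - 1 free parameters for a 1-D mixture with k components)."""
    bics = []
    for k in range(1, k_max + 1):
        *_, ll = em_gmm_1d(x, k)
        bics.append((3 * k - 1) * np.log(len(x)) - 2 * ll)
    return int(np.argmin(bics)) + 1
```

Running every candidate k to convergence is exactly the cost this project wants to avoid in 3-D; learned alternatives would predict the mixture (or at least k) in a single forward pass.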

Recommended type of project: semester project / master project
Work breakdown: 50% theory, 30% coding, 20% simulation
Prerequisites: Broad interest in robotics; excellent programming skills (Python, C/C++); good knowledge in ROS, git.
Keywords: Micro Aerial Vehicles, simulation, Gaussian Mixture Models, Mixture of Gaussians, navigation, point cloud.
Contact: Lucas WĂ€lti

References:

[1] Goel, Kshitij, and Wennie Tabib. “GIRA: Gaussian Mixture Models for Inference and Robot Autonomy.” In 2024 IEEE International Conference on Robotics and Automation (ICRA), 6212–18, 2024. https://doi.org/10.1109/ICRA57147.2024.10611216.

[2] Dong, Haolin, Jincheng Yu, Yuanfan Xu, Zhilin Xu, Zhaoyang Shen, Jiahao Tang, Yuan Shen, and Yu Wang. “MR-GMMapping: Communication Efficient Multi-Robot Mapping System via Gaussian Mixture Model.” IEEE Robotics and Automation Letters 7, no. 2 (April 2022): 3294–3301. https://doi.org/10.1109/LRA.2022.3145059.

[3] Tabib, Wennie, and Nathan Michael. “Simultaneous Localization and Mapping of Subterranean Voids with Gaussian Mixture Models.” In Field and Service Robotics, edited by Genya Ishigami and Kazuya Yoshida, 173–87. Singapore: Springer, 2021. https://doi.org/10.1007/978-981-15-9460-1_13.

[4] “Webots: Robot Simulator.” Accessed October 6, 2022. https://cyberbotics.com/.

 

Uncooled Thermal Cameras for Gas Detection

Assigned to: Mikaël Joël Michel Schaer

Thermal cameras are often used in the inspection and detection of gas leaks. The gases present in industrial leaks mainly exhibit absorption peaks at shorter wavelengths. However, thermal cameras for this use case are based on photodetectors, which generally require cooling to increase sensitivity [1]; the added weight makes them unsuitable for deployment on smaller robots, such as micro aerial vehicles. The absorption spectra of several gases encountered in industrial leaks, however, extend into longer wavelengths as well [2]. As such, it should be possible to detect gases with uncooled (and therefore lighter) thermal cameras, albeit with decreased sensitivity.

In this project, we aim to test whether uncooled cameras can be used to detect gases. The student will gather real footage of gases with thermal cameras and apply a combination of filtering, signal processing and computer vision methods to detect them. If possible, we will also experiment with a cooled thermal camera and compare the results.
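A first-cut detection baseline, under the assumption of a static camera and background, is per-pixel temporal background subtraction; this is illustrative only, and a real pipeline would add spatial filtering and plume tracking.

```python
import numpy as np

def gas_mask(frames, k=3.0):
    """Flag plume candidates in the last frame of a thermal sequence:
    pixels deviating more than k temporal standard deviations from
    their temporal median over the preceding frames are marked.
    frames: sequence of (H, W) arrays; returns a boolean (H, W) mask."""
    frames = np.asarray(frames, float)
    med = np.median(frames[:-1], axis=0)        # per-pixel background model
    std = frames[:-1].std(axis=0) + 1e-6        # per-pixel noise level
    return np.abs(frames[-1] - med) > k * std
```

With the low signal-to-noise ratio expected from an uncooled camera, the project would likely need longer temporal windows, spatial coherence checks, and knowledge of the gas absorption band to suppress false positives.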

Recommended type of project: semester project

Work breakdown: 20% theory, 50% real world experiments, 30% coding

Prerequisites: An interest in computer vision, optics, signal processing. Good background in Python. 

Keywords: Thermal cameras, gas detection, computer vision

Contact: Alexander Wallén Kiessling

References:

[1] Vollmer, Michael, and Klaus-Peter Möllmann. Infrared thermal imaging: fundamentals, research and applications. John Wiley & Sons, 2018.

[2] Meribout, Mahmoud. “Gas leak-detection and measurement systems: Prospects and future trends.” IEEE Transactions on Instrumentation and Measurement 70 (2021): 1-13.

 

Visual Inertial SLAM for aerial vehicles

Assigned to: Sameh Lahouar


Simultaneous Localization and Mapping (SLAM) is used to localize mobile robots in areas with no GNSS coverage. This is particularly useful underground or indoors, and many sensor modalities can be used for SLAM. For micro aerial vehicles, cameras have become the de-facto choice for localization due to their low weight and low cost. Over the course of time, however, a multitude of methods have become available for visual SLAM [1].

In this project, the primary aim is to understand the key differences between several popular visual SLAM methods and to benchmark them against each other for use on micro aerial vehicles. The student will be provided with real-world datasets collected on aerial vehicles, but will also need to benchmark in high-fidelity simulation. For motivated students or a master project, it is also of high interest to incorporate multi-camera methods, a relatively new and active field of research [2].
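Benchmarking typically relies on the Absolute Trajectory Error after rigid alignment of the estimated trajectory to ground truth; a minimal sketch using the Kabsch algorithm:

```python
import numpy as np

def ate_rmse(est, gt):
    """Absolute Trajectory Error (RMSE) after rigid (rotation +
    translation) alignment of the estimate to ground truth.
    est, gt: (N, 3) arrays of corresponding camera positions."""
    est_c = est - est.mean(0)
    gt_c = gt - gt.mean(0)
    # Kabsch: rotation minimizing ||R @ est_c_i - gt_c_i||.
    U, _, Vt = np.linalg.svd(est_c.T @ gt_c)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
    R = (U @ S @ Vt).T
    aligned = est_c @ R.T + gt.mean(0)
    return np.sqrt(((aligned - gt) ** 2).sum(1).mean())
```

Monocular methods additionally require a similarity (scale-including) alignment, and relative-pose error over fixed time deltas is usually reported alongside the ATE.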

Recommended type of project: semester project / master project

Work breakdown: 20% theory, 80% coding

Prerequisites: An interest in visual SLAM, computer vision. Strong background in Python and C/C++. General background from mobile/aerial robotics.

Keywords: SLAM, aerial vehicles, computer vision.

Contact: Alexander Wallén Kiessling

References:

[1] ServiĂšres, Myriam, et al. “Visual and Visual‐Inertial SLAM: State of the Art, Classification, and Experimental Benchmarking.” Journal of Sensors 2021.1 (2021): 2054828.

[2] A. J. Yang, C. Cui, I. A. Bârsan, R. Urtasun and S. Wang, “Asynchronous Multi-View SLAM,” 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 2021, pp. 5669-5676, doi: 10.1109/ICRA48506.2021.9561481.

 

Gas Source Localization using Neural Networks

  

The goal of Gas Source Localization (GSL) is to find gas leaks as efficiently as possible. This is important in emergency situations involving toxic gases or to reduce the environmental impact of industrial gas leaks. Gas sources can be localized using static sensor networks or mobile robots. Many state-of-the-art methods rely on probabilistic algorithms that estimate the likelihood of the source position, e.g., Source Term Estimation (STE) [1, 2]. However, most of these algorithms assume a simplified model of the gas plume/distribution, even though gas dispersion is a very complicated fluid-dynamics phenomenon; this assumption typically fails in cluttered environments. In this project, we aim to estimate the position of the gas source directly using an Artificial Neural Network (ANN).
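
To make the probabilistic idea behind STE concrete, here is a minimal, hypothetical sketch: a grid of candidate source positions is updated with Bayes’ rule, using a toy isotropic forward model in place of a real advection-diffusion plume model. Function names and parameters are illustrative and do not reproduce the methods of [1, 2].

```python
import numpy as np

def predicted_concentration(src, sensor, q=1.0, sigma=1.0):
    """Toy forward model: expected concentration at `sensor` for a source at
    `src` with release rate q. A real STE method would use an
    advection-diffusion plume model here instead."""
    d2 = ((sensor - src) ** 2).sum(axis=-1)
    return q * np.exp(-d2 / (2 * sigma ** 2))

def update_posterior(prior, candidates, sensor_pos, measured, noise_std=0.05):
    """One Bayesian update over a grid of candidate source positions,
    assuming Gaussian measurement noise."""
    pred = predicted_concentration(candidates, sensor_pos)
    lik = np.exp(-0.5 * ((measured - pred) / noise_std) ** 2)
    post = prior * lik
    return post / post.sum()
```

After a few measurements taken at different locations, the posterior concentrates on candidates consistent with all readings; the navigation question is then where to measure next so that this posterior sharpens fastest.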

We want to train an ANN that takes gas maps as input and predicts the location of gas sources as output. Previous methods first explore the entire environment and then predict the location of the gas source [3, 4]. In contrast, we consider a mobile robot that takes continuous measurements and relies on intermediate predictions; the model should therefore work with incomplete input maps, i.e., before the robot has explored the whole environment. This project builds on a previous semester project that used a convolutional neural network for GSL. Since the governing physical equations of gas dispersion are Partial Differential Equations (PDEs), we would like to explore more advanced model architectures specialized for PDEs, e.g., Fourier Neural Operators [5] or Physics-Informed Neural Networks [6]. Furthermore, the work should be extended from 2D to 3D environments.
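
For intuition, the core building block of a Fourier Neural Operator is a spectral convolution: transform the field with an FFT, multiply the lowest Fourier modes by learned complex weights, and transform back. Below is a minimal NumPy sketch of that single operation (with random placeholder weights; a real FNO learns them and stacks several such layers with pointwise nonlinearities):

```python
import numpy as np

def spectral_conv2d(field, weights, modes):
    """One FNO-style spectral convolution on a 2-D scalar field.
    `weights` has shape (2, modes, modes): one complex filter for the
    positive-frequency rows and one for the negative-frequency rows."""
    f_hat = np.fft.rfft2(field)                    # complex spectrum, (H, W//2+1)
    out_hat = np.zeros_like(f_hat)
    out_hat[:modes, :modes] = f_hat[:modes, :modes] * weights[0]
    out_hat[-modes:, :modes] = f_hat[-modes:, :modes] * weights[1]
    return np.fft.irfft2(out_hat, s=field.shape)   # back to a real field
```

Because only a fixed number of low-frequency modes is kept, the layer is resolution-independent and acts globally on the field – properties that make it attractive for PDE-governed quantities such as gas concentration maps.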

Recommended type of project: master project / semester project

Work breakdown: 20% theory, 60% coding, 20% simulation

Prerequisites: An interest in machine learning and robotic systems. Strong background in Python and PyTorch. Previous experience in training neural networks.

Keywords: Neural Network, Gas Source Localization

Contact: Nicolaj Schmid, [email protected]

References:

[1] M. Hutchinson, H. Oh, and W.-H. Chen, “A review of source term estimation methods for atmospheric dispersion events using static or mobile sensors,” Information Fusion, vol. 36, pp. 130–148, Jul. 2017.

[2] W. Jin, F. Rahbar, C. Ercolani, and A. Martinoli, “Towards Efficient Gas Leak Detection in Built Environments: Data-Driven Plume Modeling for Gas Sensing Robots,” in 2023 IEEE International Conference on Robotics and Automation (ICRA), London, United Kingdom: IEEE, May 2023, pp. 7749–7755.

[3] A. S. A. Yeon, A. Zakaria, S. M. M. S. Zakaria, R. Visvanathan, K. Kamarudin, and L. M. Kamarudin, “Gas Source Localization via Mobile Robot with Gas Distribution Mapping and Deep Neural Network,” in 2022 2nd International Conference on Electronic and Electrical Engineering and Intelligent System (ICE3IS), Yogyakarta, Indonesia: IEEE, Nov. 2022, pp. 120–124.

[4] Z. H. M. Juffry et al., “Deep Neural Network for Localizing Gas Source Based on Gas Distribution Map,” in Proceedings of the 6th International Conference on Electrical, Control and Computer Engineering, Lecture Notes in Electrical Engineering, vol. 842, Singapore: Springer, 2022, pp. 1105–1115.

[5] Z. Li et al., “Fourier Neural Operator for Parametric Partial Differential Equations,” 2020, arXiv.

[6] S. Cai, Z. Mao, Z. Wang, M. Yin, and G. E. Karniadakis, “Physics-informed neural networks (PINNs) for fluid mechanics: a review,” Acta Mech. Sin., vol. 37, no. 12, pp. 1727–1738, Dec. 2021.

 

Reinforcement Learning for Gas Source Localization

Assigned to: Badil Mujovi

  

The goal of Gas Source Localization (GSL) is to locate gas leaks as efficiently as possible. This is important in emergency situations involving toxic gases or to reduce the environmental impact of industrial gas leaks. In mobile robotics, many GSL techniques have been developed that estimate the source position based on gas measurements. Navigation is a crucial part of such an algorithm because it determines where the measurements are taken, i.e., how much time the robot spends exploring a particular part of the environment. There are a variety of different navigation techniques in the literature: from bio-inspired reactive approaches [1] to information-driven ones [2]. In recent years, reinforcement learning [3-5] has become a compelling alternative due to its potential to navigate complex environments.

In previous work, we showed that gas sources can be localized efficiently by extracting certain features from a gas map (e.g., the location of highest gas variance). However, the robot moved along a fixed scanning trajectory and did not use an adaptive navigation method. In this project, we want to complement the GSL algorithm with reinforcement learning (DQN, DDPG, …) to explore the environment more efficiently.

Recommended type of project: master project / semester project

Work breakdown: 20% theory, 50% coding, 30% simulation

Prerequisites: An interest in machine learning and robotic systems. Strong background in Python and PyTorch. Previous experience in reinforcement learning.

Keywords: Reinforcement Learning, Gas Source Localization

Contact: Nicolaj Schmid, [email protected]

References:

[1] A. Francis, S. Li, C. Griffiths, and J. Sienz, “Gas source localization and mapping with mobile robots: A review,” Journal of Field Robotics, vol. 39, no. 8, pp. 1341–1373, Dec. 2022.

[2] M. Vergassola, E. Villermaux, and B. I. Shraiman, “‘Infotaxis’ as a strategy for searching without gradients,” Nature, vol. 445, no. 7126, pp. 406–409, Jan. 2007.

[3] Y. Shi, M. Wen, Q. Zhang, W. Zhang, C. Liu, and W. Liu, “Autonomous Goal Detection and Cessation in Reinforcement Learning: A Case Study on Source Term Estimation”.

[4] Y. Zhao et al., “A deep reinforcement learning based searching method for source localization,” Information Sciences, vol. 588, pp. 67–81, Apr. 2022.

[5] Y. Shi, K. McAreavey, C. Liu, and W. Liu, “Reinforcement Learning for Source Location Estimation: A Multi-Step Approach,” in 2024 IEEE International Conference on Industrial Technology (ICIT), Bristol, United Kingdom: IEEE, Mar. 2024, pp. 1–8.

 

 

