Aerial 3D Vision: Improving Point Cloud Accuracy via Correspondence Matching with Deep Learning

Keywords:

LiDAR, 3D Correspondences, Deep Learning, Transformer Architecture, Bayesian Architecture, Point Cloud, Remote Sensing

Introduction:

Aerial Laser Scanning using LiDAR sensors plays a crucial role in remote sensing, enabling accurate 3D measurements of the environment at large scale. However, LiDAR point clouds are subject to various sources of error and noise, which can degrade the accuracy of subsequent analyses and applications. This project aims to improve a methodology that leverages deep learning to enhance LiDAR point cloud accuracy by establishing 3D correspondences, i.e., finding recognizable points scanned multiple times during a flight.

By improving the accuracy of LiDAR point clouds, a wide range of remote sensing tasks can benefit, including terrain mapping, object recognition, change detection, and environmental monitoring. The project focuses on developing deep learning models based on Transformer or Bayesian architectures that capture the complex spatial relationships and dependencies within the point cloud data and refine the correspondences between 3D points, ultimately improving the quality and reliability of the obtained matches.
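The core idea of correspondence matching can be illustrated with a minimal NumPy sketch: given learned per-point descriptors from two overlapping flight strips, attention-style similarity scores are turned into a soft assignment between the two point sets. This is only an illustration under simplifying assumptions (cosine similarity stands in for learned attention logits; `match_points` and all shapes are hypothetical), not the project's actual model.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def match_points(feat_a, feat_b, temperature=0.1):
    """Soft correspondence matrix between two descriptor sets.

    feat_a: (N, D) descriptors of points seen in strip A
    feat_b: (M, D) descriptors of points seen in strip B
    Returns an (N, M) matrix whose rows softly assign each point
    in A to the points in B.
    """
    # Cosine-similarity scores as a stand-in for learned attention logits.
    a = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
    b = feat_b / np.linalg.norm(feat_b, axis=1, keepdims=True)
    scores = a @ b.T / temperature
    return softmax(scores, axis=1)

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 32))
# Strip B re-observes the same points with small descriptor noise.
noisy = feats + 0.05 * rng.normal(size=feats.shape)
P = match_points(feats, noisy)
matches = P.argmax(axis=1)  # hard matches recovered from the soft assignment
```

In a learned model the descriptors themselves would come from a network; here the sketch only shows how soft assignments relate descriptors from two scans of the same scene.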

Challenges:

Several challenges are associated with LiDAR-based 3D correspondences, including:

  1. Sparse and noisy data: LiDAR point clouds often suffer from sparsity and noise, which can introduce inaccuracies when establishing correspondences between points.

  2. Occlusions and overlapping objects: Occlusions and overlapping objects in the scene can hinder the accurate matching of corresponding points, leading to erroneous correspondences.

  3. Computational efficiency: Deep learning methods, particularly those based on Transformer and Bayesian architectures, can be computationally expensive. Finding a balance between accuracy and efficiency is essential for practical deployment in real-world scenarios.
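The efficiency concern in point 3 comes largely from full self-attention scaling quadratically with the number of points. One common mitigation (sketched below in NumPy as an assumption, not a prescribed design for this project) is to restrict attention to each point's k nearest neighbours, reducing the cost from O(N²·D) to O(N·k·D).

```python
import numpy as np

def knn_indices(points, k):
    # Pairwise squared distances, then the k nearest neighbours per point.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, :k]

def local_attention(points, feats, k=8):
    """Attention restricted to each point's k nearest neighbours.

    Scores and aggregates features over k neighbours instead of all N
    points, which is the main lever on the O(N^2) attention cost.
    """
    idx = knn_indices(points, k)            # (N, k) neighbour indices
    q = feats                               # queries, (N, D)
    kv = feats[idx]                         # neighbour keys/values, (N, k, D)
    scores = np.einsum('nd,nkd->nk', q, kv) / np.sqrt(feats.shape[1])
    scores -= scores.max(axis=1, keepdims=True)
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)       # softmax over the k neighbours
    return np.einsum('nk,nkd->nd', w, kv)   # (N, D) updated features

rng = np.random.default_rng(1)
pts = rng.uniform(size=(200, 3))
f = rng.normal(size=(200, 16))
out = local_attention(pts, f, k=8)
```

The brute-force neighbour search here is itself O(N²); a real implementation would use a spatial index (e.g. a k-d tree), but the attention pattern is the point of the sketch.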

Objectives:

The main objectives of this project are the following:

  1. Adaptation of the point cloud representation to the needs of the Transformer or Bayesian models

  2. Implementation and evaluation of the chosen algorithm(s) on provided aerial LiDAR point cloud datasets to assess the improvements in accuracy and reliability

  3. Evaluation of several training strategies and variants compared to the available model

  4. Other contributions that motivate the student are very welcome!
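Objective 1 (adapting the point cloud representation to Transformer-style models) typically means converting an unordered cloud into a fixed set of "tokens". A common recipe, shown here as a hedged NumPy sketch (function names and parameters are illustrative, not the project's chosen pipeline), is farthest-point sampling to pick well-spread centres, followed by kNN grouping into local patches expressed relative to their centre.

```python
import numpy as np

def farthest_point_sampling(points, n_centers, seed=0):
    """Greedy farthest-point sampling: pick well-spread patch centres."""
    rng = np.random.default_rng(seed)
    n = len(points)
    centers = [int(rng.integers(n))]
    d = np.full(n, np.inf)
    for _ in range(n_centers - 1):
        # Distance of every point to its closest already-chosen centre.
        d = np.minimum(d, ((points - points[centers[-1]]) ** 2).sum(1))
        centers.append(int(d.argmax()))
    return np.array(centers)

def tokenize(points, n_tokens=16, k=32):
    """Group the cloud into n_tokens local patches of k points each,
    expressed in coordinates relative to their centre (one 'token' each)."""
    centers = farthest_point_sampling(points, n_tokens)
    d2 = ((points[:, None] - points[centers][None]) ** 2).sum(-1)  # (N, T)
    idx = np.argsort(d2, axis=0)[:k].T                             # (T, k)
    return points[idx] - points[centers][:, None, :]               # (T, k, 3)

rng = np.random.default_rng(2)
cloud = rng.uniform(size=(1000, 3))
tokens = tokenize(cloud, n_tokens=16, k=32)
```

Each patch would then be embedded (e.g. by a small point-wise network) before entering the attention layers; the centre-relative coordinates make the tokens translation-invariant.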

Prerequisites:

Candidates interested in this project should possess the following prerequisites:

  1. Proficiency in the Python programming language

  2. Background in computer vision, machine learning, and deep learning techniques

  3. Affiliation with the Data Science, Computer Science, Environmental Sciences and Engineering, Robotics, Microtech, or SysCom Master's programs

Contact:

Interested candidates are requested to send a brief motivation statement and, if available, their CV by email to the following contacts:

Aurelien Brun, Jan Skaloud

References:

  1. L. Jospin et al., 2022, Hands-on Bayesian Neural Networks — a Tutorial for Deep Learning Users

  2. A. Vaswani et al., 2017, Attention Is All You Need

  3. A. Dosovitskiy et al., 2020, An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale

  4. A. Brun et al., 2022, Lidar point–to–point correspondences for rigorous registration of kinematic scanning in dynamic networks