UAVs, Navigation and Sensor Orientation

Navigation comprises two main concepts: positioning and guidance.

  • Positioning is the determination of the position and velocity of a moving object with respect to a known reference
  • Guidance is the planning and maintenance of a course from one location (origin) to another (destination)

TOPO is active in aerial photogrammetry research, both airborne and drone-based. In this context, many challenging problems arise, related to sensor orientation determination, autonomous navigation, sensor fusion, and the design, testing and operation of micro aerial vehicles, both fixed-wing and multi-rotor.

A list of currently available semester/master projects is maintained below:

NO GPS? No Problem!

Visual navigation in a priori known environments

based on Deep Learning algorithms

Pose determination (determination of the position and orientation of the camera) with respect to well-known landmarks or beacon points is a problem that dates back to the late 19th century. The state-of-the-art method consists of recognising corresponding objects (at least three) both in the image and on the terrain. However, this task is not automated, and it is difficult even for a human operator.
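
The classical resection step can be sketched numerically. Below is a minimal Direct Linear Transform (DLT) example in Python/numpy that recovers the camera centre from known 3D landmarks and their image observations; it is an illustrative baseline (needing six or more points), not the exact method the project would use.

```python
import numpy as np

def estimate_pose_dlt(points_3d, points_2d):
    """Estimate the 3x4 camera projection matrix from >= 6 3D-2D
    correspondences via the Direct Linear Transform, then recover
    the camera (projection) centre.

    points_3d : (N, 3) array of known landmark coordinates
    points_2d : (N, 2) array of their image observations (pixels)
    """
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        Xh = np.array([X, Y, Z, 1.0])
        # Each correspondence contributes two rows of A p = 0.
        A.append([*Xh, 0.0, 0.0, 0.0, 0.0, *(-u * Xh)])
        A.append([0.0, 0.0, 0.0, 0.0, *Xh, *(-v * Xh)])
    A = np.asarray(A)
    # The solution is the right singular vector with the smallest
    # singular value (least-squares solution of A p = 0).
    _, _, Vt = np.linalg.svd(A)
    P = Vt[-1].reshape(3, 4)
    # The camera centre is the right null space of P.
    _, _, Vt = np.linalg.svd(P)
    C = Vt[-1]
    return P, C[:3] / C[3]
```

With noise-free synthetic correspondences this recovers the camera centre essentially exactly; with real image measurements it would serve as the initialisation of a rigorous photogrammetric resection.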

The objective of this project is to develop an algorithm that determines the camera pose parameters (location and attitude) of an image with respect to a 3D textured model of the surroundings, e.g. a textured DEM (Digital Elevation Model) if the studied area is simple enough.

Such an algorithm could lead to several applications, such as adjusting the trajectory of space probes landing on Mars [1] (Figure 1), autonomous (GNSS-free) navigation of nano drones [2], and georeferencing historical photographs in order to detect environmental changes [3] (Figure 2).

Figure 1: Example of DTM based Terrain Relative Navigation suggested by NASA-JPL Science Definition Team for Mars 2020 report [1]

Figure 2: SmapShot: Georeferencing historical photos on national textured DEM [3]

Deep learning seems to be a promising approach, since it has shown countless successful applications in recognising, detecting and segmenting well-known topography and visual landmarks (mountain peaks, buildings, vegetation and others).

Two approaches to utilising deep-learning algorithms for navigation and orientation would be studied and compared: the first a fusion between deep learning and classical approaches (1.), and the second based entirely on deep-learning architectures (2.):

  1. In a loosely coupled approach, deep learning could help match features between the image and the known 3D model, which would then be fed into a traditional photogrammetric/computer-vision resection algorithm.
  2. In a tightly coupled approach, the deep-learning algorithm could be trained on a combination of 3D virtual flights and real flights. The virtual flights used for database generation would use an available DTM (Digital Terrain Model) to generate virtual photos of the terrain with known position and attitude. Real flights with photos of known camera pose (position and attitude) would also be available for testing and fine-tuning the deep-learning algorithm. Software such as POV-Ray would be used to generate the training data from the high-quality DTMs; this virtual data can be used together with real flight data for model training.
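
The database-generation step for the virtual flights can be sketched as below: sampling labelled camera poses above a DTM, which a renderer such as POV-Ray would then turn into training images. Function names, angle convention and default values are illustrative assumptions, not the project's fixed design.

```python
import numpy as np

def sample_virtual_poses(dtm, cell_size, n_positions=100,
                         agl=100.0, headings=8, seed=0):
    """Sample labelled camera poses above a DTM for virtual-view
    rendering.

    dtm       : (H, W) array of terrain heights [m]
    cell_size : ground sampling distance of the DTM [m]
    agl       : flying height above ground level [m]
    headings  : evenly spaced yaw angles per position
    Returns a list of (position_xyz, (yaw, pitch, roll) in degrees)
    tuples, i.e. the pose labels for each rendered training image.
    """
    rng = np.random.default_rng(seed)
    h, w = dtm.shape
    rows = rng.integers(0, h, size=n_positions)
    cols = rng.integers(0, w, size=n_positions)
    poses = []
    for r, c in zip(rows, cols):
        # Camera hovers at a constant height above the local terrain.
        pos = np.array([c * cell_size, r * cell_size, dtm[r, c] + agl])
        for k in range(headings):
            yaw = 360.0 * k / headings
            # Nadir-looking camera: pitch -90 deg, varying yaw.
            poses.append((pos, np.array([yaw, -90.0, 0.0])))
    return poses
```

Each (position, attitude) pair doubles as the ground-truth label for the corresponding virtual photo, which is what makes this synthetic database directly usable for supervised training.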

Resources to be provided to the student:

a) Aerial images with known attitude and position from drone flights performed within the same area of the available DTM

b) A high-quality DTM and aligned aerial historical photos that can be used for independent algorithm testing and validation [3]: https://smapshot.heig-vd.ch/map/?imageId=4336

Outcome: to perform a thorough literature review in the area; to identify the most suitable deep neural network architectures for the task; to test the performance of approaches 1. and 2. for camera pose determination.

Recommended type of project: semester or Master project.

Work breakdown: 30% theory, 60% development, 10% experiments

Prerequisites: prior exposure to and interest in deep learning and the Python language; a desire to learn quickly and advance personal skills in the area; an independent and adventurous mindset with curiosity toward navigation, drones and mapping.

Curious? Inspired? Please contact:

Dr Iordan Doytchinov (https://people.epfl.ch/iordan.doytchinov) or Emmanuel Cledat (https://people.epfl.ch/emmanuel.cledat)

References:

[1] J. Mustard et al., “Report of the Mars 2020 Science Definition Team,” 2013.

[2] A. Suleiman, Z. Zhang, L. Carlone, S. Karaman, and V. Sze, “Navion: A 2mW Fully Integrated Real-Time Visual-Inertial Odometry Accelerator for Autonomous Navigation of Nano Drones.”

[3] T. Produit, N. Blanc, S. Composto, J. Ingensand, et al., “Crowdsourcing the georeferencing of historical pictures,” Proceedings of the Free and Open Source Software for Geospatial (FOSS4G) Conference. https://smapshot.heig-vd.ch/map/?imageId=4336

[4] T. Campbell, R. Furfaro, R. Linares, and D. Gaylor, “A deep learning approach for optical autonomous planetary relative terrain navigation.”

SENSE DYNAMICS

Advanced drone navigation and control based on aerodynamic ‘learning and sensation’, via a combination of deep learning, CFD modelling and wind-tunnel testing.

For any vehicle moving in autonomous mode, reliable navigation is crucial. To operate drones in cluttered and challenging environments such as cities, forests or mountainous terrain, their position, attitude and velocity must be known accurately. The lack of GNSS (Global Navigation Satellite System) signals due to obstructions or external interference can cause complete failure of a drone navigation system, which would be unacceptable for any application Beyond Line-Of-Sight (BLOS). Currently, a novel navigation system based on flight dynamics is being developed at TOPO [1]. With a Vehicle Dynamic Model (VDM), drones can navigate through a complex environment in the absence of a GNSS signal. However, wind and gust effects cause drift errors, which may accumulate up to hundreds of metres during a GNSS outage of a few minutes (see [2], [3]).
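
The scale of the problem can be illustrated with a toy calculation: a constant unmodelled wind offsets the ground velocity, and that velocity error integrates into position error over the outage. The 1.5 m/s wind and three-minute outage below are assumed illustrative numbers, not project results.

```python
def drift_during_outage(duration_s, wind_bias_mps, dt=0.1):
    """Position error accumulated when a constant, unmodelled wind
    of `wind_bias_mps` m/s acts during a GNSS outage.

    The VDM can track airspeed well, but an unobserved wind offsets
    the ground velocity, so the position error integrates linearly
    with time (a worst-case, bias-only simplification).
    """
    steps = int(round(duration_s / dt))
    error = 0.0
    for _ in range(steps):
        error += wind_bias_mps * dt  # velocity error integrates
    return error
```

A modest 1.5 m/s unmodelled wind over a three-minute outage already yields roughly 270 m of drift, consistent with the "hundreds of metres" figure quoted above and motivating the direct sensing of aerodynamic effects.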

This current research project aims at drawing inspiration from nature to solve this problem. The development of a novel sensor network and analysis framework – based on Computational Fluid Dynamics (CFD) and Deep Neural Networks – will provide real-time drone “skin sensation” of the wind effects, and the underlying aerodynamic forces will be integrated into the VDM-based navigation.

The student (Master or semester project) would join the research project and could take part in the technical part (instrumentation/wind-tunnel testing) and/or the computational part (deep learning, computational fluid dynamics). The workload can be split according to the specific project arrangement. The available challenges are outlined in more detail below:

  • Experimental
    1. To contribute to the study of measurement uncertainty, sensitivity and practical performance of state-of-the-art air-pressure and strain-gauge sensors, and of beyond-state-of-the-art velocity-vector measurement sensors (heat-flux based). This experimental work would validate the performance of the novel sensors envisaged for integration within the concept of a ‘drone-skin’ distributed sensation system.
    2. To participate in the instrumentation of the wind tunnel and the gathering of experimental data.
    3. To validate predictions of new models (CFD and Deep Learning based) for aerodynamic forces prediction by comparison with wind tunnel data.
  • Modelling
    1. To build a database of CFD simulations (steady-state and transient) using the ANSYS Workbench and Parametric Design Language (APDL).
    2. To carry out statistical sensitivity studies for the CFD simulations, in order to assess their accuracy. This will be performed using the ANSYS Workbench interface and a predefined workflow (“design of experiments” technique).
    3. To develop deep neural regression networks that can relate time series of measurements from aerofoil sensors (sequence 1) to lift and drag fields and vectors (sequence 2), and to study their performance and uncertainty under the supervisor’s guidance.
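
The sequence-to-target regression in the last modelling item can be prototyped with a much simpler baseline before any deep network is trained. The sketch below fits a ridge-regularised linear map from flattened sensor time windows to lift/drag targets; the function names, array shapes and regularisation value are illustrative assumptions, and a deep architecture would replace the linear map in the actual project.

```python
import numpy as np

def fit_window_regressor(X_windows, y, lam=1e-6):
    """Fit a ridge-regularised linear map from flattened sensor
    windows to lift/drag targets - a linear stand-in for the deep
    regression network, useful as a baseline and sanity check.

    X_windows : (N, T, S) array - N windows, T time steps, S sensors
    y         : (N, 2) array - lift and drag per window
    """
    N = X_windows.shape[0]
    X = np.hstack([X_windows.reshape(N, -1), np.ones((N, 1))])  # + bias
    d = X.shape[1]
    # Closed-form ridge solution: (X^T X + lam I) W = X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def predict_lift_drag(W, X_windows):
    """Apply the fitted map to new sensor windows."""
    N = X_windows.shape[0]
    X = np.hstack([X_windows.reshape(N, -1), np.ones((N, 1))])
    return X @ W
```

Comparing the deep network against such a baseline on the same wind-tunnel data is a straightforward way to quantify how much of the sensor-to-force relationship is genuinely nonlinear.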

A general understanding of and interest in the fields of drones, robotics, aerodynamics, CFD modelling and deep learning are good prerequisites for a student joining the project. Knowledge of Matlab, Python (PyTorch), ANSYS, CATIA and practical instrumentation would be highly sought after.

Interested student applicants can benefit from being part of a new and growing beyond-state-of-the-art research project. For more information, please contact:

Dr Iordan Doytchinov

https://people.epfl.ch/iordan.doytchinov

References:

[1] M. Khaghani and J. Skaloud, “Autonomous Vehicle Dynamic Model-Based Navigation for Small UAVs,” Navigation, vol. 63, no. 3, pp. 345–358, Sep. 2016.

[2] M. Khaghani and J. Skaloud, “Assessment of VDM-based autonomous navigation of a UAV under operational conditions,” Rob. Auton. Syst., vol. 106, pp. 152–164, Aug. 2018.

[3] M. Khaghani and J. Skaloud, “Evaluation of Wind Effects on UAV Autonomous Navigation Based on Vehicle Dynamic Model,” Proc. 29th Int. Tech. Meet. Satell. Div. Inst. Navig. (ION GNSS+ 2016), pp. 1432–1440.

Contact/Proposal

If you have an interest in this domain, please don’t hesitate to contact the TOPO staff.