Redundant MEMS-grade IMUs (R-IMU) present an economically and ergonomically viable way to improve navigation performance in classical sensor fusion. However, such a system has never been tested in an aerodynamically constrained sensor-fusion framework. Thanks to the recent in-house development of an R-IMU board (dahu4Nav), there is hope of creating the first sensor-fusion architecture that combines inertial redundancy with the already incorporated aerodynamics. However, numerous challenges on the hardware, firmware, and software fronts remain to be resolved.
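To illustrate the fusion side of inertial redundancy, co-aligned gyro readings from an R-IMU array can be combined with a simple outlier gate before averaging. This is a minimal sketch under assumed conditions, not the dahu4Nav firmware; the function name and the fault-detection threshold are hypothetical:

```python
import statistics

def fuse_redundant_gyro(rates, threshold=0.05):
    """Fuse co-aligned gyro readings (rad/s) from a redundant IMU array:
    gate outliers against the median, then average the surviving sensors.
    The 0.05 rad/s gate is a hypothetical fault-detection threshold."""
    med = statistics.median(rates)
    ok = [r for r in rates if abs(r - med) <= threshold]
    # return the fused rate and the number of sensors flagged as faulty
    return sum(ok) / len(ok), len(rates) - len(ok)
```

Averaging n healthy sensors reduces the white-noise standard deviation by roughly a factor of sqrt(n), which is the basic argument for redundancy with low-cost MEMS sensors.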
Can we teach our drones to ‘feel’ the aerodynamics under their wings? Is it possible to navigate and determine one’s position using only the sensation of aerodynamics? Can we draw inspiration from how nature has solved this challenge? In this project, the student will join an exciting research effort investigating the vehicle-dynamic navigation concept, involving state-of-the-art wind tunnel testing, Computational Fluid Dynamics, and physics-guided machine learning.
Can one find one’s absolute GPS coordinates and view pose using only a single RGB camera, in less than a second? Can you tell exactly where, and from what angle, a photo or video was taken? In this project, the students will join a research team developing machine-learning solutions to these challenges.
The success of a drone mission hinges on the accurate real-time determination of its position, velocity, and attitude, collectively known as the navigation states. For small Unmanned Aerial Vehicles, these are conventionally determined by fusing an Inertial Navigation System (INS) with a Global Navigation Satellite System (GNSS). Very recently, a Vehicle Dynamic Model based navigation system – VDMNav (in post-processing) – has demonstrated improvements over traditional INS-based navigation in 1) attitude determination in normal flying conditions and 2) position during GNSS outages. The student will work with the first real-time prototype in C++ and improve its performance and modularity through different work packages.
This project is a collaboration between the TOPO laboratory and Fastree3D, an EPFL/TU Delft spin-off based in both Lausanne and Delft. Fastree3D leverages more than 20 years of academic research in CMOS image sensors and single-photon detectors, focusing on intelligent 3D vision systems for the automotive market. Their product is a flash LiDAR (also called a time-of-flight camera, or range camera) that measures a distance for each pixel based on time-of-flight technology. This project focuses on (1) defining the spatial resolution of the Fastree3D LiDAR, i.e., its Ground Sampling Distance (GSD) and working distance (the distance at which the sensor output is meaningful), and (2) creating a controlled calibration field and defining the limits of the Fastree3D LiDAR.
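As context for objective (1), a first-order GSD estimate for a flash LiDAR follows from its field of view, pixel count, and working distance. The function below is a geometric sketch; the example parameters are illustrative assumptions, not Fastree3D specifications:

```python
import math

def ground_sampling_distance(distance_m, fov_deg, n_pixels):
    """Approximate per-pixel ground footprint (GSD, in metres) of a flash
    LiDAR at a given working distance, assuming uniform angular sampling
    across the field of view along one axis of the detector array."""
    ifov = math.radians(fov_deg) / n_pixels   # instantaneous FOV per pixel
    return 2.0 * distance_m * math.tan(ifov / 2.0)

# e.g. a hypothetical 32-pixel row covering a 30 deg FOV, target at 10 m
gsd = ground_sampling_distance(10.0, 30.0, 32)   # roughly 0.16 m per pixel
```

Since the footprint grows linearly with distance, measuring GSD at several distances in the calibration field directly constrains the useful working range of the sensor.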
Within this project, the student will implement a system to geo-reference (i.e., assign precise geographic coordinates to) objects such as vehicles and buildings detected in 360-degree street-view images, following a recently proposed approach. This approach has been successfully employed to automatically build a catalog of all the trees in a city from Google Street View images. The student will learn the methods from the original authors and attempt to reproduce the results, targeting specific classes of objects and a unique dataset in Africa.
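The core geo-referencing step can be illustrated by intersecting bearing rays from two camera positions toward the same detected object. This is a simplified planar sketch in local east/north metres (hypothetical function, not the published pipeline, which also handles depth and multi-view consensus):

```python
import math

def intersect_bearings(p1, b1_deg, p2, b2_deg):
    """Locate an object seen from two street-view camera positions by
    intersecting the two bearing rays (bearings in degrees from north,
    positions as (east, north) in a local metric frame)."""
    d1 = (math.sin(math.radians(b1_deg)), math.cos(math.radians(b1_deg)))
    d2 = (math.sin(math.radians(b2_deg)), math.cos(math.radians(b2_deg)))
    det = d2[0] * d1[1] - d1[0] * d2[1]
    if abs(det) < 1e-12:
        return None                      # rays (near-)parallel: no stable fix
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (d2[0] * dy - d2[1] * dx) / det  # distance along ray 1 (Cramer's rule)
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])
```

For example, an object bearing 45° from a camera at the origin and 315° from a camera 20 m to the east intersects at about (10, 10) m.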
Terrestrial mobile mapping has been gaining popularity in recent years. One of the greatest challenges in mapping is knowing the exact location of the vehicle when the pictures are taken. However, buildings and vegetation can create situations where the GNSS signal is denied, which leads to drift in the estimated position. In this project, you will explore the introduction of a new measurement, derived from the dynamic model of the vehicle, in order to reduce this drift.
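One classical example of such a dynamics-derived measurement for a ground vehicle is the non-holonomic constraint: a wheeled vehicle that does not slip or jump has (near-)zero lateral and vertical velocity in its body frame. The sketch below is a hypothetical illustration of that residual, not the project's actual model:

```python
def nonholonomic_residual(v_nav, R_nb):
    """Pseudo-measurement residual for a ground vehicle: the velocity
    expressed in the body frame should have near-zero lateral and vertical
    components. v_nav is the navigation-frame velocity [m/s]; R_nb is the
    3x3 body-to-navigation rotation matrix (list of rows)."""
    # body-frame velocity: v_body = R_nb^T * v_nav
    v_body = [sum(R_nb[j][i] * v_nav[j] for j in range(3)) for i in range(3)]
    return v_body[1], v_body[2]   # lateral and vertical residuals
```

Feeding these residuals to a filter as zero-valued measurements (with a small noise reflecting tyre slip and suspension motion) bounds the drift of the dead-reckoned position during GNSS outages.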
The goal of this semester project is to quantify the texture quality of different materials. Several quality estimators could be studied, for example the number of tie-points and the precision with which these tie-points are determined in the image. The outcome of this project will make it possible to determine the precision of the final 3D model.
The purpose of this internship is to help develop a full water/ice classification algorithm using both NILI and additional image processing.
The proposed project aims to demonstrate the feasibility of, and develop the necessary algorithmic workflow for, sub-decimeter localization of terrestrial objects using visual data from a micro-UAV-mounted panoptic camera.
This semester project aims to explore methods for processing laser data and its derivatives that are useful for the interpretation of forms.
The combination of Global Navigation Satellite Systems (GNSS) with Inertial Navigation Systems (INS) has been widely used to improve the accuracy of the navigation solution for many transport systems. However, for safety-of-life applications and for future autonomous cars and UAVs, there are other important requirements in terms of integrity, availability, and continuity. This task has already been addressed for civil aviation by integrity-based systems such as SBAS, GBAS, or RAIM.
A GNSS receiver is normally placed vertically above a point of interest, and the measurements are reduced to this point using the known height of the receiver’s antenna. Newer receivers employ sensors that can measure the tilt (from the horizontal plane). Knowing the receiver’s tilt and the slant distance to the point makes it possible to correct for situations where the antenna is not centered perfectly above the point. The goal of the project is to determine the accuracy of the tilt compensation implemented by the receiver. The student shall propose the methodology and employ additional sensor(s)/observations to provide reference values. The results are then cross-checked against known points of interest already established around and outside EPFL.
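The geometry of the correction can be sketched as follows: given the pole length, the tilt (here expressed from the vertical for convenience), and the azimuth of the tilt direction, the ground point is recovered from the antenna phase-centre position. This is an illustrative model only; receiver-internal implementations differ per vendor:

```python
import math

def tilt_compensate(antenna_enu, pole_len_m, tilt_deg, tilt_azimuth_deg):
    """Reduce an antenna-phase-centre position (east, north, up, in metres)
    to the ground point at the pole tip, given the pole tilt from the
    vertical and the azimuth of the tilt direction (degrees from north)."""
    t = math.radians(tilt_deg)
    a = math.radians(tilt_azimuth_deg)
    e, n, u = antenna_enu
    horiz = pole_len_m * math.sin(t)       # horizontal offset of the antenna
    return (e - horiz * math.sin(a),       # remove east component of offset
            n - horiz * math.cos(a),       # remove north component of offset
            u - pole_len_m * math.cos(t))  # vertical drop along the pole
```

With zero tilt this reduces to the classical vertical antenna-height reduction; comparing such corrected coordinates against independently surveyed points is one way to assess the receiver's built-in compensation.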