2020

Nicola Gantenbein

About : Hexagon

Description

The internship was carried out at the Hexagon Technology Center in collaboration with Leica Geosystems. The latter is well known for its surveying products and systems, which use a wide variety of technologies including GPS satellite navigation, laser measurements and optical sensors. The project focuses on the correction of IMU signals to support new high-precision systems. The large-scale use of IMUs is relatively new. Thanks to the large quantity of IMU sensors used in mobile devices and autonomous vehicles, as well as the development of small MEMS structures (microelectromechanical systems), the price of IMUs has dropped significantly over the last years. The small size and cheap production come with the drawback of a higher noise level and/or larger biases compared to high-end IMU sensors.

To obtain a position from an accelerometer, a twofold integration in time is necessary, which makes the result highly sensitive to small measurement errors. Without any correction tools, the noise and biases of the accelerometer and gyroscope alone will already cause an error of several meters after a few seconds.
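
As a rough illustration of this sensitivity, the minimal sketch below double-integrates a synthetic accelerometer error signal; the noise level and uncorrected bias are assumed values for illustration, not the project's sensor data:

```python
import numpy as np

# Minimal sketch: integrate a synthetic accelerometer error signal twice
# in time. The noise level and bias below are assumed values for
# illustration, not measured sensor specifications.
rng = np.random.default_rng(0)

fs = 200.0                                   # sampling rate [Hz] (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)           # 10 s of data

noise = rng.normal(0.0, 0.02, t.shape)       # white noise [m/s^2] (assumed)
bias = 0.1                                   # uncorrected bias [m/s^2] (assumed)
accel_error = noise + bias

velocity_error = np.cumsum(accel_error) / fs       # first integration
position_error = np.cumsum(velocity_error) / fs    # second integration

for seconds in (1, 5, 10):
    i = int(seconds * fs) - 1
    print(f"position error after {seconds:2d} s: {position_error[i]:5.2f} m")
```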

One main error source is that the orientation of the sensor must be known precisely in order to subtract the acceleration due to gravity: already a small deviation in orientation will cause a significant drift in the estimated position. A convolutional neural network (CNN) modeling static and time-dependent errors is used to significantly reduce the orientation error. The CNN additionally applies a correction to the accelerometer measurements. The results show that this method has the potential to outperform basic recalibration, which is based on estimating static correction factors.
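
The report does not detail the network architecture, so the following is only a minimal sketch of the general idea, assuming a small 1D CNN (in PyTorch) that maps a window of raw gyroscope and accelerometer samples to corrected samples; the layer sizes and window length are illustrative assumptions:

```python
import torch
import torch.nn as nn

class IMUCorrectionCNN(nn.Module):
    """Sketch of a 1D CNN mapping a window of raw IMU samples
    (3 gyroscope + 3 accelerometer channels) to corrected samples.
    Layer sizes and window length are illustrative assumptions."""

    def __init__(self, channels=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(64, channels, kernel_size=1),   # per-channel correction
        )

    def forward(self, x):            # x: (batch, channels, window)
        return x - self.net(x)       # subtract the predicted error

model = IMUCorrectionCNN()
raw_window = torch.randn(4, 6, 200)  # dummy batch of 1 s IMU windows at 200 Hz
corrected = model(raw_window)
print(corrected.shape)               # torch.Size([4, 6, 200])
```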

Figure 1 : Angular velocities measured by gyroscope (https://en.wikipedia.org/wiki/File:Flight_dynamics_with_text.png)

——————————————————————————————————————

Constantin Lionel

About : Playmaster

Description

During my internship, I was integrated into the team developing PLAYMASTER.GG, a tool that helps esports players improve. The tool consists of a map where players are offered different exercises and a website where they can see in-depth statistics on their performance. It was released publicly during my internship. As an intern I was assigned to a variety of tasks wherever there was a need. To do this I had to learn some game-related coding languages, which allowed me to create some parts of the program. I was also invited to suggest and develop my own improvements and ideas for the tool.

——————————————————————————————————————–

Jiahua Wu

About : L2F

Description

I did my internship at Learn to Forecast (L2F), an AI start-up located at the Innovation Park in Lausanne, from February 17th to August 17th. Its main service is to provide clients with predictive algorithms according to their requirements. Topological data analysis and time series analysis are traditionally the company's specialised fields, and it has successfully developed Giotto-tda, a high-performance topological machine learning toolbox in Python, and Giotto-time, a machine-learning-based time series forecasting toolbox in Python. It has recently started working on democratising AI and has launched a deep learning platform that enables people with little knowledge of machine learning and coding to train and deploy personalized deep learning models.

As a data science intern, my first task was to implement hierarchical time series (HTS) prediction algorithms for the time series team to enrich the open source library Giotto-time. During the first week I went through the relevant materials, and the following week I started implementing the algorithms with scikit-learn and writing documentation for the code. Although the code worked, it was far from perfect; it therefore wasn't immediately integrated into the code base but served as a starting point for further refinement.
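
The interface actually contributed to Giotto-time is not reproduced here; as a minimal sketch of one common HTS strategy (bottom-up reconciliation), the leaf series are forecast independently and then summed along the hierarchy. The toy hierarchy, data and autoregressive model below are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Minimal sketch of bottom-up hierarchical time series forecasting.
# The hierarchy, data and model choice are illustrative assumptions.
rng = np.random.default_rng(0)
hierarchy = {"total": ["region_a", "region_b"]}          # parent -> children
leaves = {name: rng.normal(size=100).cumsum() for name in hierarchy["total"]}

def forecast_leaf(series, lags=5):
    """Fit a simple autoregressive model and predict one step ahead."""
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]
    model = LinearRegression().fit(X, y)
    return float(model.predict(series[-lags:].reshape(1, -1)))

leaf_forecasts = {name: forecast_leaf(s) for name, s in leaves.items()}

# Bottom-up reconciliation: the parent forecast is the sum of its children.
forecasts = dict(leaf_forecasts)
forecasts["total"] = sum(leaf_forecasts[c] for c in hierarchy["total"])
print(forecasts)
```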

Then the plan to build a deep learning platform was announced and I was reallocated to a newly formed task force of 5 people to develop the Python backend for the image classification pipeline. The pipeline covers data validation, pre-processing, model training and result rendering. As a developer involved from the very beginning, I worked on implementing or iterating on all of these steps, applying my knowledge of machine learning and gaining a great deal of coding experience.
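
The platform's actual backend is not public, so the sketch below only mirrors the four stages named above with deliberately simple stand-ins (dummy data, a scikit-learn logistic regression); every name and choice in it is an illustrative assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def validate(images, labels):
    """Reject obviously malformed inputs before training."""
    assert images.ndim == 3, "expected (n_samples, height, width)"
    assert len(images) == len(labels), "images and labels must align"
    return images, labels

def preprocess(images):
    """Scale pixel values to [0, 1] and flatten each image."""
    return images.reshape(len(images), -1) / 255.0

def train(features, labels):
    model = LogisticRegression(max_iter=1000)
    return model.fit(features, labels)

def render_results(model, features, labels):
    return {"accuracy": float(model.score(features, labels))}

# Dummy data standing in for user-uploaded images.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(200, 8, 8))
labels = rng.integers(0, 2, size=200)

images, labels = validate(images, labels)
X = preprocess(images)
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
model = train(X_train, y_train)
print(render_results(model, X_test, y_test))
```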

Shortly after the start of the internship, the pandemic began and, to keep everyone safe, we were required to work from home. This did not affect the operation of the company at all, and we held online beer meetings to share our lives during quarantine, which I enjoyed a lot and where I was encouraged by the optimism of my colleagues.

All in all, it was a pleasant journey and I am grateful to everyone in the company who has contributed to it.

——————————————————————————————————————–

Tianyang Dong

About : AXA

Description

AXA Technology Services Advanced Engineering Lab SA is a lab of AXA located in the Innovation Park near EPFL that focuses on using advanced computer technology to provide better service for customers. My internship was completed in the computer vision group, which explores ways of making use of aerial images with the help of deep CNNs. During the internship, we tried to extract building features such as roof type and number of stories from aerial images. Pretrained neural networks were fine-tuned on a hand-constructed dataset containing hundreds of distinct images per class. For the first feature, roof type, our model achieves a recall of 0.95 per class. Although the second feature is more complex, because the image resolution of buildings varies with their height, we still managed to find a convolutional model with an average recall of 0.8.
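
The exact backbones and hyper-parameters used in the project are not stated, so the following is only a minimal fine-tuning sketch in PyTorch/torchvision: a pretrained ResNet-18 with a frozen backbone and a new classification head, with the number of roof classes and all settings as assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 4                         # e.g. flat, gabled, hipped, other (assumed)

# Load a pretrained backbone and freeze it, then attach a new head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch standing in for aerial image crops.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```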

——————————————————————————————————————–

Joachim Koerfer

About : CSCS

Description

CSCS is the Swiss National Supercomputing Center, which develops and operates high-performance computer systems and is an essential service facility for Swiss researchers. A live performance Python dashboard was developed for debugging and analysing HPX applications. HPX is a C++ library that allows for a new kind of parallel programming : asynchronous task-based programming. The new tool allows users to inspect live data coming from HPX, such as performance counters and task data. A live dashboard allows for quick performance assessment and can be used in workshops to demonstrate the capabilities of the HPX library. The image shown here provides a screenshot of the dashboard, where the user is examining the tasks that have been executed in some application.
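
The real tool consumes live counter and task data streamed from HPX; that interface is not reproduced here. As a minimal sketch of the dashboard pattern itself (using Bokeh, with a simulated counter as a stand-in), a periodic callback polls a data source and streams new points into a plot:

```python
# Minimal sketch of a live dashboard: poll a (simulated) performance
# counter every second and stream the values into a plot.
# Run with: bokeh serve --show dashboard.py
import random
from bokeh.io import curdoc
from bokeh.models import ColumnDataSource
from bokeh.plotting import figure

source = ColumnDataSource(data={"time": [], "tasks": []})
plot = figure(title="Executed tasks over time",
              x_axis_label="time [s]", y_axis_label="tasks")
plot.line(x="time", y="tasks", source=source)

tick = 0

def poll_counters():
    """Stand-in for reading a live HPX performance counter."""
    global tick
    tick += 1
    source.stream({"time": [tick], "tasks": [random.randint(0, 100)]})

curdoc().add_root(plot)
curdoc().add_periodic_callback(poll_counters, 1000)  # poll every second
```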

——————————————————————————————————————–

Costa Georgantas

About : https://invision.ai/#Mission

Description

Invision AI is a Toronto-based computer vision company that specializes in processing information on edge devices rather than sending data directly to the cloud. As the computing power of those devices is limited, a lot of care has to be put into building efficient algorithms for data processing. The main focus of this internship was on improving the quality of the detection and tracking of vehicles and people in video. A deep neural network is used to predict bounding cuboids for each object of interest in the scene; these detections are then processed to track the objects' movement in three dimensions.

As the labelling of images with these cuboids took a lot of time, I started working on generating synthetic data of vehicles by rendering 3D models in Blender. Thousands of images with ground-truth information were automatically generated this way, which saved a lot of time in manual annotation. I then worked on optimizing the detector and tracker in various ways to make them as reliable as possible, and started implementing a new parametrization for people detection. I had the chance to work with a small team of experienced computer scientists who were not only extremely skilled but also willing to take the time to teach me. I learned a lot about coding in a team environment and gained some valuable work experience.
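
The project's rendering pipeline is not described in detail; the sketch below only illustrates the general idea with Blender's Python API (run inside Blender), assuming a scene that already contains a vehicle object named "car" and a configured camera. The object name, output paths and stored label are illustrative assumptions:

```python
# Minimal sketch: render a vehicle model from random viewpoints and save
# the ground truth that comes for free with synthetic data.
import json
import math
import random
import bpy

scene = bpy.context.scene
car = bpy.data.objects["car"]          # assumed object name
labels = []

for i in range(10):
    # Randomise the vehicle's heading so each render is a new viewpoint.
    yaw = random.uniform(0.0, 2.0 * math.pi)
    car.rotation_euler = (0.0, 0.0, yaw)

    scene.render.filepath = f"//renders/car_{i:04d}.png"
    bpy.ops.render.render(write_still=True)

    labels.append({"image": f"car_{i:04d}.png", "yaw": yaw})

with open(bpy.path.abspath("//renders/labels.json"), "w") as f:
    json.dump(labels, f, indent=2)
```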

——————————————————————————————————————–

Colin Ducommun – Automatic gating of flow cytometry results

About : https://www.bnovate.com

Description

The company bNovate manufactures and sells instruments (called BactoSense) that monitor and control the level of bacteria present in water. These instruments rely on the technique known as flow cytometry. They are used in laboratories for certain experiments, or connected directly to drinking water reservoirs to check the level of bacteria present in the water. bNovate has its offices at the Innovation Park, right next to EPFL. With its rather modest size (around twenty employees, some part-time), the working atmosphere within the company was stimulating.

The first part of my internship was devoted to getting to grips with the different tools used by the company, becoming familiar with the available data, and finding a practical and functional way to visualise it before being able to analyse it. The main problem encountered was to formally identify the sensor noise in the datasets in order to clean them. The absence of validation samples greatly complicated my task. Using classification methods, for example, although an interesting approach that I tried, loses its usefulness when its effectiveness cannot be verified. I therefore focused on a statistical study of the noise present in the different instruments (BactoSense) to determine whether it approximately followed a statistical law (it turns out that this law varies slightly from one instrument to another). The goal of this approach was to be able to automatically clean the data and to refine the analysis of the bacteria populations present in a sample.
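
The distribution actually identified for each instrument is not reported here; the sketch below only illustrates the general approach, assuming a log-normal noise model fitted to a blank (reference) measurement and used to gate events. The synthetic intensities and the chosen law are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic event intensities standing in for cytometer data.
noise = rng.lognormal(mean=1.0, sigma=0.3, size=5000)     # sensor noise events
bacteria = rng.lognormal(mean=3.0, sigma=0.4, size=1000)  # real events
events = np.concatenate([noise, bacteria])

# Fit the noise model on a reference (blank) measurement of the device.
shape, loc, scale = stats.lognorm.fit(noise, floc=0.0)

# Gate: keep only events unlikely to come from the fitted noise law.
threshold = stats.lognorm.ppf(0.999, shape, loc=loc, scale=scale)
kept = events[events > threshold]
print(f"threshold: {threshold:.2f}, events kept: {kept.size} / {events.size}")
```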

——————————————————————————————————————–

Andrew O’Sullivan

About : Entropica Labs

Description

Entropica Labs is a small Singapore-based start-up that carries out research in Quantum Computing. We worked with BMW to see whether Quantum Computing could provide benefits for industrial applications they were interested in. For example, we tackled the ride-hailing (taxi-passenger assignment) problem, where we try to assign taxis as efficiently as possible so that the passengers' waiting time is minimized. Multiple solvers were implemented in order to compare the results on small test instances and on larger instances taken from the New York taxi dataset. The quantum and quantum-inspired solvers were not successful because of the exceedingly large problem sizes and the requirement to obtain a “good” solution within a short execution time. However, a new family of custom classical heuristics was successfully developed and was able to outperform the other solvers over a wide range of scenarios, as seen in the figure below. This was achieved while maintaining the short execution time of some of the simple solvers such as the Hungarian (Munkres) algorithm, which was used when the problem could be cast as a simple assignment problem (SAP).

Caption: Illustrating the quality of the solutions found with various solvers. The quality is measured by taking the waiting time obtained by the global solver and dividing it by the estimated waiting time obtained with a greedy assignment policy; therefore, the lower the ratio, the better the global solver. These tests were carried out over a wide range of scenarios. The P/T ratio per GOTW is the number of passenger requests per taxi during a global optimization window (GOTW); it thus illustrates how well the solvers perform during rush hour or low-demand periods.
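
For the case where the problem reduces to a SAP, the Hungarian algorithm is available off the shelf; the minimal sketch below casts taxi-passenger matching as such an assignment on a random waiting-time matrix (a stand-in for illustration, not data from the New York taxi dataset):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_taxis, n_passengers = 5, 5

# cost[i, j] = estimated waiting time if taxi i picks up passenger j
cost = rng.uniform(1.0, 20.0, size=(n_taxis, n_passengers))

# Hungarian (Munkres) algorithm: minimise the total waiting time.
taxi_idx, passenger_idx = linear_sum_assignment(cost)
total_wait = cost[taxi_idx, passenger_idx].sum()
print(list(zip(taxi_idx, passenger_idx)), f"total waiting time: {total_wait:.1f}")
```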

I really enjoyed working at Entropica Labs in a very international atmosphere where, despite the limitations of Covid-19, I still managed to get an insight into Asian culture and enjoy great food. Listening in on the decision-making process of a small company was totally new and certainly informative. Overall, the internship and living abroad were certainly an enriching experience for me.

——————————————————————————————————————

Thomas Ramseier

About : RUAG, Switzerland

Description

RUAG Aerodynamics is a department of RUAG Switzerland AG located in Emmen and specialized in wind tunnel testing. The facility operates two wind tunnels used predominantly in the aerospace and automotive domains. Other competencies of the department include the design of high-precision balances and computational fluid dynamics (CFD) simulations.

The main task of this internship was to optimize an existing wind tunnel using CFD analysis and comparison with theoretical models and design guidelines for such facilities. To do so, a Lattice-Boltzmann solver was used. After exploring various possible improvements, the design converged towards an improved configuration. This improvement makes it possible to increase the test section velocity significantly with a similar electrical power input.

Figure 1: Velocity cut of the wind tunnel test section using a Large-Eddy simulation software
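
As a back-of-the-envelope way to see why such a gain is possible at constant power, note that the kinetic power of the jet in the test section scales with the cube of the velocity, so at fixed electrical input the velocity grows with the cube root of the circuit's energy ratio. All numbers in the sketch below are illustrative assumptions, not results of the study:

```python
# Sketch: test section velocity as a function of the circuit's energy
# ratio (jet power / input power) at fixed electrical power input.
rho = 1.225          # air density [kg/m^3]
area = 4.0           # test section area [m^2] (assumed)
power_in = 500e3     # electrical power input [W] (assumed)

def test_section_velocity(energy_ratio):
    jet_power = energy_ratio * power_in          # P_jet = 0.5 * rho * A * V^3
    return (2.0 * jet_power / (rho * area)) ** (1.0 / 3.0)

for er in (3.0, 4.0):                            # before / after optimisation (assumed)
    print(f"energy ratio {er:.1f}: {test_section_velocity(er):.1f} m/s")
```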

The remaining part of this internship comprised various tasks, such as the simulation of a high-speed train for validation purposes using OpenFOAM and the design of a mechanical system to improve the accuracy of automotive wind tunnel measurements. Finally, one of the facility's wind tunnels was simulated in order to precisely reproduce different characteristics observed experimentally.

Figure 2: Vortices and detachment point visualisation on the ICE3 high speed train

————————————————————————————————————

Jean Ventura – Implementation and analysis of the Once-for-All algorithm

About : Sony, Germany

Description

During this internship, a procedure for Neural Architecture Search using network subsampling (called Once-for-All) was explored on the MobileNetV2 network with the CIFAR10 dataset. The internship took place at Sony's Artificial Intelligence Laboratory in Stuttgart; the laboratory mainly focuses on Artificial Intelligence along with Speech and Sound Recognition and Detection.

Using the Once-for-All training procedure, we were able to cut the computational and memory resources by more than a third without significantly decreasing the accuracy of the subnetwork. While still a preliminary experiment, the results are promising, and the approach could be a great technique to reduce the training cost when multiple devices with various resource constraints are targeted. Indeed, sampling a subnetwork that satisfies the constraints and has good accuracy would eliminate the need to retrain from scratch.
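
The actual Once-for-All search and predictors are not reproduced here; the sketch below only illustrates the deployment-time idea under stated assumptions, with a toy configuration space and stand-in cost and accuracy estimators:

```python
import random

# Sketch: once a super-network has been trained, sample sub-network
# configurations and keep the one that fits the target device's budget
# with the best (estimated) accuracy, with no retraining from scratch.
DEPTHS = [2, 3, 4]
WIDTHS = [0.5, 0.75, 1.0]
KERNELS = [3, 5, 7]

def sample_subnetwork():
    return {"depth": random.choice(DEPTHS),
            "width": random.choice(WIDTHS),
            "kernel": random.choice(KERNELS)}

def estimated_cost(cfg):            # stand-in for a latency/FLOPs predictor
    return cfg["depth"] * cfg["width"] * cfg["kernel"]

def estimated_accuracy(cfg):        # stand-in for an accuracy predictor
    return 0.85 + 0.01 * cfg["depth"] + 0.05 * cfg["width"]

def search(budget, n_samples=1000):
    best = None
    for _ in range(n_samples):
        cfg = sample_subnetwork()
        if estimated_cost(cfg) <= budget:
            if best is None or estimated_accuracy(cfg) > estimated_accuracy(best):
                best = cfg
    return best

random.seed(0)
print(search(budget=10.0))          # best config found for the assumed budget
```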