Théau Vannier 


InstaDeep is a Franco-Tunisian startup founded in 2014 that specializes in decision-making AI products for the enterprise. It now has offices across the globe, namely in London, Paris, Tunis, Cape Town, and elsewhere. The company's stated goal is to "accelerate the transition to an AI-First World that benefits everyone", to reuse their own words. To me it is a very interesting company, one that aims to apply the most advanced skills and technologies available today to real-life problems (bin-packing, PCB design, BioAI, and more). The company also develops research environments such as Jumanji, a suite of open-source Reinforcement Learning environments written in JAX that provides clean, hardware-accelerated environments for industry-driven research. Finally, they also work on developing new methods and AI tools.

The DeepPCB team, with whom I worked, is working on solving and optimizing PCB design. Nowadays, PCBs (printed circuit boards) are designed manually by engineers. They sometimes use software to help them, but none of it is powerful enough to give a decent result, and the engineers end up doing most of the design work themselves. DeepPCB is trying to solve this problem using reinforcement learning. A PCB is a circuit board with components such as capacitors and USB ports laid out on it. These components carry pins. Some pins must be connected to each other; a connected group of pins is called a net. Typically, a PCB can easily have up to 100 nets to connect.

The PCB problem can be seen as two subproblems:
• The routing: given a set of points (pins) in a 2- or 3-dimensional space, the net each pin belongs to, and some physical constraints (obstacles, minimal distance between wires, etc.), the goal is to connect all the nets while minimizing objectives such as the wire length and the number of vias used.
• The placement: here the goal is to spatially place the components so that the pins are optimally positioned for the routing.
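To make the placement objective concrete, here is a minimal sketch (the pin names, coordinates, and data layout are made up for illustration, not DeepPCB's actual data model) of pins grouped into nets, scored by half-perimeter wirelength (HPWL), a cheap and widely used proxy for routed wire length:

```python
from collections import defaultdict

# Hypothetical data model: (pin name, x, y, net id).
pins = [
    ("U1.1", 0.0, 0.0, "net_a"),
    ("U2.3", 4.0, 1.0, "net_a"),
    ("U1.2", 1.0, 2.0, "net_b"),
    ("U3.1", 1.0, 5.0, "net_b"),
]

def hpwl(pins):
    """Half-perimeter wirelength: for each net, the half-perimeter of the
    bounding box of its pins. A placement with lower HPWL generally leaves
    the router less wire to draw."""
    nets = defaultdict(list)
    for _, x, y, net in pins:
        nets[net].append((x, y))
    total = 0.0
    for pts in nets.values():
        xs = [p[0] for p in pts]
        ys = [p[1] for p in pts]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total
```

A placement optimizer (reinforcement-learning-based or otherwise) can then be seen as moving components to reduce such a score while keeping them from overlapping.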

Ningwei MA


ABB is a global technology company in electrification and automation. The company's solutions connect engineering know-how and software to optimize how things are manufactured, moved, powered and operated. I worked at the corporate research center, which explores new solutions to engineering problems, in the software team, whose aim was to modularize an existing ABB electricity grid protection product. My task was to find or implement an in-memory key-value database to serve as the communication medium among the different modules. I first tried commercial in-memory databases such as Redis and Dragonfly, but they were too slow for our industrial target, so we decided to implement a shared-memory-based hash map ourselves with the help of the Boost library. Once this was done, we integrated it with the code base of the protection software and optimized it, both at the implementation level and algorithmically, by smartly distributing the keys and values over several hash maps to avoid contention. With a fast shared-memory tool in place for each module process, we began adding modules, such as a Python module and a dedicated recording module that monitors changes to the database. In the meantime, we also refined the design of the system to simplify each module's interface. Once the backend worked well, we added visualization tools such as Grafana and used the data captured by the recording module to show how the system changes over time. The project was done mostly in C++.
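The contention-avoidance idea described above can be sketched briefly. The real project used Boost shared memory across processes in C++; this Python toy (all names hypothetical) only illustrates the key-distribution trick: each key is hashed to one of several independently locked maps, so writers to different shards never block each other.

```python
import threading
import zlib

class ShardedMap:
    """Illustrative sharded key-value store: one lock per shard instead of
    one global lock, reducing contention under concurrent access."""

    def __init__(self, n_shards=8):
        self.shards = [dict() for _ in range(n_shards)]
        self.locks = [threading.Lock() for _ in range(n_shards)]

    def _shard(self, key):
        # Stable hash so every client maps a given key to the same shard.
        return zlib.crc32(key.encode()) % len(self.shards)

    def put(self, key, value):
        i = self._shard(key)
        with self.locks[i]:
            self.shards[i][key] = value

    def get(self, key):
        i = self._shard(key)
        with self.locks[i]:
            return self.shards[i].get(key)
```

In the shared-memory C++ setting the same partitioning applies, with the dictionaries replaced by hash maps living in a shared segment and the locks by interprocess mutexes.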

Giulia Mescolini


Nestlé is the largest food and beverage company in the world, and Nestlé Research, based in Lausanne, is the division devoted to scientific research in related fields such as health & nutrition, food science, and materials. I worked within the Digital Health group, which specializes in developing innovative digital solutions to support health and nutrition research and to guide consumers in balancing their nutrition or preventing diseases.

The aim of my project was to enrich the information in Nestlé's food databases using Machine Learning tools. In particular, I worked on classifying food descriptions according to their ingredients and cooking methods, mapping them to labels organized hierarchically in a food ontology. For this task, I relied on Natural Language Processing models, and I had the opportunity to challenge myself not only with state-of-the-art techniques but also with developing strategies to handle complex and heterogeneous real-world data.
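The hierarchical aspect of the labels can be pictured with a toy ontology (the labels below are invented, not Nestlé's actual taxonomy): each label points to its parent, so a leaf prediction automatically implies all of its coarser ancestors, which matters when evaluating or aggregating predictions at different levels of the hierarchy.

```python
# Hypothetical mini-ontology: child label -> parent label.
parent = {
    "beef stew": "stew",
    "stew": "cooked dish",
    "cooked dish": "food",
}

def ancestors(label, parent):
    """Walk up the ontology from a predicted leaf label, returning the
    full chain from the leaf to the root."""
    chain = [label]
    while chain[-1] in parent:
        chain.append(parent[chain[-1]])
    return chain
```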

Moreover, I had an engaging experience at my workplace, learning the dynamics of teamwork and strengthening my soft skills. I am especially grateful to the colleagues I met for the welcoming environment that surrounded me during my internship.

Ali Garjani


Jules-Gonin Ophthalmic Hospital – Fondation Asile des Aveugles is an eye hospital in Lausanne whose roots go back to 1843. The hospital's data science team explores projects that use machine learning and data science to analyze medical images. During my internship, I worked on predicting disease recurrence in patients with Central Serous Chorioretinopathy (CSCR) from multimodal imaging. CSCR is the fourth most common eye disease affecting the retina, and it typically occurs in males between their 20s and 50s, who experience central vision loss or distortion. Although there are treatments that mitigate the symptoms of CSCR, there is still a chance the disease will recur. Hence, a tool to predict this recurrence can help doctors treat the patient before the disease reaches a critical stage. In my six-month internship, I worked on developing this tool, which consists of applying image processing and medical imaging techniques to raw scans to extract features and parameters, forming sequential data out of these features, and training a time-series deep model on the data to make the prediction. The figure below compares the performance of the different models using receiver operating characteristic (ROC) curves.
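As a reminder of what the ROC comparison measures, here is a minimal sketch (not the project's actual evaluation code) of the area under the ROC curve computed via its rank interpretation: the probability that a randomly chosen positive case is scored above a randomly chosen negative one.

```python
def roc_auc(labels, scores):
    """ROC AUC as a rank statistic. `labels` are 0/1 ground truths,
    `scores` the model's predicted recurrence risks; ties count half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means the model ranks every recurring patient above every non-recurring one; 0.5 is no better than chance.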

Paolo Motta


NVISO is an artificial intelligence company founded in 2009 and headquartered at the Innovation Park of the École Polytechnique Fédérale de Lausanne in Switzerland. It provides artificial intelligence solutions that can sense, comprehend and act upon human behavior using emotion analytics. NVISO's products and services consist of applications, software development kits (SDKs), and data services. These are used by NVISO's customers to measure and increase productivity, and to accurately perform specific business functions, such as the automation of customer-facing operations. NVISO's commercialization is focused on AI solutions for several key industries.

I have worked in the Research and Development (R&D) group in the field of Computer Vision, focusing on training deep learning models for various customer projects. Specifically, my projects dealt with object detection, facial action unit recognition, and body pose estimation. These models are crucial for various applications such as security systems, autonomous vehicles, and emotion recognition.

During my time in the R&D group, I successfully developed deep learning models that achieved high levels of accuracy and performance on our datasets. The models were trained using a combination of convolutional neural networks (CNNs) and graph convolutional networks (GCNs) and were optimized with techniques such as transfer learning and data augmentation. The main outcome of these projects was a set of highly accurate deep learning models performing the tasks required by the clients, which received positive feedback for their performance and accuracy. Additionally, I was also responsible for the maintenance and improvement of the models over time, ensuring that they continued to meet the evolving needs of our customers.
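Data augmentation, one of the techniques mentioned above, can be illustrated with the simplest label-preserving transform. The sketch below (generic, not NVISO's pipeline) shows a horizontal flip and, crucially for tasks like body pose estimation, the matching correction of keypoint annotations so labels stay consistent with the transformed image:

```python
def hflip_image(img):
    """Horizontally flip an image stored as a list of pixel rows."""
    return [row[::-1] for row in img]

def hflip_keypoints(kps, width):
    """Mirror (x, y) keypoints to match the flipped image: x -> width-1-x.
    Forgetting this step silently corrupts pose training data."""
    return [(width - 1 - x, y) for x, y in kps]
```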

Moritz Waldleben


CFS Engineering is a small company located at the EPFL Innovation Park with a mission to offer services in the numerical simulation of fluid and structural mechanics engineering problems. They specialize in aerodynamics and use their in-house Navier-Stokes Multi-Block solver to perform simulations on their servers.
My project was about mesh smoothing. A computational fluid mesh divides the field around an object into grid cells, which are then used by a computational fluid dynamics solver. Constructing complex meshes is not an easy task and is normally done with commercial software such as ICEM CFD. A standard way to generate them is transfinite interpolation (TFI). After constructing an initial mesh, quality metrics such as skewness and boundary orthogonality can be improved with smoothing. I worked on elliptic mesh smoothing, a procedure in which elliptic differential equations are used to smooth an existing TFI mesh. I extended and further developed a Fortran program, to be integrated into their flow solver, that improves a generated mesh in 3D.
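The core idea can be sketched with Laplacian smoothing, the simplest relative of the elliptic procedure described above (the production code was Fortran and solved the full elliptic system; this Python toy only shows the mechanism): each interior node of a structured 2-D grid is repeatedly moved toward the average of its four neighbours, which is a Jacobi iteration for the discrete Laplace equation with the boundary nodes held fixed.

```python
def laplacian_smooth(grid, iters=50):
    """Jacobi sweeps over a structured 2-D grid of (x, y) node positions
    (nested lists). Interior nodes relax toward the mean of their four
    neighbours; boundary nodes stay fixed."""
    ni, nj = len(grid), len(grid[0])
    for _ in range(iters):
        new = [row[:] for row in grid]
        for i in range(1, ni - 1):
            for j in range(1, nj - 1):
                nb = [grid[i-1][j], grid[i+1][j], grid[i][j-1], grid[i][j+1]]
                new[i][j] = (sum(p[0] for p in nb) / 4.0,
                             sum(p[1] for p in nb) / 4.0)
        grid = new
    return grid
```

Full elliptic smoothing generalizes this by adding source terms to the governing equations, which is what allows it to also control spacing and orthogonality near boundaries.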

Thomas Rimbot


CERN is the most prominent European research center in nuclear physics, providing researchers around the world with advanced technological tools to uncover the secrets of the universe, the most famous being the Large Hadron Collider (LHC). I did my internship in the TE-MSC-TM section (TEchnology department, Magnets, Superconductors and Cryostats group, Tests and Measurements section), under the supervision
of section leader Dr. Stephan Russenschuck. The goal was to develop and implement a generalized field description in strongly-curved accelerator magnets.

In practical applications, accelerator magnets are either straight or only slightly curved. The description of the magnetic field in the aperture is therefore based on the classical Fourier expansion in cylindrical coordinates, which assumes a cylindrical geometry. However, for more strongly curved magnets with larger eccentricities, we expect this approximation to degrade, if not become completely unusable. My goal was to formalize this framework and develop the theory in a better-suited coordinate system: toroidal coordinates, with the expansion in toroidal harmonics. I derived and implemented formulas for their computation, compared them to the classical expansion, showed that the latter fails entirely in the presence of curvature, applied them to real test cases such as the Extra Low Energy Antiproton ring (ELENA), and wrote a paper for publication together with the people I worked with.

Figure 1: ELENA magnet.

In particular, one of the main questions concerned scaling laws. In the classical setting, the Fourier expansion allows us to compute the expansion coefficients (harmonics) on a reference circle inside the aperture of the magnet. However, these harmonics only allow the field to be reconstructed on that specific circle. Instead of recomputing the harmonics at every point inside the magnet, we can use scaling laws, which give us formulas to directly rescale the computed coefficients and obtain the harmonics everywhere inside the aperture. The problem is that we expect these scaling laws not to hold in a curved setting. On the other hand, if we use the better-suited toroidal harmonics, their scaling laws do in fact hold, allowing the field to be reconstructed everywhere in the magnet.
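For reference, the classical scaling law follows directly from the two-dimensional multipole expansion of the field in the straight case (here $r_0$ is the reference radius and $n$ the harmonic order):

```latex
B_y + i B_x \;=\; \sum_{n=1}^{\infty} \left(B_n + i A_n\right)
\left(\frac{x + i y}{r_0}\right)^{n-1},
\qquad
B_n(r) \;=\; B_n(r_0) \left(\frac{r}{r_0}\right)^{n-1}.
```

A coefficient measured on the reference circle of radius $r_0$ can thus be rescaled to any radius $r$ inside the aperture by a simple power law. It is precisely this rescaling that breaks down in the strongly curved geometry, whereas the analogous laws for toroidal harmonics remain valid.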

Figure 2: Scaling laws of the toroidal harmonics (red) vs the classical Fourier harmonics (blue) in strongly-curved magnets.

Fekih Selim


I did my master's internship at Data Friendly Space, an INGO that provides humanitarian organizations with software development capacity and Machine Learning solutions. I worked on implementing NLP solutions for the Data Entry and Extraction Platform (DEEP), a platform designed for secondary data review: annotating documents and performing analysis on them. My internship's objective was to help design NLP solutions that speed up and optimize the analysis process for analysts.

More specifically, I worked on designing different NLP models: one for extracting relevant entries from a document and another for classifying relevant entries into a large predefined set of tags (referred to as an analysis framework in the humanitarian sphere). The training data consists of entries previously annotated by humanitarian analysts. I contributed by helping make the models faster, less memory-consuming and more accurate, which reduced inference costs and made the models more helpful to humanitarian analysts.
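Classifying an entry into a large tag set is a multi-label problem: the model emits one confidence score per tag and a decision rule turns scores into tags. A common refinement, sketched here with invented tag names (not DEEP's actual framework), is to tune a separate threshold per tag, since tag frequencies in humanitarian data are highly unbalanced:

```python
def assign_tags(scores, thresholds, default=0.5):
    """Multi-label decision step: keep every tag whose model confidence
    reaches its (per-tag) threshold. `scores` maps tag -> confidence."""
    return sorted(tag for tag, s in scores.items()
                  if s >= thresholds.get(tag, default))
```

The thresholds themselves are typically chosen on a validation set, e.g. to maximize each tag's F1 score.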

Below is a visualization of some of the results we obtained on the Entry Classification task.

Anna Peruso


I conducted my internship at Grammont Finance, in Lutry (VD). Grammont Finance SA is a small Swiss company specialised in financial engineering and trading. The company's main activity is the trading of Swiss equity and index derivatives on the EUREX market.

The goal of my project was to compare European and American option prices obtained with Heston's stochastic volatility model against those obtained with the Local Volatility model, when the local volatility surface is itself calibrated to Heston prices. Both models have their roots in the attempt to overcome Black-Scholes' unrealistic assumption of constant volatility, and both are important tools that help trading companies price options more accurately.

One typical approach to pricing derivatives in computational finance is to solve the associated parabolic PDE, justified by the Feynman-Kac theorem. My main assignment was to implement these two models from scratch in C++, together with a multi-dimensional Finite Difference solver for PDEs flexible enough to handle both. Since many simulations need to be run to price different options, much attention was devoted to finding stable and fast numerical schemes in order to keep computational costs down.
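The finite-difference idea, and the stability concern behind the choice of schemes, can be shown on the heat equation, the prototype of the parabolic PDEs that Feynman-Kac produces (the actual project used C++ and more elaborate multi-dimensional schemes; this is only a one-dimensional explicit sketch):

```python
def heat_explicit(u, alpha, dx, dt, steps):
    """Explicit (forward-Euler) finite-difference scheme for
    u_t = alpha * u_xx on a uniform grid, with Dirichlet boundaries
    held fixed. Only stable when alpha * dt / dx**2 <= 0.5, which is
    exactly why implicit schemes are often preferred for pricing."""
    r = alpha * dt / dx**2
    for _ in range(steps):
        u = [u[0]] + [u[i] + r * (u[i+1] - 2 * u[i] + u[i-1])
                      for i in range(1, len(u) - 1)] + [u[-1]]
    return u
```

Implicit schemes such as Crank-Nicolson remove the time-step restriction at the cost of solving a linear system per step, which is the usual trade-off when pricing many options quickly.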

Philippine des Courtils


Metadvice is a four-year-old start-up located in St-Sulpice (VD) whose core activity is building AI for health-care systems, clinicians and pharma; it currently has offices in England, the US and Switzerland. More specifically, the company specialises in major cardiometabolic and autoimmune diseases such as hypertension, type II diabetes and rheumatoid arthritis, as well as cancers, in collaboration with experts (professors and clinicians).

I first wrapped up a project centred on precision medicine for rheumatoid arthritis, in collaboration with a clinic in England. Using cleaned, anonymized clinical data and medical knowledge, my goal was to produce therapy recommendations for patients using transfer learning and fairly simple neural nets.

Justine Stoll


Founded in 2010 in Rwanda, Laterite is a data, research, and technical advisory firm that helps clients understand and analyze complex development challenges in the social sector. Aside from its main activities, Laterite is developing Laterite.ai, a platform providing a collection of apps that researchers in the social sector can use to design better surveys, analyze data faster and explore new ideas.

Throughout my internship I worked on several of these tools. For example, one of the apps I contributed to aims at facilitating the processing and extraction of information from long lists of answers to open-ended questions. It is common to include open-ended questions, as opposed to multiple-choice questions, in surveys. While these allow for a totally unbiased expression of opinions, the answers are immensely harder to interpret and use than those of multiple-choice questions. In fact, if we ask the same open-ended question to 10,000 people, we are likely to end up with 10,000 different answers, even though only, say, 5 or 6 themes are covered. The tool we developed groups all answers to an open-ended question into clusters of common content and summarizes each cluster. In this way, the researcher immediately has an overview of the themes covered in the answers.
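The grouping step can be sketched with a toy stand-in for the app's actual method: a greedy single-pass clustering on token overlap (real systems typically use text embeddings and stronger clustering; the threshold and examples below are purely illustrative).

```python
def jaccard(a, b):
    """Token-overlap similarity between two short answers."""
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b)

def cluster_answers(answers, threshold=0.2):
    """Greedy clustering: each answer joins the first cluster whose
    representative (first member) is similar enough, else starts a
    new cluster."""
    clusters = []
    for ans in answers:
        for cl in clusters:
            if jaccard(ans, cl[0]) >= threshold:
                cl.append(ans)
                break
        else:
            clusters.append([ans])
    return clusters
```

Each resulting cluster can then be summarized into a theme, which is the overview the researcher sees.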

The figure below shows the output of the app, where we analyzed 100 answers to the open-ended question "In your opinion, how could this education program be improved?". Out of the 100 responses, only two underlying themes were identified and summarized.

Patron Théo


During my internship, I had the opportunity to work at AXA, a French insurance company that is also heavily involved in technology research. I worked in a tech lab focused on computer vision research, where I collaborated with a team of interns and supervisors. My work included two main projects: one focused on flood mapping using satellite imagery, and the other on developing an internal tool to assist risk engineers.

In the first project, I used my skills in Python and JavaScript to develop a prototype tool on Google Earth Engine that allowed us to dynamically compute metrics related to flood mapping and display the computed flooded areas. I gained significant knowledge of how satellite imagery works, including both SAR and optical imagery, and used data from the Sentinel satellites. These skills have been of great use to me in my current thesis work.
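One standard building block of SAR-based flood mapping is that smooth open water reflects the radar signal away from the sensor, so flooded pixels show very low backscatter. A toy version of that thresholding step (the -15 dB value is illustrative, not a calibrated one, and real pipelines add speckle filtering and change detection against a pre-flood image):

```python
def flood_mask(backscatter_db, threshold=-15.0):
    """Flag pixels whose SAR backscatter (in dB) falls below a threshold
    as candidate water. Input is a 2-D nested list; output is a boolean
    mask of the same shape."""
    return [[v < threshold for v in row] for row in backscatter_db]
```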

In the second project, I worked as a software developer and data scientist in a team of 15 people using the Scrum methodology. I learned how to write efficient, clean, and well-documented code in Python, how to work effectively in a team environment, and how to apply modern data science tools and techniques.

Overall, my experience at AXA was invaluable in terms of developing my skills and understanding of computer vision, data science, and software engineering.

Fadel Mamar Seydou


The Jules-Gonin eye hospital, or "Fondation Asile des aveugles", is a 180-year-old privately owned foundation. It has specialists in every domain of ophthalmology, making it an impressively compact one-stop solution for people with eye diseases and a reference in ophthalmology in Switzerland and Europe. The foundation is located in the heart of Lausanne at Avenue de France 15, 1002 Lausanne. It is a medium-sized hospital with over 600 collaborators.

My role as an intern was to develop a deep learning model for the segmentation of atrophy lesions secondary to wet age-related macular degeneration (AMD). AMD is a disease in which the macula (a part of the retina) gets damaged as the person ages. It results in a loss of central vision and is currently the leading cause of irreversible blindness in the developed world. By 2040, around 288 million people are expected to suffer from it. Currently, no effective treatment exists, making it an active area of research. In my case, I focused on the "wet" case, as it is the most challenging one and no known algorithm exists for segmenting it from SD-OCT (spectral-domain optical coherence tomography) scans. It is worth noting that dry AMD accounts for 80% of cases. My work built on a previous intern's work and on a publication by RetinAI (a partner and strong stakeholder in the project): I leveraged RetinAI's publication on the segmentation of atrophy lesions secondary to dry AMD and extended the methodology to wet AMD.
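Segmentation models of this kind are commonly scored with the Dice coefficient, which measures the overlap between the predicted lesion mask and the expert's annotation. A minimal sketch (generic, not the project's actual evaluation code):

```python
def dice(pred, target):
    """Dice coefficient between two binary masks given as flat lists of
    0/1 values: 2|A∩B| / (|A|+|B|). Returns 1.0 when both masks are
    empty, the usual convention for lesion-free scans."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * inter / total if total else 1.0
```

The same quantity, made differentiable, is also widely used as a training loss for medical image segmentation.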

The internship environment, along with the deliverables expected from the intern, made the experience very positive and rich in learning. I was able to collaborate with an external stakeholder (RetinAI) and with domain experts, which helped me gain a better understanding of the responsibilities of a data scientist.

Figure 1: Example of hyperparameter search: tuning of the learning rate.