Thomas Sanchez – Modelling Early Visual Processes
About : Ageing in Action and Vision Lab, Paris, France
I joined the Ageing in Action and Vision Lab in Paris, France, from 20 February to 13 July 2017. The main research interest of this laboratory is the impact of visual ageing on the perceptive and cognitive aspects of vision. The most interesting part is their cross-disciplinary approach to this problem, combining experimental psychophysics with computational and experimental neuroscience.
My main role was to develop a neuronal model able to reproduce human navigation towards a goal. To do so, I implemented a multi-layer neuronal model that processed the visual input in a fashion similar to the brain. This model essentially extracted the different motion cues and combined them to reconstruct the optic flow, which is the pattern of motion of objects in a visual scene caused by the relative motion between an observer and the scene. This neuronal optic-flow computation was then used for spatial navigation.
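The starting point of such models is the brightness-constancy principle behind optic-flow estimation. As a minimal, hypothetical sketch (not the actual multi-layer model, whose details the report does not give), the velocity of a translating 1-D signal can be recovered from its spatial and temporal derivatives:

```python
# Toy 1-D gradient-based motion estimation. Brightness constancy gives
# I_x * v + I_t = 0, so the velocity is estimated as v ≈ -I_t / I_x
# (here via a least-squares fit over all pixels).
import math

def frame(t, v, n=100):
    """A smooth 1-D 'image' translating at velocity v (pixels/frame)."""
    return [math.sin(0.2 * (x - v * t)) for x in range(n)]

def estimate_velocity(f0, f1):
    """Least-squares velocity from spatial and temporal derivatives."""
    num = den = 0.0
    for x in range(1, len(f0) - 1):
        ix = (f0[x + 1] - f0[x - 1]) / 2.0   # spatial derivative (central diff.)
        it = f1[x] - f0[x]                   # temporal derivative
        num += -ix * it
        den += ix * ix
    return num / den

v_true = 1.5
v_est = estimate_velocity(frame(0, v_true), frame(1, v_true))
```

A full optic-flow model applies this idea in 2-D and combines many such local estimates into a global flow field.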
The second part of my work was to investigate and reproduce existing behavioural experimental results using the previously developed model. We were able to investigate navigation tasks and reproduce several observed results. As a third part, we investigated the effect of ageing by implementing some of its known effects at a cognitive level, and attempted to reproduce experimental results in order to better understand which consequence of ageing has the strongest impact on spatial navigation.
This internship was extremely interesting, as it allowed me to work on a modelling project from scratch, in a very autonomous fashion, while still getting an insight into the work of my colleagues. It was a very valuable taste of what research looks like.
Paul Berclaz – Building a color-measurement device for digital and flexographic printing
About : Bobst SA, Switzerland
I did my internship at Bobst SA, a company that is a leader in making all the machinery needed to produce packaging, from printing to folding and cutting. They specialize in top-end products, such as cosmetics packaging, and in high-speed production.
Because they produce top-end packaging, they have to be very careful about the quality of their products. In the color department this means that the printed color must not deviate too much from the original one. Currently this is checked with a spectrometer at the end of the assembly line. However, this has the downside that a lot of paper must be discarded if a problem is detected. The ideal solution would be to check the color deviation on-line: that way the problem can be corrected immediately, or the machinery stopped.
This project was a study of the viability of using a color camera to monitor the color deviation.
The first step was to check the precision of color measurement using a camera on the control half-tones (the color patches printed on the side of the sheet). The goal was then to reproduce the whole color range of a given printer with only a few half-tones, and to be able to predict the color deviation anywhere on the print.
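The report does not name the metric used, but color deviation in the print industry is typically quantified as ΔE, the Euclidean distance between two colors in CIELAB space. A minimal sketch of the CIE76 variant:

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b* space.
    A deviation above roughly 2-3 units is typically visible to a trained eye."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

reference = (52.0, 42.5, 28.0)   # hypothetical target patch in L*a*b*
measured  = (53.0, 41.0, 29.0)   # hypothetical camera reading
deviation = delta_e76(reference, measured)
```

An on-line check would compute such a deviation for every measured patch and raise an alarm above a tolerance threshold.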
The results were encouraging and were even confirmed by real-life measurements made directly on the printers. This led Bobst to pursue the work further with another project proposal.
The long-term goal is to detect deviations in real time and correct them automatically by changing a few variables on the printer (pressure, ink density, etc.).
Aleksa Stankovic – Numerical methods for stochastic differential equations
About : Nafora, Switzerland
I was part of the Nafora trading company from 14 February to 15 August 2017. The company focuses on writing and maintaining algorithms that analyze the market and look for good opportunities to buy or sell (long or short) products offered on many global exchanges.
Since the company is not very large, I was involved in almost every corner of its work. This included software development tasks ranging from implementing new functionality for the in-house trading engine to emergency fixes on issues impacting the production environment. Another part of my work on this side was getting new types of data into the software architecture and implementing migrations between the different APIs used in the company.
Apart from software development, I was also involved in algorithmic research. At the start of my internship, I was presented with the successful trading strategies running in production at Nafora, and was given a trading idea to investigate. The result of this research was a strategy that showed very strong historical performance (the graph can be seen below). Besides working on two more strategies, I constantly cooperated with colleagues, helping them in areas where I am strong (particularly numerical algorithms) and getting their insight on my own projects (their financial knowledge and experience proved to be of great use).
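The historical-performance evaluation mentioned above boils down to a backtest: replaying a strategy's positions against past prices and accumulating profit and loss. A toy sketch with a made-up momentum rule (the actual strategy is of course not described in the report):

```python
def backtest_pnl(prices, positions):
    """Cumulative profit-and-loss of a strategy holding `positions[i]`
    units (positive = long, negative = short) from price i to price i+1."""
    pnl, curve = 0.0, []
    for i in range(len(prices) - 1):
        pnl += positions[i] * (prices[i + 1] - prices[i])
        curve.append(pnl)
    return curve

# Toy momentum rule on made-up prices: long after an up-move, short after a down-move.
prices = [100, 101, 103, 102, 104, 107, 106]
positions = [0] + [1 if prices[i] > prices[i - 1] else -1
                   for i in range(1, len(prices) - 1)]
curve = backtest_pnl(prices, positions)
```

Plotting `curve` over time gives exactly the kind of PNL graph shown in Figure 1.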
The internship was very valuable because it showed me different aspects of an algorithmic trading company's business. I could constantly monitor the performance of different strategies and see any violent movements in the market, which gave me the feeling of being very close to the action. This proved very motivating for research, because doing a good job could very quickly translate into real value for the company.
Figure 1: PNL (profit and loss) analysis of a strategy on historical prices from 14 August 2009 to 5 May 2010.
Clément Lefebvre – Developing tools for medical image data handling and visualization
About : CHUV, Lausanne
The internship took place at the Centre Hospitalier Universitaire Vaudois (CHUV) in Lausanne. More specifically, I joined the Center of Cardiac Magnetic Resonance directed by Professor Juerg Schwitter. The group performs cardiac medical imaging of patients using magnetic resonance, and also carries out research aimed at improving the acquisition and analysis of cardiac MRI data.
The first task was to derive an efficient procedure to segment the heart's left and right ventricles and construct a mesh that can then be used for numerical simulation. The long-term goal is to construct a patient-specific mesh (see image) and use it to run an accurate simulation of the patient's heart. Such a simulation could be used to establish a diagnosis without resorting to more invasive procedures. This problem was solved using the native features of 3D Slicer and the Fast Grow-Cut algorithm, reducing the time needed for the segmentations to a few minutes.
The second task was to address the mis-alignment of the short-axis images of the left ventricle caused by the patient holding their breath at different positions during the MRI procedure. This mis-alignment can cause a loss of accuracy in the measurement of the left-ventricle volume, which can lead to a wrong diagnosis. To correct it, I implemented a module in 3D Slicer using Python. With this module, we are able to compute the linear transformation corresponding to the diaphragm displacement for each breath-hold. The long-term goal is to obtain the more accurate measurements needed for an accurate diagnosis.
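The actual module works on 3-D MRI data inside 3D Slicer; as a much-simplified illustration of the idea, a breath-hold displacement can be estimated by searching for the translation that best aligns two intensity profiles (here 1-D and integer-valued for brevity):

```python
def best_shift(ref, moving, max_shift=5):
    """Integer shift of `moving` that best matches `ref` (1-D intensity
    profiles), by minimising the mean squared difference over the overlap."""
    def msd(s):
        pairs = [(ref[i], moving[i - s]) for i in range(len(ref))
                 if 0 <= i - s < len(moving)]
        return sum((a - b) ** 2 for a, b in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=msd)

profile = [0, 0, 1, 4, 9, 4, 1, 0, 0, 0]   # reference breath-hold
shifted = profile[2:] + [0, 0]             # same profile moved 2 samples
correction = best_shift(profile, shifted)  # displacement to undo
```

The recovered shift plays the role of the linear transformation applied to re-align each breath-hold's slices.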
Yann-Eric N’cho – Business Analytics for Optical Sorting Machines
About: Bühler AG, Uzwil, Switzerland
My internship was conducted at Bühler AG, more precisely at its satellite office in the EPFL Innovation Park, from 8 March to 31 August 2017. Bühler is a Swiss family-owned company established in 1860 in Uzwil (St. Gallen). It is a technology company specialized in plants, equipment and services related to food processing and advanced materials. The core technologies of the Group are in the field of mechanical and thermal process engineering.
My main task during this period was to develop analytical tools for an optical rice sorter from Bühler's advanced technologies portfolio, the SORTEX S UltraVision (Figure 1). Many of Bühler's machines are equipped with sensors and process high volumes of raw material every day. This constitutes a huge amount of data from which we want to extract as much information as possible. For this purpose, we performed exploratory analysis and applied machine learning algorithms in order to grasp the characteristics of the process, the machine and the product being processed. A major part of the work was finding the best way to render the insights, both interactive and user-friendly; a dashboard appeared to be the most convenient solution, and we came up with two main approaches based respectively on R Shiny and Microsoft Power BI. The second part was to develop an anomaly detection algorithm to gain better control over the process. By the end of the internship, we had extended this anomaly detection research to other machines.
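The report does not specify the anomaly detection method; one of the simplest approaches such a system could build on is z-score thresholding, flagging sensor readings that stray too far from the mean. A sketch with hypothetical reject-rate readings:

```python
import math

def zscore_anomalies(values, threshold=3.0):
    """Indices of points further than `threshold` standard deviations
    from the sample mean."""
    mean = sum(values) / len(values)
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return [i for i, v in enumerate(values) if abs(v - mean) > threshold * std]

# Hypothetical reject-rate readings (%) from a sorter, with one spike.
readings = [2.1, 2.0, 2.2, 1.9, 2.1, 9.5, 2.0, 2.2]
anomalies = zscore_anomalies(readings, threshold=2.0)
```

Production systems typically replace the global mean and standard deviation with rolling estimates so the threshold adapts to slow drift in the process.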
Figure 1: SORTEX S UltraVision, optical rice sorter.
Figure 2: First page of the dashboard realised with Microsoft Power BI.
Alexander Lorkowski – Modeling of Nano-Particle Interactions in Suspension
About : Empa, Switzerland
I had the great opportunity to work for the Advanced Colloidal Materials Engineering (ACME) team at the Swiss Federal Laboratories for Materials Science and Technology (EMPA). The focal point of EMPA is materials and technology research; the institute strives to bridge application-oriented research and the practical implementation of new ideas. The ACME team develops functional materials using the potential of sol-gel chemistry.
As part of the ACME team, my objective in this internship was to understand nanoparticle interactions in suspension. The best-known theory describing the forces governing these interactions is the DLVO theory, developed by Derjaguin, Landau, Verwey and Overbeek. However, for particles with radii below 100 nm, the predicted stabilities are less accurate, and sometimes contradict experimental evidence. The project consisted of developing and testing code for the interaction energy of a charged plate or spherical nanoparticle with the long-range ionic distribution that subsequently forms around the charges.
The developed code for the spherical nanoparticle appears to match the analytical curves quite well. It can serve as the groundwork for further research and investigation such as attaching polymers to the surface of the nanoparticle or changing the electrolyte in the solution. An example is provided in Figure 1.
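For reference, the classical DLVO prediction plotted in Figure 1 can be sketched as the sum of a screened electrostatic repulsion and a van der Waals attraction, here in the Derjaguin approximation for two equal spheres at constant surface potential (all parameter values below are illustrative, not those of the project):

```python
import math

EPS0 = 8.854e-12    # vacuum permittivity, F/m
EPS_R = 78.5        # relative permittivity of water

def dlvo_energy(h, radius, kappa, psi0, hamaker):
    """Classical DLVO pair energy (J) at surface separation h (m):
    screened double-layer repulsion plus van der Waals attraction."""
    v_edl = (2 * math.pi * EPS0 * EPS_R * radius * psi0 ** 2
             * math.log(1 + math.exp(-kappa * h)))
    v_vdw = -hamaker * radius / (12 * h)
    return v_edl + v_vdw

# Illustrative 50 nm particle in ~1 mM salt (Debye length ~10 nm),
# 25 mV surface potential, Hamaker constant 1e-20 J.
r, kappa, psi0, A = 50e-9, 1e8, 25e-3, 1e-20
near = dlvo_energy(1e-9, r, kappa, psi0, A)    # repulsive-barrier region
far = dlvo_energy(50e-9, r, kappa, psi0, A)    # screened, nearly zero
```

The small-particle corrections studied in the project modify exactly these two terms when κr_p is of order one, which is where the classical curves become unreliable.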
More than ever, computer simulation has become an effective and essential tool for many scientists, alongside traditional theory and experiment. This internship provided an excellent opportunity to put the knowledge acquired in the CSE program to practical use.
Figure 1: Spherical nanoparticle simulations, with the y-axis plotted in log-scale, of A) ϰrp = 1 and adimensional surface charge of 1, B) ϰrp = 1 and adimensional surface charge of 5, C) ϰrp = 3 and adimensional surface charge of 5, and D) ϰrp = 3 and adimensional surface charge of 10. DLVO (green), DLVO with correction (black) and the Ohshima (blue for negative ions and red for positive ions) predictions for the ionic distributions are plotted for each.
Guillaume Mollard – GPU acceleration of Machine Learning algorithms
About : Dathena, Switzerland and Singapore
Dathena is a young startup created in 2016 that provides a data governance platform for companies. The idea is to use Natural Language Processing methods combined with Machine Learning algorithms to classify large amounts of data. This classification allows companies to regain control over their data and to locate and protect confidential data from leakage. By assigning a category and a level of confidentiality to each file in the company dataset, Dathena can detect abnormal access rights and suggest Data Loss Prevention policies.
I spent 5 months in Dathena's research and development office in Singapore. My project was to find ways to speed up Dathena's classification pipeline by exploiting GPU computing power. Thanks to the startup's partnership with NVIDIA, I had the opportunity to work with a Tesla P100 GPU, at the time their most powerful card designed for GPU computing.
As the startup uses Scala and Spark for task parallelization within a computer cluster, I tried different open-source libraries that could bring GPU support to such an environment. Using the DeepLearning4J library developed by Skymind, I built a multilayer perceptron that ran 4.5 to 7 times faster on the GPU (see Figure 1). This deep learning algorithm is now integrated in Dathena's classification pipeline. I also worked on a GPU implementation of k-means, which is still a work in progress. Aside from my main project, I also worked a bit on speech recognition, OCR integration and other small tasks concerning the optimization of parts of the pre-existing code.
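For illustration, here is a plain CPU version of the k-means iteration mentioned above, in Python rather than the Scala/Spark stack; a GPU implementation parallelises the distance computations of the assignment step, which dominate the cost:

```python
def kmeans(points, k, iters=20):
    """Lloyd's algorithm on 1-D points. Each iteration assigns every
    point to its nearest centroid, then recomputes centroids as means."""
    centroids = points[:k]                      # naive init: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                        # assignment step
            j = min(range(k), key=lambda c: (p - centroids[c]) ** 2)
            clusters[j].append(p)
        centroids = [sum(c) / len(c) if c else centroids[j]  # update step
                     for j, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 10.0, 10.4, 9.6]
centers = kmeans(data, 2)
```

On a GPU, every point's distance-to-centroid computation is independent, which is what makes the assignment step an ideal massively parallel kernel.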
This internship helped me understand the challenges that need to be addressed in the machine learning field, as the amount of data to be processed is large and growing fast. It was also a good experience to conduct my own research inside a dynamic and growing startup.
Nils Wandel – Pharmacokinetic Modeling
About : Roche, Switzerland
During my eight-week internship I worked at Roche in Basel. Roche is one of the biggest pharmaceutical companies in the world and develops numerous medicines, in particular against cancer. I worked in the pRED (pharmaceutical Research and Early Development) section; other sections at Roche include, for example, Finance, Production and Diagnostics.
My first task was to bring data from different studies into a common format so that they could be analysed within the same framework. This gave me the idea of a "universal" Roche database (at the moment Roche has several rather specific databases that are not compatible with each other), so I wanted to design a new database that could hold data from all kinds of scientific studies (for example in vivo / in vitro, internal / external studies, etc.). My supervisor was always very open to new ideas, so he let me design the database as well as a web interface to insert, read and analyse the data. The innovation of this new database is a "tag" structure that makes it possible to describe experiments in a very abstract and universal way (see picture below). This description can then be used to make very specific queries, for example: "give me all the data from kidney measurements of male monkeys where a certain substance was administered".
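The tag idea can be illustrated with a small sketch (all tag names and values hypothetical): each experiment carries a set of tags instead of fixed columns, and a query is simply a subset test:

```python
# Each experiment is described by a free set of tags rather than a fixed
# schema, so heterogeneous studies can live in the same store.
experiments = [
    {"tags": {"kidney", "monkey", "male", "substance-X"}, "value": 4.2},
    {"tags": {"liver", "monkey", "male", "substance-X"},  "value": 1.1},
    {"tags": {"kidney", "rat", "female", "substance-Y"},  "value": 7.5},
]

def query(store, *wanted):
    """Return every record carrying all of the requested tags."""
    return [e for e in store if set(wanted) <= e["tags"]]

# "All kidney measurements of male monkeys":
hits = query(experiments, "kidney", "monkey", "male")
```

A real database would index the tags, but the query semantics stay this simple: a record matches if its tag set is a superset of the requested tags.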
After the design and implementation phase of the database and its web interface, I imported some actual study data and started analysing it. The group I worked in mainly used R, in combination with the packages "Shiny" for the visual representation and "RxODE" for solving the ordinary differential equations of the pharmacokinetic models.
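The simplest of the pharmacokinetic models that RxODE handles is the one-compartment model with first-order elimination after an IV bolus, dC/dt = −kC. A sketch in Python (rather than R) comparing a forward-Euler solution with the analytic decay:

```python
import math

def one_compartment(c0, k, t_end, dt=0.01):
    """Forward-Euler integration of dC/dt = -k*C: first-order
    elimination of a drug after an IV bolus of concentration c0."""
    c = c0
    for _ in range(round(t_end / dt)):
        c += dt * (-k * c)
    return c

c0, k = 10.0, 0.5                      # mg/L and 1/h, illustrative values
c_num = one_compartment(c0, k, 4.0)    # concentration after 4 h
c_exact = c0 * math.exp(-k * 4.0)      # analytic solution C(t) = C0*exp(-k*t)
```

Realistic models chain several such compartments (gut, plasma, tissue) into coupled ODE systems, which is where a dedicated solver like RxODE earns its keep.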
Loïc Veyssière – Adapting and integrating a task-based programming model with asynchronous communications into a CFD solver
About : Intel, France
I did my internship at the Exascale Computing Research lab (ECR), run with Intel. The ECR activity can be described through two fields of expertise: the development of profiling tools to analyse compute performance and the communication network, and support for HPC scientific applications.
To tackle the challenges of massively parallel simulations, advances have to be found at all levels of the application: compiler optimization, parallelization at the shared- and distributed-memory levels, vectorization, mathematical algorithms, hardware selection and more. As a consequence, efforts should be concentrated on holistic approaches. The main purpose of the internship was to continue the ECR lab's code-modernization effort by integrating and extending the DC-lib library into the linear solver of Yales2, a software package simulating fluid flows for combustion on massive complex meshes, developed by Coria / CNRS.
The DC-lib library is a task-based programming model. It consists of a hybrid approach using domain decomposition and asynchronous communications to exploit distributed-memory parallelism, combined with Divide-and-Conquer to exploit shared-memory parallelism. As tangible benefits, the method naturally exposes concurrency between the computing tasks, good data locality, and efficient load balancing thanks to the work-stealing mechanism.
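The Divide-and-Conquer part can be sketched as a recursive reduction: the domain is split until subdomains reach a cache-friendly grain size, and the resulting leaf tasks are independent, which is what lets a work-stealing runtime balance them across cores (sequential Python sketch, not DC-lib itself):

```python
def dc_reduce(data, lo, hi, grain=4):
    """Divide-and-Conquer reduction over data[lo:hi]: recursively split
    the domain until subdomains are below `grain`, then compute leaves.
    The two halves are independent, so a work-stealing runtime could run
    them as concurrent tasks; small leaves keep good cache locality."""
    if hi - lo <= grain:
        return sum(data[lo:hi])                   # leaf task: dense local work
    mid = (lo + hi) // 2
    left = dc_reduce(data, lo, mid, grain)        # independent subtask 1
    right = dc_reduce(data, mid, hi, grain)       # independent subtask 2
    return left + right                           # combine step

total = dc_reduce(list(range(32)), 0, 32)
```

In the real library the leaves are mesh subdomains of a linear-solver kernel rather than list slices, and the splits are scheduled as tasks instead of being executed sequentially.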
Finally, my contribution produced positive results: in addition to demonstrating the feasibility of the integration, performance improvements were measured, which is encouraging for future upgrades. Moreover, algorithmic changes were proposed to avoid some machine-to-machine communications.
Figure 1: PRECCINSTA burner with Yales2
Martin Barry – Skin detection by DNNs using multispectral data
About : Sony Stuttgart, Germany
I did a six-month internship at Sony Stuttgart from 1 March until 31 August 2017, working in the image processing department. They pursue diverse fields of research with their new multispectral sensor, mostly detection and estimation tasks: illumination estimation, iris detection, skin detection and so forth. Lately they have mostly been interested in improving earlier results using deep neural networks (DNNs).
My work consisted of developing my own skin-detection DNN; I used Python and the TensorFlow library to implement a proper neural network. The final algorithm consisted of:
(1) pre-processing the image, e.g. computing its texture features (homogeneity, entropy, …) and subdividing the dataset images into smaller patches used as input for the NN;
(2) learning the task with the deep neural network; building the network was the longest part of this internship, as every parameter had to be optimised: number of layers, number of convolutional/dense layers, …;
(3) once the neural network has learnt its task, it is fed into a GUI that is used, as one would expect, to detect skin on a given image.
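The patch extraction of step (1) can be sketched as follows (plain Python on a toy single-channel image; the real input has 16 channels and the patch size is a tuned hyperparameter):

```python
def extract_patches(image, size, stride):
    """Cut a 2-D image (list of rows) into size x size patches — the form
    in which training samples are fed to the network."""
    h, w = len(image), len(image[0])
    patches = []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append([row[x:x + size] for row in image[y:y + size]])
    return patches

# Toy 8x8 "image" with pixel value = linear index.
image = [[y * 8 + x for x in range(8)] for y in range(8)]
patches = extract_patches(image, size=4, stride=4)
```

With a stride smaller than the patch size, patches overlap, which multiplies the number of training samples extracted from each labelled image.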
This network was made possible by the extra information in Sony's 16-channel images, compared with classic 3-channel RGB images, which do not contain enough information.
Working at Sony was a great experience, though one should be careful: it is a big company with a lot of students, so you will be treated more like a semester student in a lab than an actual intern.
Pierre Kibleur – G-Therapeutics
About : G-Therapeutics, Switzerland
G-Therapeutics is a Swiss-Dutch company spun off from Grégoire Courtine's (EPFL) research on spinal cord injury. The company is developing a therapy combining a spinal cord implant with robotically assisted training to help people with spinal cord injury regain walking capability. Aiming to accelerate and augment the patient's functional recovery, the body-weight support system named Rysen will support patients during overground locomotor training. This system is an essential training tool from the first day of rehabilitation to the last, providing adaptable multi-directional support as well as advanced rehabilitation exercises, such as climbing stairs. The importance of using such a robot for rehabilitation was demonstrated in an article published this July in Science Translational Medicine.
My role was to participate in the entire software development process according to the standards for a medical product. Tasks comprised sensor integration, coding the real-time controller, establishing links to the risk management process, and elaborating the software development process according to the IEC 62304 standard for medical device software. I fulfilled many tasks in a small development team and continuously learnt from a totally new and challenging environment. I will do my Master thesis in a parent project of G-Therapeutics' activities, at the frontier between robotics and neuroscience.
Figure 1: Illustration of the robot’s action on the patient, taken from https://actu.epfl.ch/news/