Completed Semester Projects

Spring 2023

Deep generative models such as GANs and VAEs have shown a remarkable ability to learn complex data distributions and produce highly realistic samples. When trained on data from creative domains (e.g., clothing design), these models could provide professionals with an invaluable tool to support their creative practice. For example, a deep generative model could be used to generate variations of an original design, to find radically different designs that might never have been considered, or to blend multiple designs. However, harnessing this power means solving non-trivial problems such as meaningfully identifying specific points in the latent space (inversion) and navigating the latent spaces learned by these models (disentanglement).
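As an illustration, blending two designs reduces to interpolating between their latent codes once inversion has mapped each design into the latent space. The sketch below shows the idea with random vectors standing in for inverted codes; the 512-dimensional latent size and the commented-out generator call are illustrative assumptions, not details of our model:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 512  # assumed latent size, typical of StyleGAN-family models

def blend_designs(z_a, z_b, t):
    """Linearly interpolate between two latent codes.

    Blending two designs amounts to walking the line between their
    latent codes: t=0 reproduces design A, t=1 reproduces design B.
    """
    return (1.0 - t) * z_a + t * z_b

z_a = rng.standard_normal(latent_dim)  # stand-in for the inverted code of design A
z_b = rng.standard_normal(latent_dim)  # stand-in for the inverted code of design B
z_mid = blend_designs(z_a, z_b, 0.5)   # a design "halfway between" A and B
# image = generator(z_mid)  # hypothetical pretrained generator, not shown here
```

Disentanglement then asks that individual directions in this space correspond to interpretable design attributes, so that such walks change one property at a time.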

We developed a deep generative model for clothing designers with working solutions to these problems, which was featured as a demo at NeurIPS 2021 [1]. However, there is still much room for improvement. This project will involve identifying and implementing state-of-the-art methods related to GAN inversion and latent-space disentanglement from recent publications in NeurIPS, ICLR, etc. Depending on time, interest, and motivation, there is also the possibility of implementing new types of generative models such as diffusion models, or implementing new functionalities such as clothing try-on [2].

We are seeking students interested in deep learning, deep generative modeling, and GAN inversion and latent-space disentanglement to join this project.

Contact: [email protected]


Jupyter notebooks have become an essential tool in data science, scientific computing, and machine learning, in both industry and academia. Cloud-based Jupyter notebook services like Google Colab, Noto, and JupyterHub bring the power of Jupyter notebooks into the cloud and make it easier to share and collaborate. At EPFL and other universities, these cloud-based Jupyter notebooks are used as interactive textbooks, platforms for distributing and grading homework, and simulation environments.

These notebooks produce rich logs of interaction data, but there is currently no easy way for teachers and students to view and make sense of this data. This data could provide a valuable source of feedback that both teachers and students could use to improve their teaching and learning. This way of using data is called learning analytics, and we have recently begun designing a software extension that will bring the power of learning analytics directly into cloud-based Jupyter notebooks.

We are looking for students to join in the development of this learning analytics tool with any of the following interests: data visualization, full-stack web development, UX research, learning analytics, education.

Contact: [email protected] or [email protected]

Learning how to grip the pen properly is a fundamental part of handwriting training for children, and it requires constant monitoring of their pen grip posture and timely intervention from teachers. Various sensing technologies have been explored to automate pen grip posture estimation, such as camera-based systems or EMG armbands. In the context of digital writing, namely writing on tablets, these solutions lack portability because of their additional sensors. In this project, we aim to tackle this challenge by exploiting the integrated sensors of touch screens and digital pens. A previous study showed that it is promising to reconstruct the 3D hand pose from the capacitive images provided by the touch screen. Together with the readily available pen tip location and orientation, which are strongly coupled with the hand pose, we postulate that the pen grip posture can be inferred in situ with a single commodity tablet and pen. Building upon this, a new method for evaluating pen grip posture quality can also be investigated.

To this end, in this research project, we will work on an Android tablet or Wacom Pen tablet and develop new algorithms for pen grip posture estimation and analysis. We will have weekly meetings to address questions, discuss progress, and think about future ideas. 

We are looking for students with any of the following interests: Machine Learning, Human-computer Interaction, Computer Vision, and Mobile Computing. Relevant IT skills include Python and knowledge of any one of the following object-oriented programming languages: C++, Java or C#. If you are interested, do not hesitate to contact me.

Contact: [email protected]

It can be difficult to envisage potential downstream ethical and environmental impacts of engineering design decisions, particularly for students trained on school projects with limited exposure to real-world constraints and complexity. This project involves continuing the development of an ethical game (involving drone design) to support students’ reflection on the broader impacts of their design decisions.

Concretely, the student selected for this project will integrate Cellulo robot(s) with the game in order to enable collaborative, multi-person interactions and the collection of haptic feedback.

Skills developed in this project: Robot programming with Qt/QtQuick software, Basic C#, QML programming, working with git, educational activity design, user experience design.

Proposed work packages

  1. Study phase: familiarization with the “game” scenario and design, and existing Cellulo applications.
  2. Design and implement an approach to integrate the Cellulo robots with the game, including collecting haptic feedback.
  3. Field testing “game” with groups of students, including evaluation of the system and educational impact.
  4. Report writing.

Contact: [email protected]

Nowadays people commonly use digital tablets to take handwritten notes, usually requiring a dedicated digital pen or stylus. The technology behind this mainly relies on sensing the pen tip’s contact location on the touch screen, which is rendered as part of the handwriting trajectory and forms the handwriting product. However, one important aspect of the handwriting process is often ignored and underexplored by researchers: the hand imprint left when the hand rests naturally on the screen. Yet the full palm imprint can be detected with the mutual-capacitance sensing technology used in most modern digital tablets. In this project, we postulate that the static and dynamic features of the hand imprint during the writing process convey rich information for handwriting recognition, which could empower users to write on tablets with a conventional pen.

To this end, in this research project, we will 1) create an Android tablet app and 2) develop new deep learning algorithms for handwriting recognition with standalone capacitive images of the touch screen. We will have weekly meetings to address questions, discuss progress, and think about future ideas.

We are looking for students with any of the following interests: Human-computer Interaction, Computer Vision, Deep Learning, and Mobile Computing. Relevant IT skills include Python and knowledge of Android development. If you are interested, do not hesitate to contact me. This project is primarily intended as a thesis project but can be adapted for semester projects.

Contact: [email protected]

Dyslexia is a specific and long-lasting learning disorder characterised by reading performances well below those expected for a certain age. Given the importance of reading throughout a person’s life, it is not surprising that dyslexia has been extensively studied. Yet, there is no consensus about theories or explanations of its origin. A promising recent research trend has focused on abnormalities of the “internal clock” used to sample information as one of the main underlying deficits. We are developing several digital activities to explore that view.

In order to test that hypothesis, we need to measure children’s performances in those activities against some baseline cognitive skills. Thus, the goal of this project is to develop an automatic reading assessment activity. Since the application runs on iPad, you will learn how to program in the Swift language. We are seeking students interested in iOS development, signal processing, and machine learning to join this project.

Contact: [email protected]

Tangible user interfaces (TUIs) are a technology that makes it possible for people to interface with the digital world by manipulating physical objects. Typically, physical objects are tagged with fiducial markers similar to QR codes that make it easier to automatically identify their position and orientation. This information is then processed and used to project a visualization directly onto the objects. For example, this is the approach used by the popular reacTIVision software.

The goal of this project is to develop a TUI toolkit that does not rely on the use of fiducial markers, but instead uses augmented reality technologies to identify objects directly. This finished toolkit will provide users with an easy-to-use system for training the models to identify arbitrary objects, and will operate under a variety of lighting conditions and camera angles. This toolkit will be designed to be a drop-in replacement for existing toolkits such as reacTIVision.

This project will be developed using Swift and ARKit for iOS. We are seeking students interested in augmented reality and iOS development to join the project.

Contact: [email protected]

Virtual reality (VR) has the potential to radically disrupt education. However, we know little about how to design effective learning experiences in VR. To address this, we are currently developing a variety of VR learning experiences that will be tested with students and teachers.

We are seeking students interested in joining in on the development of educational VR applications for the Oculus Quest 2 using the Unity3D platform. Relevant technical experience includes knowledge of C#, Unity XR Interaction Framework, and Unity3D.

Contact: [email protected]

Spring 2022

Teachers need detailed and actionable feedback on their performance in order to improve their teaching. Useful feedback includes how much time they spent lecturing versus how much time students spent on in-class activities, and at which moments the teacher was stressed during their lectures. Classroom conversations are a rich source for analyzing the interactions between teacher and students during class time.

The goal of this project is to provide automated feedback to teachers based on an analysis of classroom conversations. The project has two parts for two students: 1) one student will work on a dataset of recorded conversations of teachers managing mathematical robotic classrooms and will use novel machine learning algorithms to identify teachers’ orchestration patterns; 2) the other student will design a dashboard that visualizes the key moments of the class and elements of the teacher conversation analysis, and provides actionable feedback to teachers.

Prerequisites: For part 1, experience or interest in speech recognition, Python, machine learning, and Jupyter notebooks. For part 2, front-end design and development using Flutter.

Contact: [email protected], [email protected]

A Braitenberg vehicle [1] is an agent that can autonomously move around based on its sensor inputs. It has primitive sensors that measure some stimulus at a point, and wheels (each driven by its own motor) that function as actuators. Depending on how sensors and wheels are connected, the vehicle exhibits different behaviors: depending on the sensor-motor wiring, it appears to strive toward certain situations and to avoid others, changing course when the situation changes. The simplest vehicles allow four possible connections between sensors and actuators (ipsilateral or contralateral, and excitatory or inhibitory), producing four combinations with different behaviours named fear, aggression, liking, and love. These correspond to biological positive and negative reactions present in many animal species.
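The four wirings above can be sketched in a few lines. In this simplified model (our own illustrative assumptions: sensor readings scaled to [0, 1], and the inhibitory rule taken as speed = 1 − stimulus), each wiring maps the two sensor readings to two wheel speeds:

```python
def braitenberg_step(left_sensor, right_sensor, wiring):
    """Map two sensor readings in [0, 1] to (left_wheel, right_wheel) speeds.

    fear:       ipsilateral  + excitatory -> speeds up and turns away
    aggression: contralateral + excitatory -> speeds up and turns toward
    liking:     ipsilateral  + inhibitory -> slows down near the stimulus
    love:       contralateral + inhibitory -> slows and settles facing it
    """
    if wiring == "fear":          # ipsilateral, excitatory
        return left_sensor, right_sensor
    if wiring == "aggression":    # contralateral, excitatory
        return right_sensor, left_sensor
    if wiring == "liking":        # ipsilateral, inhibitory
        return 1.0 - left_sensor, 1.0 - right_sensor
    if wiring == "love":          # contralateral, inhibitory
        return 1.0 - right_sensor, 1.0 - left_sensor
    raise ValueError(f"unknown wiring: {wiring}")

# Stimulus stronger on the left: 'fear' drives the left wheel harder,
# turning the vehicle away from the source; 'aggression' does the opposite.
print(braitenberg_step(0.8, 0.2, "fear"))        # (0.8, 0.2)
print(braitenberg_step(0.8, 0.2, "aggression"))  # (0.2, 0.8)
```

In the learning activity, each of these wirings could become a class that children instantiate and compose, mirroring the object-oriented decomposition described below.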

As part of the research efforts around our educational robot Cellulo, in CHILI we have built a tile-based tangible programming language wherein children write programs by connecting puzzle-like command tiles such as “move 1 step”, “if then”, etc. The programming goal is to make a Cellulo robot perform a task (such as navigating through a maze) that changes depending on the activity. The tiles are read by another Cellulo robot, which relays them to a central computer or tablet; the computer interprets them and sends the commands to the first Cellulo robot to perform the task.

In this project, the student will design and implement a learning activity consisting of two parts. In the first part, children will implement the basic behaviours of the Braitenberg vehicle. In the second part, children will be given a task wherein they have to put the behaviours together to make the vehicle behave in a certain manner and accomplish the task. In this manner, children will learn the power of object-oriented programming, and creating and using classes. Children will also learn computational thinking skills of breaking down large problems or putting together smaller solutions to solve big problems.


[1] D. W. Hogg, F. Martin, and M. Resnick. Braitenberg Creatures. Epistemology and Learning Group, MIT Media Laboratory, Cambridge, 1991.

Experience or interest in: Robot programming with Qt/QtQuick software, QML programming, working with git, educational activity design, user experience design.

Contact: [email protected], [email protected]

Fall 2022

There are serious concerns about the fairness of machine learning methods used in high-stakes situations. Commonly-used models have been shown to be biased or less accurate for women, minorities, and other vulnerable populations in healthcare, judicial, and educational settings. In this project we are focused on uncovering and mitigating algorithmic bias in university admissions.

University admissions is a complex area since biases can be introduced by both humans and algorithms. Because of this, mitigating bias brings together methods from both machine learning and human-computer interaction (HCI). We are exploring methods from machine learning to develop predictive models that are free from bias (e.g., where true- and false-positive rates are roughly equal across subgroups, i.e., equalized odds) and methods from HCI to develop and test interfaces and data visualizations that reduce the biases introduced by humans when choosing whom to admit.
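As a minimal sketch of the fairness criterion mentioned above, the hypothetical helper below (our own illustration, not part of the project codebase) computes the largest between-group gap in true- and false-positive rates; equalized odds asks for this gap to be close to zero:

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Largest between-group gap in true-positive and false-positive rates.

    A gap near 0 means the classifier's error behavior is similar across
    subgroups (equalized odds); a gap near 1 means it differs maximally.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs, fprs = [], []
    for g in np.unique(group):
        m = group == g
        tprs.append(y_pred[m & (y_true == 1)].mean())  # P(admit | qualified, g)
        fprs.append(y_pred[m & (y_true == 0)].mean())  # P(admit | unqualified, g)
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))

# Toy admissions example: predictions behave identically across groups,
# so the equalized-odds gap is zero.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(equalized_odds_gap(y_true, y_pred, group))  # 0.0
```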

We are seeking students to join this project with any of the following interests: machine learning, Bayesian modeling, algorithmic fairness, human-computer interaction, data visualization, full-stack web development, UX research.

Contact: [email protected]

Recent studies have shown that approaching biology content through narratives produces better results than purely informational sessions. Similarly, using social robots to guide interactive learning activities leads to better engagement among young students than tablets or traditional (human-led) methods. The goal of this project is to develop the architecture of a social robot that combines these two strategies: a robot capable of verbally communicating with young students to tell narratives about biology content (chosen by teachers to be covered with their students). The development and validation of this system will be done in partnership with the Learning Science department of ETHZ. We will also evaluate whether Natural Language Processing (NLP) algorithms can produce the narratives; if the algorithms perform with acceptable accuracy for human understanding, they will also be applied and validated.

Keywords:  Human-Robot Interaction, Reinforcement Learning, Genetic Algorithms, Natural Language Processing.

Contact: [email protected]

Probabilistic reasoning is a crucial skill for making good decisions, yet it is a skill that many people struggle with. Engaging in probabilistic modeling is a good way of improving probabilistic reasoning skills, but this practice requires a substantial background in mathematics and probability. Probabilistic programming languages such as Pyro, PyMC3, and TensorFlow Probability make it easier for people without this mathematical background to engage in probabilistic modeling. These languages provide a way to specify complex probability models by writing computer programs containing a mix of ordinary deterministic computation and randomly sampled values representing a generative process for data.
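The core idea, a program mixing deterministic computation with random draws that together define a generative process, can be sketched without committing to any particular probabilistic programming language. The toy model below (an illustration of ours, with made-up priors) generates simulated height data from latent parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

def generate_heights(n):
    """A tiny generative process in the spirit of a probabilistic program.

    Deterministic control flow and random draws interleave: first sample
    latent parameters from priors, then sample data given those latents.
    """
    mu = rng.normal(170.0, 10.0)        # latent mean height in cm (prior)
    sigma = abs(rng.normal(0.0, 5.0))   # latent spread, forced non-negative (prior)
    heights = rng.normal(mu, sigma, n)  # observed data given the latents
    return mu, sigma, heights

mu, sigma, heights = generate_heights(100)
print(heights.shape)  # (100,)
```

A full probabilistic programming language adds the crucial second half, conditioning such a program on observed data to infer the latents, which is exactly the step that demands the expertise this project aims to make unnecessary.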

Unfortunately, using these languages still requires an advanced understanding of programming. This project will build on prior work showing that general-purpose programming languages can be designed for novices (e.g., Scratch). We aim to extend this work to probabilistic programming languages, providing a way for novices in both programming and mathematics to meaningfully engage in probabilistic programming.

We are seeking students interested in programming language design, probabilistic programming, probabilistic modeling, and design for children to join this project.

Contact: [email protected]

The goal of this project is to continue the development of a web-based user interface that non-programmers can use to meaningfully navigate the latent space of a deep generative model. This work is part of the project “GANs and Beyond” and builds on previous work published in the NeurIPS workshop on Creativity [1]. A video demo of the existing interface can be seen at

This project will use React and other front-end web technologies to explore new ways of visualizing and exploring the vast space of possible designs. Ideally, we will also test this interface with stakeholders to better understand its strengths and weaknesses and to collect evidence of its effectiveness.

Students interested in HCI, full-stack web development, UX research, and human-centered artificial intelligence should contact [email protected].


Spring 2021

Dyslexia is a specific and long-lasting learning disorder characterised by reading performances well below those expected for a certain age. Given the importance of reading throughout a person’s life, it is not surprising that dyslexia has been extensively studied. Yet, there is no consensus about theories or explanations of its origin. A promising recent research trend has focused on abnormalities of the “internal clock” as the main underlying deficit. This translates into difficulties in perceiving, synchronizing with, and reproducing rhythmic sequences. The goal of this project is to create several activities around that topic.

More precisely, we will develop an application for iPad that combines several games to assess the rhythmic abilities of children. The skeleton of the application already exists, and there will be two phases to the project: 1) enhance the pre-existing rhythmic activities, 2) extend the possibilities to interact with the application: voice/sound processing and maybe eye-tracking.

When embedded in a learning activity, an Intelligent Tutoring System (ITS) must intervene based on the perceived situation to support learners and ultimately increase learning gains. The situation is often evaluated based on learners’ performance. Nevertheless, in activities that are exploratory by design, such as constructivist activities, performance could be misleading. Previous studies with JUSThink, a collaborative learning activity mediated by a robot, found that behavioral labels, obtained by a classification analysis of multi-modal behaviors, are strongly linked to learning and seem to allow for better discrimination between high and low gainers.

However, in these papers, the authors treat all multi-modal behaviors as averages and frequencies over the entire duration of the task and do not consider the temporality of the data. In this project, we investigate how these behaviors evolve throughout the activity and if differences in the groups’ behaviors exist at the temporal level.

This semester project focuses on the propagation of a virus, developing both a virtual activity with the Unity platform (i.e., on a computer or tablet) and a physical activity (i.e., with real Cellulo robots). The goal is to create an environment for teaching complex systems behavior, raising people’s awareness of how propagation (e.g., of a virus) emerges from interactions between the agents in a system. The project contributes by designing a learning activity and evaluating its effectiveness in a classroom scenario.

This project is the continuation of Tangible programming using Cellulo. It aims at providing a proof of concept and at improving the existing design, both visually and in terms of possibilities. Finally, it aims at making the interface more robust and better suited to the learning goals.

The goal of my semester project is to implement the Bluetooth connection to Cellulo robots in Unity, with a solution that works across multiple platforms. In the end, one should be able to scan for nearby robots and connect to them from their Unity project.

The main objective of the project is to explore the possibilities the Unity framework offers for developing multi-party VR software. To that end, we developed a collaborative interior design workspace in virtual reality. Interior design students benefit from this experience since it introduces elements that we cannot have in the real world, for instance, moving furniture with a controller, or painting walls instantly at no extra cost.

This project aims at making interactions with QTrobot more natural. During an activity, we want the robot to detect positive, neutral and negative emotional cues from the user and tune its behavior accordingly. Since emotions can be shown very differently from one person to another, the emotional thresholds need to be personalized for every user.

Deep generative models have recently shown a great capability to generate random examples, yet the interpretation of their design space remains unknown. We hope to build a tool for fashion designers to explore the design space and generate fashion items matching their aesthetic in a fast and automatic way.

Imagine a recruitment platform where you could simply enter your skills and it would propose all the jobs corresponding to them. For this purpose, the platform would first have to extract the skills found in each job advertisement. This semester project aims at developing, validating, and comparing methods for extracting skills from job-ad data. Related work mostly uses supervised methods with large amounts of labeled data. In our case, however, the data is unlabeled, and the time and money spent on labeling it would be too high for too little benefit. We therefore need unsupervised methods capable of extracting skills from any new dataset.

Our current analysis builds upon previous analyses conducted in the lab on the JUSThink activity, which showed that data-driven clustering approaches based on behavioral features such as interaction with the activity, speech, affective features, and gaze patterns can discriminate between gainers and non-gainers. The clustering process identified three separate clusters (Type 1 gainers, Type 2 gainers, and non-gainers). Looking for statistically significant activity patterns in the lower-level sequential log data is interesting because it can:

  • Suggest the most common sequence patterns within each type of learner group.
  • Allow us to understand how students who learn differ in their action patterns from those who do not end up learning.
  • Help us observe whether the similarities and differences found at the temporal level with action patterns are consistent with the previous findings.

Sequence mining and differential sequence mining will thus allow us to build comparative profiles over the whole interaction and over phases of the interaction.

The question motivating this project is: “Is it possible to tell the rules a robot is following by the way it moves?” If yes, then in the case where the robot is controlled by a human rather than autonomous, this would mean we are able to tell the types of rules (and thus get a glimpse of the mindset) that the person is following.

There are many concepts in the mathematics curriculum that children need to learn. One of these is slopes, which is not necessarily intuitive, and some children may not be motivated to learn new mathematical concepts. The idea is to make learning more fun through interactivity. Such work has already been done at the CHILI laboratory at EPFL, but it had the limitation of requiring play in a classroom, which is an issue in this time of pandemic.

The project aims at extending Reachy’s human-robot interaction functionalities related to audio processing, verbal interaction, and natural language understanding.

The motivation behind this project was to create an online two-player Cellulo Pac-Man game suitable for the Covid-19 situation. The goal is to provide a gamified, social, and interactive rehabilitation experience.

This project aims at developing a web app as part of a platform for teaching linear mathematical functions in online classrooms. Specifically, it offers an activity that improves students’ understanding of the steepness of linear functions. An emphasis has also been put on making the activity fully compatible with its counterpart developed for use with Cellulo robots, so that learning sessions with Cellulos can be expanded with in-class use of the web app or with remote participation of students.

This project highlights the potential of analyzing learners’ behaviors by coupling robot data with data from conventional quiz-based assessments in educational settings. It showcases the classification of learners in the behavioral feature space with respect to task performance, giving insight into the relevance of behavior patterns for high performance in a robot-mediated path-planning activity carried out in two stages with 12 teams of students (learners).

This project aims at complementing another one named “Tangible programming using Cellulo”. Tangible programming tries to bring new and fresh ideas to education. It makes the hypothesis that, for young students especially, learning through the senses (sight, touch, hearing) may be more engaging than looking at dry, conventional lines of code. The bet made by my project is the following: by working on smooth and intuitive interfaces for teachers giving tangible programming lessons, we also improve the experience of the students. At the base of this improvement is a user interface that presents itself as a dashboard.

Studying interactions between individuals and the behaviors that emerge from them is important for gaining insight into the functioning of complex systems.

Social robots are highly popular for human-robot interaction in educational settings. One main aspect of social robotics is enabling a robot to perceive verbal and non-verbal cues and adapt its behavior accordingly. Being able to process images is a key capability for perceiving such non-verbal cues.

This project develops an app allowing human users to teach micro-level behaviours to Cellulo robots with a programming-by-demonstration framework that extracts features from a trajectory, and evaluates the effectiveness of such a dataset.

Reachy is a humanoid robot equipped with two 7-degree-of-freedom arms and “Otis” grippers, so it is physically capable of holding a pen and writing. In this project, we would like to make Reachy write in real life. Moreover, since Reachy has the talent to be social and interactive, we would like it to use its writing skill to interact with people.

Older Semester Project Reports

Level up the interaction space with Cellulo robots

L. Burget 


User interaction design with interactive graph

A. Colicchio 


Object detection for flower recognition

T. Perret 


CoPainter: Interactive AI drawing experience

P. Golinski 


VR with Cellulo, bringing haptic feedback to VR.

J. Mion 


Cellulo Geometry Learning Activity

B. Beuchat; A. Scalisi 


Cellulo – Pattern recognition and Online prediction of pacman path and target

I. Leimgruber 


Emotion and Attention loss detection for Co-Writer

L. Bommottet; C. Cadoux 


Multi-Armed Bandits for Addressing the Exploration/Exploitation Trade-off in Self Improving Learning Environment

L. P. Faucon 


Exploring students learning approaches in MOOCs

L. P. Faucon 


Unsupervised extraction of students navigation patterns on an EPFL MOOC

T. L. C. Asselborn; V. Faramond; L. P. Faucon