Completed Semester Projects

Spring 2024

Various human-robot interaction systems have been designed to support children’s learning of language and literacy. They commonly foster learning by using social robots to engage the learner in designed educational activities, which rely heavily on the human-robot social relationship. In this project, we will build the QT Cowriter, an intelligent conversational handwriting companion robot powered by ChatGPT that can talk with children like a friend and hold a pen to write on tablets. Building on this, we would like to exploit the social bond between children and robots and explore the model of Relational Norm Intervention (RNI) for regulating body posture during handwriting.

To this end, in this research project, we will work with QTrobot and Wacom tablets. We will implement innovative human-robot interaction systems, develop state-of-the-art algorithms, and conduct small-scale user studies. We have weekly meetings to address questions, discuss progress, and think about future ideas.
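To give a flavour of the conversational loop, here is a minimal sketch in Python using the OpenAI client; the system prompt and the robot hook are hypothetical illustrations, and the actual project architecture integrates ROS and the QTrobot platform.

```python
# Minimal sketch of one conversational turn, assuming the `openai` Python
# package (>= 1.0). The system prompt is an invented placeholder; the real
# project would route the reply through the robot's text-to-speech via ROS.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [{"role": "system",
            "content": "You are a friendly handwriting companion for children. "
                       "Keep answers short and encouraging."}]

def companion_turn(child_utterance: str) -> str:
    history.append({"role": "user", "content": child_utterance})
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply  # would then be sent to the robot's text-to-speech
```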

We are looking for students with the following interests: HRI, ROS, and Large Language Models. Relevant IT skills include Python and ROS basics. If you are interested, do not hesitate to contact me.

Contact: [email protected]

VR workspaces are becoming increasingly prevalent, revolutionizing industries such as design, training, and collaboration. However, extended use of virtual desktops in VR can lead to discomfort and health issues due to prolonged static postures. This project seeks to address this challenge by creating an unobtrusive system that subtly guides users into healthier postures without disrupting their immersive experiences. In [1], the unobtrusive intervention is implemented by adjusting the content position at an imperceptibly low speed, but the motion strategies are pre-defined regardless of the user’s current posture. A second objective of this project is therefore to investigate a Reinforcement Learning agent that adaptively adjusts the content position and learns personalized intervention strategies, helping the user maintain a proper posture or stimulating occasional body movement to prevent long static postures.

To this end, in this project, we will work on VR/XR development with Unity and develop new RL algorithms for adaptive posture intervention. We will have weekly meetings to address questions, discuss progress, and think about future ideas. We aim to summarize the results in a scientific report.
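To make the RL framing concrete, here is a toy sketch in Python of tabular Q-learning over discretized posture states; the states, actions, and reward are illustrative assumptions, not the project’s design (which may well use continuous observations and deep RL):

```python
# Toy tabular Q-learning over discretized posture states; states, actions,
# and the reward signal are illustrative placeholders only.
import random
from collections import defaultdict

STATES = ["good", "slouch", "lean_left", "lean_right"]
ACTIONS = ["hold", "raise_content", "shift_left", "shift_right"]
Q = defaultdict(float)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def choose(state):
    if random.random() < EPS:                       # explore occasionally
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, next_state):
    reward = 1.0 if next_state == "good" else -1.0  # placeholder comfort signal
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

update("slouch", choose("slouch"), "good")  # one simulated interaction step
```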

We are looking for students with any of the following interests: VR/XR, Machine Learning, and Human-computer Interaction. Relevant IT skills include Python and knowledge of any one of the following object-oriented programming languages: C++ or C#. Experience with VR/XR development would be beneficial. If you are interested, do not hesitate to contact me.

Contact: [email protected]

Learning how to grip the pen properly is a fundamental part of handwriting training for children, which requires constant monitoring of their pen grip posture and timely intervention from teachers. Various sensing technologies have been explored to automate pen grip posture estimation, such as camera-based systems or EMG armbands. In the context of digital writing, namely writing on tablets, these solutions lack portability because of the additional sensors. In this project, we aim to tackle this challenge by exploiting the integrated sensors of touch screens and digital pens. One study showed that it is promising to reconstruct the 3D hand pose from the capacitive images provided by the touch screen. Together with the accessible pen tip location and orientation, which are strongly coupled with the hand pose, we postulate that the pen grip posture can be inferred in situ with a single commodity tablet and pen. This is a continuing project: a dataset and a first version of the pen gripping posture analysis were developed in Phases 1 and 2. In this phase, the goal is to improve the algorithm and implement the baseline condition.

To this end, in this research project, we will work on a Wacom Pen tablet and develop new deep-learning algorithms for pen grip posture estimation and analysis. We will have weekly meetings to address questions, discuss progress, and think about future ideas. We aim to summarize the results in a scientific report.
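As a starting point for discussion, one possible model shape is sketched below in PyTorch: a small CNN over the capacitive image fused with the pen pose. The capacitive grid size, the four pen features (x, y, tilt, azimuth), and the number of grip classes are all assumptions for illustration.

```python
# Sketch of a candidate model: CNN features from the tablet's capacitive
# image concatenated with the pen tip pose, predicting a grip-posture class.
# Input sizes (36x48 grid, 4 pen features, 4 classes) are assumed values.
import torch
import torch.nn as nn

class GripNet(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(
            nn.Linear(32 * 4 * 4 + 4, 64), nn.ReLU(), nn.Linear(64, n_classes),
        )

    def forward(self, cap_img: torch.Tensor, pen_pose: torch.Tensor):
        feats = self.conv(cap_img).flatten(1)   # capacitive-image features
        return self.head(torch.cat([feats, pen_pose], dim=1))

logits = GripNet()(torch.randn(8, 1, 36, 48), torch.randn(8, 4))  # smoke test
```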

We are looking for students with any of the following interests: Machine Learning, Human-computer Interaction, Computer Vision, and Mobile Computing. Relevant IT skills include Python and basic knowledge of any one of the following object-oriented programming languages: C++, Java or C#. If you are interested, do not hesitate to contact me.

Contact: [email protected]

This project will explore uses of diminished reality, a type of mixed reality where real-world objects are removed or occluded using computer vision before being passed into a user’s headset. You will use a Zed Mini camera to capture and process a video stream of the world; identify objects using YOLO (or a similar real-time object-detection algorithm); attempt to occlude, blur, or remove these objects; and then finally pass this modified video stream into the user’s headset. The end goal of this project is to develop a mixed reality environment that supports users in blocking out distractions to improve focus and concentration. More information on diminished reality can be found in this paper: https://dl.acm.org/doi/fullHtml/10.1145/3491102.3517452.
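A minimal sketch of the detect-and-blur loop is shown below, using a generic OpenCV capture as a stand-in for the ZED SDK stream; the model choice (here a pretrained YOLO from the `ultralytics` package) and the occlusion strategy are open design questions, not fixed decisions.

```python
# Sketch of a diminished-reality loop: detect objects, blur them, display.
# cv2.VideoCapture(0) stands in for the Zed Mini's SDK stream; the real
# system would also forward the modified frames to the headset.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")       # any pretrained real-time detector
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for box in model(frame)[0].boxes.xyxy:           # detected bounding boxes
        x1, y1, x2, y2 = map(int, box.tolist())
        frame[y1:y2, x1:x2] = cv2.GaussianBlur(frame[y1:y2, x1:x2], (51, 51), 0)
    cv2.imshow("diminished", frame)
    if cv2.waitKey(1) == 27:                          # Esc to quit
        break
```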

Required knowledge: Unity, Python, CV

Contact: Richard Davis at [email protected]

In this project, you will analyse a dataset of people being interviewed by a human and by a robot. The goal is to find patterns in the extracted metrics and to investigate potential correlations between the metrics and the type of interlocutor. The metrics are voice features and automatic speech recognition outputs from the system. Your work will involve: (1) video analysis to build the ground truth for the metrics, (2) implementing and fine-tuning algorithms to find patterns, and (3) implementing a machine learning algorithm on the robot that tries to identify, at run time, whether users are behaving as if they are “talking to a robot”, based on the analysed dataset.
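As a rough sketch of steps (2) and (3), the snippet below extracts simple voice features per clip and fits a classifier; the file names, labels, and feature set are placeholders, and the project’s actual metrics and models may differ.

```python
# Sketch: MFCC summary features per audio clip + a classifier that predicts
# the interlocutor type. Files and labels below are invented placeholders.
import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def clip_features(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

paths = ["interview_robot_01.wav", "interview_human_01.wav"]  # placeholders
labels = [1, 0]                    # 1 = robot interlocutor, 0 = human
X = np.stack([clip_features(p) for p in paths])
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
```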

Required knowledge: Python, Machine Learning, Basics of Statistics

Keywords: Data Analysis, Human-Robot Interaction, Conversational AI

Contact: [email protected] with tag in the subject: [Semester Project]

Dyslexia is a specific and long-lasting learning disorder characterised by reading performances well below those expected for a certain age. Eye-tracking technology has been successfully used to study the reading behaviour of children. This semester project aims to explore its application in more ecologically valid scenarios and expand its usage. We started to develop an iOS application that utilizes the eye-tracking abilities of ARKit, Apple’s augmented reality framework, to study children’s gaze behaviour while interacting with the iPad. The goals of this semester project are: 1) to improve the eye-tracking and 2) to conduct experiments comparing the results obtained from the ARKit application with eye-tracking glasses, as well as across different iPad models.
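For goal 2, a minimal analysis sketch in Python (the app itself is written in Swift) could align the two gaze streams by timestamp and quantify their disagreement; the CSV files and column names below are assumptions about a hypothetical export format.

```python
# Sketch: align ARKit gaze samples with eye-tracking-glasses samples by
# timestamp (within 20 ms) and report the median on-screen disagreement.
import numpy as np
import pandas as pd

arkit = pd.read_csv("arkit_gaze.csv")      # assumed columns: t, x, y
glasses = pd.read_csv("glasses_gaze.csv")  # assumed columns: t, x, y

merged = pd.merge_asof(arkit.sort_values("t"), glasses.sort_values("t"),
                       on="t", suffixes=("_arkit", "_glasses"),
                       tolerance=0.02, direction="nearest").dropna()

err = np.hypot(merged.x_arkit - merged.x_glasses,
               merged.y_arkit - merged.y_glasses)
print(f"median gaze error: {err.median():.1f} px over {len(merged)} samples")
```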


Since the application is on iPad, you will learn how to program in the Swift language. A background in linear algebra is useful. We seek students interested in iOS development and experiment design to join this project. This project is primarily designed as a master’s semester project but can be adapted for motivated bachelor students.

Contact: [email protected]

Dyslexia is a specific and long-lasting learning disorder characterised by reading performances well below those expected for a certain age. A recent promising research trend focused on abnormalities of the “internal clock” used to sample information as one of the main underlying deficits. We are developing several digital activities to explore that view.

In this project, you will develop a game that calls on the rhythmic abilities of the user.
Since the application is on iPad, you will learn how to program in the Swift language. We seek students interested in iOS development and game design to join this project. This project is primarily designed as a master’s semester project but can be adapted for motivated bachelor students.
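As a rough illustration of the game’s core measurement (sketched in Python here, although the app itself would be in Swift), a scoring routine might compare the user’s tap times against a target beat grid; timings and the tolerance window below are invented.

```python
# Sketch: hit rate and mean asynchrony of taps against target beats (seconds).
def rhythm_score(taps, beats, window=0.15):
    asynchronies = []
    for b in beats:
        nearest = min(taps, key=lambda t: abs(t - b), default=None)
        if nearest is not None and abs(nearest - b) <= window:
            asynchronies.append(abs(nearest - b))
    hit_rate = len(asynchronies) / len(beats)
    mean_async = sum(asynchronies) / len(asynchronies) if asynchronies else None
    return hit_rate, mean_async

print(rhythm_score([0.02, 0.51, 1.08, 1.49], [0.0, 0.5, 1.0, 1.5]))
```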


Contact: [email protected]

Jupyter notebooks have become an essential tool used in data science, scientific computing, and machine learning in both industry and academia. Cloud-based Jupyter notebooks like Google Colab, Noto, and JupyterHub bring the power of Jupyter notebooks into the cloud and make them easier to share and collaborate on. At EPFL and other universities, these cloud-based Jupyter notebooks are used as interactive textbooks, platforms for distributing and grading homework, and as simulation environments.

These notebooks produce rich logs of interaction data, but there is currently no easy way for teachers and students to view and make sense of this data. This data could provide a valuable source of feedback that both teachers and students could use to improve their teaching and learning. This way of using data is called learning analytics, and we have recently begun designing a software extension that will bring the power of learning analytics directly into cloud-based Jupyter notebooks.
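To make this concrete, here is a sketch of the kind of aggregation such an extension might perform over a hypothetical event log; the schema (one row per notebook interaction) is invented for illustration.

```python
# Sketch: per-cell execution and error counts from an assumed interaction log.
import pandas as pd

log = pd.DataFrame([  # placeholder events; real logs come from the notebook
    {"student": "s1", "cell": "c1", "event": "execute", "t": "2024-03-01 10:00"},
    {"student": "s1", "cell": "c1", "event": "error",   "t": "2024-03-01 10:01"},
    {"student": "s2", "cell": "c2", "event": "execute", "t": "2024-03-01 10:02"},
])

per_cell = log.groupby(["cell", "event"]).size().unstack(fill_value=0)
per_cell["error_rate"] = per_cell.get("error", 0) / per_cell.get("execute", 1)
print(per_cell)  # e.g., which cells cause the most errors across the class
```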

We are looking for students with any of the following interests to join the development of this learning analytics tool: data visualization, full-stack web development, UX research, learning analytics, and education.

Contact: [email protected] 

Fall 2023

Dyslexia is a specific and long-lasting learning disorder characterised by reading performances well below those expected for a certain age. Given the importance of reading throughout a person’s life, it is not surprising that dyslexia has been extensively studied. Yet, there is no consensus about theories or explanations which may explain its origin. A recent promising research trend focused on abnormalities of the “internal clock” used to sample information as one of the main underlying deficits. We are developing several digital activities to explore that view.

In order to test that hypothesis, we need to measure children’s performances in those activities against some baseline cognitive skills. Thus, this project aims to implement several standard cognitive performance tests in a playful and engaging way. Since the application is on iPad, you will learn how to program in the Swift language. We seek students interested in iOS development and game design to join this project.

Contact: [email protected]

Dyslexia is a specific and long-lasting learning disorder characterised by reading performances well below those expected for a certain age. Eye-tracking technology has been successfully used to study the reading behaviour of children. This semester project aims to explore its application in more ecologically valid scenarios and expand its usage. We started to develop an iOS application that utilizes the eye-tracking abilities of ARKit, Apple’s augmented reality framework, to study children’s gaze behaviour while interacting with the iPad. The goals of this semester project are: 1) to improve the eye-tracking, 2) to conduct experiments comparing the results obtained from the ARKit application with eye-tracking glasses, as well as across different iPad models, and 3) to develop educational games that leverage eye-tracking technology for an engaging learning experience.

Since the application is on iPad, you will learn how to program in the Swift language. We seek students interested in iOS development, experiment design, and game design to join this project.

Contact: [email protected]

Educational activities in Human-Robot Interaction (HRI) commonly do not take teachers into consideration in their design and evaluation. This is an important missing piece, since teachers are the ones who know their students best and who most need to know how their students are performing in such activities. This gap persists mainly because the tools for programming such robots are opaque to non-expert programmers. This project aims to develop features for Graphical User Interfaces that help teachers design and evaluate such activities in an intuitive way. The interface will connect to already existing robot-control algorithms that automate the execution of these interactions. After the activities have been executed, the interface will present the collected data to teachers in easily readable visualizations. The system should also be able to autonomously generate reports of the interactions. Validation of the GUI is expected to be performed with teachers through hands-on use and interviews.
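As a small illustration of the report-generation feature, here is a hypothetical sketch in Python; the log schema and fields are invented for illustration and do not reflect the existing system.

```python
# Sketch: turn logged interaction data into a short textual summary that a
# teacher could read. The columns "student" and "outcome" are assumptions.
import pandas as pd

def activity_report(log: pd.DataFrame) -> str:
    lines = [f"Activity report ({log.student.nunique()} students)"]
    for student, g in log.groupby("student"):
        correct = (g.outcome == "correct").mean()
        lines.append(f"- {student}: {len(g)} attempts, {correct:.0%} correct")
    return "\n".join(lines)

log = pd.DataFrame({"student": ["ana", "ana", "ben"],
                    "outcome": ["correct", "wrong", "correct"]})
print(activity_report(log))
```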

Keywords: Python, Interface Design, React, UX.

Contact: [email protected]

This project aims to create a chat-based agent for analyzing and interpreting data. Initial prototypes will create ReAct agents to answer questions about data (using LangChain and the OpenAI API). For example, rather than writing the code to run a regression in R or Python, a user could simply ask “What is the relationship between variable y and X?” The agent will be responsible for deciding which type of analysis to run, running it, and reporting the results back to the user.

Part of this project will involve creating a set of “tools” that the ReAct agent can use to perform data analysis. This is similar to creating a well-defined API for different operations that the ReAct agent can access. More information about the creation of custom tools can be found at https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html.
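A minimal sketch of such a custom tool is shown below, using the legacy LangChain API referenced in the linked docs; the dataset, the tool body, and the query are illustrative placeholders.

```python
# Sketch: a regression "tool" exposed to a ZERO_SHOT_REACT agent. The CSV
# file is a placeholder; the tool expects an R-style formula like "y ~ x1".
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("data.csv")  # placeholder dataset

def run_regression(query: str) -> str:
    target, _, predictors = query.partition("~")
    X = sm.add_constant(df[[c.strip() for c in predictors.split("+")]])
    return str(sm.OLS(df[target.strip()], X).fit().summary())

tools = [Tool(name="regression",
              func=run_regression,
              description="Linear regression. Input format: 'y ~ x1 + x2'.")]

agent = initialize_agent(tools, OpenAI(temperature=0),
                         agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
agent.run("What is the relationship between y and x1?")
```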

As the project matures, there will be a need to begin working with a local large language model such as vicuna-13b for data privacy reasons. Part of the project may involve implementing and potentially fine-tuning such a model, and then comparing its performance to OpenAI models such as gpt-3.5-turbo.

Students interested in applications of large language models, data science, and human-centered artificial intelligence should contact [email protected].

The COVID-19 pandemic has highlighted the limitations of video conferencing tools like Zoom when it comes to facilitating effective online learning and teaching, especially for hands-on, project-based approaches. This project aims to address these limitations by exploring the potential of new devices, such as augmented and virtual reality (AR/VR) headsets, to promote more immersive and interactive exchanges in online spaces. By combining physical and virtual representations, we envision creating learning environments that bridge the gap between face-to-face and remote education. The provided proof of concept video (https://youtu.be/nfCsT74ixdE) showcases the preliminary work done in this area and demonstrates the potential for leveraging AR/VR technology in enhancing online learning experiences. The project will utilize Unity and C# to create mixed reality experiences. This project has the potential to improve online education by offering new opportunities for hands-on learning, regardless of students’ location.

Contact: Bertrand Schneider

Makerspaces are collaborative workspaces that provide students with hands-on learning opportunities to explore various concepts through prototyping and making. By utilizing 3D pose data, collected through sensors and cameras 24/7 during a semester-long course, this project seeks to develop insights into how students interact with the makerspace environment, their tools, and projects. The objective is to identify meaningful patterns, correlations, and insights regarding student behavior, engagement levels, and learning progress within the makerspace. The project will also involve visualizing the analyzed data in an informative and user-friendly manner for further analysis and interpretation. This project is an opportunity to contribute to the emerging field of multimodal learning analytics and innovative ways to enhance student learning in makerspaces. Ultimately, the outcomes of this project will produce recommendations for educational institutions and makerspaces seeking to optimize learning and engagement among students.
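As one example of the kind of metric such an analysis might start from, the sketch below computes per-frame movement energy from 3D pose keypoints; the array layout (frames x joints x xyz) is an assumed placeholder for the actual sensor format.

```python
# Sketch: mean joint speed per frame as a crude activity/engagement signal.
import numpy as np

def movement_energy(poses: np.ndarray, fps: float = 30.0) -> np.ndarray:
    """poses: (T, J, 3) in metres. Returns mean joint speed (m/s) per frame."""
    speeds = np.linalg.norm(np.diff(poses, axis=0), axis=2) * fps  # (T-1, J)
    return speeds.mean(axis=1)

poses = np.cumsum(np.random.randn(300, 17, 3) * 0.001, axis=0)  # fake track
energy = movement_energy(poses)
print(f"mean activity: {energy.mean():.3f} m/s, peak: {energy.max():.3f} m/s")
```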

Contact: Bertrand Schneider

Various human-robot interaction systems have been designed to support children’s learning of language and literacy. They commonly foster learning by using social robots to engage the learner in designed educational activities, which rely heavily on the human-robot social relationship. Recent rapid progress in Large Language Models (LLMs) shows their potential to empower social robots with language understanding and generation, so as to elicit and maintain a social and emotional bond between humans and robots. In this project, we will build the QT Cowriter, an intelligent conversational handwriting companion robot powered by ChatGPT that can talk with children like a friend and hold a pen to write on tablets.

To this end, in this research project, we will work with QTrobot and Wacom tablets. We will explore different LLMs and OpenAI APIs. We will develop state-of-the-art algorithms and implement innovative human-robot interaction systems. We have weekly meetings to address questions, discuss progress, and think about future ideas. 

We are looking for students with any of the following interests: HRI/HCI, Machine Learning, and Large Language Models. Relevant IT skills include Python and ROS basics. If you are interested, do not hesitate to contact me.

Contact: [email protected]

Learning how to grip the pen properly is a fundamental part of handwriting training for children, which requires constant monitoring of their pen grip posture and timely intervention from teachers. Various sensing technologies have been explored to automate pen grip posture estimation, such as camera-based systems or EMG armbands. In the context of digital writing, namely writing on tablets, these solutions lack portability because of the additional sensors. In this project, we aim to tackle this challenge by combining a reflector with the frontal camera of a tablet for on-device pen gripping posture prediction. Together with the accessible pen tip location and orientation, which are strongly coupled with the hand pose, we postulate that the performance of pen gripping posture prediction can be further improved.

To this end, in this research project, we will work on an iPad and a customized reflector, build an iOS application, and develop new ML algorithms for gripping posture prediction. We will have weekly meetings to address questions, discuss progress, and think about future ideas. 

We are looking for students with any of the following interests: Machine Learning, Mobile Computing, and Human-Computer Interaction. Relevant IT skills include Python and the basics of Swift. Experience with iOS development can be beneficial. If you are interested, do not hesitate to contact me.

Contact: [email protected]

This project will expand the expressiveness and power of our tool [1, 2] by incorporating more powerful models. Additionally, it will involve iterating on the user interface and exploring new ways of visualizing and navigating the vast space of possible designs. Ideally, we will also test the interface with stakeholders to better understand its strengths and weaknesses and to collect evidence of its effectiveness.

Students interested in deep generative models for image synthesis, GANs, diffusion models, HCI, full-stack web development, UX research, and human-centered artificial intelligence should contact [email protected].

[1] Jiang, W., Davis, R. L., Kim, K. G., & Dillenbourg, P. (2022). GANs for All: Supporting Fun and Intuitive Exploration of GAN Latent Spaces. NeurIPS 2021 Competitions and Demonstrations Track, 292–296.
[2] Davis, R. L., Wambsganss, T., Jiang, W., Kim, K. G., Käser, T., & Dillenbourg, P. (2023). Fashioning the Future: Unlocking the Creative Potential of Deep Generative Models for Design Space Exploration. Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, 1–9.

Jupyter notebooks have become an essential tool used in data science, scientific computing, and machine learning in both industry and academia. Cloud-based Jupyter notebooks like Google Colab, Noto, and JupyterHub bring the power of Jupyter notebooks into the cloud and make them easier to share and collaborate on. At EPFL and other universities, these cloud-based Jupyter notebooks are used as interactive textbooks, platforms for distributing and grading homework, and as simulation environments.

These notebooks produce rich logs of interaction data, but there is currently no easy way for teachers and students to view and make sense of this data. This data could provide a valuable source of feedback that both teachers and students could use to improve their teaching and learning. This way of using data is called learning analytics, and we have recently begun designing a software extension that will bring the power of learning analytics directly into cloud-based Jupyter notebooks.

We are looking for students with any of the following interests to join the development of this learning analytics tool: data visualization, full-stack web development, UX research, learning analytics, and education.

Contact: [email protected] or [email protected]

Speech recognition algorithms have achieved ever higher accuracy over the last decades. However, since their performance depends mostly on the sound, it drops remarkably when only low-quality audio is available. This prevents most activities that depend on verbal communication from being autonomous. Therefore, in environments where the noise level is high, such as buildings close to main avenues or classrooms during school breaks, autonomous verbal interaction with robots is drastically affected. Recent studies have shown that adding visual analysis of the sentences through lip reading is a promising way to overcome this issue. In this project, Audio-Visual Speech Recognition (AVSR) algorithms will be improved and validated by users interacting verbally with social robots in noisy environments. The algorithm uses Convolutional Neural Networks (CNNs) to visually recognize visemes and combines them with the phonemes recognized from audio. ROS nodes are used to distribute the processing and perform the high-cost computations on a powerful server, making the system feasible in real time (for the user). In the experiments of this project, the robot will interact with users in noisy environments to test the accuracy of the algorithm and the user’s acceptance of the system’s performance.
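To illustrate the audio-visual combination step, here is a toy late-fusion sketch; the shared vocabulary, the weights, and the SNR heuristic are illustrative assumptions, not the project’s algorithm.

```python
# Sketch: fuse per-class log-probabilities from the audio (phoneme) and
# visual (viseme) models, trusting audio less as the noise increases.
import numpy as np

def fuse(audio_logp: np.ndarray, visual_logp: np.ndarray, snr_db: float) -> int:
    w_audio = np.clip(snr_db / 20.0, 0.1, 0.9)  # trust audio when it is clean
    combined = w_audio * audio_logp + (1 - w_audio) * visual_logp
    return int(np.argmax(combined))

audio_logp = np.log([0.1, 0.7, 0.2])   # audio model favours token 1
visual_logp = np.log([0.6, 0.2, 0.2])  # lip reading favours token 0
print(fuse(audio_logp, visual_logp, snr_db=3.0))  # noisy: leans on vision -> 0
```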

Keywords:  ROS, Machine Learning, Deep Learning, OpenCV, Distributed Systems.

Contact: [email protected]

This project aims to leverage LLMs to improve feedback generation on student work in the fablab. Feedback is crucial for effective learning outcomes, but it can be challenging to provide insightful and timely feedback. By leveraging the langchain framework, we plan to develop sophisticated prompting systems to iteratively assess and enhance feedback based on learning sciences principles. This process will also explore recent developments in LLMs, including generating intermediate reasoning steps using “chain-of-thought” approaches to enrich feedback generation. We will also explore how to “keep the human in the loop” so that teacher input is leveraged to maintain the authenticity and reliability of the feedback. The ultimate goal is to create a versatile platform for feedback generation across various academic domains.
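As a rough sketch of the iterative assess-and-refine idea (shown here with direct OpenAI calls, whereas the project itself plans to build on langchain; all prompts and the rubric are illustrative):

```python
# Sketch: draft feedback, critique it against a rubric, then refine it.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    return client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

def generate_feedback(student_work: str) -> str:
    draft = ask("Give concise, constructive feedback on this fablab project "
                f"description:\n{student_work}")
    critique = ask("Assess this feedback against learning-sciences principles "
                   f"(specific, actionable, goal-referenced):\n{draft}")
    return ask("Rewrite the feedback to address the critique.\n"
               f"Feedback:\n{draft}\nCritique:\n{critique}")
```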

Contact: Bertrand Schneider

Spring 2023

Deep generative models such as GANs and VAEs have shown a remarkable ability to learn complex data distributions and produce highly realistic samples. When trained on data from creative domains (e.g., clothing design), these models could provide professionals with an invaluable tool to support their creative practice. For example, a deep generative model could be used to generate variations of an original design, to find radically different designs that might never have been considered, or to blend multiple designs. However, harnessing this power means solving non-trivial problems such as meaningfully identifying specific points in the latent space (inversion) and navigating the latent spaces learned by these models (disentanglement).

We developed a deep generative model for clothing designers with working solutions to these problems which was featured as a demo in NeurIPS 2021 [1]. However, there is still much room for improvement. This project will involve identifying and implementing state-of-the-art methods related to GAN inversion and latent-space disentanglement from recent publications in NeurIPS, ICLR, etc. Depending on time, interest, and motivation, there is also the possibility of implementing new types of generative models such as diffusion models, or implementing new functionalities such as clothing try-on [2].
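As an illustration of the inversion problem, here is a minimal optimization-based sketch in PyTorch; `generator` is a placeholder for any pretrained decoder mapping a latent vector to an image, and the hyperparameters are illustrative.

```python
# Sketch of optimization-based GAN inversion: find the latent z whose
# generated image best reconstructs a given target image.
import torch

def invert(generator, target: torch.Tensor, latent_dim=512, steps=500):
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=0.05)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(generator(z), target)
        loss.backward()
        opt.step()
    return z.detach()

# usage sketch: z = invert(G, image); variations = G(z + 0.3 * direction)
```

In practice, published inversion methods add perceptual losses and trained encoders on top of this basic loop, which is one direction the state-of-the-art survey in this project could take.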

We are seeking students interested in deep learning, deep generative modeling, and GAN inversion and latent-space disentanglement to join this project.

Contact: [email protected]

[1] https://neuripscreativityworkshop.github.io/2021/#/
[2] https://paperswithcode.com/task/virtual-try-on/latest

Jupyter notebooks have become an essential tool used in data science, scientific computing, and machine learning in both industry and academia. Cloud-based Jupyter notebooks like Google Colab, Noto, and JupyterHub bring the power of Jupyter notebooks into the cloud and make them easier to share and collaborate on. At EPFL and other universities, these cloud-based Jupyter notebooks are used as interactive textbooks, platforms for distributing and grading homework, and as simulation environments.

These notebooks produce rich logs of interaction data, but there is currently no easy way for teachers and students to view and make sense of this data. This data could provide a valuable source of feedback that both teachers and students could use to improve their teaching and learning. This way of using data is called learning analytics, and we have recently begun designing a software extension that will bring the power of learning analytics directly into cloud-based Jupyter notebooks.

We are looking for students with any of the following interests to join the development of this learning analytics tool: data visualization, full-stack web development, UX research, learning analytics, and education.

Contact: [email protected] or [email protected]

Learning how to grip the pen properly is a fundamental part of handwriting training for children, which requires constant monitoring of their pen grip posture and timely intervention from teachers. Various sensing technologies have been explored to automate pen grip posture estimation, such as camera-based systems or EMG armbands. In the context of digital writing, namely writing on tablets, these solutions lack portability because of the additional sensors. In this project, we aim to tackle this challenge by exploiting the integrated sensors of touch screens and digital pens. A previous study showed that it is promising to reconstruct the 3D hand pose from the capacitive images provided by the touch screen. Together with the accessible pen tip location and orientation, which are strongly coupled with the hand pose, we postulate that the pen grip posture can be inferred in situ with a single commodity tablet and pen. Furthermore, building upon this, a new method for evaluating pen grip posture quality can be investigated.

To this end, in this research project, we will work on an Android tablet or Wacom Pen tablet and develop new algorithms for pen grip posture estimation and analysis. We will have weekly meetings to address questions, discuss progress, and think about future ideas. 

We are looking for students with any of the following interests: Machine Learning, Human-computer Interaction, Computer Vision, and Mobile Computing. Relevant IT skills include Python and knowledge of any one of the following object-oriented programming languages: C++, Java or C#. If you are interested, do not hesitate to contact me.

Contact: [email protected]

It can be difficult to envisage the potential downstream ethical and environmental impacts of engineering design decisions, particularly for students trained on school projects with limited exposure to real-world constraints and complexity. This project involves continuing the development of an ethics game (involving drone design) to support students’ reflection on the broader impacts of their design decisions.

Concretely, the student selected for this project will integrate Cellulo robot(s) with the game in order to enable collaborative, multi-person interactions and the collection of haptic feedback.

Skills developed in this project: Robot programming with Qt/QtQuick software, Basic C#, QML programming, working with git, educational activity design, user experience design.

Proposed work packages

  1. Study phase: familiarization with the “game” scenario and design, and existing Cellulo applications.
  2. Design and implement an approach to integrate the Cellulo robots with the game, including collecting haptic feedback.
  3. Field testing “game” with groups of students, including evaluation of the system and educational impact.
  4. Report writing.

Contact: [email protected]

Nowadays people commonly take notes on digital tablets by writing, usually with a dedicated digital pen (stylus). The technology behind this mainly relies on sensing the pen tip’s contact location on the touch screen, which is rendered as part of the handwriting trajectory and forms the handwriting product. However, one important aspect of the handwriting process is often ignored and underexplored by researchers: the hand imprint when the hand rests naturally on the screen. Yet the full palm imprint can be detected with the mutual-capacitance sensing technology used in most modern digital tablets. In this project, we postulate that the static and dynamic features of the hand imprint during the writing process convey rich information for handwriting recognition, which promises to let the user write on tablets with a conventional pen.

To this end, in this research project, we will 1) create an Android tablet app and 2) develop new deep learning algorithms for handwriting recognition with standalone capacitive images of the touch screen. We will have weekly meetings to address questions, discuss progress, and think about future ideas.

We are looking for students with any of the following interests: Human-computer Interaction, Computer Vision, Deep Learning, and Mobile Computing. Relevant IT skills include Python and knowledge of Android development. If you are interested, do not hesitate to contact me. This project is mainly targeted at a thesis but can be adapted for semester projects.

Contact: [email protected]

Dyslexia is a specific and long-lasting learning disorder characterised by reading performances well below those expected for a certain age. Given the importance of reading throughout a person’s life, it is not surprising that dyslexia has been extensively studied. Yet, there is no consensus about theories or explanations which may explain its origin. A recent promising research trend focused on abnormalities of the “internal clock” used to sample information as one of the main underlying deficits. We are developing several digital activities to explore that view.

In order to test that hypothesis, we need to measure children’s performances in those activities against some baseline cognitive skills. Thus, the goal of this project is to develop an automatic reading assessment activity. Since the application is on iPad, you will learn how to program in the Swift language. We are seeking students interested in iOS development, signal processing, and machine learning to join this project.

Contact: [email protected]

Tangible user interfaces (TUIs) are a technology that makes it possible for people to interface with the digital world by manipulating physical objects. Typically, physical objects are tagged with fiducial markers similar to QR codes that make it easier to automatically identify their position and orientation. This information is then processed and used to project a visualization directly onto the objects. For example, this is the approach used by the popular reacTIVision software.
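For context, below is a minimal sketch of the marker-based tracking that this project aims to replace, using OpenCV’s ArUco module (assuming OpenCV >= 4.7) as a stand-in for reacTIVision’s fiducials; the synthesized frame is a placeholder for a real camera image.

```python
# Sketch: detect a fiducial marker and report its id and pixel position.
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

# synthesize a tagged "object": marker id 7 on a white background
marker = cv2.aruco.generateImageMarker(dictionary, 7, 200)
frame = np.pad(marker, 40, constant_values=255)

corners, ids, _ = detector.detectMarkers(frame)
if ids is not None:
    for marker_id, quad in zip(ids.flatten(), corners):
        cx, cy = quad[0].mean(axis=0)          # marker centre in pixels
        print(f"object {marker_id} at ({cx:.0f}, {cy:.0f})")
```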

The goal of this project is to develop a TUI toolkit that does not rely on the use of fiducial markers, but instead uses augmented reality technologies to identify objects directly. This finished toolkit will provide users with an easy-to-use system for training the models to identify arbitrary objects, and will operate under a variety of lighting conditions and camera angles. This toolkit will be designed to be a drop-in replacement for existing toolkits such as reacTIVision.

This project will be developed using Swift and ARKit for iOS. We are seeking students interested in augmented reality and iOS development to join the project.

Contact: [email protected]

Virtual reality (VR) has the potential to radically disrupt education. However, we know little about how to design effective learning experiences in VR. To address this, we are currently developing a variety of VR learning experiences that will be tested with students and teachers.

We are seeking students interested in joining in on the development of educational VR applications for the Oculus Quest 2 using the Unity3D platform. Relevant technical experience includes knowledge of C#, Unity XR Interaction Framework, and Unity3D.

Contact: [email protected]

Spring 2022

Teachers need detailed and actionable feedback on their performance in order to improve their teaching. Useful feedback includes, for example, how much time they spent lecturing versus how much time students spent on in-class activities, and at which moments the teacher was stressed during lectures. Classroom conversations are a rich source for analyzing teacher-student interactions during class time.

The goal of this project is to provide teachers with automated feedback based on an analysis of classroom conversations. The project has two parts, for two students: 1) one student will work on a dataset of recorded conversations of teachers managing mathematical robotic classrooms and will use novel machine learning algorithms to identify teachers’ orchestration patterns; 2) the other student will design the dashboard that visualizes the key moments of the class and elements of the conversation analysis, and provides actionable feedback to teachers.
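As a sketch of an early step for part 1, the snippet below computes teacher versus student talk time from diarized speech segments; the segment format is an assumed placeholder for the output of a diarization/ASR pipeline.

```python
# Sketch: aggregate talk time per speaker role from (speaker, start, end)
# segments; the segments below are invented placeholders.
segments = [
    ("teacher", 0.0, 120.0), ("student", 120.0, 150.0), ("teacher", 150.0, 200.0),
]

talk = {}
for speaker, start, end in segments:
    talk[speaker] = talk.get(speaker, 0.0) + (end - start)

total = sum(talk.values())
for speaker, seconds in talk.items():
    print(f"{speaker}: {seconds:.0f}s ({seconds / total:.0%} of class talk)")
```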

Prerequisites: For part 1, experience or interest in speech recognition, Python, machine learning, and Jupyter notebooks. For part 2, front-end design and development using Flutter.


Contact: [email protected], [email protected]

A Braitenberg vehicle [1] is an agent that can autonomously move around based on its sensor inputs. It has primitive sensors that measure some stimulus at a point, and wheels (each driven by its own motor) that function as actuators. Depending on how sensors and wheels are connected, the vehicle exhibits different behaviors. This means that, depending on the sensor-motor wiring, it appears to strive to achieve certain situations and to avoid others, changing course when the situation changes. The simplest vehicles display four possible connections between sensors and actuators (ipsilateral or contralateral, and excitatory or inhibitory), producing four combinations with different behaviours named fear, aggression, liking, and love. These correspond to biological positive and negative reactions present in many animal species.
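A minimal sketch of the four wirings is shown below (Python, with sensor readings normalized to [0, 1]); the mapping of the four names to wirings follows the common Braitenberg convention, and the example values are illustrative.

```python
# Sketch: each wheel is driven by one sensor, either on the same side
# (ipsilateral) or the opposite side (contralateral), with an excitatory (+)
# or inhibitory (-) connection. Returns (left_wheel, right_wheel) speeds.
def wheel_speeds(left_sensor, right_sensor, wiring):
    if wiring == "fear":         # ipsilateral, excitatory: turns away
        return left_sensor, right_sensor
    if wiring == "aggression":   # contralateral, excitatory: turns toward
        return right_sensor, left_sensor
    if wiring == "love":         # ipsilateral, inhibitory: halts facing source
        return 1 - left_sensor, 1 - right_sensor
    if wiring == "liking":       # contralateral, inhibitory: slows, veers off
        return 1 - right_sensor, 1 - left_sensor
    raise ValueError(wiring)

print(wheel_speeds(0.8, 0.2, "aggression"))  # stimulus on the left -> turn left
```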

As part of the research efforts around our educational robot Cellulo, in CHILI we have built a tile-based tangible programming language in which children write programs by connecting puzzle-like command tiles such as “move 1 step”, “if-then”, etc. The programming goal is to make a Cellulo robot perform a task (such as navigating a maze) that changes depending on the activity. The tiles are read by another Cellulo robot and sent to a central computer or tablet, which interprets them and sends commands to the first Cellulo robot to perform the task.

In this project, the student will design and implement a learning activity consisting of two parts. In the first part, children will implement the basic behaviours of the Braitenberg vehicle. In the second part, children will be given a task in which they have to put the behaviours together to make the vehicle behave in a certain manner and accomplish the task. In this manner, children will learn the power of object-oriented programming and of creating and using classes. Children will also learn the computational thinking skills of breaking large problems down and putting smaller solutions together to solve big problems.

References:

[1] HOGG, David Wardell; MARTIN, Fred; RESNICK, Mitchel. Braitenberg creatures. Cambridge: Epistemology and Learning Group, MIT Media Laboratory, 1991.

Experience or interest in: Robot programming with Qt/QtQuick software, QML programming, working with git, educational activity design, user experience design.

Contact: [email protected], [email protected]

Fall 2022

There are serious concerns about the fairness of machine learning methods used in high-stakes situations. Commonly-used models have been shown to be biased or less accurate for women, minorities, and other vulnerable populations in healthcare, judicial, and educational settings. In this project we are focused on uncovering and mitigating algorithmic bias in university admissions.

University admissions is a complex area since biases can be introduced by both humans and algorithms. Because of this, mitigating bias requires bringing together methods from both machine learning and human-computer interaction (HCI). We are exploring machine learning methods to develop predictive models that are free from bias (e.g., where the equalized odds ratio for each subgroup is roughly the same) and HCI methods to develop and test interfaces and data visualizations that reduce the biases introduced by humans when choosing whom to admit.
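As an illustration of the fairness criterion mentioned above, here is a small sketch that measures per-group gaps in true- and false-positive rates; equalized odds holds when both gaps are near zero. The data is synthetic.

```python
# Sketch: equalized-odds check across subgroups.
import numpy as np

def rates(y_true, y_pred):
    tpr = y_pred[y_true == 1].mean()   # true-positive rate
    fpr = y_pred[y_true == 0].mean()   # false-positive rate
    return tpr, fpr

def equalized_odds_gap(y_true, y_pred, group):
    per_group = [rates(y_true[group == g], y_pred[group == g])
                 for g in np.unique(group)]
    tprs, fprs = zip(*per_group)
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(equalized_odds_gap(y_true, y_pred, group))  # (TPR gap, FPR gap)
```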

We are seeking students to join this project with the following interests: machine learning, bayesian modeling, algorithmic fairness, human-computer interaction, data visualization, full-stack web development, UX research.

Contact: [email protected]

Recent studies have shown that using narratives to approach biology content produces better results than purely informative sessions. Similarly, using social robots to guide interactive learning activities yields better engagement from young students than tablets or traditional (human-led) methods. The goal of this project is to develop the architecture of a social robot that combines these two strategies: a robot capable of verbally communicating with young students to tell narratives about biology content (chosen by teachers to approach with their students). The development and validation of this system will be done in partnership with the Learning Science department of ETHZ. An evaluation of using Natural Language Processing (NLP) algorithms to produce the narratives will be performed and, if the algorithms perform with acceptable accuracy for human understanding, they will also be applied and validated.

Keywords:  Human-Robot Interaction, Reinforcement Learning, Genetic Algorithms, Natural Language Processing.

Contact: [email protected]

Probabilistic reasoning is a crucial skill for making good decisions; however, it is a skill that many people struggle with. Engaging in probabilistic modeling is a good way of improving probabilistic reasoning skills, but this practice requires a substantial background in mathematics and probability. Probabilistic programming languages such as Pyro, PyMC3, and TensorFlow Probability make it easier for people without this mathematical background to engage in probabilistic modeling. These languages provide a way to specify complex probability models by writing computer programs that mix ordinary deterministic computation with randomly sampled values representing a generative process for data.
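For a small taste of what such a program looks like, here is a sketch in PyMC3; the data and priors are illustrative placeholders.

```python
# Sketch: a PyMC3 model mixing deterministic code with random draws that
# describe how the observed data could have arisen.
import numpy as np
import pymc3 as pm

heights = np.random.normal(170, 8, size=50)  # pretend observations (cm)

with pm.Model():
    mu = pm.Normal("mu", mu=160, sigma=20)     # prior belief about the mean
    sigma = pm.HalfNormal("sigma", sigma=10)   # prior over the spread
    pm.Normal("obs", mu=mu, sigma=sigma, observed=heights)
    trace = pm.sample(1000, tune=1000, return_inferencedata=False,
                      progressbar=False)

print(trace["mu"].mean())  # posterior mean after seeing the data
```

A novice-friendly language would aim to express exactly this kind of generative story without requiring the user to write such code directly.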

Unfortunately, using these languages still requires an advanced understanding of programming. This project will build on prior work showing that general-purpose programming languages can be designed for novices (e.g., Scratch). We aim to extend this work to probabilistic programming languages, providing a way for novices in both programming and mathematics to meaningfully engage in probabilistic programming.

We are seeking students interested in programming language design, probabilistic programming, probabilistic modeling, and design for children to join this project.

Contact: [email protected]

The goal of this project is to continue the development of a web-based user interface that non-programmers can use to meaningfully navigate the latent space of a deep generative model. This work is part of the project “GANs and Beyond” and builds on previous work published in the NeurIPS workshop on Creativity [1]. A video demo of the existing interface can be seen at https://youtu.be/dcC7G2zBuL8.

This project will use React and other front-end web technologies to explore new ways of visualizing and exploring the vast space of possible designs. Ideally, we will also test this interface with stakeholders to better understand its strengths and weaknesses and to collect evidence of its effectiveness.

Students interested in HCI, full-stack web development, UX research, and human-centered artificial intelligence should contact [email protected].

[1] https://neuripscreativityworkshop.github.io/2021/#/

Spring 2021

Dyslexia is a specific and long-lasting learning disorder characterised by reading performances well below those expected for a certain age. Given the importance of reading throughout a person’s life, it is not surprising that dyslexia has been extensively studied. Yet, there is no consensus about theories or explanations which may explain its origin. A recent promising research trend focuses on abnormalities of the “internal clock” as the main underlying deficit. This translates into difficulties in perceiving, synchronizing with, and reproducing rhythmic sequences. The goal of this project is to create several activities around that topic.

More precisely, we will develop an application for iPad that combines several games to assess the rhythmic abilities of children. The skeleton of the application already exists, and there will be two phases to the project: 1) enhance the pre-existing rhythmic activities, 2) extend the possibilities to interact with the application: voice/sound processing and maybe eye-tracking.

When embedded in a learning activity, an Intelligent Tutoring System (ITS) must intervene based on the perceived situation to support learners and ultimately increase learning gains. The situation is often evaluated based on learners’ performance. Nevertheless, in activities that are exploratory by design, such as constructivist activities, performance can be misleading. Previous studies with a robot-mediated collaborative learning activity called JUSThink found that behavioral labels, obtained by a classification analysis of multi-modal behaviors, are strongly linked to learning and seem to allow for better discrimination between high and low gainers.

However, in these papers, the authors treat all multi-modal behaviors as averages and frequencies over the entire duration of the task and do not consider the temporality of the data. In this project, we investigate how these behaviors evolve throughout the activity and if differences in the groups’ behaviors exist at the temporal level.

This semester project focuses on the propagation of a virus, developing a virtual activity with the Unity platform (i.e., on a computer or tablet) and a physical activity (i.e., with real Cellulo robots). The goal is to create an environment for teaching the topic of complex behavior, raising people’s awareness of propagation in terms of interactions between the agents in a system (e.g., of a virus). The project contributes by designing a learning activity and evaluating its effectiveness in a classroom scenario.

This project is the continuation of Tangible programming using Cellulo. It aims to provide a proof of concept and to improve the existing design, both visually and in terms of possibilities. Finally, it aims to make the interface more robust and better suited to the learning goals.

The goal of my semester project is to implement the Bluetooth connection to Cellulo robots in Unity, with a solution that works across multiple platforms. In the end, one should be able to scan for nearby robots and connect to them from one’s Unity project.

The main objective of the project is to explore the possibilities the Unity framework offers for developing multi-party VR software. To that end, we developed a collaborative interior-design workspace in virtual reality. Interior design students benefit from this experience since it introduces elements that we cannot have in the real world, for instance moving furniture with a controller, or painting walls instantly at no extra cost.

This project aims at making interactions with QTrobot more natural. During an activity, we want the robot to detect positive, neutral and negative emotional cues from the user and tune its behavior accordingly. Since emotions can be shown very differently from one person to another, the emotional thresholds need to be personalized for every user.

Deep generative models have recently shown a great capability to generate random examples, while the interpretation of the design space remains unknown. We hope to build a tool for fashion designers to explore the design space and generate fashion items according to their aesthetic in a fast and automatic way.

Imagine a recruitment platform where you could simply enter your skills and it would propose all the jobs corresponding to those skills. For this purpose, the platform would first have to extract the skills mentioned in each job advertisement. This semester project aims at developing, validating, and comparing methods for extracting skills from job-ad data. Related work mostly uses supervised methods with large amounts of labeled data. In our case, however, we have only unlabeled data, and labeling it would cost too much time and money for too little benefit. We need unsupervised methods capable of extracting skills from any new dataset.

Our current analysis builds upon previous analyses conducted in the lab on the JUSThink activity, which showed that data-driven clustering approaches based on behavioral features such as interaction with the activity, speech, affective features, and gaze patterns can discriminate between gainers and non-gainers. The clustering process identified three separate clusters (Type 1 gainers, Type 2 gainers, and non-gainers). Looking for statistically significant activity patterns in the lower-level sequential log data is interesting because it can:

  • Suggest the most common sequence patterns within each type of learner group.
  • Allow us to understand how the action patterns of students who learn differ from those of students who do not end up learning.
  • Help us observe whether the similarities and differences found at the temporal level with action patterns are consistent with the previous findings.

Sequence mining and differential sequence mining will thus allow us to build comparative profiles over the whole interaction and over phases of the interaction.

The question motivating this project is: “Is it possible to tell the rules a robot is following by the way it moves?”. If yes, then in the case where the robot is controlled by a human rather than autonomous, this would mean we are able to tell the types of rules (and thus glimpse the type of mindset) that the person is following.

There are many concepts in the mathematics curriculum that children need to learn. One of these is slopes, which are not necessarily intuitive. Furthermore, some children may not be motivated to learn new mathematical concepts. The idea is to make learning more fun through interactivity. Such work has already been done at the CHILI laboratory at EPFL, but it was limited by the requirement to be played in a classroom, which is an issue in this time of pandemic.

The project aims at extending Reachy’s human-robot interaction functionalities related to audio processing, verbal interaction, and natural language understanding.

The motivation behind this project was to create an online two-player Cellulo Pac-Man game suitable for the Covid-19 situation. The goal is to provide a gamified, social, and interactive rehabilitation experience.

This project aims at developing a web app as part of a platform for teaching linear mathematical functions in online classrooms. Specifically, it offers an activity that improves students’ understanding of the steepness of linear functions. Emphasis has also been put on making the activity fully compatible with its counterpart developed for Cellulo robots, so that learning sessions with Cellulos can be expanded with in-class use of the web app or with remote participation of students.

This project highlights the potential of analyzing learners’ behaviors by coupling robot data with data from conventional assessment methods (quizzes) in educational settings. It showcases the classification of learners in the behavioral feature space with respect to task performance, giving insight into the relevance of behavior patterns for high performance in a robot-mediated path-planning activity carried out with 12 teams of students (learners) and divided into two stages.

This project aims at completing another one named “Tangible programming using Cellulo”. Tangible programming tries to come up with new and fresh ideas for education. It makes the hypothesis that, especially for young students, learning through the senses (sight, touch, hearing) may be more engaging than looking at conventional lines of code. The bet made by my project is the following: by working on smooth and intuitive interfaces for teachers giving tangible programming lessons, we also improve the experience of the students. At the base of this improvement is a user interface that presents itself as a dashboard.

Studying interactions between individuals and emerging behaviors is important for learning about and gaining insight into the functioning of complex systems.

Social robots are highly popular in human-robot interaction for educational purposes. One main aspect of social robotics is enabling a robot to perceive verbal and non-verbal cues and adapt its behavior accordingly. Being able to process images is a key feature for perceiving such non-verbal cues.

Develop an app allowing human users to teach micro-level behaviours to Cellulo robots with a programming-by-demonstration framework that extracts features from a trajectory, and evaluate the effectiveness of such a dataset.

Reachy is a humanoid robot equipped with two 7-degree-of-freedom arms and “Otis” grippers, so it is physically capable of holding a pen and writing. In this project, we would like to make Reachy write in real life. Moreover, since Reachy has the talent to be social and interactive, we would like it to use its writing skill to interact with people.

Older Semester Project Reports

Level up the interaction space with Cellulo robots

L. Burget 

2019-06-07.

User interaction design with interactive graph

A. Colicchio 

2019-01-17.

Object detection for flower recognition

T. Perret 

2019-01-17.

Cellulo – Pattern recognition and Online prediction of pacman path and target

I. Leimgruber 

2019.

CoPainter: Interactive AI drawing experience

P. Golinski 

2019-01-17.

VR with Cellulo, bringing haptic feedback to VR.

J. Mion 

2019-01-17.

Cellulo Geometry Learning Activity

B. Beuchat; A. Scalisi 

2019-01-17.

Emotion and Attention loss detection for Co-Writer

L. Bommottet; C. Cadoux 

2017-06-21.

Multi-Armed Bandits for Addressing the Exploration/Exploitation Trade-off in Self Improving Learning Environment

L. P. Faucon 

2017.

Exploring students learning approaches in MOOCs

L. P. Faucon 

2017.

Unsupervised extraction of students navigation patterns on an EPFL MOOC

T. L. C. Asselborn; V. Faramond; L. P. Faucon 

2017.