Semester Projects and Master Theses

Check this page regularly; we add new projects all the time.

Semester Projects (Master/Bachelor)

Cellulo has brought the potential of haptic and user interaction to two dimensions. Using its dotted-pattern localization, we have developed a multitude of games and activities for educational and rehabilitation purposes.

However, it is now time for Cellulo to reach the skies and bring haptic and user interaction with it!

In this project, we aim to reuse the electronics already integrated into the new version of Cellulo to develop a brand new drone. Although it will lose the absolute localization based on the dotted pattern, the drone will be able to measure the position and movements of the user while providing force feedback (haptics), enriching the gamification experience in rehabilitation or learning activities.

Overall, you will: i) Learn to mathematically analyze the physical requirements of the drone for the use case. ii) Design and analyze the structure of a graspable drone. iii) Develop the core electronics to adapt Cellulo to the air. iv) (Optional) Integrate ROS2 into the flying Cellulo.
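As a flavour of the kind of analysis in step i), here is a minimal back-of-the-envelope sketch estimating hover thrust and per-motor thrust for a small graspable drone; the mass, motor count and thrust margin are illustrative assumptions, not Cellulo specifications.

    # Back-of-the-envelope sizing for a small graspable drone (illustrative values only).
    G = 9.81  # gravitational acceleration [m/s^2]

    mass_kg = 0.35          # assumed total mass: frame + Cellulo electronics + battery
    n_motors = 4            # assumed quadrotor layout
    thrust_margin = 2.0     # rule of thumb: max thrust ~2x hover thrust for control authority

    hover_thrust_n = mass_kg * G                   # total thrust needed to hover [N]
    per_motor_hover_n = hover_thrust_n / n_motors  # hover thrust per motor [N]
    per_motor_max_n = per_motor_hover_n * thrust_margin

    print(f"Total hover thrust: {hover_thrust_n:.2f} N")
    print(f"Per-motor hover thrust: {per_motor_hover_n:.2f} N (~{per_motor_hover_n / G * 1000:.0f} gf)")
    print(f"Suggested per-motor max thrust: {per_motor_max_n:.2f} N")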

Who you are: a Master's student in robotics, control and mechatronics, aero-/hydrodynamics or a similar field (Bachelor students in their last year can be considered). You have passion and curiosity for mechatronic systems, understand the core principles of fluid mechanics, and are ready to level up the game and help design a new drone concept. A high level of physics/mathematics is expected, as well as a core understanding of mechatronics (mechanics, electronics, software).

Prerequisites: Familiarity with kinematics and dynamics, dimensional analysis, fluid mechanics, electronics (embedded systems), and aerial robots.

Note: This is a starting project, so there are many opportunities to experiment, bring your creativity, and learn all the processes of a full engineering project.

Bibliography: https://dl.acm.org/doi/10.1145/3242587.3242658

Contact: [email protected]

Teachers need detailed and actionable feedback on their performance in order to improve their teaching. Useful feedback includes how much time they spent lecturing versus the time students spent doing activities in class, and the moments when the teacher was stressed during their lectures. Classroom conversations are a great source for analyzing the interactions between teacher and students during class time.

The goal of this project is to provide teachers with automated feedback based on an analysis of classroom conversations. The project has two parts for two students: 1) One student will work on a dataset of recorded conversations of teachers managing mathematical robotic classrooms and will use novel machine learning algorithms to identify teachers' orchestration patterns. 2) The other student will design the dashboard that visualizes the key moments of the class and elements of the conversation analysis, and provides actionable feedback to teachers.
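To give an idea of the kind of indicator part 1 could extract, here is a minimal sketch that computes the share of class time spent in teacher talk from diarized speech segments; the CSV file name and column names are assumptions for illustration only.

    # Minimal sketch: teacher talk time vs. total class time from diarized segments.
    # Assumes a hypothetical CSV with columns: start_s, end_s, speaker ("teacher"/"student").
    import pandas as pd

    segments = pd.read_csv("classroom_segments.csv")  # hypothetical file
    segments["duration_s"] = segments["end_s"] - segments["start_s"]

    class_duration_s = segments["end_s"].max() - segments["start_s"].min()
    teacher_talk_s = segments.loc[segments["speaker"] == "teacher", "duration_s"].sum()

    print(f"Teacher talk: {teacher_talk_s / 60:.1f} min ({teacher_talk_s / class_duration_s:.0%} of class time)")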

Prerequisites: For part 1, experience or interest in learning speech recognition, Python, machine learning, and Jupyter notebooks. For part 2, front-end design and development using Flutter.


Contact: [email protected], [email protected]

A swarm is the coherent behavior that emerges from simple interaction rules between self-organized agents. Many examples can be found in biological systems, from the microscopic (atoms and cells) to larger-scale aggregates of ants, bees, fish and birds. An interesting new research line explored in CHILI is the use of robot swarms in education as a powerful way to learn about complex systems.

Related to this research line, one project is to examine the behavioral dynamics of an online game in which a group of players needs to cooperate to push boxes on a map. An analysis of the interactions between individuals and the resulting emergent behaviors would help us understand how people effectively coordinate their behavior with that of others, and help us distinguish between the approaches people choose (for example, centralized vs. decentralized control, or creating non-verbal communication channels).

In this semester project, the student would:

  • Start from the work done in a previous semester project (https://www.epfl.ch/labs/chili/wp-content/uploads/2021/07/NicolasWagner_Summary.pdf).
  • From the data collected, use machine learning algorithms to investigate the relation between the collected trajectories and the interactions (see the sketch after this list).
  • Enhance and adapt the game according to the findings of the analysis.
  • Run an experiment with a larger number of people.
  • Apply and verify the validity of the proposed analysis.
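As a starting point for the machine-learning step above, the sketch below computes simple per-player trajectory features and clusters them with scikit-learn; the data shapes and feature choices are assumptions for illustration, not the actual pipeline.

    # Sketch: cluster players by simple trajectory features (path length, mean speed, spread).
    # Assumes each trajectory is an (N, 2) array of x/y positions sampled at a fixed rate.
    import numpy as np
    from sklearn.cluster import KMeans

    def trajectory_features(traj, dt=0.1):
        steps = np.diff(traj, axis=0)
        step_len = np.linalg.norm(steps, axis=1)
        return [
            step_len.sum(),                    # total path length
            step_len.mean() / dt,              # mean speed
            np.linalg.norm(traj.std(axis=0)),  # spatial spread around the mean position
        ]

    # Toy data; replace with trajectories logged by the game.
    rng = np.random.default_rng(0)
    trajectories = [np.cumsum(rng.normal(scale=s, size=(200, 2)), axis=0)
                    for s in (0.5, 0.5, 2.0, 2.0)]

    X = np.array([trajectory_features(t) for t in trajectories])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(labels)  # players grouped by how they moved, e.g. [0 0 1 1]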

Prerequisites: experience or interest in learning pattern recognition, signal processing, Python, and Unity game development.

Contact: [email protected] and [email protected]

As part of a collaborative project, we are investigating the advantages of combining the benefits of self-reconfigurable modular robots with those of educational swarm robots, ultimately aiming at (1) identifying novel, effective ways to teach humans about emergent behaviors and complex systems, and (2) exploring the mutual benefits emerging from the combination (which concurrently enhances the Human-Robot interaction capabilities of modular robots and the Robot-Robot interaction capabilities of educational robots).

The objective of this project would be to design an activity involving this new breed of robots. In this semester project, the student would: 

  • Brainstorm for possible activities and applications. 
  • Learn to use the implemented robot simulator developed in Unity and enhance it if needed.
  • Implement the activity in simulation.
  • Depending on the progress, do a pilot experiment and test with real robots. 

Prerequisite: experience (or interest in learning) game design and Unity (a game development platform).

Contact: [email protected] and [email protected]

As part of a collaborative project, we are investigating the advantages of combining the benefits of self-reconfigurable modular robots with those of educational swarm robots, ultimately aiming at (1) identifying novel, effective ways to teach humans about emergent behaviors and complex systems, and (2) exploring the mutual benefits emerging from the combination (which concurrently enhances the Human-Robot interaction capabilities of modular robots and the Robot-Robot interaction capabilities of educational robots).

The educational robot is the CELLULO robot, an omnidirectional mobile robot. The reconfigurable robot is the MORI, a modular origami robot.
The current onboard controller responsible for the movement of the Cellulo robot is a PID controller. Attaching one or more MORIs on top of Cellulo creates a varying mass that degrades the performance of the current controller. The objective of this semester project is to implement a more robust, adaptive controller.
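To illustrate why a fixed-gain PID struggles here, the toy simulation below controls the position of a 1-D mass that jumps when a MORI is attached, and rescales the gains using an assumed mass estimate (a crude form of gain scheduling). It is not the Cellulo firmware, and all values are assumptions; the project itself could go further, e.g. towards model-reference adaptive control.

    # Toy 1-D simulation: PID position control of a mass that changes when a MORI is attached.
    dt, T = 0.01, 8.0
    kp, kd, ki = 20.0, 6.0, 2.0   # gains tuned for the nominal mass
    m_nominal = 0.3               # assumed Cellulo mass [kg]

    x, v, integral = 0.0, 0.0, 0.0
    x_ref = 0.1                   # step reference [m]

    for k in range(int(T / dt)):
        t = k * dt
        m_true = m_nominal if t < 4.0 else m_nominal + 0.15  # MORI attached at t = 4 s
        m_est = m_true                                       # assume a mass estimator exists

        err = x_ref - x
        integral += err * dt
        scale = m_est / m_nominal                            # rescale gains with estimated mass
        force = scale * (kp * err + kd * (-v) + ki * integral)

        a = force / m_true                                   # Newton's second law
        v += a * dt
        x += v * dt

    print(f"final position error: {abs(x_ref - x) * 1000:.2f} mm")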

Prerequisites: experience (or interest in learning) robust and adaptive control theory 

Contact: [email protected] and [email protected]

Dialogue plays a very important role in collaborative learning, as it is primarily through dialogue that students build a joint understanding of the shared problem space and engage in knowledge construction. In line with the literature, in a study with 32 teams we found that the amount of verbal interaction is a significant indicator of which teams end up learning (they show high verbal interaction) in an open-ended collaborative activity called JUSThink. In this activity, mediated by a robot, the children solve together an algorithmic reasoning problem by building railway tracks to connect gold mines on a Swiss map while spending as little money as possible.

However, what exactly do these results mean for designing more effective robot interventions? Do teams speak WHILE performing actions, i.e., building on each other's ideas 'on the go' to find novel solutions, do they 'stop and pause' their actions to discuss their conceptual ideas, or both? Secondly, is such behavior similar in teams that end up learning versus those that don't? In this project, the student will try to answer these questions through both a quantitative and a qualitative analysis of temporal data. The quantitative analysis will employ machine learning techniques such as sequence mining on our quantitative actions-and-speech dataset, while the qualitative analysis will be made on a descriptive actions-and-speech corpus from the same activity. The qualitative analysis will focus on inspecting what the team members say when performing actions versus when not performing an action. Such insights will assist in designing better interventions for the robot to ultimately help with the learning goal.
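As a concrete example of the quantitative side, the sketch below measures how much of a team's speaking time overlaps with their actions, which speaks directly to the 'talk while acting' vs. 'stop and discuss' question; the interval representation is an assumption about how the dataset could be encoded.

    # Sketch: fraction of speaking time that co-occurs with actions.
    # Assumes speech and actions are lists of (start_s, end_s) intervals,
    # and that action intervals do not overlap each other.

    def overlap(a, b):
        """Length of the overlap between two (start, end) intervals, in seconds."""
        return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

    speech = [(0.0, 4.0), (10.0, 15.0), (20.0, 22.0)]   # toy data
    actions = [(2.0, 6.0), (12.0, 13.0)]

    speech_total = sum(end - start for start, end in speech)
    speech_during_action = sum(overlap(sp, ac) for sp in speech for ac in actions)

    print(f"{speech_during_action / speech_total:.0%} of speech co-occurs with actions")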

Requirements: experience or interest in learning Python, machine learning, and Jupyter notebooks.

For more details on the JUSThink activity, you can check out the paper: https://infoscience.epfl.ch/record/280176?ln=en

Level: Masters 

Contact: [email protected]

Supervisors: Jauwairia, Aditi

Our personality, at some level, has the power to influence many areas in our life including how people perceive us. This perception can then directly change people’s level of attention, engagement and trust in what we have to say. This becomes especially critical in positions of responsibility, such as a human/robot teacher/tutor, where their personality may translate into their pedagogical strategy and hence, influence the learning process of a child. 

In the context of Human-Robot Interaction, some examples of a distinct robot personality, inspired by psychology and learning theories, include an adversarial robot that induces conflict among the team members as a way to raise the cognitive load of the students or a Socratic robot that asks questions for the same purpose of increasing the cognitive load of the students or even a supportive robot with excessive positive reinforcements to motivate the students towards the learning process. 

Briefly, in this project, 1) the student will design one such robot with a distinct personality, echoed in its pedagogical strategy and expressed through its behaviors, where a behavior is defined as a verbal utterance accompanied by a gesture and an emotion. This robot will be deployed in an open-ended collaborative activity called JUSThink. In this activity, the children solve together an algorithmic reasoning problem by building railway tracks to connect gold mines on a Swiss map while spending as little money as possible; 2) following this implementation, the idea is to conduct a small study with the designed robot in the JUSThink context and evaluate its effect on the learning gain of the children as well as how it is perceived by them; and 3) lastly, if possible, we will compare the effect of our designed robot to that of other robots with different personalities in the same context. Results from this project will give insights for designing more effective 'robots with personalities' in educational HRI settings.
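One way to make this behavior definition concrete is a small data structure pairing an utterance with a gesture and an emotion, with a personality being just a collection of such behaviors; the field names and example behaviors below are illustrative, not an existing API.

    # Sketch: a behavior as "utterance + gesture + emotion", grouped into a personality.
    from dataclasses import dataclass

    @dataclass
    class Behavior:
        utterance: str   # what the robot says
        gesture: str     # e.g. a named animation to play
        emotion: str     # e.g. a facial expression to display

    socratic = [
        Behavior("Why do you think this track is the cheapest option?", "point_at_screen", "curious"),
        Behavior("What would happen if we removed that connection?", "tilt_head", "neutral"),
    ]

    supportive = [
        Behavior("Great idea, that really lowered the cost!", "thumbs_up", "happy"),
    ]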

Requirements: experience or interest in human-robot interaction, ROS, python

For more details on the JUSThink activity, you can check out the paper: https://infoscience.epfl.ch/record/280176?ln=en

Level: Bachelors, Masters 

Contact: [email protected]

Supervisors: Jauwairia, Barbara

A very popular method for running experiments in Human-Robot Interaction is the "Wizard of Oz" approach, in which participants interact with a robot that, while seemingly autonomous, is actually being controlled "behind the scenes" by a human experimenter [1]. This methodology has a number of advantages, including letting researchers refine the robot's behaviour before developing it, or even test humans' reactions to robot behaviours and functionalities that we don't yet know how to develop. But… what if the robot revealed the presence of a person controlling it? Or lied about it? Would this have a positive or negative impact on the participant's perception of the robot?

In this project you will find the answer to this question (well… a preliminary, incomplete answer, that is. There is nothing ever completely sure about humans!). Concretely, you will start from a Human-Robot Interaction framework developed during a previous semester project, in which the participant and a humanoid robot chat a bit and then play a memory game. In that framework, the robot is fully autonomous. You will create variants of the framework (either using QTrobot – https://robots.ros.org/qtrobot/ or NAO – https://www.softbankrobotics.com/emea/en/nao), playing with the extent to which the robot is autonomous, whether or not it reveals the presence of a human remotely controlling it, and whether or not it lies. Then, you'll design and conduct an experiment to assess the impact of these variants on people's perception of the robot and the interaction with it.

You will program in Python, get familiar with ROS (the de-facto standard middleware in Robotics – https://www.ros.org/) and/or Choregraphe (the IDE developed by SoftBank Robotics to program its social robots – https://developer.softbankrobotics.com/pepper-naoqi-25/naoqi-developer-guide/other-tutorials/choregraphe-tutorials), and acquire hands-on experience in designing experiments with humans.

References:

[1] RIEK, Laurel D. Wizard of oz studies in hri: a systematic review and new reporting guidelines. Journal of Human-Robot Interaction, 2012, 1.1: 119-136.

Prerequisites: experience or interest in learning Python, ROS, robot programming, experiment design.

Contact: [email protected] and [email protected]

“Ears cannot speak, lips cannot hear, but eyes can both signal and perceive. For human beings, this dual function makes the eyes a remarkable tool for social interaction” [1]. A person approaching us catches our attention (i.e., we turn to look at her), and we signal our openness to interaction by fixating our gaze on her. If she starts speaking to us, we maintain our gaze focused on her, if she doesn’t, we let our eyes roam around in search of other interesting things to look at. In short, our attention system is remarkably sophisticated and plays a crucial role in our social interactions.

In this project, you will design, develop and test an attention system for the social robot Reachy (https://www.pollen-robotics.com/reachy/), specifically taking into account the additional requirements posed by educational contexts (e.g., younger people are likely to be the students and thus more important… but to look at a student busy solving an exercise might unsettle and disturb him). Concretely, you will (1) learn how to use OpenCV (https://opencv.org/) functions to detect objects, people, faces and even smiles from the camera stream, (2) build on literature and your creativity to invent the rules that make a believable attention system for social robots, (3) develop it as a ROS module (ROS is the de-facto standard middleware in Robotics – https://www.ros.org/) and test it in experiments with human participants.
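To give a feel for step (1), here is a minimal OpenCV sketch that detects faces in a webcam stream with a bundled Haar cascade; wrapping it in a ROS node and adding the actual attention rules is the project work itself, and the camera index is an assumption.

    # Minimal face detection on a webcam stream with OpenCV's Haar cascade.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)  # assumed camera index

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("attention", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()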

You will program in Python, get familiar with ROS, OpenCV and other widely used Python libraries for signal processing, and acquire hands-on experience in endowing a robot with a key capability required to interact with humans.

References:

[1] GOBEL, Matthias S.; KIM, Heejung S.; RICHARDSON, Daniel C. The dual function of social gaze. Cognition, 2015, 136: 359-364.

Prerequisites: experience or interest in learning Python, ROS, Python libraries for video processing, robot programming.

Contact: [email protected]

In a research line within the JUSThink project, we develop mutual understanding skills for a humanoid robot in the context of a collaborative problem-solving activity that aims to improve the computational thinking skills of the human, by applying abstract and algorithmic reasoning to solve an unfamiliar problem on networks. In this activity, the robot and the human construct a solution together: the human acts via a touch screen, and the robot acts via direct commands to the activity as well as verbalising its intentions and actions. Although the human can understand the utterances of the robot, our robot currently relies on the human's use of the touch screen, and cannot comprehend what is said if the human speaks: you are here to change that!

In this project, you will endow a humanoid robot (Reachy or QTrobot) with the ability to understand the verbalised intentions of the human, in order to enhance the interaction. Thus, you will improve the verbal skills of the robot, so that it can: i) recognise the verbal content of speech reliably in real time via a speech-to-text tool such as Google's (i.e. automatic speech recognition), and ii) detect the intention of the human from the transcribed text within the context of the activity and its state (i.e. natural language understanding).

Overall, you will: i) develop these skills as a modular package in Python and Robot Operating System (ROS), ii) validate these skills via metrics that you will design to see how well they work, and iii) (optionally) evaluate how they affect the human-robot interaction in a small user study.
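As a rough starting point for skill i) and a toy stand-in for skill ii), the sketch below transcribes one utterance with the SpeechRecognition package (using Google's free web API) and maps the transcript to a keyword-based intent; the intent names and keywords are illustrative, and a real system would use proper NLU and run as a ROS node.

    # Sketch: transcribe one utterance and map it to a toy, keyword-based intent.
    import speech_recognition as sr

    INTENTS = {  # illustrative intents for the JUSThink-style activity
        "add_track": ["connect", "add", "build"],
        "remove_track": ["remove", "delete", "erase"],
        "agree": ["yes", "okay", "agree"],
    }

    def detect_intent(text):
        words = text.lower().split()
        for intent, keywords in INTENTS.items():
            if any(k in words for k in keywords):
                return intent
        return "unknown"

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)

    try:
        text = recognizer.recognize_google(audio)
        print(text, "->", detect_intent(text))
    except sr.UnknownValueError:
        print("could not understand the audio")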

Prerequisites: experience or interest in learning Python and ROS.

Contact: [email protected] and [email protected]

In a research line within the JUSThink project, we develop mutual understanding skills for a humanoid robot in the context of a collaborative problem-solving activity that aims to improve the computational thinking skills of the human, by applying abstract and algorithmic reasoning to solve an unfamiliar problem on networks. In this activity, the robot and the human construct a solution together: the human acts via a touch screen, and the robot acts via direct commands to the activity as well as verbalising its intentions and actions. Thus, only what the robot says and the consequences of the robot’s actions are observable by the human.  

The previous studies were performed with the humanoid robot QTrobot, which had limited motor capabilities. We now have a newcomer, Reachy, that wants to participate in the activity as well, but needs your help in improving its skills to better interact with the human learners. Reachy allows precise control of its arms, which can be used to act on the same screen as it works with a human, in the same way the human does: by actually touching/tapping the touch screen. Thus, in this project, you will improve the motor skills of Reachy, so that it can use the touch screen as it takes part in the activity and interacts with the human.

Overall you will: i) develop the skills as a modular package in Python and Robot Operating System (ROS), ii) validate these skills via metrics that you will design to see how well they work, and iii) (optionally) evaluate how they affect the human-robot interaction in a small user study.

Prerequisites: experience or interest in learning Python and ROS.

Contact: [email protected] and [email protected]

In a research line within the JUSThink project, we develop mutual understanding skills for a humanoid robot in the context of an activity, where the robot interacts with a human learner to solve a problem together by taking joint actions on a touch screen, as well as verbalising its intentions and actions. The activity aims to improve the computational thinking skills of the human, by applying abstract and algorithmic reasoning to solve an unfamiliar problem on networks. The previous studies were performed with the humanoid robot QTrobot. We now have a newcomer, Reachy, that wants to participate in the activity as well, but needs your help in improving its skills to better interact with the human learners.

This project involves the development of two dedicated sets of skills for Reachy, namely (Part 1) motor skills and (Part 2) vision skills, which will be integrated by the end of the project into a complete human-robot interaction scenario. It is intended for two bachelor students or one advanced student; in the case of two students, they will need to work together towards the end to integrate their solutions.

Regarding Part 1, the motor skills part of the project: Reachy allows precise control of its arms, which can be used to exhibit deictic gestures such as pointing to the human when it is his/her turn, as well as pointing to the regions of the activity it refers to. These gestures could complement the robot's verbalised intentions, as they serve as explicit/overt signals of referential communication, which could enhance the collaboration. Yet the robot's motor skills are currently only available as low-level motor control (e.g. move this motor to that angle). Thus, you will develop high-level motor interaction skills for Reachy, so that it can:

  1. point to an object in the environment, e.g. the human that it detects (see Part 2) or assumes to be sitting, the screen, the chair, or itself (where positions can be assumed to be known beforehand),
  2. point to a region of an activity on the touch screen.

The robot will need to maintain and process the relative positions of the objects in terms of, e.g., coordinate frames (relative to its body frame), and exhibit a pointing gesture, via a dedicated package in Robot Operating System (ROS), directed towards the object or region of interest.
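As an illustration of the frame bookkeeping involved, the sketch below turns a target position expressed in the robot's body frame into pan and tilt angles for a pointing direction; the positions are made-up numbers, and in the real system they would come from the ROS transform tree and the vision package of Part 2.

    # Sketch: pan/tilt angles for pointing at a target given in the robot's body frame.
    import numpy as np

    shoulder = np.array([0.0, -0.15, 0.30])   # assumed shoulder position in the body frame [m]
    target = np.array([0.80, -0.40, 0.45])    # assumed target (e.g. a screen region) [m]

    d = target - shoulder                          # pointing direction
    pan = np.arctan2(d[1], d[0])                   # rotation around the vertical axis
    tilt = np.arctan2(d[2], np.hypot(d[0], d[1]))  # elevation above the horizontal plane

    print(f"pan: {np.degrees(pan):.1f} deg, tilt: {np.degrees(tilt):.1f} deg")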

Regarding Part 2, the vision skills part of the project, Reachy will detect and recognise a human face by using its camera feeds, so that it can face the human all the time, greet and point to the human, invite him/her to the activity (by e.g. pointing to the chair), and bid farewell afterwards (by e.g. waving along the direction of the human). The robot will infer the position of the human via another dedicated package in ROS, and make this information available for other packages, such as the package in Part 1. Overall, in this project, you will:

  1. develop these skills separately as modular, dedicated packages in ROS
  2. validate these skills via metrics that you will design to see how well they work, and 
  3. integrate the skills so that the robot recognises (Part 2) and points to (Part 1) a human within its field of view, even when the human moves
  4. (optionally) evaluate how they affect the human-robot interaction in a small user study

Prerequisites: experience or interest in learning Python and ROS.

Contact: [email protected], [email protected] and [email protected]

Master Theses

We have funding for supporting master theses in the field of learning technologies. In 2017, EPFL launched the Swiss EdTech Collider, which now gathers 77 start-ups in this field. Some of them will be interested in hosting master theses. You will be supervised by Prof. Dillenbourg or his team members, but you will be located in a start-up (in various cities across Switzerland). If you are interested, contact: [email protected] or [email protected].