Master/Semester projects

Overview of available semester/master projects

Project areas: data science, hardware, UX, development, back-end software development / software architectures, database management, application development, system/low-level programming.

  • Collaborative working space in virtual reality
  • Help the Reachy robot become social!
  • Make the Reachy robot able to write
  • Tangible card-based programming interface for Cellulo Robot
  • Master’s project with EdTech startup company (Classtime Inc.)
  • Master’s project with EdTech startup company (Magma Learning)
  • Mapping and Exploring Visual Design Spaces with Deep Generative Models
  • Data Visualization for Sporting Events with Startup Company
  • Collaborative Learning Software

The CHILI lab is inventing learning technologies that exploit recent advances in human-computer interaction (e.g. eye tracking, augmented reality, …) and in human-robot interaction. We are working on several educational platforms described below. Each platform offers possibilities for semester and master projects. While semester projects are often limited to development, master projects usually include an empirical study with learners, supervised by our team. The platforms are:

  1. We have funding to support master theses in the field of learning technologies in Fall 2019, Spring 2020, Fall 2020, Spring 2021 and Fall 2021. In 2017, EPFL launched the Swiss EdTech Collider, which now gathers 77 start-ups in this field. Some of them are interested in hosting master theses. You will be supervised by Prof. Dillenbourg or his team members, but you will be located in a start-up (in different cities in CH). Projects are added on a regular basis; some examples are below. If you are interested, contact: pierre.dillenbourg (at) epfl.ch or aditi.kothiyal (at) epfl.ch.
  2. A variety of projects in LEARNING ANALYTICS, i.e. data science applied to education, are offered by our lab (contact: jennifer.olsen (at) epfl.ch) as well as by the Center for Digital Education. See their project list here.
  3. Dual-T is a research project on vocational education and training (VET). Currently it has two parts: (i) development of novel technologies to support VET learners and (ii) training needs analysis. For developing novel technologies, we are focusing on virtual reality in order to expand the learners’ experience. Training needs analysis concerns itself with finding methods for identifying the newest skills needed for people in a profession, and involves the use of data science and applied machine learning. Current projects concern the augmented/virtual reality tools, as well as training needs analysis for software developers. Contact: kevin.kim (at) epfl.ch, ramtin.yazdanian (at) epfl.ch
  4. CELLULO is a small robot for education and rehabilitation. It moves by itself and can be moved by pupils. The hardware is ready and projects concern the software environments as well as designing and experimenting with new learning activities and rehabilitation games. Contact: hala.khodr (at) epfl.ch, arzu.guneysu (at) epfl.ch
  5. JUSTHINK project aims to improve the computational thinking skills of children by exercising algorithmic reasoning with and through graphs, where graphs are posed as a way to represent, reason with and solve a problem. Contact: utku.norman (at) epfl.ch, jauwairia.nasir (at) epfl.ch
  6. CO-WRITER is a project in which children who face writing difficulties are invited to teach Nao how to write. Nao is a small humanoid robot available on the market. The project concerns smoothing the interaction between the robot and young children. Contact: thibault.asselborn (at) epfl.ch, barbara.bruno (at) epfl.ch, negin.safaei (at) epfl.ch
  7. CLASSROOM ORCHESTRATION is a project to support teachers for managing learning activities with technologies, specifically educational robots. The project concerns designing awareness tools and intervention strategies about what students are doing with robots in the classroom. Contact: [email protected]

Some of these projects are described below, but since research is constantly moving forward, we always have new opportunities. You can always contact the people above or pierre.dillenbourg (at) epfl.ch if you are interested in advancing digital education.

Swiss EdTech Collider

Research question

Does machine-learning-supported assessment question creation, based on contextual suggestions and a library of metacognitive questions, increase the quality of questions and the speed of question creation?

Goals

In this study, we would like to engage with a Master Student to achieve the following: 

  • Programming/product extension: Machine-learning based extension of our Classtime question creation editor to support question creation by
    • Providing contextual suggestions (e.g., when starting to type “London” as a first multiple-choice option, the editor would auto-suggest options like “New York”, “Madrid”, etc.; a minimal sketch of this idea follows this list)
    • Including good pedagogical practices (e.g., hints like “incorrect options should not be longer than / look different from correct options”)
  • Programming/product extension: Machine-learning based extension to suggest context-based reflection questions, suggesting additional questions for pedagogical reasons such as:
    • Deeper understanding: “What is this problem about?” or “What is sought, what should the answer look like?”
    • Connection questions: “In which aspects is the present problem similar to others that I have already solved?”
    • Strategic questions: “Which solution strategy is the right one for the problem at hand – and for what reason?”
    • Reflection questions: “Is the result meaningful?” or “Can I solve the problem in another way?” or “Have I considered all important information?”
  • Data analytics: Reviewing our existing data set, based on sessions with ~5 million students, to see whether there are specific characteristics of questions that drive engagement and deeper thinking, and using these in the automatic question generation
  • Data analytics and research: Devising a study set-up to assess the effectiveness of (1) ML-supported question creation (contextual suggestions of multiple-choice options + reflection questions) vs. (2) non-ML-supported question creation; conducting the study and analysing the results
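As a rough illustration of the contextual-suggestion idea (not Classtime's actual implementation), the sketch below ranks candidate distractors by cosine similarity to the typed answer in a word-embedding space. The tiny hand-written embeddings and the suggest_distractors helper are hypothetical placeholders; a real system would use pretrained embeddings and data from the Classtime editor.

    import numpy as np

    # Hypothetical toy embeddings; a real system would load pretrained word vectors.
    EMBEDDINGS = {
        "london":   np.array([0.9, 0.1, 0.3]),
        "madrid":   np.array([0.8, 0.2, 0.35]),
        "new york": np.array([0.85, 0.15, 0.4]),
        "banana":   np.array([0.1, 0.9, 0.7]),
        "tuesday":  np.array([0.2, 0.8, 0.1]),
    }

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def suggest_distractors(correct_answer, k=2):
        """Return the k candidates most similar to the correct answer."""
        target = EMBEDDINGS[correct_answer.lower()]
        scored = [(cosine(target, vec), word)
                  for word, vec in EMBEDDINGS.items()
                  if word != correct_answer.lower()]
        return [word for _, word in sorted(scored, reverse=True)[:k]]

    print(suggest_distractors("London"))  # e.g. ['madrid', 'new york']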

Info about Classtime

Classtime (www.classtime.com) is a web-based engagement and examination platform for modern teaching. Classtime intensifies the interaction between teacher and learner, increases the transparency of learning progress and saves the teacher time (auto-correction of homework/examinations), be it in face-to-face or online/distance learning. Further applications include pre-knowledge checks, assessments, flipped classrooms, gamification, etc.

See video: Link

Key activities of the Master student

  • Programming using machine-learning concepts and libraries based on contextual proximity
  • Working closely with the CTO, CEO, and Product / UX specialists
  • Data science and analytics activities
  • Pedagogical research, supported by the team and with academic experts on the roles and types of metacognitive questions
  • Define academic follow-on questions
  • Study execution and write-up

Benefits for Master student

  • Engage in highly relevant Learning Sciences and learning analytics research. Digital assessment tools are becoming more important, fueled by the recent push towards distance/online learning. Solutions need to ensure that they maximize educational value
  • Access to a large pool of teachers, students and schools internationally to conduct meaningful experiments
  • Development of a product extension that will be used in real-life with millions of students and teachers
  • Collaboration with a dynamic and entrepreneurial team running a growing and promising start-up, on-site in Zurich

Contact: aditi.kothiyal(at)epfl.ch

MAGMA Learning

Research goals
Long-term retention is known to be improved by the principle of spaced repetition, according to which a learner should wait until a concept is almost forgotten before revising it. Standard implementations of spaced repetition are, however, rather rigid and do not adapt to the learner’s personal memory abilities. This is the case for the Leitner system, where the interval until the next revision is simply doubled in case of a correct answer, or halved in case of an incorrect answer.
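For concreteness, here is a minimal sketch of the rigid Leitner-style scheduling described above (interval doubled on a correct answer, halved on an incorrect one); the one-day starting interval is an illustrative assumption, not part of any specific product.

    def next_interval(current_interval_days, answered_correctly):
        """Leitner-style schedule: double the interval on success, halve it on failure."""
        if answered_correctly:
            return current_interval_days * 2
        return max(1, current_interval_days // 2)

    interval = 1  # assumed starting interval of one day
    for correct in [True, True, False, True]:
        interval = next_interval(interval, correct)
        print(interval)  # 2, 4, 2, 4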

MAGMA Learning has developed a novel spaced repetition system in which the forgetting rate is gradually adapted to each learner thanks to machine learning. Since personalization is also known to improve the effectiveness of learning, its combination with spaced repetition is indeed a natural step to produce even more beneficial results.

The goal of this project is to conceive a rigorous experimental setup in which to test the effectiveness of personalized spaced repetition compared to standard spaced repetition. The various algorithms to be tested will be implemented in our personal AI tutor app ARI 9000, which will be used by hundreds of students at EPFL and other universities. The experimental setup should be general enough to apply to other settings, such as corporate training in companies.

Beyond measuring the effectiveness of personalized spaced repetition, the project also aims at analyzing the importance of the many parameters that enter the learning process. This includes for example the recall probability to reach before revising (what does “almost forgotten” mean?), the frequency and duration of revision sessions, learning paths, difficulty of concepts, familiarity with related concepts, etc. These analyses will lead to a better understanding of performance improvement and provide a strategy of personalized recommendations for ARI 9000 users.
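One way to make “almost forgotten” concrete is an exponential forgetting curve: schedule the next revision when the predicted recall probability drops to a chosen threshold. This is a generic textbook model, not MAGMA Learning’s algorithm; the memory-strength value and the 0.7 threshold below are illustrative assumptions.

    import math

    def time_until_threshold(strength_days, threshold=0.7):
        """With recall probability p(t) = exp(-t / strength), find t where p(t) == threshold."""
        return -strength_days * math.log(threshold)

    # A learner-specific "memory strength" (here assumed to be 5 days) sets the pace:
    print(round(time_until_threshold(5.0), 2))       # ~1.78 days until recall drops to 0.7
    print(round(time_until_threshold(5.0, 0.5), 2))  # ~3.47 days for a lower threshold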


Activities of the Master student
• Review literature about spaced repetition systems and their experimental testing
• Conceive practical experiments to test the effectiveness of personalized spaced repetition
• Analyze resulting data and quantify improvement metrics
• Evaluate the importance of the parameters involved in the algorithm
• Devise recommendation strategies to optimize learning

Contact: aditi.kothiyal(at)epfl.ch


Learning Analytics 

Learning analytics involves applying data science techniques to optimize and understand learning. In CHILI, the projects range from applying existing algorithms to new data sets and comparing algorithms for addressing a given goal, to visualizing data in a meaningful manner to support learning.


Dual-T

In the Dual-T project, we are interested in how to “expand the experience” of the learners in vocational education. We consider digital technologies as a means to approach the problem. We are particularly interested in expanding the experience by generating and exploring digital variations of designs. Exploring design variations can help the learners in acquiring a better understanding of the design space. We are currently exploring this idea with two professions – florists and gardeners.

Training Needs Analysis is the identification of skills that will help people in a profession improve their performance and obtain the skills they need. Currently, we are focusing on performing training needs analysis on software developers, for whom we have much publicly available data, in the form of Stack Overflow questions, Stack Overflow Developer Survey, and Google Trends.

The following is a list of available projects and their descriptions. In case of interest, please send an email to the contact person.

VR can offer an opportunity for learners to create and explore designs in an immersive environment. What could be more interesting is when multiple learners interact in the virtual space. This project involves developing a VR application that provides a collaborative working space for multiple learners. We are currently working on garden designing as a target application.

Requirements: experience or interest in learning Unity, C#, VR app development

Contact: kevin.kim (at) epfl.ch

Synopsis: Use deep generative models such as GANs and variational autoencoders to learn the structure of visual design spaces and build tools that designers, artists, and other professionals can use to generate new examples and explore design spaces.
Levels: Bachelor, Master
Description: One of the key markers of expertise among designers, engineers, and other creative professionals is a coherent understanding of the design space in which their work is situated. This is one of the reasons that experts are better able to understand and solve design problems than novices. Novices are more likely to become fixated on a single idea or solution, since they are unaware of the full design space of possible solutions. This project involves developing a set of tools that will make it easier for novices to visualize and explore a broader design space within creative domains. The primary set of tools will be deep generative models (e.g., StyleGAN2-ada, vq-vae-2), which will be used to learn the design spaces from examples. A secondary set of tools will be novice-facing tools that can be used to visualize and explore the design spaces learned by these deep generative models.
Deliverables: The project will be defined depending on the candidate’s interests and expertise. The basic set of deliverables will be two or more deep generative models trained on datasets we will provide, with results that allow for comparison of different approaches. A more ambitious project might include the development of novel models, finding or creating novel datasets from other creative domains, or the development of high-quality, user-facing tools for the exploration of learned design spaces.
Prerequisites: Python, PyTorch, TensorFlow, etc. Interest in understanding and supporting human creativity.
Contact: richard.davis (at) epfl.ch and kevin.kim (at) epfl.ch
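A minimal sketch of the “explore a learned design space” idea: interpolate between two latent vectors and decode each intermediate point into an image. The decoder here is an untrained toy PyTorch network standing in for a trained generative model such as StyleGAN2-ada or VQ-VAE-2.

    import torch
    import torch.nn as nn

    # Toy stand-in for a trained decoder/generator (e.g. the decoder of a VAE).
    decoder = nn.Sequential(
        nn.Linear(16, 128), nn.ReLU(),
        nn.Linear(128, 28 * 28), nn.Sigmoid(),
    )

    def interpolate_designs(z_start, z_end, steps=5):
        """Decode evenly spaced points on the line between two latent codes."""
        alphas = torch.linspace(0.0, 1.0, steps)
        latents = torch.stack([(1 - a) * z_start + a * z_end for a in alphas])
        with torch.no_grad():
            return decoder(latents).reshape(steps, 28, 28)

    z_a, z_b = torch.randn(16), torch.randn(16)
    designs = interpolate_designs(z_a, z_b)
    print(designs.shape)  # torch.Size([5, 28, 28])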


JUSThink

The JUSThink project aims to explore how a humanoid robot can help improve the computational thinking skills of children by exercising algorithmic reasoning with and through graphs, where graphs are posed as a way to represent, reason with and solve a problem. The project aims at fostering children’s understanding of abstract graphs through a collaborative problem-solving task, in a setup consisting of a QTrobot as the humanoid robot and touch screens as input devices.

To help improve the learning outcomes in this context of human-human-robot interaction, this project aims to use data generated in the experiments to explore models of engagement and mutual modelling for adapting the robot behavior in real time using multi-modal data (interaction logs, speech, gaze patterns and facial expressions) and machine learning techniques among other things. 

Dialogue plays a very important role in collaborative learning, as it is primarily through dialogue that students build a joint understanding of the shared problem space and engage in knowledge construction. In line with the literature, we found in a study with 32 teams that the amount of verbal interaction is a significant indicator of which teams end up learning (they have high verbal interaction) compared to those who don’t, in an open-ended collaborative activity called JUSThink. In this activity, mediated by a robot, the children solve an algorithmic reasoning problem together by building railway tracks to connect gold mines on a Swiss map while spending as little money as possible.

However, what precisely do these results mean for designing more effective robot interventions? Do teams speak WHILE performing actions, i.e., building on each other’s ideas ‘on the go’ to find novel solutions, or do they ‘stop and pause’ their actions to discuss their conceptual ideas, or both? Secondly, is such behavior similar in teams that end up learning versus those who don’t? In this project, the student will try to answer these questions both by a quantitative analysis and a qualitative analysis of temporal data. The quantitative analysis will employ machine learning techniques such as sequence mining on our quantitative actions and speech dataset, while the qualitative analysis will be made on a descriptive actions and speech corpus from the same activity. The qualitative analysis will focus on inspecting what the team members say when performing actions versus when not performing an action. Such insights will assist in designing better interventions for the robot to ultimately help with the learning goal.
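As a first step toward the “do teams speak while acting?” question, one could simply measure how much of each team’s speech overlaps in time with their actions. The sketch below does this on hypothetical (start, end) intervals; the real analysis would run on the JUSThink action and speech logs.

    def overlap_seconds(speech_intervals, action_intervals):
        """Total time (s) during which speech and actions co-occur."""
        total = 0.0
        for s_start, s_end in speech_intervals:
            for a_start, a_end in action_intervals:
                total += max(0.0, min(s_end, a_end) - max(s_start, a_start))
        return total

    # Hypothetical log excerpts: (start_s, end_s)
    speech = [(0.0, 4.0), (10.0, 15.0)]
    actions = [(2.0, 6.0), (14.0, 20.0)]
    print(overlap_seconds(speech, actions))  # 3.0 seconds of talk-while-acting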

Requirements: experience or interest in learning Python, machine learning, Jupyter notebooks

For more details on the JUSThink activity, you can check out the paper: https://infoscience.epfl.ch/record/280176?ln=en

Level: Masters 

Contact: [email protected]

Supervisors: Jauwairia, Aditi, Utku 

In a research line within the JUSThink project, we develop mutual understanding skills for a humanoid robot in the context of a collaborative problem-solving activity that aims to improve the computational thinking skills of the human, by applying abstract and algorithmic reasoning to solve an unfamiliar problem on networks. In this activity, the robot and the human construct a solution together: the human acts via a touch screen, and the robot acts via direct commands to the activity as well as verbalising its intentions and actions. Although the human can understand the utterances of the robot, our robot currently relies on the human’s use of the touch screen, and cannot comprehend what is said if the human were to speak: you are here to change that!

In this project, you will endow a humanoid robot (Reachy or QTrobot) with the ability to understand the verbalised intentions of the human, in order to enhance the interaction. Thus, you will improve the verbal skills of the robot, so that it can: i) reliably recognise the verbal content of speech in real time via a speech-to-text tool such as Google’s (i.e. automatic speech recognition), and ii) detect the intention of the human from the transcribed text within the context of the activity and its state (i.e. natural language understanding).

Overall, you will: i) develop these skills as a modular package in Python and Robot Operating System (ROS), ii) validate these skills via metrics that you will design to see how well they work, and iii) (optionally) evaluate how they affect the human-robot interaction in a small user study.
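A rough sketch of the two verbal skills, assuming the off-the-shelf speech_recognition package for the Google speech-to-text call and a naive keyword-based intent matcher; the intent names are hypothetical, and the real system would wrap this in a ROS node and use proper natural language understanding.

    import speech_recognition as sr

    # Hypothetical activity-specific intents, matched by simple keywords.
    INTENT_KEYWORDS = {
        "connect": ["connect", "build", "track between"],
        "remove": ["remove", "delete", "erase"],
        "agree": ["yes", "okay", "good idea"],
    }

    def detect_intent(utterance):
        text = utterance.lower()
        for intent, keywords in INTENT_KEYWORDS.items():
            if any(k in text for k in keywords):
                return intent
        return "unknown"

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    text = recognizer.recognize_google(audio)  # automatic speech recognition
    print(text, "->", detect_intent(text))     # naive natural language understanding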

Prerequisites: experience or interest in learning Python and ROS.
Contact: utku.norman (at) epfl.ch and barbara.bruno (at) epfl.ch

Our personality, at some level, has the power to influence many areas of our life, including how people perceive us. This perception can then directly change people’s level of attention, engagement and trust in what we have to say. This becomes especially critical in positions of responsibility, such as a human or robot teacher/tutor, whose personality may translate into their pedagogical strategy and hence influence the learning process of a child.

In the context of Human-Robot Interaction, some examples of distinct robot personalities, inspired by psychology and learning theories, include an adversarial robot that induces conflict among team members as a way to raise the cognitive load of the students; a Socratic robot that asks questions for the same purpose of increasing cognitive load; or a supportive robot that gives excessive positive reinforcement to motivate the students towards the learning process.

Briefly, in this project, 1) the student will design one such robot with a distinct personality, reflected in its pedagogical strategy, in terms of its behaviors, where a behavior is defined by a verbal utterance accompanied by a gesture and an emotion. This robot will be deployed in an open-ended collaborative activity called JUSThink, in which the children solve an algorithmic reasoning problem together by building railway tracks to connect gold mines on a Swiss map while spending as little money as possible; 2) following this implementation, the idea is to conduct a small study with the designed robot in the JUSThink context and evaluate its effect on the learning gain of the children as well as how it is perceived by them; and 3) lastly, if possible, we will compare the effect of our designed robot to the effect of other robots with different personalities in the same context. Results from this project will give insights for designing more effective ‘robots with personalities’ in educational HRI settings.

Requirements: experience or interest in human-robot interaction, ROS, Python

For more details on the JUSThink activity, you can check out the paper: https://infoscience.epfl.ch/record/280176?ln=en

Level: Bachelors, Masters 

Contact: [email protected]

Supervisors: Jauwairia, Barbara

In a research line within the JUSThink project, we develop mutual understanding skills for a humanoid robot in the context of a collaborative problem-solving activity that aims to improve the computational thinking skills of the human, by applying abstract and algorithmic reasoning to solve an unfamiliar problem on networks. In this activity, the robot and the human construct a solution together: the human acts via a touch screen, and the robot acts via direct commands to the activity as well as verbalising its intentions and actions. Thus, only what the robot says and the consequences of the robot’s actions are observable by the human.  

The previous studies were performed with the humanoid robot QTrobot, which had limited motor capabilities. We now have a newcomer, Reachy, that wants to participate in the activity as well, but needs your help in improving its skills to better interact with the human learners. Reachy allows precise control of its arms, which can be used to act on the same screen as it works with a human, in the same way as the human does: by actually touching/tapping on the touch screen. Thus, in this project, you will improve the motor skills of Reachy, so that it can use the touch screen as it takes part in the activity and interacts with the human.

Overall you will: i) develop the skills as a modular package in Python and Robot Operating System (ROS), ii) validate these skills via metrics that you will design to see how well they work, and iii) (optionally) evaluate how they affect the human-robot interaction in a small user study.

Prerequisites: experience or interest in learning Python and ROS.

Contact: utku.norman (at) epfl.ch and barbara.bruno (at) epfl.ch

In a research line within the JUSThink project, we develop mutual understanding skills for a humanoid robot in the context of an activity, where the robot interacts with a human learner to solve a problem together by taking joint actions on a touch screen, as well as verbalising its intentions and actions. The activity aims to improve the computational thinking skills of the human, by applying abstract and algorithmic reasoning to solve an unfamiliar problem on networks. The previous studies were performed with the humanoid robot QTrobot. We now have a newcomer, Reachy, that wants to participate in the activity as well, but needs your help in improving its skills to better interact with the human learners.

This project involves the development of two dedicated sets of skills for Reachy, namely (Part 1) motor skills and (Part 2) vision skills, which will be integrated by the end of the project for a complete human-robot interaction scenario. It is intended for two bachelor students or one advanced student; in the case of two students, they will need to work together towards the end to integrate their solutions.

Regarding Part 1, the motor skills part of the project: Reachy allows precise control of its arms, which can be used to exhibit deictic gestures such as pointing to the human when it is his/her turn, as well as pointing to the regions of the activity it refers to. These could complement the robot’s verbalised intentions, as they serve as explicit/overt signals of referential communication, which could enhance the collaboration. Yet, the skills of the robot are currently only available as low-level motor control (e.g. move this motor to that angle). Thus, you will develop high-level motor interaction skills for Reachy, so that it can:

  1. point to an object in the environment, e.g. the human that it detects (see Part 2) or assumes to be sitting, the screen, the chair, or itself (where positions can be assumed to be known beforehand),
  2. point to a region of an activity on the touch screen.

The robot will need to maintain and process the relative positions of the objects in terms of e.g. coordinate frames (relative to its body frame), and exhibit a pointing gesture, via a dedicated package in Robot Operating System (ROS), directed towards the object or region of interest.
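A minimal sketch of the underlying geometry, assuming the target position is already expressed in the robot’s body frame: compute the yaw and pitch a pointing gesture should have toward that point. Frame transformations and actual arm control (e.g. via tf2 and Reachy’s SDK) are left out.

    import numpy as np

    def pointing_angles(target_xyz):
        """Yaw and pitch (radians) from the body-frame origin toward a target point."""
        x, y, z = target_xyz
        yaw = np.arctan2(y, x)                    # rotation around the vertical axis
        pitch = np.arctan2(z, np.hypot(x, y))     # elevation toward the target
        return yaw, pitch

    # Hypothetical target: a region of the touch screen, in metres, in the body frame.
    yaw, pitch = pointing_angles((0.4, -0.2, -0.1))
    print(np.degrees(yaw), np.degrees(pitch))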

Regarding Part 2, the vision skills part of the project, Reachy will detect and recognise a human face by using its camera feeds, so that it can face the human all the time, greet and point to the human, invite him/her to the activity (by e.g. pointing to the chair), and bid farewell afterwards (by e.g. waving along the direction of the human). The robot will infer the position of the human via another dedicated package in ROS, and make this information available for other packages, such as the package in Part 1. Overall, in this project, you will:

  1. develop these skills separately as modular, dedicated packages in ROS
  2. validate these skills via metrics that you will design to see how well they work, and 
  3. integrate the skills so that the robot recognises (Part 2) and points to (Part 1) a human within its field of view, even when the human moves
  4. (optionally) evaluate how they affect the human-robot interaction in a small user study

Prerequisites: experience or interest in learning Python and ROS.

Contact: utku.norman (at) epfl.ch and barbara.bruno (at) epfl.ch (and Victor)

Intelligent Tutoring Systems (ITS) are required to intervene in a learning activity while it is unfolding, to support the learner. To do so, they often rely on the performance of a learner as an approximation for engagement in the learning process. However, in learning tasks that are exploratory by design, such as constructivist learning activities, performance in the task can be misleading and may not always hint at an engagement that is conducive to learning. Using the data from a robot-mediated collaborative learning task, JUSThink, in an out-of-lab setting, tested with 68 children, our results show that data-driven machine learning approaches, applied on behavioral features including interaction with the activity (touches/edits on the screen), speech, affective and gaze patterns, are not only capable of discriminating between high and low learners, but can do so better than classical approaches that rely on performance alone.

As a step forward, we would like to extend towards time-series analysis: 1) to model ‘trends in multi-modal behavioral state transitions for learners and non-learners’ over the interaction, using techniques like PCA and clustering, followed by HMMs; and 2) to first ‘analyze and then generalize the behavioral changes within different sections of the interactions for learners and non-learners’, using similarity/distance metrics and clustering techniques.

Apart from giving general insights on how learners behave in open-ended activities, these insights will assist in designing better interventions for the robot to ultimately help with the learning goal. The project is also open for suggestions/ideas from the student along the way, to be incorporated within the aforementioned scope. For more details on the JUSThink activity, you can check out the paper here: https://infoscience.epfl.ch/record/280176?ln=en
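A minimal sketch of the first analysis direction, using only scikit-learn and NumPy on synthetic data: reduce per-window multi-modal features with PCA, cluster the windows into discrete behavioral states, and estimate the state-transition matrix. The feature matrix here is random; the real input would be the windowed JUSThink features, and HMMs could replace the simple count-based transition matrix.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 12))   # synthetic stand-in: 200 time windows x 12 features

    states = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
        PCA(n_components=4).fit_transform(X)
    )

    def transition_matrix(state_sequence, n_states=3):
        """Row-normalized counts of transitions between consecutive behavioral states."""
        counts = np.zeros((n_states, n_states))
        for a, b in zip(state_sequence[:-1], state_sequence[1:]):
            counts[a, b] += 1
        return counts / counts.sum(axis=1, keepdims=True)

    print(transition_matrix(states))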

Prerequisites: experience or interest in learning Python, machine learning, Jupyter notebooks

Contact: [email protected]

Supervisors: Jauwairia Nasir, Aditi Kothiyal

Teachers, robots or intelligent tutoring systems often intervene in a learning activity while it is unfolding, in order to support the learner. To do so, they often rely on the performance of a learner as an approximation for engagement in the learning process. However, in learning tasks that are exploratory by design, such as constructivist learning activities, performance in the task can be misleading and may not always hint at an engagement that is conducive to learning. Using the data from a robot-mediated collaborative learning task, JUSThink, in an out-of-lab setting, tested with 68 children, our results show that data-driven machine learning approaches, applied on behavioral features including interaction with the activity (touches/edits on the screen), speech, affective and gaze patterns, are not only capable of discriminating between high and low learners, but can do so better than classical approaches that rely on performance alone.

As a step forward, we are interested in discovering statistically prevalent action patterns in the lower-level sequential log data. Such analysis is useful for 1) understanding how high-learning teams may differ in their action patterns during a collaborative learning activity compared to low-learning teams, as well as 2) finding the most frequent patterns of sequences and how they relate to the different types of learners. These insights will further assist in designing better robot interventions and teaching and learning strategies to ultimately help learners achieve the learning goal. The student will employ techniques like Sequential Pattern Mining and Differential Sequence Mining on the JUSThink dataset. The project is also open for suggestions/ideas from the student along the way, to be incorporated within the aforementioned scope. For more details on the JUSThink activity, you can check out the paper: https://infoscience.epfl.ch/record/280176?ln=en
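A rough stand-in for the differential part of the analysis: count short action n-grams separately for high-learning and low-learning teams and compare their frequencies. Real Sequential/Differential Sequence Mining algorithms are more sophisticated, and the action sequences below are hypothetical.

    from collections import Counter

    def ngrams(sequence, n=2):
        return [tuple(sequence[i:i + n]) for i in range(len(sequence) - n + 1)]

    def pattern_counts(teams, n=2):
        counts = Counter()
        for actions in teams:
            counts.update(ngrams(actions, n))
        return counts

    # Hypothetical action logs per team.
    high_learners = [["add_edge", "talk", "add_edge", "submit"],
                     ["talk", "add_edge", "talk", "submit"]]
    low_learners = [["add_edge", "add_edge", "add_edge", "submit"]]

    high, low = pattern_counts(high_learners), pattern_counts(low_learners)
    for pattern in set(high) | set(low):
        print(pattern, "high:", high[pattern], "low:", low[pattern])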

Prerequisites: Experience or interest in learning Python, learning analytics, Jupyter notebooks, etc.

Contact: [email protected]

Supervisors: Jauwairia Nasir, Aditi Kothiyal

When interacting with other people, or our pets, humans are multi-modal: we express ourselves via our words, our intonation, our facial expressions and our gestures. We nod to tell someone that we’re following what they’re saying, we smile when someone else smiles, we raise our voice when we’re angry. Using these modes comes naturally to us, and we typically assume our interaction partner to be able to extract and correctly interpret the information we transmit through them. This is why endowing a robot with the ability to detect and react to multi-modal cues is a fundamental research topic in Social Robotics.

In this project, you will develop a “Natural Interaction Robot Behavior Module” that can be used as a plug-and-play module for a robot, irrespective of the activity. Our goal is threefold: 1) to investigate the multi-modal cues of the user that can be triggers for the module, 2) to detect these multi-modal behaviors of the user in real time, and 3) to build a set of robot behaviors in reaction to the aforementioned multi-modal cues, resulting in an autonomous “Natural Interaction Robot Behavior Module”. As a possible 4th step, the behavior module can be validated in a simple user study at the end.

The student will work with a QTrobot, a small humanoid robot capable of expressing a wide range of behaviors by composing various gestures, facial emotions and speech acts. For detecting user behaviors in real time, the student will use the inbuilt sound localization feature of QTrobot or an external microphone for modelling audio behavior, and the inbuilt NuiTrack or OpenFace for modelling facial behavior.
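A minimal sketch of the reaction side of such a module: a rule-based mapping from detected multi-modal cues to robot behaviors. The cue and behavior names are hypothetical placeholders; in the actual project the cues would come from the QTrobot’s sound localization and from NuiTrack/OpenFace, and the behaviors would be executed through the robot’s ROS interface.

    # Hypothetical cue -> behavior policy for a "Natural Interaction Robot Behavior Module".
    REACTIONS = {
        ("smile", "silent"): "smile_back",
        ("smile", "speaking"): "nod",
        ("neutral", "speaking"): "gaze_at_speaker",
        ("frown", "silent"): "offer_encouragement",
    }

    def react(facial_cue, audio_cue):
        """Pick a robot behavior for the currently detected multi-modal cues."""
        return REACTIONS.get((facial_cue, audio_cue), "idle")

    print(react("smile", "speaking"))   # nod
    print(react("neutral", "silent"))   # idle (no rule matched)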

Requirements: experience or interest in human-robot interaction, robotic platforms, ROS, Python

Contact: [email protected]

Supervisors: Jauwairia, Barbara


CoWriter / iReCHeCk

The CoWriter Project aims at exploring how a robot can help children with the acquisition of handwriting, with an original approach: the children are the teachers who help the robot to write better! This paradigm, known as learning by teaching, has several powerful effects: it boosts the children’s self-esteem (which is especially important for children with handwriting difficulties), it gets them to practise handwriting without even noticing, and it engages them in a particular interaction with the robot called the Protégé effect: because they unconsciously feel that they are somehow responsible if the robot does not succeed in improving its writing skills, they commit to the interaction and make particular efforts to figure out what is difficult for the robot, thus developing their metacognitive skills and reflecting on their own errors.

Human-Robot Interaction

“Ears cannot speak, lips cannot hear, but eyes can both signal and perceive. For human beings, this dual function makes the eyes a remarkable tool for social interaction” [1]. A person approaching us catches our attention (i.e., we turn to look at her), and we signal our openness to interaction by fixating our gaze on her. If she starts speaking to us, we maintain our gaze focused on her, if she doesn’t, we let our eyes roam around in search of other interesting things to look at. In short, our attention system is remarkably sophisticated and plays a crucial role in our social interactions.

In this project, you will design, develop and test an attention system for the social robot Reachy (https://www.pollen-robotics.com/reachy/), specifically taking into account the additional requirements posed by educational contexts (e.g., younger people are likely to be the students and thus more important… but looking at a student busy solving an exercise might unsettle and disturb them). Concretely, you will (1) learn how to use OpenCV (https://opencv.org/) functions to detect objects, people, faces and even smiles from the camera stream, (2) build on the literature and your creativity to invent the rules that make a believable attention system for social robots, and (3) develop it as a ROS module (ROS is the de-facto standard middleware in Robotics – https://www.ros.org/) and test it in experiments with human participants.

You will program in Python, get familiar with ROS, OpenCV and other widely used Python libraries for signal processing, and acquire hands-on experience in endowing a robot with a key capability required to interact with humans.
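As a starting point for step (1), here is a minimal OpenCV sketch that detects faces and smiles in a webcam stream using the Haar cascades shipped with OpenCV; an attention system would build its gaze rules on top of detections like these.

    import cv2

    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    smile_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")

    cap = cv2.VideoCapture(0)  # webcam stand-in for the robot's camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            roi = gray[y:y + h, x:x + w]
            # Smiles are searched for only inside detected faces.
            for (sx, sy, sw, sh) in smile_cascade.detectMultiScale(roi, 1.7, 20):
                cv2.rectangle(frame, (x + sx, y + sy), (x + sx + sw, y + sy + sh), (255, 0, 0), 2)
        cv2.imshow("attention input", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()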

References:

[1] GOBEL, Matthias S.; KIM, Heejung S.; RICHARDSON, Daniel C. The dual function of social gaze. Cognition, 2015, 136: 359-364.

Prerequisites: experience or interest in learning Python, ROS, Python libraries for video processing, robot programming.

Contact: barbara.bruno (at) epfl.ch


Cellulo

In the Cellulo Project, we aim to design and build the pencils of the classroom of the future, in the form of robots. We imagine these as swarm robots, each of them very simple and affordable, that reside on large paper sheets that contain the learning activities. Our vision is that they be ubiquitous, namely a natural part of the classroom ecosystem, so as to shift the focus from the robot to the activity. With Cellulo you can actually grab and move a planet to see what happens to its orbit, or vibrate a molecule with your hands to see how it behaves. Cellulo makes tangible what is intangible in learning.

A new and interesting research line explored in CHILI is the use of robot swarms as a powerful way to learn about complex systems in education. With the Cellulo platform, learners will be able to explore the connection between the micro-level behavior of individuals and the macro-level patterns that emerge from their interactions.
In order to tangibly define the interaction at the micro level, one idea is to define a programming-by-demonstration framework that generically extracts the relevant features from a given trajectory performed by the user (see the sketch after the list below).
In this project, the student would:
– Define a grammar of relevant interaction rules, inspired by the literature.
– Develop a program to acquire trajectories from different people.
– Develop the framework to learn the interaction rules from the acquired trajectories.
– Validate the feasibility and accuracy of such a framework.
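A minimal sketch of the feature-extraction step, assuming a demonstrated trajectory is recorded as timestamped (x, y) positions: it computes per-segment speed and heading change, two simple candidates for the “relevant features” from which an interaction rule might be learned.

    import numpy as np

    def trajectory_features(points):
        """points: array of (t, x, y) rows from one demonstrated trajectory."""
        t, xy = points[:, 0], points[:, 1:]
        steps = np.diff(xy, axis=0)
        dt = np.diff(t)
        speed = np.linalg.norm(steps, axis=1) / dt
        heading = np.arctan2(steps[:, 1], steps[:, 0])
        turning = np.diff(heading)  # change of direction per step
        return {"mean_speed": speed.mean(), "mean_abs_turn": np.abs(turning).mean()}

    demo = np.array([[0.0, 0.0, 0.0], [0.5, 1.0, 0.0], [1.0, 1.0, 1.0], [1.5, 0.0, 1.0]])
    print(trajectory_features(demo))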

Prerequisites: experience or interest in learning programming-by-demonstration algorithms, swarm algorithms, machine learning.

Contact: [email protected] and [email protected]


In this project, building upon an initial prototype of a multi-user gaming environment that already allows gameplay over the internet, the student is expected to implement a new multi-user game with collaborative and competitive versions that is adaptive to social distancing. In the current version, a patient plays the game with the tangible robots on one side, while another player plays the game with keys, controlling a robot. The expected outcomes of this project include a new game logic with multi-user versions, the design of the user interface, and game visuals in Unity.

For more information about the robots and Cellulo for Rehab: https://www.epfl.ch/labs/chili/index-html/research/cellulo/gamified-rehabilitation-with-tangible-robots/

Prerequisites: experience in Unity programming, interest in game design

Contact: [email protected]

Collaborative Learning

Description: The Modelling-based Estimation Learning Environment (MEttLE for short) is learning software for engineering estimation problem solving. Within MEttLE, students learn how to solve open-ended engineering estimation problems and are guided by features embedded within the software, such as hints, visualizations and sub-goals. Currently the software is designed for an individual learner. However, collaboration can be useful when learning open-ended problem solving. Hence, the goal of this semester project is to modify the MEttLE software to be used by two learners collaborating online with a specific role-assignment protocol.

Specifically, you will build on the existing MEttLE software, which is developed in Node.js with MongoDB. You will add features allowing multiple users to enter the same problem-solving session simultaneously, while accessing different features depending on their current role, as well as role switching. By the end of the project, the system (MEttLE) will be able to assign roles to learners as they enter the system, allowing them to have different views of the software, collaborate and switch roles according to the protocol until the end of the session.

Levels: Bachelor, Master

Prerequisites: Experience in or motivation to learn Node.js, MongoDB and React.

Contact: aditi.kothiyal (at)epfl.ch

Classroom Orchestration

Tangible programming with Cellulo robots is a project that aims to make programming inside the classroom possible for children without a tablet or laptop. In this activity, students first write a program by moving a Cellulo robot over Hextiles which contain commands. Afterwards, they run this program and study the behaviour of another robot, generated by their program, in order to evaluate it. In this project, you will create a platform for teachers to be aware of what the student is doing in this activity. You will send students’ data to the backend and then implement a teacher dashboard showing student actions and behaviours, such as how much time students have spent on each task.

Skills: software engineering (front end: preferably Flutter or React; back end: Firebase)

For more information or to apply, please contact [email protected] or [email protected]


Celloroom (Cellulo in Classrooms) is a project that aims to bring collaborative learning activities inside the classroom for children. In this platform, students collaboratively play a learning game to learn concepts such as line slope and intercept. In this project, you will extend and improve the current activities in a platform for teachers and students. You will develop the user interface of the game and send students’ data to the backend to manage the game rules.

Skills: software engineering (front end: preferably Flutter or React; back end: Firebase)

For more information or to apply, please contact [email protected]


Miscellaneous

Synopsis: We are looking for someone with front-end development experience (web and/or iOS) to help us design and build an experimental video player intended to disrupt sports broadcasting.
Levels: Bachelor, Master
Description: This project is part of a collaboration with a startup that has developed a proprietary machine learning algorithm able to extract information from smartphone recordings of sporting events. An important part of this project is to explore novel ways of visualizing this information as part of the video player that will show the sporting events. These early-stage video-player prototypes could be developed for iOS or the web, depending on interest and prior experience.
Deliverables: The basic set of deliverables will be a custom video player that is also capable of visualizing time-based and summative data about sporting events.
Prerequisites: iOS and/or front-end web development (e.g., React/Redux)
Contact: [email protected]