Check this page regularly; we add new projects all the time.
All the projects listed below can be taken as either a semester project or a master's thesis; the depth and workload can be adjusted accordingly by discussion. If you cannot find anything on the list that interests you, we always encourage students to bring their own ideas to us.
Projects (Master/Bachelor)
Virtual reality (VR) has the potential to radically disrupt education. However, we know little about how to design effective learning experiences in VR. To address this, we are currently developing a variety of VR learning experiences that will be tested with students and teachers.
We are seeking students interested in joining the development of educational VR applications for the Oculus Quest 2 using the Unity3D platform. Relevant technical experience includes knowledge of C#, the Unity XR Interaction Toolkit, and Unity3D.
Contact: [email protected]
Tangible user interfaces (TUIs) are a technology that makes it possible for people to interface with the digital world by manipulating physical objects. Typically, physical objects are tagged with fiducial markers similar to QR codes that make it easier to automatically identify their position and orientation. This information is then processed and used to project a visualization directly onto the objects. For example, this is the approach used by the popular reacTIVision software.
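To give a flavour of this marker-based approach, the sketch below detects square fiducial markers in a camera frame and reports each marker's position and orientation. It is only an illustrative approximation of what reacTIVision does, written in Python with OpenCV's ArUco module (API as of OpenCV 4.7+); the camera index and marker dictionary are assumptions.

```python
# Illustrative sketch of marker-based tracking (Python + OpenCV >= 4.7).
# The camera index and marker dictionary are assumptions for this demo.
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)  # any camera pointed at the tabletop
while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is not None:
        for marker_id, quad in zip(ids.flatten(), corners):
            center = quad[0].mean(axis=0)           # position in pixels
            dx, dy = quad[0][1] - quad[0][0]        # vector along the top edge
            angle = np.degrees(np.arctan2(dy, dx))  # orientation in degrees
            print(f"marker {marker_id}: pos={center}, angle={angle:.1f}")
cap.release()
```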
The goal of this project is to develop a TUI toolkit that does not rely on fiducial markers, but instead uses augmented reality technologies to identify objects directly. The finished toolkit will provide users with an easy-to-use system for training models to recognize arbitrary objects, and it will operate under a variety of lighting conditions and camera angles. It is designed to be a drop-in replacement for existing toolkits such as reacTIVision.
This project will be developed using Swift and ARKit for iOS. We are seeking students interested in augmented reality and iOS development to join the project.
Contact: [email protected]
The goal of this project is to continue the development of a web-based user interface that non-programmers can use to meaningfully navigate the latent space of a deep generative model. This work is part of the project “GANs and Beyond” and builds on previous work published in the NeurIPS workshop on Creativity [1]. A video demo of the existing interface can be seen at https://youtu.be/dcC7G2zBuL8.
This project will use React and other front-end web technologies to explore new ways of visualizing and exploring the vast space of possible designs. Ideally, we will also test this interface with stakeholders to better understand its strengths and weaknesses and to collect evidence of its effectiveness.
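As a rough illustration of what navigating the latent space means behind such an interface, the sketch below maps named slider values onto offsets along latent directions and interpolates between two designs. The generator, the latent dimensionality, and the direction names are placeholders, not the project's actual code.

```python
# Sketch: server-side logic a web UI could call to move through a
# generator's latent space. Directions and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 512
directions = {                      # hypothetical learned semantic directions
    "sleeve_length": rng.normal(size=LATENT_DIM),
    "brightness": rng.normal(size=LATENT_DIM),
}

def edit_latent(z, sliders):
    """Offset a latent code along named directions by slider amounts."""
    z = z.copy()
    for name, amount in sliders.items():
        d = directions[name]
        z += amount * d / np.linalg.norm(d)
    return z

def interpolate(z_a, z_b, steps=8):
    """Linear interpolation between two designs' latent codes."""
    return [(1 - t) * z_a + t * z_b for t in np.linspace(0.0, 1.0, steps)]

z = rng.normal(size=LATENT_DIM)
edited = edit_latent(z, {"sleeve_length": 1.5, "brightness": -0.5})
path = interpolate(z, edited)       # frames to render with the generator
```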
Students interested in HCI, full-stack web development, UX research, and human-centered artificial intelligence should contact [email protected]
[1] https://neuripscreativityworkshop.github.io/2021/#/
Dyslexia is a specific and long-lasting learning disorder characterised by reading performance well below that expected for a given age. Given the importance of reading throughout a person's life, it is not surprising that dyslexia has been extensively studied. Yet there is still no consensus on a theory explaining its origin. A promising recent line of research focuses on abnormalities of the “internal clock” used to sample information as one of the main underlying deficits. We are developing several digital activities to explore that view.
In order to test that hypothesis, we need to measure children's performance in those activities against some baseline cognitive skills. The goal of this project is thus to develop an automatic reading assessment activity. Since the application runs on iPad, you will learn to program in Swift. We are seeking students interested in iOS development, signal processing, and machine learning to join this project.
Contact: [email protected]
Dyslexia is a specific and long-lasting learning disorder characterised by reading performance well below that expected for a given age. Given the importance of reading throughout a person's life, it is not surprising that dyslexia has been extensively studied. Yet there is still no consensus on a theory explaining its origin. A promising recent line of research focuses on abnormalities of the “internal clock” used to sample information as one of the main underlying deficits. We are developing several digital activities to explore that view.
In order to test that hypothesis, we need to measure children's performance in those activities against some baseline cognitive skills. The goal of this project is thus to implement several standard cognitive performance tests in a playful and engaging way. Since the application runs on iPad, you will learn to program in Swift. We are seeking students interested in iOS development and game design to join this project.
Contact: [email protected]
Nowadays people commonly take notes by writing on digital tablets, usually with a dedicated stylus. The underlying technology mainly relies on sensing the pen tip's contact location on the touch screen, which is rendered as part of the handwriting trajectory and forms the handwriting product. However, one important aspect of the handwriting process is often ignored and underexplored by researchers: the imprint of the hand as it naturally rests on the screen. The full palm imprint can be detected with the mutual-capacitance sensing technology used in most modern digital tablets. In this project, we postulate that the static and dynamic features of the hand imprint during writing convey rich information for handwriting recognition, which could ultimately let users write on tablets with a conventional pen.
To this end, in this research project we will 1) create an Android tablet app and 2) develop new deep learning algorithms for handwriting recognition based solely on the capacitive images of the touch screen. We will have weekly meetings to address questions, discuss progress, and think about future ideas.
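As an illustrative starting point, the sketch below shows a small convolutional network that classifies a single standalone capacitive frame (a full model would also exploit the dynamic, temporal features mentioned above). It is written in Python with PyTorch; the 32x48 sensor-grid resolution and the number of output classes are assumptions, not the tablet's actual specs.

```python
# Sketch: a small CNN over standalone capacitive frames.
import torch
import torch.nn as nn

class CapacitiveNet(nn.Module):
    def __init__(self, n_classes=62):  # e.g., digits + upper/lower letters
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 12, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):          # x: (batch, 1, 32, 48) capacitive frames
        return self.head(self.features(x))

model = CapacitiveNet()
logits = model(torch.randn(4, 1, 32, 48))  # four fake frames
print(logits.shape)                        # torch.Size([4, 62])
```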
We are looking for students with any of the following interests: Human-Computer Interaction, Computer Vision, Deep Learning, and Mobile Computing. Relevant IT skills include Python and knowledge of Android development. If you are interested, do not hesitate to contact me. This project is mainly targeted at a thesis but can be adapted into a semester project.
Contact: [email protected]
It can be difficult to envisage the potential downstream ethical and environmental impacts of engineering design decisions, particularly for students trained on school projects with limited exposure to real-world constraints and complexity. This project involves continuing the development of an ethical game (involving drone design) to support students' reflection on the broader impacts of their design decisions.
Concretely, the student selected for this project will integrate Cellulo robot(s) with the game in order to enable collaborative, multi-person interactions and the collection of haptic feedback.
Skills developed in this project: robot programming with Qt/QtQuick, basic C++, QML programming, working with git, educational activity design, user experience design.
Proposed work packages:
- Study phase: familiarization with the “game” scenario and design, and existing Cellulo applications.
- Design and implement an approach to integrate the Cellulo robots with the game, including collecting haptic feedback.
- Field testing the “game” with groups of students, including evaluation of the system and its educational impact.
- Report writing.
Contact: [email protected]
Learning how to grip the pen properly is a fundamental part of handwriting training for children, and it requires constant monitoring of their pen grip posture and timely intervention from teachers. Various sensing technologies have been explored to automate pen grip posture estimation, such as camera-based systems or EMG armbands. In the context of digital writing, i.e., writing on tablets, these solutions lack portability because they require additional sensors. In this project, we aim to tackle this challenge by exploiting the sensors already integrated into touch screens and digital pens. A previous study showed that it is promising to reconstruct the 3D hand pose from the capacitive images provided by the touch screen. Together with the readily accessible pen tip location and orientation, which are strongly coupled with the hand pose, we postulate that the pen grip posture can be inferred in situ with a single commodity tablet and pen. Building on this, a new method for evaluating pen grip posture quality can then be investigated.
To this end, in this research project we will work with an Android tablet or a Wacom pen tablet and develop new algorithms for pen grip posture estimation and analysis. We will have weekly meetings to address questions, discuss progress, and think about future ideas.
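As a sketch of the in-situ inference idea, the PyTorch snippet below fuses an embedding of the capacitive image with the pen's reported tip position and orientation to classify grip posture. The input sizes, the four-dimensional pen state, and the posture classes are illustrative assumptions, not the project's actual design.

```python
# Sketch: fusing the palm's capacitive imprint with the pen state.
import torch
import torch.nn as nn

class GripNet(nn.Module):
    def __init__(self, n_postures=4):
        super().__init__()
        self.image_enc = nn.Sequential(   # encodes the capacitive imprint
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(16 * 16 * 24, 64), nn.ReLU(),
        )
        self.pen_enc = nn.Sequential(     # encodes pen tip (x, y) + tilt/azimuth
            nn.Linear(4, 16), nn.ReLU(),
        )
        self.classifier = nn.Linear(64 + 16, n_postures)

    def forward(self, cap_img, pen_state):
        h = torch.cat([self.image_enc(cap_img), self.pen_enc(pen_state)], dim=1)
        return self.classifier(h)

model = GripNet()
out = model(torch.randn(2, 1, 32, 48), torch.randn(2, 4))
print(out.shape)  # torch.Size([2, 4])
```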
We are looking for students with any of the following interests: Machine Learning, Human-computer Interaction, Computer Vision, and Mobile Computing. Relevant IT skills include Python and knowledge of any one of the following object-oriented programming languages: C++, Java or C#. If you are interested, do not hesitate to contact me.
Contact: [email protected]
Jupyter notebooks have become an essential tool in data science, scientific computing, and machine learning, in both industry and academia. Cloud-based Jupyter notebook services like Google Colab, Noto, and JupyterHub bring the power of Jupyter notebooks into the cloud and make them easier to share and collaborate on. At EPFL and other universities, these cloud-based Jupyter notebooks are used as interactive textbooks, as platforms for distributing and grading homework, and as simulation environments.
These notebooks produce rich logs of interaction data, but there is currently no easy way for teachers and students to view and make sense of this data. This data could provide a valuable source of feedback that both teachers and students could use to improve their teaching and learning. This way of using data is called learning analytics, and we have recently begun designing a software extension that will bring the power of learning analytics directly into cloud-based Jupyter notebooks.
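As a toy example of the kind of processing involved, the Python sketch below aggregates a hypothetical line-delimited JSON event log into per-student execution counts and per-cell error counts; real platforms log events in their own formats, so the field names here are assumptions.

```python
# Sketch: aggregating notebook interaction logs into simple statistics.
# The JSON event format is hypothetical; real logs differ by platform.
import json
from collections import Counter, defaultdict

def load_events(path):
    with open(path) as f:
        return [json.loads(line) for line in f]  # one JSON event per line

def summarize(events):
    executions = defaultdict(Counter)   # student -> cell -> execution count
    errors = Counter()                  # cell -> error count
    for e in events:
        if e.get("event") == "execute_cell":
            executions[e["student_id"]][e["cell_id"]] += 1
            if e.get("status") == "error":
                errors[e["cell_id"]] += 1
    return executions, errors

# executions, errors = summarize(load_events("notebook_events.jsonl"))
# A dashboard could then highlight cells where many students hit errors.
```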
We are looking for students with any of the following interests to join the development of this learning analytics tool: data visualization, full-stack web development, UX research, learning analytics, and education.
Contact: [email protected] or [email protected]
Deep generative models such as GANs and VAEs have shown a remarkable ability to learn complex data distributions and produce highly realistic samples. When trained on data from creative domains (e.g., clothing design), these models could provide professionals with an invaluable tool to support their creative practice. For example, a deep generative model could be used to generate variations of an original design, to find radically different designs that might never have been considered, or to blend multiple designs. However, harnessing this power means solving non-trivial problems such as meaningfully identifying specific points in the latent space (inversion) and navigating the latent spaces learned by these models (disentanglement).
We have developed a deep generative model for clothing designers with working solutions to these problems, which was featured as a demo at NeurIPS 2021 [1]. However, there is still much room for improvement. This project will involve identifying and implementing state-of-the-art methods for GAN inversion and latent-space disentanglement from recent publications at NeurIPS, ICLR, etc. Depending on time, interest, and motivation, there is also the possibility of implementing new types of generative models, such as diffusion models, or new functionalities, such as clothing try-on [2].
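For intuition, the sketch below shows the simplest form of optimization-based GAN inversion: gradient descent on a latent code so that the generator's output matches a target image. The `generator` is a placeholder for a pretrained model; published methods add perceptual losses, encoder-based initialization, and extended latent spaces (e.g., StyleGAN's W+).

```python
# Sketch: optimization-based GAN inversion. `generator` and `target`
# are placeholders: a pretrained model and an image tensor it can match.
import torch

def invert(generator, target, latent_dim=512, steps=500, lr=0.05):
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(generator(z), target)
        loss.backward()
        opt.step()
    return z.detach()

# z_hat = invert(generator, target_image)  # latent code reconstructing target
```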
We are seeking students interested in deep learning, deep generative modeling, and GAN inversion and latent-space disentanglement to join this project.
Contact: [email protected]
[1] https://neuripscreativityworkshop.github.io/2021/#/
[2] https://paperswithcode.com/task/virtual-try-on/latest
Probabilistic reasoning is a crucial skill for making good decisions, yet it is one that many people struggle with. Engaging in probabilistic modeling is a good way to improve probabilistic reasoning skills, but this practice requires a substantial background in mathematics and probability. Probabilistic programming languages such as Pyro, PyMC3, and TensorFlow Probability make it easier for people without this mathematical background to engage in probabilistic modeling. These languages make it possible to specify complex probability models by writing computer programs containing a mix of ordinary deterministic computation and randomly sampled values, together representing a generative process for data.
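For example, a minimal PyMC3 program for estimating the bias of a coin looks like the sketch below; note how the prior, the generative process for the data, and ordinary Python code sit side by side (the data and variable names are of course illustrative):

```python
# Sketch: a tiny probabilistic program in PyMC3, estimating a coin's bias.
import pymc3 as pm

flips = [1, 0, 1, 1, 0, 1, 1, 1]                    # observed data: 1 = heads

with pm.Model():
    theta = pm.Beta("theta", alpha=1, beta=1)        # prior over the bias
    pm.Bernoulli("obs", p=theta, observed=flips)     # generative process
    trace = pm.sample(1000, return_inferencedata=False)

print(trace["theta"].mean())  # posterior mean of the coin's bias
```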
Unfortunately, using these languages still requires an advanced understanding of programming. This project will build on prior work showing that general-purpose programming languages can be designed for novices (e.g., Scratch). We aim to extend this work to probabilistic programming languages, providing a way for novices in both programming and mathematics to meaningfully engage in probabilistic programming.
We are seeking students interested in programming language design, probabilistic programming, probabilistic modeling, and design for children to join this project.
Contact: [email protected]
Educational activities in Human-Robot Interaction (HRI) commonly do not take teachers into consideration in their design and evaluation. This is an important missing piece, since teachers are the ones who know their students best and who most need to know how their students are performing in such activities. The gap persists largely because the tools for programming such robots are opaque to non-expert programmers. This project aims to develop features for graphical user interfaces that let teachers design and evaluate such activities in an intuitive way. The interface will connect to existing algorithms that control the robot and automate the execution of these interactions. After the activities have run, the interface will show teachers the collected data using easily readable visualizations. The system should also be able to autonomously generate reports of the interactions. The GUI is expected to be validated with teachers through user testing and interviews.
Keywords: Python, Interface Design, UX.
Contact: [email protected]
Speech recognition algorithms have reached ever-higher accuracy over the last decades. However, since their performance depends mostly on the audio signal, accuracy drops markedly when only low-quality audio is available. This prevents most activities that depend on verbal communication from running autonomously. In environments where the noise level is high, such as buildings close to main avenues or classrooms during break time, autonomous verbal interaction with robots is therefore drastically affected. Recent studies have shown that adding visual analysis of speech through lip reading is a promising way to overcome this issue. In this project, Audio-Visual Speech Recognition (AVSR) algorithms will be improved and validated by users interacting verbally with social robots in noisy environments. The algorithm uses Convolutional Neural Networks (CNNs) to visually recognize visemes and combines them with the phonemes recognized from audio. ROS nodes are used to distribute the processing and perform the high-cost computations on a powerful server, making the system feasible in real time (for the user). In the experiments of this thesis, the robot will interact with users in noisy environments to test the accuracy of the algorithm and the user's acceptance of the system's performance.
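To make the distributed design concrete, the sketch below shows a minimal rospy node that subscribes to hypothetical audio and lip-reading hypothesis topics and publishes a fused transcript. The topic names and the naive fallback rule are illustrative assumptions standing in for the actual probabilistic viseme-phoneme fusion.

```python
# Sketch: a ROS node fusing audio and lip-reading hypotheses.
# Topic names and the naive late-fusion rule are assumptions.
import rospy
from std_msgs.msg import String

latest_lip = {"text": ""}

def on_lip(msg):
    latest_lip["text"] = msg.data          # latest lip-reading hypothesis

def on_audio(msg, pub):
    # Naive fusion: trust audio unless it is empty, then fall back
    # to the lip-reading hypothesis.
    fused = msg.data if msg.data.strip() else latest_lip["text"]
    pub.publish(String(data=fused))

if __name__ == "__main__":
    rospy.init_node("avsr_fusion")
    pub = rospy.Publisher("/speech/fused", String, queue_size=1)
    rospy.Subscriber("/speech/lipreading", String, on_lip)
    rospy.Subscriber("/speech/audio", String, on_audio, callback_args=pub)
    rospy.spin()
```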
Keywords: ROS, Machine Learning, Deep Learning, OpenCV, Distributed Systems.
Contact: [email protected]
Recent studies have shown that using narratives to approach biology content produces better results than purely informative sessions. Similarly, using social robots to guide interactive learning activities yields better engagement from young students compared to tablets or traditional (human-led) methods. The goal of this project is to develop the architecture of a social robot that combines these two strategies: a robot capable of verbally communicating with young students to tell narratives about biology content (chosen by teachers to be covered with their students). The development and validation of this system will be done in partnership with the Learning Science department of ETHZ. Natural Language Processing (NLP) algorithms for producing the narratives will be evaluated and, if they perform with acceptable accuracy for human understanding, they will also be applied and validated.
Keywords: Human-Robot Interaction, Reinforcement Learning, Genetic Algorithms, Natural Language Processing.
Contact: [email protected]
There are serious concerns about the fairness of machine learning methods used in high-stakes situations. Commonly used models have been shown to be biased or less accurate for women, minorities, and other vulnerable populations in healthcare, judicial, and educational settings. In this project, we focus on uncovering and mitigating algorithmic bias in university admissions.
University admissions is a complex area since biases can be introduced by both humans and algorithms. Because of this, mitigating bias requires bringing together methods from both machine learning and human-computer interaction (HCI). We are exploring machine learning methods to develop predictive models that are free from bias (e.g., models satisfying equalized odds, where the true-positive and false-positive rates are roughly the same for each subgroup), and HCI methods to develop and test interfaces and data visualizations that reduce the biases introduced by humans when choosing whom to admit.
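To make the fairness criterion concrete, the sketch below computes the gaps in true-positive and false-positive rates across subgroups; both gaps are zero under exact equalized odds. The arrays are toy placeholders, not admissions data.

```python
# Sketch: measuring equalized-odds gaps across subgroups.
import numpy as np

def rates(y_true, y_pred):
    tpr = y_pred[y_true == 1].mean()   # P(predicted admit | positive label)
    fpr = y_pred[y_true == 0].mean()   # P(predicted admit | negative label)
    return tpr, fpr

def equalized_odds_gaps(y_true, y_pred, group):
    per_group = {g: rates(y_true[group == g], y_pred[group == g])
                 for g in np.unique(group)}
    tprs, fprs = zip(*per_group.values())
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # toy labels
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])   # toy model decisions
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(equalized_odds_gaps(y_true, y_pred, group))  # (TPR gap, FPR gap)
```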
We are seeking students to join this project with any of the following interests: machine learning, Bayesian modeling, algorithmic fairness, human-computer interaction, data visualization, full-stack web development, and UX research.
Contact: [email protected]
A Braitenberg vehicle [1] is an agent that can autonomously move around based on its sensor inputs. It has primitive sensors that measure some stimulus at a point, and wheels (each driven by its own motor) that function as actuators. Depending on how sensors and wheels are connected, the vehicle exhibits different behaviors. This means that, depending on the sensor-motor wiring, it appears to strive to achieve certain situations and to avoid others, changing course when the situation changes. The simplest vehicles display four possible connections between sensors and actuators (ipsilateral or contralateral, and excitatory or inhibitory), producing four combinations with different behaviours named fear, aggression, liking, and love. These correspond to biological positive and negative reactions present in many animal species.
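For intuition, the sketch below simulates a minimal differential-drive vehicle with two point light sensors; swapping the wiring keyword switches between the four classic behaviours. The geometry and constants are arbitrary choices for illustration, and the name given to the fourth (contralateral, inhibitory) wiring follows the list above.

```python
# Minimal differential-drive Braitenberg vehicle near a light source.
# Geometry and constants are arbitrary illustration choices.
import numpy as np

def step(pos, heading, light, wiring, dt=0.1):
    # Two point sensors mounted left/right of the heading direction.
    def reading(a):
        sensor = pos + 0.2 * np.array([np.cos(a), np.sin(a)])
        return 1.0 / (1e-6 + np.sum((sensor - light) ** 2))
    left, right = reading(heading + 0.5), reading(heading - 0.5)
    if wiring == "fear":            # ipsilateral + excitatory: turns away
        v_left, v_right = left, right
    elif wiring == "aggression":    # contralateral + excitatory: charges at it
        v_left, v_right = right, left
    elif wiring == "love":          # ipsilateral + inhibitory: rests facing it
        v_left, v_right = 1 - left, 1 - right
    else:                           # "liking": contralateral + inhibitory
        v_left, v_right = 1 - right, 1 - left
    heading += (v_right - v_left) * dt            # wheel difference = turning
    speed = (v_left + v_right) / 2
    pos = pos + speed * dt * np.array([np.cos(heading), np.sin(heading)])
    return pos, heading

pos, heading = np.array([0.0, 0.0]), 0.0
for _ in range(200):
    pos, heading = step(pos, heading, np.array([2.0, 1.0]), "aggression")
print(pos)  # with "aggression" wiring the vehicle heads toward the light
```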
As part of the research efforts around our educational robot Cellulo, we at CHILI have built a tile-based tangible programming language in which children write programs by connecting puzzle-like command tiles such as “move 1 step”, “if then”, etc. The programming goal is to make a Cellulo robot perform a task (such as navigating a maze) that changes depending on the activity. The tiles are read by a second Cellulo robot and sent to a central computer or tablet, which interprets them and sends the commands to the first Cellulo robot to carry out the task.
In this project, the student will design and implement a learning activity consisting of two parts. In the first part, children will implement the basic behaviours of the Braitenberg vehicle. In the second part, children will be given a task in which they have to put the behaviours together to make the vehicle behave in a certain manner and accomplish the task. In this way, children will learn the power of object-oriented programming and of creating and using classes. They will also practice the computational thinking skills of breaking down large problems and of composing smaller solutions to solve big problems.
References:
[1] Hogg, D. W., Martin, F., & Resnick, M. (1991). Braitenberg Creatures. Cambridge, MA: Epistemology and Learning Group, MIT Media Laboratory.
Experience or interest in: Robot programming with Qt/QtQuick software, QML programming, working with git, educational activity design, user experience design.
Contact: [email protected], [email protected]
Teachers need detailed and actionable feedback on their performance in order to improve their teaching. Useful feedback includes how much time they spent lecturing versus how much time students spent on in-class activities, and the moments during lectures when the teacher was stressed. Classroom conversations are a great source for analyzing the interactions between teacher and students during class time.
The goal of this project is to provide teachers with automated feedback based on analysis of classroom conversations. The project has two parts for two students: 1) one student will work on a dataset of recorded conversations of teachers managing mathematical robotic classrooms and will use novel machine learning algorithms to identify teachers' orchestration patterns; 2) the other student will design the dashboard that visualizes the key moments of the class and elements of the teacher conversation analysis, and provides actionable feedback to teachers.
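As a minimal example of how part 1's output could feed part 2's dashboard, the sketch below turns diarized (speaker, start, end) segments into talk-time shares; the segment format is an assumed output of an off-the-shelf diarization pipeline, and the data is a toy placeholder.

```python
# Sketch: simple orchestration indicator: lecturing vs. activity time.
from collections import defaultdict

segments = [  # toy diarization output: (speaker, start_s, end_s)
    ("teacher", 0.0, 120.0),
    ("students", 120.0, 410.0),
    ("teacher", 410.0, 455.0),
]

talk_time = defaultdict(float)
for speaker, start, end in segments:
    talk_time[speaker] += end - start

total = sum(talk_time.values())
for speaker, t in talk_time.items():
    print(f"{speaker}: {t:.0f}s ({100 * t / total:.0f}% of class time)")
```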
Prerequisites: For part 1, experience with or interest in speech recognition, Python, machine learning, and Jupyter notebooks. For part 2, front-end design and development using Flutter.
Contact: [email protected], [email protected]