Master Student Projects

You can check the previous projects here.

Current projects for EPFL students in the 2026 spring semester (more will follow soon):

Learning-Based Inference of User Input on the Oripixel Platform

Supervisors: Dr. Yuhao Jiang, Ziqiao Wang, Prof. Jamie Paik

Description: To enable robots to understand and respond to human intent during contact-rich interactions, we aim to infer user-applied forces from minimal sensing on the Oripixel platform. The project investigates learning-based inference of human input by mapping short time sequences from a set of one-directional load cells at the platform base to the position and 3D magnitude of forces applied to the top surface. The work involves designing and training reinforcement learning models, building MuJoCo simulations for data generation and validation, and developing hardware data-collection and calibration workflows. Strong experience in Python and PyTorch is required; experience with MuJoCo and microcontrollers (Arduino/ESP32) is a plus.

Expected Work:

  • Develop a reinforcement learning–based framework to infer contact position and 3D force (Fx, Fy, Fz) from short load-cell time series.
  • Build MuJoCo simulations of the Oripixel platform to generate labeled datasets and perform domain randomization.
  • Develop sensor calibration, synchronization, and logging scripts; integrate with microcontrollers (Arduino/ESP32) as needed.
  • Establish evaluation metrics and protocols; validate models in simulation and on hardware, with sim-to-real analysis.
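
As a rough illustration of the sensing problem, a purely static baseline (a sketch for illustration, not the project's method; the corner layout and rigid-plate assumption are ours) can already recover the contact point and vertical force from four corner load cells via force and moment balance. The learned model would go beyond this by handling dynamics, short time windows, and the tangential components Fx and Fy:

```python
def infer_contact(cell_readings, half_width=0.1):
    """Static baseline: estimate contact point and vertical force.

    cell_readings: vertical forces [f00, f10, f01, f11] measured at the
    corners (-w, -w), (+w, -w), (-w, +w), (+w, +w) of a square platform
    with half-width w (illustrative layout). Returns (x, y, Fz).
    """
    f00, f10, f01, f11 = cell_readings
    # Force balance: the cell readings sum to the applied vertical force.
    Fz = f00 + f10 + f01 + f11
    # Moment balance about the platform centre locates the contact point.
    x = half_width * (-f00 + f10 - f01 + f11) / Fz
    y = half_width * (-f00 - f10 + f01 + f11) / Fz
    return x, y, Fz
```

A closed-form estimate like this can also serve as a sanity check for the simulation-generated labels before training.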

Contact: [email protected]

Multi-modal Perception and Docking for Self-Reconfigurable Modular Robots

Section: Robotics, Mechanical Engineering, Microengineering
Supervisors: Theodoros Papafotiou, Prof. Jamie Paik
Number of Credits: 10-12 ECTS (300 hours)
Type: 40% Computer Vision, 30% Control, 10% Mechanical Design, 20% Evaluation

Timeline: Spring 2026

Description: Self-Reconfigurable Modular Robots (SRMRs) are composed of autonomous robotic modules capable of connecting, disconnecting, and reorganizing into new structures. Achieving fully autonomous reconfiguration requires each module to dock with sub-millimeter precision relying only on its onboard perception capabilities. This project focuses on integrating visual sensors into DeMOS, a newly developed modular robot platform at RRL and the successor of the Mori3 origami robot. DeMOS currently includes a UWB antenna for relative positioning and a 9-DoF IMU for orientation and motion sensing. The goal is to design and implement a sensor fusion–based docking system that leverages the combined information from visual sensors, UWB, and IMU to achieve accurate and robust docking between an individual module and a module cluster (1+ modules). The final algorithm should be modular, lightweight, and ideally applicable to other SRMR platforms equipped with a similar sensor suite.

Expected work:

  • Conduct a state-of-the-art review on autonomous docking strategies in modular robotics, with emphasis on vision-based and sensor-fusion approaches.
  • Select and evaluate visual sensors suitable for DeMOS, and propose updates to its mechanical design for their integration.
  • Develop and document a docking algorithm that achieves sub-5 mm precision using Python 3 and integrates with the existing RRL software stack.
  • Optimize perception and control pipelines to minimize visual data requirements (e.g., pixel usage, frame rate) to meet the constraints of the onboard processor (Raspberry Pi Zero 2W).
  • Integrate and validate the system both in the Gazebo simulator and on the physical DeMOS hardware.
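
To sketch the sensor-fusion idea (illustrative only; the 1-D reduction along the docking axis, the state layout, and the noise values are our assumptions, not the DeMOS implementation), a small Kalman filter can propagate the module's distance to the dock from IMU acceleration and correct the estimate with each UWB range measurement:

```python
class DockingFilter:
    """1-D Kalman filter fusing IMU acceleration with UWB range.

    State: [distance to docking target, closing speed]. Process and
    measurement noise (q, r) are placeholders to be identified on hardware.
    """

    def __init__(self, d0, q=1e-3, r=1e-2):
        self.x = [d0, 0.0]
        self.P = [[1.0, 0.0], [0.0, 1.0]]
        self.q, self.r = q, r

    def predict(self, accel, dt):
        # Constant-acceleration propagation from the IMU reading.
        d, v = self.x
        self.x = [d + v * dt + 0.5 * accel * dt * dt, v + accel * dt]
        # P <- F P F^T + Q with F = [[1, dt], [0, 1]].
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        self.P = [
            [p00 + dt * (p10 + p01) + dt * dt * p11 + self.q, p01 + dt * p11],
            [p10 + dt * p11, p11 + self.q],
        ]

    def update(self, uwb_range):
        # UWB measures the distance directly: H = [1, 0].
        p00 = self.P[0][0]
        k0 = p00 / (p00 + self.r)
        k1 = self.P[1][0] / (p00 + self.r)
        innov = uwb_range - self.x[0]
        self.x = [self.x[0] + k0 * innov, self.x[1] + k1 * innov]
        self.P = [
            [(1 - k0) * self.P[0][0], (1 - k0) * self.P[0][1]],
            [self.P[1][0] - k1 * self.P[0][0], self.P[1][1] - k1 * self.P[0][1]],
        ]
```

In the full system, visual pose estimates would enter as additional measurement updates on a richer (6-DoF) state, but the predict/update structure stays the same.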

Contact: [email protected]

Mechanically Intelligent Origami: Additive Manufacturing and Design Optimization

Section: Robotics, Mechanical Engineering, Microengineering
Supervisors: Dr. Hwayeong Jeong, Prof. Jamie Paik – Reconfigurable Robotics Lab
Number of Credits: 10 ECTS (300 hours)
Type: 80% mechanical design and testing, 20% state-of-the-art research
Requirements: Independent working and learning, CAD modeling, FDM 3D printing, prototyping

Timeline: Spring 2026

Description: This project investigates how 3D printing can enable functional origami-inspired structures capable of folding, deploying, and transforming shape. The work includes designing printable hinge mechanisms, evaluating and improving printed prototypes, optimizing fabrication parameters for specific geometries, and developing origami folding patterns tailored to the desired mechanical behavior. The overall aim is to create simple, lightweight, and reliable fabrication strategies for complex origami structures applicable to robotics, soft mechanisms, and deployable systems. Using these methods, the project also seeks to develop origami platforms with embedded mechanical intelligence.

Expected work:

  • Review the state of the art in 3D-printed origami structures
  • Fabricate hinges and foldable panels with FDM 3D printers, optimizing print parameters
  • Test and characterize folding behavior
  • Develop origami structures with embedded mechanical intelligence

Contact: Dr. Hwayeong Jeong