Previous Projects

Computation-Based Design Optimization for a Parallel Origami Robot

Section: Mechanical Engineering
Supervisors: Ziqiao Wang, Serhat Demirtaş, Dr. Yuhao Jiang, Prof. Jamie Paik (EPFL)
Number of Credits: 10-12 ECTS (300 hours) or Master Thesis
Timeline: Spring 2024

Computational robot design has become an essential part of modern robotics. A particularly promising direction within this domain is the use of multi-degree-of-freedom parallel mechanical structures, which enable continuum robotic arms and lay the groundwork for multifunctional robotic platforms. Despite this potential, the complexity of parallel configurations makes finding optimal designs challenging. Starting from the existing kinematic equations, we can establish the relationship between the workspace and the mechanical design parameters, and then optimize these parameters to maximize the workspace. The capstone of this optimization is automating the design process in software: given a desired scale (such as the maximum height of the robot), the tool should auto-generate patterns suitable for laser cutting.
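As a rough idea of the automation step, the sketch below maximizes a workspace measure over design parameters subject to a height constraint with SciPy; `workspace_volume` and `robot_height` are hypothetical placeholders standing in for the project's actual kinematic model.

```python
# Minimal sketch of the design-parameter optimization. `workspace_volume` and
# `robot_height` are hypothetical placeholders for the project's kinematic model.
import numpy as np
from scipy.optimize import minimize

def workspace_volume(params):
    """Placeholder: estimate the reachable workspace from design parameters
    (e.g., link lengths, base radius) by sampling the kinematic equations."""
    link_a, link_b, base_radius = params
    return link_a * link_b * base_radius   # toy surrogate so the sketch runs

def robot_height(params):
    """Placeholder: maximum robot height implied by the design parameters."""
    link_a, link_b, _ = params
    return link_a + link_b

max_height = 0.30                          # user-specified scale, in metres
x0 = np.array([0.10, 0.10, 0.05])          # initial guess for the design parameters

result = minimize(
    lambda p: -workspace_volume(p),        # maximize workspace = minimize its negative
    x0,
    bounds=[(0.02, 0.25)] * 3,
    constraints=[{"type": "ineq", "fun": lambda p: max_height - robot_height(p)}],
)
print("optimized design parameters:", result.x)
# The optimized parameters would then feed a pattern generator for laser cutting.
```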

Keywords: Computational Design Optimization, Parallel Origami Robotics, Automated Design Software Packages

Expected work:
• Conduct a comprehensive literature review on 3-DOF parallel pointing mechanical structures.
• Analyze and compare the advantages and disadvantages of these structures, discussing the feasibility of their origami-like implementation.
• Develop structure optimization algorithms based on kinematic equations.
• Conduct hardware experiments and evaluate the workspace.
• Based on the optimization algorithms, create code that can automatically generate pattern designs.

Contact: [email protected]

Teleoperation of 3-DoF Origami-Inspired Platform Using Interactive Surface

Section: Robotics, Mechanical Engineering, Microengineering 
Supervisors: Alexander Schuessler, Prof. Jamie Paik (EPFL)
Number of Credits: 10 ECTS (300 hours)
Type: 50% control, 30% mechanical design, 20% evaluation
Timeline: Spring 2024

Description: Interactive surfaces have the potential to improve user feedback during teleoperation by providing richer tactile information than current feedback joysticks. Origami-inspired structures are compliant and lightweight, which makes them well suited for safe human-robot interaction. Here, we use an origami-inspired 3-DoF platform for one-to-one mapping of teleoperation inputs to a manipulation device. The system will be applied to a manipulation scenario for surgical applications.

Expected work:

  • State of the art research on interactive surfaces for teleoperation
  • Mechanical design and prototyping of manipulation device based on existing 3-DoF platform (including trocar setup)
  • Implementation of a modular control interface using ROS 2 (see the sketch after this list)
  • Modeling of manipulation device based on existing kinematics model
  • Workspace analysis of manipulation device based on kinematics model and/or experimental validation
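For the ROS 2 control interface item, a minimal sketch of a mapping node is given below. The topic names, message types, and one-to-one mapping are illustrative assumptions, not the lab's actual interface; the real node would apply the platform's inverse kinematics.

```python
# Minimal sketch of a modular ROS 2 control interface (rclpy). Topic names and
# message types are illustrative assumptions, not the project's actual interface.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Point
from std_msgs.msg import Float64MultiArray

class TeleopMapper(Node):
    """Maps 3-DoF touch input from the interactive surface one-to-one
    to platform actuator commands."""
    def __init__(self):
        super().__init__('teleop_mapper')
        self.sub = self.create_subscription(Point, 'surface/input', self.on_input, 10)
        self.pub = self.create_publisher(Float64MultiArray, 'platform/command', 10)

    def on_input(self, msg: Point):
        cmd = Float64MultiArray()
        # Placeholder one-to-one mapping; replace with the platform's inverse kinematics.
        cmd.data = [msg.x, msg.y, msg.z]
        self.pub.publish(cmd)

def main():
    rclpy.init()
    rclpy.spin(TeleopMapper())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```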

Contact: [email protected]

Affective and Immersive Interactive Surfaces Augmenting Visual and Auditory Perception

Section: Mechanical Engineering
Supervisors: Serhat Demirtaş, Prof. Jamie Paik (EPFL)
Number of Credits: 10-12 ECTS (300 hours) or Master Thesis
Timeline: Spring 2024

Humanity is stepping into a world where reality and imagination seamlessly blend together as we push the boundaries of what is possible. Through cutting-edge technology, we strive to create immersive, tangible environments that not only captivate your visual and auditory senses but also respond to your physical presence, allowing you to interact with reconfigurable surfaces that can bear the weight of humans, evoke a wide range of motion primitives, and convey emotions. Inspired by nature, we bring an artistic mindset to our effort to create a full-body experience of the inner world of a tree, evoking the corresponding feelings.

If you want to join us in building unforgettable user experiences, please contact us.

More information: http://www.ersinhanersin.co.uk/

Contact: [email protected], [email protected]

Robotic Eye

Section: Mechanical Engineering, Microengineering
Supervisors: Prof. Martin Rolfs (Humboldt-Universität zu Berlin), Prof. Jamie Paik (EPFL)
Number of Credits: 10-12 ECTS (300 hours) or Master Thesis
Timeline: Spring 2023

Humans actively sample information from the visual world, picking up new information with each glance. To do so, we make rapid eye movements (known as saccades) that place the fovea (the central portion of the retina with the highest density of photoreceptors and the highest resolution) at relevant locations in the scene that deserve closer inspection. While the human eye has three axes of rotation, its rotations are specified in a two-dimensional plane (Listing's plane), and its rapid and frequent movements follow stereotypical kinematic laws. The goal of this thesis project is to implement a robotic eye that can faithfully reproduce this behavior in real time.
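As a concrete illustration of the kinematic constraint to be reproduced, the sketch below generates an eye orientation from 2D gaze angles with the rotation axis confined to Listing's plane. The frame conventions (primary gaze direction along +z, Listing's plane as the x-y plane) are assumptions for illustration, not the project's specification.

```python
# Minimal sketch: eye orientation obeying Listing's law, assuming the primary
# gaze direction is +z and Listing's plane is the x-y plane.
import numpy as np
from scipy.spatial.transform import Rotation

def listing_rotation(azimuth_deg: float, elevation_deg: float) -> Rotation:
    """Rotation from the primary position to the desired gaze direction whose
    rotation axis lies in Listing's plane (no torsion component along +z)."""
    az, el = np.radians([azimuth_deg, elevation_deg])
    primary = np.array([0.0, 0.0, 1.0])
    gaze = np.array([np.cos(el) * np.sin(az), np.sin(el), np.cos(el) * np.cos(az)])
    axis = np.cross(primary, gaze)          # always lies in the x-y plane
    norm = np.linalg.norm(axis)
    if norm < 1e-9:                         # looking straight ahead: no rotation
        return Rotation.identity()
    angle = np.arccos(np.clip(np.dot(primary, gaze), -1.0, 1.0))
    return Rotation.from_rotvec(axis / norm * angle)

r = listing_rotation(20.0, -10.0)
print(r.as_rotvec())   # z-component is ~0: the rotation axis stays in Listing's plane
```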

The project is carried out in collaboration with Prof. Dr. Martin Rolfs (Humboldt-Universität zu Berlin), an expert in oculomotor control and active human perception. A physical implementation of a robotic model of the human eye would have an impact on various domains of active human perception research. Its applications would range from the systematic assessment of eye-tracking technology to closed-loop experiments on the perception of gaze behavior.

Check paikslab.com for more information.

Contact: [email protected]

Master Projects (with Logitech)

Shape-Tunable Mouse

Mechanical development of new designs with increased capabilities. Projects have been conducted on this topic already, but we would like to explore new paths. In this one, the transportability constraint would be less important and the shape movement more complex.

Adaptive Simulation Pedals

We would like to try adding kinesthetic haptics to our simulation pedals, meaning that when the player drives a car in the game, they can feel active feedback from the pedals (road, braking, ESP).

Low-Profile Scroll Wheel

Thin mice today use touch areas, which results in a loss of physicality. We would like a “flat” scroll wheel that keeps the physicality of a wheel while remaining small in height.

For any questions on these projects, contact Prof. Jamie Paik directly.

Origami Variable Stiffness Structure through Bistability

Section: Mechanical Engineering, Microengineering
Supervisors: Fabio Zuliani, Prof. Jamie Paik
Number of Credits: 10-12 ECTS (300 hours) or Master Thesis
Type: 20% theory, 30% design, 20% modeling, 30% prototyping
Timeline: Spring 2021

Description:
Several ways to vary the stiffness of origami structures are being investigated in the Lab. Bistable mechanisms allow a single system, in our case a single origami joint, to have different stiffness slopes depending on its folding state. Understanding, modeling and designing such a system will allow us to apply this concept to human-machine interactive structures with mechanical intelligence: structures that vary their stiffness automatically and purely mechanically, depending on position inputs.
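To illustrate why bistability yields folding-state-dependent stiffness, the sketch below uses a generic quartic double-well energy (an illustrative assumption, not the lab's joint model): the local stiffness is the second derivative of the stored energy, so each stable fold has its own stiffness while the transition region is compliant.

```python
# Minimal sketch: stiffness of a bistable joint modeled with a generic quartic
# double-well energy U(x) = k*(x^2 - a^2)^2. Local stiffness is d^2U/dx^2.
import numpy as np

k, a = 1.0, 1.0                     # energy scale and half-distance between stable states
x = np.linspace(-1.5, 1.5, 301)     # folding coordinate

U = k * (x**2 - a**2)**2            # stored elastic energy
stiffness = np.gradient(np.gradient(U, x), x)   # numerical d^2U/dx^2

for x_eq in (-a, a):
    i = np.argmin(np.abs(x - x_eq))
    print(f"stiffness near stable state x={x_eq:+.1f}: {stiffness[i]:.2f}")
# Analytically d^2U/dx^2 = 12*k*x**2 - 4*k*a**2: +8*k*a**2 at x = ±a and
# -4*k*a**2 at x = 0, i.e. stiff in either stable fold and unstable in between.
```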

Expected work:
• State-of-the-art of bistable mechanisms and bistable logic
• Understanding current physical models and design
• Propose various design ideas and challenge them
• Select the best design and create a scenario to make a prototype
• Define a scientific testing methodology to assess the design and validate the models
• Characterize the system to understand limitations and further steps

Origami Structures Stiffness Modeling

Section: Mechanical Engineering, Microengineering
Supervisors: Fabio Zuliani, Prof. Jamie Paik
Number of Credits: 10-12 ECTS (300 hours) or Master Thesis
Type: 20% theory, 40% modeling, 20% design, 20% testing
Timeline: Spring 2021

Description:
Origami structures are used in the Lab to create novel human-machine interactive devices that take advantage of their mechanical benefits. A strong emphasis is put on understanding how stiffness behaves and propagates through such structures, given their inherently thin and compliant construction.
This project aims to establish a stiffness modeling framework for the origami structures used in the lab and to generalize it to broader structures, in order to understand, optimize and control such structures, in particular with a view to using them for haptic feedback in interactive devices.
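As a very first building block of such a framework, the sketch below treats each fold as a linear torsional spring and shows how stiffness combines in series and in parallel; this is an illustrative simplification, not the lab's actual structural model.

```python
# Minimal sketch: how stiffness propagates when compliant fold joints are combined.
# Each fold is idealized as a linear torsional spring (illustrative assumption).
def series_stiffness(k_values):
    """Joints loaded in series: compliances (1/k) add."""
    return 1.0 / sum(1.0 / k for k in k_values)

def parallel_stiffness(k_values):
    """Joints sharing the load in parallel: stiffnesses add."""
    return sum(k_values)

folds = [0.8, 1.2, 2.0]   # torsional stiffness of each fold, N*m/rad (made-up values)
print("chain of folds (series):  ", series_stiffness(folds))
print("folds acting in parallel: ", parallel_stiffness(folds))
```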

Expected work:
• State-of-the-art of structural stiffness modeling
• Understanding current physical models and design constraints of existing structures
• Propose a broad approach to tackle general as well as more specific stiffness modeling
• Make a prototype to test and validate your modeling
• Define a scientific testing methodology to assess the design and validate the models
• Characterize the system to understand limitations and further steps for generalization

Modular robot pose estimation

Section: Machine Learning, Computer Vision, Robotics
Supervisors: Dr. Anastasia Bolotnikova, Kevin Holdcroft, Prof. Jamie Paik
Number of Credits: 10-12 ECTS (300 hours)
Type: 50% Machine Learning, 20% 3D modeling, 20% Computer Vision, 10% Image processing

Description:

The goal of this project is to develop a ROS package that takes a depth camera image stream as input and outputs an accurate 6D pose (position and orientation) of a modular triangular origami robot module (see photo above). The challenge of the project is to develop a system that achieves sub-millimeter accuracy in real-time 6D pose estimation of a robot module. The results of the 6D pose estimation will be used as feedback in vision-based robot control.
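One possible route for the pose computation (sketched below under illustrative assumptions about camera intrinsics and keypoint layout; it is not prescribed by the project) is to back-project the detected 2D keypoints with the depth image and rigidly align them to the known keypoint positions on the robot mesh using the Kabsch algorithm.

```python
# Minimal sketch: 6D pose from detected 2D keypoints + depth via rigid alignment
# (Kabsch). Camera intrinsics (fx, fy, cx, cy) and the keypoint layout on the
# module are illustrative assumptions.
import numpy as np

def backproject(kp_uv, depth, fx, fy, cx, cy):
    """Pixel keypoints and per-keypoint depth -> 3D points in the camera frame."""
    u, v = kp_uv[:, 0], kp_uv[:, 1]
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth], axis=1)

def kabsch(model_pts, cam_pts):
    """Rigid transform (R, t) such that cam_pts ≈ R @ model_pts + t."""
    mc, cc = model_pts.mean(axis=0), cam_pts.mean(axis=0)
    H = (model_pts - mc).T @ (cam_pts - cc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cc - R @ mc

# Usage (hypothetical values): keypoints detected by the network plus their depths,
# aligned to the module's known keypoint coordinates in its own frame.
# cam_pts = backproject(kp_uv, kp_depth, fx, fy, cx, cy)
# R, t = kabsch(model_keypoints_3d, cam_pts)
```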

Expected work:

  • Train a neural network to detect 2D semantic keypoints of a modular robot block
    • Automatically generate an accurately annotated dataset of synthetic images using the known robot mesh in a 3D environment (updating the mesh for a more realistic appearance);
    • Train the convolutional neural network model to detect connected keypoints of a modular robot block in an image;
    • Evaluate accuracy and time performance of the 2D keypoint position estimation on real data;
    • (if accuracy is not satisfactory) experiment with dataset augmentation, and/or collect a dataset of real annotated images;
    • (if time performance is not satisfactory) simplify neural network structure to achieve faster inference.
  • Combine the 2D keypoint position estimates with depth camera data to estimate the 6D pose of the modular robot block.
    • Deal with the sensor noise to achieve a smooth estimate of the pose.
  • Implement and document the ROS package that takes camera image stream as input and publishes results of the 6D pose estimation to a ROS topic.
  • Integrate and test the developed system in real-time visual feedback based control of a modular robot driving on the ground and connecting to other modules.

Adaptive Control for Cellulo-Mori [Collaboration between CHILI and RRL]

Section: Mechanical Engineering, Microengineering
Supervisors: Hala Khodr (CHILI, 75%), Kevin Holdcroft (RRL, 25%), Jamie Paik (RRL), Pierre Dillenbourg (CHILI)
Type: 25% theory, 30% design, 25% prototyping, 20% testing
Timeline: Fall 2021

We are investigating the advantages of combining the benefits of self-reconfigurable modular robots with those of educational swarm robots, ultimately aiming at (1) identifying novel, effective ways to teach humans about emergent behaviors and complex systems, and (2) exploring the mutual benefits emerging from the combination (which concurrently enhances the Human-Robot interaction capabilities of modular robots and the Robot-Robot interaction capabilities of educational robots).

This project seeks to unify the Cellulo robot, an omnidirectional mobile robot for education, with the MORI, a modular origami robot. Attaching one or more Mori modules on top of a Cellulo creates a varying mass that degrades the current robot position controller. The objective of this semester project is to implement a more robust, adaptive controller.
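To illustrate the kind of adaptation that could compensate for an unknown payload, the sketch below simulates a textbook Slotine-Li style adaptive tracking controller for a 1D cart of unknown mass. This is a generic example under simplifying assumptions, not the Cellulo's drive model or the controller to be delivered.

```python
# Minimal sketch: adaptive trajectory tracking for a 1D cart of unknown mass
# (a stand-in for a Cellulo carrying a varying Mori payload).
import numpy as np

dt, T = 0.001, 5.0
m_true = 1.8                        # unknown true mass (robot + payload)
m_hat = 0.5                         # initial mass estimate
lam, kd, gamma = 5.0, 8.0, 2.0      # controller and adaptation gains

x = v = 0.0
for step in range(int(T / dt)):
    t = step * dt
    xd, vd, ad = np.sin(t), np.cos(t), -np.sin(t)   # desired trajectory
    e, edot = x - xd, v - vd
    s = edot + lam * e               # combined tracking error
    a_ref = ad - lam * edot          # reference acceleration
    u = m_hat * a_ref - kd * s       # control force
    m_hat += -gamma * s * a_ref * dt # adaptation law: drives s (and hence e) to zero
    a = u / m_true                   # plant: m * x'' = u
    v += a * dt
    x += v * dt

print(f"final mass estimate: {m_hat:.2f} (true {m_true})")
print(f"final tracking error: {x - np.sin(T):.4f}")
```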

Contact: [email protected] and [email protected]

Design of a human interaction activity with Cellulo-Mori: Reconfigurable Modular Swarm Robots [Collaboration between CHILI and RRL]

Section: Mechanical Engineering, Microengineering
Supervisors: Hala Khodr (CHILI, 75%), Kevin Holdcroft (RRL, 25%), Jamie Paik (RRL), Pierre Dillenbourg (CHILI)
Type: 25% theory, 30% design, 25% prototyping, 20% testing
Timeline: Fall 2021

We are investigating the advantages of combining the benefits of self-reconfigurable modular robots with those of educational swarm robots, ultimately aiming at (1) identifying novel, effective ways to teach humans about emergent behaviors and complex systems, and (2) exploring the mutual benefits emerging from the combination (which concurrently enhances the Human-Robot interaction capabilities of modular robots and the Robot-Robot interaction capabilities of educational robots).

The objective of this project is to design an activity involving this new breed of robots. In this semester project, the student would brainstorm possible activities and applications, use and enhance the existing robot simulator developed in Unity, implement the activity in simulation, and, depending on progress, run a pilot experiment and test with real robots.

Contact: [email protected] and [email protected]

Omnidirectional navigation control of a triangular origami modular robot

Section: Robotics, Control, Navigation
Supervisors: Dr. Anastasia Bolotnikova, Kevin Holdcroft, Prof. Jamie Paik
Number of Credits: 10-12 ECTS (300 hours)
Type: 50% Modeling, 30% Simulations, 20% Testing

Description:

The goal of this project is to derive a formula for computing wheel speed commands for the triangular modular origami robot moving on the plane in an arbitrary direction. Currently, it is known how to move a module along a curved trajectory perpendicular to an edge (see plot above). The challenge of this project is to figure out how to set wheel speed commands for an omnidirectional motion. 

The high-level navigation command sent to the robot dictates how it should move on the plane in the X and Y directions and how it should rotate around the Z axis with respect to its own frame of reference. Denote this velocity command as the vector V_robot = (V_X, V_Y, W_Z)^T. This command needs to be translated into the wheel speed command that will produce the desired robot motion. Denote the wheel command as V_wheels = (V_right, V_left, V_back)^T (for the right, left and back wheels). The aim of this project is to derive the formula for computing the transformation matrix M that translates high-level velocity commands into wheel speed commands:

V_wheels = M · V_robot
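A common starting point, sketched below, is to treat the module as a standard three-omniwheel platform with wheels placed 120° apart around the centre and rolling tangentially; each row of M is then the wheel's rolling direction plus a lever-arm term for the rotation. The wheel placement and ordering are assumptions for illustration and may differ from the Mori 3's actual geometry.

```python
# Minimal sketch: wheel-speed mapping for a three-omniwheel platform with wheels
# 120 degrees apart, rolling tangentially (illustrative geometry).
import numpy as np

def omniwheel_matrix(wheel_angles_deg, d):
    """Each row maps (V_X, V_Y, W_Z) to one wheel's rolling speed.
    wheel_angles_deg: angular position of each wheel contact point around the
    module centre; d: distance from the centre to the wheel contact point."""
    rows = []
    for a in np.radians(wheel_angles_deg):
        # rolling direction is tangential: (-sin a, cos a); rotation contributes d*W_Z
        rows.append([-np.sin(a), np.cos(a), d])
    return np.array(rows)

# right, left and back wheels placed 120 degrees apart (illustrative ordering)
M = omniwheel_matrix([330.0, 210.0, 90.0], d=0.06)

V_robot = np.array([0.05, 0.0, 0.0])   # translate along +X at 5 cm/s, no rotation
V_wheels = M @ V_robot                  # (V_right, V_left, V_back)
print(V_wheels)
```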

Expected work:

  • Familiarize with the modular omnidirectional robotic platform
  • Study and model wheels-ground interaction
  • Derive formula for computing M (described above)
  • Test omnidirectional navigation in simulation
  • Test on real robot

Master thesis: Visual feedback based omnidirectional control of a Modular Origami Robot (MORI)

Section: Machine Learning, Computer Vision, Robotics, Control, Navigation
Supervisors: Dr. Anastasia Bolotnikova, Kevin Holdcroft, Prof. Jamie Paik
Number of Credits: Master thesis
Type: 50% Machine Learning and Computer Vision, 50% Control and Navigation

Description:

The two proposed semester projects (6D pose estimation and omnidirectional motion control) are related and can be combined into one master thesis. 

The great benefit of modular robotic platforms is that they can reconfigure into different structures. However, they cannot do so autonomously without very precise control, which in turn is not possible without robust sensor feedback.

The student will develop and test the visual feedback based omnidirectional Mori control. Addressing the 6D pose estimation and omnidirectional motion control challenges will enable modules to accurately and reactively navigate on a planar surface, avoid collisions and autonomously perform self-reconfiguration, a procedure that at the moment can only be done manually.

Expected work:

  • See “Expected work” of the two related semester projects “Modular robot pose estimation” and “Omnidirectional navigation control of a triangular origami modular robot”

Origami-inspired modular soft extra robotic limbs

Section: Robotics, Mechanical Engineering, Microengineering
Supervisors: Mustafa Mete, Dr. Anastasia Bolotnikova, Prof. Jamie Paik – Reconfigurable Robotics Lab
Number of Credits: 10 ECTS (300 hours)
Type: 30% electronics, 40% mechanical design, 15% control, 15% testing

Description:

Extra robotic limbs (XRLs) constitute one of the developing areas in wearable robotics, driven by recent advances in materials, sensors, actuators, and fabrication methods. They can provide task assistance and support and increase human dexterity. Origami-inspired modular XRLs composed of 3-DoF pneumatically actuated modules, “Pneumagamis”, are compliant, lightweight, and reconfigurable. They also enable safe human-robot interaction (HRI).

This project aims to develop a compact pneumatic setup, a connection mechanism between the modules, and their power and control signal sharing.

Expected work:

  • State-of-the-art of modular XRLs, power sharing, and connections
  • Compact electronics and pneumatic setup for modules
  • Locking mechanism between the modules (the attachment should be identical across modules)
  • Power and control signal sharing between the modules
  • Testing the connections and sharing with several modules

Design and modelling of TPU pouch SPAs for exosuit

Background: The Reconfigurable Robotics Lab is working on a soft exosuit for assisting the torso during forward bending and ground lifting (Fig. 1). The forces are applied on the wearer’s torso using soft pneumatic actuators (SPAs) made out of thermally bonded thermoplastic polyurethane (TPU) sheets. While the SPAs provide sufficient force (400 N) and displacement (7 cm), more work is needed in the modelling, fabrication and characterization of these SPAs.

Fig. 1: Current prototype of the soft exosuit for forward support. It consists of TPU pouch SPAs shown in the last image.

Materials and methods:

The three main tasks for the semester project are:

  • Iterate on the fabrication process to find a clean and robust method, and fabricate SPAs of five different sizes (5, 20, 50, 120, 200 cm2 surface area). Currently, some of them fail due to delamination at the tube connections. We have four different methods in the lab to make TPU SPA pouches, which can be assessed and compared with each other.
  • Characterization of the SPAs using either the Instron facility or the custom testing setup developed at RRL. Collect data of force vs. displacement for different pressures. Also, use pressure sensing mats for measuring pressure distribution.
  • Study existing literature and come up with a comprehensive model for the SPA force and displacement. Verify whether the model fits the measured data from testing (see the fitting sketch below).

The above will be useful to adapt the SPA design to meet the desired force and displacement.
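For the modelling task, one simple workflow is to fit a candidate force-displacement model to the characterization data per pressure, as in the sketch below. The model form and the data points are placeholders for illustration only, not measurements or the lab's validated SPA model.

```python
# Minimal sketch: fitting a force-displacement model to pouch-SPA test data.
# The model form and the sample data are placeholders, not real measurements.
import numpy as np
from scipy.optimize import curve_fit

def spa_force(x, F0, k, c):
    """Hypothetical model: force decays from a blocked force F0 as the pouch
    contracts by displacement x (metres), at a fixed supply pressure."""
    return F0 * np.exp(-k * x) - c * x

# Placeholder force-displacement samples at one pressure (replace with Instron data)
x_data = np.array([0.00, 0.01, 0.02, 0.03, 0.05, 0.07])
f_data = np.array([400., 310., 245., 190., 120., 70.])

params, _ = curve_fit(spa_force, x_data, f_data, p0=[400.0, 20.0, 100.0])
print("fitted parameters (F0, k, c):", params)
print("predicted force at 4 cm contraction:", spa_force(0.04, *params))
```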

 

Optimizing locomotion gaits of a modular robotic quadruped by changing polygon morphology

Section: Mechanical Engineering, Microengineering
Supervisors: Kevin Holdcroft, Jamie Paik
Type: 25% theory, 30% design, 25% prototyping, 20% testing
Timeline: Fall 2021

Description:

Self-reconfigurable modular robots (SRMRs) consist of individual autonomous components that combine to form a functioning system. Modules can change how they connect in order to optimally perform different tasks, or to perform tasks that are otherwise impossible. RRL has developed a novel modular robot inspired by origami, the Mori 3. In this project, the student will study how different morphologies of individual modules influence gaits, addressing the speed, distance, and longevity of the robot. The student will develop a model and then implement the gait physically on the robot as validation.
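A standard metric for comparing gaits and morphologies is the cost of transport, CoT = E / (m·g·d), the energy spent per unit weight per unit distance. The sketch below computes it for two gaits; the power, speed and mass values are placeholders, not Mori 3 measurements.

```python
# Minimal sketch: cost of transport (CoT) as a gait/morphology comparison metric.
# CoT = P / (m * g * v): average power per unit weight per unit speed.
def cost_of_transport(avg_power_w, speed_mps, mass_kg, g=9.81):
    """Dimensionless CoT for a gait with given average power and forward speed."""
    return avg_power_w / (mass_kg * g * speed_mps)

gaits = {
    "trot-like":  {"avg_power_w": 6.0, "speed_mps": 0.08},
    "crawl-like": {"avg_power_w": 3.5, "speed_mps": 0.03},
}
mass = 1.2  # assumed total robot mass in kg (placeholder)
for name, gait in gaits.items():
    cot = cost_of_transport(gait["avg_power_w"], gait["speed_mps"], mass)
    print(f"{name}: CoT = {cot:.2f}")
```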

Expected work:

  • State-of-the-art for quadrupedal gaits / modular robots.
  • Understanding existing simulation code bases.
  • Development of an effective gait / cost of transport (CoT) model, balancing speed and distance.
  • Execution of the locomotion code on the Mori 3, comparing simulation to reality.

Study of Sensor Fusion on a Modular Robot platform

Section: Mechanical Engineering, Microengineering
Supervisors: Kevin Holdcroft, Jamie Paik
Type: 40% theory, 40% design, 20% testing
Timeline: Fall 2021

Self-reconfigurable modular robots (SRMRs) consist of individual autonomous components that combine to form a functioning system. Similar to Lego, modules can change how they connect in order to optimally perform different tasks, or to perform tasks that are otherwise impossible. However, as each module adds additional peripherals, the chance of a single component failing increases, potentially crashing the robot. This project will address how to relate sensors across different modules, both to improve the accuracy of the readings and to add redundancy should a sensor fail. The student will implement and evaluate sensor fusion on RRL’s novel modular robot, the Mori 3.
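One simple baseline for relating redundant readings across modules, before moving to a full filter from the state of the art, is inverse-variance weighting: each sensor contributes in proportion to its reliability, and a failed sensor can simply be dropped. The sketch below illustrates this; the readings and variances are placeholders, not Mori 3 data.

```python
# Minimal sketch: inverse-variance weighted fusion of the same quantity measured
# redundantly by sensors on different modules (placeholder values).
import numpy as np

def fuse(measurements, variances):
    """Weight each reading by 1/variance; also returns the fused variance.
    If one sensor fails, drop it from both lists and the estimate degrades
    gracefully instead of crashing the controller."""
    w = 1.0 / np.asarray(variances)
    fused = np.sum(w * np.asarray(measurements)) / np.sum(w)
    return fused, 1.0 / np.sum(w)

readings = [12.3, 11.8, 12.6]   # degrees, from three connected modules
variances = [0.4, 0.9, 0.6]     # per-sensor noise variances
estimate, var = fuse(readings, variances)
print(f"fused estimate: {estimate:.2f} deg (variance {var:.2f})")
```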

Expected work:

  • State of the Art on sensor fusion
  • Clear methodology (Choice of fusion framework from SotA)
  • Model for relating sensors to each other
  • Execution with pre-recorded empirical data
  • Optional: Evaluation on the Mori