Student Projects

If you are interested in one of the projects below (Semester Projects, Master Projects at EPFL, or Master Projects in Industry), please contact the first person of reference indicated in each description by telephone, by email, or by visiting us directly at the LASA offices.

Semester Projects

Kinodynamic motion planning for safe manipulator throwing

The objective of this project is to develop a kinodynamic planning pipeline for safe mobile manipulator throwing in a constrained environment. Given a static environment, e.g. a warehouse with fixed shelves, the motion planner is expected to generate a dynamically feasible, collision-free motion from an initial state to a rest state while passing through a given throwing state. Compared to conventional motion planning problems, where the goal is to generate collision-free motion towards a static goal state, our kinodynamic planning problem faces a fundamental trade-off between torque limits and space limits: due to limited torque, the robot requires enough free space to accelerate to the throwing state with non-trivial velocity and then decelerate.

Kinodynamic planning is notoriously difficult [Donald et al., 1993], and in practice standard sampling-based methods need minutes to find a feasible and safe trajectory [Kunz et al., 2014; Zhang et al., 2012]. However, in our setup with a static environment, much of the computation can be done offline, e.g. learning an implicit distance function in joint space to represent the static environment [Koptev et al., 2021]. Combined with parallelized online distance queries [Koptev et al., 2022], the planning time can be greatly reduced, giving the robot high agility and adaptability. In this project, we would like to investigate the application of implicit distance functions in bidirectional sampling-based planning methods to achieve obstacle-avoiding throws; a minimal sketch of such a distance query is given below.
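The sketch below illustrates what such an online query could look like. It is an illustration only: dist_net is a hypothetical pretrained network mapping joint configurations to the minimum robot-environment distance, and the collision check batches all samples along a joint-space segment into a single forward pass.

```python
# Minimal sketch (hypothetical interface, not the lab's code) of parallelized
# collision checking with a learned implicit distance function.
# `dist_net` is assumed to be a pretrained torch module mapping joint
# configurations q of shape (B, 7) to minimum distances of shape (B, 1).
import torch

def segment_is_collision_free(dist_net, q_from, q_to, n_samples=64, margin=0.03):
    """Check a straight joint-space segment by querying all samples in one batch."""
    alphas = torch.linspace(0.0, 1.0, n_samples).unsqueeze(1)               # (N, 1)
    qs = (1 - alphas) * q_from.unsqueeze(0) + alphas * q_to.unsqueeze(0)    # (N, 7)
    with torch.no_grad():
        d = dist_net(qs).squeeze(-1)                                        # (N,)
    return bool((d > margin).all())
```

Such a check could be called inside the extend/connect step of a bidirectional RRT to reject edges that pass closer than `margin` to the shelves.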

 

 

Learning object flying dynamics for throwing

The objective of this project is to learn the flying dynamics of objects from data, to be used for accurate robotic throwing. Predicting the object's flying trajectory, and hence determining the gripper release state, is very difficult to solve analytically, as free-flying dynamics is highly nonlinear and involves non-uniform mass distribution, aerodynamics, and rotation-translation coupling. It is therefore natural to learn the object flying dynamics from data. This project builds upon previous work at LASA [1], where the flying dynamics was used for robotic catching [2] and modelled with Support Vector Regression (SVR), whose computational efficiency enables online object motion prediction. In robotic throwing [3], by contrast, the object flying dynamics is used offline, so computationally more expensive generative models, e.g. Gaussian Processes (GPs) [4], can be used to encode the randomness of object flight. Via generative modeling, we could analyze the risk of certain throwing configurations and answer questions like “What is the most reliable throw?”

Ongoing work on mobile manipulator throwing

Approach

  • Gaussian Process (GP) modeling of object flying dynamics
  • Design an appropriate prior for efficient learning, e.g. projectile dynamics that considers only gravity (see the sketch below)
  • Design an appropriate kernel function that encodes the SE(3) motion of the object
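As an illustration of the first two points, the sketch below fits a GP to the residual between measured accelerations and a gravity-only projectile prior. The data and dimensions are placeholders, and the SE(3) kernel design of the third point is not addressed here.

```python
# Minimal sketch (assumed setup, not the project code): learn object flying
# dynamics as a GP residual on top of a projectile (gravity-only) prior mean.
# State x = [position, velocity] in R^6; the GP predicts the acceleration
# residual that the prior misses (drag, rotation-translation coupling, ...).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

g = np.array([0.0, 0.0, -9.81])

def projectile_accel(states):
    """Prior mean: gravity only."""
    return np.tile(g, (len(states), 1))

# Placeholder training data: states X (N x 6) and measured accelerations A (N x 3),
# e.g. obtained by finite-differencing motion-capture trajectories.
X = np.random.randn(200, 6)
A = projectile_accel(X) + 0.05 * np.random.randn(200, 3)

residuals = A - projectile_accel(X)                     # learn only what the prior misses
kernel = RBF(length_scale=np.ones(6)) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, residuals)

def predict_accel(state):
    """Posterior acceleration = projectile prior + learned residual (with uncertainty)."""
    mean, std = gp.predict(np.atleast_2d(state), return_std=True)
    return projectile_accel(np.atleast_2d(state)) + mean, std
```

The predictive uncertainty returned by the GP is what would later allow reasoning about the reliability of a given throwing configuration.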

Expectations from the student

  • Understand Gaussian processes and Gaussian process regression, in particular the role of the kernel function therein [4]
  • Test the developed algorithm in simulation and benchmark its performance for modelling the flying dynamics of various small objects
  • (Optional) Active learning via Bayesian Optimization for the throwing task

References

[1] S. Kim and A. Billard, “Estimating the non-linear dynamics of free-flying objects”, Robotics and Autonomous Systems, vol. 60, no. 9, pp. 1108–1122, 2012.

[2] S. Kim, A. Shukla and A. Billard, “Catching objects in flight”, IEEE Transactions on Robotics, vol. 30, no. 5, pp. 1049–1065, 2014.

[3] Y. Liu, A. Nayak and A. Billard, “A Solution to Adaptive Mobile Manipulator Throwing”, arXiv preprint arXiv:2207.10629, 2022.

[4] J. Kocijan, “Modelling and control of dynamic systems using Gaussian process models”, Springer, 2016.

Project: Semester Project
Period: Sept 21, 2022 – Jan 31, 2023
Type: 30% Theory, 40% Software, 30% Robotic Implementation
Knowledge: Programming skills: Python; knowledge of ROS desired but not required
Subject: Robotics, Machine Learning, Dynamic Manipulation
Supervision: Yang Liu, Aradhana Nayak
Availability: Not available

 

Safe Sim2Real transfer for Robot Dynamic Manipulation – Applied to Robotic AirHockey

The project aims to reduce the gap between simulation and reality in robot behaviour. A dynamical system used for motion generation may not be safe when executed on the real robot. In impact-aware manipulation, the robot interacts with the environment at non-zero relative speed, but it also needs to stay safe and not damage itself.

Approach:

The scenario here is a KUKA LBR iiwa arm pushing or hitting an object to place it outside of its reachable workspace. The project aims to create a differentiable system that reduces the sim2real transfer error by learning, or converging to, a set of parameters for friction, coefficient of restitution, mass errors, etc. (see the sketch below), and to compare this system with RL-based sim2real transfer (SafeAPT).
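As an indication of what “differentiable” means here, the sketch below identifies a single friction parameter by gradient descent through a differentiable rollout of a 1-D sliding puck. The recorded trajectory file and the toy dynamics are assumptions; the same idea would extend to restitution and mass errors in the air-hockey setting.

```python
# Minimal sketch (assumed setup, not the project code): identify a contact
# parameter by backpropagating through a differentiable simulation rollout.
import torch

dt, T = 0.01, 200
x_real = torch.load("real_trajectory.pt")        # hypothetical recorded puck positions, shape (T,)

log_mu = torch.tensor(-2.0, requires_grad=True)  # optimize log-friction so mu stays positive
opt = torch.optim.Adam([log_mu], lr=0.05)

def rollout(mu, x0=0.0, v0=2.0):
    """Forward-Euler rollout of a 1-D puck with smooth Coulomb-like friction."""
    x, v, xs = torch.tensor(x0), torch.tensor(v0), []
    for _ in range(T):
        v = v - dt * mu * 9.81 * torch.tanh(50.0 * v)   # smooth sign(v) keeps gradients usable
        x = x + dt * v
        xs.append(x)
    return torch.stack(xs)

for step in range(300):
    opt.zero_grad()
    loss = torch.mean((rollout(log_mu.exp()) - x_real) ** 2)
    loss.backward()
    opt.step()

print("identified friction coefficient:", log_mu.exp().item())
```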

AirHockey simulation
Project: Semester Project
Period: Sept 21, 2022 – Jan 31, 2023
Type: 33% Theory, 33% Software, 33% Robotic Implementation
Knowledge: Programming skills: Python/C++; ROS
Subject: Robotics, Control, Machine Learning, Linear Algebra
Supervision: Harshit Khurana, Farshad Khadivar
Availability: Available

 

A Combined Approach for Motion Learning and Obstacle Avoidance

Project: Semester (or Master) Project
Period: 15.02.2022 – 01.06.2022
Type: 20% theory, 40% simulation & software, 40% robot implementation
Knowledge: Programming skills: Python (MATLAB/C++); ROS; Machine Learning; Control; Linear Algebra
Subject: Robotics, Software, Control, Machine Learning, Linear Algebra
Supervision: Lukas Huber ([email protected]), Prof. Aude Billard

Introduction

Robots navigating in human-inhabited, unstructured environments have to plan or learn an initial path in advance, yet they constantly encounter disturbances. Within milliseconds, a flexible yet safe control scheme must make the right decisions to avoid collisions. Dynamical systems (DS) have proven to be an ideal framework for such dynamic environments with robot arms [1]. DS allow fast evaluation, enabling collision avoidance in real time [2], [3]. Moreover, they can be used for complex but safe motion learning [4].
Motion learning and collision avoidance are often regarded as two independent problems, yet they have many similarities. This project investigates the unification of motion learning and obstacle avoidance.

Goal of the Project

The goal of this project is to take existing algorithms for obstacle avoidance and motion learning and apply them to existing as well as newly generated data, and finally to a collaborative robot arm.
• Use the dynamical-system-based motion learning and obstacle avoidance framework (a minimal sketch of the avoidance modulation follows this list)
• Adapt the obstacle avoidance algorithm to three-dimensional environments
• Test and record motion with a 6-degree-of-freedom robot arm
• Evaluate and document the results
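The sketch below illustrates the DS modulation idea from [2] in its simplest 2-D form: a linear DS towards an attractor is multiplied by a modulation matrix built from a circular obstacle’s distance function, so that the flow is deflected around the obstacle. It is a simplified illustration, not the project codebase.

```python
# Minimal 2-D sketch of DS-based obstacle avoidance by modulation (simplified from [2]).
import numpy as np

def modulated_velocity(x, attractor, center, radius):
    f = -(x - attractor)                                  # nominal DS: converge to the attractor
    r = x - center
    gamma = (np.linalg.norm(r) / radius) ** 2             # distance function, > 1 outside the obstacle
    n = r / (np.linalg.norm(r) + 1e-9)                    # outward normal of the obstacle
    t = np.array([-n[1], n[0]])                           # tangent direction
    E = np.column_stack((n, t))
    D = np.diag([1.0 - 1.0 / gamma, 1.0 + 1.0 / gamma])   # cancel normal motion, boost tangential motion
    return E @ D @ np.linalg.inv(E) @ f

# Integrate a trajectory that starts behind a circular obstacle and flows around it.
x = np.array([-2.0, 0.05])
obstacle_center, obstacle_radius = np.array([-1.0, 0.0]), 0.3
for _ in range(500):
    x = x + 0.01 * modulated_velocity(x, np.zeros(2), obstacle_center, obstacle_radius)
```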

Expected Learning for the Student

• Gain a deep understanding of dynamical-system-based control and learning algorithms
• Test in simulation and in real-life scenarios with a robot arm
• Learn and apply research methods in robotics

References

[1] K. Kronander and A. Billard, “Passive interaction control with dynamical systems,” IEEE Robotics and Automation Letters, vol. 1, no. 1, pp. 106–113, 2015.
[2] L. Huber, A. Billard, and J.-J. Slotine, “Avoidance of convex and concave obstacles with convergence ensured through contraction,” IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 1462–1469, 2019.
[3] L. Huber, J.-J. Slotine, and A. Billard, “Avoiding dense and dynamic obstacles in enclosed spaces: Application to moving in a simulated crowd,” arXiv e-prints, arXiv–2105, 2021.
[4] S. M. Khansari-Zadeh and A. Billard, “Learning stable nonlinear dynamical systems with Gaussian mixture models,” IEEE Transactions on Robotics, vol. 27, no. 5, pp. 943–957, 2011.

 

Designing a Scalable, Modular Framework for a Foot-Interface System

Description:

The HASLER project tackles the design and implementation of a four-arm system with application to laparoscopic surgery [1, 2]. This system enables surgeons to control all the instruments needed in a laparoscopic intervention by themselves, using their feet and hands. The proposed and implemented structure includes robotic arms and foot interfaces, and its feasibility has been demonstrated [2].

Our objective in this project is to optimize the architecture of the foot interfaces in terms of cabling, communication protocols, number of microcontrollers, and modularity of the current software. Hence, the selected candidate will design a new modular and scalable software/hardware solution. He or she will need to experimentally prove the validity of this structure using the provided sensors (F/T sensor, switches, FSR sensors), microcontrollers (STM-based boards), and the other required parts.

If you are interested in this project, please send your CV and one sample C++ project (as a zip file or a link to a GitHub repository) to Louis Munier ([email protected]).

Skills:

– C++ programming language and object-oriented programming

– Robot operating system (ROS) [knowing micro-ROS is a plus]

– Embedded software/hardware programming (with STM-based boards)

References:

[1] https://www.epfl.ch/labs/lasa/4hands/

[2] Amanhoud, Walid, et al. “Contact-initiated shared control strategies for four-arm supernumerary manipulation with foot interfaces.” The International Journal of Robotics Research 40.8-9 (2021): 986-1014

Project: Semester project
Period: Sept 21, 2022 – Jan 31, 2023
Type: 10% theory, 70% implementation, 20% experimentation
Knowledge: Programming skills: C++ (OOP); ROS; embedded software/hardware programming (with STM-based boards)
Subject: Teleoperation, embedded systems, programming
Supervision: Soheil Gholami (ME A3 434, [email protected]) and Louis Munier (MED 3 1015, [email protected])
Availability: Available

Microsurgical Skills Assessment using ALI-scores and Learning Algorithms

Description:

There is a recent trend in surgical training towards objectively assessing trainees’ skills and performance instead of using traditional methods that suffer from subjectivity and bias. This assessment is essential to provide practical feedback and to estimate their proficiency level (from naive to expert). Microsurgical operations, in particular, require safe, precise, and dexterous motions of the surgeon’s hands, which need additional attention during evaluation. One microsurgical procedure performed regularly in operating theaters is microsurgical anastomosis: when a blood vessel is cut, the two remaining ends can be sewn or stapled together (anastomosed) using the end-to-end surgical anastomosis technique. The best-known approach to scoring surgeons’ capabilities during this operation is the anastomosis lapse index (ALI) [1]. However, this method requires an expert surgeon to evaluate the performed operation by observing the result. Such participation is not always possible, given the busy schedules of these surgeons, and it still suffers from subjectivity and bias.

Hence, this project aims to systematically analyze the photos and videos taken during an anastomosis task. The first objective is to train and validate a model that grades each surgeon based on the ALI-score approach by looking at the saved multimedia information (classifying each participant as naive, intermediate, or expert); a minimal sketch is given below. The second aim is to extract crucial moments during the task (e.g., making a knot, cutting the thread). The last goal is to detect errors made by trainees and surgeons throughout the task.
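For the first objective, a natural baseline is transfer learning on the anastomosis photos. The sketch below fine-tunes the classification head of a pretrained CNN on a three-class dataset; the folder layout and training schedule are assumptions, not the project’s actual pipeline.

```python
# Minimal sketch (assumed data layout): grade anastomosis photos into
# naive / intermediate / expert by fine-tuning a pretrained CNN head.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("ali_photos/", transform=tfm)   # hypothetical subfolders: naive/, intermediate/, expert/
loader = torch.utils.data.DataLoader(data, batch_size=16, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False                                  # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 3)                # new head: three proficiency classes

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```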

If you are interested in this project, please send your CV and one sample “deep learning” or “image processing” project (as a zip file or a link to a GitHub repository) to Soheil Gholami ([email protected]).

Skills:

– Python programming language (OOP)

– Image and video processing techniques

– Deep learning

References:

[1] Ghanem, Ali M., et al. “Anastomosis lapse index (ALI): a validated end product assessment tool for simulation microsurgery training.” Journal of Reconstructive Microsurgery 32.03 (2016): 233-241.

Project: Semester Project
Period: Sept 21, 2022 – Jan 31, 2023
Type: 40% theory, 40% implementation, 20% experimentation
Knowledge: Python programming language (OOP); image and video processing techniques; deep learning
Subject: Surgical techniques assessment, computer vision, deep learning
Supervision: Soheil Gholami (ME A3 434, [email protected])
Availability: Available

Master Projects at EPFL

Hybrid Quadratic Programming – Pullback Bundle Dynamical Systems Control

Dynamical System (DS) based closed-loop control is a simple and effective way to generate reactive motion policies that generalize well across the robotic workspace while retaining stability guarantees. Lately, the formalism has been extended to handle curved spaces of arbitrary geometry, namely manifolds, beyond the standard flat Euclidean space. Despite the many ways proposed to handle DS on manifolds, it is still unclear how to apply such structures on real robotic systems. In this preliminary study, we propose a way to combine modern optimal control techniques with a geometry-based formulation of DS. The advantage of such an approach is twofold: first, it yields a torque-based control for compliant and adaptive motions; second, it generates dynamical systems consistent with the controlled system’s dynamics. The salient point of the approach is that the complexity of designing a proper constraint-based optimal control problem, to ensure that dynamics move on a manifold while avoiding obstacles or self-collisions, is “outsourced” to the geometric DS. Constraints are implicitly embedded into the structure of the space in which the DS evolves. The optimal control, on the other hand, provides a torque-based control interface and ensures dynamical consistency of the generated output. The whole can be achieved with minimal computational overhead, since most of the computational complexity is delegated to the closed-form geometric DS. A minimal sketch of such a torque-level optimal control layer is given below.
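As a rough illustration only, the sketch below solves a small QP that tracks a DS-demanded joint acceleration subject to torque limits, given the joint-space dynamics M(q) q̈ + h(q, q̇) = τ. The matrices are placeholders; the actual formulation in the project would add the geometric DS and further constraints.

```python
# Minimal sketch (placeholder dynamics): torque-level QP tracking a DS-desired acceleration.
import numpy as np
import cvxpy as cp

n = 7
M = np.eye(n)                    # placeholder joint-space inertia matrix from the robot model
h = np.zeros(n)                  # placeholder Coriolis + gravity terms
qdd_des = 0.1 * np.ones(n)       # acceleration demanded by the (geometric) DS
tau_max = 40.0 * np.ones(n)      # torque limits

qdd = cp.Variable(n)
tau = cp.Variable(n)
problem = cp.Problem(
    cp.Minimize(cp.sum_squares(qdd - qdd_des)),          # track the DS as closely as possible
    [tau == M @ qdd + h, cp.abs(tau) <= tau_max],        # dynamics consistency and torque limits
)
problem.solve()
print("commanded torques:", tau.value)
```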
Project: Master Project (LASA)
Period: Sept 21, 2022 – Jan 31, 2023
Section(s): ME MT SV EL IN MA
Type: 50% theory, 50% software
Knowledge(s): Programming skills (C++ / Python)
Subject(s): Differential Geometry, Machine Learning, Robot Planning, Control Theory
Responsible(s): Bernardo Fichera, Farshad Khadivar

 

Motion Planning under Whole-body Collision Avoidance for Multi-object Grasping

Description:

Grasping multiple objects is a very simple skill for humans but still challenging for robots. This project aims to develop a motion planning algorithm for pre-grasp motions towards multiple objects, where objects that have already been grasped must be treated as part of the robot for collision avoidance. The basic problem is how to generate collision-free motions that achieve the state transitions, given the initial states of the objects and the final grasp states of the robot and objects.

The student will start from our existing simulation environment (MuJoCo) for robot motion and grasping with a robot manipulator and an anthropomorphic hand. The basic torque controllers and other APIs for sending commands to the robot have already been implemented. The student will get familiar with them and then design a motion planning algorithm based on implicit distance functions between the robot and obstacles [1], which are represented by neural networks; sampling-based methods [2] or DS-based collision avoidance [3] will be considered as solutions (a minimal sketch of the combined distance query appears below). The student will test the algorithm in the simulator with different objects and will also be able to run it on the real robot.
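The sketch below illustrates one way grasped objects could be folded into the collision query: the clearance of a configuration is the minimum over the robot’s learned distance function and the distances of each held object, transformed through forward kinematics. All interfaces are hypothetical placeholders.

```python
# Minimal sketch (hypothetical interfaces): treat grasped objects as part of the
# robot by taking the minimum over robot and object clearances during planning.
import numpy as np

def combined_clearance(q, robot_dist, object_dists, fk_poses):
    """q: joint configuration; robot_dist(q) -> min robot-obstacle distance;
    object_dists[i](pose) -> min distance of held object i to obstacles;
    fk_poses(q) -> poses of the grasped objects at configuration q."""
    d = robot_dist(q)
    for obj_dist, pose in zip(object_dists, fk_poses(q)):
        d = min(d, obj_dist(pose))
    return d

def sample_valid(q_min, q_max, robot_dist, object_dists, fk_poses, margin=0.02):
    """Rejection-sample a configuration whose combined clearance exceeds the margin,
    e.g. for use as a node in a sampling-based planner."""
    while True:
        q = np.random.uniform(q_min, q_max)
        if combined_clearance(q, robot_dist, object_dists, fk_poses) > margin:
            return q
```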

Expected Experiences for the Student

Basic understanding of robotics and motion planning

References:

[1] Koptev, M., Figueroa Fernandez, N. B., & Billard, A. (2022). Implicit Distance Functions: Learning and Applications in Control. In International Conference on Robotics and Automation.
[2] Otte, M., & Frazzoli, E. (2016). RRTX: Asymptotically optimal single-query sampling-based motion planning with quick replanning. The International Journal of Robotics Research, 35(7), 797–822.
[3] Huber, L., Billard, A., & Slotine, J. J. (2019). Avoidance of convex and concave obstacles with convergence ensured through contraction. IEEE Robotics and Automation Letters, 4(2), 1462–1469.
Project: Master Thesis (LASA)
Period: Oct. 2022 – Feb. 2023
Section(s): ME MT SV EL IN MA
Type: 50% theory, 50% software
Knowledge(s): Programming skills (Python), Robotics, Neural networks
Subject(s): Motion planning; Collision avoidance
Responsible(s): Xiao Gao, Farshad Khadivar
URL:
Availability: Available

Swarm Furniture Obstacle Avoidance for Smart-Living Environment Assisting Wheelchair Navigation

Description:

This project is a step towards the development of a modular control architecture for collaborative robots in smart-living environments. The goal is to develop a simulation environment of this fully smart home (with mobile furniture) and to implement a control framework for the modular robotic system that facilitates the user’s navigation and obstacle avoidance through the smart home. The project will start by analyzing and drafting a swarm robot formation that could work under decentralised control, based on LASA’s obstacle avoidance through modulated dynamical systems [1]. The resulting controller should plan motions either to move away from or towards a person while respecting the environment’s constraints and avoiding collisions.

The test-bed simulation with modular robotic furniture will operate autonomously to improve the navigation efficiency of a pedestrian user or a smart-wheelchair user (the simulated semi-autonomous Qolo robot [2]). The resulting simulator and controller will serve as a baseline for further development of the human-robot interaction framework controlling specific behaviours of the modular robot furniture, such as a user commanding furniture to come closer or move further away.

This project is associated with the CIS Intelligent Assistive Robotics Collaboration Research Pillar. It is co-supervised by LASA and BioRob laboratories.

Approach

  1. Develop a test-bed environment (in Unity or Rviz) containing robotic furniture that incorporates physical/computational constraints and velocity control.

  2. Implement an obstacle avoidance DS-based algorithm on each robot object acting independently [1].

  3. Analyze the behaviour of the modular swarm acting independently and evaluate the performance of the swarm to provide a set of baseline results.

  4. Formulate a swarm controller for enhancing group response towards enhancing/facilitating user motion in the smart environment.

Expected Experiences for the Student

• Experience developing an experimental test-bed for robot swarm simulation.

• Learning state-of-the-art motion control through time-invariant dynamical systems for mobile robots.

• Experience with virtual robot motion planning for a robotic swarm.

• Experience with virtual robot dynamic control.

References:

[1] L. Huber, A. Billard, and J.-J. Slotine, “Avoidance of Convex and Concave Obstacles With Convergence Ensured Through Contraction,” IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 1462–1469, 2019. DOI: 10.1109/LRA.2019.2893676

[2] D. F. Paez-Granados, H. Kadone, and K. Suzuki, “Unpowered Lower-Body Exoskeleton with Torso Lifting Mechanism for Supporting Sit-to-Stand Transitions,” in IEEE International Conference on Intelligent Robots and Systems, (Madrid), pp. 2755–2761, 2018. DOI: 10.1109/IROS.2018.8594199

Project: Master Thesis (LASA/BioRob)
Period: February 2022 – July 2022
Section(s): ME MT SV EL IN MA
Type: 50% theory, 50% software
Knowledge(s): Programming skills (Python / C++ / Unity); Control Systems; ROS (a plus); Rviz (a plus)
Subject(s): Robot control; Control Theory; Simulation; Obstacle Avoidance
Responsible(s): Lukas Huber, Dr. Diego Paez, Dr. Anastasia Bolotnikova (BioRob)
URL: CIS project, GitHub code
Availability: Available

Master Projects in Industry

Data analysis of hand dexterity in microsurgery

Description:

Microsurgery demands superior dexterity in combination with excellent visuospatial skills that can be acquired only through long training periods. To improve our understanding of these skills, we follow a cohort of microsurgeons during short microsurgery courses conducted at the University Hospitals in Geneva (HUG).

In this master thesis, the student will participate in the design of the sensors to record hand position, force application, and movement changes. The student will start from an existing set-up that uses miniature pressure sensors mounted on gloves and covering the instruments, together with infra-red OptiTrack motion tracking. The student will design a new arrangement of the sensors to avoid visual obstruction; the addition of other measurement units such as EMG and RGB-D vision tracking will be considered. The student will actively participate in the statistical analysis of the data gathered to date and use this information to guide the placement of the sensors. The student will also participate in modeling the dynamics of hand motions and pressure, by developing graphical displays and by using machine learning techniques to model the temporal evolution of these signals (a minimal example is sketched below).
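As a trivial example of modelling the temporal evolution of such signals, the sketch below fits a linear autoregressive model that predicts the next pressure sample of one glove sensor from a short history; the data file is a placeholder.

```python
# Minimal sketch (assumed data): linear autoregressive model of one pressure signal.
import numpy as np
from sklearn.linear_model import Ridge

pressure = np.load("glove_pressure.npy")      # hypothetical (T,) signal from one glove sensor
lag = 20                                      # number of past samples used for prediction

X = np.stack([pressure[i:i + lag] for i in range(len(pressure) - lag)])
y = pressure[lag:]

model = Ridge(alpha=1.0).fit(X, y)
print("one-step-ahead R^2:", model.score(X, y))
```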

The MSc thesis is part of a collaboration between the EPFL Learning Algorithms and Systems Laboratory (LASA EPFL) under the supervision of Prof. Aude Billard and at the University Hospitals in Geneva under the supervision of Prof. Torstein Meling. It is expected that the student spends significant time at the HUG and at the EPFL.

Project: MSc in industry (LASA/HUG)
Period: Spring 2022 – Autumn 2022
Section(s): ME MT SV EL IN MA
Type: 40% theory; 40% software; 10% hardware
Knowledge(s): Programming skills (Python / C++ / Unity); Signal processing; ROS (a plus)
Subject(s): Machine Learning; Data Analysis; Signal processing; Simulation
Responsible(s): Prof. Aude Billard and Dominic Reber
URL:
Availability: Available

 

Imitation learning for delicate robotic motions in microsurgery

Introduction:

Microsurgery provides a challenging environment for robotic procedures due to the difficult conditions under microscopic vision and the delicate control of surgical tools. Further factors, such as soft-tissue interaction caused by various agents, have to be considered as well. Modeling such complex scenarios with classic tools is cumbersome and error-prone, often rendering an approach unsafe or impossible. However, recent achievements in machine learning show that learning these fine-grained motions instead is within reach and could enable robotic procedures in microsurgery.
 
This thesis is a collaboration with Carl Zeiss AG.
Turning today’s research into tomorrow’s applications – together
With a clear focus on user needs, the ZEISS Innovation Hub @ KIT explores new technology and application fields, enabling the transformation of ideas into innovations.
Here, students, researchers, entrepreneurs, and ZEISS employees meet at eye level and practice an open-innovation culture.

Approach
The student will work on an experimental setup consisting of a stereo-vision system for image guidance and a high-precision industrial robot for tool manipulation. He or she will set up an imitation learning system, using ROS or Matlab Simulink, that learns tool motions specific to a well-defined part of the surgery (a minimal behavioural-cloning sketch is given below). The thesis will include the acquisition of training data; further examples, including video data of expert surgeons, could enrich the data set for learning. The student will start with basic motions covering well-defined tasks during the surgery, and from there either adapt the methodology or move on to more complex steps.
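Under the assumption that demonstrations are stored as state-action pairs, a minimal behavioural-cloning baseline could look like the sketch below; the data format and regressor choice are illustrative only.

```python
# Minimal sketch (assumed data format): behavioural cloning of tool motions.
import numpy as np
from sklearn.neural_network import MLPRegressor

demos = np.load("expert_demos.npz")            # hypothetical: states (N x d), actions (N x 6 tool twists)
states, actions = demos["states"], demos["actions"]

policy = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=2000)
policy.fit(states, actions)

def act(state):
    """Return the commanded tool twist for the current state."""
    return policy.predict(state.reshape(1, -1))[0]
```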

Project: Master Thesis (Industry) or Internship
Period: Spring 2022 – Autumn 2022
Section(s): ME MT SV EL IN MA
Type: 30% theory; 60% software; 10% hardware
Knowledge(s): Programming skills (Python / C++ / Matlab); ROS a plus
Subject(s): Machine Learning; Robot Control; Simulation
Responsible(s): Prof. Aude Billard and Dominic Reber
URL: Zeiss InnoHub KIT
Availability: Available

 

Prototyping the flexible assembly production line using smart manipulators

Description

In industrial manufacturing, large series of identical products can be manufactured very efficiently on automated production lines with a one-piece flow. On the other hand, when series are very small, assisted manual assembly is the most cost-effective solution even in high-wage countries. The most challenging scenario, however, is the mid-volume production range, where a dedicated production line is not cost-effective, yet the labor intensity is too high for the work to be carried out economically by humans.

This gap can be filled by Industry 4.0-assisted flexible production lines equipped with smart manipulator robots.
In this project we will look into the feasibility of prototyping such a solution, in which manipulators would require minimal time to switch production from one product group to another. In the frame of the project, we will assess its feasibility and the correctness of the obtained results, and look for ways to improve it.

This project is in collaboration with Johnson Electric (JE) and is to be carried out in Murten and on campus.

Johnson Electric (JE) and EPFL are working together to develop a proof-of-concept tool which would solve the described problem.

Project: Master Thesis (Industry) or Internship
Period: Spring 2022 – Autumn 2022
Section(s): ME MT SV EL IN MA
Type: 30% theory; 60% software; 10% hardware
Knowledge(s): Programming skills (Python / C++ / Matlab); ROS a plus
Subject(s): Machine Learning; Robot Control; Simulation
Responsible(s): Prof. Aude Billard
URL:
Availability: Available