AI4Environmental Processes

Multi-agent Reinforcement Learning for Robotic Construction
Collaboration Grant VI

The laboratories of Prof. Stefana Parascho (ENAC) and Prof. Maryam Kamgarpour (STI) will be hosting the 6th CIS collaboration grant.

Abstract: Our proposal aims to increase robots’ contribution to a sustainable built environment by expanding their autonomy. Multi-robot assembly has shown great potential for the efficient construction of structures in controlled environments. The proposed increase in autonomy would enable multi-robot teams to address construction in unknown or dangerous environments (disaster areas, extreme climates, or dense urban sites) and to work with existing (reused) or unprocessed construction material. Our approach to increasing robot autonomy in construction is based on developing multi-agent reinforcement learning theory and algorithms for autonomous multi-robot assembly and construction, and on transferring them to the physical world. Reinforcement learning has great potential to solve the highly complex problems of multi-robot assembly: sequencing, path planning, and task allocation. However, the assembly application brings additional theoretical and algorithmic challenges for multi-agent reinforcement learning. These include very sparse rewards that capture only task completion, heterogeneous state information across robots because each robot relies on its own local sensors, and a very large state space arising from all possible configurations of the assembly task. At the same time, shifting decision-making from designer to machine opens questions about design and construction practice. Building on the complementary expertise of our two labs, our goal is to advance theory and numerical algorithms to address these challenges and to transfer them to our multi-robotic assembly testbed at EPFL. This will result not only in novel methods for assembly but also in new design approaches for architectural construction.
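To make the listed challenges concrete, below is a minimal, hypothetical Python sketch of a toy multi-robot assembly environment. It is not part of the proposal or any lab codebase; the class name (ToyAssemblyEnv), the 1-D slot grid, the action set, and all parameters (GRID, SENSE_RADIUS, etc.) are illustrative assumptions. It only shows how sparse task-completion rewards, per-robot local observations, and a combinatorially large configuration space arise in an assembly setting.

```python
# Hypothetical toy environment illustrating the abstract's challenges:
# sparse reward, local per-robot observations, large configuration space.
import random

N_ROBOTS = 2       # assumed team size
N_BLOCKS = 4       # assumed number of target slots to fill
GRID = 8           # assumed 1-D grid of candidate placement slots
SENSE_RADIUS = 1   # each robot only senses nearby slots (local sensing)


class ToyAssemblyEnv:
    """Robots move on a 1-D grid and place blocks into target slots.

    The reward is sparse: +1 only when the whole target structure is complete.
    """

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.target = set(self.rng.sample(range(GRID), k=N_BLOCKS))
        self.reset()

    def reset(self):
        self.placed = set()  # slots already filled
        self.pos = [self.rng.randrange(GRID) for _ in range(N_ROBOTS)]
        return self._observations()

    def _observations(self):
        # Heterogeneous, local observations: each robot sees only the slots
        # within SENSE_RADIUS of its own position, not the global state.
        obs = []
        for p in self.pos:
            window = range(max(0, p - SENSE_RADIUS),
                           min(GRID, p + SENSE_RADIUS + 1))
            obs.append(tuple((s, s in self.placed, s in self.target)
                             for s in window))
        return obs

    def step(self, actions):
        # actions[i] is -1, 0, +1 (move) or "place" (an assumed action set).
        for i, a in enumerate(actions):
            if a == "place":
                if self.pos[i] in self.target:
                    self.placed.add(self.pos[i])
            else:
                self.pos[i] = min(GRID - 1, max(0, self.pos[i] + a))
        done = self.placed == self.target
        reward = 1.0 if done else 0.0  # sparse task-completion reward
        return self._observations(), reward, done


if __name__ == "__main__":
    env = ToyAssemblyEnv()
    obs, done, t = env.reset(), False, 0
    while not done and t < 200:
        # Random joint actions stand in for a learned multi-agent policy.
        acts = [random.choice([-1, 0, 1, "place"]) for _ in range(N_ROBOTS)]
        obs, reward, done = env.step(acts)
        t += 1
    print(f"completed={done} after {t} steps, final reward={reward}")
```

Even in this toy setting, the joint configuration space grows exponentially with the number of slots and robots, and a random policy almost never receives the single completion reward, which is precisely why the proposal targets new multi-agent reinforcement learning theory and algorithms.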


Associated Labs

The Lab for Creative Computation (CRCL) acts at the interface of design, digital technologies, and construction. We explore new construction modes that combine robotics with human intervention and digital media, in pursuit of more creative solutions to contemporary design and construction challenges.

Our focus is on advancing the fundamental understanding of multi-agent decision-making in uncertain and dynamic environments. Towards this vision, we develop methods in game theory, distributed control, and stochastic and data-driven safe control. Our theoretical work is motivated by applications ranging from transportation and power grid systems to rescue robotics. Please check out some of our publications for further details.

Opportunity

Postdoc in Multi-agent Reinforcement Learning for Robotic Construction

Contact

For more information, please contact: [email protected]