“Reinforcement Learning via Symmetries of Dynamics”
September 8, 2022 | Time 11:30am CET

Offline reinforcement learning (RL) leverages large datasets to train policies without interacting with the environment. The learned policies may then be deployed in real-world settings where interactions are costly or dangerous. Current algorithms overfit to the training dataset and consequently perform poorly when deployed to out-of-distribution variations of the environment. We aim to address these limitations by learning a Koopman latent representation, which allows us to infer symmetries of the system’s underlying dynamics. These symmetries are then used to extend the otherwise static offline dataset during training; this constitutes a novel data augmentation framework that reflects the system’s dynamics.
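To make the augmentation idea concrete, here is a minimal, hypothetical sketch in Python (NumPy) on a toy linear system: latent dynamics governed by a Koopman matrix `K`, linear maps `G` that commute with `K` playing the role of symmetries, and logged transitions mapped through `G` to produce new dynamics-consistent training data. The names `encoder`, `decoder`, `K`, and `augment`, as well as the diagonal form of `K`, are illustrative assumptions, not the method presented in the talk.

```python
import numpy as np

# Minimal sketch of symmetry-based data augmentation in a Koopman latent
# space, on a toy linear system. All names here are illustrative
# assumptions, not the speaker's actual implementation.

rng = np.random.default_rng(0)
latent_dim = 4

# Stand-in for a trained Koopman model: latent dynamics z' = K @ z.
K = np.diag([0.9, 0.8, 0.7, 0.6])

def encoder(s):
    # In practice a learned network phi with phi(s') ~= K @ phi(s);
    # the identity suffices for this toy linear system.
    return s

def decoder(z):
    return z

def symmetry_candidates(K, n=8, tol=1e-8):
    """Sample linear maps and keep those commuting with K.

    If G @ K == K @ G, then G maps latent trajectories to latent
    trajectories, so it generates new dynamics-consistent transitions.
    For a diagonal K with distinct entries, the commutant consists of
    the diagonal matrices, hence the diagonal sampling below.
    """
    candidates = []
    for _ in range(n):
        G = np.diag(rng.uniform(0.5, 1.5, size=K.shape[0]))
        if np.allclose(G @ K, K @ G, atol=tol):
            candidates.append(G)
    return candidates

def augment(s, s_next, G):
    """Map a logged transition (s, s') to a symmetry-transformed one."""
    return decoder(G @ encoder(s)), decoder(G @ encoder(s_next))

# Augment one logged transition with each discovered symmetry.
s = rng.normal(size=latent_dim)
s_next = K @ s  # toy "environment" step, consistent with K by construction
for G in symmetry_candidates(K):
    s_aug, s_next_aug = augment(s, s_next, G)
    # The augmented pair still follows the latent dynamics.
    assert np.allclose(encoder(s_next_aug), K @ encoder(s_aug))
```

In this toy setting each commuting map turns one logged transition into a new one that is guaranteed to obey the same latent dynamics, which is the sense in which the augmented data reflects the system’s dynamics rather than adding arbitrary noise.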
Currently I am a Research Scientist at RIKEN AIP, focusing on generalization in Reinforcement Learning (RL). My background, however, is in mathematical physics: I completed my Master’s degree at ETH Zurich and then my PhD at the Max Planck Institute for Physics in Munich, with a focus on string theory. During my first postdoc at the University of Tokyo (Kavli IPMU), my research interests shifted towards the vibrant field of AI. These days, my main ambition is to develop RL algorithms that can generalize to new tasks.