I am a Ph.D. student in the Intelligent Control Systems group, supervised by Dr. Sebastian Trimpe. Furthermore, I am a scholar in the International Max Planck Research School for Intelligent Systems (IMPRS-IS), where Prof. Ingo Steinwart and Prof. Bernhard Schölkopf are part of my thesis advisory committee.
My research centers on the fundamental question of when to learn, and on its direct implications for problems where resources are scarce.
My current research combines machine learning with principled decision making in the form of event-triggered learning. Augmenting classical model-based algorithms such as controller synthesis with purposeful learning decisions yields powerful novel algorithms that learn only when necessary and beneficial, instead of permanently.
Before joining the Intelligent Control Systems group in 2018, I obtained Bachelor's degrees in Mathematics and Economics from the University of Bonn. For my Bachelor's thesis in Economics, I conducted research at the University of California, Berkeley. Afterwards, I obtained a Master's degree in Mathematics and worked with Dr. Sebastian Trimpe on my Master's thesis in the Autonomous Motion Department.
Talks and posters
Poster at Learning for Dynamics and Control (L4DC), May 2019, MIT, Cambridge (US)
Talk at the Oberseminar Stochastik, University of Stuttgart, Jul 2018, Stuttgart (Germany)
Talk at the GMA Meeting, May 2018, Günzburg (Germany)
Poster at Second Max Planck ETH Zurich Workshop on Learning Control, Feb 2018, Zurich (Switzerland)
Accepted at the 2nd Annual Conference on Learning for Dynamics and Control (L4DC), June 2020 (conference)
Despite the availability of ever more data enabled by modern sensor and computer technology, learning dynamical systems in a sample-efficient way remains an open problem. We propose active learning strategies that leverage information-theoretic properties arising naturally during Gaussian process regression, while respecting constraints on the sampling process imposed by the system dynamics. Sample points are selected in regions of high uncertainty, leading to exploratory behavior and data-efficient training of the model. All results are verified in an extensive numerical benchmark.
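The core idea, selecting the next sample where the Gaussian process posterior is most uncertain, can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the RBF kernel, lengthscale, candidate set, and the `active_learn` loop are all illustrative assumptions, and constraints from the system dynamics are omitted.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.5):
    # Squared-exponential kernel between the row vectors of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X_train, y_train, X_cand, noise=1e-4):
    # Standard GP regression: posterior mean and variance at candidate points.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_cand)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(rbf_kernel(X_cand, X_cand)) - np.sum(v**2, axis=0)
    return mean, var

def active_learn(f, candidates, n_init=2, n_rounds=8, seed=0):
    # Start from a few random samples, then repeatedly query the unknown
    # function f at the candidate point with the highest posterior variance.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(candidates), n_init, replace=False)
    X, y = candidates[idx], f(candidates[idx]).ravel()
    for _ in range(n_rounds):
        _, var = gp_posterior(X, y, candidates)
        i = int(np.argmax(var))  # most uncertain region -> exploratory behavior
        X = np.vstack([X, candidates[i:i + 1]])
        y = np.append(y, f(candidates[i:i + 1]).ravel())
    return X, y
```

For example, with `candidates = np.linspace(0, 1, 50)[:, None]` and `f = lambda X: np.sin(3 * X)`, each round adds the point where the model is least certain, so the training set spreads over the input domain rather than clustering.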
Our goal is to understand the principles of Perception, Action, and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.