In a broad sense, I'm interested in how nature solves optimality problems, mostly with regard to dynamics and locomotion. In particular I'm interested in how to design controllers that can effectively exploit natural dynamics and, perhaps more importantly, how to design the natural dynamics themselves (i.e. the mechanical system) such that exploiting them becomes simpler and almost automatic. In other words, my focus is on understanding the coupling between morphology and control.
I completed my BSc and MSc at ETH Zurich, starting in Mechanical Engineering and then specializing in Robotics, Systems and Control. During this time I spent a semester at TU Delft, and finished my master's thesis at BioRob at EPFL, working on how tails can be used in legged locomotion. Before joining the Dynamic Locomotion Group at the Max Planck Institute for Intelligent Systems, I spent a couple of years in the Ishiguro Lab at Tohoku University in Japan.
Outside the lab I enjoy photography, cooking, music and games, in particular Go. If you ever want to discuss anything (research-related or not), feel free to drop me an e-mail at: firstname.lastname@example.org
IEEE Transactions on Robotics (T-RO), May 2019 (article) In press
Properly designing a system to exhibit favorable natural dynamics can greatly simplify designing or learning the control policy. However, it is still unclear what constitutes favorable natural dynamics and how to quantify their effect. Most studies of simple walking and running models have focused on the basins of attraction of passive limit cycles and the notion of self-stability. We instead emphasize the importance of stepping beyond basins of attraction. In this paper, we present an approach based on viability theory to quantify robust sets in state-action space. These sets are valid for the family of all robust control policies, which allows us to quantify the robustness inherent in the natural dynamics before designing the control policy or specifying a control objective. We illustrate our formulation using spring-mass models, simple low-dimensional models of running systems. We then demonstrate an example application by optimizing the robustness of a simulated planar monoped, using a gradient-free optimization scheme. Both case studies result in a nonlinear effective stiffness that provides greater robustness.
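The core idea of quantifying robustness in state-action space can be sketched as a grid-based fixed-point computation. The return map below is invented purely for illustration (it is not the spring-mass dynamics from the paper): a state is kept in the viable set as long as some action leads to a successor state that is itself viable, and iterating this condition to a fixed point yields a grid approximation of the viability kernel.

```python
import numpy as np

# Hypothetical one-dimensional step-to-step return map, a stand-in for
# apex-to-apex running dynamics: state s is a normalized apex height,
# action a a normalized touchdown-angle adjustment. Leaving (0, 1)
# counts as a fall (failure), signaled by returning None.
def step(s, a):
    s_next = s + 0.5 * a - 0.2 * s * s  # invented dynamics, illustration only
    return s_next if 0.0 < s_next < 1.0 else None

states = np.linspace(0.05, 0.95, 60)
actions = np.linspace(-1.0, 1.0, 40)

# viable[i] marks whether grid state i is currently assumed viable.
# Q_V[i, j] marks whether taking action j from state i leads to a
# state in the current viable set (a robust state-action pair).
viable = np.ones(len(states), dtype=bool)
while True:
    Q_V = np.zeros((len(states), len(actions)), dtype=bool)
    for i, s in enumerate(states):
        for j, a in enumerate(actions):
            s_next = step(s, a)
            if s_next is not None:
                k = np.abs(states - s_next).argmin()  # nearest grid state
                Q_V[i, j] = viable[k]
    new_viable = Q_V.any(axis=1)  # viable if SOME action keeps us viable
    if np.array_equal(new_viable, viable):
        break  # fixed point reached: grid estimate of the viability kernel
    viable = new_viable

print(f"{viable.sum()} of {len(states)} grid states are viable")
```

The set `Q_V` of robust state-action pairs is policy-free: it is computed before any control objective is specified, so any controller restricted to it avoids failure, which mirrors the abstract's point that robustness inherent in the natural dynamics can be assessed before control design.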
In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018, pages 5076-5081, IEEE, May 2018 (inproceedings)
Learning robot controllers instead of designing them can greatly reduce the engineering effort required, while also emphasizing robustness. Despite considerable progress in simulation, applying learning directly in hardware remains challenging, in part due to the necessity of exploring potentially unstable parameters. We explore the concept of shaping the reward landscape with training wheels: temporary modifications of the physical hardware that facilitate learning. We demonstrate the concept with a robot leg mounted on a boom learning to hop fast. This proof of concept embodies typical challenges such as instability and contact, while being simple enough to empirically map out and visualize the reward landscape. Based on our results, we propose three criteria for designing effective training wheels for learning in robotics.
An animal's running gait is dynamic, efficient, elegant, and adaptive. We see locomotion in animals as an orchestrated interplay of the locomotion apparatus, interacting with its environment. The Dynamic Locomotion Group at the Max Planck Institute for Intelligent Systems in Stuttgart develops novel legged robots to decipher aspects of biomechanics and neuromuscular control of legged locomotion in animals, and to understand general principles of locomotion.
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.