In a broad sense, I'm interested in how nature solves optimality problems, mostly with regard to dynamics and locomotion. In particular I'm interested in how to design controllers that can effectively exploit natural dynamics and, perhaps more importantly, how to design the natural dynamics themselves (i.e. the mechanical system) such that exploiting them becomes simpler and almost automatic. In other words, my focus is on understanding the coupling between morphology and control.
I completed my BSc and MSc at ETH Zurich, starting in Mechanical Engineering and then specializing in Robotics, Systems and Control. During this time I spent a semester at TU Delft, and finished my master's thesis at BioRob at EPFL, working on how tails can be used in legged locomotion. Before joining the Dynamic Locomotion Group at the Max Planck Institute for Intelligent Systems, I spent a couple of years in the Ishiguro Lab at Tohoku University in Japan.
Outside the lab I enjoy photography, cooking, music, and games, in particular Go. If you ever want to discuss things (research-related or not), feel free to drop me an e-mail at: email@example.com
In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018, pages: 5076-5081, IEEE, International Conference on Robotics and Automation, May 2018 (inproceedings)
Learning instead of designing robot controllers can greatly reduce the engineering effort required, while also emphasizing robustness. Despite considerable progress in simulation, applying learning directly in hardware is still challenging, in part due to the necessity of exploring potentially unstable parameters. We explore the concept of shaping the reward landscape with training wheels: temporary modifications of the physical hardware that facilitate learning. We demonstrate the concept with a robot leg mounted on a boom learning to hop fast. This proof of concept embodies typical challenges such as instability and contact, while being simple enough to empirically map out and visualize the reward landscape. Based on our results, we propose three criteria for designing effective training wheels for learning in robotics.
An animal's running gait is dynamic, efficient, elegant, and adaptive. We see locomotion in animals as an orchestrated interplay of the locomotion apparatus, interacting with its environment. The Dynamic Locomotion Group at the Max Planck Institute for Intelligent Systems in Stuttgart develops novel legged robots to decipher aspects of biomechanics and neuromuscular control of legged locomotion in animals, and to understand general principles of locomotion.
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.