My focus is on the mechanical design of bioinspired legs: specifically, how functional morphology, derived from the anatomical features of animals, influences the walking behaviour of robots.
I did my bachelor's studies in mechatronics at Technische Universität Ilmenau and specialized in bio-mechatronics, also at Ilmenau. After graduating in October 2016, I joined the Dynamic Locomotion Group.
Animals outperform robotic walking machines even though they face restrictions in actuator power density and limited control-loop speed. Despite having access only to sparse feedback information, due to limited nerve conduction speed, animals run faster, more robustly, and more efficiently than robots. We expect an 'intelligence' in...
In reinforcement learning, tasks that are difficult to learn are often made more amenable by shaping the reward (cost) landscape. This is typically done by adjusting the reward signal $R$ of the Markov Decision Process $(S, A, P, R, \gamma)$, where $S$ is the state space, $A$ is the action space, $P$ is the transition probability function, and $\gamma$ is the discount factor.
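A common way to adjust the reward signal without changing the optimal policy is potential-based reward shaping. The sketch below is a generic toy illustration of that idea (the potential function and goal value are hypothetical examples, not taken from any specific robot task):

```python
# Potential-based reward shaping: the shaped reward is
#   R'(s, a, s') = R(s, a, s') + gamma * Phi(s') - Phi(s),
# where Phi is a potential function over states. Shaping terms of this
# form leave the optimal policy of the MDP unchanged.

def shaped_reward(r, s, s_next, phi, gamma=0.99):
    """Add a potential-based shaping term to the raw reward r."""
    return r + gamma * phi(s_next) - phi(s)

# Hypothetical example potential: negative distance to a goal state
# on a one-dimensional state space.
goal = 10.0
phi = lambda s: -abs(goal - s)

# A transition that moves toward the goal earns a positive shaping bonus,
# even when the raw reward r is zero.
bonus = shaped_reward(0.0, s=5.0, s_next=6.0, phi=phi)
```

In practice the potential function encodes prior knowledge about which states are promising, which densifies the otherwise sparse reward signal.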
In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018, pages 5076-5081, IEEE, May 2018 (inproceedings)
Learning instead of designing robot controllers can greatly reduce the engineering effort required, while also emphasizing robustness. Despite considerable progress in simulation, applying learning directly in hardware is still challenging, in part due to the necessity of exploring potentially unstable parameters. We explore the concept of shaping the reward landscape with training wheels: temporary modifications of the physical hardware that facilitate learning. We demonstrate the concept with a robot leg mounted on a boom learning to hop fast. This proof of concept embodies typical challenges such as instability and contact, while being simple enough to empirically map out and visualize the reward landscape. Based on our results we propose three criteria for designing effective training wheels for learning in robotics.
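Empirically mapping a reward landscape over a small controller parameter space can be sketched as a grid sweep. The toy code below is not the paper's setup; the reward function, parameter names, and the `assist` term (a stand-in for the effect of training wheels, which here simply flattens the penalty far from the optimum) are all illustrative assumptions:

```python
import math

def episode_reward(freq, amplitude, assist=0.0):
    """Hypothetical per-episode hopping reward, peaked at a sweet spot
    (freq=2.0, amplitude=0.5). The 'assist' term mimics training wheels
    by softening penalties away from the optimum, making exploration
    of unstable parameter regions less punishing."""
    penalty = (freq - 2.0) ** 2 + (amplitude - 0.5) ** 2
    return math.exp(-penalty / (1.0 + assist))

# Grid-sweep the two controller parameters to map the landscape
# empirically, one simulated episode per grid point.
landscape = {
    (f / 10, a / 10): episode_reward(f / 10, a / 10, assist=2.0)
    for f in range(0, 41, 5)   # frequency: 0.0 .. 4.0
    for a in range(0, 11)      # amplitude: 0.0 .. 1.0
}
best = max(landscape, key=landscape.get)
```

Visualizing such a map before and after a hardware modification is one way to check whether the modification actually smooths the landscape the learner has to climb.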
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.