

2017


On the Design of LQR Kernels for Efficient Controller Learning

Marco, A., Hennig, P., Schaal, S., Trimpe, S.

Proceedings of the 56th IEEE Annual Conference on Decision and Control (CDC), pages: 5193-5200, IEEE, IEEE Conference on Decision and Control, December 2017 (conference)

Abstract
Finding optimal feedback controllers for nonlinear dynamic systems from data is hard. Recently, Bayesian optimization (BO) has been proposed as a powerful framework for direct controller tuning from experimental trials. For selecting the next query point and finding the global optimum, BO relies on a probabilistic description of the latent objective function, typically a Gaussian process (GP). As is shown herein, GPs with a common kernel choice can, however, lead to poor learning outcomes on standard quadratic control problems. For a first-order system, we construct two kernels that specifically leverage the structure of the well-known Linear Quadratic Regulator (LQR), yet retain the flexibility of Bayesian nonparametric learning. Simulations of uncertain linear and nonlinear systems demonstrate that the LQR kernels yield superior learning performance.


arXiv PDF On the Design of LQR Kernels for Efficient Controller Learning - CDC presentation DOI Project Page [BibTex]



Optimizing Long-term Predictions for Model-based Policy Search

Doerr, A., Daniel, C., Nguyen-Tuong, D., Marco, A., Schaal, S., Toussaint, M., Trimpe, S.

Proceedings of 1st Annual Conference on Robot Learning (CoRL), 78, pages: 227-238, (Editors: Sergey Levine and Vincent Vanhoucke and Ken Goldberg), 1st Annual Conference on Robot Learning, November 2017 (conference)

Abstract
We propose a novel long-term optimization criterion to improve the robustness of model-based reinforcement learning in real-world scenarios. Learning a dynamics model to derive a solution promises much greater data-efficiency and reusability compared to model-free alternatives. In practice, however, model-based RL suffers from various imperfections such as noisy input and output data, delays and unmeasured (latent) states. To achieve higher resilience against such effects, we propose to optimize a generative long-term prediction model directly with respect to the likelihood of observed trajectories as opposed to the common approach of optimizing a dynamics model for one-step-ahead predictions. We evaluate the proposed method on several artificial and real-world benchmark problems and compare it to PILCO, a model-based RL framework, in experiments on a manipulation robot. The results show that the proposed method is competitive compared to state-of-the-art model learning methods. In contrast to these more involved models, our model can directly be employed for policy search and outperforms a baseline method in the robot experiment.


PDF Project Page [BibTex]



Event-based State Estimation: An Emulation-based Approach

Trimpe, S.

IET Control Theory & Applications, 11(11):1684-1693, July 2017 (article)

Abstract
An event-based state estimation approach for reducing communication in a networked control system is proposed. Multiple distributed sensor agents observe a dynamic process and sporadically transmit their measurements to estimator agents over a shared bus network. Local event-triggering protocols ensure that data is transmitted only when necessary to meet a desired estimation accuracy. The event-based design is shown to emulate the performance of a centralised state observer design up to guaranteed bounds, but with reduced communication. The stability results for state estimation are extended to the distributed control system that results when the local estimates are used for feedback control. Results from numerical simulations and hardware experiments illustrate the effectiveness of the proposed approach in reducing network communication.


arXiv Supplementary material PDF DOI Project Page [BibTex]



Model-Based Policy Search for Automatic Tuning of Multivariate PID Controllers

Doerr, A., Nguyen-Tuong, D., Marco, A., Schaal, S., Trimpe, S.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages: 5295-5301, IEEE, Piscataway, NJ, USA, IEEE International Conference on Robotics and Automation (ICRA), May 2017 (inproceedings)


PDF arXiv DOI Project Page [BibTex]



Virtual vs. Real: Trading Off Simulations and Physical Experiments in Reinforcement Learning with Bayesian Optimization

Marco, A., Berkenkamp, F., Hennig, P., Schoellig, A. P., Krause, A., Schaal, S., Trimpe, S.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages: 1557-1563, IEEE, Piscataway, NJ, USA, IEEE International Conference on Robotics and Automation (ICRA), May 2017 (inproceedings)


PDF arXiv ICRA 2017 Spotlight presentation Virtual vs. Real - Video explanation DOI Project Page [BibTex]



Scalable Pneumatic and Tendon Driven Robotic Joint Inspired by Jumping Spiders

Sproewitz, A., Göttler, C., Sinha, A., Caer, C., Öztekin, M. U., Petersen, K., Sitti, M.

In Proceedings 2017 IEEE International Conference on Robotics and Automation (ICRA), pages: 64-70, IEEE, Piscataway, NJ, USA, IEEE International Conference on Robotics and Automation (ICRA), May 2017 (inproceedings)


Video link (url) DOI Project Page [BibTex]



Spinal joint compliance and actuation in a simulated bounding quadruped robot

Pouya, S., Khodabakhsh, M., Sproewitz, A., Ijspeert, A.

Autonomous Robots, pages: 437–452, Kluwer Academic Publishers, Springer, Dordrecht, New York, NY, February 2017 (article)


link (url) DOI Project Page [BibTex]



Linking Mechanics and Learning

Heim, S., Grimminger, F., Drama, Ö., Spröwitz, A.

In Proceedings of Dynamic Walking 2017, 2017 (inproceedings)


[BibTex]



Self-Organized Behavior Generation for Musculoskeletal Robots

Der, R., Martius, G.

Frontiers in Neurorobotics, 11, pages: 8, 2017 (article)


link (url) DOI [BibTex]



Is Growing Good for Learning?

Heim, S., Spröwitz, A.

Proceedings of the 8th International Symposium on Adaptive Motion of Animals and Machines AMAM2017, 2017 (conference)


[BibTex]



Evaluation of the passive dynamics of compliant legs with inertia

Györfi, B.

University of Applied Sciences Pforzheim, Germany, 2017 (mastersthesis)


[BibTex]


2013


Behavior as broken symmetry in embodied self-organizing robots

Der, R., Martius, G.

In Advances in Artificial Life, ECAL 2013, pages: 601-608, MIT Press, 2013 (incollection)


[BibTex]



Information Driven Self-Organization of Complex Robotic Behaviors

Martius, G., Der, R., Ay, N.

PLoS ONE, 8(5):e63400, Public Library of Science, 2013 (article)


link (url) DOI [BibTex]



Linear combination of one-step predictive information with an external reward in an episodic policy gradient setting: a critical analysis

Zahedi, K., Martius, G., Ay, N.

Frontiers in Psychology, 4(801), 2013 (article)

Abstract
One of the main challenges in the field of embodied artificial intelligence is the open-ended autonomous learning of complex behaviours. Our approach is to use task-independent, information-driven intrinsic motivation(s) to support task-dependent learning. The work presented here is a preliminary step in which we investigate the predictive information (the mutual information of the past and future of the sensor stream) as an intrinsic drive, ideally supporting any kind of task acquisition. Previous experiments have shown that the predictive information (PI) is a good candidate to support autonomous, open-ended learning of complex behaviours, because a maximisation of the PI corresponds to an exploration of morphology- and environment-dependent behavioural regularities. The idea is that these regularities can then be exploited in order to solve any given task. Three different experiments are presented and their results lead to the conclusion that the linear combination of the one-step PI with an external reward function is not generally recommended in an episodic policy gradient setting. Only for hard tasks can a great speed-up be achieved, at the cost of a loss in asymptotic performance.


link (url) DOI [BibTex]


Robustness of guided self-organization against sensorimotor disruptions

Martius, G.

Advances in Complex Systems, 16(02n03):1350001, 2013 (article)

Abstract
Self-organizing processes are crucial for the development of living beings. Practical applications in robots may benefit from the self-organization of behavior, e.g. to increase fault tolerance and enhance flexibility, provided that external goals can also be achieved. We present results on the guidance of self-organizing control by visual target stimuli and show a remarkable robustness to sensorimotor disruptions. In a proof of concept study an autonomous wheeled robot is learning an object finding and ball-pushing task from scratch within a few minutes in continuous domains. The robustness is demonstrated by the rapid recovery of the performance after severe changes of the sensor configuration.


DOI [BibTex]


2011


Tipping the Scales: Guidance and Intrinsically Motivated Behavior

Martius, G., Herrmann, J. M.

In Advances in Artificial Life, ECAL 2011, pages: 506-513, (Editors: Tom Lenaerts and Mario Giacobini and Hugues Bersini and Paul Bourgine and Marco Dorigo and René Doursat), MIT Press, 2011 (incollection)


[BibTex]


2009


A Sensor-Based Learning Algorithm for the Self-Organization of Robot Behavior

Hesse, F., Martius, G., Der, R., Herrmann, J. M.

Algorithms, 2(1):398-409, 2009 (article)

Abstract
Ideally, sensory information forms the only source of information to a robot. We consider an algorithm for the self-organization of a controller. At short timescales the controller is merely reactive but the parameter dynamics and the acquisition of knowledge by an internal model lead to seemingly purposeful behavior on longer timescales. As a paradigmatic example, we study the simulation of an underactuated snake-like robot. By interacting with the real physical system formed by the robotic hardware and the environment, the controller achieves a sensitive and body-specific actuation of the robot.


link (url) [BibTex]


2007


Guided Self-organisation for Autonomous Robot Development

Martius, G., Herrmann, J. M., Der, R.

In Advances in Artificial Life: 9th European Conference, ECAL 2007, LNCS 4648, pages: 766-775, Springer, 2007 (inproceedings)


[BibTex]
