2011
Optimal Reinforcement Learning for Gaussian Systems

Hennig, P.

In Advances in Neural Information Processing Systems 24, pages 325-333 (Editors: J. Shawe-Taylor, R. S. Zemel, P. Bartlett, F. Pereira, K. Q. Weinberger), Twenty-Fifth Annual Conference on Neural Information Processing Systems (NIPS), 2011 (inproceedings)

Abstract
The exploration-exploitation trade-off is among the central challenges of reinforcement learning. The optimal Bayesian solution is intractable in general. This paper studies to what extent analytic statements about optimal learning are possible if all beliefs are Gaussian processes. A first-order approximation of learning of both loss and dynamics, for nonlinear, time-varying systems in continuous time and space, subject to a relatively weak restriction on the dynamics, is described by an infinite-dimensional partial differential equation. An approximate finite-dimensional projection gives an impression of how this result may be helpful.


PDF Web [BibTex]

An Experimental Demonstration of a Distributed and Event-based State Estimation Algorithm

(Best Interactive Paper Award, top out of 450)

Trimpe, S., D’Andrea, R.

In Proceedings of the 18th IFAC World Congress, 2011 (inproceedings)


PDF DOI [BibTex]


Reduced Communication State Estimation for Control of an Unstable Networked Control System

Trimpe, S., D’Andrea, R.

In Proceedings of the 50th IEEE Conference on Decision and Control and European Control Conference, 2011 (inproceedings)


PDF Supplementary material DOI [BibTex]
