Active Uncertainty Calibration in Bayesian ODE Solvers

Kersting, H., Hennig, P.

Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence (UAI), pages: 309-318, (Editors: Ihler, A. and Janzing, D.), AUAI Press, June 2016 (conference)

Abstract
There is resurging interest, in statistics and machine learning, in solvers for ordinary differential equations (ODEs) that return probability measures instead of point estimates. Recently, Conrad et al. introduced a sampling-based class of methods that are 'well-calibrated' in a specific sense, but the computational cost of these methods is significantly above that of classic methods. On the other hand, Schober et al. pointed out a precise connection between classic Runge-Kutta ODE solvers and Gaussian filters, which gives only a rough probabilistic calibration, but at negligible cost overhead. By formulating the solution of ODEs as approximate inference in linear Gaussian SDEs, we investigate a range of probabilistic ODE solvers that bridge the trade-off between computational cost and probabilistic calibration, and identify the inaccurate gradient measurement as the crucial source of uncertainty. We propose the novel filtering-based method Bayesian quadrature filtering (BQF), which uses Bayesian quadrature to actively learn the imprecision in the gradient measurement by collecting multiple gradient evaluations.
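The filtering view the abstract describes can be sketched in a few lines: a Gaussian ODE filter with an integrated-Wiener-process prior on the solution, in the spirit of the Schober et al. connection cited above. This is an illustrative reconstruction, not the paper's code; it uses a single gradient evaluation per step, whereas the proposed BQF would replace the measurement `z` with a Bayesian-quadrature estimate built from multiple evaluations. All parameter values are assumptions.

```python
import numpy as np

def ode_filter(f, y0, t_max, h, q=1.0, r=1e-10):
    """Gaussian ODE filter with a once-integrated Wiener-process prior.

    State x = [y, y']; each step predicts with the IWP transition and
    updates on the 'measurement' z = f(y_pred) of the derivative.
    """
    A = np.array([[1.0, h], [0.0, 1.0]])           # IWP transition matrix
    Q = q * np.array([[h**3 / 3, h**2 / 2],
                      [h**2 / 2, h]])              # process-noise covariance
    H = np.array([[0.0, 1.0]])                     # observe the derivative
    m = np.array([y0, f(y0)])                      # initial mean
    P = np.zeros((2, 2))                           # initial covariance
    ts, ms = [0.0], [m.copy()]
    t = 0.0
    while t < t_max - 1e-12:
        m_pred = A @ m                             # predict step
        P_pred = A @ P @ A.T + Q
        z = f(m_pred[0])                           # single gradient evaluation
        S = H @ P_pred @ H.T + r                   # innovation covariance
        K = P_pred @ H.T / S                       # Kalman gain
        m = m_pred + (K * (z - m_pred[1])).ravel() # update step
        P = P_pred - K @ H @ P_pred
        t += h
        ts.append(t)
        ms.append(m.copy())
    return np.array(ts), np.array(ms)
```

On the linear test problem y' = -y the filter mean closely tracks the exact solution exp(-t); the posterior covariance P is what the paper's calibration discussion is about.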


link (url) Project Page [BibTex]


Automatic LQR Tuning Based on Gaussian Process Global Optimization

Marco, A., Hennig, P., Bohg, J., Schaal, S., Trimpe, S.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages: 270-277, IEEE, May 2016 (inproceedings)

Abstract
This paper proposes an automatic controller tuning framework based on linear optimal control combined with Bayesian optimization. With this framework, an initial set of controller gains is automatically improved according to a pre-defined performance objective evaluated from experimental data. The underlying Bayesian optimization algorithm is Entropy Search, which represents the latent objective as a Gaussian process and constructs an explicit belief over the location of the objective minimum. This is used to maximize the information gain from each experimental evaluation. Thus, this framework is expected to yield improved controllers with fewer evaluations compared to alternative approaches. A seven-degree-of-freedom robot arm balancing an inverted pole is used as the experimental demonstrator. Results of two- and four-dimensional tuning problems highlight the method's potential for automatic controller tuning on robotic platforms.
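The tuning loop the abstract describes can be sketched as follows. This is a hypothetical minimal version: the latent objective is modeled with a small numpy Gaussian process, and for brevity a lower-confidence-bound acquisition stands in for Entropy Search; the quadratic `cost` plays the role of the experimentally evaluated performance objective, and the gain is one-dimensional. All names and parameters are illustrative.

```python
import numpy as np

def rbf(X1, X2, ell=0.3, sf=1.0):
    """Squared-exponential kernel between two sets of 1-D points."""
    d = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return sf**2 * np.exp(-0.5 * d / ell**2)

def bayes_opt(cost, bounds, n_init=3, n_iter=15, noise=1e-4, seed=0):
    """GP-based tuning loop: fit a GP to observed costs, then evaluate
    the gain that minimizes a lower confidence bound (LCB) acquisition."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_init, 1))               # initial random gains
    y = np.array([cost(x[0]) for x in X])              # 'experimental' costs
    cand = np.linspace(lo, hi, 200)[:, None]           # candidate gains
    for _ in range(n_iter):
        K = rbf(X, X) + noise * np.eye(len(X))
        Ks = rbf(X, cand)
        alpha = np.linalg.solve(K, y)
        mu = Ks.T @ alpha                              # GP posterior mean
        v = np.linalg.solve(K, Ks)
        var = np.clip(1.0 - np.sum(Ks * v, 0), 0, None)
        lcb = mu - 2.0 * np.sqrt(var)                  # acquisition function
        x_next = cand[np.argmin(lcb)]                  # next gain to test
        X = np.vstack([X, x_next[None, :]])
        y = np.append(y, cost(x_next[0]))
    best = np.argmin(y)
    return X[best, 0], y[best]
```

In the paper, each `cost` evaluation corresponds to an experimental rollout on the robot with LQR gains derived from the candidate parameters, and Entropy Search chooses evaluations to maximize information about the minimum rather than minimizing an LCB.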


Video - Automatic LQR Tuning Based on Gaussian Process Global Optimization - ICRA 2016 Video - Automatic Controller Tuning on a Two-legged Robot PDF DOI Project Page [BibTex]


Batch Bayesian Optimization via Local Penalization

González, J., Dai, Z., Hennig, P., Lawrence, N.

Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS), 51, pages: 648-657, JMLR Workshop and Conference Proceedings, (Editors: Gretton, A. and Robert, C. C.), May 2016 (conference)


link (url) Project Page [BibTex]


Probabilistic Approximate Least-Squares

Bartels, S., Hennig, P.

Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS), 51, pages: 676-684, JMLR Workshop and Conference Proceedings, (Editors: Gretton, A. and Robert, C. C.), May 2016 (conference)

Abstract
Least-squares and kernel-ridge / Gaussian process regression are among the foundational algorithms of statistics and machine learning. Famously, the worst-case cost of exact nonparametric regression grows cubically with the data-set size; but a growing number of approximations have been developed that estimate good solutions at lower cost. These algorithms typically return point estimators, without measures of uncertainty. Leveraging recent results casting elementary linear algebra operations as probabilistic inference, we propose a new approximate method for nonparametric least-squares that affords a probabilistic uncertainty estimate over the error between the approximate and exact least-squares solution (this is not the same as the posterior variance of the associated Gaussian process regressor). This allows estimating the error of the least-squares solution on a subset of the data relative to the full-data solution. The uncertainty can be used to control the computational effort invested in the approximation. Our algorithm has linear cost in the data-set size and a simple form, so that it can be implemented with a few lines of code in programming languages with linear algebra functionality.
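To illustrate the setting (not the paper's probabilistic estimator): exact kernel ridge regression requires an O(n^3) solve, and a subset-of-data approximation trades accuracy for speed. The quantity the paper equips with an uncertainty estimate is precisely the error between such an approximate solution and the full-data one, computed explicitly here for comparison. The kernel, data, and subset size below are illustrative assumptions.

```python
import numpy as np

def krr(X_train, y_train, X_test, ell=0.5, lam=1e-2):
    """Exact kernel ridge regression with an RBF kernel: O(n^3) solve."""
    def k(A, B):
        d = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-0.5 * d / ell**2)
    alpha = np.linalg.solve(k(X_train, X_train) + lam * np.eye(len(X_train)),
                            y_train)
    return k(X_test, X_train) @ alpha

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (200, 1))                    # training inputs
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)
Xt = np.linspace(-3, 3, 50)[:, None]                # test inputs

f_full = krr(X, y, Xt)                              # full-data solution
idx = rng.choice(200, 40, replace=False)            # random subset of the data
f_sub = krr(X[idx], y[idx], Xt)                     # cheaper approximation
err = np.max(np.abs(f_full - f_sub))                # error vs. full solution
```

In practice the full-data solution is exactly what one cannot afford to compute, which is why the paper's contribution is a probabilistic estimate of this error obtained without it.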


link (url) Project Page [BibTex]


Examining load-inducing factors in instructional design: An ACT-R approach

Wirzberger, M., Rey, G. D.

In Proceedings of the 14th International Conference on Cognitive Modeling (ICCM 2016), pages: 223-224, University Park, PA, Penn State, 2016 (inproceedings)


[BibTex]


Helping people make better decisions using optimal gamification

Lieder, F., Griffiths, T. L.

In Proceedings of the 38th Annual Conference of the Cognitive Science Society, 2016 (inproceedings)


Project Page [BibTex]


CLT meets ACT-R: Modeling load-inducing factors in instructional design

Wirzberger, M., Rey, G. D.

In Abstracts of the 58th Conference of Experimental Psychologists, pages: 377, Pabst Science Publishers, Lengerich, 2016 (inproceedings)


[BibTex]


Modeling load factors in multimedia learning: An ACT-R approach

Wirzberger, M.

In Dagstuhl 2016. Proceedings of the 10th Joint Workshop of the German Research Training Groups in Computer Science, pages: 98, Universitätsverlag Chemnitz, Chemnitz, 2016 (inproceedings)


[BibTex]


Separating cognitive load facets in a working memory updating task: An experimental approach

Wirzberger, M., Beege, M., Schneider, S., Nebel, S., Rey, G. D.

In International Meeting of the Psychonomic Society, Granada, Spain, May 5-8, 2016, Abstract Book, pages: 211-212, 2016 (inproceedings)


[BibTex]


CLT meets WMU: Simultaneous experimental manipulation of load factors in a basal working memory task

Wirzberger, M., Beege, M., Schneider, S., Nebel, S., Rey, G. D.

In 9th International Cognitive Load Theory Conference, June 22nd to 24th, 2016, Bochum, Germany, Abstracts, pages: 19, 2016 (inproceedings)


[BibTex]


Bedingt räumliche Nähe bessere Lernergebnisse? Die Rolle der Distanz und Integration beim Lernen mit multiplen Informationsquellen [Does spatial proximity lead to better learning outcomes? The role of distance and integration in learning with multiple information sources]

Beege, M., Nebel, S., Schneider, S., Wirzberger, M., Schmidt, N., Rey, G. D.

In 50th Conference of the German Psychological Society. Abstracts, pages: 540, Pabst Science Publishers, Lengerich, 2016 (inproceedings)


[BibTex]