2014


Pole Balancing with Apollo

Kaden, H.

Eberhard Karls Universität Tübingen, December 2014 (mastersthesis)


[BibTex]



Learning Coupling Terms for Obstacle Avoidance

Rai, A.

École polytechnique fédérale de Lausanne, August 2014 (mastersthesis)


Project Page [BibTex]



Object Tracking in Depth Images Using Sigma Point Kalman Filters

Issac, J.

Karlsruhe Institute of Technology, July 2014 (mastersthesis)


Project Page [BibTex]



Learning objective functions for autonomous motion generation

Kalakrishnan, M.

University of Southern California, Los Angeles, CA, 2014 (phdthesis)


Project Page [BibTex]



Data-driven autonomous manipulation

Pastor, P.

University of Southern California, Los Angeles, CA, 2014 (phdthesis)


Project Page [BibTex]


1996


From isolation to cooperation: An alternative view of a system of experts

Schaal, S., Atkeson, C. G.

In Advances in Neural Information Processing Systems 8, pages: 605-611, (Editors: Touretzky, D. S.; Mozer, M. C.; Hasselmo, M. E.), MIT Press, Cambridge, MA, 1996, clmc (inbook)

Abstract
We introduce a constructive, incremental learning system for regression problems that models data by means of locally linear experts. In contrast to other approaches, the experts are trained independently and do not compete for data during learning. Only when a prediction for a query is required do the experts cooperate by blending their individual predictions. Each expert is trained by minimizing a penalized local cross validation error using second order methods. In this way, an expert is able to adjust the size and shape of the receptive field in which its predictions are valid, and also to adjust its bias on the importance of individual input dimensions. The size and shape adjustment corresponds to finding a local distance metric, while the bias adjustment accomplishes local dimensionality reduction. We derive asymptotic results for our method. In a variety of simulations we demonstrate the properties of the algorithm with respect to interference, learning speed, prediction accuracy, feature detection, and task oriented incremental learning. 


link (url) [BibTex]
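
The abstract above describes regression with independently trained, locally linear experts whose predictions are blended only at query time. The sketch below is a rough Python illustration of that blending idea under assumed Gaussian receptive fields with fixed centers and bandwidths and a plain weighted least-squares fit; the paper's actual method instead adapts each receptive field's size, shape, and input-dimension bias by minimizing a penalized local cross-validation error with second-order methods, which is not reproduced here.

```python
import numpy as np

class LocalLinearExpert:
    """One locally linear expert with a Gaussian receptive field.
    Centers and bandwidths are fixed here purely for illustration."""

    def __init__(self, center, bandwidth):
        self.center = np.asarray(center, dtype=float)
        self.bandwidth = float(bandwidth)   # receptive-field width (assumed fixed)
        self.beta = None                    # local linear coefficients [slopes..., intercept]

    def activation(self, x):
        # Gaussian receptive field: how responsible this expert is for x.
        d = np.asarray(x, dtype=float) - self.center
        return np.exp(-0.5 * np.dot(d, d) / self.bandwidth**2)

    def fit(self, X, y):
        # Weighted least squares over this expert's own receptive field;
        # experts are trained independently and never compete for data.
        w = np.array([self.activation(x) for x in X])
        Xa = np.hstack([X, np.ones((len(X), 1))])        # append bias column
        sw = np.sqrt(w)
        self.beta = np.linalg.lstsq(sw[:, None] * Xa, sw * y, rcond=None)[0]

    def predict(self, x):
        return np.append(np.asarray(x, dtype=float), 1.0) @ self.beta


def blended_prediction(experts, x):
    # Experts cooperate only when a query arrives: blend their individual
    # predictions, weighted by their receptive-field activations.
    acts = np.array([e.activation(x) for e in experts])
    preds = np.array([e.predict(x) for e in experts])
    return np.sum(acts * preds) / np.sum(acts)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)

    # Fixed grid of receptive fields, an assumption for this toy example.
    experts = [LocalLinearExpert(center=[c], bandwidth=0.5)
               for c in np.linspace(-3, 3, 7)]
    for e in experts:
        e.fit(X, y)

    print(blended_prediction(experts, np.array([1.0])))   # roughly sin(1.0)
```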
