

2008


Motor Skill Learning for Cognitive Robotics

Peters, J.

6th International Cognitive Robotics Workshop (CogRob), July 2008 (talk)

Abstract
Autonomous robots that can assist humans in situations of daily life have been a long-standing vision of robotics, artificial intelligence, and cognitive sciences. A first step towards this goal is to create robots that can learn tasks triggered by environmental context or higher-level instruction. However, learning techniques have yet to live up to this promise, as only a few methods manage to scale to high-dimensional manipulator or humanoid robots. In this tutorial, we give a general overview of motor skill learning for cognitive robotics, using research at ATR, USC, CMU and Max-Planck to illustrate the problems in motor skill learning. For doing so, we discuss task-appropriate representations and algorithms for learning robot motor skills. Among the topics are learning basic movements, or motor primitives, by imitation and reinforcement learning; learning rhythmic and discrete movements; fast regression methods for learning inverse dynamics; and setups for learning task-space policies. Examples on various robots, e.g., SARCOS DB, the SARCOS Master Arm, BDI Little Dog and a Barrett WAM, are shown and include Ball-in-a-Cup, T-Ball, Juggling, Devil-Sticking, Operational Space Control and many others.

ei

Web [BibTex]

A Hilbert-Schmidt Dependence Maximization Approach to Unsupervised Structure Discovery

Blaschko, M., Gretton, A.

In MLG 2008, pages: 1-3, 6th International Workshop on Mining and Learning with Graphs, July 2008 (inproceedings)

Abstract
In recent work by (Song et al., 2007), it has been proposed to perform clustering by maximizing a Hilbert-Schmidt independence criterion with respect to a predefined cluster structure Y, by solving for the partition matrix, Π. We extend this approach here to the case where the cluster structure Y is not fixed, but is a quantity to be optimized; and we use an independence criterion which has been shown to be more sensitive at small sample sizes (the Hilbert-Schmidt Normalized Information Criterion, or HSNIC, Fukumizu et al., 2008). We demonstrate the use of this framework in two scenarios. In the first, we adopt a cluster structure selection approach in which the HSNIC is used to select a structure from several candidates. In the second, we consider the case where we discover structure by directly optimizing Y.
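
The dependence measure this objective builds on is compact enough to state in code. A minimal sketch (not the authors' implementation) of the biased empirical HSIC for two precomputed kernel matrices:

```python
import numpy as np

def hsic(K: np.ndarray, L: np.ndarray) -> float:
    """Biased empirical HSIC = trace(K H L H) / (n - 1)^2 for n x n kernel
    matrices K (data) and L (cluster structure); the clustering seeks the
    partition whose induced kernel L maximizes this dependence on K."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2
```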

ei

PDF Web [BibTex]

Reinforcement Learning of Perceptual Coupling for Motor Primitives

Kober, J., Peters, J.

8th European Workshop on Reinforcement Learning for Robotics (EWRL 2008), 8, pages: 16, July 2008 (poster)

Abstract
Reinforcement learning is a natural choice for the learning of complex motor tasks by reward-related self-improvement. As the space of movements is high-dimensional and continuous, a policy parametrization is needed which can be used in this context. Traditional motor primitive approaches deal largely with open-loop policies which can only cope with small perturbations. In this paper, we present a new type of motor primitive policies which serve as closed-loop policies, together with an appropriate learning algorithm. Our new motor primitives are an augmented version of the dynamic systems motor primitives that incorporates perceptual coupling to external variables. We show that these motor primitives can perform complex tasks such as a Ball-in-a-Cup or Kendama task even with large variances in the initial conditions, where a human would hardly be able to learn this task. We initialize the open-loop policies by imitation learning and the perceptual coupling with a handcrafted solution. We first improve the open-loop policies and subsequently the perceptual coupling using a novel reinforcement learning method which is particularly well-suited for motor primitives.
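
The underlying representation can be sketched concretely. Below is one Euler step of a discrete dynamic-systems motor primitive with an additive coupling input; the gain constants and the `coupling` hook are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def dmp_step(y, z, x, g, f, dt, tau=1.0, alpha_z=25.0, beta_z=6.25,
             alpha_x=8.0, coupling=0.0):
    """One Euler step of a discrete motor primitive (Ijspeert-style).

    `coupling` stands in for a learned perceptual-coupling term that feeds
    an external variable (e.g., the ball position in Ball-in-a-Cup) back
    into the transformed system; coupling = 0 recovers the usual open-loop
    primitive.
    """
    z_dot = (alpha_z * (beta_z * (g - y) - z) + f(x) + coupling) / tau
    y_dot = z / tau
    x_dot = -alpha_x * x / tau              # canonical (phase) system
    return y + dt * y_dot, z + dt * z_dot, x + dt * x_dot
```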

ei

PDF [BibTex]

Painless Embeddings of Distributions: the Function Space View (Part 1)

Fukumizu, K., Gretton, A., Smola, A.

25th International Conference on Machine Learning (ICML), July 2008 (talk)

Abstract
This tutorial will give an introduction to the recent understanding and methodology of the kernel method: dealing with higher-order statistics by painlessly embedding random variables and probability distributions. In the early days of kernel machines research, the "kernel trick" was considered a useful way of constructing nonlinear algorithms from linear ones. More recently, however, it has become clear that a potentially more far-reaching use of kernels is as a linear way of dealing with higher-order statistics by embedding distributions in a suitable reproducing kernel Hilbert space (RKHS). Notably, unlike the straightforward expansion of higher-order moments or the conventional characteristic function approach, the use of kernels or RKHSs provides a painless, tractable way of embedding distributions. This line of reasoning leads naturally to the questions: what does it mean to embed a distribution in an RKHS? When is this embedding injective (and thus, when do different distributions have unique mappings)? What implications are there for learning algorithms that make use of these embeddings? This tutorial aims at answering these questions. There is a great variety of applications in machine learning and computer science which require distribution estimation and/or comparison.

ei

PDF Web [BibTex]

Reinforcement Learning for Robotics

Peters, J.

8th European Workshop on Reinforcement Learning for Robotics (EWRL), July 2008 (talk)

ei

Web [BibTex]

Adaptive Importance Sampling with Automatic Model Selection in Value Function Approximation

Hachiya, H., Akiyama, T., Sugiyama, M., Peters, J.

In AAAI 2008, pages: 1351-1356, (Editors: Fox, D. , C. P. Gomes), AAAI Press, Menlo Park, CA, USA, Twenty-Third Conference on Artificial Intelligence, July 2008 (inproceedings)

Abstract
Off-policy reinforcement learning is aimed at efficiently reusing data samples gathered in the past, which is an essential problem for physically grounded AI, as experiments are usually prohibitively expensive. A common approach is to use importance sampling techniques to compensate for the bias caused by the difference between the data-sampling policies and the target policy. However, existing off-policy methods often do not explicitly take the variance of value function estimators into account, and therefore their performance tends to be unstable. To cope with this problem, we propose using an adaptive importance sampling technique which allows us to actively control the trade-off between bias and variance. We further provide a method for optimally determining the trade-off parameter based on a variant of cross-validation. We demonstrate the usefulness of the proposed approach through simulations.
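
The bias-variance knob can be illustrated with a toy, per-trajectory version of flattened importance weights (an assumption-laden sketch; the paper applies the idea within value function approximation):

```python
import numpy as np

def flattened_is_value(returns: np.ndarray, weights: np.ndarray,
                       nu: float) -> float:
    """Self-normalized off-policy value estimate with flattened weights
    w**nu. nu = 0 ignores the policy mismatch (high bias, low variance);
    nu = 1 is ordinary importance sampling (unbiased, high variance).
    The paper determines nu automatically by a variant of cross-validation."""
    w = weights ** nu
    return float(np.sum(w * returns) / np.sum(w))
```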

ei

PDF Web [BibTex]

Sparse Multiscale Gaussian Process Regression

Walder, C., Kim, K., Schölkopf, B.

In Proceedings of the 25th International Conference on Machine Learning, pages: 1112-1119, (Editors: WW Cohen and A McCallum and S Roweis), ACM Press, New York, NY, USA, ICML, July 2008 (inproceedings)

Abstract
Most existing sparse Gaussian process (g.p.) models seek computational advantages by basing their computations on a set of m basis functions that are the covariance function of the g.p. with one of its two inputs fixed. We generalise this for the case of the Gaussian covariance function, by basing our computations on m Gaussian basis functions with arbitrary diagonal covariance matrices (or length scales). For a fixed number of basis functions and any given criterion, this additional flexibility permits approximations no worse and typically better than was previously possible. We perform gradient-based optimisation of the marginal likelihood, which costs O(m²n) time where n is the number of data points, and compare the method to various other sparse g.p. methods. Although we focus on g.p. regression, the central idea is applicable to all kernel-based algorithms, and we also provide some results for the support vector machine (s.v.m.) and kernel ridge regression (k.r.r.). Our approach outperforms the other methods, particularly for the case of very few basis functions, i.e. a very high sparsity ratio.

ei

PDF PDF DOI [BibTex]

Unsupervised Bayesian Time-series Segmentation based on Linear Gaussian State-space Models

Chiappa, S.

(171), Max-Planck-Institute for Biological Cybernetics, Tübingen, Germany, June 2008 (techreport)

Abstract
Unsupervised time-series segmentation in the general scenario, in which the number of segment-types and segment boundaries are a priori unknown, is a fundamental problem in many applications and requires an accurate segmentation model as well as a way of determining an appropriate number of segment-types. In most approaches, segmentation and determination of the number of segment-types are addressed in two separate steps, since the segmentation model assumes a predefined number of segment-types. The determination of the number of segment-types is thus achieved by training and comparing several separate models. In this paper, we take a Bayesian approach to a segmentation model based on linear Gaussian state-space models to achieve structure selection within the model. An appropriate prior distribution on the parameters is used to enforce a sparse parametrization, such that the model automatically selects the smallest number of underlying dynamical systems that explain the data well and a parsimonious structure for each dynamical system. As the resulting model is computationally intractable, we introduce a variational approximation in which a reformulation of the problem enables the use of an efficient inference algorithm.

ei

[BibTex]

A New Non-monotonic Gradient Projection Method for the Non-negative Least Squares Problem

Kim, D., Sra, S., Dhillon, I.

(TR-08-28), University of Texas, Austin, TX, USA, June 2008 (techreport)

ei

Web [BibTex]

Flexible Models for Population Spike Trains

Bethge, M., Macke, J., Berens, P., Ecker, A., Tolias, A.

AREADNE 2008: Research in Encoding and Decoding of Neural Ensembles, 2, pages: 52, June 2008 (poster)

ei

PDF [BibTex]

Graphical Analysis of NMR Structural Quality and Interactive Contact Map of NOE Assignments in ARIA

Bardiaux, B., Bernard, A., Rieping, W., Habeck, M., Malliavin, T., Nilges, M.

BMC Structural Biology, 8(30):1-5, June 2008 (article)

Abstract
BACKGROUND: The Ambiguous Restraints for Iterative Assignment (ARIA) approach is widely used for NMR structure determination. It is based on simultaneously calculating structures and assigning NOEs through an iterative protocol. The final solution consists of a set of conformers and a list of most probable assignments for the input NOE peak list. RESULTS: ARIA was extended with a series of graphical tools to facilitate a detailed analysis of the intermediate and final results of the ARIA protocol. These additional features provide (i) an interactive contact map, serving as a tool for the analysis of assignments, and (ii) graphical representations of structure quality scores and restraint statistics. The interactive contact map between residues can be clicked to obtain information about the restraints and their contributions. Profiles of quality scores are plotted along the protein sequence, and contact maps provide information on the agreement with the data on a residue-pair level. CONCLUSIONS: The graphical tools and outputs described here significantly extend the validation and analysis possibilities of NOE assignments given by ARIA, as well as the analysis of the quality of the final structure ensemble. These tools are included in the latest version of ARIA, which is available at http://aria.pasteur.fr. The Web site also contains an installation guide, a user manual and example calculations.

ei

PDF DOI [BibTex]

Example-Based Learning for Single-Image Super-Resolution

Kim, K., Kwon, Y.

In DAGM 2008, pages: 456-463, (Editors: Rigoll, G. ), Springer, Berlin, Germany, 30th Annual Symposium of the German Association for Pattern Recognition, June 2008 (inproceedings)

Abstract
This paper proposes a regression-based method for single-image super-resolution. Kernel ridge regression (KRR) is used to estimate the high-frequency details of the underlying high-resolution image. A sparse solution of KRR is found by combining the ideas of kernel matching pursuit and gradient descent, which allows time-complexity to be kept to a moderate level. To resolve the problem of ringing artifacts occurring due to the regularization effect, the regression results are post-processed using a prior model of a generic image class. Experimental results demonstrate the effectiveness of the proposed method.
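
The regression backbone is standard kernel ridge regression; a minimal dense version (the paper additionally sparsifies the solution via kernel matching pursuit and gradient descent) reads:

```python
import numpy as np

def krr_fit(K: np.ndarray, y: np.ndarray, lam: float) -> np.ndarray:
    """Dense KRR: solve (K + lam*I) alpha = y for the dual coefficients."""
    return np.linalg.solve(K + lam * np.eye(K.shape[0]), y)

def krr_predict(K_test: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """K_test[i, j] = k(test_i, train_j); the prediction is K_test @ alpha."""
    return K_test @ alpha
```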

ei

PDF DOI [BibTex]

A Multiple Kernel Learning Approach to Joint Multi-Class Object Detection

Lampert, C., Blaschko, M.

In DAGM 2008, pages: 31-40, (Editors: Rigoll, G. ), Springer, Berlin, Germany, 30th Annual Symposium of the German Association for Pattern Recognition, June 2008, Main Award DAGM 2008 (inproceedings)

Abstract
Most current methods for multi-class object classification and localization work as independent 1-vs-rest classifiers. They decide whether and where an object is visible in an image purely on a per-class basis. Joint learning of more than one object class would generally be preferable, since this would allow the use of contextual information such as co-occurrence between classes. However, this approach is usually not employed because of its computational cost. In this paper we propose a method to combine the efficiency of single class localization with a subsequent decision process that works jointly for all given object classes. By following a multiple kernel learning (MKL) approach, we automatically obtain a sparse dependency graph of relevant object classes on which to base the decision. Experiments on the PASCAL VOC 2006 and 2007 datasets show that the subsequent joint decision step clearly improves the accuracy compared to single class detection.

ei

PDF ZIP Web DOI [BibTex]

Natural Evolution Strategies

Wierstra, D., Schaul, T., Peters, J., Schmidhuber, J.

In CEC 2008, pages: 3381-3387, IEEE, Piscataway, NJ, USA, IEEE Congress on Evolutionary Computation, June 2008 (inproceedings)

Abstract
This paper presents natural evolution strategies (NES), a novel algorithm for performing real-valued 'black box' function optimization: optimizing an unknown objective function where algorithm-selected function measurements constitute the only information accessible to the method. Natural evolution strategies search the fitness landscape using a multivariate normal distribution with a self-adapting mutation matrix to generate correlated mutations in promising regions. NES shares this property with covariance matrix adaptation (CMA), an evolution strategy (ES) which has been shown to perform well on a variety of high-precision optimization tasks. The natural evolution strategies algorithm, however, is simpler, less ad-hoc and more principled. Self-adaptation of the mutation matrix is derived using a Monte Carlo estimate of the natural gradient towards better expected fitness. By following the natural gradient instead of the 'vanilla' gradient, we can ensure efficient update steps while preventing early convergence due to overly greedy updates, resulting in reduced sensitivity to local suboptima. We show that NES has competitive performance with CMA on unimodal tasks, while outperforming it on several multimodal tasks that are rich in deceptive local optima.
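
For a feel of the search-gradient idea, here is a toy loop in the spirit of NES that adapts a diagonal (separable) Gaussian instead of the paper's full mutation matrix; population size, learning rates and the utility shaping are illustrative assumptions:

```python
import numpy as np

def nes_minimize(f, mu, sigma, iters=200, pop=20, eta_mu=1.0,
                 eta_sigma=0.1, seed=0):
    """Toy separable evolution-strategies loop in the spirit of NES;
    the paper adapts a full mutation matrix via the natural gradient."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        s = rng.standard_normal((pop, mu.size))        # standardized samples
        fit = np.array([f(mu + sigma * si) for si in s])
        u = -(fit - fit.mean()) / (fit.std() + 1e-12)  # utilities (minimize f)
        mu = mu + eta_mu * sigma * (u @ s) / pop       # step on the mean
        sigma = sigma * np.exp(eta_sigma * (u @ (s ** 2 - 1)) / (2 * pop))
    return mu

# e.g. nes_minimize(lambda x: np.sum(x ** 2), np.ones(5), 1.0)
```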

ei

PDF Web DOI [BibTex]

Pairwise Correlations and Multineuronal Firing Patterns in the Primary Visual Cortex of the Awake, Behaving Macaque

Berens, P., Ecker, A., Subramaniyan, M., Macke, J., Hauck, P., Bethge, M., Tolias, A.

AREADNE 2008: Research in Encoding and Decoding of Neural Ensembles, 2, pages: 48, June 2008 (poster)

ei

PDF [BibTex]

Visual saliency re-visited: Center-surround patterns emerge as optimal predictors for human fixation targets

Wichmann, F., Kienzle, W., Schölkopf, B., Franz, M.

Journal of Vision, 8(6):635, 8th Annual Meeting of the Vision Sciences Society (VSS), June 2008 (poster)

Abstract
Humans perceive the world by directing the center of gaze from one location to another via rapid eye movements, called saccades. In the period between saccades the direction of gaze is held fixed for a few hundred milliseconds (fixations). It is primarily during fixations that information enters the visual system. Remarkably, however, after only a few fixations we perceive a coherent, high-resolution scene, despite the visual acuity of the eye quickly decreasing away from the center of gaze: this suggests an effective strategy for selecting saccade targets. Top-down effects, such as the observer's task, thoughts, or intentions, have an effect on saccadic selection. Equally well known is that bottom-up effects (local image structure) influence saccade targeting regardless of top-down effects. However, the question of what the most salient visual features are is still under debate. Here we model the relationship between spatial intensity patterns in natural images and the response of the saccadic system using tools from machine learning. This allows us to identify the most salient image patterns that guide the bottom-up component of the saccadic selection system, which we refer to as perceptive fields. We show that center-surround patterns emerge as the optimal solution to the problem of predicting saccade targets. Using a novel nonlinear system identification technique, we reduce our learned classifier to a one-layer feed-forward network which is surprisingly simple compared to previously suggested models assuming more complex computations such as multi-scale processing, oriented filters and lateral inhibition. Nevertheless, our model is equally predictive and generalizes better to novel image sets. Furthermore, our findings are consistent with neurophysiological hardware in the superior colliculus. Bottom-up visual saliency may thus not be computed cortically, as has been thought previously.

ei

Web DOI [BibTex]

Kernel Methods in Machine Learning

Hofmann, T., Schölkopf, B., Smola, A.

Annals of Statistics, 36(3):1171-1220, June 2008 (article)

Abstract
We review machine learning methods employing positive definite kernels. These methods formulate learning and estimation problems in a reproducing kernel Hilbert space (RKHS) of functions defined on the data domain, expanded in terms of a kernel. Working in linear spaces of functions has the benefit of facilitating the construction and analysis of learning algorithms while at the same time allowing large classes of functions. The latter include nonlinear functions as well as functions defined on nonvectorial data.

ei

PDF PDF DOI [BibTex]

Cross-validation Optimization for Large Scale Structured Classification Kernel Methods

Seeger, M.

Journal of Machine Learning Research, 9, pages: 1147-1178, June 2008 (article)

Abstract
We propose a highly efficient framework for penalized likelihood kernel methods applied to multi-class models with a large, structured set of classes. As opposed to many previous approaches which try to decompose the fitting problem into many smaller ones, we focus on a Newton optimization of the complete model, making use of model structure and linear conjugate gradients in order to approximate Newton search directions. Crucially, our learning method is based entirely on matrix-vector multiplication primitives with the kernel matrices and their derivatives, allowing straightforward specialization to new kernels, and focusing code optimization efforts on these primitives only. Kernel parameters are learned automatically, by maximizing the cross-validation log likelihood in a gradient-based way, and predictive probabilities are estimated. We demonstrate our approach on large-scale text classification tasks with hierarchical structure on thousands of classes, achieving state-of-the-art results in an order of magnitude less time than previous work.
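
The matrix-vector design is easy to picture: Newton directions are approximated with linear conjugate gradients that touch the kernel matrix only through a matvec closure. A generic sketch (not the paper's code):

```python
import numpy as np

def conjugate_gradients(matvec, b, tol=1e-8, max_iter=None):
    """Solve A x = b for symmetric positive definite A, touching A only
    through the closure `matvec` (the primitive the framework is built on)."""
    x = np.zeros_like(b, dtype=float)
    r = b.astype(float).copy()   # residual b - A x (x starts at zero)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter or len(b)):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```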

ei

PDF PDF [BibTex]

Partitioning of Image Datasets using Discriminative Context Information

Lampert, CH.

In CVPR 2008, pages: 1-8, IEEE Computer Society, Los Alamitos, CA, USA, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 2008 (inproceedings)

Abstract
We propose a new method to partition an unlabeled dataset, called Discriminative Context Partitioning (DCP). It is motivated by the idea of splitting the dataset based only on how well the resulting parts can be separated from a context class of disjoint data points. This is in contrast to typical clustering techniques like K-means that are based on a generative model, implicitly or explicitly searching for modes in the distribution of samples. The discriminative criterion in DCP avoids the problems that density-based methods have when the a priori assumption of multimodality is violated, when the number of samples becomes small in relation to the dimensionality of the feature space, or if the cluster sizes are strongly unbalanced. We formulate DCP's separation property as a large-margin criterion, and show how the resulting optimization problem can be solved efficiently. Experiments on the MNIST and USPS datasets of handwritten digits and on a subset of the Caltech256 dataset show that, given a suitable context, DCP can achieve good results even in situations where density-based clustering techniques fail.

ei

PDF Web DOI [BibTex]

Non-monotonic Poisson Likelihood Maximization

Sra, S., Kim, D., Schölkopf, B.

(170), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, June 2008 (techreport)

Abstract
This report summarizes the theory and some main applications of a new non-monotonic algorithm for maximizing a Poisson Likelihood, which for Positron Emission Tomography (PET) is equivalent to minimizing the associated Kullback-Leibler Divergence, and for Transmission Tomography is similar to maximizing the dual of a maximum entropy problem. We call our method non-monotonic maximum likelihood (NMML) and show its application to different problems such as tomography and image restoration. We discuss some theoretical properties such as convergence for our algorithm. Our experimental results indicate that speedups obtained via our non-monotonic methods are substantial.

ei

PDF [BibTex]

Correlational Spectral Clustering

Blaschko, MB., Lampert, CH.

In CVPR 2008, pages: 1-8, IEEE Computer Society, Los Alamitos, CA, USA, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 2008 (inproceedings)

Abstract
We present a new method for spectral clustering with paired data based on kernel canonical correlation analysis, called correlational spectral clustering. Paired data are common in real world data sources, such as images with text captions. Traditional spectral clustering algorithms either assume that data can be represented by a single similarity measure, or by co-occurrence matrices that are then used in biclustering. In contrast, the proposed method uses separate similarity measures for each data representation, and allows for projection of previously unseen data that are only observed in one representation (e.g. images but not text). We show that this algorithm generalizes traditional spectral clustering algorithms and show consistent empirical improvement over spectral clustering on a variety of datasets of images with associated text.

ei

PDF Web DOI [BibTex]

Approximate Dynamic Programming with Gaussian Processes

Deisenroth, M., Peters, J., Rasmussen, C.

In ACC 2008, pages: 4480-4485, IEEE Service Center, Piscataway, NJ, USA, 2008 American Control Conference, June 2008 (inproceedings)

Abstract
In general, it is difficult to determine an optimal closed-loop policy in nonlinear control problems with continuous-valued state and control domains. Hence, approximations are often inevitable. The standard method of discretizing states and controls suffers from the curse of dimensionality and strongly depends on the chosen temporal sampling rate. In this paper, we introduce Gaussian process dynamic programming (GPDP) and determine an approximate globally optimal closed-loop policy. In GPDP, value functions in the Bellman recursion of the dynamic programming algorithm are modeled using Gaussian processes. GPDP returns an optimal state feedback for a finite set of states. Based on these outcomes, we learn a possibly discontinuous closed-loop policy on the entire state space by switching between two independently trained Gaussian processes. A binary classifier selects one Gaussian process to predict the optimal control signal. We show that GPDP is able to yield an almost optimal solution to an LQ problem using few sample points. Moreover, we successfully apply GPDP to the underpowered pendulum swing-up, a complex nonlinear control problem.

ei

PDF Web [BibTex]

Beyond Sliding Windows: Object Localization by Efficient Subwindow Search

Lampert, C., Blaschko, M., Hofmann, T.

In CVPR 2008, pages: 1-8, IEEE Computer Society, Los Alamitos, CA, USA, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 2008, Best paper award (inproceedings)

Abstract
Most successful object recognition systems rely on binary classification, deciding only if an object is present or not, but not providing information on the actual object location. To perform localization, one can take a sliding window approach, but this strongly increases the computational cost, because the classifier function has to be evaluated over a large set of candidate subwindows. In this paper, we propose a simple yet powerful branch-and-bound scheme that allows efficient maximization of a large class of classifier functions over all possible subimages. It converges to a globally optimal solution typically in sublinear time. We show how our method is applicable to different object detection and retrieval scenarios. The achieved speedup allows the use of classifiers for localization that formerly were considered too slow for this task, such as SVMs with a spatial pyramid kernel or nearest neighbor classifiers based on the χ²-distance. We demonstrate state-of-the-art performance of the resulting systems on the UIUC Cars dataset, the PASCAL VOC 2006 dataset and in the PASCAL VOC 2007 competition.
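
A sketch of the branch-and-bound skeleton may help. The upper-bounding function over sets of rectangles is the paper's key ingredient and is assumed here to be supplied by the classifier; feasibility pruning is omitted for brevity:

```python
import heapq

def efficient_subwindow_search(bound, W, H):
    """Best-first branch-and-bound over all rectangles in a W x H image.

    `bound(T, B, L, R)` takes four (lo, hi) coordinate intervals and must
    return an upper bound on the classifier score of every rectangle inside
    them, equal to the true score when all intervals are single points.
    """
    root = ((0, H - 1), (0, H - 1), (0, W - 1), (0, W - 1))  # T, B, L, R
    heap = [(-bound(*root), root)]
    while heap:
        neg_b, rect = heapq.heappop(heap)
        sizes = [hi - lo for lo, hi in rect]
        i = max(range(4), key=lambda k: sizes[k])   # widest interval
        if sizes[i] == 0:                           # all intervals are points
            return -neg_b, tuple(lo for lo, _ in rect)
        lo, hi = rect[i]
        mid = (lo + hi) // 2
        for half in ((lo, mid), (mid + 1, hi)):     # split and re-queue
            child = rect[:i] + (half,) + rect[i + 1:]
            heapq.heappush(heap, (-bound(*child), child))
```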

ei

PDF PDF Web DOI [BibTex]

Computed Torque Control with Nonparametric Regression Models

Nguyen-Tuong, D., Seeger, M., Peters, J.

In ACC 2008, pages: 212-217, IEEE Service Center, Piscataway, NJ, USA, 2008 American Control Conference, June 2008 (inproceedings)

Abstract
Computed torque control allows the design of considerably more precise, energy-efficient and compliant controls for robots. However, the major obstacle is the requirement of an accurate model for torque generation, which cannot be obtained in some cases using rigid-body formulations due to unmodeled nonlinearities, such as complex friction or actuator dynamics. In such cases, models approximated from robot data present an appealing alternative. In this paper, we compare two nonparametric regression methods for model approximation, i.e., locally weighted projection regression (LWPR) and Gaussian process regression (GPR). While locally weighted regression was employed for real-time model estimation in learning adaptive control, Gaussian process regression has not been used in control to date due to its high computational requirements. The comparison includes the assessment of model approximation for both regression methods using data originating from the SARCOS robot arm, as well as an evaluation of the robot tracking performance in computed torque control employing the approximated models. Our results show that GPR can be applied for real-time control, achieving higher accuracy. However, for online learning, LWPR is superior by reason of its lower computational requirements.

ei

PDF Web DOI [BibTex]

Multi-Classification by Categorical Features via Clustering

Seldin, Y., Tishby, N.

In Proceedings of the 25th International Conference on Machine Learning (ICML 2008), pages: 920-927, 25th International Conference on Machine Learning (ICML), June 2008 (inproceedings)

Abstract
We derive a generalization bound for multi-classification schemes based on grid clustering in categorical parameter product spaces. Grid clustering partitions the parameter space in the form of a Cartesian product of partitions for each of the parameters. The derived bound provides a means to evaluate clustering solutions in terms of the generalization power of a built-on classifier. For classification based on a single feature the bound serves to find a globally optimal classification rule. Comparison of the generalization power of individual features can then be used for feature ranking. Our experiments show that in this role the bound is much more precise than mutual information or normalized correlation indices.

ei

PDF Web [BibTex]

A Kernel Test of Nonlinear Granger Causality

Sun, X.

In Proceedings of the Workshop on Inference and Estimation in Probabilistic Time-Series Models, pages: 79-89, (Editors: Barber, D. , A. T. Cemgil, S. Chiappa), Isaac Newton Institute for Mathematical Sciences, Cambridge, United Kingdom, Workshop on Inference and Estimation in Probabilistic Time-Series Models, June 2008 (inproceedings)

Abstract
We present a novel test of nonlinear Granger causality in bivariate time series. The trace norm of conditional covariance operators is used to capture the prediction errors. Based on this measure, a subsampling-based multiple testing procedure tests the prediction improvement of one time series by the other one. The distributional properties of the resulting p-values reveal the direction of Granger causality. Encouraging results of experiments with simulated and real-world data support our approach.

ei

PDF [BibTex]

Analysis of Pattern Recognition Methods in Classifying BOLD Signals in Monkeys at 7-Tesla

Ku, S., Gretton, A., Macke, J., Tolias, A., Logothetis, N.

AREADNE 2008: Research in Encoding and Decoding of Neural Ensembles, 2, pages: 67, June 2008 (poster)

Abstract
Pattern recognition methods have shown that fMRI data can reveal significant information about brain activity. For example, in the debate of how object-categories are represented in the brain, multivariate analysis has been used to provide evidence of distributed encoding schemes. Many follow-up studies have employed different methods to analyze human fMRI data with varying degrees of success. In this study we compare four popular pattern recognition methods: correlation analysis, support-vector machines (SVM), linear discriminant analysis and Gaussian naïve Bayes (GNB), using data collected at high field (7T) with higher resolution than usual fMRI studies. We investigate prediction performance on single trials and for averages across varying numbers of stimulus presentations. The performance of the various algorithms depends on the nature of the brain activity being categorized: for several tasks, many of the methods work well, whereas for others, no methods perform above chance level. An important factor in overall classification performance is careful preprocessing of the data, including dimensionality reduction, voxel selection, and outlier elimination.

ei

[BibTex]

Thin-Plate Splines Between Riemannian Manifolds

Steinke, F., Hein, M., Schölkopf, B.

Workshop on Geometry and Statistics of Shapes, June 2008 (talk)

Abstract
With the help of differential geometry we describe a framework to define a thin-plate spline like energy for maps between arbitrary Riemannian manifolds. The so-called Eells energy only depends on the intrinsic geometry of the input and output manifold, but not on their respective representation. The energy can then be used for regression between manifolds, we present results for cases where the outputs are rotations, sets of angles, or points on 3D surfaces. In the future we plan to also target regression where the output is an element of "shape space", understood as a Riemannian manifold. One could also further explore the meaning of the Eells energy when applied to diffeomorphisms between shapes, especially with regard to its potential use as a distance measure between shapes that does not depend on the embedding or the parametrisation of the shapes.

ei

Web [BibTex]

Bayesian Color Constancy Revisited

Gehler, P., Rother, C., Blake, A., Minka, T., Sharp, T.

In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR, June 2008, http://dx.doi.org/10.1109/CVPR.2008.4587765 (inproceedings)

ei

website+code+data pdf [BibTex]

Reinforcement Learning of Motor Skills with Policy Gradients

Peters, J., Schaal, S.

Neural Networks, 21(4):682-697, May 2008 (article)

ei

PDF Web DOI [BibTex]


Information Consistency of Nonparametric Gaussian Process Methods

Seeger, MW., Kakade, SM., Foster, DP.

IEEE Transactions on Information Theory, 54(5):2376-2382, May 2008 (article)

Abstract
Bayesian nonparametric models are widely and successfully used for statistical prediction. While posterior consistency properties are well studied in quite general settings, results have been proved using abstract concepts such as metric entropy, and they come with subtle conditions which are hard to validate and not intuitive when applied to concrete models. Furthermore, convergence rates are difficult to obtain. By focussing on the concept of information consistency for Bayesian Gaussian process (GP) models, consistency results and convergence rates are obtained via a regret bound on cumulative log loss. These results depend strongly on the covariance function of the prior process, thereby giving a novel interpretation to penalization with reproducing kernel Hilbert space norms and to commonly used covariance function classes and their parameters. The proof of the main result employs elementary convexity arguments only. A theorem of Widom is used in order to obtain precise convergence rates for several covariance functions widely used in practice.

ei

Web DOI [BibTex]

Real-time Learning of Resolved Velocity Control on a Mitsubishi PA-10

Peters, J., Nguyen-Tuong, D.

In ICRA 2008, pages: 2872-2877, IEEE Service Center, Piscataway, NJ, USA, 2008 IEEE International Conference on Robotics and Automation, May 2008 (inproceedings)

Abstract
Learning inverse kinematics has long fascinated the robot learning community. While humans acquire this transformation to complicated tool spaces with ease, it is not a straightforward application for supervised learning algorithms due to the non-convexity of the learning problem. However, the key insight that the problem can be considered convex in small local regions allows the application of locally linear learning methods. Nevertheless, the local solution of the problem depends on the data distribution, which can result in inconsistent global solutions with large model discontinuities. While this problem can be treated in various ways in offline learning, it poses a serious problem for online learning. Previous approaches to the real-time learning of inverse kinematics avoid this problem using smart data generation, e.g., by letting the learner bias its own solution. Such biased solutions can result in premature convergence, and from the resulting solution it is often hard to understand what has been learned in that local region. This paper improves upon and solves this problem by presenting a learning algorithm which can deal with this inconsistency by re-weighting the data online. Furthermore, we show that our algorithm works not only in simulation; we also present real-time learning results on a physical Mitsubishi PA-10 robot arm.

ei

PDF Web DOI [BibTex]

New Frontiers in Characterizing Structure and Dynamics by NMR

Nilges, M., Markwick, P., Malliavin, TE., Rieping, W., Habeck, M.

In Computational Structural Biology: Methods and Applications, pages: 655-680, (Editors: Schwede, T. , M. C. Peitsch), World Scientific, New Jersey, NJ, USA, May 2008 (inbook)

Abstract
Nuclear Magnetic Resonance (NMR) spectroscopy has emerged as the method of choice for studying both the structure and the dynamics of biological macromolecules in solution. Despite the maturity of the NMR method for structure determination, its application faces a number of challenges. The method is limited to systems of relatively small molecular mass, data collection times are long, data analysis remains a lengthy procedure, and it is difficult to evaluate the quality of the final structures. The last years have seen significant advances in experimental techniques to overcome or reduce some of these limitations. The function of bio-macromolecules is determined by both their 3D structure and their conformational dynamics. These molecules are inherently flexible systems displaying a broad range of dynamics on time scales from picoseconds to seconds. NMR is unique in its ability to obtain dynamic information on an atomic scale. The experimental information on structure and dynamics is intricately mixed. It is, however, difficult to unite both structural and dynamical information into one consistent model, and protocols for the determination of structure and dynamics are performed independently. This chapter deals with the challenges posed by the interpretation of NMR data on structure and dynamics. We will first relate the standard structure calculation methods to Bayesian probability theory. We will then briefly describe the advantages of a fully Bayesian treatment of structure calculation. Then, we will illustrate the advantages of using Bayesian reasoning at least partly in standard structure calculations. The final part will be devoted to the interpretation of experimental data on dynamics.

ei

Web [BibTex]

Learning resolved velocity control

Peters, J.

2008 IEEE International Conference on Robotics and Automation (ICRA), May 2008 (talk)

ei

Web [BibTex]

Causal inference from statistical data

Sun, X.

Biologische Kybernetik, Technische Hochschule Karlsruhe, Karlsruhe, Germany, April 2008 (phdthesis)

ei

Web [BibTex]

Pairwise Correlations and Multineuronal Firing Patterns in Primary Visual Cortex

Berens, P.

Biologische Kybernetik, Eberhard Karls Universität Tübingen, Tübingen, Germany, April 2008 (diplomathesis)

ei

[BibTex]

Relating the Thermodynamic Arrow of Time to the Causal Arrow

Allahverdyan, A., Janzing, D.

Journal of Statistical Mechanics, 2008(P04001):1-21, April 2008 (article)

Abstract
Consider a Hamiltonian system that consists of a slow subsystem S and a fast subsystem F. The autonomous dynamics of S is driven by an effective Hamiltonian, but its thermodynamics is unexpected. We show that a well-defined thermodynamic arrow of time (second law) emerges for S whenever there is a well-defined causal arrow from S to F and the back-action is negligible. This is because the back-action of F on S is described by a non-globally Hamiltonian Born–Oppenheimer term that violates the Liouville theorem, and makes the second law inapplicable to S. If S and F are mixing, under the causal arrow condition they are described by microcanonical distributions P(S) and P(S|F). Their structure supports a causal inference principle proposed recently in machine learning.

ei

Web DOI [BibTex]

Generalization and Similarity in Exemplar Models of Categorization: Insights from Machine Learning

Jäkel, F., Schölkopf, B., Wichmann, F.

Psychonomic Bulletin and Review, 15(2):256-271, April 2008 (article)

Abstract
Exemplar theories of categorization depend on similarity for explaining subjects' ability to generalize to new stimuli. A major criticism of exemplar theories concerns their lack of abstraction mechanisms and thus, seemingly, generalization ability. Here, we use insights from machine learning to demonstrate that exemplar models can actually generalize very well. Kernel methods in machine learning are akin to exemplar models and very successful in real-world applications. Their generalization performance depends crucially on the chosen similarity measure. While similarity plays an important role in describing generalization behavior, it is not the only factor that controls generalization performance. In machine learning, kernel methods are often combined with regularization techniques to ensure good generalization. These same techniques are easily incorporated in exemplar models. We show that the Generalized Context Model (Nosofsky, 1986) and ALCOVE (Kruschke, 1992) are closely related to a statistical model called kernel logistic regression. We argue that generalization is central to the enterprise of understanding categorization behavior and suggest how insights from machine learning can offer some guidance. Keywords: kernel, similarity, regularization, generalization, categorization.
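
The link to kernel logistic regression can be made concrete: with a kernel matrix K built from the chosen similarity measure, an exemplar-style classifier can be fit as below (a minimal sketch; labels y in {0, 1}, step size and iteration count are illustrative):

```python
import numpy as np

def kernel_logistic_regression(K, y, lam=1e-2, lr=0.1, iters=500):
    """Fit dual coefficients alpha for P(y=1|x_i) = sigmoid((K alpha)_i)
    by gradient descent on the RKHS-regularized logistic loss."""
    n = K.shape[0]
    alpha = np.zeros(n)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(K @ alpha)))      # predicted probabilities
        grad = K @ (p - y) / n + lam * (K @ alpha)  # loss + (lam/2) a'Ka
        alpha -= lr * grad
    return alpha
```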

ei

PDF Web DOI [BibTex]


Bayesian methods for protein structure determination

Habeck, M.

Machine Learning in Structural Bioinformatics, April 2008 (talk)

ei

Web [BibTex]

Graph Mining with Variational Dirichlet Process Mixture Models

Tsuda, K., Kurihara, K.

In SDM 2008, pages: 432-442, (Editors: Zaki, M. J.), Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 8th SIAM International Conference on Data Mining, April 2008 (inproceedings)

Abstract
Graph data such as chemical compounds and XML documents are getting more common in many application domains. A main difficulty of graph data processing lies in the intrinsic high dimensionality of graphs, namely, when a graph is represented as a binary feature vector of indicators of all possible subgraph patterns, the dimensionality gets too large for usual statistical methods. We propose a nonparametric Bayesian method for clustering graphs and selecting salient patterns at the same time. Variational inference is adopted here, because sampling is not applicable due to extremely high dimensionality. The feature set minimizing the free energy is efficiently collected with the DFS code tree, where the generation of useless subgraphs is suppressed by a tree pruning condition. In experiments, our method is compared with a simpler approach based on frequent subgraph mining, and graph kernels.

ei

PDF Web [BibTex]

A Kernel Method for the Two-sample Problem

Gretton, A., Borgwardt, K., Rasch, M., Schölkopf, B., Smola, A.

(157), Max-Planck-Institute for Biological Cybernetics Tübingen, April 2008 (techreport)

Abstract
We propose a framework for analyzing and comparing distributions, allowing us to design statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS). We present two tests based on large deviation bounds for the test statistic, while a third is based on the asymptotic distribution of this statistic. The test statistic can be computed in quadratic time, although efficient linear time approximations are available. Several classical metrics on distributions are recovered when the function space used to compute the difference in expectations is allowed to be more general (e.g., a Banach space). We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.
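
For concreteness, the quadratic-time unbiased estimate of the squared test statistic (MMD) from precomputed kernel matrices is (a minimal sketch consistent with the description above):

```python
import numpy as np

def mmd2_unbiased(K_xx, K_yy, K_xy):
    """Unbiased quadratic-time estimate of squared MMD from kernel matrices
    K_xx (m x m), K_yy (n x n) and the cross matrix K_xy (m x n)."""
    m, n = K_xx.shape[0], K_yy.shape[0]
    term_x = (K_xx.sum() - np.trace(K_xx)) / (m * (m - 1))
    term_y = (K_yy.sum() - np.trace(K_yy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * K_xy.mean()
```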

ei

PDF [BibTex]

Manifold-valued Thin-plate Splines with Applications in Computer Graphics

Steinke, F., Hein, M., Peters, J., Schölkopf, B.

Computer Graphics Forum, 27(2):437-448, April 2008 (article)

Abstract
We present a generalization of thin-plate splines for interpolation and approximation of manifold-valued data, and demonstrate its usefulness in computer graphics with several applications from different fields. The cornerstone of our theoretical framework is an energy functional for mappings between two Riemannian manifolds which is independent of parametrization and respects the geometry of both manifolds. If the manifolds are Euclidean, the energy functional reduces to the classical thin-plate spline energy. We show how the resulting optimization problems can be solved efficiently in many cases. Our example applications range from orientation interpolation and motion planning in animation over geometric modelling tasks to color interpolation.

ei

PDF AVI Web DOI [BibTex]


Model-Based Reinforcement Learning with Continuous States and Actions

Deisenroth, M., Rasmussen, C., Peters, J.

In ESANN 2008, pages: 19-24, (Editors: Verleysen, M. ), d-side, Evere, Belgium, European Symposium on Artificial Neural Networks, April 2008 (inproceedings)

Abstract
Finding an optimal policy in a reinforcement learning (RL) framework with continuous state and action spaces is challenging. Approximate solutions are often inevitable. GPDP is an approximate dynamic programming algorithm based on Gaussian process (GP) models for the value functions. In this paper, we extend GPDP to the case of unknown transition dynamics. After building a GP model for the transition dynamics, we apply GPDP to this model and determine a continuous-valued policy in the entire state space. We apply the resulting controller to the underpowered pendulum swing up. Moreover, we compare our results on this RL task to a nearly optimal discrete DP solution in a fully known environment.

ei

PDF Web [BibTex]

Learning Inverse Dynamics: A Comparison

Nguyen-Tuong, D., Peters, J., Seeger, M., Schölkopf, B.

In Advances in Computational Intelligence and Learning: Proceedings of the European Symposium on Artificial Neural Networks, pages: 13-18, (Editors: M Verleysen), d-side, Evere, Belgium, 16th European Symposium on Artificial Neural Networks (ESANN), April 2008 (inproceedings)

Abstract
While it is well known that models can enhance control performance in terms of precision or energy efficiency, their practical application has often been limited by the complexities of manually obtaining sufficiently accurate models. In the past, learning has proven a viable alternative to using a combination of rigid-body dynamics and handcrafted approximations of nonlinearities. However, a major open question is which nonparametric learning method is best suited for learning dynamics. Traditionally, locally weighted projection regression (LWPR) has been the standard method, as it is capable of online, real-time learning for very complex robots. However, while LWPR has had significant impact on learning in robotics, alternative nonparametric regression methods such as support vector regression (SVR) and Gaussian process regression (GPR) offer interesting alternatives with fewer open parameters and potentially higher accuracy. In this paper, we evaluate these three alternatives for model learning. Our comparison consists of the evaluation of learning quality for each regression method using original data from the SARCOS robot arm, as well as the robot tracking performance employing learned models. The results show that GPR and SVR achieve a superior learning precision and can be applied for real-time control, obtaining higher accuracy. However, for online learning, LWPR presents the better method due to its lower computational requirements.
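
To indicate how compact the GPR variant of such a model learner can be in a modern toolkit, here is a scikit-learn sketch; the data arrays are random placeholders for recorded joint positions, velocities, accelerations and torques, and one GP per joint is assumed:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Random stand-ins for recorded data: 7 joints -> 21 inputs (q, qd, qdd),
# with the torque of a single joint as the regression target.
X = np.random.randn(500, 21)
u = np.random.randn(500)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=np.ones(21)) + WhiteKernel())
gp.fit(X, u)                                    # hyperparameters fit by ML-II
tau_pred = gp.predict(np.random.randn(10, 21))  # feed-forward torque estimate
```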

ei

PDF Web [BibTex]

The Metric Nearness Problem

Brickell, J., Dhillon, I., Sra, S., Tropp, J.

SIAM Journal on Matrix Analysis and Applications, 30(1):375-396, April 2008 (article)

Abstract
Metric nearness refers to the problem of optimally restoring metric properties to distance measurements that happen to be nonmetric due to measurement errors or otherwise. Metric data can be important in various settings, for example, in clustering, classification, metric-based indexing, query processing, and graph theoretic approximation algorithms. This paper formulates and solves the metric nearness problem: Given a set of pairwise dissimilarities, find a "nearest" set of distances that satisfy the properties of a metric, principally the triangle inequality. For solving this problem, the paper develops efficient triangle fixing algorithms that are based on an iterative projection method. An intriguing aspect of the metric nearness problem is that a special case turns out to be equivalent to the all pairs shortest paths problem. The paper exploits this equivalence and develops a new algorithm for the latter problem using a primal-dual method. Applications to graph clustering are provided as an illustration. We include experiments that demonstrate the computational superiority of triangle fixing over general purpose convex programming software. Finally, we conclude by suggesting various useful extensions and generalizations to metric nearness.
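
The all-pairs shortest paths special case is simple to state in code: when distances may only be decreased, the nearest metric is the shortest-path metric, computable by Floyd-Warshall (a sketch of the special case, not of the paper's triangle-fixing algorithm):

```python
import numpy as np

def decrease_only_metric_nearness(D):
    """Nearest metric when distances may only be decreased: all-pairs
    shortest paths on the dissimilarity matrix (Floyd-Warshall).
    D is square, symmetric, nonnegative, with zero diagonal."""
    M = D.astype(float).copy()
    for k in range(M.shape[0]):
        M = np.minimum(M, M[:, k:k + 1] + M[k:k + 1, :])
    return M
```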

ei

Web DOI [BibTex]

Bayesian Inference and Optimal Design for the Sparse Linear Model

Seeger, MW.

Journal of Machine Learning Research, 9, pages: 759-813, April 2008 (article)

Abstract
The linear model with sparsity-favouring prior on the coefficients has important applications in many different domains. In machine learning, most methods to date search for maximum a posteriori sparse solutions and neglect to represent posterior uncertainties. In this paper, we address problems of Bayesian optimal design (or experiment planning), for which accurate estimates of uncertainty are essential. To this end, we employ expectation propagation approximate inference for the linear model with Laplace prior, giving new insight into numerical stability properties and proposing a robust algorithm. We also show how to estimate model hyperparameters by empirical Bayesian maximisation of the marginal likelihood, and propose ideas in order to scale up the method to very large underdetermined problems. We demonstrate the versatility of our framework on the application of gene regulatory network identification from micro-array expression data, where both the Laplace prior and the active experimental design approach are shown to result in significant improvements. We also address the problem of sparse coding of natural images, and show how our framework can be used for compressive sensing tasks.

ei

PDF PDF [BibTex]

Consistency of Spectral Clustering

von Luxburg, U., Belkin, M., Bousquet, O.

Annals of Statistics, 36(2):555-586, April 2008 (article)

Abstract
Consistency is a key property of statistical algorithms when the data is drawn from some underlying probability distribution. Surprisingly, despite decades of work, little is known about consistency of most clustering algorithms. In this paper we investigate consistency of the popular family of spectral clustering algorithms, which clusters the data with the help of eigenvectors of graph Laplacian matrices. We develop new methods to establish that for increasing sample size, those eigenvectors converge to the eigenvectors of certain limit operators. As a result we can prove that one of the two major classes of spectral clustering (normalized clustering) converges under very general conditions, while the other (unnormalized clustering) is only consistent under strong additional assumptions, which are not always satisfied in real data. We conclude that our analysis provides strong evidence for the superiority of normalized spectral clustering.
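
In its standard algorithmic form, the normalized variant analyzed here is only a few lines (a sketch following the usual normalized-Laplacian recipe, with Ng-Jordan-Weiss row normalization):

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def normalized_spectral_clustering(W, k):
    """Cluster via the k smallest eigenvectors of the normalized Laplacian
    L_sym = I - D^{-1/2} W D^{-1/2}, built from a similarity matrix W."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(d)) - D_inv_sqrt @ W @ D_inv_sqrt
    _, U = eigh(L_sym, subset_by_index=[0, k - 1])    # k smallest eigvecs
    U = U / np.linalg.norm(U, axis=1, keepdims=True)  # row-normalize
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)
```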

ei

PDF DOI [BibTex]

Plant Classification from Bat-Like Echolocation Signals

Yovel, Y., Franz, MO., Stilz, P., Schnitzler, H-U.

PLoS Computational Biology, 4(3, e1000032):1-13, March 2008 (article)

Abstract
Classification of plants according to their echoes is an elementary component of bat behavior that plays an important role in spatial orientation and food acquisition. Vegetation echoes are, however, highly complex stochastic signals: from an acoustical point of view, a plant can be thought of as a three-dimensional array of leaves reflecting the emitted bat call. The received echo is therefore a superposition of many reflections. In this work we suggest that the classification of these echoes might not be such a troublesome routine for bats as formerly thought. We present a rather simple approach to classifying signals from a large database of plant echoes that were created by ensonifying plants with a frequency-modulated bat-like ultrasonic pulse. Our algorithm uses the spectrogram of a single echo from which it only uses features that are undoubtedly accessible to bats. We used a standard machine learning algorithm (SVM) to automatically extract suitable linear combinations of time and frequency cues from the spectrograms such that classification with high accuracy is enabled. This demonstrates that ultrasonic echoes are highly informative about the species membership of an ensonified plant, and that this information can be extracted with rather simple, biologically plausible analysis. Thus, our findings provide a new explanatory basis for the poorly understood observed abilities of bats in classifying vegetation and other complex objects.
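
A minimal sketch of such a pipeline (log-spectrogram features plus a linear SVM; the variable names and sampling rate are hypothetical placeholders, and the paper's feature extraction is more careful):

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import LinearSVC

def echo_features(echo: np.ndarray, fs: float) -> np.ndarray:
    """Log-spectrogram of one echo, flattened into a feature vector
    (assumes all echoes have equal length)."""
    _, _, S = spectrogram(echo, fs=fs, nperseg=128)
    return np.log(S + 1e-12).ravel()

# `echoes` (list of equal-length 1-D recordings) and `labels` (plant species)
# are hypothetical placeholders for the ensonification database:
# X = np.stack([echo_features(e, fs=250_000) for e in echoes])
# clf = LinearSVC(C=1.0).fit(X, labels)
```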

ei

Web DOI [BibTex]

The role of stimulus correlations for population decoding in the retina

Schwartz, G., Macke, J., Berry, M.

Computational and Systems Neuroscience 2008 (COSYNE 2008), 5, pages: 172, March 2008 (poster)

ei

PDF Web [BibTex]

Causal Reasoning by Evaluating the Complexity of Conditional Densities with Kernel Methods

Sun, X., Janzing, D., Schölkopf, B.

Neurocomputing, 71(7-9):1248-1256, March 2008 (article)

Abstract
We propose a method to quantify the complexity of conditional probability measures by a Hilbert space seminorm of the logarithm of its density. The concept of reproducing kernel Hilbert spaces (RKHSs) is a flexible tool to define such a seminorm by choosing an appropriate kernel. We present several examples with artificial data sets where our kernel-based complexity measure is consistent with our intuitive understanding of complexity of densities. The intention behind the complexity measure is to provide a new approach to inferring causal directions. The idea is that the factorization of the joint probability measure P(effect, cause) into P(effect|cause)P(cause) leads typically to "simpler" and "smoother" terms than the factorization into P(cause|effect)P(effect). Since the conventional constraint-based approach of causal discovery is not able to determine the causal direction between only two variables, our inference principle can in particular be useful when combined with other existing methods. We provide several simple examples with real-world data where the true causal directions indeed lead to simpler (conditional) densities.

ei

Web DOI [BibTex]