2019

Spatial Filtering based on Riemannian Manifold for Brain-Computer Interfacing

Xu, J.

Technical University of Munich, Germany, 2019 (mastersthesis)

ei

[BibTex]

Learning to Control Highly Accelerated Ballistic Movements on Muscular Robots

Büchler, D., Calandra, R., Peters, J.

2019 (article) Submitted

Abstract
High-speed and high-acceleration movements are inherently hard to control. Applying learning to the control of such motions on anthropomorphic robot arms can improve the accuracy of the control but might damage the system. The inherent exploration of learning approaches can lead to instabilities and to the robot reaching joint limits at high speeds. Having hardware that enables safe exploration of high-speed and high-acceleration movements is therefore desirable. To address this issue, we propose to use robots actuated by Pneumatic Artificial Muscles (PAMs). In this paper, we present a four-degrees-of-freedom (DoF) robot arm that reaches joint angle accelerations of up to 28000 °/s^2 while avoiding dangerous joint limits thanks to the antagonistic actuation and limits on the air pressure ranges. With this robot arm, we are able to tune control parameters using Bayesian optimization directly on the hardware without additional safety considerations. The achieved tracking performance on a fast trajectory exceeds previous results on comparable PAM-driven robots. We also show that our system can be controlled well on slow trajectories with PID controllers due to careful construction considerations, such as minimal bending of cables, lightweight kinematics, and minimal contact between the PAMs and between the PAMs and the links. Finally, we propose a novel technique to control the co-contraction of antagonistic muscle pairs. Experimental results illustrate that choosing the optimal co-contraction level is vital for better tracking performance. Through the use of PAM-driven robots and learning, we take a small step towards the future development of robots capable of more human-like motions.
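
As a companion to the abstract, here is a minimal sketch of the kind of hardware-in-the-loop tuning it describes: Bayesian optimization of controller gains with the tracking error of a rollout as the objective. The gain ranges, rollout budget, and the `run_episode` interface are illustrative assumptions, not the authors' setup.

```python
# Hedged sketch: Bayesian optimization of PID gains with rollout tracking error
# as the objective. `run_episode` is a placeholder for executing one trajectory
# on the robot (or a simulator) and returning an error measure; it is assumed
# here, not taken from the paper.
from skopt import gp_minimize
from skopt.space import Real

def run_episode(kp, ki, kd):
    """Run one rollout with the given gains and return the RMS tracking error."""
    raise NotImplementedError("connect to the robot or a simulator here")

search_space = [
    Real(0.0, 10.0, name="kp"),   # illustrative gain ranges
    Real(0.0, 1.0, name="ki"),
    Real(0.0, 1.0, name="kd"),
]

result = gp_minimize(
    lambda gains: run_episode(*gains),  # objective evaluated per rollout
    search_space,
    n_calls=50,                         # rollout budget
    random_state=0,
)
print("best gains:", result.x, "best tracking error:", result.fun)
```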

ei

Arxiv Video [BibTex]


AReS and MaRS: Adversarial and MMD-Minimizing Regression for SDEs

Abbati*, G., Wenk*, P., Osborne, M. A., Krause, A., Schölkopf, B., Bauer, S.

Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 1-10, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, 2019, *equal contribution (conference)

ei

PDF link (url) [BibTex]

Perception of temporal dependencies in autoregressive motion

Meding, K., Schölkopf, B., Wichmann, F. A.

European Conference on Visual Perception (ECVP), 2019 (poster)

ei

[BibTex]

Inferring causation from time series with perspectives in Earth system sciences

Runge, J., Bathiany, S., Bollt, E., Camps-Valls, G., Coumou, D., Deyle, E., Glymour, C., Kretschmer, M., Mahecha, M., van Nes, E., Peters, J., Quax, R., Reichstein, M., Scheffer, M., Schölkopf, B., Spirtes, P., Sugihara, G., Sun, J., Zhang, K., Zscheischler, J.

Nature Communications, 2019 (article) In revision

ei

[BibTex]

Kernel Stein Tests for Multiple Model Comparison

Lim, J. N., Yamada, M., Schölkopf, B., Jitkrittum, W.

Advances in Neural Information Processing Systems 32, 33rd Annual Conference on Neural Information Processing Systems, 2019 (conference) To be published

ei

[BibTex]

Fast Feedback Control over Multi-hop Wireless Networks with Mode Changes and Stability Guarantees

Baumann, D., Mager, F., Jacob, R., Thiele, L., Zimmerling, M., Trimpe, S.

ACM Transactions on Cyber-Physical Systems, 2019 (article) Accepted

ics

arXiv PDF [BibTex]

Event-triggered Learning

Solowjow, F., Trimpe, S.

2019 (techreport) Submitted

ics

arXiv PDF [BibTex]


MYND: A Platform for Large-scale Neuroscientific Studies

Hohmann, M. R., Hackl, M., Wirth, B., Zaman, T., Enficiaud, R., Grosse-Wentrup, M., Schölkopf, B.

Proceedings of the 2019 Conference on Human Factors in Computing Systems (CHI), 2019 (conference) Accepted

ei

[BibTex]

A Kernel Stein Test for Comparing Latent Variable Models

Kanagawa, H., Jitkrittum, W., Mackey, L., Fukumizu, K., Gretton, A.

2019 (conference) Submitted

ei

arXiv [BibTex]

Phenomenal Causality and Sensory Realism

Bruijns, S. A., Meding, K., Schölkopf, B., Wichmann, F. A.

European Conference on Visual Perception (ECVP), 2019 (poster)

ei

[BibTex]

From Variational to Deterministic Autoencoders

Ghosh*, P., Sajjadi*, M. S. M., Vergari, A., Black, M. J., Schölkopf, B.

2019, *equal contribution (conference) Submitted

Abstract
Variational Autoencoders (VAEs) provide a theoretically-backed framework for deep generative models. However, they often produce “blurry” images, which is linked to their training objective. Sampling in the most popular implementation, the Gaussian VAE, can be interpreted as simply injecting noise to the input of a deterministic decoder. In practice, this simply enforces a smooth latent space structure. We challenge the adoption of the full VAE framework on this specific point in favor of a simpler, deterministic one. Specifically, we investigate how substituting stochasticity with other explicit and implicit regularization schemes can lead to a meaningful latent space without having to force it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism for sampling new data points, we propose to employ an efficient ex-post density estimation step that can be readily adopted both for the proposed deterministic autoencoders and to improve the sample quality of existing VAEs. We show in a rigorous empirical study that regularized deterministic autoencoding achieves state-of-the-art sample quality on the common MNIST, CIFAR-10 and CelebA datasets.
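
A minimal sketch of the two-stage recipe outlined above, under assumed choices: a deterministic autoencoder trained with an explicit latent regularizer (here a simple L2 penalty on the codes), followed by ex-post density estimation with a Gaussian mixture fitted to the training codes for sampling. Architecture, penalty, and hyperparameters are illustrative, not the paper's configuration.

```python
# Hedged sketch: regularized deterministic autoencoder + ex-post density
# estimation. The L2 code penalty stands in for the regularization schemes the
# paper investigates; sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

enc = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 16))
dec = nn.Sequential(nn.Linear(16, 256), nn.ReLU(), nn.Linear(256, 784), nn.Sigmoid())
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

def train_step(x, beta=1e-2):
    """One reconstruction step with an explicit penalty on the latent codes."""
    z = enc(x)
    loss = ((dec(z) - x.flatten(1)) ** 2).mean() + beta * (z ** 2).sum(dim=1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def make_sampler(codes, n_components=10):
    """Ex-post density estimation: fit a GMM to training codes, decode its samples."""
    gmm = GaussianMixture(n_components=n_components).fit(codes)
    return lambda n: dec(torch.as_tensor(gmm.sample(n)[0], dtype=torch.float32))
```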

ei ps

arXiv [BibTex]


Fisher Efficient Inference of Intractable Models

Liu, S., Kanamori, T., Jitkrittum, W., Chen, Y.

Advances in Neural Information Processing Systems 32, 33rd Annual Conference on Neural Information Processing Systems, 2019 (conference) To be published

ei

arXiv [BibTex]

Eigendecompositions of Transfer Operators in Reproducing Kernel Hilbert Spaces

Klus, S., Schuster, I., Muandet, K.

Journal of Nonlinear Science, 2019, First Online: 21 August 2019 (article)

ei

DOI [BibTex]

2013


Camera-specific Image Denoising

Schober, M.

Eberhard Karls Universität Tübingen, Germany, October 2013 (diplomathesis)

ei pn

PDF [BibTex]

Studying large-scale brain networks: electrical stimulation and neural-event-triggered fMRI

Logothetis, N., Eschenko, O., Murayama, Y., Augath, M., Steudel, T., Evrard, H., Besserve, M., Oeltermann, A.

Twenty-Second Annual Computational Neuroscience Meeting (CNS*2013), BMC Neuroscience, 14(Supplement 1):A1, July 2013 (talk)

ei

Web [BibTex]

Correlation of Simultaneously Acquired Diffusion-Weighted Imaging and 2-Deoxy-[18F] fluoro-2-D-glucose Positron Emission Tomography of Pulmonary Lesions in a Dedicated Whole-Body Magnetic Resonance/Positron Emission Tomography System

Schmidt, H., Brendle, C., Schraml, C., Martirosian, P., Bezrukov, I., Hetzel, J., Müller, M., Sauter, A., Claussen, C., Pfannenberg, C., Schwenzer, N.

Investigative Radiology, 48(5):247-255, May 2013 (article)

ei

Web [BibTex]

Replacing Causal Faithfulness with Algorithmic Independence of Conditionals

Lemeire, J., Janzing, D.

Minds and Machines, 23(2):227-249, May 2013 (article)

Abstract
Independence of Conditionals (IC) has recently been proposed as a basic rule for causal structure learning. If a Bayesian network represents the causal structure, its Conditional Probability Distributions (CPDs) should be algorithmically independent. In this paper we compare IC with causal faithfulness (FF), stating that only those conditional independences that are implied by the causal Markov condition hold true. The latter is a basic postulate in common approaches to causal structure learning. The common spirit of FF and IC is to reject causal graphs for which the joint distribution looks ‘non-generic’. The difference lies in the notion of genericity: FF sometimes rejects models just because one of the CPDs is simple, for instance if the CPD describes a deterministic relation. IC does not behave in this undesirable way. It only rejects a model when there is a non-generic relation between different CPDs although each CPD looks generic when considered separately. Moreover, it detects relations between CPDs that cannot be captured by conditional independences. IC therefore helps in distinguishing causal graphs that induce the same conditional independences (i.e., they belong to the same Markov equivalence class). The usual justification for FF implicitly assumes a prior that is a probability density on the parameter space. IC can be justified by Solomonoff’s universal prior, assigning non-zero probability to those points in parameter space that have a finite description. In this way, it favours simple CPDs, and therefore respects Occam’s razor. Since Kolmogorov complexity is uncomputable, IC is not directly applicable in practice. We argue that it is nevertheless helpful, since it has already served as inspiration and justification for novel causal inference algorithms.

ei

PDF Web DOI [BibTex]

What can neurons do for their brain? Communicate selectivity with bursts

Balduzzi, D., Tononi, G.

Theory in Biosciences, 132(1):27-39, Springer, March 2013 (article)

Abstract
Neurons deep in cortex interact with the environment extremely indirectly; the spikes they receive and produce are pre- and post-processed by millions of other neurons. This paper proposes two information-theoretic constraints guiding the production of spikes, that help ensure bursting activity deep in cortex relates meaningfully to events in the environment. First, neurons should emphasize selective responses with bursts. Second, neurons should propagate selective inputs by burst-firing in response to them. We show the constraints are necessary for bursts to dominate information-transfer within cortex, thereby providing a substrate allowing neurons to distribute credit amongst themselves. Finally, since synaptic plasticity degrades the ability of neurons to burst selectively, we argue that homeostatic regulation of synaptic weights is necessary, and that it is best performed offline during sleep.

ei

PDF Web DOI [BibTex]

Apprenticeship Learning with Few Examples

Boularias, A., Chaib-draa, B.

Neurocomputing, 104, pages: 83-96, March 2013 (article)

Abstract
We consider the problem of imitation learning when the examples, provided by an expert human, are scarce. Apprenticeship learning via inverse reinforcement learning provides an efficient tool for generalizing the examples, based on the assumption that the expert's policy maximizes a value function, which is a linear combination of state and action features. Most apprenticeship learning algorithms use only simple empirical averages of the features in the demonstrations as a statistic of the expert's policy. However, this method is efficient only when the number of examples is sufficiently large to cover most of the states, or the dynamics of the system is nearly deterministic. In this paper, we show that the quality of the learned policies is sensitive to the error in estimating the averages of the features when the dynamics of the system is stochastic. To reduce this error, we introduce two new approaches for bootstrapping the demonstrations by assuming that the expert is near-optimal and the dynamics of the system is known. In the first approach, the expert's examples are used to learn a reward function and to generate further examples from the corresponding optimal policy. The second approach uses a transfer technique, known as graph homomorphism, in order to generalize the expert's actions to unvisited regions of the state space. Empirical results on simulated robot navigation problems show that our approach is able to learn sufficiently good policies from a small number of examples.
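
The statistic at the heart of the abstract, the empirical discounted feature expectation of the expert's demonstrations, can be written down in a few lines; the trajectory format and the feature map `phi` below are illustrative assumptions, and the bootstrapping methods the paper proposes are not shown.

```python
# Hedged sketch: empirical discounted feature expectations from demonstrations,
# the quantity whose estimation error grows when examples are scarce and the
# dynamics are stochastic. `phi` and the trajectory format are assumptions.
import numpy as np

def feature_expectations(trajectories, phi, gamma=0.95):
    """trajectories: list of state sequences; phi: maps a state to a feature vector."""
    mu = np.zeros_like(phi(trajectories[0][0]), dtype=float)
    for traj in trajectories:
        for t, state in enumerate(traj):
            mu += (gamma ** t) * phi(state)
    return mu / len(trajectories)
```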

ei

Web DOI [BibTex]

Quasi-Newton Methods: A New Direction

Hennig, P., Kiefel, M.

Journal of Machine Learning Research, 14(1):843-865, March 2013 (article)

Abstract
Four decades after their invention, quasi-Newton methods are still state of the art in unconstrained numerical optimization. Although not usually interpreted thus, these are learning algorithms that fit a local quadratic approximation to the objective function. We show that many, including the most popular, quasi-Newton methods can be interpreted as approximations of Bayesian linear regression under varying prior assumptions. This new notion elucidates some shortcomings of classical algorithms, and lights the way to a novel nonparametric quasi-Newton method, which is able to make more efficient use of available information at computational cost similar to its predecessors.
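
For reference, this is the classical update the paper reinterprets: the BFGS inverse-Hessian update built from a single secant pair. The probabilistic reading (as an approximation of Bayesian linear regression) and the nonparametric extension are developed in the paper itself; the snippet only restates the textbook formula.

```python
# Textbook BFGS update of the inverse-Hessian estimate H from a secant pair
# s = x_new - x_old, y = grad_new - grad_old. The paper interprets updates of
# this family as approximate Bayesian linear regression on the Hessian.
import numpy as np

def bfgs_inverse_update(H, s, y):
    rho = 1.0 / (y @ s)                      # requires the curvature condition y @ s > 0
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)
```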

ei ps pn

website+code pdf link (url) [BibTex]

Regional effects of magnetization dispersion on quantitative perfusion imaging for pulsed and continuous arterial spin labeling

Cavusoglu, M., Pohmann, R., Burger, H. C., Uludag, K.

Magnetic Resonance in Medicine, 69(2):524-530, February 2013 (article)

Abstract
Most experiments assume a global transit delay time with blood flowing from the tagging region to the imaging slice in plug flow without any dispersion of the magnetization. However, because of cardiac pulsation, nonuniform cross-sectional flow profile, and complex vessel networks, the transit delay time is not a single value but follows a distribution. In this study, we explored the regional effects of magnetization dispersion on quantitative perfusion imaging for varying transit times within a very large interval from the direct comparison of pulsed, pseudo-continuous, and dual-coil continuous arterial spin labeling encoding schemes. Longer distances between tagging and imaging region typically used for continuous tagging schemes enhance the regional bias on the quantitative cerebral blood flow measurement causing an underestimation up to 37% when plug flow is assumed as in the standard model.

ei

Web DOI [BibTex]

The multivariate Watson distribution: Maximum-likelihood estimation and other aspects

Sra, S., Karp, D.

Journal of Multivariate Analysis, 114, pages: 256-269, February 2013 (article)

Abstract
This paper studies fundamental aspects of modelling data using multivariate Watson distributions. Although these distributions are natural for modelling axially symmetric data (i.e., unit vectors for which ±x are equivalent), using them in high dimensions can be difficult, largely because for Watson distributions even basic tasks such as maximum-likelihood estimation are numerically challenging. To tackle the numerical difficulties some approximations have been derived. But these are either grossly inaccurate in high dimensions [K.V. Mardia, P. Jupp, Directional Statistics, second ed., John Wiley & Sons, 2000] or, when reasonably accurate [A. Bijral, M. Breitenbach, G.Z. Grudic, Mixture of Watson distributions: a generative model for hyperspherical embeddings, in: Artificial Intelligence and Statistics, AISTATS 2007, 2007, pp. 35–42], they lack theoretical justification. We derive new approximations to the maximum-likelihood estimates; our approximations are theoretically well-defined, numerically accurate, and easy to compute. We build on our parameter estimation and discuss mixture-modelling with Watson distributions; here we uncover a hitherto unknown connection to the “diametrical clustering” algorithm of Dhillon et al. [I.S. Dhillon, E.M. Marcotte, U. Roshan, Diametrical clustering for identifying anticorrelated gene clusters, Bioinformatics 19 (13) (2003) 1612–1619].
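
To make the estimation problem above concrete, here is a brute-force numerical version of the maximum-likelihood equations for the Watson concentration parameter, solving g(1/2, d/2, κ) = r with the Kummer function; the paper's contribution is accurate closed-form approximations to this root, which are not reproduced here. The solver and its bracket are assumptions of this sketch.

```python
# Hedged sketch: numerical Watson MLE. mu is the dominant eigenvector of the
# scatter matrix; kappa solves g(1/2, d/2, kappa) = r, with g the log-derivative
# of the Kummer function M = 1F1. The bracket assumes moderate concentration;
# the paper derives closed-form approximations instead of root finding.
import numpy as np
from scipy.special import hyp1f1
from scipy.optimize import brentq

def watson_mle(X):
    """X: (n, d) array of unit vectors (axial data). Returns (mu_hat, kappa_hat > 0)."""
    n, d = X.shape
    S = X.T @ X / n                               # scatter matrix
    mu = np.linalg.eigh(S)[1][:, -1]              # dominant axis
    r = mu @ S @ mu
    a, c = 0.5, d / 2.0
    g = lambda k: (a / c) * hyp1f1(a + 1, c + 1, k) / hyp1f1(a, c, k)
    kappa = brentq(lambda k: g(k) - r, 1e-6, 100.0)
    return mu, kappa
```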

ei

Web DOI [BibTex]

How the result of graph clustering methods depends on the construction of the graph

Maier, M., von Luxburg, U., Hein, M.

ESAIM: Probability & Statistics, 17, pages: 370-418, January 2013 (article)

Abstract
We study the scenario of graph-based clustering algorithms such as spectral clustering. Given a set of data points, one first has to construct a graph on the data points and then apply a graph clustering algorithm to find a suitable partition of the graph. Our main question is if and how the construction of the graph (choice of the graph, choice of parameters, choice of weights) influences the outcome of the final clustering result. To this end we study the convergence of cluster quality measures such as the normalized cut or the Cheeger cut on various kinds of random geometric graphs as the sample size tends to infinity. It turns out that the limit values of the same objective function are systematically different on different types of graphs. This implies that clustering results systematically depend on the graph and can be very different for different types of graph. We provide examples to illustrate the implications on spectral clustering.
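
The effect described above is easy to see empirically: cluster the same point set with spectral clustering on two different graph constructions and compare the partitions. The dataset, neighbourhood size and radius below are illustrative choices, not those studied in the paper.

```python
# Hedged sketch: spectral clustering of the same data on a symmetrized kNN graph
# and on an epsilon-ball graph; the resulting partitions can differ, which is
# the dependence on graph construction analysed in the paper. Parameter values
# are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neighbors import kneighbors_graph, radius_neighbors_graph
from sklearn.cluster import SpectralClustering

X, _ = make_moons(n_samples=500, noise=0.08, random_state=0)

knn = kneighbors_graph(X, n_neighbors=10, mode="connectivity")
knn = 0.5 * (knn + knn.T)                                  # symmetrize
eps = radius_neighbors_graph(X, radius=0.3, mode="connectivity")

labels_knn = SpectralClustering(n_clusters=2, affinity="precomputed",
                                random_state=0).fit_predict(knn)
labels_eps = SpectralClustering(n_clusters=2, affinity="precomputed",
                                random_state=0).fit_predict(eps)

agree = (labels_knn == labels_eps).mean()
print("agreement between the two constructions:", max(agree, 1.0 - agree))
```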

ei

PDF DOI [BibTex]

Falsification and future performance

Balduzzi, D.

In Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence, 7070, pages: 65-78, Lecture Notes in Computer Science, Springer, Berlin, Germany, Solomonoff 85th Memorial Conference, January 2013 (inproceedings)

Abstract
We information-theoretically reformulate two measures of capacity from statistical learning theory: empirical VC-entropy and empirical Rademacher complexity. We show these capacity measures count the number of hypotheses about a dataset that a learning algorithm falsifies when it finds the classifier in its repertoire minimizing empirical risk. It then follows that the future performance of predictors on unseen data is controlled in part by how many hypotheses the learner falsifies. As a corollary we show that empirical VC-entropy quantifies the message length of the true hypothesis in the optimal code of a particular probability distribution, the so-called actual repertoire.
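
As a pointer to one of the quantities being reformulated, empirical Rademacher complexity can be estimated by Monte Carlo for a finite hypothesis class given its ±1 predictions on the sample; the finite-class setup below is an illustrative assumption, and the information-theoretic reformulation itself is developed in the paper.

```python
# Hedged sketch: Monte-Carlo estimate of empirical Rademacher complexity for a
# finite hypothesis class supplied as a (n_hypotheses, n_samples) matrix of
# +/-1 predictions. The finite-class setup is an illustrative assumption.
import numpy as np

def empirical_rademacher(predictions, n_draws=1000, seed=0):
    rng = np.random.default_rng(seed)
    n = predictions.shape[1]
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)     # random sign vector
        total += np.max(predictions @ sigma) / n    # sup over the class
    return total / n_draws
```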

ei

PDF Web DOI [BibTex]

Explicit eigenvalues of certain scaled trigonometric matrices

Sra, S.

Linear Algebra and its Applications, 438(1):173-181, January 2013 (article)

ei

DOI [BibTex]

How Sensitive Is the Human Visual System to the Local Statistics of Natural Images?

Gerhard, H., Wichmann, F., Bethge, M.

PLoS Computational Biology, 9(1):e1002873, January 2013 (article)

Abstract
Several aspects of primate visual physiology have been identified as adaptations to local regularities of natural images. However, much less work has measured visual sensitivity to local natural image regularities. Most previous work focuses on global perception of large images and shows that observers are more sensitive to visual information when image properties resemble those of natural images. In this work we measure human sensitivity to local natural image regularities using stimuli generated by patch-based probabilistic natural image models that have been related to primate visual physiology. We find that human observers can learn to discriminate the statistical regularities of natural image patches from those represented by current natural image models after very few exposures and that discriminability depends on the degree of regularities captured by the model. The quick learning we observed suggests that the human visual system is biased for processing natural images, even at very fine spatial scales, and that it has a surprisingly large knowledge of the regularities in natural images, at least in comparison to the state-of-the-art statistical models of natural images.

ei

DOI [BibTex]

A neural population model for visual pattern detection

Goris, R., Putzeys, T., Wagemans, J., Wichmann, F.

Psychological Review, 120(3):472–496, 2013 (article)

ei

DOI [BibTex]

Feedback Error Learning for Rhythmic Motor Primitives

Gopalan, N., Deisenroth, M., Peters, J.

In Proceedings of 2013 IEEE International Conference on Robotics and Automation (ICRA 2013), pages: 1317-1322, 2013 (inproceedings)

ei

PDF DOI [BibTex]

Gaussian Process Vine Copulas for Multivariate Dependence

Lopez-Paz, D., Hernandez-Lobato, J., Ghahramani, Z.

In Proceedings of the 30th International Conference on Machine Learning, W&CP 28(2), pages: 10-18, (Editors: S Dasgupta and D McAllester), JMLR, ICML, 2013, Poster: http://people.tuebingen.mpg.de/dlopez/papers/icml2013_gpvine_poster.pdf (inproceedings)

ei

PDF Web [BibTex]

A Review of Performance Variations in SMR-Based Brain–Computer Interfaces (BCIs)

Grosse-Wentrup, M., Schölkopf, B.

In Brain-Computer Interface Research, pages: 39-51, 4, SpringerBriefs in Electrical and Computer Engineering, (Editors: Guger, C., Allison, B. Z. and Edlinger, G.), Springer, 2013 (inbook)

ei

PDF DOI [BibTex]

The Randomized Dependence Coefficient

Lopez-Paz, D., Hennig, P., Schölkopf, B.

In Advances in Neural Information Processing Systems 26, pages: 1-9, (Editors: C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger), 27th Annual Conference on Neural Information Processing Systems (NIPS), 2013 (inproceedings)

ei pn

PDF [BibTex]

On a link between kernel mean maps and Fraunhofer diffraction, with an application to super-resolution beyond the diffraction limit

Harmeling, S., Hirsch, M., Schölkopf, B.

In IEEE Conference on Computer Vision and Pattern Recognition, pages: 1083-1090, IEEE, CVPR, 2013 (inproceedings)

ei

DOI [BibTex]

Output Kernel Learning Methods

Dinuzzo, F., Ong, C., Fukumizu, K.

In International Workshop on Advances in Regularization, Optimization, Kernel Methods and Support Vector Machines: theory and applications, ROKS, 2013 (inproceedings)

ei

[BibTex]

Alignment-based Transfer Learning for Robot Models

Bocsi, B., Csato, L., Peters, J.

In Proceedings of the 2013 International Joint Conference on Neural Networks (IJCNN 2013), pages: 1-7, 2013 (inproceedings)

ei

PDF DOI [BibTex]

Accurate indel prediction using paired-end short reads

Grimm, D., Hagmann, J., Koenig, D., Weigel, D., Borgwardt, KM.

BMC Genomics, 14(132), 2013 (article)

ei

Web DOI [BibTex]

Nonlinear Causal Discovery for High Dimensional Data: A Kernelized Trace Method

Chen, Z., Zhang, K., Chan, L.

In 13th International Conference on Data Mining, pages: 1003-1008, (Editors: H. Xiong, G. Karypis, B. M. Thuraisingham, D. J. Cook and X. Wu), IEEE Computer Society, ICDM, 2013 (inproceedings)

ei

DOI [BibTex]

A probabilistic approach to robot trajectory generation

Paraschos, A., Neumann, G., Peters, J.

In Proceedings of the 13th IEEE International Conference on Humanoid Robots (HUMANOIDS), pages: 477-483, IEEE, 13th IEEE-RAS International Conference on Humanoid Robots, 2013 (inproceedings)

ei

DOI [BibTex]

Geometric optimisation on positive definite matrices for elliptically contoured distributions

Sra, S., Hosseini, R.

In Advances in Neural Information Processing Systems 26, pages: 2562-2570, (Editors: C.J.C. Burges and L. Bottou and M. Welling and Z. Ghahramani and K.Q. Weinberger), 27th Annual Conference on Neural Information Processing Systems (NIPS), 2013 (inproceedings)

ei

PDF [BibTex]

Coupling between spiking activity and beta band spatio-temporal patterns in the macaque PFC

Safavi, S., Panagiotaropoulos, T., Kapoor, V., Logothetis, N., Besserve, M.

43rd Annual Meeting of the Society for Neuroscience (Neuroscience), 2013 (poster)

ei

[BibTex]

Counterfactual Reasoning and Learning Systems: The Example of Computational Advertising

Bottou, L., Peters, J., Quiñonero-Candela, J., Charles, D., Chickering, D., Portugaly, E., Ray, D., Simard, P., Snelson, E.

Journal of Machine Learning Research, 14, pages: 3207-3260, 2013 (article)

ei

Web link (url) [BibTex]

Fast Probabilistic Optimization from Noisy Gradients

Hennig, P.

In Proceedings of The 30th International Conference on Machine Learning, JMLR W&CP 28(1), pages: 62–70, (Editors: S Dasgupta and D McAllester), ICML, 2013 (inproceedings)

ei pn

PDF [BibTex]

Structure and Dynamics of Information Pathways in On-line Media

Gomez Rodriguez, M., Leskovec, J., Schölkopf, B.

In 6th ACM International Conference on Web Search and Data Mining (WSDM), pages: 23-32, (Editors: S Leonardi, A Panconesi, P Ferragina, and A Gionis), ACM, WSDM, 2013 (inproceedings)

ei

Web DOI [BibTex]
