2014


Simultaneous Whole-Body PET/MR Imaging in Comparison to PET/CT in Pediatric Oncology: Initial Results

Schäfer, J. F., Gatidis, S., Schmidt, H., Gückel, B., Bezrukov, I., Pfannenberg, C. A., Reimold, M., Ebinger, M., Fuchs, J., Claussen, C. D., Schwenzer, N. F.

Radiology, 273(1):220-231, 2014 (article)

ei

DOI [BibTex]

Development of advanced methods for improving astronomical images

Schmeißer, N.

Eberhard Karls Universität Tübingen, Germany, 2014 (diplomathesis)

ei

[BibTex]

The Feasibility of Causal Discovery in Complex Systems: An Examination of Climate Change Attribution and Detection

Lacosse, E.

Graduate Training Centre of Neuroscience, University of Tübingen, Germany, 2014 (mastersthesis)

ei

[BibTex]

Cost-Sensitive Active Learning With Lookahead: Optimizing Field Surveys for Remote Sensing Data Classification

Persello, C., Boularias, A., Dalponte, M., Gobakken, T., Naesset, E., Schölkopf, B.

IEEE Transactions on Geoscience and Remote Sensing, 52(10):6652-6664, 2014 (article)

ei

DOI [BibTex]

Principles of PET/MR Imaging

Disselhorst, J. A., Bezrukov, I., Kolb, A., Parl, C., Pichler, B. J.

Journal of Nuclear Medicine, 55(6, Supplement 2):2S-10S, 2014 (article)

ei

DOI [BibTex]

IM3SHAPE: Maximum likelihood galaxy shear measurement code for cosmic gravitational lensing

Zuntz, J., Kacprzak, T., Voigt, L., Hirsch, M., Rowe, B., Bridle, S.

Astrophysics Source Code Library, 1, pages: 09013, 2014 (article)

ei

link (url) [BibTex]

Causal Discovery in the Presence of Time-Dependent Relations or Small Sample Size

Huang, B.

Graduate Training Centre of Neuroscience, University of Tübingen, Germany, 2014 (mastersthesis)

ei

[BibTex]

Efficient nearest neighbors via robust sparse hashing

Cherian, A., Sra, S., Morellas, V., Papanikolopoulos, N.

IEEE Transactions on Image Processing, 23(8):3646-3655, 2014 (article)

ei

DOI [BibTex]

Efficient Structured Matrix Rank Minimization

Yu, A. W., Ma, W., Yu, Y., Carbonell, J., Sra, S.

Advances in Neural Information Processing Systems 27, pages: 1350-1358, (Editors: Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence and K.Q. Weinberger), Curran Associates, Inc., 28th Annual Conference on Neural Information Processing Systems (NIPS), 2014 (conference)

ei

link (url) [BibTex]

Towards building a Crowd-Sourced Sky Map

Lang, D., Hogg, D., Schölkopf, B.

In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, JMLR W&CP 33, pages: 549–557, (Editors: S. Kaski and J. Corander), JMLR.org, AISTATS, 2014 (inproceedings)

ei

link (url) [BibTex]

Incremental Local Gaussian Regression

Meier, F., Hennig, P., Schaal, S.

In Advances in Neural Information Processing Systems 27, pages: 972-980, (Editors: Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence and K.Q. Weinberger), 28th Annual Conference on Neural Information Processing Systems (NIPS), 2014, clmc (inproceedings)

am ei pn

PDF link (url) [BibTex]

Sérsic galaxy models in weak lensing shape measurement: model bias, noise bias and their interaction

Kacprzak, T., Bridle, S., Rowe, B., Voigt, L., Zuntz, J., Hirsch, M., MacCrann, N.

Monthly Notices of the Royal Astronomical Society, 441(3):2528-2538, Oxford University Press, 2014 (article)

ei

DOI [BibTex]

Learning to Deblur

Schuler, C. J., Hirsch, M., Harmeling, S., Schölkopf, B.

In NIPS 2014 Deep Learning and Representation Learning Workshop, 28th Annual Conference on Neural Information Processing Systems (NIPS), 2014 (inproceedings)

ei

link (url) [BibTex]

Analysis of Distance Functions in Graphs

Alamgir, M.

University of Hamburg, Germany, 2014 (phdthesis)

ei

[BibTex]

Efficient Bayesian Local Model Learning for Control

Meier, F., Hennig, P., Schaal, S.

In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, pages: 2244 - 2249, IROS, 2014, clmc (inproceedings)

Abstract
Model-based control is essential for compliant control and force control in many modern complex robots, like humanoid or disaster robots. Due to many unknown and hard to model nonlinearities, analytical models of such robots are often only very rough approximations. However, modern optimization controllers frequently depend on reasonably accurate models, and degrade greatly in robustness and performance if model errors are too large. For a long time, machine learning has been expected to provide automatic empirical model synthesis, yet so far, research has only generated feasibility studies but no learning algorithms that run reliably on complex robots. In this paper, we combine two promising worlds of regression techniques to generate a more powerful regression learning system. On the one hand, locally weighted regression techniques are computationally efficient, but hard to tune due to a variety of data dependent meta-parameters. On the other hand, Bayesian regression has rather automatic and robust methods to set learning parameters, but becomes quickly computationally infeasible for big and high-dimensional data sets. By reducing the complexity of Bayesian regression in the spirit of local model learning through variational approximations, we arrive at a novel algorithm that is computationally efficient and easy to initialize for robust learning. Evaluations on several datasets demonstrate very good learning performance and the potential for a general regression learning tool for robotics.
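
As a rough illustration of the combination described above (a minimal sketch, not the authors' algorithm): each local model can be a Bayesian linear regression whose data are down-weighted by a Gaussian kernel around the query point. The function name local_bayes_predict and the hyperparameters h (kernel width), alpha (prior precision) and noise (observation noise variance) are illustrative assumptions.

import numpy as np

def local_bayes_predict(X, y, x_q, h=0.5, alpha=1e-2, noise=1e-1):
    # Gaussian kernel weights centred on the query point x_q
    w = np.exp(-np.sum((X - x_q) ** 2, axis=1) / (2.0 * h ** 2))
    Phi = np.hstack([X, np.ones((X.shape[0], 1))])  # linear features plus bias
    # Weighted Bayesian linear regression: Gaussian posterior with precision A and mean m
    A = alpha * np.eye(Phi.shape[1]) + (Phi.T * w) @ Phi / noise
    m = np.linalg.solve(A, (Phi.T * w) @ y / noise)
    return np.append(x_q, 1.0) @ m  # posterior-mean prediction at the query

# Usage: fit a noisy sine and predict near x = 1.0 (result should be roughly sin(1.0))
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
print(local_bayes_predict(X, y, np.array([1.0])))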

am ei pn

PDF link (url) DOI [BibTex]

Towards an optimal stochastic alternating direction method of multipliers

Azadi, S., Sra, S.

Proceedings of the 31st International Conference on Machine Learning, 32, pages: 620-628, (Editors: Xing, E. P. and Jebara, T.), JMLR, ICML, 2014 (conference)

ei

link (url) [BibTex]

Diminished White Matter Integrity in Patients with Systemic Lupus Erythematosus

Schmidt-Wilcke, T., Cagnoli, P., Wang, P., Schultz, T., Lotz, A., McCune, W. J., Sundgren, P. C.

NeuroImage: Clinical, 5, pages: 291-297, 2014 (article)

ei

DOI [BibTex]

Open Problem: Finding Good Cascade Sampling Processes for the Network Inference Problem

Gomez Rodriguez, M., Song, L., Schölkopf, B.

Proceedings of the 27th Conference on Learning Theory, 35, pages: 1276-1279, (Editors: Balcan, M.-F. and Szepesvári, C.), JMLR.org, COLT, 2014 (conference)

ei

PDF [BibTex]

Information-Theoretic Bounded Rationality and ϵ-Optimality

Braun, DA, Ortega, PA

Entropy, 16(8):4662-4676, August 2014 (article)

Abstract
Bounded rationality concerns the study of decision makers with limited information processing resources. Previously, the free energy difference functional has been suggested to model bounded rational decision making, as it provides a natural trade-off between an energy or utility function that is to be optimized and information processing costs that are measured by entropic search costs. The main question of this article is how the information-theoretic free energy model relates to simple ϵ-optimality models of bounded rational decision making, where the decision maker is satisfied with any action in an ϵ-neighborhood of the optimal utility. We find that the stochastic policies that optimize the free energy trade-off comply with the notion of ϵ-optimality. Moreover, this optimality criterion even holds when the environment is adversarial. We conclude that the study of bounded rationality based on ϵ-optimality criteria that abstract away from the particulars of the information processing constraints is compatible with the information-theoretic free energy model of bounded rationality.
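
For readers unfamiliar with this framework, the free-energy trade-off mentioned above has a standard closed form (our notation, not quoted from the paper): with prior policy p0, utility U and inverse temperature β controlling the information-processing resources,

\[
F[p] = \sum_x p(x)\,U(x) - \frac{1}{\beta}\sum_x p(x)\log\frac{p(x)}{p_0(x)},
\qquad
p^*(x) = \frac{p_0(x)\,e^{\beta U(x)}}{\sum_{x'} p_0(x')\,e^{\beta U(x')}} .
\]

For large β the optimal stochastic policy p* places almost all of its mass on actions whose utility lies within a small band below the optimum, which is the sense in which it complies with ϵ-optimality.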

ei

DOI [BibTex]

Occam’s Razor in sensorimotor learning

Genewein, T, Braun, D

Proceedings of the Royal Society of London B, 281(1783):1-7, May 2014 (article)

Abstract
A large number of recent studies suggest that the sensorimotor system uses probabilistic models to predict its environment and makes inferences about unobserved variables in line with Bayesian statistics. One of the important features of Bayesian statistics is Occam's Razor—an inbuilt preference for simpler models when comparing competing models that explain some observed data equally well. Here, we test directly for Occam's Razor in sensorimotor control. We designed a sensorimotor task in which participants had to draw lines through clouds of noisy samples of an unobserved curve generated by one of two possible probabilistic models—a simple model with a large length scale, leading to smooth curves, and a complex model with a short length scale, leading to more wiggly curves. In training trials, participants were informed about the model that generated the stimulus so that they could learn the statistics of each model. In probe trials, participants were then exposed to ambiguous stimuli. In probe trials where the ambiguous stimulus could be fitted equally well by both models, we found that participants showed a clear preference for the simpler model. Moreover, we found that participants’ choice behaviour was quantitatively consistent with Bayesian Occam's Razor. We also show that participants’ drawn trajectories were similar to samples from the Bayesian predictive distribution over trajectories and significantly different from two non-probabilistic heuristics. In two control experiments, we show that the preference of the simpler model cannot be simply explained by a difference in physical effort or by a preference for curve smoothness. Our results suggest that Occam's Razor is a general behavioural principle already present during sensorimotor processing.
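
The Bayesian Occam's Razor invoked here is the standard model-comparison identity (a generic statement, not a formula taken from the paper): for candidate models M_1 (long length scale, smooth) and M_2 (short length scale, wiggly) and observed samples D,

\[
P(M_i \mid D) \propto P(M_i)\,P(D \mid M_i),
\qquad
P(D \mid M_i) = \int P(D \mid \theta, M_i)\,P(\theta \mid M_i)\,d\theta .
\]

Because the more complex model spreads its prior predictive mass over many more possible datasets, its marginal likelihood is lower on ambiguous stimuli that both models explain equally well, so the posterior favours the simpler model.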

ei

DOI [BibTex]

Generalized Thompson sampling for sequential decision-making and causal inference

Ortega, PA, Braun, DA

Complex Adaptive Systems Modeling, 2(2):1-23, March 2014 (article)

Abstract
Purpose: Sampling an action according to the probability that the action is believed to be the optimal one is sometimes called Thompson sampling. Methods: Although mostly applied to bandit problems, Thompson sampling can also be used to solve sequential adaptive control problems, when the optimal policy is known for each possible environment. The predictive distribution over actions can then be constructed by a Bayesian superposition of the policies weighted by their posterior probability of being optimal. Results: Here we discuss two important features of this approach. First, we show to what extent such generalized Thompson sampling can be regarded as an optimal strategy under limited information processing capabilities that constrain the sampling complexity of the decision-making process. Second, we show how such Thompson sampling can be extended to solve causal inference problems when interacting with an environment in a sequential fashion. Conclusion: In summary, our results suggest that Thompson sampling might not merely be a useful heuristic, but a principled method to address problems of adaptive sequential decision-making and causal inference.
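
The Bayesian superposition described in the Methods can be written compactly (our notation, assuming a discrete set of candidate environments θ with known optimal policies π*_θ): given the history h_t of past actions and observations,

\[
P(a_t \mid h_t) = \sum_{\theta} P(\theta \mid h_t)\;\pi^*_{\theta}(a_t \mid h_t).
\]

Sampling an action from this mixture is equivalent to drawing one environment hypothesis from its posterior and acting optimally for that hypothesis, which is the Thompson-sampling recipe carried over from bandits to sequential control.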

ei

DOI [BibTex]

Assessing randomness and complexity in human motion trajectories through analysis of symbolic sequences

Peng, Z, Genewein, T, Braun, DA

Frontiers in Human Neuroscience, 8(168):1-13, March 2014 (article)

Abstract
Complexity is a hallmark of intelligent behavior consisting both of regular patterns and random variation. To quantitatively assess the complexity and randomness of human motion, we designed a motor task in which we translated subjects' motion trajectories into strings of symbol sequences. In the first part of the experiment participants were asked to perform self-paced movements to create repetitive patterns, copy pre-specified letter sequences, and generate random movements. To investigate whether the degree of randomness can be manipulated, in the second part of the experiment participants were asked to perform unpredictable movements in the context of a pursuit game, where they received feedback from an online Bayesian predictor guessing their next move. We analyzed symbol sequences representing subjects' motion trajectories with five common complexity measures: predictability, compressibility, approximate entropy, Lempel-Ziv complexity, as well as effective measure complexity. We found that subjects’ self-created patterns were the most complex, followed by drawing movements of letters and self-paced random motion. We also found that participants could change the randomness of their behavior depending on context and feedback. Our results suggest that humans can adjust both complexity and regularity in different movement types and contexts and that this can be assessed with information-theoretic measures of the symbolic sequences generated from movement trajectories.
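
To make one of the listed measures concrete, here is a minimal Lempel-Ziv style phrase-counting routine for a symbol string; it is an LZ78-type parse given for illustration only and is not claimed to be the exact variant used in the paper.

import random

def lz78_complexity(s):
    # Count phrases in an LZ78-style parse: each phrase is the shortest
    # prefix of the remaining string that has not been seen before.
    phrases, phrase, count = set(), "", 0
    for symbol in s:
        phrase += symbol
        if phrase not in phrases:
            phrases.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)  # count an unfinished final phrase

random.seed(0)
regular = "ab" * 100                                       # highly repetitive symbol sequence
noisy = "".join(random.choice("ab") for _ in range(200))   # random symbol sequence
print(lz78_complexity(regular), lz78_complexity(noisy))    # the repetitive sequence needs fewer phrases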

ei

DOI [BibTex]

Curiosity-driven learning with Context Tree Weighting

Peng, Z, Braun, DA

pages: 366-367, IEEE, Piscataway, NJ, USA, 4th Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics (IEEE ICDL-EPIROB), October 2014 (conference)

Abstract
In the first simulation, the intrinsic motivation of the agent was given by measuring learning progress through reduction in informational surprise (Figure 1 A-C). This way the agent should first learn the action that is easiest to learn (a1), and then switch to other actions that still allow for learning (a2) and ignore actions that cannot be learned at all (a3). This is exactly what we found in our simple environment. Compared to the original developmental learning algorithm based on learning progress proposed by Oudeyer [2], our Context Tree Weighting approach does not require local experts to do prediction, rather it learns the conditional probability distribution over observations given action in one structure. In the second simulation, the intrinsic motivation of the agent was given by measuring compression progress through improvement in compressibility (Figure 1 D-F). The agent behaves similarly: the agent first concentrates on the action with the most predictable consequence and then switches over to the regular action where the consequence is more difficult to predict, but still learnable. Unlike the previous simulation, random actions are also interesting to some extent because the compressed symbol strings use 8-bit representations, while only 2 bits are required for our observation space. Our preliminary results suggest that Context Tree Weighting might provide a useful representation to study problems of development.

ei

DOI [BibTex]

Monte Carlo methods for exact & efficient solution of the generalized optimality equations

Ortega, PA, Braun, DA, Tishby, N

pages: 4322-4327, IEEE, Piscataway, NJ, USA, IEEE International Conference on Robotics and Automation (ICRA), June 2014 (conference)

Abstract
Previous work has shown that classical sequential decision making rules, including expectimax and minimax, are limit cases of a more general class of bounded rational planning problems that trade off the value and the complexity of the solution, as measured by its information divergence from a given reference. This allows modeling a range of novel planning problems having varying degrees of control due to resource constraints, risk-sensitivity, trust and model uncertainty. However, so far it has been unclear in what sense information constraints relate to the complexity of planning. In this paper, we introduce Monte Carlo methods to solve the generalized optimality equations in an efficient & exact way when the inverse temperatures in a generalized decision tree are of the same sign. These methods highlight a fundamental relation between inverse temperatures and the number of Monte Carlo proposals. In particular, it is seen that the number of proposals is essentially independent of the size of the decision tree.
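
Schematically, the generalized optimality equations referred to here replace the max and expectation operators of classical dynamic programming by a single log-sum-exp recursion governed by an inverse temperature (this is our paraphrase of the framework, not an equation quoted from the paper): at a node with history h, successors x and continuation values V(hx),

\[
V(h) = \frac{1}{\beta}\,\log \sum_{x} P(x \mid h)\, e^{\beta\,V(hx)},
\]

which recovers maximization as β → +∞, minimax-style pessimism as β → −∞, and the plain expectation as β → 0; the paper's Monte Carlo solvers apply when all inverse temperatures in the tree share the same sign.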

ei

link (url) DOI [BibTex]


2013


Camera-specific Image Denoising

Schober, M.

Eberhard Karls Universität Tübingen, Germany, October 2013 (diplomathesis)

ei pn

PDF [BibTex]

Studying large-scale brain networks: electrical stimulation and neural-event-triggered fMRI

Logothetis, N., Eschenko, O., Murayama, Y., Augath, M., Steudel, T., Evrard, H., Besserve, M., Oeltermann, A.

Twenty-Second Annual Computational Neuroscience Meeting (CNS*2013), BMC Neuroscience, 14(Supplement 1):A1, July 2013 (talk)

ei

Web [BibTex]

Correlation of Simultaneously Acquired Diffusion-Weighted Imaging and 2-Deoxy-[18F]fluoro-2-D-glucose Positron Emission Tomography of Pulmonary Lesions in a Dedicated Whole-Body Magnetic Resonance/Positron Emission Tomography System

Schmidt, H., Brendle, C., Schraml, C., Martirosian, P., Bezrukov, I., Hetzel, J., Müller, M., Sauter, A., Claussen, C., Pfannenberg, C., Schwenzer, N.

Investigative Radiology, 48(5):247-255, May 2013 (article)

ei

Web [BibTex]

Replacing Causal Faithfulness with Algorithmic Independence of Conditionals

Lemeire, J., Janzing, D.

Minds and Machines, 23(2):227-249, May 2013 (article)

Abstract
Independence of Conditionals (IC) has recently been proposed as a basic rule for causal structure learning. If a Bayesian network represents the causal structure, its Conditional Probability Distributions (CPDs) should be algorithmically independent. In this paper we compare IC with causal faithfulness (FF), stating that only those conditional independences that are implied by the causal Markov condition hold true. The latter is a basic postulate in common approaches to causal structure learning. The common spirit of FF and IC is to reject causal graphs for which the joint distribution looks ‘non-generic’. The difference lies in the notion of genericity: FF sometimes rejects models just because one of the CPDs is simple, for instance if the CPD describes a deterministic relation. IC does not behave in this undesirable way. It only rejects a model when there is a non-generic relation between different CPDs although each CPD looks generic when considered separately. Moreover, it detects relations between CPDs that cannot be captured by conditional independences. IC therefore helps in distinguishing causal graphs that induce the same conditional independences (i.e., they belong to the same Markov equivalence class). The usual justification for FF implicitly assumes a prior that is a probability density on the parameter space. IC can be justified by Solomonoff’s universal prior, assigning non-zero probability to those points in parameter space that have a finite description. In this way, it favours simple CPDs, and therefore respects Occam’s razor. Since Kolmogorov complexity is uncomputable, IC is not directly applicable in practice. We argue that it is nevertheless helpful, since it has already served as inspiration and justification for novel causal inference algorithms.
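
One compact way to state the IC postulate compared here (our summary in Kolmogorov-complexity notation; the paper gives the precise formulation): for a causal DAG with conditionals P(X_j | PA_j), the shortest description of the joint distribution decomposes into separate descriptions of the conditionals,

\[
K\bigl(P(X_1,\dots,X_n)\bigr) \;\overset{+}{=}\; \sum_{j=1}^{n} K\bigl(P(X_j \mid PA_j)\bigr),
\]

i.e. no conditional contains algorithmic information about any other. Causal graphs that violate this equality (up to a constant) are rejected as non-generic, whereas a graph is not rejected merely because a single conditional is simple.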

ei

PDF Web DOI [BibTex]

What can neurons do for their brain? Communicate selectivity with bursts

Balduzzi, D., Tononi, G.

Theory in Biosciences, 132(1):27-39, Springer, March 2013 (article)

Abstract
Neurons deep in cortex interact with the environment extremely indirectly; the spikes they receive and produce are pre- and post-processed by millions of other neurons. This paper proposes two information-theoretic constraints guiding the production of spikes, that help ensure bursting activity deep in cortex relates meaningfully to events in the environment. First, neurons should emphasize selective responses with bursts. Second, neurons should propagate selective inputs by burst-firing in response to them. We show the constraints are necessary for bursts to dominate information-transfer within cortex, thereby providing a substrate allowing neurons to distribute credit amongst themselves. Finally, since synaptic plasticity degrades the ability of neurons to burst selectively, we argue that homeostatic regulation of synaptic weights is necessary, and that it is best performed offline during sleep.

ei

PDF Web DOI [BibTex]

Apprenticeship Learning with Few Examples

Boularias, A., Chaib-draa, B.

Neurocomputing, 104, pages: 83-96, March 2013 (article)

Abstract
We consider the problem of imitation learning when the examples, provided by an expert human, are scarce. Apprenticeship learning via inverse reinforcement learning provides an efficient tool for generalizing the examples, based on the assumption that the expert's policy maximizes a value function, which is a linear combination of state and action features. Most apprenticeship learning algorithms use only simple empirical averages of the features in the demonstrations as a statistics of the expert's policy. However, this method is efficient only when the number of examples is sufficiently large to cover most of the states, or the dynamics of the system is nearly deterministic. In this paper, we show that the quality of the learned policies is sensitive to the error in estimating the averages of the features when the dynamics of the system is stochastic. To reduce this error, we introduce two new approaches for bootstrapping the demonstrations by assuming that the expert is near-optimal and the dynamics of the system is known. In the first approach, the expert's examples are used to learn a reward function and to generate furthermore examples from the corresponding optimal policy. The second approach uses a transfer technique, known as graph homomorphism, in order to generalize the expert's actions to unvisited regions of the state space. Empirical results on simulated robot navigation problems show that our approach is able to learn sufficiently good policies from a significantly small number of examples.
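
The statistic in question is the expected feature count of a policy; in the standard apprenticeship-learning setup (generic definitions with discount γ and feature map φ, not copied from the paper) the value of a policy π is linear in these feature expectations,

\[
V(\pi) = w^{\top}\mu(\pi), \qquad
\mu(\pi) = \mathbb{E}_{\pi}\Bigl[\sum_{t=0}^{\infty} \gamma^{t}\,\phi(s_t, a_t)\Bigr],
\qquad
\hat{\mu} = \frac{1}{m}\sum_{i=1}^{m}\sum_{t} \gamma^{t}\,\phi\bigl(s^{(i)}_t, a^{(i)}_t\bigr),
\]

so with few demonstrations the empirical estimate of the feature expectations is noisy in stochastic domains, and matching it can produce poor policies; the two bootstrapping approaches aim to reduce exactly this estimation error.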

ei

Web DOI [BibTex]

Quasi-Newton Methods: A New Direction

Hennig, P., Kiefel, M.

Journal of Machine Learning Research, 14(1):843-865, March 2013 (article)

Abstract
Four decades after their invention, quasi-Newton methods are still state of the art in unconstrained numerical optimization. Although not usually interpreted thus, these are learning algorithms that fit a local quadratic approximation to the objective function. We show that many, including the most popular, quasi-Newton methods can be interpreted as approximations of Bayesian linear regression under varying prior assumptions. This new notion elucidates some shortcomings of classical algorithms, and lights the way to a novel nonparametric quasi-Newton method, which is able to make more efficient use of available information at computational cost similar to its predecessors.
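
For reference, the classical construction that the paper reinterprets probabilistically (standard textbook notation, not taken from the paper): with steps s_k = x_{k+1} − x_k and gradient differences y_k = ∇f(x_{k+1}) − ∇f(x_k), quasi-Newton methods maintain a Hessian estimate obeying the secant equation B_{k+1} s_k = y_k; the widely used BFGS update is

\[
B_{k+1} = B_k - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k} + \frac{y_k y_k^{\top}}{y_k^{\top} s_k},
\]

and updates of this family are what the paper reads as approximations to Bayesian linear regression on the objective's Hessian under different prior assumptions.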

ei ps pn

website+code pdf link (url) [BibTex]

Regional effects of magnetization dispersion on quantitative perfusion imaging for pulsed and continuous arterial spin labeling

Cavusoglu, M., Pohmann, R., Burger, H. C., Uludag, K.

Magnetic Resonance in Medicine, 69(2):524-530, February 2013 (article)

Abstract
Most experiments assume a global transit delay time with blood flowing from the tagging region to the imaging slice in plug flow without any dispersion of the magnetization. However, because of cardiac pulsation, nonuniform cross-sectional flow profile, and complex vessel networks, the transit delay time is not a single value but follows a distribution. In this study, we explored the regional effects of magnetization dispersion on quantitative perfusion imaging for varying transit times within a very large interval from the direct comparison of pulsed, pseudo-continuous, and dual-coil continuous arterial spin labeling encoding schemes. Longer distances between tagging and imaging region typically used for continuous tagging schemes enhance the regional bias on the quantitative cerebral blood flow measurement causing an underestimation up to 37% when plug flow is assumed as in the standard model.

ei

Web DOI [BibTex]

The multivariate Watson distribution: Maximum-likelihood estimation and other aspects

Sra, S., Karp, D.

Journal of Multivariate Analysis, 114, pages: 256-269, February 2013 (article)

Abstract
This paper studies fundamental aspects of modelling data using multivariate Watson distributions. Although these distributions are natural for modelling axially symmetric data (i.e., unit vectors where ±x are equivalent), for high-dimensions using them can be difficult—largely because for Watson distributions even basic tasks such as maximum-likelihood are numerically challenging. To tackle the numerical difficulties some approximations have been derived. But these are either grossly inaccurate in high-dimensions [K.V. Mardia, P. Jupp, Directional Statistics, second ed., John Wiley & Sons, 2000] or when reasonably accurate [A. Bijral, M. Breitenbach, G.Z. Grudic, Mixture of Watson distributions: a generative model for hyperspherical embeddings, in: Artificial Intelligence and Statistics, AISTATS 2007, 2007, pp. 35–42], they lack theoretical justification. We derive new approximations to the maximum-likelihood estimates; our approximations are theoretically well-defined, numerically accurate, and easy to compute. We build on our parameter estimation and discuss mixture-modelling with Watson distributions; here we uncover a hitherto unknown connection to the “diametrical clustering” algorithm of Dhillon et al. [I.S. Dhillon, E.M. Marcotte, U. Roshan, Diametrical clustering for identifying anticorrelated gene clusters, Bioinformatics 19 (13) (2003) 1612–1619].
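
For orientation, the density being fitted has a simple exponential form and the maximum-likelihood conditions below are the standard ones (directional-statistics notation, not quoted from the paper): with respect to the uniform probability measure on the unit sphere S^{p−1},

\[
f(x; \mu, \kappa) = \frac{\exp\!\bigl(\kappa\,(\mu^{\top}x)^{2}\bigr)}{M\!\bigl(\tfrac{1}{2}, \tfrac{p}{2}, \kappa\bigr)},
\qquad \|x\| = \|\mu\| = 1,
\]

where M is Kummer's confluent hypergeometric function. Given data x_1,…,x_n, the estimate of μ is an extreme eigenvector (largest eigenvalue for κ > 0, smallest for κ < 0) of the scatter matrix S = (1/n) Σ_i x_i x_i^⊤, and the concentration estimate solves the transcendental equation

\[
\frac{\partial}{\partial\kappa}\,\log M\!\bigl(\tfrac{1}{2}, \tfrac{p}{2}, \kappa\bigr) = \hat{\mu}^{\top} S\,\hat{\mu},
\]

whose solution is what the new approximations target.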

ei

Web DOI [BibTex]

How the result of graph clustering methods depends on the construction of the graph

Maier, M., von Luxburg, U., Hein, M.

ESAIM: Probability & Statistics, 17, pages: 370-418, January 2013 (article)

Abstract
We study the scenario of graph-based clustering algorithms such as spectral clustering. Given a set of data points, one first has to construct a graph on the data points and then apply a graph clustering algorithm to find a suitable partition of the graph. Our main question is if and how the construction of the graph (choice of the graph, choice of parameters, choice of weights) influences the outcome of the final clustering result. To this end we study the convergence of cluster quality measures such as the normalized cut or the Cheeger cut on various kinds of random geometric graphs as the sample size tends to infinity. It turns out that the limit values of the same objective function are systematically different on different types of graphs. This implies that clustering results systematically depend on the graph and can be very different for different types of graph. We provide examples to illustrate the implications on spectral clustering.
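
The two quality measures whose limits are compared are the standard ones (generic definitions, not quoted from the paper): for a weighted graph with weights w_ij, a partition (A, Ā) and volume vol(A) = Σ_{i∈A} Σ_j w_ij,

\[
\mathrm{cut}(A,\bar{A}) = \sum_{i \in A,\, j \in \bar{A}} w_{ij},
\qquad
\mathrm{Ncut}(A) = \mathrm{cut}(A,\bar{A})\Bigl(\tfrac{1}{\mathrm{vol}(A)} + \tfrac{1}{\mathrm{vol}(\bar{A})}\Bigr),
\qquad
\mathrm{CheegerCut}(A) = \frac{\mathrm{cut}(A,\bar{A})}{\min\bigl(\mathrm{vol}(A), \mathrm{vol}(\bar{A})\bigr)}.
\]

The paper's finding is that the large-sample limits of these objectives differ systematically across graph types, so the minimizing partition, and hence the clustering, depends on which graph is constructed from the data.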

ei

PDF DOI [BibTex]

Falsification and future performance

Balduzzi, D.

In Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence, 7070, pages: 65-78, Lecture Notes in Computer Science, Springer, Berlin, Germany, Solomonoff 85th Memorial Conference, January 2013 (inproceedings)

Abstract
We information-theoretically reformulate two measures of capacity from statistical learning theory: empirical VC-entropy and empirical Rademacher complexity. We show these capacity measures count the number of hypotheses about a dataset that a learning algorithm falsifies when it finds the classifier in its repertoire minimizing empirical risk. It then follows that the future performance of predictors on unseen data is controlled in part by how many hypotheses the learner falsifies. As a corollary we show that empirical VC-entropy quantifies the message length of the true hypothesis in the optimal code of a particular probability distribution, the so-called actual repertoire.
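
For context, the two capacity measures being reformulated have the following textbook definitions (not quoted from the paper): for a class F of ±1-valued classifiers and a sample x_1,…,x_n,

\[
\hat{H}(\mathcal{F}; x_1,\dots,x_n) = \log\,\bigl|\{(f(x_1),\dots,f(x_n)) : f \in \mathcal{F}\}\bigr|,
\qquad
\hat{\mathfrak{R}}(\mathcal{F}) = \mathbb{E}_{\sigma}\Bigl[\,\sup_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^{n} \sigma_i\, f(x_i)\Bigr],
\]

the empirical VC-entropy (the log-number of distinct labelings the class realizes on the sample) and the empirical Rademacher complexity (the expected best correlation with uniformly random sign patterns σ_i ∈ {±1}).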

ei

PDF Web DOI [BibTex]

Explicit eigenvalues of certain scaled trigonometric matrices

Sra, S.

Linear Algebra and its Applications, 438(1):173-181, January 2013 (article)

ei

DOI [BibTex]

How Sensitive Is the Human Visual System to the Local Statistics of Natural Images?

Gerhard, H., Wichmann, F., Bethge, M.

PLoS Computational Biology, 9(1):e1002873, January 2013 (article)

Abstract
Several aspects of primate visual physiology have been identified as adaptations to local regularities of natural images. However, much less work has measured visual sensitivity to local natural image regularities. Most previous work focuses on global perception of large images and shows that observers are more sensitive to visual information when image properties resemble those of natural images. In this work we measure human sensitivity to local natural image regularities using stimuli generated by patch-based probabilistic natural image models that have been related to primate visual physiology. We find that human observers can learn to discriminate the statistical regularities of natural image patches from those represented by current natural image models after very few exposures and that discriminability depends on the degree of regularities captured by the model. The quick learning we observed suggests that the human visual system is biased for processing natural images, even at very fine spatial scales, and that it has a surprisingly large knowledge of the regularities in natural images, at least in comparison to the state-of-the-art statistical models of natural images.

ei

DOI [BibTex]

A neural population model for visual pattern detection

Goris, R., Putzeys, T., Wagemans, J., Wichmann, F.

Psychological Review, 120(3):472–496, 2013 (article)

ei

DOI [BibTex]

Feedback Error Learning for Rhythmic Motor Primitives

Gopalan, N., Deisenroth, M., Peters, J.

In Proceedings of 2013 IEEE International Conference on Robotics and Automation (ICRA 2013), pages: 1317-1322, 2013 (inproceedings)

ei

PDF DOI [BibTex]

Gaussian Process Vine Copulas for Multivariate Dependence

Lopez-Paz, D., Hernandez-Lobato, J., Ghahramani, Z.

In Proceedings of the 30th International Conference on Machine Learning, W&CP 28(2), pages: 10-18, (Editors: S Dasgupta and D McAllester), JMLR, ICML, 2013, Poster: http://people.tuebingen.mpg.de/dlopez/papers/icml2013_gpvine_poster.pdf (inproceedings)

ei

PDF Web [BibTex]

A Review of Performance Variations in SMR-Based Brain–Computer Interfaces (BCIs)

Grosse-Wentrup, M., Schölkopf, B.

In Brain-Computer Interface Research, pages: 39-51, 4, SpringerBriefs in Electrical and Computer Engineering, (Editors: Guger, C., Allison, B. Z. and Edlinger, G.), Springer, 2013 (inbook)

ei

PDF DOI [BibTex]

The Randomized Dependence Coefficient

Lopez-Paz, D., Hennig, P., Schölkopf, B.

In Advances in Neural Information Processing Systems 26, pages: 1-9, (Editors: C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger), 27th Annual Conference on Neural Information Processing Systems (NIPS), 2013 (inproceedings)

ei pn

PDF [BibTex]

On a link between kernel mean maps and Fraunhofer diffraction, with an application to super-resolution beyond the diffraction limit

Harmeling, S., Hirsch, M., Schölkopf, B.

In IEEE Conference on Computer Vision and Pattern Recognition, pages: 1083-1090, IEEE, CVPR, 2013 (inproceedings)

ei

DOI [BibTex]

Output Kernel Learning Methods

Dinuzzo, F., Ong, C., Fukumizu, K.

In International Workshop on Advances in Regularization, Optimization, Kernel Methods and Support Vector Machines: theory and applications, ROKS, 2013 (inproceedings)

ei

[BibTex]

Alignment-based Transfer Learning for Robot Models

Bocsi, B., Csato, L., Peters, J.

In Proceedings of the 2013 International Joint Conference on Neural Networks (IJCNN 2013), pages: 1-7, 2013 (inproceedings)

ei

PDF DOI [BibTex]

Accurate indel prediction using paired-end short reads

Grimm, D., Hagmann, J., Koenig, D., Weigel, D., Borgwardt, KM.

BMC Genomics, 14(132), 2013 (article)

ei

Web DOI [BibTex]

Nonlinear Causal Discovery for High Dimensional Data: A Kernelized Trace Method

Chen, Z., Zhang, K., Chan, L.

In 13th International Conference on Data Mining, pages: 1003-1008, (Editors: H. Xiong, G. Karypis, B. M. Thuraisingham, D. J. Cook and X. Wu), IEEE Computer Society, ICDM, 2013 (inproceedings)

ei

DOI [BibTex]

A probabilistic approach to robot trajectory generation

Paraschos, A., Neumann, G., Peters, J.

In Proceedings of the 13th IEEE International Conference on Humanoid Robots (HUMANOIDS), pages: 477-483, IEEE, 13th IEEE-RAS International Conference on Humanoid Robots, 2013 (inproceedings)

ei

DOI [BibTex]
