

2013


Puppet Flow

Zuffi, S., Black, M. J.

(7), Max Planck Institute for Intelligent Systems, October 2013 (techreport)

Abstract
We introduce Puppet Flow (PF), a layered model describing the optical flow of a person in a video sequence. We consider video frames composed of two layers: a foreground layer corresponding to a person, and a background layer. We model the background as an affine flow field. The foreground layer, being a moving person, requires reasoning about the articulated nature of the human body. We thus represent the foreground layer with the Deformable Structures model (DS), a parametrized 2D part-based human body representation. We call the motion field defined through articulated motion and deformation of the DS model a Puppet Flow. By exploiting the DS representation, Puppet Flow is a parametrized optical flow field, whose parameters are the person's pose, gender and body shape.
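
A minimal sketch of the background part of such a layered model, written only from what the abstract states: an affine flow field assigns each pixel a displacement that is affine in its coordinates. The function name and the matrix/translation values below are illustrative placeholders; the foreground Puppet Flow itself would instead be generated from the DS body-model parameters (pose, shape, gender), which are not reproduced here.

```python
import numpy as np

def affine_background_flow(h, w, A, t):
    # Each pixel (x, y) is assigned the displacement A @ [x, y] + t,
    # i.e. a flow field that is affine in the pixel coordinates.
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel()]).astype(float)   # 2 x (h*w)
    uv = A @ pts + t[:, None]                                 # per-pixel flow
    return uv[0].reshape(h, w), uv[1].reshape(h, w)           # (u, v) components

# Example: a slight rotation plus translation of the background (toy values).
u, v = affine_background_flow(4, 6,
                              np.array([[0.0, -0.01], [0.01, 0.0]]),
                              np.array([1.0, 0.5]))
```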

ps

pdf Project Page [BibTex]

Learning and Optimization with Submodular Functions

Sankaran, B., Ghazvininejad, M., He, X., Kale, D., Cohen, L.

ArXiv, May 2013 (techreport)

Abstract
In many naturally occurring optimization problems one needs to ensure that the definition of the optimization problem lends itself to solutions that are tractable to compute. In cases where exact solutions cannot be computed tractably, it is beneficial to have strong guarantees on the tractable approximate solutions. In order to operate under these criteria, most optimization problems are cast under the umbrella of convexity or submodularity. In this report we will study design and optimization over a common class of functions called submodular functions. Set functions, and specifically submodular set functions, characterize a wide variety of naturally occurring optimization problems, and the property of submodularity of set functions has deep theoretical consequences with wide ranging applications. Informally, the property of submodularity of set functions concerns the intuitive principle of diminishing returns. This property states that adding an element to a smaller set has more value than adding it to a larger set. Common examples of submodular monotone functions are entropies, concave functions of cardinality, and matroid rank functions; non-monotone examples include graph cuts, network flows, and mutual information. In this report we will review the formal definition of submodularity; the optimization of submodular functions, both maximization and minimization; and finally discuss some applications in relation to learning and reasoning using submodular functions.
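
The diminishing-returns property mentioned above is what makes simple greedy algorithms effective for monotone submodular maximization. Below is a small illustrative sketch, not taken from the report: the standard greedy algorithm under a cardinality constraint, applied to a coverage function (one of the canonical monotone submodular examples); the toy sets and function names are ours.

```python
# Greedy maximization of a monotone submodular set function under a
# cardinality constraint (this is the setting with the classic 1 - 1/e bound).

def coverage(selected, sets):
    # Number of distinct elements covered by the chosen sets.
    return len(set().union(*(sets[i] for i in selected))) if selected else 0

def greedy(sets, k):
    chosen = []
    for _ in range(k):
        best = max((i for i in range(len(sets)) if i not in chosen),
                   key=lambda i: coverage(chosen + [i], sets) - coverage(chosen, sets))
        chosen.append(best)
    return chosen

toy_sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
print(greedy(toy_sets, 2))   # picks the sets with the largest marginal gain first
```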

am

arxiv link (url) [BibTex]

A Quantitative Analysis of Current Practices in Optical Flow Estimation and the Principles Behind Them

Sun, D., Roth, S., Black, M. J.

(CS-10-03), Brown University, Department of Computer Science, January 2013 (techreport)

ps

pdf [BibTex]

A Review of Performance Variations in SMR-Based Brain–Computer Interfaces (BCIs)

Grosse-Wentrup, M., Schölkopf, B.

In Brain-Computer Interface Research, pages: 39-51, 4, SpringerBriefs in Electrical and Computer Engineering, (Editors: Guger, C., Allison, B. Z. and Edlinger, G.), Springer, 2013 (inbook)

ei

PDF DOI [BibTex]

Coupling between spiking activity and beta band spatio-temporal patterns in the macaque PFC

Safavi, S., Panagiotaropoulos, T., Kapoor, V., Logothetis, N., Besserve, M.

43rd Annual Meeting of the Society for Neuroscience (Neuroscience), 2013 (poster)

ei

[BibTex]

Gaussian Process Vine Copulas for Multivariate Dependence

Lopez-Paz, D., Hernandez-Lobato, J., Ghahramani, Z.

International Conference on Machine Learning (ICML), 2013 (poster)

ei

PDF [BibTex]

Domain Generalization via Invariant Feature Representation

Muandet, K., Balduzzi, D., Schölkopf, B.

30th International Conference on Machine Learning (ICML2013), 2013 (poster)

ei

PDF [BibTex]

Semi-supervised learning in causal and anticausal settings

Schölkopf, B., Janzing, D., Peters, J., Sgouritsa, E., Zhang, K., Mooij, J.

In Empirical Inference, pages: 129-141, 13, Festschrift in Honor of Vladimir Vapnik, (Editors: Schölkopf, B., Luo, Z. and Vovk, V.), Springer, 2013 (inbook)

ei

DOI [BibTex]

Analyzing locking of spikes to spatio-temporal patterns in the macaque prefrontal cortex

Safavi, S., Panagiotaropoulos, T., Kapoor, V., Logothetis, N., Besserve, M.

Bernstein Conference, 2013 (poster)

ei

DOI [BibTex]

Tractable large-scale optimization in machine learning

Sra, S.

In Tractability: Practical Approaches to Hard Problems, pages: 202-230, 7, (Editors: Bordeaux, L., Hamadi, Y., Kohli, P. and Mateescu, R.), Cambridge University Press, 2013 (inbook)

ei

[BibTex]

One-class Support Measure Machines for Group Anomaly Detection

Muandet, K., Schölkopf, B.

29th Conference on Uncertainty in Artificial Intelligence (UAI), 2013 (poster)

ei

PDF [BibTex]

The Randomized Dependence Coefficient

Lopez-Paz, D., Hennig, P., Schölkopf, B.

Neural Information Processing Systems (NIPS), 2013 (poster)

ei pn

PDF [BibTex]

Characterization of different types of sharp-wave ripple signatures in the CA1 of the macaque hippocampus

Ramirez-Villegas, J., Logothetis, N., Besserve, M.

4th German Neurophysiology PhD Meeting Networks, 2013 (poster)

ei

Web [BibTex]

Animating Samples from Gaussian Distributions

Hennig, P.

(8), Max Planck Institute for Intelligent Systems, Tübingen, Germany, 2013 (techreport)

ei pn

PDF [BibTex]

Proceedings of the 10th European Workshop on Reinforcement Learning, Volume 24

Deisenroth, M., Szepesvári, C., Peters, J.

pages: 173, JMLR, European Workshop On Reinforcement Learning, EWRL, 2013 (proceedings)

ei

Web [BibTex]

Maximizing Kepler science return per telemetered pixel: Detailed models of the focal plane in the two-wheel era

Hogg, D. W., Angus, R., Barclay, T., Dawson, R., Fergus, R., Foreman-Mackey, D., Harmeling, S., Hirsch, M., Lang, D., Montet, B. T., Schiminovich, D., Schölkopf, B.

arXiv:1309.0653, 2013 (techreport)

ei

link (url) [BibTex]

Maximizing Kepler science return per telemetered pixel: Searching the habitable zones of the brightest stars

Montet, B. T., Angus, R., Barclay, T., Dawson, R., Fergus, R., Foreman-Mackey, D., Harmeling, S., Hirsch, M., Hogg, D. W., Lang, D., Schiminovich, D., Schölkopf, B.

arXiv:1309.0654, 2013 (techreport)

ei

link (url) [BibTex]

On the Relations and Differences between Popper Dimension, Exclusion Dimension and VC-Dimension

Seldin, Y., Schölkopf, B.

In Empirical Inference - Festschrift in Honor of Vladimir N. Vapnik, pages: 53-57, 6, (Editors: Schölkopf, B., Luo, Z. and Vovk, V.), Springer, 2013 (inbook)

ei

[BibTex]

Behavior as broken symmetry in embodied self-organizing robots

Der, R., Martius, G.

In Advances in Artificial Life, ECAL 2013, pages: 601-608, MIT Press, 2013 (incollection)

al

[BibTex]

Using Torque Redundancy to Optimize Contact Forces in Legged Robots

Righetti, L., Buchli, J., Mistry, M., Kalakrishnan, M., Schaal, S.

In Redundancy in Robot Manipulators and Multi-Robot Systems, 57, pages: 35-51, Lecture Notes in Electrical Engineering, Springer Berlin Heidelberg, 2013 (incollection)

Abstract
The development of legged robots for complex environments requires controllers that guarantee both high tracking performance and compliance with the environment. More specifically, the control of contact interaction with the environment is of crucial importance to ensure stable, robust and safe motions. In the following, we present an inverse dynamics controller that exploits torque redundancy to directly and explicitly minimize any combination of linear and quadratic costs in the contact constraints and in the commands. Such a result is particularly relevant for legged robots as it allows torque redundancy to be used to directly optimize contact interactions. For example, given a desired locomotion behavior, it can guarantee the minimization of contact forces to reduce slipping on difficult terrains while ensuring high tracking performance of the desired motion. The proposed controller is very simple and computationally efficient, and, most importantly, it can greatly improve the performance of legged locomotion on difficult terrains, as can be seen in the experimental results.
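
To make the redundancy idea concrete, here is a deliberately generic sketch, not the chapter's inverse dynamics controller: assume (as a toy simplification) that every torque vector of the form tau_p + N z realizes the same motion, with N spanning the redundant directions, and that the quantity to minimize is a quadratic cost ||W tau||^2, for instance with W a hypothetical linear map from torques to contact forces. The optimal redundancy parameter then comes from a least-squares solve. All names and dimensions below are placeholders.

```python
import numpy as np

def min_cost_torque(tau_p, N, W):
    # All torques tau = tau_p + N @ z realize the same motion (toy assumption);
    # choose z minimizing the quadratic cost ||W @ tau||^2 in closed form.
    A = W @ N
    z, *_ = np.linalg.lstsq(A, -W @ tau_p, rcond=None)
    return tau_p + N @ z

# Toy example with made-up dimensions: 4 joints, 2-dimensional redundancy.
rng = np.random.default_rng(0)
tau = min_cost_torque(rng.normal(size=4),
                      rng.normal(size=(4, 2)),
                      rng.normal(size=(3, 4)))
```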

am mg

link (url) [BibTex]

Class-Specific Hough Forests for Object Detection

Gall, J., Lempitsky, V.

In Decision Forests for Computer Vision and Medical Image Analysis, pages: 143-157, 11, (Editors: Criminisi, A. and Shotton, J.), Springer, 2013 (incollection)

ps

code Project Page [BibTex]


2009


Learning an Interactive Segmentation System

Nickisch, H., Kohli, P., Rother, C.

Max Planck Institute for Biological Cybernetics, December 2009 (techreport)

Abstract
Many successful applications of computer vision to image or video manipulation are interactive by nature. However, parameters of such systems are often trained neglecting the user. Traditionally, interactive systems have been treated in the same manner as their fully automatic counterparts. Their performance is evaluated by computing the accuracy of their solutions under some fixed set of user interactions. This paper proposes a new evaluation and learning method which brings the user in the loop. It is based on the use of an active robot user - a simulated model of a human user. We show how this approach can be used to evaluate and learn parameters of state-of-the-art interactive segmentation systems. We also show how simulated user models can be integrated into the popular max-margin method for parameter learning and propose an algorithm to solve the resulting optimisation problem.

ei

Web [BibTex]

An Incremental GEM Framework for Multiframe Blind Deconvolution, Super-Resolution, and Saturation Correction

Harmeling, S., Sra, S., Hirsch, M., Schölkopf, B.

(187), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2009 (techreport)

Abstract
We develop an incremental generalized expectation maximization (GEM) framework to model the multiframe blind deconvolution problem. A simplistic version of this problem was recently studied by Harmeling et al. We solve a more realistic version of this problem which includes the following major features: (i) super-resolution ability despite noise and unknown blurring; (ii) saturation correction, i.e., handling of overexposed pixels that can otherwise confound the image processing; and (iii) simultaneous handling of color channels. These features are seamlessly integrated into our incremental GEM framework to yield simple but efficient multiframe blind deconvolution algorithms. We present technical details concerning critical steps of our algorithms, especially to highlight how all operations can be written using matrix-vector multiplications. We apply our algorithm to real-world images from astronomy and super-resolution tasks. Our experimental results show that our methods yield improved resolution and deconvolution at the same time.

ei

PDF [BibTex]

Efficient Filter Flow for Space-Variant Multiframe Blind Deconvolution

Hirsch, M., Sra, S., Schölkopf, B., Harmeling, S.

(188), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2009 (techreport)

Abstract
Ultimately motivated by facilitating space-variant blind deconvolution, we present a class of linear transformations that are expressive enough to represent space-variant filters, but at the same time especially designed for efficient matrix-vector multiplications. Successful results on astronomical imaging through atmospheric turbulence and on noisy magnetic resonance images of constantly moving objects demonstrate the practical significance of our approach.
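
As a rough illustration of the kind of transformation described here, and only a guess at its spirit rather than the report's actual construction: a space-variant blur can be approximated as a sum of masked, patch-wise convolutions, each of which reduces to an efficient FFT-based matrix-vector multiplication. The masks and kernels below are placeholders supplied by the caller.

```python
import numpy as np
from scipy.signal import fftconvolve

def space_variant_filter(image, masks, kernels):
    # Approximate a spatially varying blur as a sum of masked convolutions;
    # each term is a fast (FFT-based) linear operation applied to the image.
    out = np.zeros_like(image, dtype=float)
    for m, k in zip(masks, kernels):
        out += fftconvolve(image * m, k, mode="same")
    return out
```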

ei

PDF [BibTex]

Clinical PET/MRI-System and Its Applications with MRI Based Attenuation Correction

Kolb, A., Hofmann, M., Sossi, V., Wehrl, H., Sauter, A., Schmid, A., Schlemmer, H., Claussen, C., Pichler, B.

IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC 2009), 2009, pages: 1, October 2009 (poster)

Abstract
Clinical PET/MRI is an emerging new hybrid imaging modality. In addition to providing a unique possibility for multifunctional imaging with temporally and spatially matched data, it also provides anatomical information that can be used for attenuation correction with no radiation exposure to the subjects. An advantage of combined over sequential PET and MR imaging is the reduction of total scan time. Here we present our initial experience with a hybrid brain PET/MRI system. Due to the ethical approval, patient scans could only be performed after a diagnostic PET/CT. We estimate that in approximately 50% of the cases PET/MRI was of superior diagnostic value compared to PET/CT and was able to provide additional information, such as DTI, spectroscopy and time-of-flight (TOF) angiography. Here we present three patient cases: in oncology, a retropharyngeal carcinoma; in neurooncology, a relapsing meningioma; and in neurology, a pharyngeal carcinoma in addition to an infarction of the right hemisphere. For quantitative PET imaging attenuation correction is obligatory. In the current PET/MRI setup we used our MRI-based atlas method for calculating the mu-map for attenuation correction. MR-based attenuation correction accuracy was quantitatively compared to CT-based PET attenuation correction. Extensive studies to assess potential mutual interferences between the PET and MR imaging modalities as well as NEMA measurements have been performed. The first patient studies as well as the phantom tests clearly demonstrated the overall good imaging performance of this first human PET/MRI system. Ongoing work concentrates on advanced normalization and reconstruction methods incorporating count-rate based algorithms.

ei

Web [BibTex]

A flowering-time gene network model for association analysis in Arabidopsis thaliana

Klotzbücher, K., Kobayashi, Y., Shervashidze, N., Borgwardt, K., Weigel, D.

2009(39):95-96, German Conference on Bioinformatics (GCB '09), September 2009 (poster)

Abstract
In our project we want to determine a set of single nucleotide polymorphisms (SNPs), which have a major effect on the flowering time of Arabidopsis thaliana. Instead of performing a genome-wide association study on all SNPs in the genome of Arabidopsis thaliana, we examine the subset of SNPs from the flowering-time gene network model. We are interested in how the results of the association study vary when using only the ascertained subset of SNPs from the flowering network model, and when additionally using the information encoded by the structure of the network model. The network model is compiled from the literature by manual analysis and contains genes which have been found to affect the flowering time of Arabidopsis thaliana [Far+08; KW07]. The genes in this model are annotated with the SNPs that are located in these genes, or in near proximity to them. In a baseline comparison between the subset of SNPs from the graph and the set of all SNPs, we omit the structural information and calculate the correlation between the individual SNPs and the flowering time phenotype by use of statistical methods. Through this we can determine the subset of SNPs with the highest correlation to the flowering time. In order to further refine this subset, we include the additional information provided by the network structure by conducting a graph-based feature pre-selection. In the further course of this project we want to validate and examine the resulting set of SNPs and their corresponding genes with experimental methods.

ei

PDF Web [BibTex]

Initial Data from a first PET/MRI-System and its Applications in Clinical Studies Using MRI Based Attenuation Correction

Kolb, A., Hofmann, M., Sossi, V., Wehrl, H., Sauter, A., Schmid, A., Judenhofer, M., Schlemmer, H., Claussen, C., Pichler, B.

2009 World Molecular Imaging Congress, 2009, pages: 1200, September 2009 (poster)

ei

Web [BibTex]

A High-Speed Object Tracker from Off-the-Shelf Components

Lampert, C., Peters, J.

First IEEE Workshop on Computer Vision for Humanoid Robots in Real Environments at ICCV 2009, 1, pages: 1, September 2009 (poster)

Abstract
We introduce RTblob, an open-source real-time vision system for 3D object detection that achieves over 200 Hz tracking speed with only off-the-shelf hardware components. It allows fast and accurate tracking of colored objects in 3D without expensive and often custom-built hardware, instead making use of the PC's graphics card for the necessary image processing operations.

ei

PDF Web [BibTex]

Estimating Critical Stimulus Features from Psychophysical Data: The Decision-Image Technique Applied to Human Faces

Macke, J., Wichmann, F.

Journal of Vision, 9(8):31, 9th Annual Meeting of the Vision Sciences Society (VSS), August 2009 (poster)

Abstract
One of the main challenges in the sensory sciences is to identify the stimulus features on which the sensory systems base their computations: they are a pre-requisite for computational models of perception. We describe a technique, decision-images, for extracting critical stimulus features based on logistic regression. Rather than embedding the stimuli in noise, as is done in classification image analysis, we want to infer the important features directly from physically heterogeneous stimuli. A decision-image not only defines the critical region-of-interest within a stimulus but is a quantitative template which defines a direction in stimulus space. Decision-images thus enable the development of predictive models, as well as the generation of optimized stimuli for subsequent psychophysical investigations. Here we describe our method and apply it to data from a human face discrimination experiment. We show that decision-images are able to predict human responses not only in terms of overall percent correct but are able to predict, for individual observers, the probabilities with which individual faces are (mis-)classified. We then test the predictions of the models using optimized stimuli. Finally, we discuss possible generalizations of the approach and its relationships with other models.
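
Since the abstract states that the templates are obtained by logistic regression on the raw stimuli, here is a minimal sketch of that idea under our own assumptions (plain gradient descent, no regularization, made-up names): the fitted weight vector, reshaped to the stimulus dimensions, plays the role of the decision-image.

```python
import numpy as np

def fit_decision_image(X, y, lr=0.1, iters=1000):
    # X: one stimulus per row (pixels as features); y: binary responses (0/1).
    # Plain logistic regression by gradient descent; w is the decision-image.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted response probability
        grad = p - y
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b
```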

ei

Web DOI [BibTex]

Consistent Nonparametric Tests of Independence

Gretton, A., Györfi, L.

(172), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, July 2009 (techreport)

Abstract
Three simple and explicit procedures for testing the independence of two multi-dimensional random variables are described. Two of the associated test statistics (L1, log-likelihood) are defined when the empirical distribution of the variables is restricted to finite partitions. A third test statistic is defined as a kernel-based independence measure. Two kinds of tests are provided. Distribution-free strong consistent tests are derived on the basis of large deviation bounds on the test statistics: these tests make almost surely no Type I or Type II error after a random sample size. Asymptotically alpha-level tests are obtained from the limiting distribution of the test statistics. For the latter tests, the Type I error converges to a fixed non-zero value alpha, and the Type II error drops to zero, for increasing sample size. All tests reject the null hypothesis of independence if the test statistics become large. The performance of the tests is evaluated experimentally on benchmark data.
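
A small sketch of the first partition-based statistic mentioned above (the L1 statistic), with our own simplifications: scalar variables and a regular grid partition, whereas the report treats multi-dimensional variables and does not prescribe this particular binning.

```python
import numpy as np

def l1_independence_statistic(x, y, bins=8):
    # L1 distance between the joint empirical distribution and the product of
    # its marginals, computed on a finite (here: regular grid) partition.
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal of x over the partition
    py = joint.sum(axis=0, keepdims=True)   # marginal of y over the partition
    return np.abs(joint - px * py).sum()

# Large values of the statistic are evidence against independence.
```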

ei

PDF [BibTex]

Semi-supervised Analysis of Human fMRI Data

Shelton, JA., Blaschko, MB., Lampert, CH., Bartels, A.

Berlin Brain Computer Interface Workshop on Advances in Neurotechnology, 2009, pages: 1, July 2009 (poster)

Abstract
Kernel Canonical Correlation Analysis (KCCA) is a general technique for subspace learning that incorporates principal components analysis (PCA) and Fisher linear discriminant analysis (LDA) as special cases. By finding directions that maximize correlation, CCA learns representations tied more closely to the underlying process generating the data and can ignore high-variance noise directions. However, for data where acquisition in a given modality is expensive or otherwise limited, CCA may suffer from small sample effects. We propose to use semisupervised Laplacian regularization to utilize data that are present in only one modality. This approach is able to find highly correlated directions that also lie along the data manifold, resulting in a more robust estimate of correlated subspaces. Functional magnetic resonance imaging (fMRI) acquired data are naturally amenable to subspace techniques as data are well aligned. fMRI data of the human brain are a particularly interesting candidate. In this study we implemented various supervised and semi-supervised versions of CCA on human fMRI data, with regression to single and multivariate labels (corresponding to video content subjects viewed during the image acquisition). In each variate condition, the semi-supervised variants of CCA performed better than the supervised variants, including a supervised variant with Laplacian regularization. We additionally analyze the weights learned by the regression in order to infer brain regions that are important to different types of visual processing.
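
For orientation, the following is a small sketch of plain linear CCA, the special case on which KCCA builds, written from the standard textbook formulation rather than from this poster; the regularization constant and function names are our own.

```python
import numpy as np

def _inv_sqrt(C):
    # Symmetric inverse square root via eigendecomposition.
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def linear_cca(X, Y, reg=1e-3):
    # Leading pair of canonical directions maximizing corr(X @ a, Y @ b).
    X, Y = X - X.mean(0), Y - Y.mean(0)
    Cxx = X.T @ X / len(X) + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / len(Y) + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / len(X)
    Wx, Wy = _inv_sqrt(Cxx), _inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    return Wx @ U[:, 0], Wy @ Vt[0], s[0]   # directions a, b and top correlation
```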

ei

PDF Web [BibTex]

Text Clustering with Mixture of von Mises-Fisher Distributions

Sra, S., Banerjee, A., Ghosh, J., Dhillon, I.

In Text mining: classification, clustering, and applications, pages: 121-161, Chapman & Hall/CRC data mining and knowledge discovery series, (Editors: Srivastava, A. N. and Sahami, M.), CRC Press, Boca Raton, FL, USA, June 2009 (inbook)

ei

Web DOI [BibTex]

Semi-supervised subspace analysis of human functional magnetic resonance imaging data

Shelton, J., Blaschko, M., Bartels, A.

(185), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, May 2009 (techreport)

Abstract
Kernel Canonical Correlation Analysis is a very general technique for subspace learning that incorporates PCA and LDA as special cases. Functional magnetic resonance imaging (fMRI) acquired data is naturally amenable to these techniques as data are well aligned. fMRI data of the human brain is a particularly interesting candidate. In this study we implemented various supervised and semi-supervised versions of KCCA on human fMRI data, with regression to single- and multi-variate labels (corresponding to video content subjects viewed during the image acquisition). In each variate condition, the semi-supervised variants of KCCA performed better than the supervised variants, including a supervised variant with Laplacian regularization. We additionally analyze the weights learned by the regression in order to infer brain regions that are important to different types of visual processing.

ei

PDF [BibTex]

Data Mining for Biologists

Tsuda, K.

In Biological Data Mining in Protein Interaction Networks, pages: 14-27, (Editors: Li, X. and Ng, S.-K.), Medical Information Science Reference, Hershey, PA, USA, May 2009 (inbook)

Abstract
In this tutorial chapter, we review basics about frequent pattern mining algorithms, including itemset mining, association rule mining and graph mining. These algorithms can find frequently appearing substructures in discrete data. They can discover structural motifs, for example, from mutation data, protein structures and chemical compounds. As they have been primarily used for business data, biological applications are not so common yet, but their potential impact would be large. Recent advances in computers including multicore machines and ever increasing memory capacity support the application of such methods to larger datasets. We explain technical aspects of the algorithms, but do not go into details. Current biological applications are summarized and possible future directions are given.
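
As a concrete, self-contained illustration of the simplest task the chapter covers (frequent itemset mining), here is a brute-force miner over toy transactions; the toy data and the absence of Apriori-style candidate pruning are our simplifications, not the chapter's algorithms.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    # Enumerate all itemsets whose support (number of containing transactions)
    # reaches min_support. Real miners prune candidates instead of brute force.
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    for size in range(1, len(items) + 1):
        found_any = False
        for cand in combinations(items, size):
            support = sum(set(cand) <= t for t in transactions)
            if support >= min_support:
                frequent[cand] = support
                found_any = True
        if not found_any:        # anti-monotonicity: no larger frequent itemsets
            break
    return frequent

print(frequent_itemsets([{"a", "b"}, {"a", "c"}, {"a", "b", "c"}], min_support=2))
```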

ei

Web [BibTex]

Optimization of k-Space Trajectories by Bayesian Experimental Design

Seeger, M., Nickisch, H., Pohmann, R., Schölkopf, B.

17(2627), 17th Annual Meeting of the International Society for Magnetic Resonance in Medicine (ISMRM), April 2009 (poster)

Abstract
MR image reconstruction from undersampled k-space can be improved by nonlinear denoising estimators since they incorporate statistical prior knowledge about image sparsity. Reconstruction quality depends crucially on the undersampling design (k-space trajectory), in a manner complicated by the nonlinear and signal-dependent characteristics of these methods. We propose an algorithm to assess and optimize k-space trajectories for sparse MRI reconstruction, based on Bayesian experimental design, which is scaled up to full MR images by a novel variational relaxation to iteratively reweighted FFT or gridding computations. Designs are built sequentially by adding phase encodes predicted to be most informative, given the combination of previous measurements with image prior information.

ei

PDF Web [BibTex]

MR-Based Attenuation Correction for PET/MR

Hofmann, M., Steinke, F., Bezrukov, I., Kolb, A., Aschoff, P., Lichy, M., Erb, M., Nägele, T., Brady, M., Schölkopf, B., Pichler, B.

17(260), 17th Annual Meeting of the International Society for Magnetic Resonance in Medicine (ISMRM), April 2009 (poster)

Abstract
There has recently been a growing interest in combining PET and MR. Attenuation correction (AC), which accounts for radiation attenuation properties of the tissue, is mandatory for quantitative PET. In the case of PET/MR the attenuation map needs to be determined from the MR image. This is intrinsically difficult as MR intensities are not related to the electron density information of the attenuation map. Using ultra-short echo (UTE) acquisition, atlas registration and machine learning, we present methods that allow prediction of the attenuation map based on the MR image both for brain and whole body imaging.

ei

PDF Web [BibTex]

Large Margin Methods for Part of Speech Tagging

Altun, Y.

In Automatic Speech and Speaker Recognition: Large Margin and Kernel Methods, pages: 141-160, (Editors: Keshet, J. and Bengio, S.), Wiley, Hoboken, NJ, USA, January 2009 (inbook)

ei

Web [BibTex]

Covariate shift and local learning by distribution matching

Gretton, A., Smola, A., Huang, J., Schmittfull, M., Borgwardt, K., Schölkopf, B.

In Dataset Shift in Machine Learning, pages: 131-160, (Editors: Quiñonero-Candela, J., Sugiyama, M., Schwaighofer, A. and Lawrence, N. D.), MIT Press, Cambridge, MA, USA, 2009 (inbook)

Abstract
Given sets of observations of training and test data, we consider the problem of re-weighting the training data such that its distribution more closely matches that of the test data. We achieve this goal by matching covariate distributions between training and test sets in a high dimensional feature space (specifically, a reproducing kernel Hilbert space). This approach does not require distribution estimation. Instead, the sample weights are obtained by a simple quadratic programming procedure. We provide a uniform convergence bound on the distance between the reweighted training feature mean and the test feature mean, a transductive bound on the expected loss of an algorithm trained on the reweighted data, and a connection to single class SVMs. While our method is designed to deal with the case of simple covariate shift (in the sense of Chapter ??), we have also found benefits for sample selection bias on the labels. Our correction procedure yields its greatest and most consistent advantages when the learning algorithm returns a classifier/regressor that is "simpler" than the data might suggest.
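
A rough sketch of the reweighting idea with our own simplifications: the chapter obtains the weights via a constrained quadratic program; here, purely for illustration, we match the weighted training mean to the test mean in an RBF feature space by a regularized linear solve followed by clipping. The kernel width and function names are placeholders.

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def covariate_shift_weights(X_train, X_test, sigma=1.0, reg=1e-3):
    # Choose weights w so that the weighted training mean in feature space
    # approximates the test mean (kernel mean matching, simplified).
    n, m = len(X_train), len(X_test)
    K = rbf(X_train, X_train, sigma) + reg * np.eye(n)
    kappa = (float(n) / m) * rbf(X_train, X_test, sigma).sum(axis=1)
    w = np.linalg.solve(K, kappa)
    return np.clip(w, 0.0, None)   # crude stand-in for the QP's constraints
```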

ei

PDF Web [BibTex]

The SL simulation and real-time control software package

Schaal, S.

University of Southern California, Los Angeles, CA, 2009, clmc (techreport)

Abstract
SL was originally developed as a Simulation Laboratory software package to allow creating complex rigid-body dynamics simulations with minimal development times. It was meant to complement a real-time robotics setup such that robot programs could first be debugged in simulation before trying them on the actual robot. For this purpose, the motor control setup of SL was copied from our experience with real-time robot setups with vxWorks (Windriver Systems, Inc.); indeed, more than 90% of the code is identical to the actual robot software, as will be explained later in detail. As a result, SL is divided into three software components: 1) the generic code that is shared by the actual robot and the simulation, 2) the robot specific code, and 3) the simulation specific code. The robot specific code is tailored to the robotic environments that we have experienced over the years, in particular towards VME-based multi-processor real-time operating systems. The simulation specific code has all the components for OpenGL graphics simulations and mimics the robot multi-processor environment in simple C-code. Importantly, SL can be used stand-alone for creating graphics animations; the heritage from real-time robotics does not restrict the complexity of possible simulations. This technical report describes SL in detail and can serve as a manual for new users of SL.

am

link (url) [BibTex]

Metal-Organic Frameworks

Panella, B., Hirscher, M.

In Encyclopedia of Electrochemical Power Sources, pages: 493-496, Elsevier, Amsterdam [et al.], 2009 (incollection)

mms

[BibTex]

Biologically Inspired Polymer Microfibrillar Arrays for Mask Sealing

Cheung, E., Aksak, B., Sitti, M.

Carnegie Mellon University, Pittsburgh, PA, 2009 (techreport)

pi

[BibTex]

Carbon Materials

Hirscher, M.

In Encyclopedia of Electrochemical Power Sources, pages: 484-487, Elsevier, Amsterdam [et al.], 2009 (incollection)

mms

[BibTex]


2008


Frequent Subgraph Retrieval in Geometric Graph Databases

Nowozin, S., Tsuda, K.

(180), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2008 (techreport)

Abstract
Discovery of knowledge from geometric graph databases is of particular importance in chemistry and biology, because chemical compounds and proteins are represented as graphs with 3D geometric coordinates. In such applications, scientists are not interested in the statistics of the whole database. Instead they need information about a novel drug candidate or protein at hand, represented as a query graph. We propose a polynomial-delay algorithm for geometric frequent subgraph retrieval. It enumerates all subgraphs of a single given query graph which are frequent geometric epsilon-subgraphs under the entire class of rigid geometric transformations in a database. By using geometric epsilon-subgraphs, we achieve tolerance against variations in geometry. We compare the proposed algorithm to gSpan on chemical compound data, and we show that for a given minimum support the total number of frequent patterns is substantially limited by requiring geometric matching. Although the computation time per pattern is larger than for non-geometric graph mining, the total time is within a reasonable level even for small minimum support.

ei

PDF [BibTex]

Variational Bayesian Model Selection in Linear Gaussian State-Space based Models

Chiappa, S.

International Workshop on Flexible Modelling: Smoothing and Robustness (FMSR 2008), 2008, pages: 1, November 2008 (poster)

ei

Web [BibTex]

Simultaneous Implicit Surface Reconstruction and Meshing

Giesen, J., Maier, M., Schölkopf, B.

(179), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2008 (techreport)

Abstract
We investigate an implicit method to compute a piecewise linear representation of a surface from a set of sample points. As implicit surface functions we use the weighted sum of piecewise linear kernel functions. For such a function we can partition R^d in such a way that these functions are linear on the subsets of the partition. For each subset in the partition we can then compute the zero level set of the function exactly as the intersection of a hyperplane with the subset.

ei

PDF [BibTex]

Taxonomy Inference Using Kernel Dependence Measures

Blaschko, M., Gretton, A.

(181), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2008 (techreport)

Abstract
We introduce a family of unsupervised algorithms, numerical taxonomy clustering, to simultaneously cluster data, and to learn a taxonomy that encodes the relationship between the clusters. The algorithms work by maximizing the dependence between the taxonomy and the original data. The resulting taxonomy is a more informative visualization of complex data than simple clustering; in addition, taking into account the relations between different clusters is shown to substantially improve the quality of the clustering, when compared with state-of-the-art algorithms in the literature (both spectral clustering and a previous dependence maximization approach). We demonstrate our algorithm on image and text data.
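
The dependence measure being maximized is kernel-based; a common choice in this line of work is HSIC, and the following is a small sketch of its biased empirical estimator under that assumption (the report itself may use a different estimator or normalization).

```python
import numpy as np

def hsic_biased(K, L):
    # Biased empirical HSIC between two samples, given their kernel matrices
    # K and L (both n x n): trace(K H L H) / (n - 1)^2, H the centering matrix.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```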

ei

PDF [BibTex]

Large Scale Variational Inference and Experimental Design for Sparse Generalized Linear Models

Seeger, M., Nickisch, H.

(175), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, September 2008 (techreport)

ei

PDF [BibTex]

Block-Iterative Algorithms for Non-Negative Matrix Approximation

Sra, S.

(176), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, September 2008 (techreport)

Abstract
In this report we present new algorithms for non-negative matrix approximation (NMA), commonly known as the NMF problem. Our methods improve upon the well-known methods of Lee & Seung [19] for both the Frobenius norm as well as the Kullback-Leibler divergence versions of the problem. For the latter problem, our results are especially interesting because it seems to have witnessed much less algorithmic progress as compared to the Frobenius norm NMA problem. Our algorithms are based on a particular block-iterative acceleration technique for EM, which preserves the multiplicative nature of the updates and also ensures monotonicity. Furthermore, our algorithms also naturally apply to the Bregman-divergence NMA algorithms of Dhillon and Sra [8]. Experimentally, we show that our algorithms outperform the traditional Lee/Seung approach most of the time.
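
For orientation, here is a sketch of the baseline Lee & Seung multiplicative updates for the Frobenius-norm NMA problem that the report improves upon; the initialization, iteration count and the small epsilon are our choices, not the report's.

```python
import numpy as np

def nmf_lee_seung(V, rank, iters=200, eps=1e-9, seed=0):
    # Multiplicative updates minimizing ||V - W H||_F with W, H >= 0.
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank))
    H = rng.random((rank, V.shape[1]))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H, staying non-negative
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W, staying non-negative
    return W, H
```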

ei

PDF [BibTex]
