

2020


A gamified app that helps people overcome self-limiting beliefs by promoting metacognition

Amo, V., Lieder, F.

pages: 6, SIG 8 Meets SIG 16, September 2020 (conference)

Abstract
Previous research has shown that approaching learning with a growth mindset is key for maintaining motivation and overcoming setbacks. Mindsets are systems of beliefs that people hold to be true. They influence a person's attitudes, thoughts, and emotions when they learn something new or encounter challenges. In clinical psychology, metareasoning (reflecting on one's mental processes) and meta-awareness (recognizing thoughts as mental events instead of equating them to reality) have proven effective for overcoming maladaptive thinking styles. Hence, they are potentially an effective method for overcoming self-limiting beliefs in other domains as well. However, the potential of integrating assisted metacognition into mindset interventions has not been explored yet. Here, we propose that guiding and training people on how to leverage metareasoning and meta-awareness for overcoming self-limiting beliefs can significantly enhance the effectiveness of mindset interventions. To test this hypothesis, we develop a gamified mobile application that guides and trains people to use metacognitive strategies based on Cognitive Restructuring (CR) and Acceptance and Commitment Therapy (ACT) techniques. The application helps users to identify and overcome self-limiting beliefs by working with aversive emotions when they are triggered by fixed mindsets in real-life situations. Our app aims to help people sustain their motivation to learn when they face inner obstacles (e.g. anxiety, frustration, and demotivation). We expect the application to be an effective tool for helping people better understand and develop the metacognitive skills of emotion regulation and self-regulation that are needed to overcome self-limiting beliefs and develop growth mindsets.

re

[BibTex]


Learning to Dress 3D People in Generative Clothing

Ma, Q., Yang, J., Ranjan, A., Pujades, S., Pons-Moll, G., Tang, S., Black, M. J.

In Computer Vision and Pattern Recognition (CVPR), June 2020 (inproceedings)

Abstract
Three-dimensional human body models are widely used in the analysis of human pose and motion. Existing models, however, are learned from minimally-clothed 3D scans and thus do not generalize to the complexity of dressed people in common images and videos. Additionally, current models lack the expressive power needed to represent the complex non-linear geometry of pose-dependent clothing shape. To address this, we learn a generative 3D mesh model of clothed people from 3D scans with varying pose and clothing. Specifically, we train a conditional Mesh-VAE-GAN to learn the clothing deformation from the SMPL body model, making clothing an additional term on SMPL. Our model is conditioned on both pose and clothing type, giving the ability to draw samples of clothing to dress different body shapes in a variety of styles and poses. To preserve wrinkle detail, our Mesh-VAE-GAN extends patchwise discriminators to 3D meshes. Our model, named CAPE, represents global shape and fine local structure, effectively extending the SMPL body model to clothing. To our knowledge, this is the first generative model that directly dresses 3D human body meshes and generalizes to different poses.
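
The abstract's central modeling move — treating clothing as a pose- and clothing-type-conditioned displacement added on top of the minimally-clothed SMPL body — can be sketched compactly. The sketch below is illustrative only: the toy linear decoder, the latent and conditioning sizes, and the SMPL stand-in are our assumptions, not the released CAPE code.

```python
import numpy as np

N_VERTS = 6890           # SMPL's actual vertex count
Z_DIM, COND_DIM = 64, 8  # hypothetical latent and conditioning sizes

rng = np.random.default_rng(0)
W = rng.normal(0, 0.01, size=(Z_DIM + COND_DIM, N_VERTS * 3))  # toy linear decoder

def smpl_body(beta, theta):
    """Stand-in for the SMPL body model: returns minimally-clothed vertices."""
    return rng.normal(0, 1.0, size=(N_VERTS, 3))  # placeholder geometry

def displacement_decoder(z, pose_feat, clothing_onehot):
    """Toy decoder: maps a clothing latent + conditioning to per-vertex offsets."""
    inp = np.concatenate([z, pose_feat, clothing_onehot])
    return (inp @ W).reshape(N_VERTS, 3)

# Clothed mesh = body mesh + sampled clothing displacement ("an additional term on SMPL").
beta, theta = np.zeros(10), np.zeros(72)
z = rng.standard_normal(Z_DIM)    # sample a clothing style from the latent space
pose_feat = theta[:4]             # toy pose conditioning
clothing = np.eye(4)[1]           # e.g. a "t-shirt" one-hot
verts_clothed = smpl_body(beta, theta) + displacement_decoder(z, pose_feat, clothing)
print(verts_clothed.shape)        # (6890, 3)
```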

ps

arxiv project page [BibTex]


Generating 3D People in Scenes without People

Zhang, Y., Hassan, M., Neumann, H., Black, M. J., Tang, S.

In Computer Vision and Pattern Recognition (CVPR), June 2020 (inproceedings)

Abstract
We present a fully-automatic system that takes a 3D scene and generates plausible 3D human bodies that are posed naturally in that 3D scene. Given a 3D scene without people, humans can easily imagine how people could interact with the scene and the objects in it. However, this is a challenging task for a computer, as solving it requires that (1) the generated human bodies be semantically plausible within the 3D environment, e.g. people sitting on the sofa or cooking near the stove; and (2) the generated human-scene interaction be physically feasible, such that the human body and scene do not interpenetrate while, at the same time, body-scene contact supports physical interactions. To that end, we make use of the surface-based 3D human model SMPL-X. We first train a conditional variational autoencoder to predict semantically plausible 3D human poses conditioned on latent scene representations, then we further refine the generated 3D bodies using scene constraints to enforce feasible physical interaction. We show that our approach is able to synthesize realistic and expressive 3D human bodies that naturally interact with the 3D environment. We perform extensive experiments demonstrating that our generative framework compares favorably with existing methods, both qualitatively and quantitatively. We believe that our scene-conditioned 3D human generation pipeline will be useful for numerous applications, e.g. to generate training data for human pose estimation, in video games and in VR/AR.
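
A minimal sketch of the second stage — refining generated bodies with scene constraints so that penetration is removed while contact is encouraged. Everything here (the plane-SDF scene, the point-cloud "body", the loss weights) is a hypothetical stand-in; the paper optimizes SMPL-X parameters against real scene geometry.

```python
import numpy as np

def scene_sdf(points):
    """Toy scene: signed distance to a floor plane at z = 0 (negative = inside)."""
    return points[:, 2]

def refine(body_verts, steps=200, lr=1e-2):
    """Gradient steps on a penetration penalty plus a contact-encouraging term."""
    v = body_verts.copy()
    for _ in range(steps):
        d = scene_sdf(v)
        pen = np.minimum(d, 0.0)            # penetration depth per vertex
        grad = np.zeros_like(v)
        grad[:, 2] = 2 * pen                # d/dv of sum(pen^2) for the plane SDF
        contact = max(d.min(), 0.0)         # gap between body and scene
        grad[:, 2] += 2 * contact / len(v)  # pull the body down toward contact
        v -= lr * grad
    return v

body = np.random.default_rng(1).normal(0, 0.3, size=(100, 3))  # toy "body" points
refined = refine(body)
print(refined[:, 2].min())  # ~0: touching the floor, not interpenetrating it
```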

ps

PDF link (url) [BibTex]



Where Does It End? - Reasoning About Hidden Surfaces by Object Intersection Constraints

Strecke, M., Stückler, J.

In Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020 (inproceedings) Accepted

ev

[BibTex]



A Continuous-time Perspective for Modeling Acceleration in Riemannian Optimization

Alimisis, F., Orvieto, A., Becigneul, G., Lucchi, A.

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), June 2020 (conference) Accepted

ei

[BibTex]



A Kernel Mean Embedding Approach to Reducing Conservativeness in Stochastic Programming and Control

Zhu, J., Diehl, M., Schölkopf, B.

2nd Annual Conference on Learning for Dynamics and Control (L4DC), June 2020 (conference) Accepted

ei

arXiv [BibTex]



Learning Physics-guided Face Relighting under Directional Light

Nestmeyer, T., Lalonde, J., Matthews, I., Lehrmann, A. M.

In Conference on Computer Vision and Pattern Recognition, IEEE/CVF, June 2020 (inproceedings) Accepted

Abstract
Relighting is an essential step in realistically transferring objects from a captured image into another environment. For example, authentic telepresence in Augmented Reality requires faces to be displayed and relit consistent with the observer's scene lighting. We investigate end-to-end deep learning architectures that both de-light and relight an image of a human face. Our model decomposes the input image into intrinsic components according to a diffuse physics-based image formation model. We enable non-diffuse effects including cast shadows and specular highlights by predicting a residual correction to the diffuse render. To train and evaluate our model, we collected a portrait database of 21 subjects with various expressions and poses. Each sample is captured in a controlled light stage setup with 32 individual light sources. Our method creates precise and believable relighting results and generalizes to complex illumination conditions and challenging poses, including when the subject is not looking straight at the camera.
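
Read compactly, the described pipeline decomposes the input into intrinsic components under a diffuse image-formation model and adds a learned residual for non-diffuse effects; in our notation (not the paper's):

```latex
% Diffuse physics-based image formation plus a learned non-diffuse residual
% (notation ours): albedo A, shading S from normals N and light L.
I \;=\; \underbrace{A \odot S_{\text{diffuse}}(N, L)}_{\text{diffuse render}}
   \;+\; \underbrace{R_{\text{residual}}}_{\text{cast shadows, speculars}}
```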

ps

Paper [BibTex]



VIBE: Video Inference for Human Body Pose and Shape Estimation

Kocabas, M., Athanasiou, N., Black, M. J.

In Computer Vision and Pattern Recognition (CVPR), June 2020 (inproceedings)

Abstract
Human motion is fundamental to understanding behavior. Despite progress on single-image 3D pose and shape estimation, existing video-based state-of-the-art methods fail to produce accurate and natural motion sequences due to a lack of ground-truth 3D motion data for training. To address this problem, we propose “Video Inference for Body Pose and Shape Estimation” (VIBE), which makes use of an existing large-scale motion capture dataset (AMASS) together with unpaired, in-the-wild, 2D keypoint annotations. Our key novelty is an adversarial learning framework that leverages AMASS to discriminate between real human motions and those produced by our temporal pose and shape regression networks. We define a temporal network architecture and show that adversarial training, at the sequence level, produces kinematically plausible motion sequences without in-the-wild ground-truth 3D labels. We perform extensive experimentation to analyze the importance of motion and demonstrate the effectiveness of VIBE on challenging 3D pose estimation datasets, achieving state-of-the-art performance. Code and pretrained models are available at https://github.com/mkocabas/VIBE.
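
The adversarial ingredient can be written as a sequence-level discriminator D trained to separate AMASS motions Θ from regressed motions Θ̂, with the regressor rewarded for fooling it. A least-squares form is shown below as one standard choice; the exact objective used in the paper is not quoted here.

```latex
% Sequence-level adversarial objective (least-squares form; one standard choice):
\mathcal{L}_{D} \;=\; \mathbb{E}_{\Theta \sim p_{\text{AMASS}}}\big[(D(\Theta)-1)^{2}\big]
  \;+\; \mathbb{E}_{\hat{\Theta} \sim p_{G}}\big[D(\hat{\Theta})^{2}\big],
\qquad
\mathcal{L}_{\text{adv}} \;=\; \mathbb{E}_{\hat{\Theta} \sim p_{G}}\big[(D(\hat{\Theta})-1)^{2}\big]
```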

ps

arXiv code [BibTex]



Mixed-curvature Variational Autoencoders

Skopek, O., Ganea, O., Becigneul, G.

8th International Conference on Learning Representations (ICLR), April 2020 (conference) Accepted

ei

link (url) [BibTex]



From Variational to Deterministic Autoencoders

Ghosh*, P., Sajjadi*, M. S. M., Vergari, A., Black, M. J., Schölkopf, B.

8th International Conference on Learning Representations (ICLR), April 2020, *equal contribution (conference) Accepted

Abstract
Variational Autoencoders (VAEs) provide a theoretically-backed framework for deep generative models. However, they often produce “blurry” images, which is linked to their training objective. Sampling in the most popular implementation, the Gaussian VAE, can be interpreted as simply injecting noise to the input of a deterministic decoder. In practice, this simply enforces a smooth latent space structure. We challenge the adoption of the full VAE framework on this specific point in favor of a simpler, deterministic one. Specifically, we investigate how substituting stochasticity with other explicit and implicit regularization schemes can lead to a meaningful latent space without having to force it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism for sampling new data points, we propose to employ an efficient ex-post density estimation step that can be readily adopted both for the proposed deterministic autoencoders as well as to improve sample quality of existing VAEs. We show in a rigorous empirical study that regularized deterministic autoencoding achieves state-of-the-art sample quality on the common MNIST, CIFAR-10 and CelebA datasets.
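
The ex-post density estimation step is easy to demonstrate end to end: encode the training data with a deterministic autoencoder, fit a density model (e.g. a mixture of Gaussians) over the latent codes, then sample from it and decode. The sketch below substitutes PCA for the trained regularized autoencoder so it runs without a neural network; that substitution, and all the sizes, are ours.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# Stand-in for a trained deterministic (regularized) autoencoder: PCA gives us a
# deterministic encoder/decoder pair so the sketch runs without a neural net.
X = np.random.default_rng(0).normal(size=(1000, 20))   # toy "training data"
ae = PCA(n_components=5).fit(X)
Z = ae.transform(X)                                    # deterministic latent codes

# Ex-post density estimation: fit a density model over the latents...
density = GaussianMixture(n_components=10, random_state=0).fit(Z)

# ...then sample from it and decode to generate new data points.
z_new, _ = density.sample(16)
X_new = ae.inverse_transform(z_new)
print(X_new.shape)  # (16, 20)
```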

ei ps

arXiv [BibTex]



Chained Representation Cycling: Learning to Estimate 3D Human Pose and Shape by Cycling Between Representations

Rueegg, N., Lassner, C., Black, M. J., Schindler, K.

In Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), February 2020 (inproceedings)

Abstract
The goal of many computer vision systems is to transform image pixels into 3D representations. Recent popular models use neural networks to regress directly from pixels to 3D object parameters. Such an approach works well when supervision is available, but in problems like human pose and shape estimation, it is difficult to obtain natural images with 3D ground truth. To go one step further, we propose a new architecture that facilitates unsupervised, or lightly supervised, learning. The idea is to break the problem into a series of transformations between increasingly abstract representations. Each step involves a cycle designed to be learnable without annotated training data, and the chain of cycles delivers the final solution. Specifically, we use 2D body part segments as an intermediate representation that contains enough information to be lifted to 3D, and at the same time is simple enough to be learned in an unsupervised way. We demonstrate the method by learning 3D human pose and shape from un-paired and un-annotated images. We also explore varying amounts of paired data and show that cycling greatly alleviates the need for paired data. While we present results for modeling humans, our formulation is general and can be applied to other vision problems.

ps

pdf [BibTex]



More Powerful Selective Kernel Tests for Feature Selection

Lim, J. N., Yamada, M., Jitkrittum, W., Terada, Y., Matsui, S., Shimodaira, H.

Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 2020 (conference) To be published

ei

arXiv [BibTex]



Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image

Paschalidou, D., Gool, L., Geiger, A.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Humans perceive the 3D world as a set of distinct objects that are characterized by various low-level (geometry, reflectance) and high-level (connectivity, adjacency, symmetry) properties. Recent methods based on convolutional neural networks (CNNs) have demonstrated impressive progress in 3D reconstruction, even when using a single 2D image as input. However, the majority of these methods focus on recovering the local 3D geometry of an object without considering its part-based decomposition or relations between parts. We address this challenging problem by proposing a novel formulation that allows us to jointly recover the geometry of a 3D object as a set of primitives as well as their latent hierarchical structure without part-level supervision. Our model recovers the higher-level structural decomposition of various objects in the form of a binary tree of primitives, where simple parts are represented with fewer primitives and more complex parts are modeled with more components. Our experiments on the ShapeNet and D-FAUST datasets demonstrate that considering the organization of parts indeed facilitates reasoning about 3D geometry.

avg

pdf suppmat Video Project Page [BibTex]



ACTrain: Ein KI-basiertes Aufmerksamkeitstraining für die Wissensarbeit [ACTrain: An AI-based attention training for knowledge work]

Wirzberger, M., Oreshnikov, I., Passy, J., Lado, A., Shenhav, A., Lieder, F.

66th Spring Conference of the German Ergonomics Society, 2020 (conference)

Abstract
Our digital age thrives on information and thus puts our limited processing capacity to the test every day. In knowledge work in particular, constant distractions cause considerable losses in performance. Our intelligent application ACTrain tackles exactly this problem and turns computer work into a training ground for the mind. Feedback based on machine learning methods vividly demonstrates the value of not letting oneself be distracted from a self-chosen task. This metacognitive insight is intended to motivate perseverance and to strengthen the underlying skill of attention control. In ongoing field experiments, we investigate whether training with this optimal feedback can improve attention and self-control skills compared to a control group that receives no feedback.

re sf

link (url) [BibTex]


Computationally Tractable Riemannian Manifolds for Graph Embeddings

Cruceru, C., Becigneul, G., Ganea, O.

37th International Conference on Machine Learning (ICML), 2020 (conference) Submitted

ei

[BibTex]



Interaction of hydrogen isotopes with flexible metal-organic frameworks

Bondorf, L.

Universität Stuttgart, Stuttgart, 2020 (mastersthesis)

mms

[BibTex]



Towards Unsupervised Learning of Generative Models for 3D Controllable Image Synthesis

Liao, Y., Schwarz, K., Mescheder, L., Geiger, A.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
In recent years, Generative Adversarial Networks have achieved impressive results in photorealistic image synthesis. This progress nurtures hopes that one day the classical rendering pipeline can be replaced by efficient models that are learned directly from images. However, current image synthesis models operate in the 2D domain, where disentangling 3D properties such as camera viewpoint or object pose is challenging. Furthermore, they lack an interpretable and controllable representation. Our key hypothesis is that the image generation process should be modeled in 3D space, as the physical world surrounding us is intrinsically three-dimensional. We define the new task of 3D controllable image synthesis and propose an approach for solving it by reasoning both in 3D space and in the 2D image domain. We demonstrate that our model is able to disentangle latent 3D factors of simple multi-object scenes in an unsupervised fashion from raw images. Compared to pure 2D baselines, it allows for synthesizing scenes that are consistent with respect to changes in viewpoint or object pose. We further evaluate various 3D representations in terms of their usefulness for this challenging task.

avg

pdf suppmat Video Project Page [BibTex]



Planning from Images with Deep Latent Gaussian Process Dynamics

Bosch, N., Achterhold, J., Leal-Taixe, L., Stückler, J.

2nd Annual Conference on Learning for Dynamics and Control (L4DC) , 2020 (conference) Accepted

ev

[BibTex]



Practical Accelerated Optimization on Riemannian Manifolds

Alimisis, F., Orvieto, A., Becigneul, G., Lucchi, A.

37th International Conference on Machine Learning (ICML), 2020 (conference) Submitted

ei

[BibTex]



Exploring Data Aggregation in Policy Learning for Vision-based Urban Autonomous Driving

Prakash, A., Behl, A., Ohn-Bar, E., Chitta, K., Geiger, A.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Data aggregation techniques can significantly improve vision-based policy learning within a training environment, e.g., learning to drive in a specific simulation condition. However, as on-policy data is sequentially sampled and added in an iterative manner, the policy can specialize and overfit to the training conditions. For real-world applications, it is useful for the learned policy to generalize to novel scenarios that differ from the training conditions. To improve policy learning while maintaining robustness when training end-to-end driving policies, we perform an extensive analysis of data aggregation techniques in the CARLA environment. We demonstrate how the majority of them have poor generalization performance, and develop a novel approach with empirically better generalization performance compared to existing techniques. Our two key ideas are (1) to sample critical states from the collected on-policy data based on the utility they provide to the learned policy in terms of driving behavior, and (2) to incorporate a replay buffer which progressively focuses on the high uncertainty regions of the policy's state distribution. We evaluate the proposed approach on the CARLA NoCrash benchmark, focusing on the most challenging driving scenarios with dense pedestrian and vehicle traffic. Our approach improves driving success rate by 16% over state-of-the-art, achieving 87% of the expert performance while also reducing the collision rate by an order of magnitude without the use of any additional modality, auxiliary tasks, architectural modifications or reward from the environment.
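
The two key ideas — aggregating only critical on-policy states, ranked by their utility to the learner — can be illustrated with a toy one-dimensional "driving" task and a DAgger-style loop. The dynamics, the expert, and the 25% retention ratio below are arbitrary stand-ins, not the paper's CARLA setup.

```python
import numpy as np
rng = np.random.default_rng(0)

def expert_action(state):       # hypothetical expert (e.g. a simulator autopilot)
    return -state

def rollout(policy_w, n=200):   # toy 1-D driving dynamics under the learner's policy
    s, traj = 0.0, []
    for _ in range(n):
        a = policy_w * s + rng.normal(0, 0.1)
        traj.append((s, expert_action(s), abs(a - expert_action(s))))
        s = 0.9 * s + a + rng.normal(0, 0.05)
    return traj

def fit(dataset):               # least-squares fit of the linear policy to expert labels
    S = np.array([d[0] for d in dataset]); A = np.array([d[1] for d in dataset])
    return float(S @ A / (S @ S + 1e-8))

# DAgger-style aggregation, but keeping only "critical" on-policy states:
# those where the learner's action deviates most from the expert's.
data, w = [(1.0, -1.0, 0.0)], 0.0
for it in range(5):
    traj = rollout(w)
    traj.sort(key=lambda d: -d[2])       # rank states by their utility to the learner
    data += traj[: len(traj) // 4]       # aggregate only the high-error quarter
    w = fit(data)
print(w)  # approaches the expert gain of -1
```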

avg

pdf suppmat Video Project Page [BibTex]



Electronics, Software and Analysis of a Bioinspired Sensorized Quadrupedal Robot

Petereit, R.

Technische Universität München, 2020 (mastersthesis)

dlg

[BibTex]


Constant Curvature Graph Convolutional Networks

Bachmann*, G., Becigneul*, G., Ganea, O.

37th International Conference on Machine Learning (ICML), 2020, *equal contribution (conference) Submitted

ei

[BibTex]



Learning Situational Driving

Ohn-Bar, E., Prakash, A., Behl, A., Chitta, K., Geiger, A.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Human drivers have a remarkable ability to drive in diverse visual conditions and situations, e.g., from maneuvering in rainy, limited visibility conditions with no lane markings to turning in a busy intersection while yielding to pedestrians. In contrast, we find that state-of-the-art sensorimotor driving models struggle when encountering diverse settings with varying relationships between observation and action. To generalize when making decisions across diverse conditions, humans leverage multiple types of situation-specific reasoning and learning strategies. Motivated by this observation, we develop a framework for learning a situational driving policy that effectively captures reasoning under varying types of scenarios. Our key idea is to learn a mixture model with a set of policies that can capture multiple driving modes. We first optimize the mixture model through behavior cloning, and show it to result in significant gains in terms of driving performance in diverse conditions. We then refine the model by directly optimizing for the driving task itself, i.e., supervised with the navigation task reward. Our method is more scalable than methods assuming access to privileged information, e.g., perception labels, as it only assumes demonstration and reward-based supervision. We achieve over 98% success rate on the CARLA driving benchmark as well as state-of-the-art performance on a newly introduced generalization benchmark.

avg

pdf suppmat Video Project Page [BibTex]



On Joint Estimation of Pose, Geometry and svBRDF from a Handheld Scanner

Schmitt, C., Donne, S., Riegler, G., Koltun, V., Geiger, A.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
We propose a novel formulation for joint recovery of camera pose, object geometry and spatially-varying BRDF. The input to our approach is a sequence of RGB-D images captured by a mobile, hand-held scanner that actively illuminates the scene with point light sources. Compared to previous works that jointly estimate geometry and materials from a hand-held scanner, we formulate this problem using a single objective function that can be minimized using off-the-shelf gradient-based solvers. By integrating material clustering as a differentiable operation into the optimization process, we avoid pre-processing heuristics and demonstrate that our model is able to determine the correct number of specular materials independently. We provide a study on the importance of each component in our formulation and on the requirements of the initial geometry. We show that optimizing over the poses is crucial for accurately recovering fine details and that our approach naturally results in a semantically meaningful material segmentation.
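
The formulation's selling point is that pose, geometry, and material unknowns sit in a single objective that an off-the-shelf gradient-based solver can minimize. Below is a toy instance of that pattern only: the made-up unknowns (a scalar "height" plus per-image pose offsets) stand in for the paper's pose/geometry/BRDF parameters.

```python
import numpy as np
from scipy.optimize import minimize

# Toy joint recovery: estimate a shared quantity h and per-image pose offsets p_k
# from noisy per-image observations, posed as one energy for an off-the-shelf solver.
rng = np.random.default_rng(0)
true_h, true_p = 1.5, rng.normal(0, 0.1, size=4)
obs = true_h + true_p[:, None] + rng.normal(0, 0.01, size=(4, 50))

def energy(x):
    h, p = x[0], x[1:]
    r = obs - (h + p[:, None])                       # residuals of the joint model
    return (r ** 2).sum() + 1e-3 * (p ** 2).sum()    # data term + pose prior

res = minimize(energy, x0=np.zeros(5), method="L-BFGS-B")
print(res.x[0])   # ~1.5, up to the gauge fixed by the small pose prior
```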

avg

pdf Project Page [BibTex]



DirectShape: Photometric Alignment of Shape Priors for Visual Vehicle Pose and Shape Estimation

Wang, R., Yang, N., Stückler, J., Cremers, D.

In IEEE International Conference on Robotics and Automation (ICRA), 2020, arXiv:1904.10097 (inproceedings) Accepted

ev

[BibTex]



Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision

Niemeyer, M., Mescheder, L., Oechsle, M., Geiger, A.

In Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Learning-based 3D reconstruction methods have shown impressive results. However, most methods require 3D supervision which is often hard to obtain for real-world datasets. Recently, several works have proposed differentiable rendering techniques to train reconstruction models from RGB images. Unfortunately, these approaches are currently restricted to voxel- and mesh-based representations, suffering from discretization or low resolution. In this work, we propose a differentiable rendering formulation for implicit shape and texture representations. Implicit representations have recently gained popularity as they represent shape and texture continuously. Our key insight is that depth gradients can be derived analytically using the concept of implicit differentiation. This allows us to learn implicit shape and texture representations directly from RGB images. We experimentally show that our single-view reconstructions rival those learned with full 3D supervision. Moreover, we find that our method can be used for multi-view 3D reconstruction, directly resulting in watertight meshes.
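
The key insight — analytic depth gradients via implicit differentiation — can be stated in one line. With a ray r(d) = o + d·w hitting the surface where the implicit network f_θ crosses a level τ (notation ours):

```latex
% Depth d along the ray is defined implicitly by the level set f_theta(r(d)) = tau.
f_\theta(o + d\,w) = \tau
\quad\Longrightarrow\quad
\frac{\partial f_\theta}{\partial \theta}
  + \left(\nabla_p f_\theta \cdot w\right) \frac{\partial d}{\partial \theta} = 0
\quad\Longrightarrow\quad
\frac{\partial d}{\partial \theta}
  = -\left(\nabla_p f_\theta \cdot w\right)^{-1} \frac{\partial f_\theta}{\partial \theta}
```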

avg

pdf suppmat Video Project Page [BibTex]


2011


Statistical estimation for optimization problems on graphs

Langovoy, M., Sra, S.

In pages: 1-6, NIPS Workshop on Discrete Optimization in Machine Learning (DISCML): Uncertainty, Generalization and Feedback, December 2011 (inproceedings)

Abstract
Large graphs abound in machine learning, data mining, and several related areas. A useful step towards analyzing such graphs is that of obtaining certain summary statistics — e.g., the expected length of a shortest path between two nodes, or the expected weight of a minimum spanning tree of the graph, etc. These statistics provide insight into the structure of a graph, and they can help predict global properties of a graph. Motivated thus, we propose to study statistical properties of structured subgraphs (of a given graph), in particular, to estimate the expected objective function value of a combinatorial optimization problem over these subgraphs. The general task is very difficult, if not unsolvable; so for concreteness we describe a more specific statistical estimation problem based on spanning trees. We hope that our position paper encourages others to also study other types of graphical structures for which one can prove nontrivial statistical estimates.
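
As a concrete miniature of the proposed estimation task, one can Monte-Carlo estimate the expected weight of a minimum spanning tree under random edge weights. The graph size, weight distribution, and sample count below are arbitrary choices for illustration.

```python
import numpy as np
rng = np.random.default_rng(0)

def mst_weight(W):
    """Prim's algorithm on a dense symmetric weight matrix; returns total MST weight."""
    n = len(W)
    in_tree, total = {0}, 0.0
    while len(in_tree) < n:
        best, best_j = np.inf, -1
        for i in in_tree:
            for j in range(n):
                if j not in in_tree and W[i, j] < best:
                    best, best_j = W[i, j], j
        in_tree.add(best_j)
        total += best
    return total

# Monte-Carlo estimate of the expected MST weight under i.i.d. uniform edge weights --
# a toy instance of estimating the expected objective value of a combinatorial
# optimization problem over structured subgraphs.
n = 8
samples = []
for _ in range(200):
    U = rng.uniform(0, 1, size=(n, n))
    W = (U + U.T) / 2                    # symmetrize to get an undirected graph
    samples.append(mst_weight(W))
print(np.mean(samples), np.std(samples) / np.sqrt(len(samples)))  # estimate, std. error
```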

ei

PDF Web [BibTex]



On the discardability of data in Support Vector Classification problems

Del Favero, S., Varagnolo, D., Dinuzzo, F., Schenato, L., Pillonetto, G.

In pages: 3210-3215, IEEE, Piscataway, NJ, USA, 50th IEEE Conference on Decision and Control and European Control Conference (CDC - ECC), December 2011 (inproceedings)

Abstract
We analyze the problem of data sets reduction for support vector classification. The work is also motivated by distributed problems, where sensors collect binary measurements at different locations moving inside an environment that needs to be divided into a collection of regions labeled in two different ways. The scope is to let each agent retain and exchange only those measurements that are mostly informative for the collective reconstruction of the decision boundary. For the case of separable classes, we provide the exact conditions and an efficient algorithm to determine if an element in the training set can become a support vector when new data arrive. The analysis is then extended to the non-separable case deriving a sufficient discardability condition and a general data selection scheme for classification. Numerical experiments relative to the distributed problem show that the proposed procedure allows the agents to exchange a small amount of the collected data to obtain a highly predictive decision boundary.
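
The intuition behind discardability is the classical fact that retraining an SVM on its support vectors alone reproduces the original decision boundary, so non-support points need not be exchanged. A small demonstration with scikit-learn follows (the dataset and C value are arbitrary; the paper's contribution is deciding discardability online, before the final solution is known).

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # separable toy classes

full = SVC(kernel="linear", C=10.0).fit(X, y)

# Discard everything except the support vectors and retrain: the decision
# boundary is unchanged, which is the intuition behind exchanging only the
# "mostly informative" measurements between agents.
keep = full.support_
reduced = SVC(kernel="linear", C=10.0).fit(X[keep], y[keep])

print(len(keep), "of", len(X), "points kept")
print(np.allclose(full.coef_, reduced.coef_, atol=1e-4))  # True
```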

ei

PDF Web DOI [BibTex]



Information, learning and falsification

Balduzzi, D.

In pages: 1-4, NIPS Philosophy and Machine Learning Workshop, December 2011 (inproceedings)

Abstract
There are (at least) three approaches to quantifying information. The first, algorithmic information or Kolmogorov complexity, takes events as strings and, given a universal Turing machine, quantifies the information content of a string as the length of the shortest program producing it [1]. The second, Shannon information, takes events as belonging to ensembles and quantifies the information resulting from observing the given event in terms of the number of alternate events that have been ruled out [2]. The third, statistical learning theory, has introduced measures of capacity that control (in part) the expected risk of classifiers [3]. These capacities quantify the expectations regarding future data that learning algorithms embed into classifiers. Solomonoff and Hutter have applied algorithmic information to prove remarkable results on universal induction. Shannon information provides the mathematical foundation for communication and coding theory. However, both approaches have shortcomings. Algorithmic information is not computable, severely limiting its practical usefulness. Shannon information refers to ensembles rather than actual events: it makes no sense to compute the Shannon information of a single string – or rather, there are many answers to this question depending on how a related ensemble is constructed. Although there are asymptotic results linking algorithmic and Shannon information, it is unsatisfying that there is such a large gap – a difference in kind – between the two measures. This note describes a new method of quantifying information, effective information, that links algorithmic information to Shannon information, and also links both to capacities arising in statistical learning theory [4, 5]. After introducing the measure, we show that it provides a non-universal analog of Kolmogorov complexity. We then apply it to derive basic capacities in statistical learning theory: empirical VC-entropy and empirical Rademacher complexity. A nice byproduct of our approach is an interpretation of the explanatory power of a learning algorithm in terms of the number of hypotheses it falsifies [6], counted in two different ways for the two capacities. We also discuss how effective information relates to information gain, Shannon and mutual information.

ei

PDF Web [BibTex]



A general linear non-Gaussian state-space model: Identifiability, identification, and applications

Zhang, K., Hyvärinen, A.

In JMLR Workshop and Conference Proceedings Volume 20, pages: 113-128, (Editors: Hsu, C.-N., W. S. Lee), MIT Press, Cambridge, MA, USA, 3rd Asian Conference on Machine Learning (ACML), November 2011 (inproceedings)

Abstract
State-space modeling provides a powerful tool for system identification and prediction. In linear state-space models the data are usually assumed to be Gaussian and the models have certain structural constraints such that they are identifiable. In this paper we propose a non-Gaussian state-space model which does not have such constraints. We prove that this model is fully identifiable. We then propose an efficient two-step method for parameter estimation: one first extracts the subspace of the latent processes based on the temporal information of the data, and then performs multichannel blind deconvolution, making use of both the temporal information and non-Gaussianity. We conduct a series of simulations to illustrate the performance of the proposed method. Finally, we apply the proposed model and parameter estimation method on real data, including major world stock indices and magnetoencephalography (MEG) recordings. Experimental results are encouraging and show the practical usefulness of the proposed model and method.

ei

PDF Web [BibTex]



Non-stationary correction of optical aberrations

Schuler, C., Hirsch, M., Harmeling, S., Schölkopf, B.

In pages: 659-666, (Editors: DN Metaxas and L Quan and A Sanfeliu and LJ Van Gool), IEEE, Piscataway, NJ, USA, 13th IEEE International Conference on Computer Vision (ICCV), November 2011 (inproceedings)

Abstract
Taking a sharp photo at several megapixel resolution traditionally relies on high grade lenses. In this paper, we present an approach to alleviate image degradations caused by imperfect optics. We rely on a calibration step to encode the optical aberrations in a space-variant point spread function and obtain a corrected image by non-stationary deconvolution. By including the Bayer array in our image formation model, we can perform demosaicing as part of the deconvolution.

ei

PDF Web DOI [BibTex]



Learning low-rank output kernels

Dinuzzo, F., Fukumizu, K.

In JMLR Workshop and Conference Proceedings Volume 20, pages: 181-196, (Editors: Hsu, C.-N., W. S. Lee), JMLR, Cambridge, MA, USA, 3rd Asian Conference on Machine Learning (ACML), November 2011 (inproceedings)

Abstract
Output kernel learning techniques allow to simultaneously learn a vector-valued function and a positive semidefinite matrix which describes the relationships between the outputs. In this paper, we introduce a new formulation that imposes a low-rank constraint on the output kernel and operates directly on a factor of the kernel matrix. First, we investigate the connection between output kernel learning and a regularization problem for an architecture with two layers. Then, we show that a variety of methods such as nuclear norm regularized regression, reduced-rank regression, principal component analysis, and low rank matrix approximation can be seen as special cases of the output kernel learning framework. Finally, we introduce a block coordinate descent strategy for learning low-rank output kernels.

ei

PDF Web [BibTex]



Stability Condition for Teleoperation System with Packet Loss

Hong, A., Cho, JH., Lee, DY.

In pages: 760-761, 2011 KSME Annual Fall Conference, November 2011 (inproceedings)

Abstract
This paper focuses on the stability condition of a teleoperation system where there is packet loss in the communication channel. The communication channel between master and slave causes packet loss, which leads to performance degradation and instability of the teleoperation system. We consider a two-channel control architecture for the teleoperation system, in which the control inputs to the remote site are produced from the positions of master and slave. In this paper, the teleoperation system is modeled in the discrete domain to include the packet loss process. The stability condition for a teleoperation system with packet loss is then discussed in terms of input-to-state stability. Finally, the stability condition is presented in an LMI approach.

ei

[BibTex]



Fast removal of non-uniform camera shake

Hirsch, M., Schuler, C., Harmeling, S., Schölkopf, B.

In pages: 463-470, (Editors: DN Metaxas and L Quan and A Sanfeliu and LJ Van Gool), IEEE, Piscataway, NJ, USA, 13th IEEE International Conference on Computer Vision (ICCV), November 2011 (inproceedings)

Abstract
Camera shake leads to non-uniform image blurs. State-of-the-art methods for removing camera shake model the blur as a linear combination of homographically transformed versions of the true image. While this is conceptually interesting, the resulting algorithms are computationally demanding. In this paper we develop a forward model based on the efficient filter flow framework, incorporating the particularities of camera shake, and show how an efficient algorithm for blur removal can be obtained. Comprehensive comparisons on a number of real-world blurry images show that our approach is not only substantially faster, but it also leads to better deblurring results.

ei

PDF Web DOI [BibTex]



Home 3D body scans from noisy image and range data

Weiss, A., Hirshberg, D., Black, M.

In Int. Conf. on Computer Vision (ICCV), pages: 1951-1958, IEEE, Barcelona, November 2011 (inproceedings)

Abstract
The 3D shape of the human body is useful for applications in fitness, games and apparel. Accurate body scanners, however, are expensive, limiting the availability of 3D body models. We present a method for human shape reconstruction from noisy monocular image and range data using a single inexpensive commodity sensor. The approach combines low-resolution image silhouettes with coarse range data to estimate a parametric model of the body. Accurate 3D shape estimates are obtained by combining multiple monocular views of a person moving in front of the sensor. To cope with varying body pose, we use a SCAPE body model which factors 3D body shape and pose variations. This enables the estimation of a single consistent shape while allowing pose to vary. Additionally, we describe a novel method to minimize the distance between the projected 3D body contour and the image silhouette that uses analytic derivatives of the objective function. We propose a simple method to estimate standard body measurements from the recovered SCAPE model and show that the accuracy of our method is competitive with commercial body scanning systems costing orders of magnitude more.

ps

pdf YouTube poster Project Page Project Page [BibTex]



Attenuation correction in MR-BrainPET with segmented T1-weighted MR images of the patient’s head: A comparative study with CT

Wagenknecht, G., Rota Kops, E., Mantlik, F., Fried, E., Pilz, T., Hautzel, H., Tellmann, L., Pichler, B., Herzog, H.

In pages: 2261-2266, IEEE, Piscataway, NJ, USA, IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), October 2011 (inproceedings)

Abstract
Our method for attenuation correction (AC) in MR-BrainPET with segmented T1-weighted MR images of the patient's head was applied to data from different MR-BrainPET scanners (Jülich, Tübingen) and compared to CT-based results. The study objectives presented in this paper are twofold. The first objective is to examine if the segmentation method developed for and successfully applied to 3D MP-RAGE data can also be used to segment other T1-weighted MR data such as 3D FLASH data. The second aim is to show if the similarity of segmented MR-based (SBA) and CT-based AC (CBA) obtained at HR+ PET can also be confirmed for BrainPET, for which the new AC method is intended. In order to reach the first objective, 14 segmented MR data sets (three 3D MP-RAGE data sets from Jülich and eleven 3D FLASH data sets from Tübingen) were compared to the respective CT data based on the Dice coefficient and scatter plots. For bone, a CT threshold of HU > 500 was applied. Dice coefficients (mean±std) for the upper cranial part of the skull, the skull above cavities, and in the caudal part including the cerebellum are 0.73±0.1, 0.79±0.04, and 0.49±0.02 for the Jülich data and 0.7±0.1, 0.72±0.1, and 0.60±0.05 for the Tübingen data. To reach the second aim, SBA and CBA were compared for six subjects based on VOI (AAL atlas) analysis. Mean absolute relative difference (maRD) values are maRD(JüFVBW1-FDG): 0.99%±0.83%, maRD(JüFVBW2-FDG): 0.90%±0.89%, and maRD(JüEP-Flumazenil): 1.85%±1.25% for the Jülich data and maRD(TüTP02-FDG): 2.99%±1.65%, maRD(TüNP01-FDG): 5.37%±2.29%, and maRD(TüNP02-FDG): 6.52%±1.69% for the three best-segmented Tübingen data sets. The results show similar segmentation quality for both T1-weighted MR sequence types. The application to AC in BrainPET shows a high similarity to CT-based AC if the standardized ACF value for bone used in SBA is in good accordance with the bone density of the patient in question.

ei

Web DOI [BibTex]



Evaluating the Automated Alignment of 3D Human Body Scans

Hirshberg, D. A., Loper, M., Rachlin, E., Tsoli, A., Weiss, A., Corner, B., Black, M. J.

In 2nd International Conference on 3D Body Scanning Technologies, pages: 76-86, (Editors: D’Apuzzo, Nicola), Hometrica Consulting, Lugano, Switzerland, October 2011 (inproceedings)

Abstract
The statistical analysis of large corpora of human body scans requires that these scans be in alignment, either for a small set of key landmarks or densely for all the vertices in the scan. Existing techniques tend to rely on hand-placed landmarks or algorithms that extract landmarks from scans. The former is time consuming and subjective while the latter is error prone. Here we show that a model-based approach can align meshes automatically, producing alignment accuracy similar to that of previous methods that rely on many landmarks. Specifically, we align a low-resolution, artist-created template body mesh to many high-resolution laser scans. Our alignment procedure employs a robust iterative closest point method with a regularization that promotes smooth and locally rigid deformation of the template mesh. We evaluate our approach on 50 female body models from the CAESAR dataset that vary significantly in body shape. To make the method fully automatic, we define simple feature detectors for the head and ankles, which provide initial landmark locations. We find that, if body poses are fairly similar, as in CAESAR, the fully automated method provides dense alignments that enable statistical analysis and anthropometric measurement.
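
A stripped-down version of the alignment idea — iterative closest point with a regularizer that penalizes deformation of the template's local structure — fits in a few lines. The chain "edges", step size, and smoothness weight below are toy choices; the actual method deforms a template mesh with a robust ICP objective.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Toy setup: the "scan" is a shifted, slightly noisy copy of the template points.
template = rng.uniform(-1, 1, size=(200, 3))
scan = template + np.array([0.3, -0.2, 0.1]) + rng.normal(0, 0.01, size=(200, 3))

edges = [(i, i + 1) for i in range(len(template) - 1)]  # toy "mesh" connectivity
tree = cKDTree(scan)
lam, step = 0.5, 0.05                                   # arbitrary weights
verts = template.copy()

for _ in range(100):
    _, idx = tree.query(verts)                 # closest-point correspondences
    grad = 2.0 * (verts - scan[idx])           # data term: pull verts to the scan
    for i, j in edges:                         # regularizer: keep edge vectors
        d = (verts[i] - verts[j]) - (template[i] - template[j])
        grad[i] += 2.0 * lam * d               # close to those of the template
        grad[j] -= 2.0 * lam * d
    verts -= step * grad

print(np.linalg.norm(verts - scan[tree.query(verts)[1]], axis=1).mean())  # small
```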

ps

pdf slides DOI Project Page [BibTex]



Learning anticipation policies for robot table tennis

Wang, Z., Lampert, C., Mülling, K., Schölkopf, B., Peters, J.

In pages: 332-337, (Editors: NM Amato), IEEE, Piscataway, NJ, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2011 (inproceedings)

Abstract
Playing table tennis is a difficult task for robots, especially due to their limitations of acceleration. A key bottleneck is the amount of time needed to reach the desired hitting position and velocity of the racket for returning the incoming ball. Here, it often does not suffice to simply extrapolate the ball's trajectory after the opponent returns it but more information is needed. Humans are able to predict the ball's trajectory based on the opponent's moves and, thus, have a considerable advantage. Hence, we propose to incorporate an anticipation system into robot table tennis players, which enables the robot to react earlier while the opponent is performing the striking movement. Based on visual observation of the opponent's racket movement, the robot can predict the aim of the opponent and adjust its movement generation accordingly. The policies for deciding how and when to react are obtained by reinforcement learning. We conduct experiments with an existing robot player to show that the learned reaction policy can significantly improve the performance of the overall system.

ei

PDF Web DOI [BibTex]



Estimating integrated information with TMS pulses during wakefulness, sleep and under anesthesia

Balduzzi, D.

In pages: 4717-4720, IEEE, Piscataway, NJ, USA, 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEEE EMBC), September 2011 (inproceedings)

Abstract
This paper relates a recently proposed measure of information integration to experiments investigating the evoked high-density electroencephalography (EEG) response to transcranial magnetic stimulation (TMS) during wakefulness, early non-rapid eye movement (NREM) sleep and under anesthesia. We show that bistability, arising at the cellular and population level during NREM sleep and under anesthesia, dramatically reduces the brain’s ability to integrate information.

ei

PDF Web DOI [BibTex]



Improving Denoising Algorithms via a Multi-scale Meta-procedure

Burger, H., Harmeling, S.

In Pattern Recognition, pages: 206-215, (Editors: Mester, R., M. Felsberg), Springer, Berlin, Germany, 33rd DAGM Symposium, September 2011 (inproceedings)

Abstract
Many state-of-the-art denoising algorithms focus on recovering high-frequency details in noisy images. However, images corrupted by large amounts of noise are also degraded in the lower frequencies. Thus properly handling all frequency bands allows us to better denoise in such regimes. To improve existing denoising algorithms we propose a meta-procedure that applies existing denoising algorithms across different scales and combines the resulting images into a single denoised image. With a comprehensive evaluation we show that the performance of many state-of-the-art denoising algorithms can be improved.
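
The meta-procedure itself is simple to sketch: run an existing denoiser at several image scales and blend the upsampled outputs, so that low-frequency noise is attenuated too. In the sketch below a Gaussian filter stands in for the base denoiser, and the scales and blending weights are arbitrary; the paper studies a principled combination of the per-scale results.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def base_denoiser(img):
    """Stand-in for any existing denoising algorithm (here: a Gaussian filter)."""
    return gaussian_filter(img, sigma=1.0)

def multiscale_denoise(noisy, scales=(1.0, 0.5, 0.25), weights=(0.6, 0.25, 0.15)):
    """Meta-procedure sketch: denoise at several scales and blend the upsampled
    results, so that lower frequency bands are also cleaned."""
    h, w = noisy.shape
    out = np.zeros_like(noisy)
    for s, wt in zip(scales, weights):
        small = zoom(noisy, s, order=1)                            # downsample
        den = base_denoiser(small)                                 # denoise
        out += wt * zoom(den, (h / den.shape[0], w / den.shape[1]), order=1)
    return out

noisy = np.random.default_rng(0).normal(0, 0.3, size=(64, 64)) + 1.0
print(multiscale_denoise(noisy).std())  # noticeably below the input noise level
```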

ei

PDF DOI [BibTex]



Learning robot grasping from 3-D images with Markov Random Fields

Boularias, A., Kroemer, O., Peters, J.

In pages: 1548-1553, (Editors: Amato, N.M.), IEEE, Piscataway, NJ, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2011 (inproceedings)

Abstract
Learning to grasp novel objects is an essential skill for robots operating in unstructured environments. We therefore propose a probabilistic approach for learning to grasp. In particular, we learn a function that predicts the success probability of grasps performed on surface points of a given object. Our approach is based on Markov Random Fields (MRF), and motivated by the fact that points that are geometrically close to each other tend to have similar grasp success probabilities. The MRF approach is successfully tested in simulation, and on a real robot using 3-D scans of various types of objects. The empirical results show a significant improvement over methods that do not utilize the smoothness assumption and classify each point separately from the others.

ei

PDF Web DOI [BibTex]



Neurofeedback of Fronto-Parietal Gamma-Oscillations

Grosse-Wentrup, M.

In pages: 172-175, (Editors: Müller-Putz, G.R., R. Scherer, M. Billinger, A. Kreilinger, V. Kaiser, C. Neuper), Verlag der Technischen Universität Graz, Graz, Austria, 5th International Brain-Computer Interface Conference (BCI), September 2011 (inproceedings)

Abstract
In recent work, we have provided evidence that fronto-parietal γ-range oscillations are a cause of within-subject performance variations in brain-computer interfaces (BCIs) based on motor-imagery. Here, we explore the feasibility of using neurofeedback of fronto-parietal γ-power to induce a mental state that is beneficial for BCI-performance. We provide empirical evidence based on two healthy subjects that intentional attenuation of fronto-parietal γ-power results in an enhanced resting-state sensorimotor-rhythm (SMR). As a large resting-state amplitude of the SMR has been shown to correlate with good BCI-performance, our approach may provide a means to reduce performance variations in BCIs.

ei

PDF Web [BibTex]



Learning inverse kinematics with structured prediction

Bocsi, B., Nguyen-Tuong, D., Csato, L., Schölkopf, B., Peters, J.

In pages: 698-703, (Editors: NM Amato), IEEE, Piscataway, NJ, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2011 (inproceedings)

Abstract
Learning inverse kinematics of robots with redundant degrees of freedom (DoF) is a difficult problem in robot learning. The difficulty lies in the non-uniqueness of the inverse kinematics function. Existing methods tackle non-uniqueness by segmenting the configuration space and building a global solution from local experts. The usage of local experts implies the definition of an oracle, which governs the global consistency of the local models; the definition of this oracle is difficult. We propose an algorithm suitable to learn the inverse kinematics function in a single global model despite its multivalued nature. Inverse kinematics is approximated from examples using structured output learning methods. Unlike most of the existing methods, which estimate inverse kinematics on the velocity level, we address learning of the direct function on the position level. This problem is significantly harder. To support the proposed method, we conducted real-world experiments on a tracking control task and tested our algorithms on these models.

ei

PDF Web DOI [BibTex]



Automatic foreground-background refocusing

Loktyushin, A., Harmeling, S.

In pages: 3445-3448, (Editors: Macq, B., P. Schelkens), IEEE, Piscataway, NJ, USA, 18th IEEE International Conference on Image Processing (ICIP), September 2011 (inproceedings)

Abstract
A challenging problem in image restoration is to recover an image with a blurry foreground. Such images can easily occur with modern cameras, when the auto-focus aims mistakenly at the background (which will appear sharp) instead of the foreground, where usually the object of interest is. In this paper we propose an automatic procedure that (i) estimates the amount of out-of-focus blur, (ii) segments the image into foreground and background incorporating clues from the blurriness, (iii) recovers the sharp foreground, and finally (iv) blurs the background to refocus the scene. On several real photographs with blurry foreground and sharp background, we demonstrate the effectiveness and limitations of our method.

ei

Web DOI [BibTex]



Reinforcement Learning to adjust Robot Movements to New Situations

Kober, J., Oztop, E., Peters, J.

In Robotics: Science and Systems VI, pages: 33-40, (Editors: Matsuoka, Y., H. F. Durrant-Whyte, J. Neira), MIT Press, Cambridge, MA, USA, 2010 Robotics: Science and Systems Conference (RSS), September 2011 (inproceedings)

Abstract
Many complex robot motor skills can be represented using elementary movements, and there exist efficient techniques for learning parametrized motor plans using demonstrations and self-improvement. However, in many cases, the robot currently needs to learn a new elementary movement even if a parametrized motor plan exists that covers a similar, related situation. Clearly, a method is needed that modulates the elementary movement through the meta-parameters of its representation. In this paper, we show how to learn such mappings from circumstances to meta-parameters using reinforcement learning. We introduce an appropriate reinforcement learning algorithm based on a kernelized version of the reward-weighted regression. We compare this algorithm to several previous methods on a toy example and show that it performs well in comparison to standard algorithms. Subsequently, we show two robot applications of the presented setup; i.e., the generalization of throwing movements in darts, and of hitting movements in table tennis. We show that both tasks can be learned successfully using simulated and real robots.
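
The mapping from circumstances to meta-parameters can be learned with a reward-weighted kernel regression, of which the following is a simplified, self-contained sketch (a stand-in for the paper's kernelized reward-weighted regression; the toy task and hyperparameters are ours).

```python
import numpy as np
rng = np.random.default_rng(0)

def rbf(A, B, ls=0.5):
    """Gaussian (RBF) kernel matrix between row-wise point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

# Episodic samples: situation s, tried meta-parameter g, obtained reward r.
S = rng.uniform(-1, 1, size=(50, 1))
G = 2.0 * S[:, 0] + rng.normal(0, 0.3, size=50)     # noisy "good" meta-parameters
R = np.exp(-(G - 2.0 * S[:, 0]) ** 2)               # reward: high when g fits s

# Reward-weighted kernel regression from situations to meta-parameters:
# minimize sum_i r_i (f(s_i) - g_i)^2 + lam ||f||^2 with f = K alpha.
lam = 1e-2
K = rbf(S, S)
W = np.diag(R)
alpha = np.linalg.solve(K @ W @ K + lam * K, K @ W @ G)

s_new = np.array([[0.5]])
print(rbf(s_new, S) @ alpha)   # ~1.0 (= 2.0 * 0.5), the meta-parameter to use
```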

ei

PDF Web [BibTex]



Simultaneous EEG Recordings with Dry and Wet Electrodes in Motor-Imagery

Saab, J., Battes, B., Grosse-Wentrup, M.

In pages: 312-315, (Editors: Müller-Putz, G.R., R. Scherer, M. Billinger, A. Kreilinger, V. Kaiser, C. Neuper), Verlag der Technischen Universität Graz, Graz, Austria, 5th International Brain-Computer Interface Conference (BCI), September 2011 (inproceedings)

Abstract
Robust dry EEG electrodes are arguably the key to making EEG Brain-Computer Interfaces (BCIs) a practical technology. Existing studies on dry EEG electrodes can be characterized by the recording method (stand-alone dry electrodes or simultaneous recording with wet electrodes), the dry electrode technology (e.g. active or passive), the paradigm used for testing (e.g. event-related potentials), and the measure of performance (e.g. comparing dry and wet electrode frequency spectra). In this study, an active-dry electrode prototype is tested, during a motor-imagery task, with EEG-BCI in mind. It is used simultaneously with wet electrodes and assessed using classification accuracy. Our results indicate that the two types of electrodes are comparable in their performance but there are improvements to be made, particularly in finding ways to reduce motion-related artifacts.

ei

PDF Web [BibTex]



Learning task-space tracking control with kernels

Nguyen-Tuong, D., Peters, J.

In pages: 704-709, (Editors: Amato, N.M.), IEEE, Piscataway, NJ, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2011 (inproceedings)

Abstract
Task-space tracking control is essential for robot manipulation. In practice, task-space control of redundant robot systems is known to be susceptible to modeling errors. Here, data-driven learning methods may present an interesting alternative approach. However, learning models for task-space tracking control from sampled data is an ill-posed problem. In particular, the same input data point can yield many different output values, which can form a non-convex solution space. Because the problem is ill-posed, models cannot be learned from such data using common regression methods. While learning of task-space control mappings is globally ill-posed, it has been shown in recent work that it is locally a well-defined problem. In this paper, we use this insight to formulate a local kernel-based learning approach for online model learning for task-space tracking control. For evaluations, we show in simulation the ability of the method for online model learning for task-space tracking control of redundant robots.

ei

PDF Web DOI [BibTex]



Automatic particle picking using diffusion filtering and random forest classification

Joubert, P., Nickell, S., Beck, F., Habeck, M., Hirsch, M., Schölkopf, B.

In pages: 6, International Workshop on Microscopic Image Analysis with Application in Biology (MIAAB), September 2011 (inproceedings)

Abstract
An automatic particle picking algorithm for processing electron micrographs of a large molecular complex, the 26S proteasome, is described. The algorithm makes use of a coherence-enhancing diffusion filter to denoise the data, and a random forest classifier for removing false positives. It does not make use of a 3D reference model, but uses a training set of manually picked particles instead. False positive and false negative rates of around 25% to 30% are achieved on a testing set. The algorithm was developed for a specific particle, but contains steps that should be useful for developing automatic picking algorithms for other particles.
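
The described two-stage pipeline — denoise the micrograph, then prune candidate picks with a classifier trained on manually picked examples — can be mocked up as follows. A Gaussian filter stands in for coherence-enhancing diffusion, and the micrograph, coordinates, and labels are synthetic; nothing here is the authors' code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def denoise(micrograph):
    """Stand-in for coherence-enhancing diffusion (a plain Gaussian here)."""
    return gaussian_filter(micrograph, sigma=2.0)

def extract_patches(img, coords, size=8):
    """Flattened square patches around candidate positions as feature vectors."""
    half = size // 2
    return np.array([img[y - half:y + half, x - half:x + half].ravel()
                     for y, x in coords])

# Training set of manually picked particle / non-particle positions (synthetic here).
img = denoise(rng.normal(size=(256, 256)))
pos = [(rng.integers(8, 248), rng.integers(8, 248)) for _ in range(200)]
labels = rng.integers(0, 2, size=200)     # 1 = particle, 0 = false positive

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(extract_patches(img, pos), labels)

# At pick time: candidate detections are kept or discarded by the classifier.
candidates = [(50, 60), (100, 120)]
print(clf.predict(extract_patches(img, candidates)))
```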

ei

PDF Web [BibTex]
