2018

Deep Reinforcement Learning for Resource-aware Control

Baumann, D., Zhu, J., Martius, G., Trimpe, S.

In Proceedings of the 57th IEEE International Conference on Decision and Control, Miami, FL, USA, December 2018 (inproceedings) Accepted

[BibTex]


Minimum Information Exchange in Distributed Systems

Solowjow, F., Mehrjou, A., Schölkopf, B., Trimpe, S.

57th IEEE Conference on Decision and Control (CDC), December 2018 (conference) Submitted

arXiv [BibTex]


Direct Sparse Odometry With Rolling Shutter

Schubert, D., Usenko, V., Demmel, N., Stueckler, J., Cremers, D.

European Conference on Computer Vision (ECCV), September 2018, accepted as oral presentation, to appear (conference)

[BibTex]


Deep Virtual Stereo Odometry: Leveraging Deep Depth Prediction for Monocular Direct Sparse Odometry

Yang, N., Wang, R., Stueckler, J., Cremers, D.

European Conference on Computer Vision (ECCV), September 2018, accepted as oral presentation, to appear, arXiv 1807.02570 (conference)

link (url) [BibTex]


Learning Human Optical Flow

Ranjan, A., Romero, J., Black, M. J.

September 2018 (conference)

Abstract
The optical flow of humans is well known to be useful for the analysis of human action. Given this, we devise an optical flow algorithm specifically for human motion and show that it is superior to generic flow methods. Designing a method by hand is impractical, so we develop a new training database of image sequences with ground truth optical flow. For this we use a 3D model of the human body and motion capture data to synthesize realistic flow fields. We then train a convolutional neural network to estimate human flow fields from pairs of images. Since many applications in human motion analysis depend on speed, and we anticipate mobile applications, we base our method on SpyNet with several modifications. We demonstrate that our trained network is more accurate than a wide range of top methods on held-out test data and that it generalizes well to real image sequences. When combined with a person detector/tracker, the approach provides a full solution to the problem of 2D human flow estimation. Both the code and the dataset are available for research.

link (url) [BibTex]


Learning an Infant Body Model from RGB-D Data for Accurate Full Body Motion Analysis

Hesse, N., Pujades, S., Romero, J., Black, M. J., Bodensteiner, C., Arens, M., Hofmann, U. G., Tacke, U., Hadders-Algra, M., Weinberger, R., Muller-Felber, W., Schroeder, A. S.

In Int. Conf. on Medical Image Computing and Computer Assisted Intervention (MICCAI), September 2018 (inproceedings)

Abstract
Infant motion analysis enables early detection of neurodevelopmental disorders like cerebral palsy (CP). Diagnosis, however, is challenging, requiring expert human judgement. An automated solution would be beneficial but requires the accurate capture of 3D full-body movements. To that end, we develop a non-intrusive, low-cost, lightweight acquisition system that captures the shape and motion of infants. Going beyond work on modeling adult body shape, we learn a 3D Skinned Multi-Infant Linear body model (SMIL) from noisy, low-quality, and incomplete RGB-D data. We demonstrate the capture of shape and motion with 37 infants in a clinical environment. Quantitative experiments show that SMIL faithfully represents the data and properly factorizes the shape and pose of the infants. With a case study based on general movement assessment (GMA), we demonstrate that SMIL captures enough information to allow medical assessment. SMIL provides a new tool and a step towards a fully automatic system for GMA.

pdf Project page [BibTex]


Decentralized MPC based Obstacle Avoidance for Multi-Robot Target Tracking Scenarios

Tallamraju, R., Rajappa, S., Black, M., Karlapalem, K., Ahmad, A.

The 16th IEEE International Symposium on Safety, Security, and Rescue Robotics, August 2018 (conference) Accepted

Project Page [BibTex]


Comparison-Based Random Forests

Haghiri, S., Garreau, D., Luxburg, U. V.

In ICML, 35th International Conference on Machine Learning, July 2018 (inproceedings)

link (url) [BibTex]


Probabilistic Recurrent State-Space Models

Doerr, A., Daniel, C., Schiegg, M., Nguyen-Tuong, D., Schaal, S., Toussaint, M., Trimpe, S.

In Proceedings of the International Conference on Machine Learning, International Conference on Machine Learning (ICML), July 2018 (inproceedings) Accepted

Abstract
State-space models (SSMs) are a highly expressive model class for learning patterns in time series data and for system identification. Deterministic versions of SSMs (e.g., LSTMs) have proven extremely successful in modeling complex time-series data. Fully probabilistic SSMs, however, often prove hard to train, even for smaller problems. To overcome this limitation, we propose a scalable initialization and training algorithm based on doubly stochastic variational inference and Gaussian processes. In contrast to related approaches, the variational approximation we propose fully captures the temporal correlations of the latent states, allowing for robust training.
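
(For intuition, a minimal sketch of the doubly stochastic rollout idea, assuming PyTorch and placeholder transition/observation networks; the paper's actual model uses Gaussian process transitions, and the KL terms of the bound are omitted.)

```python
import torch

def rollout_elbo(y, f_mean, f_var, g, q0_mean, q0_logvar, noise_var=0.1):
    """y: (T, d_y) observations. f_mean/f_var: transition networks returning
    the mean and (positive) variance of the next latent state. g: observation
    network. q0_*: variational initial-state parameters. All of these are
    placeholders for the paper's GP-based components."""
    T = y.shape[0]
    # first source of stochasticity: sample the initial latent state
    x = q0_mean + torch.exp(0.5 * q0_logvar) * torch.randn_like(q0_mean)
    log_lik = 0.0
    for t in range(T):
        # Gaussian observation likelihood p(y_t | x_t)
        log_lik = log_lik - 0.5 * ((y[t] - g(x)) ** 2 / noise_var).sum()
        # second source: sample the next latent state by reparameterization,
        # so the temporal correlations of the latents stay in the bound
        mu, var = f_mean(x), f_var(x)
        x = mu + var.sqrt() * torch.randn_like(mu)
    return log_lik  # the ELBO additionally subtracts KL terms, omitted here
```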

arXiv pdf Project Page [BibTex]


Event-triggered Learning for Resource-efficient Networked Control

Solowjow, F., Baumann, D., Garcke, J., Trimpe, S.

In Proceedings of the 2018 American Control Conference (ACC), June 2018 (inproceedings)

arXiv PDF [BibTex]


Learning from Outside the Viability Kernel: Why we Should Build Robots that can Fail with Grace

Heim, S., Spröwitz, A.

Proceedings of SIMPAR 2018, pages: 55-61, IEEE, 2018 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR), May 2018 (conference)

link (url) DOI [BibTex]


On Time Optimization of Centroidal Momentum Dynamics

Ponton, B., Herzog, A., Prete, A. D., Schaal, S., Righetti, L.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018, IEEE, International Conference on Robotics and Automation, May 2018 (inproceedings)

Abstract
Recently, the centroidal momentum dynamics has received substantial attention as a means of planning dynamically consistent motions for robots with arms and legs in multi-contact scenarios. However, the dynamics is also non-convex, which renders any optimization approach difficult, and timing is usually kept fixed in most trajectory optimization techniques so as not to introduce additional non-convexities; this can limit the versatility of the algorithms. In our previous work, we proposed a convex relaxation of the problem that allowed us to efficiently compute momentum trajectories and contact forces. However, our approach could not minimize a desired angular momentum objective, which seriously limited its applicability. Noticing that the non-convexity introduced by the time variables is of a similar nature to that of the centroidal dynamics, we propose two convex relaxations of the problem based on trust regions and soft constraints. The resulting approaches can compute time-optimized, dynamically consistent trajectories fast enough to make the approach real-time capable. The performance of the algorithm is demonstrated in several multi-contact scenarios for a humanoid robot. In particular, we show that the proposed convex relaxation of the original problem finds solutions that are consistent with the original non-convex problem, and we illustrate how timing optimization allows us to find motion plans that would be difficult to plan with fixed timing.

video paper [BibTex]


Robust Dense Mapping for Large-Scale Dynamic Environments

Barsan, I. A., Liu, P., Pollefeys, M., Geiger, A.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018, IEEE, International Conference on Robotics and Automation, May 2018 (inproceedings)

Abstract
We present a stereo-based dense mapping algorithm for large-scale dynamic urban environments. In contrast to other existing methods, we simultaneously reconstruct the static background, the moving objects, and the potentially moving but currently stationary objects separately, which is desirable for high-level mobile robotic tasks such as path planning in crowded environments. We use both instance-aware semantic segmentation and sparse scene flow to classify objects as either background, moving, or potentially moving, thereby ensuring that the system is able to model objects with the potential to transition from static to dynamic, such as parked cars. Given camera poses estimated from visual odometry, both the background and the (potentially) moving objects are reconstructed separately by fusing the depth maps computed from the stereo input. In addition to visual odometry, sparse scene flow is also used to estimate the 3D motions of the detected moving objects, in order to reconstruct them accurately. A map pruning technique is further developed to improve reconstruction accuracy and reduce memory consumption, leading to increased scalability. We evaluate our system thoroughly on the well-known KITTI dataset. Our system is capable of running on a PC at approximately 2.5Hz, with the primary bottleneck being the instance-aware semantic segmentation, which is a limitation we hope to address in future work.

pdf Video Project Page [BibTex]


An Online Scalable Approach to Unified Multirobot Cooperative Localization and Object Tracking

Ahmad, A., Lawless, G., Lima, P.

In IEEE International Conference on Robotics and Automation (ICRA) 2018, Journal Track, May 2018 (inproceedings)

Project Page [BibTex]


Online Learning of a Memory for Learning Rates

(nominated for best paper award)

Meier, F., Kappler, D., Schaal, S.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018, IEEE, International Conference on Robotics and Automation, May 2018, accepted (inproceedings)

Abstract
The promise of learning to learn for robotics rests on the hope that by extracting some information about the learning process itself, we can speed up subsequent similar learning tasks. Here, we introduce a computationally efficient online meta-learning algorithm that builds and optimizes a memory model of the optimal learning rate landscape from previously observed gradient behaviors. While performing task-specific optimization, this memory of learning rates predicts how to scale currently observed gradients. After applying the gradient scaling, our meta-learner updates its internal memory based on the observed effect of its prediction. Our meta-learner can be combined with any gradient-based optimizer, learns on the fly, and can be transferred to new optimization tasks. In our evaluations, we show that our meta-learning algorithm speeds up learning of MNIST classification and a variety of learning control tasks, in either batch or online learning settings.
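
(A toy illustration only; the paper learns a far richer memory model. The sketch below keys a small online memory of learning rates on a coarse gradient feature and updates it from the observed effect of each step. All names and the bucketing feature are hypothetical.)

```python
import numpy as np

class LRMemory:
    """Toy stand-in for a learned memory of learning rates: the key is a
    coarse feature of recent gradients (here, the cosine between successive
    gradients), the value is a step-size multiplier adapted online."""
    def __init__(self, n_bins=10, base_lr=0.01):
        self.lr = np.full(n_bins, base_lr)
        self.n_bins = n_bins

    def bucket(self, g_prev, g):
        denom = np.linalg.norm(g_prev) * np.linalg.norm(g) + 1e-12
        cos = float(g_prev @ g) / denom                  # in [-1, 1]
        return min(int((cos + 1) / 2 * self.n_bins), self.n_bins - 1)

    def step(self, params, g_prev, g, loss_delta):
        """loss_delta: observed change in loss caused by the previous step."""
        b = self.bucket(g_prev, g)
        # crude credit assignment: grow the rate for this gradient regime
        # if the last step helped, shrink it otherwise
        self.lr[b] *= 1.05 if loss_delta < 0 else 0.5
        return params - self.lr[b] * g                   # scale current gradient
```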

pdf video code [BibTex]


Shaping in Practice: Training Wheels to Learn Fast Hopping Directly in Hardware

Heim, S., Ruppert, F., Sarvestani, A., Spröwitz, A.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018, pages: 5076-5081, IEEE, International Conference on Robotics and Automation, May 2018 (inproceedings)

Abstract
Learning instead of designing robot controllers can greatly reduce the engineering effort required, while also emphasizing robustness. Despite considerable progress in simulation, applying learning directly in hardware is still challenging, in part due to the necessity of exploring potentially unstable parameters. We explore the concept of shaping the reward landscape with training wheels: temporary modifications of the physical hardware that facilitate learning. We demonstrate the concept with a robot leg mounted on a boom learning to hop fast. This proof of concept embodies typical challenges such as instability and contact, while being simple enough to empirically map out and visualize the reward landscape. Based on our results, we propose three criteria for designing effective training wheels for learning in robotics.

Video Youtube link (url) [BibTex]


Learning Sensor Feedback Models from Demonstrations via Phase-Modulated Neural Networks

Sutanto, G., Su, Z., Schaal, S., Meier, F.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018, IEEE, International Conference on Robotics and Automation, May 2018 (inproceedings)

pdf video [BibTex]


Motion-based Object Segmentation based on Dense RGB-D Scene Flow

Shao, L., Shah, P., Dwaracherla, V., Bohg, J.

arXiv, April 2018 (conference)

Abstract
Given two consecutive RGB-D images, we propose a model that estimates a dense 3D motion field, also known as scene flow. We take advantage of the fact that in robot manipulation scenarios, scenes often consist of a set of rigidly moving objects. Our model jointly estimates (i) the segmentation of the scene into an unknown but finite number of objects, (ii) the motion trajectories of these objects, and (iii) the object scene flow. We employ an hourglass deep neural network architecture. In the encoding stage, the RGB and depth images undergo spatial compression and correlation. In the decoding stage, the model outputs three images containing a per-pixel estimate of the corresponding object center as well as object translation and rotation. This forms the basis for inferring the object segmentation and final object scene flow. To evaluate our model, we generated a new and challenging, large-scale, synthetic dataset that is specifically targeted at robotic manipulation: it contains a large number of scenes with a very diverse set of simultaneously moving 3D objects and is recorded with a commonly used RGB-D camera. In quantitative experiments, we show that we significantly outperform state-of-the-art scene flow and motion-segmentation methods. In qualitative experiments, we show how our learned model transfers to challenging real-world scenes, visually generating significantly better results than existing methods.
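
(As a rough illustration of the final inference step, pixels can be grouped by the object center they vote for. This is a hedged simplification, with a made-up quantization scheme, not the authors' pipeline.)

```python
import numpy as np

def segment_from_centers(centers, voxel=0.05):
    """centers: (H, W, 3) array of per-pixel predicted 3D object centers, as
    produced by the decoding stage. Pixels voting for the same (quantized)
    center are grouped into one object instance."""
    H, W, _ = centers.shape
    keys = np.round(centers / voxel).astype(np.int64).reshape(-1, 3)
    _, labels = np.unique(keys, axis=0, return_inverse=True)
    return labels.reshape(H, W)  # integer instance id per pixel
```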

link (url) [BibTex]


Evaluating Low-Power Wireless Cyber-Physical Systems

Baumann, D., Mager, F., Singh, H., Zimmerling, M., Trimpe, S.

In Proceedings of the 1st Workshop on Benchmarking Cyber-Physical Networks and Systems (CPSBench 2018), Porto, Portugal, April 2018 (inproceedings)

arXiv PDF [BibTex]


RayNet: Learning Volumetric 3D Reconstruction with Ray Potentials

Paschalidou, D., Ulusoy, A. O., Schmitt, C., Van Gool, L., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)

Abstract
In this paper, we consider the problem of reconstructing a dense 3D model using images captured from different views. Recent methods based on convolutional neural networks (CNN) allow learning the entire task from data. However, they do not incorporate the physics of image formation such as perspective geometry and occlusion. Instead, classical approaches based on Markov Random Fields (MRF) with ray-potentials explicitly model these physical processes, but they cannot cope with large surface appearance variations across different viewpoints. In this paper, we propose RayNet, which combines the strengths of both frameworks. RayNet integrates a CNN that learns view-invariant feature representations with an MRF that explicitly encodes the physics of perspective projection and occlusion. We train RayNet end-to-end using empirical risk minimization. We thoroughly evaluate our approach on challenging real-world datasets and demonstrate its benefits over a piece-wise trained baseline, hand-crafted models as well as other learning-based approaches.

pdf suppmat Video Project Page code Poster [BibTex]


End-to-end Recovery of Human Shape and Pose

Kanazawa, A., Black, M. J., Jacobs, D. W., Malik, J.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)

Abstract
We describe Human Mesh Recovery (HMR), an end-to-end framework for reconstructing a full 3D mesh of a human body from a single RGB image. In contrast to most current methods that compute 2D or 3D joint locations, we produce a richer and more useful mesh representation that is parameterized by shape and 3D joint angles. The main objective is to minimize the reprojection loss of keypoints, which allows our model to be trained using in-the-wild images that only have ground truth 2D annotations. However, the reprojection loss alone is highly underconstrained. In this work we address this problem by introducing an adversary trained to tell whether human body shape and pose parameters are real or not, using a large database of 3D human meshes. We show that HMR can be trained with and without using any paired 2D-to-3D supervision. We do not rely on intermediate 2D keypoint detections and infer 3D pose and shape parameters directly from image pixels. Our model runs in real time given a bounding box containing the person. We demonstrate our approach on various images in the wild and outperform previous optimization-based methods that output 3D meshes, showing competitive results on tasks such as 3D joint location estimation and part segmentation.
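
(A minimal sketch of the keypoint reprojection objective under a weak-perspective camera, as described above; tensor shapes and names are assumptions, not the authors' implementation.)

```python
import torch

def reprojection_loss(joints3d, cam, joints2d_gt, vis):
    """joints3d: (B, J, 3) posed model joints. cam: (B, 3) weak-perspective
    camera (scale s, translation tx, ty). joints2d_gt: (B, J, 2) ground-truth
    2D keypoints. vis: (B, J) float visibility mask."""
    s = cam[:, :1].unsqueeze(-1)           # (B, 1, 1) scale
    t = cam[:, 1:].unsqueeze(1)            # (B, 1, 2) image-plane translation
    proj = s * joints3d[:, :, :2] + t      # orthographic projection of joints
    err = (proj - joints2d_gt).abs()       # L1 keypoint error
    return (vis.unsqueeze(-1) * err).sum() / vis.sum().clamp(min=1)
```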

pdf code project video [BibTex]


Travelling Ultrasonic Wave Enhances Keyclick Sensation

Gueorguiev, D., Kaci, A., Amberg, M., Giraud, F., Lemaire-Semail, B.

In Haptics: Science, Technology, and Applications, pages: 302-312, Springer International Publishing, Cham, 2018 (inproceedings)

Abstract
A realistic keyclick sensation is a serious challenge for haptic feedback, since vibrotactile rendering faces the limitation of the absence of contact force as experienced on physical buttons. It has been shown that creating a keyclick sensation is possible with stepwise ultrasonic friction modulation. However, the intensity of the sensation is limited by the impedance of the fingertip and by the absence of a lateral force component external to the finger. In our study, we compare this technique to rendering with an ultrasonic travelling wave, which exerts a lateral force on the fingertip. For both techniques, participants were asked to report the detection (or not) of a keyclick in a one-interval forced-choice procedure. In experiment 1, participants could press the surface as many times as they wanted for a given trial. In experiment 2, they were constrained to press only once. The results show a lower perceptual threshold for travelling waves. Moreover, participants pressed fewer times per trial and exerted a smaller normal force on the surface. The subjective quality of the sensation was found to be similar for both techniques. In general, haptic feedback based on travelling ultrasonic waves is promising for applications without lateral motion of the finger.

[BibTex]


Group invariance principles for causal generative models

Besserve, M., Shajarisales, N., Schölkopf, B., Janzing, D.

Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 84, pages: 557-565, Proceedings of Machine Learning Research, (Editors: Amos Storkey and Fernando Perez-Cruz), PMLR, 2018 (conference)

link (url) [BibTex]


Spatio-temporal Transformer Network for Video Restoration

Kim, T. H., Sajjadi, M. S. M., Hirsch, M., Schölkopf, B.

European Conference on Computer Vision (ECCV), 2018 (conference) Accepted

[BibTex]


Wasserstein Auto-Encoders

Tolstikhin, I., Bousquet, O., Gelly, S., Schölkopf, B.

6th International Conference on Learning Representations (ICLR), 2018 (conference)

link (url) [BibTex]


Fidelity-Weighted Learning

Dehghani, M., Mehrjou, A., Gouws, S., Kamps, J., Schölkopf, B.

6th International Conference on Learning Representations (ICLR), 2018 (conference)

link (url) [BibTex]


A Conditional Gradient Framework for Composite Convex Minimization with Applications to Semidefinite Programming

Yurtsever, A., Fercoq, O., Locatello, F., Cevher, V.

Proceedings of the 35th International Conference on Machine Learning (ICML), 2018 (conference) Accepted

[BibTex]


Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients

Balles, L., Hennig, P.

In Proceedings of the 35th International Conference on Machine Learning (ICML), 2018 (inproceedings) Accepted

Abstract
The ADAM optimizer is exceedingly popular in the deep learning community. Often it works very well, sometimes it doesn't. Why? We interpret ADAM as a combination of two aspects: for each weight, the update direction is determined by the sign of stochastic gradients, whereas the update magnitude is determined by an estimate of their relative variance. We disentangle these two aspects and analyze them in isolation, gaining insight into the mechanisms underlying ADAM. This analysis also extends recent results on adverse effects of ADAM on generalization, isolating the sign aspect as the problematic one. Transferring the variance adaptation to SGD gives rise to a novel method, completing the practitioner's toolbox for problems where ADAM fails.
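
(A sketch of the resulting variance-adapted step, loosely following the paper's M-SVAG: exponential moving averages of the gradient and its element-wise square yield a per-coordinate variance estimate, and noisier coordinates take proportionally smaller steps. Details are simplified.)

```python
import numpy as np

def msvag_step(params, grad, state, lr=0.1, beta=0.9):
    """One variance-adapted SGD step: coordinates whose gradient estimate is
    dominated by noise (m_hat**2 small relative to v_hat) take smaller steps."""
    state['t'] += 1
    state['m'] = beta * state['m'] + (1 - beta) * grad
    state['v'] = beta * state['v'] + (1 - beta) * grad ** 2
    bc = 1 - beta ** state['t']                       # bias correction
    m_hat, v_hat = state['m'] / bc, state['v'] / bc
    gamma = m_hat ** 2 / np.maximum(v_hat, 1e-12)     # variance-adaptation factor
    return params - lr * gamma * m_hat

# usage sketch: state = {'m': np.zeros_like(w), 'v': np.zeros_like(w), 't': 0}
```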

link (url) [BibTex]


Deep Marching Cubes: Learning Explicit Surface Representations

Liao, Y., Donne, S., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)

Abstract
Existing learning-based solutions to 3D surface prediction cannot be trained end-to-end as they operate on intermediate representations (e.g., TSDF) from which 3D surface meshes must be extracted in a post-processing step (e.g., via the marching cubes algorithm). In this paper, we investigate the problem of end-to-end 3D surface prediction. We first demonstrate that the marching cubes algorithm is not differentiable and propose an alternative differentiable formulation which we insert as a final layer into a 3D convolutional neural network. We further propose a set of loss functions which allow for training our model with sparse point supervision. Our experiments demonstrate that the model allows for predicting sub-voxel accurate 3D shapes of arbitrary topology. Additionally, it learns to complete shapes and to separate an object's inside from its outside even in the presence of sparse and incomplete ground truth. We investigate the benefits of our approach on the task of inferring shapes from 3D point clouds. Our model is flexible and can be combined with a variety of shape encoder and shape inference techniques.

pdf suppmat Video Project Page Poster [BibTex]


Semantic Visual Localization

Schönberger, J., Pollefeys, M., Geiger, A., Sattler, T.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)

Abstract
Robust visual localization under a wide range of viewing conditions is a fundamental problem in computer vision. Handling the difficult cases of this problem is not only very challenging but also of high practical relevance, e.g., in the context of life-long localization for augmented reality or autonomous robots. In this paper, we propose a novel approach based on a joint 3D geometric and semantic understanding of the world, enabling it to succeed under conditions where previous approaches failed. Our method leverages a novel generative model for descriptor learning, trained on semantic scene completion as an auxiliary task. The resulting 3D descriptors are robust to missing observations by encoding high-level 3D geometric and semantic information. Experiments on several challenging large-scale localization datasets demonstrate reliable localization under extreme viewpoint, illumination, and geometry changes.

pdf suppmat Poster [BibTex]


Learning-based solution to phase error correction in T2*-weighted GRE scans

Loktyushin, A., Ehses, P., Schölkopf, B., Scheffler, K.

1st International conference on Medical Imaging with Deep Learning (MIDL), 2018 (conference) Accepted

[BibTex]


Exploring Fingers’ Limitation of Texture Density Perception on Ultrasonic Haptic Displays

Kalantari, F., Gueorguiev, D., Lank, E., Bremard, N., Grisoni, L.

In Haptics: Science, Technology, and Applications, pages: 354-365, Springer International Publishing, Cham, 2018 (inproceedings)

Abstract
Recent research in haptic feedback is motivated by the crucial role that tactile perception plays in everyday touch interactions. In this paper, we describe psychophysical experiments to investigate the perceptual threshold of individual fingers on both the right and left hand of right-handed participants, using active dynamic touch for spatial period discrimination of both sinusoidal and square-wave gratings on ultrasonic haptic touchscreens. Both one-finger and multi-finger touch were studied and compared. Our results indicate that users' finger identity (index finger, middle finger, etc.) significantly affects the perception of both gratings in the case of one-finger exploration. We show that the index finger and thumb are the most sensitive in all conditions, whereas the little finger, followed by the ring finger, is the least sensitive for haptic perception. For multi-finger exploration, the right hand was found to be more sensitive than the left hand for both gratings. Our findings also demonstrate similar perception sensitivity between multi-finger exploration and the index finger of users' right hands (i.e., the dominant hand in our study), while a significant difference was found between single- and multi-finger perception sensitivity for the left hand.

[BibTex]


From Deterministic ODEs to Dynamic Structural Causal Models

Rubenstein, P. K., Bongers, S., Mooij, J. M., Schölkopf, B.

Proceedings of the 34th Conference on Uncertainty in Artificial Intelligence (UAI), 2018 (conference) Accepted

arXiv [BibTex]


Detecting non-causal artifacts in multivariate linear regression models

Janzing, D., Schölkopf, B.

Proceedings of the 35th International Conference on Machine Learning (ICML), 2018 (conference) Accepted

[BibTex]


Blind Justice: Fairness with Encrypted Sensitive Attributes

Kilbertus, N., Gascon, A., Kusner, M., Veale, M., Gummadi, K., Weller, A.

Proceedings of the 35th International Conference on Machine Learning (ICML), 2018 (conference) Accepted

[BibTex]


Boosting Variational Inference: an Optimization Perspective

Locatello, F., Khanna, R., Ghosh, J., Rätsch, G.

Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 84, pages: 464-472, Proceedings of Machine Learning Research, (Editors: Amos Storkey and Fernando Perez-Cruz), PMLR, 2018 (conference)

link (url) [BibTex]


Which Training Methods for GANs do actually Converge?

Mescheder, L., Geiger, A., Nowozin, S.

International Conference on Machine learning (ICML), 2018 (conference)

Abstract
Recent work has shown local convergence of GAN training for absolutely continuous data and generator distributions. In this paper, we show that the requirement of absolute continuity is necessary: we describe a simple yet prototypical counterexample showing that in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent. Furthermore, we discuss regularization strategies that were recently proposed to stabilize GAN training. Our analysis shows that GAN training with instance noise or zero-centered gradient penalties converges. On the other hand, we show that Wasserstein-GANs and WGAN-GP with a finite number of discriminator updates per generator update do not always converge to the equilibrium point. We discuss these results, leading us to a new explanation for the stability problems of GAN training. Based on our analysis, we extend our convergence results to more general GANs and prove local convergence for simplified gradient penalties even if the generator and data distributions lie on lower dimensional manifolds. We find these penalties to work well in practice and use them to learn high-resolution generative image models for a variety of datasets with little hyperparameter tuning.
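
(For reference, a minimal PyTorch sketch of the zero-centered gradient penalty on real data, the "R1" regularizer whose convergence the paper proves; the discriminator and batch shapes are assumed.)

```python
import torch

def r1_penalty(discriminator, real_images, gamma=10.0):
    """Zero-centered gradient penalty on real data ('R1'):
    (gamma / 2) * E[ ||grad_x D(x)||^2 ] for x drawn from the data."""
    real = real_images.detach().requires_grad_(True)
    d_out = discriminator(real)
    grad, = torch.autograd.grad(d_out.sum(), real, create_graph=True)
    return 0.5 * gamma * grad.pow(2).reshape(grad.size(0), -1).sum(1).mean()
```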

code paper poster slides Project Page [BibTex]


Learning 3D Shape Completion from Laser Scan Data with Weak Supervision

Stutz, D., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)

Abstract
3D shape completion from partial point clouds is a fundamental problem in computer vision and computer graphics. Recent approaches can be characterized as either data-driven or learning-based. Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations. Learning-based approaches, in contrast, avoid the expensive optimization step and instead directly predict the complete shape from the incomplete observations using deep neural networks. However, full supervision is required, which is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks, resulting in efficient shape completion without sacrificing accuracy. Tackling 3D shape completion of cars on ShapeNet and KITTI, we demonstrate that the proposed amortized maximum likelihood approach is able to compete with a fully supervised baseline and a state-of-the-art data-driven approach while being significantly faster. On ModelNet, we additionally show that the approach is able to generalize to other object categories as well.

pdf suppmat Project Page Poster [BibTex]


Feedback Control Goes Wireless: Guaranteed Stability over Low-power Multi-hop Networks

Mager, F., Baumann, D., Jacob, R., Thiele, L., Trimpe, S., Zimmerling, M.

2018 (conference) Submitted

Abstract
Closing feedback loops fast and over long distances is key to emerging applications; for example, robot motion control and swarm coordination require update intervals below 100 ms. Low-power wireless is preferred for its flexibility, low cost, and small form factor, especially if the devices support multi-hop communication. Thus far, however, closed-loop control over multi-hop low-power wireless has only been demonstrated for update intervals on the order of multiple seconds. This paper presents a wireless embedded system that tames imperfections impairing control performance such as jitter or packet loss, and a control design that exploits the essential properties of this system to provably guarantee closed-loop stability for linear dynamic systems. Using experiments on a testbed with multiple cart-pole systems, we are the first to demonstrate the feasibility and to assess the performance of closed-loop control and coordination over multi-hop low-power wireless for update intervals from 20 ms to 50 ms.
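
(To make the setting concrete, a toy numpy simulation of state feedback under packet loss with hold-last-command actuation; it illustrates the challenge, not the paper's protocol or stability guarantee, and the dynamics and gain are made-up values.)

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.02], [0.0, 1.0]])   # toy double integrator, 20 ms step
B = np.array([[0.0], [0.02]])
K = np.array([[8.0, 4.0]])                # a stabilizing feedback gain

x, u_held = np.array([1.0, 0.0]), 0.0
for _ in range(500):
    u = -(K @ x).item()                   # controller computes the command
    if rng.random() > 0.1:                # 10% of packets are lost in transit
        u_held = u                        # actuator applies the fresh command
    x = A @ x + (B * u_held).ravel()      # ... or holds the last received one
print(np.round(x, 4))                     # state should have decayed toward zero
```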

arXiv PDF [BibTex]


Systematic self-exploration of behaviors for robots in a dynamical systems framework

Pinneri, C., Martius, G.

2018 (inproceedings)

Abstract
One of the challenges of this century is to understand the neural mechanisms behind cognitive control and learning. Recent investigations propose biologically plausible synaptic mechanisms for self-organizing controllers, in the spirit of Hebbian learning. In particular, differential extrinsic plasticity (DEP) [Der and Martius, PNAS 2015] has proven to enable embodied agents to self-organize their individual sensorimotor development and generate highly coordinated behaviors during their interaction with the environment. These behaviors are attractors of a dynamical system. In this paper, we use the DEP rule to generate attractors and combine it with a “repelling potential” that allows the system to actively explore all its attractor behaviors in a systematic way. With a view to a self-determined exploration of goal-free behaviors, our framework enables switching between different motion patterns in an autonomous and sequential fashion. Our algorithm is able to recover all the attractor behaviors in a toy system, and it is also effective in two simulated environments. A spherical robot discovers all its major rolling modes, and a hexapod robot learns to locomote in 50 different ways within 30 minutes.

arXiv Code [BibTex]


Learning Independent Causal Mechanisms

Parascandolo, G., Kilbertus, N., Rojas-Carulla, M., Schölkopf, B.

Proceedings of the 35th International Conference on Machine Learning (ICML), 2018 (conference) Accepted

[BibTex]


Frame-Recurrent Video Super-Resolution

Sajjadi, M. S. M., Vemulapalli, R., Brown, M.

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018 (conference) Accepted

arXiv [BibTex]


PIPPS: Flexible Model-Based Policy Search Robust to the Curse of Chaos

Parmas, P., Doya, K., Rasmussen, C., Peters, J.

Proceedings of the 35th International Conference on Machine Learning (ICML), 2018 (conference) Accepted

[BibTex]


Temporal Difference Models: Model-Free Deep RL for Model-Based Control

Pong*, V., Gu*, S., Dalal, M., Levine, S.

6th International Conference on Learning Representations (ICLR), 2018, *equal contribution (conference)

link (url) [BibTex]


Tempered Adversarial Networks

Sajjadi, M. S. M., Schölkopf, B.

Proceedings of the 35th International Conference on Machine Learning (ICML), 2018 (conference) Accepted

arXiv [BibTex]


Generalized Score Functions for Causal Discovery

Huang, B., Zhang, K., Lin, Y., Schölkopf, B., Glymour, C.

Proceedings of the 24th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2018 (conference) Accepted

[BibTex]


Leveraging the Crowd to Detect and Reduce the Spread of Fake News and Misinformation

Kim, J., Tabibian, B., Oh, A., Schölkopf, B., Gomez Rodriguez, M.

Proceedings of the 11th ACM International Conference on Web Search and Data Mining (WSDM), pages: 324-332, (Editors: Yi Chang, Chengxiang Zhai, Yan Liu, and Yoelle Maarek), ACM, 2018 (conference)

DOI [BibTex]