Frame-Recurrent Video Super-Resolution

Sajjadi, M. S. M., Vemulapalli, R., Brown, M.

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018 (conference)

ei

ArXiv link (url) [BibTex]


Learning Face Deblurring Fast and Wide

Jin, M., Hirsch, M., Favaro, P.

The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages: 745-753, June 2018 (conference)

ei

link (url) [BibTex]


Learning from Outside the Viability Kernel: Why we Should Build Robots that can Fail with Grace

Heim, S., Sproewitz, A.

Proceedings of SIMPAR 2018, pages: 55-61, IEEE, 2018 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR), May 2018 (conference)

dlg

link (url) DOI Project Page [BibTex]


Wasserstein Auto-Encoders

Tolstikhin, I., Bousquet, O., Gelly, S., Schölkopf, B.

6th International Conference on Learning Representations (ICLR), May 2018 (conference)

ei

link (url) Project Page [BibTex]


Fidelity-Weighted Learning

Dehghani, M., Mehrjou, A., Gouws, S., Kamps, J., Schölkopf, B.

6th International Conference on Learning Representations (ICLR), May 2018 (conference)

ei

link (url) Project Page [BibTex]


Inducing Probabilistic Context-Free Grammars for the Sequencing of Movement Primitives

Lioutikov, R., Maeda, G., Veiga, F., Kersting, K., Peters, J.

IEEE International Conference on Robotics and Automation, (ICRA), pages: 1-8, IEEE, May 2018 (conference)

ei

DOI Project Page [BibTex]


Sobolev GAN

Mroueh, Y., Li*, C., Sercu*, T., Raj*, A., Cheng, Y.

6th International Conference on Learning Representations (ICLR), May 2018, *equal contribution (conference)

ei

link (url) Project Page [BibTex]


Soft Miniaturized Linear Actuators Wirelessly Powered by Rotating Permanent Magnets

Qiu, T., Palagi, S., Sachs, J., Fischer, P.

In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages: 3595-3600, May 2018 (inproceedings)

Abstract
Wireless actuation by magnetic fields allows for the operation of untethered miniaturized devices, e.g. in biomedical applications. Nevertheless, generating large controlled forces over relatively large distances is challenging. Magnetic torques are easier to generate and control, but they are not always suitable for the tasks at hand. Moreover, strong magnetic fields are required to generate a sufficient torque, which are difficult to achieve with electromagnets. Here, we demonstrate a soft miniaturized actuator that transforms an externally applied magnetic torque into a controlled linear force. We report the design, fabrication and characterization of both the actuator and the magnetic field generator. We show that the magnet assembly, which is based on a set of rotating permanent magnets, can generate strong controlled oscillating fields over a relatively large workspace. The actuator, which is 3D-printed, can lift a load of more than 40 times its weight. Finally, we show that the actuator can be further miniaturized, paving the way towards strong, wirelessly powered microactuators.
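As a back-of-the-envelope illustration of the torque-based actuation principle described above (not code from the paper), the torque on a magnetic dipole in an external field is τ = m × B:

```python
import numpy as np

def magnetic_torque(m, B):
    """Torque (N·m) on a magnetic dipole moment m (A·m²) in a field B (T): tau = m x B."""
    return np.cross(m, B)

# A dipole along x in a 0.1 T field along y experiences a torque about z;
# rotating the external field rotates this torque, which the soft actuator
# converts into a linear force.
tau = magnetic_torque(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.1, 0.0]))
```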

pf

link (url) DOI [BibTex]


Temporal Difference Models: Model-Free Deep RL for Model-Based Control

Pong*, V., Gu*, S., Dalal, M., Levine, S.

6th International Conference on Learning Representations (ICLR), May 2018, *equal contribution (conference)

ei

link (url) Project Page [BibTex]


Robust Dense Mapping for Large-Scale Dynamic Environments

Barsan, I. A., Liu, P., Pollefeys, M., Geiger, A.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018, IEEE, International Conference on Robotics and Automation, May 2018 (inproceedings)

Abstract
We present a stereo-based dense mapping algorithm for large-scale dynamic urban environments. In contrast to other existing methods, we simultaneously reconstruct the static background, the moving objects, and the potentially moving but currently stationary objects separately, which is desirable for high-level mobile robotic tasks such as path planning in crowded environments. We use both instance-aware semantic segmentation and sparse scene flow to classify objects as either background, moving, or potentially moving, thereby ensuring that the system is able to model objects with the potential to transition from static to dynamic, such as parked cars. Given camera poses estimated from visual odometry, both the background and the (potentially) moving objects are reconstructed separately by fusing the depth maps computed from the stereo input. In addition to visual odometry, sparse scene flow is also used to estimate the 3D motions of the detected moving objects, in order to reconstruct them accurately. A map pruning technique is further developed to improve reconstruction accuracy and reduce memory consumption, leading to increased scalability. We evaluate our system thoroughly on the well-known KITTI dataset. Our system is capable of running on a PC at approximately 2.5Hz, with the primary bottleneck being the instance-aware semantic segmentation, which is a limitation we hope to address in future work.

avg

pdf Video Project Page Project Page [BibTex]


Wasserstein Auto-Encoders: Latent Dimensionality and Random Encoders

Rubenstein, P. K., Schölkopf, B., Tolstikhin, I.

Workshop at the 6th International Conference on Learning Representations (ICLR), May 2018 (conference)

ei

link (url) Project Page [BibTex]


Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning

Eysenbach, B., Gu, S., Ibarz, J., Levine, S.

6th International Conference on Learning Representations (ICLR), May 2018 (conference)

ei

Videos link (url) Project Page [BibTex]


Online Learning of a Memory for Learning Rates

(nominated for best paper award)

Meier, F., Kappler, D., Schaal, S.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018, IEEE, International Conference on Robotics and Automation, May 2018, accepted (inproceedings)

Abstract
The promise of learning to learn for robotics rests on the hope that by extracting some information about the learning process itself we can speed up subsequent similar learning tasks. Here, we introduce a computationally efficient online meta-learning algorithm that builds and optimizes a memory model of the optimal learning rate landscape from previously observed gradient behaviors. While performing task specific optimization, this memory of learning rates predicts how to scale currently observed gradients. After applying the gradient scaling our meta-learner updates its internal memory based on the observed effect its prediction had. Our meta-learner can be combined with any gradient-based optimizer, learns on the fly and can be transferred to new optimization tasks. In our evaluations we show that our meta-learning algorithm speeds up learning of MNIST classification and a variety of learning control tasks, either in batch or online learning settings.
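As a rough, runnable intuition for adapting per-parameter learning rates from observed gradient behavior, here is a simple sign-agreement heuristic in the spirit of Rprop; it is illustrative only and is not the paper's memory model:

```python
import numpy as np

class LearningRateMemory:
    """Toy per-parameter learning-rate adaptation: grow a rate while
    consecutive gradients agree in sign, shrink it when they disagree."""

    def __init__(self, dim, eta0=0.01, up=1.1, down=0.5):
        self.rates = np.full(dim, eta0)
        self.prev_grad = np.zeros(dim)
        self.up, self.down = up, down

    def step(self, params, grad):
        # Sign agreement with the previous gradient -> larger step next time.
        agree = np.sign(grad) == np.sign(self.prev_grad)
        self.rates *= np.where(agree, self.up, self.down)
        self.prev_grad = grad.copy()
        return params - self.rates * grad

# Minimizing ||x||^2 (gradient 2x) with adapted per-coordinate rates:
opt = LearningRateMemory(2)
x = np.array([5.0, -3.0])
for _ in range(100):
    x = opt.step(x, 2.0 * x)
```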

am

pdf video code [BibTex]


Tempered Adversarial Networks

Sajjadi, M. S. M., Parascandolo, G., Mehrjou, A., Schölkopf, B.

Workshop at the 6th International Conference on Learning Representations (ICLR), May 2018 (conference)

ei

arXiv [BibTex]


Learning Coupled Forward-Inverse Models with Combined Prediction Errors

Koert, D., Maeda, G., Neumann, G., Peters, J.

IEEE International Conference on Robotics and Automation, (ICRA), pages: 2433-2439, IEEE, May 2018 (conference)

ei

DOI Project Page [BibTex]


Learning Disentangled Representations with Wasserstein Auto-Encoders

Rubenstein, P. K., Schölkopf, B., Tolstikhin, I.

Workshop at the 6th International Conference on Learning Representations (ICLR), May 2018 (conference)

ei

link (url) Project Page [BibTex]


Shaping in Practice: Training Wheels to Learn Fast Hopping Directly in Hardware

Heim, S., Ruppert, F., Sarvestani, A., Sproewitz, A.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018, pages: 5076-5081, IEEE, International Conference on Robotics and Automation, May 2018 (inproceedings)

Abstract
Learning instead of designing robot controllers can greatly reduce the engineering effort required, while also emphasizing robustness. Despite considerable progress in simulation, applying learning directly in hardware is still challenging, in part due to the necessity to explore potentially unstable parameters. We explore the concept of shaping the reward landscape with training wheels: temporary modifications of the physical hardware that facilitate learning. We demonstrate the concept with a robot leg mounted on a boom learning to hop fast. This proof of concept embodies typical challenges such as instability and contact, while being simple enough to empirically map out and visualize the reward landscape. Based on our results we propose three criteria for designing effective training wheels for learning in robotics.

dlg

Video Youtube link (url) Project Page [BibTex]


Automatic Estimation of Modulation Transfer Functions

Bauer, M., Volchkov, V., Hirsch, M., Schölkopf, B.

IEEE International Conference on Computational Photography (ICCP), May 2018 (conference)

ei sf

DOI [BibTex]


Learning Sensor Feedback Models from Demonstrations via Phase-Modulated Neural Networks

Sutanto, G., Su, Z., Schaal, S., Meier, F.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018, IEEE, International Conference on Robotics and Automation, May 2018 (inproceedings)

am

pdf video [BibTex]


Causal Discovery Using Proxy Variables

Rojas-Carulla, M., Baroni, M., Lopez-Paz, D.

Workshop at the 6th International Conference on Learning Representations (ICLR), May 2018 (conference)

ei

link (url) [BibTex]


Sample and Feedback Efficient Hierarchical Reinforcement Learning from Human Preferences

Pinsler, R., Akrour, R., Osa, T., Peters, J., Neumann, G.

IEEE International Conference on Robotics and Automation, (ICRA), pages: 596-601, IEEE, May 2018 (conference)

ei

DOI Project Page [BibTex]


Group invariance principles for causal generative models

Besserve, M., Shajarisales, N., Schölkopf, B., Janzing, D.

Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 84, pages: 557-565, Proceedings of Machine Learning Research, (Editors: Amos Storkey and Fernando Perez-Cruz), PMLR, April 2018 (conference)

ei

link (url) [BibTex]


Boosting Variational Inference: an Optimization Perspective

Locatello, F., Khanna, R., Ghosh, J., Rätsch, G.

Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 84, pages: 464-472, Proceedings of Machine Learning Research, (Editors: Amos Storkey and Fernando Perez-Cruz), PMLR, April 2018 (conference)

ei

link (url) Project Page Project Page [BibTex]


Evaluating Low-Power Wireless Cyber-Physical Systems

Baumann, D., Mager, F., Singh, H., Zimmerling, M., Trimpe, S.

In Proceedings of the IEEE Workshop on Benchmarking Cyber-Physical Networks and Systems (CPSBench), pages: 13-18, IEEE Workshop on Benchmarking Cyber-Physical Networks and Systems (CPSBench), April 2018 (inproceedings)

ics

arXiv PDF DOI Project Page [BibTex]


Model-based Optical Flow: Layers, Learning, and Geometry

Wulff, J.

University of Tübingen, April 2018 (phdthesis)

Abstract
The estimation of motion in video sequences establishes temporal correspondences between pixels and surfaces and allows reasoning about a scene using multiple frames. Despite being a focus of research for over three decades, computing motion, or optical flow, remains challenging due to a number of difficulties, including the treatment of motion discontinuities and occluded regions, and the integration of information from more than two frames. One reason for these issues is that most optical flow algorithms only reason about the motion of pixels on the image plane, while not taking the image formation pipeline or the 3D structure of the world into account. One approach to address this uses layered models, which represent the occlusion structure of a scene and provide an approximation to the geometry. The goal of this dissertation is to show ways to inject additional knowledge about the scene into layered methods, making them more robust, faster, and more accurate. First, this thesis demonstrates the modeling power of layers using the example of motion blur in videos, which is caused by fast motion relative to the exposure time of the camera. Layers segment the scene into regions that move coherently while preserving their occlusion relationships. The motion of each layer therefore directly determines its motion blur. At the same time, the layered model captures complex blur overlap effects at motion discontinuities. Using layers, we can thus formulate a generative model for blurred video sequences, and use this model to simultaneously deblur a video and compute accurate optical flow for highly dynamic scenes containing motion blur. Next, we consider the representation of the motion within layers. Since, in a layered model, important motion discontinuities are captured by the segmentation into layers, the flow within each layer varies smoothly and can be approximated using a low dimensional subspace. 
We show how this subspace can be learned from training data using principal component analysis (PCA), and that flow estimation using this subspace is computationally efficient. The combination of the layered model and the low-dimensional subspace gives the best of both worlds, sharp motion discontinuities from the layers and computational efficiency from the subspace. Lastly, we show how layered methods can be dramatically improved using simple semantics. Instead of treating all layers equally, a semantic segmentation divides the scene into its static parts and moving objects. Static parts of the scene constitute a large majority of what is shown in typical video sequences; yet, in such regions optical flow is fully constrained by the depth structure of the scene and the camera motion. After segmenting out moving objects, we consider only static regions, and explicitly reason about the structure of the scene and the camera motion, yielding much better optical flow estimates. Furthermore, computing the structure of the scene allows to better combine information from multiple frames, resulting in high accuracies even in occluded regions. For moving regions, we compute the flow using a generic optical flow method, and combine it with the flow computed for the static regions to obtain a full optical flow field. By combining layered models of the scene with reasoning about the dynamic behavior of the real, three-dimensional world, the methods presented herein push the envelope of optical flow computation in terms of robustness, speed, and accuracy, giving state-of-the-art results on benchmarks and pointing to important future research directions for the estimation of motion in natural scenes.
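The PCA subspace idea can be sketched in a few lines: flatten training flow fields, learn a low-dimensional basis, and represent any new flow by a handful of coefficients. This is an illustrative sketch, not the thesis code:

```python
import numpy as np

def pca_flow_basis(flows, k):
    """flows: (n_samples, n_values) flattened flow fields.
    Returns the mean flow and the top-k principal directions."""
    mean = flows.mean(axis=0)
    _, _, Vt = np.linalg.svd(flows - mean, full_matrices=False)
    return mean, Vt[:k]

def reconstruct(flow, mean, basis):
    """Project a flow onto the k-dim subspace and reconstruct it."""
    coeff = basis @ (flow - mean)   # k coefficients instead of one value per pixel
    return mean + basis.T @ coeff

# Synthetic flows lying exactly in a 2D subspace are recovered exactly.
rng = np.random.default_rng(0)
components = rng.standard_normal((2, 50))
flows = rng.standard_normal((20, 2)) @ components
mean, basis = pca_flow_basis(flows, 2)
rec = reconstruct(flows[0], mean, basis)
```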

ps

Official link DOI Project Page [BibTex]


Cause-Effect Inference by Comparing Regression Errors

Blöbaum, P., Janzing, D., Washio, T., Shimizu, S., Schölkopf, B.

Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 84, pages: 900-909, Proceedings of Machine Learning Research, (Editors: Amos Storkey and Fernando Perez-Cruz), PMLR, April 2018 (conference)

ei

link (url) [BibTex]


Will People Like Your Image? Learning the Aesthetic Space

Schwarz, K., Wieschollek, P., Lensch, H. P. A.

2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pages: 2048-2057, March 2018 (conference)

ei

DOI [BibTex]


Leveraging the Crowd to Detect and Reduce the Spread of Fake News and Misinformation

Kim, J., Tabibian, B., Oh, A., Schölkopf, B., Gomez Rodriguez, M.

Proceedings of the 11th ACM International Conference on Web Search and Data Mining (WSDM), pages: 324-332, (Editors: Yi Chang, Chengxiang Zhai, Yan Liu, and Yoelle Maarek), ACM, February 2018 (conference)

ei

DOI Project Page Project Page [BibTex]


RayNet: Learning Volumetric 3D Reconstruction with Ray Potentials

Paschalidou, D., Ulusoy, A. O., Schmitt, C., Gool, L., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2018, 2018 (inproceedings)

Abstract
In this paper, we consider the problem of reconstructing a dense 3D model using images captured from different views. Recent methods based on convolutional neural networks (CNN) allow learning the entire task from data. However, they do not incorporate the physics of image formation such as perspective geometry and occlusion. Instead, classical approaches based on Markov Random Fields (MRF) with ray-potentials explicitly model these physical processes, but they cannot cope with large surface appearance variations across different viewpoints. In this paper, we propose RayNet, which combines the strengths of both frameworks. RayNet integrates a CNN that learns view-invariant feature representations with an MRF that explicitly encodes the physics of perspective projection and occlusion. We train RayNet end-to-end using empirical risk minimization. We thoroughly evaluate our approach on challenging real-world datasets and demonstrate its benefits over a piece-wise trained baseline, hand-crafted models as well as other learning-based approaches.

avg

pdf suppmat Video Project Page code Poster Project Page [BibTex]


End-to-end Recovery of Human Shape and Pose

Kanazawa, A., Black, M. J., Jacobs, D. W., Malik, J.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2018, 2018 (inproceedings)

Abstract
We describe Human Mesh Recovery (HMR), an end-to-end framework for reconstructing a full 3D mesh of a human body from a single RGB image. In contrast to most current methods that compute 2D or 3D joint locations, we produce a richer and more useful mesh representation that is parameterized by shape and 3D joint angles. The main objective is to minimize the reprojection loss of keypoints, which allows our model to be trained using in-the-wild images that only have ground-truth 2D annotations. However, the reprojection loss alone is highly underconstrained. In this work we address this problem by introducing an adversary trained to tell whether human body shape and pose parameters are real or not using a large database of 3D human meshes. We show that HMR can be trained with and without using any paired 2D-to-3D supervision. We do not rely on intermediate 2D keypoint detections and infer 3D pose and shape parameters directly from image pixels. Our model runs in real-time given a bounding box containing the person. We demonstrate our approach on various images in-the-wild and outperform previous optimization-based methods that output 3D meshes and show competitive results on tasks such as 3D joint location estimation and part segmentation.
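The keypoint reprojection objective can be illustrated with a minimal pinhole-camera version (HMR itself uses a weak-perspective camera model; the function and variable names here are illustrative):

```python
import numpy as np

def reprojection_loss(joints_3d, K, keypoints_2d, visible):
    """L1 loss between projected 3D joints and annotated 2D keypoints,
    summed over visible keypoints only."""
    proj = (K @ joints_3d.T).T          # homogeneous image coordinates
    proj = proj[:, :2] / proj[:, 2:3]   # perspective divide
    return np.abs(proj - keypoints_2d)[visible].sum()

# Two 3D joints whose projections match the annotations give zero loss.
K = np.eye(3)
joints_3d = np.array([[0.5, 0.5, 1.0], [1.0, -1.0, 2.0]])
keypoints_2d = np.array([[0.5, 0.5], [0.5, -0.5]])
loss = reprojection_loss(joints_3d, K, keypoints_2d, np.array([True, True]))
```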

ps

pdf code project video Project Page [BibTex]


Functional Programming for Modular Bayesian Inference

Ścibior, A., Kammar, O., Ghahramani, Z.

Proceedings of the ACM on Functional Programming (ICFP), 2(Article No. 83):1-29, ACM, 2018 (conference)

ei

DOI Project Page [BibTex]


Learning to select computations

Callaway, F., Gul, S., Krueger, P., Griffiths, T. L., Lieder, F.

In Uncertainty in Artificial Intelligence: Proceedings of the Thirty-Fourth Conference, 2018 (inproceedings)

re

Project Page [BibTex]


Automatic Bayesian Density Analysis

Vergari, A., Molina, A., Peharz, R., Ghahramani, Z., Kersting, K., Valera, I.

2018 (conference) Submitted

ei

arXiv [BibTex]


A virtual reality environment for experiments in assistive robotics and neural interfaces

Bustamante, S.

Graduate School of Neural Information Processing, Eberhard Karls Universität Tübingen, Germany, 2018 (mastersthesis)

ei

PDF [BibTex]


Optimal Trajectory Generation and Learning Control for Robot Table Tennis

Koc, O.

Technical University Darmstadt, Germany, 2018 (phdthesis)

ei

[BibTex]


Enhanced Non-Steady Gliding Performance of the MultiMo-Bat through Optimal Airfoil Configuration and Control Strategy

Kim, H., Woodward, M. A., Sitti, M.

In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 1382-1388, 2018 (inproceedings)

pi

[BibTex]


Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients

Balles, L., Hennig, P.

In Proceedings of the 35th International Conference on Machine Learning (ICML), 2018 (inproceedings) Accepted

Abstract
The ADAM optimizer is exceedingly popular in the deep learning community. Often it works very well, sometimes it doesn't. Why? We interpret ADAM as a combination of two aspects: for each weight, the update direction is determined by the sign of stochastic gradients, whereas the update magnitude is determined by an estimate of their relative variance. We disentangle these two aspects and analyze them in isolation, gaining insight into the mechanisms underlying ADAM. This analysis also extends recent results on adverse effects of ADAM on generalization, isolating the sign aspect as the problematic one. Transferring the variance adaptation to SGD gives rise to a novel method, completing the practitioner's toolbox for problems where ADAM fails.
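For reference, a bare NumPy Adam step makes the factorization discussed above explicit: the update mhat/sqrt(vhat) is sign(mhat) times a magnitude that shrinks as the gradient's variance grows relative to its mean. This is a textbook Adam sketch, not the paper's code:

```python
import numpy as np

def adam_step(param, grad, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam step with bias correction. state = (m, v, t)."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad            # first moment (mean estimate)
    v = b2 * v + (1 - b2) * grad**2         # second moment (uncentered variance)
    mhat = m / (1 - b1**t)
    vhat = v / (1 - b2**t)
    update = mhat / (np.sqrt(vhat) + eps)   # = sign(mhat) * |mhat|/sqrt(vhat)
    return param - lr * update, (m, v, t)

# With constant gradients, |mhat|/sqrt(vhat) -> 1, so the step is lr * sign(grad).
p, state = adam_step(1.0, 2.0, (0.0, 0.0, 0))
```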

pn

link (url) Project Page [BibTex]


Deep Marching Cubes: Learning Explicit Surface Representations

Liao, Y., Donne, S., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2018, 2018 (inproceedings)

Abstract
Existing learning-based solutions to 3D surface prediction cannot be trained end-to-end as they operate on intermediate representations (e.g., TSDF) from which 3D surface meshes must be extracted in a post-processing step (e.g., via the marching cubes algorithm). In this paper, we investigate the problem of end-to-end 3D surface prediction. We first demonstrate that the marching cubes algorithm is not differentiable and propose an alternative differentiable formulation which we insert as a final layer into a 3D convolutional neural network. We further propose a set of loss functions which allow for training our model with sparse point supervision. Our experiments demonstrate that the model allows for predicting sub-voxel accurate 3D shapes of arbitrary topology. Additionally, it learns to complete shapes and to separate an object's inside from its outside even in the presence of sparse and incomplete ground truth. We investigate the benefits of our approach on the task of inferring shapes from 3D point clouds. Our model is flexible and can be combined with a variety of shape encoder and shape inference techniques.

avg

pdf suppmat Video Project Page Poster Project Page [BibTex]


Semantic Visual Localization

Schönberger, J., Pollefeys, M., Geiger, A., Sattler, T.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2018, 2018 (inproceedings)

Abstract
Robust visual localization under a wide range of viewing conditions is a fundamental problem in computer vision. Handling the difficult cases of this problem is not only very challenging but also of high practical relevance, e.g., in the context of life-long localization for augmented reality or autonomous robots. In this paper, we propose a novel approach based on a joint 3D geometric and semantic understanding of the world, enabling it to succeed under conditions where previous approaches failed. Our method leverages a novel generative model for descriptor learning, trained on semantic scene completion as an auxiliary task. The resulting 3D descriptors are robust to missing observations by encoding high-level 3D geometric and semantic information. Experiments on several challenging large-scale localization datasets demonstrate reliable localization under extreme viewpoint, illumination, and geometry changes.

avg

pdf suppmat Poster Project Page [BibTex]


Comparison-Based Random Forests

Haghiri, S., Garreau, D., von Luxburg, U.

International Conference on Machine Learning (ICML), 2018 (conference)

[BibTex]


Which Training Methods for GANs do actually Converge?

Mescheder, L., Geiger, A., Nowozin, S.

International Conference on Machine Learning (ICML), 2018 (conference)

Abstract
Recent work has shown local convergence of GAN training for absolutely continuous data and generator distributions. In this paper, we show that the requirement of absolute continuity is necessary: we describe a simple yet prototypical counterexample showing that in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent. Furthermore, we discuss regularization strategies that were recently proposed to stabilize GAN training. Our analysis shows that GAN training with instance noise or zero-centered gradient penalties converges. On the other hand, we show that Wasserstein-GANs and WGAN-GP with a finite number of discriminator updates per generator update do not always converge to the equilibrium point. We discuss these results, leading us to a new explanation for the stability problems of GAN training. Based on our analysis, we extend our convergence results to more general GANs and prove local convergence for simplified gradient penalties even if the generator and data distributions lie on lower dimensional manifolds. We find these penalties to work well in practice and use them to learn high-resolution generative image models for a variety of datasets with little hyperparameter tuning.
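The zero-centered gradient penalty on real data, R1 = γ/2 · E[‖∇x D(x)‖²], can be written in closed form for a toy logistic discriminator; real implementations compute the gradient with autodiff, so this is purely illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def r1_penalty(w, real_x, gamma=10.0):
    """Zero-centered gradient penalty for D(x) = sigmoid(w.x),
    using the analytic gradient grad_x D = D(1-D) w."""
    d = sigmoid(real_x @ w)
    grad_norm_sq = (d * (1.0 - d))**2 * (w @ w)
    return 0.5 * gamma * grad_norm_sq.mean()

# At x = 0 with w = (1, 0): D = 0.5, so ||grad||^2 = 0.0625 and the
# penalty is 0.5 * 10 * 0.0625 = 0.3125.
penalty = r1_penalty(np.array([1.0, 0.0]), np.zeros((1, 2)))
```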

avg

code video paper supplement slides poster Project Page [BibTex]


L4: Practical loss-based stepsize adaptation for deep learning

Rolinek, M., Martius, G.

In Advances in Neural Information Processing Systems 31 (NeurIPS 2018), pages: 6434-6444, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 2018 (inproceedings)

al

Github link (url) Project Page [BibTex]


Probabilistic Deep Learning using Random Sum-Product Networks

Peharz, R., Vergari, A., Stelzner, K., Molina, A., Trapp, M., Kersting, K., Ghahramani, Z.

2018 (conference) Submitted

ei

arXiv [BibTex]


k–SVRG: Variance Reduction for Large Scale Optimization

Raj, A., Stich, S.

In 2018 (inproceedings) Submitted

ei

[BibTex]


Distribution-Dissimilarities in Machine Learning

Simon-Gabriel, C. J.

Eberhard Karls Universität Tübingen, Germany, 2018 (phdthesis)

ei

[BibTex]


Collectives of Spinning Mobile Microrobots for Navigation and Object Manipulation at the Air-Water Interface

Wang, W., Kishore, V., Koens, L., Lauga, E., Sitti, M.

In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 1-9, 2018 (inproceedings)

pi

[BibTex]


Learning 3D Shape Completion from Laser Scan Data with Weak Supervision

Stutz, D., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2018, 2018 (inproceedings)

Abstract
3D shape completion from partial point clouds is a fundamental problem in computer vision and computer graphics. Recent approaches can be characterized as either data-driven or learning-based. Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations. Learning-based approaches, in contrast, avoid the expensive optimization step and instead directly predict the complete shape from the incomplete observations using deep neural networks. However, full supervision is required, which is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks, resulting in efficient shape completion without sacrificing accuracy. Tackling 3D shape completion of cars on ShapeNet and KITTI, we demonstrate that the proposed amortized maximum likelihood approach is able to compete with a fully supervised baseline and a state-of-the-art data-driven approach while being significantly faster. On ModelNet, we additionally show that the approach is able to generalize to other object categories as well.

avg

pdf suppmat Project Page Poster Project Page [BibTex]