27 results (BibTeX)

2017


Real-time Perception meets Reactive Motion Generation

Kappler, D., Meier, F., Issac, J., Mainprice, J., Cifuentes, C., Wüthrich, M., Berenz, V., Schaal, S., Ratliff, N., Bohg, J.

In Robotics: Science and Systems, 2017 (inproceedings) Submitted

Abstract
We address the challenging problem of robotic grasping and manipulation in the presence of uncertainty. This uncertainty is due to noisy sensing, inaccurate models and hard-to-predict environment dynamics. Our approach emphasizes the importance of continuous, real-time perception and its tight integration with reactive motion generation methods. We present a fully integrated system where real-time object and robot tracking as well as ambient world modeling provide the necessary input to feedback controllers and continuous motion optimizers. Specifically, they provide attractive and repulsive potentials based on which the controllers and motion optimizers can compute movement policies online at different time intervals. We extensively evaluate the proposed system on a real robotic platform in four scenarios that exhibit either challenging workspace geometry or a dynamic environment. We compare the proposed integrated system with a more traditional sense-plan-act approach that is still widely used. In 333 experiments, we show the robustness and accuracy of the proposed system.
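The attractive and repulsive potentials mentioned in the abstract can be illustrated with a minimal potential-field update. The quadratic attractive potential, inverse-distance repulsive potential, gains and influence radius below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def potential_field_step(q, goal, obstacles, k_att=1.0, k_rep=0.5, rho0=1.0, step=0.05):
    """One reactive update: descend the sum of an attractive quadratic
    potential toward the goal and repulsive potentials around obstacles."""
    grad = k_att * (q - goal)  # gradient of 0.5 * k_att * ||q - goal||^2
    for obs in obstacles:
        d = np.linalg.norm(q - obs)
        if 0 < d < rho0:  # repulsion acts only inside the influence radius rho0
            grad += k_rep * (1.0 / rho0 - 1.0 / d) / d**2 * (q - obs) / d
    return q - step * grad  # gradient-descent step on the combined potential
```

Recomputing this step at every control cycle, as perception updates the goal and obstacle estimates, is the basic reactive loop the integrated system builds on.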

am

arxiv [BibTex]



Efficient 2D and 3D Facade Segmentation using Auto-Context

Gadde, R., Jampani, V., Marlet, R., Gehler, P.

IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017 (article)

Abstract
This paper introduces a fast and efficient segmentation technique for 2D images and 3D point clouds of building facades. Facades of buildings are highly structured, and consequently most methods proposed for this problem aim to exploit this strong prior information. Contrary to most prior work, we describe a system that is almost domain independent and consists of standard segmentation methods. We train a sequence of boosted decision trees using auto-context features, learned via stacked generalization. We find that this technique performs better than, or comparably to, all previously published methods, and we present empirical results on all available 2D and 3D facade benchmark datasets. The proposed method is simple to implement, easy to extend, and very efficient at test-time inference.
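The auto-context idea the abstract relies on — feeding each stage's predictions back in as an extra feature for the next stage — can be sketched in a few lines. The paper trains boosted decision trees with stacked (cross-validated) generalization; the logistic-regression stages below are a deliberate simplification for illustration.

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autocontext(X, y, n_stages=3, lr=0.5, iters=200):
    """Auto-context stacking (sketch): each stage sees the raw features
    plus the previous stage's predicted probability as a 'context' feature."""
    stages, ctx = [], np.zeros((len(X), 1))
    for _ in range(n_stages):
        Z = np.hstack([X, ctx, np.ones((len(X), 1))])  # features + context + bias
        w = np.zeros(Z.shape[1])
        for _ in range(iters):  # plain gradient ascent on the log-likelihood
            w += lr * Z.T @ (y - _sigmoid(Z @ w)) / len(y)
        stages.append(w)
        ctx = _sigmoid(Z @ w).reshape(-1, 1)  # context for the next stage
    return stages

def predict_autocontext(stages, X):
    ctx = np.zeros((len(X), 1))
    for w in stages:
        ctx = _sigmoid(np.hstack([X, ctx, np.ones((len(X), 1))]) @ w).reshape(-1, 1)
    return ctx.ravel()
```

In the facade setting the context feature would be a per-pixel (or per-point) label probability map, so later stages can exploit the structured layout of windows, doors and balconies without hand-crafted priors.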

ps

arXiv [BibTex]



Reflectance Adaptive Filtering Improves Intrinsic Image Estimation

Nestmeyer, T., Gehler, P.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017 (inproceedings)

ps

pre-print Project Page [BibTex]



Dynamic Time-of-Flight

Schober, M., Adam, A., Yair, O., Mazor, S., Nowozin, S.

The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017 (conference) Accepted

ei pn

[BibTex]



Discovering Causal Signals in Images

Lopez-Paz, D., Nishihara, R., Chintala, S., Schölkopf, B., Bottou, L.

The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017 (conference) Accepted

ei

[BibTex]



Flexible Spatio-Temporal Networks for Video Prediction

Lu, C., Hirsch, M., Schölkopf, B.

The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017 (conference) Accepted

ei

[BibTex]



Frequency Peak Features for Low-Channel Classification in Motor Imagery Paradigms

Jayaram, V., Schölkopf, B., Grosse-Wentrup, M.

Proceedings of the 8th International IEEE EMBS Conference on Neural Engineering (NER 2017), 2017 (conference) Accepted

ei

[BibTex]



AdaGAN: Boosting Generative Models

Tolstikhin, I., Gelly, S., Bousquet, O., Simon-Gabriel, C., Schölkopf, B.

2017 (techreport) Submitted

ei

Arxiv [BibTex]



Learning Feedback Terms for Reactive Planning and Control

Rai, A., Sutanto, G., Schaal, S., Meier, F.

Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2017 (conference)

am

pdf video [BibTex]



Video Propagation Networks

Jampani, V., Gadde, R., Gehler, P.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017 (inproceedings)

ps

arXiv Preprint [BibTex]



Detailed, accurate, human shape estimation from clothed 3D scan sequences

Zhang, C., Pujades, S., Black, M. J., Pons-Moll, G.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, Spotlight (inproceedings)

Abstract
We address the problem of estimating human body shape from 3D scans over time. Reliable estimation of 3D body shape is necessary for many applications including virtual try-on, health monitoring, and avatar creation for virtual reality. Scanning bodies in minimal clothing, however, presents a practical barrier to these applications. We address this problem by estimating body shape under clothing from a sequence of 3D scans. Previous methods that have exploited statistical models of body shape produce overly smooth shapes lacking personalized details. In this paper we contribute a new approach to recover not only an approximate shape of the person, but also their detailed shape. Our approach allows the estimated shape to deviate from a parametric model to fit the 3D scans. We demonstrate the method using high quality 4D data as well as sequences of visual hulls extracted from multi-view images. We also make available a new high quality 4D dataset that enables quantitative evaluation. Our method outperforms the previous state of the art, both qualitatively and quantitatively.

ps

arxiv_preprint [BibTex]



Dynamic FAUST: Registering Human Bodies in Motion

Bogo, F., Romero, J., Pons-Moll, G., Black, M. J.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, Oral (inproceedings)

ps

[BibTex]



Path Integral Guided Policy Search

Chebotar, Y., Kalakrishnan, M., Yahya, A., Li, A., Schaal, S., Levine, S.

Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), April 2017 (conference)

am

pdf video [BibTex]



DeepCoder: Learning to Write Programs

Balog, M., Gaunt, A., Brockschmidt, M., Nowozin, S., Tarlow, D.

5th International Conference on Learning Representations (ICLR), 2017 (conference) Accepted

ei

Arxiv [BibTex]



Multi-frame blind image deconvolution through split frequency-phase recovery

Gauci, A., Abela, J., Cachia, E., Hirsch, M., ZarbAdami, K.

Proc. SPIE 10225, Eighth International Conference on Graphic and Image Processing (ICGIP 2016), pages: 1022511, (Editors: Yulin Wang, Tuan D. Pham, Vit Vozenilek, David Zhang, Yi Xie), 2017 (conference)

ei

DOI [BibTex]



Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse IMUs

Marcard, T., Rosenhahn, B., Black, M. J., Pons-Moll, G.

Computer Graphics Forum 36(2), Proceedings of the 38th Annual Conference of the European Association for Computer Graphics (Eurographics), 2017 (article)

Abstract
We address the problem of making human motion capture in the wild more practical by using a small set of inertial sensors attached to the body. Since the problem is heavily under-constrained, previous methods either use a large number of sensors, which is intrusive, or they require additional video input. We take a different approach and constrain the problem by: (i) making use of a realistic statistical body model that includes anthropometric constraints and (ii) using a joint optimization framework to fit the model to orientation and acceleration measurements over multiple frames. The resulting tracker Sparse Inertial Poser (SIP) enables motion capture using only 6 sensors (attached to the wrists, lower legs, back and head) and works for arbitrary human motions. Experiments on the recently released TNT15 dataset show that, using the same number of sensors, SIP achieves higher accuracy than the dataset baseline without using any video data. We further demonstrate the effectiveness of SIP on newly recorded challenging motions in outdoor scenarios such as climbing or jumping over a wall.
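The core idea of fitting a pose to inertial measurements over multiple frames can be illustrated with a toy 2-D version. The single tilt angle, gravity-alignment objective and grid search below are illustrative assumptions and not SIP's actual SMPL-based multi-frame objective.

```python
import numpy as np

def fit_tilt(accelerometer_frames):
    """Toy inertial fitting: find a single tilt angle theta such that gravity,
    rotated into the sensor frame, best explains the accelerometer readings
    across all frames (least squares, solved by coarse grid search)."""
    g = np.array([0.0, -9.81])  # gravity in the world frame (m/s^2)

    def cost(theta):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])  # world-to-sensor rotation
        return sum(np.sum((a - R @ g) ** 2) for a in accelerometer_frames)

    thetas = np.linspace(-np.pi, np.pi, 2001)
    return thetas[np.argmin([cost(t) for t in thetas])]
```

Optimizing over all frames at once is what makes the heavily under-constrained problem tractable: a single noisy frame is ambiguous, but the sum of residuals over a sequence pins the angle down.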

ps

video pdf [BibTex]



Memristor-based control methods for a bio-inspired robot

Tetzlaff, R., Ascoli, A., Baumann, D., Hild, M.

International Conference on Memristive Materials, Devices & Systems, April 2017 (conference) Accepted

am

[BibTex]



Capturing Hand-Object Interaction and Reconstruction of Manipulated Objects

Tzionas, D.

University of Bonn, 2017 (phdthesis)

Abstract
Hand motion capture with an RGB-D sensor has recently gained a lot of research attention; however, even the most recent approaches focus on the case of a single isolated hand. We focus instead on hands that interact with other hands or with a rigid or articulated object. Our framework successfully captures motion in such scenarios by combining a generative model with discriminatively trained salient points, collision detection and physics simulation to achieve a low tracking error with physically plausible poses. All components are unified in a single objective function that can be optimized with standard optimization techniques. We initially assume a priori knowledge of the object's shape and skeleton. In the case of unknown object shape, existing 3D reconstruction methods capitalize on distinctive geometric or texture features. These methods, though, fail for textureless and highly symmetric objects like household articles, mechanical parts or toys. We show that extracting 3D hand motion for in-hand scanning effectively facilitates the reconstruction of such objects, and we fuse the rich additional information of hands into a 3D reconstruction pipeline. Finally, although shape reconstruction is enough for rigid objects, there is a lack of tools that build rigged models of articulated objects that deform realistically using RGB-D data. We propose a method that creates a fully rigged model consisting of a watertight mesh, embedded skeleton and skinning weights by employing a combination of deformable mesh tracking, motion segmentation based on spectral clustering and skeletonization based on mean curvature flow.

ps

[BibTex]


Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets

Klein, A., Falkner, S., Bartels, S., Hennig, P., Hutter, F.

Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS 2017), 52, JMLR Workshop and Conference Proceedings, (Editors: Singh, Aarti and Zhu, Jerry), 2017 (conference) Accepted

pn

[BibTex]



Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks

Mescheder, L., Nowozin, S., Geiger, A.

Arxiv, 2017 (article)

Abstract
Variational Autoencoders (VAEs) are expressive latent variable models that can be used to learn complex probability distributions from training data. However, the quality of the resulting model crucially relies on the expressiveness of the inference model used during training. We introduce Adversarial Variational Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily expressive inference models. We achieve this by introducing an auxiliary discriminative network that allows us to rephrase the maximum-likelihood problem as a two-player game, hence establishing a principled connection between VAEs and Generative Adversarial Networks (GANs). We show that in the nonparametric limit our method yields an exact maximum-likelihood assignment for the parameters of the generative model, as well as the exact posterior distribution over the latent variables given an observation. Contrary to competing approaches which combine VAEs with GANs, our approach has a clear theoretical justification, retains most advantages of standard Variational Autoencoders and is easy to implement.

avg

pdf [BibTex]


Virtual vs. Real: Trading Off Simulations and Physical Experiments in Reinforcement Learning with Bayesian Optimization

Marco, A., Berkenkamp, F., Hennig, P., Schoellig, A., Krause, A., Schaal, S., Trimpe, S.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), May 2017 (inproceedings) Accepted

am pn

PDF [BibTex]



Model-Based Policy Search for Automatic Tuning of Multivariate PID Controllers

Doerr, A., Nguyen-Tuong, D., Marco, A., Schaal, S., Trimpe, S.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), May 2017 (inproceedings) Accepted

am

PDF [BibTex]



Event-based State Estimation: An Emulation-based Approach

Trimpe, S.

IET Control Theory & Applications, 2017 (article) Accepted

Abstract
An event-based state estimation approach for reducing communication in a networked control system is proposed. Multiple distributed sensor agents observe a dynamic process and sporadically transmit their measurements to estimator agents over a shared bus network. Local event-triggering protocols ensure that data is transmitted only when necessary to meet a desired estimation accuracy. The event-based design is shown to emulate the performance of a centralised state observer design up to guaranteed bounds, but with reduced communication. The stability results for state estimation are extended to the distributed control system that results when the local estimates are used for feedback control. Results from numerical simulations and hardware experiments illustrate the effectiveness of the proposed approach in reducing network communication.
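The "transmit only when necessary" mechanism can be illustrated with a scalar send-on-delta sketch. The paper's emulation-based design triggers on the difference to a centralised observer and covers multiple distributed agents over a shared bus; the scalar model, fixed threshold and reset-to-measurement update below are simplifying assumptions.

```python
def run_event_based_estimator(measurements, a=0.9, delta=0.5):
    """Event-triggered estimation (sketch): a scalar process x_{k+1} = a * x_k
    is observed by a sensor that transmits y_k only when the estimator's
    prediction would be off by more than delta (send-on-delta trigger)."""
    xhat, estimates, sent = 0.0, [], 0
    for y in measurements:
        pred = a * xhat               # estimator's open-loop prediction
        if abs(y - pred) > delta:     # event trigger: transmit only if needed
            xhat, sent = y, sent + 1  # estimator jumps to the measurement
        else:
            xhat = pred               # no communication; use the prediction
        estimates.append(xhat)
    return estimates, sent
```

By construction the estimation error stays within the trigger threshold at every step while the sensor stays silent whenever its model already predicts the measurement well, which is the communication saving the abstract describes.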

am

Supplementary material PDF DOI [BibTex]


Distilling Information Reliability and Source Trustworthiness from Digital Traces

Tabibian, B., Valera, I., Farajtabar, M., Song, L., Schölkopf, B., Gomez Rodriguez, M.

Proceedings of the 26th International Conference on World Wide Web (WWW2017), 2017 (conference) Accepted

ei

Project [BibTex]



DiSMEC – Distributed Sparse Machines for Extreme Multi-label Classification

Babbar, R., Schölkopf, B.

Proceedings of the Tenth ACM International Conference on Web Search and Data Mining (WSDM 2017), pages: 721-729, 2017 (conference)

ei

DOI [BibTex]



Probabilistic Articulated Real-Time Tracking for Robot Manipulation

Garcia Cifuentes, C., Issac, J., Wüthrich, M., Schaal, S., Bohg, J.

IEEE Robotics and Automation Letters (RA-L), 2(2):577-584, April 2017 (article)

Abstract
We propose a probabilistic filtering method which fuses joint measurements with depth images to yield a precise, real-time estimate of the end-effector pose in the camera frame. This avoids the need for frame transformations when using it in combination with visual object tracking methods. Precision is achieved by modeling and correcting biases in the joint measurements as well as inaccuracies in the robot model, such as poor extrinsic camera calibration. We make our method computationally efficient through a principled combination of Kalman filtering of the joint measurements and asynchronous depth-image updates based on the Coordinate Particle Filter. We quantitatively evaluate our approach on a dataset recorded from a real robotic platform, annotated with ground truth from a motion capture system. We show that our approach is robust and accurate even under challenging conditions such as fast motion, significant and long-term occlusions, and time-varying biases. We release the dataset along with open-source code of our approach to allow for quantitative comparison with alternative approaches.
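The Kalman-filtering component of the fusion can be illustrated in scalar form. The constant-state model and noise variances below are illustrative assumptions; the paper's method additionally models measurement biases and combines the Kalman update with asynchronous depth-image updates via the Coordinate Particle Filter.

```python
def kalman_1d(zs, q=1e-3, r=0.1, x0=0.0, p0=1.0):
    """Scalar Kalman filter (sketch): random-walk state x_k = x_{k-1} + w,
    noisy measurements z_k = x_k + v, with process/measurement variances q, r."""
    x, p, out = x0, p0, []
    for z in zs:
        p = p + q              # predict: variance grows by the process noise
        k = p / (p + r)        # Kalman gain: trust in the new measurement
        x = x + k * (z - x)    # update: move the estimate toward z
        p = (1 - k) * p        # posterior variance shrinks after the update
        out.append(x)
    return out
```

Running one such filter per joint gives a smooth, bias-correctable joint-state estimate between the slower depth-image updates, which is the division of labor the abstract describes.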

am

arXiv video code and dataset PDF DOI Project Page [BibTex]