

2020


Learning Sensory-Motor Associations from Demonstration

Berenz, V., Bjelic, A., Herath, L., Mainprice, J.

29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2020), August 2020 (conference) Accepted

Abstract
We propose a method which generates reactive robot behavior learned from human demonstration. In order to do so, we use the Playful programming language which is based on the reactive programming paradigm. This allows us to represent the learned behavior as a set of associations between sensor and motor primitives in a human readable script. Distinguishing between sensor and motor primitives introduces a supplementary level of granularity and more importantly enforces feedback, increasing adaptability and robustness. As the experimental section shows, useful behaviors may be learned from a single demonstration covering a very limited portion of the task space.

am

[BibTex]



Self-supervised motion deblurring

Liu, P., Janai, J., Pollefeys, M., Sattler, T., Geiger, A.

IEEE Robotics and Automation Letters, 2020 (article)

Abstract
Motion blurry images challenge many computer vision algorithms, e.g., feature detection, motion estimation, or object recognition. Deep convolutional neural networks are state-of-the-art for image deblurring. However, obtaining training data with corresponding sharp and blurry image pairs can be difficult. In this paper, we present a differentiable reblur model for self-supervised motion deblurring, which enables the network to learn from real-world blurry image sequences without relying on sharp images for supervision. Our key insight is that motion cues obtained from consecutive images yield sufficient information to inform the deblurring task. We therefore formulate deblurring as an inverse rendering problem, taking into account the physical image formation process: we first predict two deblurred images from which we estimate the corresponding optical flow. Using these predictions, we re-render the blurred images and minimize the difference with respect to the original blurry inputs. We use both synthetic and real datasets for experimental evaluation. Our experiments demonstrate that self-supervised single-image deblurring is feasible and leads to visually compelling results.
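The reblur-and-compare objective described above can be caricatured in a few lines of numpy. This is an illustrative sketch under simplifying assumptions, not the paper's model: a linear cross-fade between the two predicted sharp frames stands in for the flow-based warping, and the function names are ours.

```python
import numpy as np

def reblur(sharp_a, sharp_b, steps=8):
    # Physical blur model (simplified): a blurry frame is the time-average of
    # the sharp scene over the exposure. Linear interpolation between the two
    # predicted sharp frames stands in for flow-based warping here.
    ts = np.linspace(0.0, 1.0, steps)
    return np.mean([(1 - t) * sharp_a + t * sharp_b for t in ts], axis=0)

def self_supervised_loss(blurry, sharp_a, sharp_b):
    # Minimize the difference between the re-rendered blur and the observed
    # blurry input; no sharp ground truth is needed for supervision.
    return float(np.abs(reblur(sharp_a, sharp_b) - blurry).mean())
```

If both predicted frames equal the blurry input, the loss vanishes; any mismatch between the re-rendered blur and the observation is penalized.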

avg

pdf Project Page Blog [BibTex]



Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image

Paschalidou, D., van Gool, L., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Humans perceive the 3D world as a set of distinct objects that are characterized by various low-level (geometry, reflectance) and high-level (connectivity, adjacency, symmetry) properties. Recent methods based on convolutional neural networks (CNNs) demonstrated impressive progress in 3D reconstruction, even when using a single 2D image as input. However, the majority of these methods focus on recovering the local 3D geometry of an object without considering its part-based decomposition or relations between parts. We address this challenging problem by proposing a novel formulation that allows us to jointly recover the geometry of a 3D object as a set of primitives as well as their latent hierarchical structure without part-level supervision. Our model recovers the higher level structural decomposition of various objects in the form of a binary tree of primitives, where simple parts are represented with fewer primitives and more complex parts are modeled with more components. Our experiments on the ShapeNet and D-FAUST datasets demonstrate that considering the organization of parts indeed facilitates reasoning about 3D geometry.

avg

pdf suppmat Video 2 Project Page Slides Poster Video 1 [BibTex]



Excursion Search for Constrained Bayesian Optimization under a Limited Budget of Failures

Marco, A., Rohr, A. V., Baumann, D., Hernández-Lobato, J. M., Trimpe, S.

2020 (proceedings) In revision

Abstract
When learning to ride a bike, a child falls down a number of times before achieving the first success. As falling down usually has only mild consequences, it can be seen as a tolerable failure in exchange for a faster learning process, as it provides rich information about an undesired behavior. In the context of Bayesian optimization under unknown constraints (BOC), typical strategies for safe learning explore conservatively and avoid failures by all means. On the other side of the spectrum, non-conservative BOC algorithms that allow failing may fail an unbounded number of times before reaching the optimum. In this work, we propose a novel decision maker grounded in control theory that controls the amount of risk we allow in the search as a function of a given budget of failures. Empirical validation shows that our algorithm uses the failure budget more efficiently in a variety of optimization experiments, and generally achieves lower regret, than state-of-the-art methods. In addition, we propose an original algorithm for unconstrained Bayesian optimization inspired by the notion of excursion sets in stochastic processes, upon which the failures-aware algorithm is built.

ics am

arXiv code (python) PDF [BibTex]


A Real-Robot Dataset for Assessing Transferability of Learned Dynamics Models

Agudelo-España, D., Zadaianchuk, A., Wenk, P., Garg, A., Akpo, J., Grimminger, F., Viereck, J., Naveau, M., Righetti, L., Martius, G., Krause, A., Schölkopf, B., Bauer, S., Wüthrich, M.

IEEE International Conference on Robotics and Automation (ICRA), 2020 (conference) Accepted

am al ei mg

Project Page PDF [BibTex]



Towards Unsupervised Learning of Generative Models for 3D Controllable Image Synthesis

Liao, Y., Schwarz, K., Mescheder, L., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
In recent years, Generative Adversarial Networks have achieved impressive results in photorealistic image synthesis. This progress nurtures hopes that one day the classical rendering pipeline can be replaced by efficient models that are learned directly from images. However, current image synthesis models operate in the 2D domain where disentangling 3D properties such as camera viewpoint or object pose is challenging. Furthermore, they lack an interpretable and controllable representation. Our key hypothesis is that the image generation process should be modeled in 3D space as the physical world surrounding us is intrinsically three-dimensional. We define the new task of 3D controllable image synthesis and propose an approach for solving it by reasoning both in 3D space and in the 2D image domain. We demonstrate that our model is able to disentangle latent 3D factors of simple multi-object scenes in an unsupervised fashion from raw images. Compared to pure 2D baselines, it allows for synthesizing scenes that are consistent with respect to changes in viewpoint or object pose. We further evaluate various 3D representations in terms of their usefulness for this challenging task.

avg

pdf suppmat Video 2 Project Page Video 1 Slides Poster [BibTex]



Exploring Data Aggregation in Policy Learning for Vision-based Urban Autonomous Driving

Prakash, A., Behl, A., Ohn-Bar, E., Chitta, K., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Data aggregation techniques can significantly improve vision-based policy learning within a training environment, e.g., learning to drive in a specific simulation condition. However, as on-policy data is sequentially sampled and added in an iterative manner, the policy can specialize and overfit to the training conditions. For real-world applications, it is useful for the learned policy to generalize to novel scenarios that differ from the training conditions. To improve policy learning while maintaining robustness when training end-to-end driving policies, we perform an extensive analysis of data aggregation techniques in the CARLA environment. We demonstrate how the majority of them have poor generalization performance, and develop a novel approach with empirically better generalization performance compared to existing techniques. Our two key ideas are (1) to sample critical states from the collected on-policy data based on the utility they provide to the learned policy in terms of driving behavior, and (2) to incorporate a replay buffer which progressively focuses on the high uncertainty regions of the policy's state distribution. We evaluate the proposed approach on the CARLA NoCrash benchmark, focusing on the most challenging driving scenarios with dense pedestrian and vehicle traffic. Our approach improves driving success rate by 16% over state-of-the-art, achieving 87% of the expert performance while also reducing the collision rate by an order of magnitude without the use of any additional modality, auxiliary tasks, architectural modifications or reward from the environment.

avg

pdf suppmat Video 2 Project Page Slides Video 1 [BibTex]



Learning Situational Driving

Ohn-Bar, E., Prakash, A., Behl, A., Chitta, K., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Human drivers have a remarkable ability to drive in diverse visual conditions and situations, e.g., from maneuvering in rainy, limited visibility conditions with no lane markings to turning in a busy intersection while yielding to pedestrians. In contrast, we find that state-of-the-art sensorimotor driving models struggle when encountering diverse settings with varying relationships between observation and action. To generalize when making decisions across diverse conditions, humans leverage multiple types of situation-specific reasoning and learning strategies. Motivated by this observation, we develop a framework for learning a situational driving policy that effectively captures reasoning under varying types of scenarios. Our key idea is to learn a mixture model with a set of policies that can capture multiple driving modes. We first optimize the mixture model through behavior cloning, and show it to result in significant gains in terms of driving performance in diverse conditions. We then refine the model by directly optimizing for the driving task itself, i.e., supervised with the navigation task reward. Our method is more scalable than methods assuming access to privileged information, e.g., perception labels, as it only assumes demonstration and reward-based supervision. We achieve over 98% success rate on the CARLA driving benchmark as well as state-of-the-art performance on a newly introduced generalization benchmark.

avg

pdf suppmat Video 2 Project Page Video 1 Slides [BibTex]



On Joint Estimation of Pose, Geometry and svBRDF from a Handheld Scanner

Schmitt, C., Donne, S., Riegler, G., Koltun, V., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
We propose a novel formulation for joint recovery of camera pose, object geometry and spatially-varying BRDF. The input to our approach is a sequence of RGB-D images captured by a mobile, hand-held scanner that actively illuminates the scene with point light sources. Compared to previous works that jointly estimate geometry and materials from a hand-held scanner, we formulate this problem using a single objective function that can be minimized using off-the-shelf gradient-based solvers. By integrating material clustering as a differentiable operation into the optimization process, we avoid pre-processing heuristics and demonstrate that our model is able to determine the correct number of specular materials independently. We provide a study on the importance of each component in our formulation and on the requirements of the initial geometry. We show that optimizing over the poses is crucial for accurately recovering fine details and that our approach naturally results in a semantically meaningful material segmentation.

avg

pdf Project Page Slides Video Poster [BibTex]



Safe and Fast Tracking on a Robot Manipulator: Robust MPC and Neural Network Control

Nubert, J., Koehler, J., Berenz, V., Allgower, F., Trimpe, S.

IEEE Robotics and Automation Letters, 2020 (article) Accepted

Abstract
Fast feedback control and safety guarantees are essential in modern robotics. We present an approach that achieves both by combining novel robust model predictive control (MPC) with function approximation via (deep) neural networks (NNs). The result is a new approach for complex tasks with nonlinear, uncertain, and constrained dynamics as are common in robotics. Specifically, we leverage recent results in MPC research to propose a new robust setpoint tracking MPC algorithm, which achieves reliable and safe tracking of a dynamic setpoint while guaranteeing stability and constraint satisfaction. The presented robust MPC scheme constitutes a one-layer approach that unifies the often separated planning and control layers, by directly computing the control command based on a reference and possibly obstacle positions. As a separate contribution, we show how the computation time of the MPC can be drastically reduced by approximating the MPC law with a NN controller. The NN is trained and validated from offline samples of the MPC, yielding statistical guarantees, and used in lieu thereof at run time. Our experiments on a state-of-the-art robot manipulator are the first to show that both the proposed robust and approximate MPC schemes scale to real-world robotic systems.

am ics

arXiv PDF DOI [BibTex]



Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision

Niemeyer, M., Mescheder, L., Oechsle, M., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Learning-based 3D reconstruction methods have shown impressive results. However, most methods require 3D supervision which is often hard to obtain for real-world datasets. Recently, several works have proposed differentiable rendering techniques to train reconstruction models from RGB images. Unfortunately, these approaches are currently restricted to voxel- and mesh-based representations, suffering from discretization or low resolution. In this work, we propose a differentiable rendering formulation for implicit shape and texture representations. Implicit representations have recently gained popularity as they represent shape and texture continuously. Our key insight is that depth gradients can be derived analytically using the concept of implicit differentiation. This allows us to learn implicit shape and texture representations directly from RGB images. We experimentally show that our single-view reconstructions rival those learned with full 3D supervision. Moreover, we find that our method can be used for multi-view 3D reconstruction, directly resulting in watertight meshes.
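The implicit-differentiation insight can be stated compactly. For a camera ray r(d) = o + d w and an implicit network f_θ with level set τ, the predicted surface depth d̂ satisfies the level-set condition, and differentiating that identity gives the analytic depth gradient. This is a sketch of the idea with our own notation, not a transcription of the paper's derivation:

```latex
% Surface depth \hat{d} is defined implicitly by the level-set condition
% f_\theta(\mathbf{o} + \hat{d}\,\mathbf{w}) = \tau.
% Differentiating both sides w.r.t. \theta and solving for
% \partial\hat{d}/\partial\theta yields:
\frac{\partial f_\theta(\hat{\mathbf{p}})}{\partial \theta}
  + \nabla_{\mathbf{p}} f_\theta(\hat{\mathbf{p}}) \cdot \mathbf{w}\,
    \frac{\partial \hat{d}}{\partial \theta} = 0
\quad\Longrightarrow\quad
\frac{\partial \hat{d}}{\partial \theta}
  = -\left(\nabla_{\mathbf{p}} f_\theta(\hat{\mathbf{p}}) \cdot \mathbf{w}\right)^{-1}
    \frac{\partial f_\theta(\hat{\mathbf{p}})}{\partial \theta},
\qquad \hat{\mathbf{p}} = \mathbf{o} + \hat{d}\,\mathbf{w}.
```

No explicit mesh or voxel grid appears anywhere in this expression, which is what allows training directly from RGB images.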

avg

pdf suppmat Video 2 Project Page Video 1 Video 3 Slides Poster [BibTex]


2015


Exploiting Object Similarity in 3D Reconstruction

Zhou, C., Güney, F., Wang, Y., Geiger, A.

In International Conference on Computer Vision (ICCV), December 2015 (inproceedings)

Abstract
Despite recent progress, reconstructing outdoor scenes in 3D from movable platforms remains a highly difficult endeavor. Challenges include low frame rates, occlusions, large distortions and difficult lighting conditions. In this paper, we leverage the fact that the larger the reconstructed area, the more likely objects of similar type and shape will occur in the scene. This is particularly true for outdoor scenes where buildings and vehicles often suffer from missing texture or reflections, but share similarity in 3D shape. We take advantage of this shape similarity by locating objects using detectors and jointly reconstructing them while learning a volumetric model of their shape. This allows us to reduce noise while completing missing surfaces as objects of similar shape benefit from all observations for the respective category. We evaluate our approach with respect to LIDAR ground truth on a novel challenging suburban dataset and show its advantages over the state-of-the-art.

avg ps

pdf suppmat [BibTex]



FollowMe: Efficient Online Min-Cost Flow Tracking with Bounded Memory and Computation

Lenz, P., Geiger, A., Urtasun, R.

In International Conference on Computer Vision (ICCV), December 2015 (inproceedings)

Abstract
One of the most popular approaches to multi-target tracking is tracking-by-detection. Current min-cost flow algorithms which solve the data association problem optimally have three main drawbacks: they are computationally expensive, they assume that the whole video is given as a batch, and they scale badly in memory and computation with the length of the video sequence. In this paper, we address each of these issues, resulting in a computationally and memory-bounded solution. First, we introduce a dynamic version of the successive shortest-path algorithm which solves the data association problem optimally while reusing computation, resulting in faster inference than standard solvers. Second, we address the optimal solution to the data association problem when dealing with an incoming stream of data (i.e., online setting). Finally, we present our main contribution which is an approximate online solution with bounded memory and computation which is capable of handling videos of arbitrary length while performing tracking in real time. We demonstrate the effectiveness of our algorithms on the KITTI and PETS2009 benchmarks and show state-of-the-art performance, while being significantly faster than existing solvers.
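As a toy illustration of the data-association problem that the paper solves with min-cost flow, the two-frame case can be brute-forced over all permutations. This sketch is only a stand-in: real trackers use flow solvers precisely because enumeration does not scale, and the 1-D detections and function name here are hypothetical.

```python
from itertools import permutations

def associate(dets_t, dets_t1):
    # Try every one-to-one assignment between detections in frame t and
    # frame t+1 and keep the cheapest; cost is 1-D position distance.
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(dets_t1))):
        cost = sum(abs(dets_t[i] - dets_t1[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = list(perm), cost
    return best, best_cost
```

For detections at positions [0, 10] and [11, 1], the cheapest assignment matches 0 with 1 and 10 with 11; min-cost flow recovers the same optimum without enumerating the factorial number of assignments.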

avg ps

pdf suppmat video project [BibTex]



Distributed Event-based State Estimation

Trimpe, S.

Max Planck Institute for Intelligent Systems, November 2015 (techreport)

Abstract
An event-based state estimation approach for reducing communication in a networked control system is proposed. Multiple distributed sensor-actuator-agents observe a dynamic process and sporadically exchange their measurements and inputs over a bus network. Based on these data, each agent estimates the full state of the dynamic system, which may exhibit arbitrary inter-agent couplings. Local event-based protocols ensure that data is transmitted only when necessary to meet a desired estimation accuracy. This event-based scheme is shown to mimic a centralized Luenberger observer design up to guaranteed bounds, and stability is proven in the sense of bounded estimation errors for bounded disturbances. The stability result extends to the distributed control system that results when the local state estimates are used for distributed feedback control. Simulation results highlight the benefit of the event-based approach over classical periodic ones in reducing communication requirements.
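The send-on-delta idea behind such event-based protocols can be sketched for a scalar process. This is a hypothetical illustration, not the report's multi-agent design: one sensor, one estimator, and a fixed threshold stand in for the networked architecture, and all constants are invented.

```python
import numpy as np

def run_event_based_estimator(a=0.95, l=0.5, delta=0.2, steps=200, seed=0):
    # Scalar stand-in: the sensor transmits its measurement only when the
    # (shared) model-based prediction has drifted by more than delta.
    rng = np.random.default_rng(seed)
    x, xhat, transmissions = 1.0, 0.0, 0
    for _ in range(steps):
        x = a * x + 0.05 * rng.standard_normal()  # true process with noise
        xhat = a * xhat                           # estimator runs the same model
        if abs(x - xhat) > delta:                 # event trigger
            xhat += l * (x - xhat)                # Luenberger-style correction
            transmissions += 1
    return abs(x - xhat), transmissions
```

Compared to periodic transmission, communication drops sharply while the estimation error stays bounded on the order of the trigger threshold.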

am ics

arXiv [BibTex]



Learning Torque Control in Presence of Contacts using Tactile Sensing from Robot Skin

Calandra, R., Ivaldi, S., Deisenroth, M., Peters, J.

In 15th IEEE-RAS International Conference on Humanoid Robots, pages: 690-695, Humanoids, November 2015 (inproceedings)

am ei

link (url) DOI [BibTex]



Evaluation of Interactive Object Recognition with Tactile Sensing

Hoelscher, J., Peters, J., Hermans, T.

In 15th IEEE-RAS International Conference on Humanoid Robots, pages: 310-317, Humanoids, November 2015 (inproceedings)

am ei

DOI [BibTex]



Optimizing Robot Striking Movement Primitives with Iterative Learning Control

Koc, O., Maeda, G., Neumann, G., Peters, J.

In 15th IEEE-RAS International Conference on Humanoid Robots, pages: 80-87, Humanoids, November 2015 (inproceedings)

am ei

DOI [BibTex]



A Comparison of Contact Distribution Representations for Learning to Predict Object Interactions

Leischnig, S., Luettgen, S., Kroemer, O., Peters, J.

In 15th IEEE-RAS International Conference on Humanoid Robots, pages: 616-622, Humanoids, November 2015 (inproceedings)

am ei

DOI [BibTex]



First-Person Tele-Operation of a Humanoid Robot

Fritsche, L., Unverzagt, F., Peters, J., Calandra, R.

In 15th IEEE-RAS International Conference on Humanoid Robots, pages: 997-1002, Humanoids, November 2015 (inproceedings)

am ei

link (url) DOI [BibTex]



Probabilistic Segmentation Applied to an Assembly Task

Lioutikov, R., Neumann, G., Maeda, G., Peters, J.

In 15th IEEE-RAS International Conference on Humanoid Robots, pages: 533-540, Humanoids, November 2015 (inproceedings)

am ei

DOI [BibTex]



Automatic LQR Tuning Based on Gaussian Process Optimization: Early Experimental Results

Marco, A., Hennig, P., Bohg, J., Schaal, S., Trimpe, S.

Machine Learning in Planning and Control of Robot Motion Workshop at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2015 (conference)

Abstract
This paper proposes an automatic controller tuning framework based on linear optimal control combined with Bayesian optimization. With this framework, an initial set of controller gains is automatically improved according to a pre-defined performance objective evaluated from experimental data. The underlying Bayesian optimization algorithm is Entropy Search, which represents the latent objective as a Gaussian process and constructs an explicit belief over the location of the objective minimum. This is used to maximize the information gain from each experimental evaluation. Thus, this framework shall yield improved controllers with fewer evaluations compared to alternative approaches. A seven-degree-of-freedom robot arm balancing an inverted pole is used as the experimental demonstrator. Preliminary results of a low-dimensional tuning problem highlight the method’s potential for automatic controller tuning on robotic platforms.
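The tuning loop in the abstract (evaluate a set of gains on the plant, update a Gaussian-process model of the cost, pick the next gains with an acquisition function) can be sketched in plain numpy. Note the substitution: the paper's acquisition is Entropy Search, whereas this sketch uses the simpler expected-improvement rule, a single scalar gain, and a synthetic quadratic cost; all names and constants are ours.

```python
import numpy as np
from math import erf

def rbf(a, b, ls=0.3):
    # Squared-exponential kernel on scalar inputs.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    # Standard GP regression: posterior mean and std at the test inputs.
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    mu = Ks.T @ np.linalg.solve(K, y_train)
    var = np.clip(1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # Improvement over the incumbent for a minimisation problem.
    z = (best - mu) / sigma
    Phi = 0.5 * (1.0 + np.array([erf(v / np.sqrt(2.0)) for v in z]))
    phi = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return (best - mu) * Phi + sigma * phi

def tune_gain(cost, grid, n_iter=25):
    # BO loop: each cost(g) call stands in for one hardware experiment.
    x = np.array([grid[0], grid[-1]])
    y = np.array([cost(g) for g in x])
    for _ in range(n_iter):
        mu, sigma = gp_posterior(x, y, grid)
        g = grid[int(np.argmax(expected_improvement(mu, sigma, y.min())))]
        x, y = np.append(x, g), np.append(y, cost(g))
    return float(x[np.argmin(y)])
```

On a synthetic quadratic cost with optimum at gain 0.6, the loop concentrates its evaluations near the optimum within a couple of dozen trials, which is the "fewer evaluations" argument made in the abstract.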

am ei ics pn

PDF DOI Project Page [BibTex]



Gaussian Process Optimization for Self-Tuning Control

Marco, A.

Polytechnic University of Catalonia (BarcelonaTech), October 2015 (mastersthesis)

am ics

PDF Project Page [BibTex]



Towards Probabilistic Volumetric Reconstruction using Ray Potentials

(Best Paper Award)

Ulusoy, A. O., Geiger, A., Black, M. J.

In 2015 International Conference on 3D Vision (3DV), pages: 10-18, Lyon, October 2015 (inproceedings)

Abstract
This paper presents a novel probabilistic foundation for volumetric 3-d reconstruction. We formulate the problem as inference in a Markov random field, which accurately captures the dependencies between the occupancy and appearance of each voxel, given all input images. Our main contribution is an approximate highly parallelized discrete-continuous inference algorithm to compute the marginal distributions of each voxel's occupancy and appearance. In contrast to the MAP solution, marginals encode the underlying uncertainty and ambiguity in the reconstruction. Moreover, the proposed algorithm allows for a Bayes optimal prediction with respect to a natural reconstruction loss. We compare our method to two state-of-the-art volumetric reconstruction algorithms on three challenging aerial datasets with LIDAR ground truth. Our experiments demonstrate that the proposed algorithm compares favorably in terms of reconstruction accuracy and the ability to expose reconstruction uncertainty.

avg ps

code YouTube pdf suppmat DOI Project Page [BibTex]



Stabilizing Novel Objects by Learning to Predict Tactile Slip

Veiga, F., van Hoof, H., Peters, J., Hermans, T.

In Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems, pages: 5065-5072, IROS, September 2015 (inproceedings)

am ei

link (url) DOI [BibTex]



Model-Free Probabilistic Movement Primitives for Physical Interaction

Paraschos, A., Rueckert, E., Peters, J., Neumann, G.

In Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems, pages: 2860-2866, IROS, September 2015 (inproceedings)

am ei

link (url) DOI [BibTex]



Combined Pose-Wrench and State Machine Representation for Modeling Robotic Assembly Skills

Wahrburg, A., Zeiss, S., Matthias, B., Peters, J., Ding, H.

In Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems, pages: 852-857, IROS, September 2015 (inproceedings)

am ei

link (url) DOI [BibTex]



Probabilistic Progress Prediction and Sequencing of Concurrent Movement Primitives

Manschitz, S., Kober, J., Gienger, M., Peters, J.

In Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems, pages: 449-455, IROS, September 2015 (inproceedings)

am ei

link (url) DOI [BibTex]



Reinforcement Learning vs Human Programming in Tetherball Robot Games

Parisi, S., Abdulsamad, H., Paraschos, A., Daniel, C., Peters, J.

In Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems, pages: 6428-6434, IROS, September 2015 (inproceedings)

am ei

link (url) DOI [BibTex]



Learning Motor Skills from Partially Observed Movements Executed at Different Speeds

Ewerton, M., Maeda, G., Peters, J., Neumann, G.

In Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems, pages: 456-463, IROS, September 2015 (inproceedings)

am ei

link (url) DOI [BibTex]



Adaptive and Learning Concepts in Hydraulic Force Control

Doerr, A.

University of Stuttgart, September 2015 (mastersthesis)

am ics

[BibTex]



Direct Loss Minimization Inverse Optimal Control

Doerr, A., Ratliff, N., Bohg, J., Toussaint, M., Schaal, S.

In Proceedings of Robotics: Science and Systems, Rome, Italy, Robotics: Science and Systems XI, July 2015 (inproceedings)

Abstract
Inverse Optimal Control (IOC) has strongly impacted the systems engineering process, enabling automated planner tuning through straightforward and intuitive demonstration. The most successful and established applications, though, have been in lower dimensional problems such as navigation planning where exact optimal planning or control is feasible. In higher dimensional systems, such as humanoid robots, research has made substantial progress toward generalizing the ideas to model free or locally optimal settings, but these systems are complicated to the point where demonstration itself can be difficult. Typically, real-world applications are restricted to at best noisy or even partial or incomplete demonstrations that prove cumbersome in existing frameworks. This work derives a very flexible method of IOC based on a form of Structured Prediction known as Direct Loss Minimization. The resulting algorithm is essentially Policy Search on a reward function that rewards similarity to demonstrated behavior (using Covariance Matrix Adaptation (CMA) in our experiments). Our framework blurs the distinction between IOC, other forms of Imitation Learning, and Reinforcement Learning, enabling us to derive simple, versatile, and practical algorithms that blend imitation and reinforcement signals into a unified framework. Our experiments analyze various aspects of its performance and demonstrate its efficacy on conveying preferences for motion shaping and combined reach and grasp quality optimization.
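The core mechanic above, policy search against a reward that measures similarity to a demonstration, fits in a short sketch. This is a deliberately simplified stand-in: the paper uses CMA-ES on real motion-shaping problems, whereas this toy uses plain random search, a hypothetical scalar linear system, and a single feedback gain; every name and constant here is invented for illustration.

```python
import numpy as np

def rollout(gain, x0=1.0, steps=20, dt=0.1):
    # Hypothetical 1-D plant under proportional feedback u = -gain * x.
    xs, x = [], x0
    for _ in range(steps):
        x = x + dt * (-gain * x)
        xs.append(x)
    return np.array(xs)

def policy_search(demo, n_iter=200, seed=0):
    # Reward = negative squared distance to the demonstrated trajectory;
    # plain random search stands in for CMA-ES.
    rng = np.random.default_rng(seed)
    best_gain, best_reward = None, -np.inf
    for _ in range(n_iter):
        g = rng.uniform(0.0, 5.0)
        reward = -np.sum((rollout(g) - demo) ** 2)
        if reward > best_reward:
            best_gain, best_reward = g, reward
    return best_gain
```

Given a demonstration generated with an "expert" gain of 2.0, the search recovers a gain close to it from the similarity reward alone, without ever observing the expert's gain directly.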

am ics

PDF Video Project Page [BibTex]



LMI-Based Synthesis for Distributed Event-Based State Estimation

Muehlebach, M., Trimpe, S.

In Proceedings of the American Control Conference, July 2015 (inproceedings)

Abstract
This paper presents an LMI-based synthesis procedure for distributed event-based state estimation. Multiple agents observe and control a dynamic process by sporadically exchanging data over a broadcast network according to an event-based protocol. In previous work [1], the synthesis of event-based state estimators is based on a centralized design. In that case three different types of communication are required: event-based communication of measurements, periodic reset of all estimates to their joint average, and communication of inputs. The proposed synthesis problem eliminates the communication of inputs as well as the periodic resets (under favorable circumstances) by accounting explicitly for the distributed structure of the control system.

am ics

PDF DOI Project Page [BibTex]



Guaranteed H2 Performance in Distributed Event-Based State Estimation

Muehlebach, M., Trimpe, S.

In Proceeding of the First International Conference on Event-based Control, Communication, and Signal Processing, June 2015 (inproceedings)

am ics

PDF DOI Project Page [BibTex]



On the Choice of the Event Trigger in Event-based Estimation

Trimpe, S., Campi, M.

In Proceeding of the First International Conference on Event-based Control, Communication, and Signal Processing, June 2015 (inproceedings)

am ics

PDF DOI Project Page [BibTex]



Displets: Resolving Stereo Ambiguities using Object Knowledge

Güney, F., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 4165-4175, June 2015 (inproceedings)

Abstract
Stereo techniques have witnessed tremendous progress over the last decades, yet some aspects of the problem still remain challenging today. Striking examples are reflecting and textureless surfaces which cannot easily be recovered using traditional local regularizers. In this paper, we therefore propose to regularize over larger distances using object-category specific disparity proposals (displets) which we sample using inverse graphics techniques based on a sparse disparity estimate and a semantic segmentation of the image. The proposed displets encode the fact that objects of certain categories are not arbitrarily shaped but typically exhibit regular structures. We integrate them as non-local regularizer for the challenging object class 'car' into a superpixel based CRF framework and demonstrate its benefits on the KITTI stereo evaluation.


pdf abstract suppmat [BibTex]


Object Scene Flow for Autonomous Vehicles

Menze, M., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 3061-3070, IEEE, June 2015 (inproceedings)

Abstract
This paper proposes a novel model and dataset for 3D scene flow estimation with an application to autonomous driving. Taking advantage of the fact that outdoor scenes often decompose into a small number of independently moving objects, we represent each element in the scene by its rigid motion parameters and each superpixel by a 3D plane as well as an index to the corresponding object. This minimal representation increases robustness and leads to a discrete-continuous CRF where the data term decomposes into pairwise potentials between superpixels and objects. Moreover, our model intrinsically segments the scene into its constituent dynamic components. We demonstrate the performance of our model on existing benchmarks as well as a novel realistic dataset with scene flow ground truth. We obtain this dataset by annotating 400 dynamic scenes from the KITTI raw data collection using detailed 3D CAD models for all vehicles in motion. Our experiments also reveal novel challenges that cannot be handled by existing methods.
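The minimal representation the abstract describes (one set of rigid motion parameters per object, one slanted plane plus an object index per superpixel) can be sketched as plain data structures. The class and field names below are illustrative assumptions, not the paper's implementation; the disparity formula assumes a rectified stereo pair.

```python
import math
from dataclasses import dataclass


@dataclass
class RigidObject:
    # 6-DoF rigid motion parameters of one independently moving object
    # (rotation as an axis-angle vector, plus a translation vector)
    rotation: tuple = (0.0, 0.0, 0.0)
    translation: tuple = (0.0, 0.0, 0.0)


@dataclass
class Superpixel:
    # 3D plane n . X = offset in the reference camera frame, plus the
    # index of the rigid object this superpixel is assigned to
    normal: tuple
    offset: float
    object_index: int


def disparity(sp: Superpixel, x: float, y: float, focal: float, baseline: float) -> float:
    """Disparity induced by the superpixel's plane at pixel (x, y).

    The pixel's viewing ray is (Z*x/f, Z*y/f, Z); intersecting it with the
    plane gives the depth Z, and disparity follows as d = f * B / Z."""
    nx, ny, nz = sp.normal
    denom = nx * x / focal + ny * y / focal + nz
    depth = sp.offset / denom
    return focal * baseline / depth
```

For a fronto-parallel plane (normal along the optical axis) at depth 10 with focal length 100 and baseline 0.5, this yields the expected disparity 100 * 0.5 / 10 = 5.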


pdf abstract suppmat DOI [BibTex]


Object Detection Using Deep Learning - Learning where to search using visual attention

Kloss, A.

Eberhard Karls Universität Tübingen, May 2015 (mastersthesis)

Abstract
Detecting and identifying the different objects in an image fast and reliably is an important skill for interacting with one's environment. The main problem is that, in theory, all parts of an image have to be searched for objects on many different scales to make sure that no object instance is missed. It however takes considerable time and effort to actually classify the content of a given image region, and both the time and the computational capacity that an agent can spend on classification are limited. Humans use a process called visual attention to quickly decide which locations of an image need to be processed in detail and which can be ignored. This allows us to deal with the huge amount of visual information and to employ the capacities of our visual system efficiently. Computer vision researchers face exactly the same problems, so learning from the behaviour of humans provides a promising way to improve existing algorithms. In the presented master's thesis, a model is trained with eye tracking data recorded from 15 participants who were asked to search images for objects from three different categories. It uses a deep convolutional neural network to extract features from the input image that are then combined to form a saliency map. This map provides information about which image regions are interesting when searching for the given target object and can thus be used to reduce the parts of the image that have to be processed in detail. The method is based on a recent publication of Kümmerer et al., but in contrast to the original method, which computes general, task-independent saliency, the presented model is supposed to respond differently when searching for different target categories.


PDF Project Page [BibTex]


Leveraging Big Data for Grasp Planning

Kappler, D., Bohg, J., Schaal, S.

In Proceedings of the IEEE International Conference on Robotics and Automation, May 2015 (inproceedings)

Abstract
We propose a new large-scale database containing grasps that are applied to a large set of objects from numerous categories. These grasps are generated in simulation and are annotated with different grasp stability metrics. We use a descriptive and efficient representation of the local object shape at which each grasp is applied. Given this data, we present a two-fold analysis: (i) We use crowdsourcing to analyze the correlation of the metrics with grasp success as predicted by humans. The results show that the metric based on physics simulation is a more consistent predictor for grasp success than the standard ε-metric. The results also support the hypothesis that human labels are not required for good ground truth grasp data. Instead the physics-metric can be used to generate datasets in simulation that may then be used to bootstrap learning in the real world. (ii) We apply a deep learning method and show that it can better leverage the large-scale database for prediction of grasp success compared to logistic regression. Furthermore, the results suggest that labels based on the physics-metric are less noisy than those from the ε-metric and therefore lead to a better classification performance.


PDF data DOI Project Page [BibTex]


Robot Arm Tracking with Random Decision Forests

Widmaier, F.

Eberhard-Karls-Universität Tübingen, May 2015 (mastersthesis)

Abstract
For grasping and manipulation with robot arms, knowing the current pose of the arm is crucial for successfully controlling its motion. Often, pose estimates can be acquired from encoders inside the arm, but these can be significantly inaccurate, which makes the use of additional techniques necessary. In this master's thesis, a novel approach to robot arm pose estimation is presented that works on single depth images without the need for prior foreground segmentation or other preprocessing steps. A random regression forest is used, which is trained only on synthetically generated data. The approach improves on former work by Bohg et al. by considerably reducing the computational effort at both training and test time. The forest in the new method directly estimates the desired joint angles, while in the former approach the forest casts 3D position votes for the joints, which then have to be clustered and fed into an iterative inverse kinematics process to finally obtain the joint angles. To improve the estimation accuracy, the standard training objective of the forest training is replaced by a specialized function that makes use of a model-dependent distance metric called DISP. Experimental results show that the specialized objective indeed improves pose estimation, and the method, despite being trained on synthetic data only, is able to provide reasonable estimates for real data at test time.


PDF Project Page [BibTex]


Event-based Estimation and Control for Remote Robot Operation with Reduced Communication

Trimpe, S., Buchli, J.

In Proceedings of the IEEE International Conference on Robotics and Automation, May 2015 (inproceedings)

Abstract
An event-based communication framework for remote operation of a robot via a bandwidth-limited network is proposed. The robot sends state and environment estimation data to the operator, and the operator transmits updated control commands or policies to the robot. Event-based communication protocols are designed to ensure that data is transmitted only when required: the robot sends new estimation data only if this yields a significant information gain at the operator, and the operator transmits an updated control policy only if this comes with a significant improvement in control performance. The developed framework is modular and can be used with any standard estimation and control algorithms. Simulation results of a robotic arm highlight its potential for an efficient use of limited communication resources, for example, in disaster response scenarios such as the DARPA Robotics Challenge.
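The send-only-when-needed idea can be illustrated with a simple send-on-delta rule: the robot transmits a new estimate only when it deviates from what the operator can predict by more than a threshold. The zero-order-hold predictor and the threshold value below are illustrative placeholders, not the trigger design from the paper.

```python
def should_transmit(local_estimate, operator_prediction, threshold):
    """Event trigger: send only if the operator's copy has drifted too far.

    Both estimates are sequences of floats; the trigger fires when the
    largest per-component deviation exceeds the threshold."""
    error = max(abs(a - b) for a, b in zip(local_estimate, operator_prediction))
    return error > threshold


# Simulated use: the robot checks the trigger at every step and only
# communicates when it fires, saving bandwidth during quiet periods.
log = []
prediction = [0.0, 0.0]  # operator's last received state (zero-order hold)
for state in ([0.01, 0.0], [0.02, 0.01], [0.5, 0.1], [0.51, 0.1]):
    if should_transmit(state, prediction, threshold=0.1):
        prediction = list(state)  # operator receives an update
        log.append(True)
    else:
        log.append(False)
# → only the large jump at step 3 triggers a transmission
```

In this toy run, only one of four time steps causes communication; the small fluctuations before and after the jump stay below the threshold and cost no bandwidth.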


PDF DOI Project Page [BibTex]


Lernende Roboter

Trimpe, S.

In Jahrbuch der Max-Planck-Gesellschaft, Max Planck Society, May 2015, (popular science article in German) (inbook)


link (url) [BibTex]


The Coordinate Particle Filter - A novel Particle Filter for High Dimensional Systems

Wüthrich, M., Bohg, J., Kappler, D., Pfreundt, C., Schaal, S.

In Proceedings of the IEEE International Conference on Robotics and Automation, May 2015 (inproceedings)

Abstract
Parametric filters, such as the Extended Kalman Filter and the Unscented Kalman Filter, typically scale well with the dimensionality of the problem, but they are known to fail if the posterior state distribution cannot be closely approximated by a density of the assumed parametric form. For nonparametric filters, such as the Particle Filter, the converse holds. Such methods are able to approximate any posterior, but the computational requirements scale exponentially with the number of dimensions of the state space. In this paper, we present the Coordinate Particle Filter which alleviates this problem. We propose to compute the particle weights recursively, dimension by dimension. This allows us to explore one dimension at a time, and resample after each dimension if necessary. Experimental results on simulated as well as real data confirm that the proposed method has a substantial performance advantage over the Particle Filter in high-dimensional systems where not all dimensions are highly correlated. We demonstrate the benefits of the proposed method for the problem of multi-object and robotic manipulator tracking.
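The dimension-by-dimension weighting can be sketched in a few lines. The factorized Gaussian likelihood, the ESS-based resampling rule, and all parameter values below are illustrative assumptions for the sketch, not the model from the paper.

```python
import math
import random


def coordinate_particle_step(particles, observation, noise_std=0.5, ess_threshold=0.5):
    """One measurement update, processed one state dimension at a time.

    particles: list of state vectors (lists of floats), equally weighted on entry.
    Assumes the likelihood factorizes over dimensions (an illustrative
    simplification). After each dimension the effective sample size (ESS)
    is checked, and the particle set is resampled if it drops below
    ess_threshold * N, so weight degeneracy is fought per coordinate
    instead of once over the full state.
    """
    n = len(particles)
    weights = [1.0 / n] * n
    for d in range(len(particles[0])):
        # multiply in the likelihood factor of dimension d only
        for i, p in enumerate(particles):
            weights[i] *= math.exp(-0.5 * ((p[d] - observation[d]) / noise_std) ** 2)
        total = sum(weights)
        weights = [w / total for w in weights]
        ess = 1.0 / sum(w * w for w in weights)
        if ess < ess_threshold * n:
            # resample with replacement, then reset to uniform weights
            particles = [list(p) for p in random.choices(particles, weights=weights, k=n)]
            weights = [1.0 / n] * n
    return particles, weights
```

A standard particle filter would instead multiply all dimension factors into the weights at once and resample a single time, which in high dimensions concentrates nearly all weight on very few particles.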


arXiv Video Bayesian Filtering Framework Bayesian Object Tracking DOI Project Page [BibTex]


Autonomous Robots

Schaal, S.

In Jahrbuch der Max-Planck-Gesellschaft, May 2015 (incollection)


[BibTex]


Understanding the Geometry of Workspace Obstacles in Motion Optimization

Ratliff, N., Toussaint, M., Schaal, S.

In Proceedings of the IEEE International Conference on Robotics and Automation, March 2015 (inproceedings)


PDF Video Project Page [BibTex]


Policy Search for Imitation Learning

Doerr, A.

University of Stuttgart, January 2015 (thesis)


link (url) Project Page [BibTex]


Learning of Non-Parametric Control Policies with High-Dimensional State Features

van Hoof, H., Peters, J., Neumann, G.

In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics (AISTATS), volume 38, pages: 995–1003, (Editors: Lebanon, G. and Vishwanathan, S.V.N.), JMLR, 2015 (inproceedings)


link (url) [BibTex]


Data-Driven Online Decision Making for Autonomous Manipulation

Kappler, D., Pastor, P., Kalakrishnan, M., Wüthrich, M., Schaal, S.

In Proceedings of Robotics: Science and Systems, Rome, Italy, 2015 (inproceedings)


Project Page [BibTex]


Predicting Human Reaching Motion in Collaborative Tasks Using Inverse Optimal Control and Iterative Re-planning

Mainprice, J., Hayne, R., Berenson, D.

In Proceedings of the IEEE International Conference on Robotics and Automation, 2015 (inproceedings)


Project Page [BibTex]