

2018


Oncilla robot: a versatile open-source quadruped research robot with compliant pantograph legs

Sproewitz, A., Tuleu, A., Ajallooeian, M., Vespignani, M., Moeckel, R., Eckert, P., D’Haene, M., Degrave, J., Nordmann, A., Schrauwen, B., Steil, J., Ijspeert, A. J.

Frontiers in Robotics and AI, 5(67), June 2018, arXiv: 1803.06259 (article)

Abstract
We present Oncilla robot, a novel mobile, quadruped legged locomotion machine. This large-cat-sized, 5.1 kg robot belongs to a recent class of bioinspired legged robots designed for model-free locomotion control. Animal legged locomotion in rough terrain is clearly shaped by sensor feedback systems. Results with Oncilla robot show that agile and versatile locomotion is possible without sensory signals to some extent, and tracking becomes robust when feedback control is added (Ajallooeian 2015). By incorporating mechanical and control blueprints inspired by animals, and by observing the resulting robot locomotion characteristics, we aim to understand the contribution of individual components. Legged robots have a wide mechanical and control design parameter space, and a unique potential as research tools to investigate principles of biomechanics and legged locomotion control. But hardware and controller design can be a steep initial hurdle for academic research. To facilitate an easy start and the development of legged robots, Oncilla robot's blueprints are available as open source. [...]


link (url) DOI Project Page [BibTex]





Impact of Trunk Orientation for Dynamic Bipedal Locomotion

Drama, Ö.

Dynamic Walking Conference, May 2018 (talk)

Abstract
My research revolves around investigating the functional demands of bipedal running, with a focus on stabilizing trunk orientation. When we think about postural stability, there is a critical question we need to answer: what are the necessary and sufficient conditions to achieve and maintain trunk stability? I concentrate on how morphology affects control strategies in achieving trunk stability. In particular, I take the trunk pitch as the predominant morphology parameter and explore the requirements it imposes on a chosen control strategy. To analyze this, I use a spring-loaded inverted pendulum model extended with a rigid trunk, which is actuated by a hip motor. The challenge for the controller design is that a single hip actuator must achieve two coupled tasks: moving the legs to generate motion and stabilizing the trunk. I enforce orthograde and pronograde postures and aim to identify the effect of these trunk orientations on the hip torque and ground reaction profiles for different control strategies.
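The model described above — a spring-loaded inverted pendulum extended with a rigid trunk and a single hip actuator — can be sketched in miniature. The following is an illustrative trunk-pitch regulation loop, not the talk's implementation; the inertia, gains, and PD control law are all assumptions:

```python
import numpy as np

# Minimal sketch (not the author's code): a rigid trunk of inertia J whose
# pitch angle theta is driven toward a desired posture theta_des by a single
# hip torque, tau = -kp*(theta - theta_des) - kd*theta_dot.
# All parameter values below are illustrative assumptions.

def simulate_trunk_pitch(theta0, theta_des, J=1.0, kp=40.0, kd=8.0,
                         dt=1e-3, steps=5000):
    theta, theta_dot = theta0, 0.0
    for _ in range(steps):
        tau = -kp * (theta - theta_des) - kd * theta_dot  # PD hip torque
        theta_dot += (tau / J) * dt   # explicit Euler integration
        theta += theta_dot * dt
    return theta

# Orthograde (upright, theta_des near 0) vs. pronograde (near horizontal)
# postures impose different torque demands; here we only check convergence.
final = simulate_trunk_pitch(theta0=0.3, theta_des=0.0)
```

In the actual coupled model the same torque must also swing the leg, which is what makes the single-actuator design challenging; this sketch isolates only the trunk-stabilization half of that trade-off.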


Impact of trunk orientation for dynamic bipedal locomotion [DW 2018] link (url) Project Page [BibTex]


Learning 3D Shape Completion under Weak Supervision

Stutz, D., Geiger, A.

arXiv, May 2018 (article)

Abstract
We address the problem of 3D shape completion from sparse and noisy point clouds, a fundamental problem in computer vision and robotics. Recent approaches are either data-driven or learning-based: Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations; Learning-based approaches, in contrast, avoid the expensive optimization step by learning to directly predict complete shapes from incomplete observations in a fully-supervised setting. However, full supervision is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks resulting in efficient shape completion without sacrificing accuracy. On synthetic benchmarks based on ShapeNet and ModelNet as well as on real robotics data from KITTI and Kinect, we demonstrate that the proposed amortized maximum likelihood approach is able to compete with fully supervised baselines and outperforms data-driven approaches, while requiring less supervision and being significantly faster.
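The amortization idea — replacing slow per-observation maximum-likelihood fitting with a learned predictor — can be illustrated with a toy linear model. Everything below (the linear decoder standing in for the shape prior, the least-squares encoder) is an assumption for illustration, not the paper's deep-network implementation:

```python
import numpy as np

# Toy sketch of amortized inference: a fixed "shape prior" decoder maps a
# latent code z to a full shape x. Instead of optimizing z per observation
# (slow maximum-likelihood fitting), a learned encoder predicts z directly
# from partial observations in a single forward pass.
rng = np.random.default_rng(0)
D = rng.normal(size=(8, 2))           # fixed linear decoder: shape = D @ z

# Training pairs: the partial observation is the first 4 dims of the shape.
Z = rng.normal(size=(2, 200))
X_full = D @ Z
X_obs = X_full[:4]                    # sparse/partial observations

# "Amortized" encoder E with z ~= E @ x_obs, fit once by least squares.
E = Z @ np.linalg.pinv(X_obs)

z_hat = E @ X_obs[:, :1]              # complete one observation
x_completed = D @ z_hat               # predicted full shape
```

The paper's setting swaps the linear maps for deep networks and real point clouds, but the division of labor is the same: the prior is learned on synthetic data, and the encoder amortizes the fitting step.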


PDF Project Page [BibTex]


Augmented Reality Meets Computer Vision: Efficient Data Generation for Urban Driving Scenes

Alhaija, H., Mustikovela, S., Mescheder, L., Geiger, A., Rother, C.

International Journal of Computer Vision (IJCV), 2018 (article)

Abstract
The success of deep learning in computer vision is based on the availability of large annotated datasets. To lower the need for hand labeled images, virtually rendered 3D worlds have recently gained popularity. Unfortunately, creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm which combines real and synthetic data for learning semantic instance segmentation and object detection models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need for creating complex 3D models of the environment. We present an efficient procedure to augment these images with virtual objects. In contrast to modeling complete 3D environments, our data augmentation approach requires only a few user interactions in combination with 3D models of the target object category. Leveraging our approach, we introduce a novel dataset of augmented urban driving scenes with 360 degree images that are used as environment maps to create realistic lighting and reflections on rendered objects. We analyze the significance of realistic object placement by comparing manual placement by humans to automatic methods based on semantic scene analysis. This allows us to create composite images which exhibit both realistic background appearance and a large number of complex object arrangements. Through an extensive set of experiments, we determine the right set of parameters to produce augmented data which can maximally enhance the performance of instance segmentation models. Further, we demonstrate the utility of the proposed approach on training standard deep models for semantic instance segmentation and object detection of cars in outdoor driving scenarios.
We test the models trained on our augmented data on the KITTI 2015 dataset, which we have annotated with pixel-accurate ground truth, and on the Cityscapes dataset. Our experiments demonstrate that the models trained on augmented imagery generalize better than those trained on fully synthetic data or models trained on limited amounts of annotated real data.
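The core augmentation step — compositing a rendered virtual object over a real background image — can be sketched as a simple alpha blend. The array shapes and values below are illustrative, not the paper's rendering pipeline:

```python
import numpy as np

# Sketch of the basic compositing operation behind the augmentation:
# blend a rendered object into a real photo using the render's alpha mask.
def composite(background, rendered, alpha):
    """Alpha-blend a rendered object into a real image (floats in [0, 1])."""
    return alpha[..., None] * rendered + (1.0 - alpha[..., None]) * background

bg = np.zeros((4, 4, 3))            # stand-in for a real background image
obj = np.ones((4, 4, 3))            # stand-in for a rendered virtual object
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                # object occupies the image center
out = composite(bg, obj, mask)
```

The paper's contribution lies in what surrounds this step — realistic placement, environment-map lighting, and reflections — but the blend itself is this simple, which is why augmenting real photos is so much cheaper than modeling full 3D environments.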


pdf Project Page [BibTex]



Learning 3D Shape Completion under Weak Supervision

Stutz, D., Geiger, A.

International Journal of Computer Vision (IJCV), 2018 (article)

Abstract
We address the problem of 3D shape completion from sparse and noisy point clouds, a fundamental problem in computer vision and robotics. Recent approaches are either data-driven or learning-based: Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations; Learning-based approaches, in contrast, avoid the expensive optimization step by learning to directly predict complete shapes from incomplete observations in a fully-supervised setting. However, full supervision is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks resulting in efficient shape completion without sacrificing accuracy. On synthetic benchmarks based on ShapeNet and ModelNet as well as on real robotics data from KITTI and Kinect, we demonstrate that the proposed amortized maximum likelihood approach is able to compete with a fully supervised baseline and outperforms the data-driven approach of Engelmann et al., while requiring less supervision and being significantly faster.


pdf Project Page [BibTex]



Object Scene Flow

Menze, M., Heipke, C., Geiger, A.

ISPRS Journal of Photogrammetry and Remote Sensing, 2018 (article)

Abstract
This work investigates the estimation of dense three-dimensional motion fields, commonly referred to as scene flow. While great progress has been made in recent years, large displacements and adverse imaging conditions as observed in natural outdoor environments are still very challenging for current approaches to reconstruction and motion estimation. In this paper, we propose a unified random field model which reasons jointly about 3D scene flow as well as the location, shape and motion of vehicles in the observed scene. We formulate the problem as the task of decomposing the scene into a small number of rigidly moving objects sharing the same motion parameters. Thus, our formulation effectively introduces long-range spatial dependencies which commonly employed local rigidity priors are lacking. Our inference algorithm then estimates the association of image segments and object hypotheses together with their three-dimensional shape and motion. We demonstrate the potential of the proposed approach by introducing a novel challenging scene flow benchmark which allows for a thorough comparison of the proposed scene flow approach with respect to various baseline models. In contrast to previous benchmarks, our evaluation is the first to provide stereo and optical flow ground truth for dynamic real-world urban scenes at large scale. Our experiments reveal that rigid motion segmentation can be utilized as an effective regularizer for the scene flow problem, improving upon existing two-frame scene flow methods. At the same time, our method yields plausible object segmentations without requiring an explicitly trained recognition model for a specific object class.


Project Page [BibTex]


2017


Spinal joint compliance and actuation in a simulated bounding quadruped robot

Pouya, S., Khodabakhsh, M., Sproewitz, A., Ijspeert, A.

Autonomous Robots, pages: 437-452, Kluwer Academic Publishers, Springer, Dordrecht, New York, NY, February 2017 (article)


link (url) DOI Project Page [BibTex]




2015


Optimizing Average Precision using Weakly Supervised Data

Behl, A., Mohapatra, P., Jawahar, C. V., Kumar, M. P.

IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI), 2015 (article)


[BibTex]





Exciting Engineered Passive Dynamics in a Bipedal Robot

Renjewski, D., Spröwitz, A., Peekema, A., Jones, M., Hurst, J.

IEEE Transactions on Robotics and Automation, 31(5):1244-1251, IEEE, New York, NY, 2015 (article)

Abstract
A common approach in designing legged robots is to build fully actuated machines and control the machine dynamics entirely in software, carefully avoiding impacts and expending a lot of energy. However, these machines are outperformed by their human and animal counterparts. Animals achieve their impressive agility, efficiency, and robustness through a close integration of passive dynamics, implemented through mechanical components, and neural control. Robots can benefit from this same integrated approach, but a strong theoretical framework is required to design the passive dynamics of a machine and exploit them for control. For this framework, we use a bipedal spring-mass model, which has been shown to approximate the dynamics of human locomotion. This paper reports the first implementation of spring-mass walking on a bipedal robot. We present the use of template dynamics as a control objective exploiting the engineered passive spring-mass dynamics of the ATRIAS robot. The results highlight the benefits of combining passive dynamics with dynamics-based control and open up a library of spring-mass model-based control strategies for dynamic gait control of robots.
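The spring-mass (SLIP) template behind this control approach can be sketched as a point mass riding a massless leg spring during stance. All parameters below are illustrative, not ATRIAS values, and the integrator is a plain explicit Euler step rather than the robot's controller:

```python
import numpy as np

# Sketch of SLIP stance dynamics: the leg spring pushes the mass away from
# the foot along the leg axis with force k*(l0 - l); gravity acts downward.
def stance_step(pos, vel, foot, k=15000.0, l0=1.0, m=80.0, g=9.81, dt=1e-4):
    leg = pos - foot
    l = np.linalg.norm(leg)
    f_spring = k * (l0 - l) * leg / l            # radial spring force
    acc = f_spring / m + np.array([0.0, -g])
    return pos + vel * dt, vel + acc * dt        # explicit Euler update

pos = np.array([0.0, 0.97])                      # slightly compressed leg
vel = np.array([1.2, 0.0])                       # moving forward
foot = np.array([0.0, 0.0])
for _ in range(1000):                            # 0.1 s of stance
    pos, vel = stance_step(pos, vel, foot)
leg_length = np.linalg.norm(pos - foot)
```

The template's appeal for hardware like ATRIAS is that the spring is a physical component: the controller only has to excite and regulate these dynamics rather than synthesize them from full actuation.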


link (url) DOI Project Page [BibTex]


2013


Vision meets Robotics: The KITTI Dataset

Geiger, A., Lenz, P., Stiller, C., Urtasun, R.

International Journal of Robotics Research, 32(11):1231-1237, Sage Publishing, September 2013 (article)

Abstract
We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10-100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS/IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.
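The raw Velodyne scans in the dataset are distributed as flat binary files of float32 values, four per point: x, y, z, and reflectance. A minimal reader, demonstrated here on synthetic points rather than a real scan (the file name is illustrative):

```python
import os
import tempfile
import numpy as np

# KITTI raw Velodyne format: a flat stream of float32 values, 4 per point.
def read_velodyne_bin(path):
    scan = np.fromfile(path, dtype=np.float32)
    return scan.reshape(-1, 4)      # columns: x, y, z, reflectance

# Round-trip demo with synthetic points instead of a real scan file.
pts = np.array([[1.0, 2.0, 0.5, 0.3],
                [4.0, -1.0, 0.2, 0.9]], dtype=np.float32)
path = os.path.join(tempfile.gettempdir(), "demo_000000.bin")
pts.tofile(path)
loaded = read_velodyne_bin(path)
```

Camera images, calibration files, and GPS/IMU records use separate per-sensor formats; the development kit accompanying the dataset documents those alongside the tracklet labels.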


pdf DOI [BibTex]





Towards Dynamic Trot Gait Locomotion: Design, Control, and Experiments with Cheetah-cub, a Compliant Quadruped Robot

Spröwitz, A., Tuleu, A., Vespignani, M., Ajallooeian, M., Badri, E., Ijspeert, A. J.

The International Journal of Robotics Research, 32(8):932-950, Sage Publications, Inc., Cambridge, MA, 2013 (article)

Abstract
We present the design of a novel compliant quadruped robot, called Cheetah-cub, and a series of locomotion experiments with fast trotting gaits. The robot’s leg configuration is based on a spring-loaded, pantograph mechanism with multiple segments. A dedicated open-loop locomotion controller was derived and implemented. Experiments were run in simulation and in hardware on flat terrain and with a step down, demonstrating the robot’s self-stabilizing properties. The robot reached a running trot with short flight phases with a maximum Froude number of FR = 1.30, or 6.9 body lengths per second. Morphological parameters such as the leg design also played a role. By adding distal in-series elasticity, self-stability and maximum robot speed improved. Our robot has several advantages, especially when compared with larger and stiffer quadruped robot designs. (1) It is, to the best of the authors’ knowledge, the fastest of all quadruped robots below 30 kg (in terms of Froude number and body lengths per second). (2) It shows self-stabilizing behavior over a large range of speeds with open-loop control. (3) It is lightweight, compact, and electrically powered. (4) It is cheap, easy to reproduce, robust, and safe to handle. This makes it an excellent tool for research of multi-segment legs in quadruped robots.
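The Froude number quoted above is the standard dimensionless speed FR = v²/(g·l), with l a characteristic leg length such as hip height. The numeric values below are illustrative assumptions chosen to be consistent with the reported FR ≈ 1.30, not figures quoted from this listing:

```python
# Froude number: dimensionless speed used to compare gaits across scales.
# FR = v^2 / (g * l), with l a characteristic leg length (e.g. hip height).
def froude(v, leg_length, g=9.81):
    return v ** 2 / (g * leg_length)

# Illustrative (assumed) values: a speed of ~1.42 m/s with a hip height of
# ~0.158 m yields FR close to the reported 1.30.
fr = froude(1.42, 0.158)
```

Because FR normalizes speed by leg length, it lets a small robot like Cheetah-cub be compared fairly against much larger quadrupeds, which is why the paper states its speed record in Froude number and body lengths per second rather than m/s alone.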


Youtube1 Youtube2 Youtube3 Youtube4 Youtube5 DOI Project Page [BibTex]



Horse-Like Walking, Trotting, and Galloping derived from Kinematic Motion Primitives (kMPs) and their Application to Walk/Trot Transitions in a Compliant Quadruped Robot

Moro, F., Spröwitz, A., Tuleu, A., Vespignani, M., Tsagarakis, N. G., Ijspeert, A. J., Caldwell, D. G.

Biological Cybernetics, 107(3):309-320, 2013 (article)

Abstract
This manuscript proposes a method to directly transfer the features of horse walking, trotting, and galloping to a quadruped robot, with the aim of creating a much more natural (horse-like) locomotion profile. A principal component analysis on horse joint trajectories shows that walk, trot, and gallop can be described by a set of four kinematic Motion Primitives (kMPs). These kMPs are used to generate valid, stable gaits that are tested on a compliant quadruped robot. Tests on the effects of gait frequency scaling indicate a speed-optimal walking frequency around 3.4 Hz and an optimal trotting frequency around 4 Hz. Subsequently, a criterion to synthesize gait transitions is proposed, and walk/trot transitions are successfully tested on the robot. The performance of the robot when the transitions are scaled in frequency is evaluated by means of roll and pitch angle phase plots.
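The kMP extraction step — a principal component analysis of joint-angle trajectories — can be sketched with synthetic stand-in data; no horse recordings are reproduced here, and the signal construction below is purely illustrative:

```python
import numpy as np

# Sketch of extracting kinematic motion primitives (kMPs) as principal
# components of joint-angle trajectories (time steps x joints). The data
# are synthetic stand-ins built from four shared basis signals plus noise,
# mimicking the finding that a few primitives explain whole gaits.
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 500)
basis = np.stack([np.sin(t), np.sin(2 * t), np.cos(t), np.cos(3 * t)])
mixing = rng.normal(size=(4, 10))                 # 10 "joints"
X = basis.T @ mixing + 0.01 * rng.normal(size=(500, 10))

Xc = X - X.mean(axis=0)                           # center each joint
U, S, Vt = np.linalg.svd(Xc, full_matrices=False) # PCA via SVD
explained = (S ** 2) / np.sum(S ** 2)             # variance ratios
kmps = U[:, :4] * S[:4]                           # primitive time courses
```

On real recordings the first four components dominating the variance is an empirical result; in this synthetic sketch it holds by construction, since the trajectories are mixtures of exactly four basis signals.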


DOI [BibTex]


2008


Learning to Move in Modular Robots using Central Pattern Generators and Online Optimization

Spröwitz, A., Moeckel, R., Maye, J., Ijspeert, A. J.

The International Journal of Robotics Research, 27(3-4):423-443, 2008 (article)

Abstract
This article addresses the problem of how modular robotics systems, i.e. systems composed of multiple modules that can be configured into different robotic structures, can learn to locomote. In particular, we tackle the problems of online learning, that is, learning while moving, and the problem of dealing with unknown arbitrary robotic structures. We propose a framework for learning locomotion controllers based on two components: a central pattern generator (CPG) and a gradient-free optimization algorithm referred to as Powell's method. The CPG is implemented as a system of coupled nonlinear oscillators in our YaMoR modular robotic system, with one oscillator per module. The nonlinear oscillators are coupled together across modules using Bluetooth communication to obtain specific gaits, i.e. synchronized patterns of oscillations among modules. Online learning involves running the Powell optimization algorithm in parallel with the CPG model, with the speed of locomotion being the criterion to be optimized. Interesting aspects of the optimization include the fact that it is carried out online, the robots do not require stopping or resetting and it is fast. We present results showing the interesting properties of this framework for a modular robotic system. In particular, our CPG model can readily be implemented in a distributed system, it is computationally cheap, it exhibits limit cycle behavior (temporary perturbations are rapidly forgotten), it produces smooth trajectories even when control parameters are abruptly changed and it is robust against imperfect communication among modules. We also present results of learning to move with three different robot structures. Interesting locomotion modes are obtained after running the optimization for less than 60 minutes.
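The CPG model class used here — coupled oscillators that synchronize into a gait — can be sketched with simple phase oscillators and imposed phase lags. The coupling gain, frequency, chain topology, and phase-oscillator simplification below are illustrative assumptions, not YaMoR's actual amplitude-controlled oscillators or parameters:

```python
import numpy as np

# Sketch of a chain CPG of coupled phase oscillators:
#   dtheta_i/dt = 2*pi*nu + sum_j w * sin(theta_j - theta_i - phi_ij)
# Neighboring modules are coupled with a desired phase lag, so the network
# settles into a traveling wave of oscillations (a "gait").
def run_cpg(n=4, nu=1.0, w=5.0, phase_lag=np.pi / 2, dt=1e-3, steps=20000):
    theta = np.random.default_rng(2).uniform(0, 1, n)  # nearby initial phases
    for _ in range(steps):
        dtheta = np.full(n, 2 * np.pi * nu)            # intrinsic frequency
        for i in range(n):
            for j in (i - 1, i + 1):                   # chain neighbors
                if 0 <= j < n:
                    bias = phase_lag if j > i else -phase_lag
                    dtheta[i] += w * np.sin(theta[j] - theta[i] - bias)
        theta += dtheta * dt
    return theta

theta = run_cpg()
# After convergence, consecutive oscillators hold the imposed phase lag.
lags = np.mod(np.diff(theta), 2 * np.pi)
```

This limit-cycle behavior is what makes CPGs attractive for online learning: an optimizer such as Powell's method can change parameters abruptly, and the oscillator network still relaxes smoothly back onto a coordinated pattern.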


link (url) DOI [BibTex]


