Institute Talks

Multi-contact locomotion control for legged robots

Talk
  • 25 April 2017 • 11:00 - 12:30
  • Dr. Andrea Del Prete
  • N2.025 (AMD seminar room - 2nd floor)

This talk will survey recent work on achieving multi-contact locomotion control for humanoid and legged robots. I will start by presenting some results on robust optimization-based control. We exploited robust optimization techniques, either stochastic or worst-case, to improve the robustness of Task-Space Inverse Dynamics (TSID), a well-known control framework for legged robots. We modeled uncertainties in the joint torques and immunized the constraints of the system against any realization of these uncertainties. We also applied the same methodology to ensure the balance of the robot despite bounded errors in its inertial parameters. Extensive simulations in a realistic environment show that the proposed robust controllers greatly outperform the classic one. Then I will present preliminary results on a new capturability criterion for legged robots in multi-contact. "N-step capturability" is the ability of a system to come to a stop by taking N or fewer steps. Simplified models to compute N-step capturability already exist and are widely used, but they are limited to locomotion on flat terrain. We propose a new efficient algorithm to compute 0-step capturability for a robot in arbitrary contact scenarios. Finally, I will present our recent efforts to transfer the above-mentioned techniques to the real humanoid robot HRP-2, on which we recently implemented joint torque control.
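
For intuition only, here is a minimal sketch of worst-case tightening of a linear constraint under bounded torque errors; the matrices and bounds are made up, and this is not the TSID formulation presented in the talk.

```python
# Hypothetical sketch: worst-case tightening of a linear constraint A @ tau <= b
# when each joint torque tau_i is only known up to a bounded error |e_i| <= delta_i.
# Requiring A @ (tau + e) <= b for every admissible e is equivalent to the
# tightened constraint A @ tau <= b - |A| @ delta (worst case attained row-wise).
import numpy as np

def robustify_constraint(A, b, delta):
    """Return the tightened right-hand side so that A @ tau <= b_robust
    guarantees A @ (tau + e) <= b for all |e| <= delta (element-wise)."""
    return b - np.abs(A) @ delta

A = np.array([[1.0, -2.0],
              [0.5,  1.5]])
b = np.array([10.0, 4.0])
delta = np.array([0.3, 0.2])      # assumed torque error bounds per joint
print(robustify_constraint(A, b, delta))   # [9.3, 3.55]
```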

Organizers: Ludovic Righetti

Learning from Synthetic Humans

Talk
  • 04 May 2017 • 15:00 - 16:00
  • Gul Varol
  • N3.022 (Greenhouse)

Estimating human pose, shape, and motion from images and video is a fundamental challenge with many applications. Recent advances in 2D human pose estimation use large amounts of manually labeled training data for learning convolutional neural networks (CNNs). Such data is time-consuming to acquire and difficult to extend. Moreover, manual labeling of 3D pose, depth and motion is impractical. In this work we present SURREAL: a new large-scale dataset with synthetically generated but realistic images of people rendered from 3D sequences of human motion capture data. We generate more than 6 million frames together with ground-truth pose, depth maps, and segmentation masks. We show that CNNs trained on our synthetic dataset allow for accurate human depth estimation and human part segmentation in real RGB images. Our results and the new dataset open up new possibilities for advancing person analysis using cheap and large-scale synthetic data.
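
As a rough illustration of the training setup (the shapes, network, and number of part labels below are assumptions, not the models used for SURREAL), a tiny fully-convolutional network can be trained for per-pixel part labels on rendered frames:

```python
# Minimal sketch: training a small fully-convolutional network for human part
# segmentation on synthetically rendered frames with ground-truth masks.
import torch
import torch.nn as nn

NUM_PARTS = 15                      # assumed number of body-part labels (incl. background)
net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, NUM_PARTS, 1),    # per-pixel class scores
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a loader over rendered RGB frames and ground-truth part masks.
images = torch.rand(4, 3, 64, 64)
masks = torch.randint(0, NUM_PARTS, (4, 64, 64))

for step in range(10):
    opt.zero_grad()
    logits = net(images)            # (B, NUM_PARTS, H, W)
    loss = loss_fn(logits, masks)   # per-pixel cross-entropy
    loss.backward()
    opt.step()
```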

Organizers: Dimitris Tzionas

Frederick Eberhardt - TBA

IS Colloquium
  • 03 July 2017 • 11:15 - 12:15
  • Frederick Eberhardt
  • Max Planck House Lecture Hall

Organizers: Sebastian Weichwald

  • Dimitris Tzionas
  • MRZ Seminar Room

Hand motion capture with an RGB-D sensor has recently gained a lot of research attention; however, even the most recent approaches focus on the case of a single isolated hand. We focus instead on hands that interact with other hands or with a rigid or articulated object. Our framework successfully captures motion in such scenarios by combining a generative model with discriminatively trained salient points, collision detection, and physics simulation to achieve a low tracking error with physically plausible poses. All components are unified in a single objective function that can be optimized with standard optimization techniques. We initially assume a priori knowledge of the object's shape and skeleton. In the case of unknown object shape, existing 3D reconstruction methods capitalize on distinctive geometric or texture features. These methods, however, fail for textureless and highly symmetric objects like household articles, mechanical parts or toys. We show that extracting 3D hand motion for in-hand scanning effectively facilitates the reconstruction of such objects, and we fuse the rich additional information of hands into a 3D reconstruction pipeline. Finally, although shape reconstruction is enough for rigid objects, there is a lack of tools that build rigged models of articulated objects that deform realistically. We propose a method that creates a fully rigged model consisting of a watertight mesh, embedded skeleton and skinning weights by employing a combination of deformable mesh tracking, motion segmentation based on spectral clustering, and skeletonization based on mean curvature flow.
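
To illustrate the idea of unifying several components in a single objective (the terms below are placeholders, not the actual energy from the talk), one can minimize a weighted sum of costs over the pose with a standard optimizer:

```python
# Highly simplified sketch: several cost terms are combined into one objective over a
# low-dimensional "pose" vector and minimized with a standard optimizer.
import numpy as np
from scipy.optimize import minimize

def data_term(pose):             # stand-in for a generative image-likelihood term
    return np.sum((pose - np.array([0.2, -0.1, 0.4])) ** 2)

def salient_point_term(pose):    # stand-in for discriminatively detected salient points
    return np.sum((pose - np.array([0.25, -0.05, 0.35])) ** 2)

def collision_term(pose):        # stand-in penalty (here simply: negative components)
    return np.sum(np.minimum(pose, 0.0) ** 2)

def objective(pose):
    return data_term(pose) + 0.5 * salient_point_term(pose) + 10.0 * collision_term(pose)

result = minimize(objective, x0=np.zeros(3), method='L-BFGS-B')
print(result.x)
```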

Organizers: Javier Romero


Bipartite Matching and Multi-target Tracking

Talk
  • 22 July 2016 • 12:00 - 12:45
  • Anton Milan
  • MRZ Seminar Room

Matching between two sets arises in various areas in computer vision, such as feature point matching for 3D reconstruction, person re-identification for surveillance or data association for multi-target tracking. Most previous work focused either on designing suitable features and matching cost functions, or on developing faster and more accurate solvers for quadratic or higher-order problems. In the first part of my talk, I will present a strategy for improving state-of-the-art solutions by efficiently computing the marginals of the joint matching probability. The second part of my talk will revolve around our recent work on online multi-target tracking using recurrent neural networks (RNNs). I will mention some fundamental challenges we encountered and present our current solution.
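
As a simple baseline illustration of the bipartite matching problem involved (this computes only the single best assignment, not the marginals discussed in the talk), the Hungarian algorithm can be applied to a cost matrix between two sets:

```python
# Illustrative baseline only: the optimal bipartite assignment between, e.g.,
# detections and tracks, given a matching-cost matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.5, 5.0],
                 [3.0, 2.0, 2.0]])   # cost[i, j]: cost of matching item i to item j

rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)), cost[rows, cols].sum())
```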


Dynamic and Groupwise Statistical Analysis of 3D Faces

Talk
  • 09 June 2016 • 11:00 - 11:45
  • Timo Bolkart
  • MRC seminar room

The accurate reconstruction of facial shape is important for applications such as telepresence and gaming. It can be solved efficiently with the help of statistical shape models that constrain the shape of the reconstruction. In this talk, several methods to statistically analyze static and dynamic 3D face data are discussed. When statistically analyzing faces, various challenges arise from noisy, corrupt, or incomplete data. To overcome the limitations imposed by the poor data quality, we leverage redundancy in the data for shape processing. This is done by processing entire motion sequences in the case of dynamic data, and by jointly processing large databases in a groupwise fashion in the case of static data. First, a fully automatic approach to robustly register and statistically analyze facial motion sequences using a multilinear face model as statistical prior is proposed. Further, a statistical face model is discussed, which consists of many localized, decorrelated multilinear models. The localized and multi-scale nature of this model allows for recovery of fine-scale details while retaining robustness to severe noise and occlusions. Finally, the learning of statistical face models is formulated as a groupwise optimization framework that aims to learn a multilinear model while jointly optimizing the correspondence, or correcting the data.
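
For readers unfamiliar with multilinear face models, the following toy sketch (all dimensions and tensors are made up) shows how a face is reconstructed by contracting a core tensor with identity and expression coefficients:

```python
# Toy sketch of a multilinear face model: a core tensor over vertex coordinates,
# identities, and expressions is contracted with identity and expression weights
# to reconstruct one face.
import numpy as np

n_verts, n_id, n_exp = 300, 10, 5              # assumed sizes
core = np.random.randn(n_verts, n_id, n_exp)   # stand-in for a learned core tensor
w_id = np.random.randn(n_id)                   # identity coefficients
w_exp = np.random.randn(n_exp)                 # expression coefficients

# Mode-2 and mode-3 contractions: the result is a vector of vertex coordinates.
face = np.einsum('vie,i,e->v', core, w_id, w_exp)
print(face.shape)   # (300,)
```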


  • Christian Ebenbauer
  • AMD Seminar Room (Paul-Ehrlich-Str. 15, 1st floor)

In many control applications, the goal is to operate a dynamical system in an optimal way with respect to a certain performance criterion. In a combustion engine, for example, the goal could be to control the engine such that the emissions are minimized. Due to the complexity of an engine, the desired operating point is unknown or may even change over time, so that it cannot be determined a priori. Extremum seeking control is a learning-control methodology for solving such control problems. It is a model-free method that optimizes the steady-state behavior of a dynamical system. Since it can be implemented with very limited resources, it has found several applications in industry. In this talk we give an introduction to extremum seeking theory based on a recently developed framework which relies on tools from geometric control. Furthermore, we discuss how this framework can be utilized to solve distributed optimization and coordination problems in multi-agent systems.
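
As a hedged illustration of the basic idea, here is a minimal simulation of the textbook perturbation-based extremum seeking scheme (not the geometric-control framework of the talk): a sinusoidal dither probes an unknown performance map, and the demodulated signal drives the parameter toward the optimum.

```python
# Minimal simulation of classic perturbation-based extremum seeking for minimization.
# The performance map J and all gains are assumptions for illustration; the usual
# high-pass filter on the measurement is omitted for brevity.
import numpy as np

def J(u):                      # unknown steady-state performance map (assumed)
    return (u - 2.0) ** 2      # minimum at u = 2

dt, a, omega, k = 1e-3, 0.2, 50.0, 10.0   # step, dither amplitude/frequency, gain
u_hat = 0.0                               # parameter estimate
for i in range(int(20.0 / dt)):
    t = i * dt
    dither = a * np.sin(omega * t)
    y = J(u_hat + dither)                 # measured performance
    u_hat -= k * y * dither * dt          # demodulate and integrate (gradient estimate)

print(u_hat)   # converges near the optimum u = 2
```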

Organizers: Sebastian Trimpe


Safe Learning Control for Mobile Robots

IS Colloquium
  • 25 April 2016 • 11:15 - 12:15
  • Angela Schoellig
  • Max Planck Haus Lecture Hall

In the last decade, there has been a major shift in the perception, use and predicted applications of robots. In contrast to their early industrial counterparts, robots are envisioned to operate in increasingly complex and uncertain environments, alongside humans, and over long periods of time. In my talk, I will argue that machine learning is indispensable in order for this new generation of robots to achieve high performance. Based on various examples (and videos) ranging from aerial-vehicle dancing to ground-vehicle racing, I will demonstrate the effect of robot learning, and highlight how our learning algorithms intertwine model-based control with machine learning. In particular, I will focus on our latest work that provides guarantees during learning (for example, safety and robustness guarantees) by combining traditional controls methods (nonlinear, robust and model predictive control) with Gaussian process regression.
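
As a rough sketch of one ingredient (the data, kernel, and usage below are assumptions, not the controllers from the talk), a Gaussian process can be fit to observed model errors so that its predictive uncertainty indicates where the learned model should not yet be trusted:

```python
# Illustrative sketch: fitting a Gaussian process to observed model errors so that the
# predictive uncertainty can keep a controller conservative where little data exists.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Assumed data: states x and the residual between observed and nominal-model dynamics.
x = np.linspace(-2, 2, 25).reshape(-1, 1)
residual = np.sin(2 * x).ravel() + 0.05 * np.random.randn(25)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5) + WhiteKernel(1e-3))
gp.fit(x, residual)

x_test = np.array([[0.3], [3.0]])
mean, std = gp.predict(x_test, return_std=True)
print(mean, std)   # large std far from the data -> act cautiously there
```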

Organizers: Sebastian Trimpe


Pose-based human action recognition

Talk
  • 21 April 2016 • 11:30 - 12:30
  • Cordelia Schmid
  • MRZ Seminar Room

In this talk we present some recent results on human action recognition in videos. We first show how to use human pose for action recognition. To this end we propose a new pose-based convolutional neural network descriptor for action recognition, which aggregates motion and appearance information along tracks of human body parts. Next, we present an approach for spatio-temporal action localization in realistic videos. The approach first detects proposals at the frame level and then tracks high-scoring proposals in the video. Our tracker relies simultaneously on instance-level and class-level detectors. Actions are localized in time with a sliding-window approach at the track level. Finally, we show how to extend this method to weakly supervised learning of actions, which allows scaling to large amounts of data without manual annotation.
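
For illustration only (the window length and scores below are invented), sliding-window temporal localization over per-frame track scores can be as simple as:

```python
# Toy sketch of sliding-window temporal localization: given per-frame action scores
# along a track, pick the fixed-length window with the highest mean score.
import numpy as np

scores = np.array([0.1, 0.2, 0.8, 0.9, 0.85, 0.3, 0.2, 0.1])  # per-frame classifier scores
win = 3
means = np.convolve(scores, np.ones(win) / win, mode='valid')
start = int(np.argmax(means))
print(start, start + win - 1, means[start])   # best temporal window is frames [2, 4]
```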


Long-term Temporal Convolutions for Action Recognition

Talk
  • 12 April 2016 • 14:00 - 15:00
  • Gül Varol
  • MRZ Seminar Room

Typical human actions such as hand-shaking and drinking last several seconds and exhibit characteristic spatio-temporal structure. Recent methods attempt to capture this structure and learn action representations with convolutional neural networks. Such representations, however, are typically learned at the level of single frames or short video clips and fail to model actions at their full temporal scale. In this work we learn video representations using neural networks with long-term temporal convolutions. We demonstrate that CNN models with increased temporal extents improve the accuracy of action recognition despite reduced spatial resolution. We also study the impact of different low-level representations, such as raw values of video pixels and optical flow vector fields, and demonstrate the importance of high-quality optical flow estimation for learning accurate action models. We report state-of-the-art results on two challenging benchmarks for human action recognition, UCF101 and HMDB51.
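
A minimal sketch of a long-term temporal convolution block is shown below; the clip size and layer sizes are assumptions for illustration, not the architecture evaluated in the talk.

```python
# Minimal sketch: 3D convolutions applied to a long clip at reduced spatial resolution,
# pooled into a single clip-level feature vector.
import torch
import torch.nn as nn

clip = torch.rand(1, 3, 60, 58, 58)      # (batch, channels, frames, height, width)
block = nn.Sequential(
    nn.Conv3d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d(kernel_size=2),          # halves the temporal and spatial extents
    nn.Conv3d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),              # global pooling to a clip-level feature
)
feature = block(clip).flatten(1)          # (1, 128) clip descriptor
print(feature.shape)
```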


Ray Tracing for Computer Vision

Talk
  • 08 April 2016 • 10:30 - 11:30
  • Helge Rhodin
  • MRC seminar room

Proper handling of occlusions is a big challenge for model-based reconstruction; in multi-view motion capture, for example, a major difficulty is the handling of occluding body parts. We propose a smooth volumetric scene representation, which implicitly converts occlusion into a smooth and differentiable phenomenon (ICCV 2015). Our ray-tracing image formation model helps to express the objective in a single closed-form expression. This is in contrast to existing surface (mesh) representations, where occlusion is a local effect, causes non-differentiability, and is difficult to optimize. We demonstrate improvements for multi-view scene reconstruction, rigid object tracking, and motion capture. Moreover, I will show an application of motion tracking to the interactive control of virtual characters (SIGGRAPH Asia 2015).
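
The following toy sketch conveys the flavor of a smooth volumetric ray-casting model (it is inspired by, but not identical to, the formulation in the talk): the density is a sum of Gaussians, and visibility along a ray decays smoothly with the accumulated density, so occlusion is differentiable with respect to the scene parameters.

```python
# Toy sketch: two isotropic Gaussian blobs, with smooth transmittance along a ray.
import numpy as np

centers = np.array([[0.0, 0.0, 2.0], [0.2, 0.0, 4.0]])   # Gaussian blob centers (assumed)
sigmas = np.array([0.3, 0.3])
weights = np.array([1.0, 1.0])

origin = np.zeros(3)
direction = np.array([0.0, 0.0, 1.0])                     # unit ray direction

ts = np.linspace(0.0, 6.0, 200)                           # samples along the ray
points = origin + ts[:, None] * direction
density = sum(w * np.exp(-np.sum((points - c) ** 2, axis=1) / (2 * s ** 2))
              for c, s, w in zip(centers, sigmas, weights))
transmittance = np.exp(-np.cumsum(density) * (ts[1] - ts[0]))  # smooth visibility
print(transmittance[-1])   # fraction of light passing both blobs
```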


  • Aamir Ahmad
  • MRC seminar room

The core focus of my research is on robot perception. Within this broad categorization, I am mainly interested in understanding how teams of robots and sensors can cooperate and/or collaborate to improve the perception of themselves (self-localization) as well as their surroundings (target tracking, mapping, etc.). In this talk I will describe the inter-dependencies of such perception modules and present state-of-the-art methods to perform unified cooperative state estimation. The trade-off between accuracy of estimation and computational speed will be highlighted through a new optimization-based method for unified-state estimation. Furthermore, I will also describe how perception-based multirobot formation control can be achieved. Towards the end, I will present some recent results on cooperative vision-based target tracking and a few comments on our ongoing work regarding cooperative aerial mapping with human-in-the-loop.
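
As a toy illustration of cooperative estimation (this is generic inverse-covariance fusion, not the optimization-based method from the talk), two robots' estimates of a common target can be fused by weighting them with their covariances:

```python
# Toy sketch: fuse two position estimates of the same target by inverse-covariance
# weighting; the fused estimate leans toward the more confident robot.
import numpy as np

z1, P1 = np.array([2.0, 1.0]), np.diag([0.04, 0.04])   # robot 1: estimate, covariance
z2, P2 = np.array([2.3, 0.8]), np.diag([0.25, 0.25])   # robot 2: noisier estimate

W1, W2 = np.linalg.inv(P1), np.linalg.inv(P2)
P_fused = np.linalg.inv(W1 + W2)
z_fused = P_fused @ (W1 @ z1 + W2 @ z2)
print(z_fused, np.diag(P_fused))
```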


  • Valsamis Ntouskos
  • MRC seminar room

Modeling and reconstruction of shape and motion are problems of fundamental importance in computer vision. Inverse problem theory constitutes a powerful mathematical framework for dealing with ill-posed problems such as those typically arising in shape and motion modeling. In this talk, I will present methods inspired by inverse problem theory for dealing with four different shape and motion modeling problems. In particular, in the context of shape modeling, I will present a method for component-wise modeling of articulated objects and its application in computing 3D models of animals. Additionally, I will discuss the problem of modeling specular surfaces via the properties of their material, and I will also present a model for confidence-driven depth image fusion based on total variation regularization. Regarding motion, I will discuss a method for the recognition of human actions from motion capture data based on nonparametric Bayesian models.
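
As a hedged, one-dimensional toy example of the regularizer involved (not the actual fusion method from the talk), two noisy depth profiles can be fused by minimizing a confidence-weighted data term plus a smoothed total variation penalty:

```python
# Toy sketch: confidence-weighted fusion of two noisy 1D depth profiles with a
# smoothed total-variation regularizer, minimized by plain gradient descent.
import numpy as np

truth = np.concatenate([np.ones(30), 2 * np.ones(30)])
obs1 = truth + 0.1 * np.random.randn(60)
obs2 = truth + 0.3 * np.random.randn(60)
w1, w2 = 1.0 / 0.1 ** 2, 1.0 / 0.3 ** 2          # confidences = inverse noise variances
lam, eps, step = 2.0, 1e-3, 1e-3

u = (w1 * obs1 + w2 * obs2) / (w1 + w2)          # initialise with the weighted mean
for _ in range(2000):
    grad_data = w1 * (u - obs1) + w2 * (u - obs2)
    du = np.diff(u)
    smooth = du / np.sqrt(du ** 2 + eps)          # derivative of the smoothed |du|
    tv_grad = np.zeros_like(u)
    tv_grad[:-1] -= smooth
    tv_grad[1:] += smooth
    u -= step * (grad_data + lam * tv_grad)

print(np.abs(u - truth).mean())   # fused profile is close to the step-shaped truth
```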