Institute Talks

Structured Deep Visual Dynamics Models for Robot Manipulation

Talk
  • 23 October 2017 • 10:00–11:15
  • Arunkumar Byravan
  • AMD meeting room

The ability to predict how an environment changes in response to applied forces is fundamental for a robot to achieve specific goals. Traditionally in robotics, this problem is addressed with pre-specified models or physics simulators that take advantage of prior knowledge of the problem structure. While these models are general and broadly applicable, they depend on accurate estimation of model parameters such as object shape, mass, and friction. On the other hand, learning-based methods such as Predictive State Representations or more recent deep learning approaches learn these models directly from raw perceptual information in a model-free manner. These methods operate on raw data without any intermediate parameter estimation, but lack the structure and generality of model-based techniques. In this talk, I will present work that tries to bridge the gap between these two paradigms by proposing a specific class of deep visual dynamics models (SE3-Nets) that explicitly encode strong physical and 3D geometric priors (specifically, rigid body dynamics) in their structure. As opposed to traditional deep models that reason about dynamics/motion at a pixel level, we show that the physical priors implicit in our network architectures enable them to reason about dynamics at the object level: our network learns to identify objects in the scene and to predict a rigid body rotation and translation per object. I will present results on applying our deep architectures to two specific problems: 1) modeling scene dynamics, where the task is to predict future depth observations given the current observation and an applied action, and 2) real-time visuomotor control of a Baxter manipulator based only on raw depth data. We show that: 1) our proposed architectures significantly outperform baseline deep models on dynamics modeling, and 2) our architectures perform comparably or better than baseline models for visuomotor control while operating at camera rate (30 Hz) and relying on far less information.
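
As a rough illustration of the object-level prediction described above, here is a minimal sketch (my variable names, not the authors' code) of the blending step an SE3-Net-style model performs, assuming the network outputs K per-object rotations, translations, and soft point-to-object masks:

    import numpy as np

    def blend_rigid_transforms(points, masks, rotations, translations):
        """Predict the next point cloud from per-object SE(3) transforms.

        points       : (N, 3) 3D points, e.g. backprojected from a depth image
        masks        : (K, N) soft assignment of each point to K objects,
                       assumed to sum to 1 over K for every point
        rotations    : (K, 3, 3) per-object rotation matrices
        translations : (K, 3) per-object translation vectors
        """
        # Apply every object's rigid transform to all points: (K, N, 3)
        moved = np.einsum('kij,nj->kni', rotations, points) + translations[:, None, :]
        # Blend the K rigid predictions with the soft masks: (N, 3)
        return np.einsum('kn,kni->ni', masks, moved)

Because each point moves under one of only K rigid transforms rather than an unconstrained per-pixel flow, the rigid-body prior is built directly into the output parameterization.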

Organizers: Franzi Meier

Modern Optimization for Structured Machine Learning

IS Colloquium
  • 23 October 2017 • 11:15–12:15
  • Simon Lacoste-Julien
  • IS Lecture Hall

Machine learning has become a popular application domain for modern optimization techniques, pushing their algorithmic frontier. The need for large-scale optimization algorithms that can handle millions of dimensions or data points, typical of the big data era, has brought a resurgence of interest in first-order algorithms, making us revisit the venerable stochastic gradient method [Robbins-Monro 1951] as well as the Frank-Wolfe algorithm [Frank-Wolfe 1956]. In this talk, I will review recent improvements to these algorithms that exploit the structure of modern machine learning approaches. I will explain why the Frank-Wolfe algorithm has become so popular lately, and present a surprising tweak on the stochastic gradient method which yields a fast linear convergence rate. Motivating applications will include weakly supervised video analysis and structured prediction problems.
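
For readers unfamiliar with it, a generic Frank-Wolfe (conditional gradient) sketch is given below; this is the textbook method, not material from the talk. Its appeal is that each iteration solves only a linear subproblem over the constraint set, so iterates stay feasible without any projection:

    import numpy as np

    def frank_wolfe(grad, lmo, x0, num_iters=100):
        """Frank-Wolfe: grad returns the objective's gradient at x;
        lmo(g) returns argmin_{s in C} <g, s> (linear minimization oracle)."""
        x = x0
        for t in range(num_iters):
            s = lmo(grad(x))                    # cheap linear subproblem over C
            gamma = 2.0 / (t + 2.0)             # standard open-loop step size
            x = (1.0 - gamma) * x + gamma * s   # convex combination stays in C
        return x

    # Toy example: minimize ||x - b||^2 over the probability simplex,
    # whose LMO simply returns the best vertex (a coordinate basis vector).
    b = np.array([0.1, 0.7, 0.2])
    x = frank_wolfe(lambda x: 2 * (x - b),
                    lambda g: np.eye(len(g))[np.argmin(g)],
                    np.ones(3) / 3)

For structured prediction the same oracle structure is what makes the method attractive: the linear minimization over the marginal polytope reduces to MAP inference.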

Organizers: Philipp Hennig

Safe Learning Control for Mobile Robots

IS Colloquium
  • 25 April 2016 • 11:15–12:15
  • Angela Schoellig
  • Max Planck Haus Lecture Hall

In the last decade, there has been a major shift in the perception, use, and predicted applications of robots. In contrast to their early industrial counterparts, robots are envisioned to operate in increasingly complex and uncertain environments, alongside humans, and over long periods of time. In my talk, I will argue that machine learning is indispensable for this new generation of robots to achieve high performance. Based on various examples (and videos) ranging from aerial-vehicle dancing to ground-vehicle racing, I will demonstrate the effect of robot learning and highlight how our learning algorithms intertwine model-based control with machine learning. In particular, I will focus on our latest work that provides guarantees during learning (for example, safety and robustness guarantees) by combining traditional control methods (nonlinear, robust, and model predictive control) with Gaussian process regression.
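
As background for the Gaussian process component, the sketch below gives bare-bones GP regression with an RBF kernel (standard textbook formulas, not the speaker's code); the predictive variance is the quantity a safety-aware controller can use to bound model uncertainty:

    import numpy as np

    def gp_predict(X, y, X_star, length_scale=1.0, signal_var=1.0, noise_var=0.01):
        """Return the GP posterior mean and variance at test inputs X_star."""
        def kernel(A, B):  # squared-exponential (RBF) kernel matrix
            d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
            return signal_var * np.exp(-0.5 * d2 / length_scale**2)

        K = kernel(X, X) + noise_var * np.eye(len(X))
        K_s = kernel(X_star, X)
        mean = K_s @ np.linalg.solve(K, y)
        var = signal_var - np.sum(K_s * np.linalg.solve(K, K_s.T).T, axis=1)
        return mean, var

A typical safe-learning recipe, in rough terms, trusts the learned model only where the predictive standard deviation is small and falls back on the robust nominal controller elsewhere.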

Organizers: Sebastian Trimpe


Pose-based human action recognition

Talk
  • 21 April 2016 • 11:30–12:30
  • Cordelia Schmid
  • MRZ Seminar Room

In this talk we present some recent results on human action recognition in videos. We first show how to use human pose for action recognition. To this end we propose a new pose-based convolutional neural network descriptor for action recognition, which aggregates motion and appearance information along tracks of human body parts. Next, we present an approach for spatio-temporal action localization in realistic videos. The approach first detects proposals at the frame level and then tracks high-scoring proposals through the video. Our tracker relies simultaneously on instance-level and class-level detectors. Actions are localized in time with a sliding-window approach at the track level. Finally, we show how to extend this method to weakly supervised learning of actions, which allows scaling to large amounts of data without manual annotation.
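
To make the aggregation idea concrete, here is a schematic sketch (hypothetical shapes and pooling choices, not the exact descriptor from the talk): per-frame CNN features are pooled over time along each body-part track, giving a fixed-length video descriptor regardless of clip length.

    import numpy as np

    def aggregate_part_track(features):
        """features: (T, D) per-frame CNN features for one body-part track.
        Temporal max and min pooling yield a length-independent descriptor."""
        return np.concatenate([features.max(axis=0), features.min(axis=0)])

    # One descriptor per tracked part (e.g. hands, upper body, full frame),
    # concatenated into the final pose-based video representation.
    video_descriptor = np.concatenate(
        [aggregate_part_track(np.random.randn(50, 4096)) for _ in range(4)])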


Long-term Temporal Convolutions for Action Recognition

Talk
  • 12 April 2016 • 14:00–15:00
  • Gül Varol
  • MRZ Seminar Room

Typical human actions such as hand-shaking and drinking last several seconds and exhibit characteristic spatio-temporal structure. Recent methods attempt to capture this structure and learn action representations with convolutional neural networks. Such representations, however, are typically learned at the level of single frames or short video clips and fail to model actions at their full temporal scale. In this work we learn video representations using neural networks with long-term temporal convolutions. We demonstrate that CNN models with increased temporal extents improve the accuracy of action recognition despite reduced spatial resolution. We also study the impact of different low-level representations, such as raw video pixel values and optical flow vector fields, and demonstrate the importance of high-quality optical flow estimation for learning accurate action models. We report state-of-the-art results on two challenging benchmarks for human action recognition, UCF101 and HMDB51.
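
To illustrate what a long-term temporal convolution looks like in practice, here is a toy PyTorch stack (layer sizes are illustrative, not the paper's exact architecture): the input clip is long in time but small in space, and every convolution spans the temporal axis.

    import torch
    import torch.nn as nn

    net = nn.Sequential(
        nn.Conv3d(2, 64, kernel_size=3, padding=1),   # joint space-time convolution
        nn.ReLU(),
        nn.MaxPool3d(kernel_size=2),                  # halves T, H and W
        nn.Conv3d(64, 128, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool3d(1),                      # global space-time pooling
        nn.Flatten(),
        nn.Linear(128, 101),                          # e.g. 101 classes for UCF101
    )

    # A 60-frame clip at 58x58 pixels with 2 channels (e.g. an optical flow field):
    clip = torch.randn(1, 2, 60, 58, 58)              # (batch, C, T, H, W)
    logits = net(clip)

Trading spatial resolution for temporal extent keeps the input tensor, and hence memory use, manageable while letting the filters cover the full duration of an action.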


Ray Tracing for Computer Vision

Talk
  • 08 April 2016 • 10:30–11:30
  • Helge Rhodin
  • MRC seminar room

Proper handling of occlusions is a big challenge for model-based reconstruction; in multi-view motion capture, for example, a major difficulty is the handling of occluding body parts. We propose a smooth volumetric scene representation which implicitly turns occlusion into a smooth and differentiable phenomenon (ICCV 2015). Our ray-tracing image formation model allows the objective to be expressed in a single closed-form expression. This is in contrast to existing surface (mesh) representations, where occlusion is a local effect, causes non-differentiability, and is difficult to optimize. We demonstrate improvements for multi-view scene reconstruction, rigid object tracking, and motion capture. Moreover, I will show an application of motion tracking to the interactive control of virtual characters (SIGGRAPH Asia 2015).
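
To sketch the idea of smooth, differentiable occlusion (my notation; the paper derives a closed-form along-ray integral, whereas this toy version integrates numerically): model scene density as a sum of Gaussian blobs and let visibility decay with the density accumulated along the ray.

    import numpy as np

    def visibility_along_ray(origin, direction, centers, stds, magnitudes,
                             t_max=10.0, n_samples=200):
        """Transmittance exp(-integral of density) along one ray, for a scene
        modeled as K isotropic Gaussian density blobs (centers: (K, 3))."""
        ts = np.linspace(0.0, t_max, n_samples)
        pts = origin + ts[:, None] * direction                       # (n, 3)
        d2 = np.sum((pts[:, None, :] - centers[None]) ** 2, axis=2)  # (n, K)
        density = (magnitudes * np.exp(-0.5 * d2 / stds ** 2)).sum(axis=1)
        # Accumulated density from the origin to each sample (trapezoid rule)
        tau = np.concatenate(
            [[0.0], np.cumsum(0.5 * (density[1:] + density[:-1]) * np.diff(ts))])
        return np.exp(-tau)  # smooth in all blob parameters

Because the blobs have infinite support, visibility changes smoothly as one body part moves in front of another, which is exactly what makes the objective differentiable.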


  • Aamir Ahmad
  • MRC seminar room

The core focus of my research is robot perception. Within this broad categorization, I am mainly interested in understanding how teams of robots and sensors can cooperate and/or collaborate to improve the perception of themselves (self-localization) as well as their surroundings (target tracking, mapping, etc.). In this talk I will describe the inter-dependencies of such perception modules and present state-of-the-art methods for unified cooperative state estimation. The trade-off between estimation accuracy and computational speed will be highlighted through a new optimization-based method for unified state estimation. Furthermore, I will describe how perception-based multi-robot formation control can be achieved. Towards the end, I will present some recent results on cooperative vision-based target tracking and a few comments on our ongoing work on cooperative aerial mapping with a human in the loop.


  • Valsamis Ntouskos
  • MRC seminar room

Modeling and reconstruction of shape and motion are problems of fundamental importance in computer vision. Inverse problem theory constitutes a powerful mathematical framework for dealing with ill-posed problems such as those typically arising in shape and motion modeling. In this talk, I will present methods inspired by inverse problem theory for dealing with four different shape and motion modeling problems. In particular, in the context of shape modeling, I will present a method for component-wise modeling of articulated objects and its application to computing 3D models of animals. Additionally, I will discuss the problem of modeling specular surfaces via their material properties, and I will present a model for confidence-driven depth image fusion based on total variation regularization. Regarding motion, I will discuss a method for recognizing human actions from motion capture data based on nonparametric Bayesian models.


Computer Vision on UAVs – practical considerations

Talk
  • 10 March 2016 • 11:00–12:00
  • Eric Price
  • MRZ Seminar Room

Computer vision on flying robots, or UAVs, brings its own challenges, especially if conducted in real time. On-board processing is limited by tight weight and size constraints on the electronics, while off-board processing is challenged by signal delays and connection quality, especially considering the data rates required for high-frame-rate, high-resolution video. Unlike for ground-based vehicles, precision odometry is unavailable. Positional information is provided by GPS, which can suffer signal losses and limited precision, especially near terrain. Exact orientation can be even more problematic due to magnetic interference and vibration affecting the sensors. In this talk I'd like to present and discuss some examples of practical problems encountered when trying to get robotics airborne, as well as possible solutions.
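
As a back-of-the-envelope illustration of the data-rate point (my numbers, not the talk's), uncompressed video quickly exceeds what a typical wireless link can carry:

    # Raw bandwidth of an uncompressed 1080p / 30 fps / 8-bit RGB stream
    width, height, channels, fps = 1920, 1080, 3, 30
    bytes_per_second = width * height * channels * fps
    print(f"{bytes_per_second / 1e6:.0f} MB/s")  # ~187 MB/s, i.e. ~1.5 Gbit/s

Hence the trade-off the talk describes: compress (adding latency and artifacts) or process on board (adding weight and power draw).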

Organizers: Alina Allakhverdieva


  • Catrin Misselhorn
  • Max Planck Haus Lecture Hall

The development of increasingly intelligent and autonomous technologies will inevitably lead to these systems having to face morally problematic situations. This is particularly true of artificial systems that are used in geriatric care environments. It will, therefore, be necessary in the long run to develop machines with the capacity for a certain amount of autonomous moral decision-making. The goal of this talk is to provide the theoretical foundations of artificial morality, i.e., of implementing moral capacities in artificial systems in general, as well as a roadmap for developing an assistive system for geriatric care that is capable of moral learning.

Organizers: Ludovic Righetti, Philipp Hennig


From image restoration to image understanding

Talk
  • 03 March 2016 • 11:30–12:00
  • Lars Mescheder
  • MRZ Seminar Room

Inverse problems are ubiquitous in image processing and in applied science in general. Such problems describe the challenge of computing the parameters that characterize a system from its outcomes. While this might seem easy at first for simple systems, many inverse problems share a property that makes them much more intricate: they are ill-posed. This means that either the problem does not have a unique solution or the solution does not depend continuously on the outcomes of the system. Bayesian statistics provides a framework for treating such problems in a systematic way: the missing piece of information is encoded as a prior distribution on the space of possible solutions. In this talk, we will study probabilistic image models as priors for statistical inversion. In particular, we will give a probabilistic interpretation of the classical TV prior and discuss how this interpretation can be used as a starting point for more complex models. We will see that many important auxiliary quantities, such as edges and regions, can be incorporated into the model in the form of latent variables. This leads to the conjecture that many image processing tasks, such as denoising and segmentation, should not be considered separately but should instead be treated together.
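
For concreteness, the probabilistic reading of the TV prior can be written as follows (standard formulation; notation may differ from the talk):

    p(x) \propto \exp\bigl(-\lambda\,\mathrm{TV}(x)\bigr),
    \qquad \mathrm{TV}(x) = \sum_i \left\| (\nabla x)_i \right\|_2,

so that under a Gaussian noise model y = x + n, the MAP estimate

    \hat{x} = \arg\max_x \, p(x \mid y)
            = \arg\min_x \, \tfrac{1}{2\sigma^2} \left\| y - x \right\|_2^2 + \lambda\,\mathrm{TV}(x)

recovers the classical TV (ROF) denoising objective, with the latent-variable models mentioned above replacing or augmenting this prior.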


Images of planets orbiting other stars

Talk
  • 01 March 2016 • 11:00–12:00
  • Sascha Quanz
  • AGBS Seminar Room

The detection and characterization of planets orbiting stars other than the Sun, i.e., so-called extrasolar planets, is one of the fastest growing and most vibrant research fields in modern astrophysics. In the last 25 years, more than 5400 extrasolar planets and planet candidates have been revealed, but the vast majority of these objects were detected with indirect techniques, where the existence of the planet is inferred from periodic changes in the light coming from the central star; no photons from the planets themselves are detected. In this talk, however, I will focus on the direct detection of extrasolar planets. On the one hand, I will describe the main challenges that have to be overcome in order to image planets around other stars: in addition to using the world's largest telescopes and optimized cameras, it was realized in the last few years that significant sensitivity gains can be achieved by applying advanced image processing techniques. On the other hand, I will demonstrate what can be learned if one succeeds in "taking a picture" of an extrasolar planet. After all, there must be good scientific reasons and a strong motivation why the direct detection of extrasolar planets is one of the key science drivers for current and future projects on major ground- and space-based telescopes.
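
One widely used family of such image processing techniques is PCA-based subtraction of the stellar point spread function (a sketch of the general idea; the talk does not commit to a specific method): build a low-rank model of the star's PSF from reference frames and remove its projection from the science frame, leaving the faint planet signal in the residual.

    import numpy as np

    def pca_psf_subtract(science, references, n_modes=5):
        """science: (H, W) frame that may contain a faint companion;
        references: (R, H, W) frames dominated by the stellar PSF.
        Requires n_modes <= R."""
        R, H, W = references.shape
        ref = references.reshape(R, -1)
        mean_psf = ref.mean(axis=0)
        # Principal components of the PSF variability across the stack
        _, _, vt = np.linalg.svd(ref - mean_psf, full_matrices=False)
        modes = vt[:n_modes]                       # (n_modes, H*W)
        sci = science.ravel() - mean_psf
        psf_model = modes.T @ (modes @ sci)        # projection onto PSF modes
        return (sci - psf_model).reshape(H, W)     # residual: planet + noise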

Organizers: Diana Rebmann