Institute Talks

John Cunningham - TBA

IS Colloquium
  • 06 March 2017 • 11:15 - 12:15
  • John Cunningham
  • MPH Lecture Hall

Organizers: Philipp Hennig

Frederick Eberhardt - TBA

IS Colloquium
  • 03 July 2017 • 11:15 - 12:15
  • Frederick Eberhardt
  • Max Planck House Lecture Hall

Organizers: Sebastian Weichwald

  • Hedvig Kjellström
  • MRZ Seminar Room

In this talk I will first outline my different research projects. I will then focus on one project with applications in health, and introduce the Inter-Battery Topic Model (IBTM). Our approach extends traditional topic models by learning a factorized latent variable representation. The structured representation leads to a model that marries benefits traditionally associated with a discriminative approach, such as feature selection, with those of a generative model, such as principled regularization and the ability to handle missing data. The factorization is provided by representing data in terms of aligned pairs of observations as different views. This provides a means of selecting a representation that separates topics existing in both views from topics that are unique to a single view. This structured consolidation allows for efficient and robust inference and provides a compact and efficient representation.
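To make the shared/private factorization idea concrete, here is a minimal generative sketch of paired two-view data in which some topics are common to both views and others are private to each view. All vocabulary sizes, topic counts, mixing weights and names are illustrative; this is not the IBTM or its inference procedure.

```python
# Minimal generative sketch of a two-view topic factorization
# (shared topics + view-private topics). Illustrative only; this is
# not the IBTM model or its inference, and all sizes are made up.
import numpy as np

rng = np.random.default_rng(0)

V1, V2 = 500, 300          # vocabulary sizes of the two views
K_shared, K_priv = 5, 3    # topics shared across views vs. private per view

# Topic-word distributions for each view (rows sum to 1).
topics_v1 = rng.dirichlet(np.ones(V1), size=K_shared + K_priv)
topics_v2 = rng.dirichlet(np.ones(V2), size=K_shared + K_priv)

def sample_paired_document(n_words=(80, 60)):
    """Shared topic proportions are reused in both views; private
    proportions are drawn independently per view."""
    theta_shared = rng.dirichlet(np.ones(K_shared))
    counts = []
    for topics, n in zip((topics_v1, topics_v2), n_words):
        theta_priv = rng.dirichlet(np.ones(K_priv))
        theta = np.concatenate([0.7 * theta_shared, 0.3 * theta_priv])  # sums to 1
        counts.append(rng.multinomial(n, theta @ topics))
    return counts

view1, view2 = sample_paired_document()
print(view1.sum(), view2.sum())   # 80 60
```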


Optical Robot Skin and Whole Body Vision

Talk
  • 19 October 2016 • 14:00 - 15:00
  • Chris Atkeson and Akihiko Yamaguchi
  • Max Planck House, Lecture Hall

Chris Atkeson will talk about the motivation for optical robot skin and whole-body vision. Akihiko Yamaguchi will talk about a first application, FingerVision.

Organizers: Ludovic Righetti


Numerics in Computational Stellar Astrophysics

Talk
  • 29 September 2016 • 14:00 - 15:00
  • Jean-Claude Passy
  • AGBS Seminar Room (Spemannstr. 38)

The importance of computer science in astrophysical research has increased tremendously over the past 15 years. Indeed, as observational facilities and missions are constantly pushing their precision limit, theorists need to provide observers with more and more realistic numerical models. These models need to be verified, validated, and their uncertainties must be assessed. In this talk, I will present the results of two independent numerical studies aiming at solving some fundamental problems in stellar astrophysics. First, I will explain how we have used different 3D hydrodynamics codes to simulate stellar mergers. In particular I will focus on the verification and validation steps, and describe a new algorithm to compute self-gravity that I have developed and implemented in a grid-based code. Then, I will introduce the concept of a 'stellar evolution' code which models the full evolution of a star, from its birth until its death. I will present a code comparison of several such codes widely used by the astrophysical community, and assess their systematic uncertainties. These modeling uncertainties must be taken into account by observers if they wish to derive observed parameters more reliably.
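As a point of reference for what "computing self-gravity in a grid-based code" involves, here is a textbook FFT-based Poisson solve on a periodic Cartesian grid. This is a generic illustration, not the algorithm developed by the speaker, and the grid size and density field are made up.

```python
# Textbook illustration of grid-based self-gravity: solve the Poisson
# equation laplacian(phi) = 4*pi*G*rho on a periodic grid with an FFT.
# This only shows what such a computation involves; it is NOT the
# algorithm presented in the talk.
import numpy as np

G = 6.674e-8          # gravitational constant [cgs]
N, L = 64, 1.0e12     # grid points per axis, box size [cm]

# Toy density field: a Gaussian blob in the box centre.
x = np.linspace(0.0, L, N, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
rho = np.exp(-((X - L/2)**2 + (Y - L/2)**2 + (Z - L/2)**2) / (0.05 * L)**2)

# Fourier-space solve: phi_hat = -4*pi*G*rho_hat / k^2 (mean mode set to zero).
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
k2 = KX**2 + KY**2 + KZ**2
k2[0, 0, 0] = 1.0                      # avoid division by zero
phi_hat = -4.0 * np.pi * G * np.fft.fftn(rho) / k2
phi_hat[0, 0, 0] = 0.0                 # fix the mean of the potential
phi = np.real(np.fft.ifftn(phi_hat))

# Gravitational acceleration is minus the gradient of the potential.
gx, gy, gz = np.gradient(-phi, L / N)
print(phi.shape, gx.shape)
```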

Organizers: Raffi Enficiaud


  • Jose R. Medina
  • AMD Seminar Room (Paul-Ehrlich-Str. 15, 1st floor)

Control under uncertainty is an omnipresent problem in robotics that typically arises when robots must cope with unknown environments or tasks. Robot control typically ignores uncertainty by considering only the expected outcomes of the robot's internal model. Interestingly, neuroscientists have shown that humans adapt their decisions depending on the level of uncertainty, which is not reflected in the expected values but in higher-order statistics. In this talk I will first present an approach to systematically address this problem in the context of stochastic optimal control. I will then give an example of how the robot's internal model structure defines the level of uncertainty and its distribution. Finally, experiments in a physical human-robot interaction setting will illustrate the capabilities of this approach.
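One generic way in which higher-order statistics enter a control objective is a risk-sensitive cost that penalizes the variance of the accumulated cost, estimated from Monte Carlo rollouts of a stochastic model. The sketch below only illustrates that idea; it is not the formulation presented in the talk, and the dynamics, gains and weights are invented.

```python
# Toy risk-sensitive objective: J = E[c] + (theta/2) * Var[c], estimated
# from Monte Carlo rollouts of a noisy 1D system. Not the talk's method.
import numpy as np

rng = np.random.default_rng(1)

def rollout_cost(gain, n_steps=50, noise_std=0.2):
    """Simulate a noisy 1D double integrator with a proportional
    controller and return the accumulated quadratic cost."""
    x, v, cost, dt = 1.0, 0.0, 0.0, 0.05
    for _ in range(n_steps):
        u = -gain * x - 0.5 * v
        v += dt * (u + noise_std * rng.standard_normal())
        x += dt * v
        cost += dt * (x**2 + 0.01 * u**2)
    return cost

def risk_sensitive_objective(gain, theta=2.0, n_rollouts=200):
    costs = np.array([rollout_cost(gain) for _ in range(n_rollouts)])
    return costs.mean() + 0.5 * theta * costs.var()

# A risk-averse criterion (theta > 0) can rank controllers differently
# than the purely expected-cost criterion (theta = 0).
for g in (1.0, 3.0, 6.0):
    print(g, risk_sensitive_objective(g, theta=0.0), risk_sensitive_objective(g))
```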

Organizers: Ludovic Righetti


  • Stéphane Caron
  • AMD Seminar Room (Paul-Ehrlich-Str. 15, 1st floor)

Humanoid locomotion on horizontal floors was solved by closing the feedback loop on the Zero-tilting Moment Point (ZMP), a measurable dynamic point that needs to stay inside the foot contact area to prevent the robot from falling (contact stability criterion). However, this criterion does not apply to general multi-contact settings, the "new frontier" in humanoid locomotion. In this talk, we will see how the ideas of ZMP and support area can be generalized and applied to multi-contact locomotion. First, we will show how support areas can be calculated in any virtual plane, allowing one to apply classical schemes even when contacts are not coplanar. Yet, these schemes constrain the center of mass (COM) to planar motions. We overcome this limitation by extending the calculation of the contact-stability criterion from a support area to a support cone of 3D COM accelerations. We use this new criterion to implement a multi-contact walking pattern generator based on predictive control of COM accelerations, which we will demonstrate in real-time simulations during the presentation.
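For context, the classical flat-floor criterion the talk starts from can be checked in a few lines: compute the ZMP from a point-mass COM model and test whether it lies inside the foot polygon. The generalized support areas and the 3D acceleration cone from the talk are not reproduced here; all numbers are illustrative.

```python
# Flat-floor ZMP check for a point-mass model: the ZMP must lie inside
# the support polygon. The multi-contact generalization discussed in the
# talk is not reproduced here.
import numpy as np

g = 9.81

def zmp_from_com(com, com_acc):
    """ZMP on the floor plane (z = 0) for a point-mass model."""
    x, y, z = com
    ax, ay, az = com_acc
    denom = az + g
    return np.array([x - z * ax / denom, y - z * ay / denom])

def inside_convex_polygon(point, vertices):
    """Containment test for a convex polygon given in counter-clockwise order."""
    n = len(vertices)
    for i in range(n):
        a, b = np.asarray(vertices[i]), np.asarray(vertices[(i + 1) % n])
        edge, to_point = b - a, point - a
        if edge[0] * to_point[1] - edge[1] * to_point[0] < 0.0:
            return False
    return True

foot = [(-0.1, -0.05), (0.1, -0.05), (0.1, 0.05), (-0.1, 0.05)]  # foot area [m]
zmp = zmp_from_com(com=(0.02, 0.0, 0.8), com_acc=(0.5, 0.0, 0.0))
print(zmp, inside_convex_polygon(zmp, foot))
```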

Organizers: Ludovic Righetti


  • Siyu Tang
  • MRZ Seminar Room (Spemannstr 41)

Understanding people in images and videos is a problem studied intensively in computer vision. While continuous progress has been made, occlusions, cluttered backgrounds, complex poses and a large variety of appearances remain challenging, especially for crowded scenes. In this talk, I will explore the algorithms and tools that enable computers to interpret people's position, motion and articulated poses in challenging real-world images and videos. More specifically, I will discuss an optimization problem whose feasible solutions define a decomposition of a given graph. I will highlight the applications of this problem in computer vision, which range from multi-person tracking [1,2,3] to motion segmentation [4]. I will also cover an extended optimization problem whose feasible solutions define a decomposition of a given graph and a labeling of its nodes, with an application to multi-person pose estimation [5].

References:
[1] Subgraph Decomposition for Multi-Object Tracking; S. Tang, B. Andres, M. Andriluka and B. Schiele; CVPR 2015
[2] Multi-Person Tracking by Multicut and Deep Matching; S. Tang, B. Andres, M. Andriluka and B. Schiele; arXiv 2016
[3] Multi-Person Tracking by Lifted Multicut and Person Re-identification; S. Tang, B. Andres, M. Andriluka and B. Schiele
[4] A Multi-cut Formulation for Joint Segmentation and Tracking of Multiple Objects; M. Keuper, S. Tang, Z. Yu, B. Andres, T. Brox and B. Schiele; arXiv 2016
[5] DeepCut: Joint Subset Partition and Labeling for Multi Person Pose Estimation; L. Pishchulin, E. Insafutdinov, S. Tang, B. Andres, M. Andriluka, P. Gehler and B. Schiele; CVPR 2016
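To make "an optimization problem whose feasible solutions define a decomposition of a given graph" concrete, the sketch below greedily contracts the most attractive edges of a small graph of pairwise detection affinities. It is only a toy heuristic for illustration, not the multicut solvers used in the papers above, and the edge rewards are invented.

```python
# Toy illustration of decomposing a graph from real-valued edge costs:
# a greedy contraction heuristic that merges the endpoints of the most
# "attractive" (positive-reward) edges first. Not the solvers from the
# cited papers; edge rewards are made up.

def greedy_decomposition(n_nodes, edges):
    """edges: list of (u, v, reward); reward > 0 favours keeping u and v
    in the same component, reward < 0 favours cutting the edge."""
    parent = list(range(n_nodes))

    def find(i):                       # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for u, v, reward in sorted(edges, key=lambda e: -e[2]):
        if reward <= 0.0:
            break                      # never merge along repulsive edges
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv            # contract the edge
    return [find(i) for i in range(n_nodes)]

# Detections 0-2 look like the same person, 3-4 like another one.
edges = [(0, 1, 2.0), (1, 2, 1.5), (0, 2, 1.0),
         (2, 3, -3.0), (3, 4, 2.5), (1, 4, -1.0)]
print(greedy_decomposition(5, edges))   # components {0, 1, 2} and {3, 4}
```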

Organizers: Naureen Mahmood


  • Hannes Nickisch, Philips Research, Hamburg
  • MRZ seminar room

Coronary artery disease (CAD) is the single leading cause of death worldwide, and Cardiac Computed Tomography Angiography (CCTA) is a non-invasive test to rule out CAD using the anatomical characterization of the coronary lesions. Recent studies suggest that coronary lesions' hemodynamic significance can be assessed by Fractional Flow Reserve (FFR), which is usually measured invasively in the CathLab but can also be simulated from a patient-specific biophysical model based on CCTA data. We learn a parametric lumped model (LM) enabling fast computational fluid dynamics simulations of blood flow in elongated vessel networks to alleviate the computational burden of 3D finite element (FE) simulations. We adapt the coefficients balancing the local nonlinear hydraulic effects from a training set of precomputed FE simulations. Our LM yields accurate pressure predictions, suggesting that costly FE simulations can be replaced by our fast LM, paving the way to using a personalised interactive biophysical model with real-time feedback in clinical practice.
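A generic lumped-parameter model of pressure drop along a vessel can be sketched as a linear Poiseuille term plus a quadratic loss term per segment. The coefficients below are textbook forms with made-up geometry; the talk's model learns its coefficients from precomputed 3D FE simulations, which is not reproduced here.

```python
# Generic lumped-parameter sketch: delta_p = R*Q + K*Q^2 per vessel segment,
# chained in series. Coefficients and geometry are illustrative only.
import math

MU = 3.5e-3      # blood viscosity [Pa*s]
RHO = 1050.0     # blood density [kg/m^3]

def segment_pressure_drop(flow, length, radius, k_loss=1.0):
    """Pressure drop [Pa] across one segment for volumetric flow [m^3/s]."""
    area = math.pi * radius**2
    r_poiseuille = 8.0 * MU * length / (math.pi * radius**4)   # linear term
    k_quadratic = k_loss * RHO / (2.0 * area**2)               # nonlinear term
    return r_poiseuille * flow + k_quadratic * flow * abs(flow)

def pressure_along_vessel(inlet_pressure, flow, segments):
    """Return the pressure after each segment of a series vessel."""
    pressures, p = [], inlet_pressure
    for length, radius in segments:
        p -= segment_pressure_drop(flow, length, radius)
        pressures.append(p)
    return pressures

# A narrowing vessel: three 2 cm segments with a stenosis in the middle.
segments = [(0.02, 2.0e-3), (0.02, 1.0e-3), (0.02, 2.0e-3)]
print(pressure_along_vessel(inlet_pressure=13_000.0, flow=2.0e-6, segments=segments))
```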


  • Dimitris Tzionas
  • MRZ Seminar Room

Hand motion capture with an RGB-D sensor has recently gained a lot of research attention; however, even the most recent approaches focus on the case of a single isolated hand. We focus instead on hands that interact with other hands or with a rigid or articulated object. Our framework successfully captures motion in such scenarios by combining a generative model with discriminatively trained salient points, collision detection and physics simulation to achieve a low tracking error with physically plausible poses. All components are unified in a single objective function that can be optimized with standard optimization techniques. We initially assume a priori knowledge of the object's shape and skeleton. In the case of unknown object shape, there are existing 3D reconstruction methods that capitalize on distinctive geometric or texture features. These methods, though, fail for textureless and highly symmetric objects like household articles, mechanical parts or toys. We show that extracting 3D hand motion for in-hand scanning effectively facilitates the reconstruction of such objects, and we fuse the rich additional information of hands into a 3D reconstruction pipeline. Finally, although shape reconstruction is enough for rigid objects, there is a lack of tools that build rigged models of articulated objects that deform realistically. We propose a method that creates a fully rigged model consisting of a watertight mesh, embedded skeleton and skinning weights by employing a combination of deformable mesh tracking, motion segmentation based on spectral clustering and skeletonization based on mean curvature flow.
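The phrase "all components are unified in a single objective function" can be pictured as a weighted sum of energy terms handed to an off-the-shelf optimizer. The sketch below uses toy stand-in terms (quadratic data and salient-point terms, a simple collision penalty) and made-up weights; it is not the tracker's actual energy.

```python
# Schematic of a multi-term tracking objective: data term + salient-point
# term + collision penalty, minimized with a standard optimizer. The terms
# are toy stand-ins, not the energies used in the actual tracker.
import numpy as np
from scipy.optimize import minimize

def data_term(pose, observed):
    return np.sum((pose - observed) ** 2)          # fit to depth/point data

def salient_point_term(pose, detections):
    return np.sum((pose[:len(detections)] - detections) ** 2)

def collision_term(pose, min_gap=0.5):
    # Penalize consecutive joints that come closer than min_gap.
    gaps = np.abs(np.diff(pose))
    return np.sum(np.maximum(0.0, min_gap - gaps) ** 2)

def objective(pose, observed, detections, w_salient=0.5, w_collision=2.0):
    return (data_term(pose, observed)
            + w_salient * salient_point_term(pose, detections)
            + w_collision * collision_term(pose))

observed = np.array([0.0, 0.6, 1.3, 2.1])          # toy "measurements"
detections = np.array([0.1, 0.5])                  # toy salient points
result = minimize(objective, x0=np.zeros(4), args=(observed, detections))
print(result.x)
```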

Organizers: Javier Romero


Bipartite Matching and Multi-target Tracking

Talk
  • 22 July 2016 • 12:00 - 12:45
  • Anton Milan
  • MRZ Seminar Room

Matching between two sets arises in various areas in computer vision, such as feature point matching for 3D reconstruction, person re-identification for surveillance or data association for multi-target tracking. Most previous work focused either on designing suitable features and matching cost functions, or on developing faster and more accurate solvers for quadratic or higher-order problems. In the first part of my talk, I will present a strategy for improving state-of-the-art solutions by efficiently computing the marginals of the joint matching probability. The second part of my talk will revolve around our recent work on online multi-target tracking using recurrent neural networks (RNNs). I will mention some fundamental challenges we encountered and present our current solution.
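As background for the first part of the talk, data association in tracking is often posed as a bipartite assignment between existing tracks and new detections and solved with the Hungarian algorithm, as in the sketch below (plain Euclidean costs, made-up coordinates). Computing marginals of the joint matching probability, as discussed in the talk, goes beyond this single hard assignment.

```python
# Minimal bipartite data-association sketch: assign tracks to detections
# by solving a linear assignment problem (Hungarian algorithm).
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

tracks = np.array([[10.0, 12.0], [40.0, 8.0], [25.0, 30.0]])      # predicted positions
detections = np.array([[11.0, 11.0], [24.0, 31.0], [41.0, 7.5]])  # new detections

cost = cdist(tracks, detections)                 # pairwise matching costs
row_ind, col_ind = linear_sum_assignment(cost)   # optimal 1-to-1 assignment
for t, d in zip(row_ind, col_ind):
    print(f"track {t} -> detection {d} (cost {cost[t, d]:.2f})")
```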


Dynamic and Groupwise Statistical Analysis of 3D Faces

Talk
  • 09 June 2016 • 11:00 - 11:45
  • Timo Bolkart
  • MRC seminar room

The accurate reconstruction of facial shape is important for applications such as telepresence and gaming. It can be solved efficiently with the help of statistical shape models that constrain the shape of the reconstruction. In this talk, several methods to statistically analyze static and dynamic 3D face data are discussed. When statistically analyzing faces, various challenges arise from noisy, corrupt, or incomplete data. To overcome the limitations imposed by the poor data quality, we leverage redundancy in the data for shape processing. This is done by processing entire motion sequences in the case of dynamic data, and by jointly processing large databases in a groupwise fashion in the case of static data. First, a fully automatic approach to robustly register and statistically analyze facial motion sequences using a multilinear face model as statistical prior is proposed. Further, a statistical face model is discussed, which consists of many localized, decorrelated multilinear models. The localized and multi-scale nature of this model allows for recovery of fine-scale details while retaining robustness to severe noise and occlusions. Finally, the learning of statistical face models is formulated as a groupwise optimization framework that aims to learn a multilinear model while jointly optimizing the correspondence, or correcting the data.
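To illustrate what a multilinear face model is, the sketch below runs a truncated, HOSVD-style decomposition of a (vertex coordinates x identities x expressions) tensor, so that a face is parameterized by a small identity vector and a small expression vector. The data are random placeholders and the decomposition is the generic textbook construction, not the registration or groupwise learning pipeline from the talk.

```python
# Schematic multilinear (Tucker-style) face model via mode-wise SVDs.
# Random data and sizes stand in for a real registered face database.
import numpy as np

rng = np.random.default_rng(0)
n_verts, n_ids, n_exprs = 3 * 1000, 20, 10      # stacked xyz coordinates
data = rng.standard_normal((n_verts, n_ids, n_exprs))
mean_face = data.mean(axis=(1, 2), keepdims=True)
centered = data - mean_face

def mode_unfold(tensor, mode):
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# Truncated factors for the identity and expression modes.
k_id, k_expr = 8, 4
U_id = np.linalg.svd(mode_unfold(centered, 1), full_matrices=False)[0][:, :k_id]
U_expr = np.linalg.svd(mode_unfold(centered, 2), full_matrices=False)[0][:, :k_expr]

# Core tensor: project the data onto the truncated factors.
core = np.einsum("vie,ij,ek->vjk", centered, U_id, U_expr)

# Reconstruct one face from an identity vector and an expression vector.
w_id, w_expr = U_id[3], U_expr[7]               # coefficients of one sample
face = mean_face[:, 0, 0] + np.einsum("vjk,j,k->v", core, w_id, w_expr)
print(face.shape)
```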