Institute Talks

Learning Dynamics from Kinematics: Estimating Foot Pressure from Video

Talk
  • 12 December 2018 • 10:00 - 11:00
  • Yanxi Liu
  • Aquarium (N3.022)

Human pose stability analysis is the key to understanding locomotion and control of body equilibrium, with numerous applications in the fields of Kinesiology, Medicine and Robotics. We propose and validate a novel approach to learn the dynamics from the kinematics of a human body to aid stability analysis. More specifically, we propose an end-to-end deep learning architecture to regress foot pressure from a human pose derived from video. We have collected and utilized a set of long (5+ minute) choreographed Taiji (Tai Chi) sequences of multiple subjects with synchronized motion capture, foot pressure and video data. The derived human pose data and corresponding foot pressure maps are used jointly to train a convolutional neural network with a residual architecture, named “PressNET”. Cross-validation results show promising performance of PressNET, significantly outperforming the baseline method under reasonable sensor noise ranges.
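As a rough illustration of the kind of network involved, here is a minimal PyTorch sketch of a residual network that regresses a foot pressure map from 2D pose keypoints. The keypoint count, pressure-map resolution and layer sizes are illustrative assumptions, not the published PressNET architecture.

# Minimal sketch: residual regression from 2D pose keypoints to a foot pressure map.
# All dimensions and names below are illustrative assumptions, not the authors' design.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)
        self.act = nn.ReLU()

    def forward(self, x):
        # Skip connection: output = activation(x + F(x))
        return self.act(x + self.fc2(self.act(self.fc1(x))))

class PressureRegressor(nn.Module):
    def __init__(self, num_joints=25, map_size=(60, 21), hidden=256, blocks=4):
        super().__init__()
        self.out_shape = map_size
        self.inp = nn.Linear(num_joints * 2, hidden)                # flattened 2D keypoints
        self.body = nn.Sequential(*[ResidualBlock(hidden) for _ in range(blocks)])
        self.out = nn.Linear(hidden, map_size[0] * map_size[1])     # flattened pressure map

    def forward(self, keypoints):                                   # (B, num_joints*2)
        h = torch.relu(self.inp(keypoints))
        h = self.body(h)
        return self.out(h).view(-1, *self.out_shape)

model = PressureRegressor()
pred = model(torch.randn(8, 50))                                    # batch of 8 poses
loss = nn.functional.mse_loss(pred, torch.rand(8, 60, 21))          # regression loss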

Organizers: Nadine Rueegg

Self-Supervised Representation Learning for Visual Behavior Analysis and Synthesis

Talk
  • 14 December 2018 • 12:00 - 13:00
  • Prof. Dr. Björn Ommer
  • PS Aquarium

Understanding objects and their behavior from images and videos is a difficult inverse problem. It requires learning a metric in image space that reflects object relations in the real world. This metric learning problem calls for large volumes of training data. While images and videos are easily available, labels are not, thus motivating self-supervised metric and representation learning. I will present a widely applicable strategy based on deep reinforcement learning to improve the surrogate tasks underlying self-supervision. Thereafter, the talk will cover the learning of disentangled representations that explicitly separate different object characteristics. Our approach is based on an analysis-by-synthesis paradigm and can generate novel object instances with flexible changes to individual characteristics such as their appearance and pose. It nicely addresses diverse applications in human and animal behavior analysis, a topic on which we collaborate intensively with neuroscientists. Time permitting, I will discuss the disentangling of representations from a wider perspective, including novel strategies for image stylization and new strategies for regularizing the latent space of generator networks.
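To make the metric-learning setting concrete, here is a minimal PyTorch sketch of self-supervised metric learning with a triplet loss, where a surrogate signal (e.g. temporal adjacency in video) supplies positives and negatives. This is a generic illustration under those assumptions, not the specific method presented in the talk.

# Minimal sketch: embedding network trained with a triplet loss on surrogate supervision.
import torch
import torch.nn as nn
import torch.nn.functional as F

embed = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Pull the anchor towards the positive, push it away from the negative.
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# Surrogate supervision example: temporally adjacent frames act as positives,
# frames from other videos as negatives (features here are random placeholders).
a, p, n = (embed(torch.randn(32, 512)) for _ in range(3))
loss = triplet_loss(F.normalize(a), F.normalize(p), F.normalize(n))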

Organizers: Joel Janai

Generating Faces & Heads: Texture, Shape and Beyond.

Talk
  • 17 December 2018 • 11:00 - 12:00
  • Stefanos Zafeiriou
  • PS Aquarium

In the past few years, with the advent of Deep Convolutional Neural Networks (DCNNs) and the availability of large amounts of visual data, it has been shown that excellent results can be produced in very challenging tasks such as visual object recognition, detection and tracking. Nevertheless, in certain tasks such as fine-grained object recognition (e.g., face recognition) it is very difficult to collect the amount of data that is needed. In this talk, I will show how, using DCNNs, we can generate highly realistic faces and heads and use them for training algorithms such as face and facial expression recognition. Next, I will reverse the problem and demonstrate how a very powerful face recognition network, once trained, can be used to perform very accurate 3D shape and texture reconstruction of faces from a single image. Finally, I will demonstrate how to create very lightweight networks for representing 3D face texture and shape structure by capitalising upon intrinsic mesh convolutions.
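As background for the mesh-convolution remark, the sketch below shows a generic graph-convolution layer over mesh vertices in numpy. The intrinsic mesh convolutions referred to in the talk use more specialised operators; this toy example only illustrates the idea of sharing a linear map across vertex neighbourhoods.

# Generic graph-convolution layer over mesh vertices (illustrative only).
import numpy as np

def normalized_adjacency(A):
    # Symmetric normalization D^(-1/2) (A + I) D^(-1/2), as in a basic GCN layer.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer(X, A_norm, W):
    # Aggregate neighbouring vertex features, then apply a shared linear map + ReLU.
    return np.maximum(A_norm @ X @ W, 0.0)

# Toy mesh with 4 vertices: per-vertex features X (e.g. xyz), adjacency A.
A = np.array([[0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 1], [0, 1, 1, 0]], float)
X = np.random.randn(4, 3)
W = np.random.randn(3, 8)                            # 3 -> 8 feature channels
H = gcn_layer(X, normalized_adjacency(A), W)         # (4, 8) per-vertex features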

Organizers: Dimitris Tzionas

Deep learning on 3D face reconstruction, modelling and applications

Talk
  • 19 December 2018 • 11:00 - 12:00
  • Yao Feng
  • PS Aquarium

In this talk, I will present my understanding of 3D face reconstruction, modelling and applications from a deep learning perspective. In the first part, I will discuss the relationship between representations (point clouds, meshes, etc.) and network layers (CNNs, GCNs, etc.) for the face reconstruction task, and then present my ECCV work PRN, which proposed a new representation that helps achieve state-of-the-art performance on face reconstruction and dense alignment. I will also introduce my open-source project face3d, which provides examples for generating different 3D face representations. In the second part, I will discuss publications on integrating 3D techniques into deep networks and introduce my upcoming work that implements this. In the third part, I will present how related tasks can promote each other in deep learning, including face recognition for the face reconstruction task and face reconstruction for the face anti-spoofing task. Finally, building on these three parts, I will present my plans for 3D face modelling and applications.
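For orientation, PRN regresses a UV position map, a 2D image whose three channels store per-pixel 3D coordinates of the face surface. The following is a minimal encoder-decoder sketch of that input-output structure; the layer counts and sizes are illustrative assumptions, not the actual PRN network.

# Minimal sketch: image in, UV position map out (same spatial resolution, 3 channels).
import torch
import torch.nn as nn

class TinyPositionMapNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),            # 256 -> 128
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),           # 128 -> 64
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 128
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),              # 128 -> 256
        )

    def forward(self, img):                      # (B, 3, 256, 256) face image
        return self.decoder(self.encoder(img))   # (B, 3, 256, 256) UV position map

net = TinyPositionMapNet()
uv_pos = net(torch.randn(1, 3, 256, 256))        # each pixel holds an (x, y, z) vertex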

Organizers: Timo Bolkart

Mind Games

IS Colloquium
  • 21 December 2018 • 11:00 - 12:00
  • Peter Dayan
  • IS Lecture Hall

Much existing work in reinforcement learning involves environments that are either intentionally neutral, lacking a role for cooperation and competition, or intentionally simple, where agents need imagine nothing more than that they are playing versions of themselves. Richer game-theoretic notions become important as these constraints are relaxed. For humans, this encompasses issues that concern utility, such as envy and guilt, and that concern inference, such as recursive modeling of other players. I will discuss studies treating a paradigmatic game of trust as an interactive partially-observable Markov decision process, and will illustrate the solution concepts with evidence from interactions between various groups of subjects, including those diagnosed with borderline and anti-social personality disorders.

TBA

IS Colloquium
  • 28 January 2019 • 11:15 - 12:15
  • Florian Marquardt

Organizers: Matthias Bauer

Examples of Machine Learning and Data Science at Facebook

IS Colloquium
  • 11 August 2014 • 11:15 - 12:30
  • Joaquin Quiñonero Candela
  • Max Planck Haus Lecture Hall

Facebook serves close to a billion people every day, who are only able to consume a small subset of the information available to them. In this talk I will give some examples of how machine learning is used to personalize people’s Facebook experience. I will also present some data science experiments with fairly counter-intuitive results.


  • Lourdes Agapito
  • MRZ seminar room

In this talk I will discuss two related problems in 3D reconstruction: (i) recovering the 3D shape of a temporally varying non-rigid 3D surface given a single video sequence and (ii) reconstructing different instances of the same object class category given a large collection of images from that category. In both cases we extract dense 3D shape information by analysing shape variation -- in one case of the same object instance over time and in the other across different instances of objects that belong to the same class.

First I will discuss the problem of dense capture of 3D non-rigid surfaces from a monocular video sequence. We take a purely model-free approach where no strong assumptions are made about the object we are looking at or the way it deforms. We apply low rank and spatial smoothness priors to obtain dense non-rigid models using a variational approach.
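A small numerical illustration of the low-rank prior mentioned above: stacking the per-frame 3D shapes as rows of a matrix and truncating its singular values keeps only a few dominant deformation modes. This step-wise truncation is a simplified, hypothetical stand-in for the prior, not the full variational method of the talk.

# Illustration of a low-rank prior on non-rigid shape: project the stacked shape
# matrix onto the nearest rank-K matrix via a truncated SVD.
import numpy as np

def project_to_low_rank(S, K):
    # S: (F, 3P) matrix, one flattened 3D shape (P points) per frame.
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    s[K:] = 0.0                      # keep only the K dominant deformation modes
    return (U * s) @ Vt

F_frames, P_points, K = 100, 500, 5
S_noisy = np.random.randn(F_frames, 3 * P_points)
S_lowrank = project_to_low_rank(S_noisy, K)
print(np.linalg.matrix_rank(S_lowrank))     # -> 5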

Second I will describe our recent approach to populating the Pascal VOC dataset with dense, per-object 3D reconstructions, bootstrapped from class labels, ground-truth figure-ground segmentations and a small set of keypoint annotations. Our proposed algorithm first estimates camera viewpoint using rigid structure-from-motion, then reconstructs object shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions.


Approximate inference for stochastic differential equations

IS Colloquium
  • 15 July 2014 • 11:15 - 12:15
  • Manfred Opper
  • Max Planck Haus Lecture Hall

Stochastic differential equations (SDEs) arise naturally as descriptions of continuous time dynamical systems. My talk addresses the problem of inferring the dynamical state and parameters of such systems from observations taken at discrete times. I will discuss the application of approximate inference methods such as the variational method and expectation propagation and show how higher dimensional systems can be treated by a mean field approximation. In the second part of my talk I will discuss the nonparametric estimation of the drift (i.e. the deterministic part of the ‘force’ which governs the dynamics) as a function of the state using Gaussian process approaches.
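To fix the setting, the sketch below simulates a simple SDE (an Ornstein-Uhlenbeck process) with the Euler-Maruyama scheme and takes noisy observations at discrete times; the parameter values are arbitrary. The inference methods discussed in the talk would operate on data of this form.

# Simulate dx = -theta*x dt + sigma dW with Euler-Maruyama, observe at discrete times.
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, dt, T = 2.0, 0.5, 0.001, 5.0
n_steps = int(T / dt)

x = np.zeros(n_steps + 1)
for k in range(n_steps):
    # Euler-Maruyama step: deterministic drift plus Brownian increment.
    x[k + 1] = x[k] - theta * x[k] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

obs_every = 100                                   # observe every 0.1 time units
t_obs = np.arange(0, n_steps + 1, obs_every) * dt
y_obs = x[::obs_every] + 0.05 * rng.standard_normal(t_obs.shape)   # noisy observations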

Organizers: Philipp Hennig, Michel Besserve


  • Christian Theobalt
  • Max Planck House Lecture Hall

Even though many challenges remain unsolved, computer graphics algorithms for rendering photo-realistic imagery have seen tremendous progress in recent years. An important prerequisite for high-quality renderings is the availability of good models of the scenes to be rendered, namely models of shape, motion and appearance. Unfortunately, the technology to create such models has not kept pace with the technology to render the imagery. In fact, we observe a content creation bottleneck, as it often takes man-months of tedious manual work by animation artists to craft models of moving virtual scenes.
To overcome this limitation, the research community has been developing techniques to capture models of dynamic scenes from real-world examples, for instance methods that rely on footage recorded with cameras or other sensors. One example is performance capture methods that measure detailed dynamic surface models, for example of actors or an actor's face, from multi-view video and without markers in the scene. Even though such 4D capture methods have made big strides, they are still at an early stage of their development. Their application is limited to scenes of moderate complexity in controlled environments, reconstructed detail is limited, and captured content cannot be easily modified, to name only a few restrictions.
In this talk, I will elaborate on some ideas on how to go beyond this limited scope of 4D reconstruction, and show some results from our recent work. For instance, I will show how we can capture more complex scenes with many objects or subjects in close interaction, as well as very challenging scenes of a smaller scale, such as hand motion. The talk will also show how we can capitalize on more sophisticated light transport models and inverse rendering to enable high-quality reconstruction in much more uncontrolled scenes, eventually also outdoors and with very few cameras. I will also demonstrate how to represent captured scenes such that they can be conveniently modified. If time allows, the talk will cover some of our recent ideas on how to perform advanced edits of videos (e.g. removing or modifying dynamic objects in scenes) by exploiting reconstructed 4D models, as well as robustly found inter- and intra-frame correspondences.

Organizers: Gerard Pons-Moll


Compressive Sensing and Beyond

IS Colloquium
  • 23 June 2014 • 15:00 - 16:15
  • Holger Rauhut
  • Max Planck Haus Lecture Hall

The recent theory of compressive sensing predicts that (approximately) sparse vectors can be recovered from vastly incomplete linear measurements using efficient algorithms. This principle has a large number of potential applications in signal and image processing, machine learning and more. The optimal measurement matrices known so far in this context are based on randomness. Recovery algorithms include convex optimization approaches (l1-minimization) as well as greedy methods. Gaussian and Bernoulli random matrices are provably optimal in the sense that the smallest possible number of samples is required. Such matrices, however, are of limited practical interest because they lack any structure. In fact, applications demand certain structure, so that there is only limited freedom to inject randomness. We present recovery results for various structured random matrices, including random partial Fourier matrices and partial random circulant matrices. We will also review recent extensions of compressive sensing for recovering matrices of low rank from incomplete information via efficient algorithms such as nuclear norm minimization. This principle has recently found applications in phaseless estimation, i.e., in situations where only the magnitude of measurements is available. Another extension considers the recovery of low-rank tensors (multi-dimensional arrays) from incomplete linear information. Several obstacles arise when passing from matrices to tensors, such as the lack of a singular value decomposition that shares all the nice properties of the matrix singular value decomposition. Although only partial theoretical results are available, we discuss algorithmic approaches for this problem.
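As a concrete example of l1-minimization-based recovery, the following numpy sketch recovers a sparse vector from Gaussian random measurements with ISTA (iterative soft-thresholding). The problem sizes and regularization weight are arbitrary choices for illustration.

# Sparse recovery via ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
import numpy as np

rng = np.random.default_rng(1)
n, m, s = 200, 80, 5                     # ambient dimension, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true

lam = 0.01
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the data-fit gradient
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - y)
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)    # soft-thresholding

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))   # relative error (small, up to the l1 bias)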

Organizers: Michel Besserve


  • Brian Corner
  • MRC Seminar Room

A goal in virtual reality is for the user to experience a synthetic environment as if it were real. Engagement with virtual actors is a big part of the sensory context, thus getting the people "right" is critical for success. Size, shape, gender, ethnicity, clothing, color, texture and movement, among other attributes, must be layered and nuanced to provide an accurate encounter between an actor and a user. In this talk, I discuss the development of digital human models and how they may be improved to achieve the high realism needed for successful engagement in a virtual world.


  • Christian Häne
  • MRC-SR

Volumetric 3D modeling has attracted a lot of attention in the past. In this talk I will explain how the standard volumetric formulation can be extended to include semantic information by using a convex multi-label formulation. One of the strengths of our formulation is that it allows us to directly account for the expected surface orientations. I will focus on two applications. Firstly, I will introduce a method that allows for joint volumetric reconstruction and class segmentation. This is achieved by taking into account the expected orientations of object classes such as ground and building. Such a joint approach considerably improves the quality of the geometry while at the same time giving a consistent semantic segmentation. In the second application I will present a method that allows for the reconstruction of challenging objects such as glass bottles. The main difficulty in reconstructing such objects is the texture-less, transparent and reflective areas in the input images. We propose to formulate a shape prior based on the locally expected surface orientation to account for the ambiguous input data. Our multi-label approach also directly enables us to segment the object from its surroundings.


Low-rank dynamics

IS Colloquium
  • 26 May 2014 • 15:15 - 16:30
  • Christian Lubich
  • AGBS seminar room

This talk reviews differential equations on manifolds of matrices or tensors of low rank. They serve to approximate, in a low-rank format, large time-dependent matrices and tensors that are either given explicitly via their increments or are unknown solutions of differential equations. Furthermore, low-rank differential equations are used in novel algorithms for eigenvalue optimisation, for instance in robust-stability problems.
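For intuition, here is a naive numpy "step-then-truncate" scheme: integrate a linear matrix ODE with explicit Euler and project back to rank r after each step. The dynamical low-rank integrators discussed in the talk evolve the low-rank factors directly and never form the full matrix; this sketch only illustrates the goal.

# Naive low-rank integration: Euler step on dA/dt = M A, then rank-r truncation via SVD.
import numpy as np

def truncate(A, r):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(0)
n, r, dt = 50, 4, 0.01
M = rng.standard_normal((n, n)) * 0.1          # fixed generator of the linear dynamics
A = truncate(rng.standard_normal((n, n)), r)   # rank-r initial value

for _ in range(100):
    A = truncate(A + dt * (M @ A), r)          # explicit Euler step, then rank-r truncation

print(np.linalg.matrix_rank(A))                # stays at r (here 4)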

Organizers: Philipp Hennig


Embedded Optimization for Nonlinear Model Predictive Control

IS Colloquium
  • 19 May 2014 • 10:15 - 11:30
  • Prof. Moritz Diehl
  • Max Planck House Lecture Hall

This talk shows how embedded optimization - i.e. autonomous optimization algorithms receiving data, solving problems, and sending answers continuously - is able to address challenging control problems. When nonlinear differential equation models are used to predict and optimize future system behaviour, one speaks of Nonlinear Model Predictive Control (NMPC). The talk presents experimental applications of NMPC to time- and energy-optimal control of mechatronic systems and discusses some of the algorithmic tricks that make NMPC optimization rates of up to 1 MHz possible. Finally, we present one particularly challenging application: tethered flight for airborne wind energy systems.
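To illustrate the receding-horizon idea behind (N)MPC, here is a toy Python sketch using scipy: at each sampling instant a finite-horizon optimal control problem for a double integrator is solved and only the first control is applied. Embedded NMPC solvers exploit problem structure and tailored algorithms to reach the rates mentioned above; the model, horizon and solver here are arbitrary illustrative choices.

# Toy receding-horizon MPC loop for a double integrator, solved with scipy.
import numpy as np
from scipy.optimize import minimize

dt, N = 0.1, 20                                   # sampling time, horizon length

def rollout(x0, u_seq):
    # Double-integrator dynamics: position and velocity driven by acceleration input.
    x = np.array(x0, dtype=float)
    traj = []
    for u in u_seq:
        x = x + dt * np.array([x[1], u])
        traj.append(x.copy())
    return np.array(traj)

def cost(u_seq, x0):
    # Quadratic cost driving the state to the origin with a small input penalty.
    traj = rollout(x0, u_seq)
    return np.sum(traj ** 2) + 0.1 * np.sum(np.asarray(u_seq) ** 2)

x = [1.0, 0.0]
for step in range(30):
    res = minimize(cost, np.zeros(N), args=(x,), method="L-BFGS-B",
                   bounds=[(-1.0, 1.0)] * N)       # input constraints
    u0 = res.x[0]                                  # apply only the first control
    x = list(rollout(x, [u0])[-1])                 # advance the real system one step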

Organizers: Sebastian Trimpe


Towards Lifelong Learning for Visual Scene Understanding

IS Colloquium
  • 12 May 2014 • 11:15
  • Christoph Lampert
  • Max Planck House Lecture Hall

The goal of lifelong visual learning is to develop techniques that continuously and autonomously learn from visual data, potentially for years or decades. During this time the system should build an ever-improving base of generic visual information, and use it as background knowledge and context for solving specific computer vision tasks. In my talk, I will highlight two recent results from our group on the road towards lifelong visual scene understanding: the derivation of theoretical guarantees for lifelong learning systems and the development of practical methods for object categorization based on semantic attributes.

Organizers: Gerard Pons-Moll