Institute Talks

Frederick Eberhardt - TBA

IS Colloquium
  • 03 July 2017 • 11:15 - 12:15
  • Frederick Eberhardt
  • Max Planck House Lecture Hall

Organizers: Sebastian Weichwald

  • Christoph Garbe

Recovering the depth of a scene is important for bridging the gap between the real and the virtual world, but also for tasks such as segmenting objects in cluttered scenes. Very cheap single-view depth imaging cameras, e.g. Time-of-Flight (ToF) cameras or Microsoft's Kinect system, are entering the mass consumer market. In general, the acquired images have a low spatial resolution and suffer from noise as well as technology-specific artifacts. In this talk I will present algorithmic solutions for the entire depth imaging pipeline, ranging from preprocessing to depth image analysis. For enhancing image intensity and depth maps, a higher-order total variation based approach has been developed which yields superior results compared to current state-of-the-art approaches. This performance has been achieved by allowing jumps across object boundaries, computed both from the image gradients and the depth maps. Within objects, the staircasing effects observed in standard total variation approaches are circumvented by higher-order regularization. The 2.5D motion, or range flow, of the observed scenes is computed by a combined global-local approach.
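The higher-order regularization mentioned above extends standard total variation (TV) denoising. As a minimal illustration only (not the talk's actual method, which adds higher-order terms and edge-aware jumps), the sketch below denoises a 1D depth profile with a smoothed first-order TV energy by gradient descent; the function name and all parameter values are illustrative assumptions:

```python
import numpy as np

def tv_denoise_1d(f, lam=0.5, eps=1e-2, step=0.02, iters=2000):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum sqrt((Du)^2 + eps),
    a smoothed first-order total variation energy."""
    u = f.copy()
    for _ in range(iters):
        du = np.diff(u)                    # forward differences (Du)
        w = du / np.sqrt(du ** 2 + eps)    # derivative of the smoothed |.|
        div = np.zeros_like(u)
        div[:-1] -= w                      # apply D^T to w
        div[1:] += w
        u = u - step * ((u - f) + lam * div)
    return u

# noisy step edge: TV removes noise while keeping the jump
rng = np.random.default_rng(0)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
u = tv_denoise_1d(f)
```

A plain first-order term like this preserves the step edge but produces staircasing on sloped regions, which is precisely the artifact the higher-order regularization in the talk is designed to avoid.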

Particularly on Kinect data, the best results were achieved by discarding information at object edges, which are prone to errors due to the data acquisition process. In conjunction with a calibration procedure, this leads to very accurate and robust motion estimation. On the computed range flow data, we have developed the estimation of robust, scale- and rotation-invariant features. These make it feasible to use our algorithms in a novel approach to gesture recognition for human-machine interaction. This step is currently work in progress and I will present very promising first results.

For evaluating the results of our algorithms, we plan to use realistic simulations and renderings. We have made significant advances in analyzing the feasibility of these synthetic test images and data. The bidirectional reflectance distribution functions (BRDFs) of several objects have been measured using a purpose-built “light-dome” setup. This, together with the development of an accurate stereo-acquisition system for measuring 3D objects, lays the groundwork for performing realistic renderings. Additionally, we have started to create a test-image database with ground truth for depth, segmentation and light-field data.

  • Ryusuke Sagawa

3D scanning of moving objects has many applications, for example marker-less motion capture, analysis of fluid dynamics, object explosion, and so on. One approach to acquiring accurate shapes is a projector-camera system; in particular, methods that reconstruct a shape from a single image with a static pattern are suitable for capturing fast-moving objects. In this research, we propose a method that uses a grid pattern consisting of sets of parallel lines. The pattern is spatially encoded by a periodic color pattern. While the information in the camera image is sparse, the proposed method extracts dense (pixel-wise) phase information from the sparse pattern.

As a result, continuous regions in the camera images can be extracted by analyzing the phase. Since one degree of freedom (DOF) remains for each region, we propose a linear solution that eliminates this DOF using geometric information about the devices, i.e. the epipolar constraint. In addition, because the projected pattern consists of parallel lines at equal intervals, the solution space is finite and the linear equation can be efficiently solved by an integer least-squares method.
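The integer least-squares step can be illustrated as a search over a finite set of candidate offsets: each region's absolute phase differs from its wrapped phase by an unknown integer multiple of the pattern period, and the epipolar constraint supplies a predicted phase to fit against. This is a hypothetical sketch of the idea, not the authors' implementation; all names and values are assumptions:

```python
import numpy as np

def solve_region_offset(phi_wrapped, phi_pred, period, k_range):
    """Brute-force integer least squares: pick the integer k so that
    phi_wrapped + k*period best matches the epipolar prediction."""
    best_k, best_cost = None, np.inf
    for k in k_range:
        residual = (phi_wrapped + k * period) - phi_pred
        cost = float(np.sum(residual ** 2))
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# toy region: true absolute phase equals the prediction, wrapped by 3 periods
period = 2.0
phi_pred = np.linspace(0.0, 10.0, 20)      # predicted phase (epipolar constraint)
phi_wrapped = phi_pred - 3 * period + 0.01 * np.sin(phi_pred)  # wrapped + noise
k = solve_region_offset(phi_wrapped, phi_pred, period, range(-5, 6))
```

Because one integer per region suffices, the candidate set stays small and this search is cheap even at high frame rates.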

In the experiments, a scanning system that can capture an object in fast motion has been developed using a high-speed camera. We show sequences of dense shapes of an exploding balloon and of other objects at more than 1000 fps.

  • Brian Amberg

Fitting statistical 2D and 3D shape models to images is necessary for a variety of tasks, such as video editing and face recognition. Much progress has been made on local fitting from an initial guess, but determining a close enough initial guess is still an open problem. One approach is to detect distinct landmarks in the image and initialize the model fit from these correspondences. This is difficult, because detection of landmarks based only on their local appearance is inherently ambiguous, making it necessary to use global shape information to disambiguate the detections. We propose a method to solve the combinatorial problem of selecting, out of a large number of candidate landmark detections, the configuration which is best supported by a shape model.

Our method, as opposed to previous approaches, always finds the globally optimal configuration. The algorithm can be applied to a very general class of shape models and is independent of the underlying feature point detector.
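The combinatorial selection problem can be made concrete with a toy sketch. The method in the talk reaches the global optimum without exhaustive search; the brute-force enumeration below, with hypothetical names and a translation-only alignment for simplicity, merely illustrates the objective being optimized:

```python
import itertools
import numpy as np

def best_configuration(candidates, mean_shape):
    """Exhaustively score every combination of per-landmark candidate
    detections against a mean shape (translation-aligned) and return
    the combination with the lowest squared deviation."""
    target = mean_shape - mean_shape.mean(axis=0)
    best, best_cost = None, np.inf
    for config in itertools.product(*candidates):
        pts = np.asarray(config, dtype=float)
        centred = pts - pts.mean(axis=0)      # translation-invariant
        cost = float(np.sum((centred - target) ** 2))
        if cost < best_cost:
            best, best_cost = config, cost
    return best, best_cost

# two landmarks with two candidate detections each; only one pair is
# geometrically consistent with the mean shape
mean_shape = np.array([[0.0, 0.0], [1.0, 0.0]])
candidates = [[(5.0, 5.0), (0.0, 9.0)], [(6.0, 5.0), (9.0, 0.0)]]
best, cost = best_configuration(candidates, mean_shape)
```

With k candidates for each of n landmarks the enumeration costs k^n evaluations, which is exactly why a globally optimal method that avoids it is valuable.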

  • Prof. Dr. David Fleet

This talk concerns the use of physics-based models for human pose tracking and scene inference. We outline our motivation for physics-based models, some results with monocular pose tracking in terms of biomechanically inspired controllers, and recent results on the inference of scene interactions. We show that physics-based models facilitate the estimation of physically plausible human motion with little or no mocap data required. Scene interactions play an integral role in modeling sources of external forces acting on the body.

  • Larry Davis

In spite of the significant effort that has been devoted to the core problems of object and action recognition in images and videos, the recognition performance of state-of-the-art algorithms is well below what would be required for any successful deployment in many applications. Additionally, there are challenging combinatorial problems associated with constructing globally “optimal” descriptions of images and videos in terms of potentially very large collections of object and action models. The constraints that are utilized in these optimization procedures are loosely referred to as “context.” So, for example, vehicles are generally supported by the ground, so that an estimate of ground plane location parameters in an image constrains positions and apparent sizes of vehicles. Another source of context is the set of everyday spatial and temporal relationships between objects and actions; so, for example, keyboards are typically “on” tables and not “on” cats.

The first part of the talk will discuss how visually grounded models of object appearance and relations between objects can be simultaneously learned from weakly labeled images (images which are linguistically but not spatially annotated – i.e., we are told there is a car in the image, but not where the car is located).

Next, I will discuss how these models can be more efficiently learned using active learning methods. Once these models are acquired, one approach to inferring what objects appear in a new image is to segment the image into pieces, construct a graph based on the regions in the segmentation and the relationships modeled, and then apply probabilistic inference to the graph. However, this typically results in a very dense graph with many “noisy” edges, leading to inefficient and inaccurate inference. I will briefly describe a learning approach that can construct smaller and more informative graphs for inference.

Finally, I will relax the (unreasonable) assumption that one can segment an image into regions that correspond to objects, and describe an approach that can simultaneously construct instances of objects out of collections of connected segments that look like objects, while also softly enforcing contextual constraints.

Organizers: Michel Besserve

  • Ben Sapp

Human pose estimation from monocular images is one of the most challenging and computationally demanding problems in computer vision. Standard models such as Pictorial Structures consider interactions between kinematically-connected joints or limbs, leading to inference quadratic in the number of pixels.

As a result, researchers and practitioners have restricted themselves to simple models which only measure the quality of limb-pair possibilities by their 2D geometric plausibility. In this talk, we propose novel methods which allow for efficient inference in richer models with data-dependent interaction cliques.

First, we introduce structured prediction cascades, a structured analog of binary cascaded classifiers, which learn to focus computational effort where it is needed, filtering out many states cheaply while ensuring the correct output is unfiltered.
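The filtering rule of a structured prediction cascade can be sketched on a simple chain model: compute max-marginals, then prune every (position, state) pair whose max-marginal falls below a convex combination of the best score and the mean max-marginal. The code below is an illustrative sketch with assumed score matrices and names, not the authors' implementation:

```python
import numpy as np

def max_marginals_chain(unary, pairwise):
    """m[i, s] = score of the best full state sequence constrained to
    pass through state s at position i (chain-structured model)."""
    n, k = unary.shape
    fwd = np.zeros((n, k))
    bwd = np.zeros((n, k))
    fwd[0] = unary[0]
    for i in range(1, n):
        fwd[i] = unary[i] + np.max(fwd[i - 1][:, None] + pairwise, axis=0)
    for i in range(n - 2, -1, -1):
        bwd[i] = np.max(pairwise + (unary[i + 1] + bwd[i + 1])[None, :], axis=1)
    return fwd + bwd

def cascade_filter(unary, pairwise, alpha=0.5):
    """Keep a (position, state) pair iff its max-marginal is at least
    alpha * best score + (1 - alpha) * mean max-marginal."""
    m = max_marginals_chain(unary, pairwise)
    thresh = alpha * m.max() + (1 - alpha) * m.mean()
    return m >= thresh

# tiny chain: 3 positions, 2 states, no pairwise preference
unary = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
keep = cascade_filter(unary, np.zeros((2, 2)))
```

Because the best path's states have max-marginal equal to the global optimum, they can never fall below this threshold, which is what makes the filter safe.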

Second, we propose a way to decompose models of human pose with cyclic dependencies into a collection of tree models, and provide novel methods to impose model agreement. These techniques allow for sparse and efficient inference on the order of minutes per image or video clip.

As a result, we can afford to model pairwise interaction potentials much more richly with data-dependent features such as contour continuity, segmentation alignment, color consistency, optical flow and more.

Finally, we apply these techniques to higher-order cliques, extending the idea of poselets to structured models. We show empirically that these richer models are worthwhile, obtaining significantly more accurate pose estimation on popular datasets.

Organizers: Michel Besserve

  • Leonid Sigal

Pose estimation and tracking have long been a focus of computer vision research. Despite many successes, however, most approaches to date are still not able to recover physically realistic (natural looking) 3D motions and are restricted to captures indoors or with simplified backgrounds. In the first part of this talk, I will briefly introduce a class of models that use physics to constrain the motion of the subject to more realistic interpretations.

In particular, we formulate the pose tracking problem as one of inference of control mechanisms which implicitly (through physical simulation) generate the kinematic motion matching the image observations. This formulation of the problem has a number of benefits with respect to more traditional kinematic models. In the second part of the talk, I will describe a new proof-of-concept framework for capturing human motion in outdoor environments where traditional motion capture systems, including marker-less motion systems, would typically be inapplicable.

The proposed system consists of a number of small body-mounted cameras, placed on all major segments of the body, and is capable of recovering the underlying skeletal motion by observing the scene as it changes, within each camera view, with the motion of the subject's body.

Organizers: Michel Besserve

Isometry-Invariant Shape Analysis

  • 26 September 2011
  • Stefanie Wuhrer

Shape analysis aims to describe either a single shape or a population of shapes in an efficient and informative way. This is a key problem in various applications such as mesh deformation and animation, object recognition, and mesh parameterization.

I will present a number of approaches to process shapes that are nearly isometric. The first approach computes the correspondence information between a population of shapes in this setting. Second and third are approaches to morph between two shapes and to segment a population of shapes into near-rigid components. Next, I will present an approach for isometry-invariant shape description and feature extraction.
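One classical ingredient of isometry-invariant description is the matrix of pairwise geodesic distances, which is unchanged by rigid motion and approximately preserved under near-isometric deformation. The sketch below uses graph geodesics via Dijkstra on mesh edges as a simplification of true surface geodesics; it is an illustrative assumption-laden example, not the speaker's algorithm:

```python
import heapq
import numpy as np

def geodesic_matrix(verts, edges):
    """All-pairs graph geodesics (Dijkstra) with Euclidean edge lengths;
    invariant to rigid motion and approximately to isometric bending."""
    n = len(verts)
    adj = [[] for _ in range(n)]
    for i, j in edges:
        w = float(np.linalg.norm(verts[i] - verts[j]))
        adj[i].append((j, w))
        adj[j].append((i, w))
    D = np.full((n, n), np.inf)
    for src in range(n):
        D[src, src] = 0.0
        pq = [(0.0, src)]
        while pq:
            d, node = heapq.heappop(pq)
            if d > D[src, node]:
                continue
            for nbr, w in adj[node]:
                if d + w < D[src, nbr]:
                    D[src, nbr] = d + w
                    heapq.heappush(pq, (d + w, nbr))
    return D

# a unit-square boundary mesh; rotating it rigidly leaves D unchanged
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                  [1.0, 1.0, 0.0], [0.0, 1.0, 0.0]])
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
c, s = np.cos(0.7), np.sin(0.7)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
D = geodesic_matrix(verts, edges)
D_rot = geodesic_matrix(verts @ R.T, edges)
```

Descriptors built from such a matrix compare shapes in different poses, which is the setting of the nearly isometric shape populations discussed above.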

Furthermore, I will present an algorithm to compute the correspondence information between human bodies in varying postures. In addition to being nearly isometric, human body shapes share the same geometric structure, and we can take advantage of this prior geometric information to find accurate correspondences. Finally, I will discuss some applications of shape analysis in computer-aided design.

  • Søren Hauberg

We propose a geometric approach to articulated tracking, where the human pose representation is expressed on the Riemannian manifold of joint positions. This is in contrast to conventional methods where the problem is phrased in terms of intrinsic parameters of the human pose. Our model is based on a physically natural metric that also has strong links to neurological models of human motion planning. Among the benefits of the model are that it allows for easy modeling of interaction with the environment and for data-driven optimization schemes, and that it has well-posed low-pass filtering properties.

To apply the Riemannian model in practice, we derive simulation schemes for Brownian motion on manifolds as well as computationally efficient approximation schemes. The resulting algorithms seem to outperform gold standards both in terms of accuracy and running times.
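A simple way to simulate Brownian-like motion on a manifold, here the unit sphere as a stand-in for the actual joint-position manifold, is to draw Gaussian steps in the tangent space and map them back onto the manifold with the exponential map. This is a minimal sketch under those assumptions, not the authors' simulation scheme:

```python
import numpy as np

def brownian_on_sphere(x0, n_steps=1000, sigma=0.05, seed=0):
    """Tangent-space Gaussian steps followed by the sphere's exponential
    map: a simple simulation of Brownian-like motion on S^2."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    x /= np.linalg.norm(x)
    path = [x.copy()]
    for _ in range(n_steps):
        v = sigma * rng.standard_normal(3)
        v -= np.dot(v, x) * x            # project step onto tangent plane at x
        t = np.linalg.norm(v)
        if t > 0:                        # exponential map of the sphere
            x = np.cos(t) * x + np.sin(t) * (v / t)
        path.append(x.copy())
    return np.array(path)

path = brownian_on_sphere([1.0, 0.0, 0.0])
```

The exponential map keeps every sample exactly on the manifold, which is what lets such simulations respect the constraints that a Euclidean random walk on joint positions would violate.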

Organizers: Michel Besserve

  • Hao Li

A pure refinement procedure for non-rigid registration can be highly effective for establishing dense correspondences between pairs of scanned data, even under significant deformations. I will explain how to design robust non-rigid registration algorithms and why it is important to couple the optimization of correspondence positions, the warping field, and the overlapping regions. I will show several applications, ranging from film/game production to radiation oncology, where this approach has been successfully applied. One particular interest of mine is facial animation. I will present a fully integrated system for real-time facial performance capture and expression transfer, and give a live demo of our latest technology, faceshift.

Organizers: Gerard Pons-Moll