Institute Talks

Building Multi-Family Animal Models

Talk
  • 07 April 2017 • 11:00 - 12:00
  • Silvia Zuffi
  • Aquarium, N.3.022, Spemannstr. 34, third floor

There has been significant prior work on learning realistic, articulated, 3D statistical shape models of the human body. In contrast, there are few such models for animals, despite their many applications in biology, neuroscience, agriculture, and entertainment. The main challenge is that animals are much less cooperative subjects than humans: the best human body models are learned from thousands of 3D scans of people in specific poses, which is infeasible with live animals. In this talk I will illustrate how we extend a state-of-the-art articulated 3D human body model (SMPL) to animals, learning from toys a multi-family shape space that can represent lions, cats, dogs, horses, cows, and hippos. The model's generalization is illustrated by fitting it to images of real animals, where it captures realistic animal shapes even for species not seen in training.
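As a rough sketch of what such a model looks like in code (the array sizes, the random basis, and the per-species prior below are illustrative placeholders, not the actual SMPL/SMAL data), a linear shape space reduces to a mean mesh plus learned per-vertex offset directions:

```python
import numpy as np

n_vertices, n_components = 3889, 20               # illustrative sizes
v_template = np.zeros((n_vertices, 3))            # mean mesh (placeholder)
shape_dirs = 0.01 * np.random.randn(n_vertices, 3, n_components)  # learned basis

def shape_instance(betas):
    """Mesh vertices for shape coefficients betas: mean plus linear offsets."""
    return v_template + shape_dirs @ betas        # (V,3,K) @ (K,) -> (V,3)

# A species "family" can be modelled as a region of the shape space, e.g. a
# Gaussian over betas fitted to the toy scans of that species:
betas_cat = 0.3 * np.random.randn(n_components)   # sample from such a prior
verts = shape_instance(betas_cat)                 # (3889, 3) vertex array
```

Fitting the model to an image then amounts to optimizing betas, together with pose and camera parameters, so that the projected mesh matches the image evidence.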

Frederick Eberhardt - TBA

IS Colloquium
  • 03 July 2017 • 11:15 - 12:15
  • Frederick Eberhardt
  • Max Planck House Lecture Hall

Organizers: Sebastian Weichwald

  • Paul G. Kry
  • MRC seminar room (0.A.03)

In this talk I will give an overview of work I have done over the years exploring physically based simulation of contact, deformation, and articulated structures, where trade-offs can be made between computational speed and physical fidelity. I will also discuss examples that mix data-driven and physically based approaches in animation and control.

Paul Kry is an associate professor in the School of Computer Science at McGill University. He holds a BMath from the University of Waterloo, and an MSc and PhD from the University of British Columbia. His research focuses on physically based simulation, motion capture, and control of character animation.


What is biological motion?

Talk
  • 18 February 2015 • 15:00
  • Nikolaus F. Troje
  • MRC seminar room (0.A.03)

Everyone in visual psychology seems to know what Biological Motion is. Yet, it is not easy to come up with a definition that is specific enough to justify a distinct label, but is also general enough to include the many different experiments to which the term has been applied in the past. I will present a number of tasks, stimuli, and experiments, including some of my own work, to demonstrate the diversity and the appeal of the field of biological motion perception. In trying to come up with a definition of the term, I will particularly focus on a type of motion that has been considered “non-biological” in some contexts, even though it might contain -- as more recent work shows -- one of the most important visual invariants used by the visual system to distinguish animate from inanimate motion.


Reconstructing Complete 3D Models from Single Images

Talk
  • 17 February 2015 • 09:00
  • Vladlen Koltun
  • MRC seminar room (0.A.03)

We present an approach to creating 3D models of objects depicted in Web images, even when each object may only be shown in a single image. Our approach uses a comparatively small collection of existing 3D models to guide the reconstruction process. These existing shapes are used to derive information about shape structure. Our guiding idea is to jointly analyze the images and the available 3D models. Joint analysis of all images along with the available shapes regularizes the formulated optimization problems, stabilizes estimation of camera parameters and construction of dense pixel-level correspondences, and leads to reasonable reproduction of object appearance in the absence of traditional multi-view cues. Joint work with Qixing Huang and Hai Wang.


Reflecting in and on the Gradient Domain

IS Colloquium
  • 16 February 2015 • 10:15
  • Michael Goesele
  • MPH Hall

Image-based rendering was introduced in the 1990s as an alternative approach to photorealistic rendering. Its key idea is to synthesize novel renderings by re-projecting pixels from nearby views. The basic approach works well for many scenes but breaks down if the scene contains “non-standard” elements such as reflective surfaces. In this talk, I will first show how we can extend image-based rendering to handle scenes with reflections. I will then discuss a novel gradient-based technique for image-based rendering that can intrinsically handle scenes with reflections.
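To fix ideas, the toy sketch below shows the classic re-projection step at the heart of image-based rendering; it assumes a known depth map and shared pinhole intrinsics, and illustrates only the basic mechanism, not the speaker's method:

```python
import numpy as np

def reproject(depth, K, T_src_to_tgt):
    """Map every source pixel to its location in a novel target view.

    depth:        (H, W) per-pixel depth of the source view
    K:            (3, 3) shared pinhole intrinsics
    T_src_to_tgt: (4, 4) rigid transform from source to target camera
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T              # unproject to camera rays
    pts = rays * depth.reshape(-1, 1)            # 3D points in source frame
    pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])
    pts_tgt = (pts_h @ T_src_to_tgt.T)[:, :3]    # points in target frame
    proj = pts_tgt @ K.T                         # project into target image
    return proj[:, :2] / proj[:, 2:3]            # pixel coordinates (N, 2)
```

Reflections break exactly this step: the virtual depth of a mirrored object differs from the depth of the reflecting surface, so a single depth map re-projects reflected content incorrectly, which motivates the extensions discussed in the talk.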


Haptics: The Technology of Touch

Talk
  • 26 January 2015 • 10:00 - 11:00
  • Katherine Kuchenbecker
  • Köster Lecture Hall, Stuttgart

When you touch objects in your surroundings, you can discern each item’s physical properties from the rich array of haptic cues you experience, including both the tactile sensations arising in your skin and the kinesthetic cues originating in your muscles and joints. Although physical interaction with the world is at the core of human experience, few computer and machine interfaces provide the operator with high-fidelity touch feedback, limiting their usability. Similarly, autonomous robots rarely take advantage of touch perception and thus struggle to match the manipulation capabilities of humans. This talk will describe several research projects from Professor Kuchenbecker's laboratory, including data-driven haptic texture rendering, vibrotactile feedback of tool vibrations for robotic surgery, and robotic learning of haptic adjectives.

Organizers: Jane Walters


Introduction to the Scenario Approach

IS Colloquium
  • 19 January 2015 • 11:15 - 12:15
  • Marco Claudio Campi
  • MPH Lecture Hall, Tübingen

The scenario approach is a broad methodology to deal with decision-making in an uncertain environment. By resorting to observations, or by sampling uncertainty from a given model, one obtains an optimization problem (the scenario problem), whose solution bears precise probabilistic guarantees in relation to new, unseen, situations. The scenario approach opens up new avenues to address data-based problems in learning, identification, finance, and other fields.

Organizers: Sebastian Trimpe
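
As a toy instance of the methodology (invented for illustration, not taken from the talk), consider a linear program whose constraint coefficients are uncertain; the scenario approach samples N realizations and enforces the constraint for every sample:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, dim = 200, 2                                # number of scenarios, variables
A = rng.normal(1.0, 0.2, size=(N, dim))        # sampled uncertain constraints
b = np.ones(N)
c = -np.ones(dim)                              # maximize sum(x) as min of -sum(x)

# Enforce a_i^T x <= 1 simultaneously for every sampled scenario a_i.
res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * dim)
print(res.x)  # feasible for all N sampled scenarios by construction
```

Scenario theory then bounds the probability that the returned solution violates a new, unseen realization; for a convex problem the bound scales roughly like the number of decision variables divided by N.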


  • Wenzel Jakob
  • MRZ seminar room

Driven by the increasing demand for photorealistic computer-generated images, graphics is currently undergoing a substantial transformation to physics-based approaches which accurately reproduce the interaction of light and matter. Progress on both sides of this transformation -- physical models and simulation techniques -- has been steady but mostly independent of one another. When combined, the resulting methods are in many cases impracticably slow and require unrealistic workarounds to process even simple everyday scenes. My research lies at the interface of these two research fields; my goal is to break down the barriers between simulation techniques and the underlying physical models, and to use the resulting insights to develop realistic methods that remain efficient over a wide range of inputs.

I will cover three areas of recent work: the first involves volumetric modeling approaches to create realistic images of woven and knitted cloth. Next, I will discuss reflectance models for glitter/sparkle effects and arbitrarily layered materials that are specially designed to allow for efficient simulations. In the last part of the talk, I will give an overview of Manifold Exploration, a Markov Chain Monte Carlo technique that is able to reason about the geometric structure of light paths in the high-dimensional configuration spaces defined by the underlying physical models, and which uses this information to compute images more efficiently.
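For orientation only, the snippet below shows the generic Metropolis-Hastings skeleton that Manifold Exploration builds on; the actual technique mutates entire light paths on the specular manifold rather than a scalar toy density like this one:

```python
import numpy as np

rng = np.random.default_rng(1)

def target(x):
    """Unnormalized scalar density standing in for path-space importance."""
    return np.exp(-0.5 * x**2) * (1.0 + 0.5 * np.sin(3.0 * x)) ** 2

x, samples = 0.0, []
for _ in range(10_000):
    x_prop = x + rng.normal(scale=0.5)         # symmetric random-walk proposal
    if rng.random() < min(1.0, target(x_prop) / target(x)):
        x = x_prop                             # accept the mutation
    samples.append(x)                          # else keep the current state
print(np.mean(samples), np.std(samples))
```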


  • Konrad Schindler
  • Max Planck House Lecture Hall

I will present selected research projects of the Photogrammetry and Remote Sensing Group at ETH, including (i) 3D scene flow estimation for stereo video captured from a car; (ii) extraction of road networks from aerial images; and (iii) 3D reconstruction from large, unstructured (e.g. crowd-sourced) image collections.


  • Leonid Sigal
  • MRC Seminar Room

The growing scale of image and video datasets in vision makes labeling and annotating such datasets, for training recognition models, difficult and time consuming. Further, richer models often require richer labelings of the data that are typically even more difficult to obtain. In this talk I will focus on two models that make use of different forms of supervision for two different vision tasks.

In the first part of this talk I will focus on object detection. The appearance of an object changes profoundly with pose, camera view, and interactions of the object with other objects in the scene. This makes it challenging to learn detectors based on object-level labels alone (e.g., “car”). We postulate that having a richer set of labelings for an object, at different levels of granularity, can significantly alleviate these problems: finer-grained sub-categories that are consistent in appearance and view, and higher-order composites, i.e., contextual groupings of objects consistent in their spatial layout and appearance. However, obtaining such a rich set of annotations, including annotation of an exponentially growing set of object groupings, is infeasible. To this end, we propose a weakly-supervised framework for object detection in which we discover the sub-categories and composites automatically, with only traditional object-level category labels as input.

In the second part of the talk I will focus on a framework for large-scale image-set and video summarization. Starting from the intuition that the characteristics of the two media types are different but complementary, we develop a fast and easily parallelizable approach for creating not only video summaries but also novel structural summaries of events in the form of storyline graphs. The storyline graphs can illustrate various events or activities associated with the topic in the form of a branching directed network. The video summarization is achieved by diversity ranking on the similarity graphs between images and video frames, thereby treating consumer images as essentially a form of weak supervision. The reconstruction of storyline graphs, on the other hand, is formulated as inference of sparse time-varying directed graphs from a set of photo streams with the assistance of consumer videos.

Time permitting, I will also talk about a few other recent project highlights.


  • Jonathan Taylor
  • MRZ Seminar Room

I will present a general framework for modelling and recovering 3D shape and pose using subdivision surfaces. To demonstrate this framework's generality, I will show how to recover both a personalized rigged hand model from a sequence of depth images and a blend-shape model of dolphin pose from a collection of 2D dolphin images. The core requirement is the formulation of a generative model in which the control vertices of a smooth subdivision surface are parameterized (e.g., by joint angles or blend weights) through a differentiable deformation function. The energy function that falls out of measuring the deviation between the surface and the observed data is then also differentiable, and can be minimized through standard, albeit tricky, gradient-based non-linear optimization from a reasonable initial guess. The latter can often be obtained using machine learning methods when manual intervention is undesirable. Satisfyingly, the "tricks" involved are elegant and widen the applicability of these methods.
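
The recipe sketched in this abstract (a differentiable deformation of control points, a squared-deviation energy, and gradient-based descent from an initial guess) can be seen in miniature in the toy fit below; the 1D model and its two parameters are invented for illustration and stand in for the talk's subdivision-surface machinery:

```python
import numpy as np
from scipy.optimize import minimize

template = np.linspace(0.0, 1.0, 10)     # rest-pose control points (1D toy)
observed = 0.5 * template + 0.1          # data from unknown scale and offset

def deform(theta):
    scale, offset = theta
    return scale * template + offset     # differentiable deformation function

def energy(theta):
    r = deform(theta) - observed
    return r @ r                         # squared deviation from the data

def grad(theta):
    r = deform(theta) - observed
    return np.array([2.0 * (r @ template), 2.0 * r.sum()])

res = minimize(energy, x0=np.array([1.0, 0.0]), jac=grad, method="BFGS")
print(res.x)                             # ~[0.5, 0.1], from the initial guess
```

In the talk's setting the parameters are joint angles or blend weights and the residual measures surface-to-data deviation, but the optimization structure is the same.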