Institute Talks

Constructing Artificial Characters - Traditional versus Deep Learning Approaches

Talk
  • 27 April 2018 • 16:30 - 17:30
  • JP Lewis
  • PS Aquarium, 3rd floor, north, MPI-IS

The definition of art has been debated for more than 1000 years, and continues to be a puzzle. While scientific investigations offer hope of resolving this puzzle, machine learning classifiers that discriminate art from non-art images generally do not provide an explicit definition, and brain imaging and psychological theories are at present too coarse to provide a formal characterization. In this work, rather than training a machine learning model on existing artworks, we hypothesize that art can be defined in terms of preexisting properties of the visual cortex. Specifically, we propose that a broad subset of visual art can be defined as patterns that are exciting to a visual brain. Resting on the finding that artificial neural networks trained on visual tasks can provide predictive models of processing in the visual cortex, our definition is operationalized by using a trained deep net as a surrogate “visual brain”, where “exciting” is defined as the activation energy of particular layers of this net. We find that this definition easily discriminates a variety of art from non-art, and further provides a ranking of art genres that is consistent with our subjective notion of ‘visually exciting’. By applying a deep net visualization technique, we can also validate the definition by generating example images that would be classified as art. The images synthesized under our definition resemble visually exciting art such as Op Art and other human-created artistic patterns.
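As a rough illustration of the "surrogate visual brain" idea, the sketch below scores an image by the mean squared activation of a few intermediate layers of a pretrained CNN. The choice of network, layers, and the exact energy definition are assumptions made for illustration, not the speaker's implementation.

    import torch
    from torchvision import models, transforms
    from PIL import Image

    # Pretrained CNN used as a stand-in "visual brain" (assumption: VGG-16).
    model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def activation_energy(image_path, layer_ids=(8, 15, 22)):
        """Mean squared activation of selected layers (hypothetical choice)."""
        x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
        energy = 0.0
        with torch.no_grad():
            for i, layer in enumerate(model):
                x = layer(x)
                if i in layer_ids:
                    energy += x.pow(2).mean().item()
        return energy

Under such a definition, images that drive the chosen layers strongly would score as more "art-like" than images that do not.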

Organizers: Michael Black

  • Christian Theobalt
  • Max Planck House Lecture Hall

Even though many challenges remain unsolved, in recent years computer graphics algorithms to render photo-realistic imagery have seen tremendous progress. An important prerequisite for high-quality renderings is the availability of good models of the scenes to be rendered, namely models of shape, motion and appearance. Unfortunately, the technology to create such models has not kept pace with the technology to render the imagery. In fact, we observe a content creation bottleneck, as it often takes man-months of tedious manual work by animation artists to craft models of moving virtual scenes.
To overcome this limitation, the research community has been developing techniques to capture models of dynamic scenes from real world examples, for instance methods that rely on footage recorded with cameras or other sensors. One example is performance capture methods that measure detailed dynamic surface models, for example of actors or an actor's face, from multi-view video and without markers in the scene. Even though such 4D capture methods have made big strides, they are still at an early stage of their development. Their application is limited to scenes of moderate complexity in controlled environments, reconstructed detail is limited, and captured content cannot be easily modified, to name only a few restrictions.
In this talk, I will elaborate on some ideas on how to go beyond this limited scope of 4D reconstruction, and show some results from our recent work. For instance, I will show how we can capture more complex scenes with many objects or subjects in close interaction, as well as very challenging scenes of a smaller scale, such as hand motion. The talk will also show how we can capitalize on more sophisticated light transport models and inverse rendering to enable high-quality reconstruction in much more uncontrolled scenes, eventually also outdoors, and with very few cameras. I will also demonstrate how to represent captured scenes such that they can be conveniently modified. If time allows, the talk will cover some of our recent ideas on how to perform advanced edits of videos (e.g. removing or modifying dynamic objects in scenes) by exploiting reconstructed 4D models, as well as robustly found inter- and intra-frame correspondences.

Organizers: Gerard Pons-Moll


Compressive Sensing and Beyond

IS Colloquium
  • 23 June 2014 • 15:00 - 16:15
  • Holger Rauhut
  • Max Planck Haus Lecture Hall

The recent theory of compressive sensing predicts that (approximately) sparse vectors can be recovered from vastly incomplete linear measurements using efficient algorithms. This principle has a large number of potential applications in signal and image processing, machine learning and more. The optimal measurement matrices known so far in this context are based on randomness. Recovery algorithms include convex optimization approaches (l1-minimization) as well as greedy methods. Gaussian and Bernoulli random matrices are provably optimal in the sense that the smallest possible number of samples is required. Such matrices, however, are of limited practical interest because they lack any structure. In fact, applications demand certain structure, so that there is only limited freedom to inject randomness. We present recovery results for various structured random matrices, including random partial Fourier matrices and partial random circulant matrices. We will also review recent extensions of compressive sensing for recovering matrices of low rank from incomplete information via efficient algorithms such as nuclear norm minimization. This principle has recently found applications in phaseless estimation, i.e., in situations where only the magnitude of measurements is available. Another extension considers the recovery of low-rank tensors (multi-dimensional arrays) from incomplete linear information. Several obstacles arise when passing from matrices to tensors, such as the lack of a singular value decomposition that shares all the nice properties of its matrix counterpart. Although only partial theoretical results are available, we discuss algorithmic approaches for this problem.
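A minimal textbook-style demonstration of the core principle (not taken from the talk): recover an s-sparse vector from m << n Gaussian measurements by l1-minimization, here solved with cvxpy.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n, m, s = 200, 60, 8                    # ambient dim, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

    A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix
    y = A @ x_true                                 # incomplete linear measurements

    # l1-minimization: min ||x||_1  subject to  A x = y
    x = cp.Variable(n)
    cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y]).solve()
    print("recovery error:", np.linalg.norm(x.value - x_true))

With these dimensions the recovery is typically exact up to solver tolerance, which is precisely the phenomenon the theory explains.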

Organizers: Michel Besserve


  • Brian Corner
  • MRC Seminar Room

A goal in virtual reality is for the user to experience a synthetic environment as if it were real. Engagement with virtual actors is a big part of the sensory context, thus getting the people "right" is critical for success. Size, shape, gender, ethnicity, clothing, color, texture, and movement, among other attributes, must be layered and nuanced to provide an accurate encounter between an actor and a user. In this talk, I discuss the development of digital human models and how they may be improved to achieve the high realism needed for successful engagement in a virtual world.


  • Christian Häne
  • MRC-SR

Volumetric 3D modeling has attracted a lot of attention in the past. In this talk I will explain how the standard volumetric formulation can be extended to include semantic information by using a convex multi-label formulation. One of the strengths of our formulation is that it allows us to directly account for the expected surface orientations. I will focus on two applications. First, I will introduce a method that allows for joint volumetric reconstruction and class segmentation. This is achieved by taking into account the expected orientations of object classes such as ground and building. Such a joint approach considerably improves the quality of the geometry, while at the same time it gives a consistent semantic segmentation. In the second application I will present a method that allows for the reconstruction of challenging objects such as glass bottles. The main difficulty in reconstructing such objects lies in the texture-less, transparent and reflective areas in the input images. We propose to formulate a shape prior based on the locally expected surface orientation to account for the ambiguous input data. Our multi-label approach also directly enables us to segment the object from its surroundings.
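Schematically, convex multi-label volumetric problems of this flavor take the following form (a generic relaxation for illustration; notation and details differ from the speaker's exact formulation):

    \min_{x}\ \sum_{l=1}^{L} \int_{\Omega} \rho_l(v)\, x_l(v)\, dv
        \;+\; \sum_{l=1}^{L} \int_{\Omega} \phi_l\big(\nabla x_l(v)\big)\, dv
    \quad \text{s.t.} \quad x_l(v) \ge 0,\ \ \sum_{l=1}^{L} x_l(v) = 1,

where x_l is a relaxed indicator function for label l (e.g. free space, ground, building), rho_l a per-voxel data cost, and phi_l a convex penalty on label transitions; making phi_l direction-dependent (anisotropic) is what lets such a model prefer, say, upward-facing surface normals at ground-air transitions.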


Low-rank dynamics

IS Colloquium
  • 26 May 2014 • 15:15 - 16:30
  • Christian Lubich
  • AGBS seminar room

This talk reviews differential equations on manifolds of matrices or tensors of low rank. They serve to approximate, in a low-rank format, large time-dependent matrices and tensors that are either given explicitly via their increments or are unknown solutions of differential equations. Furthermore, low-rank differential equations are used in novel algorithms for eigenvalue optimisation, for instance in robust-stability problems.
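For the matrix case, a standard formulation of dynamical low-rank approximation (in notation that may differ from the talk) constrains the approximation Y(t) = U(t) S(t) V(t)^T to the manifold of rank-r matrices and projects the dynamics onto its tangent space:

    \dot{Y}(t) = P_{Y(t)}\big(\dot{A}(t)\big),
    \qquad
    \dot{U} = (I - UU^\top)\, \dot{A}\, V S^{-1}, \quad
    \dot{S} = U^\top \dot{A}\, V, \quad
    \dot{V} = (I - VV^\top)\, \dot{A}^\top U S^{-\top},

where A(t) is the large time-dependent matrix to be approximated (or \dot{A} is replaced by the right-hand side of the underlying differential equation) and P_Y denotes the orthogonal projection onto the tangent space of the rank-r manifold at Y.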

Organizers: Philipp Hennig


Embedded Optimization for Nonlinear Model Predictive Control

IS Colloquium
  • 19 May 2014 • 10:15 - 11:30
  • Prof. Moritz Diehl
  • Max Planck House Lecture Hall

This talk shows how embedded optimization - i.e., autonomous optimization algorithms receiving data, solving problems, and sending answers continuously - is able to address challenging control problems. When nonlinear differential equation models are used to predict and optimize future system behaviour, one speaks of Nonlinear Model Predictive Control (NMPC). The talk presents experimental applications of NMPC to time- and energy-optimal control of mechatronic systems and discusses some of the algorithmic tricks that make NMPC optimization rates up to 1 MHz possible. Finally, we present one particularly challenging application: tethered flight for airborne wind energy systems.
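A toy receding-horizon loop conveys the basic NMPC pattern: solve a finite-horizon optimal control problem, apply the first control, shift, repeat. The double-integrator model, costs, and general-purpose solver below are illustrative stand-ins; embedded NMPC relies on tailored real-time solvers, not a generic optimizer.

    import numpy as np
    from scipy.optimize import minimize

    dt, N = 0.1, 20                        # step size, horizon length

    def rollout(x0, u):
        """Simulate x_{k+1} = [p + v*dt, v + u_k*dt] over the horizon."""
        xs, x = [], np.array(x0, float)
        for uk in u:
            x = np.array([x[0] + x[1] * dt, x[1] + uk * dt])
            xs.append(x)
        return np.array(xs)

    def cost(u, x0):
        xs = rollout(x0, u)
        return np.sum(xs ** 2) + 0.1 * np.sum(u ** 2)   # drive state to origin

    x = np.array([2.0, 0.0])
    for _ in range(50):                    # closed-loop simulation
        res = minimize(cost, np.zeros(N), args=(x,),
                       bounds=[(-1.0, 1.0)] * N)        # input constraints
        u0 = res.x[0]                      # apply only the first control
        x = np.array([x[0] + x[1] * dt, x[1] + u0 * dt])
    print("final state:", x)

The "embedded" challenge the talk addresses is making each iteration of such a loop fast and reliable enough to run autonomously on the control hardware.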

Organizers: Sebastian Trimpe


Towards Lifelong Learning for Visual Scene Understanding

IS Colloquium
  • 12 May 2014 • 11:15
  • Christoph Lampert
  • Max Planck House Lecture Hall

The goal of lifelong visual learning is to develop techniques that continuously and autonomously learn from visual data, potentially for years or decades. During this time the system should build an ever-improving base of generic visual information, and use it as background knowledge and context for solving specific computer vision tasks. In my talk, I will highlight two recent results from our group on the road towards lifelong visual scene understanding: the derivation of theoretical guarantees for lifelong learning systems and the development of practical methods for object categorization based on semantic attributes.

Organizers: Gerard Pons-Moll


  • Nikolaus Troje
  • MRC Seminar room (0.A.03)

Point-light walkers and stick figures rendered orthographically and without self-occlusion do not contain any information as to their depth. For instance, a frontoparallel projection could depict a walker from the front or from the back. Nevertheless, observers show a strong bias towards seeing the walker as facing the viewer. A related stimulus, the silhouette of a human figure, does not seem to show such a bias. We develop these observations into a tool to study the cause of the facing-the-viewer bias observed for biological motion displays.

I will give a short overview of existing theories with respect to the facing-the-viewer bias, and of a number of findings that seem hard to explain with any single one of them. I will then present the results of our studies on both stick figures and silhouettes, which gave rise to a new theory about the facing-the-viewer bias, and I will eventually present an experiment that tests a hypothesis resulting from it. The studies are discussed in the context of one of the most general problems the visual system has to solve: How do we disambiguate an initially ambiguous sensory world and eventually arrive at the perception of a stable, predictable "reality"?


Video Segmentation

IS Colloquium
  • 05 May 2014 • 09:15
  • Thomas Brox
  • Max Planck House Lecture Hall

Compared to static image segmentation, video segmentation is still in its infancy. Various research groups have different tasks in mind when they talk of video segmentation: for some it is motion segmentation, some think of an over-segmentation with thousands of regions per video, and others understand video segmentation as contour tracking. I will go through what I think are reasonable video segmentation subtasks and will touch on the issue of benchmarking. I will also discuss the difference between image and video segmentation. Due to the availability of motion and the redundancy of successive frames, video segmentation should actually be easier than image segmentation. However, recent evidence indicates the opposite: at least at the level of superpixel segmentation, image segmentation methodology is more advanced than what can be found in the video segmentation literature.
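To make the superpixel comparison concrete, here is a naive per-frame over-segmentation baseline (illustrative only): SLIC superpixels computed independently on each frame, ignoring exactly the motion cues and temporal redundancy that video segmentation should exploit.

    import numpy as np
    from skimage.segmentation import slic

    def oversegment_video(frames, n_segments=1000):
        """frames: iterable of HxWx3 arrays; returns per-frame label maps."""
        return [slic(f, n_segments=n_segments, compactness=10.0) for f in frames]

    frames = [np.random.rand(120, 160, 3) for _ in range(5)]  # stand-in frames
    labels = oversegment_video(frames)
    print(labels[0].shape, labels[0].max() + 1, "segments in first frame")

Because the labels carry no temporal consistency from frame to frame, beating this baseline is a minimal requirement for a genuine video segmentation method.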

Organizers: Gerard Pons-Moll


  • Cordelia Schmid
  • MRC seminar room (0.A.03)

In the first part of our talk, we present an approach for large displacement optical flow. Optical flow computation is a key component in many computer vision systems designed for tasks such as action detection or activity recognition. Inspired by the large displacement optical flow of Brox and Malik, our approach DeepFlow combines a novel matching algorithm with a variational approach. Our matching algorithm builds upon a multi-stage architecture interleaving convolutions and max-pooling. DeepFlow efficiently handles the large displacements occurring in realistic videos, and shows competitive performance on optical flow benchmarks.
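Variational optical flow in the Brox-Malik tradition minimizes an energy of roughly the following shape; the matching term in the last line is a schematic stand-in for how DeepFlow-style matches can guide the flow (notation and weights are illustrative, not the paper's exact formulation):

    E(\mathbf{w}) = \int_{\Omega} \Psi\big(|I_2(\mathbf{x}+\mathbf{w}) - I_1(\mathbf{x})|^2\big)
        + \gamma\, \Psi\big(|\nabla I_2(\mathbf{x}+\mathbf{w}) - \nabla I_1(\mathbf{x})|^2\big)
        + \alpha\, \Psi\big(|\nabla \mathbf{w}|^2\big)\, d\mathbf{x}
        + \beta \int_{\Omega} c(\mathbf{x})\, \|\mathbf{w}(\mathbf{x}) - \mathbf{w}_{\text{match}}(\mathbf{x})\|^2\, d\mathbf{x},

where w = (u, v) is the flow between images I_1 and I_2, Psi is a robust penalty, and the final term pulls the solution towards the precomputed matches w_match wherever the match confidence c(x) is high.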

In the second part of our talk, we present a state-of-the-art approach for action recognition based on motion-stabilized trajectory descriptors and a Fisher vector representation. We briefly review the recent trajectory-based video features and then introduce their motion-stabilized version, combining human detection and dominant motion estimation. Fisher vectors summarize the information of a video efficiently. Results on several of the recent action datasets as well as the TrecVid MED dataset show that our approach outperforms the state of the art.
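For reference, the Fisher vector aggregation step can be sketched as follows: given a GMM fitted to local trajectory descriptors, the encoding stacks the gradients of the log-likelihood with respect to the Gaussian means and variances. The code below is a generic sketch on made-up data, not the authors' pipeline; the commonly applied power and l2 normalizations are omitted.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fisher_vector(X, gmm):
        """X: (N, D) local descriptors; gmm: fitted diagonal GaussianMixture."""
        N = X.shape[0]
        gamma = gmm.predict_proba(X)                  # (N, K) soft assignments
        mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_
        diff = (X[:, None, :] - mu[None]) / np.sqrt(var)[None]     # (N, K, D)
        g_mu = np.einsum('nk,nkd->kd', gamma, diff) / (N * np.sqrt(w)[:, None])
        g_var = np.einsum('nk,nkd->kd', gamma, diff**2 - 1) / (N * np.sqrt(2 * w)[:, None])
        return np.hstack([g_mu.ravel(), g_var.ravel()])            # (2*K*D,)

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 16))                # stand-in descriptors
    gmm = GaussianMixture(n_components=8, covariance_type='diag').fit(X)
    print(fisher_vector(X, gmm).shape)

The resulting fixed-length vector summarizes an arbitrary number of local descriptors, which is what makes it convenient for classifying whole videos.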