Institute Talks

Frederick Eberhardt - TBA

IS Colloquium
  • 03 July 2017 • 11:15 - 12:15
  • Frederick Eberhardt
  • Max Planck House Lecture Hall

Organizer: Sebastian Weichwald

  • Edward H. Adelson

Humans can easily see 3D shape from a single 2D image, exploiting multiple kinds of information. This has given rise to multiple subfields (in both human vision and computer vision) devoted to the study of shape-from-shading, shape-from-texture, shape-from-contours, and so on.

The proposed algorithms for each type of shape-from-x remain specialized and fragile (in contrast with the flexibility and robustness of human vision). Recent work in graphics and psychophysics has demonstrated the importance of local orientation structure in conveying 3D shape. This information is fairly stable and reliable, even when a given shape is rendered in multiple styles (including non-photorealistic styles such as line drawings).

We have developed an exemplar-based system (which we call Shape Collage) that learns to associate image patches with corresponding 3D shape patches. We train it with synthetic images of “blobby” objects rendered in various ways, including solid texture, Phong shading, and line drawings. Given a new image, it finds the best candidate scene patches and assembles them into a coherent interpretation of the object shape.

Our system is the first that can retrieve the shape of naturalistic objects from line drawings. The same system, without modification, works for shape-from-texture and can also recover shape from shading, even with non-Lambertian surfaces. Thus disparate types of image information can be processed by a single mechanism to extract 3D shape. This is collaborative work with Forrester Cole, Phillip Isola, Fredo Durand, and William Freeman.
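To make the exemplar-based idea concrete, here is a minimal sketch of the lookup step at the core of a Shape Collage-style system: a database pairing rendered image patches with their 3D shape patches is collected from training data, and a query patch retrieves its best-matching shape patch. The function names and the raw-pixel nearest-neighbour matching are illustrative simplifications; the actual system uses learned descriptors and a further step that assembles the retrieved candidates into a coherent shape.

```python
# Hypothetical sketch of exemplar-based shape lookup. Raw-pixel matching
# stands in for learned descriptors; coherent assembly is omitted.
import numpy as np

def build_exemplar_db(train_images, train_shapes, patch=16, stride=8):
    """Collect (image patch, shape patch) pairs from rendered training data."""
    keys, values = [], []
    for img, shape in zip(train_images, train_shapes):
        h, w = img.shape
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                keys.append(img[y:y + patch, x:x + patch].ravel())
                values.append(shape[y:y + patch, x:x + patch])
    return np.array(keys, dtype=float), values

def best_shape_patch(query_patch, keys, values):
    """Return the shape patch whose image patch best matches the query."""
    dists = np.linalg.norm(keys - query_patch.ravel(), axis=1)
    return values[int(np.argmin(dists))]
```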


  • E.J. Chichilnisky

A central aspect of visual processing in the retina is the existence of nonlinear subunits within the receptive fields of retinal ganglion cells. These subunits have been implicated in visual computations such as segregation of object motion from background motion. However, relatively little is known about the spatial structure of subunits and its emergence from nonlinear interactions in the interneuron circuitry of the retina.

We used physiological measurements of functional circuitry in the isolated primate retina at single-cell resolution, combined with novel computational approaches, to explore the neural computations that produce subunits. Preliminary results suggest that these computations can be understood in terms of convergence of photoreceptor signals via specific types of interneurons to ganglion cells.
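As a rough illustration of the kind of model such measurements constrain, the sketch below implements a generic subunit (LN-LN cascade) model: photoreceptor-driven signals are filtered by several spatial subunits, rectified, and pooled by the ganglion cell through an output nonlinearity. All filters, weights, and nonlinearities here are placeholders, not the fitted quantities from this work.

```python
# Illustrative subunit (LN-LN cascade) model of a ganglion cell.
import numpy as np

def subunit_response(stimulus, subunit_filters, pool_weights, gain=1.0):
    """stimulus: (n_pixels,); subunit_filters: (n_subunits, n_pixels);
    pool_weights: (n_subunits,). Returns a scalar firing-rate proxy."""
    drive = subunit_filters @ stimulus        # linear filtering per subunit
    rectified = np.maximum(drive, 0.0)        # subunit nonlinearity
    pooled = pool_weights @ rectified         # convergence onto the cell
    return gain * np.log1p(np.exp(pooled))    # soft output nonlinearity
```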


  • Ruth Rosenholtz

Considerable research has demonstrated that the visual system's representation is not equally faithful throughout the visual field; it appears to be coarser in peripheral vision, perhaps as a strategy for dealing with an information bottleneck in visual processing. In the last few years, a convergence of evidence has suggested that in peripheral and unattended regions, the information available consists of summary statistics.

For a complex set of statistics, such a representation can provide a rich and detailed percept of many aspects of a visual scene. However, such a representation is also lossy; we would expect the inherent ambiguities and confusions to have profound implications for vision.

For example, a complex pattern, viewed peripherally, might be poorly represented by its summary statistics, leading to the degraded recognition experienced under conditions of visual crowding. Difficult visual search might occur when summary statistics cannot adequately discriminate between target-present and distractor-only patches of the stimuli. Certain illusory percepts might arise from valid interpretations of the available, lossy information. It is precisely the visual tasks on which a statistical representation has a significant impact that provide the evidence for such a representation in early vision. I will summarize recent evidence, drawn from such tasks, that early vision computes summary statistics.
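To illustrate what a summary-statistic representation looks like in its simplest form, the toy sketch below reduces an image patch to a few luminance moments and a gradient-weighted orientation histogram. Real models use a far richer statistic set (e.g., Portilla-Simoncelli texture statistics); the point here is only that very different patches can map to the same statistics, which is the lossiness discussed above.

```python
# Toy summary-statistic encoding of a patch: luminance moments plus a
# gradient-weighted orientation histogram. Deliberately lossy.
import numpy as np

def patch_summary_stats(patch, n_orient_bins=8):
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi          # orientation, not direction
    hist, _ = np.histogram(ang, bins=n_orient_bins,
                           range=(0.0, np.pi), weights=mag)
    return np.concatenate([[patch.mean(), patch.std()],
                           hist / (hist.sum() + 1e-9)])
```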



  • Martin Giese

Human body movements are highly complex spatio-temporal patterns, and their control and recognition represent challenging problems for technical as well as neural systems. The talk will present an overview of recent work by our group that exploits biologically-inspired, learning-based representations for the recognition and synthesis of body motion.

The first part of the talk will present a neural theory for the visual processing of goal-directed actions, which reproduces, and in part correctly predicts, electrophysiological results from action-selective neurons in monkey cortex. In particular, we show that the same neural circuits might account for the recognition of natural and abstract action stimuli.

The second part of the talk discusses different techniques for learning structured, online-capable synthesis models of complex body movements. One approach is based on learning kinematic primitives by anechoic demixing, and on generating such primitives with networks of canonical dynamical systems.
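For readers unfamiliar with anechoic demixing, the generative model it inverts can be stated compactly: each joint-angle trajectory is a weighted sum of source primitives, each shifted by a joint- and source-specific time delay. The sketch below implements only this forward (mixing) direction; integer sample delays and circular shifts are simplifying assumptions.

```python
# Forward (mixing) direction of the anechoic model; learning the
# primitives corresponds to inverting this generative process.
import numpy as np

def anechoic_mix(sources, weights, delays):
    """sources: (n_src, T); weights: (n_joints, n_src) floats;
    delays: (n_joints, n_src) integer sample delays."""
    n_joints, n_src = weights.shape
    trajectories = np.zeros((n_joints, sources.shape[1]))
    for i in range(n_joints):
        for j in range(n_src):
            trajectories[i] += weights[i, j] * np.roll(sources[j], delays[i, j])
    return trajectories
```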

An approach for designing stable overall dynamics for such nonlinear networks is discussed. The second approach learns hierarchical models of interactive movements, combining Gaussian Process Latent Variable Models and Gaussian Process Dynamical Models, and results in animations that pass the Turing test of computer graphics. The presented work was funded by the DFG and the EC FP7 projects SEARISE, TANGO, and AMARSI.


  • Ronen Basri

Variations in lighting can have a significant effect on the appearance of an object. Modeling these variations is important for object recognition and shape reconstruction, particularly of smooth, textureless objects. The past decade has seen significant progress in handling Lambertian objects. In that context, I will present our work on using harmonic representations to describe the reflectance of Lambertian objects under complex lighting configurations, and their application to photometric stereo and prior-assisted shape from shading. In addition, I will present preliminary results on handling specular objects and methods for dealing with moving objects.
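As background for the harmonic representation, the sketch below builds the classic nine-dimensional spherical harmonic basis for a convex Lambertian object from its surface normals and albedo (Basri and Jacobs): images of the object under arbitrary distant lighting lie close to the span of these nine basis images. Normalization constants are folded into the lighting coefficients, and the least-squares fit in the final comment is a simplified stand-in for the applications discussed in the talk.

```python
# Nine spherical-harmonic basis images (up to second order) of a convex
# Lambertian object, built from per-pixel normals and albedo.
import numpy as np

def harmonic_basis(normals, albedo):
    """normals: (N, 3) unit vectors; albedo: (N,). Returns an (N, 9) basis."""
    x, y, z = normals.T
    B = np.stack([np.ones_like(x),            # l = 0
                  x, y, z,                    # l = 1
                  x * y, x * z, y * z,        # l = 2
                  x**2 - y**2,
                  3.0 * z**2 - 1.0], axis=1)
    return albedo[:, None] * B

# Recover lighting coefficients for an observed image (N,) by least squares:
# light, *_ = np.linalg.lstsq(harmonic_basis(normals, albedo), image, rcond=None)
```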


  • Carlos Vargas-Irwin

Dimensionality reduction applied to neural ensemble data has led to the concept of a 'neural trajectory': a low-dimensional representation of how the state of the network evolves over time. Here we present a novel neural trajectory extraction algorithm that combines spike train distance metrics (Victor and Purpura, 1996) with dimensionality reduction based on local neighborhood statistics (van der Maaten and Hinton, 2008). We apply this technique to describe and quantify the activity of primate ventral premotor cortex neuronal ensembles in the context of a cued reaching and grasping task with instructed delay.
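A minimal sketch of the two ingredients named above: the Victor-Purpura spike train distance, computed by dynamic programming over spike insertions, deletions, and time shifts, followed by t-SNE on the precomputed distance matrix. Treating each time window as a single spike train is a deliberate simplification of the ensemble setting, and all parameter values are placeholders.

```python
# Victor-Purpura distance + t-SNE embedding of pairwise distances.
import numpy as np
from sklearn.manifold import TSNE

def victor_purpura(s1, s2, q=1.0):
    """Edit distance between spike-time arrays s1, s2; shifting a spike by
    dt costs q * dt, inserting or deleting a spike costs 1."""
    n, m = len(s1), len(s2)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1)
    D[0, :] = np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i - 1, j] + 1.0,
                          D[i, j - 1] + 1.0,
                          D[i - 1, j - 1] + q * abs(s1[i - 1] - s2[j - 1]))
    return D[n, m]

def neural_trajectory(spike_trains, q=1.0):
    """Embed a sequence of time windows (one spike-time array per window)
    into 2D; note t-SNE perplexity must be below the number of windows."""
    k = len(spike_trains)
    D = np.zeros((k, k))
    for a in range(k):
        for b in range(a + 1, k):
            D[a, b] = D[b, a] = victor_purpura(spike_trains[a], spike_trains[b], q)
    return TSNE(n_components=2, metric="precomputed", init="random").fit_transform(D)
```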


  • Martin Butz

Humans interact with their environment in a highly flexible manner. One important component for the successful control of such flexible interactions is an internal body model. To maintain a consistent internal body model, the brain appears to continuously and probabilistically integrate multiple sources of information, including various sensory modalities but also anticipatory, re-afferent information about current body motion. A modular, multimodal arm model (MMM) is presented.

The model represents a seven-degree-of-freedom arm in several interacting modality frames, which distinguish proprioceptive, limb-relative orientation, head-relative orientation, and head-relative location information. Each limb of the arm is represented separately but highly interactively in each of these modality frames. Incoming sensory and motor feedback information is continuously exchanged in a rigorous, probabilistic fashion, while a consistent overall arm model is maintained through these local interactions.
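A minimal sketch of the probabilistic exchange described above, under the simplifying assumption of independent Gaussian estimates: two modality frames that estimate the same limb quantity are fused by precision weighting, so the more reliable cue dominates. The real MMM maintains such couplings across many frames and limb segments simultaneously.

```python
# Precision-weighted fusion of two Gaussian estimates of the same quantity.
def fuse_gaussians(mu_a, var_a, mu_b, var_b):
    prec_a, prec_b = 1.0 / var_a, 1.0 / var_b
    var = 1.0 / (prec_a + prec_b)
    mu = var * (prec_a * mu_a + prec_b * mu_b)
    return mu, var

# Hypothetical example: proprioception reports an elbow angle of 0.9 rad
# (variance 0.04), vision reports 1.0 rad (variance 0.01); the fused
# estimate is pulled toward the more precise visual cue.
mu, var = fuse_gaussians(0.9, 0.04, 1.0, 0.01)   # mu = 0.98, var = 0.008
```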

The model is able to automatically identify sensory failures and sensory noise. Moreover, it is able to mimic the rubber hand illusion. Currently, we are endowing the model with neural representations for each modality frame to play out its full potential for planning and goal-directed control.


  • Cordelia Schmid

The amount of digital video content available is growing daily, on sites such as YouTube. Recent statistics on the YouTube website show that around 48 hours of video are uploaded every minute. This massive data production calls for automatic analysis.

In this talk we present some recent results for action recognition in videos. Bag-of-features representations have shown very good performance for action recognition in videos. We briefly review the underlying principles and introduce trajectory-based video features, which have been shown to outperform the state of the art. These trajectory features are obtained by dense point sampling and tracking based on displacement information from a dense optical flow field. Trajectory descriptors are obtained with motion boundary histograms, which are robust to camera motion. We then show how to integrate temporal structure into a bag-of-features representation with an actom sequence model. Actom sequence models localize actions based on sequences of atomic actions, i.e., they represent the temporal structure by sequences of histograms of actom-anchored visual features. This representation is flexible, sparse, and discriminative. The resulting actom sequence model is shown to significantly improve performance over existing methods for temporal action localization.
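As a rough sketch of the trajectory extraction step (not the authors' implementation), the code below samples points on a dense grid and propagates each point through the video using a dense optical flow field, here OpenCV's Farneback estimator; descriptors such as motion boundary histograms would then be computed along the resulting tracks.

```python
# Dense-trajectory sketch: grid sampling plus flow-based point tracking.
import cv2
import numpy as np

def dense_trajectories(frames, step=8, length=15):
    """frames: list of grayscale uint8 images; returns one (<=length, 2)
    array of (x, y) positions per sampled grid point."""
    h, w = frames[0].shape
    pts = np.mgrid[step // 2:h:step, step // 2:w:step] \
            .reshape(2, -1).T[:, ::-1].astype(float)   # (N, 2) as (x, y)
    tracks = [[p.copy()] for p in pts]
    for f0, f1 in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(f0, f1, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        for tr in tracks:
            x, y = tr[-1]
            xi = int(np.clip(x, 0, w - 1))
            yi = int(np.clip(y, 0, h - 1))
            dx, dy = flow[yi, xi]                       # flow at the point
            tr.append(np.array([x + dx, y + dy]))
    return [np.array(tr[:length]) for tr in tracks]
```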

Finally, we show how to move towards more structured representations by explicitly modeling human-object interactions. We learn to represent human actions as interactions between persons and objects: we localize in space and track over time both the object and the person, and represent an action as the relative trajectory of the object with respect to the person. This is joint work with A. Gaidon, V. Ferrari, Z. Harchaoui, A. Klaeser, A. Prest, and H. Wang.


Information-driven Surveillance

Talk
  • 15 March 2012
  • Eric Sommerlade

The surveillance of public spaces aims at multiple objectives, such as early acquisition of targets and their identification and pursuit throughout the supervised area. To achieve these, typical sensors such as pan-tilt-zoom cameras need to either focus on individuals or provide a broad field of view; these are conflicting control settings. We address this problem in an information-theoretic manner: phrasing each of the objectives in terms of mutual information makes them comparable. The problem then turns into the maximisation of information, which is predicted for the next time step and phrased as a decision process.
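A hedged sketch of the decision process described above: each candidate camera setting is scored by the mutual information between the state of interest and the predicted observation, i.e., the expected reduction in entropy, and the controller greedily picks the most informative setting. The entropy models here are placeholders for the talk's objective-specific terms.

```python
# Greedy one-step-ahead information maximisation over camera settings.
import numpy as np

def expected_information_gain(prior_entropy, expected_posterior_entropy):
    """I(state; observation) = H(state) - E[H(state | observation)]."""
    return prior_entropy - expected_posterior_entropy

def choose_camera_setting(settings, predict_posterior_entropy, prior_entropy):
    """settings: candidate pan/tilt/zoom configurations;
    predict_posterior_entropy: placeholder model of E[H(state | obs)]."""
    gains = [expected_information_gain(prior_entropy,
                                       predict_posterior_entropy(s))
             for s in settings]
    return settings[int(np.argmax(gains))]
```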

Our approach results in decisions that, on average, satisfy the objectives in the desired proportions. At the end of the talk I will address an application of information maximisation to aid in the interactive calibration of cameras.