Institute Talks

Dynamic Scene Analysis Using CrowdCam Data

Talk
  • 24 May 2017 • 11:00 - 12:00
  • Yael Moses
  • Greenhouse (PS)

Dynamic events such as family gatherings, concerts or sports events are often photographed by a group of people. The set of still images obtained this way is rich in dynamic content. We consider the question of whether such a set of still images, rather than traditional video sequences, can be used for analyzing the dynamic content of the scene. This talk will describe several instances of this problem, their solutions and directions for future studies. In particular, we will present a method to extend epipolar geometry to predict the location of a moving feature in CrowdCam images. The method assumes that the temporal order of the set of images, namely the photo-sequencing, is given. We will briefly describe our method to compute photo-sequencing using geometric considerations and rank aggregation. We will also present a method for identifying the moving regions in a scene, which is a basic component in dynamic scene analysis. Finally, we will consider a new vision of developing a collaborative CrowdCam, and a first step toward this goal.
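
As a rough, self-contained illustration of the underlying geometry (not the method presented in the talk), the sketch below computes the epipolar lines that two observations of the same feature induce in a third image and intersects them. For a static point this pins down its location; the talk's contribution is to extend this kind of constraint to features that move between exposures, using the photo-sequencing order. The fundamental matrices and pixel coordinates are placeholders.

    import numpy as np

    def epipolar_line(F, x):
        """Epipolar line in the target image induced by pixel x in a source
        image, given the fundamental matrix F mapping source -> target."""
        l = F @ np.array([x[0], x[1], 1.0])
        return l / np.linalg.norm(l[:2])   # scale so point-line distances are in pixels

    def intersect_epipolar_lines(F_a, x_a, F_b, x_b):
        """Predict where a (static) feature seen at x_a in image A and at x_b in
        image B must appear in the target image: the intersection of the two
        epipolar lines, computed as a homogeneous cross product."""
        p = np.cross(epipolar_line(F_a, x_a), epipolar_line(F_b, x_b))
        return p[:2] / p[2]                # pixel coordinates in the target image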

Organizers: Jonas Wulff

Geometry of Neural Networks

Talk
  • 24 May 2017 • 14:30 - 15:30
  • Guido Montúfar
  • N4.022 (EI Dept. meeting room / 4th floor, north building)

Deep Learning is one of the most successful machine learning approaches to artificial intelligence. In this talk I discuss the geometry of neural networks as a way to study the success of Deep Learning at a mathematical level and to develop a theoretical basis for making further advances, especially in situations with limited amounts of data and challenging problems in reinforcement learning. I present a few recent results on the representational power of neural networks and then demonstrate how to align this with structures from perception-action problems in order to obtain more efficient learning systems.
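
One concrete way results on representational power are often stated is by counting the linear regions a ReLU network carves its input space into; whether this particular measure is discussed in the talk is an assumption on my part. The toy sketch below counts the regions of a small one-hidden-layer ReLU network on a 1-D input by detecting changes in the units' activation pattern along a dense grid.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=16), rng.normal(size=16)   # 16 ReLU units, scalar input

    def activation_pattern(x):
        """Which hidden units are active at input x."""
        return tuple((W1 * x + b1 > 0).astype(int))

    xs = np.linspace(-10.0, 10.0, 200001)               # dense 1-D grid
    patterns = [activation_pattern(x) for x in xs]
    regions = 1 + sum(p != q for p, q in zip(patterns, patterns[1:]))
    print("linear regions on [-10, 10]:", regions)      # at most 17 for 16 units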

Organizers: Jane Walters

The Perceptual Advantage of Symmetry for Scene Perception

Talk
  • 29 May 2017 • 14:00 - 15:00
  • Sven Dickinson
  • Greenhouse (PS)

Human observers can classify photographs of real-world scenes after only a very brief exposure to the image (Potter & Levy, 1969; Thorpe, Fize, & Marlot, 1996; VanRullen & Thorpe, 2001). Line drawings of natural scenes have been shown to capture essential structural information required for successful scene categorization (Walther et al., 2011). Here, we investigate how the spatial relationships between lines and line segments in the line drawings affect scene classification. In one experiment, we tested the effect of removing either the junctions or the middle segments between junctions. Surprisingly, participants performed better when shown the middle segments (47.5%) than when shown the junctions (42.2%). It appeared as if the images with middle segments tended to maintain the most parallel/locally symmetric portions of the contours. To test this hypothesis, in a second experiment we removed either the most symmetric half or the least symmetric half of the contour pixels, using a novel method of measuring the local symmetry of each contour pixel in the image. Participants were much better at categorizing images containing the most symmetric contour pixels (49.7%) than the least symmetric (38.2%). Thus, results from both experiments demonstrate that local contour symmetry is a crucial organizing principle in complex real-world scenes. Joint work with John Wilder (UofT CS, Psych), Morteza Rezanejad (McGill CS), Kaleem Siddiqi (McGill CS), Allan Jepson (UofT CS), and Dirk Bernhardt-Walther (UofT Psych), to be presented at VSS 2017.

Organizers: Ahmed Osman

Frederick Eberhardt - TBA

IS Colloquium
  • 03 July 2017 • 11:15 - 12:15
  • Frederick Eberhardt
  • Max Planck House Lecture Hall

Organizers: Sebastian Weichwald

Deep Learning and its Relationship with Time

Talk
  • 08 December 2016 • 11:00 - 12:00
  • Laura Leal-Taixé
  • MRZ Seminar Room

In this talk I am going to present the work we have been doing at the Computer Vision Lab of the Technical University of Munich which started as an attempt to better deal with videos (and therefore the time domain) within neural network architectures. Oddly enough, we ended up not including time at all in our proposed solutions. In the first work, we tackle the task of semi-supervised video object segmentation, i.e., the separation of an object from the background in a video, given the mask of the first frame. I will present One-Shot Video Object Segmentation (OSVOS), based on a fully-convolutional neural network architecture that is able to successively transfer generic semantic information, learned on ImageNet, to the task of foreground segmentation, and finally to learning the appearance of a single annotated object of the test sequence (hence one-shot). OSVOS is fast and improves the state of the art by a significant margin (79.8% vs 68.0%). The second work I will present is a new CNN+LSTM architecture for camera pose regression for indoor and outdoor scenes. Contrary to most works, we make use of LSTM units on the CNN output in spatial coordinates in order to capture contextual information. This substantially enlarges the receptive field of each pixel leading to drastic improvements in localization performance. I will also present a new large-scale indoor dataset with accurate ground truth from a laser scanner.
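
To make the one-shot idea concrete, here is a minimal sketch of the fine-tuning step in PyTorch. It assumes a generic pretrained fully-convolutional network `net` that maps an image tensor to per-pixel foreground logits; all names, shapes and hyperparameters are placeholders, and this is not the released OSVOS code.

    import torch
    import torch.nn as nn

    def one_shot_finetune(net, first_frame, first_mask, steps=200, lr=1e-4):
        """Fine-tune a pretrained fully-convolutional net on the single annotated
        first frame of a test sequence (the 'one-shot' step).

        net         : nn.Module mapping (1, 3, H, W) images to (1, 1, H, W) logits
        first_frame : float tensor of shape (1, 3, H, W)
        first_mask  : float tensor of shape (1, 1, H, W) with values in {0, 1}
        """
        criterion = nn.BCEWithLogitsLoss()
        optimizer = torch.optim.Adam(net.parameters(), lr=lr)
        net.train()
        for _ in range(steps):
            optimizer.zero_grad()
            loss = criterion(net(first_frame), first_mask)
            loss.backward()
            optimizer.step()
        return net

    def segment_sequence(net, frames):
        """Apply the fine-tuned net independently to every remaining frame."""
        net.eval()
        with torch.no_grad():
            return [torch.sigmoid(net(f)) > 0.5 for f in frames]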

Organizers: Joel Janai


  • Kathleen Robinette
  • MRZ Seminar Room

Kathleen is the creator of the well-known CAESAR anthropometric dataset and is an expert on body shape and apparel fit.

Organizers: Javier Romero


Intelligent control of uncertain underactuated mechanical systems

Talk
  • 01 December 2016 • 11:00 - 12:00
  • Wallace M. Bessa
  • AMD Seminar Room (Paul-Ehrlich-Str. 15, 1st floor)

Underactuated mechanical systems (UMS) play an essential role in several branches of industrial activity, and their application scope ranges from robotic manipulators and overhead cranes to aerospace vehicles and watercraft. Despite this broad spectrum of applications, the problem of designing accurate controllers for underactuated systems is much trickier than for fully actuated ones. Moreover, the dynamic behavior of a UMS is frequently uncertain and highly nonlinear, which makes the design of control schemes for such systems a challenge for conventional and well-established methods. In this talk, it will be shown that intelligent algorithms, such as fuzzy logic and artificial neural networks, can be combined with nonlinear control techniques (feedback linearization or sliding modes) in order to improve both set-point regulation and trajectory tracking of uncertain underactuated mechanical systems.
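
As a toy illustration of the general control structure (a sliding-mode law plus an online-adapted compensation term), the sketch below tracks a sinusoidal reference with a simple uncertain second-order system. It is only a sketch under my own assumptions: the plant is fully actuated, the adaptive term is a single scalar rather than a fuzzy or neural approximator, and all gains are hand-picked rather than taken from the speaker's work.

    import numpy as np

    # Toy plant: x'' = f(x, x') + u, with f unknown to the controller.
    def f_true(x, xd):
        return -2.0 * np.sin(x) - 0.5 * xd            # "unknown" dynamics

    lam, k, gamma, phi = 2.0, 5.0, 10.0, 0.05         # hand-tuned controller gains
    dt, T = 1e-3, 10.0
    x, xd = 1.0, 0.0                                  # plant state: position, velocity
    f_hat = 0.0                                       # adaptive estimate of f

    for t in np.arange(0.0, T, dt):
        x_r, xd_r, xdd_r = np.sin(t), np.cos(t), -np.sin(t)   # reference trajectory
        e, ed = x - x_r, xd - xd_r
        s = ed + lam * e                              # sliding variable
        # Sliding-mode law with smoothed switching plus adaptive compensation:
        u = xdd_r - lam * ed - f_hat - k * np.tanh(s / phi)
        f_hat += gamma * s * dt                       # simple adaptation law
        xdd = f_true(x, xd) + u                       # true plant dynamics
        x, xd = x + xd * dt, xd + xdd * dt            # Euler integration

    print(f"final tracking error: {x - np.sin(T):+.5f}")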

Organizers: Sebastian Trimpe


  • Carsten Rother
  • MRZ seminar room

In this talk I will present the portfolio of work we conduct in our lab, and then discuss three recent bodies of work in more detail. The first is our work on learning 6D object pose estimation and camera localization from RGB or RGB-D images. I will show that by utilizing the concepts of uncertainty and of learning to score hypotheses, we can improve the state of the art. Secondly, I will present a new approach for inferring multiple diverse labelings in a graphical model. Besides providing guarantees of an exact solution, our method is also faster than existing techniques. Finally, I will present recent work in which we show that popular auto-context decision forests can be mapped to deep ConvNets for semantic segmentation. We use this to detect the spine of a zebrafish in cases where little training data is available.

Organizers: Aseem Behl


  • Dr. Bogdan Savchynskyy
  • MRZ seminar room

We propose a new computational framework for combinatorial problems arising in machine learning and computer vision. This framework is a special case of Lagrangean (dual) decomposition, but allows for efficient dual ascent (message passing) optimization. In a sense, one can understand both the framework and the optimization technique as a generalization of those for standard undirected graphical models (conditional random fields). We will give an overview of our recent results and of our plans for the near future.
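
For background on what "message passing" refers to in the graphical-model setting that the framework generalizes, here is a minimal sketch of exact min-sum (Viterbi) message passing on a chain MRF. It illustrates only the standard special case, not the Lagrangean decomposition or dual-ascent scheme of the talk; the toy costs at the end are made up.

    import numpy as np

    def chain_map(unaries, pairwise):
        """Exact MAP labeling of a chain MRF by min-sum message passing.

        unaries  : (T, K) array of unary costs for T variables with K labels
        pairwise : (K, K) array of pairwise costs shared by all chain edges
        """
        T, K = unaries.shape
        msg = np.zeros((T, K))              # msg[t, j]: best cost of a prefix ending in label j
        back = np.zeros((T, K), dtype=int)  # backpointers for decoding
        msg[0] = unaries[0]
        for t in range(1, T):
            scores = msg[t - 1][:, None] + pairwise + unaries[t][None, :]
            back[t] = scores.argmin(axis=0)
            msg[t] = scores.min(axis=0)
        labels = [int(msg[-1].argmin())]
        for t in range(T - 1, 0, -1):
            labels.append(int(back[t, labels[-1]]))
        return labels[::-1]

    # Tiny example: 4 variables, 3 labels, Potts smoothing costs.
    unaries = np.array([[0, 2, 2], [2, 0, 2], [2, 1, 0], [0, 2, 2]], dtype=float)
    print(chain_map(unaries, 1.0 - np.eye(3)))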

Organizers: Aseem Behl


Monte Carlo with determinantal point processes

Talk
  • 02 November 2016 • 15:00 - 16:00
  • Rémi Bardenet
  • AGBS Seminar room (Spemannstr. 38)

In this talk, we show that using repulsive random variables, it is possible to build Monte Carlo methods that converge faster than vanilla Monte Carlo. More precisely, we build estimators of integrals whose variance decreases as $N^{-1-1/d}$, where $N$ is the number of integrand evaluations and $d$ is the ambient dimension. To do so, we propose stochastic numerical quadratures involving determinantal point processes (DPPs) associated with multivariate orthogonal polynomials. The proposed method can be seen as a stochastic version of Gaussian quadrature, where samples from a determinantal point process replace the zeros of orthogonal polynomials. Furthermore, integration with DPPs is close in spirit to randomized quasi-Monte Carlo methods, leveraging repulsive point processes to ensure low-discrepancy samples. The talk is based on the preprint https://arxiv.org/abs/1605.00361
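
For reference, the two rates side by side (this simply restates the abstract's claim; $\hat{I}_{\mathrm{MC}}$ and $\hat{I}_{\mathrm{DPP}}$ are my own notation for the two estimators, and the $d = 2$ instance is worked out as an example):

    \operatorname{Var}\big[\hat{I}_{\mathrm{MC}}\big] = O\big(N^{-1}\big),
    \qquad
    \operatorname{Var}\big[\hat{I}_{\mathrm{DPP}}\big] = O\big(N^{-1-1/d}\big),

so for $d = 2$ the DPP-based estimator's variance decays as $N^{-3/2}$ rather than $N^{-1}$.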

Organizers: Alexandra Gessner


  • Hedvig Kjellström
  • MRZ Seminar Room

In this talk I will first outline my different research projects. I will then focus on one project with applications in health, and introduce the Inter-Battery Topic Model (IBTM). Our approach extends traditional topic models by learning a factorized latent variable representation. The structured representation leads to a model that marries the benefits traditionally associated with a discriminative approach, such as feature selection, with those of a generative model, such as principled regularization and the ability to handle missing data. The factorization is provided by representing data in terms of aligned pairs of observations as different views. This provides a means of selecting a representation that separates topics shared by both views from topics that are unique to a single view. This structured consolidation allows for efficient and robust inference and provides a compact and efficient representation.


Optical Robot Skin and Whole Body Vision

Talk
  • 19 October 2016 • 14:00 - 15:00
  • Chris Atkeson and Akihiko Yamaguchi
  • Max Planck House, Lecture Hall

Chris Atkeson will talk about the motivation for optical robot skin and whole-body vision. Akihiko Yamaguchi will talk about a first application, FingerVision.

Organizers: Ludovic Righetti


Numerics in Computational Stellar Astrophysics

Talk
  • 29 September 2016 • 14:00 - 15:00
  • Jean-Claude Passy
  • AGBS Seminar room (Spemannstr. 38)

The importance of computer science in astrophysical research has increased tremendously over the past 15 years. Indeed, as observational facilities and missions constantly push their precision limits, theorists need to provide observers with more and more realistic numerical models. These models need to be verified and validated, and their uncertainties must be assessed. In this talk, I will present the results of two independent numerical studies aimed at solving fundamental problems in stellar astrophysics. First, I will explain how we have used different 3D hydrodynamics codes to simulate stellar mergers. In particular, I will focus on the verification and validation steps, and describe a new algorithm to compute self-gravity that I have developed and implemented in a grid-based code. Then, I will introduce the concept of a 'stellar evolution' code, which models the full evolution of a star from its birth until its death. I will present a comparison of several such codes that are widely used by the astrophysical community, and assess their systematic uncertainties. These modeling uncertainties must be taken into account by observers if they wish to derive observed parameters more reliably.

Organizers: Raffi Enficiaud