Institute Talks

John Cunningham - TBA

IS Colloquium
  • 06 March 2017 • 11:15 - 12:15
  • John Cunningham
  • MPH Lecture Hall

Organizers: Philipp Hennig

Frederick Eberhardt - TBA

IS Colloquium
  • 03 July 2017 • 11:15 - 12:15
  • Frederick Eberhardt
  • Max Planck House Lecture Hall

Organizers: Sebastian Weichwald

Making Robots Learn

IS Colloquium
  • 13 November 2015 • 11:30 - 12:30
  • Prof. Pieter Abbeel
  • Max Planck House Tübingen, Lecture Hall

Programming robots remains notoriously difficult. Equipping robots with the ability to learn would bypass the need for what often ends up being time-consuming, task-specific programming. In this talk I will describe the ideas behind two promising types of robot learning. First I will discuss apprenticeship learning, in which robots learn from human demonstrations, and which has enabled autonomous helicopter aerobatics, knot tying, basic suturing, and cloth manipulation. Then I will discuss deep reinforcement learning, in which robots learn through their own trial and error, and which has enabled learning locomotion as well as a range of assembly and manipulation tasks.
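
As a toy illustration of the trial-and-error learning mentioned above (this is not code from the talk), the sketch below runs a REINFORCE-style policy-gradient update on a two-armed bandit; the reward probabilities, learning rate, and variable names are all illustrative assumptions.

```python
# Minimal sketch (not from the talk): REINFORCE-style trial-and-error learning
# on a toy two-armed bandit. Reward probabilities and step size are illustrative.
import numpy as np

rng = np.random.default_rng(0)
reward_prob = np.array([0.2, 0.8])   # assumed ground truth, unknown to the agent
theta = np.zeros(2)                  # policy parameters (action preferences)
alpha = 0.1                          # learning rate

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(2000):
    pi = softmax(theta)
    a = rng.choice(2, p=pi)                    # sample an action (trial)
    r = float(rng.random() < reward_prob[a])   # observe reward (error signal)
    grad_log_pi = -pi
    grad_log_pi[a] += 1.0                      # gradient of log pi(a) w.r.t. theta
    theta += alpha * r * grad_log_pi           # REINFORCE update

print("learned action probabilities:", softmax(theta))
```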

Organizers: Stefan Schaal


Understanding Plants and Animals

Talk
  • 10 November 2015 • 11:00 - 12:00
  • Prof. David W. Jacobs
  • MRZ seminar room

I will describe a line of work that aims to automatically understand images of animals and plants. I will begin by describing recent work that uses Bounded Distortion matching to model pose variation in animals. Using a generic 3D model of an animal and multiple images of different individuals in various poses, we construct a model that captures the way in which the animal articulates. This is done by solving for the pose of the template that matches each image while simultaneously solving for the stiffness of each tetrahedron of the model. We minimize an L1 norm on stiffness, producing a model that bends easily at joints, but that captures the rigidity of other parts of the animal. We show that this model can determine the pose of animals such as cats in a wide range of positions. Bounded Distortion forms a core part of the matching between the 3D model and 2D images. I will also show that Bounded Distortion can be used for 2D matching. We use it to find corresponding features in images very robustly, optimizing an L0 distance to maximize the number of matched features, while bounding the amount of non-rigid variation between the images. We demonstrate the use of this approach in matching non-rigid objects and in wide-baseline matching of features. I will also give an overview of a method for identifying the parts of animals in images, to produce an automatic correspondence between images of animals. Building on these correspondences, we develop methods for recognizing the species of a bird or the breed of a dog. We use these recognition algorithms to construct electronic field guides. I will describe three field guides that we have published: Birdsnap, Dogsnap, and Leafsnap. Leafsnap identifies the species of trees using shape-based matching to compare images of leaves. Leafsnap has been downloaded by over 1.5 million users, and has been used in schools and in biodiversity studies. This work has been done in collaboration with many University of Maryland students and with groups at Columbia University, the Smithsonian Institution National Museum of Natural History, and the Weizmann Institute.
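
To illustrate the shape-based matching idea behind Leafsnap in the simplest possible terms (a toy stand-in, not the descriptor or pipeline actually used), the sketch below compares leaf silhouettes with a centroid-distance signature and a nearest-neighbour rule; all contours and species names are synthetic.

```python
# Toy sketch (not the Leafsnap pipeline): shape-based matching of leaf
# silhouettes with a centroid-distance signature and a 1-nearest-neighbour
# classifier. All data below is synthetic.
import numpy as np

def shape_signature(contour, n_samples=64):
    """Centroid-distance signature of a closed 2D contour (N x 2 array)."""
    centroid = contour.mean(axis=0)
    d = np.linalg.norm(contour - centroid, axis=1)
    # resample to a fixed length and normalise for scale invariance
    idx = np.linspace(0, len(d) - 1, n_samples).astype(int)
    sig = d[idx]
    return sig / (sig.max() + 1e-12)

def classify(query, gallery):
    """Return the label of the gallery contour closest to the query."""
    sig_q = shape_signature(query)
    dists = {label: np.linalg.norm(sig_q - shape_signature(c))
             for label, c in gallery.items()}
    return min(dists, key=dists.get)

# synthetic "species": an ellipse-shaped leaf and a lobed leaf
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
ellipse = np.stack([2 * np.cos(t), np.sin(t)], axis=1)
lobed   = np.stack([(2 + 0.5 * np.cos(5 * t)) * np.cos(t),
                    (1 + 0.25 * np.cos(5 * t)) * np.sin(t)], axis=1)
gallery = {"species_A": ellipse, "species_B": lobed}

query = 1.3 * lobed + 0.02 * np.random.default_rng(0).normal(size=lobed.shape)
print(classify(query, gallery))   # expected: species_B
```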

Organizers: Stephan Streuber


  • Olga Diamanti
  • MRZ Seminar room

The design of tangent vector fields on discrete surfaces is a basic building block for many geometry processing applications, such as surface remeshing, parameterization and architectural geometric design. Many applications require the design of multiple vector fields (vector sets) coupled in a nontrivial way; for example, sets of more than two vectors are used for meshing of triangular, quadrilateral and hexagonal meshes. In this talk, a new, polynomial-based representation for general unordered vector sets will be presented. Using this representation we can efficiently interpolate user-provided vector constraints to design vector set fields. Our interpolation scheme requires neither integer period jumps nor explicit pairings of vectors between adjacent sets on a manifold, as is common in the field design literature. Several extensions to the basic interpolation scheme are possible, which make our representation applicable in various scenarios; in this talk, we will focus on generating vector set fields particularly suited for mesh parameterization and show applications in architectural modeling.
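
The core idea of the polynomial-based representation can be sketched as follows (a minimal illustration under my own assumptions, not the full scheme from the talk): an unordered set of tangent vectors, written as complex numbers in a local frame, is stored as the coefficients of the monic polynomial whose roots they are, which removes any dependence on ordering and allows naive interpolation by averaging coefficients.

```python
# Minimal sketch of representing an unordered set of tangent vectors by the
# coefficients of the complex polynomial whose roots they are (the general
# idea described above; details of the actual scheme differ).
import numpy as np

def set_to_coeffs(vectors):
    """Unordered vector set (complex numbers in a local frame) -> polynomial coefficients."""
    return np.poly(vectors)          # monic polynomial with the given roots

def coeffs_to_set(coeffs):
    """Polynomial coefficients -> the vector set, recovered up to ordering."""
    return np.roots(coeffs)

u = np.array([1 + 0j, 1j, -1 + 0j, -1j])        # a 4-vector cross-field direction set
v = np.array([1j, -1j, -1 + 0j, 1 + 0j])        # same set, different order

print(np.allclose(set_to_coeffs(u), set_to_coeffs(v)))   # True: order does not matter

# naive interpolation between two vector sets: average the coefficients
w = np.exp(1j * 0.3) * u                         # the same set rotated by 0.3 rad
mid = coeffs_to_set(0.5 * (set_to_coeffs(u) + set_to_coeffs(w)))
print(np.sort_complex(mid))
```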

Organizers: Gerard Pons-Moll


Learning to generate

Talk
  • 19 October 2015 • 14:00 - 15:00
  • Max Welling
  • MPI Lecture Hall

The recent amazing success of deep learning has been mainly in discriminative learning, that is, classification and regression. An important factor for this success has been, besides Moore's law, the availability of large labeled datasets. However, it is not clear whether in the future the amount of available labels will grow as fast as the amount of unlabeled data, providing one argument for interest in unsupervised and semi-supervised learning. Besides this, there are a number of other reasons why unsupervised learning is still important, such as the fact that data in the life sciences often has many more features than instances (p >> n), the fact that probabilities over feature space are useful for planning and control problems, and the fact that complex simulator models are the norm in the sciences. In this talk I will discuss deep generative models that can be jointly trained with discriminative models and that facilitate semi-supervised learning. I will discuss recent progress in learning and Bayesian inference in these "variational auto-encoders". I will then extend the deep generative models to the class of simulators for which no tractable likelihood exists and discuss new Bayesian inference procedures to fit these models to data.
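
For reference (standard notation, not specific to this talk), the objective optimized by variational auto-encoders is the evidence lower bound

$$
\log p_\theta(x) \;\ge\; \mathcal{L}(\theta,\phi;x) \;=\; \mathbb{E}_{q_\phi(z\mid x)}\!\big[\log p_\theta(x\mid z)\big] \;-\; \mathrm{KL}\big(q_\phi(z\mid x)\,\|\,p(z)\big),
$$

and the reparameterization $z = \mu_\phi(x) + \sigma_\phi(x)\odot\epsilon$ with $\epsilon \sim \mathcal{N}(0,I)$ makes the first term differentiable in the encoder parameters $\phi$, so encoder and decoder can be trained jointly by stochastic gradient methods.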

Organizers: Peter Vincent Gehler


Imaging genomics of functional brain networks

IS Colloquium
  • 19 October 2015 • 11:15 - 12:15
  • Jonas Richiardi
  • Max Planck House, Lecture Hall

During rest, brain activity is intrinsically synchronized between different brain regions, forming networks of coherent activity. These functional networks (FNs), consisting of multiple regions widely distributed across lobes and hemispheres, appear to be a fundamental theme of neural organization in mammalian brains. Despite hundreds of studies detailing this phenomenon, the genetic and molecular mechanisms supporting these functional networks remain undefined. Previous work has mostly focused on polymorphisms in candidate genes, or used a twin study approach to demonstrate heritability of aspects of resting-state connectivity. The recent availability of high spatial resolution post-mortem brain gene expression datasets, together with several large-scale imaging genetics datasets, which contain joint in-vivo functional brain imaging data and genotype data for several hundred subjects, opens intriguing data analysis avenues. Using novel cross-modal graph-based statistics, we show that functional brain networks defined with resting-state fMRI can be recapitulated using measures of correlated gene expression, and that the relationship is not driven by gross tissue types. The set of genes we identify is significantly enriched for certain types of ion channels and synapse-related genes. We validate results by showing that polymorphisms in this set significantly correlate with alterations of in-vivo resting-state functional connectivity in a group of 259 adolescents. We further validate results on another species by showing that our list of genes is significantly associated with neuronal connectivity in the mouse brain. These results provide convergent, multimodal evidence that resting-state functional networks emerge from the orchestrated activity of dozens of genes linked to ion channel activity and synaptic function. Functional brain networks are also known to be perturbed in a variety of neurological and neuropsychological disorders, including Alzheimer's and schizophrenia. Given this link between disease and networks, and the fact that many brain disorders have genetic contributions, it seems that functional brain networks may be an interesting endophenotype for clinical use. We discuss the translational potential of the imaging genomics techniques we developed.
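
As a rough illustration of the kind of cross-modal comparison described above (not the actual graph-based statistic used in the study), the sketch below asks, on synthetic data, whether gene co-expression is higher for region pairs within the same functional network than for pairs in different networks, and assesses this with a label permutation test.

```python
# Illustrative sketch only (not the statistic used in the study): test whether
# gene co-expression is higher for region pairs inside the same functional
# network than for pairs in different networks, via a label permutation test.
# All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_genes = 40, 200
network = rng.integers(0, 4, size=n_regions)             # functional network label per region

# synthetic expression: regions in the same network get correlated profiles
base = rng.normal(size=(4, n_genes))
expr = base[network] + 0.8 * rng.normal(size=(n_regions, n_genes))
coexpr = np.corrcoef(expr)                                # region-by-region co-expression

def within_minus_between(labels):
    same = labels[:, None] == labels[None, :]
    mask = ~np.eye(n_regions, dtype=bool)
    return coexpr[same & mask].mean() - coexpr[~same & mask].mean()

observed = within_minus_between(network)
null = np.array([within_minus_between(rng.permutation(network)) for _ in range(2000)])
p_value = (np.sum(null >= observed) + 1) / (len(null) + 1)
print(f"observed difference = {observed:.3f}, permutation p = {p_value:.4f}")
```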

Organizers: Moritz Grosse-Wentrup, Michel Besserve


  • Yasemin Bekiroglu
  • AMD Seminar Room (Paul-Ehrlich-Str. 15, 1st floor)

Unknown information required to plan grasps, such as object shape and pose, needs to be extracted from the environment through sensors. However, sensory measurements are noisy and associated with a degree of uncertainty. Furthermore, object parameters relevant to grasp planning may not be accurately estimated, e.g., friction and mass. In real-world settings, these issues can lead to grasp failures with serious consequences. I will talk about learning approaches, both discriminative and generative, that use real sensory data, e.g., visual and tactile, to assess grasp success and that can be used to trigger plan corrections. I will also present a probabilistic approach for learning object models based on visual and tactile data through physical interaction with an object. Our robot explores unknown objects by touching them strategically at parts that are uncertain in terms of shape.
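
A minimal sketch of a discriminative grasp-stability assessor of the kind mentioned above (illustrative only; the features, threshold, and classifier choice are assumptions, not the actual system):

```python
# Minimal sketch (not the actual system): a discriminative grasp-stability
# classifier trained on tactile features, which could trigger a re-plan when
# the predicted success probability is low. Features and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_grasps, n_taxels = 500, 12
tactile = rng.uniform(0.0, 1.0, size=(n_grasps, n_taxels))     # mean pressure per taxel
# synthetic rule: grasps with firm, well-distributed contact tend to succeed
success = (tactile.mean(axis=1) + 0.1 * rng.normal(size=n_grasps) > 0.5).astype(int)

clf = LogisticRegression().fit(tactile[:400], success[:400])
print("held-out accuracy:", clf.score(tactile[400:], success[400:]))

new_grasp = rng.uniform(0.0, 1.0, size=(1, n_taxels))
p_success = clf.predict_proba(new_grasp)[0, 1]
if p_success < 0.7:                       # threshold is an arbitrary choice here
    print(f"predicted success {p_success:.2f}: trigger grasp correction")
```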

Organizers: Jeannette Bohg


Intelligent Learning

IS Colloquium
  • 12 October 2015 • 11:15 - 12:15
  • Vladimir Vapnik
  • Max Planck House Lecture Hall

Organizers: Michel Besserve


  • Gael Varoquaux
  • Max Planck House Lecture Hall

Organizers: Moritz Grosse-Wentrup


Causal Models and How to Refute Them

Talk
  • 29 September 2015 • 10:30 - 11:30
  • Robin Evans

Directed acyclic graph models (DAG models, also called Bayesian networks) are widely used in the context of causal inference, and they can be manipulated to represent the consequences of intervention in a causal system. However, DAGs cannot fully represent causal models with confounding; other classes of graphs, such as ancestral graphs and ADMGs, have been introduced to deal with this using additional kinds of edge, but we show that these are not sufficiently rich to capture the range of possible models. In fact, no mixed graph over the observed variables is rich enough, regardless of how many edges are used. Instead we introduce mDAGs, a class of hyper-graphs appropriate for representing causal models when some of the variables are unobserved. Results on the Markov equivalence of these marginal models show that when interpreted causally, mDAGs are the minimal class of graphs which can be sensibly used. Understanding such equivalences is critical for the use of automatic causal structure learning methods, a topic in which there is considerable interest. We elucidate the state of the art as well as some open problems.
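
To make the motivation concrete, the following sketch (my own illustration, not code from the talk) computes the standard latent projection of a small DAG with an unobserved confounder U onto its observed variables, showing that a bidirected edge is needed in addition to directed ones; mDAGs generalize this picture further.

```python
# Sketch of the standard "latent projection" of a DAG onto its observed
# variables (illustrating why directed edges alone do not suffice; the
# talk's mDAGs generalise this further). Pure Python, no graph library.
from itertools import combinations

# a DAG as parent lists: U is unobserved and confounds X and Y
dag = {"U": [], "X": ["U"], "Y": ["U", "X"], "Z": ["Y"]}
observed = {"X", "Y", "Z"}
latent = set(dag) - observed

def reaches(src, dst, parents):
    """Is there a directed path src -> ... -> dst whose intermediate nodes are all latent?"""
    children = {v: [w for w in parents if v in parents[w]] for v in parents}
    stack, seen = [src], set()
    while stack:
        v = stack.pop()
        for w in children[v]:
            if w == dst:
                return True
            if w in latent and w not in seen:
                seen.add(w)
                stack.append(w)
    return False

directed = {(a, b) for a in observed for b in observed
            if a != b and reaches(a, b, dag)}
bidirected = {frozenset((a, b)) for a, b in combinations(observed, 2)
              if any(reaches(l, a, dag) and reaches(l, b, dag) for l in latent)}

print("directed edges:  ", sorted(directed))                         # X->Y, Y->Z
print("bidirected edges:", [tuple(sorted(e)) for e in bidirected])   # (X, Y) from U
```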

Organizers: Sabrina Rehbaum