Institute Talks

John Cunningham - TBA

IS Colloquium
  • 06 March 2017 • 11:15 - 12:15
  • John Cunningham
  • MPH Lecture Hall

Organizers: Philipp Hennig

Frederick Eberhardt - TBA

IS Colloquium
  • 03 July 2017 • 11:15 - 12:15
  • Frederick Eberhardt
  • Max Planck House Lecture Hall

Organizers: Sebastian Weichwald

  • Kevin T. Kelly
  • Max Planck House Lecture Hall

In machine learning, the standard explanation of Ockham's razor is to minimize predictive risk. But prediction is interpreted passively---one may not rely on predictions to change the probability distribution used for training. That limitation may be overcome by studying alternatively manipulated systems in randomized experimental trials, but experiments on multivariate systems or on human subjects are often infeasible or immoral. Happily, the past three decades have witnessed the development of a range of statistical techniques for discovering causal relations from non-experimental data. One characteristic of such methods is a strong Ockham bias toward simpler causal theories---i.e., theories with fewer causal connections among the variables of interest. Our question is what Ockham's razor has to do with finding true (rather than merely plausible) causal theories from non-experimental data. The traditional story of minimizing predictive risk does not apply, because uniform consistency is often infeasible in non-experimental causal discovery: without strong and implausible assumptions, the probability of erroneous causal orientation may be arbitrarily high at any sample size. The standard justification for causal discovery methods is point-wise consistency, or convergence in probability to the true causes. But Ockham's razor is not necessary for point-wise convergence: a Bayesian with a strong prior bias toward a complex model would also be point-wise consistent. Either way, the crucial Ockham bias remains disconnected from learning performance. A method reverses its opinion in probability when it probably says A at some sample size and probably says B incompatible with A at a higher sample size. A method cycles in probability when it probably says A, then probably says B incompatible with A, and then probably says A again. Uniform consistency allows for no reversals or cycles in probability. Point-wise consistency allows for arbitrarily many. Lying plausibly between those two extremes is straightest possible convergence to the truth, which allows for only as many cycles and reversals in probability as are necessary to solve the learning problem at hand. We show that Ockham's razor is necessary for cycle-minimal convergence and that patience, or waiting for nature to choose among simplest theories, is necessary for reversal-minimal convergence. The idea yields very tight constraints on inductive statistical methods, both classical and Bayesian, with causal discovery methods as an important special case. It also provides a valid interpretation of significance and power when tests are used to fish inductively for models. The talk is self-contained for a general scientific audience. Novel concepts are illustrated amply with figures and simulations.
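
As a rough, self-contained illustration of the abstract's notion of a reversal in probability (not taken from the talk), the following Python sketch runs a toy significance-test-based model selector on coin-flip data whose true bias is close to, but not exactly, 0.5. At small sample sizes the method probably picks the simpler "fair coin" theory; at large sample sizes it probably switches to the more complex "biased coin" theory. All function names, thresholds, and parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def verdict(sample, z_crit=1.96):
    """Toy model selector: keep the simple theory (fair coin) unless a
    two-sided z-test rejects p = 0.5, then switch to the complex one."""
    n = len(sample)
    z = (sample.mean() - 0.5) / np.sqrt(0.25 / n)
    return "complex" if abs(z) > z_crit else "simple"

# The simple theory is false, but only barely, so it is hard to refute
# at small sample sizes.
p_true = 0.55
for n in [20, 200, 2000]:
    says_complex = np.mean(
        [verdict(rng.random(n) < p_true) == "complex" for _ in range(2000)]
    )
    print(f"n = {n:4d}: P(method says 'complex') ~ {says_complex:.2f}")
```

Between the smallest and largest sample sizes the toy method reverses its opinion in probability exactly once, which is the behaviour the definitions in the abstract describe.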

Organizers: Michel Besserve, Kun Zhang


(Matter) Waves in disordered media

Talk
  • 16 July 2015 • 15:00 - 15:30
  • Valentin Volchkov
  • AGBS Seminar Room

The propagation of waves in inhomogeneous media is a vast subject, spanning many different research communities. The ability of waves to interfere leads to the celebrated phenomenon of Anderson localization: constructive interference increases the probability of return and can therefore reduce or even suppress propagation in a disordered medium. Anderson localization was first predicted for electrons in 'dirty' condensed matter systems, but it was soon generalized to all kinds of waves and has since been studied with light, microwaves, ultrasound, and ultracold atoms. I will give a brief introduction to the basic ideas of Anderson physics and mention some applications, arguing that disorder can be used as a resource rather than being a nuisance. I will then discuss ultracold atoms as a good candidate for studying Anderson localization, and wave propagation in disorder in general, and present related experiments.
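
As a small, self-contained illustration (not part of the talk), the following Python sketch diagonalizes the standard one-dimensional Anderson tight-binding Hamiltonian and measures how strongly a band-centre eigenstate is localized; the lattice size, hopping strength, and disorder strength are arbitrary illustrative choices.

```python
import numpy as np

# 1D Anderson tight-binding model: nearest-neighbour hopping t plus
# random on-site energies drawn uniformly from [-W/2, W/2].
rng = np.random.default_rng(1)
N, t, W = 400, 1.0, 2.0
H = np.diag(rng.uniform(-W / 2, W / 2, N))
H += np.diag(-t * np.ones(N - 1), 1) + np.diag(-t * np.ones(N - 1), -1)

vals, vecs = np.linalg.eigh(H)
psi2 = vecs[:, np.argmin(np.abs(vals))] ** 2   # eigenstate nearest the band centre
ipr = np.sum(psi2 ** 2)                        # inverse participation ratio
print(f"eigenstate weight concentrated on ~{1 / ipr:.0f} of {N} sites")
```

In one dimension any amount of disorder localizes all eigenstates, so the participation number reported above stays far below the system size even for modest disorder.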

Organizers: Senya Polikovsky


  • Garrett Stanley
  • MRZ Seminar room

The external world is represented in the brain as spatiotemporal patterns of electrical activity. Sensory signals, such as light, sound, and touch, are transduced at the periphery and subsequently transformed by various stages of neural circuitry, resulting in increasingly abstract representations through the sensory pathways of the brain. It is these representations that ultimately give rise to sensory perception. Deciphering the messages conveyed in the representations is often referred to as “reading the neural code”. True understanding of the neural code requires knowledge not only of the representation of the external world at one particular stage of the neural pathway, but ultimately of how sensory information is communicated from the periphery to successive downstream brain structures. Our laboratory has focused on various challenges posed by this problem, some of which I will discuss. In contrast, prosthetic devices designed to augment or replace sensory function rely on the principle of artificially activating neural circuits to induce a desired perception, which we might refer to as “writing the neural code”. This poses significant challenges not only in biomaterials and interfaces, but also in knowing precisely what to tell the brain to do. Our laboratory has begun some preliminary work in this direction that I will discuss. Taken together, an understanding of these complexities and others is critical for understanding how information about the outside world is acquired and communicated to downstream brain structures, for relating spatiotemporal patterns of neural activity to sensory perception, and for the development of engineered devices for replacing or augmenting sensory function lost to trauma or disease.

Organizers: Jonas Wulff


Autonomous Systems at Moog

Talk
  • 06 July 2015 • 14:00 - 15:00
  • Gonzalo Rey
  • AMD Seminar Room

The talk will briefly introduce Moog Inc. and then describe Moog's view of its value proposition to robotics and autonomous systems. If robots and autonomous systems are to achieve their enormous potential to positively impact the world economy, the technology has to reach the levels of robustness, availability, reliability and safety that are expected from current solutions. The commercial aircraft industry has seen an order of magnitude increase in machine complexity in the last fifty years in order to reach the lowest cost per seat-mile and the highest levels of safety in its history; today one can travel cheaper and safer than ever. Moog believes that there are opportunities to apply to robotics and autonomous systems the methodologies and principles that enabled the lowest ever costs for aircraft while at the same time managing the highest ever complexity and safety levels. The talk will briefly describe the types of approaches used in aircraft to achieve failure rates so low that they are hard to comprehend (or believe, for those not familiar with the engineering approach), while at the same time relying on low-cost commercial off-the-shelf components in electronics, materials and manufacturing processes. Next the talk will move on to a couple of active research projects Moog is engaged in with ETHZ and IIT. Finally, it will give an overview of an emerging research effort in the certification of advanced (robot) control laws.

Organizers: Ludovic Righetti


  • Trevor Darrell
  • MPH Lecture Hall, Tübingen

Learning of layered or "deep" representations has provided significant advances in computer vision in recent years, but has traditionally been limited to fully supervised settings with very large amounts of training data. New results show that such methods can also excel when learning in sparse/weakly labeled settings across modalities and domains. I'll present our recent long-term recurrent network model which can learn cross-modal translation and can provide open-domain video-to-text transcription. I'll also describe state-of-the-art models for fully convolutional pixel-dense segmentation from weakly labeled input, and finally will discuss new methods for adapting deep recognition models to new domains with few or no target labels for categories of interest.

Organizers: Jonas Wulff


  • Andre Seyfarth
  • MRZ Seminar room

In this presentation a series of conceptual models for describing human and animal locomotion will be presented ranging from standing to walking and running. By subsequently increasing the complexity of the models we show that basic properties of the underlying spring-mass model can be inherited by the more detailed models. Model extensions include the consideration of a rigid trunk (instead of a point mass), non-elastic leg properties (instead of a mass-less leg spring), additional legs (two and four legs), leg masses, leg segments (e.g. a compliantly attached foot) and energy management protocols. Furthermore we propose a methodology to evaluate and refine conceptual models based on the test trilogy. This approach consists of a simulation test, a hardware test and a behavioral comparison of biological experiments with model predictions and hardware models.
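
For readers unfamiliar with the spring-mass model mentioned above, the following Python sketch integrates the stance phase of the planar spring-loaded inverted pendulum, i.e. a point mass on a massless linear leg spring pivoting about a fixed foot. It is a generic textbook-style illustration, not code from the talk, and all parameter values are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stance phase of the spring-mass ("SLIP") running model: point mass m on a
# massless leg spring (stiffness k, rest length l0), foot fixed at the origin.
m, k, l0, g = 80.0, 20000.0, 1.0, 9.81   # illustrative values

def slip_stance(t, state):
    x, y, vx, vy = state
    l = np.hypot(x, y)            # current leg length
    f = k * (l0 - l)              # spring force along the leg
    ax = f * x / (l * m)
    ay = f * y / (l * m) - g
    return [vx, vy, ax, ay]

def takeoff(t, state):
    return np.hypot(state[0], state[1]) - l0   # leg back at rest length
takeoff.terminal = True
takeoff.direction = 1

# Touchdown with the leg at ~68 degrees from horizontal, moving forward.
angle = np.deg2rad(68)
state0 = [-l0 * np.cos(angle), l0 * np.sin(angle), 5.0, 0.0]
sol = solve_ivp(slip_stance, (0, 1), state0, events=takeoff, max_step=1e-3)
print("stance duration ~ %.3f s, takeoff velocity ~ (%.2f, %.2f) m/s"
      % (sol.t[-1], sol.y[2, -1], sol.y[3, -1]))
```

The more detailed models mentioned in the abstract extend this basic template with a rigid trunk, non-elastic leg properties, leg masses, leg segments, and additional legs.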

Organizers: Ludovic Righetti


Learning Rich and Fair Representations from Images and Text

Talk
  • 10 June 2015 • 03:00 pm - 04:00 pm
  • Rich Zemel
  • MPH Lecture Hall, Tübingen

I will talk about two types of machine learning problems which are important but have received little attention. The first consists of problems naturally formulated as learning a one-to-many mapping, which can handle the inherent ambiguity in tasks such as generating segmentations or captions for images. The second involves learning representations that are invariant to certain nuisance or sensitive factors of variation in the data while retaining as much of the remaining information as possible. The primary approach we formulate for both problems is a constrained form of joint embedding in a deep generative model that can develop informative representations of sentences and images. Applications discussed will include image captioning, question answering, segmentation, classification without discrimination, and domain adaptation.

Organizers: Gerard Pons-Moll


  • Hans-Peter Seidel
  • MPH Hall

During the last three decades computer graphics has established itself as a core discipline within computer science and information technology. Two decades ago, most digital content was textual. Today it has expanded to include audio, images, video, and a variety of graphical representations. New and emerging technologies such as multimedia, social networks, digital television, digital photography, the rapid development of new sensing devices, telecommunication and telepresence, virtual reality, and the 3D internet further indicate the potential of computer graphics in the years to come. Typical for the field is the coincidence of very large data sets with the demand for fast, and possibly interactive, high-quality visual feedback. Furthermore, the user should be able to interact with the environment in a natural and intuitive way. In order to address the challenges mentioned above, a new and more integrated scientific view of computer graphics is required. In contrast to the classical approach to computer graphics, which takes as input a scene model -- consisting of a set of light sources, a set of objects (specified by their shape and material properties), and a camera -- and uses simulation to compute an image, we take the more integrated view of '3D Image Analysis and Synthesis' in our research, considering the whole pipeline from data acquisition, through data processing, to rendering. In our opinion, this point of view is necessary in order to exploit the capabilities and perspectives of modern hardware, both on the input (sensors, scanners, digital photography, digital video) and output (graphics hardware, multiple platforms) side. Our vision and long-term goal is the development of methods and tools to efficiently handle the huge amount of data during the acquisition process, to extract structure and meaning from the abundance of digital data, and to turn this into graphical representations that facilitate further processing, rendering, and interaction. In this presentation I will highlight some of our ongoing research by means of examples. Topics covered include 3D reconstruction and digital geometry processing, shape analysis and shape design, motion and performance capture, and 3D video processing.


  • Andrea Vedaldi
  • MPH Hall

Learnable representations, and deep convolutional neural networks (CNNs) in particular, have become the preferred way of extracting visual features for image understanding tasks, from object recognition to semantic segmentation. In this talk I will discuss several recent advances in deep representations for computer vision. After reviewing modern CNN architectures, I will give an example of a state-of-the-art network in text spotting; in particular, I will show that, by using only synthetic data and a sufficiently large deep model, it is possible to directly map image regions to English words, a classification problem with 90K classes, obtaining in this manner state-of-the-art performance in text spotting. I will also briefly touch on other applications of deep learning to object recognition and discuss feature universality and transfer learning. In the last part of the talk I will move to the problem of understanding deep networks, which remain largely black boxes, presenting two possible approaches to their analysis. The first is a set of visualisation techniques that investigate the information retained and learned by a visual representation. The second is a method for exploring how representations capture geometric notions such as image transformations, and for finding whether and how different representations are related.