Institute Talks

Metrics Matter: Examples from Binary and Multilabel Classification

IS Colloquium
  • 21 August 2017 • 11:15 - 12:15
  • Sanmi Koyejo
  • Empirical Inference meeting room (MPI-IS building, 4th floor)

Performance metrics are a key component of machine learning systems, and are ideally constructed to reflect real-world tradeoffs. In contrast, much of the literature simply focuses on algorithms for maximizing accuracy. With the increasing integration of machine learning into real systems, it is clear that accuracy is an insufficient measure of performance for many problems of interest. Unfortunately, unlike accuracy, many real-world performance metrics are non-decomposable, i.e., they cannot be computed as a sum of losses over individual instances. Thus, known algorithms and their associated analyses do not extend trivially, and direct approaches require expensive combinatorial optimization. I will outline recent results characterizing population-optimal classifiers for large families of binary and multilabel classification metrics, including nonlinear metrics such as the F-measure and the Jaccard measure. Perhaps surprisingly, the prediction that maximizes the utility for a range of such metrics takes a simple form. This results in simple and scalable procedures for optimizing complex metrics in practice. I will also outline how the same analysis gives optimal procedures for selecting point estimates from complex posterior distributions for structured objects such as graphs. Joint work with Nagarajan Natarajan, Bowei Yan, Kai Zhong, Pradeep Ravikumar and Inderjit Dhillon.
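The thresholding form of the optimal classifier can be illustrated with a small sketch (a toy in Python with made-up scores and labels, using a plain empirical threshold sweep rather than the authors' population-level procedure):

```python
import numpy as np

def f1(tp, fp, fn):
    # F-measure is a nonlinear function of the whole confusion matrix,
    # so it cannot be written as a sum of per-instance losses.
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def best_threshold(probs, labels):
    """Sweep thresholds on estimated P(y=1|x) and keep the one that
    maximizes empirical F1; the utility-maximizing classifier for such
    metrics is a threshold rule on the posterior probability."""
    best_t, best_f = 0.5, -1.0
    for t in np.unique(probs):
        pred = probs >= t
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        score = f1(tp, fp, fn)
        if score > best_f:
            best_t, best_f = t, score
    return best_t, best_f
```

On perfectly ranked scores the sweep recovers a threshold achieving F1 = 1; in practice the probabilities would come from a calibrated class-probability estimator.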

Organizers: Mijung Park

Physical Blendshapes - Controllable Physics for Human Faces

Talk
  • 23 August 2017 • 11:00 - 12:00
  • Yeara Kozlov
  • Aquarium

Creating convincing human facial animation is challenging. Face animation is often hand-crafted by artists separately from body motion. Alternatively, if the face animation is derived from motion capture, it is typically performed while the actor is relatively still. Recombining the isolated face animation with body motion is non-trivial and often produces uncanny effects if the body dynamics are not properly reflected on the face (e.g. cheeks wiggling when running). In this talk, I will discuss the challenges of human soft tissue simulation and control. I will then present our method for adding physical effects to facial blendshape animation. Unlike previous methods that try to add physics to face rigs, our method can combine facial animation and rigid body motion consistently while preserving the original animation as closely as possible. Our novel simulation framework uses the original animation as per-frame rest-poses without adding spurious forces. We also propose the concept of blendmaterials to give artists an intuitive means to control the changing material properties due to muscle activation.
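The idea of using the animation itself as a per-frame rest pose can be sketched in one dimension (a toy mass-spring model with invented gains, not the actual simulation framework): because the rest pose is taken from the animation at every frame, the elastic force vanishes whenever the simulated state already tracks the animation, so no spurious forces are added.

```python
import numpy as np

def step(x, v, rest, k=50.0, c=2.0, dt=0.01):
    """One semi-implicit Euler step of a unit mass on a damped spring
    whose rest position 'rest' is the animated pose for this frame."""
    f = -k * (x - rest) - c * v   # zero when x already matches the animation
    v = v + dt * f
    x = x + dt * v
    return x, v

# Toy facial-coordinate animation used directly as per-frame rest poses.
anim = np.sin(np.linspace(0.0, np.pi, 1000))
x, v = anim[0], 0.0
for rest in anim:
    x, v = step(x, v, rest)
```

Secondary dynamics (e.g., forces induced by body motion) would enter as extra terms on f, wiggling the simulated tissue around the artist's animation instead of replacing it.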

Organizers: Timo Bolkart

Developing an embodied agent to detect early signs of dementia

Talk
  • 25 August 2017 • 11:00 - 12:00
  • Prof. Dr. Hedvig Kjellström
  • N3.022 / Aquarium

In this talk I will first outline my different research projects. I will then focus on the EACare project, a recently started multi-disciplinary collaboration with the aim of developing an embodied system capable of carrying out neuropsychological tests to detect early signs of dementia, e.g., due to Alzheimer's disease. The system will use methods from machine learning and social robotics, and will be trained with examples of recorded clinician-patient interactions. The interaction will be developed using a participatory design approach. I will describe the scope and method of the project, and report on a first Wizard of Oz prototype.

Dominik Bach - TBA

IS Colloquium
  • 02 October 2017 • 11:15 - 12:15
  • Dominik Bach

  • Jun Nakanishi
  • TTR, AMD Seminar Room (first floor)

Understanding the principles of natural movement generation has been, and continues to be, one of the most interesting and important open problems in the fields of robotics and neural control of movement. In this talk, I will give an overview of our previous work on the control of dynamic movements in robotic systems, aimed at both control design principles and an understanding of motion generation. Our research has drawn on dynamical systems theory, adaptive and optimal control, and statistical learning, and their application to robotics towards achieving dynamically dexterous behavior in robotic systems. First, our studies on dynamical-systems-based task encoding in robot brachiation, movement primitives for imitation learning, and oscillator-based biped locomotion control will be presented. Then, our recent work on optimal control of robotic systems with variable stiffness actuation will be introduced, with the aim of achieving highly dynamic movements by exploiting the natural dynamics of the system. Finally, our new humanoid robot H-1 at TUM-ICS will be introduced.
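As a concrete example of the dynamical-systems viewpoint, the transformation system of a discrete dynamic movement primitive can be sketched in a few lines (shown here without the learned forcing term that shapes the trajectory after a demonstration, and with generic gain values):

```python
import numpy as np

def dmp_rollout(y0, g, tau=1.0, dt=0.01, alpha=25.0, beta=6.25, steps=100):
    """Discrete movement primitive transformation system: a critically
    damped (beta = alpha/4) pull of state y toward goal g. A learned
    forcing term would be added to zdot to reproduce a demonstrated
    movement shape."""
    y, z = y0, 0.0
    traj = []
    for _ in range(steps):
        zdot = alpha * (beta * (g - y) - z) / tau
        z += dt * zdot
        y += dt * z / tau
        traj.append(y)
    return np.array(traj)
```

dmp_rollout(0.0, 1.0) converges smoothly to the goal; changing tau time-scales the movement without altering its shape, which is what makes such primitives convenient building blocks for imitation learning.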

Organizers: Ludovic Righetti


  • Alexander Sprowitz
  • TTR, AMD Seminar Room (first floor)

The current performance gap between legged animals and legged robots is large. Animals can reach high locomotion speed in complex terrain, or run at a low cost of transport. They are able to rapidly sense their environment, process sensor data, learn and plan locomotion strategies, and execute feedforward- and feedback-controlled locomotion patterns fluently on the fly. Animals do so with hardware that has, compared to the latest man-made actuators, electronics, and processors, relatively low bandwidth, medium power density, and low speed. The most common approach to legged robot locomotion still assumes rigid linkage hardware, high-torque actuators, and model-based control algorithms with high-bandwidth, high-gain feedback mechanisms. State-of-the-art robotic demonstrations such as the 2015 DARPA challenge showed that seemingly trivial locomotion tasks such as level walking, or walking over soft sand, still stop most of our biped and quadruped robots. This talk focuses on an alternative class of legged robots and control algorithms, designed and implemented on several quadruped and biped platforms, for a new generation of legged robotic systems. Biomechanical blueprints inspired by nature, and mechanisms from locomotion neurocontrol, were designed, tested, and can be compared to their biological counterparts. We focus on hardware and controllers that allow comparatively cheap robots, in terms of computation, control, and mechanical complexity. Our goal is highly dynamic, robust legged systems with low weight and inertia, relatively low mechanical complexity and cost of transport, and little computational demand for standard locomotion tasks. Ideally, such systems can also serve as testing platforms to explain not-yet-understood biomechanical and neurocontrol aspects of animal locomotion.
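One classical neurocontrol-inspired building block for such controllers is a central pattern generator. A toy sketch (coupled Kuramoto-style phase oscillators with invented gains, not any specific robot's controller) shows four leg phases locking into a trot-like pattern:

```python
import numpy as np

rng = np.random.default_rng(1)

def trot_cpg(steps=4000, dt=0.005, freq=1.5, k=4.0):
    """Four phase oscillators, one per leg, coupled so that diagonal
    leg pairs synchronize and the two pairs run in antiphase
    (desired offsets 0, pi, pi, 0): a trot-like gait."""
    target = np.array([0.0, np.pi, np.pi, 0.0])
    phi = rng.uniform(0.0, 2.0 * np.pi, 4)   # random initial phases
    for _ in range(steps):
        coupling = np.array([
            np.mean(np.sin((phi - target) - (phi[i] - target[i])))
            for i in range(4)
        ])
        phi += dt * (2.0 * np.pi * freq + k * coupling)
    return phi
```

Each phi[i] would drive a leg trajectory generator; perturbations die out because the phases are attracted back to the gait pattern, which is what makes such controllers computationally cheap.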

Organizers: Ludovic Righetti


  • Gernot Müller-Putz
  • MPH Lecture Hall

More than half of all persons with spinal cord injuries (SCI) suffer from impairments of both hands, which results in a tremendous decrease in quality of life and represents a major barrier to inclusion in society. Functional restoration is possible with neuroprostheses (NPs) based on functional electrical stimulation (FES). A brain-computer interface (BCI) provides a means of control for such neuroprostheses, since these users have limited abilities to operate traditional assistive devices. This talk presents our early research on BCI-based NP control based on motor imagery, discusses hybrid BCI solutions, and shows our work on movement trajectory decoding. An outlook on future BCI applications will conclude the talk.

Organizers: Moritz Grosse-Wentrup


Making Robots Learn

IS Colloquium
  • 13 November 2015 • 11:30 - 12:30
  • Prof. Pieter Abbeel
  • Max Planck House Tübingen, Lecture Hall

Programming robots remains notoriously difficult. Equipping robots with the ability to learn would bypass the need for what often ends up being time-consuming, task-specific programming. In this talk I will describe the ideas behind two promising types of robot learning: First I will discuss apprenticeship learning, in which robots learn from human demonstrations, and which has enabled autonomous helicopter aerobatics, knot tying, basic suturing, and cloth manipulation. Then I will discuss deep reinforcement learning, in which robots learn through their own trial and error, and which has enabled learning locomotion as well as a range of assembly and manipulation tasks.
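The trial-and-error idea behind deep reinforcement learning can be shown in its most stripped-down form, a REINFORCE policy-gradient update on a two-armed bandit (a toy with invented rewards, far simpler than the locomotion and manipulation tasks in the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

def reinforce_bandit(steps=2000, lr=0.1):
    """REINFORCE on a two-armed bandit: sample an action from a softmax
    policy, observe a reward, and push the policy's logits along the
    log-likelihood gradient scaled by the reward."""
    theta = np.zeros(2)                # logits over the two actions
    rewards = np.array([0.0, 1.0])     # arm 1 pays off, arm 0 does not
    for _ in range(steps):
        p = np.exp(theta) / np.sum(np.exp(theta))
        a = rng.choice(2, p=p)
        grad_log_pi = -p
        grad_log_pi[a] += 1.0          # gradient of log softmax at action a
        theta += lr * rewards[a] * grad_log_pi
    return np.exp(theta) / np.sum(np.exp(theta))
```

After training, the policy puts nearly all probability on the rewarding arm; deep RL replaces the logit table with a neural network conditioned on the robot's state.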

Organizers: Stefan Schaal


Understanding Plants and Animals

Talk
  • 10 November 2015 • 11:00 - 12:00
  • Prof. David W. Jacobs
  • MRZ seminar room

I will describe a line of work that aims to automatically understand images of animals and plants. I will begin by describing recent work that uses Bounded Distortion matching to model pose variation in animals. Using a generic 3D model of an animal and multiple images of different individuals in various poses, we construct a model that captures the way in which the animal articulates. This is done by solving for the pose of the template that matches each image while simultaneously solving for the stiffness of each tetrahedron of the model. We minimize an L1 norm on stiffness, producing a model that bends easily at joints, but that captures the rigidity of other parts of the animal. We show that this model can determine the pose of animals such as cats in a wide range of positions. Bounded distortion forms a core part of the matching between 3D model and 2D images. I will also show that Bounded Distortion can be used for 2D matching. We use it to find corresponding features in images very robustly, optimizing an L0 distance to maximize the number of matched features, while bounding the amount of non-rigid variation between the images. We demonstrate the use of this approach in matching non-rigid objects and in wide-baseline matching of features. I will also give an overview of a method for identifying the parts of animals in images, to produce an automatic correspondence between images of animals. Building on these correspondences we develop methods for recognizing the species of a bird, or the breed of a dog. We use these recognition algorithms to construct electronic field guides. I will describe three field guides that we have published, Birdsnap, Dogsnap, and Leafsnap. Leafsnap identifies the species of trees using shape-based matching to compare images of leaves. Leafsnap has been downloaded by over 1.5 million users, and has been used in schools and in biodiversity studies.
This work has been done in collaboration with many University of Maryland students and with groups at Columbia University, the Smithsonian Institution National Museum of Natural History, and the Weizmann Institute.
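The role of the L1 stiffness penalty can be illustrated by its proximal operator, soft thresholding, which drives small values exactly to zero (a generic sparsity illustration, not the solver used in this work): tetrahedra whose stiffness is pushed to exactly zero become the freely bending joints.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1: shrink every entry toward
    zero by lam and clamp small entries at exactly zero. This is why
    an L1 penalty yields a few exactly-zero (floppy) stiffnesses while
    leaving large (rigid) ones mostly intact."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```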

Organizers: Stephan Streuber




  • Olga Diamanti
  • MRZ Seminar room

The design of tangent vector fields on discrete surfaces is a basic building block for many geometry processing applications, such as surface remeshing, parameterization and architectural geometric design. Many applications require the design of multiple vector fields (vector sets) coupled in a nontrivial way; for example, sets of more than two vectors are used for meshing of triangular, quadrilateral and hexagonal meshes. In this talk, a new, polynomial-based representation for general unordered vector sets will be presented. Using this representation we can efficiently interpolate user-provided vector constraints to design vector set fields. Our interpolation scheme requires neither integer period jumps nor explicit pairings of vectors between adjacent sets on a manifold, as is common in the field design literature. Several extensions to the basic interpolation scheme are possible, which make our representation applicable in various scenarios; in this talk, we will focus on generating vector set fields particularly suited for mesh parameterization and show applications in architectural modeling.
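The representation can be sketched in the plane: identify each tangent vector with a complex number and store the unordered set as the coefficients of the polynomial whose roots are those numbers. Coefficients are symmetric functions of the roots, so no ordering or pairing is ever needed (a toy NumPy round-trip, not the talk's implementation):

```python
import numpy as np

def to_coeffs(vectors):
    """Encode an unordered set of 2D vectors as polynomial coefficients
    (order-free, since coefficients are symmetric in the roots)."""
    roots = [complex(x, y) for x, y in vectors]
    return np.poly(roots)                  # leading coefficient first

def from_coeffs(coeffs):
    """Decode by root finding; sorting is only to ease comparison."""
    return sorted((r.real, r.imag) for r in np.roots(coeffs))

# Round-trip a 2-vector set; in field design the coefficients, not the
# vectors, would be interpolated across mesh faces before decoding.
vecs = [(1.0, 0.0), (0.0, 1.0)]
recovered = from_coeffs(to_coeffs(vecs))
```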

Organizers: Gerard Pons-Moll


Learning to generate

Talk
  • 19 October 2015 • 14:00 - 15:00
  • Max Welling
  • MPI Lecture Hall

The recent amazing success of deep learning has mainly been in discriminative learning, that is, classification and regression. An important factor for this success has been, besides Moore's law, the availability of large labeled datasets. However, it is not clear whether in the future the amount of available labels will grow as fast as the amount of unlabeled data, which is one argument for interest in unsupervised and semi-supervised learning. Besides this, there are a number of other reasons why unsupervised learning remains important: data in the life sciences often has many more features than instances (p >> n), probabilities over feature space are useful for planning and control problems, and complex simulator models are the norm in the sciences. In this talk I will discuss deep generative models that can be jointly trained with discriminative models and that facilitate semi-supervised learning. I will discuss recent progress in learning and Bayesian inference in these "variational auto-encoders". I will then extend the deep generative models to the class of simulators for which no tractable likelihood exists and discuss new Bayesian inference procedures to fit these models to data.
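The computational core of a variational auto-encoder is the reparameterization trick together with a closed-form KL term; a minimal NumPy sketch of just these two pieces (the encoder and decoder networks are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps with eps ~ N(0, I): the randomness
    is moved into eps, so gradients can flow through mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), the
    regularizer in the variational auto-encoder objective."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
```

In a full model, mu and log_var are outputs of the encoder network and the decoder reconstructs the input from z; training maximizes reconstruction likelihood minus this KL term.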

Organizers: Peter Vincent Gehler


Imaging genomics of functional brain networks

IS Colloquium
  • 19 October 2015 • 11:15 - 12:15
  • Jonas Richiardi
  • Max Planck House, Lecture Hall

During rest, brain activity is intrinsically synchronized between different brain regions, forming networks of coherent activity. These functional networks (FNs), consisting of multiple regions widely distributed across lobes and hemispheres, appear to be a fundamental theme of neural organization in mammalian brains. Despite hundreds of studies detailing this phenomenon, the genetic and molecular mechanisms supporting these functional networks remain undefined. Previous work has mostly focused on polymorphisms in candidate genes, or used a twin study approach to demonstrate heritability of aspects of resting-state connectivity. The recent availability of high spatial resolution post-mortem brain gene expression datasets, together with several large-scale imaging genetics datasets, which contain joint in-vivo functional brain imaging data and genotype data for several hundred subjects, opens intriguing data analysis avenues. Using novel cross-modal graph-based statistics, we show that functional brain networks defined with resting-state fMRI can be recapitulated using measures of correlated gene expression, and that the relationship is not driven by gross tissue types. The set of genes we identify is significantly enriched for certain types of ion channels and synapse-related genes. We validate results by showing that polymorphisms in this set significantly correlate with alterations of in-vivo resting-state functional connectivity in a group of 259 adolescents. We further validate results on another species by showing that our list of genes is significantly associated with neuronal connectivity in the mouse brain. These results provide convergent, multimodal evidence that resting-state functional networks emerge from the orchestrated activity of dozens of genes linked to ion channel activity and synaptic function. 
Functional brain networks are also known to be perturbed in a variety of neurological and neuropsychological disorders, including Alzheimer's and schizophrenia. Given this link between disease and networks, and the fact that many brain disorders have genetic contributions, it seems that functional brain networks may be an interesting endophenotype for clinical use. We discuss the translational potential of the imaging genomics techniques we developed.
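The basic cross-modal comparison can be caricatured in a few lines (a plain Pearson correlation on matrix upper triangles with random toy data, not the graph-based statistics or validation analyses actually used):

```python
import numpy as np

def connectivity_correlation(A, B):
    """Pearson correlation between the upper triangles of two symmetric
    region-by-region matrices, e.g. a correlated-gene-expression matrix
    vs. a resting-state functional connectivity matrix."""
    iu = np.triu_indices_from(A, k=1)    # off-diagonal upper triangle
    return np.corrcoef(A[iu], B[iu])[0, 1]
```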

Organizers: Moritz Grosse-Wentrup, Michel Besserve


  • Yasemin Bekiroglu
  • AMD Seminar Room (Paul-Ehrlich-Str. 15, 1st floor)

Information required to plan grasps, such as object shape and pose, must be extracted from the environment through sensors. However, sensory measurements are noisy and associated with a degree of uncertainty. Furthermore, object parameters relevant to grasp planning, e.g., friction and mass, may not be accurately estimated. In real-world settings, these issues can lead to grasp failures with serious consequences. I will talk about learning approaches that use real sensory data, e.g., visual and tactile, to assess grasp success (both discriminative and generative approaches), which can be used to trigger plan corrections. I will also present a probabilistic approach for learning object models based on visual and tactile data through physical interaction with an object. Our robot explores unknown objects by touching them strategically at parts that are uncertain in terms of shape.
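A minimal discriminative sketch of grasp-success assessment (logistic regression on a made-up one-dimensional sensory feature, far simpler than the real visual and tactile pipelines in the talk) shows the basic learn-then-trigger-correction idea:

```python
import numpy as np

def train_grasp_classifier(X, y, lr=0.5, steps=500):
    """Logistic regression by gradient descent: X holds sensory feature
    vectors for attempted grasps, y holds 1 for success, 0 for failure."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted success prob.
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def predict(w, b, X):
    """Predicted success; a False here could trigger a plan correction."""
    return (X @ w + b) > 0
```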

Organizers: Jeannette Bohg