Institute Talks

Recognizing the Pain Expressions of Horses

Talk
  • 10 December 2018 • 14:00–15:00
  • Prof. Dr. Hedvig Kjellström
  • Aquarium (N3.022)

Recognition of pain in horses and other animals is important because pain is a manifestation of disease and decreases animal welfare. Pain diagnostics for humans typically include self-evaluation and localization of the pain with the help of standardized forms, and labeling of the pain by a clinical expert using pain scales. However, animals cannot verbalize their pain as humans can, and the use of standardized pain scales is challenged by the fact that animals such as horses and cattle, being prey animals, display subtle and less obvious pain behavior: it is simply beneficial for a prey animal to appear healthy, in order to lower the interest of predators. We work together with veterinarians to develop methods for automatic video-based recognition of pain in horses. These methods are typically trained on video examples of behavioral traits labeled with pain level and pain characteristics. This automated, user-independent system for recognizing pain behavior in horses will be the first of its kind in the world. A successful system might change how we monitor and care for our animals.

Robot Learning for Advanced Manufacturing – An Overview

Talk
  • 10 December 2018 • 11:00–12:00
  • Dr. Eugen Solowjow
  • MPI-IS Stuttgart, seminar room 2P4

A dominant trend in manufacturing is the move toward small production volumes and high product variability. It is thus anticipated that future manufacturing automation systems will be characterized by a high degree of autonomy and will have to learn new behaviors without explicit programming. Robot Learning and, more generally, Autonomous Manufacturing form an exciting research field at the intersection of Machine Learning and Automation. The combination of "traditional" control techniques with data-driven algorithms holds the promise of allowing robots to learn new behaviors through experience. This talk introduces selected Siemens research projects in the area of Autonomous Manufacturing.

Organizers: Sebastian Trimpe, Friedrich Solowjow

Physical Reasoning and Robot Manipulation

Talk
  • 11 December 2018 • 15:00–16:00
  • Marc Toussaint
  • 2R4 Werner Köster lecture hall

Animals and humans are excellent at conceiving solutions to physical and geometric problems, for instance in using tools, coming up with creative constructions, or eventually inventing novel mechanisms and machines. Cognitive scientists coined the term intuitive physics in this context. It is a shame we do not yet have good computational models of such capabilities. A main stream of current robotics research focuses on training robots for narrow manipulation skills, often using massive data from physical simulators. Complementary to that, we should also try to understand how basic principles underlying physics can directly be used to enable general-purpose physical reasoning in robots, rather than sampling data from physical simulations. In this talk I will discuss an approach called Logic-Geometric Programming, which builds a bridge between control theory, AI planning and robot manipulation. It demonstrates strong performance on sequential manipulation problems, but also raises a number of highly interesting fundamental problems, including its probabilistic formulation, reactive execution and learning.

Organizers: Katherine Kuchenbecker, Ildikó Papp-Wiedmann, Barbara Kettemann, Matthias Tröndle

Magnetically Guided Multiscale Robots and Soft-robotic Grippers

Talk
  • 11 December 2018 • 11:00–12:00
  • Dr. František Mach
  • Stuttgart 2P4

State-of-the-art robotic systems that adopt magnetically actuated ferromagnetic bodies, or even whole miniature robots, have recently become a fast-advancing technological field, especially at the nano- and microscale. Mesoscale and, above all, multiscale magnetically guided robotic systems remain a more challenging field of study, where it is difficult to account for the different forces, precision requirements and energy demands involved. The major goal of this talk is to discuss the challenges in the field of magnetically guided mesoscale and multiscale actuation, followed by the results of our research on magnetic positioning systems and magnetic soft-robotic grippers.

Organizers: Metin Sitti

Learning Dynamics from Kinematics: Estimating Foot Pressure from Video

Talk
  • 12 December 2018 • 10:00–11:00
  • Yanxi Liu
  • Aquarium (N3.022)

Human pose stability analysis is key to understanding locomotion and the control of body equilibrium, with numerous applications in kinesiology, medicine and robotics. We propose and validate a novel approach to learning the dynamics from the kinematics of a human body to aid stability analysis. More specifically, we propose an end-to-end deep learning architecture to regress foot pressure from a human pose derived from video. We have collected and utilized a set of long (5+ minute) choreographed Taiji (Tai Chi) sequences of multiple subjects with synchronized motion capture, foot pressure and video data. The derived human pose data and corresponding foot pressure maps are used jointly to train a convolutional neural network with a residual architecture, named "PressNet". Cross-validation results show promising performance of PressNet, significantly outperforming the baseline method under reasonable sensor noise ranges.
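As a rough illustration of the regression setup, here is a toy NumPy stand-in: a small residual network mapping a flattened pose vector to a flattened pressure map. All sizes, and the fully connected layers standing in for PressNet's convolutional ones, are invented for this sketch.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """y = x + W2 @ relu(W1 @ x): the skip connection eases optimization."""
    return x + W2 @ relu(W1 @ x)

rng = np.random.default_rng(0)
pose = rng.normal(size=50)                        # e.g. 25 2-D joints from video
W_in = rng.normal(scale=0.1, size=(64, 50))
blocks = [(rng.normal(scale=0.1, size=(64, 64)),
           rng.normal(scale=0.1, size=(64, 64))) for _ in range(3)]
W_out = rng.normal(scale=0.1, size=(200, 64))     # flattened pressure map

h = W_in @ pose
for W1, W2 in blocks:
    h = residual_block(h, W1, W2)
pressure = W_out @ h                              # regressed foot-pressure values
print(pressure.shape)                             # (200,)
```

In the actual system such a network would be trained end-to-end against the synchronized pressure-mat recordings; here the weights are random and only the forward pass is shown.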

Organizers: Nadine Rueegg

Self-Supervised Representation Learning for Visual Behavior Analysis and Synthesis

Talk
  • 14 December 2018 • 12:00–13:00
  • Prof. Dr. Björn Ommer
  • PS Aquarium

Understanding objects and their behavior from images and videos is a difficult inverse problem. It requires learning a metric in image space that reflects object relations in the real world. This metric learning problem calls for large volumes of training data. While images and videos are easily available, labels are not, thus motivating self-supervised metric and representation learning. I will also present a widely applicable strategy based on deep reinforcement learning to improve the surrogate tasks underlying self-supervision. Thereafter, the talk will cover the learning of disentangled representations that explicitly separate different object characteristics. Our approach is based on an analysis-by-synthesis paradigm and can generate novel object instances with flexible changes to individual characteristics such as their appearance and pose. It nicely addresses diverse applications in human and animal behavior analysis, a topic on which we collaborate intensively with neuroscientists. Time permitting, I will discuss the disentangling of representations from a wider perspective, including novel strategies for image stylization and for regularizing the latent space of generator networks.
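One common instantiation of self-supervised metric learning, shown here as a generic sketch rather than the talk's specific surrogate tasks, is a triplet loss whose supervision comes from the data itself, e.g. temporal proximity within a video:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Pull the positive toward the anchor, push the negative beyond a margin.

    In a self-supervised setup the 'positive' needs no human label: it can be
    another frame of the same clip, the 'negative' a frame from a different video.
    """
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

emb = lambda x: x / np.linalg.norm(x)          # stand-in for an embedding network
a = emb(np.array([1.0, 0.2]))                  # anchor frame
p = emb(np.array([0.9, 0.3]))                  # nearby frame, same clip
n = emb(np.array([-1.0, 0.5]))                 # frame from a different video
print(triplet_loss(a, p, n))
```

Minimizing this loss over many such automatically generated triplets shapes the image-space metric without any manual labels.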

Organizers: Joel Janai

Generating Faces & Heads: Texture, Shape and Beyond.

Talk
  • 17 December 2018 • 11:00–12:00
  • Stefanos Zafeiriou
  • PS Aquarium

In the past few years, with the advent of Deep Convolutional Neural Networks (DCNNs) and the availability of visual data, it was shown that excellent results can be produced in very challenging tasks, such as visual object recognition, detection and tracking. Nevertheless, in certain tasks, such as fine-grained object recognition (e.g., face recognition), it is very difficult to collect the amount of data that is needed. In this talk, I will show how, using DCNNs, we can generate highly realistic faces and heads and use them for training algorithms such as face and facial expression recognition. Next, I will reverse the problem and demonstrate how a very powerful trained face recognition network can be used to perform very accurate 3D shape and texture reconstruction of faces from a single image. Finally, I will demonstrate how to create very lightweight networks for representing 3D face texture and shape structure by capitalising upon intrinsic mesh convolutions.

Organizers: Dimitris Tzionas

Mind Games

IS Colloquium
  • 21 December 2018 • 11:00–12:00
  • Peter Dayan
  • IS Lecture Hall

Much existing work in reinforcement learning involves environments that are either intentionally neutral, lacking a role for cooperation and competition, or intentionally simple, where agents need imagine nothing more than that they are playing versions of themselves. Richer game-theoretic notions become important as these constraints are relaxed. For humans, this encompasses issues that concern utility, such as envy and guilt, and issues that concern inference, such as recursive modeling of other players. I will discuss studies treating a paradigmatic game of trust as an interactive partially observable Markov decision process, and will illustrate the solution concepts with evidence from interactions between various groups of subjects, including those diagnosed with borderline and antisocial personality disorders.

TBA

IS Colloquium
  • 28 January 2019 • 11:15–12:15
  • Florian Marquardt

Organizers: Matthias Bauer

Political Science and Data Science: What we can learn from each other

IS Colloquium
  • 12 March 2018 • 11:15–12:15
  • Simon Hegelich
  • MPI-IS lecture hall (N0.002)

Political science is integrating computational methods like machine learning into its own toolbox. At the same time, awareness is rising that the use of machine learning algorithms in our daily life is a highly political issue. These two trends, the integration of computational methods into political science and the political analysis of the digital revolution, form the ground for a new transdisciplinary approach: political data science. Interestingly, there is a rich tradition of crossing the borders of the disciplines, as can be seen in the works of Paul Werbos and Herbert Simon (both political scientists). Building on this tradition and integrating ideas from deep learning and Hegel's philosophy of logic, a new perspective on causality might arise.

Organizers: Philipp Geiger


  • Giacomo Garegnani
  • Tübingen, S2 seminar room

We present a novel probabilistic integrator for ordinary differential equations (ODEs) which allows for uncertainty quantification of the numerical error [1]. In particular, we randomise the time steps and build a probability measure on the deterministic solution, which collapses to the true solution of the ODE with the same rate of convergence as the underlying deterministic scheme. The intrinsic nature of the random perturbation guarantees that our probabilistic integrator conserves some geometric properties of the deterministic method it is built on, such as the conservation of first integrals or the symplecticity of the flow. Finally, we present a procedure for incorporating our probabilistic solver into the framework of Bayesian inverse problems, showing how inaccurate posterior concentrations given by deterministic methods can be corrected by a probabilistic interpretation of the numerical solution.
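As a minimal sketch of the random time-stepping idea, the following assumes an explicit Euler base scheme and a uniform step perturbation that shrinks faster than the nominal step; the precise perturbation scaling and the geometry-preserving variants follow the paper rather than this toy version.

```python
import numpy as np

def randomised_euler_samples(f, y0, t_end, h, n_samples, p=1.0, seed=0):
    """Monte Carlo ensemble of explicit Euler solves with randomised steps.

    Each step is h_k = h + xi_k with xi_k ~ U(-h**(p + 0.5), h**(p + 0.5)),
    so the perturbation vanishes faster than h and the deterministic
    convergence rate is preserved (toy version of the scheme in the talk).
    """
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_samples):
        t, y = 0.0, float(y0)
        while t < t_end:
            hk = h + rng.uniform(-h**(p + 0.5), h**(p + 0.5))
            hk = min(hk, t_end - t)      # do not step past the endpoint
            y = y + hk * f(t, y)         # one explicit Euler step
            t += hk
        samples.append(y)
    return np.array(samples)

# Example: dy/dt = -y, y(0) = 1; the ensemble concentrates near exp(-1)
ends = randomised_euler_samples(lambda t, y: -y, 1.0, 1.0, h=0.01, n_samples=200)
print(ends.mean(), ends.std())
```

The spread of the ensemble is the uncertainty estimate for the numerical error; shrinking h tightens it at the deterministic convergence rate.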

Organizers: Hans Kersting


  • Bin Yu
  • Tübingen, IS Lecture Hall (N0.002)

In this talk, I'd like to discuss the intertwined importance and connections of the three principles of data science in the title. They will be demonstrated in the context of two collaborative projects, in neuroscience and in genomics, respectively. The first project, in neuroscience, uses transfer learning to integrate convolutional neural networks (CNNs) fitted on ImageNet with regression methods to provide predictive and stable characterizations of neurons from the challenging primary visual cortex V4. The second project proposes iterative random forests (iRF), a stabilized RF, to seek predictable and interpretable high-order interactions among biomolecules.

Organizers: Michel Besserve


  • Prof. Constantin Rothkopf
  • Tübingen, 3rd Floor Intelligent Systems: Aquarium

Active vision has long put forward the idea that visual sensation and our actions are inseparable, especially when considering naturalistic extended behavior. Further support for this idea comes from theoretical work in optimal control, which demonstrates that sensing, planning, and acting in sequential tasks can be separated only under very restricted circumstances. The talk will present experimental evidence together with computational explanations of human visuomotor behavior in tasks ranging from classic psychophysical detection tasks to ball catching and visuomotor navigation. Along the way, it will touch on topics such as the heuristics hypothesis and the learning of visual representations. The connecting theme will be that, from the switching of visuomotor behavior in response to changing task constraints down to cortical visual representations in V1, action and perception are inseparably intertwined in an ambiguous and uncertain world.

Organizers: Betty Mohler


A naturalistic perspective on optic flow processing in the fly

Talk
  • 27 February 2018 • 15:00–16:00
  • Aljoscha Leonhardt
  • N4.022, EI Glass Seminar Room

Optic flow offers a rich source of information about an organism’s environment. Flies, for instance, are thought to make use of motion vision to control and stabilise their course during acrobatic airborne manoeuvres. How these computations are implemented in neural hardware and how such circuits cope with the visual complexity of natural scenes, however, remain open questions. This talk outlines some of the progress we have made in unraveling the computational substrate underlying optic flow processing in Drosophila. In particular, I will focus on our efforts to connect neural mechanisms and real-world demands via task-driven modelling.
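The textbook starting point for such models is the Hassenstein–Reichardt correlator, which compares a delayed copy of one photoreceptor's signal with its neighbour's undelayed signal; the sketch below is this classic model, not the speaker's specific circuit.

```python
import numpy as np

def hassenstein_reichardt(signal_a, signal_b, tau=5.0):
    """Mirror-symmetric delay-and-correlate motion detector.

    A stimulus moving from A to B hits A first, so delaying A's signal
    aligns it with B's; subtracting the mirrored branch yields a signed
    (direction-selective) motion response.
    """
    def lowpass(s, tau):
        out, y, alpha = [], 0.0, 1.0 / tau
        for v in s:
            y += alpha * (v - y)        # first-order low-pass as the delay line
            out.append(y)
        return np.array(out)

    return lowpass(signal_a, tau) * signal_b - lowpass(signal_b, tau) * signal_a

# A brightness pulse sweeping from photoreceptor A to photoreceptor B
t = np.arange(200)
a = np.exp(-0.5 * ((t - 80) / 5.0) ** 2)
b = np.exp(-0.5 * ((t - 90) / 5.0) ** 2)   # same pulse, seen later at B
response = hassenstein_reichardt(a, b)
print(response.sum())                       # net positive for A-to-B motion
```

Reversing the stimulus direction flips the sign of the summed response, which is the defining property of this detector.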

Organizers: Michel Besserve


Patient Inspired Engineering: Problem, device, solution

Talk
  • 26 February 2018 • 11:00–12:00
  • Professor Rahmi Oklu
  • Room 3P02 - Stuttgart

Minimally invasive approaches to the treatment of vascular diseases are constantly evolving. These diseases are among the most prevalent medical problems today, including stroke, myocardial infarction, pulmonary emboli, hemorrhage and aneurysms. I will review current approaches to vascular embolization and thrombosis, the challenges they pose and the limitations of current devices, and end with patient-inspired engineering approaches to the treatment of these conditions.

Organizers: Metin Sitti


Deriving a Tongue Model from MRI Data

Talk
  • 20 February 2018 • 14:00–14:45
  • Alexander Hewer
  • Aquarium

The tongue plays a vital role in everyday life; we use it extensively during speech production. Due to this importance, we want to derive a parametric shape model of the tongue. This model enables us to reconstruct the full tongue shape from a sparse set of points, such as motion capture data. Moreover, we can use such a model in simulations of the vocal tract to perform articulatory speech synthesis or to create animated virtual avatars. In my talk, I describe a framework for deriving such a model from MRI scans of the vocal tract. In particular, this framework uses image denoising and segmentation methods to produce a point cloud approximating the vocal tract surface. In this context, I will also discuss how palatal contacts of the tongue can be handled, i.e., situations where the tongue touches the palate and thus no tongue boundary is visible. Afterwards, template matching is used to derive a mesh representation of the tongue from this cloud. The acquired meshes are finally used to construct a multilinear model.
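The final modelling step can be sketched with plain PCA on synthetic data: fit a low-dimensional shape basis to training meshes, then recover a full shape from a sparse point set by least squares in coefficient space. The actual model in the talk is multilinear (separating, e.g., speaker and articulation variation), and all sizes here are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "training meshes": 40 tongue shapes drawn from a 10-D latent space,
# each with 30 vertices in 3-D (flattened to 90 coordinates)
latent = rng.normal(size=(40, 10))
meshes = latent @ rng.normal(size=(10, 90)) + rng.normal(size=90)

mean = meshes.mean(axis=0)
_, _, Vt = np.linalg.svd(meshes - mean, full_matrices=False)
basis = Vt[:10]                                   # principal shape components

# Recover a full shape from 25 observed coordinates (e.g. motion capture)
obs = rng.choice(90, size=25, replace=False)
target = meshes[0]
coeffs, *_ = np.linalg.lstsq(basis[:, obs].T,
                             target[obs] - mean[obs], rcond=None)
reconstruction = mean + coeffs @ basis
print(np.linalg.norm(reconstruction - target))    # near zero: shape recovered
```

Because the synthetic shapes truly live in a 10-D subspace, 25 observed coordinates overdetermine the 10 coefficients and the full mesh is recovered essentially exactly.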

Organizers: Timo Bolkart


Nonstandard Analysis - The Comeback of Infinitesimals

Talk
  • 19 February 2018 • 11:00–12:15
  • Randolf Scholz
  • Tübingen,

The early calculus of Newton and Leibniz made heavy use of infinitesimal quantities and flourished for over a hundred years until it was superseded by the more rigorous epsilon-delta formalism. It took until the 1950s for A. Robinson to find a proper way to construct a number system containing actual infinitesimals: the hyperreals *ℝ. This talk outlines their construction and possible applications in modern analysis.
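Robinson's construction can be stated compactly in its standard ultrapower form:

```latex
% Ultrapower construction of the hyperreals (standard textbook sketch):
% fix a non-principal ultrafilter U on the natural numbers, then
{}^{*}\mathbb{R} \;=\; \mathbb{R}^{\mathbb{N}} / \mathcal{U},
\qquad (a_n) \sim (b_n) \;\iff\; \{\, n : a_n = b_n \,\} \in \mathcal{U}.
% The equivalence class of (1, 1/2, 1/3, \dots) is a positive infinitesimal:
% it lies below every standard positive real on a set belonging to U.
```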

Organizers: Philipp Hennig


  • Dr. Adam Spiers
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 2P4

This talk will focus on three topics of my research at Yale University, which centers on themes of human and robotic manipulation and haptic perception. My major research undertaking at Yale has involved running a quantitative study of daily upper-limb prosthesis use in unilateral amputees. This work aims to better understand the techniques employed by long-term users of artificial arms and hands in order to inform future prosthetic device design and therapeutic interventions. While past attempts to quantify prosthesis use have implemented either behavioral questionnaires or observations of specific tasks in structured laboratory settings, our approach involves participants completing many hours of self-selected household chores in their own homes while wearing a head-mounted video camera. I will discuss how we have addressed the processing of such a large and unstructured data set, in addition to our current findings. Complementary to my work in prosthetics, I will also discuss my work on several novel robotic grippers which aim to enhance the grasping, manipulation and object identification capabilities of robotic systems. These grippers implement underactuated designs, machine learning approaches or variable-friction surfaces to provide low-cost, model-free and easily reproducible solutions to what have traditionally been considered complex problems in robotic manipulation, i.e., stable grasp acquisition, fast tactile object recognition and within-hand object manipulation. Finally, I will present a brief overview of my efforts designing and testing shape-changing haptic interfaces, a largely unexplored feedback modality that I believe has huge potential for discreetly communicating information to people with and without sensory impairments. This technology has been implemented in a pedestrian navigation system and evaluated in a variety of scenarios, including a large-scale immersive theatre production with visually impaired artistic collaborators and almost 100 participants.

Organizers: Katherine Kuchenbecker


  • Prof. Christian Wallraven
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 5H7

Starting already at birth, humans integrate information from several sensory modalities to form a representation of the environment, such as when a baby explores, manipulates, and interacts with objects. The combination of visual and touch information is one of the most fundamental sensory integration processes, as touch information (such as body-relative size, shape, texture, material, temperature, and weight) can easily be linked to the visual image, thereby providing a grounding for later visual-only recognition. Previous research on such integration processes has so far mainly focused on low-level object properties (such as curvature or surface granularity), so little is known about how humans actually form a high-level multisensory representation of objects. Here, I will review research from our lab investigating how the human brain processes shape using input from vision and touch. Using a large variety of novel 3D-printed shapes, we were able to show that touch is actually as good at shape processing as vision, suggesting a common multisensory representation of shape. We next conducted a series of imaging experiments (using anatomical, functional, and white-matter analyses) to chart the brain networks that process this shape representation. I will conclude the talk with a brief medley of other haptics-related research in the lab, including robot learning, braille, and haptic face recognition.

Organizers: Katherine Kuchenbecker