Institute Talks

Recognizing the Pain Expressions of Horses

Talk
  • 10 December 2018 • 14:00–15:00
  • Prof. Dr. Hedvig Kjellström
  • Aquarium (N3.022)

Recognition of pain in horses and other animals is important, because pain is a manifestation of disease and decreases animal welfare. Pain diagnostics for humans typically include self-evaluation and location of the pain with the help of standardized forms, and labeling of the pain by a clinical expert using pain scales. However, animals cannot verbalize their pain as humans can, and the use of standardized pain scales is challenged by the fact that animals such as horses and cattle, being prey animals, display subtle and less obvious pain behavior; it is simply beneficial for a prey animal to appear healthy in order to lower the interest of predators. We work together with veterinarians to develop methods for automatic video-based recognition of pain in horses. These methods are typically trained with video examples of behavioral traits labeled with pain level and pain characteristics. This automated, user-independent system for recognition of pain behavior in horses will be the first of its kind in the world. A successful system might change the concept of how we monitor and care for our animals.

Robot Learning for Advanced Manufacturing – An Overview

Talk
  • 10 December 2018 • 11:00–12:00
  • Dr. Eugen Solowjow
  • MPI-IS Stuttgart, seminar room 2P4

A dominant trend in manufacturing is the move toward small production volumes and high product variability. It is thus anticipated that future manufacturing automation systems will be characterized by a high degree of autonomy and will have to learn new behaviors without explicit programming. Robot Learning and, more generally, Autonomous Manufacturing form an exciting research field at the intersection of Machine Learning and Automation. The combination of "traditional" control techniques with data-driven algorithms holds the promise of allowing robots to learn new behaviors through experience. This talk introduces selected Siemens research projects in the area of Autonomous Manufacturing.

Organizers: Sebastian Trimpe, Friedrich Solowjow

Physical Reasoning and Robot Manipulation

Talk
  • 11 December 2018 • 15:00–16:00
  • Marc Toussaint
  • 2R4 Werner Köster lecture hall

Animals and humans are excellent at conceiving solutions to physical and geometric problems, for instance when using tools, coming up with creative constructions, or eventually inventing novel mechanisms and machines. Cognitive scientists coined the term intuitive physics in this context. It is a shame we do not yet have good computational models of such capabilities. A main stream of current robotics research focuses on training robots for narrow manipulation skills, often using massive data from physical simulators. Complementary to that, we should also try to understand how basic principles underlying physics can directly be used to enable general-purpose physical reasoning in robots, rather than sampling data from physical simulations. In this talk I will discuss an approach called Logic-Geometric Programming, which builds a bridge between control theory, AI planning and robot manipulation. It demonstrates strong performance on sequential manipulation problems, but also raises a number of highly interesting fundamental problems, including its probabilistic formulation, reactive execution and learning.

Organizers: Katherine Kuchenbecker, Ildikó Papp-Wiedmann, Barbara Kettemann, Matthias Tröndle

Magnetically Guided Multiscale Robots and Soft-robotic Grippers

Talk
  • 11 December 2018 • 11:00–12:00
  • Dr. František Mach
  • Stuttgart 2P4

State-of-the-art robotic systems that adopt magnetically actuated ferromagnetic bodies, or even whole miniature robots, have recently become a fast-advancing technological field, especially at the nano- and microscale. Mesoscale and, above all, multiscale magnetically guided robotic systems appear to be the advanced field of study, where it is difficult to account for the differing forces, precision requirements and energy demands. The major goal of our talk is to discuss the challenges in the field of magnetically guided mesoscale and multiscale actuation, followed by the results of our research in the field of magnetic positioning systems and magnetic soft-robotic grippers.

Organizers: Metin Sitti

Learning Dynamics from Kinematics: Estimating Foot Pressure from Video

Talk
  • 12 December 2018 • 10:00–11:00
  • Yanxi Liu
  • Aquarium (N3.022)

Human pose stability analysis is the key to understanding locomotion and control of body equilibrium, with numerous applications in the fields of Kinesiology, Medicine and Robotics. We propose and validate a novel approach to learn dynamics from kinematics of a human body to aid stability analysis. More specifically, we propose an end-to-end deep learning architecture to regress foot pressure from a human pose derived from video. We have collected and utilized a set of long (5 min+) choreographed Taiji (Tai Chi) sequences of multiple subjects with synchronized motion capture, foot pressure and video data. The derived human pose data and corresponding foot pressure maps are used jointly in training a convolutional neural network with residual architecture, named “PressNET”. Cross-validation results show promising performance of PressNET, significantly outperforming the baseline method under reasonable sensor noise ranges.
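
As a rough illustration of how such a pose-to-pressure regression might look in code, the sketch below maps a flattened 2D pose to a small foot pressure map with a residual fully connected network. It is a minimal PyTorch sketch, not the authors' actual PressNET architecture; the joint count, layer sizes and pressure-map resolution are assumptions.

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """Fully connected block with a skip connection."""
        def __init__(self, dim):
            super().__init__()
            self.fc1 = nn.Linear(dim, dim)
            self.fc2 = nn.Linear(dim, dim)
            self.act = nn.ReLU()

        def forward(self, x):
            return self.act(x + self.fc2(self.act(self.fc1(x))))

    class PoseToPressure(nn.Module):
        """Regress a coarse foot pressure map from 25 2D joints (assumed sizes)."""
        def __init__(self, n_joints=25, hidden=256, pressure_shape=(2, 10, 5)):
            super().__init__()
            self.pressure_shape = pressure_shape
            out_dim = pressure_shape[0] * pressure_shape[1] * pressure_shape[2]
            self.net = nn.Sequential(
                nn.Linear(n_joints * 2, hidden), nn.ReLU(),
                ResidualBlock(hidden),
                ResidualBlock(hidden),
                nn.Linear(hidden, out_dim),
            )

        def forward(self, pose):               # pose: (batch, n_joints * 2)
            return self.net(pose).view(-1, *self.pressure_shape)

    model = PoseToPressure()
    pose = torch.randn(8, 50)                  # a batch of 8 flattened poses
    pressure = model(pose)                     # (8, 2, 10, 5) predicted pressure maps
    loss = nn.functional.mse_loss(pressure, torch.zeros_like(pressure))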

Organizers: Nadine Rueegg

Self-Supervised Representation Learning for Visual Behavior Analysis and Synthesis

Talk
  • 14 December 2018 • 12:00–13:00
  • Prof. Dr. Björn Ommer
  • PS Aquarium

Understanding objects and their behavior from images and videos is a difficult inverse problem. It requires learning a metric in image space that reflects object relations in the real world. This metric learning problem calls for large volumes of training data. While images and videos are easily available, labels are not, thus motivating self-supervised metric and representation learning. Furthermore, I will present a widely applicable strategy based on deep reinforcement learning to improve the surrogate tasks underlying self-supervision. Thereafter, the talk will cover the learning of disentangled representations that explicitly separate different object characteristics. Our approach is based on an analysis-by-synthesis paradigm and can generate novel object instances with flexible changes to individual characteristics such as their appearance and pose. It nicely addresses diverse applications in human and animal behavior analysis, a topic on which we collaborate intensively with neuroscientists. Time permitting, I will discuss the disentangling of representations from a wider perspective, including novel strategies for image stylization and new strategies for regularization of the latent space of generator networks.
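
As a generic illustration of the metric learning objective mentioned above (not necessarily the specific self-supervised method presented in the talk), the sketch below trains an embedding with a triplet loss; in a self-supervised setting the "similar" and "dissimilar" examples would come from a surrogate task rather than human labels. The toy encoder and image sizes are assumptions.

    import torch
    import torch.nn as nn

    # Toy image embedding; a real system would use a convolutional network.
    embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))

    anchor   = torch.randn(16, 3, 64, 64)   # batch of anchor images
    positive = torch.randn(16, 3, 64, 64)   # surrogate "same object" images
    negative = torch.randn(16, 3, 64, 64)   # surrogate "different object" images

    # Pull anchor towards positive, push it away from negative in embedding space.
    triplet = nn.TripletMarginLoss(margin=1.0)
    loss = triplet(embed(anchor), embed(positive), embed(negative))
    loss.backward()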

Organizers: Joel Janai

Generating Faces & Heads: Texture, Shape and Beyond.

Talk
  • 17 December 2018 • 11:00–12:00
  • Stefanos Zafeiriou
  • PS Aquarium

In the past few years, with the advent of Deep Convolutional Neural Networks (DCNNs) and the availability of large amounts of visual data, it was shown that it is possible to produce excellent results in very challenging tasks, such as visual object recognition, detection, tracking etc. Nevertheless, in certain tasks such as fine-grained object recognition (e.g., face recognition) it is very difficult to collect the amount of data that is needed. In this talk, I will show how, using DCNNs, we can generate highly realistic faces and heads and use them for training algorithms such as face and facial expression recognition. Next, I will reverse the problem and demonstrate how a powerful, well-trained face recognition network can be used to perform very accurate 3D shape and texture reconstruction of faces from a single image. Finally, I will demonstrate how to create very lightweight networks for representing 3D face texture and shape structure by capitalising upon intrinsic mesh convolutions.

Organizers: Dimitris Tzionas

Mind Games

IS Colloquium
  • 21 December 2018 • 11:00–12:00
  • Peter Dayan
  • IS Lecture Hall

Much existing work in reinforcement learning involves environments that are either intentionally neutral, lacking a role for cooperation and competition, or intentionally simple, where agents need imagine nothing more than that they are playing versions of themselves. Richer game-theoretic notions become important as these constraints are relaxed. For humans, this encompasses issues that concern utility, such as envy and guilt, and issues that concern inference, such as recursive modeling of other players. I will discuss studies treating a paradigmatic game of trust as an interactive partially observable Markov decision process, and will illustrate the solution concepts with evidence from interactions between various groups of subjects, including those diagnosed with borderline and anti-social personality disorders.
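
For readers unfamiliar with the trust game referred to above, the sketch below simulates a simple multi-round version with two hand-coded policies. The endowment, multiplier and round count are assumed values typical of laboratory setups; the studies in the talk instead model the interaction as an interactive partially observable Markov decision process rather than using fixed policies like these.

    import random

    # Assumed laboratory-style parameters: endowment 20 units, tripled investment, 10 rounds.
    ENDOWMENT, MULTIPLIER, ROUNDS = 20, 3, 10

    def investor_policy(history):
        # Naive investor: invest more after generous repayments, less otherwise.
        if not history:
            return ENDOWMENT // 2
        last_invest, last_return = history[-1]
        if last_return >= last_invest:
            return min(ENDOWMENT, last_invest + 2)
        return max(0, last_invest - 2)

    def trustee_policy(received):
        # Naive trustee: return roughly half of the tripled amount, with some noise.
        return max(0, int(received * (0.5 + random.uniform(-0.1, 0.1))))

    history = []
    for _ in range(ROUNDS):
        invest = investor_policy(history)
        received = MULTIPLIER * invest
        repaid = trustee_policy(received)
        history.append((invest, repaid))

    print(history)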

TBA

IS Colloquium
  • 28 January 2019 • 11:15–12:15
  • Florian Marquardt

Organizers: Matthias Bauer

Machine Ethics

Talk
  • 20 October 2017 • 11:00–12:00
  • Michael and Susan Leigh Anderson
  • AMD Seminar Room

We argue that ethically significant behavior of autonomous systems should be guided by explicit ethical principles determined through a consensus of ethicists. Such a consensus is likely to emerge in many areas in which autonomous systems are apt to be deployed and for the actions they are liable to undertake, as we are more likely to agree on how machines ought to treat us than on how human beings ought to treat one another. Given such a consensus, particular cases of ethical dilemmas where ethicists agree on the ethically relevant features and the right course of action can be used to help discover principles needed for ethical guidance of the behavior of autonomous systems. Such principles help ensure the ethical behavior of complex and dynamic systems and further serve as a basis for justification of their actions as well as a control abstraction for managing unanticipated behavior.

Organizers: Vincent Berenz


  • Slobodan Ilic and Mira Slavcheva
  • PS Seminar Room (N3.022)

In this talk we will address the problem of 3D reconstruction of rigid and deformable objects from a single depth video stream. Traditional 3D registration techniques, such as ICP and its variants, are widespread and effective, but sensitive to initialization and noise due to the underlying correspondence estimation procedure. Therefore, we have developed SDF-2-SDF, a dense, correspondence-free method which aligns a pair of implicit representations of scene geometry, e.g. signed distance fields, by minimizing their direct voxel-wise difference. In its rigid variant, we apply it to static object reconstruction via real-time frame-to-frame camera tracking and posterior multiview pose optimization, achieving higher accuracy and a wider convergence basin than ICP variants. Its extension to scene reconstruction, SDF-TAR, carries out the implicit-to-implicit registration over several limited-extent volumes anchored in the scene and runs simultaneous GPU tracking and CPU refinement, with a lower memory footprint than other SLAM systems. Finally, to handle non-rigidly moving objects, we incorporate the SDF-2-SDF energy in a variational framework, regularized by a damped approximately Killing vector field. The resulting system, KillingFusion, is able to reconstruct objects undergoing topological changes and fast inter-frame motion in near real time.
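
To make the core idea of SDF-2-SDF concrete, here is a minimal sketch of the direct voxel-wise difference energy between two signed distance fields, evaluated for a few candidate translations. The synthetic sphere SDFs and the translation-only alignment are illustrative assumptions; the actual method optimizes a full rigid pose (and, in KillingFusion, a regularized deformation field).

    import numpy as np

    def sdf_sphere(shape, center, radius):
        """Signed distance field of a sphere on a voxel grid."""
        grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), axis=-1)
        return np.linalg.norm(grid - np.asarray(center), axis=-1) - radius

    def sdf2sdf_energy(phi_ref, phi_cur):
        """Direct voxel-wise difference between two signed distance fields."""
        return 0.5 * np.sum((phi_ref - phi_cur) ** 2)

    shape = (32, 32, 32)
    phi_ref = sdf_sphere(shape, center=(16, 16, 16), radius=8.0)

    # Evaluate the energy for a few candidate translations of the current frame;
    # the translation with the lowest energy best aligns the two fields.
    for shift in [(0, 0, 0), (1, 0, 0), (3, 0, 0)]:
        phi_cur = sdf_sphere(shape, center=np.add((16, 16, 16), shift), radius=8.0)
        print(shift, sdf2sdf_energy(phi_ref, phi_cur))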

Organizers: Fatma Güney


  • Dominik Bach

Under acute threat, biological agents need to choose adaptive actions to survive. In my talk, I will provide a decision-theoretic view on this problem and ask what potential computational algorithms exist for this choice, and how they are implemented in neural circuits. Rational design principles and non-human animal data tentatively suggest a specific architecture that heavily relies on tailored algorithms for specific threat scenarios. Virtual reality computer games provide an opportunity to translate non-human animal tasks to humans and investigate these algorithms across species. I will discuss the specific challenges for empirical inference on the underlying neural circuits given such an architecture.

Organizers: Michel Besserve


  • Anton Van Den Hengel
  • Aquarium

Visual Question Answering is one of the applications of Deep Learning that is pushing towards real Artificial Intelligence. It turns the typical deep learning process around by only defining the task to be carried out after the training has taken place, which changes the task fundamentally. We have developed a range of strategies for incorporating other information sources into deep learning-based methods, and in the process have taken a step towards developing algorithms which learn how to use other algorithms to solve a problem, rather than solving it directly. This talk thus covers some of the high-level questions about the types of challenges Deep Learning can be applied to, and how we might separate the things it’s good at from those that it’s not.

Organizers: Siyu Tang


The Gentle Robot

Talk
  • 27 September 2017 • 13:13–14:50
  • Prof. Sami Haddadin
  • Main Seminar Room (N0.002)

Enabling robots to interact with humans and unknown environments has been one of the primary goals of robotics research over decades. I will outline how human-centered robot design, nonlinear soft-robotics control inspired by human neuromechanics, and physics-grounded learning algorithms will let robots become a commodity in our near-future society. In particular, compliant and energy-controlled ultra-lightweight systems capable of complex collision handling enable high-performance human assistance over a wide variety of application domains. Together with novel methods for dynamics and skill learning, flexible and easy-to-use robotic power tools and systems can be designed. Our work has led to Franka Emika, the first next-generation robot, which has recently become commercially available. The system is able to safely interact with humans, execute and even learn sensitive manipulation skills, is affordable, and is designed as a distributed interconnected system.

Organizers: Eva Laemmerhirt


Meta-learning statistics and augmentations for few shot learning

IS Colloquium
  • 25 September 2017 • 11:15–12:15
  • Amos Storkey
  • Tübingen, MPI_IS Lecture Hall (ground floor)

In this talk I introduce the neural statistician as an approach for meta-learning. The neural statistician learns to appropriately summarise datasets through a learnt statistic vector. This can be used for few shot learning, by computing the statistic vectors for the presented data, and using these statistics as context variables for one-shot classification and generation. I will show how we can generalise the neural statistician to a context-aware learner that learns to characterise and combine independently learnt contexts. I will also demonstrate an approach for meta-learning data augmentation strategies. Acknowledgments: This work is joint work with Harri Edwards, Antreas Antoniou, and Conor Durkan.
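
As a rough sketch of the statistic-vector idea (not the full variational neural statistician model), the code below embeds each example of a small support set, mean-pools the embeddings into one statistic vector per class, and classifies a query by its distance to those statistics; the encoder and all sizes are assumptions.

    import torch
    import torch.nn as nn

    # Toy per-example encoder; the real model uses a variational exchangeable encoder.
    encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))

    def statistic(dataset):
        """Summarise a dataset of shape (n_examples, 32) as one mean-pooled vector."""
        return encoder(dataset).mean(dim=0)

    # Few-shot classification: compare the query's embedding with each class statistic.
    support = {c: torch.randn(5, 32) for c in range(3)}   # 3 classes, 5 examples each
    query = torch.randn(32)

    stats = {c: statistic(x) for c, x in support.items()}
    q = encoder(query)
    pred = min(stats, key=lambda c: torch.norm(q - stats[c]).item())
    print("predicted class:", pred)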

Organizers: Philipp Hennig


The Three Pillars of Fully Autonomous Driving

IS Colloquium
  • 18 September 2017 • 11:00–12:00
  • Prof. Amnon Shashua
  • MPI_IS Stuttgart, Lecture Room 2 D5

The field of transportation is undergoing a seismic change with the coming introduction of autonomous driving. The technologies required to enable computer-driven cars involve the latest cutting-edge artificial intelligence algorithms along three major thrusts: Sensing, Planning and Mapping. Prof. Amnon Shashua, Co-founder and Chairman of Mobileye, will describe the challenges and the kind of machine learning algorithms involved, but will do so through the perspective of Mobileye’s activity in this domain.

Organizers: Michael Black


A Locally Adaptive Normal Distribution

Talk
  • 05 September 2017 • 14:00–15:30
  • Georgios Arvanitidis
  • S2 Seminar Room

The fundamental building block in many learning models is the distance measure that is used. Usually, the linear (Euclidean) distance is used for simplicity. Replacing this stiff distance measure with a flexible one could potentially give a better representation of the actual distance between two points. I will present how the normal distribution changes if the distance measure respects the underlying structure of the data. In particular, a Riemannian manifold will be learned based on the observations. The geodesic curve, a length-minimizing curve under the Riemannian measure, can then be computed. With this flexible distance measure we get a normal distribution that locally adapts to the data. A maximum likelihood estimation scheme is provided for inference of the mean and covariance parameters, as well as a systematic way to choose the parameter defining the Riemannian manifold. Results on synthetic and real-world data demonstrate the efficiency of the proposed model in fitting non-trivial probability distributions.
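
The sketch below illustrates the flavor of such a locally adaptive distance: a diagonal Riemannian metric is estimated from the data, curve lengths are measured under that metric, and an unnormalized density decays with the squared length to the mean. It is only an illustration; the metric construction is an assumption, the straight segment stands in for the true length-minimizing geodesic, and the actual model also estimates a covariance and a normalization constant.

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(size=(200, 2))            # illustrative observations

    def local_metric(x, sigma=0.5, rho=1e-3):
        """Diagonal metric estimated from the data: directions with little
        nearby data get a large metric value, so distances grow there."""
        w = np.exp(-np.sum((data - x) ** 2, axis=1) / (2 * sigma ** 2))
        local_var = (w[:, None] * (data - x) ** 2).sum(axis=0) / (w.sum() + 1e-12)
        return 1.0 / (local_var + rho)

    def curve_length(a, b, steps=50):
        """Riemannian length of the straight segment from a to b (a stand-in
        for the true length-minimizing geodesic used in the actual model)."""
        ts = np.linspace(0.0, 1.0, steps + 1)
        pts = a[None] + ts[:, None] * (b - a)[None]
        length = 0.0
        for p, q in zip(pts[:-1], pts[1:]):
            mid, d = 0.5 * (p + q), q - p
            length += np.sqrt(d @ (local_metric(mid) * d))
        return length

    mean = data.mean(axis=0)
    x = np.array([1.5, -0.5])
    # Unnormalized locally adaptive normal density at x.
    density = np.exp(-0.5 * curve_length(mean, x) ** 2)
    print(density)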

Organizers: Philipp Hennig


  • Prof. Dr. Hedvig Kjellström
  • N3.022 / Aquarium

In this talk I will first outline my different research projects. I will then focus on the EACare project, a quite recently started multi-disciplinary collaboration with the aim to develop an embodied system capable of carrying out neuropsychological tests to detect early signs of dementia, e.g., due to Alzheimer's disease. The system will use methods from Machine Learning and Social Robotics, and be trained with examples of recorded clinician-patient interactions. The interaction will be developed using a participatory design approach. I will describe the scope and method of the project, and report on a first Wizard-of-Oz prototype.


  • Yeara Kozlov
  • Aquarium

Creating convincing human facial animation is challenging. Face animation is often hand-crafted by artists, separately from body motion. Alternatively, if the face animation is derived from motion capture, it is typically performed while the actor is relatively still. Recombining the isolated face animation with body motion is non-trivial and often yields uncanny results if the body dynamics are not properly reflected on the face (e.g. cheeks wiggling when running). In this talk, I will discuss the challenges of human soft tissue simulation and control. I will then present our method for adding physical effects to facial blendshape animation. Unlike previous methods that try to add physics to face rigs, our method can combine facial animation and rigid body motion consistently while preserving the original animation as closely as possible. Our novel simulation framework uses the original animation as per-frame rest-poses without adding spurious forces. We also propose the concept of blendmaterials to give artists an intuitive means to control the changing material properties due to muscle activation.

Organizers: Timo Bolkart