Institute Talks

Learning Control for Intelligent Physical Systems

Talk
  • 13 July 2018 • 14:15–14:45
  • Dr. Sebastian Trimpe
  • MPI-IS, Stuttgart, Lecture Hall 2 D5

Modern technology allows us to collect, process, and share more data than ever before. This data revolution opens up new ways to design control and learning algorithms, which will form the algorithmic foundation for future intelligent systems that shall act autonomously in the physical world. Starting from a discussion of the special challenges of combining machine learning and control, I will present some of our recent research in this exciting area. Using the example of the Apollo robot learning to balance a stick in its hand, I will explain how intelligent agents can learn new behavior from just a few experimental trials. I will also discuss the need for theoretical guarantees in learning-based control, and how we can obtain them by combining learning and control theory.

Organizers: Katherine Kuchenbecker, Ildikó Papp-Wiedmann, Matthias Tröndle, Claudia Daefler

Household Assistants: the Path from the Care-o-bot Vision to First Products

Talk
  • 13 July 2018 • 14:45–15:15
  • Dr. Martin Hägele
  • MPI-IS, Stuttgart, Lecture Hall 2 D5

In 1995 Fraunhofer IPA embarked on a mission to design a personal robot assistant for everyday tasks. In the following years, Care-O-bot developed into a long-term experiment for exploring and demonstrating new robot technologies and future product visions. The recent fourth generation of the Care-O-bot, introduced in 2014, aimed at an integrated system that addressed a number of innovations, such as modularity, "low cost" through new manufacturing processes, and advanced human-user interaction. Some 15 systems were built, and the intellectual property (IP) generated by over 20 years of research was recently licensed to a start-up. The presentation will review the path from an experimental platform for building up expertise in various robotic disciplines to recent pilot applications based on the now-commercial Care-O-bot hardware.

Organizers: Katherine Kuchenbecker, Ildikó Papp-Wiedmann, Matthias Tröndle, Claudia Daefler

The Critical Role of Atoms at Surfaces and Interfaces: Do we really have control? Can we?

Talk
  • 13 July 2018 • 15:45–16:15
  • Prof. Dr. Dawn Bonnell
  • MPI-IS, Stuttgart, Lecture Hall 2 D5

With the ubiquity of catalyzed reactions in manufacturing, the emergence of the device-laden Internet of Things, and global challenges with respect to water and energy, it has never been more important to understand atomic interactions in the functional materials that can provide solutions in these areas.

Organizers: Katherine Kuchenbecker, Ildikó Papp-Wiedmann, Matthias Tröndle, Claudia Daefler

Interactive Visualization – A Key Discipline for Big Data Analysis

Talk
  • 13 July 2018 • 15:00–15:30
  • Prof. Dr. Thomas Ertl
  • MPI-IS, Stuttgart, Lecture Hall 2 D5

Big Data has become the general term for the benefits and threats that result from the huge amount of data collected in all parts of society. While data acquisition, storage, and access are relevant technical aspects, the analysis of the collected data turns out to be at the core of the Big Data challenge. Automatic data mining and information retrieval techniques have made much progress, but many application scenarios remain in which the human in the loop plays an essential role. Consequently, interactive visualization techniques have become a key discipline of Big Data analysis, and the field is reaching out to many new application domains. This talk will give examples from current visualization research projects at the University of Stuttgart, demonstrating the thematic breadth of application scenarios and the technical depth of the employed methods. We will cover advances in scientific visualization of fields and particles, visual analytics of document collections and movement patterns, as well as cognitive aspects.

Organizers: Katherine Kuchenbecker, Ildikó Papp-Wiedmann, Matthias Tröndle, Claudia Daefler

Imitation of Human Motion Planning

Talk
  • 27 July 2018 • 12:00–12:45
  • Jim Mainprice
  • N3.022 (Aquarium)

Humans act upon their environment through motion; the ability to plan their movements is therefore an essential component of their autonomy. In recent decades, motion planning has been widely studied in robotics and computer graphics. Nevertheless, robots still fail to achieve human reactivity and coordination. The need for more efficient motion planning algorithms has been present throughout my own research on "human-aware" motion planning, which aims to take the surrounding humans explicitly into account. I believe imitation learning is the key to this particular problem, as it allows learning both new motion skills and predictive models, two capabilities that are at the heart of "human-aware" robots, while simultaneously holding the promise of faster and more reactive motion generation. In this talk I will present my work in this direction.

  • Alexander Mathis
  • Tübingen, Aquarium (N3.022)

Quantifying behavior is crucial for many applications in neuroscience. Videography provides easy methods for the observation and recording of animal behavior in diverse settings, yet extracting particular aspects of a behavior for further analysis can be highly time consuming. In motor control studies, humans or other animals are often marked with reflective markers to assist with computer-based tracking, yet markers are intrusive (especially for smaller animals), and the number and location of the markers must be determined a priori. Here, we present a highly efficient method for markerless tracking based on transfer learning with deep neural networks that achieves excellent results with minimal training data. We demonstrate the versatility of this framework by tracking various body parts in a broad collection of experimental settings: odor trail-tracking in mice, egg-laying behavior in Drosophila, and mouse hand articulation in a skilled forelimb task. For example, during the skilled reaching behavior, individual joints can be automatically tracked (and a confidence score is reported). Remarkably, even when a small number of frames are labeled (≈200), the algorithm achieves excellent tracking performance on test frames that is comparable to human accuracy.

Organizers: Melanie Feldhofer


Machine Learning for Tactile Manipulation

IS Colloquium
  • 13 April 2018 • 11:00–12:00
  • Jan Peters
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 2P4

Today’s robots have motor abilities and sensors that exceed those of humans in many ways: they move more accurately and faster; their sensors see more and at higher precision; and, in contrast to humans, they can accurately measure even the smallest forces and torques. Robot hands with three, four, or five fingers are commercially available, and so are advanced dexterous arms. Indeed, modern motion-planning methods have rendered grasp trajectory generation a largely solved problem. Still, no robot to date matches the manipulation skills of industrial assembly workers, even though manipulation of mechanical objects remains essential for the industrial assembly of complex products. So why are current robots still so bad at manipulation, and humans so good?

Organizers: Katherine Kuchenbecker


BodyNet: Volumetric Inference of 3D Human Body Shapes

Talk
  • 10 April 2018 • 16:00–17:00
  • Gül Varol
  • N3.022

Human shape estimation is an important task for video editing, animation, and the fashion industry. Predicting 3D human body shape from natural images, however, is highly challenging due to factors such as variation in human bodies, clothing, and viewpoint. Prior methods addressing this problem typically attempt to fit parametric body models with certain priors on pose and shape. In this work we argue for an alternative representation and propose BodyNet, a neural network for direct inference of volumetric body shape from a single image. BodyNet is an end-to-end trainable network that benefits from (i) a volumetric 3D loss, (ii) a multi-view re-projection loss, and (iii) intermediate supervision of 2D pose, 2D body part segmentation, and 3D pose. Each of them results in performance improvement, as demonstrated by our experiments. To evaluate the method, we fit the SMPL model to our network output and show state-of-the-art results on the SURREAL and Unite the People datasets, outperforming recent approaches. Besides achieving state-of-the-art performance, our method also enables volumetric body-part segmentation.


A New Perspective on Usability Applied to Robotics

Talk
  • 04 April 2018 • 14:00–15:00
  • Dr. Vincent Berenz
  • Stuttgart 2P4

For many service robots, reactivity to changes in their surroundings is a must. However, developing software suitable for dynamic environments is difficult. Existing robotic middleware allows engineers to design behavior graphs by organizing communication between components. But because these graphs are structurally inflexible, they hardly support the development of complex reactive behavior. To address this limitation, we propose Playful, a software platform that applies reactive programming to the specification of robotic behavior. The front-end of Playful is a scripting language which is simple (only five keywords), yet results in the runtime coordinated activation and deactivation of an arbitrary number of higher-level sensory-motor couplings. When using Playful, developers describe actions of various levels of abstraction via behavior trees. During runtime, an underlying engine applies a mixture of logical constructs to obtain the desired behavior. These constructs include conditional rules, dynamic prioritization based on resource management, and finite state machines. Playful has been successfully used to program an upper-torso humanoid manipulator to perform lively interaction with any human approaching it.
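The core scheduling idea, activating behaviors by priority while respecting shared resources, can be sketched in a few lines. This is a hypothetical illustration, not Playful's actual API or syntax; the behavior names, resources, and dictionary layout are invented for the example.

```python
# Hypothetical sketch (not Playful's actual API): each behavior declares a
# priority, a set of required resources, and an activation condition.
# Each cycle, behaviors whose condition holds are considered in order of
# decreasing priority and activated only if their resources are still free.

def schedule(behaviors, state):
    """Return the names of the behaviors activated this cycle."""
    taken = set()    # resources already claimed this cycle
    active = []
    for b in sorted(behaviors, key=lambda b: -b["priority"]):
        if b["condition"](state) and not (b["resources"] & taken):
            taken |= b["resources"]
            active.append(b["name"])
    return active

behaviors = [
    {"name": "wave",  "priority": 1, "resources": {"arm"},
     "condition": lambda s: s["human_visible"]},
    {"name": "track", "priority": 2, "resources": {"head"},
     "condition": lambda s: s["human_visible"]},
    {"name": "rest",  "priority": 0, "resources": {"arm", "head"},
     "condition": lambda s: True},
]

print(schedule(behaviors, {"human_visible": True}))   # ['track', 'wave']
print(schedule(behaviors, {"human_visible": False}))  # ['rest']
```

When a human appears, the higher-priority couplings claim the arm and head, which automatically deactivates the idle behavior; when the human leaves, the idle behavior reclaims both resources.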

Organizers: Katherine Kuchenbecker, Mayumi Mohan, Alexis Block


  • Omar Costilla Reyes
  • Aquarium @ PS

Human footsteps can provide a unique behavioural pattern for robust biometric systems. Traditionally, security systems have been based on passwords or security access cards. Biometric recognition deals with the design of security systems for automatic identification or verification of a human subject (client) based on physical and behavioural characteristics. In this talk, I will present spatio-temporal raw and processed footstep data representations designed and evaluated on deep machine learning models based on a two-stream ResNet architecture, using the SFootBD database, the largest footstep database to date, with more than 120 people and almost 20,000 footstep signals. Our models deliver an artificial intelligence capable of effectively differentiating the fine-grained variability of footsteps between legitimate users (clients) and impostor users of the biometric system. We provide experimental results in three critical data-driven security scenarios, according to the amount of footstep data available for model training: airport security checkpoints (smallest training set), workspace environments (medium training set), and home environments (largest training set). In these scenarios we report state-of-the-art footstep recognition rates.

Organizers: Dimitris Tzionas


  • Silvia Zuffi
  • N3.022

Animals are widespread in nature, and the analysis of their shape and motion is of importance in many fields and industries. Modeling 3D animal shape, however, is difficult because the 3D scanning methods used to capture human shape are not applicable to wild animals or natural settings. In our previous SMAL model, we learn animal shape from toy figurines, but toys are limited in number and realism, and not every animal is sufficiently popular for there to be realistic toys depicting it. What is available in large quantities are images and videos of animals from nature photographs, animal documentaries, and webcams. In this talk I will present our recent work on capturing the detailed 3D shape of animals from images alone. Our method extracts significantly more 3D shape detail than previous work and is able to model new species using only a few video frames. Additionally, we extract realistic texture maps from images, capturing both animal shape and appearance.


  • Sergio Pascual Díaz
  • S2.014

My plan is to present the motivation behind deep GPs, as well as some of the currently available approximate inference schemes and their limitations. Then, I will explain how deep GPs fit into the BayesOpt framework and the specific problems they could potentially solve.

Organizers: Philipp Hennig Diana Rebmann


  • Patrick Bajari
  • MPI IS lecture hall (N0.002)

In academic and policy circles, there has been considerable interest in the impact of “big data” on firm performance. We examine how the amount of data impacts the accuracy of machine-learned models of weekly retail product forecasts, using a proprietary data set obtained from Amazon. We examine the accuracy of forecasts in two relevant dimensions: the number of products (N), and the number of time periods for which a product is available for sale (T). Theory suggests diminishing returns to larger N and T, with relative forecast errors diminishing at rate 1/sqrt(N) + 1/sqrt(T). Empirical results indicate gains in forecast improvement in the T dimension: as more and more data is available for a particular product, demand forecasts for that product improve over time, though with diminishing returns to scale. In contrast, we find an essentially flat N effect across the various lines of merchandise: with a few exceptions, expansion in the number of retail products within a category does not appear associated with increases in forecast performance. We do find that the firm’s overall forecast performance, controlling for N and T effects across product lines, has improved over time, suggesting gradual improvements in forecasting from the introduction of new models and improved technology.
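The diminishing-returns shape of the quoted theoretical rate is easy to see numerically. The snippet below is a toy illustration of the 1/sqrt(N) + 1/sqrt(T) bound with made-up N and T values; only the shape of the curve, not the constants, reflects the abstract.

```python
import math

# Toy illustration of the theoretical scaling quoted above: relative
# forecast error shrinking like 1/sqrt(N) + 1/sqrt(T). The numbers are
# hypothetical; only the diminishing-returns shape matters.

def relative_error_bound(n_products, n_periods):
    return 1 / math.sqrt(n_products) + 1 / math.sqrt(n_periods)

# Holding N fixed, each quadrupling of T halves the T term, so the
# marginal gain from more time periods keeps shrinking.
for t in (4, 16, 64, 256):
    print(t, round(relative_error_bound(100, t), 3))
```

With N fixed at 100, the bound falls from 0.6 at T=4 to 0.35 at T=16 and 0.225 at T=64: each step gains half as much as the one before, mirroring the "improvement over time with diminishing returns" finding in the T dimension.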

Organizers: Michel Besserve Michael Hirsch


Political Science and Data Science: What we can learn from each other

IS Colloquium
  • 12 March 2018 • 11:15–12:15
  • Simon Hegelich
  • MPI-IS lecture hall (N0.002)

Political science is integrating computational methods like machine learning into its own toolbox. At the same time, awareness is rising that the utilization of machine learning algorithms in our daily lives is a highly political issue. These two trends, the integration of computational methods into political science and the political analysis of the digital revolution, form the ground for a new transdisciplinary approach: political data science. Interestingly, there is a rich tradition of crossing disciplinary borders, as can be seen in the works of Paul Werbos and Herbert Simon (both political scientists). Building on this tradition and integrating ideas from deep learning and Hegel's philosophy of logic, a new perspective on causality might arise.

Organizers: Philipp Geiger


  • Giacomo Garegnani
  • Tübingen, S2 seminar room

We present a novel probabilistic integrator for ordinary differential equations (ODEs) which allows for uncertainty quantification of the numerical error [1]. In particular, we randomise the time steps and build a probability measure on the deterministic solution, which collapses to the true solution of the ODE with the same rate of convergence as the underlying deterministic scheme. The intrinsic nature of the random perturbation guarantees that our probabilistic integrator conserves some geometric properties of the deterministic method it is built on, such as the conservation of first integrals or the symplecticity of the flow. Finally, we present a procedure to incorporate our probabilistic solver into the framework of Bayesian inverse problems, showing how inaccurate posterior concentrations given by deterministic methods can be corrected by a probabilistic interpretation of the numerical solution.
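The randomised-time-step idea can be sketched in a few lines. This is a minimal illustration, not the talk's actual scheme: it assumes explicit Euler as the underlying deterministic method, the test ODE x' = -x, and uniform step perturbations; the spread of the resulting ensemble at the final time serves as an estimate of the numerical error.

```python
import random

# Sketch of a randomised-time-step integrator (assumptions: explicit
# Euler as the deterministic base scheme, uniform step perturbations).
# Each random draw yields one trajectory; the ensemble of final values
# forms an empirical measure around the deterministic solution.

def euler_randomized(f, x0, t_end, h, noise, rng):
    """Integrate x' = f(x) with Euler steps of size h*(1 + U(-noise, noise))."""
    t, x = 0.0, x0
    while t < t_end - 1e-12:
        step = min(h + rng.uniform(-noise, noise) * h, t_end - t)
        x += step * f(x)   # deterministic Euler update over a random step
        t += step
    return x

rng = random.Random(0)
f = lambda x: -x   # exact solution at t=1 is exp(-1) ≈ 0.368
samples = [euler_randomized(f, 1.0, 1.0, 0.1, 0.5, rng) for _ in range(200)]
mean = sum(samples) / len(samples)
spread = max(samples) - min(samples)
print(mean, spread)
```

Setting `noise=0` recovers plain Euler, consistent with the claim that the perturbed integrator inherits the convergence rate (and structural properties) of the deterministic method it is built on.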

Organizers: Hans Kersting