Institute Talks

Imitation of Human Motion Planning

Talk
  • 29 June 2018 • 12:00–12:45
  • Jim Mainprice
  • N3.022 (Aquarium)

Humans act upon their environment through motion; the ability to plan their movements is therefore an essential component of their autonomy. In recent decades, motion planning has been widely studied in robotics and computer graphics. Nevertheless, robots still fail to achieve human reactivity and coordination. The need for more efficient motion planning algorithms has been present throughout my own research on "human-aware" motion planning, which aims to take the surrounding humans explicitly into account. I believe imitation learning is the key to this particular problem, as it allows learning both new motion skills and predictive models, two capabilities that are at the heart of "human-aware" robots, while simultaneously holding the promise of faster and more reactive motion generation. In this talk I will present my work in this direction.

Learning Control for Intelligent Physical Systems

Talk
  • 13 July 2018 • 14:15–14:45
  • Dr. Sebastian Trimpe
  • MPI-IS, Stuttgart, Lecture Hall 2 D5

Modern technology allows us to collect, process, and share more data than ever before. This data revolution opens up new ways to design control and learning algorithms, which will form the algorithmic foundation for future intelligent systems that shall act autonomously in the physical world. Starting from a discussion of the special challenges of combining machine learning and control, I will present some of our recent research in this exciting area. Using the example of the Apollo robot learning to balance a stick in its hand, I will explain how intelligent agents can learn new behavior from just a few experimental trials. I will also discuss the need for theoretical guarantees in learning-based control, and how we can obtain them by combining learning and control theory.

Organizers: Katherine Kuchenbecker, Ildikó Papp-Wiedmann, Matthias Tröndle, Claudia Daefler

Household Assistants: the Path from the Care-o-bot Vision to First Products

Talk
  • 13 July 2018 • 14:45–15:15
  • Dr. Martin Hägele
  • MPI-IS, Stuttgart, Lecture Hall 2 D5

In 1995, Fraunhofer IPA embarked on a mission to design a personal robot assistant for everyday tasks. In the following years, Care-O-bot developed into a long-term experiment for exploring and demonstrating new robot technologies and future product visions. The recent fourth generation of the Care-O-bot, introduced in 2014, aimed at an integrated system featuring a number of innovations, such as modularity, low cost through new manufacturing processes, and advanced human-user interaction. Some 15 systems were built, and the intellectual property (IP) generated by over 20 years of research was recently licensed to a start-up. The presentation will review the path from an experimental platform for building up expertise in various robotic disciplines to recent pilot applications based on the now-commercial Care-O-bot hardware.

Organizers: Katherine Kuchenbecker, Ildikó Papp-Wiedmann, Matthias Tröndle, Claudia Daefler

The Critical Role of Atoms at Surfaces and Interfaces: Do we really have control? Can we?

Talk
  • 13 July 2018 • 15:45–16:15
  • Prof. Dr. Dawn Bonnell
  • MPI-IS, Stuttgart, Lecture Hall 2 D5

With the ubiquity of catalyzed reactions in manufacturing, the emergence of the device-laden Internet of Things, and global challenges with respect to water and energy, it has never been more important to understand atomic interactions in the functional materials that can provide solutions in these spaces.

Organizers: Katherine Kuchenbecker, Ildikó Papp-Wiedmann, Matthias Tröndle, Claudia Daefler

Interactive Visualization – A Key Discipline for Big Data Analysis

Talk
  • 13 July 2018 • 15:00–15:30
  • Prof. Dr. Thomas Ertl
  • MPI-IS, Stuttgart, Lecture Hall 2 D5

Big Data has become the general term for the benefits and threats that result from the huge amount of data collected in all parts of society. While data acquisition, storage, and access are relevant technical aspects, the analysis of the collected data turns out to be at the core of the Big Data challenge. Automatic data mining and information retrieval techniques have made much progress, but many application scenarios remain in which the human in the loop plays an essential role. Consequently, interactive visualization techniques have become a key discipline of Big Data analysis, and the field is reaching out to many new application domains. This talk will give examples from current visualization research projects at the University of Stuttgart, demonstrating the thematic breadth of application scenarios and the technical depth of the employed methods. We will cover advances in scientific visualization of fields and particles, visual analytics of document collections and movement patterns, as well as cognitive aspects.

Organizers: Katherine Kuchenbecker, Ildikó Papp-Wiedmann, Matthias Tröndle, Claudia Daefler

  • Bin Yu
  • Tübingen, IS Lecture Hall (N0.002)

In this talk, I'd like to discuss the intertwining importance and connections of three principles of data science in the title. They will be demonstrated in the context of two collaborative projects, in neuroscience and genomics, respectively. The first project, in neuroscience, uses transfer learning to integrate convolutional neural networks (CNNs) fitted on ImageNet with regression methods to provide predictive and stable characterizations of neurons from the challenging primary visual cortex V4. The second project proposes iterative random forests (iRF), a stabilized RF, to seek predictable and interpretable high-order interactions among biomolecules.

Organizers: Michel Besserve


  • Prof. Constantin Rothkopf
  • Tübingen, 3rd Floor Intelligent Systems: Aquarium

Active vision has long put forward the idea that visual sensation and our actions are inseparable, especially when considering naturalistic extended behavior. Further support for this idea comes from theoretical work in optimal control, which demonstrates that sensing, planning, and acting in sequential tasks can only be separated under very restricted circumstances. The talk will present experimental evidence together with computational explanations of human visuomotor behavior in tasks ranging from classic psychophysical detection tasks to ball catching and visuomotor navigation. Along the way, it will touch on topics such as the heuristics hypothesis and the learning of visual representations. The connecting theme will be that, from the switching of visuomotor behavior in response to changing task constraints down to cortical visual representations in V1, action and perception are inseparably intertwined in an ambiguous and uncertain world.

Organizers: Betty Mohler


A naturalistic perspective on optic flow processing in the fly

Talk
  • 27 February 2018 • 3:00 p.m.–4:00 p.m.
  • Aljoscha Leonhardt
  • N4.022, EI Glass Seminar Room

Optic flow offers a rich source of information about an organism's environment. Flies, for instance, are thought to make use of motion vision to control and stabilise their course during acrobatic airborne manoeuvres. How these computations are implemented in neural hardware and how such circuits cope with the visual complexity of natural scenes, however, remain open questions. This talk outlines some of the progress we have made in unravelling the computational substrate underlying optic flow processing in Drosophila. In particular, I will focus on our efforts to connect neural mechanisms and real-world demands via task-driven modelling.

Organizers: Michel Besserve


Patient Inspired Engineering: Problem, device, solution

Talk
  • 26 February 2018 • 11:00–12:00
  • Professor Rahmi Oklu
  • Room 3P02 - Stuttgart

Minimally invasive approaches to the treatment of vascular diseases are constantly evolving. These diseases are among the most prevalent medical problems today, including stroke, myocardial infarction, pulmonary emboli, hemorrhage, and aneurysms. I will review current approaches to vascular embolization and thrombosis, the challenges they pose, and the limitations of current devices, and end with patient-inspired engineering approaches to the treatment of these conditions.

Organizers: Metin Sitti


Deriving a Tongue Model from MRI Data

Talk
  • 20 February 2018 • 14:00–14:45
  • Alexander Hewer
  • Aquarium

The tongue plays a vital part in everyday life: we use it extensively during speech production. Due to this importance, we want to derive a parametric shape model of the tongue. This model enables us to reconstruct the full tongue shape from a sparse set of points, such as motion capture data. Moreover, we can use such a model in simulations of the vocal tract to perform articulatory speech synthesis or to create animated virtual avatars. In my talk, I describe a framework for deriving such a model from MRI scans of the vocal tract. In particular, this framework uses image denoising and segmentation methods to produce a point cloud approximating the vocal tract surface. In this context, I will also discuss how palatal contacts of the tongue can be handled, i.e., situations where the tongue touches the palate and thus no tongue boundary is visible. Afterwards, template matching is used to derive a mesh representation of the tongue from this cloud. The acquired meshes are finally used to construct a multilinear model.

Organizers: Timo Bolkart


Nonstandard Analysis - The Comeback of Infinitesimals

Talk
  • 19 February 2018 • 11:00–12:15
  • Randolf Scholz
  • Tübingen

The early calculus of Newton and Leibniz made heavy use of infinitesimal quantities and flourished for over a hundred years until it was superseded by the more rigorous epsilon-delta formalism. It took until the 1950s for A. Robinson to find a proper way to construct a number system containing actual infinitesimals: the hyperreals *ℝ. This talk outlines their construction and possible applications in modern analysis.

Organizers: Philipp Hennig


  • Dr. Adam Spiers
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 2P4

This talk will focus on three topics of my research at Yale University, which centers on themes of human and robotic manipulation and haptic perception. My major research undertaking at Yale has involved running a quantitative study of daily upper-limb prosthesis use in unilateral amputees. This work aims to better understand the techniques employed by long-term users of artificial arms and hands in order to inform future prosthetic device design and therapeutic interventions. While past attempts to quantify prosthesis use have implemented either behavioral questionnaires or observations of specific tasks in structured laboratory settings, our approach involves participants completing many hours of self-selected household chores in their own homes while wearing a head-mounted video camera. I will discuss how we have addressed the processing of such a large and unstructured data set, in addition to our current findings. Complementary to my work in prosthetics, I will also discuss my work on several novel robotic grippers that aim to enhance the grasping, manipulation, and object identification capabilities of robotic systems. These grippers implement underactuated designs, machine learning approaches, or variable friction surfaces to provide low-cost, model-free, and easily reproducible solutions to what have traditionally been considered complex problems in robotic manipulation, i.e., stable grasp acquisition, fast tactile object recognition, and within-hand object manipulation. Finally, I will present a brief overview of my efforts designing and testing shape-changing haptic interfaces, a largely unexplored feedback modality that I believe has huge potential for discreetly communicating information to people with and without sensory impairments.
This technology has been implemented in a pedestrian navigation system and evaluated in a variety of scenarios, including a large-scale immersive theatre production with visually impaired artistic collaborators and almost 100 participants.

Organizers: Katherine Kuchenbecker


  • Prof. Christian Wallraven
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 5H7

Already starting at birth, humans integrate information from several sensory modalities in order to form a representation of the environment, such as when a baby explores, manipulates, and interacts with objects. The combination of visual and touch information is one of the most fundamental sensory integration processes, as touch information (such as body-relative size, shape, texture, material, temperature, and weight) can easily be linked to the visual image, thereby providing a grounding for later visual-only recognition. Previous research on such integration processes has mainly focused on low-level object properties (such as curvature or surface granularity), so little is known about how humans actually form a high-level multisensory representation of objects. Here, I will review research from our lab that investigates how the human brain processes shape using input from vision and touch. Using a large variety of novel, 3D-printed shapes, we were able to show that touch is actually as good at shape processing as vision, suggesting a common, multisensory representation of shape. We next conducted a series of imaging experiments (using anatomical, functional, and white-matter analyses) that chart the brain networks that process this shape representation. I will conclude the talk with a brief medley of other haptics-related research in the lab, including robot learning, braille, and haptic face recognition.

Organizers: Katherine Kuchenbecker


  • Haliza Mat Husin
  • MPI-IS Stuttgart, Heisenbergstr. 3, Room 2P4

Background: Pre-pregnancy obesity and inadequate maternal weight gain during pregnancy can lead to adverse effects in the newborn, but also to metabolic, cardiovascular, and even neurological diseases later in the offspring's life. Heart activity can be used as a proxy for the activity of the autonomic nervous system (ANS). The aim of this study is to evaluate the effect of pre-pregnancy weight, maternal weight gain, and maternal metabolism on the ANS of the fetus in healthy pregnancies.

Organizers: Katherine Kuchenbecker


Appearance Modeling for 4D Multi-view Representations

Talk
  • 15 December 2017 • 12:00–12:45
  • Vagia Tsiminaki
  • PS Seminar Room (N3.022)

The emergence of multi-view capture systems has yielded a tremendous amount of video sequences. The task of capturing spatio-temporal models from real-world imagery (4D modeling) should arguably benefit from this enormous visual information. To achieve highly realistic representations, both geometry and appearance need to be modeled with high precision. Yet even with the great progress in geometric modeling, the appearance aspect has not been fully explored, and visual quality can still be improved. I will explain how we can optimally exploit the redundant visual information of the captured video sequences and provide a temporally coherent, super-resolved, view-independent appearance representation. I will further discuss how to exploit the interdependency of geometry and appearance as separate modalities to enhance visual perception, and finally how to decompose appearance representations into intrinsic components (shading & albedo) and super-resolve them jointly to allow for more realistic renderings.

Organizers: Despoina Paschalidou